My team is the CORTEX team at LORIA/INRIA Lorraine.
Our current overall objectives and main domains of activity are:
- Computational neuroscience: behavioral approach
- Computational neuroscience: spiking neurons
- Intelligent information processing
- Connectionist parallelism
The goal of our research is to study the properties and capacities of distributed, numerical and adaptive automated information processing, and to show that this kind of processing makes it possible to build ``intelligent'' systems, i.e. systems able to extract knowledge from data and to manipulate that knowledge to solve problems. More precisely, these studies rely on the elaboration and analysis of neuromimetic connectionist models, developed along two sources of inspiration: computational neuroscience and machine learning.
We study both sources of inspiration together because both seek to understand how such distributed models can learn internal representations and manipulate knowledge, and because they propose complementary approaches that allow cross-fertilization. Machine learning proposes connectionist numerical models for information processing in a statistical framework, in order to extract knowledge from data. Computational neuroscience proposes distributed theoretical models and elementary mechanisms that aim at explaining how the human or animal nervous system processes information at various levels, from neuronal mechanisms to behaviour.
Complementing our multidisciplinary sources of inspiration, our research is applied in domains such as data and signal interpretation, intelligent sensors, robotics, and computer-aided decision making. More generally, our models are dedicated to monitoring complex, multimodal processes, perceiving and acting on their environment.
These models are first implemented on classical computers, but other architectures are also explored, namely parallel machines, autonomous robots and, more generally, specialized circuits for embedded systems, as suggested by our applications.
Accordingly, four research topics are currently pursued.
- In computational neuroscience, at a behavioral level, we are developing models of cerebral neuronal structures, to allow the navigation of autonomous robots.
- In computational neuroscience, at the neuronal level, we are modeling spiking neurons, seen as dynamic systems with temporal behavior, allowing synchronization within populations of neurons.
- From a more statistical point of view, we are studying how classical continuous neuronal models can be adapted to database and signal interpretation, for knowledge extraction.
- From a more technological point of view, all the above-mentioned models are adapted to allow implementations on dedicated architectures.
Computational neuroscience: behavioral approach
Keywords : computational neuroscience, cortical model, population of neurons, cortical column, behavioral model.
With regard to the progress made in anatomy, neurobiology, physiology, imaging and behavioral studies, computational neuroscience offers a unique interdisciplinary cooperation between experimental and clinical neuroscientists, physicists, mathematicians and computer scientists. It combines experiments with data analysis and computer simulation on the basis of strong theoretical concepts, and aims at understanding the mechanisms that underlie neural processes such as perception, action, learning, memory or cognition. Thanks to increasingly realistic models, computational approaches now offer new ways to study the complex relations between the structural and the functional levels of the brain. Furthermore, these computational models and methods have strong implications for other sciences (e.g. psychology, biology) and for applications (e.g. robots, cognitive prostheses) as well.
Our research activities in the domain of computational neuroscience are centered on the understanding of higher brain functions, using both computational models and robotics. These models are grounded in a computational paradigm directly inspired by several brain studies that converge on a distributed, asynchronous, numerical and adaptive processing of information; the continuum neural field theory provides the theoretical framework for designing models of populations of neurons.
The main cognitive tasks we are currently interested in are related to the autonomous navigation of a robot in an unknown environment (perception, sensorimotor coordination, planning). The corresponding neuronal structures we are modeling are part of the cortex (perceptive, associative and frontal maps) and of the limbic system (hippocampus, amygdala, basal ganglia). The models of these structures are defined at the level of populations of neurons, and their functioning and learning rules are built from neuroscience data to emulate the corresponding information processing (filtering in perceptive maps, multimodal association in associative maps, temporal organization of behavior in frontal maps, episodic memory in the hippocampus, emotional conditioning in the amygdala, action selection in the basal ganglia). Our goal is to iteratively refine these models, implement them on autonomous robots and make them cooperate and exchange information, toward a completely adaptive, integrated and autonomous behavior.
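The continuum neural field framework mentioned above can be illustrated with a minimal sketch. This is not the team's actual model: the one-dimensional Amari-type field, the Mexican-hat kernel and all parameters are illustrative assumptions, showing only how a population-level activity "bump" can form in response to a localized stimulus.

```python
import numpy as np

def mexican_hat(x, a_exc=1.0, s_exc=0.1, a_inh=0.5, s_inh=0.3):
    """Lateral interaction kernel: short-range excitation, long-range inhibition."""
    return (a_exc * np.exp(-x**2 / (2 * s_exc**2))
            - a_inh * np.exp(-x**2 / (2 * s_inh**2)))

def simulate_field(stimulus, n=100, dt=0.05, tau=1.0, steps=200):
    """Euler integration of tau*du/dt = -u + (w * f(u)) + stimulus."""
    xs = np.linspace(-1.0, 1.0, n)
    dx = xs[1] - xs[0]
    # Pairwise kernel matrix: w[i, j] = kernel(x_i - x_j), scaled by dx.
    w = mexican_hat(xs[:, None] - xs[None, :]) * dx
    u = np.zeros(n)
    for _ in range(steps):
        rate = np.maximum(u, 0.0)          # rectified firing rate f(u)
        u += dt / tau * (-u + w @ rate + stimulus)
    return xs, u

# A localized stimulus produces a localized bump of population activity.
xs = np.linspace(-1.0, 1.0, 100)
stim = 1.5 * np.exp(-xs**2 / (2 * 0.1**2))
xs, u = simulate_field(stim)
print("peak activity near x =", round(float(xs[np.argmax(u)]), 2))
```

The lateral kernel (local excitation, surrounding inhibition) is what makes the field select and sharpen a single region of activity, a mechanism commonly used to model filtering in perceptive maps.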
Computational neuroscience: spiking neurons
Keywords : computational neuroscience, spiking neurons, synchronization of activity, olfaction, neural code.
Computational neuroscience is also interested in more precise and realistic models of the neuron, and especially of its dynamics. Compartmental models describe the neuron through various compartments (axon, synapse, cell body) and coupled differential equations. Such models describe the activity of real neurons to a high degree of accuracy but, because of their complexity, they are difficult to understand and to analyze. For this reason, our work focuses on simplified models, i.e. simple phenomenological models of spiking neurons that capture the dynamic behavior of the neuron in leaky integrators and explain how spikes are emitted over time as inputs are integrated.
These models are interesting for several reasons. From a neuroscience point of view, they allow a better understanding of neuronal functioning. Indeed, although it is well known that real neurons communicate with spikes, i.e. short electrical pulses also called action potentials, the precise nature of the neural code is a topic of intense debate. The firing-rate coding hypothesis, which states that information is encoded by the firing frequency of a neuron estimated by temporal averaging, is now challenged by a number of recent studies showing that precise spike timing is a significant element of neural encoding. In particular, stimulus-induced synchronization and oscillatory patterning of spike trains have been experimentally observed in perceptive systems such as vision or olfaction. Moreover, the synchronization of neural activities seems to play a role in olfactory perception: for example, when this synchronization is pharmacologically abolished, honeybees no longer discriminate between similar odors.
From a computer science point of view, we investigate the spatio-temporal dynamics of simplified models of spiking networks using both mathematical analysis and numerical simulations. To do so, we have to define (i) a tractable mathematical analysis, with methods coming from the theory of nonlinear dynamical systems, and (ii) an efficient computing scheme, with either event-driven or time-driven simulation engines. These models can also be applied to difficult coding tasks for machine perception, such as vision and olfaction, and can help to understand how sensory information is encoded and processed by biological neural networks.
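As a concrete illustration of the simplified spiking models and the time-driven simulation scheme discussed above, here is a minimal leaky integrate-and-fire neuron. All parameters (time constant, threshold, input current) are illustrative assumptions, not values from the team's work.

```python
import numpy as np

def simulate_lif(input_current, dt=0.001, tau=0.02,
                 v_rest=0.0, v_thresh=1.0, v_reset=0.0):
    """Time-driven Euler integration of tau*dv/dt = -(v - v_rest) + I(t);
    a spike is emitted and the potential reset whenever v crosses threshold."""
    v = v_rest
    spike_times = []
    for step, i_t in enumerate(input_current):
        v += dt / tau * (-(v - v_rest) + i_t)
        if v >= v_thresh:
            spike_times.append(step * dt)
            v = v_reset
    return spike_times

# A constant supra-threshold current produces regular spiking.
current = np.full(1000, 1.5)            # 1 s of input at dt = 1 ms
spikes = simulate_lif(current)
print(f"{len(spikes)} spikes, first at t = {spikes[0]:.3f} s")
```

An event-driven engine would instead compute the next threshold-crossing time analytically and jump from spike to spike, which is more efficient when activity is sparse; the time-driven loop above is the simpler scheme of the two.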
Intelligent information processing
Keywords : data analysis, pre-processing, neuro-symbolic integration, visualization, knowledge extraction.
Artificial neural networks are information processing systems that can be widely applied to data mining. They offer many capabilities for analyzing and pre-processing data, as well as for visualizing and extracting knowledge. These capabilities can be developed through unsupervised and supervised networks, or by combining them to obtain data analysis and forecasting models close to those produced by statistical methods, but with other interesting properties.
To improve the performance of such information processing systems, several approaches can be followed, depending on the prior knowledge available. Indeed, depending on whether additional labels (a class or a continuous value) are available for none of the patterns, for a subset of them, or for all of them, unsupervised and supervised learning can be performed sequentially. When there is no prior knowledge about the problem to be solved, knowledge extraction may use an unsupervised neural network as a front-end for forecasting applications or for extracting rules. Because of its synthesis capabilities, an unsupervised neural network can be used both to limit the computational complexity and to extract the most significant knowledge. Moreover, knowledge extraction is facilitated as soon as a multi-viewpoint unsupervised neural network model is used. This kind of method also allows additional information, when it is available, to be used in a second step to optimize a forecasting problem. However, for a forecasting problem where all patterns are labelled, classical networks using supervised learning can be successfully improved by finding the minimal architecture with pruning algorithms. Pruning methods consist in removing, during learning, the connections or neurons (or both) that have the least influence on the system's performance. Reducing the complexity of the networks prevents overtraining and allows easier implementation and knowledge extraction (variable selection, rule extraction). In any case, combining several models into a committee helps to improve the quality of the extracted knowledge or of the forecasting, and the proposed methods must be efficient for the typical real-world problems of our domain, which involve large amounts of noisy and temporal data. Both topics have recently been developed in the project.
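The pruning principle described above (remove the connections with the least influence, then check that performance is preserved) can be sketched in its simplest form, magnitude-based pruning of a linear model. The synthetic data, the least-squares "training" and the threshold are invented for illustration; real pruning methods act during learning and on full network architectures.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic regression task: only 3 of 10 input variables matter.
true_w = np.zeros(10)
true_w[:3] = [2.0, -1.5, 1.0]
X = rng.normal(size=(200, 10))
y = X @ true_w + 0.01 * rng.normal(size=200)

# Fit a linear "network" by least squares (stand-in for trained weights).
w, *_ = np.linalg.lstsq(X, y, rcond=None)

# Prune: zero out the weights whose magnitude falls below a threshold,
# i.e. the connections with the least influence on the output.
pruned = np.where(np.abs(w) > 0.1, w, 0.0)

err_full = np.mean((X @ w - y) ** 2)
err_pruned = np.mean((X @ pruned - y) ** 2)
print(f"kept {np.count_nonzero(pruned)}/10 weights, "
      f"mse {err_full:.5f} -> {err_pruned:.5f}")
```

The surviving non-zero weights directly identify the relevant input variables, which is the variable-selection side effect of pruning mentioned above.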
New visualization techniques, whenever they can be associated with such information processing techniques, represent high added value, since the results of these processings are mostly represented in high-dimensional spaces.
Connectionist parallelism
Keywords : connectionism, parallelism, digital circuits, FPGA.
Connectionist models, such as neural networks, are among the first models of parallel computing. Artificial neural networks now stand as a possible alternative to the standard computing model of current computers. The computing power of these connectionist models is based on their distributed properties: a very fine-grain massive parallelism with densely interconnected computation units.
The connectionist paradigm is the foundation of the robust, adaptive, embeddable and autonomous processing that we develop in our team; its specific massive parallelism therefore has to be fully exploited. Furthermore, we use this intrinsic parallelism as a guideline to develop new models and algorithms whose parallel implementations are naturally easier.
Our approach claims that the parallelism of connectionist models makes them able to deal with strong implementation and application constraints. This claim is based on both theoretical and practical properties of neural networks. It relies on a very fine parallelism grain that fits parallel hardware devices, as well as on the emergence of very large reconfigurable systems that are now able to handle both the adaptability and the massive parallelism of neural networks. More particularly, digital reconfigurable circuits (e.g. FPGAs, Field Programmable Gate Arrays) stand as the most suitable and flexible devices for fully parallel implementations of neural models, according to numerous recent studies in the connectionist community. We carry out the various arithmetical and topological studies required by the implementation of several neural models on FPGAs, as well as the definition of hardware-targeted neural models of parallel computation.
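The arithmetical studies mentioned above typically concern how much precision a hardware implementation can give up. A minimal sketch of the question, with an assumed Q-format word length (the actual formats studied for FPGA implementations are a design choice, not the one shown here): compare a neuron's weighted sum computed in floating point against a fixed-point version.

```python
FRAC_BITS = 8          # assumed fixed-point format with 8 fractional bits
SCALE = 1 << FRAC_BITS

def to_fixed(x):
    """Quantize a real value to the nearest representable fixed-point integer."""
    return round(x * SCALE)

def fixed_mul(a, b):
    """Multiply two fixed-point values, rescaling the product."""
    return (a * b) >> FRAC_BITS

def neuron_fixed(inputs, weights):
    """Weighted sum computed entirely in fixed-point arithmetic."""
    acc = sum(fixed_mul(to_fixed(x), to_fixed(w))
              for x, w in zip(inputs, weights))
    return acc / SCALE   # back to a real value, for comparison only

def neuron_float(inputs, weights):
    return sum(x * w for x, w in zip(inputs, weights))

inputs = [0.5, -0.25, 0.75]
weights = [0.8, 0.3, -0.6]
approx = neuron_fixed(inputs, weights)
exact = neuron_float(inputs, weights)
print(f"float = {exact:.4f}, fixed = {approx:.4f}, "
      f"error = {abs(exact - approx):.4f}")
```

On an FPGA the same trade-off is made with shift-and-add multipliers and bounded word lengths; the study consists in choosing formats so that the accumulated quantization error stays below what the neural model tolerates.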
A prospective view of our activities
Written at the end of 2006.
Primary keywords: computational neuroscience, neural networks, learning, spiking neurons, emergence
Secondary keywords: memory, multimodality, distributed models, asynchronous computing, event-driven computation, architecture/algorithm adequacy, embedded systems, temporal coding, self-organization, dynamical systems
Related stakes: perception, autonomous systems, ambient intelligence, medical technology, brain computer interface
A recent and very strong trend in computer science is to develop models and algorithms in interaction with biology and medical science. Today, computing resources make it possible to deal with the huge amounts of data and the complexity of biological phenomena. Biologists now have great expectations of computer science, through both data mining and computational modeling. Several multidisciplinary research labs have thus been created, which illustrates the increasing significance of this collaboration between biologists and computer scientists.
Neuroscience is a very active field of research, where the multidisciplinary aspect is prominent. Even if they still often work separately, physiologists, neuropsychologists, computer scientists, physicists and anatomists bring together different means to study the wide complexity of the brain.
Among these lines of research, computational neuroscience more specifically aims at using computational principles to better understand the brain. With regard to the progress made in mathematics, computer science, anatomy, neurobiology, physiology, imaging and behavioral science, computational neuroscience provides a new and unique framework for interdisciplinary cooperation between researchers of these scientific domains. It combines experiments with data analysis and computer simulation on the basis of strong theoretical concepts, and it aims at modelling, simulating and understanding the mechanisms that underlie neural processes such as perception, action, learning, memory or cognition. Two fields of research are generally considered:
- The first one corresponds to understanding the adaptive and distributed computation mode used by neural systems. This requires a computational study of the properties of these mechanisms, such as emergence, asynchronism, temporality, genericity, modularity, robustness and adaptability. Two levels of description are studied in the field.
- Spiking models focus on very specific data and brain functions, at the neuronal level, and organize computation around fundamental neuronal events: spikes.
- Behavioral models are elaborated from integrated data and multimodal functionalities, and aim at understanding more complex functions, described in terms of information flows, at the level of populations of neurons and of more global neuronal activity.
- The second field of research deals with experimental data (from cellular recordings to behavioral analysis), which are now available in huge quantities and are more and more precise, but also more and more complex to analyze. Such data can be exploited to extract new knowledge directly from living neuronal structures and to feed computational models with real information. Data mining approaches and other signal interpretation techniques are being reconsidered and adapted to the specific nature of such data, i.e. temporal, highly multidimensional, noisy, multiscale and often sparse.
Results obtained from this research are twofold:
- Building and assessing models, as well as mining experimental data, can lead to predictions and hypotheses that can orient further research in experimental neuroscience. Today, computational models offer new approaches to the complex relations between the structural and the functional levels of the brain.
- Inspiration from these elementary biological mechanisms can bring new and powerful algorithms and computation paradigms to computer science. In particular, the fundamental duality of neurons, seen both as processing units and as elementary data storage, is a major source of inspiration for adapting processing architectures to algorithms and for embedding neuronal processing in fine-grain distributed processing.
Our research in the domain of computer science is oriented towards the following goals:
- Understand how information is encoded in the brain. In the domain of spiking neurons, the key problem is that of the neural code, i.e. the way neurons exchange information and coordinate their actions through spike generation.
- Master and exploit the power of the learning phenomenon in biological networks, and understand how information is stored in the brain. This point is closely related to memory and neural plasticity: activity-dependent synaptic plasticity induced by learning leads to memory formation, while the spatiotemporal dynamics is itself a function of the underlying neural connectivity. We therefore explore the interplay of spiking activity, spike-dependent plasticity and memory.
- Understand how a consistent decision process might emerge from asynchronous distributed computations. In the domain of behavioral models, major insights are expected from computational neuroscience to bridge the gap between biological mechanisms and cognition. The emergent properties under investigation are related to the integration of motivation and emotion into sensorimotor operations, toward the phenomenon of consciousness.
- Establish how interactions with the external world shape the cerebral architecture, through learning, self-organization, etc. A more general question is that of combining information flows related to various sensory and motor modalities, to motivational signals and to internal consistency.
- Unify and build bridges between the various levels of description usually considered: the cellular, network and behavioral levels. Developing unified models raises many issues, such as the scalability of spike handling, the interpretation of inhibitory and excitatory phenomena in terms of spike interactions, the introduction of temporal information and synchronization into behavioral models, etc.
- Develop data mining and signal processing tools adapted to the kinds of signals manipulated in the domain, e.g. to analyze, at various scales and in real time, large amounts of noisy experimental data such as EEG, MEG and MRI, in order to better understand brain activity.
- Develop models able to bring knowledge to neurobiology: a predictive approach for neuroscience may help to understand biological phenomena or to detect pathologies, and formalized computational models help to study the properties of the modeled phenomena.
- Assess the autonomy and robustness of neuronal models by embedding them in adapted architectures for autonomous systems and by using interactive technology to cope with real problems in the real world.
- Understanding how sensory information is encoded and processed by neuronal systems can be achieved by means of computational modeling. The modeling and full-scale simulation of simple neuronal structures, such as early olfactory systems (antennal lobe, olfactory bulb), become possible by using simplified integrate-and-fire models of neurons and through close interactions with biologists.
- Simple spiking neurons capture the fundamental mechanisms of detailed neuron models while allowing mathematical analysis. Integrate-and-fire models provide a framework to study the neural code and memory.
- Understanding emergence first requires a thorough study of the topological organization of the information stored and propagated in neural networks. This study should focus on feedback loops, which are a major component of the stability of neural systems. Emergence might appear as the result of the stabilization of complex interlaced forward and backward local information flows within a large scheme of small local influence ranges. A precise study of such phenomena in visual systems is a privileged way to understand emergence and the mechanisms that bind together low-level distributed perception and high-level (though also distributed) phenomena such as attention or motivation.
- Visual systems are also an interesting framework for studying the interaction between neural mechanisms and external (here meaning non-visual) stimuli such as oculo-motor or equilibrium information. More generally, this study may find strong bases in perception-action loops and environment coding schemes, two fundamental concepts of autonomous robotics.
- A major way to better understand our models is to try to unify the level of realistic modeling (spikes) and the level of functional modeling. Both levels can also be approached using variational calculus (namely PDEs). Event-driven computation is an interesting way to develop spiking neuron models, and stochastic models may be compared to behavioral models. These studies should help to understand how common phenomena take place at the different levels through different mechanisms: for example, what is the link between the synchronization/desynchronization induced by spike inhibition and the concept of inhibition used in behavioral models, or how should temporal information be dealt with in non-spiking models, before temporally-coded spikes may be introduced.
- Use and develop knowledge discovery methods dealing with noisy temporal signals.
- Compare brain processing in healthy persons and in patients, using non-invasive or minimally invasive techniques.
- An adapted architecture, emulating distributed and possibly asynchronous computation, can be used to study the properties of the models and also to embed them in silico. An adapted architecture can also take the form of an autonomous robot with multimodal sensors and interactive actuators, able to learn from experience in an unknown environment.
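The activity-dependent synaptic plasticity invoked in the goals above is often formalized as spike-timing-dependent plasticity (STDP). The following is a minimal pair-based sketch using the standard textbook exponential window; the parameters and the all-pairs accumulation are illustrative assumptions, not the team's specific model.

```python
import math

def stdp_dw(dt, a_plus=0.05, a_minus=0.055, tau=0.020):
    """Weight change for a pre/post spike pair separated by
    dt = t_post - t_pre (seconds): potentiation when the presynaptic
    spike precedes the postsynaptic one, depression otherwise."""
    if dt >= 0:
        return a_plus * math.exp(-dt / tau)
    return -a_minus * math.exp(dt / tau)

def apply_stdp(w, pre_spikes, post_spikes, w_min=0.0, w_max=1.0):
    """Accumulate pairwise STDP updates and clip the resulting weight."""
    for t_pre in pre_spikes:
        for t_post in post_spikes:
            w += stdp_dw(t_post - t_pre)
    return min(max(w, w_min), w_max)

# Pre firing consistently 5 ms before post strengthens the synapse...
w_up = apply_stdp(0.5, pre_spikes=[0.010, 0.110], post_spikes=[0.015, 0.115])
# ...while the reverse ordering weakens it.
w_down = apply_stdp(0.5, pre_spikes=[0.015, 0.115], post_spikes=[0.010, 0.110])
print(f"causal pairing: {w_up:.3f}, anti-causal pairing: {w_down:.3f}")
```

This asymmetry with respect to spike order is what ties memory formation to precise spike timing, and hence to the neural-code question raised in the first goal.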
State-of-the-art (international, INRIA)
"23 problems in systems neuroscience" Leo Van Hemmen and Terry Sejnowski Computational Neuroscience Series Oxford University Press (2006)
"Spiking neuron models: single neurons, populations, plasticity" Wulfram Gerstner and Werner Kistler Cambridge University Press (2002)
"Neural bubble dynamics in two dimensions: foundations" John Taylor in Biological Cybernetics, vol 80, 1999
"An unbiased implementation of regularization mechanisms" Thierry Viéville in Image and Vision Computing, vol 23, 2005
Begg RK, Kamruzzaman J & Sarker R (eds) (2006) Neural Networks in Healthcare: Potentials and Challenges. IGI Publishing Outcomes (Horizon 2012, Long term)
In computational neuroscience, two kinds of results can be envisioned:
- Future major progress in neuroscience will come from predictions and simulation of computational models;
- Future autonomous systems (including robots, ambient intelligence, medical technology, brain-computer interfaces) will be based on bio-inspired computations.
Why should it be part of the INRIA Strategic Plan?
The rapid and major progress of computer science has been exploited by biological research to make great advances. These advances are now such that it is time for computer science to take advantage, in turn, of our knowledge of living organisms. Computational neuroscience appears as a major opportunity for this, considering the possibilities of the models that might emerge from this research.