This is a general presentation of my research activities. For my detailed current activities, see here.
My research interests concern artificial neural networks and their relation to computational neuroscience and machine learning.
For the first aspect, I am interested in adaptive behavior, autonomous robotics, and biologically inspired architectures and learning rules. On the one hand, I work to better define the elementary mechanisms of neuronal functioning and learning. On the other hand, I design models of cerebral structures, including the posterior and prefrontal cortex, the cerebellum, the amygdala, and the hippocampus, with applications to perceptual scene interpretation, planning, and various aspects of memorization.
For the second aspect, my fields of expertise relate to pattern recognition; data, signal, vision, and speech processing; and neurocontrol, with special emphasis on neuro-symbolic integration (knowledge extraction and assimilation in neural networks, and the coupling of connectionist and symbolic techniques).
My research falls into several periods:
- 1986-1990: Master's and PhD period: Artificial Intelligence. In 1986, I completed a Master's degree in computer science, on computer vision, with Roger Mohr. 1987-1990 was my PhD period, supervised by Jean-Paul Haton in the RFIA team. RFIA was a very large team covering most topics in Artificial Intelligence, which was an excellent introduction to the domain for me. I was fortunate to be able to choose the topic of my PhD myself: understanding the brain! This work was my opportunity to introduce connectionism in my lab, along with the specificity of biological inspiration, thanks to Yves Burnod's participation in the supervision of my work. The topic of my PhD was a connectionist model of the cortex at the level of the cortical column, applied to vision, speech, and motion processing.
- 1990-1997: Neuro-symbolic integration. After my recruitment at INRIA as a junior researcher, I decided to better understand the relations between symbolic AI and connectionism, seen as a numerical AI. I coordinated the European MIX project (1994-1997), which theoretically studied coupling strategies between symbolic and connectionist models and proposed a methodology for the software implementation of such couplings. I was also the editor, with Prof. Ron Sun, of a book that founded the domain of Neuro-Symbolic Integration. Among the possible relations between AI and connectionism, one view holds that connectionism alone can emulate any cognitive function, as our brain illustrates. This was also linked to my PhD period, and I decided to investigate the topic more deeply, but not alone: the domain is so wide...
- 1998-2008: The Cortex period. On the basis of numerical computation and pluridisciplinary work with the Life Sciences, I proposed to create a research team in computer science on the topic of mastering neuronal computation, i.e. numerical, distributed, and adaptive computation. CORTEX was created in 1998 and associated with INRIA in 2000. We are now 30 people: one third permanent staff, one third PhD students, and one third long-term members (postdocs, trainees, engineers). Our domains of inspiration are neuroscience, distributed computation, and machine learning.
My past PhD students:
- Nicolas Pican: Static and dynamic approach of modulation of synaptic efficiency in neural networks. (01/1995).
- Jean-Claude Di Martino: Knowledge integration in distributed systems for spectral lines extraction in sonar images. (01/1995).
- Brigitte Colnet: Neuromimetic approaches for acoustic sources localization. (06/1995).
- Stéphane Durand: TOM, a connectionist architecture for sequence processing: application to speech recognition. (12/1995).
- Lionel Beaugé: Definition of memorization mechanisms for neuromimetic systems. (12/1995).
- Yannick Lallement: Neuro-symbolic integration and artificial intelligence. (06/1996).
- Jean-François Remm: Symbolic-connectionist cooperation for radar signal interpretation. (11/1996).
- Hervé Frezza-Buet: The motivation in connectionist systems for the organization of behavior. (10/1999).
- Yann Boniface: Platform of distributed implementation of connectionist mechanisms. (10/2000).
- Laurent Bougrain: Coupling of neuronal and physical models: the contextual dimension. (10/2000).
- Nicolas Rougier: Connectionist methods situated in action for the autonomous behavior of a mobile robot. (10/2000).
- Bruno Scherrer: Interfacing cortical maps and markovian processes for the realization of autonomous, cooperating robotic agents. (01/2003), co-supervised with François Charpillet.
- Olivier Rochel: Implementation on parallel machines of spiking neural networks. (10/2004), co-supervised with Dominique Martinez.
- Claudio Castellanos Sanchez: Connectionist models and visual perception for autonomous embedded systems. (10/2005), co-supervised with Bernard Girau.
- Shadi Al Shehabi: Mapped neuronal models for the processing of multimedia documentary data: application to web analysis. (06/2006), co-supervised with Jean-Charles Lamirel.
- Julien Vitay: Combining information flows for multimodal neuronal circuits learning. (06/2006).
- Georges Schutz: Knowledge extraction by artificial neural networks: application to the control of an industrial furnace. (10/2006).
- Mohamed Attik: Intelligent processing of data by artificial neural networks for GIS valorization. (12/2006), co-supervised with Laurent Bougrain.
- Olivier Ménard: Object recognition and localization in a visual scene by a robotic system. (12/2006), co-supervised with Hervé Frezza-Buet.
- Jérémy Fix: Numerical and distributed mechanisms of motor anticipation. (10/2008), co-supervised with Nicolas Rougier.
- Maxime Ambard: Emergence of the role of oscillatory dynamics in a computational model of the olfactory system. (06/2009), co-supervised with Dominique Martinez.
- Randa Kassab: Analysis of emerging and stationary properties in information flows changing over time: application to the filtering and analysis of information from the web. (05/2009).
- Thomas Girod: A model of multimodal learning for a cortically-inspired distributed substratum. (12/2010).
- Lucian Alecu: A neuro-dynamic approach to the design of cortically inspired self-organizing processes. (06/2011).
Keywords: connectionism, artificial neural network, perceptron, multi-layer perceptron, self-organizing map.
Connectionism can be defined as the study of graphs of simple interconnected units performing elementary numerical computations derived from their inputs and internal parameters. In particular, neuro-inspired connectionism is concerned with artificial neural networks such as perceptrons or self-organizing maps. These models have been thoroughly studied in machine learning for their learning and approximation properties and their links with other statistical tools.
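The definition above — a simple unit computing from its inputs and internal parameters, adjusted by learning — can be illustrated with the classic perceptron rule. This is a generic textbook sketch (the learning rate, epoch count, and the AND task are illustrative choices, not taken from my own work):

```python
# Minimal perceptron sketch: one threshold unit trained with the
# error-driven perceptron rule (all parameter values are illustrative).
def train_perceptron(samples, epochs=20, lr=0.1):
    """Train a single threshold unit on (input, target) pairs."""
    n = len(samples[0][0])
    w = [0.0] * n          # synaptic weights (the unit's internal parameters)
    b = 0.0                # bias (threshold)
    for _ in range(epochs):
        for x, target in samples:
            # Unit output: weighted sum of inputs through a step function.
            y = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            # Learning: shift weights in proportion to the output error.
            err = target - y
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

# Learn the (linearly separable) logical AND function.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
```

For linearly separable data such as AND, the perceptron convergence theorem guarantees that this rule reaches a correct weight vector in a finite number of updates.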
Artificial neural networks have been successfully applied to a variety of tasks (pattern matching, prediction, control) in a variety of domains (signal processing, industrial processes, medicine). Beyond computing statistics on databases from such domains, one can also consider applying these capabilities to databases with an important temporal dimension and to more cognitive tasks such as interpretation and knowledge extraction. Neither characteristic is a classical property of artificial neural networks, but both are fundamental from an expertise point of view. Current research aims at extending network capabilities to these tasks.
Other connectionist approaches aim at going back to the basis of connectionism and seek a tighter inspiration from neuroscience. The inspiration can be local, seeking more realistic models of neuronal functioning, particularly of its dynamical aspects. It can also be global, with the goal of implementing tasks related to the modeling of integrated behavior. Both biologically inspired approaches are referred to as computational neuroscience. They are multidisciplinary and aim at a better understanding of brain function (the biological aspect) and of neuronal computation, seen as a new paradigm of computation (the computer science aspect).
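A standard example of a "local" model that captures the dynamical aspect of neuronal functioning is the leaky integrate-and-fire neuron. The sketch below is a generic minimal version (time constant, threshold, and input values are arbitrary illustrative choices, not tied to a specific study):

```python
# Leaky integrate-and-fire neuron: the membrane potential leaks toward
# rest, integrates its input, and emits a spike on threshold crossing.
# All parameter values are illustrative.
def simulate_lif(inputs, dt=1.0, tau=10.0, v_rest=0.0, v_thresh=1.0, v_reset=0.0):
    """Integrate an input current over time; return the spike times."""
    v = v_rest
    spikes = []
    for t, i_in in enumerate(inputs):
        # Discretized membrane equation: tau * dv/dt = -(v - v_rest) + input.
        v += (-(v - v_rest) + i_in) * dt / tau
        if v >= v_thresh:       # threshold crossing: spike, then reset
            spikes.append(t)
            v = v_reset
    return spikes

# A constant supra-threshold input produces regular spiking.
spike_times = simulate_lif([1.5] * 100)
```

Unlike the static threshold unit of classical connectionism, the output here is a temporal pattern of spikes, which is precisely the dynamical dimension these approaches study.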
Another important issue in connectionism is to benefit from the parallel, distributed nature of its computation and to develop implementations that exploit those characteristics. As such implementations have to cope with the real nature of neural computation, they may improve the performance of algorithms and be embedded in electronic devices.
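One common way to expose this parallelism in software is a synchronous, vectorized update: every unit in a network is updated in a single array operation, the kind of data parallelism that maps directly onto SIMD hardware or GPUs. The following is a minimal sketch (network size, weights, and the tanh activation are arbitrary illustrative choices):

```python
# Synchronous vectorized update of a recurrent layer: one matrix-vector
# product computes the new state of every unit at once, instead of
# looping over units (sizes and weight statistics are illustrative).
import numpy as np

rng = np.random.default_rng(0)
n_units = 256
W = rng.normal(0.0, 0.1, size=(n_units, n_units))  # recurrent weight matrix
x = rng.random(n_units)                            # current unit activities

def step(x, W):
    """One synchronous network step: all units updated in parallel."""
    return np.tanh(W @ x)   # weighted sums for all units, then activation

for _ in range(10):
    x = step(x, W)
```

The per-unit independence inside `step` is what lets the same computation be distributed across cores, SIMD lanes, or dedicated neuromorphic circuits.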