Unsupervised STDP-based learning and computer vision applications
Place: Large Lecture Room - CVC
Affiliation: Computational Neuroscience Group, ICREA and Univ. Pompeu Fabra, Barcelona, Spain
Spike Timing-Dependent Plasticity (STDP) is a physiological mechanism of activity-driven synaptic regulation: an excitatory synapse that receives a presynaptic spike shortly before the postsynaptic neuron fires is reinforced (Long-Term Potentiation), whereas it is weakened when the order is reversed (Long-Term Depression).
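As a rough sketch, the pair-based form of this rule can be written as a function of the pre/post spike-time difference; all parameter values below are illustrative assumptions, not figures from the talk.

```python
import math

def stdp_dw(dt, a_plus=0.01, a_minus=0.012, tau_plus=0.020, tau_minus=0.020):
    """Weight change for one pre/post spike pair; dt = t_post - t_pre (seconds).
    Amplitudes and time constants are hypothetical placeholders."""
    if dt > 0:
        # Presynaptic spike precedes the postsynaptic one: LTP.
        return a_plus * math.exp(-dt / tau_plus)
    if dt < 0:
        # Postsynaptic spike precedes the presynaptic one: LTD.
        return -a_minus * math.exp(dt / tau_minus)
    return 0.0
```

The exponential decay captures the fact that only near-coincident spike pairs change the synapse appreciably.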
We have shown that STDP enables single neurons to detect repeating spatiotemporal input spike patterns, even in clock-free systems, which is a computationally difficult problem. Furthermore, the mechanism still works if firing is probabilistic (an inhomogeneous Poisson process), as long as the firing probability has narrow temporal peaks (~10-20 ms). This means that neither the absence of a reference time nor Poisson variability is an obstacle for temporal coding theories, which claim that the brain could use spike times, as opposed to time-averaged firing rates, to rapidly transmit and process information. In addition, we suggested that brain oscillations may help both in generating precise spike patterns and in decoding them with STDP, which is especially useful for static or slowly varying stimuli.
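To make the "inhomogeneous Poisson with narrow peaks" setting concrete, one can generate such spike trains by thinning; the baseline and peak rates below are illustrative assumptions chosen only to show a ~10 ms peak, not values from the work described.

```python
import math
import random

def inhomogeneous_poisson(rate_fn, t_max, rate_max, seed=0):
    """Draw spike times in [0, t_max) by thinning a homogeneous Poisson
    process of rate rate_max (rate_fn must stay <= rate_max everywhere)."""
    rng = random.Random(seed)
    t, spikes = 0.0, []
    while True:
        t += rng.expovariate(rate_max)
        if t >= t_max:
            return spikes
        if rng.random() < rate_fn(t) / rate_max:
            spikes.append(t)

def peaked_rate(t, base=5.0, peak=200.0, t0=0.25, sigma=0.010):
    """Firing probability with one narrow (~10 ms) Gaussian peak at t0;
    all numbers are hypothetical."""
    return base + peak * math.exp(-0.5 * ((t - t0) / sigma) ** 2)

spikes = inhomogeneous_poisson(peaked_rate, t_max=0.5, rate_max=205.0)
```

Spikes cluster tightly around the peak, which is precisely the temporal structure that a single STDP neuron can latch onto.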
We applied these ideas in a hierarchical model of the visual cortex inspired by the HMAX model. However, our model uses a finer level of description (individual neurons and spikes), and the coding scheme involved is different.
Essentially, we have found that combining STDP with a temporal coding scheme, in which the most strongly activated input-layer neurons fire first, leads neurons in subsequent layers to gradually become selective to prototypical visual patterns that are both salient and consistently present in the images. At the same time, their responses become faster and faster. Up to this point, the learning is fully unsupervised; however, the output of these higher-order neurons can be fed into a supervised classifier, leading to robust object recognition.
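The intensity-to-latency front end can be sketched as follows; the linear mapping and the unit names are illustrative choices, not the model's exact formula.

```python
def intensity_to_latency(activations, t_max=0.1):
    """Rank-order temporal code: map each input unit's activation to a
    spike latency so that the most strongly activated units fire first.
    One spike per unit per image; the linear mapping is a hypothetical
    stand-in for the model's actual conversion."""
    a_max = max(activations.values())
    return {unit: t_max * (1.0 - a / a_max) for unit, a in activations.items()}

# Example: a strongly driven edge detector fires before a weakly driven one.
latencies = intensity_to_latency({"edge_0deg": 0.9, "edge_90deg": 0.3})
```

Because STDP potentiates the synapses carrying the earliest spikes, downstream neurons end up wired to the most strongly and consistently activated afferents.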
More recently, together with Linares' group, we have worked on hardware implementations able to deal with continuous vision. In a first attempt to simulate the early visual system, we used a simple setup combining an artificial retina and a spiking neural network mimicking the primary visual cortex (V1). The artificial retina sensed the external world in a continuous (frame-free) manner and generated spikes that were asynchronously propagated, as they flowed in, up to the V1 layer, using Address Event Representation (AER). In this layer, neurons were equipped with memristor-based STDP (simulated for now), which enabled them to gradually become orientation-selective as the system was exposed to natural stimuli. These results are still preliminary, but very encouraging.
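A minimal sketch of Address Event Representation, assuming a typical event format of pixel address, contrast polarity, and timestamp (the field names are hypothetical, not the actual chip's protocol):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AddressEvent:
    """One spike in Address Event Representation: the address of the
    emitting pixel plus a timestamp; no frames are ever assembled."""
    x: int
    y: int
    polarity: int  # +1 = ON (brightening), -1 = OFF (darkening)
    t_us: int      # timestamp in microseconds

def merge_streams(*streams):
    """Deliver events from several sources to a downstream layer in
    timestamp order, mimicking asynchronous, flow-through propagation."""
    return sorted((e for s in streams for e in s), key=lambda e: e.t_us)
```

Downstream layers simply consume events as they arrive, which is what allows the V1 neurons to update their memristive synapses continuously rather than frame by frame.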
We expect this line of research to yield revolutionary results in the next decade.