Neuronal circuits in our brain are plastic, a property we presumably exploit to form memories. As we go through life we constantly form new memories and recall things we already know, and it is hard to imagine a high-level switch that ensures a clean separation between learning and recall. Moreover, when we recall a concept, largely the same neurons and brain areas are activated as when we are directly confronted with the concept itself. But if evoked and “recalled” activity look the same, or at least very similar, how does a synapse know when we are merely thinking of something as opposed to actually experiencing it? In other words, when should a synapse, which has access to only limited local information, update the synaptic state that represents its beliefs about the outside world, and when should it lie still and do nothing? Somehow biological neural networks are remarkably good at this kind of online learning. A large part of my research is dedicated to the question of how neural circuits achieve this.
The need for rapid compensatory processes (RCPs)
While Hebbian plasticity is believed to form associative memory traces, or engrams, and has been found in various flavours throughout large parts of the brain, it is still not entirely clear why memories, once formed, remain relatively stable when we recall them or other memories stored in the same network. It has been known for decades that Hebbian plasticity needs to be constrained by one or several forms of negative feedback to counteract the positive feedback generated by Hebbian plasticity alone. Multiple theoretical works have suggested different ways of achieving this, for instance by rescaling or limiting the growth of synaptic weights (von der Malsburg, 1973; Miller and MacKay, 1994). In models, these mechanisms are often chosen to act rapidly or even instantaneously.
Experimentally, on the other hand, probably the best-characterized form of homeostatic plasticity, discovered in the nineties, is synaptic scaling (Turrigiano et al., 1998). Its characteristic timescale is on the order of hours to days. Hebbian plasticity, in the form of STDP or classic LTP induction protocols, can by contrast be induced in a matter of minutes.
This discrepancy of timescales between the two mechanisms, however, poses a severe stability problem for neural network models (Zenke et al., 2013). Imagine driving a car while your average reaction time is orders of magnitude longer than the time it takes you to reach the next traffic light… To avoid an explosive increase in neural activity, models need rapid compensatory processes (RCPs) which are much faster than experimentally known forms of synaptic scaling. One mechanism that could achieve this is heterosynaptic plasticity (Chen et al., 2013; Chistiakova et al., 2014), but it is conceivable that network-wide mechanisms, possibly related to synaptic inhibition, could serve a similar purpose.
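To see why the timescale of the feedback matters, here is a minimal rate-based toy sketch (not the spiking model from the paper): a single neuron whose Hebbian plasticity has a BCM-style sliding threshold driven by a low-pass rate detector with time constant tau. All parameters and the specific functional forms are illustrative assumptions; the point is only that the very same rule converges when the rate detector is fast and runs away when it is slow.

```python
import numpy as np

def simulate(tau, eta=0.1, r0=1.0, dt=0.01, T=500.0):
    """Toy BCM-style rate model (illustrative, hypothetical parameters):
    one neuron with rate r = w * x and a sliding plasticity threshold
    theta = rbar**2 / r0, where rbar is a low-pass rate detector with
    time constant tau. Linear stability requires tau < 1 / (eta * r0)."""
    x = 1.0                  # constant presynaptic rate
    w, rbar = 1.05, 1.0      # start slightly off the fixed point w = r0
    for _ in range(int(T / dt)):
        r = w * x
        rbar += dt * (r - rbar) / tau          # rate detector (slow or fast)
        theta = rbar ** 2 / r0                 # sliding threshold
        w += dt * eta * x * r * (r - theta)    # Hebbian term: LTP above theta
        if w > 1e3:
            return np.inf                      # runaway growth
    return w

stable = simulate(tau=1.0)      # fast rate detector: settles near w = 1
unstable = simulate(tau=100.0)  # slow rate detector: explosive growth
```

The homeostatic set point and the form of the threshold are deliberately simplistic; the qualitative behaviour, stability only when the detector is faster than the Hebbian instability, is what carries over to the network case.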
Hebbian and non-Hebbian plasticity orchestrated for learning and stability
The idea behind what we called orchestrated plasticity in Zenke et al. (2015), and what Toyoizumi et al. (2014) call intrinsically stable plasticity, is that multiple competing plasticity mechanisms at the synaptic level work hand in hand such that they cancel each other most of the time. During regular network dynamics (e.g. background activity or the recall of memory patterns), weights therefore hardly change (see video below). Synaptic change only takes place when it is needed, either to form new memories or potentially to update or even overwrite old ones. Interestingly, these requirements can be fulfilled by purely local rules acting at individual synapses. However, the idea entails that single synapses express a well-orchestrated combination of both Hebbian and non-Hebbian forms of plasticity.
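A rate-based caricature may make the cancellation idea concrete. The sketch below is not the rule from the paper: the specific terms (a thresholded Hebbian term plus heterosynaptic depression gated by a high power of the postsynaptic rate) and all parameters are hypothetical. It only illustrates how a purely local rule can leave weights nearly untouched during background activity and recall, yet produce a large change for a novel, strongly active input.

```python
# Hypothetical parameters for a rate-based caricature of an
# "orchestrated" plasticity rule (not the rule from Zenke et al., 2015)
ETA, BETA, THETA = 1e-2, 1e-2, 1.0

def dw(x, r, w, w_ref):
    """Local weight change: a Hebbian term (pre * post with a rate
    threshold) plus heterosynaptic depression gated by a high power of
    the postsynaptic rate, pulling w toward a reference weight w_ref."""
    hebb = ETA * x * r * (r - THETA)
    hetero = -BETA * r ** 4 * (w - w_ref)
    return hebb + hetero

w_ref = 0.5
# Background: low pre- and postsynaptic rates -> both terms negligible
background = dw(x=0.1, r=0.1, w=0.6, w_ref=w_ref)

# Recall: high rates, but the consolidated weight sits exactly where the
# Hebbian and heterosynaptic terms balance, so the net drift vanishes
x_rec, r_rec = 2.0, 5.0
w_star = w_ref + ETA * x_rec * r_rec * (r_rec - THETA) / (BETA * r_rec ** 4)
recall = dw(x=x_rec, r=r_rec, w=w_star, w_ref=w_ref)

# Learning: a novel, strongly active input whose weight is still at the
# reference value -> the Hebbian term dominates and the weight grows
learning = dw(x=2.0, r=5.0, w=w_ref, w_ref=w_ref)
```

The balance point `w_star` is constructed by hand here; in a network model the weight would settle there on its own, which is exactly the "cancel each other most of the time" behaviour described above.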
For my current work, I am interested in understanding which RCP or RCPs actually exist at biological synapses or at the network level (and why we seem to have missed them so far in many of our models). Because this topic has high priority on my research agenda, I am always eager to talk to experimentalists who work on plasticity, in particular those who might be interested in (or already are) exploring these avenues in their experiments.
- Zenke, F., Agnes, E.J., and Gerstner, W. (2015). Diverse synaptic plasticity mechanisms orchestrated to form and retrieve memories in spiking neural networks. Nat Commun 6. doi:10.1038/ncomms7922
- Zenke, F., Hennequin, G., and Gerstner, W. (2013). Synaptic Plasticity in Neural Networks Needs Homeostasis with a Fast Rate Detector. PLoS Comput Biol 9, e1003330. doi:10.1371/journal.pcbi.1003330
- Chen, J.-Y., Lonjers, P., Lee, C., Chistiakova, M., Volgushev, M., and Bazhenov, M. (2013). Heterosynaptic Plasticity Prevents Runaway Synaptic Dynamics. J Neurosci 33, 15915–15929.
- Chistiakova, M., Bannon, N.M., Bazhenov, M., and Volgushev, M. (2014). Heterosynaptic Plasticity: Multiple Mechanisms and Multiple Roles. Neuroscientist 20, 483–498.
- von der Malsburg, C. (1973). Self-organization of orientation sensitive cells in the striate cortex. Kybernetik 14, 85–100.
- Miller, K.D., and MacKay, D.J. (1994). The role of constraints in Hebbian learning. Neural Comput 6, 100–126.
- Toyoizumi, T., Kaneko, M., Stryker, M.P., and Miller, K.D. (2014). Modeling the Dynamic Interaction of Hebbian and Homeostatic Plasticity. Neuron 84, 497–510.
- Turrigiano, G.G., Leslie, K.R., Desai, N.S., Rutherford, L.C., and Nelson, S.B. (1998). Activity-dependent scaling of quantal amplitude in neocortical neurons. Nature 391, 892–896.
This is the Supplementary Movie from our paper: Zenke, F., Agnes, E.J., and Gerstner, W. (2015) [fulltext]: