Most of our existing plasticity models are essentially mathematical descriptions of how to move a single parameter, the synaptic weight w, up and down. We are good at casting this into a differential equation framework which works for both rate-based and spiking neurons and can be fitted to data from STDP experiments.
However, what this treatment often neglects is that synapses are complicated dynamical systems in their own right, whose state space is presumably not one-dimensional but much higher dimensional. But what is the role of this synaptic complexity? Is it just an epiphenomenon of how biology implements LTP and LTD, or do the complicated temporal dynamics do something essential for learning?
My colleague Lorric Ziegler, for instance, had to work rather hard to build a single model with enough complexity to capture a large body of data on synaptic tagging and capture (Redondo and Morris, 2011) (Ziegler et al., 2015). To do this, he had to make the synaptic state space three-dimensional. To me it really doesn’t feel right to think of such complex temporal dynamics as a mere epiphenomenon.
[Figure: Illustration of the three synaptic state variables in the Ziegler model evolving in double well potentials. Adapted from Ziegler et al. (2015)].
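To make the double-well picture concrete, here is a toy sketch of my own (not the actual Ziegler et al. model) of a single synaptic state variable relaxing in a double-well potential U(w) = w⁴/4 − w²/2, whose two minima at w = ±1 act as bistable "weak" and "strong" states:

```python
def simulate(w0, dt=0.01, steps=2000):
    """Integrate dw/dt = -U'(w) = w - w**3 with Euler steps.

    The potential U(w) = w**4/4 - w**2/2 has minima at w = -1 and w = +1,
    so the state settles into whichever well it starts in.
    """
    w = w0
    for _ in range(steps):
        w += dt * (w - w**3)  # gradient descent on the double-well potential
    return w

# Starting on either side of the barrier at w = 0, the state converges
# to the nearby well, giving two stable synaptic states.
print(simulate(0.2))   # close to +1
print(simulate(-0.2))  # close to -1
```

Coupling three such bistable variables on different timescales is what gives a model like Lorric's its rich consolidation dynamics.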
In one of my models, I introduced a second dimension into the synaptic state space to capture a notion of synaptic consolidation (Zenke et al., 2015). In this model the consolidation dynamics served a specific functional purpose: they protect synapses belonging to the “overlap” of two or more memories from being destroyed by rapid negative feedback processes (RNFPs) when one memory is recalled repeatedly but the other is not.
While Lorric and I had to invent specific synaptic state spaces for specific problems, Stefano Fusi, Marcus Benna, my colleague Subhy Lahiri and my boss Surya instead considered more general synaptic state space models (Fusi et al., 2005, Lahiri and Ganguli, 2013, Benna and Fusi, 2015) and analyzed their theoretical capacity for single synapses. However, nobody knows how such synapses behave in network models and how networks can access information stored in complex synapses. A large part of my work with Surya and Subhy is thus centered around the question of the computational role of complex synaptic dynamics for learning and recall in networks.
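To give a flavor of what such a general state space model looks like, here is a simplified cascade-style synapse in the spirit of Fusi et al. (2005): the synapse is binary (weak or strong) but carries a hidden depth, and transition probabilities fall off geometrically with depth, so deeply consolidated states become progressively harder to flip. The transition rules here are schematic and differ in detail from the published model:

```python
import random

class CascadeSynapse:
    """Simplified cascade-style synapse (schematic, after Fusi et al., 2005).

    Binary efficacy (weak/strong) plus a hidden metaplastic depth 1..n.
    Plasticity events either flip the efficacy or push the synapse deeper,
    both with probability x**(depth - 1), so deep states resist overwriting.
    """
    def __init__(self, n=5, x=0.5):
        self.n, self.x = n, x
        self.strong, self.depth = False, 1

    def _q(self):
        return self.x ** (self.depth - 1)  # geometric fall-off with depth

    def potentiate(self):
        if not self.strong:
            if random.random() < self._q():
                self.strong, self.depth = True, 1  # flip weak -> strong
        elif self.depth < self.n and random.random() < self._q():
            self.depth += 1  # already strong: consolidate deeper

    def depress(self):
        if self.strong:
            if random.random() < self._q():
                self.strong, self.depth = False, 1  # flip strong -> weak
        elif self.depth < self.n and random.random() < self._q():
            self.depth += 1  # already weak: consolidate deeper
```

After repeated potentiation a synapse sinks into deep strong states, and isolated depression events then rarely flip it back, which is exactly the kind of memory protection these capacity analyses quantify.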
While a large part of the theoretical understanding of complex synapses can be gained analytically, studying complex synapse models in spiking neural networks makes it indispensable to simulate them efficiently. I thus had to dedicate some of my time to extending my neural simulation library Auryn to handle complex synaptic models efficiently. Most of that work is now done for continuous state space models (discrete state space models are still pending). To that end, I recently released a new Auryn version v0.8-alpha which supports complex synapses and efficient synaptic state updates using vectorization. One of the first models to make use of this is Lorric’s 3D state space model, which can be found here.
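The performance gain from vectorization comes from laying out each synaptic state variable as one contiguous array across all synapses and updating whole arrays at once, instead of looping over synapse objects. A hypothetical NumPy sketch of that idea (not Auryn's actual C++ implementation; the two-variable consolidation dynamics here are a stand-in):

```python
import numpy as np

def euler_update(states, dt=1e-3, tau=10.0):
    """Advance the state of all synapses in one vectorized Euler step.

    states: array of shape (2, n_synapses). Row 0 holds every synapse's
    weight w, row 1 a slow reference variable z. As a toy consolidation
    rule, w relaxes toward z quickly while z tracks w slowly.
    """
    w, z = states
    dw = -(w - z) / tau          # fast relaxation toward the consolidated value
    dz = (w - z) / (10.0 * tau)  # slow consolidation of the current weight
    states[0] += dt * dw         # one array operation updates every synapse
    states[1] += dt * dz
    return states
```

Because each line operates on a full array, the compiler (or, in Auryn's case, SIMD intrinsics) can process many synapses per instruction, which is what makes large networks of multidimensional synapses tractable.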
- Benna, M.K., and Fusi, S. (2015). Computational principles of biological memory. arXiv:1507.07580 [Q-Bio].
- Fusi, S., Drew, P.J., and Abbott, L.F. (2005). Cascade models of synaptically stored memories. Neuron 45, 599–611.
- Lahiri, S., and Ganguli, S. (2013). A memory frontier for complex synapses. In Advances in Neural Information Processing Systems, (Tahoe, USA: Curran Associates, Inc.), pp. 1034–1042.
- Redondo, R.L., and Morris, R.G.M. (2011). Making memories last: the synaptic tagging and capture hypothesis. Nat Rev Neurosci 12, 17–30.
- Zenke, F., Agnes, E.J., and Gerstner, W. (2015). Diverse synaptic plasticity mechanisms orchestrated to form and retrieve memories in spiking neural networks. Nat Commun 6.
- Ziegler, L., Zenke, F., Kastner, D.B., and Gerstner, W. (2015). Synaptic Consolidation: From Synapses to Behavioral Modeling. J Neurosci 35, 1319–1334.