Bernstein Satellite Workshop on “Emergent function in non-random neural networks”

Mark the dates September 25th–26th for our Bernstein Satellite Workshop on “Networks which do stuff”, which Guillaume Hennequin, Tim Vogels, and I are organizing this year at the Bernstein meeting in Berlin.

Abstract

Computation in the brain occurs through complex interactions in highly structured, non-random networks. Moving beyond traditional approaches based on statistical physics, engineering-based approaches are opening new vistas on circuit computation by providing novel ways of i) building artificial yet fully functional model circuits, ii) dissecting their dynamics to identify new circuit mechanisms, and iii) reasoning about population recordings made in diverse brain areas across a range of sensory, motor, and cognitive tasks. Thus, the same “science of real-world problems” that is behind the accumulation of increasingly rich neural datasets is now also being recognized as a vast and useful set of tools for their analysis.

This workshop aims to bring together researchers who build and study structured network models, spiking or otherwise, that serve specific functions. Our speakers will present their neuroscientific work at the confluence of machine learning, optimization, control theory, dynamical systems, and other engineering fields, to help us understand these recent developments, critically evaluate their scope and limitations, and discuss their use for elucidating the neural basis of intelligent behaviour.

Date and venue

September 25, 2018, 2:00 – 6:30 pm
September 26, 2018, 8:30 am – 12:30 pm

Marchstrasse 23
10587 Berlin
Germany

Schedule

Tue, Sep 25, 2018
14:00 Nataliya Kraynyukova, MPI for Brain Research, Frankfurt a.M., Germany
Stabilized supralinear network can give rise to bistable, oscillatory, and persistent activity
14:40 Jake Stroud, University of Oxford, UK
Spatio-temporal control of recurrent cortical activity through gain modulation
15:20 Jorge Mejias, University of Amsterdam, The Netherlands
Balanced amplification of signals propagating across large-scale brain networks
16:00 Coffee Break
16:30 Srdjan Ostojic, Ecole normale supérieure, Paris, France
Reverse-engineering computations in recurrent neural networks
17:10 Chris Stock, Stanford University, USA
Reverse engineering transient computations in nonlinear recurrent neural networks through model reduction
17:50 Guillaume Hennequin, University of Cambridge, UK
Flexible, optimal motor control in a thalamo-cortical circuit model
Wed, Sep 26, 2018
08:30 Aditya Gilra, University of Bonn, Germany
Local stable learning of forward and inverse dynamics in spiking neural networks
09:10 Robert Gütig, MPI for Experimental Medicine Göttingen, Germany
Margin learning in spiking neurons
09:50 Claudia Clopath, Imperial College London, UK
Training spiking recurrent networks
10:30 Coffee Break
11:00 Friedemann Zenke, University of Oxford, UK
Training deep spiking neural networks with surrogate gradients
11:40 Christian Marton, Imperial College London, UK
Task representation & learning in prefrontal cortex & striatum as a dynamical system
12:20 Wrap up

More details here and general information here.

Talk at MPI Göttingen on June 28th

Thrilled to share the latest results on learning in multi-layer spiking networks using biologically plausible surrogate gradients at the “Third workshop on advanced methods in theoretical neuroscience” at the Max Planck Institute for Dynamics and Self-Organization, Göttingen, Germany. Thanks to the organizers for inviting me!

More info under: http://www.wamtn.info/

SuperSpike: Supervised learning in spiking neural networks — paper and code published

I am happy to announce that the SuperSpike paper and code are finally published. Here is an example of a network with one hidden layer which is learning to produce a Radcliffe Camera spike train from frozen Poisson input spike trains. The animation is a bit slow initially, but after some time you will see how the hidden layer activity starts to align in a meaningful way to produce the desired output. Also check out this video of a spiking autoencoder which learns to compress a Brandenburg Gate spike train through a bottleneck of only 32 hidden units. Of course these are only toy examples, but since virtually any cost function and input/output combination can be cast into the SuperSpike formalism, it has several immediate uses: i) testing and developing analysis methods for spiking data, ii) building hypotheses about how spiking networks solve specific tasks (e.g. in the early sensory systems), and iii) engineering spiking networks to solve complex spatiotemporal tasks. Cool beanz.
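
For the curious, the heart of the method is easy to state: during the forward pass neurons emit hard binary spikes, but during the backward pass the non-existent derivative of the spike threshold is replaced by the derivative of a fast sigmoid. Below is a minimal sketch of this surrogate-gradient trick. Note that this is only an illustration in PyTorch, not the published code, and the steepness parameter beta is a free choice:

```python
import torch

class SuperSpikeFn(torch.autograd.Function):
    """Heaviside spike nonlinearity with a surrogate gradient (sketch).

    Forward: a neuron spikes when its membrane potential u crosses zero.
    Backward: the Heaviside derivative is replaced by the derivative of
    a fast sigmoid, 1 / (1 + beta * |u|)^2.
    """
    beta = 10.0  # surrogate steepness; a free hyperparameter

    @staticmethod
    def forward(ctx, u):
        ctx.save_for_backward(u)
        return (u > 0).float()

    @staticmethod
    def backward(ctx, grad_output):
        (u,) = ctx.saved_tensors
        surrogate = 1.0 / (1.0 + SuperSpikeFn.beta * u.abs()) ** 2
        return grad_output * surrogate

spike_fn = SuperSpikeFn.apply  # drop-in replacement for a hard threshold
```

Because only the gradient is smoothed, the network still communicates with binary spikes, yet standard backpropagation can now assign credit to hidden units.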

Talk at TU Berlin on Network and Plasticity Dynamics

I am delighted to get the chance to present my work on learning in spiking neural networks on Tuesday, 15th of May 2018 at 10:15am at TU Berlin.

Title: What can we learn about synaptic plasticity from spiking neural network models?

Abstract: Long-term synaptic changes are thought to be crucial for learning and memory. To achieve this feat, Hebbian plasticity and slow forms of homeostatic plasticity work in concert to wire together neurons into functional networks. This is the story you know. In this talk, however, I will tell a different tale. Starting from the iconic notion of the Hebbian cell assembly, I will show the challenges that different forms of synaptic plasticity have to meet to form and stably maintain cell assemblies in a network model of spiking neurons. Constantly teetering on the brink of disaster, a diversity of synaptic plasticity mechanisms must work in symphony to avoid exploding network activity and catastrophic memory loss. Specifically, I will explain why stable circuit function requires rapid compensatory processes, which act on much shorter timescales than homeostatic plasticity, and discuss possible mechanisms. In the second part of my talk, I will revisit the problem of supervised learning in deep spiking neural networks. Specifically, I am going to introduce the SuperSpike trick to derive surrogate gradients and show how it can be used to build spiking neural network models which solve difficult tasks by taking full advantage of spike timing. Finally, I will show that plausible approximations of such surrogate gradients naturally lead to a voltage-dependent three-factor Hebbian plasticity rule.
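
To unpack that last sentence a little: the surrogate-gradient update factorizes into three locally available quantities, namely a feedback error signal, a function of the postsynaptic voltage, and a filtered presynaptic trace, which is what makes it readable as a three-factor Hebbian rule. The sketch below is a deliberately simplified illustration (the names and the plain outer-product form are mine; the full rule additionally filters this product with an eligibility-trace kernel):

```python
import numpy as np

def three_factor_update(error, voltage, pre_trace, lr=1e-3, beta=1.0):
    """One illustrative step of a voltage-dependent three-factor rule.

    error     : per-neuron feedback/error signal e_i(t)  (third factor)
    voltage   : postsynaptic membrane potentials U_i(t)  (second factor)
    pre_trace : low-pass filtered presynaptic spikes     (first factor)

    Weight change dW[i, j] ~ e_i * sigma'(U_i) * pre_trace_j, where
    sigma'(U) = 1 / (1 + beta * |U|)^2 is the surrogate derivative.
    """
    surrogate = 1.0 / (1.0 + beta * np.abs(voltage)) ** 2
    return lr * np.outer(error * surrogate, pre_trace)

# Example: 3 postsynaptic neurons receiving 5 presynaptic inputs
dW = three_factor_update(error=np.array([0.2, -0.1, 0.0]),
                         voltage=np.array([-0.5, 0.1, 0.8]),
                         pre_trace=np.random.rand(5))
```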

Neuroplasticity meeting “From Bench to Machine Learning” in July in Guildford, UK

Mark the dates for the Neuroplasticity meeting “From Bench to Machine Learning” at the University of Surrey, UK (13–14 July 2018).
http://www.ias.surrey.ac.uk/workshops/Neuroplasticity/

This meeting has some really cool speakers whom I am eager to meet. I am happy to have been invited to talk about some recent results on supervised learning in spiking nets. Thanks to the organizers!

Talk at University of Bristol on Friday 23rd of February 2018

I am looking forward to giving a talk on the role of Rapid Compensatory Processes for learning and memory on Friday, 23rd of February 2018, at 1pm at the University of Bristol.

Title: Making Cell Assemblies: What can we learn about plasticity from spiking neural network models?

Abstract: Long-term synaptic changes are thought to underlie learning and memory. Hebbian plasticity and homeostatic plasticity work in concert to combine neurons into functional cell assemblies. This is the story you know. In this talk, I will tell a different tale. In the first part, starting from the iconic notion of the Hebbian cell assembly, I will show the difficulties that synaptic plasticity has to overcome to form and maintain memories stored as cell assemblies in a network model of spiking neurons. Teetering on the brink of disaster, a diversity of synaptic plasticity mechanisms must work in symphony to avoid exploding network activity and catastrophic memory loss – in order to fulfill our preconception of how memories are formed and maintained in biological neural networks. I will introduce the notion of Rapid Compensatory Processes, explain why they have to work on shorter timescales than currently known forms of homeostatic plasticity, and motivate why it is useful to derive synaptic learning rules from a cost function approach. Cost functions will also serve as the motivation for the second part of my talk in which I will focus on the issue of spatial credit assignment. Plastic synapses encounter this issue when they are part of a network in which information is processed sequentially over several layers. I will introduce several recent conceptual advances in the field that have led to algorithms which can train spiking neural network models capable of solving complex tasks. Finally, I will show that such algorithms can be mapped to voltage-dependent three-factor Hebbian plasticity rules and discuss their biological plausibility.

Talk in Oxford on Rapid Compensatory Processes

I am delighted to get the chance to present my work on learning in spiking neural networks next week (Tuesday, 17 October 2017, 1pm to 2pm) in Oxford at the “EP Cognitive and Behavioural Neuroscience Seminar”.

Title: Making Cell Assemblies: What can we learn about plasticity from spiking neural network models?

Abstract: Long-term synaptic changes are thought to underlie learning and memory. Hebbian plasticity and homeostatic plasticity work in concert to combine neurons into functional cell assemblies. This is the story you know. In this talk, I will tell a different tale. In the first part, starting from the iconic notion of the Hebbian cell assembly, I will show the difficulties that synaptic plasticity has to overcome to form and maintain memories stored as cell assemblies in a network model of spiking neurons. Teetering on the brink of disaster, a diversity of synaptic plasticity mechanisms must work in symphony to avoid exploding network activity and catastrophic memory loss – in order to fulfill our preconception of how memories are formed and maintained in biological neural networks. I will introduce the notion of Rapid Compensatory Processes, explain why they have to work on shorter timescales than currently known forms of homeostatic plasticity, and motivate why it is useful to derive synaptic learning rules from a cost function approach. Cost functions will also serve as the motivation for the second part of my talk in which I will focus on the issue of spatial credit assignment. Plastic synapses encounter this issue when they are part of a network in which information is processed sequentially over several layers. I will introduce several recent conceptual advances in the field that have led to algorithms which can train spiking neural network models capable of solving complex tasks. Finally, I will show that such algorithms can be mapped to voltage-dependent three-factor Hebbian plasticity rules and discuss their biological plausibility.

ICML Talk on “Continual Learning Through Synaptic Intelligence”

I am looking forward to presenting our work on synaptic consolidation at ICML in Sydney this year. The talk will be held on Tue Aug 8th 11:06–11:24 AM @ Darling Harbour Theatre. Ben and I will also present a poster (#46) on the same topic on Tuesday.

See also the paper, the code, the talk slides, and an older blog post on the topic.

Supervised learning in multi-layer spiking neural networks

We just put a conference-paper version of “SuperSpike”, our work on supervised learning in multi-layer spiking neural networks, on the arXiv: https://arxiv.org/abs/1705.11146. As always, I am keen to get your feedback.

The temporal paradox of Hebbian learning and homeostatic plasticity

I am happy that our article on “The temporal paradox of Hebbian learning and homeostatic plasticity” was just published in Current Opinion in Neurobiology (full text). The article concisely presents the main arguments for the existence of rapid compensatory processes (RCPs) in addition to slow forms of homeostatic plasticity, and then reviews some of the top candidates to fill this role. Unlike our previous articles on the topic, this one has a control-theoretic spin.
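
To give a flavour of the control-theoretic argument, consider a toy model (my own sketch, not one from the article): a single rate neuron with a BCM-like Hebbian rule whose sliding threshold is set homeostatically. Linearizing around the fixed point shows it is stable only if the homeostatic time constant stays below a critical value set by the Hebbian learning rate; slower homeostasis produces growing oscillations, which is precisely why compensation must be rapid:

```python
import numpy as np

def simulate(tau_h, eta=0.05, x=1.0, kappa=1.0, T=200.0, dt=0.01):
    """Rate neuron with BCM-like Hebbian plasticity and a homeostatic
    sliding threshold theta (toy model for illustration only).

    dw/dt     = eta * x * r * (r - theta)        # Hebbian, positive feedback
    dtheta/dt = (r**2 / kappa - theta) / tau_h   # homeostatic, negative feedback
    """
    w, theta = 1.1, 1.0  # start slightly off the fixed point r = kappa
    rates = []
    for _ in range(int(T / dt)):
        r = w * x
        if not np.isfinite(r):
            break  # stop once the dynamics have run away
        w += dt * eta * x * r * (r - theta)
        theta += dt / tau_h * (r**2 / kappa - theta)
        rates.append(r)
    return np.array(rates)

fast = simulate(tau_h=1.0)    # rapid compensation: rate settles at kappa
slow = simulate(tau_h=100.0)  # slow homeostasis: growing oscillations
```

In this toy model the critical time constant works out to 1 / (eta * x**2 * kappa) = 20 time units, so the second run sits deep in the unstable regime.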

Here is the journal version, and a preprint in case the former does not work for you. Many thanks to everyone who contributed to this article, either as anonymous reviewers or by giving input on the bioRxiv preprint.

I hope you will find this article thought-provoking and helpful.
