I just put up a beta version of a tutorial showing how to train spiking neural networks with surrogate gradients using PyTorch:
I am happy to announce a long overdue maintenance release of Auryn, v0.8.2m, with plenty of fixes and improvements. The most notable improvement is the added support for non-x86 architectures such as ARM and PowerPC. Special thanks to Ankur Sinha for his support on this one.
Up next is improved support and fixes for large-scale deployments (>1000 cores), an effort mainly spearheaded by Anders Lansner. We are still testing this, but you can find the corresponding code in the develop branch.
I am eagerly anticipating fun discussions at Cosyne 2019. We have a poster at the main meeting and I will give two talks at the workshops. If biological learning and spiking neural networks tickle your fancy, come along. I will post some details and supplementary material below.
Thursday, 28 February 2019, 8:30 pm — Poster Session 1
I-98 Rapid spatiotemporal coding in trained multi-layer and recurrent spiking neural networks. Friedemann Zenke, Tim P Vogels
Please try this at home! I have put the early beginnings of a tutorial on how to train simple spiking networks with surrogate gradients using PyTorch here:
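To give a flavor of what the tutorial covers, here is a minimal sketch of the core trick: a Heaviside spike nonlinearity whose backward pass is replaced by a smooth surrogate. The steepness parameter beta and the fast-sigmoid surrogate shape are illustrative choices for this sketch, not necessarily the exact values used in the tutorial:

```python
import torch

class SurrGradSpike(torch.autograd.Function):
    """Heaviside spike nonlinearity with a surrogate gradient.

    Forward: emit a spike (1.0) wherever the membrane potential
    exceeds threshold (here normalized to 0).
    Backward: use the gradient of a fast sigmoid, 1/(1 + beta*|u|)^2,
    in place of the ill-defined derivative of the step function.
    """
    beta = 10.0  # steepness of the surrogate (illustrative value)

    @staticmethod
    def forward(ctx, u):
        ctx.save_for_backward(u)
        return (u > 0).float()

    @staticmethod
    def backward(ctx, grad_output):
        (u,) = ctx.saved_tensors
        surrogate = 1.0 / (1.0 + SurrGradSpike.beta * u.abs()) ** 2
        return grad_output * surrogate

spike_fn = SurrGradSpike.apply

# Toy usage: gradients flow through the otherwise non-differentiable
# spike nonlinearity, so the whole network can be trained end to end.
u = torch.randn(5, requires_grad=True)
s = spike_fn(u)
s.sum().backward()
print(u.grad)  # finite, nonzero surrogate gradients
```

Plugging `spike_fn` in wherever a spiking neuron model thresholds its membrane potential is essentially all that is needed to make standard PyTorch autograd work on a spiking network.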
9:25–9:50 on Monday, 4 March 2019, in the workshop “Continual learning in biological and artificial neural networks” (more)
Title: “Continual learning through synaptic intelligence”
PDF slides download
The talk will be largely based on:
9:40–10:10 on Tuesday, 5 March 2019, in the workshop “Why spikes? – Understanding the power and constraints of spiking based computation in biological and artificial neuronal networks” (more)
Title: “Computation in spiking neural networks — Opportunities and challenges”
I will talk about unpublished results and:
I am very excited to start my research group at the FMI in Basel, Switzerland in June 2019. My group will conduct research on learning and memory at the intersection of computational neuroscience and machine learning. The lab will be embedded in the exciting, collaborative environment of the existing neurobiology groups at the FMI.
More information on our lab website zenkelab.org.
I am currently looking for potential candidates who are passionate about neuroscience and computation. To tackle the problems we are interested in, we often have to be creative and apply analytical and computational tools from other areas. This creative mix includes, but is by no means limited to: dynamical systems, control theory, and machine learning. Ideal candidates should be curious about the neural underpinnings of computation and learning, but should also enjoy taking on difficult math and coding problems.
Topics of interest include:
There are currently several PhD positions with competitive salaries available in my lab starting from June 2019. PhD students will work on projects centered around plastic neural networks and will typically be involved in at least one experimental collaboration. If the above applies to you, and you would like to work and learn in an international and interdisciplinary environment, please consider applying.
Applicants will go through the FMI PhD program selection process. Application deadlines are on Nov 16th and May 1st with the associated hiring days in late January and June, respectively.
Note for applicants: my name currently does not show up in the drop-down menu under “Specific Scientific Interests”. Instead, select “Keller, G.” twice and indicate your preference in the field below.
I am also looking for suitable candidates who are interested in an internship, a master's thesis, or a postdoc. If that applies to you, I am always happy to discuss.
Mark the dates September 25th–26th for our Bernstein Satellite Workshop on “Networks which do stuff”, which Guillaume Hennequin, Tim Vogels, and I are organizing this year at the Bernstein meeting in Berlin.
Computation in the brain occurs through complex interactions in highly structured, non-random networks. Moving beyond traditional approaches based on statistical physics, engineering-based approaches are bringing new vistas on circuit computation, by providing novel ways of i) building artificial yet fully functional model circuits, ii) dissecting their dynamics to identify new circuit mechanisms, and iii) reasoning about population recordings made in diverse brain areas across a range of sensory, motor, and cognitive tasks. Thus, the same “science of real-world problems” that is behind the accumulation of increasingly rich neural datasets is now also being recognized as a vast and useful set of tools for their analysis.
This workshop aims at bringing together researchers who build and study structured network models, spiking or otherwise, that serve specific functions. Our speakers will present their neuroscientific work at the confluence of machine learning, optimization, control theory, dynamical systems, and other engineering fields, to help us understand these recent developments, critically evaluate their scope and limitations, and discuss their use for elucidating the neural basis of intelligent behaviour.
September 25, 2018, 2:00 – 6:30 pm
September 26, 2018, 8:30 am – 12:30 pm
Tue, Sept 25, 2018

14:00 – Nataliya Kraynyukova, MPI for Brain Research, Frankfurt a.M., Germany
Stabilized supralinear network can give rise to bistable, oscillatory, and persistent activity

14:40 – Jake Stroud, University of Oxford, UK
Spatio-temporal control of recurrent cortical activity through gain modulation

15:20 – Jorge Mejias, University of Amsterdam, The Netherlands
Balanced amplification of signals propagating across large-scale brain networks

16:30 – Srdjan Ostojic, Ecole normale supérieure, Paris, France
Reverse-engineering computations in recurrent neural networks

17:10 – Chris Stock, Stanford University, USA
Reverse engineering transient computations in nonlinear recurrent neural networks through model reduction

17:50 – Guillaume Hennequin, University of Cambridge, UK
Flexible, optimal motor control in a thalamo-cortical circuit model

Wed, Sept 26, 2018

08:30 – Aditya Gilra, University of Bonn, Germany
Local stable learning of forward and inverse dynamics in spiking neural networks

09:10 – Robert Gütig, MPI for Experimental Medicine, Göttingen, Germany
Margin learning in spiking neurons

09:50 – Claudia Clopath, Imperial College London, UK
Training spiking recurrent networks

11:00 – Friedemann Zenke, University of Oxford, UK
Training deep spiking neural networks with surrogate gradients

11:40 – Christian Marton, Imperial College London, UK
Task representation & learning in prefrontal cortex & striatum as a dynamical system
Thrilled to share the latest results on learning in multi-layer spiking networks using biologically plausible surrogate gradients at the “Third workshop on advanced methods in theoretical neuroscience” at the Max Planck Institute for Dynamics and Self-Organization, Göttingen, Germany. Thanks to the organizers for inviting me!
More info at: http://www.wamtn.info/
I am happy to announce that the SuperSpike paper and code are finally published. Here is an example of a network with one hidden layer which learns to produce a Radcliffe Camera spike train from frozen Poisson input spike trains. The animation is a bit slow initially, but after some time you will see how the hidden-layer activity starts to align in a meaningful way to produce the desired output. Also check out this video of a spiking autoencoder which learns to compress a Brandenburg Gate spike train through a bottleneck of only 32 hidden units. Of course, these are only toy examples, but since virtually any cost function and input/output combination can be cast into the SuperSpike formalism, it has several immediate future uses: i) testing and developing analysis methods for spiking data, ii) building hypotheses about how spiking networks solve specific tasks (e.g. in the early sensory systems), and finally iii) engineering spiking networks to solve complex spatiotemporal tasks. Cool beanz.
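For the curious: the loss minimized in these supervised examples is, roughly, a van Rossum-style distance between the output spike train $S_i$ and the target spike train $\hat{S}_i$, both smoothed by convolution with a causal kernel $\alpha$ (notation here follows my memory of the paper; see the publication for the exact form):

```latex
% Van Rossum-style loss: output and target spike trains are
% convolved with a smoothing kernel alpha before comparison.
\mathcal{L} = \frac{1}{2} \int_{-\infty}^{T} \mathrm{d}t\,
  \Bigl[ \bigl( \alpha \ast ( \hat{S}_i - S_i ) \bigr)(t) \Bigr]^2
```

Because the kernel smears out the discrete spikes, the loss varies smoothly with spike timing, which is what makes gradient-based training meaningful in the first place.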
I am delighted to get the chance to present my work on learning in spiking neural networks on Tuesday, 15th of May 2018 at 10:15am at TU Berlin.
Title: What can we learn about synaptic plasticity from spiking neural network models?
Abstract: Long-term synaptic changes are thought to be crucial for learning and memory. To achieve this feat, Hebbian plasticity and slow forms of homeostatic plasticity work in concert to wire together neurons into functional networks. This is the story you know. In this talk, however, I will tell a different tale. Starting from the iconic notion of the Hebbian cell assembly, I will show the challenges which different forms of synaptic plasticity have to meet to form and stably maintain cell assemblies in a network model of spiking neurons. Constantly teetering on the brink of disaster, a diversity of synaptic plasticity mechanisms must work in symphony to avoid exploding network activity and catastrophic memory loss. Specifically, I will explain why stable circuit function requires rapid compensatory processes, which act on much shorter timescales than homeostatic plasticity and discuss possible mechanisms. In the second part of my talk, I will revisit the problem of supervised learning in deep spiking neural networks. Specifically, I am going to introduce the SuperSpike trick to derive surrogate gradients and show how it can be used to build spiking neural network models which solve difficult tasks by taking full advantage of spike timing. Finally, I will show that plausible approximations of such surrogate gradients naturally lead to a voltage-dependent three-factor Hebbian plasticity rule.
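For readers unfamiliar with the trick sketched in the abstract: the derivative of a spike train with respect to the membrane potential is zero almost everywhere, so SuperSpike replaces it with the derivative of a fast sigmoid. As far as I recall, the form used in the paper is the following, with $\beta$ a steepness parameter and $\vartheta$ the firing threshold:

```latex
% The ill-defined derivative of the spike train S_i with respect to
% the membrane potential U_i is replaced by a smooth surrogate:
\frac{\partial S_i}{\partial U_i}
  \;\longrightarrow\;
  \sigma'(U_i) = \frac{1}{\bigl( 1 + \beta \,\lvert U_i - \vartheta \rvert \bigr)^{2}}
```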
Mark the date for the Neuroplasticity meeting “From Bench to Machine Learning” at the University of Surrey, UK (13–14 July 2018).
This meeting has some really cool speakers whom I am eager to meet. I am happy to have been invited to talk about some recent results on supervised learning in spiking nets. Thanks to the organizers!