Home

Friedemann Zenke

Computational neuroscientist at the FMI in Basel (zenkelab.org). Main interests: memory, spiking neurons, and learning in biologically inspired neural networks.


News

  • Online workshop: Spiking neural networks as universal function approximators
    Dan Goodman and I are organizing an online workshop on new approaches to training spiking neural networks, Aug 31st / Sep 1st 2020. Invited speakers: Sander Bohte (CWI), Iulia M. Comsa (Google), Franz Scherr (TUG), Emre Neftci (UC Irvine), Timothee Masquelier (CNRS Toulouse), Claudia Clopath (Imperial College), Richard Naud (U Ottawa), and Julian Goeltz (Uni Bern). There are… Read more »
  • Preprint: The remarkable robustness of surrogate gradient learning for instilling complex function in spiking neural networks
    We just put up a new preprint https://www.biorxiv.org/content/10.1101/2020.06.29.176925v1 in which we take a careful look at what makes surrogate gradients work. Spiking neural networks are notoriously hard to train using gradient-based methods due to their binary spiking nonlinearity. To deal with this issue, we often approximate the spiking nonlinearity with… Read more »
  • Paper: Finding sparse trainable neural networks through Neural Tangent Transfer
    New paper led by Tianlin Liu, “Finding sparse trainable neural networks through Neural Tangent Transfer” https://arxiv.org/abs/2006.08228 (and code), which was accepted at ICML. In the paper we leverage the neural tangent kernel to instantiate sparse neural networks before training them. Deep neural networks typically rely on dense, fully connected layers and… Read more »
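The surrogate gradient idea mentioned in the preprint above can be sketched in a few lines: the binary spiking nonlinearity keeps its hard threshold in the forward pass, while the backward pass substitutes a smooth surrogate derivative. The fast-sigmoid surrogate and its steepness parameter `beta` below are illustrative choices, not necessarily the exact setup used in the preprint.

```python
import numpy as np

def spike(u, threshold=1.0):
    """Forward pass: Heaviside step -- a neuron emits a binary spike
    whenever its membrane potential u reaches the threshold."""
    return (u >= threshold).astype(float)

def surrogate_grad(u, threshold=1.0, beta=10.0):
    """Backward pass: the step function's zero/undefined derivative is
    replaced by the derivative of a fast sigmoid, which is smooth and
    peaks at the threshold (an illustrative surrogate choice)."""
    return 1.0 / (beta * np.abs(u - threshold) + 1.0) ** 2

u = np.array([0.2, 0.9, 1.0, 1.7])   # membrane potentials
spikes = spike(u)                     # -> [0., 0., 1., 1.]
grads = surrogate_grad(u)             # largest where u == threshold
```

In a full training loop the surrogate derivative stands in for the step's derivative during backpropagation only; the forward dynamics stay fully spiking.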
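The neural tangent kernel that Neural Tangent Transfer builds on can be computed empirically as the Gram matrix of per-example parameter gradients, K(x, x') = J(x) J(x')ᵀ. The sketch below only illustrates this underlying object on a tiny network; the architecture, sizes, and finite-difference Jacobian are assumptions for illustration, not the paper's sparsification procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

def forward(theta, X):
    """One-hidden-layer tanh network with scalar output
    (illustrative sizes: 3 inputs, 4 hidden units)."""
    W1 = theta[:12].reshape(3, 4)
    W2 = theta[12:].reshape(4, 1)
    return (np.tanh(X @ W1) @ W2).ravel()

def empirical_ntk(theta, X, eps=1e-5):
    """Empirical NTK: Gram matrix of per-example parameter
    gradients, with the Jacobian approximated by central differences."""
    n, p = X.shape[0], theta.size
    J = np.zeros((n, p))
    for i in range(p):
        d = np.zeros(p)
        d[i] = eps
        J[:, i] = (forward(theta + d, X) - forward(theta - d, X)) / (2 * eps)
    return J @ J.T

theta = rng.normal(size=16)   # 12 + 4 parameters, flattened
X = rng.normal(size=(5, 3))   # five example inputs
K = empirical_ntk(theta, X)   # 5x5, symmetric positive semidefinite
```

Being a Gram matrix, K is symmetric and positive semidefinite by construction, which is what makes it usable as a training-dynamics signature to match between dense and sparse networks.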