By training I am a physicist and worked in nuclear and medium-energy particle physics in the groups of Professor Hinde in Canberra, Australia, and of Professors Beck and Thoma at the Crystal Barrel experiment in Bonn, Germany. I obtained my diploma in physics in March 2009. I then turned my back on experimental physics and started my PhD in theoretical neuroscience in the group of Wulfram Gerstner in Lausanne, Switzerland. My thesis research focused on associative memories and how they can form in an unsupervised way in recurrent neural networks. Since then, I have worked as a postdoc in the labs of Surya Ganguli at Stanford and Tim Vogels at the University of Oxford.

I am still interested in learning and memory in (spiking) neural networks in the broader sense, and more specifically in supervised and unsupervised learning in biologically plausible networks. To that end I often study the interplay of homeostatic, inhibitory, and excitatory plasticity, with the ultimate aim of understanding how plasticity, network dynamics, and homeostasis interact to form non-random networks that serve a particular purpose. I use tools from deep learning, dynamical systems, and control theory. To verify analytical results I often run long-timescale simulations, for which I push the boundaries of GPU and parallel computing using close-to-hardware programming techniques. These efforts are also reflected in the open-source simulation software Auryn, which I actively develop.
tl;dr? There is a short third-person version here.