I'm a principal investigator at the Department of Computer Science of ETH Zurich, hosted by Angelika Steger's group. My research interests lie at the intersection of neuroscience and machine learning. My goal is to develop better neuroscience-inspired machine learning algorithms, and in turn to use the insights gained from designing them to understand learning in the brain.

My research is supported by an SNSF Ambizione Fellowship and an ETH Zurich Research Grant.

Previously, from 2015 to 2018, I was a postdoc with Walter Senn at the University of Bern. Together with Rui P. Costa and Yoshua Bengio, we developed a model of error backpropagation in the cortex.

I received my PhD in computer science from IST (University of Lisbon, 2014) where I studied neural network models of memory with Andreas Wichert. Still at IST, in 2015, I was awarded a short research fellowship to work with Francisco C. Santos. During this period I studied energy-efficient synaptic plasticity rules with Mark van Rossum.

## Current students and collaborators

**Simon Schug** — PhD student

**Nicolas Zucchet** — PhD student

**Johannes von Oswald** — PhD student (co-supervised with Angelika Steger)

**Alexander Meulemans** — Collaborator at the Computer Science Department of ETH Zürich

**Seijin Kobayashi** — Collaborator at the Computer Science Department of ETH Zürich

**Angelika Steger** — Collaborator at the Computer Science Department of ETH Zürich

**Maciej Wołczyk** — Visiting student

**Anja Šurina** — Master's student

## Alumni

**Dominic Zhao** — Bachelor's and exchange student, now at Common Sense Machines

**Alexandra Proca** — Research assistant, now PhD student at Imperial College London

## News

**COSYNE 2023**: We'll be presenting two posters at the main meeting and giving two talks at the *Top-down interactions in the neocortex: Structure, function, plasticity and models* workshop.

**Nicolas's talk for the Brain & AI group at Meta AI**: Nicolas gave a talk presenting our least-control principle for learning to Jean-Rémi King's Brain & AI group at Meta AI on November 16.

**Mathematics, Physics & Machine Learning seminar talk**: I gave an IST Mathematics, Physics & Machine Learning seminar talk on November 10.

**Panel discussion on lifelong learning**: I will participate in a panel discussion on Lifelong Learning Machines at NeurIPS 2022.

**Visit to Mila**: Simon, Alexander, Nicolas and I will be visiting Blake Richards's lab at Mila.

**Doctoral symposium at EPIA**: Together with Fernando P. Santos and Henrique Lopes Cardoso, I'm organizing a doctoral symposium at EPIA, the Portuguese conference on artificial intelligence, which will be held in Lisbon.

**DeepMind talk by Johannes**: Johannes gave a talk at DeepMind, London presenting our models and algorithms for continual learning and meta-learning.

**MLSS^N 2022 lecture** (video on YouTube): I gave a lecture with Alexander, Simon and Nicolas at the MLSS^N 2022 summer school in Kraków, Poland, where we discussed bilevel optimization problems involving neural networks. We covered how to solve them with recurrent backpropagation and equilibrium propagation, as well as some of our own work on learning and meta-learning without error backpropagation.

**Oxford seminar talk**: I gave an Oxford NeuroTheory Forum seminar presenting our work on biologically-plausible meta-learning.

**NAISys 2022 poster**: Alexander presented ongoing work on our new principle for learning at the NAISys 2022 conference in Cold Spring Harbor, NY.

**Swiss Computational Neuroscience Retreat**: Nicolas presented our work on biologically-plausible meta-learning at the Swiss Computational Neuroscience Retreat in Crans Montana.

## Recent papers

Johannes von Oswald, Eyvind Niklasson, Ettore Randazzo, João Sacramento, Alexander Mordvintsev, Andrey Zhmoginov, Max Vladymyrov (2022). Transformers learn in-context by gradient descent. Preprint: arXiv:2212.07677. [ preprint ]

Alexander Meulemans*, Nicolas Zucchet*, Seijin Kobayashi*, Johannes von Oswald, João Sacramento (2022). The least-control principle for local learning at equilibrium. NeurIPS 2022 (Oral). [ paper ] * — equal contributions

Nicolas Zucchet*, Simon Schug*, Johannes von Oswald*, Dominic Zhao, João Sacramento (2021). A contrastive rule for meta-learning. NeurIPS 2022. [ paper ] * — equal contributions

Nicolas Zucchet, João Sacramento (2022). Beyond backpropagation: bilevel optimization through implicit differentiation and equilibrium propagation. Neural Computation. [ link to journal | paper pdf ]

Alexander Meulemans*, Matilde T. Farinha*, Maria R. Cervera*, João Sacramento, Benjamin F. Grewe (2022). Minimizing control for credit assignment with strong feedback. ICML 2022 (Spotlight). [ paper ] * — equal contributions

Johannes von Oswald*, Dominic Zhao*, Seijin Kobayashi, Simon Schug, Massimo Caccia, Nicolas Zucchet, João Sacramento (2021). Learning where to learn: Gradient sparsity in meta and continual learning. NeurIPS 2021. [ paper ] * — equal contributions

Alexander Meulemans*, Matilde T. Farinha*, Javier G. Ordóñez, Pau V. Aceituno, João Sacramento, Benjamin F. Grewe (2021). Credit assignment in neural networks through deep feedback control. NeurIPS 2021 (Spotlight). [ paper ] * — equal contributions

Christian Henning*, Maria R. Cervera*, Francesco D'Angelo, Johannes von Oswald, Regina Traber, Benjamin Ehret, Seijin Kobayashi, Benjamin F. Grewe, João Sacramento (2021). Posterior meta-replay for continual learning. NeurIPS 2021. [ paper ] * — equal contributions

Jakob Jordan, João Sacramento, Willem A. M. Wybo, Mihai A. Petrovici*, Walter Senn* (2021). Learning Bayes-optimal dendritic opinion pooling. Preprint: arXiv:2104.13238. [ preprint ] * — equal contributions

Johannes von Oswald*, Seijin Kobayashi*, Alexander Meulemans, Christian Henning, Benjamin F. Grewe, João Sacramento (2020). Neural networks with late-phase weights. ICLR 2021. [ paper | code ] * — equal contributions

Dominic Zhao, Seijin Kobayashi, João Sacramento*, Johannes von Oswald* (2020). Meta-learning via hypernetworks. NeurIPS Workshop on Meta-Learning 2020. [ paper ] * — equal contributions

Alexander Meulemans, Francesco S. Carzaniga, Johan A. K. Suykens, João Sacramento, Benjamin F. Grewe (2020). A theoretical framework for target propagation. NeurIPS 2020 (Spotlight). [ paper | code ]

Johannes von Oswald*, Christian Henning*, Benjamin F. Grewe, João Sacramento (2019). Continual learning with hypernetworks. ICLR 2020 (Spotlight). [ paper | talk video | code ] * — equal contributions

Blake Richards*, Timothy P. Lillicrap*, ..., João Sacramento, ..., Denis Therien*, Konrad P. Körding* (2019). A deep learning framework for neuroscience. Nature Neuroscience. [ link to journal ] * — equal contributions

Milton Llera, João Sacramento, Rui P. Costa (2019). Computational roles of plastic probabilistic synapses. Current Opinion in Neurobiology. [ link to journal ]

João Sacramento, Rui P. Costa, Yoshua Bengio, Walter Senn (2018). Dendritic cortical microcircuits approximate the backpropagation algorithm. NeurIPS 2018 (Oral). [ paper | talk video ]

If my articles are behind a paywall you can't get through, please send me an e-mail.

A complete list of publications is here.

## Teaching

From 2019 to 2021, I was a guest lecturer for the Learning in Deep Artificial and Biological Neuronal Networks course offered at ETH Zürich.

Before that I served as a teaching assistant at the Department of Computer Science and Engineering of IST, where I lectured practical classes on computer programming and basic algorithms.

Object-oriented Programming (Fall 2013)

Foundations of Programming (Spring 2011)

Object-oriented Programming (Fall 2010)

Algorithms and Data Structures (Spring 2010)

Object-oriented Programming (Fall 2009)

Algorithms and Data Structures (Spring 2009)

Information Technology Systems Design (Spring 2009)

Object-oriented Programming (Fall 2008)

Data Centres (Fall 2008)