Deep meta-learning

  • Date: May 10, 2021
  • Time: 03:00 PM - 04:00 PM (German local time)
  • Speaker: Matthew Botvinick
  • Director of Neuroscience Research, Team Lead in AI Research, DeepMind & Honorary Professor, Gatsby Computational Neuroscience Unit, University College London
  • Location: Zoom

As applications of deep learning in AI have grown in scale and sophistication, there is an emerging aspiration to move beyond systems with narrow expertise toward a more general form of AI, which can adapt flexibly in a dynamic, open-ended, multi-task environment. As one means toward this end, there has been growing interest in meta-learning, that is, in systems that 'learn how to learn.' A number of approaches now exist for building deep learning systems with meta-learning abilities, which together have opened up a new horizon in deep learning research. Adding to their interest, these techniques resonate in several ways with issues in cognitive science and neuroscience. At the same time, recent work suggests that current techniques for deep meta-learning may be limited in their ability to uncover shared structure across tasks, indicating that further innovation may be required. I will present recent work from our group at DeepMind looking at all of these issues, highlighting both the promise and potential limitations of deep meta-learning.
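To make the idea of "learning how to learn" concrete, the sketch below shows one common meta-learning recipe (a first-order, MAML-style inner/outer loop) on a toy family of regression tasks. This is an illustrative assumption on my part, not the specific method discussed in the talk: a meta-parameter is trained so that a single gradient step of within-task adaptation works well across tasks drawn from a shared distribution.

```python
import numpy as np

rng = np.random.default_rng(0)

def task_batch(slope, n=10):
    # Sample data from one toy task: y = slope * x plus small noise.
    x = rng.uniform(-1, 1, n)
    y = slope * x + 0.01 * rng.normal(size=n)
    return x, y

def grad(w, x, y):
    # Gradient of mean squared error for the linear model y_hat = w * x.
    return 2.0 * np.mean((w * x - y) * x)

w_meta = 0.0                      # meta-learned initialisation
inner_lr, outer_lr = 0.1, 0.05    # within-task and meta learning rates

for step in range(2000):
    slope = rng.uniform(-2, 2)          # sample a new task from the family
    xs, ys = task_batch(slope)          # support set: adapt to the task
    w_task = w_meta - inner_lr * grad(w_meta, xs, ys)
    xq, yq = task_batch(slope)          # query set: evaluate the adaptation
    # First-order meta-update: shift the initialisation so that one
    # inner gradient step lands closer to each task's solution.
    w_meta -= outer_lr * grad(w_task, xq, yq)
```

The outer loop never optimises for any single task; it optimises for fast adaptation across the task distribution, which is the sense in which the system "learns how to learn." The limitation the abstract alludes to shows up here too: how well this works depends on how much structure the tasks actually share.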
