Igor Babuschkin’s Lectures
Lecture 1: An Introduction to Deep Reinforcement Learning
Reinforcement Learning (RL) is a subfield of machine learning concerned with training agents to make decisions in an environment such that a cumulative reward signal is maximized. It provides a highly general and elegant toolbox for building intelligent systems that learn by interacting with their environment rather than from supervision. In the past few years, RL in combination with deep neural networks has shown impressive results on a variety of challenging domains, from games to robotics. It is also seen by some as a possible path towards general human-level intelligent systems. I will explain some of the basic algorithms the field is built on (Q-learning, Policy Gradients), as well as a few extensions to these algorithms that are used in practice (PPO, IMPALA, and others), sketched briefly below.
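As a small taste of the material, here is a minimal sketch of the tabular Q-learning update in Python (an illustrative example of my own, not taken from the lecture, which covers the algorithms and their deep-network variants in full):

```python
import numpy as np

def q_learning_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.99):
    """One tabular Q-learning step:
    Q(s, a) <- Q(s, a) + alpha * (r + gamma * max_a' Q(s', a') - Q(s, a))
    """
    td_target = r + gamma * np.max(Q[s_next])  # bootstrapped estimate of the return
    Q[s, a] += alpha * (td_target - Q[s, a])   # move Q(s, a) toward the target
    return Q

# Toy usage: a problem with 4 states and 2 actions.
Q = np.zeros((4, 2))
Q = q_learning_update(Q, s=0, a=1, r=1.0, s_next=2)
```

Deep Q-learning replaces the table with a neural network trained toward the same temporal-difference target.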
Lecture 2: Milestones in Large-scale Reinforcement Learning: AlphaZero, OpenAI Five and AlphaStar
Over the past few years, we have seen a number of successes in Deep Reinforcement Learning: among other results, RL agents have matched or exceeded the strength of the best human players at the games of Go, Dota 2 and StarCraft II. These results were achieved by AlphaZero, OpenAI Five and AlphaStar, respectively. I will go into the details of how these three systems work, highlighting similarities and differences. What lessons can we draw from these results, and what is still missing to apply Deep RL to challenging real-world problems?
Lecture 3 (Tutorial): JAX, a New Library for Building Neural Networks
JAX is a new framework for deep learning developed at Google AI. Written by the authors of the popular autograd library, it is built around the concept of function transformations: higher-order functions like ‘grad’, ‘vmap’, ‘jit’, ‘pmap’ and others are powerful tools that allow researchers to express ML algorithms succinctly and correctly, while making full use of hardware resources like GPUs and TPUs. Most importantly, solving problems with JAX is fun! I will give a short introduction to JAX, covering the most important function transformations and demonstrating how to apply JAX to several ML problems.
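To make the transformations concrete, here is a minimal sketch (a toy example of my own, not taken from the tutorial) that applies ‘grad’, ‘jit’ and ‘vmap’ to a small mean-squared-error loss; ‘pmap’ works analogously but parallelizes across multiple devices, so it is omitted here:

```python
import jax
import jax.numpy as jnp

def loss(w, x, y):
    # Mean squared error of a linear model: y_hat = x @ w.
    pred = x @ w
    return jnp.mean((pred - y) ** 2)

# grad: returns a new function computing d(loss)/d(w).
grad_loss = jax.grad(loss)

# jit: compiles the gradient computation with XLA for CPU/GPU/TPU.
fast_grad = jax.jit(grad_loss)

# vmap: vectorizes a per-example function over a leading batch axis.
per_example_loss = jax.vmap(loss, in_axes=(None, 0, 0))

key = jax.random.PRNGKey(0)
w = jnp.zeros(3)
x = jax.random.normal(key, (8, 3))
y = jnp.ones(8)

print(fast_grad(w, x, y))          # gradient with respect to w
print(per_example_loss(w, x, y))   # one loss value per example
```

Because each transformation returns an ordinary Python function, they compose freely; for instance, ‘jax.jit(jax.vmap(jax.grad(loss), in_axes=(None, 0, 0)))’ gives compiled per-example gradients.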