As a large and ever-increasing part of our economic and social interactions moves to cyberspace, data-driven algorithmic decision making by autonomous agents is fast becoming an integral and inseparable part of our lives. These agents compete in uncertain and volatile environments and must in turn learn aspects of those environments, and of each other, in order to dynamically optimize their performance. What is more, even the humans in the loop are obliged to depend more and more on data-driven signals for their own decision making, e.g., on automated rankings and recommendations. Given the inherently distributed, strategic, and dynamic nature of this ethos, learning in dynamic games, with its broad spectrum of modeling and analysis tools, is a prime candidate for providing the theoretical underpinnings of this endeavor, striking a balance between unifying the mathematical substructure and retaining the distinct flavors and diversity of the competing paradigms. On the modeling front, this ranges from dynamic cooperative games to mean field and evolutionary games; for learning paradigms, from reinforcement learning to learning by imitation.

This nascent role of dynamic games has already registered its presence in many different ways and is increasingly doing so. The time is thus ripe for taking stock of where we are and where we should be headed. This is the motivation behind this special issue. The subarea is still too young to be put into a straitjacket of well-defined boundaries. As a result, we have here a collection of fourteen articles that represent the many strands in this area, some of them straddling more than one. These include reinforcement learning, network games, evolutionary games, distributed resource allocation, prospect-theoretic considerations, information structures, and more.

Specifically, the contributions are as follows. There is an excellent survey by Sylvain Sorin on various approaches to continuous-time models of learning in games [10]. Several articles are dedicated to reinforcement learning, a very active area in machine learning and control that is now making inroads into dynamic games. They deal with multi-agent versions of classical reinforcement learning [13], robustness and approximation issues in stochastic games [11], mean field games [1, 14], and learning coarse correlated equilibria in stochastic games [6]. The articles [2, 3] address robustness issues in the context of network games and resource allocation problems. We also have contributions on learning aspects of evolutionary games [4], prospect-theoretic learning [8], decentralized bandits [7], learning for coordination [9], information structures [12], and opinion dynamics [5].

This field is of interest to multiple communities, including dynamic games, control theory, and machine learning. The editors hope that this special issue makes a small contribution toward building bridges between them and spurring the synergistic interactions that will drive further advances in this field that we all love.

We would like to thank all the authors who submitted a paper to this issue. Special thanks are also due to all the reviewers for their critiques and suggestions, without which this issue would not have been possible.