# zbMATH — the first resource for mathematics

Probabilistic theory of mean field games with applications I. Mean field FBSDEs, control, and games. (English) Zbl 1422.91014
Probability Theory and Stochastic Modelling 83. Cham: Springer (ISBN 978-3-319-56437-1/hbk; 978-3-319-58920-6/ebook; 978-3-319-59820-8/set). xxv, 713 p. (2018).
This is the first part of a two-volume textbook offering a quite self-contained treatment of mean field games (MFG for short in the sequel) and related topics. The theory of MFG was initiated independently by J.-M. Lasry and P.-L. Lions and by M. Huang, R. Malhamé and P. Caines in 2006, with the aim of describing limits of Nash equilibria of (stochastic) nonatomic differential games with symmetric interactions as the number of agents tends to infinity.
Over the past decade the theory has received a great deal of attention from researchers working in many fields of pure and applied mathematics, including partial differential equations, optimal control theory and probability. The present text provides the state of the art of the probabilistic side of MFG.
In a nutshell, a model MFG can be described as follows: the game takes place in the $$d$$-dimensional Euclidean space $$\mathbb{R}^d$$ over a given time horizon $$T>0$$. The initial agent configuration is given by $$\rho_0\in\mathcal{P}_1(\mathbb{R}^d)$$, a Borel probability measure with finite first moment. A representative agent located at $$x\in\mathbb{R}^d$$ at time $$t\in(0,T)$$ predicts the evolution of the agents’ density, $$(\rho_t)_{t\in[0,T]}$$ (a curve in $$\mathcal{P}_1(\mathbb{R}^d)$$), and solves the optimization problem $\inf\mathbb{E}\left\{\int_{t}^T \left[L(X_s,\alpha_s)+f(X_s,\rho_s)\right]{\mathrm{d}} s + g(X_T,\rho_T)\right\},$ subject to ${\mathrm{d}}X_s=\alpha_s{\mathrm{d}} s + \sqrt{2}{\mathrm{d}} B_s,\ s\in(t,T); \ X_t=x,$ where $$L:\mathbb{R}^d\times\mathbb{R}^d\to\mathbb{R}$$ is a given Lagrangian, $$f,g:\mathbb{R}^d\times\mathcal{P}_1(\mathbb{R}^d)\to\mathbb{R}$$ stand for the given running and final costs, respectively, and $$(B_s)_{s\in(0,T)}$$ is a standard Brownian motion in $$\mathbb{R}^d$$.
We are interested in an ‘equilibrium’ situation, in which the prediction coincides with the true evolution of the agents’ density, i.e., the optimal trajectory $$(X_s)_{s\in(0,T)}$$ satisfies $${\mathrm{Law}}(X_s)=\rho_s$$. In this scenario, $$(\rho_t)_{t\in[0,T]}$$ describes a Nash equilibrium of the game.
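In the PDE formulation of Lasry and Lions, this equilibrium condition can be written schematically as a coupled forward-backward system; the version below is a sketch consistent with the model above (with Hamiltonian $$H(x,p)=\sup_{\alpha}\left(-\alpha\cdot p - L(x,\alpha)\right)$$), while the book's own development is probabilistic rather than PDE-based:

```latex
% Backward Hamilton-Jacobi-Bellman equation for the value function u,
% coupled with the forward Fokker-Planck equation for the density rho:
\begin{cases}
-\partial_t u(t,x) - \Delta u(t,x) + H\big(x,\nabla u(t,x)\big) = f(x,\rho_t),
  & u(T,\cdot) = g(\cdot,\rho_T),\\[4pt]
\partial_t \rho_t - \Delta \rho_t
  - \nabla\cdot\Big(\rho_t \, D_p H\big(x,\nabla u(t,x)\big)\Big) = 0,
  & \rho\big|_{t=0} = \rho_0,
\end{cases}
```

with the optimal feedback control given by $$\alpha^*(t,x) = -D_p H(x,\nabla u(t,x))$$; the coupling is through the measure argument of $$f$$ and $$g$$ in the backward equation and through the drift in the forward equation.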
The main questions in the theory are the existence and uniqueness of such equilibria, their further characterization, the incorporation of additional sources of randomness into the models (such as common noise), and the convergence of Nash equilibria of games with finitely many agents as the number of agents tends to infinity.
Chapter 1 is an introduction to MFG, in which the authors provide many examples in increasing order of complexity. The next chapter gives a crash course on stochastic differential games with a finite number of agents. Particular emphasis is placed there on so-called ‘linear-quadratic’ problems, which can be solved via matrix Riccati equations.
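To illustrate the linear-quadratic case, here is a minimal scalar sketch (not code from the book; the dynamics $$\mathrm{d}X = (aX + b\alpha)\,\mathrm{d}t + \sigma\,\mathrm{d}B$$ with quadratic costs, and all coefficient values below, are illustrative assumptions):

```python
# Scalar linear-quadratic control problem (illustrative sketch): minimize
#   E[ int_0^T (q X_s^2 + r alpha_s^2) ds + qT X_T^2 ]
# subject to dX = (a X + b alpha) dt + sigma dB.
# The value function has the form V(t, x) = P(t) x^2 + c(t), where P solves
# the scalar Riccati ODE
#   -P'(t) = 2 a P(t) - (b^2 / r) P(t)^2 + q,   P(T) = qT.

def solve_riccati(a, b, q, r, qT, T, n_steps=10_000):
    """Integrate the scalar Riccati ODE backward in time by explicit Euler."""
    dt = T / n_steps
    P = [0.0] * (n_steps + 1)
    P[n_steps] = qT  # terminal condition at t = T
    for k in range(n_steps, 0, -1):
        drift = 2 * a * P[k] - (b * b / r) * P[k] ** 2 + q
        P[k - 1] = P[k] + drift * dt  # step from t_k back to t_{k-1}
    return P

P = solve_riccati(a=0.5, b=1.0, q=1.0, r=1.0, qT=0.0, T=1.0)
# Optimal feedback control: alpha*(t, x) = -(b / r) P(t) x.
```

In the matrix-valued case the same backward integration applies, with the scalar products replaced by matrix products and the square by a quadratic form.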
In Chapter 3, the authors begin to construct the building blocks of the probabilistic approach to MFG. In particular, they motivate the study of backward and forward-backward stochastic differential equations. The important class of equations of so-called McKean-Vlasov type is also studied in this chapter.
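The McKean-Vlasov interaction (a drift depending on the law of the state) can be illustrated by a particle approximation. The following is a hedged sketch, unrelated to the book's material: the dynamics $$\mathrm{d}X_t = -(X_t - \mathbb{E}[X_t])\,\mathrm{d}t$$ are chosen purely for illustration, with the noise switched off so that the contraction of the particles toward their mean is exact.

```python
import random

# Euler scheme for the (noiseless) McKean-Vlasov dynamics
#   dX_t = -(X_t - E[X_t]) dt,
# approximated by N interacting particles: the expectation E[X_t] is
# replaced by the empirical mean of the particle system, in the spirit
# of propagation of chaos.

def simulate(n_particles=1000, T=1.0, n_steps=1000, seed=0):
    rng = random.Random(seed)
    x = [rng.gauss(0.0, 1.0) for _ in range(n_particles)]
    dt = T / n_steps
    for _ in range(n_steps):
        m = sum(x) / n_particles            # empirical mean ~ E[X_t]
        x = [xi - (xi - m) * dt for xi in x]
    return x

particles = simulate()
# The empirical mean is preserved, while the spread around it contracts
# roughly by a factor e^{-T}.
```

The update preserves the empirical mean exactly, since the interaction terms $$x_i - m$$ sum to zero; each centered coordinate is multiplied by $$(1-\mathrm{d}t)$$ per step, reproducing the exponential decay of the variance.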
Chapter 4 presents the stochastic analysis used later in the text. In particular, to solve forward-backward stochastic differential equations, the authors present standard fixed-point techniques tailored to the setting of MFG. The chapter ends with a presentation of so-called ‘extended mean field games’, in which the agents interact not only through the distribution of their states but also through the distribution of their controls.
Chapter 5 is devoted to the study of functions defined on the space of probability measures equipped with the so-called Wasserstein metric (also referred to as the Wasserstein space). Notions of convexity and differentiability of such functions are presented in detail, including their connection with the differentiability notion introduced by Lions and the one studied by Ambrosio, Gigli and Savaré in the context of optimal transportation theory. These notions serve as very important tools when deriving and studying the master equation in later chapters.
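A simple instance of the Lions derivative (a standard computation, stated here schematically, not quoted from the book) is the case of a linear functional of the measure:

```latex
% For U(\mu) = \int \varphi \, d\mu with smooth \varphi, lifting U to
% \tilde{U}(X) = \mathbb{E}[\varphi(X)] on an L^2 space of random variables
% and computing the Frechet derivative of \tilde{U} yields
U(\mu) = \int_{\mathbb{R}^d} \varphi(x)\, \mu(\mathrm{d}x)
\quad\Longrightarrow\quad
\partial_\mu U(\mu)(x) = \nabla \varphi(x).
```

In this example the L-derivative agrees with the gradient of the first variation $$\delta U/\delta m = \varphi$$, consistent with the Wasserstein-gradient viewpoint of Ambrosio, Gigli and Savaré.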
Since there are strong similarities between stochastic control problems of McKean-Vlasov type (also called ‘mean field type control problems’) and MFG, Chapter 6 is dedicated to explaining these similarities and the major differences between the two problems. The first type of problem can be tackled by means of the Pontryagin stochastic maximum principle. The authors emphasize that the backward stochastic differential equation arising in this setting includes a term in which the Hamiltonian is differentiated with respect to the measure argument; this term can be handled using the tools from the previous chapter.
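Schematically, the adjoint equation in this maximum principle takes the following form (a sketch: $$(\tilde{X},\tilde{Y},\tilde{\alpha})$$ denotes an independent copy of the optimal triple and $$\tilde{\mathbb{E}}$$ the expectation over that copy):

```latex
\mathrm{d}Y_t = -\Big( \partial_x H\big(X_t, \mathrm{Law}(X_t), Y_t, \alpha_t\big)
 + \tilde{\mathbb{E}}\Big[ \partial_\mu H\big(\tilde{X}_t, \mathrm{Law}(X_t),
   \tilde{Y}_t, \tilde{\alpha}_t\big)(X_t) \Big] \Big)\,\mathrm{d}t
 + Z_t\,\mathrm{d}W_t,
```

with terminal condition $$Y_T = \partial_x g(X_T, \mathrm{Law}(X_T)) + \tilde{\mathbb{E}}[\partial_\mu g(\tilde{X}_T, \mathrm{Law}(X_T))(X_T)]$$; the extra expectation term, absent in the classical maximum principle, is precisely where the Lions derivative $$\partial_\mu$$ of Chapter 5 enters.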
Chapter 7 is the capstone of this first volume: the authors revisit some of the examples from Chapter 1 using the knowledge accumulated in Chapters 2-6.
For the second volume of this series we refer to [R. Carmona and F. Delarue, Probabilistic theory of mean field games with applications II. Mean field games with common noise and master equations. Cham: Springer (2018; Zbl 1422.91015)].

##### MSC:
91-02 Research exposition (monographs, survey articles) pertaining to game theory, economics, and finance
91A16 Mean field games (aspects of game theory)
91A15 Stochastic games, stochastic differential games
91A23 Differential games (aspects of game theory)
91A55 Games of timing
91A80 Applications of game theory
60H30 Applications of stochastic analysis (to PDEs, etc.)
93E20 Optimal stochastic control
##### Keywords:
mean field games; probabilistic approach