## Controlled Markov processes and viscosity solutions. (English) Zbl 0773.60070

Applications of Mathematics. 25. New York: Springer-Verlag. xv, 428 p. (1993).
The aim of this book is to give a reasonably self-contained introduction to optimal stochastic control for continuous-time Markov processes, from the point of view of dynamic programming, and to the theory of viscosity solutions. Particular attention is given to controlled Markov diffusion processes, whose dynamic programming equation is a nonlinear second-order partial differential equation, the Hamilton-Jacobi-Bellman (HJB) equation. Typically, the value function is not smooth enough to satisfy the HJB equation in the classical sense, but under quite general assumptions it is the unique viscosity solution of that equation. The theory of viscosity solutions thus provides a powerful method for studying control problems. A special feature of the book is its treatment of control problems within the viscosity solution framework, the first detailed presentation in book form; the authors themselves played a key role in the development of this field.
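To fix ideas, the HJB equation mentioned here can be sketched as follows (a standard formulation in illustrative notation, not taken verbatim from the book): for a controlled diffusion $dX_t = b(X_t,u_t)\,dt + \sigma(X_t,u_t)\,dW_t$ with control set $U$, discount rate $\beta > 0$, and running cost $\ell$, the value function $V$ formally satisfies

```latex
% Infinite-horizon discounted HJB equation (illustrative sketch):
% minimize  E \int_0^\infty e^{-\beta t} \ell(X_t, u_t)\,dt  over controls u_t \in U
\[
  \beta V(x)
  = \inf_{u \in U} \Big[\, b(x,u) \cdot DV(x)
    + \tfrac{1}{2}\operatorname{tr}\!\big( \sigma(x,u)\sigma(x,u)^{\top} D^{2}V(x) \big)
    + \ell(x,u) \,\Big].
\]
```

The nonlinearity enters through the infimum over controls, and the equation degenerates wherever $\sigma\sigma^{\top}$ is singular, which is one reason the value function need not be a classical solution.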
The book consists of nine chapters and an appendix. The first part (Chapters 1-2) presents deterministic optimal control and the corresponding theory of viscosity solutions. The main topics are (1) the dynamic programming approach to deterministic control problems and its connection with Pontryagin's principle, (2) the calculus of variations, and (3) viscosity solutions of nonlinear first-order partial differential equations.
The second part (Chapters 3-5) deals with the stochastic case. Chapter 3 gives an introduction to abstract dynamic programming for controlled Markov processes; here the dynamic programming equation is formulated in terms of the infinitesimal generator of the associated semigroup. Chapter 4 is concerned with the control of diffusion processes governed by stochastic differential equations, and establishes regularity properties of value functions by probabilistic arguments. Chapter 5 is devoted to the study of viscosity solutions of nonlinear second-order differential equations; a comparison result is proved in detail, and via the dynamic programming principle the value function is shown to be the unique viscosity solution of the HJB equation. Readers interested in viscosity solutions but not in control theory may find this chapter, together with Chapter 2, of independent interest.
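For orientation, the notion of viscosity solution invoked above can be sketched as follows (a standard formulation; the book's exact sign conventions may differ). For a degenerate elliptic equation $F(x, v, Dv, D^{2}v) = 0$, derivatives of $v$ are transferred to smooth test functions:

```latex
% Viscosity solutions via smooth test functions (standard convention,
% with F nonincreasing in its Hessian argument).
% v is a viscosity subsolution if, for every \varphi \in C^2,
% whenever v - \varphi has a local maximum at x_0:
\[
  F\big(x_0,\, v(x_0),\, D\varphi(x_0),\, D^{2}\varphi(x_0)\big) \;\le\; 0,
\]
% and a viscosity supersolution if, at every local minimum x_0 of v - \varphi:
\[
  F\big(x_0,\, v(x_0),\, D\varphi(x_0),\, D^{2}\varphi(x_0)\big) \;\ge\; 0.
\]
% A viscosity solution is both. Since no derivatives of v itself are used,
% the definition makes sense for merely continuous functions.
```

A comparison theorem (every subsolution lies below every supersolution, under suitable hypotheses) then yields the uniqueness asserted for the value function.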
The third part (Chapters 6-9) consists of four topics. Chapter 6 (logarithmic transformations) and Chapter 7 (singular perturbations) treat asymptotic problems by stochastic control methods. Chapter 8 presents singular stochastic control, and Chapter 9 is concerned with numerical methods for the approximate solution of HJB equations. Each chapter contains historical remarks.
The appendix consists of five subjects: (A) duality relationships; (B) Dynkin’s formula for random evolutions with Markov chain parameters; (C) extension of Lipschitz continuous functions, smoothing; (D) stochastic differential equations: random coefficients; (E) a result of Alexandrov.
This book should be of interest to mathematicians and engineers working on control problems and on the theory of viscosity solutions.

### MSC:

- 60J25 Continuous-time Markov processes on general state spaces
- 60-02 Research exposition (monographs, survey articles) pertaining to probability theory
- 93E20 Optimal stochastic control