Hernández-Lerma, Onésimo; Lasserre, Jean Bernard
Further topics on discrete-time Markov control processes. (English) Zbl 0928.93002
Applications of Mathematics 42. New York, NY: Springer. xii, 276 p. (1999).

This advanced book is the second volume, organized as Chapters 7-12, of the authors' monograph on the theory and applications of discrete-time Markov control processes (MCPs), or Markov control models (MCMs). For Volume 1, organized as Chapters 1-6, see O. Hernández-Lerma and J. B. Lasserre, Discrete-time Markov control processes. Basic optimality criteria (1995; Zbl 0840.93001). The topics of this second volume draw largely on very recent developments in the field of MCPs, and a great deal of the material comes from research of the authors, as well as of others, since the publication of the first volume.

Chapter 7 deals with noncontrolled Markov chains and presents background material for the book, e.g., the concepts of weighted-norm spaces, \(w\)-geometric ergodicity with respect to a weight function \(w\), Poisson's equation, etc. Chapter 8 studies infinite-horizon discounted cost MCPs, using dynamic programming techniques under a suitable \(w\)-geometric convergence. In Chapter 9 the authors consider the expected total cost (ETC) criterion and give conditions for the existence of ETC-optimal control policies; in particular, an important class of MCMs, namely transient MCMs, and the related optimality problems are studied quite thoroughly. Chapter 10 deals with various undiscounted cost criteria and the corresponding optimal policies: starting from the average cost (AC) criterion and passing through canonical and bias-optimal policies, problems under various undiscounted criteria are studied in depth, with particular attention to the relationships among these criteria and among the corresponding optimal policies. Chapters 11 and 12 continue the study of AC problems, but from quite different viewpoints: Chapter 11 considers sample-path optimality and minimization via probabilistic methods, while Chapter 12 treats AC problems via the linear programming approach.

The control models considered here virtually cover all the usual discrete-time stochastic control models that appear in important applications in engineering, economics, population processes, and management science. The present volume is almost self-contained and can be read independently. Moreover, the assumptions on the MCMs (e.g., cost functions may take both positive and negative values) and the control-constraint sets considered here differ somewhat from those in Volume 1; thus the models are more practical and the results obtained are usually more delicate. Thanks to the authors' careful organization, lucid mathematical exposition, rich reference lists, and standardized notation and abbreviations, readers will find the whole text comfortable to follow and highly illuminating. Good examples, notes, and remarks reflecting the authors' deep insight into MCPs are provided in most chapters. The book can be warmly recommended as an excellent monograph in the field.
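For orientation, the central objects mentioned above may be recalled in the standard notation of the MCP literature; the book's own notation and precise assumptions may differ in detail. With weight function \(w \ge 1\), transition kernel \(P\) with invariant measure \(\mu\), one-stage cost \(c\), discount factor \(\alpha \in (0,1)\), and policy \(\pi\):
\[
\|u\|_w := \sup_{x\in X}\frac{|u(x)|}{w(x)}, \qquad
\|P^n(x,\cdot)-\mu\|_w \le R\,\rho^n\,w(x)\quad (0<\rho<1)\ \ \text{($w$-geometric ergodicity)},
\]
\[
g + h(x) = c(x) + \int_X h(y)\,P(x,dy)\quad\text{(Poisson's equation, with gain $g$ and bias $h$)},
\]
\[
V_\alpha(\pi,x) := E_x^{\pi}\Bigl[\sum_{t=0}^{\infty}\alpha^{t}c(x_t,a_t)\Bigr], \qquad
J(\pi,x) := \limsup_{n\to\infty}\frac{1}{n}\,E_x^{\pi}\Bigl[\sum_{t=0}^{n-1}c(x_t,a_t)\Bigr],
\]
the last two being the discounted cost and the long-run expected average cost, respectively.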
Reviewer: Wu Chengxun (Shanghai)
Cited in 259 Documents
MSC:
93-02 Research exposition (monographs, survey articles) pertaining to systems and control theory
93E20 Optimal stochastic control
60J05 Discrete-time Markov processes on general state spaces
90C39 Dynamic programming
90C40 Markov and semi-Markov decision processes
49L20 Dynamic programming in optimal control and differential games
Keywords: dynamic programming; expected total cost criterion; discrete-time Markov control processes; Markov control models; weighted-norm spaces; undiscounted cost criteria; optimal policies; sample path optimality; linear programming
Citations: Zbl 0840.93001
Cite: \textit{O. Hernández-Lerma} and \textit{J. B. Lasserre}, Further topics on discrete-time Markov control processes. New York, NY: Springer (1999; Zbl 0928.93002)