## **Dynamic programming: deterministic and stochastic models.**
*(English)*
Zbl 0649.93001

Englewood Cliffs, N.J. 07632: Prentice-Hall, Inc. VIII, 376 p.; $ 64.85 (1987).

This book provides a comprehensive and unified treatment of dynamic programming for discrete stochastic systems, suitable for an audience ranging from mathematicians and engineers to social scientists. A single basic problem is introduced at the outset, and all subsequent developments in the text are carried out in terms of it. Note that many stochastic control problems, Markovian decision problems, and classes of combinatorial problems popular in computer science are solved by applying the dynamic programming technique. Moreover, the text provides a large variety of examples, many of them important in their own right. Recent research material such as queueing decision problems, multiarmed bandit problems, stochastic scheduling, heuristic search, self-tuning regulators, and adaptive aggregation is also included.
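The finite-horizon recursion underlying the basic problem can be sketched as follows. This is a minimal illustration, not taken from the book; the inventory-style stage cost, transition function, and all numerical values below are hypothetical. The recursion is the standard one: J_N(x) = g_N(x) and J_k(x) = min over u of E_w[g(x,u,w) + J_{k+1}(f(x,u,w))].

```python
def finite_horizon_dp(states, controls, disturbances, p, f, g, g_terminal, N):
    """Backward dynamic programming over horizon N.

    Returns the cost-to-go tables J[k][x] and an optimal policy[k][x].
    """
    # Terminal condition: J_N(x) = g_N(x)
    J = {N: {x: g_terminal(x) for x in states}}
    policy = {}
    for k in range(N - 1, -1, -1):
        J[k], policy[k] = {}, {}
        for x in states:
            best_u, best_cost = None, float("inf")
            for u in controls(x):
                # Expected stage cost plus expected cost-to-go of next state
                cost = sum(p[w] * (g(x, u, w) + J[k + 1][f(x, u, w)])
                           for w in disturbances)
                if cost < best_cost:
                    best_u, best_cost = u, cost
            J[k][x] = best_cost
            policy[k][x] = best_u
    return J, policy


# Hypothetical toy inventory problem: stock level x in {0, 1, 2},
# order u (capacity 2), random demand w in {0, 1}, equally likely.
states = [0, 1, 2]
def controls(x): return range(0, 3 - x)            # order up to capacity
disturbances = [0, 1]
p = {0: 0.5, 1: 0.5}
def f(x, u, w): return max(0, x + u - w)           # next stock level
def g(x, u, w): return u + 3 * max(0, w - (x + u)) # ordering + shortage cost
def g_terminal(x): return 0

J, policy = finite_horizon_dp(states, controls, disturbances, p, f, g,
                              g_terminal, N=3)
```

All considerations in the text are carried out over variations of such a model (perfect or imperfect state information, finite or infinite horizon).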

The book is organized in the following seven chapters: (1) The dynamic programming algorithm; (2) Applications in specific areas; (3) Problems with imperfect state information; (4) Suboptimal and adaptive control; (5) Infinite horizon problems: theory; (6) Infinite horizon problems: applications; (7) Minimization of average cost per stage.

The mathematical prerequisites for reading the text are summarized in four appendices: (A) Review on basic mathematical notions; (B) On optimization theory; (C) On probability theory; (D) On finite state Markov chains.

In fact the reader needs a good knowledge of introductory probability and undergraduate mathematics. Nevertheless, the book can be read with success even if one does not go too deeply into the details. Those who have no desire to bypass a mathematically rigorous treatment of the problems can consult the author’s monograph written jointly with St. Shreve [Stochastic optimal control. The discrete time case (1978; Zbl 0471.93002)].


Reviewer: Sv.Gaidov

### MSC:

| MSC | Description |
| --- | --- |
| 93-01 | Introductory exposition (textbooks, tutorial papers, etc.) pertaining to systems and control theory |
| 90-01 | Introductory exposition (textbooks, tutorial papers, etc.) pertaining to operations research and mathematical programming |
| 93E20 | Optimal stochastic control |
| 49L20 | Dynamic programming in optimal control and differential games |
| 90B22 | Queues and service in operations research |
| 90B35 | Deterministic scheduling theory in operations research |
| 90B40 | Search theory |
| 90B50 | Management decision making, including multiple objectives |
| 90C40 | Markov and semi-Markov decision processes |
| 90C39 | Dynamic programming |
| 93C40 | Adaptive control/observation systems |
| 93C55 | Discrete-time control/observation systems |
| 93E03 | Stochastic systems in control theory (general) |