## Mathematical control theory. Deterministic finite dimensional systems. 2nd ed.

*(English)*
Zbl 0945.93001

Texts in Applied Mathematics. 6. New York, NY: Springer. xvi, 531 p. (1998).

This second edition (for a review of the first edition (1990), see Zbl 0703.93001) contains the following new items:

1) A chapter on nonlinear controllability on \(R^p\) (the adopted language is not suitable for manifolds). The controllable set to \(x_0\) and the reachable set from \(x_0\) have nonempty interior if a Lie algebra arising from the vector fields defining the control system can be identified with \(R^p\) at \(x_0\). If, in addition, a suitably defined reversibility condition holds, then \(x_0\) belongs to the interior of the intersection of these two sets. Moreover, a criterion for complete controllability is obtained in this context. After several pages devoted to introducing and proving the Frobenius theorem, a converse to the first implication above is provided when the conditions hold on possibly distinct open dense subsets of the state space. In the reviewer’s opinion, it is unfortunate that the concept of strong accessibility and the related criteria are not treated: it is an important notion in the gap between accessibility and controllability and provides deeper insight into what is going on. This information can be found in the book of H. Nijmeijer and A. van der Schaft [Nonlinear dynamical control systems (1990; Zbl 0701.93001)].
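
The Lie-algebraic rank condition described in this chapter can be checked symbolically. A minimal sketch, assuming a standard illustrative example (the unicycle vector fields and the `sympy` computation below are the reviewer's illustration, not taken from the book):

```python
import sympy as sp

# Driftless unicycle on R^3: qdot = g1(q) u1 + g2(q) u2.
x, y, th = sp.symbols('x y theta')
q = sp.Matrix([x, y, th])
g1 = sp.Matrix([sp.cos(th), sp.sin(th), 0])  # drive forward
g2 = sp.Matrix([0, 0, 1])                    # turn in place

def lie_bracket(f, g, q):
    """[f, g] = (Dg) f - (Df) g."""
    return g.jacobian(q) * f - f.jacobian(q) * g

# If g1, g2 and their bracket span R^3 at a state, the Lie algebra
# there can be identified with R^3 (accessibility rank condition).
M = sp.Matrix.hstack(g1, g2, lie_bracket(g1, g2, q))
print(sp.simplify(M.det()))  # 1, so the rank is 3 at every state
```

Since this example is driftless, the reversibility condition holds trivially and accessibility already yields controllability.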

2) A chapter is devoted to optimal control with constrained and unconstrained controls. The author obtains the extremum principle of Pontryagin and, in a separate section, the Euler-Lagrange equation of the calculus of variations. Another section deals with gradient-based numerical methods.
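
The gradient-based methods mentioned above can be illustrated on a toy problem. The sketch below is the reviewer's own example (system \(\dot x=u\), cost \(x(T)^2+\int_0^T u^2\,dt\), forward Euler discretization), in which the adjoint variable supplies the gradient of the cost:

```python
import numpy as np

# Toy optimal control problem: minimize J = x(T)^2 + int_0^T u^2 dt
# subject to xdot = u, x(0) = 1, discretized with n Euler steps.
T, n = 1.0, 100
dt = T / n
u = np.zeros(n)

for _ in range(500):
    x_T = 1.0 + dt * np.sum(u)    # forward pass gives x(T)
    p = 2.0 * x_T                 # adjoint: pdot = -p f_x = 0, p(T) = 2 x(T)
    grad = dt * (2.0 * u + p)     # dJ/du_k via the adjoint
    u -= grad                     # fixed-step gradient descent

print(u[0], 1.0 + dt * np.sum(u))  # approaches u = -1/2, x(T) = 1/2
```

The analytic optimum here is the constant control \(u=-1/2\), which the iteration recovers; real solvers would add a line search and a proper ODE integrator.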

3) The minimum-time problem for linear systems is presented in a new chapter, under the assumption that the control set is convex and compact. The author shows that the reachable set at any time from any point is compact and convex, using suitably defined weakly convergent sequences of control policies. Existence of an optimal control follows from the fact that this reachable set is closed. Its convexity is used to characterize time-optimal final states on its boundary. This boundary condition is equivalent to an extremum principle, and if an additional controllability-type condition holds, the converse is true, i.e., time-optimality follows from the boundary condition or the extremality principle.
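
For a concrete instance of the extremum principle in this minimum-time setting, consider the double integrator with \(|u|\le 1\) (the reviewer's illustrative example, not one worked in the book): the time-optimal control is bang-bang with one switch.

```python
import numpy as np

# Minimum-time steering of the double integrator x1' = x2, x2' = u,
# |u| <= 1, from (d, 0) to the origin.  The extremum principle gives a
# bang-bang control with a single switch, u = -1 then u = +1, and the
# optimal time is T = 2*sqrt(d).
def steer(d, n=20000):
    T = 2.0 * np.sqrt(d)
    dt = T / n
    x1, x2 = d, 0.0
    for k in range(n):
        u = -1.0 if k < n // 2 else 1.0    # switch at t = T/2
        x1 += x2 * dt + 0.5 * u * dt * dt  # exact flow for constant u
        x2 += u * dt
    return T, x1, x2

T, x1, x2 = steer(1.0)
print(T, x1, x2)  # T = 2.0, final state numerically at the origin
```

No shorter time works: the reachable set at time \(t<T\) is compact and convex and does not yet contain the origin, which is exactly the boundary characterization described above.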

4) A new section presents single-input feedback linearization, and nonlinear optimal stabilization is treated on p. 390 via the Hamilton-Jacobi equation, whose solution yields a value function for an optimization problem. This value function is a Lyapunov function. However, this way of handling the problem depends on plugging the right feedback into the Hamilton-Jacobi equation so that it leads to a solution. Nonlinear stabilization is also introduced on p. 239, where control Lyapunov functions and backstepping are presented.
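
The control Lyapunov function material lends itself to a short sketch. The feedback below is Sontag's universal formula (a genuine construction from this literature), applied to a scalar system of the reviewer's choosing with \(V=x^2/2\):

```python
import numpy as np

# Sontag's universal CLF feedback for xdot = f(x) + g(x) u:
# with a = V'(x) f(x) and b = V'(x) g(x), set
#   u = -(a + sqrt(a^2 + b^4)) / b   if b != 0, else u = 0,
# which yields Vdot = a + b*u = -sqrt(a^2 + b^4) <= 0.

f = lambda x: x ** 3      # unstable drift (illustrative scalar example)
g = lambda x: 1.0
Vx = lambda x: x          # V = x^2 / 2, so V' = x

def sontag(x):
    a, b = Vx(x) * f(x), Vx(x) * g(x)
    if b == 0.0:
        return 0.0
    return -(a + np.sqrt(a * a + b ** 4)) / b

for x in [-2.0, -0.5, 0.3, 1.5]:
    vdot = Vx(x) * (f(x) + g(x) * sontag(x))
    assert vdot < 0.0     # V strictly decreases away from the origin
```

Unlike the Hamilton-Jacobi route criticized above, this construction needs only a known CLF, not a solution of a partial differential equation.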

5) Two sections deal with controllability of recurrent neural nets and controllability of linear systems with bounded controls. In the first case, the system is modelled by \[ \dot x= \theta_n (Ax+Bu) \] where \(x\in R^n\) and \(\theta_n\) is a diagonal mapping, each of whose components acts as a saturation function modelled by the hyperbolic tangent. The main result is that local strong controllability around any state is equivalent to the condition that no row of \(B\) vanishes and no two rows coincide or are opposite. The result is obtained from a lemma interesting in its own right: for the system \(\dot x= f(x,u)\), if there is an \(x_0\) such that the origin belongs to the interior of the convex hull of the set of vectors \(f(x_0,u)\), for \(u\) in its admissible set, then there is a neighborhood of \(x_0\) which is controllable to \(x_0\) and reachable from \(x_0\). The proof uses the analytic and asymptotic properties of the hyperbolic tangent function and is not trivial. In the case of bounded controls, it is shown that any point of the state space can be reached if and only if the system is controllable with an unstable drift.
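
The row condition on \(B\) is straightforward to test numerically; a minimal sketch (the function name and tolerance are the reviewer's choices):

```python
import numpy as np

def admissible(B, tol=1e-9):
    """Row condition from the recurrent-net controllability result for
    xdot = tanh(Ax + Bu): every row of B is nonzero, and no two rows
    coincide or are opposite."""
    n = B.shape[0]
    for i in range(n):
        if np.linalg.norm(B[i]) < tol:
            return False                      # a row of B vanishes
        for j in range(i + 1, n):
            if (np.linalg.norm(B[i] - B[j]) < tol
                    or np.linalg.norm(B[i] + B[j]) < tol):
                return False                  # equal or opposite rows
    return True

print(admissible(np.array([[1.0, 0.0], [0.0, 1.0]])))   # True
print(admissible(np.array([[1.0, 0.0], [-1.0, 0.0]])))  # False
```

The condition rules out exactly the cases where two saturated state components receive degenerate input directions, which is where the convex-hull lemma would fail.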

6) The chapter on dynamic programming and linear-quadratic (\(LQ\)) problems has been revised.
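
As a hedged illustration of the \(LQ\) material (the double-integrator example and the use of SciPy's Riccati solver are the reviewer's, not the book's):

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Infinite-horizon LQ problem for the double integrator:
# minimize  int (x'Qx + u'Ru) dt  subject to  xdot = Ax + Bu.
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])

P = solve_continuous_are(A, B, Q, R)  # algebraic Riccati equation
K = np.linalg.solve(R, B.T @ P)       # optimal feedback u = -Kx
cl = np.linalg.eigvals(A - B @ K)
print(K)                              # [[1.0, sqrt(3)]] for this data
print(np.real(cl))                    # negative: closed loop is stable
```

For this data the Riccati equation can also be solved by hand, giving \(K=[1,\sqrt 3]\), which the numerical solution reproduces.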

7) “Errors and typos have been corrected” (but some remain), and a list of symbols has been added.

This new edition comes closer to the impossible task of presenting a complete and rigorous panorama of control theory in a single book. With respect to these two qualities, it is one of the best sources available.

Reviewer: A.Akutowicz (Berlin)

### MSC:

| MSC | Description |
| --- | --- |
| 93-01 | Introductory exposition (textbooks, tutorial papers, etc.) pertaining to systems and control theory |
| 93C15 | Control/observation systems governed by ordinary differential equations |
| 49N05 | Linear optimal control problems |
| 93D15 | Stabilization of systems by feedback |
| 92B20 | Neural networks for/in biological studies, artificial life and related topics |
| 93B05 | Controllability |
| 49L20 | Dynamic programming in optimal control and differential games |
| 49K15 | Optimality conditions for problems involving ordinary differential equations |