
Control of systems with aftereffect. Transl. from the Russian by Victor Kotov. (English) Zbl 0937.93001

Translations of Mathematical Monographs. 157. Providence, RI: American Mathematical Society (AMS). xi, 336 p. (1996).
This volume on the control of delayed systems arises from an earlier Russian version by F. A. Andreeva and the authors [Control of hereditary systems (1992; Zbl 0840.93003)].
Chapter one describes some types of differential equations with aftereffect and then gives conditions guaranteeing well-posedness of these equations. The case of a stochastic component in the equations is also considered. A representation of solutions to linear equations, due to the first author, is presented. A section illustrates the importance of delays in several application examples, and several types of control and observation problems are stated.
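For orientation, in the simplest case of a single constant lag the representation mentioned above typically takes the following form (the notation here is illustrative, the book's statement being more general): for \(\dot x(t) = A_0(t)x(t) + A_1(t)x(t-h) + f(t)\), \(t\ge t_0\), with initial function \(x(s)=\varphi(s)\) on \([t_0-h,t_0]\), \[ x(t) = X(t,t_0)\varphi(t_0) + \int_{t_0-h}^{t_0} X(t,s+h)A_1(s+h)\varphi(s)\,ds + \int_{t_0}^{t} X(t,s)f(s)\,ds, \] where the fundamental matrix \(X(t,s)\) satisfies \(\partial X(t,s)/\partial t = A_0(t)X(t,s) + A_1(t)X(t-h,s)\) for \(t\ge s\), with \(X(s,s)=I\) and \(X(t,s)=0\) for \(t<s\).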
In chapter two, the authors focus on the solution of optimal control problems via the dynamic programming method. The linear-quadratic problem is examined. The process, with suitable initial conditions, is described by \[ \dot x = A_1(t)x(t - h) + \int^0_{-h} G(t,\tau)x(t +\tau)\,d\tau + B(t)u\tag{1} \] where \(x\in \mathbb{R}^n\), \(u\in \mathbb{R}^m\) and the given matrices have the required dimensions. The Lyapunov functional which solves the Bellman equation generalizes the usual quadratic form depending on the solution of a Riccati equation. Some sufficient optimality conditions (involving the additional terms) are derived. Exact solutions are obtained when \(G=0\) and when the integrand of the cost does not involve state-dependent terms. In general, an iterative solution is obtained. Similar conditions of optimality are obtained for systems with lags in the controls, neutral systems and bilinear systems (there is no distributed delay here, except in an additional section which states some results). Some stabilization results for a mechanical system via proportional-derivative controllers, controllers with aftereffect in the state, proportional-integral controllers and integral controllers are provided. The Lyapunov functional depends on the kinetic energy of the system and on several control-dependent additional terms. The delays are introduced here by the controllers.
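To fix ideas (the weighting matrices \(W, N\) and the precise functional below are illustrative rather than the book's notation), for a cost \(\int (x^T W x + u^T N u)\,dt\) the Bellman functional associated with (1) is typically sought in the form \[ V(t,x_t) = x^T(t)P(t)x(t) + 2x^T(t)\int_{-h}^{0} Q(t,\theta)x(t+\theta)\,d\theta + \int_{-h}^{0}\!\int_{-h}^{0} x^T(t+\theta)R(t,\theta,\sigma)x(t+\sigma)\,d\theta\,d\sigma, \] where \(P\) satisfies a Riccati-type equation coupled with first-order partial differential equations for the kernels \(Q\) and \(R\), and the resulting optimal feedback is \(u = -N^{-1}B^T(t)\bigl(P(t)x(t) + \int_{-h}^{0} Q(t,\theta)x(t+\theta)\,d\theta\bigr)\).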
Chapter three presents methods arising from the extremum principle for delayed deterministic systems. The types of processes and costs considered are: one lag in the state (and a possibly linear evolution equation) with a cost also depending on the state with the same lag; time-optimal control of a scalar system with a single lagged state; a variable lag in the state, with the cost involving the same lagged state; a fixed lag in the control, with the cost involving the same lagged control; a process described by an integro-differential equation with distributed delays; and neutral systems with a corresponding term in the cost. The optimality criteria are not only necessary conditions in terms of a Hamiltonian formulation; they also become sufficient conditions through an extremal of a Krotov-type function. Some iterative numerical methods based on discretization are presented.
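To illustrate the Hamiltonian formulation mentioned above, in the simplest case of one constant lag in the state (notation and sign conventions are illustrative and may differ from the book's): for \(\dot x(t) = f(t,x(t),x(t-h),u(t))\) on \([t_0,T]\) with cost \(\int_{t_0}^{T} F(t,x(t),x(t-h),u(t))\,dt\) and Hamiltonian \(H(t,x,y,u,\psi) = \psi^T f(t,x,y,u) - F(t,x,y,u)\), the adjoint variable satisfies \[ \dot\psi(t) = -\frac{\partial H}{\partial x}(t) - \chi_{[t_0,T-h]}(t)\,\frac{\partial H}{\partial y}(t+h), \qquad \psi(T)=0, \] where \(y\) stands for the delayed state argument, and an optimal control maximizes \(H\) pointwise in \(u\).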
Chapter four can be seen as an introduction to model reference adaptive control (MRAC). The stability of self-adjusting systems is investigated using suitable Lyapunov functions. A section incorporates identification into this framework, and a theorem of convergence under persistent excitation for systems in canonical companion form is proved. A single section is devoted to the stabilization of self-adjusting systems with an unknown lag, and the chapter ends with some considerations on the transient behaviour in the scalar case: the parameter dynamics can be aperiodic or nonoscillatory when it vanishes only a few times, and conditions under which these properties hold or fail are given.
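A typical Lyapunov-function construction in this area (not necessarily the one adopted in the book; the symbols below are illustrative) takes, for a tracking error \(e = x - x_m\) with error dynamics \(\dot e = A_m e + b\,\tilde\theta^T\varphi(x)\) and parameter error \(\tilde\theta\), the function \[ V(e,\tilde\theta) = e^T P e + \tilde\theta^T\Gamma^{-1}\tilde\theta, \qquad A_m^T P + P A_m = -Q, \] together with the adaptation law \(\dot{\tilde\theta} = -\Gamma\,\varphi(x)\,b^T P e\), which gives \(\dot V = -e^T Q e \le 0\) and hence stability of the adjustment process; persistent excitation of \(\varphi\) is what then forces the parameter error itself to converge.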
Chapter five is devoted to stochastic optimal control using, as in chapter two, the tool of the Bellman equation. A first case consists of adding a Wiener process and a linear drift to the right-hand side of (1), and the solution procedure is similar. Next, for nonlinear systems without lag but whose dynamics and cost depend on a small parameter \(\varepsilon\), approximate solutions to the Bellman equation are provided, using expansions in \(\varepsilon\) of both the solution and the terms of the equation. Several special cases are carried out in detail, where \(\varepsilon\) appears either in the drift of a quasilinear system or in the noise multiplying the control of driftless systems (Wiener process, Poisson process). A last section is devoted to the numerical treatment of time-optimal control problems through a discretization scheme. Except in its first section, this chapter deals with systems that do not contain delays.
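Schematically, writing the Bellman equation of the perturbed problem as \(\mathcal B_0[V] + \varepsilon\,\mathcal B_1[V] + \dots = 0\) and seeking the value function as \(V = V_0 + \varepsilon V_1 + \varepsilon^2 V_2 + \dots\), equating powers of \(\varepsilon\) yields the Bellman equation of the unperturbed problem for \(V_0\) and a sequence of linear equations for the corrections \(V_1, V_2, \dots\), from which approximately optimal feedbacks are assembled; the concrete expansions treated in the book depend on whether \(\varepsilon\) enters the drift or the noise.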
Chapter six considers the optimal control of systems described by stochastic integro-functional equations. A necessary condition of optimality is that the limit of \(\frac 1\varepsilon(J(u_\varepsilon) - J(u_0))\), as \(\varepsilon\) tends to zero, is nonnegative; here \(u_\varepsilon\) is a perturbation of the optimal control \(u_0\) minimizing the cost \(J\). Under suitable assumptions on the system, this limit is computed both when the noise contaminates the control and when it does not (the way the control is perturbed differs in the two cases). The linear-quadratic problem is examined and solved, several applied examples are presented, and an algorithm for constructing successive approximations to the quadratic optimal control of a stochastic quasilinear integral equation is given.
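For a weak perturbation \(u_\varepsilon = u_0 + \varepsilon(v - u_0)\) with \(v\) admissible and \(J\) suitably differentiable, this condition reduces to the familiar variational inequality \[ \lim_{\varepsilon\to 0}\frac{J(u_\varepsilon)-J(u_0)}{\varepsilon} = \langle J'(u_0),\, v - u_0\rangle \ge 0 \quad\text{for all admissible } v; \] the technical content of the chapter lies in computing this limit for the stochastic integro-functional dynamics at hand.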
Chapter seven treats optimal estimation of a Gaussian process from observations with a lag. The unique solution is expressed through an integral equation. In some cases it is seen that the estimation error does not decrease with the lag, but in other cases an optimal lag may be found. An analog to the Kalman-Bucy filter is derived for a process and an observation described by a system of integral equations. Several examples are studied at the end.
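In outline, the best mean-square estimate \(\hat x(t)\) of the Gaussian process from the lagged observations \(\{y(s),\ s\le t-h\}\) is linear in the observations and is characterized by the orthogonality condition \(E\bigl[(x(t)-\hat x(t))\,y^T(\sigma)\bigr]=0\) for \(\sigma\le t-h\); written out through the covariance functions, this is the integral equation referred to above, and the Kalman-Bucy-type filter corresponds to the case where process and observation are themselves given by a system of integral equations.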
The last chapter focuses on partially observable processes. The optimal control of processes described by stochastic Volterra equations is treated using two methods (the separation principle and integral representations). Systems with unknown parameters and adaptive control are also investigated.
A voluminous bibliography (357 items) ends the work, but an index is lacking. It should be pointed out that only finite-horizon optimal control problems are considered, and issues of controllability (resp. stabilizability) are not (resp. rarely) addressed. Also, the problem of existence of optimal solutions is not treated.
This is a good book which contains a lot of information concerning systems with lags, with emphasis on optimal control and stochastic aspects. This translation is very welcome.

MSC:

93-02 Research exposition (monographs, survey articles) pertaining to systems and control theory
49-02 Research exposition (monographs, survey articles) pertaining to calculus of variations and optimal control
34K35 Control problems for functional-differential equations
49K25 Optimal control problems with equations with retarded arguments (necessary conditions) (MSC2000)
93E20 Optimal stochastic control
93C23 Control/observation systems governed by functional-differential equations
49L20 Dynamic programming in optimal control and differential games
49N10 Linear-quadratic optimal control problems

Citations:

Zbl 0840.93003