
On deterministic control problems: An approximation procedure for the optimal cost. 1. The stationary problem. (English) Zbl 0563.49024
The authors consider deterministic optimal control problems: more precisely, in this first part they study an infinite horizon problem that additionally involves optimal stopping and impulse controls. They present a powerful numerical method based upon the characterization of the value function as the maximum subsolution of the associated Bellman equation. For the discretized problems, the approximate solutions are built by a convenient iteration scheme. Finally, an estimate of the rate of convergence of the approximate solutions is given.
Reviewer: P. L. Lions
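
To illustrate the kind of discretized Bellman equation and iteration scheme the review refers to, the following is a minimal sketch, not the authors' maximum-subsolution procedure: a generic fixed-point (value) iteration for a discounted infinite horizon problem with optimal stopping on a one-dimensional grid. The names f, ell, psi, lam, dt and the toy data are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch (assumed setup, not the authors' scheme): fixed-point iteration
# for a discretized discounted Bellman equation with optimal stopping,
#   V(x) = min( psi(x), min_a [ ell(x,a)*dt + (1 - lam*dt) * V(x + f(x,a)*dt) ] ),
# on a 1-D grid, using linear interpolation for off-grid points.
import numpy as np

def solve_bellman(grid, controls, f, ell, psi, lam=1.0, dt=0.05,
                  tol=1e-8, max_iter=10_000):
    V = psi(grid).copy()                      # start from the stopping cost (obstacle)
    for _ in range(max_iter):
        best = np.full_like(V, np.inf)
        for a in controls:                    # minimize over the (finite) control set
            x_next = np.clip(grid + f(grid, a) * dt, grid[0], grid[-1])
            cont = ell(grid, a) * dt + (1.0 - lam * dt) * np.interp(x_next, grid, V)
            best = np.minimum(best, cont)
        V_new = np.minimum(psi(grid), best)   # stop vs. continue
        if np.max(np.abs(V_new - V)) < tol:   # fixed point reached (up to tol)
            return V_new
        V = V_new
    return V

# Toy example (hypothetical data): dynamics dx/dt = a, quadratic running cost,
# constant stopping cost.
grid = np.linspace(-1.0, 1.0, 201)
controls = np.linspace(-1.0, 1.0, 11)
V = solve_bellman(grid, controls,
                  f=lambda x, a: a,
                  ell=lambda x, a: x**2 + 0.1 * a**2,
                  psi=lambda x: 0.5 + 0.0 * x)
```

Since 1 - lam*dt < 1, the map above is a contraction, so the iteration converges; the paper's contribution concerns a different construction (maximum subsolutions of the Bellman equation) together with an explicit rate of convergence for the discretization.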

MSC:
49M20 Numerical methods of relaxation type
49L20 Dynamic programming in optimal control and differential games
49K15 Optimality conditions for problems involving ordinary differential equations
49K30 Optimality conditions for solutions belonging to restricted classes (Lipschitz controls, bang-bang controls, etc.)
60G40 Stopping times; optimal stopping problems; gambling theory
65K10 Numerical optimization and variational techniques
93C99 Model systems in control theory
93C15 Control/observation systems governed by ordinary differential equations
90C39 Dynamic programming
49M15 Newton-type methods