zbMATH — the first resource for mathematics

An optimal control problem with a random stopping time. (English) Zbl 0681.93070
This paper deals with a stochastic optimal control problem in which the randomness is essentially concentrated in the stopping time terminating the process. When the stopping time is characterized by an intensity depending on the state and control variables, the problem can be reformulated equivalently as a deterministic infinite-horizon optimal control problem. Applying dynamic programming and minimum-principle techniques to this associated deterministic problem yields specific optimality conditions for the original stochastic control problem. It is also possible to characterize extremal steady states. The model is illustrated by an example from the economics of technological innovation.
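The reformulation described above can be sketched as follows (the notation here is illustrative, not the authors' own). If the stopping time $\tau$ has intensity (hazard rate) $q(x(t),u(t))$, then conditioning on survival up to time $t$ turns the expected cost over the random horizon into a deterministic discounted integral over an infinite horizon:

```latex
\mathbb{E}\!\left[\int_0^{\tau} L(x(t),u(t))\,dt + \Phi(x(\tau))\right]
= \int_0^{\infty} \exp\!\left(-\int_0^{t} q(x(s),u(s))\,ds\right)
\Bigl[L(x(t),u(t)) + q(x(t),u(t))\,\Phi(x(t))\Bigr] dt ,
```

where $L$ is the running cost and $\Phi$ the terminal cost collected at the stopping time; the survival probability $\exp(-\int_0^t q\,ds)$ acts as a state- and control-dependent discount factor, which is what makes infinite-horizon deterministic techniques applicable.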
Reviewer: E. Boukas

93E20 Optimal stochastic control
49L20 Dynamic programming in optimal control and differential games
Full Text: DOI
[1] Boukas, E. K., and Haurie, A., Optimality Conditions for Continuous-Time Systems with Controlled Jump Markov Disturbances: Application to an FMS Planning Problem, Analysis and Optimization of Systems, Edited by A. Bensoussan and J. L. Lions, Springer-Verlag, Berlin, Germany, pp. 633-676, 1988.
[2] Davis, M. H. A., Control of Piecewise-Deterministic Processes via Discrete-Time Dynamic Programming, Proceedings of the 3rd Bad Honnef Symposium on Stochastic Differential Systems, Springer-Verlag, Berlin, Germany, 1985.
[3] Vermes, D., Optimal Control of Piecewise Deterministic Markov Process, Stochastics, Vol. 14, pp. 165-208, 1985. · Zbl 0566.93074
[4] Halkin, H., Necessary Conditions for Optimal Control Problems with Infinite Horizons, Econometrica, Vol. 42, pp. 267-272, 1974. · Zbl 0301.90009 · doi:10.2307/1911976
[5] Michel, P., On the Transversality Condition in Infinite-Horizon Optimal Control Problems, Econometrica, Vol. 50, pp. 975-985, 1982. · Zbl 0483.90026 · doi:10.2307/1912772
[6] Baum, R. F., Existence Theorems for Lagrange Control Problems with Unbounded Time Domain, Journal of Optimization Theory and Applications, Vol. 19, pp. 89-116, 1976. · Zbl 0305.49002 · doi:10.1007/BF00934054
[7] Toman, M. A., Optimal Control with an Unbounded Horizon, Journal of Economic Dynamics and Control, Vol. 9, pp. 291-316, 1985. · doi:10.1016/0165-1889(85)90009-0
[8] Rishel, R., Control of Systems with Jump Markov Disturbances, IEEE Transactions on Automatic Control, Vol. AC-20, pp. 241-244, 1975. · Zbl 0305.93059 · doi:10.1109/TAC.1975.1100943
[9] Boltyanskii, V. G., Sufficient Conditions for Optimality and the Justification of the Dynamic Programming Method, SIAM Journal on Control, Vol. 4, pp. 326-361, 1966. · Zbl 0143.32004 · doi:10.1137/0304027
[10] Mirica, S., On the Admissible Synthesis in Optimal Control Theory and Differential Games, SIAM Journal on Control, Vol. 7, pp. 292-316, 1969. · Zbl 0182.48502 · doi:10.1137/0307020
[11] Rishel, R., Dynamic Programming and Minimum Principles for Systems with Jump Markov Disturbances, SIAM Journal on Control, Vol. 13, pp. 338-371, 1975. · Zbl 0304.93025 · doi:10.1137/0313020