
Optimal control of piecewise deterministic Markov processes. (English) Zbl 0756.90092
Applied stochastic analysis, Pap. Workshop, London/UK 1989, Stochastic Monogr. 5, 303-325 (1990).
[For the entire collection see Zbl 0728.00017.]
A complete control theory has been outlined for piecewise deterministic Markov processes evolving within a bounded domain, with bounded local characteristics that are Lipschitz continuous in the state, uniformly in the bounded control. Optimal policies, which specify open-loop deterministic control from any interior point of the domain together with feedback control on the boundary, are characterized in terms of a Dirichlet problem for the associated generalized Bellman-Hamilton-Jacobi equation, in which the classical gradient is replaced by a minimization over the generalized gradient of the Lipschitz continuous solution. The value function is shown to be the unique solution of this Dirichlet problem under an extrinsic regularity assumption. A nonsmooth local Pontryagin-Hamilton maximum principle is also given, in which the value function appears explicitly in the Hamiltonian.
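For orientation, a schematic form of the dynamic programming equation underlying such results may be sketched as follows; the notation is the reviewer's and is not taken from the paper. Assume a process on a bounded domain $E$ with flow vector field $f(x,u)$, jump rate $\lambda(x,u)$, jump measure $Q(dy\mid x,u)$, running cost $\ell(x,u)$ and boundary cost $c(x,u)$, with controls $u$ in a bounded set $U$. In the smooth case the Bellman-Hamilton-Jacobi equation for the value function $V$ reads
\[
\min_{u\in U}\Big[\nabla V(x)\cdot f(x,u)+\ell(x,u)
+\lambda(x,u)\int_{E}\big(V(y)-V(x)\big)\,Q(dy\mid x,u)\Big]=0,
\qquad x\in E^{\circ},
\]
with the Dirichlet-type boundary condition
\[
V(x)=\min_{u\in U}\Big[c(x,u)+\int_{E}V(y)\,Q(dy\mid x,u)\Big],
\qquad x\in\partial E.
\]
In the generalized version treated in the paper, the classical gradient $\nabla V(x)$ is replaced by a minimization over the generalized (Clarke) gradient of the Lipschitz continuous solution $V$.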

MSC:
90C40 Markov and semi-Markov decision processes
93E20 Optimal stochastic control
Citations:
Zbl 0728.00017