Generalized Bellman-Hamilton-Jacobi equations for piecewise deterministic Markov processes.

*(English)* Zbl 0812.90140
Henry, Jacques (ed.) et al., System modelling and optimization. Proceedings of the 16th IFIP-TC7 conference, Compiègne, France, July 5-9, 1993. London: Springer-Verlag. Lect. Notes Control Inf. Sci. 197, 541-550 (1994).

Summary: Piecewise deterministic Markov processes (PDPs) are continuous-time homogeneous Markov processes whose trajectories are solutions of ordinary differential equations with random jumps between the different integral curves. Both the continuous deterministic motion and the random jumps of the process are controlled in order to minimize the expected value of a performance criterion involving discounted running and boundary costs. In this paper, under fairly general assumptions, a necessary and sufficient optimality condition for the control of piecewise deterministic Markov processes is given in terms of generalized Hamilton-Jacobi-Bellman (HJB) equations involving lower Dini derivatives. A strengthened version of this necessary and sufficient condition yields optimal piecewise feedback controls and a procedure for approximating optimal controls. The relationship between various notions of generalized solutions of the HJB equation is also discussed.
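As an illustrative sketch (the notation below is not taken from the paper): with control set \(A\), controlled vector field \(f(x,a)\) generating flows \(\phi^a_t(x)\), jump intensity \(\lambda(x,a)\), post-jump kernel \(Q(\cdot \mid x,a)\), discount rate \(\rho > 0\), and running cost \(\ell(x,a)\), a generalized HJB equation of the kind described, with the directional derivative replaced by a lower Dini derivative along the controlled flow, can be written as

```latex
% Lower Dini derivative of the value function V along the controlled flow
% (illustrative notation, not the paper's):
%   D^{-}V(x;a) = \liminf_{t \downarrow 0} \frac{V(\phi^a_t(x)) - V(x)}{t}
\[
\inf_{a \in A} \Bigl\{ D^{-}V(x;a) - \rho\, V(x) + \ell(x,a)
  + \lambda(x,a) \int \bigl( V(y) - V(x) \bigr)\, Q(dy \mid x, a) \Bigr\} = 0,
\]
```

where the Dini derivative accommodates value functions that are merely lower semicontinuous rather than differentiable along trajectories.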

For the entire collection see [Zbl 0799.00038].

##### MSC:

| Code | Description |
|-------|------------------------------------------|
| 90C40 | Markov and semi-Markov decision processes |