A generalized dynamic programming principle and Hamilton-Jacobi-Bellman equation. (English) Zbl 0756.49015

The solution of a fully nonlinear second-order partial differential equation is interpreted as the value function of a certain optimally controlled diffusion problem. The problem is formulated as follows: the state equation of the control problem is a classical (forward) stochastic differential equation, while the cost functional is given by the adapted solution of an associated backward stochastic differential equation. Bellman's dynamic programming principle is established in this generalized setting, and the value function is shown to be a viscosity solution of the possibly degenerate fully nonlinear equation.
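The framework described above can be sketched as follows; this is a standard forward-backward formulation consistent with the review (the specific coefficient names $b$, $\sigma$, $f$, $\Phi$ and the sign conventions are illustrative assumptions, not taken from the paper under review):
```latex
% Forward state equation (classical controlled diffusion), control process v:
%   dX_s = b(s, X_s, v_s)\,ds + \sigma(s, X_s, v_s)\,dW_s, \qquad X_t = x.
%
% Backward SDE defining the cost: the adapted pair (Y, Z) solves
%   -dY_s = f(s, X_s, Y_s, Z_s, v_s)\,ds - Z_s\,dW_s, \qquad Y_T = \Phi(X_T).
%
% Value function (infimum over admissible controls, as one possible convention):
%   u(t,x) = \operatorname*{ess\,inf}_{v} Y_t^{t,x;v}.
%
% The resulting fully nonlinear (possibly degenerate) HJB-type equation:
%   -\partial_t u
%   - \inf_{v}\Big\{ \tfrac12 \operatorname{tr}\!\big(\sigma\sigma^{\!\top}(t,x,v)\,D^2 u\big)
%       + b(t,x,v)\!\cdot\! D u
%       + f\big(t, x, u, \sigma^{\!\top}(t,x,v) D u, v\big) \Big\} = 0,
%   \qquad u(T,x) = \Phi(x),
% of which u is a viscosity solution.
```
Note that the generator $f$ may depend on $Y$ and $Z$; this dependence is what makes the resulting equation fully nonlinear rather than semilinear, and it is the reason the cost functional must be defined through a backward SDE rather than a plain expectation.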


49L20 Dynamic programming in optimal control and differential games
49L25 Viscosity solutions to Hamilton-Jacobi equations in optimal control and differential games
Full Text: DOI