Peng, Shige. A generalized dynamic programming principle and Hamilton-Jacobi-Bellman equation. (English) Zbl 0756.49015
Stochastics Stochastics Rep. 38, No. 2, 119-134 (1992).

A fully nonlinear second order partial differential equation is interpreted via the value function of a certain optimally controlled diffusion problem. The problem is formulated as follows: the state equation of the control problem is a classical (forward) stochastic differential equation, while the cost functional is given by the adapted solution of a backward stochastic differential equation. Bellman's dynamic programming principle is established for this problem, and the value function is shown to be a viscosity solution of the possibly degenerate fully nonlinear equation.

Reviewer: R. Gessing (Gliwice)

Cited in 5 Reviews; Cited in 134 Documents

MSC:
49L20 Dynamic programming in optimal control and differential games
49L25 Viscosity solutions to Hamilton-Jacobi equations in optimal control and differential games

Keywords: nonlinear second order partial differential equation; optimally controlled diffusion problem; viscosity solution

Full Text: DOI
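For orientation, the standard formulation behind such results can be sketched as follows; the notation (coefficients b, sigma, driver f, terminal function Phi, control process v) is not fixed by the review itself and is assumed here for illustration. The forward equation drives the state, the backward equation defines the cost, and the value function solves the fully nonlinear HJB equation:

```latex
% Hypothetical notation -- a sketch of the generic forward-backward setup,
% not the paper's exact statement.
\begin{aligned}
  dX_s &= b(X_s, v_s)\,ds + \sigma(X_s, v_s)\,dW_s, & X_t &= x,\\
  -dY_s &= f(X_s, Y_s, Z_s, v_s)\,ds - Z_s\,dW_s, & Y_T &= \Phi(X_T),\\
  u(t,x) &= \operatorname*{ess\,inf}_{v(\cdot)} \; Y_t^{t,x;v}, &&
\end{aligned}
```

with u then a viscosity solution of the (possibly degenerate) fully nonlinear equation

```latex
\partial_t u
  + \inf_{v}\Bigl\{ \tfrac{1}{2}\operatorname{tr}\bigl(\sigma\sigma^{\top}(x,v)\,D^2 u\bigr)
  + b(x,v)\cdot Du
  + f\bigl(x, u, \sigma^{\top}(x,v)\,Du, v\bigr) \Bigr\} = 0,
\qquad u(T,\cdot) = \Phi.
```

The nonlinearity in (u, Du) entering through the BSDE driver f is what makes the resulting equation fully nonlinear rather than semilinear.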