
Some estimates for finite difference approximations. (English) Zbl 0684.93088

The author deals with the approximation of optimal control problems for diffusion processes by means of finite difference methods. A typical problem in stochastic control theory is considered. On a complete filtered probability space \((\Omega,P,{\mathcal F},{\mathcal F}(t),\ t\geq 0)\) suppose we have two progressively measurable processes \((y(t),\lambda(t),\ t\geq 0)\) satisfying the following stochastic differential equation in the Itô sense: \[ dy(t)=g(y(t),\lambda(t))\,dt+\sigma(y(t),\lambda(t))\,dw(t),\quad t\geq 0,\quad y(0)=x, \] for given \(x\), \(g\), \(\sigma\), and some n-dimensional Wiener process \((w(t),\ t\geq 0)\). The processes \((y(t),\ t\geq 0)\) and \((\lambda(t),\ t\geq 0)\) represent, respectively, the state in \({\mathcal R}^d\) and the control in \(\Lambda\) (a compact metric space) of the dynamic system. The cost functional is given by \[ J(x,\lambda)=E\Bigl\{\int^{\tau}_{0}f(y(t),\lambda(t))e^{-\alpha t}\,dt\Bigr\}, \] where \(f\) is a given function, \(\alpha>0\), and \(\tau\) is the first exit time of the process \((y(t),\ t\geq 0)\) from a domain \(D\) in \({\mathcal R}^d\).
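For orientation (the review does not display it), the value function \(u(x)=\inf_{\lambda}J(x,\lambda)\) formally satisfies the Hamilton-Jacobi-Bellman equation \[ \alpha u(x)=\inf_{\lambda\in\Lambda}\bigl\{L^{\lambda}u(x)+f(x,\lambda)\bigr\},\quad x\in D,\qquad u=0\ \text{on}\ \partial D, \] where \(L^{\lambda}u=g(\cdot,\lambda)\cdot\nabla u+\tfrac12\operatorname{tr}\bigl(\sigma\sigma^{*}(\cdot,\lambda)D^{2}u\bigr)\) denotes the generator of the controlled diffusion; this standard dynamic programming formulation is recalled here only to fix notation, and the notation \(u\), \(L^{\lambda}\) is the reviewer's, not necessarily the paper's.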
The author introduces a finite difference operator, satisfying the discrete maximum principle, by which the associated Hamilton-Jacobi-Bellman equation can be given a probabilistic interpretation.
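As an illustration of the kind of operator involved (a sketch only; the paper's precise definition may differ), a standard upwind discretization on a one-dimensional grid of mesh \(h\) reads \[ L^{\lambda}_{h}u(x)=\frac{\sigma^{2}(x,\lambda)}{2h^{2}}\bigl(u(x+h)-2u(x)+u(x-h)\bigr)+\frac{g^{+}(x,\lambda)}{h}\bigl(u(x+h)-u(x)\bigr)-\frac{g^{-}(x,\lambda)}{h}\bigl(u(x)-u(x-h)\bigr), \] where \(g^{\pm}\) are the positive and negative parts of the drift. The coefficients of \(u(x\pm h)\) are nonnegative, which yields the discrete maximum principle and allows them, after normalization, to be read as transition probabilities of a controlled Markov chain on the grid; this is the probabilistic interpretation referred to above.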
The one-dimensional case is investigated first, and then the general problem is considered.
Reviewer: M.Tibaldi

MSC:

93E20 Optimal stochastic control
93E25 Computational methods in stochastic control (MSC2010)
49M25 Discrete approximations in optimal control
65K10 Numerical optimization and variational techniques
65G99 Error analysis and interval analysis