Capuzzo Dolcetta, I.; Falcone, M.
Discrete dynamic programming and viscosity solutions of the Bellman equation. (English) Zbl 0674.49028
Ann. Inst. Henri Poincaré, Anal. Non Linéaire 6, Suppl., 161-183 (1989).

Summary: This paper presents a technique for approximating the viscosity solution of the Bellman equation in deterministic control problems. The technique, based on discrete dynamic programming, leads to monotonically converging schemes and makes it possible to prove a priori error estimates. Several computational algorithms leading to monotone convergence are reviewed and compared.

Cited in 21 Documents

MSC:
49L20 Dynamic programming in optimal control and differential games
49M05 Numerical methods based on necessary conditions

Keywords: approximation schemes; viscosity solution; Bellman equation; dynamic programming; monotone convergence

Cite: \textit{I. Capuzzo Dolcetta} and \textit{M. Falcone}, Ann. Inst. Henri Poincaré, Anal. Non Linéaire 6, 161--183 (1989; Zbl 0674.49028)
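For the prototypical discounted infinite-horizon problem, a scheme of the kind discussed here replaces the Bellman equation λv(x) + max_a { -f(x,a)·Dv(x) - ℓ(x,a) } = 0 by the fixed-point relation v_h(x) = min_a { (1-λh) v_h(x + h f(x,a)) + h ℓ(x,a) }, whose value iteration is a contraction of factor 1-λh and, started from zero with a nonnegative running cost, converges monotonically from below. The following sketch (not the authors' code; the one-dimensional dynamics, running cost, grid and tolerance are invented for illustration) shows such an iteration in Python:

import numpy as np

# Illustrative value iteration for the semi-discrete scheme
#   v_h(x) = min_a [ (1 - lam*h) * v_h(x + h*f(x, a)) + h * l(x, a) ],
# a discrete dynamic programming approximation of the Bellman equation
#   lam*v(x) + max_a { -f(x, a)*Dv(x) - l(x, a) } = 0.
# Dynamics, cost, grid and tolerance are invented for this sketch.

lam, h = 1.0, 0.05                      # discount rate and time step (lam*h < 1)
xs = np.linspace(-2.0, 2.0, 201)        # one-dimensional state grid
controls = np.linspace(-1.0, 1.0, 21)   # discretized control set

f = lambda x, a: a                      # toy dynamics  y' = a
l = lambda x, a: x**2 + 0.1 * a**2      # toy running cost

def interp(v, y):
    # piecewise-linear interpolation of the grid values v at the points y,
    # clamped to the computational domain
    return np.interp(np.clip(y, xs[0], xs[-1]), xs, v)

v = np.zeros_like(xs)                   # v = 0 is a subsolution here, so the
for it in range(5000):                  # iterates increase monotonically
    v_new = np.min(
        [(1 - lam * h) * interp(v, xs + h * f(xs, a)) + h * l(xs, a)
         for a in controls], axis=0)
    if np.max(np.abs(v_new - v)) < 1e-9:    # fixed point reached: the map is a
        v = v_new                           # contraction of factor (1 - lam*h)
        break
    v = v_new

print("approximate value at x = 0:", float(interp(v, 0.0)))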
References:
[1] M. Bardi, A boundary value problem for the minimum time function, to appear in SIAM J. Control and Optimization. · Zbl 0682.49034
[2] M. Bardi, M. Falcone, An approximation scheme for the minimum time function, preprint, July 1988. · Zbl 0723.49024
[3] G. Barles, Inéquations quasi-variationnelles du premier ordre et équations d'Hamilton-Jacobi, C. R. Acad. Sc. Paris, t. 296 (1983).
[4] G. Barles, Deterministic impulse control problems, SIAM J. Control and Optimization, Vol. 23, No. 3 (1985). · Zbl 0571.49020
[5] G. Barles, B. Perthame, Discontinuous solutions of deterministic optimal stopping time problems, to appear in Math. Methods and Num. Anal. · Zbl 0629.49017
[6] G. Barles, B. Perthame, Exit time problems in optimal control and vanishing viscosity method, preprint. · Zbl 0674.49027
[7] R. Bellman, Dynamic Programming, Princeton University Press (1957). · Zbl 0077.13605
[8] R. Bellman, S. Dreyfus, Applied Dynamic Programming, Princeton University Press (1962). · Zbl 0106.34901
[9] A. Bensoussan, Contrôle stochastique discret, Cahiers du Ceremade.
[10] A. Bensoussan, W. Runggaldier, An approximation method for stochastic control problems with partial observation of the state - a method for constructing ε-optimal controls, Acta Applicandae Mathematicae, Vol. 10, No. 2 (1987). · Zbl 0653.93069
[11] D. P. Bertsekas, S. E. Shreve, Stochastic Optimal Control: The Discrete Time Case, Academic Press, New York (1978). · Zbl 0471.93002
[12] I. Capuzzo Dolcetta, On a discrete approximation of the Hamilton-Jacobi equation of dynamic programming, Appl. Math. and Optim., 10 (1983). · Zbl 0582.49019
[13] I. Capuzzo Dolcetta, L. C. Evans, Optimal switching for ordinary differential equations, SIAM J. Control and Optimization, Vol. 22 (1984). · Zbl 0641.49017
[14] I. Capuzzo Dolcetta, H. Ishii, Approximate solutions of the Bellman equation of deterministic control theory, Appl. Math. and Optim., 11 (1984). · Zbl 0553.49024
[15] I. Capuzzo Dolcetta, P. L. Lions, Hamilton-Jacobi equations and state-constrained problems, to appear in Transactions of the AMS. · Zbl 0933.49018
[16] I. Capuzzo Dolcetta, M. Matzeu, On the dynamic programming inequalities associated with the optimal stopping problem in discrete and continuous time, Numer. Funct. Anal. Optim., 3, No. 4 (1981). · Zbl 0476.49021
[17] I. Capuzzo Dolcetta, M. Matzeu, J. L. Menaldi, On a system of first order quasi-variational inequalities connected with the optimal switching problem, Systems and Control Letters, Vol. 3, No. 2 (1983). · Zbl 0521.49008
[18] D. A. Carlson, A. Haurie, Infinite Horizon Optimal Control, Lecture Notes in Economics and Mathematical Systems 290, Springer Verlag (1987). · Zbl 0649.49001
[19] M. G. Crandall, H. Ishii, P. L. Lions, Uniqueness of viscosity solutions of Hamilton-Jacobi equations revisited, J. Math. Soc. Japan, Vol. 39, No. 4 (1987). · Zbl 0644.35016
[20] M. G. Crandall, P. L. Lions, Two approximations of solutions of Hamilton-Jacobi equations, Mathematics of Computation, Vol. 43, No. 167 (1984). · Zbl 0556.65076
[21] M. G. Crandall, P. L. Lions, Viscosity solutions of Hamilton-Jacobi equations, Trans. A.M.S., 277 (1983). · Zbl 0599.35024
[22] J. Cullum, An explicit procedure for discretizing continuous optimal control problems, J. Optimization Theory and Applications, 8, No. 1 (1971). · Zbl 0215.21907
[23] M. Falcone, A numerical approach to the infinite horizon problem of deterministic control theory, Appl. Math. Optim., 15 (1987). · Zbl 0715.49023
[24] M. Falcone, Numerical solution of deterministic continuous control problems, Proceedings of the International Symposium on Numerical Analysis, Madrid, 1985.
[25] M. Falcone, Approximate feedback controls for deterministic control problems, in preparation.
[26] W. H. Fleming, Controlled Markov processes and viscosity solutions of nonlinear evolution equations, Lecture Notes, Scuola Normale Superiore, Pisa, to appear. · Zbl 0644.60086
[27] W. H. Fleming, R. Rishel, Deterministic and Stochastic Optimal Control, Springer Verlag, Berlin (1975).
[28] R. Gonzales, E. Rofman, On deterministic control problems: an approximation procedure for the optimal cost, parts I and II, SIAM J. Control and Optimization, Vol. 23, No. 2 (1985). · Zbl 0563.49025
[29] R. A. Howard, Dynamic Programming and Markov Processes, Wiley, New York (1960). · Zbl 0091.16001
[30] M. M. Hrustalev, Necessary and sufficient optimality conditions in the form of Bellman's equation, Soviet Math. Dokl., 19, No. 5 (1978).
[31] R. Kalaba, On nonlinear differential equations, the maximum operation and monotone convergence, J. of Math. and Mech., Vol. 8, No. 4 (1959). · Zbl 0092.07703
[32] H. J. Kushner, Probability Methods for Approximation in Stochastic Control and for Elliptic Equations, Academic Press, New York (1977).
[33] H. J. Kushner, A. J. Kleinman, Accelerated procedures for the solution of discrete Markov control problems, IEEE Trans. Automatic Control, AC-16, 1971.
[34] H. J. Kushner, A. J. Kleinman, Mathematical programming and the control of Markov chains, Int. J. Control, 13 (1971). · Zbl 0217.17802
[35] R. E. Larson, State Increment Dynamic Programming, American Elsevier, New York (1967). · Zbl 0204.47101
[36] R. E. Larson, Korsak, A dynamic programming successive approximation technique with convergence proofs, parts I and II, Automatica, 1969.
[37] E. B. Lee, L. Markus, Foundations of Optimal Control, J. Wiley, New York (1967). · Zbl 0159.13201
[38] P. L. Lions, Generalized Solutions of Hamilton-Jacobi Equations, Pitman, London (1982).
[39] P. L. Lions, Optimal control and viscosity solutions, in I. Capuzzo Dolcetta, W. H. Fleming, T. Zolezzi (eds.), Recent Mathematical Methods in Dynamic Programming, Lecture Notes in Mathematics 1119, Springer Verlag, Berlin (1985).
[40] P. L. Lions, Recent progress on first order Hamilton-Jacobi equations, in Directions in Partial Differential Equations, Academic Press (1987). · Zbl 0659.35015
[41] P. L. Lions, B. Mercier, Approximation numérique des équations de Hamilton-Jacobi-Bellman, R.A.I.R.O. Analyse Numérique, Vol. 14, No. 4 (1980). · Zbl 0469.65041
[42] P. L. Lions, P. E. Souganidis, Differential games, optimal control and directional derivatives of viscosity solutions of Bellman-Isaacs equations, to appear in SIAM J. Control and Optimization. · Zbl 0569.49019
[43] P. Loreti, Approssimazione di soluzioni viscosità dell'equazione di Bellman, Boll. U.M.I., 6, 5-B (1986).
[44] K. Malanowski, On convergence of finite difference approximations to optimal control problems for systems with control appearing linearly, Archiwum Automatyki i Telemechaniki, 24, Zeszyt 2, 1979. · Zbl 0408.49038
[45] E. Mascolo, L. Migliaccio, Relaxation methods in optimal control, to appear in J. Optim. Th. App. · Zbl 0689.49029
[46] J. L. Menaldi, Probabilistic view of estimates for finite difference methods, to appear in Mathematicae Notae. · Zbl 0713.60075
[47] J. L. Menaldi, E. Rofman, An algorithm to compute the viscosity solution of the Hamilton-Jacobi-Bellman equation, in C. I. Byrnes, A. Lindquist (eds.), Theory and Applications of Nonlinear Control Systems, North-Holland (1986).
[48] M. L. Puterman, S. L. Brumelle, On the convergence of policy iteration in stationary dynamic programming, Math. of Operations Research, Vol. 4, No. 1 (1979). · Zbl 0411.90072
[49] J. P. Quadrat, Existence de solution et algorithme de résolution numérique de problèmes stochastiques dégénérés ou non, SIAM J. Control and Optimization, Vol. 18 (1980).
[50] H. M. Soner, Optimal control with state space constraint, SIAM J. Control and Optimization, Vol. 24, No. 3 (1986).
[51] P. E. Souganidis, Approximation schemes for viscosity solutions of Hamilton-Jacobi equations, Journal of Differential Equations, 57 (1985). · Zbl 0536.70020
[52] J. Warga, Optimal Control of Differential and Functional Equations, Academic Press, New York (1972). · Zbl 0253.49001
[53] L. C. Young, Lectures on the Calculus of Variations and Optimal Control Theory, W. B. Saunders, Philadelphia (1969). · Zbl 0177.37801