Discounted Markov control processes induced by deterministic systems. (English) Zbl 1249.90312

Summary: This paper deals with Markov control processes (MCPs) on Euclidean spaces with an infinite horizon and a discounted total cost. First, MCPs induced by deterministic controlled systems are analyzed; for such MCPs, conditions are given that permit one to establish the equation known in the economics literature as the Euler equation (EE). An example of an MCP with a deterministic controlled system is presented in which the EE is combined with the value iteration algorithm to obtain the optimal value function. Second, MCPs that result from perturbing deterministic controlled systems with random noise are considered. Conditions are presented that allow the optimal value function and the optimal policy of a perturbed controlled system to be obtained in terms of the optimal value function and the optimal policy of the corresponding deterministic controlled system. Finally, several examples illustrating the latter case are presented.
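The value iteration algorithm mentioned above can be sketched as follows. This is a minimal illustration, not the paper's example: the model (grid state space, one-period cost c(x, a) = (x - a)^2 + a, discount factor beta = 0.9) is hypothetical, chosen only to show how the Bellman operator is iterated to approximate the optimal value function and policy of a deterministic discounted MCP.

```python
import numpy as np

def value_iteration(grid, beta=0.9, tol=1e-8, max_iter=10_000):
    """Approximate the optimal value function of a deterministic
    discounted control problem on a finite grid.

    Hypothetical model (an assumption, not from the paper): the action
    is the next state a on the same grid, and the one-period cost is
    c(x, a) = (x - a)**2 + a.
    """
    n = len(grid)
    # cost[i, j] = one-period cost of moving from grid[i] to grid[j]
    cost = (grid[:, None] - grid[None, :]) ** 2 + grid[None, :]
    V = np.zeros(n)
    for _ in range(max_iter):
        # Bellman operator: Q(x, a) = c(x, a) + beta * V(a)
        Q = cost + beta * V[None, :]
        V_new = Q.min(axis=1)
        if np.max(np.abs(V_new - V)) < tol:
            V = V_new
            break
        V = V_new
    policy = grid[Q.argmin(axis=1)]  # greedy (optimal) next state
    return V, policy

grid = np.linspace(0.0, 1.0, 51)
V, policy = value_iteration(grid)
```

Because the discount factor beta is less than one, the Bellman operator is a contraction, so the iterates converge geometrically to the unique fixed point; in the paper's setting the EE is used within this iteration to characterize the optimizer at each step.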


90C40 Markov and semi-Markov decision processes
93E20 Optimal stochastic control
Full Text: EuDML Link


[1] Benveniste L. M., Scheinkman J. A.: On the differentiability of the value function in dynamic models of economics. Econometrica 47 (1979), 727-732 · Zbl 0435.90031 · doi:10.2307/1910417
[2] Bertsekas D. P.: Dynamic Programming: Deterministic and Stochastic Models. Prentice-Hall, Englewood Cliffs, New Jersey 1987 · Zbl 0649.93001
[3] Cruz-Suárez D., Montes-de-Oca, R., Salem-Silva F.: Conditions for the uniqueness of optimal policies of discounted Markov decision processes. Math. Methods Oper. Res. 60 (2004), 415-436 · Zbl 1104.90053 · doi:10.1007/s001860400372
[4] Fuente A. De la: Mathematical Methods and Models for Economists. Cambridge University Press, New York 2000 · Zbl 0943.91001 · doi:10.1017/CBO9780511810756
[5] Duffie D.: Security Markets. Academic Press, Boston 1988 · Zbl 0861.90019 · doi:10.1016/0304-4068(95)00740-7
[6] Durán J.: On dynamic programming with unbounded returns. Econom. Theory 15 (2000), 339-352 · Zbl 1101.91339 · doi:10.1007/s001990050016
[7] Heer B., Maußner A.: Dynamic General Equilibrium Modelling: Computational Method and Application. Springer-Verlag, Berlin 2005 · Zbl 1180.91005 · doi:10.1007/b138909
[8] Hernández-Lerma O.: Adaptive Markov Control Processes. Springer-Verlag, New York 1989 · Zbl 0698.90053 · doi:10.1007/978-1-4419-8714-3
[9] Hernández-Lerma O., Lasserre J. B.: Discrete-Time Markov Control Processes: Basic Optimality Criteria. Springer-Verlag, New York 1996 · Zbl 0840.93001
[10] Le Van C., Morhaim L.: Optimal growth models with bounded or unbounded returns: a unifying approach. J. Econom. Theory 105 (2002), 158-187 · Zbl 1013.91079 · doi:10.1006/jeth.2001.2880
[11] Levhari D., Srinivasan T. N.: Optimal savings under uncertainty. Rev. Econom. Stud. 36 (1969), 153-164 · doi:10.2307/2296834
[12] Mirman L. J.: Dynamic models of fishing: a heuristic approach. Control Theory in Mathematical Economics (Pan-Tai Liu and J. G. Sutinen, eds.), Marcel Dekker, New York 1979, pp. 39-73 · Zbl 0432.90024
[13] Rincón-Zapatero J. L., Rodríguez-Palmero C.: Existence and uniqueness of solutions to the Bellman equation in the unbounded case. Econometrica 71 (2003), 1519-1555 · Zbl 1160.49304 · doi:10.3982/ECTA7770
[14] Santos M. S.: Numerical solution of dynamic economic models. Handbook of Macroeconomics, Volume 1 (J. B. Taylor and M. Woodford, eds.), North Holland, Amsterdam 1999, pp. 311-386
[15] Stokey N. L., Lucas R. E.: Recursive Methods in Economic Dynamics. Harvard University Press, Cambridge, Mass. 1989 · Zbl 0774.90018