Discrete-time Markov control processes with recursive discount rates. (English) Zbl 1357.49110

Summary: This work analyzes a discrete-time Markov Control Model (MCM) on Borel spaces in which the performance index is the expected total discounted cost; the cost function may be unbounded. It is assumed that the discount rate in each period is obtained by applying a recursive function to a known initial discount rate. The classic dynamic programming method is verified for the finite-horizon case. Under mild conditions, the existence of deterministic non-stationary optimal policies is proven for the infinite-horizon case. The value-iteration method is also used to find deterministic non-stationary \(\epsilon\)-optimal policies. As an example of recursive functions that generate discount rates, we consider the expected values of stochastic processes that solve a certain class of Stochastic Differential Equations (SDE) between consecutive periods, with the previous discount rate taken as the initial condition. Finally, the consumption-investment problem and the discounted linear-quadratic problem are presented as examples; in both cases, the discount rates are obtained from an SDE similar to the Vasicek short-rate model.
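The recursive discount-rate construction described above can be sketched concretely. The following is a minimal illustration, not the paper's implementation: it assumes a Vasicek-type SDE \(dr_t = a(b - r_t)\,dt + \sigma\,dW_t\), whose conditional expectation over a period of length \(\delta\) is \(b + (r_0 - b)e^{-a\delta}\), and uses that map to generate the period-by-period discount rates; the function names, parameter values, and the \(1/(1+\alpha_k)\) discounting convention are all illustrative assumptions.

```python
import math

def next_rate(alpha, a=0.5, b=0.04, delta=1.0):
    """One step of the recursive discount-rate map: the Vasicek conditional
    mean E[r_delta | r_0 = alpha] = b + (alpha - b) * exp(-a * delta).
    Parameter values are hypothetical, chosen only for illustration."""
    return b + (alpha - b) * math.exp(-a * delta)

def discount_factors(alpha0, n, **kw):
    """Generate the first n discount rates alpha_0, ..., alpha_{n-1} and the
    cumulative factors prod_{k<=j} 1/(1 + alpha_k), j = 0, ..., n-1
    (one common discounting convention; the paper's may differ)."""
    rates, factors = [alpha0], []
    acc = 1.0
    for _ in range(n):
        acc /= 1.0 + rates[-1]
        factors.append(acc)
        rates.append(next_rate(rates[-1], **kw))
    return rates[:-1], factors

rates, factors = discount_factors(0.10, 5)
# With these parameters the rates decrease monotonically toward the
# long-run mean b = 0.04, and the cumulative factors are decreasing.
```

The mean-reversion of the Vasicek conditional expectation is what keeps the generated discount rates in a bounded band, which is the kind of behavior the summability assumptions on the discounted criterion rely on.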


49L20 Dynamic programming in optimal control and differential games
90C40 Markov and semi-Markov decision processes
90C39 Dynamic programming
93E20 Optimal stochastic control
Full Text: DOI Link