Markov decision models with weighted discounted criteria. (English) Zbl 0803.90123

Summary: We consider a discrete-time Markov Decision Process with infinite horizon. The criterion to be maximized is the sum of a number of standard discounted rewards, each with a different discount factor. Situations in which such criteria arise include modeling investments, production, projects of different durations, systems with multiple criteria, and some axiomatic formulations of multi-attribute preference theory. We show that for this criterion there need not exist, for some positive \(\varepsilon\), an \(\varepsilon\)-optimal (randomized) stationary strategy, even when the state and action sets are finite. However, \(\varepsilon\)-optimal Markov (nonrandomized) strategies and optimal Markov strategies exist under weak conditions. We exhibit \(\varepsilon\)-optimal Markov strategies which are stationary from some time onward. When both state and action spaces are finite, there exists an optimal Markov strategy with this property. We provide an explicit algorithm for the computation of such strategies and give a description of the set of optimal strategies.
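The criterion described above, \(\sum_k \sum_t \beta_k^t\, \mathbb{E}\,r_k(s_t,a_t)\) with distinct discount factors \(\beta_k\), can be illustrated by a small evaluation routine. This is a sketch for intuition only, not the paper's algorithm; the MDP data and the truncation horizon below are hypothetical.

```python
# Illustrative sketch (not the paper's algorithm): evaluating a weighted
# discounted criterion V = sum_k sum_t beta_k^t E[r_k(s_t, a_t)]
# for a fixed stationary policy on a small finite MDP, by truncating
# the infinite sum at a large horizon. All example data are hypothetical.

def weighted_discounted_value(P, rewards, betas, policy, s0, horizon=2000):
    """P[s][a][s2]: transition probabilities; rewards[k][s][a]: reward of
    component k; betas[k]: discount factor of component k; policy[s]: the
    action chosen in state s (stationary, nonrandomized)."""
    n = len(P)
    dist = [0.0] * n          # distribution of the state at time t
    dist[s0] = 1.0
    total = 0.0
    for t in range(horizon):
        # add each component's discounted expected reward at time t
        for k, beta in enumerate(betas):
            total += (beta ** t) * sum(
                dist[s] * rewards[k][s][policy[s]] for s in range(n))
        # propagate the state distribution one step under the policy
        dist = [sum(dist[s] * P[s][policy[s]][s2] for s in range(n))
                for s2 in range(n)]
    return total

# Hypothetical 2-state, 1-action chain where both states are absorbing,
# so each component's value is a plain geometric series.
P = [[[1.0, 0.0]], [[0.0, 1.0]]]            # state 0 and state 1 absorb
rewards = [[[1.0], [0.0]], [[0.0], [1.0]]]  # r_0 pays in state 0, r_1 in state 1
betas = [0.9, 0.5]
policy = [0, 0]
v = weighted_discounted_value(P, rewards, betas, policy, s0=0)
# starting in state 0: component 0 contributes sum_t 0.9^t = 10, component 1 contributes 0
```

Note that a strategy maximizing each component separately need not maximize the sum, which is why the summary's nonexistence result for stationary strategies is possible even on finite state and action sets.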


90C40 Markov and semi-Markov decision processes
Full Text: DOI Link