Feinberg, Eugene A.; Shwartz, Adam
Markov decision models with weighted discounted criteria. (English) Zbl 0803.90123
Math. Oper. Res. 19, No. 1, 152-168 (1994).

Summary: We consider a discrete-time Markov decision process with infinite horizon. The criterion to be maximized is the sum of a number of standard discounted rewards, each with a different discount factor. Situations in which such criteria arise include modeling investments, production, projects of different durations, systems with multiple criteria, and some axiomatic formulations of multi-attribute preference theory. We show that for this criterion there need not exist an \(\varepsilon\)-optimal (randomized) stationary strategy for some positive \(\varepsilon\), even when the state and action sets are finite. However, \(\varepsilon\)-optimal Markov (nonrandomized) strategies and optimal Markov strategies exist under weak conditions. We exhibit \(\varepsilon\)-optimal Markov strategies which are stationary from some time onward. When both the state and action spaces are finite, there exists an optimal Markov strategy with this property. We provide an explicit algorithm for the computation of such strategies and give a description of the set of optimal strategies.

Cited in 16 Documents

MSC: 90C40 Markov and semi-Markov decision processes

Keywords: discrete time Markov decision process; infinite horizon

Cite: \textit{E. A. Feinberg} and \textit{A. Shwartz}, Math. Oper. Res. 19, No. 1, 152--168 (1994; Zbl 0803.90123)
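
To fix notation, here is a minimal sketch of the weighted discounted criterion in generic MDP conventions (the symbols \(K\), \(r_k\), \(\beta_k\), \(x_t\), \(a_t\), \(\mathbb{E}_x^\pi\), and \(N\) below are illustrative choices, not the paper's own notation): a strategy \(\pi\) is evaluated by
\[
V(x,\pi) \;=\; \sum_{k=1}^{K} \mathbb{E}_x^{\pi}\Bigl[\,\sum_{t=0}^{\infty} \beta_k^{\,t}\, r_k(x_t,a_t)\Bigr],
\]
where each \(\beta_k \in [0,1)\) is a separate discount factor and \(r_k\) the corresponding one-step reward. In this notation, a Markov strategy \(\pi = (\pi_0, \pi_1, \dots)\) is "stationary from some time onward" if there exist a time \(N\) and a single decision rule \(\varphi\) such that \(\pi_t = \varphi\) for all \(t \ge N\), i.e. \(\pi = (\pi_0, \dots, \pi_{N-1}, \varphi, \varphi, \dots)\).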