## Asymptotically optimal priority policies for indexable and nonindexable restless bandits

*(English)* Zbl 1349.90834

Summary: We study the asymptotic optimal control of multi-class restless bandits. A restless bandit is a controllable stochastic process whose state evolution depends on whether or not the bandit is made active. Since finding the optimal control is typically intractable, we propose a class of priority policies that are proved to be asymptotically optimal under a global attractor property and a technical condition. We consider both a fixed population of bandits and a dynamic population in which bandits can arrive and depart. As an example of a dynamic population of bandits, we analyze a multi-class \(M/M/S+M\) queue for which we show asymptotic optimality of an index policy.

We combine fluid-scaling techniques with linear programming results to prove that when bandits are indexable, Whittle’s index policy is included in our class of priority policies. We thereby generalize a result of R. R. Weber and G. Weiss [J. Appl. Probab. 27, No. 3, 637–648 (1990; Zbl 0735.90072)] about the asymptotic optimality of Whittle’s index policy to settings with (i) several classes of bandits, (ii) arrivals of new bandits and (iii) multiple actions.
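As a rough illustration of the kind of dynamic-population model mentioned above, the following sketch simulates a multi-class \(M/M/S+M\) queue (Poisson arrivals, exponential services, exponential abandonment of waiting customers) under a fixed priority policy. The class parameters, priority order, and function name are illustrative choices for this sketch, not taken from the paper.

```python
import random

def simulate_mmsm(S, classes, priority, horizon, seed=0):
    """Event-driven simulation of a multi-class M/M/S+M queue under a
    fixed priority policy.

    `classes` maps a class name to (arrival rate, service rate,
    abandonment rate); `priority` lists class names from highest to
    lowest priority. Servers are reassigned to the highest-priority
    work at every event; only waiting (unserved) customers abandon.
    Returns the number of abandonments per class.
    """
    random.seed(seed)
    queue = {k: 0 for k in classes}      # customers of each class in system
    abandoned = {k: 0 for k in classes}
    t = 0.0
    while t < horizon:
        # The priority rule: servers go to the highest-priority class first.
        in_service = {}
        free = S
        for k in priority:
            in_service[k] = min(queue[k], free)
            free -= in_service[k]
        # Competing exponential clocks: arrivals, completions, abandonments.
        rates = []
        for k, (lam, mu, theta) in classes.items():
            rates.append((lam, ("arrival", k)))
            rates.append((mu * in_service[k], ("service", k)))
            rates.append((theta * (queue[k] - in_service[k]), ("abandon", k)))
        total = sum(r for r, _ in rates)
        t += random.expovariate(total)
        # Select the next event with probability proportional to its rate.
        u = random.random() * total
        for r, (event, k) in rates:
            u -= r
            if u <= 0:
                break
        if event == "arrival":
            queue[k] += 1
        else:
            queue[k] -= 1
            if event == "abandon":
                abandoned[k] += 1
    return abandoned

# Illustrative overloaded system: one server, two identical classes, so any
# difference in abandonments is due purely to the priority order.
classes = {"hi": (1.0, 1.0, 0.5), "lo": (1.0, 1.0, 0.5)}
result = simulate_mmsm(S=1, classes=classes, priority=["hi", "lo"], horizon=200.0)
print(result)
```

In an overloaded regime like this one, the abandonment counts make the effect of the priority order visible: the low-priority class receives only the capacity left over by the high-priority class.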

Indexability of the bandits is not required for our results to hold. For nonindexable bandits, we describe how to select priority policies from the class of asymptotically optimal policies and present numerical evidence that, outside the asymptotic regime, the performance of our proposed priority policies is nearly optimal.
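At the level of a single decision epoch, any priority policy of the kind discussed here reduces to the same simple rule: activate the bandits whose current priority index is highest, up to the activation budget. A minimal sketch, with arbitrary illustrative indices and bandit labels (not Whittle indices from the paper):

```python
import heapq

def priority_activation(bandits, num_active):
    """Activate the `num_active` bandits with the highest priority index.

    `bandits` is a list of (priority_index, bandit_id) pairs. The policy
    sorts by index and activates the top `num_active` bandits; all other
    bandits stay passive (but, being restless, still evolve).
    """
    top = heapq.nlargest(num_active, bandits)
    return {bandit_id for _, bandit_id in top}

# Illustrative example: six bandits, activation budget of three.
bandits = [(0.9, "a"), (0.1, "b"), (0.5, "c"), (0.7, "d"), (0.2, "e"), (0.4, "f")]
active = priority_activation(bandits, 3)  # the three highest-index bandits
```

In the indexable case the indices would be Whittle indices recomputed at each state; in the nonindexable case the paper instead selects a priority ordering from its class of asymptotically optimal policies, but the activation step has this same form.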


### MSC:

| MSC | Classification |
| --- | --- |
| 90C40 | Markov and semi-Markov decision processes |
| 68M20 | Performance evaluation, queueing, and scheduling in the context of computer systems |
| 90B36 | Stochastic scheduling theory in operations research |