
\({\mathcal Q}\)-learning. (English) Zbl 0773.68062

Summary: \({\mathcal Q}\)-learning is a simple way for agents to learn how to act optimally in controlled Markovian domains. It amounts to an incremental method for dynamic programming which imposes limited computational demands. It works by successively improving its evaluations of the quality of particular actions at particular states.
The paper presents and proves in detail a convergence theorem for \({\mathcal Q}\)-learning based on that outlined in C. J. C. H. Watkins [Learning from delayed rewards. Ph.D. Thesis, University of Cambridge, England (1989)]. We show that \({\mathcal Q}\)-learning converges to the optimum action-values with probability 1 so long as all actions are repeatedly sampled in all states and the action-values are represented discretely. We also sketch extensions to the cases of non-discounted, but absorbing, Markov environments, and where many \({\mathcal Q}\) values can be changed each iteration, rather than just one.
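For orientation, the incremental update at the heart of the method can be sketched in the standard notation, which is not spelled out in this summary: \(x_n\) is the state observed at step \(n\), \(a_n\) the action taken, \(y_n\) the successor state, \(r_n\) the immediate reward, \(\gamma\) the discount factor, and \(\alpha_n\) the learning rate. The update is
\[
{\mathcal Q}_n(x_n,a_n) = (1-\alpha_n)\,{\mathcal Q}_{n-1}(x_n,a_n) + \alpha_n\bigl[r_n + \gamma \max_b {\mathcal Q}_{n-1}(y_n,b)\bigr],
\]
with all other entries of \({\mathcal Q}_n\) left equal to those of \({\mathcal Q}_{n-1}\). Convergence to the optimal action-values then holds under the repeated-sampling and discrete-representation conditions stated above, together with the usual learning-rate conditions on the \(\alpha_n\).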

MSC:

68T05 Learning and adaptive systems in artificial intelligence

References:

[1] Barto, A.G., Bradtke, S.J. & Singh, S.P. (1991). Real-time learning and control using asynchronous dynamic programming. (COINS technical report 91-57). Amherst: University of Massachusetts.
[2] Barto, A.G. & Singh, S.P. (1990). On the computational economics of reinforcement learning. In D.S. Touretzky, J. Elman, T.J. Sejnowski & G.E. Hinton (Eds.), Proceedings of the 1990 Connectionist Models Summer School. San Mateo, CA: Morgan Kaufmann.
[3] Bellman, R.E. & Dreyfus, S.E. (1962). Applied dynamic programming. RAND Corporation. · Zbl 0106.34901
[4] Chapman, D. & Kaelbling, L.P. (1991). Input generalization in delayed reinforcement learning: An algorithm and performance comparisons. Proceedings of the 1991 International Joint Conference on Artificial Intelligence (pp. 726-731). · Zbl 0748.68047
[5] Kushner, H.J. & Clark, D.S. (1978). Stochastic approximation methods for constrained and unconstrained systems. Berlin, Germany: Springer-Verlag. · Zbl 0381.60004
[6] Lin, L. (1992). Self-improving reactive agents based on reinforcement learning, planning and teaching. Machine Learning, 8.
[7] Mahadevan, S. & Connell, J. (1991). Automatic programming of behavior-based robots using reinforcement learning. Proceedings of the 1991 National Conference on AI (pp. 768-773).
[8] Ross, S. (1983). Introduction to stochastic dynamic programming. New York: Academic Press. · Zbl 0567.90065
[9] Sato, M., Abe, K. & Takeda, H. (1988). Learning control of finite Markov chains with explicit trade-off between estimation and control. IEEE Transactions on Systems, Man and Cybernetics, 18, pp. 677-684. · Zbl 0674.65036
[10] Sutton, R.S. (1984). Temporal credit assignment in reinforcement learning. PhD Thesis, University of Massachusetts, Amherst, MA.
[11] Sutton, R.S. (1988). Learning to predict by the methods of temporal differences. Machine Learning, 3, pp. 9-44.
[12] Sutton, R.S. (1990). Integrated architectures for learning, planning, and reacting based on approximating dynamic programming. Proceedings of the Seventh International Conference on Machine Learning. San Mateo, CA: Morgan Kaufmann.
[13] Watkins, C.J.C.H. (1989). Learning from delayed rewards. PhD Thesis, University of Cambridge, England.
[14] Werbos, P.J. (1977). Advanced forecasting methods for global crisis warning and models of intelligence. General Systems Yearbook, 22, pp. 25-38.