
zbMATH — the first resource for mathematics

A decision-theoretic generalization of on-line learning and an application to boosting. (English) Zbl 0880.68103
Summary: In the first part of the paper we consider the problem of dynamically apportioning resources among a set of options in a worst-case on-line framework. The model we study can be interpreted as a broad, abstract extension of the well-studied on-line prediction model to a general decision-theoretic setting. We show that the multiplicative weight-update Littlestone-Warmuth rule can be adapted to this model, yielding bounds that are slightly weaker in some cases, but applicable to a considerably more general class of learning problems. We show how the resulting learning algorithm can be applied to a variety of problems, including gambling, multiple-outcome prediction, repeated games, and prediction of points in \(\mathbb{R}^n\). In the second part of the paper we apply the multiplicative weight-update technique to derive a new boosting algorithm. This boosting algorithm does not require any prior knowledge about the performance of the weak learning algorithm. We also study generalizations of the new boosting algorithm to the problem of learning functions whose range, rather than being binary, is an arbitrary finite set or a bounded segment of the real line.
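
The summary above describes two algorithms concretely enough to sketch: the multiplicative weight-update rule for on-line allocation, and the boosting procedure built on the same technique. The following Python sketch is purely illustrative and assumes binary labels in {0, 1}, per-trial losses in [0, 1], and a caller-supplied weak learner; the names hedge, adaboost, and the weak_learn interface are choices made here, not notation from the paper.

import math


def hedge(loss_rows, beta=0.9):
    # On-line allocation: keep one weight per strategy, play the normalized
    # weights as a mixture, and multiply each weight by beta**loss after the
    # trial.  loss_rows is a list of per-trial loss vectors with entries in [0, 1].
    n = len(loss_rows[0])
    w = [1.0] * n
    total_loss = 0.0
    for losses in loss_rows:
        s = sum(w)
        p = [wi / s for wi in w]                      # distribution played this trial
        total_loss += sum(pi * li for pi, li in zip(p, losses))
        w = [wi * beta ** li for wi, li in zip(w, losses)]
    return total_loss


def adaboost(examples, weak_learn, rounds=20):
    # Boosting with the beta-based reweighting described in the summary.
    # examples is a list of (x, y) pairs with y in {0, 1}; weak_learn(examples, dist)
    # is a caller-supplied routine (hypothetical interface) returning a hypothesis
    # h(x) -> {0, 1} whose error under dist should be below 1/2.
    m = len(examples)
    dist = [1.0 / m] * m                              # uniform initial distribution
    hypotheses, alphas = [], []
    for _ in range(rounds):
        h = weak_learn(examples, dist)
        mistakes = [abs(h(x) - y) for x, y in examples]      # 0/1 per example
        eps = sum(d * e for d, e in zip(dist, mistakes))      # weighted error
        if eps <= 0.0 or eps >= 0.5:
            break                                     # weak learner perfect or failed
        beta = eps / (1.0 - eps)
        # Correctly classified examples are multiplied by beta (< 1), mistakes are
        # left unchanged, then the distribution is renormalized.
        dist = [d * beta ** (1 - e) for d, e in zip(dist, mistakes)]
        z = sum(dist)
        dist = [d / z for d in dist]
        hypotheses.append(h)
        alphas.append(math.log(1.0 / beta))           # vote weight of this round

    def classify(x):
        # Weighted majority vote of the weak hypotheses.
        vote = sum(a * h(x) for a, h in zip(alphas, hypotheses))
        return 1 if vote >= 0.5 * sum(alphas) else 0

    return classify

In practice the weak learner can be as simple as a one-feature decision stump trained on the weighted sample; any hypothesis whose weighted error stays bounded away from 1/2 suffices, which is the sense in which no prior knowledge of the weak learner's performance is needed.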

MSC:
68T05 Learning and adaptive systems in artificial intelligence
Software:
AdaBoost.MH
References:
[1] Baum, E. B.; Haussler, D., What size net gives valid generalization?, Adv. Neural Inform. Process. Systems I, 81-90, (1989)
[2] Blackwell, D., An analog of the minimax theorem for vector payoffs, Pacific J. Math., 6, 1-8, (Spring 1956) · Zbl 0074.34403
[3] L. Breiman, Bias, variance, and arcing classifiers, Technical Report, Statistics Dept., University of California, 1996
[4] N. Cesa-Bianchi, Y. Freund, D. P. Helmbold, D. Haussler, R. E. Schapire, M. K. Warmuth, How to use expert advice, Proceedings of the Twenty-Fifth Annual ACM Symposium on the Theory of Computing, 1993, pp. 382-391 · Zbl 1310.68177
[5] T. H. Chung, Approximate methods for sequential decision making using expert advice, Proceedings of the Seventh Annual ACM Conference on Computational Learning Theory, 1994, pp. 183-189
[6] Cover, T. M., Universal portfolios, Math. Finance, 1, 1-29, (Jan. 1991) · Zbl 0900.90052
[7] Dietterich, T. G.; Bakiri, G., Solving multiclass learning problems via error-correcting output codes, J. Artif. Intell. Res., 2, 263-286, (January 1995) · Zbl 0900.68358
[8] Drucker, H.; Cortes, C., Boosting decision trees, Adv. Neural Inform. Process. Systems, 8, (1996)
[9] Drucker, H.; Schapire, R.; Simard, P., Boosting performance in neural networks, Int. J. Pattern Recognition Artif. Intell., 7, 705-719, (1993)
[10] Y. Freund, Data Filtering and Distribution Modeling Algorithms for Machine Learning, Ph.D. thesis, University of California at Santa Cruz, 1993
[11] Freund, Y., Boosting a weak learning algorithm by majority, Inform. and Comput., 121, 256-285, (September 1995) · Zbl 0833.68109
[12] Y. Freund, R. E. Schapire, Experiments with a new boosting algorithm, Machine Learning: Proceedings of the Thirteenth International Conference, 1996, pp. 148-156
[13] Y. Freund, R. E. Schapire, Game theory, on-line prediction and boosting, Proceedings of the Ninth Annual Conference on Computational Learning Theory, 1996, pp. 325-332
[14] Hannan, J., Approximation to Bayes risk in repeated play, Contributions to the Theory of Games, (1957), Princeton Univ. Press Princeton, p. 97-139 · Zbl 0078.32804
[15] Haussler, D.; Kivinen, J.; Warmuth, M. K., Tight worst-case loss bounds for predicting with expert advice, Computational Learning Theory: Second European Conference, EuroCOLT ’95, (1995), Springer-Verlag New York/Berlin, p. 69-83
[16] Jackson, J. C.; Craven, M. W., Learning sparse perceptrons, Adv. Neural Inform. Process. Systems, 8, (1996)
[17] M. Kearns, Y. Mansour, A. Y. Ng, D. Ron, An experimental and theoretical comparison of model selection methods, Proceedings of the Eighth Annual Conference on Computational Learning Theory, 1995
[18] Kearns, M. J.; Vazirani, U. V., An Introduction to Computational Learning Theory, (1994), MIT Press Cambridge
[19] Kivinen, J.; Warmuth, M. K., Using experts for predicting continuous outcomes, Computational Learning Theory: EuroCOLT ’93, (1994), Springer-Verlag New York/Berlin, p. 109-120
[20] Littlestone, N.; Warmuth, M. K., The weighted majority algorithm, Inform. and Comput., 108, 212-261, (1994) · Zbl 0804.68121
[21] J. R. Quinlan, Bagging, boosting, and C4.5, Proceedings, Thirteenth National Conference on Artificial Intelligence, 1996
[22] Schapire, R. E., The strength of weak learnability, Machine Learning, 5, 197-227, (1990)
[23] Vapnik, V. N., Estimation of Dependences Based on Empirical Data, (1982), Springer-Verlag New York/Berlin · Zbl 0499.62005
[24] V. G. Vovk, A game of prediction with expert advice, Proceedings of the Eighth Annual Conference on Computational Learning Theory, 1995 · Zbl 0945.68528
[25] V. G. Vovk, Aggregating strategies, Proceedings of the Third Annual Workshop on Computational Learning Theory, 1990, pp. 321-383
[26] Wenocur, R. S.; Dudley, R. M., Some special Vapnik-Chervonenkis classes, Discrete Mathematics, 33, 313-318, (1981) · Zbl 0459.60008