
A decision-theoretic generalization of on-line learning and an application to boosting. (English) Zbl 0880.68103

Summary: In the first part of the paper we consider the problem of dynamically apportioning resources among a set of options in a worst-case on-line framework. The model we study can be interpreted as a broad, abstract extension of the well-studied on-line prediction model to a general decision-theoretic setting. We show that the multiplicative weight-update Littlestone-Warmuth rule can be adapted to this model, yielding bounds that are slightly weaker in some cases, but applicable to a considerably more general class of learning problems. We show how the resulting learning algorithm can be applied to a variety of problems, including gambling, multiple-outcome prediction, repeated games, and prediction of points in \(\mathbb{R}^n\). In the second part of the paper we apply the multiplicative weight-update technique to derive a new boosting algorithm. This boosting algorithm does not require any prior knowledge about the performance of the weak learning algorithm. We also study generalizations of the new boosting algorithm to the problem of learning functions whose range, rather than being binary, is an arbitrary finite set or a bounded segment of the real line.
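The summary describes two instances of the same multiplicative weight-update idea: an allocation rule for the on-line decision-theoretic setting, and a boosting algorithm that reweights training examples between calls to the weak learner. The following Python sketch is a rough illustration under stated assumptions, not the paper's pseudocode: losses are assumed to lie in [0, 1], labels are taken in {-1, +1} (the paper uses {0, 1}), the stopping rules are simplified, and the names hedge, adaboost, and weak_learn are hypothetical.

import numpy as np

def hedge(loss_vectors, beta=0.9):
    # Allocation over n options: each round the learner plays the normalized
    # weight vector, suffers the corresponding mixture loss, and multiplies
    # each option's weight by beta raised to that option's loss.
    n = len(loss_vectors[0])
    w = np.ones(n)
    total_loss = 0.0
    for loss in loss_vectors:
        loss = np.asarray(loss, dtype=float)   # losses assumed to lie in [0, 1]
        p = w / w.sum()                        # current distribution over options
        total_loss += float(p @ loss)          # loss suffered this round
        w *= beta ** loss                      # multiplicative weight update
    return total_loss, w / w.sum()

def adaboost(X, y, weak_learn, rounds=10):
    # Boosting loop with labels y in {-1, +1}; weak_learn(X, y, dist) is assumed
    # to return a callable classifier h with h(X) in {-1, +1}.
    m = len(y)
    dist = np.ones(m) / m                      # distribution over training examples
    hyps, alphas = [], []
    for _ in range(rounds):
        h = weak_learn(X, y, dist)
        pred = h(X)
        eps = float(dist[pred != y].sum())     # weighted training error
        if eps >= 0.5:                         # no better than chance: stop
            break
        if eps == 0.0:                         # perfect weak hypothesis: keep it and stop
            hyps.append(h)
            alphas.append(1.0)
            break
        beta_t = eps / (1.0 - eps)
        dist[pred == y] *= beta_t              # shrink weights of correctly classified examples
        dist /= dist.sum()                     # renormalize to a distribution
        hyps.append(h)
        alphas.append(np.log(1.0 / beta_t))    # vote weight of this round's hypothesis
    def final(Xq):
        # Weighted-majority vote of the collected weak hypotheses.
        votes = sum(a * h(Xq) for a, h in zip(alphas, hyps))
        return np.sign(votes)
    return final

With any weak learner whose weighted error stays bounded away from 1/2 on every round (for example, decision stumps on a simple dataset), the combined classifier's training error decreases exponentially with the number of rounds, which is the guarantee established in the second part of the paper.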

MSC:

68T05 Learning and adaptive systems in artificial intelligence

Software:

AdaBoost.MH

References:

[1] Baum, E. B.; Haussler, D., What size net gives valid generalization?, Adv. Neural Inform. Process. Systems, 1, 81-90 (1989)
[2] Blackwell, D., An analog of the minimax theorem for vector payoffs, Pacific J. Math., 6, 1-8 (1956) · Zbl 0074.34403
[6] Cover, T. M., Universal portfolios, Math. Finance, 1, 1-29 (1991) · Zbl 0900.90052
[7] Dietterich, T. G.; Bakiri, G., Solving multiclass learning problems via error-correcting output codes, J. Artif. Intell. Res., 2, 263-286 (1995) · Zbl 0900.68358
[8] Drucker, H.; Cortes, C., Boosting decision trees, Adv. Neural Inform. Process. Systems, 8 (1996)
[9] Drucker, H.; Schapire, R.; Simard, P., Boosting performance in neural networks, Int. J. Pattern Recognition Artif. Intell., 7, 705-719 (1993)
[11] Freund, Y., Boosting a weak learning algorithm by majority, Inform. and Comput., 121, 256-285 (1995) · Zbl 0833.68109
[14] Hannan, J., Approximation to Bayes risk in repeated play, Contributions to the Theory of Games (1957), Princeton Univ. Press, Princeton, p. 97-139 · Zbl 0078.32804
[15] Haussler, D.; Kivinen, J.; Warmuth, M. K., Tight worst-case loss bounds for predicting with expert advice, Computational Learning Theory: Second European Conference, EuroCOLT ’95 (1995), Springer-Verlag, New York/Berlin, p. 69-83
[16] Jackson, J. C.; Craven, M. W., Learning sparse perceptrons, Adv. Neural Inform. Process. Systems, 8 (1996)
[18] Kearns, M. J.; Vazirani, U. V., An Introduction to Computational Learning Theory (1994), MIT Press, Cambridge
[19] Kivinen, J.; Warmuth, M. K., Using experts for predicting continuous outcomes, Computational Learning Theory: EuroCOLT ’93 (1994), Springer-Verlag, New York/Berlin, p. 109-120
[20] Littlestone, N.; Warmuth, M. K., The weighted majority algorithm, Inform. and Comput., 108, 212-261 (1994) · Zbl 0804.68121
[22] Schapire, R. E., The strength of weak learnability, Machine Learning, 5, 197-227 (1990)
[23] Vapnik, V. N., Estimation of Dependences Based on Empirical Data (1982), Springer-Verlag, New York/Berlin · Zbl 0499.62005
[26] Wenocur, R. S.; Dudley, R. M., Some special Vapnik-Chervonenkis classes, Discrete Math., 33, 313-318 (1981) · Zbl 0459.60008