zbMATH — the first resource for mathematics

A decision-theoretic generalization of on-line learning and an application to boosting. (English) Zbl 0880.68103
Summary: In the first part of the paper we consider the problem of dynamically apportioning resources among a set of options in a worst-case on-line framework. The model we study can be interpreted as a broad, abstract extension of the well-studied on-line prediction model to a general decision-theoretic setting. We show that the multiplicative weight-update Littlestone-Warmuth rule can be adapted to this model, yielding bounds that are slightly weaker in some cases, but applicable to a considerably more general class of learning problems. We show how the resulting learning algorithm can be applied to a variety of problems, including gambling, multiple-outcome prediction, repeated games, and prediction of points in $\mathbb{R}^n$. In the second part of the paper we apply the multiplicative weight-update technique to derive a new boosting algorithm. This boosting algorithm does not require any prior knowledge about the performance of the weak learning algorithm. We also study generalizations of the new boosting algorithm to the problem of learning functions whose range, rather than being binary, is an arbitrary finite set or a bounded segment of the real line.
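The two techniques the summary describes, a Hedge-style multiplicative weight-update rule over a set of options and the boosting algorithm derived from it, can be sketched roughly as follows. This is a minimal illustration, not the paper's pseudocode: the function names, the threshold-stump weak learner, and the toy 1-D data are assumptions made for the demo.

```python
import math

def hedge(loss_rounds, n, beta=0.5):
    """Multiplicative weight-update sketch for n options/experts.

    loss_rounds: iterable of loss vectors, each of length n, entries in [0, 1].
    Returns the algorithm's cumulative expected loss.
    """
    w = [1.0] * n                      # one weight per option
    total = 0.0
    for losses in loss_rounds:
        s = sum(w)
        p = [wi / s for wi in w]       # play the normalized weight vector
        total += sum(pi * li for pi, li in zip(p, losses))
        # Shrink each weight by beta raised to that option's loss.
        w = [wi * beta ** li for wi, li in zip(w, losses)]
    return total

def adaboost(X, y, rounds=10):
    """Boosting with threshold-stump weak learners on 1-D data, y in {-1, +1}."""
    n = len(X)
    D = [1.0 / n] * n                  # distribution over training examples
    hyps = []                          # (threshold, sign, vote weight alpha)
    for _ in range(rounds):
        # Weak learner: exhaustively pick the stump with least weighted error.
        err, t, s = min(
            (sum(D[i] for i in range(n)
                 if (s if X[i] >= t else -s) != y[i]), t, s)
            for t in sorted(set(X)) for s in (1, -1))
        if err >= 0.5:
            break                      # no better-than-random stump remains
        err = max(err, 1e-10)          # avoid log(0) on a perfect stump
        alpha = 0.5 * math.log((1 - err) / err)
        hyps.append((t, s, alpha))
        # Reweight: raise the weight of misclassified examples, renormalize.
        D = [D[i] * math.exp(-alpha * y[i] * (s if X[i] >= t else -s))
             for i in range(n)]
        Z = sum(D)
        D = [d / Z for d in D]
    def H(x):                          # weighted-majority combined hypothesis
        v = sum(a * (s if x >= t else -s) for t, s, a in hyps)
        return 1 if v >= 0 else -1
    return H
```

Note how the reweighting step in `adaboost` is the same multiplicative idea as in `hedge`: each round, the distribution is multiplied by an exponential of the per-example loss, so hard examples accumulate weight and the weak learner is forced to focus on them.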

MSC:
68T05 Learning and adaptive systems
Software:
AdaBoost.MH
Full Text: DOI
References:
[1] Baum, E. B.; Haussler, D.: What size net gives valid generalization?. Adv. neural inform. Process. systems I, 81-90 (1989)
[2] Blackwell, D.: An analog of the minimax theorem for vector payoffs. Pacific J. Math. 6, 1-8 (Spring 1956) · Zbl 0074.34403
[3] L. Breiman, 1996, Bias, variance, and arcing classifiers, Statistics Dept. University of California
[4] N. Cesa-Bianchi, Y. Freund, D. P. Helmhold, D. Haussler, R. E. Schapire, M. K. Warmuth, How to use expert advice, Proceedings of the Twenty-Fifth Annual ACM Symposium on the Theory of Computing, 1993, 382, 391 · Zbl 1310.68177
[5] T. H. Chung, Approximate methods for sequential decision making using expert advice, Proceedings of the Seventh Annual ACM Conference on Computational Learning Theory, 1994, 183, 189
[6] Cover, T. M.: Universal portfolios. Math. finance 1, 1-29 (Jan. 1991) · Zbl 0900.90052
[7] Dietterich, T. G.; Bakiri, G.: Solving multiclass learning problems via error-correcting output codes. J. artif. Intell. res. 2, 263-286 (January 1995) · Zbl 0900.68358
[8] Drucker, H.; Cortes, C.: Boosting decision trees. Adv. neural inform. Process. systems 8 (1996)
[9] Drucker, H.; Schapire, R.; Simard, P.: Boosting performance in neural networks. Int. J. Pattern recognition artif. Intell. 7, 705-719 (1993)
[10] Y. Freund, 1993, Data Filtering and Distribution Modeling Algorithms for Machine Learning, University of California at Santa Cruz
[11] Freund, Y.: Boosting a weak learning algorithm by majority. Inform. and comput. 121, 256-285 (September 1995) · Zbl 0833.68109
[12] Y. Freund, R. E. Schapire, Experiments with a new boosting algorithm, Machine Learning: Proceedings of the Thirteenth International Conference, 1996, 148, 156
[13] Y. Freund, R. E. Schapire, Game theory, on-line prediction and boosting, Proceedings of the Ninth Annual Conference on Computational Learning Theory, 1996, 325, 332
[14] Hannan, J.: Approximation to Bayes risk in repeated play. (1957) · Zbl 0078.32804
[15] Haussler, D.; Kivinen, J.; Warmuth, M. K.: Tight worst-case loss bounds for predicting with expert advice. (1995)
[16] Jackson, J. C.; Craven, M. W.: Learning sparse perceptrons. Adv. neural inform. Process. systems 8 (1996)
[17] M. Kearns, Y. Mansour, A. Y. Ng, D. Ron, An experimental and theoretical comparison of model selection methods, Proceedings of the Eighth Annual Conference on Computational Learning Theory, 1995
[18] Kearns, M. J.; Vazirani, U. V.: An introduction to computational learning theory. (1994)
[19] Kivinen, J.; Warmuth, M. K.: Using experts for predicting continuous outcomes. (1994)
[20] Littlestone, N.; Warmuth, M. K.: The weighted majority algorithm. Inform. and comput. 108, 212-261 (1994) · Zbl 0804.68121
[21] J. R. Quinlan, Bagging, boosting, and C4.5, Proceedings, Fourteenth National Conference on Artificial Intelligence, 1996
[22] Schapire, R. E.: The strength of weak learnability. Machine learning 5, 197-227 (1990)
[23] Vapnik, V. N.: Estimation of dependences based on empirical data. (1982) · Zbl 0499.62005
[24] V. G. Vovk, A game of prediction with expert advice, Proceedings of the Eighth Annual Conference on Computational Learning Theory, 1995 · Zbl 0945.68528
[25] V. G. Vovk, Aggregating strategies, Proceedings of the Third Annual Workshop on Computational Learning Theory, 1990, 321, 383
[26] Wenocur, R. S.; Dudley, R. M.: Some special Vapnik-Chervonenkis classes. Discrete Mathematics 33, 313-318 (1981) · Zbl 0459.60008