[1] Baum, E. B.; Haussler, D.: What size net gives valid generalization?. Adv. Neural Inform. Process. Systems 1, 81-90 (1989)

[2] Blackwell, D.: An analog of the minimax theorem for vector payoffs. Pacific J. Math. 6, 1-8 (1956) · Zbl 0074.34403

[3] Breiman, L.: Bias, variance, and arcing classifiers. Technical report, Statistics Department, University of California, Berkeley (1996)

[4] Cesa-Bianchi, N.; Freund, Y.; Helmbold, D. P.; Haussler, D.; Schapire, R. E.; Warmuth, M. K.: How to use expert advice. Proceedings of the Twenty-Fifth Annual ACM Symposium on the Theory of Computing, 382-391 (1993) · Zbl 1310.68177

[5] Chung, T. H.: Approximate methods for sequential decision making using expert advice. Proceedings of the Seventh Annual ACM Conference on Computational Learning Theory, 183-189 (1994)

[6] Cover, T. M.: Universal portfolios. Math. Finance 1, 1-29 (1991) · Zbl 0900.90052

[7] Dietterich, T. G.; Bakiri, G.: Solving multiclass learning problems via error-correcting output codes. J. Artif. Intell. Res. 2, 263-286 (1995) · Zbl 0900.68358

[8] Drucker, H.; Cortes, C.: Boosting decision trees. Adv. Neural Inform. Process. Systems 8 (1996)

[9] Drucker, H.; Schapire, R.; Simard, P.: Boosting performance in neural networks. Int. J. Pattern Recognition Artif. Intell. 7, 705-719 (1993)

[10] Freund, Y.: Data Filtering and Distribution Modeling Algorithms for Machine Learning. Ph.D. thesis, University of California at Santa Cruz (1993)

[11] Freund, Y.: Boosting a weak learning algorithm by majority. Inform. and Comput. 121, 256-285 (1995) · Zbl 0833.68109

[12] Freund, Y.; Schapire, R. E.: Experiments with a new boosting algorithm. Machine Learning: Proceedings of the Thirteenth International Conference, 148-156 (1996)

[13] Freund, Y.; Schapire, R. E.: Game theory, on-line prediction and boosting. Proceedings of the Ninth Annual Conference on Computational Learning Theory, 325-332 (1996)

[14] Hannan, J.: Approximation to Bayes risk in repeated play. (1957) · Zbl 0078.32804

[15] Haussler, D.; Kivinen, J.; Warmuth, M. K.: Tight worst-case loss bounds for predicting with expert advice. (1995)

[16] Jackson, J. C.; Craven, M. W.: Learning sparse perceptrons. Adv. Neural Inform. Process. Systems 8 (1996)

[17] Kearns, M.; Mansour, Y.; Ng, A. Y.; Ron, D.: An experimental and theoretical comparison of model selection methods. Proceedings of the Eighth Annual Conference on Computational Learning Theory (1995)

[18] Kearns, M. J.; Vazirani, U. V.: An introduction to computational learning theory. (1994)

[19] Kivinen, J.; Warmuth, M. K.: Using experts for predicting continuous outcomes. (1994)

[20] Littlestone, N.; Warmuth, M. K.: The weighted majority algorithm. Inform. and Comput. 108, 212-261 (1994) · Zbl 0804.68121

[21] Quinlan, J. R.: Bagging, boosting, and C4.5. Proceedings of the Thirteenth National Conference on Artificial Intelligence (1996)

[22] Schapire, R. E.: The strength of weak learnability. Machine Learning 5, 197-227 (1990)

[23] Vapnik, V. N.: Estimation of dependences based on empirical data. (1982) · Zbl 0499.62005

[24] Vovk, V. G.: A game of prediction with expert advice. Proceedings of the Eighth Annual Conference on Computational Learning Theory (1995) · Zbl 0945.68528

[25] Vovk, V. G.: Aggregating strategies. Proceedings of the Third Annual Workshop on Computational Learning Theory, 321-383 (1990)

[26] Wenocur, R. S.; Dudley, R. M.: Some special Vapnik-Chervonenkis classes. Discrete Mathematics 33, 313-318 (1981) · Zbl 0459.60008