Boosting a weak learning algorithm by majority. (English) Zbl 0833.68109

Summary: We present an algorithm for improving the accuracy of algorithms for learning binary concepts. The improvement is achieved by combining a large number of hypotheses, each of which is generated by training the given learning algorithm on a different set of examples. Our algorithm is based on ideas presented by Schapire and represents an improvement over his results. The analysis of our algorithm provides general upper bounds on the resources required for learning in Valiant’s polynomial PAC learning framework, which are the best general upper bounds known today.
We show that the number of hypotheses combined by our algorithm is the smallest possible. Our analysis also yields results on the representational power of threshold circuits, on the relation between learnability and compression, and a method for parallelizing PAC learning algorithms. We extend our algorithm to the case where the concepts are not binary and to the case where the accuracy of the learning algorithm depends on the distribution of the instances.
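The central idea described above, training a weak learner on many different sets of examples and combining the resulting hypotheses by majority vote, can be sketched as follows. This is an illustrative simplification, not the paper's exact boost-by-majority procedure (which additionally filters the distribution of examples between rounds); the noisy threshold concept, the decision-stump weak learner, and all parameter values here are assumptions made for the demonstration.

```python
# Simplified sketch of boosting by majority vote (illustrative names and
# parameters; the paper's boost-by-majority additionally filters the
# examples each weak hypothesis is trained on).
import random

random.seed(0)

def make_example(noise=0.2):
    # Binary concept: label 1 iff x0 + x1 > 1.  Label noise makes a
    # single threshold rule only a weak approximation of the concept.
    x = [random.random(), random.random()]
    y = 1 if x[0] + x[1] > 1 else 0
    if random.random() < noise:
        y = 1 - y
    return x, y

def train_stump(sample):
    # Weak learner: choose the single-feature threshold rule with the
    # lowest error on this particular training set.
    best = None
    for f in (0, 1):
        for t in (i / 10 for i in range(1, 10)):
            err = sum((x[f] > t) != (y == 1) for x, y in sample)
            if best is None or err < best[0]:
                best = (err, f, t)
    _, f, t = best
    return lambda x, f=f, t=t: 1 if x[f] > t else 0

# Train each weak hypothesis on a different random set of examples.
hypotheses = [train_stump([make_example() for _ in range(50)])
              for _ in range(25)]

def majority(x):
    # Final hypothesis: unweighted majority vote of the weak hypotheses.
    votes = sum(h(x) for h in hypotheses)
    return 1 if 2 * votes > len(hypotheses) else 0

# Compare one weak hypothesis against the majority vote on noise-free data.
test = [make_example(noise=0.0) for _ in range(2000)]
single_acc = sum(hypotheses[0](x) == y for x, y in test) / len(test)
vote_acc = sum(majority(x) == y for x, y in test) / len(test)
print(f"single weak hypothesis: {single_acc:.3f}, majority vote: {vote_acc:.3f}")
```

Each stump is only weakly correlated with the diagonal concept, but because the stumps are trained on independent samples their errors are partly independent, so the majority vote typically tracks the concept more closely than any single hypothesis.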


68T05 Learning and adaptive systems in artificial intelligence
Full Text: DOI