
Extremely randomized trees. (English) Zbl 1110.68124

Summary: This paper proposes a new tree-based ensemble method for supervised classification and regression problems. It essentially consists of strongly randomizing both the attribute and the cut-point choice when splitting a tree node. In the extreme case, it builds totally randomized trees whose structures are independent of the output values of the learning sample. The strength of the randomization can be tuned to problem specifics by the appropriate choice of a parameter. We evaluate the robustness of the default choice of this parameter, and we also provide insight into how to adjust it in particular situations. Besides accuracy, the main strength of the resulting algorithm is computational efficiency. A bias/variance analysis of the Extra-Trees algorithm is also provided, as well as a geometrical and a kernel characterization of the models induced.
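For illustration only (not part of the original entry): the sketch below shows, in Python, the node-splitting scheme the summary describes: draw a small number of candidate attributes at random, draw one cut-point uniformly per attribute rather than optimizing it, and keep the best-scoring candidate. The function names, the parameter name `k`, and the plain variance-reduction score (a common regression choice; the paper uses normalized scores) are choices made for this sketch, not the authors' reference implementation.

```python
import random


def score_split(y_left, y_right):
    """Variance reduction of a candidate split (regression setting).

    One plausible score; the published algorithm uses normalized
    variants for regression and classification.
    """
    def var(ys):
        if not ys:
            return 0.0
        m = sum(ys) / len(ys)
        return sum((v - m) ** 2 for v in ys) / len(ys)

    n = len(y_left) + len(y_right)
    total = var(y_left + y_right)
    weighted = (len(y_left) * var(y_left) + len(y_right) * var(y_right)) / n
    return total - weighted


def extra_trees_split(X, y, k, rng=random):
    """Choose a split the Extra-Trees way.

    Pick k attributes at random, draw one cut-point uniformly between the
    attribute's min and max in this node, and keep the best-scoring pair.
    With k == 1 the single candidate is accepted regardless of its score,
    so the tree structure is independent of y (totally randomized trees).
    """
    n_attrs = len(X[0])
    candidates = rng.sample(range(n_attrs), min(k, n_attrs))
    best = None
    for a in candidates:
        values = [row[a] for row in X]
        lo, hi = min(values), max(values)
        if lo == hi:  # attribute constant in this node: no valid cut-point
            continue
        cut = rng.uniform(lo, hi)  # random cut-point, never optimized
        left = [y[i] for i, row in enumerate(X) if row[a] < cut]
        right = [y[i] for i, row in enumerate(X) if row[a] >= cut]
        s = score_split(left, right)
        if best is None or s > best[0]:
            best = (s, a, cut)
    return best  # (score, attribute index, cut-point) or None


if __name__ == "__main__":
    # Tiny usage example on synthetic regression data.
    rng = random.Random(0)
    X = [[rng.random(), rng.random()] for _ in range(20)]
    y = [row[0] + 0.1 * rng.random() for row in X]
    print(extra_trees_split(X, y, k=2, rng=rng))
```

Off-the-shelf implementations of this scheme are available, for instance as ExtraTreesClassifier and ExtraTreesRegressor in scikit-learn.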

MSC:

68T05 Learning and adaptive systems in artificial intelligence
