
An evolutionary algorithm for automated machine learning focusing on classifier ensembles: an improved algorithm and extended results. (English) Zbl 1436.68315

Summary: A large number of classification algorithms have been proposed in the machine learning literature. These algorithms have different pros and cons, and no single algorithm is the best for all datasets. Hence, choosing the best classification algorithm, with its best hyper-parameter settings, for a given input dataset is a challenging problem. In the last few years, Automated Machine Learning (Auto-ML) has emerged as a promising approach for tackling this problem by performing a heuristic search in a large space of candidate classification algorithms and their hyper-parameter settings. In this work we propose an improved version of our previous Evolutionary Algorithm (EA), more precisely an Estimation of Distribution Algorithm, for the Auto-ML task of automatically selecting the best classifier ensemble and its best hyper-parameter settings for an input dataset. The new version of this EA was compared against its previous version, as well as against a random forest algorithm (a strong ensemble algorithm) and a version of the well-known Auto-ML method Auto-WEKA adapted to search the same space of classifier ensembles as the proposed EA. In experiments with 21 datasets, the new EA version obtained, in general, the best results among all methods in terms of four popular predictive accuracy measures: error rate, precision, recall and F-measure.
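To make the search strategy concrete, below is a minimal sketch of a PBIL-style Estimation of Distribution Algorithm that selects a voting ensemble from a pool of base classifiers, in the spirit of the approach summarized above. It assumes scikit-learn base learners and a one-bit-per-classifier encoding; the candidate pool, population size and learning rate are illustrative assumptions, and the per-classifier hyper-parameter settings the paper also searches over are omitted for brevity. This is a sketch of the general technique, not the authors' actual algorithm.

# A minimal PBIL-style Estimation of Distribution Algorithm for picking a
# classifier ensemble. Illustrative sketch only: the one-bit-per-classifier
# encoding and all constants below are assumptions, not the paper's design.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X, y = load_breast_cancer(return_X_y=True)

# Pool of candidate base classifiers; the real search space would also
# cover each classifier's hyper-parameter settings.
candidates = [
    ("nb", GaussianNB()),
    ("knn", KNeighborsClassifier()),
    ("tree", DecisionTreeClassifier(random_state=0)),
    ("lr", LogisticRegression(max_iter=1000)),
]

def fitness(bits):
    """Cross-validated accuracy (i.e. 1 - error rate) of the encoded ensemble."""
    chosen = [cand for bit, cand in zip(bits, candidates) if bit]
    if not chosen:
        return 0.0  # an empty ensemble is invalid
    model = chosen[0][1] if len(chosen) == 1 else VotingClassifier(chosen)
    return cross_val_score(model, X, y, cv=3).mean()

# PBIL: maintain a probability vector over the bits, sample a small
# population from it each generation, and shift the vector towards the
# best individual found in that generation.
prob = np.full(len(candidates), 0.5)
learning_rate = 0.3
for generation in range(10):
    population = rng.random((8, len(candidates))) < prob
    scores = [fitness(individual) for individual in population]
    best = population[int(np.argmax(scores))]
    prob = (1 - learning_rate) * prob + learning_rate * best
    print(f"gen {generation}: best accuracy {max(scores):.4f}")

The probability vector gradually concentrates on the subset of classifiers whose ensemble scores best under cross-validation; learning an explicit distribution over solutions, rather than applying crossover and mutation to individuals, is the essential feature distinguishing an Estimation of Distribution Algorithm from a conventional genetic algorithm.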

MSC:

68T05 Learning and adaptive systems in artificial intelligence
62H30 Classification and discrimination; cluster analysis (statistical aspects)
68T20 Problem solving in the context of artificial intelligence (heuristics, search strategies, etc.)
68W50 Evolutionary algorithms, genetic algorithms (computational aspects)

References:

[1] Zhou, Z.-H., Ensemble Methods: Foundations and Algorithms (2012), Chapman & Hall/CRC
[2] Kuncheva, L. I., Classifier ensembles for changing environments, (Roli, F.; Kittler, J.; Windeatt, T., Multiple Classifier Systems, Lecture Notes in Computer Science, vol. 3077 (2004), Springer), 1-15
[3] Kuncheva, L. I., Combining Pattern Classifiers: Methods and Algorithms (2004), Wiley-Interscience · Zbl 1066.68114
[4] Thornton, C.; Hutter, F.; Hoos, H. H.; Leyton-Brown, K., Auto-WEKA: combined selection and hyperparameter optimization of classification algorithms, (Proc. 19th ACM SIGKDD Int. Conf. on Knowledge Discovery and Data Mining (2013), ACM Press), 847-855
[5] Hall, M.; Frank, E.; Holmes, G.; Pfahringer, B.; Reutemann, P.; Witten, I., The WEKA data mining software: an update, ACM SIGKDD Explor. Newsl., 11, 1, 10-18 (2009)
[6] Feurer, M.; Klein, A.; Eggensperger, K.; Springenberg, J.; Blum, M.; Hutter, F., Efficient and robust automated machine learning, Adv. Neural Inf. Process. Syst., 28, 2962-2970 (2015)
[7] Larrañaga, P.; Lozano, J. A., Estimation of Distribution Algorithms: A New Tool for Evolutionary Computation (2002), Kluwer: Kluwer Boston, MA · Zbl 0979.00024
[8] Baluja, S.; Caruana, R., Removing the genetics from the standard genetic algorithm, (Proc. 12th Int. Conf. on Machine Learning, California, July 1995 (1995)), 38-46
[9] Fernández-Delgado, M.; Cernadas, E.; Barro, S.; Amorim, D., Do we need hundreds of classifiers to solve real world classification problems?, J. Mach. Learn. Res., 15, 1, 3133-3181 (2014) · Zbl 1319.62005
[10] Brazdil, P.; Giraud-Carrier, C.; Soares, C.; Vilalta, R., Metalearning: Applications to Data Mining (2009), Springer · Zbl 1173.68625
[11] Kotthoff, L.; Thornton, C.; Hoos, H. H.; Hutter, F.; Leyton-Brown, K., Auto-WEKA 2.0: automatic model selection and hyperparameter optimization in WEKA, J. Mach. Learn. Res., 18, 1, 826-830 (2017) · Zbl 06781346
[12] Freitas, A. A., Data Mining and Knowledge Discovery with Evolutionary Algorithms (2002), Springer · Zbl 1013.68075
[13] Eiben, A. E.; Smith, J. E., Introduction to Evolutionary Computing (2015), Springer · Zbl 1327.68003
[14] Inza, I.; Larrañaga, P.; Sierra, B., Feature subset selection by estimation of distribution algorithms, (Estimation of Distribution Algorithms (2002), Springer), 269-293 · Zbl 1018.68065
[15] Shelke, K.; Jayaraman, S.; Ghosh, S.; Valadi, J., Hybrid feature selection and peptide binding affinity prediction using an EDA based algorithm, (Proc. IEEE Congress on Evolutionary Computation (CEC) (2013)), 2384-2389
[16] Zangari, M.; Santana, R.; Mendiburu, A.; Pozo, A. T.R., Not all PBILs are the same: unveiling the different learning mechanisms of PBIL variants, Appl. Soft Comput., 53, 88-96 (2017)
[17] Yang, X.; Dong, H.; Zhang, H., Naive Bayes based on estimation of distribution algorithms for classification, (International Conference on Information Science and Engineering (2009)), 908-911
[18] Saeys, Y.; Degroeve, S.; Aeyels, D.; Rouzé, P.; Van de Peer, Y., Feature selection for splice site prediction: a new method using EDA-based feature ranking, BMC Bioinform., 5, 64 (2004), 11 pages
[19] Kordík, P.; Černý, J.; Frýda, T., Discovering predictive ensembles for transfer learning and meta-learning, Mach. Learn., 107, 1, 177-207 (2018) · Zbl 06855216
[20] Kotsiantis, S. B., Supervised machine learning: a review of classification techniques, (Emerging Artificial Intelligence Applications in Computer Engineering (2007), IOS Press), 3-24 · Zbl 1162.68552
[21] Wistuba, M.; Schilling, N.; Schmidt-Thieme, L., Automatic frankensteining: creating complex ensembles autonomously, (Proc. SIAM Int. Conf. on Data Mining (2017), SIAM), 741-749
[22] Lévesque, J.; Gagné, C.; Sabourin, R., Bayesian hyper-parameter optimization for ensemble learning, (Proc. 32nd Conference on Uncertainty in Artificial Intelligence (UAI), Jersey City, New Jersey, USA (2016)), 437-446
[23] Lacoste, A.; Larochelle, H.; Laviolette, F.; Marchand, M., Sequential model-based ensemble optimization, Computing Research Repository (CoRR) (2014)
[24] Olson, R.; Urbanowicz, R.; Andrews, P.; Lavender, N.; Kidd, L.; Moore, J. H., Automating biomedical data science through tree-based pipeline optimization, (European Conference on the Applications of Evolutionary Computation (2016), Springer), 123-137
[25] de Sá, A. G.C.; Pappa, G. L.; Freitas, A. A., Automated selection and configuration of multi-label classification algorithms with grammar-based genetic programming, (Proc. 15th International Conf. on Parallel Problem Solving from Nature (PPSN 2018), Coimbra, Portugal (2018)), in press
[26] de Sá, A. G.C.; Pinto, W. J.G. S.; Oliveira, L. O.V. B.; Pappa, G. L., RECIPE: a grammar-based framework for automatically evolving classification pipelines, (Proc. 20th European Conference on Genetic Programming (EuroGP'17), LNCS, vol. 10196 (2017), Springer), 246-261
[27] Demšar, J., Statistical comparisons of classifiers over multiple data sets, J. Mach. Learn. Res., 7, 1-30 (2006) · Zbl 1222.68184
[28] Xavier-Júnior, J. C.; Freitas, A. A.; Feitosa-Neto, A.; Ludermir, T. B., A novel evolutionary algorithm for automated machine learning focusing on classifier ensembles, (7th Brazilian Conference on Intelligent Systems (BRACIS 2018) (2018)), 462-467
[29] Rokach, L., Ensemble-based classifiers, Artif. Intell. Rev., 33, 1-2, 1-39 (2010)
[30] Kotsiantis, S. B., Bagging and boosting variants for handling classification problems: a survey, Knowl. Eng. Rev., 29, 1, 78-100 (2014)
[31] Sagi, O.; Rokach, L., Ensemble learning: a survey, Wiley Interdiscip. Rev. Data Min. Knowl. Discov., 8, 4, e1249 (2018)