
Ensemble learning using multi-objective evolutionary algorithms. (English) Zbl 1114.68054

Summary: The use of multi-objective evolutionary algorithms for the construction of neural ensembles is a relatively new area of research. We recently proposed an ensemble learning algorithm called DIVACE (DIVerse and ACcurate Ensemble learning algorithm). It was shown that DIVACE tries to find an optimal trade-off between diversity and accuracy as it searches for an ensemble for a given pattern recognition task, treating the two objectives as explicitly separate. A detailed discussion of DIVACE, together with further experimental studies, forms the essence of this paper. A new diversity measure, which we call Pairwise Failure Crediting (PFC), is proposed. This measure forms one of the two evolutionary pressures exerted explicitly in DIVACE. Experiments with this diversity measure, as well as comparisons with previously studied approaches, are then presented. Detailed analysis of the results shows that DIVACE, as a concept, has promise.
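The paper gives no code, but the two-objective idea behind DIVACE can be illustrated with a small sketch: each ensemble member is scored on accuracy and on a pairwise failure-crediting style diversity measure (a member earns credit for failing only where its peers succeed), and the Pareto-nondominated members are retained. All function names and the exact crediting formula below are illustrative assumptions, not the paper's formulation.

```python
# Illustrative sketch (not the paper's exact PFC formula): score ensemble
# members on accuracy and a pairwise failure-crediting diversity measure,
# then keep the Pareto-nondominated members.

def accuracy(fails):
    # fails[i] is True where this member misclassifies test pattern i
    return 1.0 - sum(fails) / len(fails)

def pfc_diversity(member_fails, all_fails):
    # Credit a member for failing where its peers succeed: average, over
    # its failures, the fraction of other members that got the pattern
    # right.  Unique mistakes score high; shared mistakes score low.
    others = [f for f in all_fails if f is not member_fails]
    credits = [sum(1 for f in others if not f[i]) / len(others)
               for i, failed in enumerate(member_fails) if failed]
    return sum(credits) / len(credits) if credits else 0.0

def dominates(a, b):
    # a dominates b if it is no worse on every objective, better on one
    return (all(x >= y for x, y in zip(a, b))
            and any(x > y for x, y in zip(a, b)))

def pareto_front(scores):
    # Indices of members not dominated by any other member
    return [i for i, s in enumerate(scores)
            if not any(dominates(t, s)
                       for j, t in enumerate(scores) if j != i)]

# Toy failure patterns of three members over six test patterns
fails = [
    [True,  True,  False, False, False, False],   # fails on 0, 1
    [False, False, True,  False, False, False],   # fails on 2 only
    [True,  True,  True,  True,  False, False],   # fails on 0-3
]
scores = [(accuracy(f), pfc_diversity(f, fails)) for f in fails]
front = pareto_front(scores)
```

Here member 0 is dominated (member 1 is more accurate and no less diverse), while members 1 and 2 represent the accuracy/diversity trade-off and both survive on the front.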

MSC:

68T05 Learning and adaptive systems in artificial intelligence
68Q32 Computational learning theory
68Q10 Modes of computation (nondeterministic, parallel, interactive, probabilistic, etc.)
68W05 Nonnumerical algorithms

Software:

UCI-ml

References:

[1] Abbass, H. A.: A memetic pareto evolutionary approach to artificial neural networks, in Proceedings of the 14th Australian Joint Conference on Artificial Intelligence, Springer-Verlag, Berlin Heidelberg New York, 2001, pp. 1–12. · Zbl 1052.68645
[2] Abbass, H. A.: The self-adaptive pareto differential evolution algorithm, in D. B. Fogel, M.A. El-Sharkawi, X. Yao, G. Greenwood, H. Iba, P. Marrow and M. Shackleton (eds.),Proceedings of the 2002 Congress on Evolutionary Computation CEC2002, IEEE, 2002, pp. 831–836.
[3] Abbass, H. A.: Pareto neuro-evolution: Constructing ensemble of neural networks using multi-objective optimization, in The IEEE 2003 Conference on Evolutionary Computation, Vol. 3, IEEE, 2003, pp. 2074–2080.
[4] Abbass, H. A.: Speeding up backpropagation using multiobjective evolutionary algorithms, Neural Comput. 15(11) (November 2003), 2705–2726. · Zbl 1085.68608 · doi:10.1162/089976603322385126
[5] Abbass, H. A.: Pareto neuro-ensemble, in 16th Australian Joint Conference on Artificial Intelligence, Perth, Australia, Springer, 2003, pp. 554–566. · Zbl 1205.68276
[6] Abbass, H. A. and Deb, K.: Searching under multi-evolutionary pressures, in Proceedings of the 2003 Evolutionary Multiobjective Optimization Conference (EMO03), LNCS-2632, Springer, Berlin, Heidelberg, New York, 2003, pp. 391–404. · Zbl 1036.90514
[7] Abbass, H. A., Sarker, R. and Newton, C.: PDE: A pareto-frontier differential evolution approach for multi-objective optimization problems, in Proceedings of the IEEE Congress on Evolutionary Computation (CEC2001), Vol. 2, IEEE, 2001, pp. 971–978.
[8] Baldwin, J.: A new factor in evolution, Am. Nat. 30 (1896), 441–451. · JFM 27.0198.02 · doi:10.1086/276408
[9] Blake, C. and Merz, C.: UCI repository of machine learning databases, 1998.
[10] Boers, E., Borst, M. and Sprinkhuizen-Kuyper, I.: Evolving artificial neural networks using the "Baldwin effect," Technical Report 95–14, Leiden University, Department of Computer Science, The Netherlands, 1995.
[11] Brown, G.: Diversity in Neural Network Ensembles, PhD thesis, School of Computer Science, University of Birmingham, 2004.
[12] Brown, G., Wyatt, J., Harris, R. and Yao, X.: Diversity creation methods: A survey and categorisation, Inform. Fusion 6(1) (March 2005), 5–20. · Zbl 05422909 · doi:10.1016/j.inffus.2004.04.004
[13] Brown, G. and Wyatt, J. L.: Negative correlation learning and the ambiguity family of ensemble methods, in Proc. Int. Workshop on Multiple Classifier Systems (LNCS 2709), Springer, Guildford, Surrey, June 2003, pp. 266–275. · Zbl 1040.68596
[14] Brown, G. and Wyatt, J. L.: The use of the ambiguity decomposition in neural network ensemble learning methods, in T. Fawcett and N. Mishra (eds.), 20th International Conference on Machine Learning (ICML'03), Washington DC, USA, August 2003. · Zbl 1040.68596
[15] Chandra, A.: Evolutionary approach to tackling the trade-off between diversity and accuracy in neural network ensembles, Technical report, School of Computer Science, The University of Birmingham, UK, 2004.
[16] Chandra, A. and Yao, X.: DIVACE: Diverse and accurate ensemble learning algorithm, in Proc. 5th Intl. Conference on Intelligent Data Engineering and Automated Learning (LNCS 3177), Berlin, Heidelberg, New York, Springer, August 2004, pp. 619–625.
[17] Chandra, A. and Yao, X.: Evolutionary framework for the construction of diverse hybrid ensembles, in Proc. 13th European Symposium on Artificial Neural Networks, d-side, Brugge, Belgium, April 2005, pp. 253–258.
[18] Darwen, P. J.: Co-Evolutionary Learning by Automatic Modularisation with Speciation, PhD thesis, University of New South Wales, November 1996.
[19] Darwen, P. J. and Yao, X.: A dilemma for fitness sharing with a scaling function, in Proceedings of the Second IEEE International Conference on Evolutionary Computation, IEEE, Piscataway, New Jersey, 1995.
[20] Darwen, P. J. and Yao, X.: Automatic modularization by speciation, in IEEE International Conference on Evolutionary Computation, IEEE, May 1996, pp. 88–93.
[21] Darwen, P. J. and Yao, X.: Every niching method has its niche: Fitness sharing and implicit sharing compared, in Proc. of the 4th International Conference on Parallel Problem Solving from Nature (PPSN-IV), (LNCS-1141), Berlin, Heidelberg, New York, Springer, September 1996, pp. 398–407.
[22] Darwen, P. J. and Yao, X.: Speciation as automatic categorical modularization, IEEE Trans. Evol. Comput., 1(2) (1997), 100–108. · Zbl 05451957 · doi:10.1109/4235.687878
[23] Deb, K.: Multi-objective evolutionary algorithms: Introducing bias among pareto-optimal solutions, Technical Report 99002, Kanpur Genetic Algorithm Group, Department of Mechanical Engineering, Indian Institute of Technology, Kanpur, India, 1999.
[24] Deb, K.: Multi-Objective Optimization Using Evolutionary Algorithms, Wiley, Chichester, UK, 2001. · Zbl 0970.90091
[25] Deb, K., Agrawal, S., Pratab, A. and Meyarivan, T.: A fast elitist non-dominated sorting genetic algorithm for multi-objective optimization: NSGA-II, in M. Schoenauer, K. Deb, G. Rudolph, X. Yao, E. Lutton, J. J. Merelo and H.-P. Schwefel (eds.), Proceedings of the Parallel Problem Solving from Nature VI Conference, Paris, France, Springer, 2000, pp. 849–858. Lecture Notes in Computer Science No. 1917.
[26] Dietterich, T. G.: Machine-learning research: Four current directions, AI Mag. 18(4) (1998), 97–136.
[27] Forrest, S., Smith, R. E., Javornik, B. and Perelson, A. S.: Using genetic algorithms to explore pattern recognition in the immune system, Evol. Comput. 1(3) (1993), 191–211. · Zbl 05412788 · doi:10.1162/evco.1993.1.3.191
[28] Hansen, L. K. and Salamon, P.: Neural network ensembles, IEEE Trans. Pattern Anal. Mach. Intell. 12(10) (1990), 993–1001. · Zbl 05112503 · doi:10.1109/34.58871
[29] Horn, J., Nafpliotis, N. and Goldberg, D. E.: A niched Pareto genetic algorithm for multiobjective optimization, in Proceedings of the First IEEE Conference on Evolutionary Computation, IEEE World Congress on Computational Intelligence, Vol. 1, Piscataway, New Jersey, IEEE Service Center, 1994, pp. 82–87.
[30] Islam, M. M., Yao, X. and Murase, K.: A constructive algorithm for training cooperative neural network ensembles, IEEE Trans. Neural Netw. 14(4) (July 2003), 820–834. · doi:10.1109/TNN.2003.813832
[31] Khare, V. and Yao, X.: Artificial speciation and automatic modularisation, in L. Wang, K. C. Tan, T. Furuhashi, J.-H. Kim and X. Yao (eds.), Proceedings of the 4th Asia-Pacific Conference on Simulated Evolution And Learning (SEAL’02), number 1, Singapore, November 2002, pp. 56–60.
[32] Khare, V. and Yao, X.: Artificial speciation of neural network ensembles, in J. A. Bullinaria (ed.), Proc. of the 2002 UK Workshop on Computational Intelligence (UKCI’02), Birmingham, September 2002, pp. 96–103.
[33] Krogh, A. and Vedelsby, J.: Neural network ensembles, cross validation, and active learning, NIPS 7 (1995), 231–238.
[34] Langdon, W. B., Barrett, S. J. and Buxton, B. F.: Combining decision trees and neural networks for drug discovery, in Genetic Programming, Proceedings of the 5th European Conference, EuroGP 2002, Kinsale, Ireland, 3–5 April 2002, pp. 60–70. · Zbl 1077.68589
[35] Liu, Y. and Yao, X.: Ensemble learning via negative correlation, Neural Netw. 12(10) (1999), 1399–1404. · doi:10.1016/S0893-6080(99)00073-8
[36] Liu, Y., Yao, X. and Higuchi, T.: Evolutionary ensembles with negative correlation learning, IEEE Trans. Evol. Comput. 4(4) (November 2000), 380–387.
[37] Michie, D., Spiegelhalter, D. and Taylor, C.: Machine Learning, Neural and Statistical Classification, Ellis Horwood Limited, 1994. · Zbl 0827.68094
[38] Opitz, D.: Feature selection for ensembles, in Proceedings of 16th National Conference on Artificial Intelligence (AAAI), 1999, pp. 379–384.
[39] Opitz, D. and Maclin, R.: Popular ensemble methods: An empirical study, J. Artif. Intell. Res. 11 (1999), 169–198. · Zbl 0924.68159
[40] Opitz, D. W. and Shavlik, J. W.: Generating accurate and diverse members of a neural-network ensemble, NIPS 8 (1996), 535–541.
[41] Sharkey, A.: Multi-net systems, in A. Sharkey (ed.), Combining Artificial Neural Nets: Ensemble and Modular Multi-Net Systems, Springer, 1999, pp. 1–30. · Zbl 0927.68080
[42] Smith, R., Forrest, S. and Perelson, A.: Searching for diverse, cooperative populations with genetic algorithms, Evol. Comput. 1(2) (1993), 127–149. · Zbl 05412875 · doi:10.1162/evco.1993.1.2.127
[43] Srinivas, N. and Deb, K.: Multi-objective function optimization using non-dominated sorting genetic algorithms, Evol. Comput. 2(3) (1994), 221–248. · Zbl 05412883 · doi:10.1162/evco.1994.2.3.221
[44] Stanley, K. O. and Miikkulainen, R.: Evolving neural networks through augmenting topologies, Evol. Comput. 10(2) (2002), 99–127. · Zbl 05412887 · doi:10.1162/106365602320169811
[45] Storn, R. and Price, K.: Differential evolution – a simple and efficient adaptive scheme for global optimization over continuous spaces, Technical Report TR-95-012, International Computer Science Institute, Berkeley, USA, 1995. · Zbl 0888.90135
[46] Tumer, K. and Ghosh, J.: Analysis of decision boundaries in linearly combined neural classifiers, Pattern Recogn. 29(2) (February 1996), 341–348. · Zbl 05478139 · doi:10.1016/0031-3203(95)00085-2
[47] Yao, X.: Evolving artificial neural networks, in Proceedings of the IEEE, Vol. 87, No. 9, September 1999, pp. 1423–1447.
This reference list is based on information provided by the publisher or from digital mathematics libraries. Its items are heuristically matched to zbMATH identifiers and may contain data conversion errors. In some cases that data have been complemented/enhanced by data from zbMATH Open. This attempts to reflect the references listed in the original paper as accurately as possible without claiming completeness or a perfect matching.