Enhancing techniques for learning decision trees from imbalanced data. (English) Zbl 1459.68167

Summary: Many machine learning techniques assume that the classes under consideration contain approximately equal numbers of objects. In real-world applications, however, the class of interest is generally scarce. Class imbalance may still allow most standard learning algorithms to achieve high overall accuracy, but it poses a real challenge for accuracy on the minority class. To address this issue, we introduce in this paper a novel adaptation of the decision tree algorithm to imbalanced data. A new asymmetric entropy measure is proposed: it shifts the most uncertain class distribution to the a priori class distribution and uses it in the node-splitting process. Unlike most competing split criteria, whose formulas fix only the point of maximum uncertainty, the proposed entropy offers an adjustable concavity that can be tuned to the expectations of the system. Experimental results across thirty-five data sets with varying degrees of class imbalance show significant improvements over various split criteria adapted to imbalanced situations. Furthermore, when combined with sampling strategies and ensemble-based methods, our entropy yields significant improvements in minority-class prediction, along with good handling of the data difficulties related to the class imbalance problem.
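
The proposed entropy itself is not reproduced on this page, but the underlying idea of moving the point of maximum uncertainty away from the uniform distribution can be illustrated with the earlier asymmetric entropy of Marcellin, Zighed and Ritschard, \(h_\theta(p) = \sum_j p_j(1-p_j)/\big((1-2\theta_j)p_j + \theta_j^2\big)\), which peaks when \(p = \theta\), the a priori class distribution. The Python sketch below, with illustrative names not taken from the paper, uses that measure as a split criterion alongside Shannon entropy:

```python
import numpy as np

def shannon_entropy(p):
    """Classical entropy: maximal uncertainty at the uniform distribution."""
    p = np.asarray(p, dtype=float)
    nz = p > 0
    return -np.sum(p[nz] * np.log2(p[nz]))

def asymmetric_entropy(p, theta):
    """Asymmetric entropy in the style of Marcellin et al.: each term equals 1
    when p_j == theta_j, so uncertainty peaks at the a priori distribution."""
    p = np.asarray(p, dtype=float)
    theta = np.asarray(theta, dtype=float)
    return float(np.sum(p * (1.0 - p) / ((1.0 - 2.0 * theta) * p + theta**2)))

def split_gain(parent, children, theta, criterion=asymmetric_entropy):
    """Impurity reduction of a candidate split; `parent` and each element of
    `children` are class-count vectors (one count per class)."""
    n = parent.sum()
    before = criterion(parent / n, theta)
    after = sum(c.sum() / n * criterion(c / c.sum(), theta) for c in children)
    return before - after

theta = np.array([0.9, 0.1])           # a priori class distribution
parent = np.array([90, 10])            # node matching the prior exactly
children = [np.array([70, 2]), np.array([20, 8])]  # candidate binary split

print(split_gain(parent, children, theta))                    # ~0.31
print(split_gain(parent, children, theta,
                 criterion=lambda p, _: shannon_entropy(p)))  # ~0.10
```

In this toy example the parent node sits exactly at the prior \((0.9, 0.1)\), so the asymmetric entropy assigns it maximal impurity, whereas Shannon entropy already considers it fairly pure; off-centring the maximum in this way is what keeps splits that isolate minority examples rewarded under skewed priors.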

MSC:

68T05 Learning and adaptive systems in artificial intelligence
62H30 Classification and discrimination; cluster analysis (statistical aspects)
