zbMATH — the first resource for mathematics

BEST: a decision tree algorithm that handles missing values. (English) Zbl 07255801
Summary: The main contribution of this paper is the development of a new decision tree algorithm. The proposed approach allows users to guide the algorithm through the data partitioning process. We believe this feature has many applications, but in this paper we demonstrate how to use it to analyse data sets containing missing values. We tested our algorithm on simulated data sets with various missing-data structures and on a real data set. The results demonstrate that this new classification procedure handles missing values efficiently and produces results that are slightly more accurate and more interpretable than those of the most common procedures, without any imputation or pre-processing.
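The summary above describes a tree that partitions data without imputing missing values. As a rough illustration of the general idea (this is a minimal sketch of one common fallback rule, not the BEST algorithm's own partitioning strategy), a split-evaluation routine can simply route rows with a missing value down the larger child when scoring a candidate split:

```python
def gini(labels):
    """Gini impurity of a collection of class labels."""
    n = len(labels)
    if n == 0:
        return 0.0
    counts = {}
    for y in labels:
        counts[y] = counts.get(y, 0) + 1
    return 1.0 - sum((c / n) ** 2 for c in counts.values())


def split_impurity(X, y, feature, threshold):
    """Weighted Gini impurity of splitting on X[i][feature] <= threshold.

    Rows whose value is None (missing) follow the more populated child.
    This majority-branch rule is an assumption made for this sketch only.
    """
    left, right, missing = [], [], []
    for row, label in zip(X, y):
        value = row[feature]
        if value is None:
            missing.append(label)
        elif value <= threshold:
            left.append(label)
        else:
            right.append(label)
    # Send missing-value rows down the more populated branch.
    (left if len(left) >= len(right) else right).extend(missing)
    n = len(y)
    return (len(left) / n) * gini(left) + (len(right) / n) * gini(right)


# Toy data: one feature, one missing entry, classes separable at 3.0.
X = [[1.0], [2.0], [None], [5.0], [6.0]]
y = [0, 0, 0, 1, 1]
print(split_impurity(X, y, feature=0, threshold=3.0))  # 0.0: a pure split
```

A tree grower would evaluate this score over candidate thresholds and recurse on the children; the paper's contribution is precisely in letting the user guide how that partitioning treats the missing rows, rather than hard-coding a rule like the one above.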
65C60 Computational problems in statistics (MSC2010)
C4.5; C50; ggplot2; MICE; R; rpart; sinaplot
Full Text: DOI
[1] Bailey, MA; Rosenthal, JS; Yoon, AH, Grades and incentives: assessing competing grade point average measures and postgraduate outcomes, Stud High Educ, 41, 9, 1548-1562 (2016)
[2] Beaulac C, Rosenthal JS (2018) Predicting University Students’ Academic Success and Choice of Major using Random Forests. ArXiv e-prints
[3] Breiman, L., Bagging predictors, Mach Learn, 24, 2, 123-140 (1996) · Zbl 0858.68080
[4] Breiman, L., Random forests, Mach Learn, 45, 1, 5-32 (2001) · Zbl 1007.68152
[5] Breiman, L.; Friedman, J.; Olshen, R.; Stone, C., Classification and regression trees (1984), Monterey: Wadsworth and Brooks, Monterey · Zbl 0541.62042
[6] Ding, Y.; Simonoff, JS, An investigation of missing data methods for classification trees applied to binary response data, J Mach Learn Res, 11, 131-170 (2010) · Zbl 1242.62052
[7] Feelders AJ (1999) Handling missing data in trees: surrogate splits or statistical imputation. In: PKDD
[8] Friedman J, Kohavi R, Yun Y (1997) Lazy decision trees
[9] Gavankar S, Sawarkar S (2015) Decision tree: review of techniques for missing values at training, testing and compatibility. In: 2015 3rd international conference on artificial intelligence, modelling and simulation (AIMS), pp 122-126. 10.1109/AIMS.2015.29
[10] Geurts, P.; Ernst, D.; Wehenkel, L., Extremely randomized trees, Mach Learn, 63, 1, 3-42 (2006) · Zbl 1110.68124
[11] Hastie, T.; Tibshirani, R.; Friedman, J., The elements of statistical learning (2009), Berlin: Springer, Berlin
[12] Hothorn, T.; Hornik, K.; Zeileis, A., Unbiased recursive partitioning: a conditional inference framework, J Comput Graph Stat, 15, 3, 651-674 (2006)
[13] Kim, H.; Loh, WY, Classification trees with unbiased multiway splits, J Am Stat Assoc, 96, 589-604 (2001)
[14] Kuhn M, Quinlan R (2018) C50: C5.0 decision trees and rule-based models. https://CRAN.R-project.org/package=C50. R package version 0.1.2
[15] Little, RJA; Rubin, DB, Statistical analysis with missing data (2002), Hoboken: Wiley, Hoboken
[16] Quinlan, JR, C4.5: programs for machine learning (1993), San Francisco: Morgan Kaufmann Publishers Inc., San Francisco
[17] R Core Team (2018) R: a language and environment for statistical computing. R Foundation for Statistical Computing, Vienna. https://www.R-project.org/
[18] Rahman, MG; Islam, MZ, Missing value imputation using decision trees and decision forests by splitting and merging records: two novel techniques, Knowl Based Syst, 53, 51-65 (2013)
[19] Rubin, DB, Inference and missing data, Biometrika, 63, 3, 581-592 (1976) · Zbl 0344.62034
[20] Saar-Tsechansky, M.; Provost, F., Handling missing values when applying classification models, J Mach Learn Res, 8, 1623-1657 (2007) · Zbl 1222.68295
[21] Schafer, JL; Olsen, MK, Multiple imputation for multivariate missing-data problems: a data analyst’s perspective, Multivar Behav Res, 33, 545-571 (2000)
[22] Seaman, S.; Galati, J.; Jackson, D.; Carlin, J., What is meant by “missing at random”?, Stat Sci, 28, 2, 257-268 (2013) · Zbl 1331.62036
[23] Shalev-Shwartz, S.; Ben-David, S., Understanding machine learning: from theory to algorithms (2014), New York: Cambridge University Press, New York · Zbl 1305.68005
[24] Sidiropoulos, N.; Sohi, SH; Rapin, N.; Bagger, FO, Sinaplot: an enhanced chart for simple and truthful representation of single observations over multiple classes, bioRxiv (2015)
[25] Strobl, C.; Boulesteix, AL; Zeileis, A.; Hothorn, T., Bias in random forest variable importance measures: illustrations, sources and a solution, BMC Bioinform, 8, 1, 25 (2007)
[26] Therneau T, Atkinson B (2018) rpart: recursive partitioning and regression trees. https://CRAN.R-project.org/package=rpart. R package version 4.1-13
[27] Tierney, NJ; Harden, FA; Harden, MJ; Mengersen, KL, Using decision trees to understand structure in missing data, BMJ Open (2015)
[28] Twala, B., An empirical comparison of techniques for handling incomplete data using decision trees, Appl Artif Intell, 23, 5, 373-405 (2009)
[29] Twala, B.; Jones, M.; Hand, D., Good methods for coping with missing data in decision trees, Pattern Recognit Lett, 29, 950-956 (2008)
[30] van Buuren, S.; Groothuis-Oudshoorn, K., mice: multivariate imputation by chained equations in R, J Stat Softw, 45, 3, 1-67 (2011)
[31] Wickham, H., ggplot2: elegant graphics for data analysis (2016), New York: Springer, New York · Zbl 1397.62006
This reference list is based on information provided by the publisher or from digital mathematics libraries. Its items are heuristically matched to zbMATH identifiers and may contain data conversion errors. It attempts to reflect the references listed in the original paper as accurately as possible without claiming the completeness or perfect precision of the matching.