zbMATH — the first resource for mathematics

Appraisal of performance of three tree-based classification methods. (English) Zbl 1397.62215
Tez, Müjgan (ed.) et al., Trends and perspectives in linear statistical inference. Proceedings of the LINSTAT2016 meeting held 22–25 August 2016 in Istanbul, Turkey. Cham: Springer (ISBN 978-3-319-73240-4/hbk; 978-3-319-73241-1/ebook). Contributions to Statistics, 41-55 (2018).
Summary: Classification methods employ a variety of algorithms to improve performance in fields such as statistics, machine learning and computational analysis. This study reviews the traditional method, recursive partitioning, alongside two newer classification algorithms, the conditional inference tree and the evolutionary tree. Variations and improvements of these algorithms, data with or without missing values, and special applications are widely studied in this field. Although classification algorithms have been studied extensively and generally perform reasonably well, no single algorithm performs best in all settings. The classification methods under consideration are applied to a real dataset and their results are compared.
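The review does not reproduce the paper's comparison criteria, but standard measures for comparing binary classifiers include accuracy, sensitivity, specificity, and Cohen's kappa. A minimal, self-contained Python sketch computing these from a hypothetical 2×2 confusion matrix (the counts below are illustrative, not from the paper):

```python
def binary_agreement_stats(tp, fp, fn, tn):
    """Accuracy, sensitivity, specificity, and Cohen's kappa
    from a 2x2 confusion matrix (tp/fp/fn/tn are illustrative counts)."""
    n = tp + fp + fn + tn
    accuracy = (tp + tn) / n
    sensitivity = tp / (tp + fn)   # true positive rate
    specificity = tn / (tn + fp)   # true negative rate
    # Chance agreement for kappa: sum of products of marginal proportions.
    p_yes = ((tp + fp) / n) * ((tp + fn) / n)
    p_no = ((fn + tn) / n) * ((fp + tn) / n)
    p_e = p_yes + p_no
    kappa = (accuracy - p_e) / (1 - p_e)
    return accuracy, sensitivity, specificity, kappa


# Example with made-up counts: 40 true positives, 10 false positives,
# 5 false negatives, 45 true negatives.
acc, sens, spec, kappa = binary_agreement_stats(40, 10, 5, 45)
```

With these counts the observed agreement is 0.85 and the chance agreement is 0.5, giving kappa = 0.7; kappa corrects raw accuracy for agreement expected by chance, which is why it is preferred when class prevalences are unbalanced.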
For the entire collection see [Zbl 1400.62004].
Reviewer: Reviewer (Berlin)
MSC:
62H30 Classification and discrimination; cluster analysis (statistical aspects)
62P30 Applications of statistics in engineering and industry; control charts