Comments on “Data science, big data and statistics”. (English) Zbl 1428.62017

Comment on [P. Galeano and D. Peña, ibid. 28, No. 2, 289–329 (2019; Zbl 1428.62021)].

MSC:

62A01 Foundations and philosophical topics in statistics
68T05 Learning and adaptive systems in artificial intelligence
62M10 Time series, auto-correlation, regression, etc. in statistics (GARCH)
62H12 Estimation in multivariate analysis
62R07 Statistical aspects of big data and data science

Citations:

Zbl 1428.62021

Software:

breakDown; live; DALEX

References:

[1] Baehrens D, Schroeter T, Harmeling S, Kawanabe M, Hansen K, Müller KR (2010) How to explain individual classification decisions. J Mach Learn Res 11(Jun):1803-1831 · Zbl 1242.62049
[2] Biecek P (2018) DALEX: explainers for complex predictive models in R. arXiv:1806.08915v2 · Zbl 07008340
[3] Biran O, Cotton C (2017) Explanation and justification in machine learning: a survey. In: Proceedings of The IJCAI-17 workshop on Explainable AI (XAI), pp 8-13
[4] Breiman L (2001) Statistical modeling: the two cultures. Stat Sci 16:199-231 · Zbl 1059.62505
[5] Datta A, Sen S, Zick Y (2016) Algorithmic transparency via quantitative input influence: theory and experiments with learning systems. In: 2016 IEEE symposium on security and privacy (SP). IEEE, pp 598-617
[6] Ferrando P (2018) Lighting the black box: explaining individual predictions of machine learning algorithms. Master Thesis, MESIO UPC-UB. Advisors: Belanche, L. and Delicado, P. http://hdl.handle.net/2117/113463. Accessed 13 Dec 2018
[7] Fisher A, Rudin C, Dominici F (2018) All models are wrong but many are useful: Variable importance for black-box, proprietary, or misspecified prediction models, using model class reliance. arXiv:1801.01489v3
[8] Gregorutti B, Michel B, Saint-Pierre P (2015) Grouped variable importance with random forests and application to multiple functional data analysis. Comput Stat Data Anal 90:15-35 · Zbl 1468.62069
[9] Gregorutti B, Michel B, Saint-Pierre P (2017) Correlation and variable importance in random forests. Stat Comput 27(3):659-678 · Zbl 1505.62167
[10] Štrumbelj E, Kononenko I (2010) An efficient explanation of individual classifications using game theory. J Mach Learn Res 11(Jan):1-18 · Zbl 1242.68250
[11] van der Laan MJ (2006) Statistical inference for variable importance. Int J Biostat. https://doi.org/10.2202/1557-4679.1008
[12] Miller T (2019) Explanation in artificial intelligence: insights from the social sciences. Artif Intell 267:1-38. https://doi.org/10.1016/j.artint.2018.07.007 · Zbl 1478.68274
[13] Nott G (2017) Explainable artificial intelligence: cracking open the black box of AI. Computer World. https://www.computerworld.com.au/article/617359. Accessed 12 Dec 2018
[14] Ribeiro MT, Singh S, Guestrin C (2016) Why should I trust you?: explaining the predictions of any classifier. In: Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining. ACM, pp 1135-1144
[15] Staniak M, Biecek P (2018) Explanations of model predictions with live and breakDown packages. arXiv:1804.01955
[16] Wikipedia contributors (2018) Explainable artificial intelligence. Wikipedia, the free encyclopedia. https://en.wikipedia.org/wiki/Explainable_Artificial_Intelligence. Accessed 12 Dec 2018