zbMATH — the first resource for mathematics

A survey on the explainability of supervised machine learning. (English) Zbl 07299933
Summary: Predictions obtained by, e.g., artificial neural networks are often highly accurate, but humans tend to perceive the models as black boxes: insights into their decision making remain largely opaque. Understanding this decision making is of paramount importance, particularly in highly sensitive areas such as healthcare and finance, where black-box decision making needs to be made more transparent, accountable, and understandable for humans. This survey paper provides essential definitions and an overview of the different principles and methodologies of explainable Supervised Machine Learning (SML). We conduct a state-of-the-art survey that reviews past and recent explainable SML approaches and classifies them according to the introduced definitions. Finally, we illustrate the principles by means of an explanatory case study and discuss important future directions.
68Txx Artificial intelligence
[1] Abdollahi, B. & Nasraoui, O. (2016). Explainable restricted boltzmann machines for collaborative filtering.arXiv preprint arXiv:1606.07129.
[2] Abdollahi, B. & Nasraoui, O. (2017). Using explainability for constrained matrix factorization. InProceedings of the Eleventh ACM Conference on Recommender Systems(pp. 79-83).
[3] ACM (2017). Statement on algorithmic transparency and accountability.
[4] Adadi, A. & Berrada, M. (2018). Peeking inside the black-box: A survey on explainable artificial intelligence (xai).IEEE Access.
[5] Adler, P., Falk, C., Friedler, S., Rybeck, G., Scheidegger, C., Smith, B., & Venkatasubramanian, S. (2016). Auditing black-box models for indirect influence. InData Mining (ICDM), 2016 IEEE 16th International Conference on: IEEE.
[6] Amatriain, X. (2017). More data or better models?
[7] Andrews, R., Diederich, J., & Tickle, A. B. (1995). Survey and critique of techniques for extracting rules from trained artificial neural networks.Knowledge-based systems.
[8] Andrzejak, A., Langner, F., & Zabala, S. (2013). Interpretable models from distributed data via merging of decision trees. InComputational Intelligence and Data Mining (CIDM), 2013 IEEE Symposium on: IEEE.
[9] Angelov, P. & Soares, E. (2019). Towards explainable deep neural networks (xdnn).arXiv preprint arXiv:1912.02523.
[10] Askham, N., Cook, D., Doyle, M., Fereday, H., Gibson, M., Landbeck, U., Lee, R., Maynard, C., Palmerand, G., & Schwarzenbach, J. (2013). The six primary dimensions for data quality assessment.DAMA UK Working Group, (pp. 432-435).
[11] Augasta, M. G. & Kathirvalavakumar, T. (2012). Reverse engineering the neural networks for rule extraction in classification problems.Neural processing letters.
[12] Baehrens, D., Schroeter, T., Harmeling, S., Kawanabe, M., Hansen, K., & Müller, K.-R. (2010). How to explain individual classification decisions. Journal of Machine Learning Research. · Zbl 1242.62049
[13] Bahdanau, D., Cho, K., & Bengio, Y. (2014). Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473.
[14] Balestriero, R. (2017). Neural decision trees.arXiv preprint arXiv:1702.07360.
[15] Barakat, N. & Diederich, J. (2004). Learning-based rule-extraction from support vector machines. In The 14th International Conference on Computer Theory and Applications ICCTA'2004.
[16] Barakat, N. H. & Bradley, A. P. (2007). Rule extraction from support vector machines: A sequential covering approach.IEEE Transactions on Knowledge and Data Engineering.
[17] Barbella, D., Benzaid, S., Christensen, J. M., Jackson, B., Qin, X. V., & Musicant, D. (2009). Understanding support vector machine classifications via a recommender systemlike approach. InDMIN.
[18] Bastani, O., Kim, C., & Bastani, H. (2017). Interpreting blackbox models via model extraction.arXiv preprint arXiv:1705.08504.
[19] Bengio, Y. & Pearson, J. (2016). When ai goes wrong we won’t be able to ask it why.
[20] Berkson, J. (1953). A statistically precise and relatively simple method of estimating the bio-assay with quantal response, based on the logistic function.Journal of the American Statistical Association. · Zbl 0051.11005
[21] Bertsimas, D., Chang, A., & Rudin, C. (2011). Ordered rules for classification: A discrete optimization approach to associative classification. Submitted to The Annals of Statistics.
[22] Bhatt, U., Ravikumar, P., & Moura, J. M. F. (2019). Towards aggregating weighted feature attributions. arXiv preprint arXiv:1901.10040.
[23] Bien, J. & Tibshirani, R. (2009). Classification by set cover: The prototype vector machine. arXiv preprint arXiv:0908.2284.
[24] Bien, J. & Tibshirani, R. (2011). Prototype selection for interpretable classification.The Annals of Applied Statistics. · Zbl 1234.62096
[25] Biran, O. & Cotton, C. (2017). Explanation and Justification in Machine Learning: A survey. InIJCAI-17 Workshop on Explainable AI (XAI).
[26] Biran, O. & McKeown, K. R. (2017).Human-centric justification of machine learning predictions. InIJCAI.
[27] Biswas, S. K., Chakraborty, M., Purkayastha, B., Roy, P., & Thounaojam, D. M. (2017). Rule extraction from training data using neural network.International Journal on Artificial Intelligence Tools.
[28] Bohanec, M., Borštnar, M. K., & Robnik-Šikonja, M. (2017). Explaining machine learning models in sales predictions. Expert Systems with Applications.
[29] Bojars, U. & Breslin, J. G. (2020). Semantically-interlinked online communities.
[30] Boz, O. (2002). Extracting decision trees from trained neural networks. InProceedings of the eighth ACM SIGKDD international conference on Knowledge discovery and data mining: ACM.
[31] Breiman, L. (2017).Classification and regression trees. Routledge.
[32] Brickley, D. & Miller, L. (2020). The foaf project.
[33] Burkart, N., Huber, M. F., & Faller, P. (2019). Forcing interpretability for deep neural networks through rule-based regularization. In2019 18th IEEE International Conference On Machine Learning And Applications (ICMLA)(pp. 700-705).: IEEE.
[34] Byrum, J. (2017). The challenges for artificial intelligence in agriculture.
[35] Cambridge (2020). The Cambridge Dictionary of Psychology. Cambridge University Press.
[36] Caruana, R., Lou, Y., Gehrke, J., Koch, P., Sturm, M., & Elhadad, N. (2015). Intelligible models for healthcare: Predicting pneumonia risk and hospital 30-day readmission. In Proceedings of the 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining: ACM.
[37] Charniak, E. (1991). Bayesian networks without tears.AI magazine.
[38] Chen, D., Fraiberger, S. P., Moakler, R., & Provost, F. (2015). Enhancing transparency and control when drawing data-driven inferences about individuals.Proceedings of 2016 ICML Workshop on Human Interpretability in Machine Learning.
[39] Chen, J., Lécué, F., Pan, J. Z., Horrocks, I., & Chen, H. (2018). Knowledge-based transfer learning explanation. CoRR, abs/1807.08372.
[40] Chen, Y., Ouyang, L., Bao, S., Li, Q., Han, L., Zhang, H., Zhu, B., Xu, M., Liu, J., Ge, Y., et al. (2020). An interpretable machine learning framework for accurate severe vs non-severe covid-19 clinical type classification.medRxiv.
[41] Clark, P. & Niblett, T. (1989). The cn2 induction algorithm.Machine learning.
[42] Cleland, S. (2011). Google’s ’infringenovation’ secrets.
[43] Cohen, W. (1995). Fast effective rule induction. InMachine Learning Proceedings 1995. Elsevier.
[44] Confalonieri, R., del Prado, F. M., Agramunt, S., Malagarriga, D., Faggion, D., Weyde, T., & Besold, T. R. (2019). An ontology-based approach to explaining artificial neural networks.CoRR, abs/1906.08362.
[45] Cortez, P. & Embrechts, M. J. (2011). Opening black box data mining models using sensitivity analysis. InComputational Intelligence and Data Mining (CIDM), 2011 IEEE Symposium on: IEEE.
[46] Craven, M. & Shavlik, J. W. (1996). Extracting tree-structured representations of trained networks. InAdvances in neural information processing systems.
[47] Cui, Z., Chen, W., He, Y., & Chen, Y. (2015). Optimal action extraction for random forests and boosted trees. InProceedings of the 21th ACM SIGKDD international conference on knowledge discovery and data mining.
[48] Datta, A., Sen, S., & Zick, Y. (2016). Algorithmic transparency via quantitative input influence: Theory and experiments with learning systems. InSecurity and Privacy (SP), 2016 IEEE Symposium on: IEEE.
[49] Doan, A., Madhavan, J., Domingos, P., & Halevy, A. (2004).Ontology Matching: A Machine Learning Approach, (pp. 385-403). Springer Berlin Heidelberg: Berlin, Heidelberg.
[50] Doran, D., Schulz, S., & Besold, T. R. (2017). What does explainable ai really mean? a new conceptualization of perspectives.arXiv preprint arXiv:1710.00794.
[51] Doshi-Velez, F. & Kim, B. (2017). Towards a rigorous science of interpretable machine learning.arXiv preprint arXiv:1702.08608.
[52] Došilović, F. K., Brčić, M., & Hlupić, N. (2018). Explainable artificial intelligence: A survey. In 2018 41st International Convention on Information and Communication Technology, Electronics and Microelectronics (MIPRO).
[53] Dua, D. & Graff, C. (2017). UCI machine learning repository.
[54] Efron, B., Hastie, T., Johnstone, I., Tibshirani, R., et al. (2004). Least angle regression. The Annals of statistics. · Zbl 1091.62054
[55] El-Bekri, N., Kling, J., & Huber, M. F. (2019). A study on trust in black box models and post-hoc explanations. InInternational Workshop on Soft Computing Models in Industrial and Environmental Applications: Springer.
[56] Etchells, T. A. & Lisboa, P. J. G. (2006). Orthogonal search-based rule extraction (osre) for trained neural networks: a practical and efficient approach.IEEE transactions on neural networks.
[57] Europa.eu (2017). Official journal of the european union: Regulations.
[58] Bao, F. S., He, Y., Liu, J., Chen, Y., Li, Q., Zhang, C., Han, L., Zhu, B., Ge, Y., Chen, S., et al. (2020). Triaging moderate covid-19 and other viral pneumonias from routine blood tests. arXiv preprint arXiv:2005.06546.
[59] Fan, X., Liu, S., Chen, J., & Henderson, T. C. (2020). An investigation of covid-19 spreading factors with explainable ai techniques.arXiv preprint arXiv:2005.06612.
[60] Fischer, G., Mastaglio, T., Reeves, B., & Rieman, J. (1990). Minimalist explanations in knowledge-based systems. InTwenty-Third Annual Hawaii International Conference on System Sciences, volume 3 (pp. 309-317 vol.3).
[61] Fisher, A., Rudin, C., & Dominici, F. (2018). Model class reliance: Variable importance measures for any machine learning model class, from the "Rashomon" perspective. arXiv preprint arXiv:1801.01489.
[62] Freitas, A. (2014). Comprehensible classification models: a position paper.ACM SIGKDD explorations newsletter.
[63] Friedman, J. H., Popescu, B. E., et al. (2008). Predictive learning via rule ensembles.The Annals of Applied Statistics. · Zbl 1149.62051
[64] Friedman, N., Geiger, D., & Goldszmidt, M. (1997). Bayesian network classifiers.Machine learning. · Zbl 0892.68077
[65] Fu, L. (1994). Rule generation from neural networks.IEEE Transactions on Systems, Man, and Cybernetics.
[66] Fung, G., Sandilya, S., & Rao, R. B. (2008). Rule extraction from linear support vector machines via mathematical programming. InRule Extraction from Support Vector Machines. Springer. · Zbl 1148.68433
[67] Geng, Y., Chen, J., Jimenez-Ruiz, E., & Chen, H. (2019). Human-centric transfer learning explanation via knowledge graph [extended abstract].
[68] Gilpin, L. H., Bau, D., Yuan, B. Z., Bajwa, A., Specter, M., & Kagal, L. (2018). Explaining explanations: An overview of interpretability of machine learning. In2018 IEEE 5th International Conference on data science and advanced analytics (DSAA).
[69] Gkatzia, D., Lemon, O., & Rieser, V. (2016). Natural language generation enhances human decision-making with uncertain information.arXiv preprint arXiv:1606.03254.
[70] Goldstein, A., Kapelner, A., Bleich, J., & Pitkin, E. (2015). Peeking inside the black box: Visualizing statistical learning with plots of individual conditional expectation.Journal of Computational and Graphical Statistics.
[71] Goodman, B. & Flaxman, S. (2016). Eu regulations on algorithmic decision-making and a “right to explanation”. InICML workshop on human interpretability in machine learning (WHI 2016), New York, NY.
[72] Gruber, T. R. et al. (1993). A translation approach to portable ontology specifications. Knowledge acquisition, 5(2), 199-221.
[73] Gudivada, V., Apon, A., & Ding, J. (2017).Data quality considerations for big data and machine learning: Going beyond data cleaning and transformations.International Journal on Advances in Software, 10(1), 1-20.
[74] Guidotti, R., Monreale, A., Ruggieri, S., Pedreschi, D., Turini, F., & Giannotti, F. (2018a). Local rule-based explanations of black box decision systems.arXiv preprint arXiv:1805.10820.
[75] Guidotti, R., Monreale, A., Ruggieri, S., Turini, F., Giannotti, F., & Pedreschi, D. (2018b). A survey of methods for explaining black box models.ACM Comput. Surv.
[76] Gunning, D. (2017). Explainable artificial intelligence (xai).Defense Advanced Research Projects Agency (DARPA).
[77] Gurumoorthy, K. S., Dhurandhar, A., & Cecchi, G. (2017). Protodash: Fast interpretable prototype selection.arXiv preprint arXiv:1707.01212.
[78] Hall, P., Gill, N., Kurka, M., & Phan, W. (2017a). Machine learning interpretability with h2o driverless ai.H2O.ai.
[79] Hall, P., Phan, W., & Ambati, S. (2017b). Ideas on interpreting machine learning.
[80] Hara, S. & Hayashi, K. (2016).Making tree ensembles interpretable.arXiv preprint arXiv:1606.05390.
[81] Hayashi, Y. (2013). Neural network rule extraction by a new ensemble concept and its theoretical and historical background: A review.International Journal of Computational Intelligence and Applications.
[82] Helfert, M. & Ge, M. (2016). Big data quality-towards an explanation model in a smart city context. Inproceedings of 21st International Conference on Information Quality, Ciudad Real, Spain.
[83] Hendricks, L. A., Akata, Z., Rohrbach, M., Donahue, J., Schiele, B., & Darrell, T. (2016). Generating visual explanations. InEuropean Conference on Computer Vision: Springer.
[84] Henelius, A., Puolamäki, K., Boström, H., Asker, L., & Papapetrou, P. (2014). A peek into the black box: exploring classifiers by randomization. Data Mining and Knowledge Discovery.
[85] Henelius, A., Puolamäki, K., & Ukkonen, A. (2017). Interpreting classifiers through attribute interactions in datasets. In 2017 ICML Workshop on Human Interpretability in Machine Learning (WHI).
[86] Hepp, M. (2020). Good relations.
[87] Herman, B. (2017). The promise and peril of human evaluation for model interpretability. arXiv preprint arXiv:1711.07414.
[88] Hilton, D. J. (1990). Conversational processes and causal explanation.Psychological Bulletin.
[89] Hinton, G. & Frosst, N. (2017). Distilling a neural network into a soft decision tree. In Comprehensibility and Explanation in AI and ML (CEX), AI*IA.
[90] Hoehndorf, R. (2010). What is an upper level ontology?Ontogenesis.
[91] Hoffman, R., Mueller, S., Klein, G., & Litman, J. (2018).Metrics for explainable ai: Challenges and prospects.arXiv preprint arXiv:1812.04608.
[92] Holte, R. C. (1993). Very simple classification rules perform well on most commonly used datasets.Machine learning. · Zbl 0850.68278
[93] Holzinger, A., Kickmeier-Rust, M., & Müller, H. (2019a). Kandinsky patterns as IQ-test for machine learning. In International Cross-Domain Conference for Machine Learning and Knowledge Extraction (pp. 1-14). Springer.
[94] Holzinger, A., Langs, G., Denk, H., Zatloukal, K., & Müller, H. (2019b). Causability and explainability of artificial intelligence in medicine. Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery.
[95] Holzinger, A., Plass, M., Holzinger, K., Crisan, G. C., Pintea, C. M., & Palade, V. (2017). A glass-box interactive machine learning approach for solving np-hard problems with the human-in-the-loop.arXiv preprint arXiv:1708.01104. · Zbl 07238293
[96] Holzinger, A., Plass, M., Kickmeier-Rust, M., Holzinger, K., Crișan, G. C., Pintea, C. M., & Palade, V. (2019c). Interactive machine learning: experimental evidence for the human in the algorithmic loop. Applied Intelligence, 49(7), 2401-2414.
[97] Hoyt, R. E., Snider, D., Thompson, C., & Mantravadi, S. (2016). Ibm watson analytics: Automating visualization, descriptive, and predictive statistics.JMIR Public Health Surveill, 2(2), e157.
[98] Huysmans, J., Baesens, B., & Vanthienen, J. (2006). Iter: an algorithm for predictive regression rule extraction. InInternational Conference on Data Warehousing and Knowledge Discovery: Springer.
[99] Huysmans, J., Dejaeger, K., Mues, C., Vanthienen, J., & Baesens, B. (2011). An empirical evaluation of the comprehensibility of decision table, tree and rule based predictive models.Decision Support Systems.
[100] Jain, S. & Wallace, B. C. (2019).Attention is not explanation.arXiv preprint arXiv:1902.10186.
[101] Jiang, T. & Owen, A. B. (2002). Quasi-regression for visualization and interpretation of black box functions.
[102] Johansson, U., König, R., & Niklasson, L. (2004). The truth is in there - rule extraction from opaque models using genetic programming. In FLAIRS Conference: Miami Beach, FL.
[103] Kabra, M., Robie, A., & Branson, K. (2015). Understanding classifier errors by examining influential neighbors. InProceedings of the IEEE conference on computer vision and pattern recognition.
[104] Kaggle (2017). The state of data science and machine learning.
[105] Kamruzzaman, S. (2010). Rex: An efficient rule generator.arXiv preprint arXiv:1009.4988.
[106] Kass, R. & Finin, T. (1988). The Need for User Models in Generating Expert System Explanations.International Journal of Expert Systems, 1(4).
[107] Kim, B., Khanna, R., & Koyejo, O. O. (2016). Examples are not enough, learn to criticize! criticism for interpretability. InAdvances in Neural Information Processing Systems.
[108] Kim, B., Rudin, C., & Shah, J. A. (2014). The bayesian case model: A generative approach for case-based reasoning and prototype classification. InAdvances in Neural Information Processing Systems.
[109] Kim, B., Shah, J. A., & Doshi-Velez, F. (2015). Mind the gap: A generative approach to interpretable feature selection and extraction. InAdvances in Neural Information Processing Systems.
[110] Kittler, J. (1986). Feature selection and extraction.Handbook of Pattern Recognition and Image Processing.
[111] Koh, P. W. & Liang, P. (2017). Understanding black-box predictions via influence functions. arXiv preprint arXiv:1703.04730.
[112] Kramer, M. A. (1991). Nonlinear principal component analysis using autoassociative neural networks.AIChE journal, 37(2), 233-243.
[113] Krause, J., Perer, A., & Ng, K. (2016). Interacting with predictions: Visual inspection of black-box machine learning models. InProceedings of the 2016 CHI Conference on Human Factors in Computing Systems: ACM.
[114] Lage, I., Chen, E., He, J., Narayanan, M., Kim, B., Gershman, S., & Doshi-Velez, F. (2019).An evaluation of the human-interpretability of explanation.arXiv preprint arXiv:1902.00006.
[115] Lakkaraju, H., Bach, S. H., & Leskovec, J. (2016). Interpretable decision sets: A joint framework for description and prediction. InProceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining: ACM.
[116] Lakkaraju, H., Kamar, E., Caruana, R., & Leskovec, J. (2017). Interpretable & explorable approximations of black box models.arXiv preprint arXiv:1707.01154.
[117] Lakkaraju, H., Kamar, E., Caruana, R., & Leskovec, J. (2019). Faithful and customizable explanations of black box models. InProceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society.
[118] Lash, M. T., Lin, Q., Street, W. N., & Robinson, J. G. (2017). A budget-constrained inverse classification framework for smooth classifiers. In2017 IEEE International Conference on Data Mining Workshops (ICDMW).
[119] Laugel, T., Lesot, M. J., Marsala, C., Renard, X., & Detyniecki, M. (2017).Inverse classification for comparison-based interpretability in machine learning.arXiv preprint arXiv:1712.08443.
[120] Laugel, T., Renard, X., Lesot, M., Marsala, C., & Detyniecki, M. (2018). Defining locality for surrogates in post-hoc interpretablity.arXiv preprint arXiv:1806.07498.
[121] Lécué, F., Abeloos, B., Anctil, J., Bergeron, M., Dalla-Rosa, D., Corbeil-Letourneau, S., Martet, F., Pommellet, T., Salvan, L., Veilleux, S., & Ziaeefard, M. (2019). Thales xai platform: Adaptable explanation of machine learning systems - a knowledge graphs perspective. In ISWC Satellites.
[122] Lei, J., G’Sell, M., Rinaldo, A., Tibshirani, R. J., & Wasserman, L. (2018). Distribution-free predictive inference for regression.Journal of the American Statistical Association. · Zbl 1402.62155
[123] Lei, T., Barzilay, R., & Jaakkola, T. (2016). Rationalizing neural predictions.arXiv preprint arXiv:1606.04155.
[124] Lent, M. V., Fisher, W., & Mancuso, M. (2004). An explainable artificial intelligence system for small-unit tactical behavior. InProceedings of the national conference on artificial intelligence.
[125] Letham, B., Rudin, C., McCormick, T. H., & Madigan, D. (2012). Building interpretable classifiers with rules using bayesian analysis.Department of Statistics Technical Report tr609, University of Washington. · Zbl 1454.62348
[126] Letham, B., Rudin, C., McCormick, T. H., & Madigan, D. (2015). Interpretable classifiers using rules and bayesian analysis: Building a better stroke prediction model.The Annals of Applied Statistics. · Zbl 1454.62348
[127] Lipton, Z., Kale, D., & Wetzel, R. (2016). Modeling missing data in clinical time series with rnns.arXiv preprint arXiv:1606.04130.
[128] Lipton, Z. C. (2017). The doctor just won’t accept that!arXiv preprint arXiv:1711.08037.
[129] Lipton, Z. C. (2018). The mythos of model interpretability.Queue, 16(3), 31-57.
[130] Looveren, A. V. & Klaise, J. (2019). Interpretable counterfactual explanations guided by prototypes.arXiv preprint arXiv:1907.02584.
[131] Lou, Y., Caruana, R., & Gehrke, J. (2012). Intelligible models for classification and regression. InProceedings of the 18th ACM SIGKDD international conference on Knowledge discovery and data mining: ACM.
[132] Lou, Y., Caruana, R., Gehrke, J., & Hooker, G. (2013). Accurate intelligible models with pairwise interactions. InProceedings of the 19th ACM SIGKDD international conference on Knowledge discovery and data mining: ACM.
[133] Lu, H., Setiono, R., & Liu, H. (1995). Neurorule: A connectionist approach to data mining. InProceedings of the 21st VLDB Conference Zurich, Switzerland.
[134] Lundberg, S. M. & Lee, S. (2017). A unified approach to interpreting model predictions. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, & R. Garnett (Eds.),Advances in Neural Information Processing Systems 30(pp. 4765-4774). Curran Associates, Inc.
[135] Luong, M. T., Pham, H., & Manning, C. D. (2015). Effective approaches to attention-based neural machine translation.arXiv preprint arXiv:1508.04025.
[136] Maedche, A. & Staab, S. (2001). Ontology learning for the semantic web.IEEE Intelligent Systems, 16, 72-79.
[137] Mahajan, D., Tan, C., & Sharma, A. (2019). Preserving causal constraints in counterfactual explanations for machine learning classifiers.arXiv preprint arXiv:1912.03277.
[138] Malioutov, D. M., Varshney, K. R., Emad, A., & Dash, S. (2017). Learning interpretable classification rules with boolean compressed sensing. InTransparent Data Mining for Big and Small Data. Springer.
[139] Markowska-Kaczmar, U. & Chumieja, M. (2004). Discovering the mysteries of neural networks.International Journal of Hybrid Intelligent Systems. · Zbl 1089.68601
[140] Martens, D., Backer, M. D., Haesen, R., Vanthienen, J., Snoeck, M., & Baesens, B. (2007a). Classification with ant colony optimization.IEEE Transactions on Evolutionary Computation.
[141] Martens, D., Baesens, B., & Gestel, T. V. (2009). Decompositional rule extraction from support vector machines by active learning.IEEE Transactions on Knowledge and Data Engineering.
[142] Martens, D., Baesens, B., Gestel, T. V., & Vanthienen, J. (2007b). Comprehensible credit scoring models using rule extraction from support vector machines.European journal of operational research. · Zbl 1278.91177
[143] Martens, D., Huysmans, J., Setiono, R., Vanthienen, J., & Baesens, B. (2008). Rule extraction from support vector machines: an overview of issues and application in credit scoring.Rule extraction from support vector machines. · Zbl 1148.68439
[144] Martens, D. & Provost, F. (2014). Explaining data-driven document classifications.Mis Quarterly.
[145] Martens, D., Vanthienen, J., Verbeke, W., & Baesens, B. (2011). Performance of classification models from a user perspective.Decision Support Systems.
[146] Mashayekhi, M. & Gras, R. (2017). Rule extraction from decision trees ensembles: New algorithms based on heuristic search and sparse group lasso methods.International Journal of Information Technology & Decision Making.
[147] Massachusetts Institute of Technology (2017). The moral machine.
[148] McGuinness, D. L., Ding, L., da Silva, P., & Chang, C. (2007). Pml 2: A modular explanation interlingua. InExaCt.
[149] Mead, A. (1992). Review of the development of multidimensional scaling methods.Journal of the Royal Statistical Society: Series D (The Statistician), 41(1), 27-39.
[150] Meinshausen, N. (2010). Node harvest.The Annals of Applied Statistics.
[151] Melis, D. A. & Jaakkola, T. (2018). Towards robust interpretability with self-explaining neural networks. InAdvances in Neural Information Processing Systems.
[152] Miller, T. (2019). Explanation in artificial intelligence: Insights from the social sciences. Artificial Intelligence, 267, 1-38. · Zbl 07099170
[153] Mohammed, O., Benlamri, R., & Fong, S. (2012). Building a diseases symptoms ontology for medical diagnosis: An integrative approach. InThe First International Conference on Future Generation Communication Technologies.
[154] Molnar, C. (2018). A guide for making black box models explainable. URL: https://christophm.github.io/interpretable-ml-book.
[155] Montavon, G., Lapuschkin, S., Binder, A., Samek, W., & Müller, K. R. (2017). Explaining nonlinear classification decisions with deep taylor decomposition. Pattern Recognition.
[156] Montavon, G., Samek, W., & Müller, K. R. (2018). Methods for interpreting and understanding deep neural networks. Digital Signal Processing.
[157] Murdoch, J., Singh, C., Kumbier, K., Abbasi-Asl, R., & Yu, B. (2019). Interpretable machine learning: definitions, methods, and applications.arXiv preprint arXiv:1901.04592. · Zbl 1431.62266
[158] Navigli, R. & Velardi, P. (2004). Learning domain ontologies from document warehouses and dedicated web sites.Computational Linguistics, 30(2), 151-179. · Zbl 1234.68373
[159] Ninama, H. (2013). Ensemble approach for rule extraction in data mining.Golden Reaserch Thoughts.
[160] Odajima, K., Hayashi, Y., Tianxia, G., & Setiono, R. (2008). Greedy rule generation from discrete data and its use in neural network rule extraction.Neural Networks. · Zbl 1254.68217
[161] Otero, F. E. B. & Freitas, A. (2016). Improving the interpretability of classification rules discovered by an ant colony algorithm: Extended results.Evolutionary Computation.
[162] Panigutti, C., Perotti, A., & Pedreschi, D. (2020). Doctor xai: An ontology-based approach to black-box sequential data classification explanations. InProceedings of the 2020 Conference on Fairness, Accountability, and Transparency, FAT* ’20 (pp. 629-639). New York, NY, USA: Association for Computing Machinery.
[163] Park, D. H., Hendricks, L. A., Akata, Z., Schiele, B., Darrell, T., & Rohrbach, M. (2016). Attentive explanations: Justifying decisions and pointing to the evidence.arXiv preprint arXiv:1612.04757.
[164] Phillips, R. L., Chang, K. H., & Friedler, S. A. (2017). Interpretable active learning.arXiv preprint arXiv:1708.00049.
[165] Plumb, G., Molitor, D., & Talwalkar, A. S. (2018). Model agnostic supervised local explanations. InAdvances in Neural Information Processing Systems.
[166] Poursabzi-Sangdeh, F., Goldstein, D. G., Hofman, J. M., Vaughan, J. W., & Wallach, H. (2018).Manipulating and measuring model interpretability.arXiv preprint arXiv:1802.07810.
[167] Poyiadzi, R., Sokol, K., Santos-Rodriguez, R., De Bie, T., & Flach, P. (2020). Face: feasible and actionable counterfactual explanations. InProceedings of the AAAI/ACM Conference on AI, Ethics, and Society(pp. 344-350).
[168] Publio, G. C., Esteves, D., Lawrynowicz, A., ce Panov, P., Soldatova, L., Soru, T., Vanschoren, J., & Zafar, H. (2018). Ml-schema: Exposing the semantics of machine learning with schemas and ontologies.
[169] Quinlan, J. R. (1986). Induction of decision trees.Machine learning.
[170] Quinlan, J. R. (1996). Bagging, boosting, and c4.5. InAAAI/IAAI, Vol. 1.
[171] Quinlan, J. R. (2014).C4.5: programs for machine learning. Elsevier.
[172] Raimond, Y., Abdallah, S., Sandler, M., & Giasson, F. (2007). The music ontology. In Proceedings of the 8th International Conference on Music Information Retrieval (ISMIR).
[173] Raimond, Y., Abdallah, S., Sandler, M., & Giasson, F. (2020). The music ontology.
[174] Rezaul, K., Döhmen, T., Rebholz-Schuhmann, D., Decker, S., Cochez, M., & Beyan, O. (2020). Deepcovidexplainer: Explainable covid-19 predictions based on chest x-ray images. arXiv preprint.
[175] Ribeiro, M. T., Singh, S., & Guestrin, C. (2016a). Model-agnostic interpretability of machine learning.arXiv preprint arXiv:1606.05386.
[176] Ribeiro, M. T., Singh, S., & Guestrin, C. (2016b). Why should I trust you?: Explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining: ACM.
[178] Ribeiro, M. T., Singh, S., & Guestrin, C. (2018). Anchors: High-precision model-agnostic explanations. InProceedings of the Thirty-Second AAAI Conference on Artificial Intelligence (AAAI).
[179] Robnik-Šikonja, M. & Kononenko, I. (2008). Explaining classifications for individual instances. IEEE Transactions on Knowledge and Data Engineering.
[181] Rudin, C. (2018). Please stop explaining black box models for high stakes decisions.CoRR.
[182] Rüping, S. (2005). Learning with local models. In Local Pattern Detection (pp. 153-170).
[183] Rüping, S. (2006). Learning interpretable models. Doctoral Dissertation, University of Dortmund.
[184] Rush, A. M., Chopra, S., & Weston, J. (2015). A neural attention model for abstractive sentence summarization.arXiv preprint arXiv:1509.00685.
[185] Russell, C. (2019). Efficient search for diverse coherent explanations. InProceedings of the Conference on Fairness, Accountability, and Transparency(pp. 20-28).
[186] Saabas, A. (2015). Treeinterpreter.https://github.com/andosa/treeinterpreter.
[187] Samek, W., Montavon, G., Vedaldi, A., Hansen, L. K., & Müller, K., Eds. (2019). Explainable AI: Interpreting, Explaining and Visualizing Deep Learning. Springer.
[188] Samek, W., Wiegand, T., & Müller, K. R. (2017). Explainable artificial intelligence: Understanding, visualizing and interpreting deep learning models. arXiv preprint arXiv:1708.08296.
[189] Sarker, M. K., Xie, N., Doran, D., Raymer, M., & Hitzler, P. (2017). Explaining trained neural networks with semantic web technologies: First steps.
[190] Schaaf, N. & Huber, M. F. (2019). Enhancing decision tree based interpretation of deep neural networks through l1-orthogonal regularization. arXiv preprint arXiv:1904.05394.
[191] Schetinin, V., Fieldsend, J. E., Partridge, D., Coats, T. J., Krzanowski, W. J., Everson, R. M., Bailey, T. C., & Hernandez, A. (2007). Confident interpretation of bayesian decision tree ensembles for clinical applications. IEEE Transactions on Information Technology in Biomedicine. · Zbl 1269.91034
[192] Schmidt, P. & Biessmann, F. (2019). Quantifying interpretability and trust in machine learning systems. arXiv preprint arXiv:1901.08558.
[193] Schmitz, G., Aldrich, C., & Gouws, F. S. (1999). ANN-DT: An algorithm for extraction of decision trees from artificial neural networks. IEEE Transactions on Neural Networks.
[194] Selvaraju, R. R., Das, A., Vedantam, R., Cogswell, M., Parikh, D., & Batra, D. (2016). Grad-CAM: Why did you say that? Visual explanations from deep networks via gradient-based localization. arXiv preprint arXiv:1610.02391.
[195] Sestito, S. & Dillon, T. (1992). Automated knowledge acquisition of rules with continuously valued attributes. In Proceedings of the 12th International Conference on Expert Systems and Their Applications, 1992. · Zbl 0850.68301
[196] Sethi, K. K., Mishra, D. K., & Mishra, B. (2012). Extended taxonomy of rule extraction techniques and assessment of KDRuleEx. International Journal of Computer Applications.
[197] Setiono, R., Azcarraga, A., & Hayashi, Y. (2014). MofN rule extraction from neural networks trained with augmented discretized input. In Neural Networks (IJCNN), 2014 International Joint Conference on: IEEE.
[198] Setiono, R., Baesens, B., & Mues, C. (2008). Recursive neural network rule extraction for data with mixed attributes. IEEE Transactions on Neural Networks.
[199] Setiono, R. & Liu, H. (1997). NeuroLinear: From neural networks to oblique decision rules. Neurocomputing.
[200] Shapley, L. S. (1951). Notes on the n-Person Game-II: The Value of an n-Person Game. Technical report, U.S. Air Force, Project Rand.
[201] Shrikumar, A., Greenside, P., Shcherbina, A., & Kundaje, A. (2016). Not just a black box: Learning important features through propagating activation differences. In 33rd International Conference on Machine Learning.
[202] Si, Z. & Zhu, S. C. (2013). Learning and-or templates for object recognition and detection. IEEE Transactions on Pattern Analysis and Machine Intelligence.
[203] Smilkov, D., Thorat, N., Kim, B., Viégas, F., & Wattenberg, M. (2017). SmoothGrad: Removing noise by adding noise. arXiv preprint arXiv:1706.03825.
[204] Štrumbelj, E., Bosnić, Z., Kononenko, I., Zakotnik, B., & Kuhar, C. (2010). Explanation and reliability of prediction models: The case of breast cancer recurrence. Knowledge and Information Systems.
[205] Štrumbelj, E. & Kononenko, I. (2014). Explaining prediction models and individual predictions with feature contributions. Knowledge and Information Systems.
[206] Su, G., Wei, D., Varshney, K. R., & Malioutov, D. M. (2015). Interpretable two-level boolean rule learning for classification. arXiv preprint arXiv:1511.07361.
[207] Su, G., Wei, D., Varshney, K. R., & Malioutov, D. M. (2016). Learning sparse two-level boolean rules. In 2016 IEEE 26th International Workshop on Machine Learning for Signal Processing (MLSP): IEEE.
[208] Subianto, M. & Siebes, A. (2007). Understanding discrete classifiers with a case study in gene prediction. In Seventh IEEE International Conference on Data Mining 2007 (pp. 661-666): IEEE.
[209] Sundararajan, M., Taly, A., & Yan, Q. (2016). Gradients of counterfactuals. arXiv preprint arXiv:1611.02639.
[210] Swartout, W., Paris, C., & Moore, J. (1991). Explanations in knowledge systems: design for explainable expert systems.IEEE Expert, 6(3), 58-64.
[211] Taha, I. & Ghosh, J. (1996). Three techniques for extracting rules from feedforward networks. Intelligent Engineering Systems Through Artificial Neural Networks.
[212] Tibshirani, R. (1996). Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society: Series B (Methodological). · Zbl 0850.62538
[213] Tjoa, E. & Guan, C. (2019). A survey on explainable artificial intelligence (XAI): Towards medical XAI. arXiv preprint arXiv:1907.07374.
[214] Tolomei, G., Silvestri, F., Haines, A., & Lalmas, M. (2017). Interpretable predictions of tree-based ensembles via actionable feature tweaking. In Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining: ACM.
[215] Tsymbal, A., Zillner, S., & Huber, M. (2007). Ontology-Supported Machine Learning and Decision Support in Biomedicine. In International Conference on Data Integration in the Life Sciences, volume 4544 (pp. 156-171).
[216] Turner, R. (2016). A model explanation system. In 2016 IEEE 26th International Workshop on Machine Learning for Signal Processing (MLSP).
[217] Tversky, A. & Kahneman, D. (1974). Judgment under uncertainty: Heuristics and biases. Science.
[218] Tversky, A. & Kahneman, D. (1981). The framing of decisions and the psychology of choice. Science. · Zbl 1225.91017
[219] Ustun, B. & Rudin, C. (2014). Methods and models for interpretable linear classification. arXiv preprint arXiv:1405.4047.
[220] Ustun, B. & Rudin, C. (2016). Supersparse linear integer models for optimized medical scoring systems. Machine Learning. · Zbl 1406.62144
[221] Ustun, B. & Rudin, C. (2017). Optimized risk scores. In Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining: ACM. · Zbl 1440.68242
[222] van der Maaten, L. & Hinton, G. (2008). Visualizing data using t-SNE. Journal of Machine Learning Research, 9(Nov), 2579-2605. · Zbl 1225.68219
[223] Vedantam, R., Bengio, S., Murphy, K., Parikh, D., & Chechik, G. (2017). Context-aware captions from context-agnostic supervision. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition.
[224] Verbeke, W., Martens, D., Mues, C., & Baesens, B. (2011). Building comprehensible customer churn prediction models with advanced rule induction techniques. Expert Systems with Applications.
[225] Voosen, P. (2017). How AI detectives are cracking open the black box of deep learning. Science Magazine.
[226] W3C (2012a). Good Ontologies. W3C recommendation, W3C. https://www.w3.org/wiki/Good_Ontologies.
[227] W3C (2012b). OWL 2 Web Ontology Language Document Overview (Second Edition). W3C recommendation, W3C. https://www.w3.org/TR/2012/REC-owl2-overview-20121211/.
[228] W3C (2014). Resource Description Framework (RDF). W3C recommendation, W3C. https://www.w3.org/RDF/.
[229] Wachter, S., Mittelstadt, B., & Russell, C. (2018). Counterfactual explanations without opening the black box: Automated decisions and the GDPR. Harvard Journal of Law & Technology, 31(2).
[230] Wang, F. & Rudin, C. (2014). Falling rule lists. arXiv preprint arXiv:1411.5899.
[231] Wang, F. & Rudin, C. (2015). Falling rule lists. In 18th International Conference on Artificial Intelligence and Statistics (AISTATS).
[232] Wang, J., Fujimaki, R., & Motohashi, Y. (2015a). Trading interpretability for accuracy: Oblique treed sparse additive models. In Proceedings of the 21st ACM SIGKDD International Conference on Knowledge Discovery and Data Mining: ACM.
[233] Wang, R. Y. & Strong, D. M. (1996). Beyond accuracy: What data quality means to data consumers. Journal of Management Information Systems, 12(4), 5-33.
[234] Wang, T., Rudin, C., Doshi-Velez, F., Liu, Y., Klampfl, E., & MacNeille, P. (2015b). Or's of and's for interpretable classification, with application to context-aware recommender systems. arXiv preprint arXiv:1504.07614.
[235] Wang, T., Rudin, C., Velez-Doshi, F., Liu, Y., Klampfl, E., & MacNeille, P. (2016). Bayesian rule sets for interpretable classification. In Data Mining (ICDM), 2016 IEEE 16th International Conference on: IEEE. · Zbl 1434.68467
[236] Weiner, J. (1980). BLAH, a system which explains its reasoning. Artificial Intelligence, 15(1-2), 19-48.
[237] Weller, A. (2017). Challenges for transparency. arXiv preprint arXiv:1708.01870.
[238] West, J., Ventura, D., & Warnick, S. (2007). Spring research presentation: A theoretical foundation for inductive transfer. Retrieved 2007-08-05.
[239] Wiegreffe, S. & Pinter, Y. (2019). Attention is not not explanation. arXiv preprint arXiv:1908.04626.
[240] Wold, S., Esbensen, K., & Geladi, P. (1987). Principal component analysis. Chemometrics and Intelligent Laboratory Systems, 2(1-3), 37-52.
[241] Wong, W., Liu, W., & Bennamoun, M. (2011). Ontology learning from text: A look back and into the future. ACM Computing Surveys - CSUR, 44, 1-36. · Zbl 1293.68243
[242] Wu, M., Hughes, M. C., Parbhoo, S., Zazzi, M., Roth, V., & Doshi-Velez, F. (2018). Beyond sparsity: Tree regularization of deep models for interpretability. In Thirty-Second AAAI Conference on Artificial Intelligence.
[243] Xu, K., Ba, J., Kiros, R., Cho, K., Courville, A., Salakhutdinov, R., Zemel, R., & Bengio, Y. (2015a). Show, attend and tell: Neural image caption generation with visual attention. In International Conference on Machine Learning.
[244] Xu, N., Jiangping, W., Qi, G., Huang, T., & Lin, W. (2015b). Ontological random forests for image classification. International Journal of Information Retrieval Research, 5, 61-74.
[245] Yang, C., Rangarajan, A., & Ranka, S. (2018a). Global model interpretation via recursive partitioning. arXiv preprint arXiv:1802.04253.
[246] Yang, H., Rudin, C., & Seltzer, M. (2016). Scalable bayesian rule lists. Unpublished.
[247] Yang, Y., Morillo, I. G., & Hospedales, T. M. (2018b). Deep neural decision trees. arXiv preprint arXiv:1806.06988.
[248] Yin, M., Vaughan, J. W., & Wallach, H. (2019). Understanding the effect of accuracy on trust in machine learning models. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems: ACM.
[249] Yin, X. & Han, J. (2003). CPAR: Classification based on predictive association rules. In Proceedings of the 2003 SIAM International Conference on Data Mining: SIAM.
[250] Zhang, Y. & Chen, X. (2018). Explainable recommendation: A survey and new perspectives. arXiv preprint arXiv:1804.11192.
[251] Zhao, X., Wu, Y., Lee, D. L., & Cui, W. (2019). iForest: Interpreting random forests via visual analytics. IEEE Transactions on Visualization and Computer Graphics.
[252] Zhou, Z. H., Chen, S. F., & Chen, Z. Q. (2000). A statistics based approach for extracting priority rules from trained neural networks. In IJCNN: IEEE.
[253] Zhou, Z. H., Jiang, Y., & Chen, S. F. (2003). Extracting symbolic rules from trained neural network ensembles. AI Communications. · Zbl 1102.68609
[254] Zilke, E.
This reference list is based on information provided by the publisher or from digital mathematics libraries. Its items are heuristically matched to zbMATH identifiers and may contain data conversion errors. It attempts to reflect the references listed in the original paper as accurately as possible without claiming the completeness or perfect precision of the matching.