Explanation in artificial intelligence: insights from the social sciences. (English) Zbl 07099170
Summary: There has been a recent resurgence in the area of explainable artificial intelligence as researchers and practitioners seek to provide more transparency to their algorithms. Much of this research is focused on explicitly explaining decisions or actions to a human observer, and it should not be controversial to say that looking at how humans explain to each other can serve as a useful starting point for explanation in artificial intelligence. However, it is fair to say that most work in explainable artificial intelligence uses only the researchers’ intuition of what constitutes a ‘good’ explanation. There exist vast and valuable bodies of research in philosophy, psychology, and cognitive science on how people define, generate, select, evaluate, and present explanations, which argue that people apply certain cognitive biases and social expectations to the explanation process. This paper argues that the field of explainable artificial intelligence can build on this existing research, and reviews relevant papers from philosophy, cognitive psychology/science, and social psychology that study these topics. It draws out some important findings, and discusses ways that these can be infused into work on explainable artificial intelligence.

MSC:
68T Artificial intelligence
References:
[1] Allemang, D.; Tanner, M. C.; Bylander, T.; Josephson, J. R., Computational complexity of hypothesis assembly, (IJCAI, vol. 87, (1987)), 1112-1117
[2] Angwin, J.; Larson, J.; Mattu, S.; Kirchner, L., Machine bias, ProPublica, (23 May 2016)
[3] Antaki, C.; Leudar, I., Explaining in conversation: towards an argument model, Eur. J. Soc. Psychol., 22, 2, 181-194, (1992)
[4] Arioua, A.; Croitoru, M., Formalizing explanatory dialogues, (International Conference on Scalable Uncertainty Management, (2015), Springer), 282-297
[5] Aronson, J. L., On the grammar of ‘cause’, Synthese, 22, 3, 414-430, (1971)
[6] Baehrens, D.; Schroeter, T.; Harmeling, S.; Kawanabe, M.; Hansen, K.; Müller, K.-R., How to explain individual classification decisions, J. Mach. Learn. Res., 11, Jun, 1803-1831, (2010) · Zbl 1242.62049
[7] Bekele, E.; Lawson, W. E.; Horne, Z.; Khemlani, S., Human-level explanatory biases for person re-identification, (HRI Workshop on Explainable Robotic Systems, (2018))
[8] Besnard, P.; Hunter, A., Elements of Argumentation, vol. 47, (2008), MIT Press: MIT Press Cambridge
[9] Biran, O.; Cotton, C., Explanation and justification in machine learning: a survey, (IJCAI 2017 Workshop on Explainable Artificial Intelligence (XAI), (2017)), 8-13
[10] Boonzaier, A.; McClure, J.; Sutton, R. M., Distinguishing the effects of beliefs and preconditions: the folk psychology of goals and actions, Eur. J. Soc. Psychol., 35, 6, 725-740, (2005)
[11] Brafman, R. I.; Domshlak, C., From one to many: planning for loosely coupled multi-agent systems, (International Conference on Automated Planning and Scheduling, (2008)), 28-35
[12] Broekens, J.; Harbers, M.; Hindriks, K.; Van Den Bosch, K.; Jonker, C.; Meyer, J.-J., Do you get it? User-evaluated explainable BDI agents, (German Conference on Multiagent System Technologies, (2010), Springer), 28-39
[13] Bromberger, S., Why-questions, (Colodny, R. G., Mind and Cosmos: Essays in Contemporary Science and Philosophy, (1966), Pittsburgh University Press: Pittsburgh University Press Pittsburgh), 68-111
[14] Buchanan, B.; Shortliffe, E., Rule-based Expert Systems: The MYCIN Experiments of the Stanford Heuristic Programming Project, (1984), Addison-Wesley
[15] Burguet, A.; Hilton, D., Effets de contexte sur l’explication causale, (Bromberg, M.; Trognon, A., Psychologie Sociale et Communication, (2004), Dunod: Dunod Paris), 219-228
[16] Byrne, R. M., The construction of explanations, (AI and Cognitive Science’90, (1991), Springer), 337-351
[17] Cawsey, A., Generating interactive explanations, (AAAI, (1991)), 86-91
[18] Cawsey, A., Explanation and Interaction: The Computer Generation of Explanatory Dialogues, (1992), MIT Press
[19] Cawsey, A., Planning interactive explanations, Int. J. Man-Mach. Stud., 38, 2, 169-199, (1993)
[20] Cawsey, A., User modelling in interactive explanations, User Model. User-Adapt. Interact., 3, 221-247, (1993)
[21] Chakraborti, T.; Sreedharan, S.; Zhang, Y.; Kambhampati, S., Plan explanations as model reconciliation: moving beyond explanation as soliloquy, (Proceedings of IJCAI, (2017))
[22] Chan, K.; Lee, T.-W.; Sample, P. A.; Goldbaum, M. H.; Weinreb, R. N.; Sejnowski, T. J., Comparison of machine learning and traditional classifiers in glaucoma diagnosis, IEEE Trans. Biomed. Eng., 49, 9, 963-974, (2002)
[23] Chandrasekaran, B.; Tanner, M. C.; Josephson, J. R., Explaining control strategies in problem solving, IEEE Expert, 4, 1, 9-15, (1989)
[24] Charniak, E.; Goldman, R., A probabilistic model of plan recognition, (Proceedings of the Ninth National Conference on Artificial Intelligence—Volume 1, (1991), AAAI Press), 160-165
[25] Chen, J. Y.; Procci, K.; Boyce, M.; Wright, J.; Garcia, A.; Barnes, M., Situation Awareness-Based Agent Transparency, (2014), U.S. Army Research Laboratory, Tech. Rep. ARL-TR-6905
[26] Chevaleyre, Y.; Endriss, U.; Lang, J.; Maudet, N., A short introduction to computational social choice, (International Conference on Current Trends in Theory and Practice of Computer Science, (2007), Springer), 51-69 · Zbl 1131.91316
[27] Chin-Parker, S.; Bradner, A., Background shifts affect explanatory style: how a pragmatic theory of explanation accounts for background effects in the generation of explanations, Cogn. Process., 11, 3, 227-249, (2010)
[28] Chin-Parker, S.; Cantelon, J., Contrastive constraints guide explanation-based category learning, Cogn. Sci., 41, 6, 1645-1655, (2017)
[29] Chockler, H.; Halpern, J. Y., Responsibility and blame: a structural-model approach, J. Artif. Intell. Res., 22, 93-115, (2004) · Zbl 1080.68680
[30] Cimpian, A.; Salomon, E., The inherence heuristic: an intuitive means of making sense of the world, and a potential precursor to psychological essentialism, Behav. Brain Sci., 37, 5, 461-480, (2014)
[31] Cooper, A., The Inmates Are Running the Asylum: Why High-Tech Products Drive Us Crazy and How to Restore the Sanity, (2004), Sams: Sams Indianapolis, IN, USA
[32] DARPA, Explainable Artificial Intelligence (XAI) program, http://www.darpa.mil/program/explainable-artificial-intelligence, (2016), full solicitation at
[33] Davey, G. C., Characteristics of individuals with fear of spiders, Anxiety Res., 4, 4, 299-314, (1991)
[34] de Graaf, M. M.; Malle, B. F., How people explain action (and autonomous intelligent systems should too), (AAAI Fall Symposium on Artificial Intelligence for Human-Robot Interaction, (2017))
[35] Dennett, D. C., The Intentional Stance, (1989), MIT Press
[36] Dennett, D. C., From Bacteria to Bach and Back: The Evolution of Minds, (2017), WW Norton & Company
[37] Dignum, F.; Prada, R.; Hofstede, G. J., From autistic to social agents, (Proceedings of the 2014 International Conference on Autonomous Agents and Multi-Agent Systems, IFAAMAS, (2014)), 1161-1164
[38] Dodd, D. H.; Bradshaw, J. M., Leading questions and memory: pragmatic constraints, J. Mem. Lang., 19, 6, 695, (1980)
[39] Dowe, P., Wesley Salmon’s process theory of causality and the conserved quantity theory, Philos. Sci., 59, 2, 195-216, (1992)
[40] Eiter, T.; Lukasiewicz, T., Complexity results for structure-based causality, Artif. Intell., 142, 1, 53-89, (2002) · Zbl 1043.68100
[41] Eiter, T.; Lukasiewicz, T., Causes and explanations in the structural-model approach: tractable cases, Artif. Intell., 170, 6-7, 542-580, (2006) · Zbl 1131.68104
[42] Fagin, R.; Halpern, J.; Moses, Y.; Vardi, M., Reasoning About Knowledge, Vol. 4, (1995), MIT Press: MIT Press Cambridge · Zbl 0839.68095
[43] Fair, D., Causation and the flow of energy, Erkenntnis, 14, 3, 219-250, (1979)
[44] Fischer, G., User modeling in human-computer interaction, User Model. User-Adapt. Interact., 11, 1-2, 65-86, (2001) · Zbl 1030.68664
[45] Fox, J.; Glasspool, D.; Grecu, D.; Modgil, S.; South, M.; Patkar, V., Argumentation-based inference and decision making—a medical perspective, IEEE Intell. Syst., 22, 6, 34-41, (2007)
[46] Fox, M.; Long, D.; Magazzeni, D., Explainable planning, (IJCAI 2017 Workshop on Explainable Artificial Intelligence (XAI), (2017))
[47] Frosst, N.; Hinton, G., Distilling a neural network into a soft decision tree, arXiv e-prints 1711.09784
[48] Gerstenberg, T.; Lagnado, D. A., Spreading the blame: the allocation of responsibility amongst multiple agents, Cognition, 115, 1, 166-171, (2010)
[49] Gerstenberg, T.; Peterson, M. F.; Goodman, N. D.; Lagnado, D. A.; Tenenbaum, J. B., Eye-tracking causality, Psychol. Sci., 28, 12, 1731-1744, (2017)
[50] Ghallab, M.; Nau, D.; Traverso, P., Automated Planning: Theory and Practice, (2004), Elsevier · Zbl 1074.68613
[51] Gilbert, D. T.; Malone, P. S., The correspondence bias, Psychol. Bull., 117, 1, 21, (1995)
[52] Ginet, C., In defense of a non-causal account of reasons explanations, J. Ethics, 12, 3-4, 229-237, (2008)
[53] Giordano, L.; Schwind, C., Conditional logic of actions and causation, Artif. Intell., 157, 1-2, 239-279, (2004) · Zbl 1085.68160
[54] Girotto, V.; Legrenzi, P.; Rizzo, A., Event controllability in counterfactual thinking, Acta Psychol., 78, 1, 111-133, (1991)
[55] Greaves, M.; Holmback, H.; Bradshaw, J., What is a conversation policy?, (Issues in Agent Communication, (2000), Springer), 118-131
[56] Grice, H. P., Logic and conversation, (Syntax and Semantics 3: Speech Acts, (1975), Academic Press: Academic Press New York), 41-58
[57] Halpern, J. Y., Axiomatizing causal reasoning, J. Artif. Intell. Res., 12, 317-337, (2000) · Zbl 0943.68016
[58] Halpern, J. Y.; Pearl, J., Causes and explanations: a structural-model approach. Part I: causes, Br. J. Philos. Sci., 56, 4, 843-887, (2005) · Zbl 1092.03003
[59] Halpern, J. Y.; Pearl, J., Causes and explanations: a structural-model approach. Part II: explanations, Br. J. Philos. Sci., 56, 4, 889-911, (2005) · Zbl 1096.03005
[60] Hankinson, R. J., Cause and Explanation in Ancient Greek Thought, (2001), Oxford University Press
[61] Hanson, N. R., Patterns of Discovery: An Inquiry Into the Conceptual Foundations of Science, (1965), CUP Archive
[62] Harman, G. H., The inference to the best explanation, Philos. Rev., 74, 1, 88-95, (1965)
[63] Harradon, M.; Druce, J.; Ruttenberg, B., Causal learning and explanation of deep neural networks via autoencoded activations, arXiv e-prints 1802.00541
[64] Hart, H. L.A.; Honoré, T., Causation in the Law, (1985), OUP: OUP Oxford
[65] Hayes, B.; Shah, J. A., Improving robot controller transparency through autonomous policy explanation, (Proceedings of the 12th ACM/IEEE International Conference on Human-Robot Interaction (HRI 2017), (2017))
[66] Heider, F., The Psychology of Interpersonal Relations, (1958), Wiley: Wiley New York
[67] Heider, F.; Simmel, M., An experimental study of apparent behavior, Am. J. Psychol., 57, 2, 243-259, (1944)
[68] Hempel, C. G.; Oppenheim, P., Studies in the logic of explanation, Philos. Sci., 15, 2, 135-175, (1948)
[69] Hesslow, G., The problem of causal selection, (Contemporary Science and Natural Explanation: Commonsense Conceptions of Causality, (1988)), 11-32
[70] Hilton, D., Social attribution and explanation, (Oxford Handbook of Causal Reasoning, (2017), Oxford University Press), 645-676
[71] Hilton, D. J., Logic and causal attribution, (Contemporary Science and Natural Explanation: Commonsense Conceptions of Causality, (1988), New York University Press), 33-65
[72] Hilton, D. J., Conversational processes and causal explanation, Psychol. Bull., 107, 1, 65-81, (1990)
[73] Hilton, D. J., Mental models and causal explanation: judgements of probable cause and explanatory relevance, Think. Reasoning, 2, 4, 273-308, (1996)
[74] Hilton, D. J.; McClure, J.; Slugoski, B., Counterfactuals, conditionals and causality: a social psychological perspective, (Mandel, D. R.; Hilton, D. J.; Catellani, P., The Psychology of Counterfactual Thinking, (2005), Routledge: Routledge London), 44-60
[75] Hilton, D. J.; McClure, J.; Sutton, R. M., Selecting explanations from causal chains: do statistical principles explain preferences for voluntary causes?, Eur. J. Soc. Psychol., 40, 3, 383-400, (2010)
[76] Hilton, D. J.; McClure, J. L.; Slugoski, B. R., The course of events: counterfactuals, causal sequences and explanation, (Mandel, D. R.; Hilton, D. J.; Catellani, P., The Psychology of Counterfactual Thinking, (2005), Routledge: Routledge London)
[77] Hilton, D. J.; Slugoski, B. R., Knowledge-based causal attribution: the abnormal conditions focus model, Psychol. Rev., 93, 1, 75, (1986)
[78] Hoffman, R. R.; Klein, G., Explaining explanation, part 1: theoretical foundations, IEEE Intell. Syst., 32, 3, 68-73, (2017)
[79] Hume, D., An Enquiry Concerning Human Understanding: A Critical Edition, vol. 3, (2000), Oxford University Press
[80] Jaspars, J. M.; Hilton, D. J., Mental models of causal reasoning, (The Social Psychology of Knowledge, (1988), Cambridge University Press), 335-358
[81] Josephson, J. R.; Josephson, S. G., Abductive Inference: Computation, Philosophy, Technology, (1996), Cambridge University Press · Zbl 0813.68021
[82] Kahneman, D., Thinking, Fast and Slow, (2011), Macmillan
[83] Kahneman, D.; Tversky, A., The simulation heuristic, (Kahneman, D.; Slovic, P.; Tversky, A., Judgment Under Uncertainty: Heuristics and Biases, (1982), Cambridge University Press: Cambridge University Press New York)
[84] Kashima, Y.; McKintyre, A.; Clifford, P., The category of the mind: folk psychology of belief, desire, and intention, Asian J. Social Psychol., 1, 3, 289-313, (1998)
[85] Kass, A.; Leake, D., Types of Explanations, (1987), DTIC Document, Tech. Rep. ADA183253
[86] Kelley, H. H., Attribution theory in social psychology, (Nebraska Symposium on Motivation, (1967), University of Nebraska Press), 192-238
[87] Kelley, H. H., Causal Schemata and the Attribution Process, (1972), General Learning Press: General Learning Press Morristown, NJ
[88] Knobe, J., Intentional action and side effects in ordinary language, Analysis, 63, 279, 190-194, (2003)
[89] Kulesza, T.; Burnett, M.; Wong, W.-K.; Stumpf, S., Principles of explanatory debugging to personalize interactive machine learning, (Proceedings of the 20th International Conference on Intelligent User Interfaces, (2015), ACM), 126-137
[90] Kulesza, T.; Stumpf, S.; Burnett, M.; Yang, S.; Kwan, I.; Wong, W.-K., Too much, too little, or just right? Ways explanations impact end users’ mental models, (2013 IEEE Symposium on Visual Languages and Human-Centric Computing (VL/HCC), (2013), IEEE), 3-10
[91] Kulesza, T.; Stumpf, S.; Wong, W.-K.; Burnett, M. M.; Perona, S.; Ko, A.; Oberst, I., Why-oriented end-user debugging of naive Bayes text classification, ACM Trans. Interact. Intell. Syst. (TiiS), 1, 1, 2, (2011)
[92] Lagnado, D. A.; Channon, S., Judgments of cause and blame: the effects of intentionality and foreseeability, Cognition, 108, 3, 754-770, (2008)
[93] Langley, P.; Meadows, B.; Sridharan, M.; Choi, D., Explainable agency for intelligent autonomous systems, (Proceedings of the Twenty-Ninth Annual Conference on Innovative Applications of Artificial Intelligence, (2017), AAAI Press)
[94] Leake, D. B., Goal-based explanation evaluation, Cogn. Sci., 15, 4, 509-545, (1991)
[95] Leake, D. B., Abduction, experience, and goals: a model of everyday abductive explanation, J. Exp. Theor. Artif. Intell., 7, 4, 407-428, (1995)
[96] Leddo, J.; Abelson, R. P.; Gross, P. H., Conjunctive explanations: when two reasons are better than one, J. Pers. Soc. Psychol., 47, 5, 933, (1984)
[97] Levesque, H. J., A knowledge-level account of abduction, (IJCAI, (1989)), 1061-1067
[98] Lewis, D., Causation, J. Philos., 70, 17, 556-567, (1973)
[99] Lewis, D., Causal explanation, Philos. Pap., 2, 214-240, (1986)
[100] Lim, B. Y.; Dey, A. K., Assessing demand for intelligibility in context-aware applications, (Proceedings of the 11th International Conference on Ubiquitous Computing, (2009), ACM), 195-204
[101] Linegang, M. P.; Stoner, H. A.; Patterson, M. J.; Seppelt, B. D.; Hoffman, J. D.; Crittendon, Z. B.; Lee, J. D., Human-automation collaboration in dynamic mission planning: a challenge requiring an ecological approach, Proc. Human Factors Ergonom. Soc. Annual Meeting, 50, 23, 2482-2486, (2006)
[102] Lipton, P., Contrastive explanation, R. Inst. Philos. Suppl., 27, 247-266, (1990)
[103] Lipton, Z. C., The mythos of model interpretability, arXiv preprint
[104] Lombrozo, T., The structure and function of explanations, Trends Cogn. Sci., 10, 10, 464-470, (2006)
[105] Lombrozo, T., Simplicity and probability in causal explanation, Cogn. Psychol., 55, 3, 232-257, (2007)
[106] Lombrozo, T., Explanation and categorization: how “why?” informs “what?”, Cognition, 110, 2, 248-253, (2009)
[107] Lombrozo, T., Causal-explanatory pluralism: how intentions, functions, and mechanisms influence causal ascriptions, Cogn. Psychol., 61, 4, 303-332, (2010)
[108] Lombrozo, T., Explanation and abductive inference, (Oxford Handbook of Thinking and Reasoning, (2012)), 260-276
[109] Lombrozo, T.; Gwynne, N. Z., Explanation and inference: mechanistic and functional explanations guide property generalization, Front. Human Neurosci., 8, 700, (2014)
[110] Mackie, J. L., The Cement of the Universe, (1980), Oxford
[111] Malle, B. F., How people explain behavior: a new theoretical framework, Personal. Soc. Psychol. Rev., 3, 1, 23-48, (1999)
[112] Malle, B. F., How the Mind Explains Behavior: Folk Explanations, Meaning, and Social Interaction, (2004), MIT Press
[113] Malle, B. F., Attribution theories: how people make sense of behavior, (Theories in Social Psychology, (2011)), 72-95
[114] Malle, B. F., Time to give up the dogmas of attribution: an alternative theory of behavior explanation, Adv. Exp. Soc. Psychol., 44, 1, 297-311, (2011)
[115] Malle, B. F.; Knobe, J., The folk concept of intentionality, J. Exp. Soc. Psychol., 33, 2, 101-121, (1997)
[116] Malle, B. F.; Knobe, J.; O’Laughlin, M. J.; Pearce, G. E.; Nelson, S. E., Conceptual structure and social functions of behavior explanations: beyond person-situation attributions, J. Pers. Soc. Psychol., 79, 3, 309, (2000)
[117] Malle, B. F.; Knobe, J. M.; Nelson, S. E., Actor-observer asymmetries in explanations of behavior: new answers to an old question, J. Pers. Soc. Psychol., 93, 4, 491, (2007)
[118] Malle, B. F.; Pearce, G. E., Attention to behavioral events during interaction: two actor-observer gaps and three attempts to close them, J. Pers. Soc. Psychol., 81, 2, 278-294, (2001)
[119] Marr, D., Vision: A Computational Investigation Into the Human Representation and Processing of Visual Information, (1982), New York, NY
[120] Marr, D.; Poggio, T., From Understanding Computation to Understanding Neural Circuitry, (1976), MIT, AI Memos AIM-357
[121] McCloy, R.; Byrne, R. M., Counterfactual thinking about controllable events, Mem. Cogn., 28, 6, 1071-1078, (2000)
[122] McClure, J., Goal-based explanations of actions and outcomes, Eur. Rev. Soc. Psychol., 12, 1, 201-235, (2002)
[123] McClure, J.; Hilton, D., For you can’t always get what you want: when preconditions are better explanations than goals, Br. J. Soc. Psychol., 36, 2, 223-240, (1997)
[124] McClure, J.; Hilton, D.; Cowan, J.; Ishida, L.; Wilson, M., When rich or poor people buy expensive objects: is the question how or why?, J. Lang. Soc. Psychol., 20, 229-257, (2001)
[125] McClure, J.; Hilton, D. J., Are goals or preconditions better explanations? It depends on the question, Eur. J. Soc. Psychol., 28, 6, 897-911, (1998)
[126] McClure, J. L.; Sutton, R. M.; Hilton, D. J., The role of goal-based explanations, (Social Judgments: Implicit and Explicit Processes, vol. 5, (2003), Cambridge University Press), 306
[127] McGill, A. L.; Klein, J. G., Contrastive and counterfactual reasoning in causal judgment, J. Pers. Soc. Psychol., 64, 6, 897, (1993)
[128] Menzies, P.; Price, H., Causation as a secondary quality, Br. J. Philos. Sci., 44, 2, 187-203, (1993)
[129] Mercado, J. E.; Rupp, M. A.; Chen, J. Y.; Barnes, M. J.; Barber, D.; Procci, K., Intelligent agent transparency in human-agent teaming for multi-UxV management, Hum. Factors, 58, 3, 401-415, (2016)
[130] Mill, J. S., A System of Logic: The Collected Works of John Stuart Mill, vol. III, (1973)
[131] Miller, D. T.; Gunasegaram, S., Temporal order and the perceived mutability of events: implications for blame assignment, J. Pers. Soc. Psychol., 59, 6, 1111, (1990)
[132] Miller, T.; Howe, P.; Sonenberg, L., Explainable AI: beware of inmates running the asylum, (IJCAI 2017 Workshop on Explainable Artificial Intelligence (XAI), (2017)), 36-42
[133] Mitchell, T. M.; Keller, R. M.; Kedar-Cabelli, S. T., Explanation-based generalization: a unifying view, Mach. Learn., 1, 1, 47-80, (1986)
[134] Moore, J. D.; Paris, C. L., Planning text for advisory dialogues: capturing intentional and rhetorical information, Comput. Linguist., 19, 4, 651-694, (1993)
[135] Muise, C.; Belle, V.; Felli, P.; McIlraith, S.; Miller, T.; Pearce, A. R.; Sonenberg, L., Planning over multi-agent epistemic states: a classical planning approach, (Bonet, B.; Koenig, S., Proceedings of AAAI 2015, (2015)), 1-8
[136] Nott, G., ‘Explainable Artificial Intelligence’: cracking open the black box of AI, Computer World
[137] O’Laughlin, M. J.; Malle, B. F., How people explain actions performed by groups and individuals, J. Pers. Soc. Psychol., 82, 1, 33, (2002)
[138] Overton, J., Scientific explanation and computation, (Roth-Berghofer, T.; Leake, D. B.; Tintarev, N., Proceedings of the 6th International Explanation-Aware Computing (ExaCt) Workshop, (2011)), 41-50
[139] Overton, J. A., Explanation in Science, (2012), The University of Western Ontario, Ph.D. thesis
[140] Overton, J. A., “Explain” in scientific discourse, Synthese, 190, 8, 1383-1405, (2013)
[141] Pearl, J.; Mackenzie, D., The Book of Why: The New Science of Cause and Effect, (2018), Hachette: Hachette UK · Zbl 1416.62026
[142] Peirce, C. S., Harvard lectures on pragmatism, (Collected Papers, vol. 5, (1903))
[143] Petrick, R.; Foster, M. E., Using general-purpose planning for action selection in human-robot interaction, (AAAI 2016 Fall Symposium on Artificial Intelligence for Human-Robot Interaction, (2016))
[144] Poole, D., Normality and faults in logic-based diagnosis, (IJCAI, vol. 89, (1989)), 1304-1310
[145] Pople, H. E., On the mechanization of abductive logic, (IJCAI, vol. 73, (1973)), 147-152
[146] Popper, K., The Logic of Scientific Discovery, (2005), Routledge
[147] Prakken, H., Formal systems for persuasion dialogue, Knowl. Eng. Rev., 21, 02, 163-188, (2006)
[148] Prasada, S., The scope of formal explanation, Psychon. Bull. Rev., 1-10, (2017)
[149] Prasada, S.; Dillingham, E. M., Principled and statistical connections in common sense conception, Cognition, 99, 1, 73-112, (2006)
[150] Preston, J.; Epley, N., Explanations versus applications: the explanatory power of valuable beliefs, Psychol. Sci., 16, 10, 826-832, (2005)
[151] Ranney, M.; Thagard, P., Explanatory coherence and belief revision in naive physics, (Proceedings of the Tenth Annual Conference of the Cognitive Science Society, (1988)), 426-432
[152] Rao, A. S.; Georgeff, M. P., BDI agents: from theory to practice, (ICMAS, vol. 95, (1995)), 312-319
[153] Read, S. J.; Marcus-Newhall, A., Explanatory coherence in social explanations: a parallel distributed processing account, J. Pers. Soc. Psychol., 65, 3, 429, (1993)
[154] Rehder, B., A causal-model theory of conceptual representation and categorization, J. Exp. Psychol. Learn. Mem. Cogn., 29, 6, 1141, (2003)
[155] Rehder, B., When similarity and causality compete in category-based property generalization, Mem. Cogn., 34, 1, 3-16, (2006)
[156] Reiter, R., A theory of diagnosis from first principles, Artif. Intell., 32, 1, 57-95, (1987) · Zbl 0643.68122
[157] Ribeiro, M. T.; Singh, S.; Guestrin, C., Why should I trust you?: explaining the predictions of any classifier, (Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, (2016), ACM), 1135-1144
[158] Robnik-Šikonja, M.; Kononenko, I., Explaining classifications for individual instances, IEEE Trans. Knowl. Data Eng., 20, 5, 589-600, (2008)
[159] Salmon, W. C., Four Decades of Scientific Explanation, (2006), University of Pittsburgh Press
[160] Samland, J.; Josephs, M.; Waldmann, M. R.; Rakoczy, H., The role of prescriptive norms and knowledge in children’s and adults’ causal selection, J. Exp. Psychol. Gen., 145, 2, 125, (2016)
[161] Samland, J.; Waldmann, M. R., Do social norms influence causal inferences?, (Bello, P.; Guarini, M.; McShane, M.; Scassellati, B., Proceedings of the 36th Annual Conference of the Cognitive Science Society, (2014), Cognitive Science Society), 1359-1364
[162] Scriven, M., The concept of comprehension: from semantics to software, (Carroll, J. B.; Freedle, R. O., Language Comprehension and the Acquisition of Knowledge, (1972), W. H. Winston & Sons: W. H. Winston & Sons Washington), 31-39
[163] Shams, Z.; de Vos, M.; Oren, N.; Padget, J., Normative practical reasoning via argumentation and dialogue, (Proceedings of the 25th International Joint Conference on Artificial Intelligence (IJCAI-16), (2016), AAAI Press)
[164] Singh, R.; Miller, T.; Newn, J.; Sonenberg, L.; Velloso, E.; Vetere, F., Combining planning with gaze for online human intention recognition, (Proceedings of the 17th International Conference on Autonomous Agents and Multiagent Systems, (2018))
[165] Slugoski, B. R.; Lalljee, M.; Lamb, R.; Ginsburg, G. P., Attribution in conversational context: effect of mutual knowledge on explanation-giving, Eur. J. Soc. Psychol., 23, 3, 219-238, (1993)
[166] Stubbs, K.; Hinds, P.; Wettergreen, D., Autonomy and common ground in human-robot interaction: a field study, IEEE Intell. Syst., 22, 2, 42-50, (2007)
[167] Susskind, J.; Maurer, K.; Thakkar, V.; Hamilton, D. L.; Sherman, J. W., Perceiving individuals and groups: expectancies, dispositional inferences, and causal attributions, J. Pers. Soc. Psychol., 76, 2, 181, (1999)
[168] Swartout, W. R.; Moore, J. D., Explanation in second generation expert systems, (Second Generation Expert Systems, (1993), Springer), 543-585
[169] Tetlock, P. E.; Boettger, R., Accountability: a social magnifier of the dilution effect, J. Pers. Soc. Psychol., 57, 3, 388, (1989)
[170] Tetlock, P. E.; Learner, J. S.; Boettger, R., The dilution effect: judgemental bias, conversational convention, or a bit of both?, Eur. J. Soc. Psychol., 26, 915-934, (1996)
[171] Thagard, P., Explanatory coherence, Behav. Brain Sci., 12, 03, 435-467, (1989)
[172] Trabasso, T.; Bartolone, J., Story understanding and counterfactual reasoning, J. Exp. Psychol. Learn. Mem. Cogn., 29, 5, 904, (2003)
[173] Tversky, A.; Kahneman, D., Extensional versus intuitive reasoning: the conjunction fallacy in probability judgment, Psychol. Rev., 90, 4, 293, (1983)
[174] Uttich, K.; Lombrozo, T., Norms inform mental state ascriptions: a rational explanation for the side-effect effect, Cognition, 116, 1, 87-100, (2010)
[175] Van Bouwel, J.; Weber, E., Remote causes, bad explanations?, J. Theory Soc. Behav., 32, 4, 437-449, (2002)
[176] Van Fraassen, B. C., The pragmatics of explanation, Am. Philos. Q., 14, 2, 143-150, (1977)
[177] Vasilyeva, N.; Wilkenfeld, D. A.; Lombrozo, T., Goals affect the perceived quality of explanations, (Noelle, D. C.; Dale, R.; Warlaumont, A. S.; Yoshimi, J.; Matlock, T.; Jennings, C. D.; Maglio, P. P., Proceedings of the 37th Annual Conference of the Cognitive Science Society, (2015), Cognitive Science Society), 2469-2474
[178] von der Osten, F. B.; Kirley, M.; Miller, T., The minds of many: opponent modelling in a stochastic game, (Proceedings of the 25th International Joint Conference on Artificial Intelligence (IJCAI), (2017), AAAI Press), 3845-3851
[179] Von Wright, G. H., Explanation and Understanding, (1971), Cornell University Press
[180] Walton, D., A new dialectical theory of explanation, Philos. Explor., 7, 1, 71-89, (2004)
[181] Walton, D., Examination dialogue: an argumentation framework for critically questioning an expert opinion, J. Pragmat., 38, 5, 745-777, (2006)
[182] Walton, D., Dialogical models of explanation, (Proceedings of the International Explanation-Aware Computing (ExaCt) Workshop, (2007)), 1-9
[183] Walton, D., A dialogue system specification for explanation, Synthese, 182, 3, 349-374, (2011)
[184] Walton, D. N., Logical Dialogue — Games and Fallacies, (1984), University Press of America: University Press of America Lanham, Maryland
[185] Weiner, J., BLAH, a system which explains its reasoning, Artif. Intell., 15, 1-2, 19-48, (1980)
[186] Weld, D. S.; Bansal, G., Intelligible Artificial Intelligence, arXiv e-prints
[187] Wendt, A., On constitution and causation in international relations, Rev. Int. Stud., 24, 05, 101-118, (1998)
[188] Wilkenfeld, D. A.; Lombrozo, T., Inference to the best explanation (IBE) versus explaining for the best inference (EBI), Sci. Educ., 24, 9-10, 1059-1077, (2015)
[189] Williams, J. J.; Lombrozo, T.; Rehder, B., The hazards of explanation: overgeneralization in the face of exceptions, J. Exp. Psychol. Gen., 142, 4, 1006, (2013)
[190] Winikoff, M., Debugging agent programs with why? questions, (Proceedings of the 16th Conference on Autonomous Agents and MultiAgent Systems, AAMAS ’17, IFAAMAS, (2017)), 251-259
[191] Woodward, J., Making Things Happen: A Theory of Causal Explanation, (2005), Oxford University Press
[192] Woodward, J., Sensitive and insensitive causation, Philos. Rev., 115, 1, 1-50, (2006)
This reference list is based on information provided by the publisher or from digital mathematics libraries. Its items are heuristically matched to zbMATH identifiers and may contain data conversion errors. It attempts to reflect the references listed in the original paper as accurately as possible without claiming the completeness or perfect precision of the matching.