SQuARE: semantics-based question answering and reasoning engine. (English) Zbl 07455707

Ricca, Francesco (ed.) et al., Proceedings of the 36th international conference on logic programming (technical communications), ICLP 2020, UNICAL, Rende (CS), Italy, September 18–24, 2020. Waterloo: Open Publishing Association (OPA). Electron. Proc. Theor. Comput. Sci. (EPTCS) 325, 73-86 (2020).
Summary: Understanding the meaning of a text is a fundamental challenge of natural language understanding (NLU), and from its early days it has received significant attention through question answering (QA) tasks. We introduce a general semantics-based framework for natural language QA and describe the SQuARE system, an application of this framework. The framework is based on the denotational semantics approach widely used in programming language research: a valuation function maps the syntax tree of the text to its commonsense meaning, represented using basic knowledge primitives (the semantic algebra) coded in answer set programming (ASP). We illustrate an application of this framework by using VerbNet primitives as our semantic algebra and a novel algorithm, based on partial tree matching, that generates an answer set program representing the knowledge in the text. A question posed against that text is converted into an ASP query using the same framework and executed with the s(CASP) goal-directed ASP system. Our approach is based purely on (commonsense) reasoning. SQuARE achieves 100% accuracy on all five of the bAbI QA task datasets that we have tested. The significance of our work is that, unlike machine-learning-based approaches, ours is based on “understanding” the text and requires no training. SQuARE can also generate an explanation for an answer while maintaining high accuracy.
For the entire collection see [Zbl 1466.68027].
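
To make the pipeline concrete, consider a bAbI Task 1 style passage, “Mary moved to the bathroom. John went to the hallway.”, with the question “Where is Mary?”. The following is a minimal sketch of the kind of ASP program such a text could be translated into and queried under s(CASP); the predicate names (location/3, current_location/2, moved_later/2) and the recency rule are illustrative assumptions for this sketch, not the paper's actual VerbNet-based semantic primitives.

    % Hypothetical facts extracted from the text; the third argument
    % is a timestamp recording sentence order.
    location(mary, bathroom, 1).
    location(john, hallway, 2).

    % Default: a person is currently at the last place the text put them.
    current_location(P, L) :-
        location(P, L, T),
        not moved_later(P, T).

    % moved_later/2 holds if the text later places P somewhere.
    moved_later(P, T) :-
        location(P, _, T2),
        T2 > T.

    % Query corresponding to "Where is Mary?" at the s(CASP) top level:
    ?- current_location(mary, Where).
    % Expected binding: Where = bathroom.

Under the stable model semantics, the negation-as-failure in current_location/2 encodes the commonsense default that an entity stays where it was last placed, which is exactly the kind of reasoning a purely deductive (training-free) QA system needs for this task; the goal-directed execution of s(CASP) also yields the proof tree from which an explanation of the answer can be read off.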

MSC:

68N17 Logic programming
