
Lemma learning in the model evolution calculus. (English) Zbl 1165.03308
Hermann, Miki (ed.) et al., Logic for programming, artificial intelligence, and reasoning. 13th international conference, LPAR 2006, Phnom Penh, Cambodia, November 13–17, 2006. Proceedings. Berlin: Springer (ISBN 978-3-540-48281-9/pbk). Lecture Notes in Computer Science 4246. Lecture Notes in Artificial Intelligence, 572-586 (2006).
Summary: The Model Evolution (\(\mathcal{ME}\)) calculus is a proper lifting to first-order logic of the DPLL procedure, a backtracking search procedure for propositional satisfiability. Like DPLL, the \(\mathcal{ME}\) calculus is based on the idea of incrementally building a model of the input formula by alternating constraint propagation steps with non-deterministic decision steps. One of the major conceptual improvements over basic DPLL is lemma learning, a mechanism for generating new formulae that prevent, later in the search, combinations of decision steps that are guaranteed to lead to failure. We introduce two lemma generation methods for \(\mathcal{ME}\) proof procedures, with different degrees of power, effectiveness in reducing search, and computational overhead. Although formally correct, each of these methods presents complications that do not exist at the propositional level but need to be addressed for learning to be effective in practice for \(\mathcal{ME}\). We discuss some of these issues and present initial experimental results on the performance of an implementation of the two learning procedures within our \(\mathcal{ME}\) prover Darwin.
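The lemma-learning idea is easiest to see at the propositional (DPLL) level that the paper lifts from. The sketch below is an illustrative assumption of mine, not the paper's first-order \(\mathcal{ME}\) procedure: it uses the crudest learning scheme, in which a failed branch under decisions \(d_1,\dots,d_k\) contributes the lemma clause \(\{\neg d_1,\dots,\neg d_k\}\), blocking that decision combination for the rest of the search.

```python
# Illustrative sketch only (not the paper's first-order ME calculus):
# propositional DPLL with naive lemma learning. Clauses are lists of
# nonzero ints; literal -v denotes the negation of variable v.

def unit_propagate(clauses, assignment):
    """Simplify clauses under assignment, adding forced unit literals.
    Returns (remaining clauses, extended assignment), or None on conflict."""
    changed = True
    while changed:
        changed = False
        remaining = []
        for clause in clauses:
            clause = [l for l in clause if -l not in assignment]  # drop false literals
            if any(l in assignment for l in clause):
                continue                      # clause already satisfied
            if not clause:
                return None                   # empty clause: conflict
            if len(clause) == 1:
                assignment.add(clause[0])     # forced (unit) literal
                changed = True
            else:
                remaining.append(clause)
        clauses = remaining
    return clauses, assignment

def solve(clauses):
    """Return a satisfying set of literals, or None if unsatisfiable."""
    learned = []  # lemmas accumulated across the whole search

    def search(assignment, decisions):
        result = unit_propagate(clauses + learned, set(assignment))
        if result is None:                    # conflict: learn a lemma
            if decisions:
                learned.append([-d for d in decisions])
            return None
        simplified, assignment = result
        if not simplified:
            return assignment                 # every clause satisfied
        lit = simplified[0][0]                # decision step on an unassigned literal
        for choice in (lit, -lit):
            model = search(assignment | {choice}, decisions + [choice])
            if model is not None:
                return model
        return None

    return search(set(), [])
```

This "negate all decisions" scheme is only the baseline; practical SAT solvers derive stronger lemmas by conflict analysis, and the two \(\mathcal{ME}\) methods of the paper must additionally cope with first-order complications (e.g. instantiation) that have no propositional counterpart.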
For the entire collection see [Zbl 1135.68002].

03B35 Mechanization of proofs and logical operations
68T15 Theorem proving (deduction, resolution, etc.) (MSC2010)
Darwin; E-Darwin; Mace4
Full Text: DOI