Martín, Mario; Geffner, Hector
Learning generalized policies from planning examples using concept languages. (English) Zbl 1078.68713
Appl. Intell. 20, No. 1, 9-19 (2004).

Summary: We are concerned with the problem of learning how to solve planning problems in a given domain from a number of solved instances. We formulate this as the problem of inferring a function that operates over all instances in the domain and maps states and goals into actions. We call such functions generalized policies, and the question we address is how to learn suitable representations of generalized policies from data. This question has recently been addressed by R. Khardon [Technical Report TR-09-97, Harvard University (1997)], who represents generalized policies as ordered lists of existentially quantified rules inferred from a training set using a version of Rivest's decision-list learning algorithm [Machine Learning 2, No. 3, 229-246 (1987)]. Here, we follow Khardon's approach but represent generalized policies differently, using a concept language. We show through a number of experiments in the blocks world that the concept language yields a better policy from a smaller set of examples and with no background knowledge.

Cited in 5 Documents

MSC:
68T05 Learning and adaptive systems in artificial intelligence
68T20 Problem solving in the context of artificial intelligence (heuristics, search strategies, etc.)

Keywords: learning policies; planning; generalized policies

Software: Graphplan

Cite: \textit{M. Martín} and \textit{H. Geffner}, Appl. Intell. 20, No. 1, 9--19 (2004; Zbl 1078.68713)

Full Text: DOI
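To make the notion of a generalized policy concrete, the following is a minimal, hypothetical sketch (not taken from the paper) of such a policy written as an ordered list of condition-action rules for a toy blocks world. It is closer in spirit to Khardon's rule lists than to the concept-language representation the authors propose; the state encoding, the predicate and function names, and the rules themselves are all assumptions made purely for illustration.

```python
# Illustrative sketch only: a generalized policy as an ordered decision list
# of (condition -> action) rules over a toy blocks-world state. The rules
# below are hypothetical and are NOT the policy learned in the paper.
from typing import Optional, Tuple

State = dict  # e.g. {"on": {("C", "A")}, "clear": {"C", "B"}, "holding": None}
Goal = dict   # same shape, listing the desired "on" relations

def misplaced(block: str, state: State, goal: Goal) -> bool:
    """A block is misplaced if some goal 'on' relation starting at it is unmet."""
    return any(pair not in state["on"] for pair in goal["on"] if pair[0] == block)

def rule_stack_held(state: State, goal: Goal) -> Optional[Tuple]:
    # If holding a block whose goal destination is clear, stack it there.
    held = state.get("holding")
    if held is None:
        return None
    for (x, y) in goal["on"]:
        if x == held and y in state["clear"]:
            return ("stack", held, y)
    return None

def rule_putdown_misplaced(state: State, goal: Goal) -> Optional[Tuple]:
    # If holding a block that cannot yet go to its target, put it on the table.
    held = state.get("holding")
    if held is not None and misplaced(held, state, goal):
        return ("putdown", held)
    return None

def rule_unstack_blocking(state: State, goal: Goal) -> Optional[Tuple]:
    # If the hand is empty, pick up a clear block sitting on a wrong block.
    if state.get("holding") is not None:
        return None
    for (x, y) in state["on"]:
        if x in state["clear"] and (x, y) not in goal["on"]:
            return ("unstack", x, y)
    return None

# The generalized policy: rules are tried in order, first match wins.
GENERALIZED_POLICY = [rule_stack_held, rule_putdown_misplaced, rule_unstack_blocking]

def policy(state: State, goal: Goal) -> Optional[Tuple]:
    """Map a (state, goal) pair to an action; None means no rule applies."""
    for rule in GENERALIZED_POLICY:
        action = rule(state, goal)
        if action is not None:
            return action
    return None

if __name__ == "__main__":
    # C sits on A, B is on the table; the goal is A on B.
    state = {"on": {("C", "A")}, "clear": {"C", "B"}, "holding": None}
    goal = {"on": {("A", "B")}}
    print(policy(state, goal))  # -> ("unstack", "C", "A")
```

The key property the sketch tries to convey is that the same rule list applies to any instance of the domain, regardless of the number of blocks or the particular goal; learning the policy amounts to inferring such rules (or, in the authors' approach, concepts in a concept language) from solved training instances.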