
PEORL

swMATH ID: 29437
Software Authors: Fangkai Yang, Daoming Lyu, Bo Liu, Steven Gustafson
Description: PEORL: Integrating Symbolic Planning and Hierarchical Reinforcement Learning for Robust Decision-Making. Reinforcement learning and symbolic planning have both been used to build intelligent autonomous agents. Reinforcement learning relies on learning from interactions with the real world, which often requires an infeasibly large amount of experience. Symbolic planning relies on manually crafted symbolic knowledge, which may not be robust to domain uncertainties and changes. In this paper we present a unified framework, PEORL, that integrates symbolic planning with hierarchical reinforcement learning (HRL) to cope with decision-making in a dynamic environment with uncertainties. Symbolic plans are used to guide the agent’s task execution and learning, and the learned experience is fed back to the symbolic knowledge to improve planning. This method leads to rapid policy search and robust symbolic plans in complex domains. The framework is tested on benchmark domains of HRL.
Homepage: https://arxiv.org/abs/1804.07779
Keywords: Machine Learning; arXiv_cs.LG; Artificial Intelligence; arXiv_cs.AI; arXiv_stat.ML; PEORL framework
Related Software: REBA; BWIBots; ALM; CCalc; PDDL; Clingcon; DeepProbLog; DL2; FODD-Planner; Smodels; BLOG; Clingo
Cited in: 5 Publications
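
The description above outlines PEORL's loop: a symbolic planner proposes a plan, each symbolic action is executed and refined as an option by the reinforcement learner, and the measured plan quality is fed back to the planner. The following is a minimal, self-contained sketch of that loop under simplifying assumptions; all names (SymbolicPlanner, execute_option, peorl_loop) and the toy reward model are hypothetical illustrations, not the authors' actual implementation, which uses an answer-set-programming planner and R-learning over options.

```python
# Minimal sketch of a PEORL-style plan-learn-feedback loop (illustrative only).
import random
from collections import defaultdict

class SymbolicPlanner:
    """Toy stand-in for a symbolic planner: among candidate plans
    (tuples of symbolic actions), it prefers the plan whose learned
    quality estimate is currently highest."""
    def __init__(self, candidate_plans):
        self.candidate_plans = candidate_plans
        self.plan_quality = defaultdict(float)   # quality learned from experience

    def plan(self):
        return max(self.candidate_plans, key=lambda p: self.plan_quality[p])

    def update(self, plan, measured_quality):
        # Feed learned experience back into the symbolic knowledge.
        self.plan_quality[plan] = measured_quality

def execute_option(action, q_table, episodes=20):
    """Toy option execution: refine a value estimate for one symbolic
    action over several episodes and return its estimated reward."""
    for _ in range(episodes):
        reward = random.gauss(1.0 if action.endswith("safe") else 0.5, 0.1)
        q_table[action] += 0.1 * (reward - q_table[action])   # running average
    return q_table[action]

def peorl_loop(planner, iterations=10):
    q_table = defaultdict(float)
    for _ in range(iterations):
        plan = planner.plan()                                       # symbolic planning
        quality = sum(execute_option(a, q_table) for a in plan)     # HRL over options
        planner.update(plan, quality)                               # experience -> planner
    return planner.plan()

if __name__ == "__main__":
    plans = [("goto_door", "cross_safe"), ("goto_window", "cross_risky")]
    print("best plan after learning:", peorl_loop(SymbolicPlanner(plans)))
```

In this toy setting the feedback loop converges on the plan whose options earn higher learned reward, mirroring how PEORL's learned experience steers subsequent symbolic planning.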

Standard Articles

1 Publication describing the Software
PEORL: Integrating Symbolic Planning and Hierarchical Reinforcement Learning for Robust Decision-Making
Fangkai Yang, Daoming Lyu, Bo Liu, Steven Gustafson (2018)
