
Modeling robotic operations controlled by natural language. (English) Zbl 1399.93173

Summary: There are multiple ways to control a robotic system. Most of them require users to have prior knowledge about robots or to be trained before use. Natural-language-based control attracts increasing attention due to its versatility and its lower requirements on users. Since natural language instructions cannot be understood by robots directly, the linguistic input has to be processed into a formal representation that captures the task specification and removes the ambiguity inherent in natural language. Most existing natural-language-controlled robotic systems assume that the given language instructions are already in the correct order. However, untrained users are likely to give commands in a mixed order, based on their direct observation and intuitive thinking, and simply following the order of the commands can then lead to task failures. To remedy this problem, we propose a novel framework, the dependency relation matrix (DRM), to model and organize the semantic information extracted from the language input and to determine an executable sequence of subtasks for later execution. In addition, the proposed approach projects the abstract language input and detailed sensory information into the same space, and uses the difference between the goal specification and the temporal status of the task under execution to monitor the progress of task execution. In this paper, we describe the DRM framework in detail and illustrate the utility of this approach with experimental results.
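The summary does not spell out how the DRM is used to recover an executable subtask order or how progress is monitored; the following is only a minimal sketch under assumed semantics, not the authors' implementation. It treats the DRM as a binary precedence matrix over subtasks, derives an executable sequence by topological sorting, and monitors progress as the difference between a symbolic goal specification and the current task status. All names (executable_order, remaining_work, the example subtasks and state dictionaries) are illustrative.

```python
# Hypothetical sketch of a DRM-style ordering and progress monitor.
# Not taken from the paper; drm[i][j] == 1 is assumed to mean that
# subtask i must be completed before subtask j can start.
from collections import deque


def executable_order(subtasks, drm):
    """Derive an executable sequence from a dependency relation matrix
    by topological sorting (Kahn's algorithm)."""
    n = len(subtasks)
    indegree = [sum(drm[i][j] for i in range(n)) for j in range(n)]
    ready = deque(j for j in range(n) if indegree[j] == 0)
    order = []
    while ready:
        i = ready.popleft()
        order.append(subtasks[i])
        for j in range(n):
            if drm[i][j]:
                indegree[j] -= 1
                if indegree[j] == 0:
                    ready.append(j)
    if len(order) != n:
        raise ValueError("cyclic dependencies: no executable sequence exists")
    return order


def remaining_work(goal, status):
    """Progress monitor: the part of the goal specification not yet
    satisfied by the current task status (both in the same symbolic space)."""
    return {k: v for k, v in goal.items() if status.get(k) != v}


if __name__ == "__main__":
    # Commands as an untrained user might give them, in a mixed order.
    subtasks = ["place(cup, shelf)", "grasp(cup)", "open(gripper)"]
    drm = [
        [0, 0, 0],  # place(cup, shelf) precedes nothing
        [1, 0, 0],  # grasp(cup) must precede place(cup, shelf)
        [0, 1, 0],  # open(gripper) must precede grasp(cup)
    ]
    print(executable_order(subtasks, drm))
    # -> ['open(gripper)', 'grasp(cup)', 'place(cup, shelf)']

    goal = {"cup": "on_shelf", "gripper": "free"}
    status = {"cup": "grasped", "gripper": "holding_cup"}
    print(remaining_work(goal, status))
    # -> {'cup': 'on_shelf', 'gripper': 'free'}  (execution not yet complete)
```

In this reading, execution is finished when remaining_work returns an empty dictionary; how the actual framework grounds goal and status in sensory data is described in the paper itself.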

MSC:

93C85 Automated systems (robots, etc.) in control theory
68T40 Artificial intelligence for robotics
68T50 Natural language processing
