Abstract
Using assistive robots for educational applications requires robots to adapt their behavior specifically to each child with whom they interact. Among relevant signals, non-verbal cues such as the child's gaze can provide the robot with important information about the child's current engagement in the task, and about whether the robot should continue its current behavior or not. Here we propose a reinforcement learning algorithm extended with active state-specific exploration and show its applicability to child engagement maximization as well as to more classical tasks such as maze navigation. We first demonstrate its adaptive nature on a continuous maze problem, an enhancement of the classic grid world. There, parameterized actions enable the agent to learn single moves that carry it to the end of a corridor, similarly to "options" but without explicit hierarchical representations. We then apply the algorithm to a series of simulated scenarios, including an extended Tower of Hanoi, where the robot must find the speed of movement appropriate for the interacting child, and a pointing task, where the robot must find the child-specific appropriate level of expressivity of action. We show that the algorithm enables the agent to cope with both global and local non-stationarities in the state space while preserving stable behavior in the stationary portions of the state space. Altogether, these results suggest a promising way to enable robot learning from non-verbal cues despite the high degree of non-stationarity that can occur during interaction with children.
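To make the combination of parameterized actions and state-specific exploration concrete, the following Python sketch shows one way such an agent could be structured: a tabular Q-learner whose discrete actions each carry a continuous parameter, and whose exploration in each state is scaled by a running average of unsigned reward prediction errors, so that surprising (non-stationary) states are re-explored while stable ones remain exploited. This is a minimal illustration of the general principle, not the exact algorithm evaluated in the paper; all names and constants are assumptions introduced here.

```python
import numpy as np

class ParamActionAgent:
    """Illustrative sketch: Q-learning with parameterized actions and
    per-state exploration driven by reward prediction errors."""

    def __init__(self, n_states, n_actions, alpha=0.1, gamma=0.9):
        self.q = np.zeros((n_states, n_actions))   # action values
        self.mu = np.zeros((n_states, n_actions))  # mean continuous parameter per (s, a)
        self.err = np.ones(n_states)               # per-state surprise estimate
        self.alpha, self.gamma = alpha, gamma

    def act(self, s, rng):
        # Higher recent surprise in state s -> softer (more exploratory) softmax.
        beta = 1.0 / max(self.err[s], 1e-3)
        p = np.exp(beta * (self.q[s] - self.q[s].max()))
        p /= p.sum()
        a = rng.choice(len(p), p=p)
        # Parameter exploration noise also scales with state-specific surprise.
        theta = rng.normal(self.mu[s, a], self.err[s])
        return a, theta

    def update(self, s, a, theta, r, s_next):
        delta = r + self.gamma * self.q[s_next].max() - self.q[s, a]
        self.q[s, a] += self.alpha * delta
        # Track unsigned prediction error per state: large values flag
        # non-stationary states that should be actively re-explored.
        self.err[s] += self.alpha * (abs(delta) - self.err[s])
        if delta > 0:  # pull the parameter mean toward rewarded values
            self.mu[s, a] += self.alpha * (theta - self.mu[s, a])

# Hypothetical usage, e.g. a continuous maze with 4 move directions
# whose continuous parameter theta could encode step length:
rng = np.random.default_rng(0)
agent = ParamActionAgent(n_states=25, n_actions=4)
a, theta = agent.act(0, rng)
```

Keeping the surprise estimate per state, rather than one global exploration rate, is what lets local non-stationarities (a single changed corridor, a child's shifting engagement in one phase of the task) trigger renewed exploration without destabilizing behavior elsewhere.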