DOI: 10.1145/3340555.3353758

Generative Model of Agent’s Behaviors in Human-Agent Interaction

Published: 14 October 2019

Abstract

A social interaction implies a social exchange between two or more persons, in which they adapt and adjust their behaviors in response to their interaction partners. With the growing interest in human-agent interactions, it is desirable to make these interactions more natural and human-like. In this context, we aim to enhance the quality of the interaction between a user and an Embodied Conversational Agent (ECA) by endowing the ECA with the capacity to adapt its behavior in real time according to the user’s behavior. The novelty of our approach is to model the agent’s nonverbal behaviors as a function of both the agent’s and the user’s behaviors jointly with the agent’s communicative intentions, creating a dynamic loop between both interactants. Moreover, we capture the variation of behavior over time through an LSTM-based model. Our model, IL-LSTM (Interaction Loop LSTM), predicts the agent’s next behavior taking into account the behaviors that both the agent and the user have displayed within a time window. We conducted an evaluation study involving an agent interacting with visitors in a science museum. Results of our study show that participants have a better experience and are more engaged in the interaction when the agent adapts its behaviors to theirs, thus creating an interactive loop.
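To make the modeling idea concrete, the sketch below shows one way such an interaction-loop predictor could be wired up in PyTorch. It is a minimal illustration of the abstract's description, not the authors' implementation: the class name ILLSTM, the feature dimensions, and the hidden size are all assumptions made for the example.

```python
# Minimal sketch of an interaction-loop LSTM predictor (illustrative only;
# dimensions, names, and architecture details are assumptions, not the
# paper's actual implementation).
import torch
import torch.nn as nn

class ILLSTM(nn.Module):
    """Predicts the agent's next nonverbal behavior from a time window of
    the agent's own past behavior, the user's behavior, and the agent's
    communicative intention, closing the loop between both interactants."""
    def __init__(self, agent_dim=10, user_dim=10, intent_dim=5, hidden=64):
        super().__init__()
        # Features from both interactants plus the communicative intention
        # are concatenated at every time step of the window.
        self.lstm = nn.LSTM(agent_dim + user_dim + intent_dim, hidden,
                            batch_first=True)
        self.head = nn.Linear(hidden, agent_dim)  # next agent behavior

    def forward(self, agent_seq, user_seq, intent_seq):
        # Each input: (batch, window, dim)
        x = torch.cat([agent_seq, user_seq, intent_seq], dim=-1)
        out, _ = self.lstm(x)
        # Use the last hidden state of the window to predict the next step.
        return self.head(out[:, -1])

# Example: a 20-step window of features for one interaction.
model = ILLSTM()
agent = torch.randn(1, 20, 10)   # agent's past behavior features
user = torch.randn(1, 20, 10)    # user behavior extracted in real time
intent = torch.randn(1, 20, 5)   # agent's communicative intention
next_behavior = model(agent, user, intent)  # shape: (1, 10)
```

Conceptually, the predicted vector would be fed back into the agent's behavior realizer while the user's reaction is captured in real time, so each prediction becomes part of the next input window; this is the dynamic loop the abstract describes.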




Published In

ICMI '19: 2019 International Conference on Multimodal Interaction
October 2019
601 pages
ISBN:9781450368605
DOI:10.1145/3340555

Publisher

Association for Computing Machinery, New York, NY, United States


Author Tags

  1. LSTM
  2. behavior adaptation
  3. human-agent interaction
  4. multimodal behavior

Qualifiers

  • Research-article
  • Research
  • Refereed limited

Conference

ICMI '19

Acceptance Rates

Overall acceptance rate: 453 of 1,080 submissions (42%)

Article Metrics

  • Downloads (last 12 months): 130
  • Downloads (last 6 weeks): 2
Reflects downloads up to 25 Dec 2024


Cited By

  • (2024) Identifying intentions in conversational tools: a systematic mapping. Proceedings of the 20th Brazilian Symposium on Information Systems, 1-10. https://doi.org/10.1145/3658271.3658286. Online publication date: 20-May-2024.
  • (2024) Actor Takeover of Animated Characters. 2024 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW), 1134-1135. https://doi.org/10.1109/VRW62533.2024.00361. Online publication date: 16-Mar-2024.
  • (2023) ASAP: Endowing Adaptation Capability to Agent in Human-Agent Interaction. Proceedings of the 28th International Conference on Intelligent User Interfaces, 464-475. https://doi.org/10.1145/3581641.3584081. Online publication date: 27-Mar-2023.
  • (2023) IAVA. Proceedings of the 23rd ACM International Conference on Intelligent Virtual Agents, 1-8. https://doi.org/10.1145/3570945.3607326. Online publication date: 19-Sep-2023.
  • (2023) Stop Copying Me. Proceedings of the 23rd ACM International Conference on Intelligent Virtual Agents, 1-4. https://doi.org/10.1145/3570945.3607322. Online publication date: 19-Sep-2023.
  • (2023) Supporting Co-Presence in Populated Virtual Environments by Actor Takeover of Animated Characters. 2023 IEEE International Symposium on Mixed and Augmented Reality (ISMAR), 940-949. https://doi.org/10.1109/ISMAR59233.2023.00110. Online publication date: 16-Oct-2023.
  • (2022) Methods for Robot Behavior Adaptation for Cognitive Neurorehabilitation. Annual Review of Control, Robotics, and Autonomous Systems 5:1, 109-135. https://doi.org/10.1146/annurev-control-042920-093225. Online publication date: 3-May-2022.
  • (2022) Immersive machine learning for social attitude detection in virtual reality narrative games. Virtual Reality 26:4, 1519-1538. https://doi.org/10.1007/s10055-022-00644-4. Online publication date: 1-Dec-2022.
  • (2021) Multimodal Behavior Modeling for Socially Interactive Agents. The Handbook on Socially Interactive Agents, 259-310. https://doi.org/10.1145/3477322.3477331. Online publication date: 10-Sep-2021.
  • (2021) Development of an Interactive Human/Agent Loop using Multimodal Recurrent Neural Networks. Proceedings of the 2021 International Conference on Multimodal Interaction, 822-826. https://doi.org/10.1145/3462244.3481275. Online publication date: 18-Oct-2021.
