DOI: 10.1145/3308532.3329452
Extended Abstract · Public Access

Time to Go ONLINE! A Modular Framework for Building Internet-based Socially Interactive Agents

Published: 01 July 2019

Abstract

Although socially interactive agents have emerged as a new metaphor for human-computer interaction, they are, to date, absent from the Internet. We describe the design choices, implementation, and challenges in building EEVA, the first fully integrated, platform-independent framework for deploying realistic 3D web-based social agents. With real-time multimodal perception of, and response to, the user's verbal and non-verbal social cues, EEVA agents can communicate rich, customizable content to users in real time, while building and maintaining user profiles for long-term interactions. The modularity of the EEVA framework enables it to serve as a testbed for developing agents' social communication models of increasing performance and sophistication (e.g., building rapport, expressing empathy). We assess the framework's feasibility by analyzing the system's response time over the Internet, in the context of a health intervention built with EEVA's authoring functionalities.
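The abstract describes a client-server loop: capture the user's verbal and non-verbal cues in the browser, stream them to an agent server, and render the agent's multimodal response in real time. Below is a minimal sketch of such a loop, assuming a WebSocket transport. It is illustrative only, not EEVA's actual API: the endpoint URL, the UserCues and AgentBehavior message shapes, and the renderAgentBehavior helper are all hypothetical.

```typescript
// Illustrative browser-side loop for a web-based social agent.
// NOT the EEVA API: the endpoint, message shapes, and helper names are
// hypothetical, chosen only to mirror the pipeline the abstract describes.

type UserCues = { transcript?: string; smileIntensity?: number; timestamp: number };
type AgentBehavior = { speech: string; facialExpression: string; gesture?: string };

async function runAgentLoop(): Promise<void> {
  // 1. Real-time perception: request the user's microphone and camera.
  const media = await navigator.mediaDevices.getUserMedia({ audio: true, video: true });
  console.log(`capturing ${media.getTracks().length} media tracks`);

  // 2. Stream extracted verbal/non-verbal cues to the agent server.
  const socket = new WebSocket("wss://example.org/agent"); // hypothetical endpoint

  socket.addEventListener("open", () => {
    // A real system would run speech recognition and facial-expression
    // analysis on the media stream; here we send a placeholder cue.
    const cues: UserCues = { smileIntensity: 0.7, timestamp: Date.now() };
    socket.send(JSON.stringify(cues));
  });

  // 3. Render the agent's multimodal response (speech + facial animation).
  socket.addEventListener("message", (event: MessageEvent<string>) => {
    const behavior: AgentBehavior = JSON.parse(event.data);
    renderAgentBehavior(behavior);
  });
}

// Hypothetical stand-in for a WebGL-based 3D character renderer.
function renderAgentBehavior(behavior: AgentBehavior): void {
  console.log(`agent says "${behavior.speech}" with expression ${behavior.facialExpression}`);
}
```

Under this framing, the response-time analysis the abstract mentions amounts to timestamping each outgoing cue and measuring the round trip until the corresponding behavior is rendered.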




Published In

IVA '19: Proceedings of the 19th ACM International Conference on Intelligent Virtual Agents
July 2019, 282 pages
ISBN: 9781450366724
DOI: 10.1145/3308532
Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the Owner/Author.

Publisher

Association for Computing Machinery, New York, NY, United States


Author Tags

  1. multimodal interaction
  2. real-time virtual counseling
  3. web-based 3D character

Qualifiers

  • Extended-abstract

Conference

IVA '19

Acceptance Rates

IVA '19 Paper Acceptance Rate: 15 of 63 submissions, 24%
Overall Acceptance Rate: 53 of 196 submissions, 27%



Article Metrics

  • Downloads (last 12 months): 70
  • Downloads (last 6 weeks): 26
Reflects downloads up to 15 Oct 2024


Cited By

  • (2022) Multimodal Embodied Conversational Agents: A discussion of architectures, frameworks and modules for commercial applications. 2022 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR), 36-45. DOI: 10.1109/AIVR56993.2022.00013. Online publication date: Dec-2022.
  • (2022) Towards Building Rapport with a Human Support Robot. RoboCup 2021: Robot World Cup XXIV, 214-225. DOI: 10.1007/978-3-030-98682-7_18. Online publication date: 22-Mar-2022.
  • (2021) Development, Feasibility, Acceptability, and Utility of an Expressive Speech-Enabled Digital Health Agent to Deliver Online, Brief Motivational Interviewing for Alcohol Misuse: Descriptive Study. Journal of Medical Internet Research, 23:9, e25837. DOI: 10.2196/25837. Online publication date: 29-Sep-2021.
  • (2021) Multisensor-Pipeline: A Lightweight, Flexible, and Extensible Framework for Building Multimodal-Multisensor Interfaces. Companion Publication of the 2021 International Conference on Multimodal Interaction, 13-18. DOI: 10.1145/3461615.3485432. Online publication date: 18-Oct-2021.
  • (2021) AvaFlow. Proceedings of the 52nd ACM Technical Symposium on Computer Science Education, 1307. DOI: 10.1145/3408877.3439620. Online publication date: 3-Mar-2021.
