DOI: 10.1145/3686215.3688376
Demonstration

ARCADE: An Augmented Reality Display Environment for Multimodal Interaction with Conversational Agents

Published: 04 November 2024

Abstract

Making interaction with embodied conversational agents accessible in a ubiquitous and natural manner is not only a question of the underlying software; it also poses challenges for the technical system used to display them. To this end, we present our spatial augmented reality system ARCADE, which can be used like a conventional monitor for displaying virtual agents as well as additional content. With its optical see-through display, ARCADE creates the illusion that the agent is present in the room, much like a human interlocutor. The applicability of our system is demonstrated in two different dialogue scenarios, which are included in the video accompanying this paper at https://youtu.be/9nH4c4Q-ooE.

Supplemental Material

MP4 File
Demonstration Video



Information

Published In

ICMI Companion '24: Companion Proceedings of the 26th International Conference on Multimodal Interaction
November 2024
252 pages
ISBN:9798400704635
DOI:10.1145/3686215
Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the Owner/Author.

Publisher

Association for Computing Machinery

New York, NY, United States

Publication History

Published: 04 November 2024

Author Tags

  1. Dialogue Systems
  2. Embodiment
  3. Human-Agent Interaction
  4. Human-Computer Interaction

Qualifiers

  • Demonstration
  • Research
  • Refereed limited

Conference

ICMI '24
Sponsor:
ICMI '24: INTERNATIONAL CONFERENCE ON MULTIMODAL INTERACTION
November 4 - 8, 2024
San Jose, Costa Rica

Acceptance Rates

Overall Acceptance Rate 453 of 1,080 submissions, 42%

