DOI: 10.1145/3536221.3564027

GENEA Workshop 2022: The 3rd Workshop on Generation and Evaluation of Non-verbal Behaviour for Embodied Agents

Published: 07 November 2022

Abstract

Embodied agents benefit from using non-verbal behavior when communicating with humans. Despite several decades of research on non-verbal behavior generation, the field currently lacks a well-developed benchmarking culture. For example, most researchers do not compare their outcomes with previous work, and when they do, they often use their own evaluation methods, which are frequently incompatible with those of others. With the GENEA Workshop 2022, we aim to bring the community together to discuss key challenges and solutions, and to find the most appropriate ways to move the field forward.



Published In

ICMI '22: Proceedings of the 2022 International Conference on Multimodal Interaction
November 2022, 830 pages
ISBN: 9781450393904
DOI: 10.1145/3536221

Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the Owner/Author.

Publisher

Association for Computing Machinery, New York, NY, United States


      Author Tags

      1. behavior synthesis
      2. datasets
      3. evaluation
      4. gesture generation

      Qualifiers

      • Abstract
      • Research
      • Refereed limited

Conference

ICMI '22

Acceptance Rates

Overall Acceptance Rate: 453 of 1,080 submissions, 42%
