DOI: 10.1145/3610978.3640652
Short paper
Open access

Initial Study on Robot Emotional Expression Using Manpu

Published: 11 March 2024

Abstract

In recent years, robots have started to play an active role in many parts of society. To realize a human-robot symbiotic society, robots must be able not only to convey information but also to interact emotionally. Many studies have addressed the emotional expression of robots. However, because robots come in a wide variety of designs, it is difficult to construct a generic expression method, and some robots are not equipped with expression devices such as faces or displays. To address these problems, this research aims to develop technology that enables robots to express emotions using manpu (a symbolic method used in comic books that expresses not only the emotions of humans and animals but also the states of objects) and mixed reality technology. As a first step, we categorize manpu and use large language models to generate manpu expressions according to the dialogue information.
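The abstract describes a pipeline in which a large language model maps dialogue information to a manpu expression. The sketch below is hypothetical, not the authors' code: the category list, the prompt wording, and the `select_manpu` helper are illustrative assumptions, and the OpenAI chat-completion client merely stands in for whichever LLM the authors used.

```python
# Hypothetical sketch: choosing a manpu symbol for a robot utterance with an
# LLM. The taxonomy below is an assumption; the paper's categories may differ.
from openai import OpenAI

MANPU_CATEGORIES = [
    "sweat drop",      # nervousness, embarrassment
    "anger vein",      # irritation, anger
    "sparkle",         # joy, excitement
    "question mark",   # confusion
    "vertical lines",  # gloom, shock
]

def select_manpu(utterance: str, model: str = "gpt-4o-mini") -> str:
    """Ask the LLM to pick the one manpu that best fits a dialogue turn."""
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    prompt = (
        "Pick the one manpu (comic symbol) that best expresses the emotion "
        f'of this robot utterance: "{utterance}"\n'
        f"Answer with exactly one of: {', '.join(MANPU_CATEGORIES)}."
    )
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # deterministic choice for reproducibility
    )
    return response.choices[0].message.content.strip()

# Usage: select_manpu("Oh no, I dropped the cup...") would plausibly return
# "sweat drop", which a mixed reality layer could then render near the robot.
```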

Supplemental Material

MP4 File
Supplemental video



Published In

HRI '24: Companion of the 2024 ACM/IEEE International Conference on Human-Robot Interaction
March 2024
1408 pages
ISBN: 9798400703232
DOI: 10.1145/3610978
This work is licensed under a Creative Commons Attribution 4.0 International License.

Publisher

Association for Computing Machinery

New York, NY, United States


Author Tags

  1. Manpu
  2. comic engineering
  3. comic symbols
  4. human-robot interaction

Qualifiers

  • Short paper

Conference

HRI '24

Acceptance Rates

Overall Acceptance Rate: 268 of 1,124 submissions, 24%
