DOI: 10.1145/3563657.3595968

Exploring the Design Space of Extra-Linguistic Expression for Robots

Published: 10 July 2023

Abstract

In this paper, we explore the new design space of extra-linguistic cues inspired by graphical tropes used in graphic novels and animation to enhance the expressiveness of social robots. We identified a set of cues that can be used to generate expressions, including smoke/steam/fog, water droplets, and bubbles, and prototyped devices that can generate these fluid expressions for a robot. We conducted design sessions where eight designers explored the use and utility of these expressions in conveying the robot’s internal states in various design scenarios. Our analysis of the 22 designs, the associated design justifications, and the interviews with designers revealed patterns in how each form of expression was used, how they were combined with nonverbal cues, and where the participants drew their inspiration from. These findings informed the design of an integrated module called EmoPack, which can be used to augment the expressive capabilities of any robot platform.


Cited By

  • (2024) Sprout: Designing Expressivity for Robots Using Fiber-Embedded Actuator. In Proceedings of the 2024 ACM/IEEE International Conference on Human-Robot Interaction, 403–412. https://doi.org/10.1145/3610977.3634983. Online publication date: 11 March 2024.


Published In

DIS '23: Proceedings of the 2023 ACM Designing Interactive Systems Conference
July 2023
2717 pages
ISBN:9781450398930
DOI:10.1145/3563657

Publisher

Association for Computing Machinery, New York, NY, United States


Author Tags

  1. design tools
  2. expressivity
  3. extra-linguistic cues
  4. graphical tropes
  5. human-robot interaction
  6. interaction design

Qualifiers

  • Research-article
  • Research
  • Refereed limited

Conference

DIS '23: Designing Interactive Systems Conference
July 10–14, 2023
Pittsburgh, PA, USA

Acceptance Rates

Overall acceptance rate: 1,158 of 4,684 submissions (25%)

Article Metrics

  • Downloads (last 12 months): 137
  • Downloads (last 6 weeks): 13
Reflects downloads up to 06 Oct 2024

