DOI: 10.1145/3491102.3501920
research-article
Open access

EmoBalloon - Conveying Emotional Arousal in Text Chats with Speech Balloons

Published: 29 April 2022

Abstract

Text chat applications are an integral part of daily social and professional communication. However, messages sent over text chat applications do not convey the sender's vocal or nonverbal cues, and detecting the emotional tone of text-only messages is challenging. In this paper, we explore the effects of speech balloon shapes on sender-receiver agreement about the emotionality of a text message. We first investigated the relationship between the shape of a speech balloon and the emotionality of the speech text in Japanese manga. Based on these results, we created a system that automatically generates speech balloons matching a linear scale of emotional arousal intensity using an Auxiliary Classifier Generative Adversarial Network (ACGAN). Results from a controlled experiment suggested that emotional speech balloons outperform emoticons in decreasing the differences between message senders' and receivers' perceptions of the level of emotional arousal in text messages.
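The abstract only names the generative model. As background, an ACGAN (Odena et al., 2017) extends a standard GAN with an auxiliary classifier: the discriminator predicts both whether a sample is real and which conditioning class it belongs to (here, presumably the target arousal level of the balloon). The scalar sketch below illustrates only the loss bookkeeping of that general formulation, not the authors' implementation; the function name and inputs are hypothetical.

```python
import math

def acgan_losses(d_src_real, d_cls_real_true, d_src_fake, d_cls_fake_true):
    """Scalar sketch of the ACGAN objectives (Odena et al., 2017).

    d_src_real / d_src_fake: discriminator's probability that a real /
        generated sample is real.
    d_cls_real_true / d_cls_fake_true: probability the auxiliary classifier
        assigns to the true conditioning class (e.g. the arousal level).
    All inputs are illustrative scalars, not tensors.
    """
    # L_S: log-likelihood of the correct source (real vs. generated)
    l_source = math.log(d_src_real) + math.log(1.0 - d_src_fake)
    # L_C: log-likelihood of the correct class, for real and generated samples
    l_class = math.log(d_cls_real_true) + math.log(d_cls_fake_true)
    # Discriminator objective: maximize L_S + L_C
    # Generator objective: maximize L_C - L_S
    return l_source + l_class, l_class - l_source
```

Because both networks are rewarded for the class term, the generator is pushed toward samples that the auxiliary classifier recognizes as the intended class, which is what makes class-conditioned generation (such as arousal-conditioned balloon shapes) possible.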

Supplementary Material

MP4 File (3491102.3501920-talk-video.mp4)
Talk Video
MP4 File (3491102.3501920-video-preview.mp4)
Video Preview



    Published In

    CHI '22: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems
    April 2022
    10459 pages
    ISBN:9781450391573
    DOI:10.1145/3491102


    Publisher

    Association for Computing Machinery

    New York, NY, United States

    Badges

    • Best Paper

    Author Tags

    1. emotion
    2. speech balloon
    3. text chat
    4. voice input

    Qualifiers

    • Research-article
    • Research
    • Refereed limited


    Conference

    CHI '22
    Sponsor:
    CHI '22: CHI Conference on Human Factors in Computing Systems
    April 29 - May 5, 2022
New Orleans, LA, USA

    Acceptance Rates

    Overall Acceptance Rate 6,199 of 26,314 submissions, 24%


    Article Metrics

    • Downloads (last 12 months): 1,271
    • Downloads (last 6 weeks): 102
    Reflects downloads up to 03 Oct 2024


    Cited By

    • (2024) Exploring the Effects of Japanese Font Designs on Impression Formation and Decision-Making in Text-Based Communication. IEICE Transactions on Information and Systems E107.D(3), 354–362. https://doi.org/10.1587/transinf.2023HCP0009 (1 Mar 2024)
    • (2024) Metamorpheus: Interactive, Affective, and Creative Dream Narration Through Metaphorical Visual Storytelling. Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems, 1–16. https://doi.org/10.1145/3613904.3642410 (11 May 2024)
    • (2024) Using the Visual Language of Comics to Alter Sensations in Augmented Reality. Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems, 1–17. https://doi.org/10.1145/3613904.3642351 (11 May 2024)
    • (2024) EmoWear: Exploring Emotional Teasers for Voice Message Interaction on Smartwatches. Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems, 1–16. https://doi.org/10.1145/3613904.3642101 (11 May 2024)
    • (2024) Creating Emordle: Animating Word Cloud for Emotion Expression. IEEE Transactions on Visualization and Computer Graphics 30(8), 5198–5211. https://doi.org/10.1109/TVCG.2023.3286392 (1 Aug 2024)
    • (2024) Decoding comics: a systematic literature review on recognition, segmentation, and classification techniques with emphasis on computer vision and non-computer vision. Multimedia Tools and Applications. https://doi.org/10.1007/s11042-024-20214-x (1 Oct 2024)
    • (2024) Hearing with the eyes: modulating lyrics typography for music visualization. The Visual Computer. https://doi.org/10.1007/s00371-023-03239-5 (19 Jan 2024)
    • (2023) Affective Affordance of Message Balloon Animations: An Early Exploration of AniBalloons. Companion Publication of the 2023 Conference on Computer Supported Cooperative Work and Social Computing, 138–143. https://doi.org/10.1145/3584931.3607017 (14 Oct 2023)
    • (2023) "Hey, can we talk?": Exploring How Revealing Implicit Emotional Responses Tangibly Could Foster Empathy During Mobile Texting. Proceedings of the Seventeenth International Conference on Tangible, Embedded, and Embodied Interaction, 1–7. https://doi.org/10.1145/3569009.3573124 (26 Feb 2023)
    • (2023) EmoFlow: Visualizing Emotional Changes in Video Chat - Preliminary Study. Proceedings of the 25th International Conference on Mobile Human-Computer Interaction, 1–7. https://doi.org/10.1145/3565066.3608702 (26 Sep 2023)
