DOI: 10.1145/3544548.3581061 · CHI Conference Proceedings · Research Article · Open Access

Affective Profile Pictures: Exploring the Effects of Changing Facial Expressions in Profile Pictures on Text-Based Communication

Published: 19 April 2023

Abstract

When receiving text messages from unacquainted colleagues in fully remote workplaces, insufficient mutual understanding and limited social cues can lead people to misinterpret the tone of the message and further influence their impression of remote colleagues. Emojis have been commonly used for supporting expressive communication; however, people seldom use emojis before they become acquainted with each other. Hence, we explored how changing facial expressions in profile pictures could be an alternative channel to communicate socio-emotional cues. By conducting an online controlled experiment with 186 participants, we established that changing facial expressions of profile pictures can influence the impression of the message receivers toward the sender and the message valence when receiving neutral messages. Furthermore, presenting incongruent profile pictures to positive messages negatively affected the interpretation of the message valence, but did not have much effect on negative messages. We discuss the implications of affective profile pictures in supporting text-based communication.
Figure 1:
Figure 1: This figure illustrates the message recipient received NEUTRAL (left), HAPPY (middle), and SAD (right) messages from the sender, Ashley. Ashley’s facial expressions in the profile picture changed with the sentiment of the message accordingly. In this study, we investigated the effect of changing facial expressions in the profile pictures on the perception of message receivers when receiving NEUTRAL, POSITIVE, and NEGATIVE messages to explore an alternative space for communicating socio-emotional cues in text-based communication. Results demonstrated that the facial expression of a profile picture affected the impression of the receivers toward the sender and the interpreted emotion of the NEUTRAL and POSITIVE messages. However, the influence of facial expressions on NEGATIVE messages was not salient. (The facial expressions in the example were generated by https://generated.photos/)

1 Introduction

Text messaging is a common approach for communicating with colleagues in fully remote workplaces. People exchange information, make announcements, and coordinate tasks using text-based communication tools such as Slack and Discord. In addition to communicating neutral valenced messages, people also exchange socio-emotional messages for functional and relational purposes [5]. Compared with audio calls and videoconferencing, text messaging has been identified as a less rich communication medium [6, 74]. Because most social cues, such as facial expressions, gestures, body movements, and vocal inflections, cannot be transmitted via text [6, 57], it is easy to misinterpret messages, and people tend to form an intensified impression of the counterpart, such as an overly positive or negative impression [30, 71]. This occurs specifically when people receive messages from unacquainted counterparts. For instance, studies established that limited shared understanding between interlocutors caused misinterpretations when exchanging socio-emotional messages [26, 53].
To address such issues, emojis and stickers have been widely adopted in instant messaging to visualize people’s emotional states. Prior studies have shown that such visualization of emotional states can enrich technology-mediated communication and increase empathy between remote interlocutors [17, 31]. However, people seldom and hesitantly use emojis or stickers with colleagues with whom they are unfamiliar when exchanging messages in workplaces [64]. Furthermore, using emoticons in workplace conversation could adversely affect how competent workers are perceived by their colleagues and further impede their information exchange [27]. Hence, it is worth exploring an alternative channel for communicating socio-emotional cues through text messaging in the workplace.
One of the alternative channels that people can utilize to enrich nonverbal cues in text messaging is the profile picture. In most communication services, such as emails, messengers, and audio/video chats, the user’s profile picture is a common default design feature. Profile pictures enable people to infer the online identities and personalities of others when deciding whether to reach out for subsequent interaction [8, 19, 62]. Therefore, profile pictures can be an effective channel to help receivers better sense senders’ emotions in text messages. For instance, a user can manually change a facial expression before sending a message, or a system can automatically select and generate corresponding facial expressions for text messages [14, 15, 66]. Furthermore, some systems already enable users to show animated facial expressions (e.g., Discord, LINE), and animated expressions have been shown to let receivers recognize certain emotions more accurately than static expressions [37]. However, we still lack an understanding of how facial expressions in profile pictures impact receivers. Therefore, it is worth exploring how changing facial expressions and adding animation in profile pictures influence the perception of receivers when profile pictures can express a variety of emotional cues. In this paper, we aim to investigate ways in which “different facial expressions” and “animation” of profile pictures influence receivers’ impressions toward the sender and interpretation of the message valence in text-based communication. To answer these questions, we conducted controlled online experiments simulating text-based workplace conversations with 186 participants. In the study, we first focused on different combinations of facial expressions and the sentiments of text messages – specifically, when facial expressions in profile pictures are congruent and incongruent with the sentiment of text messages.
We examined how the emotional congruency affects the receivers’ impression of the sender and the received message. Then, we compared static and animated facial expressions and investigated how they affect the recipients differently.
The results indicated that, when communicating NEUTRAL text messages, seeing a happy facial expression in profile pictures made the participants form a more positive impression toward the sender and interpret NEUTRAL messages more positively. Facial expressions in the profile pictures influenced the impression of the receivers toward the senders when receiving NEUTRAL and POSITIVE messages, but not NEGATIVE messages. Additionally, incongruent facial expressions in the profile pictures mitigated the perceived valence of the affective messages, regardless of whether the facial expressions were animated or static.
The contribution of this research is twofold. First, we conducted an empirical study to clarify the effect of changing facial expressions in profile pictures on the perception of receivers when receiving both neutral and affective messages. Second, we discussed the challenges and opportunities for people to change facial expressions in the profile pictures in text-based communication. The current understanding may inspire ways in which computer-mediated communication (CMC) designers use the space of profile pictures to support affective communication.

2 Background and Related Work

Here, we first describe the challenges in interpreting text messages and ways in which misinterpreting messages influences impression formation in text-based communication. Second, we discuss why the space of the profile picture could be an alternative channel for supporting affective communication. Finally, we review related work on affective communication to motivate why we focused on examining “facial expressions” and “animation” of profile pictures in text-based communication.

2.1 Challenges in Communicating with Unacquainted Collaborators in Text-Based Communication

2.1.1 Misinterpretation of the Sentiment of Text Messages.

Misunderstanding among unacquainted interlocutors occurs in text-based communication, owing to the absence of nonverbal cues, the lack of immediate feedback, and insufficient mutual understanding [63]. The difficulty of text-based communication is compounded when sending and receiving messages that contain affect. Although emoticons and stickers have been proposed to supplement affective expressions, people interpret emoticons differently in text-based communication [12, 48, 69]. When communicating emotions in a medium as lean as text, people who do not share a common language or cultural background can perceive the emotion of the message differently [26, 32].

2.1.2 Challenges in Forming an Impression of Message Senders.

Because people cannot see their remote counterparts in text-based communication, it is challenging for them to correctly understand or recognize the intentions and emotions of their interlocutors, which can impact impression formation [29]. Insufficient social presence in text-based communication can cause receivers to form exaggerated perceptions of their remote counterparts, either positive or negative [71]. For instance, it was established that message receivers perceived negative emails with intensified negativity when they perceived only a limited social presence of the message senders, and they tended to regard impersonal and demanding emails as signs of the senders’ hostile intent [65]. Unlike face-to-face interactions, in which people can directly observe an interlocutor’s rich nonverbal cues, message receivers can only form impressions of senders based on textual information when interacting over text. Therefore, receivers tend to automatically fill in missing information based on the available social cues they receive from texts [67, 71]. Consequently, in text-based communication, people describe their remote interlocutors in less detail but with more polarized attributions (e.g., extremely positive or negative characteristics) than in face-to-face interactions [30, 35]. Taken together, this evidence suggests that how messages are interpreted plays a part in forming impressions of the sender. Hence, we aim to determine ways to emphasize social cues to support impression formation in text-based communication.

2.1.3 Affective Communication in the Workplace.

In workplace communication, authentic and empathic communication is critical for forming good impressions of each other [28], building good relationships, shaping a good social climate [5], and facilitating constructive discussions [40]. Colleagues exchange task-oriented messages for instrumental purposes, including task coordination and information sharing. In addition to the instrumental function, colleagues exchange affective messages to fulfill a relational purpose [75], including showing friendliness and humor to colleagues [22], sharing personal activities and thoughts with each other to strengthen social connection, and facilitating social integration [79]. However, colleagues seldom use affective artifacts such as emojis and stickers to express their emotions during the early stage of text-based interaction with unfamiliar colleagues; a recent study demonstrated that new collaborators were uncertain about whether using emojis with unacquainted teams in virtual workplaces could harm their perceived professionalism [64]. Therefore, in addition to using affective artifacts such as emojis, emoticons, or stickers, it is worth exploring whether the space of the profile picture can be used to support remote workers in being more expressive when communicating both nonaffective (neutral valenced) and affective (positive and negative valenced) text messages with their unacquainted colleagues.

2.2 Social Cues in Profile Pictures

When interacting with people online, profile pictures are one of the main sources from which people perceive social presence [54] and infer others’ personality traits [62], credibility [55], and online identities [33]. Even a simple profile picture contains various social cues, such as the facial expression of the sender and the background image behind the portrait. Based on these social cues, people make trait inferences, such as whether the profile owner is agreeable or neurotic [62], and form various impressions [73]. On a professional social network site, the perceived credibility of profile pictures can further determine the initiation of social interactions [19]. In addition to social network sites, studies on email communication have shown that profile avatars could affect recipients’ perceptions of the senders’ personalities and their quality evaluation of the email content [29]. Although various studies have investigated the effects of profile pictures on impression formation on social network sites and in email, few have explored the impact of profile pictures on forming impressions toward senders and interpreting the affect of messages in text-based communication tools such as Slack, Discord, or Skype. In online collaborative spaces, such as messaging tools (e.g., Slack) and remote collaborative workspaces (e.g., Google Docs), the profile picture is a common default feature. Therefore, we aim to investigate the impact of profile pictures on ways in which remote colleagues perceive text messages.

2.3 Expressing Emotion Through Profile Pictures in CMC

People rely on facial expressions and nonverbal behavior to infer interlocutors’ emotions [4, 18]. The gaze, eyebrow movements, shape of the mouth, and frequency of eye blinking serve as rich social cues that help people gain mutual understanding, empathize, and infer the emotions of the communicator. Such mutual understanding and empathy lead to effective and healthy communication in virtual teams [40] and boost trust among interlocutors [23]. However, this information is unavailable in text-based communication. Various approaches have been proposed to support affective communication in text messaging, for instance, automatically detecting the facial expression of a sender and triggering corresponding emoticons on Slack [43], or changing the shape of a speech bubble based on detected voice input in text chat [2]. Despite these efforts, people seldom use emojis with unacquainted colleagues. Consequently, we aimed to explore ways in which different facial expressions in profile pictures influence the perception of message receivers. By using the facial expression of a profile owner to show socio-emotional cues, we can help people communicate emotions through text messages.

2.3.1 Effects of Congruent/Incongruent Affective Artifacts.

In most communication tools, current practice has people keep the same profile picture while sending text messages. However, the facial expression people see in profile pictures can be congruent or incongruent with the sentiment of the message in workplace conversations – for instance, receiving encouragement paired with a happy facial expression in the profile picture, or receiving an angry warning paired with a happy facial expression. How the socio-emotional cues seen in facial expressions in profile pictures, and their congruency or incongruency with the message, influence the perception of receivers in text-based communication remains unclear.
Previous studies have shown that seeing an enhanced smile on a remote counterpart’s avatar is effective in facilitating people to use positive words and experience positive affect during conversations in virtual environments [56]. Seeing a counterpart’s happy facial expression leads to better remote discussion than a sad expression in videoconferencing [51]. In contrast, incongruency between facial expressions and body language was found to cause confusion in communication [77]. However, to the best of our knowledge, there have been few empirical investigations into the effects of emotional congruency between different facial expressions and text messages on impression formation and message interpretation. Therefore, we examined the effect of congruent/incongruent facial expressions in profile pictures on receivers’ perception when receiving neutral, positive, and negative valenced messages.

2.3.2 Effects of Animated Affective Artifacts.

Many commercialized communication services enable users to upload animations as profile pictures in a graphics interchange format (GIF). Users can display animated facial expressions or actions in their profile pictures when exchanging text messages. Studies have shown that people can better recognize emotions when facial expressions move dynamically than when they are static [37]. CMC research has also revealed that adding animation to emojis enables message senders to be creative and expressive when constructing messages [1]. People use animation to add new meanings to animated emojis [1], express emotions (e.g., comfort, support, and sympathy), and increase the depth of conversation [58]. Animating emoticons was found effective in enhancing the sender’s social presence in a remote learning context [68]. However, the effects of animated profile pictures on the reception of text messages have not been explored. Thus, we examined whether animated and static profile pictures would have different effects when receiving text messages.
The objective of this study is to investigate the effect of profile pictures on receivers’ perceptions when receiving neutral and affective messages by examining the following research questions:
RQ1:
How does changing the facial expressions in profile pictures influence receivers’ impression toward the sender?
RQ2:
How does changing the facial expressions in profile pictures influence ways in which receivers interpret the emotion of the messages?

3 Hypotheses

Figure 2:
Figure 2: Layout of the affective profile picture and text messages participants saw in the experiment. Note that part of the face image shown in the figure is masked due to restrictions on the face image dataset we used. In the experiment, the participants saw the unmasked facial expression. The translation of the message is: “They also told us that almost all reviewers favored this research proposal and looked forward to seeing our progress.”
To answer our research questions, RQ1 and RQ2, regarding the effect of changing facial expressions in profile pictures on impression formation by the receiver (RQ1) and the interpretation of the message valence (RQ2), we formed two sets of hypotheses, hypothesis 1 (H1), and hypothesis 2 (H2). The first set of hypotheses examines the impression a receiver develops toward the sender when s/he sees the affective facial expression in the sender’s profile picture (RQ1). According to previous studies, the impressions people form about a message sender can be influenced by the valences of the accompanying social cues. For instance, receivers associated the frequent use of positive emoticons with the perceived extroversion of the message sender [25], and when including a smiley emoticon in emails, a receiver formed a favorable impression of the email sender [11]. Also, the use of a positive emoji was found effective in increasing the perceived warmth of message senders, whereas a negative emoji could be linked with the senders’ negative mood [7]. Based on previous empirical findings, we formulated the following three hypotheses:
H1a:
Positive facial expressions in profile pictures make receivers form more positive impressions toward the senders than negative facial expressions when receiving NEUTRAL messages.
H1b:
Congruent facial expressions in POSITIVE messages make receivers form more positive impressions toward the senders than incongruent facial expressions.
H1c:
Congruent facial expressions in NEGATIVE messages make receivers form more negative impressions toward the senders than incongruent facial expressions.
Furthermore, past studies have shown that dynamic facial movements influence the perceived trustworthiness of a person [41] and rapport building [50]. Therefore, we hypothesized that nuanced movements in social cues, such as the movement of the eyebrows and the shape of the mouth and eyes, could be highlighted using animated profile pictures. The following two hypotheses were formulated:
H1d:
Animated rather than static facial expressions intensify receivers’ impressions toward the senders when receiving NEUTRAL messages.
H1e:
Animated facial expressions intensify receivers’ impressions toward the senders more than static facial expressions when receiving affective messages.
The second set of hypotheses investigates the effects of affective profile pictures on receivers’ interpretation of the message valence (RQ2). Related work revealed that people perceived greater positivity when seeing a positive message with a happy emoji than when seeing the verbal message alone [69]. When verbal and nonverbal cues were simultaneously present, people could detect emotions better than with verbal messages alone, possibly because nonverbal cues emphasize or substitute for the verbal content [16]. Therefore, we hypothesized that when affective messages are presented with congruent facial expressions, receivers perceive enhanced positive and negative emotions toward positive and negative messages, respectively. Conversely, when verbal cues are incongruent with nonverbal cues, a previous study demonstrated that seeing opposite-meaning emoticons made receivers form varied perceptions of the text messages [38]. Hence, we hypothesized that incongruent facial expressions for positive and negative messages may reduce perceived positivity and negativity compared with congruent ones. Note that, in our study, we selected five types of workplace messages – neutral, happy (positive), relaxed (positive), angry (negative), and sad (negative) – because they represent the diverse emotions expressed and experienced in the workplace [3, 45].
Consequently, we formulated the following hypotheses:
H2a:
Positive facial expressions in profile pictures make receivers interpret NEUTRAL messages with more positive emotions than negative ones.
H2b:
Congruent facial expressions in profile pictures make receivers interpret POSITIVE messages with more positive emotions than incongruent ones.
H2c:
Congruent facial expressions in profile pictures make receivers interpret NEGATIVE messages with more negative emotions than incongruent profile pictures.
With regard to animated facial expressions, prior studies have shown that animated emoticons or stickers enable communicators to be more expressive [1, 58], which in turn leads receivers to interpret stronger emotions from the messages. Thus, we formulated two hypotheses for the effect of animated facial expressions on interpreting NEUTRAL and affective (POSITIVE and NEGATIVE) messages:
H2d:
Animated facial expressions intensify receivers’ emotional interpretation of NEUTRAL messages more than static ones.
H2e:
Animated facial expressions intensify receivers’ emotional interpretation of affective messages more than static ones.

4 Study Design

The study context was designed as workplace communication with text messages between two unacquainted remote colleagues who have never met in-person. We focused on the perspective of message receivers and assigned each participant the role of a company employee who received messages from a new hire. Participants were instructed with a scenario in which a newcomer joined their team at the same job position. The newcomer was a pseudocharacter created to allow participants to imagine a social interaction. We instructed the participants that they would receive several text messages from the newcomer in the experiment. We prepared static and animated profile pictures with five types of facial expressions, neutral, happy, relaxed, angry, and sad for both male and female faces (details are in section 4.1). We also generated five types of messages, NEUTRAL, HAPPY, RELAXED, ANGRY, and SAD, before the study was conducted (details are in section 4.2). We delivered combinations of a message and a profile picture (Figure 2) to the participants via several HTML pages. We instructed the participants to read each message page by page and answer the corresponding survey questions after reading each message.

4.1 Different Facial Expressions of the Profile Pictures

We prepared both static and animated facial expressions with male and female faces and used them as profile pictures (Figure 3). For static facial expressions, we adopted the neutral, happy, relaxed, angry, and sad facial expressions from a Japanese facial dataset [24]. The five facial expressions were chosen to match the sentiment of the message, which is described in the next subsection (section 4.2). The facial dataset validated the perceived emotions and corresponding facial expressions of Japanese participants [24]. We used Japanese facial expressions as profile pictures to fit the cultural background of the potential participant pool. Because studying cultural factors was beyond the scope of this study, we did not use human portraits of international populations, to avoid any effect caused by viewing people from unfamiliar cultural backgrounds.
For animated facial expressions, we combined one neutral and one emotional facial expression (e.g., happy or sad). We used the GIF to repeatedly switch between the two facial expressions. In one cycle, the neutral facial expression lasted 200 ms and the emotional facial expression lasted 500 ms (Figure 3). Because our goal was not to find the optimized matching of the animated duration of different facial expressions, the length of the animated duration was determined by one of the authors through a pilot test. The relative sizes and layouts of the profile picture and the text messages are shown in Figure 2.
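The alternating animation described above can be reproduced with a short script. The following sketch is our own illustration, not the authors’ code: it uses Pillow to build a looping two-frame GIF in which a neutral frame is shown for 200 ms and an emotional frame for 500 ms per cycle. Solid-color images stand in for the face photos, which we cannot reproduce here.

```python
from PIL import Image

# Stand-in frames (solid colors); in the study these would be the neutral
# and emotional face images taken from the facial dataset.
neutral = Image.new("RGB", (128, 128), "gray")
emotional = Image.new("RGB", (128, 128), "gold")

# One cycle: neutral shown for 200 ms, emotional expression for 500 ms,
# repeating indefinitely (loop=0).
neutral.save(
    "animated_profile.gif",
    save_all=True,
    append_images=[emotional],
    duration=[200, 500],  # per-frame display time in milliseconds
    loop=0,
)
```

GIF frame delays are stored in centiseconds, so durations that are multiples of 10 ms, such as the 200/500 ms cycle here, are preserved exactly.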
Figure 3:
Figure 3: There were five female and male facial expressions, including happy, relaxed, angry, sad, and neutral. These facial expressions were generated by https://generated.photos/ due to limited access to the actual face images we used in the experiment. Examples of the actual facial expressions we used can be accessed via: https://www.tandfonline.com/doi/suppl/10.1080/02699931.2017.1419936?scroll=top

4.2 Generating Neutral and Affective Messages

We prepared five types of workplace messages: NEUTRAL, HAPPY, RELAXED, ANGRY, and SAD. We referred to Russell’s circumplex model [61] and intended to include balanced affects from the model, that is, NEUTRAL messages, positive-high arousal (HAPPY) messages, positive-low arousal (RELAXED) messages, negative-high arousal (ANGRY) messages, and negative-low arousal (SAD) messages. We prepared five different messages for each message type, resulting in 25 messages. The five messages within each message type differed in their topics; multiple topics were included to increase topical variance and reduce the possible influence of any single topic. All messages used in the experiment can be found in the supplementary materials. Each participant was presented with the 25 messages in random order with an assigned profile picture.
All the messages were generated by one of the authors, who referred to common workplace conversation topics people shared online, and were then finalized with the second author. After that, we adjusted the language and expressions by referring to [3, 45]. Additionally, we asked five external raters to label the perceived valence and arousal after reading each text message without any accompanying profile picture. The Kappa score among all five raters was 0.62 (substantial agreement [42]) and the percent agreement was 0.81 [47].
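As a minimal sketch of how such a multi-rater agreement figure can be computed (on hypothetical ratings, not the study’s actual data), the function below implements Fleiss’ kappa, the standard chance-corrected agreement statistic for a fixed number of raters labeling each item:

```python
from collections import Counter

def fleiss_kappa(ratings):
    """Fleiss' kappa for a list of items, where each item is a list of
    category labels (one label per rater). Returns 1.0 for perfect
    agreement; values near 0 indicate chance-level agreement."""
    n_items = len(ratings)
    n_raters = len(ratings[0])
    counts = [Counter(item) for item in ratings]

    # Mean observed agreement: per item, the proportion of rater pairs
    # that chose the same category.
    p_bar = sum(
        (sum(c * c for c in cnt.values()) - n_raters)
        / (n_raters * (n_raters - 1))
        for cnt in counts
    ) / n_items

    # Expected agreement by chance, from overall category proportions.
    totals = Counter()
    for cnt in counts:
        totals.update(cnt)
    p_e = sum((v / (n_items * n_raters)) ** 2 for v in totals.values())

    return (p_bar - p_e) / (1 - p_e)

# Hypothetical example: five raters labeling the valence of four messages.
example = [
    ["positive"] * 5,
    ["negative"] * 4 + ["neutral"],
    ["neutral"] * 5,
    ["positive"] * 3 + ["neutral"] * 2,
]
kappa = fleiss_kappa(example)
```

Values of 0.61–0.80 are conventionally read as “substantial agreement” following Landis and Koch, which is the interpretation the paper cites for its 0.62.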

4.3 Experimental Conditions

We designed two experiments, one for NEUTRAL messages and another for affective messages, but merged them into one online study when collecting data.
For the NEUTRAL message experiment, we used a two-by-two factorial design with one baseline comparison, in which the effects of the “facial expression” (positive/negative) and “animation” (animated/static) factors on receivers’ perception were compared. The five conditions were “Pos-Animated-PP”, “Neg-Animated-PP”, “Pos-Static-PP”, “Neg-Static-PP”, and “Neutral-Static-PP”, where PP denotes “profile picture”. We showed participants NEUTRAL messages accompanied by an animated happy face, an animated angry face, a static happy face, a static angry face, and a static neutral face, respectively (Figure 4, left side). We used happy and angry facial expressions because they have the highest arousal in positive and negative valence according to Russell’s circumplex model [61], and they were also commonly used to represent positive and negative emotions in related works (see [13, 36]).
For the affective message experiment, we also used a two-by-two factorial design with one baseline comparison, in which the effects of the “facial expression” (congruent/incongruent) and “animation” (animated/static) factors of profile pictures on receivers’ perception were compared. The emotion of the facial expression in each condition was also decided based on Russell’s circumplex model [61]. We selected emotions that display similar arousal levels but opposite valence in order to control for the potential influence of arousal levels. For instance, happy and angry are both categorized as high arousal, where happy is positive-valenced while angry is negative-valenced. Therefore, we paired a happy face with HAPPY messages in the congruent condition, whereas we paired an angry face with HAPPY messages in the incongruent condition. There were five conditions: “Congruent-Animated-PP”, “Incongruent-Animated-PP”, “Congruent-Static-PP”, “Incongruent-Static-PP”, and “Neutral-Static-PP”. In the Congruent-Animated-PP condition, we paired an animated happy facial expression with happy messages, an animated relaxed facial expression with relaxed messages, an animated angry facial expression with angry messages, and an animated sad facial expression with sad messages. In the Incongruent-Animated-PP condition, we paired an animated angry facial expression with happy messages, an animated sad facial expression with relaxed messages, an animated happy facial expression with angry messages, and an animated relaxed facial expression with sad messages. In the Congruent-Static-PP condition, we paired a static happy facial expression with happy messages, a static relaxed facial expression with relaxed messages, a static angry facial expression with angry messages, and a static sad facial expression with sad messages.
In the Incongruent-Static-PP condition, we paired a static angry facial expression with happy messages, a static sad facial expression with relaxed messages, a static happy facial expression with angry messages, and a static relaxed facial expression with sad messages. In the Neutral-Static-PP condition, which served as a baseline comparison, we paired a static neutral facial expression with HAPPY, RELAXED, ANGRY, and SAD messages (Figure 4, left side).
We examined the effects of these two factors on NEUTRAL, POSITIVE (HAPPY and RELAXED), and NEGATIVE (ANGRY and SAD) messages. Notably, we did not compare across messages; thus, they were not considered a factor.
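The congruent, incongruent, and baseline pairings described above amount to a simple lookup table. The sketch below is our own summary of that mapping (the condition and message names follow the paper; the code itself is illustrative, not the authors’):

```python
# Facial expression paired with each affective message type, per condition.
# The animated and static variants of each condition use the same mapping;
# only the presentation (GIF vs. still image) differs.
EXPRESSION_FOR = {
    "congruent":   {"happy": "happy", "relaxed": "relaxed",
                    "angry": "angry", "sad": "sad"},
    # Incongruent pairs swap valence while keeping a similar arousal level:
    # happy<->angry are high arousal; relaxed<->sad are low arousal.
    "incongruent": {"happy": "angry", "relaxed": "sad",
                    "angry": "happy", "sad": "relaxed"},
    # Baseline (Neutral-Static-PP): a neutral face for every message type.
    "baseline":    {"happy": "neutral", "relaxed": "neutral",
                    "angry": "neutral", "sad": "neutral"},
}

def profile_expression(condition, message_type):
    """Return the facial expression shown for a given message type
    under a given facial-expression condition."""
    return EXPRESSION_FOR[condition][message_type]
```

Swapping within an arousal band (rather than pairing arbitrary opposites) is what keeps arousal roughly constant across the congruent and incongruent conditions, isolating valence congruency as the manipulated factor.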
Figure 4:
Figure 4: Participants were randomly assigned to one of five groups. Names of conditions for NEUTRAL, POSITIVE and NEGATIVE messages are highlighted in bold. The italic shows the exact profile pictures for the corresponding messages. Note that we did not compare across NEUTRAL, POSITIVE and NEGATIVE messages. PP denotes “profile picture”.

4.4 Procedure

We randomly assigned participants to five groups to read NEUTRAL and affective messages. Figure 4 shows how we combined the two experiments for NEUTRAL and affective messages. For instance, a participant who was assigned to Positive-Static-PP for the NEUTRAL message experiment was assigned to Congruent-Static-PP for the affective message experiment. In other words, every participant received both NEUTRAL and affective messages, and the presentation order of the messages was randomized. For instance, one participant might read ANGRY, NEUTRAL, HAPPY, RELAXED, and SAD messages in the assigned condition, whereas another participant in the same condition might read them in a different order. There were approximately 35 participants in each condition for NEUTRAL and affective messages (between-subjects design). The slightly imbalanced number of participants per condition was due to the removal of incomplete or repeated participation within each condition.
The participants first read the study instructions and their rights as participants, and then agreed to participate. Next, they read an introduction of the message sender, which stated that the sender was a newcomer who had just joined their team and worked with the participant in the company. Participants were randomly assigned to one of the five groups and read 25 messages in random order. In each condition, half of the participants read and rated messages from a male newcomer, whereas the other half received messages from a female newcomer. Each message was paired with the corresponding (male or female) facial expression in the profile picture. After reading each message, participants selected their perceived emotion labels for the message and rated their impression of the message sender (Figure 5).

4.5 Participants

One hundred eighty-six online participants (65 females, 120 males, and one preferring not to specify their gender) were recruited from a local Japanese participant recruiting platform. The average age of the participants was 43.15 (SD = 9.6). The participants completed a 25- to 30-min online task directly via a URL embedded in the study post, which redirected them to an HTML page for the experiment. The participants were paid approximately 4.1 USD for participation; this amount was determined based on the standard hourly wage in the country in which the study was conducted. This study was approved by the ethical review board of the institute.
Figure 5:
Figure 5: Procedure of the online experiment.

4.6 Measurements and Analysis

Here, we describe the measurements performed and their analysis methods.

4.6.1 Perceived Likeability of Senders.

We adopted and adapted the survey in [60] to evaluate participants’ perceived likeability of their unacquainted message senders as an index of their impressions of the senders. The participants reported their agreement with seven statements on a seven-point Likert scale, where 1 indicates strong disagreement and 7 indicates strong agreement. Example statements were “This person is friendly” and “This person is likeable” (Table 1). All questions were translated into Japanese for the survey. We averaged the scores from the seven items and used the averaged score as an index of the participants’ impressions of the message sender.

4.6.2 Perceived Emotion of the Messages.

We also instructed the participants to select the emotion labels they perceived from each text message when reading it with the assigned profile picture. We provided 13 emotion labels: six positive emotions, six negative emotions, and one neutral label [59, 61]. Each selected positive emotion added one point to the score, each selected negative emotion subtracted one point, and the neutral label contributed zero points. The valence of each label was assigned with reference to Russell’s circumplex model [61]. The participants could select multiple emotion labels, and the summed score served as an index of the perceived emotion of each message. The full list of emotion labels was “surprised (+1),” “excited (+1),” “happy (+1),” “satisfied (+1),” “calm (+1),” “relaxed (+1),” “tired (-1),” “sad (-1),” “depressed (-1),” “annoyed (-1),” “angry (-1),” “nervous (-1),” and “neutral (+0)”. Because each of the NEUTRAL, HAPPY, RELAXED, ANGRY, and SAD chunks contained five messages, and each message allowed up to six positive or six negative labels, the summed score per chunk ranged from -30 (the most negative) to +30 (the most positive).
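The scoring scheme above can be stated compactly in code. A minimal sketch (function names are ours; label names follow the paper):

```python
# Sketch of the valence scoring described above; label names follow the paper.
POSITIVE = {"surprised", "excited", "happy", "satisfied", "calm", "relaxed"}  # +1 each
NEGATIVE = {"tired", "sad", "depressed", "annoyed", "angry", "nervous"}       # -1 each
# "neutral" (and only "neutral") contributes 0.

def message_score(selected_labels):
    """Score one message: +1 per positive label, -1 per negative, 0 for neutral."""
    return sum(1 if l in POSITIVE else -1 if l in NEGATIVE else 0
               for l in selected_labels)

def chunk_score(per_message_labels):
    """Sum over the five messages of one chunk; ranges from -30 to +30."""
    return sum(message_score(labels) for labels in per_message_labels)
```

Selecting all six positive labels for all five messages of a chunk yields the maximum of +30; all six negative labels yields the minimum of -30.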
Table 1:
Question
1) This person is friendly
2) This person is likeable
3) This person is warm
4) This person is approachable
5) I would ask this person for advice
6) I would like to be friends with this person
7) This person is knowledgeable
Table 1: Questions for evaluating impressions toward the senders

5 Results

Here, we report our analysis of the perceived impressions of the participants toward the message sender (RQ1) and the perceived emotion of the messages (RQ2). Shapiro-Wilk tests showed that the dependent variables in each comparison were not normally distributed. Therefore, we performed two-way non-parametric inferential statistical tests with the aligned-rank transform (ART) [76] and pairwise comparisons with p-values adjusted using Tukey’s method [21] to test our hypotheses. The two factors in the two-way ANOVA were “emotion congruency” and “animation”. To include the comparison with neutral facial expressions in the profile picture, we performed one-way non-parametric inferential statistical tests with ART and post-hoc contrast tests with Tukey’s p-value adjustment to answer the research questions. The results of the two-way analysis of variance (ANOVA) with ART are presented in the first paragraph of each subsection below, whereas the results of the one-way ANOVA with ART are presented in the second paragraph of each subsection and in the figures below.
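The ART procedure first aligns the responses for each effect of interest, then ranks the aligned values and runs a standard factorial ANOVA on the ranks, interpreting only the effect each column was aligned for. As a rough illustration of the alignment step only (our own sketch following the general ART recipe, not the authors’ analysis code, which would more likely use an existing package such as R’s ARTool):

```python
# Sketch of the alignment step of the aligned-rank transform (ART) for a
# two-factor design. This is an illustration, not the authors' analysis code.
import pandas as pd
from scipy.stats import rankdata

def art_aligned_ranks(df, y, a, b):
    """Return df with columns of ranked responses aligned for factor A,
    factor B, and the A:B interaction. A factorial ANOVA is then run on each
    ranked column, interpreting only the effect it was aligned for."""
    grand = df[y].mean()
    mean_a = df.groupby(a)[y].transform("mean")       # marginal means of A
    mean_b = df.groupby(b)[y].transform("mean")       # marginal means of B
    mean_cell = df.groupby([a, b])[y].transform("mean")  # cell means
    resid = df[y] - mean_cell                         # residuals
    out = df.copy()
    # Aligned response = residual + estimated effect of interest.
    out["aligned_A"] = resid + (mean_a - grand)
    out["aligned_B"] = resid + (mean_b - grand)
    out["aligned_AB"] = resid + (mean_cell - mean_a - mean_b + grand)
    for c in ("aligned_A", "aligned_B", "aligned_AB"):
        out["rank_" + c[8:]] = rankdata(out[c])       # midranks for ties
    return out
```

In a balanced design, each aligned column sums to (approximately) zero, which is a quick sanity check on the alignment.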

5.1 Perceived Impression toward the Sender (RQ1)

5.1.1 When Receiving NEUTRAL Messages (H1a, H1d).

A two-way ANOVA with ART revealed no statistically significant interaction between the effects of facial expression and animation of profile pictures on participants’ impression of the sender when receiving NEUTRAL messages (F[1, 141] = 0.43, n.s.). The analysis of the main effects showed that facial expression had a statistically significant effect on the impression evaluation (F[1, 141] = 61.15, p < .001, \(\eta _{p}^{2}\) = 0.3). When a happy facial expression in the profile picture (M = 4.83, SD = 0.86) appeared next to NEUTRAL messages, participants formed a significantly more positive impression of the sender than when an angry facial expression appeared (M = 3.44, SD = 1.2). Therefore, the result supports H1a, which states that happy facial expressions make receivers form more positive impressions of the senders than angry facial expressions when receiving NEUTRAL messages. In contrast, we did not find support for H1d, which states that animation intensifies receivers’ impression of the sender when receiving NEUTRAL messages: the animation of the profile pictures did not have a statistically significant effect on the impression evaluation of the sender (F[1, 141] = 0.63, n.s.).
To further investigate RQ1 regarding how different facial expressions in profile pictures differ from the neutral baseline, we applied a one-way ANOVA with ART. The result indicated that, when receiving NEUTRAL messages, different facial expressions had a significant effect on participants’ impression of the sender (F[4, 181] = 16.99, p < .001, \(\eta _{p}^{2}\) = 0.27). Post-hoc comparisons showed that participants formed more positive impressions of the sender when NEUTRAL messages were paired with an animated happy facial expression than with a neutral facial expression, and more positive impressions when seeing a happy facial expression than an angry one (Figure 6).
Figure 6:
Figure 6: Participants’ impression of the sender when receiving NEUTRAL messages, shown as a box plot. The horizontal black line in each box denotes the median, and the black triangle denotes the mean. The blue boxes show happy facial expressions, the yellow boxes angry facial expressions, and the gray box a neutral facial expression. The y-axis shows participants’ ratings of the sender’s impression; a higher score indicates a more positive impression. The x-axis shows all conditions compared using one-way ANOVA. When a happy facial expression in the profile picture was paired with a NEUTRAL message, participants formed a significantly more positive impression of the sender than when seeing angry or neutral facial expressions, for both static and animated profile pictures. Thus, happy and angry facial expressions in profile pictures changed participants’ impression of the sender when receiving NEUTRAL messages. The significant differences between conditions were derived from post-hoc analysis after one-way ANOVA with ART.

5.1.2 When Receiving POSITIVE Messages (H1b, H1e).

When receiving POSITIVE messages that conveyed HAPPY or RELAXED emotions, the two-way ANOVA with ART revealed no statistically significant interaction between the effects of facial expression and animation of profile pictures on participants’ impression of the sender (HAPPY messages: F[1, 141] = 0.23, n.s.; RELAXED messages: F[1, 141] = 0.83, n.s.). The analysis of the main effects demonstrated that facial expression had a statistically significant effect on the impression evaluation (HAPPY messages: F[1, 141] = 39.53, p < .001, \(\eta _{p}^{2}= 0.22\); RELAXED messages: F[1, 141] = 41.75, p < .001, \(\eta _{p}^{2}= 0.23\)). When a congruent positive facial expression (happy profile picture: M = 5.39, SD = 0.71; relaxed profile picture: M = 5.2, SD = 0.68) appeared next to POSITIVE messages (e.g., a relaxed face with RELAXED messages), participants formed a significantly more positive impression of the sender than when an incongruent negative facial expression appeared (e.g., a sad face with RELAXED messages) (angry profile picture: M = 4.28, SD = 1.32; sad profile picture: M = 4.14, SD = 1.1). Hence, we found support for H1b, which states that congruent facial expressions in POSITIVE messages make receivers form more positive impressions of the senders than incongruent facial expressions. Again, we did not find support for H1e, which states that animated facial expressions intensify receivers’ impressions of the senders more than static facial expressions when receiving affective messages: the animation of profile pictures did not have a statistically significant effect on participants’ impression evaluation when receiving POSITIVE messages (HAPPY messages: F[1, 141] = 0.12, n.s.; RELAXED messages: F[1, 141] = 0.30, n.s.).
Additionally, we performed a one-way ANOVA with ART to further address RQ1, comparing different facial expressions in profile pictures with the neutral baseline when receiving POSITIVE messages. Different facial expressions in profile pictures had a significant effect on participants’ impressions of the sender (HAPPY messages: F[4, 181] = 10.26, p < .001, \(\eta _{p}^{2} = 0.18\); RELAXED messages: F[4, 181] = 11.54, p < .001, \(\eta _{p}^{2} = 0.20\)). Beyond the finding that participants formed more positive impressions of the sender when positive (happy and relaxed) facial expressions were paired with POSITIVE messages than negative (angry and sad) ones, participants also formed more negative impressions of the sender when negative facial expressions were paired with POSITIVE messages compared with neutral facial expressions, regardless of whether the profile pictures were animated (Figure 7 A and B for HAPPY and RELAXED messages, respectively).
Figure 7:
Figure 7: Participants’ impression of the sender when receiving POSITIVE (A for HAPPY, B for RELAXED) and NEGATIVE (C for ANGRY, D for SAD) messages, shown as box plots. The horizontal line in each box denotes the median, and the black triangle denotes the mean. The y-axis shows participants’ ratings of the sender’s impression; a higher score indicates a more positive impression. The blue boxes show conditions in which the facial expression in the profile picture is congruent with the valence of the message (e.g., a SAD message with a sad expression), whereas the yellow boxes show incongruent conditions. The gray box shows a profile picture with a neutral expression. The significant differences between conditions were derived from post-hoc analysis after one-way ANOVA with ART. Figures A and B show that seeing incongruent profile pictures (angry and sad facial expressions) next to POSITIVE messages made participants form significantly more negative impressions of the senders than seeing congruent profile pictures. Figures C and D show that, regardless of the facial expression and whether the profile picture was animated, the profile picture had no significant effect on participants’ impression of the sender when receiving NEGATIVE messages. (** p < 0.01 and *** p < 0.001)

5.1.3 When Receiving NEGATIVE Messages (H1c, H1e).

When receiving NEGATIVE messages that conveyed ANGRY and SAD emotions, surprisingly, the two-way ANOVA with ART revealed neither a statistically significant interaction between the effects of facial expression and animation of profile pictures (ANGRY messages: F[1, 141] = 1.05, n.s.; SAD messages: F[1, 141] = 2.1, n.s.) nor main effects of facial expression (ANGRY messages: F[1, 141] = 2.23, n.s.; SAD messages: F[1, 141] = 0.78, n.s.) or animation (ANGRY messages: F[1, 141] = 0.004, n.s.; SAD messages: F[1, 141] = 1.1, n.s.) on participants’ impression of the sender. We did not find support for H1c, which states that congruent facial expressions make receivers form more negative impressions of the senders than incongruent ones when receiving NEGATIVE messages. As with POSITIVE messages, we did not find support for H1e, which states that animated facial expressions intensify receivers’ impressions of the senders when receiving affective messages. Contrary to NEUTRAL and POSITIVE messages, neither the facial expression nor the animation of the profile pictures influenced participants’ impression of the sender when receiving NEGATIVE messages.
We also performed a one-way ANOVA with ART to further address RQ1, comparing different facial expressions in profile pictures with the neutral baseline when receiving NEGATIVE messages. Different facial expressions in profile pictures had no significant effect on participants’ impression of the sender. That is, even after including the neutral facial expression in the comparison, participants’ impressions of the senders remained unaffected regardless of the facial expression and animation of the profile pictures (ANGRY messages: F[4, 181] = 0.94, n.s.; SAD messages: F[4, 181] = 1.46, n.s.; Figure 7 C and D).

5.2 Interpretation of Emotions of Received Messages (RQ2)

5.2.1 Interpreting Emotions of NEUTRAL Messages (H2a, H2d).

Similar to the above findings on impression, a two-way ANOVA with ART revealed no statistically significant interaction between the effects of facial expression and animation of profile pictures on how participants interpreted the emotion of NEUTRAL messages (F[1, 141] = 1.35, n.s.). The analysis of the main effects showed that positive versus negative facial expressions had a statistically significant effect on how participants interpreted NEUTRAL messages (F[1, 141] = 102.21, p < .001, \(\eta _{p}^{2} = 0.42\)). When a happy facial expression appeared in the profile picture next to NEUTRAL messages, participants interpreted them significantly more positively (M = 5.86, SD = 4.57) than when an angry facial expression appeared (M = −3.14, SD = 6.10). Hence, we found support for H2a, which states that happy facial expressions make receivers interpret messages with more positive emotions than angry ones when receiving NEUTRAL messages. However, we did not find support for H2d, which states that animated facial expressions intensify receivers’ emotional interpretation more than static ones when receiving NEUTRAL messages: the animation of profile pictures did not have a statistically significant effect on the interpreted valence of NEUTRAL messages (F[1, 141] = 0.64, n.s.).
To answer RQ2, a further one-way ANOVA with ART revealed that, when receiving NEUTRAL messages, different facial expressions in profile pictures had a significant effect on how participants interpreted the emotion of the message (F[4, 181] = 29.39, p < .001, \(\eta _{p}^{2}= 0.39\)). Post-hoc analysis showed that participants interpreted NEUTRAL messages more positively when they were paired with an animated happy facial expression than with a neutral facial expression, and more positively when seeing a happy facial expression than an angry one (Figure 8).
Figure 8:
Figure 8: Participants’ interpretation of NEUTRAL messages, shown as a box plot. The y-axis shows participants’ ratings of the perceived emotion of the messages; a higher score indicates a more positive interpretation of the received messages. The x-axis shows all conditions compared in the study. The horizontal black line in each box denotes the median, and the black triangle denotes the mean of each condition. The significant differences between conditions were derived from post-hoc analysis after one-way ANOVA with ART. When an animated happy profile picture was paired with a NEUTRAL message, participants formed a significantly more positive interpretation of the message than when seeing angry or neutral profile pictures. Both happy and angry profile pictures changed participants’ perceived emotion of NEUTRAL messages.

5.2.2 Interpreting Emotions of POSITIVE Messages (H2b, H2e).

Similar to the above finding on impression, the two-way ANOVA with ART revealed no statistically significant interaction between the effects of facial expression and animation of profile pictures on how participants interpreted the emotion of POSITIVE messages (HAPPY messages: F[1, 141] = 1.51, n.s.; RELAXED messages: F[1, 141] = 0.19, n.s.). The analysis of the main effects showed that facial expression had a statistically significant effect on how participants interpreted the emotion of POSITIVE messages (HAPPY messages: F[1, 141] = 37.41, p < .001, \(\eta _{p}^{2}= 0.21\); RELAXED messages: F[1, 141] = 58.56, p < .001, \(\eta _{p}^{2}= 0.29\)). When negative (angry, sad) facial expressions in the profile picture appeared next to POSITIVE messages, participants interpreted the messages significantly less positively (angry profile picture: M = 5.44, SD = 6.9; sad profile picture: M = 4.12, SD = 7.48) than when positive (happy, relaxed) facial expressions were paired with POSITIVE messages (happy profile picture: M = 11.04, SD = 3.81; relaxed profile picture: M = 12.43, SD = 5.29). Hence, we found support for H2b, which states that congruent facial expressions make receivers interpret POSITIVE messages more positively than incongruent ones. Yet, we did not find support for H2e, which states that animated facial expressions intensify receivers’ emotional interpretation of POSITIVE messages more than static ones: there was no statistically significant effect of the animation of profile pictures on interpreting POSITIVE messages (HAPPY messages: F[1, 141] = 0.70, n.s.; RELAXED messages: F[1, 141] = 4.39, n.s.).
To answer RQ2, a further one-way ANOVA with ART revealed that, when receiving POSITIVE messages, different facial expressions in profile pictures had a significant effect on how participants interpreted the emotion of the message (HAPPY messages: F[4, 181] = 10.46, p < .001, \(\eta _{p}^{2}= 0.19\); RELAXED messages: F[4, 181] = 15.55, p < .001, \(\eta _{p}^{2}= 0.26\)). Post-hoc analysis showed that, in addition to participants interpreting POSITIVE messages less positively when paired with negative (angry, sad) facial expressions than with positive (happy, relaxed) ones, participants also interpreted POSITIVE messages less positively when paired with negative facial expressions than with neutral facial expressions in the profile picture (Figure 9 A and B).
Figure 9:
Figure 9: Participants’ perceived emotion of the POSITIVE (A for HAPPY, B for RELAXED) and NEGATIVE (C for ANGRY, D for SAD) messages, shown as box plots. The horizontal black line in each box denotes the median, and the black triangle denotes the mean. The y-axis shows participants’ perceived emotion when receiving affective messages; a higher score indicates a more positive perceived emotion. The significant differences between conditions were derived from post-hoc analysis after one-way ANOVA with ART. Figures A and B show that seeing incongruent profile pictures (angry, sad) next to POSITIVE messages made participants form a significantly more negative interpretation of the messages than seeing congruent profile pictures (happy, relaxed), whereas Figures C and D show that happy and relaxed profile pictures alleviated the perceived negativity of NEGATIVE messages. (** p < 0.01 and *** p < 0.001)

5.2.3 Interpreting Emotions of NEGATIVE Messages (H2c, H2e).

The two-way ANOVA with ART revealed no statistically significant interaction between the effects of facial expression and animation of profile pictures on how participants interpreted the emotion of NEGATIVE messages (ANGRY messages: F[1, 141] = 0.01, n.s.; SAD messages: F[1, 141] = 0.32, n.s.). The analysis of the main effects showed that facial expression had a statistically significant effect on how participants interpreted the emotion of NEGATIVE messages (ANGRY messages: F[1, 141] = 13.58, p < .001, \(\eta _{p}^{2}= 0.09\); SAD messages: F[1, 141] = 7.53, p < .01, \(\eta _{p}^{2}= 0.05\)). That is, when negative facial expressions were paired with NEGATIVE messages (angry profile pictures: M = −9.66, SD = 4.70; sad profile pictures: M = −10.87, SD = 4.43), participants interpreted the messages significantly more negatively than when positive facial expressions appeared (happy profile pictures: M = −6.98, SD = 5.42; relaxed profile pictures: M = −9.17, SD = 4.0). Thus, we found support for H2c, which states that congruent facial expressions intensify the perceived negativity of NEGATIVE messages more than incongruent profile pictures. Yet again, we did not find support for H2e, which states that animated facial expressions intensify receivers’ emotional interpretation of affective messages more than static ones: there was no statistically significant effect of profile picture animation on interpreting NEGATIVE messages (ANGRY messages: F[1, 141] = 2.01, n.s.; SAD messages: F[1, 141] = 2.04, n.s.).
To answer RQ2, a further one-way ANOVA with ART revealed that, when receiving NEGATIVE messages, different facial expressions in profile pictures had a significant effect on how participants interpreted the emotion of ANGRY messages (F[4, 181] = 3.92, p < .01, \(\eta _{p}^{2}= 0.08\)), and a marginally significant effect for SAD messages (F[4, 181] = 2.38, p = .05, \(\eta _{p}^{2}= 0.05\)). Post-hoc analysis showed that participants interpreted ANGRY messages more negatively when they were paired with animated angry facial expressions than with animated happy facial expressions. However, no significant difference was found when participants saw a neutral facial expression paired with ANGRY messages compared with the other types of profile pictures (Figure 9 C and D for ANGRY and SAD messages, respectively).
Taken together, our results indicated that changing facial expressions in profile pictures influenced both receivers’ impressions toward the sender (RQ1) and how receivers interpreted the emotion of the messages (RQ2). The results are summarized in Table 2, and the main findings are discussed in the next section.

6 Discussion

6.1 Effect of Facial Expressions on Impression Formation and Valence Interpretation in Text-Based Communication

The current results indicated that facial expressions in profile pictures affected receivers’ impressions of the sender and their interpretation of the message valence, and interestingly, this influence differed depending on whether the text messages were neutral or affective. When communicating NEUTRAL messages in the remote workplace, such as giving announcements, sharing information, updating work status, or exchanging different perspectives, a happy facial expression made receivers form a more positive impression of the sender and interpret the message with more positive emotion than a neutral or angry facial expression did (Section 5.1.1, Figure 6 and Section 5.2.1, Figure 8). This result may be beneficial in scenarios where offering different perspectives is likely to create tension in relationships. Indeed, remote team members tend to worry about sharing different viewpoints in virtual teams [40]. In such contexts, affective profile pictures could be used intentionally to induce a positive impression of the sender and to foster empathic discussion while avoiding conflict among team members.
Unlike for NEUTRAL messages, we established that changing facial expressions in profile pictures significantly changed receivers’ impressions of the senders when receiving POSITIVE messages, but not when receiving NEGATIVE messages (Figure 7). When receiving POSITIVE messages, incongruent facial expressions made receivers form less positive impressions of the sender than congruent ones (Figure 7 A and B). As hypothesized, it is possible that the mismatch between facial expressions and POSITIVE messages (HAPPY, RELAXED) creates an impression of inauthenticity or confusion that harms impression formation. Contrary to our hypothesis, impressions formed from messages conveying NEGATIVE emotions (ANGRY, SAD) were not affected by any of the facial expressions we compared (Figure 7 C and D). This might be because disclosing negative emotions is regarded as unhelpful for collaboration outcomes in the workplace [9], especially when the parties are unacquainted with each other.
Beyond impression evaluation, we found that changing profile pictures strongly influenced the interpreted valence of POSITIVE messages, but not of NEGATIVE messages. Angry facial expressions significantly reduced the positivity of POSITIVE messages (Figure 9 A and B), whereas happy facial expressions only slightly reduced the negativity of NEGATIVE messages (Figure 9 C and D). The negativity bias in computer-mediated communication [70] may explain this imbalance: people tend to attend to negative cues more than other cues [70]. For instance, people were more likely to interpret neutral emails negatively [10] and reacted negatively to speakers when they saw negative facial expressions while listening to positive feedback in a video conference [52]. Our results extend the negativity bias by showing that negative cues presented not only in textual and video formats but also in profile pictures can capture receivers’ attention and influence their perception.

6.2 The Effect of Animated Facial Expressions on Text-Based Communication

Turning to the effect of animation, we found a positive effect of animated happy facial expressions only on forming a positive impression of the sender and on interpreting the valence of NEUTRAL messages (Figures 6 and 8). Conversely, we found no significant effect of animation on people’s perceptions when receiving POSITIVE and NEGATIVE messages. These results may be explained by the parameters used in creating animated profile pictures: the speed and duration of the movement, the size of the image showing the facial expression, or the number of facial-expression changes over a period could all influence the results. It has been argued that the frequency with which a facial expression changes from neutral to a specific expression affects the accuracy of identifying positive and negative emotions [37]. Further work is required to identify whether and how optimized parameters for creating animated profile pictures influence affective communication.
Table 2:
(Rows show the message types with example messages; columns show the findings for the impression of the sender (RQ1) and the interpretation of the message valence (RQ2).)

RQ1: How does changing the facial expressions in profile pictures influence receivers’ impression toward the sender?
- Animated positive facial expressions increased the positive impression toward the sender when receiving NEUTRAL messages. (Figure 6)
- Incongruent facial expressions made people form negative impressions toward the sender when receiving POSITIVE messages but did not make the impression worse when receiving NEGATIVE messages. (Figure 7)

RQ2: How does changing the facial expressions in profile pictures influence ways in which receivers interpret the emotion of the messages?
- Animated positive facial expressions increased the perceived positivity of the NEUTRAL message. (Figure 8)
- Incongruent facial expressions reduced the perceived positivity of the POSITIVE messages but had only a slight influence on reducing the negativity of the NEGATIVE messages. (Figure 9)

NEUTRAL message (e.g., “We have a regular discussion every Wednesday 2PM~3PM.”)
- Impression of the sender: Positive facial expressions caused more positive impressions than negative facial expressions, regardless of the animation. (H1a, supported; H1d, not supported; Section 5.1.1)
- Interpretation of the message valence: Positive facial expressions caused more positive interpretations than negative facial expressions, regardless of the animation. (H2a, supported; H2d, not supported; Section 5.2.1)

POSITIVE message (e.g., “That was a great place for relaxing. I feel refreshed after the trip. Highly recommend.”)
- Impression of the sender: Positive facial expressions caused more positive impressions toward the sender than negative facial expressions, regardless of the animation. (H1b, supported; H1e, not supported; Section 5.1.2)
- Interpretation of the message valence: Positive facial expressions caused more positive interpretations than negative facial expressions, regardless of the animation. (H2b, supported; H2e, not supported; Section 5.2.2)

NEGATIVE message (e.g., “Everyone has many things to handle and it is unreasonable to expect someone to put your task as a top priority, especially when the task was so unorganized.”)
- Impression of the sender: Neither the facial expression nor the animation of the profile pictures influenced the impression evaluation. (H1c, not supported; H1e, not supported; Section 5.1.3)
- Interpretation of the message valence: Negative facial expressions caused more negative interpretations than positive facial expressions. (H2c, supported; H2e, not supported; Section 5.2.3)

Table 2: Summary table

6.3 Design Implications Toward Mediated Empathy for Text-Based Communication

Empathy is the ability of an observer to infer another party’s affective states and respond accordingly [34]. Building an empathic and friendly environment for remote discussion is essential for the success of virtual teams [23, 40]. During in-person communication, one can understand the affective state of the other based on rich non-verbal cues such as facial expressions, gestures, and voice. When communicating remotely, however, such social cues are unavailable, which makes it hard for users to infer the affective state of others. Our study reveals that the combination of a text message and a facial expression in a profile picture affects a receiver’s impression of a sender and the perceived message valence. Hence, our results can contribute to strengthening empathy in text-based communication. We propose the following design implications:

6.3.1 Applying AI to Mediate Empathy Automatically.

Automatically matching facial expressions in profile pictures with text messages can be achieved with recent AI technology9, enabling a receiver to infer a sender's affective state with more empathy. For instance, a system could detect the sender's affective state and the valence of the text message, and then display the appropriate facial expression in the profile picture. To detect the sender's affective state while typing a message, automatic emotion recognition (AER) technologies can predict affect from facial expressions, heart rate [31], electrodermal activity (EDA) [17], etc. The valence of the text message can be detected using sentiment analysis [49], whereas the facial expression can be generated using facial expression manipulation [72]. Based on our results, we envision future messaging systems suggesting facial expressions that lead the receiver to form the intended impression of the sender and the text message. Nonetheless, further research is necessary to improve the detection of the varied affective states of senders and messages.
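The pipeline above can be sketched in a few lines. This is a minimal, hypothetical illustration: the toy lexicon-based valence detector stands in for a real sentiment-analysis model [49], and the expression filenames are assumptions, not part of the study's materials.

```python
# Hypothetical sketch: pairing a profile-picture expression with the
# detected valence of an outgoing message. A real system would replace
# message_valence() with a trained sentiment-analysis model.

POSITIVE_WORDS = {"great", "refreshed", "recommend", "happy", "relaxing"}
NEGATIVE_WORDS = {"unreasonable", "unorganized", "angry", "sad"}

def message_valence(text: str) -> str:
    """Classify a message as positive, negative, or neutral (toy lexicon)."""
    words = {w.strip(".,!?").lower() for w in text.split()}
    pos = len(words & POSITIVE_WORDS)
    neg = len(words & NEGATIVE_WORDS)
    if pos > neg:
        return "positive"
    if neg > pos:
        return "negative"
    return "neutral"

# Map each valence to a pre-generated affective profile picture
# (filenames are placeholders for illustration only).
EXPRESSION_FOR_VALENCE = {
    "positive": "happy.png",
    "negative": "sad.png",
    "neutral": "neutral.png",
}

def pick_profile_picture(text: str) -> str:
    """Choose the profile picture matching the message's valence."""
    return EXPRESSION_FOR_VALENCE[message_valence(text)]

print(pick_profile_picture("That was a great place for relaxing."))
# -> happy.png
```

In a deployed system, the lexicon lookup would be replaced by a sentiment model and, per [72], the expression image could be synthesized from the sender's own photo rather than selected from a fixed set.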

6.3.2 Allowing Senders to Choose Affective Profile Pictures or Modify Text Messages.

Although we propose automatically generating affective profile pictures, AI can still make mistakes, such as pairing an inappropriate facial expression with a message or misdetecting the sender's affective state. Therefore, the system should show senders the automatically generated profile picture for confirmation before sending. The system could also show senders a list of facial expressions paired with their likely effects on receivers' perceptions, so that senders can choose the expression that matches their intended self-presentation. Meanwhile, our results revealed that NEGATIVE messages led receivers to form negative impressions of the senders regardless of the facial expression used. Therefore, in addition to changing facial expressions in profile pictures, we suggest that future systems also recommend that senders adjust the tone of a message when negative valence is detected.
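The sender-side confirmation step could look like the following sketch. The function name and the suggestion structure are illustrative assumptions; the one grounded behavior is that negative valence triggers a tone-adjustment hint rather than relying on the expression alone, per the finding above.

```python
# Hypothetical sketch of the sender-side confirmation step: the system
# proposes an expression for the detected valence and always asks the
# sender to confirm; for negative messages it additionally suggests
# rewording, since the study found negative messages hurt impressions
# regardless of the expression shown.

def compose_suggestion(valence: str, proposed_expression: str) -> dict:
    suggestion = {
        "proposed_expression": proposed_expression,
        "requires_confirmation": True,  # sender reviews before sending
    }
    if valence == "negative":
        suggestion["tone_hint"] = "Consider softening the message tone."
    return suggestion

print(compose_suggestion("negative", "neutral.png"))
```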
Moreover, to facilitate empathy and mutual understanding in remote discussion, text-based collaborative tools could reveal more complex facial expressions and emotional cues in affective profile pictures, such as worry, remorse, curiosity, disbelief, and excitement, as suggested in [44]. Future studies are encouraged to explore how composing expressions from multiple positive/negative faces, or animating complex expressions, influences the impression of the sender and the interpretation of the message. Additionally, given that mobile messengers are also frequently used for workplace communication, and building on previous work that visualizes physiological data in mobile chat to support empathy [31], we encourage instant messaging services to let users change affective profile pictures when communicating with remote counterparts who share little mutual understanding, such as remote new hires, subordinates, or managers.

7 Limitation and Future Directions

Although we established that changing facial expressions in profile pictures influenced people's perception of NEUTRAL and POSITIVE messages, this study has several limitations. First, we did not explicitly tell participants whether the profile picture changed automatically with the sentiment of the text or whether the sender intentionally changed it to match the message. Believing the change to be intentional or unintentional might affect participants' interpretation of the sender's behavior, and thus their impression of the sender and their interpretation of the message valence. Future studies could investigate how different beliefs about senders' intentions in changing facial expressions in profile pictures affect receivers' perceptions.
Second, although this study demonstrated that pairing different facial expressions with messages influenced receivers' perception, we did not examine how facial expressions displayed in profile pictures might influence senders' perception and message construction. Past research has shown that viewing one's own facial expression rendered more positive or negative in real time [78], or displayed through an embodied avatar [36], can change how people perceive their own emotions. It would be interesting to investigate how seeing one's own facial expression change with the sentiment of a message before sending influences perceived communication quality and expressivity. To address the first and second limitations, our future work will implement this design in a text-messaging tool and investigate how it changes remote dyads' perception and message construction synchronously.
Third, we conducted this study with participants from a single cultural background, which prevents us from generalizing the findings to other cultural contexts. Although facial expressions have been argued to be universal across cultures [20], we acknowledge that cultural background plays an important role in shaping how people perceive others and adopt emotional features in communication [39, 46]. Gender might also influence receivers' perception of affective profile pictures and the corresponding messages. However, because these factors were not the focus of the current study, we did not explore whether changing facial expressions in profile pictures produces different effects across cultures or genders. Future studies are needed to examine whether receivers from other cultural backgrounds, or different sender-receiver gender combinations, respond similarly to affective profile pictures. Given the purported universality of facial expressions, we expect that changing facial expressions in profile pictures could help cross-lingual teams better decode nuanced expressions. Further exploration of how affective profile pictures influence cross-cultural communication is encouraged.

8 Conclusion

When communicating through a medium as lean as text, the lack of mutual understanding and nonverbal cues between unacquainted people can lead to misunderstanding, harming the impressions they form of each other in fully remote workplaces. To enrich socio-emotional cues in text-based communication, we simulated a series of workplace conversations and investigated the effect of changing facial expressions in profile pictures, a common default feature in most computer-mediated communication tools, on people's perceptions when they received NEUTRAL, POSITIVE (HAPPY, RELAXED), and NEGATIVE (ANGRY, SAD) messages. We found that changing facial expressions influenced receivers' impressions of the sender and their interpretation of the message valence for NEUTRAL and POSITIVE messages, but had little effect for NEGATIVE messages. Based on these empirical results, we suggest that future communication systems could enable users to perceive socio-emotional cues through changing profile pictures in text-based communication.

Acknowledgments

This work was supported by the JST Moonshot R&D Program Project "Cybernetic being" (Grant number JPMJMS2013). We thank the participants for their interest in taking part in this study, and the reviewers for their constructive feedback, which helped us improve this work.

Footnotes

6
For better readability, we used upper case letters to denote five message types, whereas we used lower case letters to denote five facial expressions in profile pictures.
7
The facial dataset we used did not allow public sharing; hence, we used generated facial expressions in the figures to illustrate the experiment design. One of the facial expressions used in the study can be observed here: https://www.tandfonline.com/doi/suppl/10.1080/02699931.2017.1419936?scroll=top
8
We recruited participants from Lancers.jp, an online freelancing platform.
9
The term AI technology used here covers machine-learning-infused systems that can recognize, predict, and generate data from given input data.

Supplementary Material

Supplemental Materials (3544548.3581061-supplemental-materials.zip)
MP4 File (3544548.3581061-talk-video.mp4)
Pre-recorded Video Presentation
MP4 File (3544548.3581061-video-preview.mp4)
Video Preview

References

[1]
Pengcheng An, Ziqi Zhou, Qing Liu, Yifei Yin, Linghao Du, Da-Yuan Huang, and Jian Zhao. 2022. VibEmoji: Exploring User-authoring Multi-modal Emoticons in Social Communication. CHI Conference on Human Factors in Computing Systems (2022).
[2]
Toshiki Aoki, Rintaro Chujo, Katsufumi Matsui, Saemi Choi, and Ari Hautasaari. 2022. EmoBalloon - Conveying Emotional Arousal in Text Chats with Speech Balloons. CHI Conference on Human Factors in Computing Systems (2022).
[3]
Blake E. Ashforth and Ronald H. Humphrey. 1995. Emotion in the Workplace: A Reappraisal. Human Relations 48 (1995), 97–125.
[4]
Cezary Biele and Anna Grabowska. 2005. Sex differences in perception of emotion intensity in dynamic and static facial expressions. Experimental Brain Research 171 (2005), 1–6.
[5]
Silvia Bonaccio, Jane O’Reilly, Sharon L O’Sullivan, and François Chiocchio. 2016. Nonverbal behavior and communication in the workplace: A review and an agenda for research. Journal of Management 42, 5 (2016), 1044–1074.
[6]
Nathan D. Bos, Judy Olson, Darren Gergle, Gary M. Olson, and Zachary C. Wright. 2002. Effects of four computer-mediated communications channels on trust development. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (2002).
[7]
Isabelle Boutet, Megan LeBlanc, Justin A. Chamberland, and Charles A. Collin. 2021. Emojis influence emotional communication, social attributions, and information processing. Comput. Hum. Behav. 119(2021), 106722.
[8]
Danah Boyd and Jeffrey Heer. 2006. Profiles as Conversation: Networked Identity Performance on Friendster. Proceedings of the 39th Annual Hawaii International Conference on System Sciences (HICSS’06) 3(2006), 59c–59c.
[9]
Steven P. Brown, Robert A. Westbrook, and Goutam N. Challagalla. 2005. Good cope, bad cope: adaptive and maladaptive coping strategies following a critical negative work event.The Journal of applied psychology 90 4 (2005), 792–8.
[10]
Kris Byron. 2008. Carrying too Heavy a Load? The Communication and Miscommunication of Emotion by Email. Academy of Management Review 33 (2008), 309–327.
[11]
Kris Byron and David C. Baldridge. 2007. E-Mail Recipients’ Impressions of Senders’ Likability. Journal of Business Communication 44 (2007), 137 – 160.
[12]
Yoonjeong Cha, Jongwon Kim, Sangkeun Park, Mun Yong Yi, and Uichin Lee. 2018. Complex and Ambiguous: Understanding Sticker Misinterpretations in Instant Messaging. Proc. ACM Hum.-Comput. Interact. 2, CSCW, Article 30 (nov 2018), 22 pages. https://doi.org/10.1145/3274299
[13]
Arik Cheshin, Anat Rafaeli, and Nathan D. Bos. 2011. Anger and happiness in virtual teams: Emotional influences of text and behavior on others’ affect in the absence of non-verbal cues. Organizational Behavior and Human Decision Processes 116 (2011), 2–16.
[14]
Yunjey Choi, Min-Je Choi, Mun Su Kim, Jung-Woo Ha, Sunghun Kim, and Jaegul Choo. 2018. StarGAN: Unified Generative Adversarial Networks for Multi-domain Image-to-Image Translation. 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (2018), 8789–8797.
[15]
Google Cloud. 2022. Natural Language AI. https://cloud.google.com/natural-language.
[16]
Deborah A. Coker and Judee K. Burgoon. 1987. The Nature of Conversational Involvement and Nonverbal Encoding Patterns. Human Communication Research 13 (1987), 463–494.
[17]
Max T. Curran, Jeremy Raboff Gordon, Lily Lin, Priyashri Kamlesh Sridhar, and John C.-I. Chuang. 2019. Understanding Digitally-Mediated Empathy: An Exploration of Visual, Narrative, and Biosensory Informational Cues. Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (2019).
[18]
Fernando De la Torre and Jeffrey F Cohn. 2011. Facial expression analysis. Visual analysis of humans(2011), 377–409.
[19]
Chad Edwards, Brett Stoll, Natalie Faculak, and Sandi Karman. 2015. Social presence on LinkedIn: Perceived credibility and interpersonal attractiveness based on user profile picture.
[20]
Paul Ekman. 1970. Universal facial expressions of emotion.
[21]
Lisa Elkin, Matthew Kay, James J. Higgins, and Jacob O. Wobbrock. 2021. An Aligned Rank Transform Procedure for Multifactor Contrast Tests. The 34th Annual ACM Symposium on User Interface Software and Technology (2021).
[22]
Claus-Peter Hermann Ernst and Martin Huschens. 2019. Friendly, Humorous, Incompetent? On the Influence of Emoticons on Interpersonal Perception in the Workplace. In HICSS.
[23]
Jinjuan Feng, Jonathan Lazar, and Jennifer Preece. 2004. Empathy and online interpersonal trust: A fragile relationship. Behaviour & Information Technology 23 (2004), 97–106.
[24]
Tomomi Fujimura and Hiroyuki Umemura. 2018. Development and validation of a facial expression database based on the dimensional and categorical model of emotions. Cognition and Emotion 32(2018), 1663 – 1670.
[25]
Tina Ganster, Sabrina C. Eimler, and Nicole C. Krämer. 2012. Same Same But Different!? The Differential Influence of Smilies and Emoticons on Person Perception. Cyberpsychology, behavior and social networking 15 4 (2012), 226–30.
[26]
Ge Gao, Sun Young Hwang, Gabriel Culbertson, Susan R Fussell, and Malte F Jung. 2017. Beyond information content: The effects of culture on affective grounding in instant messaging conversations. Proceedings of the ACM on Human-Computer Interaction 1, CSCW(2017), 1–18.
[27]
Ella Glikson, Arik Cheshin, and Gerben A van Kleef. 2018. The dark side of a smiley: Effects of smiling emoticons on virtual first impressions. Social Psychological and Personality Science 9, 5 (2018), 614–625.
[28]
Alicia A Grandey, Glenda M. Fisk, Anna S. Mattila, Karen J. Jansen, and Lori Sideman. 2005. Is “service with a smile” enough? Authenticity of positive displays during service encounters. Organizational Behavior and Human Decision Processes 96 (2005), 38–55.
[29]
Joshua M. Hailpern, Mark W. Huber, and Ronald Calvo. 2020. How Impactful Is Presentation in Email? The Effect of Avatars and Signatures. ACM Transactions on Interactive Intelligent Systems (TiiS) 10 (2020), 1 – 26.
[30]
Jeffrey T Hancock and Philip J Dunham. 2001. Impression formation in computer-mediated communication revisited: An analysis of the breadth and intensity of impressions. Communication research 28, 3 (2001), 325–347.
[31]
Mariam Hassib, Daniel Buschek, Paweł W. Woźniak, and Florian Alt. 2017. HeartChat: Heart Rate Augmented Mobile Chat to Support Empathy and Awareness. Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems (2017).
[32]
Ari Hautasaari, Naomi Yamashita, and Ge Gao. 2017. How Non-Native English Speakers Perceive the Emotional Valence of Messages in Text-Based Computer-Mediated Communication. Discourse Processes 56(2017), 24 – 40.
[33]
Noelle J. Hum, Perrin E. Chamberlin, Brittany L. Hambright, Anne C. Portwood, Amanda C. Schat, and Jennifer L. Bevan. 2011. A picture is worth a thousand words: A content analysis of Facebook profile photographs. Comput. Hum. Behav. 27(2011), 1828–1833.
[34]
William Ickes. 2005. Empathic Accuracy.
[35]
L Crystal Jiang, Natalya N Bazarova, and Jeffrey T Hancock. 2013. From perception to behavior: Disclosure reciprocity and the intensification of intimacy in computer-mediated communication. Communication Research 40, 1 (2013), 125–143.
[36]
Joohee Jun, Myeongul Jung, So yeon Kim, and Kwanguk Kim. 2018. Full-Body Ownership Illusion Can Change Our Emotion. Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (2018).
[37]
Miyuki G. Kamachi, Vicki Bruce, Shigeru Mukaida, Jiro Gyoba, Sakiko Yoshikawa, and Shigeru Akamatsu. 2013. Dynamic Properties Influence the Perception of Facial Expressions. Perception 30(2013), 875 – 887.
[38]
Shao kang Lo. 2008. The Nonverbal Communication Functions of Emoticons in Computer-Mediated Communication. Cyberpsychology & behavior : the impact of the Internet, multimedia and virtual reality on behavior and society 11 5 (2008), 595–7.
[39]
Shipra Kayan, Susan R. Fussell, and Leslie D. Setlock. 2006. Cultural differences in the use of instant messaging in Asia and North America. In CSCW ’06.
[40]
Pranav Khadpe, Chinmay Kulkarni, and Geoff Kaufman. 2022. Empathosphere: Promoting Constructive Communication in Ad-Hoc Virtual Teams through Perspective-Taking Spaces. Proc. ACM Hum.-Comput. Interact. 6, CSCW1, Article 55 (apr 2022), 26 pages. https://doi.org/10.1145/3512902
[41]
Eva G. Krumhuber, Antony S. R. Manstead, Darren P. Cosker, Dave Marshall, Paul L. Rosin, and Arvid Kappas. 2007. Facial dynamics as indicators of trustworthiness and cooperative behavior.Emotion 7 4(2007), 730–5.
[42]
J Richard Landis and Gary G. Koch. 1977. The measurement of observer agreement for categorical data.Biometrics 33 1(1977), 159–74.
[43]
Miki Liu, Austin Wong, Ruhi Pudipeddi, Betty Hou, David Wang, and Gary Hsieh. 2018. ReactionBot: Exploring the Effects of Expression-Triggered Emoji in Text Messages. Proc. ACM Hum. Comput. Interact. 2 (2018), 110:1–110:16.
[44]
Divine Maloney, Guo Freeman, and Donghee Yvette Wohn. 2020. "Talking without A Voice" : Understanding Non-Verbal Communication in Social Virtual Reality.
[45]
Gloria Mark, Shamsi T. Iqbal, Mary Czerwinski, and Paul Johns. 2014. Capturing the mood: facebook and face-to-face encounters in the workplace. Proceedings of the 17th ACM conference on Computer supported cooperative work & social computing(2014).
[46]
David Matsumoto and Paul Ekman. 1989. American-Japanese cultural differences in intensity ratings of facial expressions of emotion. Motivation and Emotion 13 (1989), 143–157.
[47]
Mary L. McHugh. 2012. Interrater reliability: the kappa statistic. Biochemia Medica 22(2012), 276 – 282.
[48]
Hannah Jean Miller, Jacob Thebault-Spieker, Shuo Chang, Isaac L. Johnson, Loren G. Terveen, and Brent J. Hecht. 2016. "Blissfully Happy" or "Ready to Fight": Varying Interpretations of Emoji. In ICWSM.
[49]
Saif M Mohammad. 2016. Sentiment analysis: Detecting valence, emotions, and other affectual states from text. In Emotion measurement. Elsevier, 201–237.
[50]
Philipp Matthias Müller, Michael Xuelin Huang, and Andreas Bulling. 2018. Detecting Low Rapport During Natural Interactions in Small Groups from Non-Verbal Behaviour. 23rd International Conference on Intelligent User Interfaces (2018).
[51]
Naoto Nakazato, Shigeo Yoshida, Sho Sakurai, Takuji Narumi, Tomohiro Tanikawa, and Michitaka Hirose. 2014. Smart Face: enhancing creativity during video conferences using real-time facial deformation. Proceedings of the 17th ACM conference on Computer supported cooperative work & social computing(2014).
[52]
M. J. Newcombe and Neal M. Ashkanasy. 2002. The role of affect and affective congruence in perceptions of leaders: An experimental study.Leadership Quarterly 13(2002), 601–614.
[53]
Duyen T. Nguyen and Susan R. Fussell. 2012. How did you feel during our conversation?: retrospective analysis of intercultural and same-culture instant messaging conversations. Proceedings of the ACM 2012 conference on Computer Supported Cooperative Work (2012).
[54]
Kristine L. Nowak. 2013. Choosing Buddy Icons that look like me or represent my personality: Using Buddy Icons for social presence. Comput. Hum. Behav. 29(2013), 1456–1464.
[55]
Kristine L. Nowak and Christian Rauh. 2008. Choose your "buddy icon" carefully: The influence of avatar androgyny, anthropomorphism and credibility in online interactions. Comput. Hum. Behav. 24(2008), 1473–1493.
[56]
Soo Youn Oh, Jeremy N. Bailenson, Nicole C. Krämer, and Benjamin Li. 2016. Let the Avatar Brighten Your Smile: Effects of Enhancing Facial Expressions in Virtual Environments. PLoS ONE 11(2016).
[57]
Judith S. Olson and Gary M. Olson. 2000. i2i trust in e-commerce. Commun. ACM 43(2000), 41–44.
[58]
Per Persson. 2003. Exms: an animated and avatar-based messaging system for expressive peer communication. In GROUP ’03.
[59]
M. Ptaszynski, Pawel Dybala, Wenhan Shi, Rafal Rzepka, and Kenji Araki. 2009. A System for Affect Analysis of Utterances in Japanese Supported with Web Mining. Journal of Japan Society for Fuzzy Theory and Intelligent Informatics 21 (2009), 194–213.
[60]
Pei-Luen Patrick Rau, Ye Li, and Dingjun Li. 2009. Effects of communication style and culture on ability to accept recommendations from robots. Comput. Hum. Behav. 25(2009), 587–595.
[61]
James A. Russell. 1980. A circumplex model of affect. Journal of Personality and Social Psychology 39 (1980), 1161–1178.
[62]
Cristina Segalin, Fabio Celli, Luca Polonio, Michal Kosinski, David Stillwell, N. Sebe, Marco Cristani, and B. Lepri. 2017. What your Facebook Profile Picture Reveals about your Personality. Proceedings of the 25th ACM international conference on Multimedia (2017).
[63]
Leslie D. Setlock and Susan R. Fussell. 2010. What’s it worth to you?: the costs and affordances of CMC tools to asian and american users. In CSCW ’10.
[64]
Esha Shandilya, Mingming Fan, and Garreth W. Tigwell. 2022. “I need to be professional until my new team uses emoji, GIFs, or memes first’’: New Collaborators’ Perspectives on Using Non-Textual Communication in Virtual Workspaces. CHI Conference on Human Factors in Computing Systems (2022).
[65]
Alan L. Sillars and Theodore E. Zorn. 2020. Hypernegative Interpretation of Negatively Perceived Email at Work. Management Communication Quarterly 35 (2020), 171 – 200.
[66]
Lingxiao Song, Zhihe Lu, Ran He, Zhenan Sun, and Tieniu Tan. 2018. Geometry Guided Adversarial Facial Expression Synthesis. Proceedings of the 26th ACM international conference on Multimedia (2018).
[67]
Catalina L. Toma. 2010. Perceptions of trustworthiness online: the role of visual and textual information. In CSCW ’10.
[68]
Fang-Wu Tung and Yi-Shin Deng. 2007. Increasing social presence of social actors in e-learning environments: Effects of dynamic and static emoticons on children. Displays 28(2007), 174–180.
[69]
Joseph B Walther and Kyle P D’addario. 2001. The impacts of emoticons on message interpretation in computer-mediated communication. Social science computer review 19, 3 (2001), 324–347.
[70]
Joseph B. Walther and Kyle P. D’Addario. 2001. The Impacts of Emoticons on Message Interpretation in Computer-Mediated Communication. Social Science Computer Review 19 (2001), 324 – 347.
[71]
Joseph B Walther, Brandon Van Der Heide, Artemio Ramirez, Judee K Burgoon, and Jorge Peña. 2015. Interpersonal and hyperpersonal dimensions of computer-mediated communication. The handbook of the psychology of communication technology 1 (2015), 22.
[72]
Feng Wang, Suncheng Xiang, Ting Liu, and Yuzhuo Fu. 2021. Attention Based Facial Expression Manipulation. 2021 IEEE International Conference on Multimedia & Expo Workshops (ICMEW) (2021), 1–6.
[73]
David Keith Westerman, Ron Tamborini, and Nicholas David Bowman. 2015. The effects of static avatars on impression formation across different contexts on social networking sites. Comput. Hum. Behav. 53(2015), 111–117.
[74]
Steve Whittaker. 2003. Theories and methods in mediated communication.
[75]
Steve Whittaker, David Mark Frohlich, and Owen Daly-Jones. 1994. Informal workplace communication: what is it like and how might we support it?. In CHI ’94.
[76]
Jacob O. Wobbrock, Leah Findlater, Darren Gergle, and James J. Higgins. 2011. The aligned rank transform for nonparametric factorial analyses using only anova procedures. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (2011).
[77]
Adrienne Wood, Jared D Martin, Martha W. Alibali, and Paula M. Niedenthal. 2019. A sad thumbs up: incongruent gestures and disrupted sensorimotor activity both slow processing of facial expressions. Cognition and Emotion 33(2019), 1196 – 1209.
[78]
Shigeo Yoshida, Sho Sakurai, Takuji Narumi, Tomohiro Tanikawa, and Michitaka Hirose. 2013. Manipulation of an emotional experience by real-time deformed facial feedback. In Proceedings of the 4th Augmented Human International Conference. 35–42.
[79]
Dejin Zhao and Mary Beth Rosson. 2009. How and why people Twitter: the role that micro-blogging plays in informal communication at work. In GROUP ’09.

Cited By

  • (2024) EmoWear: Exploring Emotional Teasers for Voice Message Interaction on Smartwatches. Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems, 1–16. https://doi.org/10.1145/3613904.3642101 (Online publication date: 11-May-2024)
  • (2024) "Should I Introduce myself?" International Journal of Human-Computer Studies 188:C. https://doi.org/10.1016/j.ijhcs.2024.103279 (Online publication date: 1-Aug-2024)


    Published In

    CHI '23: Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems
    April 2023
    14911 pages
    ISBN:9781450394215
    DOI:10.1145/3544548
    This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike International 4.0 License.

    Publisher

    Association for Computing Machinery

    New York, NY, United States

    Author Tags

    1. Text-based communication
    2. affective communication
    3. computer-mediated communication
    4. impression formation
    5. message interpretation
    6. social cues
