
Online Silence: why do people not challenge others when posting misinformation?

Selin Gurgun 1,*, Emily Arden-Close 1, Keith Phalp 1, Raian Ali 2,*


1
Faculty of Science and Technology, Bournemouth University, UK
{sgurgun, eardenclose, kphalp}@bournemouth.ac.uk
2
College of Science and Engineering, Hamad Bin Khalifa University, Qatar
raali2@hbku.edu.qa

Abstract

Purpose: The main aim of this paper is to provide a groundwork for future research into users’
inaction on social media by generating hypotheses drawing from several bodies of related
literature, including organisational behaviour, communication, human-computer interaction
(HCI), psychology, and education.

Design/methodology/approach: This research does not intend to be either a systematic review or a scoping or narrative review, as to our knowledge, there is no literature primarily focused on this topic. This paper aims to synthesise pertinent literature on related topics and provide potential reasons regarding users’ silence towards misinformation on social media.

Findings: Drawing on relevant literature, the potential reasons can be divided into six
categories: self-oriented, relationship-oriented, others-oriented, content-oriented, individual
characteristics, and technical factors.
Originality: By incorporating various bodies of literature, this work provides a starting point for understanding both the socio-psychological reasons that prevent users from challenging misinformation and measures to encourage users to engage in such conversations.

Keywords—misinformation, challenging misinformation, social media, online silence

I. INTRODUCTION

The term misinformation typically denotes incorrect or misleading information. Although many terms such as disinformation, fake news, and rumours are used interchangeably, they are distinguished from misinformation by intention or format (Wu et al., 2019). Throughout
this paper, the term “misinformation” will be used as an umbrella term, as it refers widely to any
type of false information regardless of the intention or format.

With the widespread use of the internet and Social Networking Sites (SNS) for information seeking, and with users becoming content creators and broadcasters on such sites, information starts to spread widely even before its accuracy can be verified. Recently, with the outbreak of COVID-19, the issue of misinformation became so prominent that the World Health Organization raised concerns about social media, arguing that it amplified an "infodemic"
(World Health Organization, 2020). Social media design can be argued to be a neutral communication channel; however, the way SNS prioritise content and display it to users is also argued to amplify users’ cognitive biases, such as the bandwagon effect (Sundar et al., 2009), which refers to individuals being more likely to think and behave in a certain way if they perceive that other people are doing the same, and confirmation bias (Kim and Yang, 2017), the tendency to search for or remember information that confirms or supports one's previous opinions. Limiting exposure to diverse opinions and promoting a common narrative among like-minded people, known as an "echo chamber” (Cinelli et al., 2021), is also often considered to cause further polarisation of views and reduce any nuance in complex arguments.

Electronic copy available at: https://ssrn.com/abstract=4137613
On SNS, where fake news spreads six times faster than true news (Vosoughi et al., 2018) and misinformation receives more engagement than factual posts (Edelson et al., 2021), it becomes even more important to combat the spread of misinformation. One of the most effective ways to reduce the problem is “social corrections”: correction attempts made by social sources, where social contacts are usually a primary source of information (Bode and Vraga, 2018, Margolin et al., 2018). Evidence suggests that algorithmic corrections provided by the SNS are successful in reducing belief in incorrect information (Bode and Vraga, 2015) and that users’ corrections are as effective as algorithmic corrections (Bode and Vraga, 2018).
Nevertheless, people can be hesitant to take any action to correct misinformation (Tully et al.,
2020, Chadwick and Vaccari, 2019, Tandoc et al., 2020).

Conflict management styles (Thomas, 1992) can provide insights regarding this avoidance behaviour. This framework suggests that people deal with conflict by using one of five modes: avoiding, competing, collaborating, compromising, and accommodating (Thomas, 1992). When users
encounter a conflict in cyberspace, such as seeing misinformation and weighing the
consequences of correcting it, they might employ avoidance as a management strategy. More
precisely, instead of seeking to resolve a problem, users might seek to evade it. Indeed,
Tandoc et al. (2020) reported that one reason people would avoid correcting a stranger or
acquaintance when they spot fake news is the anticipation of conflict.

Although several studies have shown that people in cyberspace do not challenge
misinformation (Chadwick and Vaccari, 2019, Bode and Vraga, 2021c, Tandoc et al., 2020,
Vicol, 2020, Tully et al., 2020), to our knowledge, there have been no attempts to examine the
factors leading to this inaction.

Our aim in this paper is to pave the way for future research regarding hesitancy in challenging
misinformation on social media. We identify potential reasons for such online silence by
drawing from different bodies of literature. This research does not intend to be either a
systematic review or a scoping or narrative review as to our knowledge, there is no literature
primarily focused on this topic. This paper aims to synthesise pertinent literature on related
topics and provide potential reasons regarding users’ silence towards misinformation on social
media.

In this research, the term “challenging” rather than “correcting” is used, for two reasons. First, “correcting” is an absolute statement that presumes the person doing the correction is accurate, which might not be the case: the person who intends to correct may not actually be correct. Second, this paper addresses not only corrections but also disagreements or disputes over the content. In other words, the term “challenging” serves a wider function than simply correcting.

This paper is organised as follows. Section 2 presents a literature review and an overview of
challenging misinformation on social media. Section 3 provides insights into possible reasons
people are reluctant to challenge misinformation. We conclude the paper and present possible
solutions and future work directions in Section 4.


II. BACKGROUND AND MOTIVATION

As many as 53% of U.S. adults obtain news from social media (Shearer and Mitchell, 2021).
Besides being a fertile ground for misinformation, social media also offers opportunities to
mitigate the problem (Anderson, 2018, Djordjevic, 2020). In addition to algorithmic approaches
or machine learning-based solutions, individuals’ active participation in conversations to
challenge misinformation can help reduce misinformation (Bode and Vraga, 2018, Margolin et
al., 2018). Many social media users may not see themselves as key players in mitigating
misinformation, but many may contribute to its dissemination through their silence. Indeed,
one might argue that, in not challenging such misinformation, users are complicit in its spread.

Data from several studies suggest that correcting misinformation is not common on social
media. Almost 80% of social media users in the UK have not told anyone who shared false
news on social media that the news they shared was false or exaggerated (Chadwick and
Vaccari, 2019). Another UK survey found that although 58% of respondents reported having encountered content that they thought was false, only 21% said they did something to correct it (Vicol, 2020). Recent research in the U.S. regarding the correction of misinformation about COVID-19 on social media revealed similar findings: among the 56.6% of respondents who reported seeing misinformation, only 35.1% said they had corrected someone (Bode and Vraga, 2021c).
Similarly, in Singapore, 73% of social media users dismiss fake news posts on social media
without taking further action (Tandoc et al., 2020). Some studies have documented that actively interacting with corrections when exposed to unconfirmed claims is not common (Zollo et al., 2015), that SNS users are not consistently motivated to correct misinformation publicly (Cohen et al., 2020), and that explicit corrections of other users are rare in the online environment (Arif et al., 2017).
Reporting misinformation is one of the techniques provided by social media platforms, enabling individuals to anonymously mark a post as false. However, while reporting helps diminish the problem, it requires several steps and does not allow users to express their opinions and thereby generate constructive and meaningful dialogue. Such dialogue may also have the
extra benefit of altering the beliefs and enhancing the critical literacy of the social sources
posting or sharing misinformation and their audience, which can foster long-term behaviour
change. Active engagement with false posts, challenging them through deliberation, argumentation, or questioning, is crucial to decreasing misinformation dissemination and cultivating a diverse environment, as it increases the range of distinct ideas expressed. Individuals modify their beliefs in misinformation
after seeing another user being corrected on social media (Vraga and Bode, 2017). Users are
also less likely to spread rumours when they feel they could be confronted with a
counterargument, criticism, or warning (Ozturk et al., 2015, Tanaka et al., 2013) and are more
likely to comment critically on posts when they are exposed to critical comments from other
users (Colliander, 2019).

III. POTENTIAL REASONS FOR AVOIDING CHALLENGING MISINFORMATION ON SOCIAL MEDIA

Social media has been construed as fundamentally different from offline spaces, offering an environment where individuals feel less restricted in expressing ideas. Some believe that the risks of expressing opinions and sharing content online are lower than doing the same offline (Ho and McLeod, 2008, Luarn and Hsieh, 2014, Papacharissi, 2002). However, according to social information processing theory (Walther, 1992), although interpersonal relationship development requires more time in a computer-mediated environment than face-to-face (FtF), users may develop

similar levels of interpersonal relations. The theory suggests that, regardless of the medium, people are driven to form impressions and build connections, and that language and symbols in computer-mediated communication (CMC) are as important as nonverbal cues in FtF communication. Based on these findings, the constraints people feel when expressing contradictory opinions or challenging others in an online environment might not differ from those in FtF settings.

In FtF communication, people may withhold opinions, abstain from participating in discussions, or remain silent even when an issue needs intervention. Refraining from providing opinions has been discussed in different environments, such as enterprises, where employees purposefully withhold information (Morrison and Milliken, 2000, Milliken et al., 2003, Dyne et al., 2003), and classrooms, where students keep silent during discussions (Jaworski and Sachdev, 1998, Rocca, 2010, Fassinger, 1995).

The silence of employees in organisations is considered to be a behaviour driven by several motivations, rather than a sign of acceptance. According to the conceptualisations of
Pinder and Harlos (2001) and Dyne et al. (2003), there are three types of silence based on
employee motives: acquiescent silence, defensive silence, and prosocial silence. Acquiescent
silence is passive behaviour and is motivated by a lack of desire to speak up. An employee
could withhold their ideas due to a belief that speaking up will not change the situation or that
their capability is insufficient to make a difference. Defensive silence is described as proactive
behaviour and is motivated by the intention of protecting oneself. An employee could remain
silent due to fear of the consequences of expressing ideas, such as getting fired or demoted.
Finally, prosocial silence is defined as withholding ideas based on positive intentions for others or the organisation. An employee could be reticent in order to protect others from embarrassment or trouble.

In a qualitative study investigating the reasons why employees remain silent (Milliken et al.,
2003), fear of being viewed or labelled negatively and damaging valued relationships were the
most frequently mentioned reasons. Even in situations where silence might have devastating
consequences, people might still choose not to speak up. Bienefeld and Grote (2012) revealed that although speaking up is critical for flight safety, aircrew members are reluctant to do so because of its potential negative outcomes. Their most common reason for not speaking
was their desire to maintain a good relationship with the team and not lose the other crew
members’ acceptance and trust. In addition, captains were afraid of embarrassing first officers,
and first officers were concerned that captains would view them as troublemakers if they
contradicted them.

Reasons that hinder employees from speaking up align closely with the educational psychology
literature investigating reasons for student participation in the classroom. Barriers to
participating in class range from negative outcome expectations or evaluation apprehension to
fear of appearing unintelligent or inadequate to one’s peers or instructors (Rocca, 2010,
Fassinger, 1995). Logistics such as class size, seating arrangement, mandatory participation,
and the instructor’s influence also affect students’ participation (Rocca, 2010).

Taken together, these studies provide important insights into the reasons people refrain from entering conversations, speaking up, or questioning in offline environments. In CMC, users also choose to be silent: they refrain from activities such as discussing their ideas (Hampton et al.,

2014), posting content about political and social issues (McClain, 2021), and commenting on
questionable news (Stroud et al., 2016) or correcting misinformation (Tandoc et al., 2020,
Chadwick and Vaccari, 2019). Silence or non-participation in the online environment is conceptualised as lurking or passive SNS use, which refers to passively viewing rather than posting or participating in an online community (Nonnecke and Preece, 2001). In trying
to understand the factors affecting lurking behaviour, Amichai-Hamburger et al. (2016)
proposed a model with three main reasons for remaining passive: individual differences (need
for gratification, personality dispositions, time available, and self-efficacy), social-group
processes (socialisation, type of community, social loafing, responses to delurking, and quality
of the response) and technological setting (technical design flaws and the privacy and safety
of the group). However, the concept of lurking or passive SNS use may not comprehensively explain self-silencing behaviour in the online environment, as self-silencing might stem from reasons beyond simply browsing passively or preferring to observe rather than participate.

Despite the importance of challenging misinformation to reduce its spread, there remains a paucity of evidence on what prevents people from doing so. To explore potential barriers to challenging misinformation, this article synthesises findings from several bodies of literature. Drawing on this literature, the potential reasons can be divided into six categories: self-oriented, relationship-oriented, others-oriented, content-oriented, individual characteristics, and technical factors (see Fig. 1).

Fig.1. Potential reasons why people do not challenge misinformation

1) Self-oriented Reasons
a) Fear of being attacked
Anonymity and lack of visual cues in CMC may lead to an online disinhibition effect, whereby users act more fiercely in cyberspace than they would in real life because social norms and restrictions are loosened (Suler, 2004). Internet users who are aware that cyberspace enables and provides ample opportunities for hostile communication may be reluctant to

engage in challenging misinformation due to fear of being attacked or becoming the victim of
cyberbullying, which is deliberate, repeated hostile behaviour to harm others using information
and communication technologies (Slonje et al., 2013).

The negative consequences of cyberbullying are known to be intense (Barlett, 2015) and
include depression (Patchin and Hinduja, 2006) or emotional distress (Cao et al., 2020).
Therefore, fear of cyber aggression may deter users from expressing dissenting opinions.
For instance, college students and young adults avoid expressing their opinions regarding
politics in the online environment because of online outrage (Vraga et al., 2015, Powers et al.,
2019). Evidence also suggests that users exposed to cyberbullying tend to decrease or
abandon their usage of SNS (Cao et al., 2020, Urbaniak et al., 2022).

The fear of being attacked might emanate from the polarisation of social media. Social media
enable an environment encouraging homophily, where individuals with the same beliefs and
opinions get together and become homogeneous (Cinelli et al., 2021). While convenient,
algorithms showing users customised content based on their interests and views facilitate
further polarisation. It is therefore likely that users who are aware that radicalisation and
extremism are prevalent in the online environment might be more prone to keeping silent. For
instance, 32% of users who never or rarely share content about political or social issues cited
the fear of being attacked as the reason for not posting (McClain, 2021). These findings
suggest that users might refrain from challenging misinformation due to fear of being attacked
in the online environment.

b) The motivation for protecting self-image


Impression management (also known as self-presentation) is the process through which
people try to control how others see them (Leary and Kowalski, 1990). According to this
approach, people sometimes modify their behaviour to create positive impressions in the eyes
of others by monitoring and assessing others’ perceptions of themselves. People seek to
manage their impressions on social media (Paliszkiewicz and Mądra-Sawicka, 2016), where
users are able to curate their images easily (Weinstein, 2014). They selectively disclose
information to create a desirable, ideal, and socially acceptable image (Zhao et al., 2008). In
order to meet the expectations of the audience, they post information that their audience will
find non-offensive (Marwick and Boyd, 2011) and avoid engaging in controversial topics
(Sleeper et al., 2013).

According to impression management theory (Leary and Kowalski, 1990), individuals are
motivated to make a positive impression rather than act as they feel they should. In this case,
it can be speculated that although individuals think they should correct misinformation (Bode
and Vraga, 2021c), they may remain silent owing to the risk of creating a negative impression,
as conflicts, negative feedback, and political discussions on social media are not desirable
(Thorson, 2014, Vraga et al., 2015, Koutamanis et al., 2015).

c) Lack of self-efficacy
Self-efficacy theory, derived from social cognitive theory, focuses on the interconnections
between behaviour, outcome expectancies and self-efficacy (Bandura, 1977). According to
this theory, self-efficacy refers to a person’s judgement of their own ability to determine how
successfully they can perform a specific behaviour. Simply put, self-efficacy is a person’s belief
in their own capacity to succeed.


Outcome expectancies, defined as a person's perception of the consequences of their actions,
have the potential to influence self-efficacy (Bandura, 1977). The theory suggests that efficacy
beliefs influence outcome expectancies. More precisely, people's outcome expectancies are
heavily influenced by their assessments of how well they would perform in various settings. In
this case, we can speculate that individuals might not challenge misinformation due to a
perceived lack of efficacy in achieving the behaviour (e.g., a perceived lack of knowledge).
Indeed, when individuals feel equipped enough to express their opinions, they are more likely
to speak up on a political issue regardless of what the majority thinks (Lasorsa, 1991).
Supporting this view, Tandoc et al. (2020) found that personal efficacy is one of the main factors affecting a user’s decision to correct fake news. In sum, users may
avoid challenging others due to their belief that their abilities are insufficient to succeed or that
their efforts would not make any difference.

d) Lack of Accountability

The bystander effect suggests that people are less likely to offer help in an emergency when
other people are present because of the diffusion of responsibility (Darley and Latané, 1968).
Although this phenomenon is associated with emergency situations in physical space, it is
also examined in virtual environments (Fischer et al., 2011) such as participation in
conversations in an online learning environment or cyberbullying (Hudson and Bruckman,
2004, You and Lee, 2019). It was shown that one of the reasons for not intervening in cyberbullying or not participating in conversations is that participants delegate the responsibility for intervention to other bystanders.

On SNS, misinformation can be seen by many people, and this might also lead to a diffusion of responsibility whereby people do not feel accountable for correcting misinformation. They might regard their own responsibility as lower because they think others might be more accountable for correcting it.

2) Relationship-oriented Reasons
a) Fear of isolation
Asch (1956) demonstrated empirically that individuals adjust their behaviour in order to fit in with the group. Building on the Asch conformity experiments, Noelle-Neumann (1974)
introduced the Spiral of Silence theory which proposes that people gauge the public opinion
climate and, if they perceive that their opinion is in the minority, they are more likely to hold
back their opinion, while if they think that their opinion is in the majority, they tend to speak out
confidently. One of the main reasons for conforming is the fear of isolation. Noelle-Neumann (1974) argues that, because of our social nature, we are afraid of being isolated from our peers and losing their respect. In order to avoid disapproval or social sanctions, people constantly monitor their environment and decide whether to express their opinions.

Being isolated or ostracised (ignored or excluded by others) is painful, as it triggers several physiological, affective, cognitive, and behavioural responses (Williams and Nida, 2011). Therefore, users on social networking sites conform, comply, or obey so that they are not excluded (Williams et al., 2000). One reason they do not correct others might be the fear of being isolated, as they want to fit in with the group.


b) The desire to maintain relationships
Maintaining social ties plays a pivotal role in psychological well-being (Kawachi and Berkman,
2001). The need to belong is one of the fundamental needs (Williams and Sommer, 1997) and
it is linked to psychological and physical well-being (Baumeister and Leary, 1995). The pursuit
of belonging also exists in cyberspace (Williams et al., 2000) and SNS such as Facebook
(Covert and Stefanone, 2018), Instagram and Twitter (Hayes et al., 2018). On SNS, users interact with a large number of friends. “Friends” on SNS encompass both strong and weak ties and include friends, family, neighbours, colleagues, and romantic partners, but also acquaintances or consequential strangers (Fingerman, 2009), and they are sources of social capital (Antheunis et al., 2015). One of the main motivations for using SNS is the desire to maintain
relationships (Dunne et al., 2010, Joinson, 2008). As a result, people might be more cautious
in their interactions. Indeed, Gallrein et al. (2019) found that individuals tend to withhold
negative interpersonal feedback as they perceive it to have the potential to harm their
relationships. In another study exploring why employees do not speak up about issues or
concerns, Milliken et al. (2003) reported that fear of damaging relationships is the second most
common reason for withholding opinions.

As conflicts may pose a threat to users’ sense of belonging, users may avoid challenging or
confronting others on social media.

3) Others-oriented Reasons
a) Concerns about negative impact on others
Individuals might withhold their opinions or refrain from challenging others for altruistic
purposes, such as fear of embarrassing or offending others. For instance, in organisations,
employees remain silent because of the concern that speaking up might upset, embarrass or
in some way harm other people (Milliken et al., 2003).

Withholding opinions out of concern for others also reveals itself in the public arena. A Norwegian study exploring freedom of expression under different social conventions and norms found that citizens withheld their opinions in the public domain due to the fear of offending others (Steen-Johnsen and Enjolras, 2016). On social media, users also adhere to the norm
of not offending others. A qualitative study showed that users hesitate to counteract
misinformation due to fear of embarrassing the sharer, so they prefer to use private
communication to minimise the risk (Rohman, 2021).

b) Normative Beliefs
Perceived norms are people’s understanding of the prevalent set of rules regulating the
behaviour that group members can enact (Lapinski and Rimal, 2005). They can be divided
into two categories: descriptive norms and injunctive norms. While descriptive norms explain
beliefs regarding the prevalence of a behaviour, injunctive norms explain perceived approval
of that behaviour by others (Lapinski and Rimal, 2005, Rimal and Real, 2003). Engagement in a behaviour is influenced by these two norms, that is, by the extent to which the behaviour is prevalent and the extent to which it is approved by others (Berkowitz, 2003).

Social norms play an important role as behavioural antecedents in many contexts, as well as
in the context of the correction of misinformation. Koo et al. (2021) showed that when

individuals perceive that corrective actions are common, they are more motivated to correct
misinformation. In their experimental study, Gimpel et al. (2021) found that emphasising the
socially desirable action of reporting false news using an injunctive social norm increases the
rate of reporting fake news. Although individuals think it is normative to correct someone (Bode
and Vraga, 2021c), it is not easy to engage in a dialogue to challenge the content sharer due
to the norms that govern social interactions. On Facebook, for example, where everyone's
social contacts could see the entire conversation, heated interactions and public discussions
were viewed as norm violations (McLaughlin and Vitak, 2012).

Taken together, as prevalence and approval by others influence the engagement in the
behaviour, it can be proposed that people do not challenge others since they perceive doing
so to be unusual and unacceptable on social media.

4) Content-oriented Reasons
a) Issue Relevance
One reason that individuals choose to remain silent might be the extent to which the content
is personally relevant or important to them. Indeed, studies have found that people are more
willing to correct misinformation when the news story is personally relevant to them or their
loved ones (Tandoc et al., 2020). Another study showed that people skipped past false posts
without thoroughly reading them as they did not find them interesting or relevant enough to
fully read (Geeng et al., 2020).

b) Issue Importance
Issue importance also influences people’s willingness to speak out publicly on a contentious
topic. The greater the perceived importance, the more willing people are to speak out (Moy et
al., 2001, Gearhart and Zhang, 2014). Consequently, it might be argued that people’s
avoidance of correcting false news can be related to the content’s importance, relevancy, or
appeal.

5) Individual Characteristics
Although there are some contextual influences on people’s decisions to discuss or confront,
individual factors such as demographics (e.g., age, sex, education level) may influence the
decision to engage in these conversations. For example, in their study about correction experiences on social media regarding COVID-19, Bode and Vraga (2021a) found that
respondents with more education were more likely to engage in correction, and older
respondents were less likely to report correcting others.

Personality traits might also influence users’ willingness to challenge. The five-factor model of
personality describes five dimensions of personality: extraversion, agreeableness,
conscientiousness, openness to experience, and neuroticism (McCrae and John, 1992).
Personality traits influence the frequency and patterns of social interactions in political
discussions (Hibbing et al., 2011, Gerber et al., 2012), online news comment behaviour (Wu
and Atkin, 2017) and students’ participation in controversial discussions in the classroom
(Gronostay, 2019).

In the context of politics, there is an association between extraversion and the tendency to
discuss politics (Hibbing et al., 2011, Mondak and Halperin, 2008). As challenging someone
requires asking questions or voluntarily providing corrections, extraversion might be positively
associated with engaging in conversations to correct misinformation. It may influence people’s

willingness to approach controversial dialogues on social media. Since agreeable individuals
focus on being acceptable in the eyes of others (Graziano and Tobin, 2002) and
agreeableness is associated with conflict avoidance, it might be negatively related to
challenging others. Individuals high in openness to experience are amenable to new ideas
and experiences. It can be speculated that openness to experience might be positively
associated with approaching conversations to question and learn the perspective of the
sharer. Neuroticism describes individuals who are emotionally unstable and troubled by negative emotions such as worry and stress (McCrae and John, 1992). Therefore, neuroticism might be negatively related to approaching conversations to challenge or correct others.

Perspective-taking and empathic concern could also impact a user’s decision to challenge.
According to the empathy-altruism hypothesis, empathy for another person generates an
altruistic drive to improve that individual’s welfare (Batson, 1987). As people try to establish a
positive self-presentation on SNS (Zhao et al., 2008) and sharing misinformation could hurt
one’s reputation (Altay et al., 2019), it can be speculated that users may develop empathy for the sharer and therefore refrain from challenging misinformation in order to protect them from negative feelings.

6) Technical Characteristics
Features and affordances on social media impact users' engagement in several ways. They
can encourage young people to express themselves on political issues (Lane, 2020) or affect
a user's decision on whether to interact with content, as users may choose not to engage
by using certain available features (Zhu et al., 2017, Wu et al., 2020a, Wu et al., 2020b).
Features like "hide a post", "unfollow", "snooze", and reactions (smiley face, angry face) can
be utilised by users as avoidance strategies for not commenting (Wu et al., 2020b).

Research has shown that the way information is displayed on the interface can have
an impact on users' misinformation sharing behaviour (Avram et al., 2020, Di Domenico et al.,
2021). To illustrate, people who are exposed to high engagement metrics (i.e., the numbers
of likes and shares) are more likely to like or share false content without verifying it (Avram et
al., 2020). In addition, when misinformation is presented so that the source precedes
the message, users are less likely to share it due to a lack of trust (Di Domenico et al., 2021).
How social media is designed also affects users' misinformation sharing behaviour (Fazio,
2020). Integrating friction into a design, for example a question that makes a user pause and
think before sharing information, reduces misinformation sharing (Fazio, 2020).

Given that features, affordances, and interface design have an impact on whether and how
users engage with content on SNSs, it can be argued that they may also affect users'
decisions to challenge misinformation. The lack of tools provided by the platforms and the
way SNSs are designed might contribute to users' tendency to remain silent when
encountering misinformation.

IV. CONCLUSION

Social corrections were proposed as countermeasures to mitigate misinformation (Bode and
Vraga, 2018, Bode and Vraga, 2021b, Walter et al., 2021, Walter and Murphy, 2018).
However, in studies using social corrections as an intervention strategy, there is a lack of
research into why people do not correct misinformation. Research has tended to focus on
factors affecting the correction of misinformation (Cohen et al., 2020, Koo et al., 2021, Tandoc
et al., 2020) rather than on the barriers that prevent users from doing so.

Electronic copy available at: https://ssrn.com/abstract=4137613


By synthesising insights from various bodies of literature, we presented hypotheses about
why people refrain from challenging misinformation when they encounter it on SNS. Although
there is a commonly held belief that social media is a disinhibited environment where
individuals discuss or express anything they like with little concern, studies show that users
may feel restrained in some circumstances while expressing their opinions online (Thorson,
2014) or correcting misinformation (Tandoc et al., 2020). Identifying the reasons why people
do not engage in conversations to question or correct the content might be helpful for devising
socio-technical measures to encourage people to challenge misinformation and contribute to
mitigating its spread.

This study aims to provide a starting point for identifying both the socio-psychological reasons
behind users' silence and measures to encourage users to challenge misinformation. Another
gap in the existing literature is the presentation of potential solutions. Scholars from different
fields have proposed tools, design considerations, and systems to cultivate constructive
discussions in online environments, e.g., web-based learning environments (Hew and Cheung,
2008, Lazonder et al., 2003, Yiong-Hwee and Churchill, 2007) and political deliberation
(Semaan et al., 2015, Lane, 2020). To the best of our knowledge, no application has been
identified for facilitating the challenging of misinformation. Given that incorporating design
approaches into digital behaviour change interventions has been successful in many diverse
areas (Elaheebocus et al., 2018), design considerations can be extended to encourage users
to disagree, question, or correct.

Persuasive system design (PSD) strategies have been used in many ways to promote
behaviour change (Torning and Oinas-Kukkonen, 2009). They are designed to influence
attitudes and behaviour by targeting cognitive, behavioural, psycho-social and other
psychological factors. Implementing PSD strategies (Oinas-Kukkonen and Harjumaa, 2009)
in the design may motivate users to challenge misinformation. Table I fleshes out a few
design suggestions to illustrate the potential of using PSD strategies.

Strategy: Reduction
Definition: Reducing complex behaviour into simple tasks.
Example: Stickers with questions may help users to question the content in a quick and impersonal way.

Strategy: Suggestion
Definition: Offering fitting suggestions.
Example: Sentence starters with different phrases to begin the first sentence may help users to challenge misinformation in a quicker and more structured way. Users write their comments by following an already initiated sentence.

Strategy: Self-monitoring
Definition: Enabling users to monitor their status or progress.
Example: A tone detector may help users to monitor how their comment is likely to sound to someone reading it. As the user writes a comment, an indicator on a scale of emotions forms as word choices, style, and punctuation are identified.

Strategy: Recognition
Definition: Providing public recognition for performing the target behaviour.
Example: A badge can provide public recognition. Users who correct misinformation can display the badge on their profile, so other users can see that they have taken the initiative to challenge misinformation on social media.

Strategy: Normative influence
Definition: Displaying norms about how most other people behave and what behaviour they approve of.
Example: A message about other users' positive attitudes towards correcting misinformation on social media may motivate users.

Strategy: Praise
Definition: Using praise as feedback for people's behaviour.
Example: A notification or message of praise after correcting misinformation may motivate users.

Strategy: Rewards
Definition: Rewarding people for performing the target behaviour.
Example: A reward such as badges or points after each correction may motivate users to correct misinformation more often.

Table I. Design suggestions based on Persuasive System Design (PSD) strategies
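As an illustration only, the "Suggestion" and "Self-monitoring" strategies could be prototyped along the following lines. This is a minimal sketch: the sentence starters, the hostile-word list, and the function names are hypothetical and are not drawn from any existing platform or library.

```python
# Hypothetical sketch of two PSD strategies; word lists and starters are illustrative only.

# "Suggestion" strategy: sentence starters shown next to the comment box
# to help users begin a corrective comment in a more structured way.
SENTENCE_STARTERS = [
    "I read a different account of this; have you seen ...",
    "Could you share the source for this? I found ...",
    "I might be wrong, but I think this claim ...",
]

# Toy word list for the tone detector (a real system would use a trained model).
HOSTILE_WORDS = {"stupid", "idiot", "liar", "dumb"}


def suggest_starters() -> list[str]:
    """Return the sentence starters offered to the user."""
    return SENTENCE_STARTERS


def tone_indicator(draft_comment: str) -> str:
    """'Self-monitoring' strategy: a crude indicator of how a draft may sound."""
    words = {w.strip(".,!?").lower() for w in draft_comment.split()}
    return "hostile" if words & HOSTILE_WORDS else "neutral"


print(tone_indicator("That is a stupid claim!"))       # hostile
print(tone_indicator("Could you share your source?"))  # neutral
```

In a deployed interface, the tone label would update as the user types, giving the kind of live self-monitoring feedback the table describes.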

Future research is needed to study the extent to which each of the identified factors influences
challenging behaviour online and whether users' perceptions of other users on social media
are accurate. It is hoped that this review will give some insights into users' challenging
behaviour on social media that can aid future designs. Future research could also investigate
how the design of social media interfaces can be improved to empower people to challenge
misinformation.

ACKNOWLEDGEMENT
This work is supported by Bournemouth University's match-funded studentship.

REFERENCES

Altay, S., Hacquin, A.-S. and Mercier, H. (2019), "Why do so few people share fake news? It hurts
their reputation", New Media & Society.
Amichai-Hamburger, Y., Gazit, T., Bar-Ilan, J., Perez, O., Aharony, N., Bronstein, J. and Dyne, T. S.
(2016), "Psychological factors behind the lack of participation in online discussions",
Computers in Human Behavior Vol. 55, pp. 268-277.
Anderson, K. E. (2018), "Getting acquainted with social networks and apps: combating fake news on
social media", Library Hi Tech News, Vol. 35 No. 3, pp. 1-6.
Antheunis, M. L., Vanden Abeele, M. M. P. and Kanters, S. (2015), "The impact of Facebook use on
micro-level social capital: A synthesis", Societies, Vol. 5 No. 2, pp. 399-419.
Arif, A., Robinson, J. J., Stanek, S. A., Fichet, E. S., Townsend, P., Worku, Z. and Starbird, K. (2017),
"A Closer Look at the Self-Correcting Crowd: Examining Corrections in Online Rumors",
Proceedings of the 2017 ACM Conference on Computer Supported Cooperative Work and
Social Computing, Portland, Oregon, USA, Association for Computing Machinery pp. 155–
168.
Asch, S. E. (1956), "Studies of independence and conformity: I. A minority of one against a
unanimous majority", Psychological monographs: General and applied, Vol. 70 No. 9, p. 1.
Avram, M., Micallef, N., Patil, S. and Menczer, F. (2020), "Exposure to social engagement metrics
increases vulnerability to misinformation", arXiv preprint arXiv:2005.04682.
Bandura, A. (1977), "Self-efficacy: toward a unifying theory of behavioral change", Psychological
Review, Vol. 84 No. 2, p. 191.
Barlett, C. P. (2015), "Anonymously hurting others online: The effect of anonymity on cyberbullying
frequency", Psychology of Popular Media Culture, Vol. 4 No. 2, p. 70.
Batson, C. D. (1987), "Prosocial Motivation: Is it ever Truly Altruistic?", in Berkowitz, L. (Ed.)
Advances in Experimental Social Psychology, Academic Press, pp. 65-122.
Baumeister, R. F. and Leary, M. R. (1995), "The need to belong: desire for interpersonal attachments
as a fundamental human motivation", Psychological Bulletin, Vol. 117 No. 3, p. 497.
Berkowitz, A. D. (2003), "Applications of social norms theory to other health and social justice issues",
in Perkins, H. W. (Ed.) The social norms approach to preventing school and college age
substance abuse: A handbook for educators, counselors, and clinicians., Jossey-Bass/Wiley,
Hoboken, NJ, pp. 259-279.
Bienefeld, N. and Grote, G. (2012), "Silence that may kill", Aviation Psychology and Applied Human
Factors, Vol. 2 No. 1, pp. 1-10.
Bode, L. and Vraga, E. K. (2015), "In related news, that was wrong: The correction of misinformation
through related stories functionality in social media", Journal of Communication, Vol. 65 No.
4, pp. 619-638.
Bode, L. and Vraga, E. K. (2018), "See Something, Say Something: Correction of Global Health
Misinformation on Social Media", Health Communication, p. 1131.
Bode, L. and Vraga, E. K. (2021a), "Correction Experiences on Social Media During COVID-19",
Social Media + Society, Vol. 7.
Bode, L. and Vraga, E. K. (2021b), "People-powered correction: Fixing misinformation on social
media", The Routledge Companion to Media Disinformation and Populism, Routledge, pp.
498-506.
Bode, L. and Vraga, E. K. (2021c), "Value for Correction: Documenting Perceptions about Peer
Correction of Misinformation on Social Media in the Context of COVID-19", Journal of
Quantitative Description: Digital Media, Vol. 1.
Cao, X., Khan, A. N., Ali, A. and Khan, N. A. (2020), "Consequences of Cyberbullying and Social
Overload while Using SNSs: A Study of Users’ Discontinuous Usage Behavior in SNSs",
Information Systems Frontiers, Vol. 22 No. 6, pp. 1343-1356.
Chadwick, A. and Vaccari, C. (2019), "News sharing on UK social media: Misinformation,
disinformation, and correction", Loughborough, UK: Loughborough University.
Cinelli, M., Morales, G. D. F., Galeazzi, A., Quattrociocchi, W. and Starnini, M. (2021), "The echo
chamber effect on social media", Proceedings of the National Academy of Sciences, Vol. 118
No. 9.
Cohen, E. L., Atwell Seate, A., Kromka, S. M., Sutherland, A., Thomas, M., Skerda, K. and Nicholson,
A. (2020), "To correct or not to correct? Social identity threats increase willingness to
denounce fake news through presumed media influence and hostile media perceptions",
Communication Research Reports, Vol. 37 No. 5, pp. 263-275.

Colliander, J. (2019), "“This is fake news”: Investigating the role of conformity to other users’ views
when commenting on and spreading disinformation in social media", Computers in Human
Behavior, Vol. 97, pp. 202-215.
Darley, J. M. and Latané, B. (1968), "Bystander intervention in emergencies: diffusion of
responsibility", Journal of Personality and Social Psychology, Vol. 8 No. 4, Pt. 1, p. 377.
Di Domenico, G., Nunan, D., Sit, J. and Pitardi, V. (2021), "Free but fake speech: When giving
primacy to the source decreases misinformation sharing on social media", Psychology &
Marketing, Vol. 38 No. 10, pp. 1700-1711.
Djordjevic, M. (2020), "Corporate Attempts to Combat Fake News", Fake News in an Era of Social
Media: Tracking Viral Contagion, p. 103.
Dunne, Á., Lawlor, M. A. and Rowley, J. (2010), "Young people's use of online social networking
sites–a uses and gratifications perspective", Journal of Research in Interactive Marketing.
Dyne, L. V., Ang, S. and Botero, I. C. (2003), "Conceptualizing employee silence and employee voice
as multidimensional constructs", Journal of Management Studies, Vol. 40 No. 6, pp. 1359-
1392.
Edelson, L., Nguyen, M.-K., Goldstein, I., Goga, O., McCoy, D. and Lauinger, T. (2021),
"Understanding engagement with US (mis) information news sources on Facebook", in IMC
'21: Proceedings of the 21st ACM Internet Measurement Conference, pp. 444-463.
Elaheebocus, S. M. R. A., Weal, M., Morrison, L. and Yardley, L. (2018), "Peer-based social media
features in behavior change interventions: systematic review", Journal of medical Internet
research, Vol. 20 No. 2, p. e8342.
Fassinger, P. A. (1995), "Understanding classroom interaction: Students' and professors'
contributions to students' silence", The Journal of Higher Education, Vol. 66 No. 1, pp. 82-96.
Fazio, L. (2020), "Pausing to consider why a headline is true or false can help reduce the sharing of
false news", Harvard Kennedy School Misinformation Review, Vol. 1 No. 2.
Fingerman, K. L. (2009), "Consequential strangers and peripheral ties: The importance of unimportant
relationships", Journal of Family Theory & Review, Vol. 1 No. 2, pp. 69-86.
Fischer, P., Krueger, J. I., Greitemeyer, T., Vogrincic, C., Kastenmüller, A., Frey, D., Heene, M.,
Wicher, M. and Kainbacher, M. (2011), "The bystander-effect: a meta-analytic review on
bystander intervention in dangerous and non-dangerous emergencies", Psychological
bulletin, Vol. 137 No. 4, p. 517.
Gallrein, A. M. B., Bollich‐Ziegler, K. L. and Leising, D. (2019), "Interpersonal feedback in everyday
life: Empirical studies in Germany and the United States", European Journal of Social
Psychology, Vol. 49 No. 1, pp. 1-18.
Gearhart, S. and Zhang, W. (2014), "Gay Bullying and Online Opinion Expression", Social Science
Computer Review, Vol. 32 No. 1, pp. 18-36.
Geeng, C., Yee, S. and Roesner, F. (2020), "Fake News on Facebook and Twitter: Investigating How
People (Don't) Investigate", Conference on Human Factors in Computing Systems
Proceedings, pp. 1-14.
Gerber, A. S., Huber, G. A., Doherty, D. and Dowling, C. M. (2012), "Disagreement and the avoidance
of political discussion: Aggregate relationships and differences across personality traits",
American Journal of Political Science, Vol. 56 No. 4, pp. 849-874.
Gimpel, H., Heger, S., Olenberger, C. and Utz, L. (2021), "The effectiveness of social norms in
fighting fake news on social media", Journal of Management Information Systems, Vol. 38
No. 1, pp. 196-221.
Graziano, W. G. and Tobin, R. M. (2002), "Agreeableness: Dimension of personality or social
desirability artifact?", Journal of Personality, Vol. 70 No. 5, pp. 695-728.
Gronostay, D. (2019), "To argue or not to argue? The role of personality traits, argumentativeness,
epistemological beliefs and assigned positions for students’ participation in controversial
political classroom discussions", Vol. 47 No. 1, pp. 117-135.
Hampton, K. N., Rainie, H., Lu, W., Dwyer, M., Shin, I. and Purcell, K. (2014), Social media and
the'spiral of silence', Pew Research Center.
Hew, K. F. and Cheung, W. S. (2008), "Attracting student participation in asynchronous online
discussions: A case study of peer facilitation", Computers & Education, Vol. 51 No. 3, pp.
1111-1124.
Hibbing, M. V., Ritchie, M. and Anderson, M. R. (2011), "Personality and political discussion", Political
Behaviour, Vol. 33 No. 4, pp. 601-624.
Ho, S. S. and McLeod, D. M. (2008), "Social-psychological influences on opinion expression in face-
to-face and computer-mediated communication", Communication Research, Vol. 35 No. 2,
pp. 190-207.

Hudson, J. M. and Bruckman, A. S. (2004), "The Bystander Effect: A Lens for Understanding Patterns
of Participation", Journal of the Learning Sciences, Vol. 13 No. 2, pp. 165-195.
Jaworski, A. and Sachdev, I. (1998), "Beliefs about silence in the classroom", Language and
Education, Vol. 12 No. 4, pp. 273-292.
Joinson, A. N. (2008), "Looking at, looking up or keeping up with people? Motives and use of
Facebook", Proceedings of the SIGCHI Conference on Human Factors in Computing Systems,
pp. 1027-1036.
Kawachi, I. and Berkman, L. F. (2001), "Social ties and mental health", Journal of Urban health, Vol.
78 No. 3, pp. 458-467.
Kim, C. and Yang, S.-U. (2017), "Like, comment, and share on Facebook: How each behavior differs
from the other", Public Relations Review, Vol. 43 No. 2, pp. 441-449.
Koo, A. Z.-X., Su, M.-H., Lee, S., Ahn, S.-Y. and Rojas, H. (2021), "What Motivates People to Correct
Misinformation? Examining the Effects of Third-person Perceptions and Perceived Norms",
Journal of Broadcasting & Electronic Media, Vol. 65 No. 1, pp. 111-134.
Koutamanis, M., Vossen, H. G. M. and Valkenburg, P. M. (2015), "Adolescents’ comments in social
media: Why do adolescents receive negative feedback and who is most at risk?", Computers
in Human Behavior, Vol. 53, pp. 486-494.
Lane, D. S. (2020), "Social media design for youth political expression: Testing the roles of
identifiability and geo-boundedness", New Media & Society, Vol. 22 No. 8, pp. 1394-1413.
Lapinski, M. K. and Rimal, R. N. (2005), "An explication of social norms", Communication theory, Vol.
15 No. 2, pp. 127-147.
Lasorsa, D. L. (1991), "Political outspokenness: Factors working against the spiral of silence",
Journalism Quarterly, Vol. 68 No. 1-2, pp. 131-140.
Lazonder, A. W., Wilhelm, P. and Ootes, S. A. W. (2003), "Using sentence openers to foster student
interaction in computer-mediated learning environments", Computers & Education, Vol. 41
No. 3, pp. 291-308.
Leary, M. R. and Kowalski, R. M. (1990), "Impression management: A literature review and two-
component model", Psychological Bulletin, Vol. 107 No. 1, p. 34.
Luarn, P. and Hsieh, A.-Y. (2014), "Speech or silence: the effect of user anonymity and member
familiarity on the willingness to express opinions in virtual communities", Online Information
Review, Vol. 38 No. 7, pp. 881-895.
Margolin, D. B., Hannak, A. and Weber, I. (2018), "Political Fact-Checking on Twitter: When Do
"Corrections Have an Effect?", Political Communication, p. 196.
Marwick, A. E. and Boyd, D. (2011), "I tweet honestly, I tweet passionately: Twitter users, context
collapse, and the imagined audience", New Media & Society, Vol. 13 No. 1, pp. 114-133.
McClain, C. (2021), "American Trend Panel".
McCrae, R. R. and John, O. P. (1992), "An introduction to the five‐factor model and its applications",
Journal of Personality, Vol. 60 No. 2, pp. 175-215.
McLaughlin, C. and Vitak, J. (2012), "Norm evolution and violation on Facebook", New Media &
Society, Vol. 14 No. 2, pp. 299-315.
Milliken, F. J., Morrison, E. W. and Hewlin, P. F. (2003), "An exploratory study of employee silence:
Issues that employees don’t communicate upward and why", Journal of Management Studies,
Vol. 40 No. 6, pp. 1453-1476.
Mondak, J. J. and Halperin, K. D. (2008), "A framework for the study of personality and political
behaviour", British Journal of Political Science, Vol. 38 No. 2, pp. 335-362.
Morrison, E. W. and Milliken, F. J. (2000), "Organizational silence: A barrier to change and
development in a pluralistic world", Academy of Management Review, Vol. 25 No. 4, pp. 706-
725.
Moy, P., Domke, D. and Stamm, K. (2001), "The Spiral of Silence and Public Opinion on Affirmative
Action", Journalism & Mass Communication Quarterly, Vol. 78 No. 1, pp. 7-25.
Noelle-Neumann, E. (1974), "The Spiral of Silence A Theory of Public Opinion", Journal of
Communication, Vol. 24 No. 2, pp. 43-51.
Nonnecke, B. and Preece, J. (2001), "Why lurkers lurk".
Oinas-Kukkonen, H. and Harjumaa, M. (2009), "Persuasive systems design: Key issues, process
model, and system features", Communications of the Association for Information Systems,
Vol. 24 No. 1, p. 28.
World Health Organization (2020), "Managing the COVID-19 infodemic: Promoting healthy behaviours
and mitigating the harm from misinformation and disinformation", available at:
https://www.who.int/news/item/23-09-2020-managing-the-covid-19-infodemic-promoting-
healthy-behaviours-and-mitigating-the-harm-from-misinformation-and-disinformation.

Ozturk, P., Li, H. and Sakamoto, Y. (2015), "Combating rumor spread on social media: The
effectiveness of refutation and warning", in 2015 48th Hawaii international conference on
system sciences, pp. 2406-2414.
Paliszkiewicz, J. and Mądra-Sawicka, M. (2016), "Impression Management in Social Media: The
Example of LinkedIn", Management, Vol. 11 No. 3.
Papacharissi, Z. (2002), "The virtual sphere: the internet as a public sphere", New Media & Society,
No. 1, p. 9.
Patchin, J. W. and Hinduja, S. (2006), "Bullies move beyond the schoolyard: A preliminary look at
cyberbullying", Youth Violence and Juvenile Justice, Vol. 4 No. 2, pp. 148-169.
Pinder, C. C. and Harlos, K. P. (2001), "Employee silence: Quiescence and acquiescence as
responses to perceived injustice", Research in Personnel and Human Resources
Management, Emerald Group Publishing Limited.
Powers, E., Koliska, M. and Guha, P. (2019), "“Shouting Matches and Echo Chambers”: Perceived
Identity Threats and Political Self-Censorship on Social Media", International Journal of
Communication, Vol. 13, p. 20.
Rimal, R. N. and Real, K. (2003), "Understanding the influence of perceived norms on behaviors",
Communication Theory, Vol. 13 No. 2, pp. 184-203.
Rocca, K. A. (2010), "Student participation in the college classroom: An extended multidisciplinary
literature review", Communication Education, Vol. 59 No. 2, pp. 185-213.
Rohman, A. (2021), "Counteracting Misinformation in Quotidian Settings", in International Conference
on Information, pp. 141-155.
Semaan, B., Faucett, H., Robertson, S. P., Maruyama, M. and Douglas, S. (2015), "Designing political
deliberation environments to support interactions in the public sphere", pp. 3167-3176.
Shearer, E. and Mitchell, A. (2021), "News use across social media platforms in 2020".
Sleeper, M., Balebako, R., Das, S., McConahy, A. L., Wiese, J. and Cranor, L. F. (2013), "The post
that wasn't: exploring self-censorship on facebook", Proceedings of the 2013 conference on
Computer supported cooperative work, San Antonio, Texas, USA, Association for Computing
Machinery pp. 793–802.
Slonje, R., Smith, P. K. and Frisén, A. (2013), "The nature of cyberbullying, and strategies for
prevention", Computers in Human Behavior, Vol. 29 No. 1, pp. 26-32.
Steen-Johnsen, K. and Enjolras, B. (2016), "The Fear of Offending: Social Norms and Freedom of
Expression", Society, No. 4, p. 352.
Stroud, N. J., Van Duyn, E. and Peacock, C. (2016), "News commenters and news comment
readers", Engaging News Project, pp. 1-21.
Suler, J. (2004), "The Online Disinhibition Effect", Cyberpsychology & Behavior, Vol. 7 No. 3, p. 321.
Sundar, S. S., Xu, Q. and Oeldorf-Hirsch, A. (2009), "Authority vs. peer: How interface cues influence
users", CHI EA '09: CHI '09 Extended Abstracts on Human Factors in Computing Systems,
pp. 4231-4236.
Tanaka, Y., Sakamoto, Y. and Matsuka, T. (2013), "Toward a Social-Technological System that
Inactivates False Rumors through the Critical Thinking of Crowds", 46th Hawaii International
Conference on System Sciences, IEEE pp. 649-658.
Tandoc, E. C., Lim, D. and Ling, R. (2020), "Diffusion of disinformation: How social media users
respond to fake news and why", Journalism, Vol. 21 No. 3, pp. 381-398.
Thomas, K. W. (1992), "Conflict and conflict management: Reflections and update", Journal of
Organizational Behavior, pp. 265-274.
Thorson, K. (2014), "Facing an uncertain reception: Young citizens and political interaction on
Facebook", Information, Communication & Society, Vol. 17 No. 2, pp. 203-216.
Torning, K. and Oinas-Kukkonen, H. (2009), "Persuasive system design: state of the art and future
directions", in Persuasive '09: Proceedings of the 4th International Conference on Persuasive
Technology, pp. 1-8.
Tully, M., Bode, L. and Vraga, E. K. (2020), "Mobilizing Users: Does Exposure to Misinformation and
Its Correction Affect Users’ Responses to a Health Misinformation Post?", Social Media +
Society, Vol. 6.
Urbaniak, R., Ptaszyński, M., Tempska, P., Leliwa, G., Brochocki, M. and Wroczyński, M. (2022),
"Personal attacks decrease user activity in social networking platforms", Computers in Human
Behavior, Vol. 126, p. 106972.
Vicol, D. O. (2020), "Who is most likely to believe and to share misinformation? Full Fact".
Vosoughi, S., Roy, D. and Aral, S. (2018), "The spread of true and false news online", Science, Vol.
359 No. 6380, pp. 1146-1151.

Vraga, E. K. and Bode, L. (2017), "Using expert sources to correct health misinformation in social
media", Science Communication, Vol. 39 No. 5, pp. 621-645.
Vraga, E. K., Thorson, K., Kligler-Vilenchik, N. and Gee, E. (2015), "How individual sensitivities to
disagreement shape youth political expression on Facebook", Computers in Human Behavior,
Vol. 45, pp. 281-289.
Walter, N., Brooks, J. J., Saucier, C. J. and Suresh, S. (2021), "Evaluating the impact of attempts to
correct health misinformation on social media: A meta-analysis", Health Communication, Vol.
36 No. 13, pp. 1776-1784.
Walter, N. and Murphy, S. T. (2018), "How to unring the bell: A meta-analytic approach to correction
of misinformation", Communication Monographs, p. 423.
Walther, J. B. (1992), "Interpersonal effects in computer-mediated interaction: A relational
perspective", Communication Research, Vol. 19 No. 1, pp. 52-90.
Weinstein, E. C. (2014), "The personal is political on social media: Online civic expression patterns
and pathways among civically engaged youth", International journal of communication, Vol. 8,
p. 24.
Williams, K. D., Cheung, C. K. and Choi, W. (2000), "Cyberostracism: effects of being ignored over
the Internet", Journal of Personality and Social Psychology, Vol. 79 No. 5, p. 748.
Williams, K. D. and Nida, S. A. (2011), "Ostracism: Consequences and coping", Current Directions in
Psychological Science, Vol. 20 No. 2, pp. 71-75.
Williams, K. D. and Sommer, K. L. (1997), "Social ostracism by coworkers: Does rejection lead to
loafing or compensation?", Personality and Social Psychology Bulletin, Vol. 23 No. 7, pp. 693-
706.
Wu, L., Morstatter, F., Carley, K. M. and Liu, H. (2019), "Misinformation in Social Media: Definition,
Manipulation, and Detection", SIGKDD Explor. Newsl., Vol. 21 No. 2, pp. 80–90.
Wu, T.-Y. and Atkin, D. (2017), "Online News Discussions: Exploring the Role of User Personality and
Motivations for Posting Comments on News", Journalism & Mass Communication Quarterly,
Vol. 94 No. 1, pp. 61-80.
Wu, T.-Y., Oeldorf-Hirsch, A. and Atkin, D. (2020a), "A click is worth a thousand words: Probing the
predictors of using click speech for online opinion expression", International Journal of
Communication, Vol. 14, p. 20.
Wu, T.-Y., Xu, X. and Atkin, D. (2020b), "The alternatives to being silent: exploring opinion expression
avoidance strategies for discussing politics on Facebook", Internet Research.
Yiong-Hwee, T. and Churchill, D. (2007), "Using Sentence Openers to Support Students’
Argumentation in an Online Learning Environment", Educational Media International, Vol. 44
No. 3, pp. 207-218.
You, L. and Lee, Y.-H. (2019), "The bystander effect in cyberbullying on social network sites:
Anonymity, group size, and intervention intentions", Telematics and Informatics, Vol. 45, p.
101284.
Zhao, S., Grasmuck, S. and Martin, J. (2008), "Identity construction on Facebook: Digital
empowerment in anchored relationships", Computers in Human Behavior, Vol. 24 No. 5, pp.
1816-1836.
Zhu, Q., Skoric, M. and Shen, F. (2017), "I shield myself from thee: Selective avoidance on social
media during political protests", Political Communication, Vol. 34 No. 1, pp. 112-131.
Zollo, F., Bessi, A., Del Vicario, M., Scala, A., Caldarelli, G., Shekhtman, L., Havlin, S. and
Quattrociocchi, W. (2015), "Debunking in a World of Tribes", PLoS ONE, Vol. 12 No. 7.
