Keywords: Misinformation, Challenging Misinformation, Social Media, Online Silence
Abstract
Purpose: The main aim of this paper is to provide a groundwork for future research into users’
inaction on social media by generating hypotheses drawing from several bodies of related
literature, including organisational behaviour, communication, human-computer interaction
(HCI), psychology, and education.
Findings: Drawing on relevant literature, the potential reasons can be divided into six
categories: self-oriented, relationship-oriented, others-oriented, content-oriented, individual
characteristics, and technical factors.
Originality: By incorporating various bodies of literature, this work provides a starting point
for understanding both the socio-psychological reasons that prevent users from challenging
misinformation and the measures that may encourage users to engage in such conversations.
I. INTRODUCTION
The term misinformation typically denotes incorrect or misleading information. Although many
terms, such as disinformation, fake news, and rumours, are used interchangeably, they are
distinguished from misinformation by intention or format (Wu et al., 2019). Throughout
this paper, the term “misinformation” will be used as an umbrella term, as it refers widely to any
type of false information regardless of intention or format.
With the widespread use of the internet and Social Networking Sites (SNS) for information
seeking and with users becoming content creators and broadcasters on such sites, information
starts to spread widely even before its accuracy can be verified. Recently, with the outbreak of
COVID-19, the issue of misinformation has become so prominent that the World Health
Organization raised concerns about social media, arguing that it amplified the "infodemic"
(World Health Organization, 2020). Social media can be argued to be a neutral communication
channel; however, the way SNS prioritise content and display it to users is also argued to
amplify users’ cognitive biases, such as the bandwagon effect (Sundar et al., 2009), which refers
to individuals being more likely to think and behave in a certain way if they perceive that other
people are doing the same, or confirmation bias (Kim and Yang, 2017), which refers to the
tendency to seek out and favour information that confirms one’s existing beliefs.
Conflict management styles (Thomas, 1992) can provide insights regarding this avoidance
behaviour. It suggests that people deal with conflict by using one of five modes: avoiding,
competing, collaborating, compromising, and accommodating (Thomas, 1992). When users
encounter a conflict in cyberspace, such as seeing misinformation and weighing the
consequences of correcting it, they might employ avoidance as a management strategy. More
precisely, instead of seeking to resolve a problem, users might seek to evade it. Indeed,
Tandoc et al. (2020) reported that one reason people would avoid correcting a stranger or
acquaintance when they spot fake news is the anticipation of conflict.
Although several studies have shown that people in cyberspace do not challenge
misinformation (Chadwick and Vaccari, 2019, Bode and Vraga, 2021c, Tandoc et al., 2020,
Vicol, 2020, Tully et al., 2020), to our knowledge, there have been no attempts to examine the
factors leading to this inaction.
Our aim in this paper is to pave the way for future research regarding hesitancy in challenging
misinformation on social media. We identify potential reasons for such online silence by
drawing from different bodies of literature. This research is intended to be neither a
systematic review nor a scoping or narrative review, since, to our knowledge, there is no
literature primarily focused on this topic. Instead, this paper synthesises pertinent literature
on related topics and proposes potential reasons for users’ silence towards misinformation on
social media.
In this research, the term “challenging” rather than “correcting” is used for two reasons.
The first is that “correcting” is an absolute statement that presumes the person doing the
correcting is accurate; this might not be the case. The person who intends to correct
may not actually be correct. The second reason is that this paper does not only address the
corrections but also disagreements or disputes over the content. In other words, the term
“challenging” serves a wider function than simply correcting.
This paper is organised as follows. Section 2 presents a literature review and an overview of
challenging misinformation on social media. Section 3 provides insights into possible reasons
people are reluctant to challenge misinformation. We conclude the paper and present possible
solutions and future work directions in Section 4.
As many as 53% of U.S. adults obtain news from social media (Shearer and Mitchell, 2021).
Besides being a fertile ground for misinformation, social media also offers opportunities to
mitigate the problem (Anderson, 2018, Djordjevic, 2020). In addition to algorithmic approaches
or machine learning-based solutions, individuals’ active participation in conversations to
challenge misinformation can help reduce misinformation (Bode and Vraga, 2018, Margolin et
al., 2018). Many social media users may not see themselves as key players in mitigating
misinformation, but many may contribute to its dissemination through their silence. Indeed,
one might argue that, in not challenging such misinformation, users are complicit in its spread.
Data from several studies suggest that correcting misinformation is not common on social
media. Almost 80% of social media users in the UK have not told anyone who shared false
news on social media that the news they shared was false or exaggerated (Chadwick and
Vaccari, 2019). Another U.K. survey found that although 58% of respondents reported having
encountered content that they thought was false, only 21% said they did something to correct
it (Vicol, 2020). Recent research in the U.S. regarding the correction of misinformation about
COVID-19 on social media revealed similar findings: of the 56.6% of respondents who reported
seeing misinformation, only 35.1% said they corrected someone (Bode and Vraga, 2021c).
Similarly, in Singapore, 73% of social media users dismiss fake news posts on social media
without taking further action (Tandoc et al., 2020). Some studies have documented that actively
interacting with corrections when exposed to unconfirmed claims is not common (Zollo et al.,
2015), that SNS users are not consistently motivated to correct misinformation publicly (Cohen et
al., 2020), and that explicit corrections of other users are rare in the online environment (Arif et
al., 2017).
Reporting misinformation is one of the techniques provided by social media, enabling
individuals to anonymously mark a post as false. However, although reporting helps diminish
the problem, it requires several steps and does not allow users to express their opinions and
thereby generate constructive and meaningful dialogue. Such dialogue may also have the
extra benefit of altering the beliefs and enhancing the critical literacy of the social sources
posting or sharing misinformation and their audience, which can foster long-term behaviour
change. Active engagement with false posts to challenge them through deliberation, argumentation,
or questioning is crucial to decreasing misinformation dissemination and to cultivating a diverse
environment, as it increases the range of ideas expressed. Individuals modify their beliefs in misinformation
after seeing another user being corrected on social media (Vraga and Bode, 2017). Users are
also less likely to spread rumours when they feel they could be confronted with a
counterargument, criticism, or warning (Ozturk et al., 2015, Tanaka et al., 2013) and are more
likely to comment critically on posts when they are exposed to critical comments from other
users (Colliander, 2019).
Social media has been construed as fundamentally different from offline spaces in offering an
environment where individuals are less restricted in expressing ideas. Some believe that the risks
of expressing opinions and sharing content online are lower than doing the same offline (Ho
and McLeod, 2008, Luarn and Hsieh, 2014, Papacharissi, 2002). However, according to social
information processing theory (Walther, 1992), although interpersonal development requires
more time in a computer-mediated environment than face-to-face (FtF), users may grow
relationships online that can become as strong as those formed FtF, given sufficient time.
In a qualitative study investigating the reasons why employees remain silent (Milliken et al.,
2003), fear of being viewed or labelled negatively and damaging valued relationships were the
most frequently mentioned reasons. Even in situations where silence might have devastating
consequences, people might still choose not to speak up. Bienefeld and Grote (2012) revealed
that although speaking up is critical for flight safety, aircrew members are reluctant to do so
because of the negative outcomes of speaking up. Their most common reason for not speaking
was their desire to maintain a good relationship with the team and not lose the other crew
members’ acceptance and trust. In addition, captains were afraid of embarrassing first officers,
and first officers were concerned that captains would view them as troublemakers if they
contradicted them.
Reasons that hinder employees from speaking up align closely with the educational psychology
literature investigating reasons for student participation in the classroom. Barriers to
participating in class range from negative outcome expectations or evaluation apprehension to
fear of appearing unintelligent or inadequate to one’s peers or instructors (Rocca, 2010,
Fassinger, 1995). Logistics such as class size, seating arrangement, mandatory participation,
and the instructor’s influence also affect students’ participation (Rocca, 2010).
Taken together, these studies provide important insights into the reasons people refrain from
entering conversations, speaking up, or asking questions in offline environments. In CMC, users also
choose to be silent; they refrain from activities such as discussing their ideas (Hampton et al.,
1) Self-oriented Reasons
a) Fear of being attacked
Anonymity and lack of visual cues in CMC may lead to an online disinhibition effect, whereby
users act more aggressively in cyberspace than they would in real life because social norms
and restrictions are loosened (Suler, 2004). Internet users who are aware that cyberspace
enables and provides ample opportunities for hostile communication may be reluctant to speak
out.
The negative consequences of cyberbullying are known to be intense (Barlett, 2015) and
include depression (Patchin and Hinduja, 2006) or emotional distress (Cao et al., 2020).
Therefore, fear of cyber aggression may thwart users from expressing their deviant opinions.
For instance, college students and young adults avoid expressing their opinions regarding
politics in the online environment because of online outrage (Vraga et al., 2015, Powers et al.,
2019). Evidence also suggests that users exposed to cyberbullying tend to decrease or
abandon their usage of SNS (Cao et al., 2020, Urbaniak et al., 2022).
The fear of being attacked might emanate from the polarisation of social media. Social media
enable an environment encouraging homophily, where individuals with the same beliefs and
opinions get together and become homogeneous (Cinelli et al., 2021). While convenient,
algorithms showing users customised content based on their interests and views facilitate
further polarisation. It is therefore likely that users who are aware that radicalisation and
extremism are prevalent in the online environment might be more prone to keeping silent. For
instance, 32% of users who never or rarely share content about political or social issues cited
the fear of being attacked as the reason for not posting (McClain, 2021). These findings
suggest that users might refrain from challenging misinformation due to fear of being attacked
in the online environment.
b) Fear of creating a negative impression
According to impression management theory (Leary and Kowalski, 1990), individuals are
motivated to make a positive impression rather than act as they feel they should. In this case,
it can be speculated that although individuals think they should correct misinformation (Bode
and Vraga, 2021c), they may remain silent owing to the risk of creating a negative impression,
as conflicts, negative feedback, and political discussions on social media are not desirable
(Thorson, 2014, Vraga et al., 2015, Koutamanis et al., 2015).
c) Lack of self-efficacy
Self-efficacy theory, derived from social cognitive theory, focuses on the interconnections
between behaviour, outcome expectancies and self-efficacy (Bandura, 1977). According to
this theory, self-efficacy refers to a person’s judgement of their own ability to determine how
successfully they can perform a specific behaviour. Simply put, self-efficacy is a person’s belief
in their own capacity to succeed. Users who doubt their ability to identify misinformation or to
argue against it convincingly may therefore refrain from challenging it.
d) Lack of Accountability
The bystander effect suggests that people are less likely to offer help in an emergency when
other people are present because of the diffusion of responsibility (Darley and Latané, 1968).
Although this phenomenon is associated with emergency situations in physical space, it is
also examined in virtual environments (Fischer et al., 2011) such as participation in
conversations in an online learning environment or cyberbullying (Hudson and Bruckman,
2004, You and Lee, 2019). It has been shown that one reason for not intervening in
cyberbullying or not participating in conversations is that participants delegate the
responsibility for intervention to other bystanders.
On SNS, misinformation can be seen by many people, and this might lead to a diffusion of
responsibility in which people do not feel personally accountable for correcting misinformation.
They might regard their own responsibility as lower because they think others are more
accountable for correcting it.
2) Relationship-oriented Reasons
a) Fear of isolation
Asch (1956) demonstrated empirically that individuals adjust their behaviour in order to
fit in with the group. Building on the Asch conformity experiments, Noelle-Neumann (1974)
introduced the Spiral of Silence theory, which proposes that people gauge the public opinion
climate: if they perceive that their opinion is in the minority, they are more likely to hold
back their opinion, while if they think that their opinion is in the majority, they tend to speak out
confidently. One of the main reasons for conforming is the fear of isolation. Noelle-Neumann
(1974) argues that, because of our social nature, we are afraid of being isolated
from our peers and losing their respect. In order to avoid disapproval or social sanctions,
people constantly monitor their environment and decide whether to express their opinions.
As conflicts may pose a threat to users’ sense of belonging, users may avoid challenging or
confronting others on social media.
3) Others-oriented Reasons
a) Concerns about negative impact on others
Individuals might withhold their opinions or refrain from challenging others for altruistic
purposes, such as fear of embarrassing or offending others. For instance, in organisations,
employees remain silent because of the concern that speaking up might upset, embarrass or
in some way harm other people (Milliken et al., 2003).
Withholding opinions out of concern for others also manifests in the public arena. A
Norwegian study exploring freedom of expression in different social conventions and norms
found that citizens withheld their opinions in the public domain due to the fear of offending
others (Steen-Johnsen and Enjolras, 2016). On social media, users also adhere to the norm
of not offending others. A qualitative study showed that users hesitate to counteract
misinformation due to fear of embarrassing the sharer, so they prefer to use private
communication to minimise the risk (Rohman, 2021).
b) Normative Beliefs
Perceived norms are people’s understanding of the prevalent set of rules regulating the
behaviour that group members can enact (Lapinski and Rimal, 2005). They can be divided
into two categories: descriptive norms and injunctive norms. While descriptive norms explain
beliefs regarding the prevalence of a behaviour, injunctive norms explain perceived approval
of that behaviour by others (Lapinski and Rimal, 2005, Rimal and Real, 2003). Engagement
in a behaviour is influenced by both norms, that is, by the extent to which the behaviour is
prevalent and approved by others (Berkowitz, 2003).
Social norms play an important role as behavioural antecedents in many contexts, as well as
in the context of the correction of misinformation. Koo et al. (2021) showed that when
Taken together, as prevalence and approval by others influence the engagement in the
behaviour, it can be proposed that people do not challenge others since they perceive doing
so to be unusual and unacceptable on social media.
4) Content-oriented Reasons
a) Issue Relevance
One reason that individuals choose to remain silent might be the extent to which the content
is personally relevant or important to them. Indeed, studies have found that people are more
willing to correct misinformation when the news story is personally relevant to them or their
loved ones (Tandoc et al., 2020). Another study showed that people skipped past false posts
without thoroughly reading them as they did not find them interesting or relevant enough to
fully read (Geeng et al., 2020).
b) Issue Importance
Issue importance also influences people’s willingness to speak out publicly on a contentious
topic. The greater the perceived importance, the more willing people are to speak out (Moy et
al., 2001, Gearhart and Zhang, 2014). Consequently, it might be argued that people’s
avoidance of correcting false news can be related to the content’s importance, relevancy, or
appeal.
5) Individual Characteristics
Although there are some contextual influences on people’s decisions to discuss or confront,
individual factors such as demographics (e.g., age, sex, education level) may influence the
decision to engage in these conversations. For example, in their study about correction
experiences on social media regarding COVID-19, Bode and Vraga (2021a) found that
respondents with more education were more likely to engage in correction, and older
respondents were less likely to report correcting others.
Personality traits might also influence users’ willingness to challenge. The five-factor model of
personality describes five dimensions of personality: extraversion, agreeableness,
conscientiousness, openness to experience, and neuroticism (McCrae and John, 1992).
Personality traits influence the frequency and patterns of social interactions in political
discussions (Hibbing et al., 2011, Gerber et al., 2012), online news comment behaviour (Wu
and Atkin, 2017) and students’ participation in controversial discussions in the classroom
(Gronostay, 2019).
In the context of politics, there is an association between extraversion and the tendency to
discuss politics (Hibbing et al., 2011, Mondak and Halperin, 2008). As challenging someone
requires asking questions or voluntarily providing corrections, extraversion might be positively
associated with engaging in conversations to correct misinformation. It may influence people’s
Perspective-taking and empathic concern could also impact a user’s decision to challenge.
According to the empathy-altruism hypothesis, empathy for another person generates an
altruistic drive to improve that individual’s welfare (Batson, 1987). As people try to establish a
positive self-presentation on SNS (Zhao et al., 2008) and sharing misinformation could hurt
one’s reputation (Altay et al., 2019), it can be speculated that users may empathise with the
sharer and therefore refrain from challenging misinformation to spare them negative feelings.
6) Technical Characteristics
Features and affordances on social media impact users’ engagement in several ways. They
can encourage young people to express themselves on political issues (Lane, 2020), or affect
a user’s decision on whether to interact with the content, as they may choose not to engage
by using some available features (Zhu et al., 2017, Wu et al., 2020a, Wu et al., 2020b).
Features like “hide a post”, “unfollow”, “snooze”, and reactions (smiley face, angry face) can
be utilised by users as avoidance strategies for not commenting (Wu et al., 2020b).
Research has shown that the way in which information is displayed on the interface can have
an impact on users’ misinformation sharing behaviour (Avram et al., 2020, Di Domenico et al.,
2021). To illustrate, people who are exposed to high engagement metrics (i.e., the number
of likes and shares) are more likely to like or share false content without verifying it (Avram et
al., 2020). In addition, when misinformation is presented in a way that the source precedes
the message, users are less likely to share due to a lack of trust (Di Domenico et al., 2021).
How social media is designed also affects users’ misinformation sharing behaviour (Fazio,
2020). Integrating friction into a design, for example a question that makes a user pause and
think before sharing information, reduces misinformation sharing (Fazio, 2020).
Given that features, affordances, and interface design have an impact on whether and how
users engage with content on SNS, it can be argued that they may also affect users’ decisions
to challenge misinformation. The lack of tools provided by the platforms and the way SNS are
designed might affect users’ tendency to remain silent when encountering misinformation.
IV. CONCLUSION
This study aims to provide a starting point both for the socio-psychological reasons behind
users’ silence and for measures to encourage users to challenge misinformation. Another gap
in the existing literature is the
presentation of potential solutions. Scholars have proposed tools, design considerations, or
systems to cultivate constructive discussions in online environments from different fields, e.g.,
web-based learning environments (Hew and Cheung, 2008, Lazonder et al., 2003, Yiong-
Hwee and Churchill, 2007) and political deliberation (Semaan et al., 2015, Lane, 2020). To
the best of our knowledge, no application has been identified for facilitating challenging
misinformation. Given that incorporating design approaches into digital behaviour change
interventions is successful in many diverse areas (Elaheebocus et al., 2018), design
considerations can be extended to encourage users to disagree, question, or correct.
Persuasive system designs (PSD) have been used in many ways to promote behaviour
change (Torning and Oinas-Kukkonen, 2009). They are designed to influence attitudes and
behaviour by prompting cognitive, behavioural, psycho-social and other psychological factors.
Implementing PSD strategies (Oinas-Kukkonen and Harjumaa, 2009) into the design may
motivate users to challenge misinformation. Table 1 fleshes out a few of the design
suggestions to illustrate the potential of using PSD strategies.
Future research is needed to study the extent to which each of the identified factors influences
challenging behaviour online and whether the perceptions regarding other users on social
media are accurate. It is hoped that this review will give some insights regarding users’
challenging behaviour on social media that can aid future designs. Future research could also
investigate how the design of social media interfaces can be improved to empower people to
challenge misinformation.
ACKNOWLEDGEMENT
This work is supported by Bournemouth University's match-funded studentship.
Altay, S., Hacquin, A.-S. and Mercier, H. (2019), "Why do so few people share fake news? It
hurts their reputation", New Media & Society.
Amichai-Hamburger, Y., Gazit, T., Bar-Ilan, J., Perez, O., Aharony, N., Bronstein, J. and Dyne, T. S.
(2016), "Psychological factors behind the lack of participation in online discussions",
Computers in Human Behavior Vol. 55, pp. 268-277.
Anderson, K. E. (2018), "Getting acquainted with social networks and apps: combating fake news on
social media", Library Hi Tech News, Vol. 35 No. 3, pp. 1-6.
Antheunis, M. L., Vanden Abeele, M. M. P. and Kanters, S. (2015), "The impact of Facebook use on
micro-level social capital: A synthesis", Societies, Vol. 5 No. 2, pp. 399-419.
Arif, A., Robinson, J. J., Stanek, S. A., Fichet, E. S., Townsend, P., Worku, Z. and Starbird, K. (2017),
"A Closer Look at the Self-Correcting Crowd: Examining Corrections in Online Rumors",
Proceedings of the 2017 ACM Conference on Computer Supported Cooperative Work and
Social Computing, Portland, Oregon, USA, Association for Computing Machinery pp. 155–
168.
Asch, S. E. (1956), "Studies of independence and conformity: I. A minority of one against a
unanimous majority", Psychological monographs: General and applied, Vol. 70 No. 9, p. 1.
Avram, M., Micallef, N., Patil, S. and Menczer, F. (2020), "Exposure to social engagement metrics
increases vulnerability to misinformation", arXiv preprint arXiv:2005.04682.
Bandura, A. (1977), "Self-efficacy: toward a unifying theory of behavioral change", Psychological
Review, Vol. 84 No. 2, p. 191.
Barlett, C. P. (2015), "Anonymously hurting others online: The effect of anonymity on cyberbullying
frequency", Psychology of Popular Media Culture, Vol. 4 No. 2, p. 70.
Batson, C. D. (1987), "Prosocial Motivation: Is it ever Truly Altruistic?", in Berkowitz, L. (Ed.)
Advances in Experimental Social Psychology, Academic Press, pp. 65-122.
Baumeister, R. F. and Leary, M. R. (1995), "The need to belong: desire for interpersonal attachments
as a fundamental human motivation", Psychological Bulletin, Vol. 117 No. 3, p. 497.
Berkowitz, A. D. (2003), "Applications of social norms theory to other health and social justice issues",
in Perkins, H. W. (Ed.) The social norms approach to preventing school and college age
substance abuse: A handbook for educators, counselors, and clinicians., Jossey-Bass/Wiley,
Hoboken, NJ, pp. 259-279.
Bienefeld, N. and Grote, G. (2012), "Silence that may kill", Aviation Psychology and Applied Human
Factors, Vol. 2 No. 1, pp. 1-10.
Bode, L. and Vraga, E. K. (2015), "In related news, that was wrong: The correction of misinformation
through related stories functionality in social media", Journal of Communication, Vol. 65 No.
4, pp. 619-638.
Bode, L. and Vraga, E. K. (2018), "See Something, Say Something: Correction of Global Health
Misinformation on Social Media", Health Communication, Vol. 33 No. 9, pp. 1131-1140.
Bode, L. and Vraga, E. K. (2021a), "Correction Experiences on Social Media During COVID-19",
Social Media + Society, Vol. 7.
Bode, L. and Vraga, E. K. (2021b), "People-powered correction: Fixing misinformation on social
media", The Routledge Companion to Media Disinformation and Populism, Routledge, pp.
498-506.
Bode, L. and Vraga, E. K. (2021c), "Value for Correction: Documenting Perceptions about Peer
Correction of Misinformation on Social Media in the Context of COVID-19", Journal of
Quantitative Description: Digital Media, Vol. 1.
Cao, X., Khan, A. N., Ali, A. and Khan, N. A. (2020), "Consequences of Cyberbullying and Social
Overload while Using SNSs: A Study of Users’ Discontinuous Usage Behavior in SNSs",
Information Systems Frontiers, Vol. 22 No. 6, pp. 1343-1356.
Chadwick, A. and Vaccari, C. (2019), "News sharing on UK social media: Misinformation,
disinformation, and correction", Loughborough, UK: Loughborough University.
Cinelli, M., Morales, G. D. F., Galeazzi, A., Quattrociocchi, W. and Starnini, M. (2021), "The echo
chamber effect on social media", Proceedings of the National Academy of Sciences, Vol. 118
No. 9.
Cohen, E. L., Atwell Seate, A., Kromka, S. M., Sutherland, A., Thomas, M., Skerda, K. and Nicholson,
A. (2020), "To correct or not to correct? Social identity threats increase willingness to
denounce fake news through presumed media influence and hostile media perceptions",
Communication Research Reports, Vol. 37 No. 5, pp. 263-275.