Each study participant shared their perceptions of the platforms on which they experienced identity-related content removals that they considered incorrect or unfair. Participants’ content removals took place across a range of highly trafficked social media platforms, including Facebook, Instagram, X, TikTok, and Reddit. All participants stated that they perceived social media platforms as biased against marginalized users in some way. This led participants to perceive these platforms as having a negative platform spirit, as the removals of their identity-related content conflicted with their perceptions of how platforms’ guidelines, goals, and values should work. Participants addressed these conflicts by developing folk theories that explained the dissonance between how they believed social media platforms should work and the identity-related content removals that informed their negative perceptions of the platforms’ spirit. These theories typically highlighted specific parts of a platform’s design (such as its guidelines or algorithmic moderation tools) that marginalized users perceived as causing or enabling their negative experiences on the platform. Participants generally trusted folk theories as behavioral guides more than they trusted platforms’ guidelines themselves, as many perceived platform guidelines to be flawed, unclear, or not enforced equally between marginalized and non-marginalized users. Participants responded to their folk theories by adjusting their behavior on social media platforms; the most commonly reported behavioral change was avoiding explicitly identity-related vocabulary or substituting coded language for it to avoid flagging algorithmic moderation tools. Participants also reported significantly reducing their use of platforms, or leaving them outright, after experiencing disproportionate moderation. Platform spirit played a range of roles throughout participants’ folk theorization processes. Participants’ perceptions of platform spirit (shaped by their experiences with content moderation and removals) served as information guiding their folk theorization; in turn, participants’ negative folk theories reinforced their negative perceptions of platforms’ spirit, degrading their relationships with these platforms.
In what follows, we describe participants’ perceptions of platforms’ spirit based on their experiences having content removed from those platforms. We then describe how participants developed folk theories about platforms and their content moderation practices in response to their negative perceptions of platforms’ spirit. Finally, we describe the behavioral responses guided by participants’ folk theories, including their decision-making about how to behave on social media platforms and whether to continue using those platforms at all.
4.1 Users’ Perceptions of Platform Spirit
Despite social media platforms’ stated goals of inclusivity and safety for a diverse community of users [23, 56, 70, 71], participants regularly experienced abuse from bigoted users, disproportionate removals of their identity-related content, and other obstacles that prevented them from using social media platforms freely or safely. These challenges drove participants to perceive their social media platforms as having a negative platform spirit: their experiences with abuse and incorrect content removals directly contradicted the platforms’ stated goals of inclusivity, safety, and freedom of self-expression for all users, violating their understanding of what the platforms are supposed to do by failing to protect them from abuse and incorrect content moderation. For example, P3, an Asian nonbinary Instagram user who experienced transphobic harassment and incorrect removals of their selfies on the platform, shared their thoughts on Instagram’s addition of a user pronoun feature:
[Instagram] was definitely long due for adding pronouns, but [Instagram] also doesn’t do anything when people abuse or ridicule the pronoun feature. It doesn’t matter if you report somebody for saying that their pronouns are ‘that/bitch’ or something, because Instagram will reinforce the rights of that user to ‘express themselves how they want.’ I’ve tried it!
P3 later elaborated on how Instagram’s handling of their pronoun feature damaged their overall perception of Instagram as a platform:
I think it was very performative of [Instagram] to include pronouns with no intention to back up their rules. It felt similar to rainbow capitalist ideas of throwing in this “confetti of representation” without giving the represented any real power to speak for their communities. And then [Instagram] does exactly what one would expect, which is not defending marginalized identities. With the use of their pronoun feature, the Instagram world was supposed to become more inclusive... but it really just opened the door to new problems.
Though Instagram introduced a user pronoun feature intended to benefit trans and nonbinary users, P3 witnessed many instances of transphobic users abusing the feature without any moderation from the platform. This led P3 to feel frustrated with Instagram and its perceived unwillingness to confront transphobic abuse, prioritizing the “self-expression” of its abusive users over the safety and well-being of its trans and nonbinary users. The contradiction between Instagram’s stated “support” of trans and nonbinary users and the unmoderated transphobic abuse of its pronoun feature led P3 to develop a negative perception of Instagram’s platform spirit, as they now perceived Instagram’s “support” of trans and nonbinary users as performative rather than genuine.
4.1.1 Platform Spirit and Algorithmic Moderation Tools.
Participants held particularly negative perceptions of social media platforms’ algorithmic moderation tools. These tools were generally perceived as aggressively removing or suppressing marginalized users’ identity-related content, even when that content followed the platform’s community guidelines. P4, a transgender man whose transition-related surgery photos were incorrectly removed for “nudity” on Facebook, shared his perception of Facebook’s algorithmic moderation tools after the incorrect removal took place:
There’s an exception [to Facebook’s nudity guidelines] for transition-related surgery [images]. There’s an exception, you’re allowed to post that content, and that’s literally all the content we’re posting. So... are we not allowed to post this? Because [Facebook’s] rule says you’re not allowed to except in this case, which it is key. So what’s the deal here? I mean, you can’t have every single post be manually vetted by a human, because that’s physically impossible. But something’s obviously not working.
Based on his experience with the removal of his top surgery photos, P4 perceived that, despite Facebook’s nudity guidelines allowing top surgery photos [22], the platform’s algorithmic moderation systems could not actually distinguish between top surgery photos and other kinds of disallowed topless photos. P4 identified both the perceived flaws of Facebook’s algorithmic moderation systems and the specific inequities he experienced on Facebook as a transgender man, which together negatively impacted his perception of Facebook’s platform spirit and degraded his relationship with the platform.
Other participants shared negative perceptions of algorithmic identity-related content removals or suppression based on their own experiences with platforms’ moderation of their content. P8 stated that Instagram’s algorithmic removal of their LGBTQIA+ healthcare-related imagery was “creepy” and “invasive,” and attributed the removals to flaws in Instagram’s algorithmic moderation systems’ ability to distinguish between images of bodies that do or do not violate Meta’s guidelines on nude and graphic content. When P9 suspected that their trans-related political Facebook posts were being algorithmically suppressed, they asked their Facebook friends to “like” or otherwise engage with the posts in question; after seeing little engagement from their friends, P9 “vaguely attributed” the unexpectedly weak response to the presumed “overaggressive” algorithmic suppression of their trans-related political posts. Like P4, both P8 and P9 made sense of identity-related content removals or suppression that they recognized to be incorrect, including the disproportionate moderation they faced specifically as marginalized users posting identity-related content, with P9 taking additional steps to verify whether their posts were being suppressed. And like P4, P8 and P9’s experiences negatively impacted their perceptions of Instagram’s and Facebook’s platform spirit, respectively; even if it was not the platforms’ intent to disproportionately remove marginalized users’ content, the fact that participants experienced identity-related removals that they recognized to be incorrect still led them to develop negative perceptions of their platforms’ spirit. When participants witnessed the incorrect algorithmic suppression of marginalized users’ content, they came to see their platforms’ algorithmic moderation tools as harming marginalized users and improperly enforcing platform guidelines, further worsening their perceptions of their platforms’ spirit.
4.1.2 Platform Spirit and Platform Guidelines.
Users also generally recognized platform guidelines as either not designed to include marginalized users or not enforced in a way that keeps marginalized users safe and free to express themselves, a perception drawn from their personal experiences with inequitable policies and policy enforcement on social media platforms. For example, P6, an Asian nonbinary user who frequently experienced racist abuse, transphobic abuse, and incorrect removals of trans-related content on TikTok and Facebook, shared their negative perception of the two platforms based on their experiences with the platforms’ inequitable community guidelines: “I think [poor guidelines] especially are apparent on both TikTok and Facebook. Both apps tolerate white supremacy and protect white supremacists, but don’t protect black and brown and intersectional identity creators.” P6 then questioned the purpose of community guidelines that reinforce bigotry and expose marginalized users to harm, asking: “I understand the need for rules... but when the rules hurt the marginalized but protect literal white supremacists, what are your guidelines actually *doing* other than reinforcing that ideology?”
Like P3 and P4, P6 experienced abuse from other users (in their case, racist and transphobic abuse on Facebook and TikTok), along with incorrect removals of their identity-related content on both platforms. These experiences led P6 to develop a negative perception of TikTok’s and Facebook’s platform spirit, as their experiences contradicted their expectation that they could use both platforms as freely and safely as non-marginalized users. After P6 identified TikTok’s and Facebook’s guidelines as enabling white supremacist values, racist abuse, and transphobic abuse, they came to see the two platforms themselves as tolerant of white supremacy and transphobia, and therefore unsafe for BIPOC users (particularly Black and brown social media users), transgender users, and marginalized users broadly. Negative and painful experiences with guidelines that exclude or outright harm marginalized users thus led participants to develop worse perceptions of the platforms as a whole, contributing to their negative perceptions of the platforms’ spirit.
Several participants also reported not trusting social media guidelines as accurate, helpful guides for behavior on platforms, emphasizing that community guidelines are subject to frequent updates that can be difficult for ordinary users to track. P14 shared that she perceived social media guidelines in general as including “a lot of sneaky updates and agreements,” while P15 perceived reading Instagram’s guidelines as pointless because “they’re just gonna update them anyways.” Both participants perceived their platforms’ guidelines as updated too frequently and too unclearly to accurately guide users’ behavior; in response, both expressed mistrust of (and a reluctance to read) social media platforms’ guidelines as a whole. P14 and P15’s mistrust of their platforms’ frequently updated guidelines exacerbated their frustrations with identity-related content removals, contributing to their negative perceptions of their platforms’ spirit and degrading their relationships with their platforms.
Overall, participants’ perceptions of social media platforms degraded as they experienced or witnessed disproportionate and inequitable removals of identity-related content on those platforms. These experiences, along with the moderation-related tools that enabled incorrect identity-related removals (such as inaccurate algorithmic moderation tools or unreliable written guidelines), led participants to identify platforms’ moderation of their content as both inequitable and inconsistent, and in turn to perceive their platforms as having a negative platform spirit.
Instead of relying on platforms’ guidelines that they recognized to be unhelpful or untrustworthy, participants more often used their perceptions of platforms’ negative spirit, combined with their personal experiences with identity-related moderation, to develop social media folk theories to guide their behavior on platforms. In what follows, we describe how participants developed folk theories which addressed their negative perceptions of platforms’ spirit (including negative perceptions of platforms’ algorithmic systems, guidelines, and other affordances), informed by their personal experiences with inequitable identity-related content and account removals.
4.2 Users’ Folk Theories
Every participant in our study developed folk theories as sensemaking tools that explain their experiences with and perceptions of identity-related moderation on social media platforms. While some folk theories are unique to individual users, most folk theories are the complex, layered products of individual experiences with content moderation, personal perceptions of platforms’ spirit, and other existing theories within specific identity-based user communities, reflecting similar theorization trends to those presented by DeVito [12, 13]. Nearly all of the participants’ folk theories denote a baseline perception within marginalized social media user communities that social media platforms disadvantage marginalized users in some way.
The folk theories participants shared about content removals are explicit, structured theories that draw from negative perceptions of platforms’ spirit like those described in Section 4.1, where platforms (or their algorithmic moderation systems) are considered at fault for disproportionately removing marginalized users’ content. In contrast to past work on platform spirit and folk theories, which found that certain types of folk theories can erode perceptions of platform spirit [11, 12], our study participants drew from their already eroded perceptions of platforms’ spirit to directly inform their folk theories, using platform spirit itself as a basis for their subsequent folk theorization. While some theories refer to marginalized users as a whole, most focus on how a specific marginalized user community is impacted by content moderation. In what follows, we describe participants’ folk theories in five categories: theories addressing algorithms, theories and perceptions of platform spirit, theories reinforcing other theories, theories addressing platform guidelines, and theories addressing platforms’ values.
4.2.1 Theories Addressing Algorithms.
Folk theories surrounding content removal and suppression, particularly those associated with algorithmic removals, are often used by marginalized social media users to inform other marginalized users (e.g., in online communities) of how to adjust their behavior to avoid having their own content removed or suppressed on a platform [14]. Even though ordinary users cannot know the exact mechanics of algorithmic content moderation or sorting on a platform, their folk theories show that they do perceive which kinds of activities on a platform are most likely to trigger an algorithmic response. P9, a nonbinary Asian Facebook user, shared several personal folk theories that they adhere to in order to make their content more visible while avoiding incorrect removals:
People posting their fundraisers or something say they’re finding ways to censor “PayPal” or “CashApp” or whatever. [Many] of my friends say things like “image for attention” or “image for algorithm.” I feel like Facebook prioritizes images over other kinds of content... And [other users] will say “please repost this entire post to your own feed instead of just pressing ‘share,’ because more people will see it that way.” Or they’ll say, “don’t say ‘bump’ or ‘boost’ in the comments, because Facebook is trying to suppress those comments. Instead, if you want the post to be seen, write a full sentence or post a GIF in the comments.”
P9 described several folk theories (based on the actions and theories of other Facebook users) about how Facebook’s content recommendation algorithms choose content to make more or less visible on the platform (such as prioritizing content that includes images or suppressing posts whose comments include “bump” or “boost”). Though P9 could not know for certain whether each theory about Facebook’s algorithms was true, they were confident enough in their folk theories to act on them in order to avoid the suppression of their Facebook content. P9 related their folk theories about algorithmic content suppression to more general folk theories about marginalized user communities being disproportionately impacted by content removal and suppression:
Even if it’s not intended on the surface to hurt marginalized people, who’s going to need to raise money? Or who’s feeling impacted by an issue and wants to raise people’s awareness? And they have to go through all these hoops in order to make their posts get seen, and even then their post isn’t seen the same way that just posting a photo of your pet would be.
Participants typically formed links between their folk theories and their perceptions of social media platforms’ apathy or hostility toward marginalized user groups. P10, a queer man from India, shared his theory that X’s algorithmic moderation tools do not properly detect posts that include transphobic slurs in non-English languages, basing his theory on his past experiences of repeatedly seeing non-English transphobic tweets go unremoved from the platform. Like P9, P10 connected his specific theory about non-English transphobic X content to his broader theories about platforms exhibiting apathy toward marginalized user groups, particularly those outside of the US. In this case, his broader theory was that X, a US company, “does not consider the cultural context of different countries” while developing its content moderation algorithms, creating a destructive environment for queer Indian X users and for X users outside the US in general.
Overall, the participants who shared theories about social media algorithms typically saw those theories as evidence that certain groups of users are disadvantaged on a platform in some way, whether through disproportionate algorithmic content removal or visibility suppression. Even when a folk theory is meant to inform a marginalized user on how to behave on a platform, it can also reinforce their overall perception that the platform is biased against them to begin with, negatively impacting their perception of the platform’s spirit and degrading their relationship with the platform.
4.2.2 Theories and Perceptions of Platform Spirit.
Some participants created folk theories explicitly addressing the perceived reasons for their negative perceptions of a platform’s spirit, acknowledging their negative impression of their social media platforms. The participants who developed these theories typically wanted to understand why a platform would moderate content in a way that violated their expectations for how a fair and equitable platform should moderate content. For example, P12, a Latina Instagram user whose swimwear selfie was removed for allegedly violating community guidelines, felt frustrated when her posts were removed while similar content posted by advertisers remained on the platform:
I had one photo taken down [from Instagram]; from what I recall, I think it was seen as “lewd.” But I was completely covered up, it was in a bathing suit... They said that it violated community guidelines or something like that. But what I don’t understand is that you see ads on Instagram all the time for companies advertising bathing suits or other clothing where the models are scantily clad, and they don’t get their posts taken down. So I feel like anything that is considered “lewd” that doesn’t make money for a company is seen as “going against community guidelines,” so it needs to be removed. That’s what it seems like.
Prior to the removal, P12 had expected that her swimwear selfie did not violate Instagram’s guidelines and was allowed on the platform. But not only was P12’s swimwear selfie removed, she then noticed that similar swimwear images were not removed from Instagram when posted as corporate or organizational advertisements. P12 found Instagram’s removal of her swimwear selfie (while leaving similar images posted by advertisers in place) unfair and inconsistent, contradicting her expectation of how Instagram’s community guidelines should be enforced. P12 developed two folk theories in response to her selfie removal: a theory that Instagram considers swimwear selfies to be “lewd” content that is subject to removal, and a theory that Instagram allows “lewd” images to be posted as paid advertisements by corporations and organizations, but not by ordinary users. These two theories helped P12 make sense of why her swimwear selfie was removed while similar content posted by paid advertisers was not. Theories like P9’s and P12’s also demonstrated a unique relationship between perceived platform spirit and folk theorization: users’ perceptions of platform spirit became an information source serving as a basis for their folk theorization, situated alongside and expanding our understanding of the endogenous and exogenous information sources for folk theorization described in earlier work [11]. As with P9 in Section 4.2.1, P12’s negative perceptions of Instagram’s platform spirit deepened while developing her theories, further degrading her relationship with the platform.
4.2.3 Theories Reinforcing Other Theories.
Even folk theories by marginalized users that seemingly do not relate to one another, or that seem unrelated to marginalization and identity, can intersect with theories that explicitly address those topics. Some participants drew connections between their existing folk theories, even ones that at first seemed only tangentially related, and then took a further step by developing new folk theories based on those connections, addressing aspects of platforms’ moderation systems that their previous theories did not. For example, P6, a mixed-race nonbinary TikTok user who experienced the removal of a video that they believe did not violate community guidelines, initially theorized that TikTok disproportionately moderates and removes content posted by “small” accounts that do not have a large number of followers on the platform:
TikTok will take down videos from small creators where it’s kind of a non issue, [it] really doesn’t violate anything. But there’ll be huge accounts that post, you know, murder scene cleanup videos. And it’s like, those are up, that account has a huge following... I think those accounts tend to bring traffic to platforms like TikTok, where they kinda do benefit from all the views. So [platforms] kind of turn a blind eye, whereas with smaller creators it’s less consequential.
P6 then tied this theory to their parallel theory that large TikTok accounts with many followers disproportionately belong to white users, meaning that TikTok’s preferential treatment toward “large” accounts translates in practice into preferential treatment and elevated visibility for white TikTok users.
[From] what I’ve seen, TikTok definitely does favor larger white creators. Then they take down a lot of videos of minorities... So many minority creators, Black, Indigenous, and People of Color (BIPOC), get their videos taken down for no reason at all. So it’s frustrating to see how this community guideline situation happens. It’s definitely kind of shady.
By combining two folk theories based on observation, other users’ experiences, and a general negative perception of TikTok’s platform spirit, P6 developed a new folk theory that TikTok’s content recommendation algorithm promotes and privileges white creators over BIPOC creators. This new theory validated P6’s negative perception of TikTok’s platform spirit, helping them explain their perception that TikTok shows preferential treatment to its white content creators through its algorithmic content recommendation and moderation systems. Even if P6 did not have an exact understanding of how the algorithms operate, they were confident that their folk theory (based on their observations and existing folk theories about the platform) explains some of the mechanics by which TikTok’s algorithmic content recommendation and moderation systems could disadvantage its BIPOC users.
Like P9 and P12 earlier, P6 drew connections between their existing folk theories related to the disproportionate removal of marginalized users’ content (in this case on TikTok); P6 then took another step by developing a new folk theory about TikTok’s disproportionate moderation of BIPOC users’ content based on the connection they found between their two existing theories. P6’s theory reinforced their perception of TikTok as biased against BIPOC content creators like themself, degrading their relationship with the platform.
4.2.4 Theories Addressing Platform Guidelines.
Other participants developed folk theories addressing aspects of platforms’ guidelines, such as their lack of clarity, exclusion of marginalized users, and embedded harms toward marginalized users. Participants were particularly likely to theorize about guidelines if they recognized them (and the identity-related content removal decisions that followed from them) as inherently discriminatory against their marginalized identity. P3 gave an example of harm embedded in the wording of social media guidelines while sharing their experience of having a topless selfie removed from Instagram for, in their words, “not having cis male nipples”:
So many trans creators on Instagram who are banned from the app just for being trans... There is a huge issue with [trans] content being removed. [Instagram’s] policy itself is totally biased and skewed towards cisgender and heterosexual people who hold cisgender and heterosexual and white identities - because it was written by them! I don’t think that there’s necessarily malicious intent, but I do think that there is a consequence to that. It’s damaging to people who hold marginalized identities, damaging to their ability to interface with the software [and] with the app, to socialize, and to feel included.
In this instance, P3’s topless selfie was removed, leading them to experience alienation and invalidation on the platform. P3 felt targeted by Instagram’s content guidelines and, guided by both their own content removal and witnessing similar removals happen to other nonbinary Instagram users, theorized that Instagram’s policies inherently discriminate against non-cis male Instagram users, directly harming trans and nonbinary users as a result. P3’s theory would later be explicitly confirmed by the Oversight Board itself, when the Oversight Board overturned Meta’s incorrect removal of two trans and nonbinary users’ top surgery fundraising posts while officially recommending that Meta clarify its Adult Nudity and Sexual Activity policy to avoid imposing cisnormative views of bodies on transgender and nonbinary users [61]. P3 also theorized that Instagram’s guidelines are written by people who hold cisgender, heterosexual, and white identities, and favor cisgender, heterosexual, and white users as a result. Other participants shared their own theories about platform guidelines based on their content removal experiences; P9 theorized that Facebook’s guidelines prohibit criticizing men broadly on the platform, while P8 (a nonbinary healthcare worker) theorized that Instagram’s ban on graphic content extends to medical content, limiting the kinds of medical information and resources that can be shared on the platform. Theories regarding platform guidelines were also often based on participants’ difficulty understanding the guidelines themselves; P9 stated that Instagram’s community guidelines are “not user friendly” and difficult to find on the platform, while P10 expressed frustration with keeping up to date on platform guidelines that are updated often but do not clearly communicate their changes. Participants responded to the uncertainties and harms they experienced due to platforms’ guidelines by relying on their folk theories about guidelines, rather than the guidelines themselves, to guide their behavior on platforms (a continuation of the dynamic discussed in Section 4.1.2). In turn, the perceived need to theorize about guidelines instead of trusting them as written, and doubt that the guidelines would equally include marginalized users, degraded participants’ perceptions of their platforms’ spirit and their relationships with their platforms.
4.2.5 Theories Addressing Platforms’ Values.
Some participants shared folk theories addressing the perceived relationship between social media platforms’ public stances on social issues related to marginalized communities and the platforms’ disproportionate moderation of marginalized users in practice. In general, participants’ theories about platforms’ values reflected their desire to understand whether platforms’ public “support” of their communities was sincere or performative, and (by extension) whether they could trust “supportive” platforms to treat their marginalized users equitably. Some of these theories addressed platforms’ values related to marginalized users broadly; P9 stated that Facebook and Instagram “don’t actually care” about the moderation struggles faced by marginalized users, while P11 shared their perception that X is “apathetic” toward the harassment marginalized users face on the platform. Other participants shared theories addressing the motivations behind a platform acknowledging a social issue publicly. For example, participants noted that while some platforms enabled users to change their profile pictures to a rainbow theme for Pride Month, or publicly acknowledged the Black Lives Matter (BLM) movement, these actions conflicted with the same platforms’ disproportionate removals of marginalized users’ content. As a result of this conflict, the majority of participants expressed distrust of platforms that engage with social issues in this way (P3 called the phenomenon a “very placating and performative gesture”), and developed folk theories associating platforms’ engagement with social issues with a desire to appear socially conscious to the general public rather than a sincere intent to treat their marginalized users equitably.
As platforms’ public social stances can conflict with their actual moderation practices, marginalized social media users often face unequal treatment on platforms that publicly claim to support them. For example, P1, a Black LinkedIn user whose content about BLM was removed from the platform, shared her theories about the relationship between platforms and social issues:
I noticed that with LinkedIn... at least on my feed, they have been doing a lot of changes that completely remove posts and [accounts] calling out white supremacy. Like, comments about BLM will get automatically deleted... If you’re deleting people calling out injustice and asking people to be held accountable, you’re a hypocrite. You’re a hypocritical company.
P1 witnessed LinkedIn introduce BLM-themed graphical assets (such as profile banners) to its website for users to freely use; because of this, she assumed that she would be welcome to discuss the BLM movement on LinkedIn as well. However, she then witnessed LinkedIn algorithmically removing content that explicitly mentioned BLM (including her own posts), despite LinkedIn’s professed support of that very same movement. As a result, P1 developed a folk theory that tied these contradictory observations together, theorizing that “LinkedIn is willing to perform superficial acknowledgement of the BLM movement, but is unwilling to host visible discussions about the topic on their platform.” P1’s folk theory tied her experience on LinkedIn to her overall understanding of how antiblackness operates in the corporate world, even behind the smokescreen of performative allyship. Participants like P1 ultimately theorized that platforms’ engagement with social issues is typically performative, and that a platform’s publicly stated “support” of social issues and marginalized groups does not translate in practice into equitable treatment of its marginalized users. These theories reinforced the participants’ negative perceptions of their platforms’ spirit and degraded their relationships with their platforms. For P1, her experiences and theories led her to develop a deeply negative perception of LinkedIn’s platform spirit and to consider leaving LinkedIn for other professional networking platforms.
Overall, participants shared a variety of folk theories about social media platforms and their moderation practices. These theories ranged in topic: some addressed platforms’ mechanics (such as their algorithmic moderation and recommendation systems), while others addressed platforms’ guidelines and the sincerity of platforms’ publicly stated social values. Some participants also theorized about the reasons for their negative perceptions of a platform’s spirit, or used their existing folk theories to develop new theories about their platforms. Participants typically theorized about the elements of their platforms’ moderation systems that drove their negative perception of the platforms’ spirit; these theories could reinforce (or even magnify) participants’ negative perceptions of platform spirit, accelerating the degradation of their relationships with their platforms.
In what follows, we describe how participants responded to their folk theories about social media platforms that they perceived as disproportionately moderating marginalized users’ content, including changing their behavior on platforms, reducing their use of platforms, or leaving platforms entirely.
4.3 User Behaviors/Responses to Theories
After developing folk theories based on their negative perceptions of platforms’ spirit and their own personal experiences with content moderation, participants typically adjusted their behaviors and decision-making on platforms in response to those theories. All participants perceived marginalized identity-related social media content as likely to face incorrect suppression and removal regardless of the platform it was posted on or that platform’s guidelines. Notably, participants’ perception of platform guidelines as unreliable for marginalized users encouraged them to rely on their folk theories, rather than the guidelines themselves, to guide their behavior and decision-making on platforms.
4.3.1 Using Coded Language to Avoid Incorrect Algorithmic Moderation.
Substituting coded language for explicitly identity-related vocabulary was the most common behavioral response to folk theories about platforms reported by participants. For example, participants who experienced the algorithmic removal of posts including identity-related terminology (like P1 in Section 4.2.5, whose LinkedIn posts including the term “BLM” were removed from the platform) theorized that certain identity-related words and phrases flag platforms’ algorithmic moderation tools and result in those posts (and possibly the users’ accounts) being incorrectly removed. In response, participants generally avoided including identity-related phrases and words in their social media posts; instead, they substituted slang, deliberate misspellings, and abbreviations to obscure the meaning of more explicitly identity-specific words that they theorized would attract algorithmic removals. For example, P7 stated that they misspelled the names of political figures that they criticized on X to avoid being flagged for “harassment,” while P9 reported avoiding certain phrases while discussing trans-related issues on Facebook to avoid having their posts falsely removed for “hate speech.” Several users also reported limiting the kinds of images they posted due to similar theories involving algorithmic image moderation, such as P4 avoiding posting gender-affirming surgery images on Facebook even though those images are explicitly permitted by Facebook’s community guidelines.
Participants’ perceived need to avoid using identity-specific terms that may trigger algorithmic removals reflects similar findings from past studies of marginalized users’ theories about platforms suppressing identity-specific speech [46]; the perceived need to avoid identity-related words and phrases not only influenced participants’ use of language on platforms, but also reinforced their negative perceptions of their social media platforms’ spirit. P7 stated that their perceived need to obscure the names of political figures on X made them feel “unsafe” and “threatened” on the platform, while P18 stated that he felt “rattled” and “censored” by the algorithmic keyword flagging that he theorized took place with his X and Facebook posts. P9 noted that the need to substitute certain “flagged” terms could create communication barriers with other marginalized users: “some people could understand what I’m talking about if I use these euphemisms, but others may be confused.” Ultimately, marginalized users who theorize that platforms suppress identity-related terms are likely to dodge that censorship in ways that the platform may not have intended. The perceived need to substitute coded language for explicitly identity-related phrases can also negatively impact marginalized users’ perceptions of platforms’ spirit, degrading both their relationship with their platforms and their willingness to continue using their platforms in the long term.
4.3.2 Leaving or Reducing Use of the Platform.
Many participants reported leaving or significantly reducing their use of platforms after developing theories addressing how and why their identity-related content removals took place. For example, P17, a Black Instagram user, posted about the Black Lives Matter movement and received backlash and harassing comments from several anti-BLM users before the post was removed by Instagram itself. P17 felt frustrated when Instagram’s removal alert did not state which rule her post allegedly violated; she speculated that her post may not have violated Instagram’s guidelines at all, and instead wondered whether anti-BLM users had mass-reported her post to have it algorithmically removed. P17 eventually theorized that “Instagram algorithmically removes posts that do not violate community guidelines so long as they receive a certain number of reports, allowing the report feature to be abused for bigotry and harassment.” This theory, along with the removal itself, left P17 with a deeply negative perception of Instagram’s platform spirit and an unwillingness to keep using the platform:
I haven’t really posted since the removal, even though there have been other issues to discuss [involving the BLM movement]. I think it deterred me from posting, not because I don’t want to get the message out there, but because... is this really important to Instagram? Are their values aligned with my values? Are they going to delete my post again? So yeah, I just... haven’t posted since then.
Here, P17 revealed her negative perception of Instagram’s platform spirit, informed by her folk theory and frustrating experiences surrounding the removal of her BLM-related post. She also revealed that she no longer posts on Instagram as a result of this incident, as she no longer perceives Instagram’s values as aligning with her own. Other participants also reported leaving platforms after experiencing identity-related content removals; after experiencing persistent harassing comments and false reports of her Facebook selfies, P13 theorized that Facebook does not take a strong stance against cyberbullying and harassment, citing other users’ persistent abuse of the report feature as evidence for her theory. She acted on her theory by deactivating her Facebook account, stating that she is “done with Facebook” and has no intention of returning to the platform. P24 likewise stated that he no longer posted on Reddit after his post about experiencing ADHD was removed, while P16 no longer posted on Instagram after having her selfie incorrectly removed for “nudity.”
Other participants reported significantly reducing their use of platforms or avoiding specific features: P6 significantly reduced the number of trans-related TikToks they posted to avoid having their videos removed from the platform again, while P15 began posting only on Instagram’s “Stories” feature after theorizing that her non-Stories posts would continue to be algorithmically removed. P1, whose experiences on LinkedIn were discussed in Section 4.2.5, began exploring alternative professional networking platforms where she could continue her online networking without experiencing the content removals that she did on LinkedIn. In the same way that folk theories guide the behavior of users who remain on a platform, they can also lead users to reduce their use of a platform or leave it entirely, having decided to step away from platforms that they perceive as targeting, disproportionately moderating, and harming users like themselves.
Overall, participants who theorized that social media platforms disproportionately remove marginalized users’ identity-related content (and who developed negative perceptions of those platforms’ spirit as a result) responded to their theories in a variety of ways. Some chose to substitute coded language for identity-related speech, despite such speech being allowed on their platforms, expressing fear that using identity-related language openly would result in even more of their content being algorithmically removed. Other participants acted on their theories by reducing their use of their platforms or leaving them entirely, having determined based on their theories that their social media platforms do not provide safe, welcoming, and equitable experiences for marginalized social media users.