Bans vs. Warning Labels: Examining Bystanders’ Support for Community-wide Moderation Interventions

Online AM: 05 February 2025

Abstract

Social media platforms like Facebook and Reddit host thousands of user-governed online communities. These platforms sanction communities that frequently violate platform policies; however, public perceptions of such sanctions remain unclear. In a pre-registered survey conducted in the US, I explore bystander perceptions of content moderation for communities that frequently feature hate speech, violent content, and sexually explicit content. Two community-wide moderation interventions are tested: (1) community bans, where all community posts are removed, and (2) community warning labels, where an interstitial warning label precedes access. I examine how third-person effects and support for free speech influence user approval of these interventions on any platform. My regression analyses show that presumed effects on others significantly predict support for both interventions, while free speech beliefs significantly shape participants' inclination to use warning labels. Analyzing the open-ended responses, I find that community-wide bans are often perceived as too coarse, and users instead value sanctions in proportion to the severity and type of infractions. I report on concerns that norm-violating communities could reinforce inappropriate behaviors and show how users' choice of sanctions is influenced by their perceived effectiveness. I discuss the implications of these results for HCI research on online harms and content moderation.
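The regression design described above (support for an intervention predicted from presumed effects on others and free speech beliefs) can be sketched on synthetic data. This is a minimal illustration only: the variable names, scales, and true weights below are assumptions for demonstration, not values or results from the study.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Hypothetical standardized survey measures
effect_on_others = rng.normal(size=n)  # presumed effect of harmful content on others
free_speech = rng.normal(size=n)       # support for free speech

# Synthetic outcome: support for a community-wide sanction,
# generated with assumed (illustrative) weights plus noise
support = 0.5 * effect_on_others - 0.2 * free_speech + rng.normal(scale=0.5, size=n)

# Ordinary least squares fit: intercept plus the two predictors
X = np.column_stack([np.ones(n), effect_on_others, free_speech])
beta, *_ = np.linalg.lstsq(X, support, rcond=None)
print(beta)  # slope estimates land near the assumed 0.5 and -0.2
```

With enough respondents, the fitted coefficients recover the generating weights, which is the sense in which a predictor "significantly predicts" support in such an analysis.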



Published In

ACM Transactions on Computer-Human Interaction Just Accepted
EISSN:1557-7325
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

Publisher

Association for Computing Machinery

New York, NY, United States

Publication History

Accepted: 20 December 2024
Revised: 04 September 2024
Received: 03 May 2024


Author Tags

  1. Social media
  2. content moderation
  3. governance
  4. censorship
  5. platforms

Qualifiers

  • Research-article

