
Do Platform Migrations Compromise Content Moderation? Evidence from r/The_Donald and r/Incels

Published: 18 October 2021

Abstract

When toxic online communities on mainstream platforms face moderation measures, such as bans, they may migrate to other platforms with laxer policies or set up their own dedicated websites. Previous work suggests that within mainstream platforms, community-level moderation is effective in mitigating the harm caused by the moderated communities. It is, however, unclear whether these results also hold when considering the broader Web ecosystem. Do toxic communities continue to grow in terms of their user base and activity on the new platforms? Do their members become more toxic and ideologically radicalized? In this paper, we report the results of a large-scale observational study of how problematic online communities progress following community-level moderation measures. We analyze data from r/The_Donald and r/Incels, two communities that were banned from Reddit and subsequently migrated to their own standalone websites. Our results suggest that, in both cases, moderation measures significantly decreased posting activity on the new platform, reducing the number of posts, active users, and newcomers. In spite of that, users in one of the studied communities (r/The_Donald) showed increases in signals associated with toxicity and radicalization, which justifies concerns that the reduction in activity may come at the expense of a more toxic and radical community. Overall, our results paint a nuanced portrait of the consequences of community-level moderation and can inform their design and deployment.
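
The abstract refers to activity-level measures (posts, active users, and newcomers) compared before and after the communities' migration off Reddit. The sketch below is a minimal, hypothetical illustration of how such weekly counts could be computed from a list of (timestamp, user) post records; the input format, the weekly_activity helper, and the example cutoff date are assumptions made for illustration only, not the authors' pipeline.

# Hypothetical sketch (not the authors' pipeline): weekly posts, active users,
# and newcomers around a community-level ban/migration date.
from collections import defaultdict
from datetime import datetime, timedelta

def weekly_activity(posts, cutoff):
    """Bucket (timestamp, user_id) post records into weeks relative to
    `cutoff` and count posts, active users, and first-time posters."""
    posts = sorted(posts, key=lambda p: p[0])
    seen_users = set()
    weekly = defaultdict(lambda: {"posts": 0, "users": set(), "newcomers": 0})
    for ts, user in posts:
        week = (ts - cutoff).days // 7          # negative weeks are pre-cutoff
        weekly[week]["posts"] += 1
        weekly[week]["users"].add(user)
        if user not in seen_users:              # first post ever by this user
            weekly[week]["newcomers"] += 1
            seen_users.add(user)
    return {
        w: {"posts": v["posts"],
            "active_users": len(v["users"]),
            "newcomers": v["newcomers"]}
        for w, v in sorted(weekly.items())
    }

if __name__ == "__main__":
    # Example cutoff: r/The_Donald was banned from Reddit in June 2020;
    # the exact date and the toy records below are purely illustrative.
    cutoff = datetime(2020, 6, 29)
    toy_posts = [
        (cutoff - timedelta(days=10), "alice"),
        (cutoff - timedelta(days=9),  "bob"),
        (cutoff + timedelta(days=3),  "alice"),
        (cutoff + timedelta(days=4),  "carol"),
    ]
    for week, stats in weekly_activity(toy_posts, cutoff).items():
        print(f"week {week:+d}: {stats}")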




Published In

Proceedings of the ACM on Human-Computer Interaction, Volume 5, Issue CSCW2
October 2021, 5376 pages
EISSN: 2573-0142
DOI: 10.1145/3493286
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

Publisher

Association for Computing Machinery

New York, NY, United States

Publication History

Published: 18 October 2021
Published in PACMHCI Volume 5, Issue CSCW2


Badges

  • Honorable Mention

Author Tags

  1. content moderation
  2. deplatforming
  3. fringe online communities
  4. online communities
  5. online radicalization
  6. social networks

Qualifiers

  • Research-article


Article Metrics

  • Downloads (Last 12 months): 866
  • Downloads (Last 6 weeks): 86
Reflects downloads up to 17 Oct 2024

Cited By

  • (2024) Investigating the increase of violent speech in Incel communities with human-guided GPT-4 prompt iteration. Frontiers in Social Psychology, 2. DOI: 10.3389/frsps.2024.1383152. Online publication date: 3-Jul-2024.
  • (2024) Blocking the information war? Testing the effectiveness of the EU’s censorship of Russian state propaganda among the fringe communities of Western Europe. Internet Policy Review, 13(3). DOI: 10.14763/2024.3.1788. Online publication date: 29-Jul-2024.
  • (2024) 8. Algorithms Against Antisemitism? In Antisemitism in Online Communication, pp. 205-236. DOI: 10.11647/obp.0406.08. Online publication date: 21-Jun-2024.
  • (2024) The Great Ban: Efficacy and Unintended Consequences of a Massive Deplatforming Operation on Reddit. Companion Proceedings of the 16th ACM Web Science Conference, pp. 85-93. DOI: 10.1145/3630744.3663608. Online publication date: 13-Jun-2024.
  • (2024) Community Begins Where Moderation Ends: Peer Support and Its Implications for Community-Based Rehabilitation. Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems, pp. 1-18. DOI: 10.1145/3613904.3642675. Online publication date: 11-May-2024.
  • (2024) "Community Guidelines Make this the Best Party on the Internet": An In-Depth Study of Online Platforms' Content Moderation Policies. Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems, pp. 1-16. DOI: 10.1145/3613904.3642333. Online publication date: 11-May-2024.
  • (2024) Bystanders of Online Moderation: Examining the Effects of Witnessing Post-Removal Explanations. Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems, pp. 1-9. DOI: 10.1145/3613904.3642204. Online publication date: 11-May-2024.
  • (2024) "I Got Flagged for Supposed Bullying, Even Though It Was in Response to Someone Harassing Me About My Disability.": A Study of Blind TikTokers' Content Moderation Experiences. Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems, pp. 1-15. DOI: 10.1145/3613904.3642148. Online publication date: 11-May-2024.
  • (2024) Users Volatility on Reddit and Voat. IEEE Transactions on Computational Social Systems, 11(5), pp. 5871-5879. DOI: 10.1109/TCSS.2024.3379318. Online publication date: Oct-2024.
  • (2024) No Easy Way Out: the Effectiveness of Deplatforming an Extremist Forum to Suppress Hate and Harassment. 2024 IEEE Symposium on Security and Privacy (SP), pp. 717-734. DOI: 10.1109/SP54263.2024.00007. Online publication date: 19-May-2024.
