
Measuring the Prevalence of Anti-Social Behavior in Online Communities

Published: 11 November 2022

Abstract

With increasing attention to online anti-social behaviors such as personal attacks and bigotry, it is critical to have an accurate accounting of how widespread they are. In this paper, we empirically measure the prevalence of anti-social behavior on one of the world's most popular online community platforms. We operationalize this goal as measuring the proportion of unmoderated comments in the 97 most popular communities on Reddit that violate eight widely accepted platform norms. To achieve this goal, we contribute a human-AI pipeline for identifying these violations and a bootstrap sampling method to quantify measurement uncertainty. We find that 6.25% (95% Confidence Interval [5.36%, 7.13%]) of all comments in 2016, and 4.28% (95% CI [2.50%, 6.26%]) in 2020, violate these norms. Most anti-social behavior remains unmoderated: moderators removed only one in twenty violating comments in 2016, and one in ten in 2020. Personal attacks were the most prevalent category of norm violation; pornography and bigotry were the most likely to be moderated, while politically inflammatory comments and misogyny/vulgarity were the least likely. This paper offers a method and a set of empirical results for tracking these phenomena as both social practices (e.g., moderation) and technical practices (e.g., design) evolve.
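To illustrate the bootstrap-based uncertainty quantification mentioned in the abstract, the sketch below shows the basic percentile-bootstrap idea for a prevalence estimate. This is not the paper's pipeline: it assumes a hypothetical array of binary labels (1 = norm-violating, 0 = not) for a sample of comments, and the paper's actual procedure may differ, for example in how it resamples across communities and incorporates the human-AI labeling step.

```python
# Minimal sketch (not the paper's method): percentile-bootstrap confidence
# interval for the prevalence of norm-violating comments, assuming a
# hypothetical array of binary labels produced by some labeling step.
import numpy as np


def bootstrap_prevalence_ci(labels, n_boot=10_000, alpha=0.05, seed=0):
    """Return (point estimate, (lower, upper)) for a (1 - alpha) percentile CI."""
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    n = len(labels)
    boot_rates = np.empty(n_boot)
    # Resample the labeled comments with replacement and recompute the
    # violation rate in each bootstrap replicate.
    for b in range(n_boot):
        boot_rates[b] = rng.choice(labels, size=n, replace=True).mean()
    lower, upper = np.percentile(boot_rates, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return labels.mean(), (lower, upper)


# Hypothetical example: 625 violating comments in a sample of 10,000.
sample = np.array([1] * 625 + [0] * 9_375)
point, (lo, hi) = bootstrap_prevalence_ci(sample)
print(f"prevalence = {point:.2%}, 95% CI [{lo:.2%}, {hi:.2%}]")
```

Under this simple i.i.d. resampling of comments, the interval is narrower than the community-level intervals reported in the abstract; the width of a bootstrap CI depends on how the resampling mirrors the actual sampling design.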




    Published In

    Proceedings of the ACM on Human-Computer Interaction, Volume 6, Issue CSCW2 (CSCW), November 2022, 8205 pages
    EISSN: 2573-0142
    DOI: 10.1145/3571154
    Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

    Publisher

    Association for Computing Machinery

    New York, NY, United States

    Publication History

    Published: 11 November 2022
    Published in PACMHCI Volume 6, Issue CSCW2


    Author Tags

    1. anti-social behavior
    2. moderation
    3. online communities

    Qualifiers

    • Research-article


    Cited By

    • (2024) Linguistically Differentiating Acts and Recalls of Racial Microaggressions on Social Media. Proceedings of the ACM on Human-Computer Interaction, 8(CSCW1), 1-36. https://doi.org/10.1145/3637366. Online publication date: 26-Apr-2024
    • (2024) LLM-Mod: Can Large Language Models Assist Content Moderation? Extended Abstracts of the CHI Conference on Human Factors in Computing Systems, 1-8. https://doi.org/10.1145/3613905.3650828. Online publication date: 11-May-2024
    • (2024) Rehearsal: Simulating Conflict to Teach Conflict Resolution. Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems, 1-20. https://doi.org/10.1145/3613904.3642159. Online publication date: 11-May-2024
    • (2024) The Medium is the Message: Toxicity Declines in Structured vs Unstructured Online Deliberations. Management of Digital EcoSystems, 374-381. https://doi.org/10.1007/978-3-031-51643-6_27. Online publication date: 2-Feb-2024
    • (2023) The Impact of Fake News on Traveling and Antisocial Behavior in Online Communities: Overview. Applied Sciences, 13(21), 11719. https://doi.org/10.3390/app132111719. Online publication date: 26-Oct-2023
    • (2023) Cura: Curation at Social Media Scale. Proceedings of the ACM on Human-Computer Interaction, 7(CSCW2), 1-33. https://doi.org/10.1145/3610186. Online publication date: 4-Oct-2023
    • (2023) A Pilot Study on People's Views of Gratitude Practices and Reactions to Expressing Gratitude in an Online Community. Companion Publication of the 2023 Conference on Computer Supported Cooperative Work and Social Computing, 182-188. https://doi.org/10.1145/3584931.3607000. Online publication date: 14-Oct-2023
