Human-Machine Collaboration for Content Regulation: The Case of Reddit Automoderator

Published: 19 July 2019
Abstract

What one may say on the internet is increasingly controlled by a mix of automated programs and decisions made by paid and volunteer human moderators. On the popular social media site Reddit, moderators rely heavily on a configurable, automated program called “Automoderator” (or “Automod”). How do moderators use Automod? What advantages and challenges does its use present? We participated as Reddit moderators for over a year and interviewed 16 moderators to understand the use of Automod within the sociotechnical system of Reddit. Our findings suggest a need for audit tools that help tune the performance of automated mechanisms, for a repository for sharing tools, and for an improved division of labor between human and machine decision making. We offer insights relevant to multiple stakeholders: creators of platforms, designers of automated regulation systems, scholars of platform governance, and content moderators.
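To give a concrete sense of what configuring Automod involves, the sketch below shows two illustrative rules. Automod rules are written in YAML, and the field names here follow Reddit's publicly documented Automod syntax; the specific phrases, thresholds, and reasons are invented for illustration and are not taken from the paper.

    ---
    # Remove comments containing any of these (hypothetical) spam phrases.
    type: comment
    body (includes): ["buy cheap followers", "click my profile"]
    action: remove
    action_reason: "Matched spam phrase"
    ---
    # Hold submissions from very new, low-karma accounts for human review
    # instead of removing them outright.
    type: submission
    author:
        account_age: "< 2 days"
        combined_karma: "< 10"
    action: filter
    action_reason: "New account, held for moderator review"
    ---

Rules like these run against every new submission or comment in a subreddit; as the findings above suggest, tuning such conditions to limit false positives is a substantial part of the moderation work the paper examines.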

Published In

ACM Transactions on Computer-Human Interaction, Volume 26, Issue 5
October 2019
249 pages
ISSN: 1073-0516
EISSN: 1557-7325
DOI: 10.1145/3349608

Publisher

Association for Computing Machinery, New York, NY, United States

Publication History

Published: 19 July 2019
Accepted: 1 May 2019
Revised: 1 April 2019
Received: 1 January 2019
Published in TOCHI Volume 26, Issue 5

Author Tags

1. Automod
2. Content moderation
3. Automated moderation
4. Future of work
5. Mixed initiative
6. Platform governance
