research-article
Open access

The Unsung Heroes of Facebook Groups Moderation: A Case Study of Moderation Practices and Tools

Published: 16 April 2023

Abstract

Volunteer moderators have the power to shape society through their influence on online discourse. However, the growing scale of online interactions increasingly presents significant hurdles to meaningful moderation. Furthermore, only limited tools are available to assist volunteers with their work. Our work explores the potential of AI-driven, automated moderation tools for social media to assist volunteer moderators. One key aspect is to investigate the degree to which such tools must become personalizable and context-sensitive in order not just to delete unsavory content and ban trolls, but to adapt to the millions of online communities on social media mega-platforms that rely on volunteer moderation. In this study, we conduct semi-structured interviews with 26 Facebook Group moderators to better understand moderation tasks and their associated challenges. Through qualitative analysis of the interview data, we identify and address the most pressing themes in the challenges they face daily. Using insights from the interviews, we conceptualize three tools with automated features that assist moderators with their most challenging tasks and problems. We then evaluate the tools for usability and acceptance with 22 of the same moderators, using a survey that draws on the technology acceptance literature. Qualitative and descriptive analyses of the survey data show that context-sensitive, agency-maintaining tools, together with trial experience, are key to mass adoption by volunteer moderators and, in turn, to building trust in the validity of the moderation technology.


Cited By

  • (2024) Third-Party Developers and Tool Development For Community Management on Live Streaming Platform Twitch. Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems, 1-18. https://doi.org/10.1145/3613904.3642787 (published 11 May 2024)
  • (2024) Why did you delete my comment? Investigating observing consumers' reactions to comment-deletion-clues during a brand crisis. Psychology & Marketing. https://doi.org/10.1002/mar.22065 (published 24 June 2024)
  • (2023) Breaking the Silence: Investigating Which Types of Moderation Reduce Negative Effects of Sexist Social Media Content. Proceedings of the ACM on Human-Computer Interaction, Vol. 7, CSCW2, 1-26. https://doi.org/10.1145/3610176 (published 4 October 2023)


Published In

Proceedings of the ACM on Human-Computer Interaction, Volume 7, Issue CSCW1
April 2023, 3836 pages
EISSN: 2573-0142
DOI: 10.1145/3593053
This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.

Publisher

Association for Computing Machinery

New York, NY, United States

Publication History

Published: 16 April 2023
Published in PACMHCI Volume 7, Issue CSCW1


Author Tags

  1. automation
  2. moderation
  3. moderation technology
  4. personalized AI
  5. technology adoption

Qualifiers

  • Research-article

Article Metrics

  • Downloads (last 12 months): 844
  • Downloads (last 6 weeks): 122
Reflects downloads up to 01 Nov 2024
