Research article | Open access

Beyond Initial Removal: Lasting Impacts of Discriminatory Content Moderation to Marginalized Creators on Instagram

Published: 26 April 2024

    Abstract

    Recent work has demonstrated how content moderation practices on social media may unfairly affect marginalized individuals, for example by censoring women's bodies and misidentifying reclaimed terms as hate speech. This study documents and explores the direct experiences of marginalized creators who have been impacted by discriminatory content moderation on Instagram. Collaborating with our participants for over a year, we contribute five co-constructed narratives of discriminatory content moderation from advocates in trauma-informed care, LGBTQ+ sex education, anti-racism education, and beauty and body politics. In sharing these detailed personal accounts, we not only shed light on their experiences of being blocked, banned, or deleted unfairly, but also delve into the lasting impacts of these experiences on their livelihoods and mental health. Reflecting on their stories, we observe that content moderation on social media is deeply entangled with situated experiences of offline discrimination. Accordingly, we document how each participant experiences moderation through the lens of their often intersectional identities. Using participatory research methods, we collectively strategize ways to learn from these individual accounts and resist discriminatory content moderation, as well as imagine possibilities for repair and accountability.


    Published In

    Proceedings of the ACM on Human-Computer Interaction, Volume 8, Issue CSCW1
    April 2024, 6294 pages
    EISSN: 2573-0142
    DOI: 10.1145/3661497

    This work is licensed under a Creative Commons Attribution 4.0 International License.

    Publisher

    Association for Computing Machinery, New York, NY, United States

    Publication History

    Published: 26 April 2024
    Published in PACMHCI Volume 8, Issue CSCW1

    Author Tags

    1. LGBTQ
    2. algorithm bias
    3. content moderation
    4. digital activism
    5. gender
    6. hate speech
    7. instagram
    8. marginalization
    9. race
    10. shadowban
    11. social media
