Fighting Hate Speech, Silencing Drag Queens? Artificial Intelligence in Content Moderation and Risks to LGBTQ Voices Online

  • Original Paper
  • Published in Sexuality & Culture

Abstract

Companies operating internet platforms are developing artificial intelligence tools for content moderation purposes. This paper discusses technologies developed to measure the ‘toxicity’ of text-based content. The research builds upon queer linguistic studies that have indicated the use of ‘mock impoliteness’ as a form of interaction employed by LGBTQ people to cope with hostility. Automated analyses that disregard such a pro-social function may, contrary to their intended design, actually reinforce harmful biases. This paper uses ‘Perspective’, an AI technology developed by Jigsaw (formerly Google Ideas), to measure the toxicity levels of tweets from prominent drag queens in the United States. The research indicated that Perspective considered a significant number of drag queen Twitter accounts to have higher levels of toxicity than white nationalists. The qualitative analysis revealed that Perspective was not able to properly consider social context when measuring toxicity levels and failed to recognize cases in which words that might conventionally be seen as offensive conveyed different meanings in LGBTQ speech.


[Figures 1–10: not reproduced in this preview.]


Availability of Data and Material

Due to Twitter’s Developer Policy, which provides rules and guidelines for developers who interact with Twitter’s applications and content, the authors decided not to publish the CSV dataset. The policy sets forth several restrictions in this regard, limiting what may be disclosed in downloadable datasets. Additionally, it provides that any third party with access to the dataset would have to adhere to Twitter’s ToS, Privacy Policy, Developer Agreement, and Developer Policy; the authors would not be in a position to guarantee this if the dataset were publicly available.

Code Availability

The Python source code of the algorithms developed for the research is available on GitHub and may be accessed at the following link: https://github.com/internetlab-br/ai_content_moderation.

Notes

  1. Algorithms may be defined as ‘encoded procedures for transforming input data into a desired output, based on specified calculations’ (Gillespie, 2014: 167). They are designed to store and analyze data, apply mathematical formulas to it and come up with new information as a result.

  2. Available at: https://www.perspectiveapi.com/#/.

  3. Available at: https://www.tweepy.org/.

  4. Available at: https://pypi.org/project/emoji/.

  5. Available at: https://requests.kennethreitz.org/en/master/.

  6. Available at: https://developer.twitter.com/en/developer-terms/agreement-and-policy.html (accessed 17 September 2019). Link on Perma.cc: [https://perma.cc/RZM2-4LYW].

  7. For more information, see Perspective’s API reference on GitHub: https://github.com/conversationai/perspectiveapi/blob/master/api_reference.md.

  8. The source code is available at: https://github.com/internetlab-br/ai_content_moderation.
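As a rough illustration of the workflow the notes describe (scoring text through Perspective’s REST endpoint with the `requests` library, notes 5 and 7), the sketch below builds the JSON body Perspective expects for a TOXICITY query and extracts the summary score from a response. This is a minimal sketch based on Perspective’s public API reference, not the authors’ actual pipeline; `api_key` is a placeholder, and the request shape should be checked against the current documentation.

```python
# Perspective's Analyze endpoint, per its public API reference (note 7).
ANALYZE_URL = "https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze"


def build_payload(text: str) -> dict:
    """Build the request body for a TOXICITY query, per Perspective's docs."""
    return {
        "comment": {"text": text},
        "languages": ["en"],
        "requestedAttributes": {"TOXICITY": {}},
    }


def extract_score(response_json: dict) -> float:
    """Pull the summary toxicity probability (0..1) out of a response body."""
    return response_json["attributeScores"]["TOXICITY"]["summaryScore"]["value"]


def score_toxicity(text: str, api_key: str) -> float:
    """Send one piece of text to Perspective and return its toxicity score."""
    import requests  # note 5: the Requests library handles the HTTP call

    resp = requests.post(
        ANALYZE_URL,
        params={"key": api_key},
        json=build_payload(text),
        timeout=10,
    )
    resp.raise_for_status()
    return extract_score(resp.json())
```

The payload construction and score extraction are pure functions, so they can be unit-tested without network access or an API key.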


Acknowledgements

The authors are grateful to Timothy Rosenberger for his editing and review support; to Ester Borges, Clarice Tavares and Victor Pavarin Tavares for their research support.

Funding

The authors received no financial support for the research, authorship, and/or publication of this article.

Author information

Corresponding author

Correspondence to Thiago Dias Oliva.

Ethics declarations

Conflict of interest

There are no conflicts of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Dias Oliva, T., Antonialli, D.M. & Gomes, A. Fighting Hate Speech, Silencing Drag Queens? Artificial Intelligence in Content Moderation and Risks to LGBTQ Voices Online. Sexuality & Culture 25, 700–732 (2021). https://doi.org/10.1007/s12119-020-09790-w
