Automated hate speech detection and the problem of offensive language

T Davidson, D Warmsley, M Macy… - Proceedings of the International AAAI Conference on Web and Social Media, 2017 - ojs.aaai.org
Abstract
A key challenge for automatic hate-speech detection on social media is the separation of hate speech from other instances of offensive language. Lexical detection methods tend to have low precision because they classify all messages containing particular terms as hate speech, and previous work using supervised learning has failed to distinguish between the two categories. We use a crowd-sourced hate speech lexicon to collect tweets containing hate speech keywords, then use crowd-sourcing to label a sample of these tweets into three categories: those containing hate speech, those containing only offensive language, and those containing neither. We train a multi-class classifier to distinguish between these categories. Close analysis of the predictions and the errors shows when we can reliably separate hate speech from other offensive language and when this differentiation is more difficult. We find that racist and homophobic tweets are more likely to be classified as hate speech, but that sexist tweets are generally classified as offensive. Tweets without explicit hate keywords are also more difficult to classify.
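The pipeline the abstract describes, lexicon-based collection, crowd-sourced three-way labels, and a multi-class classifier, can be sketched as follows. The abstract does not name the feature representation or model, so the TF-IDF features and logistic regression used here are assumptions for illustration only, and the tweets and labels are toy placeholders standing in for the crowd-sourced annotations.

```python
# Minimal sketch of a three-class tweet classifier (hate speech / offensive / neither).
# TF-IDF + multinomial logistic regression is an assumed instantiation, not necessarily
# the paper's exact pipeline; the training examples below are made up.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy labeled sample standing in for the crowd-sourced annotations.
tweets = [
    "example tweet containing a slur targeting a group",   # labeled hate speech
    "example tweet containing generic profanity",          # offensive but not hate
    "example tweet with neutral everyday language",        # neither
    # ... in practice, thousands of labeled tweets
]
labels = ["hate", "offensive", "neither"]

# Unigram/bigram TF-IDF features feed a multi-class logistic regression.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), min_df=1),
    LogisticRegression(max_iter=1000),
)
model.fit(tweets, labels)

# Error analysis would compare predicted vs. annotated labels to see where
# hate speech is confused with merely offensive language.
print(model.predict(["another tweet to classify"]))
```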