Abstract: Detection and Processing of Hate Speech through Methods of Computational Criminology

The growing threat of hate speech on social media has a significant impact on liberal societies and requires a governmental response. Germany introduced the Network Enforcement Act (NetzDG) in 2017, which obliges social media platforms to remove hate speech within 24 hours. With initiatives such as “Verfolgen statt nur Löschen” (prosecution instead of mere deletion), it soon became clear that deleting hate speech is not sufficient. Recent amendments to the NetzDG also require companies to report hate speech cases. However, once these amendments come into force, the number of reported cases will increase significantly, making manual processing particularly challenging. This research proposes a machine learning text classification model to process unstructured textual data on social media. The automated identification and classification of hate speech, as well as the automated ranking and prioritisation of comments, can significantly reduce manual labour. Despite these benefits, automated hate speech detection raises ethical challenges. First, it can lead to automation bias, in particular overreliance on generated results and a reduction of human control. Second, over-blocking undermines freedom of expression: automation can serve as a gatekeeping mechanism and a tool to silence voices. This study contributes to research by taking an interdisciplinary approach to hate speech through the lenses of computer science and criminology.
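
To make the proposed pipeline concrete, the sketch below illustrates one possible baseline, not the study's actual model: a TF-IDF text classifier (assuming scikit-learn) that labels comments and then ranks them by predicted probability so that reviewers see the most likely violations first. The training texts, labels, and threshold are hypothetical placeholders.

```python
# Minimal sketch of classification plus prioritisation of comments.
# Assumptions: scikit-learn is available; a labelled corpus exists
# (1 = hate speech, 0 = not). Hateful examples are placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data; a real system would use an annotated corpus.
train_texts = [
    "thanks for the thoughtful reply",      # benign
    "<placeholder: hateful comment A>",     # hate speech
    "i disagree with this policy proposal", # benign
    "<placeholder: hateful comment B>",     # hate speech
]
train_labels = [0, 1, 0, 1]

# TF-IDF features feeding a linear classifier: a common baseline for
# classifying unstructured social media text.
model = make_pipeline(
    TfidfVectorizer(lowercase=True, ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
model.fit(train_texts, train_labels)

# New, unlabelled comments to triage.
comments = [
    "thanks, that was a helpful explanation",
    "<placeholder: hateful comment C>",
]

# The probability of the hate speech class serves as a priority score,
# so manual review can start with the most likely violations.
scores = model.predict_proba(comments)[:, 1]
for score, text in sorted(zip(scores, comments), reverse=True):
    print(f"{score:.2f}  {text}")
```

In such a setup, the ranking step, rather than automatic deletion, is what reduces manual labour while keeping a human reviewer in the decision loop, which also speaks to the automation bias and over-blocking concerns raised above.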