Content Moderation in a State of Flux
HU CyberLaw Newsletter Editorial #17

Welcome to the 17th newsletter of the HUJI CyberLaw Program.
This newsletter is published at a time when the US is confronting widespread protests in the aftermath of the killing of George Floyd on 25 May 2020 by a Minneapolis policeman, a public health crisis caused by the COVID-19 pandemic, and growing tensions related to the upcoming elections. Social media play an important role in all of these situations: they have been used to promulgate hate speech and calls for violence in connection with the Black Lives Matter protests, they have been misused to disseminate disinformation about the risks of COVID-19 and available treatments, and they raise concerns about manipulation and misinformation by political actors in the US and by foreign powers in and around the elections.

The growing pressure on US social media companies to prevent abuse of their platforms has already led Facebook to establish an Oversight Board and Twitter to apply new content moderation rules to some of President Trump’s tweets. After receiving much criticism for its reluctance to apply content moderation tools to political posts, Facebook moved in June to remove posts and ads from the Trump campaign for violating its hate speech policy. The Trump administration’s threats to abrogate section 230 of the Communications Decency Act – which shields social media companies from liability for most of the content posted on their platforms – in retaliation for the platforms’ alleged failure to respect political neutrality and their alleged selectiveness in applying content moderation tools, coincided with calls by prominent Democratic politicians to hold platforms responsible for hate speech and misinformation.
The content moderation policies of the major social media platforms have also been the subject of intense debate in Europe, with two major European states – France and Germany – having already introduced harsh liability rules for failure to promptly remove offensive content (in June, however, the French Constitutional Council nullified parts of the French law on online hate speech and harassment for excessively infringing on freedom of expression). Other countries, including Israel, have also expressed concern about the lax content moderation policies of certain platforms.

Ultimately, the issue in question is that of accountability – a topic also discussed in one of the podcasts presented in this newsletter. It appears as if the tectonic plates on which Internet governance was founded in the 1990s are shifting fast, and that social media companies find themselves increasingly confronted with expectations reflected in the old adage that with great power comes great responsibility. This, in turn, would require them to strike a new balance between freedom of expression and competing rights and public interests, and to develop sophisticated mechanisms allowing affected users to seek review of platform decisions and obtain a remedy.

Technological tools, including natural language processing tools – another topic discussed in this newsletter – will have to become more nuanced and effective in order to allow social media platforms to implement their new policies. This, in turn, raises new tensions in the configuration of decision-making processes involving human and non-human decision makers – the topic of another podcast featured in this newsletter. So not only is everything relative – as maintained by one of Hebrew University’s founders – everything we research about cyberspace and digital platforms also seems to be interrelated.

I hope you enjoy this newsletter edition. As always, the rest of the team at the Federmann Center and I would be most happy to discuss with you further the topics mentioned in this newsletter and in our other publications, as well as the ways our Program and Center can help confront contemporary challenges in the digital space.

Sincerely,
Yuval Shany
Program Director