From Rule of Law to Rule of Community Standards?

The dramatic events of January 6, 2021 represent not only a severe crisis for American democracy – the first takeover of the US Capitol building since the British attack during the War of 1812 – but also a possible tipping point in the power relationship between the US government and the major online platforms. Until recently, the US government was considering whether to subject the platforms to stricter regulation – for example, by removing their immunity from claims pursuant to section 230 of the Communications Decency Act – because of a widespread perception that they had repeatedly failed to apply their policies on moderating harmful online content. The decision by the largest online platforms to deplatform President Trump in the aftermath of the failed insurrection attempt thus marks an odd reversal of roles: the platforms took drastic steps to protect the democratic institutions of the US government, in lieu of the government itself, which failed to physically protect the Capitol.

Reactions by the platforms to what they perceived as serious violations by the President of their policies against harmful political speech have been rapid and far-reaching. On January 7, Facebook indefinitely blocked President Trump’s account in order to prevent him from using it to “incite violent insurrection against a democratically elected government”, and on January 8 Twitter permanently suspended the account of @realDonaldTrump due to “the risk of further incitement of violence”. These two decisions, which were followed by similar steps on some of the other major platforms, build on content moderation measures applied in the preceding months, which involved labeling and limiting the sharing of a number of the President’s social media messages pursuant to policies designed to protect “civic integrity” and prevent misinformation in connection with the results of the November elections and with the COVID-19 pandemic. Twitter had also labeled and blocked the sharing of some of the President’s earlier tweets in connection with the Black Lives Matter protests, in application of its policy against glorifying violence. Almost in parallel to the deplatforming of the President by Facebook and Twitter, Apple, Google and Amazon removed the right-wing social network Parler from their app stores or web-hosting services.

These developments stand in marked contrast to the traditional hands-off policies of the online platforms, which for many years resisted calls to moderate false content, citing concerns about becoming “arbiters of the truth”, and displayed particular reluctance to interfere with content posted by “world leaders”, pointing to the public interest in obtaining information about world affairs and receiving messages from those leaders. The quick shift from refusing to serve as a “speech police” to permanently blocking the President raises major questions about the compatibility of the new platform policies and practices with core freedom of expression principles, and about the adequacy of the regulatory framework governing the platforms’ operations. Indeed, Jack Dorsey, the CEO of Twitter, himself described the measures taken by Twitter vis-à-vis the President’s account as setting a “dangerous precedent”.

Much has already been said and written about the implications of the private nature of the online platforms for their content moderation responsibilities. In a nutshell, as private entities they are not subject to constitutional limits on speech regulation, such as those found in the US First Amendment, but rather to a contractual framework established by terms of service agreements and to company policies embodied in community standards or rules. Still, given the significant market share held by a small number of online platforms and the dominance of online speech in the public ‘marketplace of ideas’, the major platforms have effectively become critical gatekeepers in the world of information and views, with outsized influence on public discourse. And while we still think of freedom of expression as a right exercised by individuals vis-à-vis governments, in the digital space the platforms operate as de facto governments, exercising law-making, law-interpretation and law-enforcement functions, including – apparently – the power to permanently banish individuals, groups and businesses from large parts of the online world (an outcome which might be regarded as the virtual equivalent of exile for life). What’s more, they seem to construe their governmental role as an increasingly active one. And the more actively they assume the role of “speech police”, the more pronounced their de facto public function becomes – online, and increasingly offline, where they take on new responsibilities that complement traditional governmental duties to protect public order, public health and election integrity.

In a rule of law democracy, the exercise of public power must meet tests of legality as well as democratic accountability, and transparency plays an important role in facilitating public scrutiny of that power. The public function increasingly assumed by online platforms invites greater scrutiny of the manner in which they actually exercise their significant quasi-governmental powers, including whether the norms they apply comport with widely held societal notions of freedom of expression. Such scrutiny may confer a degree of legitimacy on platform policy choices, compensating for the fact that the platforms enjoy no real democratic legitimacy. Alternatively, it may give political impetus to calls on governments to regulate platform activities more closely, whether with a view to promoting more liberal free speech standards (as reflected in the demand by some GOP politicians that the platforms ensure political ‘content neutrality’ or else lose their section 230 immunity) or to mandating more robust measures against harmful speech (as is the regulatory trend in some European countries, where the slow removal of harmful content can result in hefty fines for the platforms). The very possibility of regulatory intervention serves, in and of itself, as a method for generating some degree of accountability and responsiveness by online platforms to public expectations, alongside other avenues of pressure from consumer and advertiser groups.

When viewed from a rule of law perspective, one cannot but join Jack Dorsey in expressing apprehension about the implications of the precedent created by the decision to deplatform President Trump. Although the decision was nominally based on existing community standards (thereby retaining certain rule of law attributes), the interpretation of those standards has been changing at great speed, moving within a short time frame from an extreme non-interventionist position vis-à-vis the political speech of world leaders to what appears to be a diametrically opposed approach: permanently blocking the President’s account on the basis of vague criteria such as context, likely audience perception and overall risk assessment. While the specific decision to block President Trump (at least temporarily) may be justified as a legitimate restriction of freedom of expression in light of the very specific circumstances of the dramatic and unprecedented events which unfolded in the US in the post-election period, the impression that platform rules are being made up or changed on the fly is anathema to basic rule of law notions. Lack of certainty about the contents of the policies, their interpretive elasticity, and the decision-making process for generating new interpretations also raises significant concerns about potential abuse of power by the platforms, and about a future chilling effect on political speech made against the background of volatile contexts. The same lack of clarity extends to the question of whether President Trump can obtain effective remedies against the decision to deplatform him. Whereas Facebook announced on January 21 that its Oversight Board will review the decision, it is not clear what remedies will be available to him. The other platforms do not yet have a similar standing review mechanism and have not identified any alternative review process, underscoring their accountability and transparency deficits.

Now that the balance of power has shifted – with online platforms moving away from the role envisioned for them under section 230, namely re-publishers of third-party information who may choose to exercise some editorial control over such content (protected by section 230’s ‘good Samaritan’ clause authorizing the removal in good faith of objectionable content), toward that of gatekeepers of the political process and important defenders of the public interest – the legal and political checks over their newly exercised power need to be reevaluated. Three specific sets of questions should arguably form part of a future online platform rule of law review undertaken by democratic government institutions, international human rights bodies and public interest groups: (a) whether community standards on political speech are sufficiently detailed and precise in explaining what constitutes prohibited speech and what the consequences of violating those standards are – in order to retain broad public legitimacy, such standards ought to be compatible with recognized freedom of expression standards, such as those found in the domestic laws of rule of law democracies or in international human rights law; (b) whether decision-making regarding content moderation in political speech cases is sufficiently transparent in terms of the process undertaken and the reasons given for specific speech-limiting decisions; and (c) whether individuals and groups affected by content moderation decisions have effective avenues of recourse to challenge such decisions. Ensuring that online platforms integrate such basic rule of law safeguards into their community standards and actual operations may help mitigate the “dangerous precedent” of deplatforming President Trump.


Prof. Yuval Shany directs the Federmann Cyber Security Research Center Cyber Law Program at the Hebrew University of Jerusalem. He is also the Vice-President of the Israel Democracy Institute and former Chair of the UN Human Rights Committee.

A shorter version of this op-ed was published on 28 January 2021 in Fortune Magazine.
