Putting Algorithmic Regulation to Test: The Case of Clustered Disclosure Regulation

By: Fabiana Di Porto

After Cambridge Analytica and recent local elections, the definition of cybersecurity has been broadened to include the control of data and information on social media and the Internet more broadly via AI-driven technologies. Against this backdrop, a lack of awareness has been claimed among all stakeholders, especially individuals and Small and Medium Enterprises (SMEs). The Council of Europe has pointed to the ‘dangers for democratic societies that emanate from the possibility [of public entities and private actors] to employ algorithmic capacity to manipulate and control [both] economic choices [and] social and political behaviors’. It has therefore urged Member States, among other things, to empower users by enhancing their awareness; to increase transparency and accountability in the use of algorithms; and to adopt innovative solutions, including regulatory ones, to avoid algorithmic manipulation of information.

The digitalization of the economy has further lowered the bargaining power and awareness of consumers. The use of automatic algorithms (machine-learning technologies) running on big data makes personalization of information possible, giving many firms, even non-dominant ones, the capability to manipulate the huge amounts of information they produce and distribute to consumers, with the ultimate goal of steering their choices. Yet disclosure regulation, which is key to addressing consumers’ informational needs and low bargaining power, has remained substantially the same: mostly generic, detailed, and impersonal. An effort should therefore be made to rethink disclosure regulation with a view to differentiating it, integrating big data analytics into the design of disclosures.

The core of my policy research proposal is to differentiate disclosures so as to target the different informational needs of regulatees, while at the same time ensuring that disclosures comply with the proportionality principle. To this end, regulators should be allowed to use predictive algorithms to design and implement new and better forms of disclosure regulation, since big data analytics can reveal the informational needs of individual consumers and the features of each market scenario. However, to reconcile algorithmic opacity with the transparency (and participation) of the rule-making process, I propose incorporating an experimentation phase to pre-test the algorithms needed to design disclosures, within a co-regulatory process in which representatives of consumers and industry participate, facilitated by the regulator (co-regulation). Such a small group would serve to train an algorithm in a controlled environment (as in regulatory sandboxes), so that algorithmic disclosures are pre-tested before large-scale implementation. Pre-testing the algorithm in a participatory procedure would increase knowledge about the target population and its informational needs, while also strengthening transparency and maintaining due process guarantees in the regulatory process. The targeted disclosures thus produced are likely to be not only proportionate, and thus lawful, but also effective in restoring consumer autonomy in the digital economy, because they will have the best chance of being read and understood.
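To make the clustering idea concrete, the following is a minimal sketch in Python (using scikit-learn) of how a small pre-test sample might be partitioned into informational-need clusters, each mapped to a disclosure variant. It is illustrative only: the sample data, the three features, the number of clusters, and the variant labels are all assumptions introduced for this example, not elements of the proposal itself.

import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

rng = np.random.default_rng(seed=42)

# Hypothetical sandbox sample: 60 participants described by three
# illustrative features that proxy informational needs, e.g. digital
# literacy, domain knowledge, and tolerated reading time (all in [0, 1]).
sample = rng.normal(loc=[0.5, 0.5, 0.5], scale=0.2, size=(60, 3)).clip(0, 1)

# Standardize the features, then partition the sample into k clusters.
# k = 3 is arbitrary here; in practice it would be validated during the
# co-regulatory pre-testing phase.
scaler = StandardScaler().fit(sample)
X = scaler.transform(sample)
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)

# Map each cluster to a (hypothetical) disclosure variant.
variants = {
    0: "short plain-language summary",
    1: "layered notice with expandable detail",
    2: "full legal text with visual aids",
}

for cluster_id, variant in variants.items():
    members = int(np.sum(kmeans.labels_ == cluster_id))
    print(f"cluster {cluster_id}: {members} participants -> {variant}")

# Once pre-tested, a new consumer could be routed to the variant of the
# nearest cluster (again, purely illustrative).
new_consumer = scaler.transform([[0.8, 0.3, 0.6]])
print("assigned variant:", variants[int(kmeans.predict(new_consumer)[0])])

In this sketch, comprehension testing of each variant within the small group would feed back into the choice of features and number of clusters before any large-scale rollout, mirroring the pre-testing phase described above.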

Eventually, such disclosures could also be tested in the cybersecurity domain, to ascertain how effective they are at raising awareness among the target population.