Artificial intelligence is rapidly transforming consequential decision-making, disrupting democratic institutions. Most recently, some 26,000 Dutch parents, many of immigrant background or members of minority communities, stood wrongfully accused of defrauding their government, with disastrous consequences including suicide. It later surfaced that the authorities had naively procured an algorithm to more efficiently detect fraudulent claims of child benefit subsidies. Unbeknownst to the AI's deployers or its victims, the procured algorithm insidiously factored ethnic origin into its assessments, thereby disproportionately singling out immigrants and those holding dual citizenship. Disturbingly, this occurred notwithstanding multiple stringent, time-honoured legal prohibitions on such practices (not unlike our own) in the brick-and-mortar world.
As this European example strikingly highlights, the ecosystem in which public institutions operate has shifted: a transition to the cyber realm abruptly intensified by the pandemic, as governments precipitously migrate to the ease of private platforms and algorithms. This de facto “marriage of convenience” may best be characterized as an unstructured partnership, prematurely born of immediate necessity, whose legal scaffolding has yet to be defined and aligned with Human Rights (and Charter) values.
In effect, a ‘new normal’, defined by unmitigated dependence on private digital infrastructure, or on “a few dominant internet intermediaries acting as opaque gatekeepers in the curation, distribution, and monetization of information”, is ripe for serious reflection, as the recent Facebook outage suggests.
Plainly put, a pandemic ‘post-nup’ – a substantively revised legal framework – is required for a transformation as sweeping as the industrial revolution. For in terms of democratic governance, a great deal is at stake when public services and institutions rely on private platforms, particularly for assisted decision-making.
In that vein, the extensive AI activity in inherently sensitive – yet data rich – areas such as healthcare and immigration is particularly notable on the heels of the EU Proposal on the Regulation of AI, released just last spring.
Significantly, the Proposed Regulation, which will take shape in the coming years, sets out a risk-based classification model, limiting and even prohibiting some ‘high risk’ AI systems, particularly those which might invite ‘harm’, including immaterial harm.
But it is the Proposal’s tone, rather than its fine print, which is most striking.
Refreshingly deploying a Human Rights narrative rather than the box-checking, ‘compliance’-style privacy protection that the law traditionally afforded individuals (tellingly referred to as ‘data subjects’), the Proposal attempts to nurture an ‘eco-system of trust’ in the age of disinformation; to mitigate the unintended harm of predictive AI models in decision-making (rather than merely secure users’ often meaningless ‘consent’ to data collection); and to impose structured oversight mechanisms which promote fairness and mitigate the marginalization of vulnerable individuals. In short, to prevent the sort of catastrophic outcome described in the Dutch example.
This novel approach is consistent with the Supreme Court of Canada’s decision in Douez v. Facebook (2017), and indeed Justice Abella’s separate reasons, effectively pointing to the artificiality of protection afforded by merely clicking consent in the digital age.
In effect, the central challenge which the EU Proposal recognizes and seeks to address is the societal risk and unintended consequences of the rapid deployment of technology, including predictive AI, in areas such as health and fintech. These are contexts where women and minorities (insidiously underrepresented in data sets) can be at a disadvantage and have no choice but to ‘consent’.
The Proposal is therefore notable not only for its broad extraterritorial application beyond the EU but for advancing a legal framework premised on substantive human rights and the prevention of disparate impact harms.
Under this fresh paradigm, scrutiny would be aimed at mitigating inadvertent and insidious discrimination against vulnerable groups, the likes of which occurred in the Netherlands.
Skewed datasets, the inclusion of seemingly neutral factors that act as proxies for prohibited grounds, and ‘blind’ decision-making in the absence of contextual analysis are all significant societal risks that compel us to implement quality management systems around the use of AI in sensitive contexts such as digital health and fintech.
The AI deployed by the U.S. Department of Veterans Affairs to generate individualized risk scores for allocating medical resources to Covid patients is one such example. Canada’s own streamlining of its immigration process via an Algorithmic Impact Assessment (“a survey deployed to measure several factors related to immigrants including health as it pertains to the individual and their community”) may be another.
In short, though clearly tempting in our quest for efficiency in the face of limited resources, systems that would most likely be classified as “high risk” under the Proposed EU Regulation will increasingly invite rigorous scrutiny from a Human Rights law perspective.
If nothing else, the still little-known Proposed EU Regulation quietly but surely signals a notable paradigm shift: from a procedural regulation firmly anchored in consent to one whose narrative may be understood as anchored in the precautionary principle of harm prevention, aimed at entrenching digital access and equality.
In a word, a fresh legal approach featuring oversight and principled explainability is essential to public trust, especially when allocating scarce public resources. This is especially true when the staples of the current legal model (such as consent and foreseeability) pale in the face of opaque and autonomous systems.
As a prominent UK Justice, Lord Sales, cautioned more broadly with respect to legal reform: “[t]he law has to provide structures so that algorithms and AI are used to enhance human capacities, agency and dignity, not to remove them. It has to impose its order on the digital world and must resist being reduced to an irrelevance”.
The Proposed Regulation may be a step in just that direction.
* This blog was originally published in Slaw.