By: Karen Eltis.
Three Ottawa residents recently launched a Constitutional (“Charter”) challenge against the Mayor’s Twitter block, citing social media as “the primary means for communicating” political thoughts and views. The Munk Center’s report, for its part, reveals that “algorithms and artificial intelligence are augmenting and replacing human decision makers in Canada’s immigration and refugee system, with profound implications for fundamental human rights”. Facebook now takes on a complex public health role, screening for suicide risk, as the New York Times reports.
Increasingly, and in the absence of normative clarity, platforms are tasked with decision making more broadly, as the examples above illustrate. As previously noted, corporate policies applied by private actors, at times at the behest of governments circumventing constitutional safeguards, raise untold questions. To further complicate matters, the prevailing precautions (such as the GDPR), while laudably endeavoring to allay these concerns, tenaciously cling to misbegotten, arguably moribund, notions of ‘compliance’ (notice and consent) inherent to data protection.
Thus, for instance, the hastily crafted incarnation of the “right to be forgotten” (RTBF), notwithstanding its obvious populist appeal, lacks conceptual coherence within the civil law tradition, which focuses on substantive personality rights and personhood, not the procedural protection of data.
Data privacy, and shielding data from an unwanted gaze, is undoubtedly important. But policy makers would do well to recognize that the issue goes far beyond that and lends itself to human rights conceptualization. Indeed, as a noteworthy start, in Douez v. Facebook (2017) the Supreme Court of Canada acknowledged that the ‘notice and consent’ model has not only outlived its usefulness (given, inter alia, the inequality of bargaining power) but, more importantly, imperils the human rights protections long enshrined in domestic law and threatens to sideline courts as individuals ‘opt out’ of their state’s legal protections and the ambit of its justice system.
Perhaps this is where Canada, a country that in many ways serves as a bridge between the US and Europe, and that is said to have gifted the industry the notion of “privacy by design”, might come in (however politely). Mirroring the Continental spirit, Canada’s privacy model emphasizes broader human rights rather than mere consumer protection. It is anchored in flexible principles which, unlike the GDPR’s liability-premised compliance model, leave room for organizations to innovate.
Although certainly not flawless (and itself in need of significant review), Canada’s framework in principle encourages cooperation with regulators, albeit subject to oversight, rather than confrontation. That in turn addresses the over-suppression of content spurred by companies’ fear of liability. Finally, it is, like Quebec civil law, flexible rather than overly procedural or pigeonholed, a desirable trait in a world where, as it has been said, the development of technology far outpaces law.
Expression, as noted, is contextual and cultural. It is above all human. The digital realm and its algorithms, however unintentionally, decontextualize and often distort decision making as a function of the underlying financial model. The application of standards must, like all regulation, provide the correct incentives. Cooperation with oversight, and responsibilization grounded in established human rights principles, may therefore prove a better vehicle than the liability-based model.
As the US sits at an inflection point, eyeing an omnibus statute to achieve equipoise between innovation and privacy rights, and as countries around the globe emulate the GDPR in its pre-packaged form, let us pause and consider the underlying rationale of our own model and its particular usefulness in developing frameworks and legal solutions to privacy challenges and beyond.