The Council of Europe Convention on Artificial Intelligence: The Future Is Here?

Dafna Dror-Shpoliansky

Background

In May 2024, the Council of Europe (CoE) adopted the Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law. This is the first-ever binding international treaty focused specifically on AI - a significant milestone in the ongoing global efforts to govern AI technologies.

The treaty is the product of several years of work by the CoE Committee on Artificial Intelligence (CAI). It followed a negotiation process that included the 46 member states of the Council of Europe, the EU, and a number of non-member states which were invited to participate - the United States, Australia, Canada, Costa Rica, the Holy See, Israel, Japan, Mexico, Peru, Uruguay and Argentina - some of which have observer status in the CoE. In addition, the negotiation process included international organizations (OECD, OSCE, UNESCO), representatives from civil society, and representatives from the business, technological and academic communities.

The CoE is one of the leading human rights bodies in Europe. It has 46 member states (after it terminated the membership of the Russian Federation in 2022), 27 of which are member states of the European Union, and all of which are parties to the European Convention on Human Rights. The CoE aims to protect human rights in a variety of areas, and as such it promotes international human rights treaties while supervising states' compliance with the obligations they have assumed under them. Famously, in the field of law and technology, the CoE developed the Budapest Convention on Cybercrime, which has 72 states parties (and 21 non-party signatories), including many non-member states from different global regions. Another prominent CoE treaty is the Convention for the Protection of Individuals with regard to Automatic Processing of Personal Data ("Convention 108"); both treaties have become a "gold standard" in their respective fields, or at least a reliable benchmark in the global sphere (a modernized version of Convention 108 - Convention 108+ - has been concluded but has not yet entered into force). Thus, while the CoE is a regional body rather than a universal one per se, it has a remarkable impact on global governance, and it enjoys the benefits of being a relatively small body that manages to push international treaties forward faster than other international organizations, such as the UN. The Framework Convention is one such example: indeed, it was negotiated in a relatively short time frame (less than three years).

The Convention will be open for signature in September 2024. At the first stage, only member states of the CoE, the non-member states which participated in the negotiation process, and the EU will be able to sign. Once the Convention enters into force, any non-member state may be invited to accede to the Convention, if the Committee of Ministers of the CoE so decides and after obtaining the unanimous consent of the parties to the Convention.

 

The goal of the Convention and the issue of public vs. private entities

The purpose of the Convention is to ensure that activities within the lifecycle of AI are consistent with human rights, democracy and the rule of law. Once the Convention enters into force, each state party must apply its obligations to all "activities within the lifecycle of artificial intelligence systems that have the potential to interfere with human rights, democracy and the rule of law". These 'activities' refer to activities undertaken by public authorities or by private actors acting on their behalf.

In addition, states should address the risks and impacts to human rights arising from AI systems operated by private actors not acting on their behalf. With regard to such private entities, however, the drafters took a more lenient approach. While state parties "should apply the Convention" to the activities of public authorities, when it comes to AI activities undertaken by independent private actors, states should only "address the risks and impacts arising" from those activities. In other words, state parties are not obliged to apply the obligations set out in the Convention to activities undertaken by private actors; rather, they may choose the nature of the regulatory measures to be taken, and their regulatory approach toward the private sector is expected to "develop over time".

The decision not to apply the Convention directly to private companies was a highly controversial issue during the negotiations. Generally speaking, as with any international convention, states are expected to implement it through appropriate legislative, administrative or other measures. In this case, however, many argued that private companies are pivotal actors in the AI industry, and that delivering a first-ever convention on AI while leaving private companies free of any duty would undermine the purpose of the Convention and send the wrong message to the tech industry. Eventually, Article 3(1)(b) came to represent the compromise between these two positions: it grants states the discretion to determine what measures they will take to comply with the Convention, while requiring them to submit a declaration to the CoE Secretary General (SG) describing how they will implement their obligations concerning the activities of private actors.

This is a different approach from the EU's recent AI Act, a regulatory tool that provides a set of rules and obligations for the use and supply of artificial intelligence systems across different public and private sectors, so long as the AI systems provided are used in the EU.

Exemptions

There are three significant caveats regarding the scope of application of the Framework Convention. First, the Convention does not apply to activities within the lifecycle of AI systems that relate to the protection of a state party's national security interests. However, it is clear from the Explanatory Report that the Convention would still apply to law enforcement activities for the "prevention, detection, investigation, and prosecution of crimes, including threats to public security", insofar as the 'national security interests of the Parties are not at stake'. This means, for example, that facial recognition AI technologies, as well as AI-based technologies predicting recidivism in the court system, would probably fall within the scope of the Convention. A second exemption covers "matters related to national defense", which exceed the scope of the Council of Europe's mandate. Third, research and development activities fall outside the scope of the Convention as long as the systems have not yet been made available for use and their testing does not in itself pose a potential for interference with human rights, democracy and the rule of law. That being said, states would still need to comply with the principle of "safe innovation".

Overview of the Convention

The Framework Convention includes eight chapters: General provisions; Obligations; Principles; Remedies; Assessment and mitigation of risks and adverse impacts; Implementation of the Convention; Follow-up mechanism; and Final clauses dealing with amendments, the signature process, etc.

Article 4 of the Convention, entitled "Protection of human rights", provides that states should take measures to ensure that AI systems are consistent with applicable international and domestic human rights law obligations. In other words, it essentially extends the applicability of international human rights law (IHRL) to the lifecycle of AI systems. This approach of extending the same rights from the offline world to the online digital world, including AI, resembles the notion reflected in the recent UN resolution on AI and human rights, which was adopted at approximately the same time as the CoE Convention. Furthermore, in the Explanatory Report attached to the Convention, the drafters emphasized that the parties did not intend to create new rights.

Article 5 addresses the integrity of democratic processes and respect for the rule of law. In this regard, states are obliged to take measures to ensure that AI systems are not used to undermine the integrity, independence and effectiveness of democratic institutions and processes, including the principle of separation of powers, judicial independence and access to justice. This Article is fairly broad and, in many ways, inventive, as it is one of the first international instruments to draw a clear connection between emerging technologies and their disruptive potential for democracy as a whole. In particular, it emphasizes the risks posed by mis- and disinformation; manipulation of content and interference with the integrity of justice systems; and illegal or arbitrary surveillance. That being said, the drafters did not manage to provide a clear picture of what the "measures" that states need to pursue in order to address these concerns actually are.

Following these obligations, Chapter 3 of the Convention provides a set of principles that should be maintained in relation to activities within the lifecycle of AI: (1) human dignity and individual autonomy; (2) transparency and oversight requirements, including with regard to the identification of content generated by artificial intelligence systems; (3) accountability and responsibility for impacts on human rights, democracy and the rule of law resulting from AI; (4) equality and non-discrimination; (5) privacy and personal data protection; (6) reliability; and (7) safe innovation.

Interestingly, the principle of reliability essentially calls for establishing standards, or other measures such as assurance techniques, that allow the 'trustworthiness' of AI systems to be evaluated and verified. The Explanatory Report suggests that this includes a documentation and 'communication of evidence' requirement to facilitate the verification process. Yet another notable principle is that of safe innovation, under which each state party is called upon to enable the establishment of "controlled environments" for developing, experimenting with and testing AI systems under the supervision of the competent authorities, while states retain the discretion to decide how they will set up these regimes.

Chapter 4 of the Convention offers remedies for violations of human rights resulting from activities within the lifecycle of AI. Roughly speaking, the Convention offers three types of remedies: (1) a right to information, i.e., documentation, access to information by authorized bodies and, where 'appropriate and applicable', availability or communication of the relevant information to the affected persons; (2) a right to contest an AI-based decision: with regard to AI systems which "significantly affect human rights", states need to provide sufficient information for affected persons to contest a decision which was "made or substantially informed" by the AI system and, "where relevant and appropriate", to contest the use of the system itself; and (3) a right to lodge a complaint: a state must provide an "effective possibility" for the persons concerned to lodge a complaint with the competent authorities - which may include the oversight mechanisms states are required to establish to oversee compliance with the Convention.

In this regard, the drafters highlighted in the Explanatory Report the "significant imbalance" in the understanding of, access to and control over information between the different parties in the lifecycle of AI. Precisely because of that, however, it is quite surprising that the Convention does not explicitly provide a right to a human decision-maker per se in decisions which are "made or substantially informed" by an AI system (as provided under Article 22 of the GDPR, for example), settling instead for a right to contest the decision or to complain to an oversight mechanism whose scope of application is not quite clear. Moreover, the requirement to adopt measures of documentation, access and communication to the affected persons applies only to AI systems that "significantly affect human rights". The same threshold applies to the right to "procedural safeguards" provided in Article 15. Here, however, the drafters did not leave it exclusively to states to interpret what an AI system that "significantly impacts upon the enjoyment of human rights" is, specifying in the Explanatory Report that it includes situations in which AI substantially informs or takes decisions impacting human rights. These procedural safeguards include, for example, human oversight, ex-ante or ex-post review of the decision by humans and, in appropriate cases, "built-in operational constraints" that cannot be overridden by the AI system and are responsive to the human operator.

In addition, individuals should be notified that they are interacting with an AI system rather than with a human, "as appropriate for the context".

Chapter 5 provides a list of measures for the assessment and mitigation of risks and adverse impacts of AI systems, including monitoring, documentation and, "where appropriate", testing of AI systems before making them available for use in the market. The measures taken should be documented and reported. Notably, a state is required to assess the need for a moratorium, ban or other appropriate measures in respect of certain uses of AI systems that are incompatible with respect for human rights, the functioning of democracy or the rule of law.

Chapter 6 of the Convention addresses the implementation of the Convention by states. It includes a specific reference to giving due consideration to needs and vulnerabilities in relation to the rights of persons with disabilities and of children. Article 19 provides that states should seek to ensure public consultation on 'important questions' in relation to AI. This deliberative approach seems appropriate, as AI uses are expanding into everyday life. Accordingly, Article 20 encourages the promotion of digital literacy and skills.

The Convention also includes a follow-up mechanism, which entails reporting on the implementation of the Convention once every two years. As mentioned, states are also required to establish domestic oversight mechanisms to oversee compliance with the Convention. Similar to privacy commissions, the AI Convention's oversight mechanisms should be independent and impartial, and should have the necessary mandate and expertise to effectively oversee the implementation of the Convention.

Chapter 8 deals with the operational and procedural aspects governing the Convention. Unlike other CoE conventions, the AI Convention does not allow for any substantive reservations, and it remains to be seen how this will affect the number of states ratifying the Convention.

Final Remarks

The Convention does not regulate technology per se, nor does it attend to the technical aspects of AI systems, and its risk assessment criteria are quite ambiguous. While this methodology might weaken the Convention's effectiveness, it simultaneously creates a "future-proof" instrument capable of addressing risks that AI systems might pose in the future.

In addition, the choice to cover the whole lifecycle of AI - including design, data collection and processing, testing, deployment, supply and monitoring - seems intended to capture the decentralized supply chain of AI systems. Yet it is doubtful whether the Convention is well adapted to do so, given its limited application to private companies and the regulatory gaps that might arise between different jurisdictions.

Furthermore, the clear intention to apply only the existing normative frameworks that states already have in relation to human rights to activities within the lifecycle of AI, rather than to create new rights, presents another challenge, as new rights might be necessary to protect the particular human interests that AI systems endanger. Nevertheless, the drafters seem to have recognized that the application of human rights across the AI lifecycle will demand some elasticity, and clarified that states are expected to reflect seriously on their existing human rights frameworks and to ensure that they are adapted, and effective enough, to address the new challenges AI technologies pose.