The Council of Europe Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law is the first-ever international legally binding treaty in this field. It aims to ensure that activities within the lifecycle of artificial intelligence systems are fully consistent with human rights, democracy and the rule of law, while being conducive to technological progress and innovation.


The Framework Convention complements existing international standards on human rights, democracy and the rule of law, and aims to fill any legal gaps that may result from rapid technological advances. In order to stand the test of time, the Framework Convention does not regulate specific technologies and is essentially technology-neutral.

 How was the Framework Convention elaborated?

Work was initiated in 2019, when the Ad hoc Committee on Artificial Intelligence (CAHAI) was tasked with examining the feasibility of such an instrument. On completion of its mandate in 2022, CAHAI was succeeded by the Committee on Artificial Intelligence (CAI), which drafted and negotiated the text.

The Framework Convention was drafted by the 46 member states of the Council of Europe, with the participation of all observer states (Canada, the Holy See, Japan, Mexico and the United States of America), the European Union, and a significant number of non-member states (Argentina, Australia, Costa Rica, Israel, Peru and Uruguay).

In line with the Council of Europe’s practice of multi-stakeholder engagement, 68 international representatives from civil society, academia and industry, as well as several other international organisations, were also actively involved in the development of the Framework Convention.

 What does the Framework Convention require states to do?

Fundamental principles

Activities within the lifecycle of AI systems must comply with the following fundamental principles:

  • Human dignity and individual autonomy
  • Equality and non-discrimination
  • Respect for privacy and personal data protection
  • Transparency and oversight
  • Accountability and responsibility
  • Reliability
  • Safe innovation

Remedies, procedural rights and safeguards

  • Document relevant information about AI systems and their use and make it available to affected persons;
  • Ensure that this information is sufficient to enable the persons concerned to challenge the decision(s) made through the use of the system, or based substantially on it, and to challenge the use of the system itself;
  • Provide an effective possibility to lodge a complaint with the competent authorities;
  • Provide effective procedural guarantees, safeguards and rights to affected persons where an AI system significantly impacts the enjoyment of human rights and fundamental freedoms;
  • Give notice that one is interacting with an artificial intelligence system and not with a human being.

Risk and impact management requirements

  • Carry out risk and impact assessments in respect of actual and potential impacts on human rights, democracy and the rule of law, in an iterative manner;
  • Establish sufficient prevention and mitigation measures based on the outcome of these assessments;
  • Provide for the possibility for authorities to introduce bans or moratoria on certain applications of AI systems (“red lines”).

 Who is covered by the Framework Convention?

The Framework Convention covers the use of AI systems by public authorities – including private actors acting on their behalf – and private actors.

The Convention offers Parties two modalities to comply with its principles and obligations when regulating the private sector: Parties may opt to be directly obliged by the relevant Convention provisions or, as an alternative, take other measures to comply with the treaty’s provisions while fully respecting their international obligations regarding human rights, democracy and the rule of law.

Parties to the Framework Convention are not required to apply the provisions of the treaty to activities related to the protection of their national security interests, but must ensure that such activities respect international law and democratic institutions and processes. The Framework Convention does not apply to national defence matters or to research and development activities, except where the testing of AI systems has the potential to interfere with human rights, democracy or the rule of law.

 How is the implementation of the Framework Convention monitored?

The Framework Convention establishes a follow-up mechanism, the Conference of the Parties, composed of official representatives of the Parties to the Convention, to determine the extent to which its provisions are being implemented. Its findings and recommendations help to ensure States’ compliance with the Framework Convention and guarantee its long-term effectiveness. The Conference of the Parties also facilitates co-operation with relevant stakeholders, including through public hearings on pertinent aspects of the implementation of the Framework Convention.