“As AI development affects nearly every aspect of our lives and its influence will only increase in the foreseeable future, member states must take concrete steps to ensure that people’s human rights are safeguarded in the design, development and deployment of AI systems”, says Council of Europe Commissioner for Human Rights Dunja Mijatović in a report released today.
In her recommendations, entitled “Human rights by design - future-proofing human rights protection in the era of AI”, the Commissioner reviews the key challenges member states face in protecting and promoting human rights in the use of AI, in light of her initial practical guidance issued in 2019. Member states should, for instance, assess the human rights risks and impacts of AI systems before deploying them, strengthen transparency guarantees, and ensure independent oversight and access to effective remedies. “The overall approach has not been consistent. Human rights-centred regulation of AI systems is still lacking”, she says.
The Commissioner observes that the persistent narrative portraying AI as so highly technical and mysterious that it escapes human control and effective regulation is a misconception. This myth has fostered a marked reluctance at senior policy level to engage comprehensively with the potential human rights harms caused by AI, and it hinders both the effective enforcement of existing legal standards and the creation of adequate mechanisms to mitigate threats.
Drawing on her findings and on consultations with national human rights structures, the Commissioner highlights three interdependent trends that constitute obstacles to the full implementation of international human rights standards related to AI in Europe.
Firstly, the lack of comprehensive, human rights-based approaches. Member states have too often adopted sector-specific approaches to the implementation of human rights standards and focused only on subsets of rights, such as privacy rights, rather than ensuring that existing guarantees are applied consistently across all relevant sectors that use AI. Legal frameworks, where they exist, have often not been enforced effectively or promptly: dependence on the infrastructure of large platforms can hinder implementation, and oversight remains fragmented.
Secondly, insufficient transparency and information sharing. Clear and up-to-date information about AI and its potential impact on human rights remains scarce across Europe. Intellectual property protections obstruct the enforcement of information rights, including for the judiciary, national human rights structures and regulatory authorities, hindering independent oversight.
Thirdly, the lack of initiative on the part of member states to use AI to strengthen human rights. As most AI development is driven by the private sector, public authorities have on the whole adopted a reactive rather than a proactive approach. “By delaying regulation that would prompt alternative innovation, member states risk missing the opportunities that AI capacities offer towards the implementation and strengthening of human rights protections and the fundamental principles of democracy and the rule of law”, warns the Commissioner.
In her recommendations, the Commissioner underscores the key role national human rights structures play in ensuring member states safeguard human rights in the design, development, and deployment of AI systems. She also focuses on the need to reinforce supervision and oversight by independent institutions, to promote transparency around AI systems and public awareness about their impact on human rights, and to proactively explore the potential of AI to boost rather than harm human rights protection. “There is still an untapped potential for AI design, development and deployment incentivised by value-based objectives, such as exposing and dismantling existing prejudice and discrimination, boosting public participation, amplifying the voices of those usually unheard, and addressing inequalities by helping prioritise those most in need”, she underlines.