As artificial intelligence (AI) continues to evolve at a rapid pace, policymakers around the world are facing unprecedented challenges in ensuring that this powerful technology serves the public good while safeguarding human rights. In response to these challenges, a high-level side event titled “Legal Certainty and Trust in AI: The Role of Parliamentarians in AI Governance” was held during the April 2025 session of the Parliamentary Assembly of the Council of Europe (PACE) at the Palais de l’Europe in Strasbourg.
Building a Human-Centred and Trustworthy AI Ecosystem
The session brought together senior officials, legal experts, and parliamentarians to discuss the urgent need for international cooperation and legal clarity in AI governance. As the impact of AI on human rights, democracy, and the rule of law is felt unevenly across countries, participants emphasized that parliamentarians have a crucial role to play in bridging the gap between technological innovation and legal safeguards.
Keynote speaker Mr Bjørn Berge, Deputy Secretary General of the Council of Europe, stressed the importance of swiftly ratifying the Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law, the world's first legally binding treaty on AI, open for signature globally since 5 September 2024. He urged parliamentarians to act as key drivers of responsible and inclusive AI governance:
“Let’s make sure our new AI Convention quickly enters into force. Parliamentarians have a unique responsibility to promote ethical AI use and ensure legal certainty in their national systems.”
Mr Mario Hernández Ramos, Chair of the Council of Europe’s Committee on AI (CAI), underlined that “the true meaning of the Framework Convention is to be the first international binding instrument on AI and human rights, democracy and the Rule of Law upholding principles such as human dignity and individual autonomy, equality and non-discrimination, protection of privacy and personal data protection, transparency and oversight, accountability and responsibility, safe innovation and reliability, and to be open to the world, not confined to a limited geographical and cultural area”.
Focus on Legal Instruments and AI Risk Assessment
Discussions also explored the Council of Europe's HUDERIA methodology (Human Rights, Democracy and Rule of Law Impact Assessment), a comprehensive tool for assessing the risks and impacts of AI systems on human rights, democracy, and the rule of law. The methodology, adopted by the Committee on Artificial Intelligence (CAI) in November 2024, is designed to support policymakers in making informed, rights-based decisions on AI deployment.
High-Level Participants
The event was co-organised by Directorate General I and PACE, and moderated by Mr Vladimir Vardanyan (Armenia), Vice-Chairperson of the PACE Sub-Committee on Artificial Intelligence and Human Rights.
The panel featured:
- Mr Mario Hernández Ramos, Chair of the Council of Europe’s Committee on AI (CAI) and Professor of Constitutional Law, Complutense University of Madrid (Spain)
- Mr Fotis Fitsilis, Head of Department for Scientific Documentation and Supervision, Hellenic Parliament (Greece)
The experts discussed how national parliaments can contribute to responsible, transparent, and accountable AI frameworks, especially in areas such as risk-based regulation, data protection, and the ethics of automated decision-making. The event also underscored the pressing need for multi-stakeholder dialogue. As AI technologies increasingly shape societies, a unified, human rights-based legal approach is essential to build public trust and ensure democratic resilience.
Learn more about the Council of Europe's efforts on AI governance on the organisation's website.