The Council of Europe’s starting point is that AI is a human rights issue and so is gender equality. When AI and gender meet, we have to be particularly vigilant and proactive.
AI systems that reproduce existing biases and prejudices lead to discriminatory outcomes and reinforce structural inequalities. The under-representation of women in the AI sector, especially in decision-making, design and development roles, only reinforces those risks.
A particularly worrying dimension of AI is its role in technology-facilitated violence against women and girls. Deepfake technology, algorithmic content curation, and automated decision-making systems can be misused to facilitate online harassment, image-based abuse, and other forms of gender-based violence.
These are clear human rights issues. In its judgment in M.S.D. v. Romania of December 2024, the European Court of Human Rights examined the case of an 18-year-old woman whose intimate images, accompanied by personal details, had been shared online by her ex-boyfriend without her consent, causing her significant psychological harm. The Court affirmed that online violence, including the non-consensual sharing of intimate images, is a form of gender-based violence that undermines the physical and psychological integrity of women and girls. The case did not directly involve AI, but it illustrates well the broader problem of technology-facilitated violence.
At the same time, AI has the potential to advance gender equality. AI systems can be leveraged to identify and address disparities in treatment, amplify women’s voices in decision-making, and support efforts to achieve substantive gender equality.
Realising this positive potential requires stronger and more targeted action. We need clear accountability measures, robust regulatory frameworks, and gender-sensitive AI governance.
Last year, the Council of Europe concluded negotiations on the world’s first legally binding treaty on AI, the Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law. The Convention refers specifically to the situation of women and girls and is centred on principles such as human dignity, individual autonomy, accountability, equality and non-discrimination.
It adopts a risk-based approach: parties are obliged to provide remedies and procedural safeguards and to set up a risk and impact management framework. The Convention is open to accession by all countries. Ten states, including the US, have signed it, along with the European Union on behalf of its 27 member states.
There are two other key initiatives under way at the Council of Europe:
Firstly, a Recommendation on Equality and AI will address the challenges that AI poses to gender equality and non-discrimination. It will offer guidance on integrating equality and non-discrimination principles into AI systems and on ensuring that AI promotes and protects gender equality rather than exacerbating discrimination and leading to violations of women’s rights.
Secondly, a Recommendation on technology-facilitated violence against women and girls will focus on strengthening accountability, ensuring that victims receive adequate and effective responses from member States.
A key objective is to reinforce legal frameworks, particularly under criminal law, to hold perpetrators accountable and provide remedies to victims.
The draft Recommendation also notes that AI can play a constructive role in combating technology-facilitated violence against women through content moderation and risk detection mechanisms.
Although non-binding, these Recommendations, adopted at the level of Ministers of Foreign Affairs, are widely used by states and cited in the case-law of the European Court of Human Rights.
Finally, we have to engage everyone: the public authorities, of course, but also companies, National Human Rights Institutions, equality bodies, and civil society.
Our experience from negotiating the Framework Convention shows that developing AI regulation in sensitive fields such as gender equality requires a multi-stakeholder approach. This really concerns everyone.