Tackling bias in artificial intelligence systems to promote equality – new study published

Legal gaps and the complexity of AI-driven and algorithmic discrimination

Algorithmic technologies can perpetuate and amplify societal inequalities and harmful stereotypes. They are often built on historical data and models that reproduce stereotypes and false assumptions about gender, race, sexual orientation, ability, class, age, religion or belief, geography, and other socio-cultural and demographic factors.

The new study, commissioned by the Council of Europe's gender equality and anti-discrimination committees, investigates the specific risks that algorithmic technologies pose to equality and non-discrimination; the legal and policy responses that could combat these risks; and the potential of these technologies to promote equality, including gender equality.

The study highlights that bias stems not only from data but also from the wider human and social underpinnings of these technological artefacts. It identifies the shortcomings of existing legal and practical tools for preventing discrimination in the development of algorithmic systems, and it sets out ways to leverage these technologies to promote equality through legal routes of positive action and obligations.

On 8 October, a co-author of the study, Ivana Bartoletti, spoke about these issues as part of a Council of Europe session of the 2023 UN Internet Governance Forum in Kyoto on “Shaping Artificial Intelligence Technologies to Ensure Respect for Human Rights and Democratic Values”. 

Strasbourg 06/10/2023