Human rights oversight of artificial intelligence

Speech
Speech delivered at the Embassy of Ireland in Paris.

Mr Ambassador,

Excellencies,

Ladies and Gentlemen,

Dear friends,

Before I begin, I would like to ask you to excuse me: I am going to speak in English. Like many English speakers, I am afraid of speaking French in public.

Many years ago, I was sitting on a hotel terrace, looking across the lawn. As I sat there, I became aware of a little machine going up and down the grass, cutting as it went. It was my first encounter with a robot lawn mower.

I was transfixed. For a good half an hour, I watched as it made its journey, up and down, a thousand times. And at a certain point, I began to feel sorry for it. That it was doing this job with no gratitude, no recognition, no encouragement. I had to restrain myself from going over to pat it.

But what I remember about that moment, above all, is a genuine sense of awe. It was my first encounter with robotic technology in any meaningful way. And I was deeply impressed.

Much has changed since that first encounter, but I never cease to be in awe of AI and its potential for human thriving and well-being.

Just think about the transformative impact it can have on vaccine development. Amidst the worst pandemic since 1920, thanks to a new machine learning tool known as Smart Data Query, the COVID-19 vaccine clinical trial data was ready to be reviewed a mere 22 hours after the primary efficacy case counts were met. Previously, this would only have been possible after a 30-day trial phase. Similar stories can be told across sectors, all about saving and improving lives.

But of course, given my work at the Council of Europe, I am no less aware of the risks that AI poses for us, and for our societies.

Here are just five contexts in which AI can get it badly wrong.

The first is the well-known one of discrimination and everything related to it. AI hoovers up every fact, every datum in our world, with all of the discrimination, the hatred, the bias to be found in that data. This is well known; it is nothing new. However, beyond mistakes or biases in data, there is also the worrying extent to which data is simply wrong. Sometimes this is obvious and deliberate, but sometimes it is very subtle and difficult to spot. And I am not talking about disinformation here, but rather about what is known as “AI slop”. This very illustrative term describes the vague, low-quality AI-generated content that is now flooding the internet without any control or moderation.

So, it is about bias, it is about mistakes and mess, and it is also about something very specific to tech: the role of feedback loops, and the extent to which feedback loops can amplify error over time and use. We have looked at that in the context of automated online content moderation. We have seen how a piece of technology that begins benign and does its job relatively well can learn error and then expand that error, with some pretty remarkable consequences.
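To make that dynamic concrete for the technically minded reader, here is a minimal, hypothetical simulation in Python. It corresponds to no real moderation system; it only illustrates how a model that treats its own past decisions as ground truth can compound a small initial error, generation after generation.

```python
# A minimal, hypothetical feedback-loop simulation. A toy "moderation model"
# starts by wrongly flagging 5% of benign posts; each round it is retrained
# on its own flagged output, so the error compounds over time.

def retrain(flag_rate: float, learning_weight: float = 0.5) -> float:
    """Return the next round's over-flagging rate, nudged toward past decisions."""
    # The model treats its own previous flags as ground truth, so
    # over-flagging in one generation raises the rate in the next.
    return min(1.0, flag_rate + learning_weight * flag_rate * (1 - flag_rate))

rate = 0.05  # an initially small error: 5% of benign posts flagged
for generation in range(1, 9):
    rate = retrain(rate)
    print(f"generation {generation}: {rate:.1%} of benign posts wrongly flagged")
```

Run it and the error more than quadruples within four retraining rounds: a benign beginning is no guarantee of a benign trajectory.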

Just to give you an example: at the Fundamental Rights Agency, in my previous capacity, we did research on automated online content moderation, where we developed algorithms and then tested language to see what would happen. As is well known, moderation in lesser-used languages was largely ineffective. But in English, we inserted particular terms. One such term was “I hate Jews”. And the online tech did its job. The term was flagged as problematic speech - exactly what it was meant to do. But then, my colleagues inserted the words “I hate Jews love”. And the machine passed over the term. It did not flag it as problematic, because of the power of the word ‘love’ and its associations, which, according to the machine, overrode the “I hate” part of the phrase. So again, an example of something rather specific to the online sphere, in terms of how error can multiply.
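The failure my colleagues observed can be reproduced with a deliberately crude sketch. The scores below are invented for illustration, and real moderation models are vastly more complex, but the arithmetic of one strongly positive token outweighing hateful ones mirrors what we saw.

```python
# A toy word-score classifier illustrating the failure mode described above.
# The weights are invented for illustration only.

TOXICITY_SCORES = {"hate": 0.9, "jews": 0.4, "love": -1.2}  # hypothetical weights

def should_flag(text: str, threshold: float = 0.5) -> bool:
    """Flag text whose summed word scores cross the toxicity threshold."""
    score = sum(TOXICITY_SCORES.get(word, 0.0) for word in text.lower().split())
    return score >= threshold

print(should_flag("I hate Jews"))       # True: score 1.3, correctly flagged
print(should_flag("I hate Jews love"))  # False: 'love' drags the score to 0.1
```

One appended word, and problematic speech sails straight past the filter.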

A related problematic application of AI has to do with so-called anthropomorphic chatbots and the phenomenon of hallucinations. I am talking about AI that is trained to pretend to have emotions, to care and to love. Commonly, humans think they are engaging with a human-like entity and, as you can imagine, the consequences can be dreadful. In 2023, a man took his life after a chatbot became his confidant and fuelled his eco-anxiety. It fed him the idea that he would help save the planet if he killed himself.

The second category of risks has to do with who dominates the tech world: the private sector. There is nothing inherently wrong with the private sector owning technology, but it becomes problematic when that technology so profoundly impacts our lives, and when we know what the primary drivers are for much of the private sector. One important driver is efficiency. We know through empirical research that the most important motivation for investment in technology is to do things quicker and more efficiently, not to do things better. This raises obvious concerns in terms of the safety of applications.

Other private sector drivers, like profit and some kind of world domination, are no less significant but I will not explore them today.

My third concern is the exact converse of the second. That is the extent to which AI enhances the power of the state. This is not inherently problematic - at least if you are in a state that respects democracy. But, and again, I do not need to give examples, it is perfectly obvious how tech in the hands of the wrong state can be a tool for repression and oppression – and this is sadly something I have to deal with on a daily basis in my current role, for instance regarding the misuse of facial recognition technology.

The fourth of my five concerns is the somewhat more apocalyptic one of the transfer, or the outsourcing, of decision-making to artificial intelligence. There are very many examples here, but the obvious one is autonomous weapon systems, which can strike targets without human guidance or control. I could continue by mentioning what will hopefully remain in the realm of science fiction: the idea of machines outsmarting humans and the latter becoming enslaved. As researchers climb the ladder towards building this technology, dubbed Artificial General Intelligence (AGI), all these developments should and do strike fear into the heart of anybody who is concerned about the well-being of our world and humanity.

The fifth of my concerns is a little bit harder to pin down. It is broad, something seen and understood only over time: the erosion, through the application of AI, of our social solidarity. The degradation of the human community, in the sense that, so often today and far more likely in the future, we are dealing with a machine, not with a person. Although these new ways of interacting affect all of us, they may have an even more profound effect on older people, who may feel left behind and excluded. In addition, psychologists and others speak of the risk to mental health of the automation of life.

And so, my concerns and others that I didn’t have time to address lead us to the question of how we tame technology.

If we all accept that we need to tame this awesome power, what should that look like? What solutions can we suggest so that the technology is in the service of human well-being? There are a few frames of reference for how we might begin a discussion of taming tech, but two of the most prominent are the language of ethics on the one hand and the language of human rights on the other. And I welcome that these are the starting points for the work on AI of both UNESCO and the OECD. They are also strongly reflected in Ireland’s AI Strategy.

That said, I am disappointed at how the ethical frame has dominated until now – and, I would argue, still does. It is as if the ethics and human rights approaches are in contest, and each must fight its corner so as to dominate. I suggest that ethics has the upper hand because of its inherent subjectivity: my sense of right and good does not have to be the same as your sense of right and good. This means that using ethics to frame the taming of technology gives us a tool that is malleable, adaptable to our various world-views and objectives.

Turning to the other frame of reference: human rights. Here, we see something rather different. We see a far sturdier, objective infrastructure on which to base standards and practice.

My concern, as Commissioner for Human Rights, is to help put human rights at the centre of the discourse. This is not to displace ethics – it is not a competition – but rather to turn the rhetoric of human rights-based oversight of AI into an applied reality.

Before I get to what that would look like in practice, allow me a brief word on human rights more generally. On 10 December 1948, here in Paris, United Nations member states adopted the Universal Declaration of Human Rights: the best effort by humanity, coming out of the horrors of the Second World War, to define the minimum standards for a society where we could thrive and mutually respect each other. Or, to paraphrase article one of the Declaration, to achieve a world where everyone is equal in dignity and in rights.

The Universal Declaration has been repeatedly reaffirmed universally and lies at the origin of a sophisticated and binding system of institutions and treaties that protects the rights and freedoms of all human beings. Here in Europe, this year, we celebrate the 75th anniversary of one of the principal regional expressions of the Declaration, the European Convention on Human Rights.

Notwithstanding popular misconceptions, the human rights legal system is rarely about absolutes. It is quite sophisticated in the way that it allows rights to be limited in the interests of the public good. We saw that, sometimes for good and sometimes maybe a bit too enthusiastically, in the context of Covid-related restrictions. That period neatly illustrates the extent to which the human rights system accommodates extraordinary crises and issues and, in the public good, allows for the restriction of rights.

So, we have this astonishing achievement of our societies, sometimes described as “modernity’s greatest achievement”, and the question arises of why it has been so peripheral to the discussion about the restraining, the taming, of artificial intelligence. There are many reasons for this. I have already alluded to some, but one that is very important and has preoccupied me for nearly a decade has been a lack of understanding or at least of clarity regarding the application of human rights standards in practice: how the human rights standards and systems apply in the AI context.

And we must do so in the specific context of the “now” of AI.

By the “now”, I am referring to this moment for regulation. In the last year, as you know, there have been important developments. The EU adopted the AI Act and my organisation, the Council of Europe, finalised a Framework Convention on AI. Thus, we now have innovative rules. It is in this specific context that I will address the drilled-down role of human rights and the extent to which it is adequately addressed in the Act and the Convention.

I have seven elements in mind.

The first is that, in order to deliver for our human rights, the laws must be comprehensive and loophole-free. Let me point to three dimensions of this premise.

In the first place, our regulatory instruments must embrace a sufficiently wide definition of artificial intelligence to capture all current and future technologies that can impact human well-being. With regard to the AI Act and the Framework Convention, I consider that this has largely been achieved through the embrace of a wide definition that originates with the OECD.

Then, we have to make sure that the regulations equally apply to the private and the public sectors. Here we have been less successful, as the new regulations only partially address the private sector.

Furthermore, effective regulation must embrace the full range of risks to human well-being. I make this point because of the continued excessive focus on a very narrow band of human rights, including privacy and non-discrimination. So much more is involved. Take, for example, the recent complaint by a group of French NGOs against CNAF before the Conseil d’État. This was a case of algorithms wrongly discontinuing social welfare benefits based on people’s personal characteristics, a matter of respect for socio-economic rights. Are the new instruments sufficiently comprehensive to address such matters? Do the new laws meet this test? In terms of material scope, probably yes; however, the sectoral exclusions are problematic. It is a matter of regret that, in large part, the security and defence sectors are not covered by the instruments.

The second of my seven elements has to do with the meaningful delivery of the protection of human rights. This requires that technology be subject to testing for the purpose of assessing its impact on human well-being. What is more, testing must be use-case based and needs to continue throughout the lifecycle of the technology. Here I am encouraged by ongoing developments. The AI Act requires so-called fundamental rights testing for high-risk technologies, with a few exceptions. EU member states are also increasingly employing “regulatory sandboxes” to human rights test technology. And my own organisation, the Council of Europe, has developed a practical and algorithm-neutral human rights impact assessment tool named HUDERIA. Our challenge now is to ensure that human rights impact assessment eventually becomes a gold-standard practice for both private and state actors throughout the lifecycle of relevant technologies.
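What might a use-case-based, lifecycle-long assessment look like in practice? Here is a purely illustrative sketch in Python. It is emphatically not the HUDERIA format; it only illustrates the principle that assessment attaches to a concrete use and recurs at every stage, rather than being signed off once at design time.

```python
# A hypothetical record structure for recurring human rights impact
# assessments. The field names are illustrative, not any mandated format.

from dataclasses import dataclass, field
from datetime import date

LIFECYCLE_STAGES = ("design", "training", "deployment", "operation", "retirement")

@dataclass
class ImpactAssessment:
    system: str
    use_case: str               # tied to a concrete use, not the tech in the abstract
    stage: str                  # one of LIFECYCLE_STAGES; the same system recurs at each
    rights_examined: list[str]  # e.g. privacy, non-discrimination, social security
    findings: str
    assessed_on: date = field(default_factory=date.today)

# Example: a welfare-benefit scoring system re-assessed while in operation.
record = ImpactAssessment(
    system="benefit-eligibility-scoring",
    use_case="automated discontinuation of social welfare payments",
    stage="operation",
    rights_examined=["non-discrimination", "social security", "effective remedy"],
    findings="disparate discontinuation rates require human review before any action",
)
print(record)
```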

The third element of effective regulation has to do with the need for strong oversight.

In the first place, it is imperative that humans remain in charge: AI systems must be controlled by a human throughout their lifecycle. In this regard, I welcome that this essential guardrail is already enshrined in the EU AI Act and the Framework Convention. I also salute the global consensus on human oversight of technology reflected in the UN Pact for the Future and its Global Digital Compact.

Turning to institutional oversight, it is essential that it is adequate to the job. The bodies need to have the skills. They need to have the resources. If they are protecting human rights, we need human rights specialists working within them – not just privacy experts, but people skilled across all human rights. Here I am encouraged by the emerging practice under the AI Act for national human rights institutions (so-called NHRIs) to take on certain oversight functions, as is now the case in a number of countries including Ireland, Denmark and the Netherlands.

The fourth of my seven elements concerns a fundamental principle of human rights: that every violation should come with a remedy. To respect this, we need to make effective the pathway to a remedy for somebody whose human dignity has been violated by an application of technology. Here I am impressed by the victim focus in the Framework Convention, and I appreciate the ongoing work on the matter in the EU.

And then the fifth element, which could have been my first because it is so absolutely central to the delivery of all of the other dimensions, is ensuring transparency. It is vital for effective oversight and proper monitoring that there is transparency as to the contents of the technology. What is more, as we see in ongoing copyright controversies, transparency is necessary in order to equitably reward the original creators and owners of content used for training purposes.

As you can imagine, this demand for transparency is met with a lot of resistance. Beyond rudimentary commercial considerations, we often hear: “it is just not possible – we don’t know how the tech reaches that good outcome, so do not touch it.” I have heard that many times, including recently from a doctor carrying out medical research.

These views need to be challenged. I recognise that there may be huge complexity in terms of the effective delivery of transparency but, at a minimum, in the context of tech that we do not quite understand, what is to stop us demanding that you, the designer of the tech, describe what you do, tell us how you have tested your technology, what algorithms you have deployed, and so on?
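To show how modest this minimum really is, here is an illustrative sketch. The questions, keys and example answers are invented for the purpose, not any mandated disclosure format; the point is simply that every one of them can be answered without opening the black box.

```python
# A hypothetical minimum-transparency checklist. Keys and answers are
# illustrative; none of this is a prescribed legal format.

minimum_disclosure = {
    "purpose": "triage of scans in a medical research project",
    "algorithms_deployed": ["convolutional neural network"],
    "training_data": "anonymised scans from partner hospitals",
    "testing_performed": "held-out validation plus a bias audit across age groups",
    "known_limitations": "performance unverified for rare conditions",
}

def unanswered(disclosure: dict) -> list[str]:
    """Return the transparency questions the designer has left unanswered."""
    required = ("purpose", "algorithms_deployed", "training_data",
                "testing_performed", "known_limitations")
    return [key for key in required if not disclosure.get(key)]

print(unanswered(minimum_disclosure))  # [] -> every baseline question answered
```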

Obviously, this struggle for transparency will continue and impact broadly in terms of current and future legislative initiatives.

The sixth of my seven elements is with regard to the need for continuous dialogue as we persist in our efforts to tame technology.

Dialogue is not just a good, it is a necessity. As we continue to work our way forward in this new world, we need everybody on board to figure out the right way to go.

There are many actors to consider, with civil society having an invaluable role, but let me just mention one group that I think is somewhat neglected, and that is the community of national human rights institutions. Here in France, of course, I refer to la Commission nationale consultative des droits de l'homme – la CNCDH.

These bodies, everywhere, need to be part of the conversation. They are unique centres of human rights expertise in our societies. As I mentioned, some of them are assuming oversight functions as regards fundamental rights under the EU AI Act; however, they could offer so much more.

They could, for instance, help bridge the digital divide and promote digital literacy through awareness-raising campaigns. They are especially adept at reaching marginalised communities. I also see their added value in their expertise on discrimination and their ability to provide litigation advice to potential victims of technology misuse.

The seventh and final of my elements is not about something we must do, but something we must challenge. We have to challenge the frequently invoked argument that views such as mine will only result in the stifling of innovation and help other countries to leap well ahead of us. I am far from convinced.

Research increasingly indicates that the reason Europe lags behind in innovation has very little to do with regulation and rather more to do with such features as: unharmonised and often punitive bankruptcy laws; a not-so-single digital single market; poorly developed capital markets; and an underdeveloped scheme to attract high-skilled labour to Europe.

It is also relevant to point out that Europe’s new regulations are far from suffocating. The AI Act adopts a risk pyramid which exempts a vast range of technologies from scrutiny because they pose a low risk. The Framework Convention leaves states a very wide margin of appreciation on this matter.
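For readers who want the pyramid spelled out, here is a simplified sketch of its logic in Python. The four tier names follow the AI Act; the one-line summaries of the obligations are my own informal compression, not the legal text.

```python
# A simplified mapping of the AI Act's risk tiers to their obligations.
# Tier names follow the Act; the summaries are informal paraphrases.

RISK_TIERS = {
    "unacceptable": "prohibited outright (e.g. social scoring by public authorities)",
    "high": "conformity assessment, human oversight, fundamental rights testing",
    "limited": "transparency duties, such as disclosing that a chatbot is a machine",
    "minimal": "no new obligations; the vast majority of systems sit here",
}

for tier, obligations in RISK_TIERS.items():
    print(f"{tier:>12}: {obligations}")
```

The regulatory weight falls on the narrow top of the pyramid, while its broad base passes untouched.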

As my final point regarding innovation, I would invoke the issue of trust. There is no doubt, and I have yet to see anybody convincingly push back against this view, that a strongly human rights-compliant, human rights-respectful AI, ultimately targeted at human thriving, is going to be the most trustworthy AI. Trusted by consumers, by citizens, by everybody in our societies. I am firmly of the view that, in the long game, it is the trustworthy AI that will ultimately win out. It is for this reason that I very much welcome that France is promoting the concept of “Trust in AI” in the context of its upcoming AI Action Summit.

As I conclude, please allow me one general observation, on the need to think beyond regulation. Earlier, I made a reference to the possible dangers of human-like technology such as AGI. Some voices from science tell us that we need an absolute ban on this technology. It is premature to form a view on this call, although the issue needs to be followed very closely indeed. In the meantime, let us not forget that the stronger our democracies and our rule of law, the less we will need to consider bans and prohibitions – but that is a topic for another day.

Ladies and Gentlemen,

Allow me to conclude with a reflection inspired by the remarks of Pope Francis at the G7 session devoted to artificial intelligence in 2024. He offered some interesting examples that I would like to use to close this speech.

He recalled that, throughout history, humans have always sought to improve their environment by creating new tools. Our ancestors learned to work flint into knives, which could be used just as well to skin animals to make clothing as to fight one another. More recently, we have developed technologies capable of producing energy and electricity. These advances, which have revolutionised our daily lives, can also threaten the balance of our planet if they are misused.

Thus, it is not always the tool that poses the problem; rather, it is the way in which it is used.

Dear friends,

The question before us today is which path we want to take in the face of the revolution brought by artificial intelligence. I am convinced that, by following the path of human rights, we will be able to make of it a tool that improves the daily lives of each and every one of us.

Thank you.

Paris 30/01/2025