The use of artificial intelligence in our everyday lives is increasing, and it now covers many fields of activity. Something as seemingly banal as avoiding a traffic jam with a smart navigation system, or receiving targeted offers from a trusted retailer, is the result of big data analysis that AI systems can draw on. While these particular examples have obvious benefits, the ethical and legal implications of the data science behind them often go unnoticed by the public at large.
Artificial intelligence, and in particular its subfields of machine learning and deep learning, is neutral only in appearance, if at all. Beneath the surface, it can become extremely personal. The benefits of grounding decisions in mathematical calculation can be enormous in many sectors of life. But relying too heavily on AI, which inherently involves detecting patterns beyond those calculations, can also turn against users, perpetrate injustices and restrict people’s rights.
The way I see it, AI in fact touches on many aspects of my mandate, as its use can negatively affect a wide range of our human rights. The problem is compounded by the fact that decisions are taken on the basis of these systems, while there is often no transparency, no accountability and no safeguards in how they are designed, how they work and how they may change over time.
Encroaching on the right to privacy and the right to equality
The tension between the advantages of AI technology and the risks it poses to our human rights becomes most evident in the field of privacy. Privacy is a fundamental human right, essential in order to live in dignity and security. But in the digital environment, including when we use apps and social media platforms, large amounts of personal data are collected, with or without our knowledge, and can be used to profile us and predict our behaviour. We provide data on our health, political ideas and family life without knowing who is going to use this data, for what purposes and how.
Machines function on the basis of what humans tell them. If a system is fed with human biases (conscious or unconscious), the result will inevitably be biased. The lack of diversity and inclusion in the design of AI systems is therefore a key concern: instead of making our decisions more objective, they could reinforce discrimination and prejudices by giving them an appearance of objectivity. There is increasing evidence that women, ethnic minorities, people with disabilities and LGBTI persons particularly suffer from discrimination by biased algorithms.
Studies have shown, for example, that Google was more likely to display adverts for highly paid jobs to male job seekers than to female ones. Last May, a study by the EU Fundamental Rights Agency also highlighted how AI can amplify discrimination. When data-based decision making reflects societal prejudices, it reproduces, and even reinforces, the biases of that society. This problem has often been raised by academia and NGOs too, some of which recently adopted the Toronto Declaration, calling for safeguards to prevent machine learning systems from contributing to discriminatory practices.
Decisions made without questioning the results of a flawed algorithm can have serious repercussions for human rights. For example, software used to inform decisions about healthcare and disability benefits has wrongfully excluded people who were entitled to them, with dire consequences for the individuals concerned. In the justice system too, AI can be a driver of improvement or an evil force. From policing to the prediction of crimes and recidivism, criminal justice systems around the world are increasingly looking into the opportunities that AI provides to prevent crime. At the same time, many experts are raising concerns about the objectivity of such models. To address this issue, the European Commission for the Efficiency of Justice (CEPEJ) of the Council of Europe has put together a team of multidisciplinary experts who will “lead the drafting of guidelines for the ethical use of algorithms within justice systems, including predictive justice”.
Stifling freedom of expression and freedom of assembly
Another right at stake is freedom of expression. A recent Council of Europe publication on Algorithms and Human Rights noted, for instance, that Facebook and YouTube have adopted filtering mechanisms to detect violent extremist content. However, no information is available about the process or criteria used to establish which videos show “clearly illegal content”. Although one cannot but salute the initiative to stop the dissemination of such material, the lack of transparency around content moderation raises concerns, because it may be used to restrict legitimate free speech and to encroach on people’s ability to express themselves. Similar concerns have been raised about the automatic filtering, at the point of upload, of user-generated content that supposedly infringes intellectual property rights, which came to the forefront with the EU’s proposed Directive on Copyright. In certain circumstances, the use of automated technologies for the dissemination of content can also have a significant impact on the right to freedom of expression and on privacy, when bots, troll armies, targeted spam or ads are used, in addition to algorithms that determine the display of content.
The tension between technology and human rights also manifests itself in the field of facial recognition. While this can be a powerful tool for law enforcement officials searching for suspected terrorists, it can also turn into a weapon to control people. Today, it is all too easy for governments to keep individuals under permanent surveillance and to restrict the rights to privacy, freedom of assembly, freedom of movement and press freedom.
What can governments and the private sector do?
AI has the potential to help human beings maximise their time, freedom and happiness. At the same time, it can lead us towards a dystopian society. Finding the right balance between technological development and human rights protection is therefore an urgent matter – one on which the future of the society we want to live in depends.
To get it right, we need stronger co-operation between state actors (governments, parliaments, the judiciary, law enforcement agencies), private companies, academia, NGOs, international organisations and the public at large. The task is daunting, but not impossible.
A number of standards already exist and should serve as a starting point. For example, the case-law of the European Court of Human Rights sets clear boundaries for the respect for private life, liberty and security. It also underscores states’ obligations to provide an effective remedy to challenge intrusions into private life and to protect individuals from unlawful surveillance. In addition, the modernised Council of Europe Convention for the Protection of Individuals with regard to Automatic Processing of Personal Data, adopted this year, addresses the challenges to privacy resulting from the use of new information and communication technologies.
States should also make sure that the private sector, which bears responsibility for AI design, programming and implementation, upholds human rights standards. The Council of Europe Recommendations on human rights and business and on the roles and responsibilities of internet intermediaries, the UN Guiding Principles on Business and Human Rights, and the report on content regulation by the UN Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression should all feed into efforts to develop AI technology capable of improving our lives. There needs to be more transparency in decision-making processes that use algorithms, so that the reasoning behind them can be understood, accountability ensured, and the resulting decisions challenged in effective ways.
A third field of action should be to increase people’s “AI literacy”. States should invest more in public awareness and education initiatives to develop the competencies of all citizens, and in particular of the younger generations, to engage positively with AI technologies and to better understand their implications for their lives. Finally, national human rights structures should be equipped to deal with new types of discrimination stemming from the use of AI.
It is encouraging to see that the private sector is ready to co-operate with the Council of Europe on these issues. As Commissioner for Human Rights, I intend to focus on AI during my mandate, to bring the core issues to the forefront and to help member states tackle them while respecting human rights. Recently, during my visit to Estonia, I had a promising discussion on issues related to artificial intelligence and human rights with the Prime Minister.
Artificial intelligence can greatly enhance our abilities to live the life we desire. But it can also destroy them. It therefore requires strict regulation to avoid morphing into a modern Frankenstein’s monster.
Dunja Mijatović, Commissioner for Human Rights