Human Rights and Artificial Intelligence: The here and now and challenges ahead

Welcome to our blog, the Human Righter. We shed light on contemporary human rights issues and comment on human rights developments. We dig deep into our so-called Thematic Areas – our focus areas within human rights. We discuss SDGs and Human Rights. You will also find book reviews and analyses of new laws.

Sue Anne Teo on Human Rights and Artificial Intelligence:

What comes to mind when the words “artificial intelligence” (AI) and “human rights” are mentioned together? Perhaps some form of ‘human’ rights for intelligent robots, inspired by a stream of pop culture references that anthropomorphize robots as human-like? Or perhaps the image of Lee Sedol, the former world champion of the board game Go, losing to the AI system AlphaGo in 2016?

The two scenarios represent two extremes of a spectrum: on the one hand, a version of AI that could potentially come to co-exist with or replace humans; on the other, AI that has extraordinary capabilities only within a very narrow field (in this case, playing the game of Go).

Yet those two examples belie the fact that artificial intelligence systems are already in widespread use across many different sectors today. Before we get into that, however, it is instructive to start on the same page by clarifying what AI actually is.

Unfortunately, there is no consensus on what AI is. Scholars adopt either wide or narrow views of its capabilities, or use human abilities, such as human intelligence, as its yardstick. I will not attempt a comprehensive definition, but broadly speaking, AI systems find patterns and correlations in massive amounts of data in order to make predictions or decisions without direct human intervention.

AI is an umbrella term that encompasses different types of technologies such as machine learning, reinforcement learning, expert systems and statistical methods such as Bayesian inference. The term itself was coined in 1956 at a conference at Dartmouth College, but AI systems only really took off in the 2000s with the rise of computing power and the availability of big data, due in large part to the widespread use of the internet.

AI systems are today deployed in many different fields, for instance transportation, healthcare and agriculture, and even to assist with human rights investigations.

AI-driven automated decision-making systems are also used to decide upon social security benefits, eligibility for university places, employment prospects, and insurance and mortgage applications, amongst many others. AI recommendation systems are similarly ubiquitous and arguably innocuous: many of us are familiar with music and film recommendations on Spotify and Netflix, as well as the mediation of friendships and content on Facebook and other social media sites.

In fact, in almost every area where AI systems are used, human rights concerns will be engaged. But what might those concerns be? And if we are talking about social media networks, or about quicker decision-making by public authorities overwhelmed by claims, tight deadlines and budgets, surely some ‘help’ from AI systems cannot hurt?

To take a case study made famous by AI ethics scholars, we can start by examining the COMPAS algorithm used in the US justice system. COMPAS gives each defendant facing criminal charges a ‘risk score’ that helps judges decide upon bail and parole.

In a 2016 study, ProPublica found that COMPAS wrongly flagged black defendants who did not go on to re-offend as likely re-offenders at almost double the rate of white defendants. Mistakes may, of course, happen in any system, computational or otherwise, but the danger of AI systems is that these mistakes resulted from biased social conditions, which were then reproduced through the modeling parameters used to design the system.
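To make the arithmetic of that disparity concrete, here is a minimal sketch in Python with invented numbers (not ProPublica’s actual data). It computes, per group, the false-positive rate: how often people who never re-offended were nonetheless flagged as high risk.

```python
# A minimal sketch with invented numbers (not ProPublica's data) of the kind
# of disparity ProPublica measured: the false-positive rate, computed
# separately for each group.

def false_positive_rate(records):
    """Share of non-re-offenders who were nonetheless labelled high risk."""
    non_reoffenders = [r for r in records if not r["reoffended"]]
    flagged = [r for r in non_reoffenders if r["label"] == "high risk"]
    return len(flagged) / len(non_reoffenders)

# Hypothetical outcomes for two groups of four defendants each; both groups
# contain exactly one actual re-offender.
records = [
    {"group": "A", "label": "high risk", "reoffended": False},
    {"group": "A", "label": "high risk", "reoffended": False},
    {"group": "A", "label": "high risk", "reoffended": True},
    {"group": "A", "label": "low risk",  "reoffended": False},
    {"group": "B", "label": "high risk", "reoffended": False},
    {"group": "B", "label": "low risk",  "reoffended": False},
    {"group": "B", "label": "low risk",  "reoffended": False},
    {"group": "B", "label": "low risk",  "reoffended": True},
]

for group in ("A", "B"):
    subset = [r for r in records if r["group"] == group]
    print(group, false_positive_rate(subset))
# Group A: 2 of 3 non-re-offenders wrongly flagged (0.67)
# Group B: 1 of 3 non-re-offenders wrongly flagged (0.33)
```

Note that the overall number of errors can look modest while the burden of those errors falls twice as heavily on one group, which is precisely why aggregate accuracy figures can conceal discrimination.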

When applied, the sheer scale and reach of such a system mean that small statistical errors can translate into thousands of ruined lives. And when the fundamental questions of liberty and non-discrimination are at stake, serious threats to human rights arise.

One might argue that these human rights threats are less evident in Europe, where there are strong national and regional human rights protection mechanisms such as the EU Charter of Fundamental Rights and the European Convention on Human Rights.

Yet the ubiquity of AI, and the inability to peer into either the data fed into these systems or the outputs they produce, mean that many harms go unnoticed, making it hard for those affected to articulate their claims or to challenge the decisions made.

This was certainly the case when, from 2013 to 2019, the Dutch authorities relied on an algorithmic system that accused thousands of families of childcare benefit fraud and demanded repayment from them. The accusations were found to be false, but lives had been ruined beyond repair, with many having lost their jobs, homes and families.

It was found that the system flagged ethnic minorities and those holding dual nationality as high-risk; these groups thus made up a disproportionately large share of those falsely accused.

As such, AI systems raise serious human rights issues. These can arise at different stages, each affecting the next link in the chain. A lack of representative data to train the algorithms that power these systems means that their predictions will skew in favour of majority groups in society, as the sketch below illustrates. Biased or discriminatory parameters can also be built into the algorithmic models at the design stage. And a lack of a diverse workforce means that concerns of bias and discrimination may not be caught early on.
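The first of those failure points, unrepresentative training data, can be shown with a toy model. The sketch below uses invented data and a deliberately simple ‘nearest class mean’ classifier (not any real deployed system): trained on a sample that is 95% group A, it performs well for group A and poorly for group B.

```python
# A toy illustration (invented data) of how unrepresentative training data
# skews a model's predictions towards the majority group.
import random

random.seed(0)

def sample(group, label, n):
    # For the same true label, group B's proxy feature sits on a shifted
    # scale, e.g. because it is measured or distributed differently.
    shift = 0.0 if group == "A" else 1.5
    return [(random.gauss(label * 2.0 + shift, 0.5), label) for _ in range(n)]

# Training set: 95% group A, 5% group B.
train = sample("A", 0, 475) + sample("A", 1, 475) + \
        sample("B", 0, 25) + sample("B", 1, 25)

# "Fit" a nearest-class-mean classifier: both class means are learned
# almost entirely from group A's distribution.
mean0 = sum(x for x, y in train if y == 0) / sum(1 for _, y in train if y == 0)
mean1 = sum(x for x, y in train if y == 1) / sum(1 for _, y in train if y == 1)

def predict(x):
    return 0 if abs(x - mean0) < abs(x - mean1) else 1

def accuracy(group):
    test = sample(group, 0, 500) + sample(group, 1, 500)
    return sum(predict(x) == y for x, y in test) / len(test)

print("accuracy on group A:", accuracy("A"))  # close to 1.0
print("accuracy on group B:", accuracy("B"))  # far lower: group B's harmless
# cases fall near the majority-learned mean for the "risky" label
```

Nothing in the model is malicious; the skew arises purely because the minority group is barely present in the data the system learns from.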

When such AI systems are deployed at scale, bearing in mind their low cost, borderless nature, speed and perceived objectivity, these harms can be exacerbated and amplified. Furthermore, the networked effects of AI systems mean that a person can be caught in a self-reinforcing feedback loop, effectively being put in an ‘algorithmic prison.’
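The feedback-loop dynamic can be seen in a toy simulation. This sketch (invented numbers, not any deployed system) follows the ‘runaway feedback loop’ pattern researchers have described in predictive policing: attention is allocated wherever past records are highest, and only attended places generate new records, so an initial gap widens on its own.

```python
# A toy simulation (invented numbers) of a runaway feedback loop: each round,
# attention goes wherever past records are highest, and only the attended
# district generates new records -- so an initial gap grows by itself.

true_rate = [0.3, 0.3]   # the two districts have identical underlying rates
recorded = [12.0, 10.0]  # but district 0 starts with slightly more records

for _ in range(10):
    target = 0 if recorded[0] >= recorded[1] else 1  # "follow the data"
    recorded[target] += 10 * true_rate[target]       # only the target is observed

print(recorded)  # [42.0, 10.0]: the gap has widened purely through the loop
```

The system’s predictions look increasingly well confirmed by the data, yet the confirmation is an artefact of the loop itself, which is what makes such harms so hard to contest from the inside.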

However, the problems are not a mere reflection of human bias, as algorithms can also act in unpredictable ways once deployed. In drawing correlations from the data they interact with, algorithms understand neither the world nor the concept of causation, and therefore risk drawing inferences about people that are inaccurate, irrelevant or misleading.

How, then, should one govern AI and the harms it inflicts upon individuals and society?

Human rights law is a good starting point, thanks to its established vernacular, practice and institutions for protecting and promoting the rights that AI now challenges. Indeed, this has been seen in the increased awareness and assertion of the right to privacy and the strong data protection rights in the EU General Data Protection Regulation (GDPR). However, AI has also revealed weaknesses in the human rights protection regime that may render it a necessary, but insufficient, tool for combating these challenges.

Non-discrimination is a key human right, but its historically informed grounds mean that only membership in certain enumerated groups garners protection. This contrasts with the dynamic nature of AI harms, which may discriminate against ‘new’ groups such as poor people, or against classes created dynamically by the system itself, such as certain groups of loan applicants.
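To see how such a ‘new’ group can come into being, consider the sketch below (invented data and a stripped-down k-means-style step, not any real lender’s system): applicants are clustered on behavioural proxies, and one cluster can be singled out for worse terms even though it matches no legally protected ground.

```python
# A toy sketch (invented data) of a "dynamically created" class: applicants
# are grouped by behavioural proxies with no obvious link to any protected
# characteristic, yet one cluster can still be singled out for worse terms.
from random import gauss, seed

seed(1)

# Hypothetical features per applicant: (late-night phone usage, typing speed).
applicants = [(gauss(0.2, 0.1), gauss(0.7, 0.1)) for _ in range(50)] + \
             [(gauss(0.8, 0.1), gauss(0.3, 0.1)) for _ in range(50)]

centroids = [(0.2, 0.7), (0.8, 0.3)]

def assign(point):
    """Nearest-centroid assignment: one step of a k-means-style clustering."""
    distances = [(point[0] - c[0]) ** 2 + (point[1] - c[1]) ** 2
                 for c in centroids]
    return distances.index(min(distances))

clusters = [assign(p) for p in applicants]
# Suppose cluster 1 is quietly offered a higher interest rate. The group
# exists only as a statistical artefact, so no enumerated ground of
# discrimination law ("race", "sex", "religion"...) describes its members.
print("applicants in the disadvantaged cluster:", clusters.count(1))
```

The affected applicants share nothing that the law recognizes as a protected characteristic, which is exactly why the closed list of grounds struggles with algorithmically generated groups.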

The statist orientation of human rights protection also sits oddly with the growing power of businesses, especially technology corporations, whose use of AI permeates the breadth and depth of human lives through the increasingly infrastructural role it plays in our daily existence, including conditioning access to public civic participation.

AI harms are also more granular and yet escape the human eye, at least when compared with the large-scale exploitative practices of the extractive industries that dominated business and human rights concerns in the past. Some harms may be internal in nature, in contrast to the external threats that the human rights regime has responded to well.

The borderless and scaled application of AI systems also makes accountability hard to trace when harms do occur. In other words, human rights law has to evolve and ask deep questions about its conceptual and normative frameworks, especially as AI systems might erode human agency and autonomy in ways that undermine the foundational bases of human rights.

Some of these concerns are currently being addressed in the European Commission’s recent draft AI regulation, much of which builds upon the underlying fundamental rights protections of the European Union. However, one must not be complacent in assuming the sufficiency of the human rights regime when confronted with novel challenges. We need to work hard to ensure that human dignity continues to be preserved and upheld through a (rejuvenated) human rights framework fit for the age of AI.

The blog post was written by Sue Anne Teo, who is currently on study leave from RWI to pursue her PhD on human rights and artificial intelligence at the University of Copenhagen, where she also teaches Human Rights, Democracy, and Digital Technologies to Master of Laws students. The post expresses the author’s own personal opinion.
