
The Raoul Wallenberg Talks: AI and Human Rights

Raoul Wallenberg Talks #1, Jan 20: Per Axbom, digital expert

On January 20th, Per Axbom inaugurated the Raoul Wallenberg Talks lecture series by discussing AI and its connection to human rights. The talks will be given by renowned guests, covering timely human rights issues such as climate refugees, academic freedom, and increasing inequality.

The lecture focused on the exclusionary potential of new technologies, their tendency to reproduce biases and prejudices, AI’s dependence on warped data, and the problem of accountability.  

Axbom began the lecture by noting how past technological breakthroughs have led to disruptive change at the expense of marginalized or less dominant groups. He highlighted the invention of the printing press: it gave visibility to Western languages, alphabets, and thinking while neglecting their non-Western counterparts. The same has happened with the internet: 98% of online content is published in just 12 languages, with over half of it in English.

AI, Axbom believes, is not exempt from producing these unequal outcomes. He gave multiple examples, ranging from an Amazon recruiting tool that ranked male candidates above female ones to a Google image-recognition system that labelled Black people as gorillas. Axbom suggests these errors stem from cultural prejudices embedded in the data the tools are trained on.

Correcting these mistakes does not free AI technologies from their flaws. Axbom gave the example of expert scepticism about machine-learning models that read chest x-rays for cancer detection: they are ill-suited to rapidly changing environments that demand tailored medical attention. Though such systems are built to be universally applicable, their effectiveness is limited by the data fed to them.

Most datasets are collected from WEIRD (Western, Educated, Industrialized, Rich, Democratic) societies.

According to Axbom, people living in these societies are often statistical outliers compared with the global population. AIs trained on their data are consequently likely to develop skewed notions of normalcy that distort their judgements and render them inflexible.

Finally, Axbom discussed the problem of AI and accountability. Having outlined how difficult it is to assign blame for negative outcomes caused by AI (Who do we blame? The CBT system owners? The engineers? The delivery or data partners? The people who reviewed the system for purchase?), he highlights the small number of AI engineers in the world – about 3 million, roughly 0.04% of the world's population – and the disproportionately powerful position they hold. These engineers can affect the daily lives of billions of people. Axbom stresses this asymmetry of power and advocates for a more equal, transparent, and multilateral relationship between AI and the peoples it excludes and overlooks. Otherwise, he warns, we risk repeating the colonizing, homogenizing effects that followed the invention of the printing press and the internet.

Check out the talk 

About Per Axbom 
Per Axbom is a digital ethicist, designer, coach and teacher. He was born in Liberia and grew up in Saudi Arabia and Tanzania. His international background, together with an early interest in IT, computers and responsible innovation, has given him a strong drive to work for justice in the digital world.

About Per

More resources: Slide deck and examples from the talk

About The Raoul Wallenberg Talks

The Raoul Wallenberg Talks series is arranged by the Raoul Wallenberg Institute and Altitude Meetings and will be broadcast live on Stadshallen.se/livesandning.
Altitude and Stadshallen in the local newspaper (in Swedish)

When the refurbishment of Stadshallen in Lund is finished in the winter of 2022, the Raoul Wallenberg Talks will be arranged on site in the newly built premises at the heart of Lund.

Read more
