
Human Rights and AI-Powered Content Moderation and Curation in Social Media

Welcome to our blog, the Human Righter. We shed light on contemporary human rights issues and comment on human rights developments. We dig deep into our focus areas within human rights, discuss SDGs and human rights. You will also find book reviews and analyses of new laws.

Sebnem Kenis joined RWI in 2016 as Programme Officer responsible for gender mainstreaming and results-based management in RWI’s Turkey Human Rights Capacity Development Programme. Sebnem Kenis is currently on leave, carrying out a research project in Edinburgh.

Every second, tens of thousands of pieces of content in the form of video, text, image, or audio are shared on social media, adding up to billions of pieces every single day. Some are highly problematic, involving cyber-harassment or bullying, disinformation, hate speech, child sexual abuse, suicide, self-harm, violent extremism, violent and graphic content, or scams. Without moderation, social media can be very messy. Social media companies increasingly rely on AI-based technologies to automatically detect and take down such harmful content, as the sheer volume of content and the risk of virality make the task impossible to handle manually.

Nearly 85-95% of the content that social media companies remove is proactively identified and taken down before anyone reports it, and 70-85% of it is removed before anyone views it. This is mainly achieved through automated technologies, including digital hashing, image recognition, natural language processing (analysing text to detect harmful content), and metadata filtering. Such AI-powered tools are also employed to triage, prioritize, and review content reported by users and, if needed, route it to human reviewers.
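
To make this pipeline a bit more concrete, the following is a minimal, purely illustrative Python sketch of how such a triage flow might be wired together. The hash set, the classifier, and the thresholds are hypothetical placeholders invented for illustration, not a description of any platform's actual system (real systems rely on perceptual hashing such as PhotoDNA and trained ML models rather than the toy stand-ins used here).

```python
import hashlib
from dataclasses import dataclass

# Hypothetical set of hashes of content already confirmed as harmful.
# Real systems use perceptual hashes (e.g. PhotoDNA), not plain SHA-256.
KNOWN_HARMFUL_HASHES = {"placeholder-hash-of-previously-removed-content"}

@dataclass
class Decision:
    action: str   # "remove", "human_review", or "keep"
    reason: str

def classify(text: str) -> float:
    """Stand-in for an ML classifier returning a harm probability (0.0-1.0)."""
    flagged_terms = {"scam", "attack"}  # toy keyword heuristic, illustration only
    hits = sum(term in text.lower() for term in flagged_terms)
    return min(1.0, 0.4 * hits)

def moderate(content: bytes, text: str) -> Decision:
    # 1. Hash matching catches exact re-uploads of previously removed content.
    digest = hashlib.sha256(content).hexdigest()
    if digest in KNOWN_HARMFUL_HASHES:
        return Decision("remove", "matched known harmful hash")

    # 2. A classifier score triages everything else.
    score = classify(text)
    if score >= 0.8:
        return Decision("remove", f"high harm score {score:.2f}")
    if score >= 0.4:
        # Ambiguous, context-dependent cases go to human reviewers.
        return Decision("human_review", f"uncertain harm score {score:.2f}")
    return Decision("keep", f"low harm score {score:.2f}")

print(moderate(b"example upload", "This is a scam, click here"))
```

The point of the sketch is only the shape of the flow: exact re-uploads of previously removed material can be caught by hash matching, while more context-dependent judgments end up either above an automated-removal threshold or in a human review queue.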

The use of automated tools will likely increase as new and proposed laws in the UK, EU, and US (such as the EU Regulation on preventing the dissemination of terrorist content online, the UK Online Safety Bill, the EU Digital Services Act, and the US SAFE TECH Act) impose legal duties on platforms to take proactive measures to prevent the dissemination of harmful content, with steep fines for failure to do so.

These proposals constitute a paradigmatic shift: from the limited liability and immunity regimes under which platforms have flourished for years without having to worry much about the misuse and adverse impacts of their services, to new accountability regimes that impose legal obligations on them. Even though these laws do not mandate the use of automated tools, it is not hard to predict that companies will rely more on automation and over-removal to comply with the rules and avoid fines.

These algorithms perform relatively well in detecting repeated violations and flagging obviously graphic or extreme content such as torture videos, adult nudity, and child abuse. But they tend to make errors in identifying hate speech, bullying, or misinformation, as these require a more complex, contextual, and nuanced understanding.

AI-based content moderation and curation may have significant implications for the right to freedom of expression and opinion, the right to privacy, the right to protection of personal data, the right to non-discrimination, the right to an effective remedy, the right to human dignity, the rights of the child, and the right to life, security, and protection from violence. When algorithms fail to detect sensational, violence-inciting, or disinformation content and instead boost it because of the high degree of user interaction it receives, they can help mobilize people for genocide, as happened in Myanmar.

Failure to prevent online hate speech, harassment, and defamation against women politicians has led to offline violence and created a chilling effect on women’s political expression, participation, and willingness to run for re-election.

On the other hand, the removal of legitimate political expression by automated filters infringes freedom of expression and restricts access to information. Some big platforms have been accused of using algorithms that suppress leftist or LGBTIQ+-related videos and channels or block dissenting posts and hashtags related to recent unrest in Palestine, Colombia, and India, among other countries.

In some of these incidents, companies admitted that their algorithms had made mistakes. Similarly, algorithms trained to eradicate ISIS propaganda videos have also erased videos and images posted by Syrian human rights watchdogs documenting war crimes in Syria. As Irene Khan, the UN Special Rapporteur on freedom of opinion and expression, points out in her recent report, over-reliance on automated technologies that are unable to capture nuance or understand context leads to the removal of permissible content and violates freedom of expression.

Economic and social rights may also be adversely affected by automated content curation, especially when targeted advertising algorithms determine which economic opportunities and ads are shown to which users based on demographic profiling.

For example, Facebook was sued because its algorithm optimized job, housing, and credit ads in a discriminatory manner based on protected categories (gender, ethnicity, race, etc.) without the knowledge of advertisers. It was revealed that the ad-delivery algorithm had shown traditionally male-dominated jobs only to men and female-dominated jobs only to women, and had delivered housing or credit ads selectively based on users' profiles (location, socioeconomic status, etc.). In the absence of design features preventing bias or discrimination, these algorithms do what they are trained to do: maximize click-through rates.
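
As a purely hypothetical illustration of that last point, the toy sketch below always picks whichever ad has the highest historical click-through rate for a user's demographic group. If past clicks are skewed along gender lines, the "optimal" delivery simply reproduces that skew, even though the advertiser targeted everyone. All ad names and numbers are invented for illustration.

```python
# Toy illustration: an ad selector that maximizes historical click-through
# rate (CTR) per demographic group. The CTR figures are invented.
historical_ctr = {
    # (ad, group): observed click-through rate
    ("mechanic_job_ad", "men"): 0.031,
    ("mechanic_job_ad", "women"): 0.004,
    ("nurse_job_ad", "men"): 0.005,
    ("nurse_job_ad", "women"): 0.028,
}

ADS = ["mechanic_job_ad", "nurse_job_ad"]

def pick_ad(group: str) -> str:
    """Choose the ad with the highest expected CTR for this group."""
    return max(ADS, key=lambda ad: historical_ctr[(ad, group)])

# Although the advertiser targeted everyone, the CTR-maximizing choice
# splits the job ads strictly along gender lines.
print(pick_ad("men"))    # -> mechanic_job_ad
print(pick_ad("women"))  # -> nurse_job_ad
```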

These problematic aspects of AI-based content moderation indicate the limits of techno-solutionism. Legislators and governments should avoid pinning their hopes on innovation and content policing by tech companies to prevent online harms, because technology cannot solve problems online that we have not solved in the real world. Most online harms, from hate speech to cyber-harassment and from misinformation to violent extremism, are rooted in or intertwined with complex issues such as patriarchy, racism, ultranationalism, homophobia and transphobia, or poverty.

These issues have been around since long before the internet and are deeply entrenched in our societies. Technology has scaled them up and helped them take new online forms. AI-based content moderation tools, regardless of how advanced they are, will not be sufficient to prevent online harms and related human rights issues unless the underlying causes are effectively addressed through political, legal, educational, and other means.

It is also important that the proposed laws go beyond regulating platforms’ handling of illegal and harmful user-generated content and tackle the systemic risks arising from platforms’ own algorithmic architectures and design choices. In that sense, the EU Commission’s draft Digital Services Act (DSA) may be promising, as it imposes due diligence requirements on ‘very large platforms’ (those with more than 45 million users in the EU) to assess and address the potential adverse impacts of the design and functioning of their services (including algorithmic systems for content moderation, recommendation, or advertising) on the exercise of fundamental rights. The DSA also sets standards for transparency reporting, oversight, and accountability, and introduces provisions to ensure access to remedy through internal complaint-handling mechanisms, out-of-court dispute settlement, and judicial redress.

The success of this ongoing legal paradigm shift in the digital field will largely depend on the extent to which the final texts of these laws are built on a human rights-based approach.

 
