Global Voices RSS (regional)

AI Amplifies Abuse: How Algorithms Target Women Online

A new report reveals a dangerous flaw in digital platforms. When artificial intelligence (AI) systems mistake online harassment for popular content, they actively spread it. This often leaves women, particularly women of color, as the main targets.

The problem lies in the "algorithm," the automated software that decides what content users see. Platforms design these algorithms to promote "engaging" material—content that gets strong reactions. But AI cannot reliably tell the difference between viral debate and harmful abuse. Violent threats or fake images aimed at women can be flagged as highly engaging, and the system then pushes this abusive content to more people.

Experts say this failure disproportionately affects women from the Global Majority, a term for people from non-white ethnic groups worldwide. Their abuse is more likely to be amplified.

The report calls for urgent change. It asks tech companies to stop using simple engagement as a measure for promoting content. Instead, it recommends that human safety experts help train AI systems to detect and downgrade abuse. Without this fix, the authors warn, online platforms will keep accidentally harming the very users they aim to serve.
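The dynamic described above can be sketched in a few lines of code. This is a simplified illustration, not any platform's real system: the posts, engagement numbers, and abuse scores are invented, and real ranking systems are far more complex. It contrasts engagement-only ranking, which surfaces abusive content precisely because it provokes strong reactions, with a safety-adjusted version of the kind the report recommends, where content flagged as likely abuse is heavily downranked.

```python
# Hypothetical feed: each post has an engagement count and an "abuse_score"
# (imagined here as output from a classifier trained with human safety experts).
posts = [
    {"id": "recipe", "engagement": 120, "abuse_score": 0.02},
    {"id": "debate", "engagement": 300, "abuse_score": 0.10},
    {"id": "harassment", "engagement": 450, "abuse_score": 0.95},  # abusive, but highly "engaging"
]

def rank_by_engagement(posts):
    """Engagement-only ranking: raw reaction counts alone decide reach."""
    return sorted(posts, key=lambda p: p["engagement"], reverse=True)

def rank_with_safety(posts, abuse_threshold=0.8, penalty=0.1):
    """Safety-adjusted ranking: posts scored above the abuse threshold
    keep only a small fraction of their engagement weight."""
    def score(p):
        if p["abuse_score"] >= abuse_threshold:
            return p["engagement"] * penalty  # heavy downrank for likely abuse
        return p["engagement"]
    return sorted(posts, key=score, reverse=True)

print([p["id"] for p in rank_by_engagement(posts)])
# → ['harassment', 'debate', 'recipe']  (abuse reaches the most people)
print([p["id"] for p in rank_with_safety(posts)])
# → ['debate', 'recipe', 'harassment']  (abuse is pushed to the bottom)
```

The point of the contrast is that the "fix" is not a small tweak to the same signal: as long as the score being maximized is engagement, abusive content that provokes reactions will win, so a separate, human-informed abuse signal has to enter the ranking.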