
Amid reckoning on police racism, algorithm bias in focus

By AFP
July 06, 2020

WASHINGTON: A wave of protests over law enforcement abuses has highlighted concerns over artificial intelligence programs such as facial recognition, which critics say may reinforce racial bias. While the protests have focused on police misconduct, activists point to flaws that may lead to unfair applications of law enforcement technologies, including facial recognition, predictive policing and “risk assessment” algorithms.

The issue came to the forefront recently with the wrongful arrest in Detroit of an African American man based on a flawed algorithm that identified him as a robbery suspect. Critics of facial recognition use in law enforcement say the case underscores the pervasive impact of a flawed technology. Mutale Nkonde, an AI researcher, said that even though the idea of bias in algorithms has been debated for years, the latest case and other incidents have driven home the message.

“What is different in this moment is we have explainability and people are really beginning to realize the way these algorithms are used for decision-making,” said Nkonde, a fellow at Stanford University's Digital Society Lab and the Berkman Klein Center at Harvard. Amazon, IBM and Microsoft have said they would not sell facial recognition technology to law enforcement without rules to protect against unfair use. But many other vendors offer a range of technologies.

Nkonde said the technologies are only as good as the data they rely on. “We know the criminal justice system is biased, so any model you create is going to have ‘dirty data,’” she said. Daniel Castro of the Information Technology & Innovation Foundation, a Washington think tank, said, however, that it would be counterproductive to ban a technology that automates investigative tasks and enables police to be more productive.

“There are (facial recognition) systems that are accurate, so we need to have more testing and transparency,” Castro said. “Everyone is concerned about false identification, but that can happen whether it's a person or a computer.”

Seda Gurses, a researcher at the Netherlands-based Delft University of Technology, said one problem with analyzing the systems is that they use proprietary, secret algorithms, sometimes from multiple vendors.

“This makes it very difficult to identify under what conditions the dataset was collected, what qualities these images had, how the algorithm was trained,” Gurses said.

- Predictive limits -

The use of artificial intelligence in “predictive policing,” which is growing in many cities, has also raised concerns over reinforcing bias.

The systems have been touted as a way to make better use of limited police budgets, but some research suggests they increase deployments to communities that have already been identified, rightly or wrongly, as high-crime zones.