Wrongfully arrested because of AI. These cases expose the problems of American police

Concerns about facial recognition software have been with us practically since the first iterations of this technology, especially since we regularly receive reports of photos being illegally scraped from social media to train algorithms, or of dangerous data leaks. Moreover, experts have also pointed to the possibility of such software being purchased by private individuals, which raises the risk of stalking or other morally questionable uses, such as "screening" potential customers or employees.

Not to mention the repeated warnings that these types of solutions are unreliable and may be biased by race or gender, which can lead to discrimination and many mistakes. And there is probably no better confirmation of these warnings than the results of an investigation conducted by The Washington Post, which reveals cases of people wrongly arrested after mistakes made by AI algorithms.

Police officers using AI take shortcuts

As we can read in the new publication, although police officers are warned that the results of using this technology are "unscientific" and "should not constitute the sole basis for making any decisions", officers often take shortcuts. Instead of maintaining established standards, they rely too heavily on artificial intelligence, and innocent people suffer.

As the newspaper suggests, the total number of false arrests caused by AI search errors is impossible to determine, because prosecutors rarely inform the public when they have used these types of tools, and they are legally required to do so in only seven states. Nevertheless, by analyzing available data and talking to lawyers, police officers and victims, journalists found eight such cases of abuse.

Moreover, there are many indications that if the police had maintained the standards of their work, i.e. had not taken shortcuts and had supplemented facial recognition with checks of alibis, distinctive marks and fingerprints, all these wrongful arrests could have been avoided.

And some of these are truly absurd situations, like the arrest of a pregnant woman for stealing a car even though witnesses to the theft made no mention of a pregnant woman, or a man accused of stealing a watch whose identity was "confirmed" by a security guard who was absent from work on the day of the incident!

Not to mention the arrest and conviction of a 29-year-old father of four who had no connection to the crime scene and no criminal past, and who took over two years to clear his name.

In short, innocent people go to jail because police officers rely on AI instead of conscientiously performing their duties. And as experts suggest, facial recognition does indeed keep getting better, but even in laboratory conditions, fed high-resolution photos, it is not infallible, let alone when it works on blurry, unclear recordings from poor-quality surveillance cameras.

As Katie Kinsey, a researcher at the NYU School of Law quoted by the Post, notes, there has been no independent testing of the accuracy of the technology used to read blurred surveillance images, so it is difficult even to assess how often AI makes mistakes, leading to personal tragedies.



Source: geekweek.interia.pl