
AI facial recognition falsely identifies pregnant woman as a wanted criminal, she sues police


Artificial intelligence (AI) and machine learning are becoming integral parts of various industries, revolutionising how tasks are performed, decisions are made, and data is analysed. However, the technology also harbours potential pitfalls. A recent case involving a woman who was falsely accused and arrested for robbery due to a faulty AI-powered facial recognition match serves as a reminder not to become overly reliant on this technology.

A few months ago, a 32-year-old pregnant woman named Porcha Woodruff was arrested outside her home in Michigan, US, in connection with a robbery and carjacking case. According to The New York Times, Woodruff was accused of the crime by Detroit police, who used artificial intelligence-powered facial recognition software to identify potential suspects. Following the arrest, Woodruff was held in custody for 11 hours before being charged in court and subsequently released on a $100,000 (approximately Rs 82 lakh) personal bond.

“I was having contractions in the holding cell. My back was sending sharp pains through me. I experienced spasms. I believe I might have been having a panic attack,” she recollected. Reportedly, following her release, Woodruff hurried to the hospital, where she was diagnosed with dehydration. A month later, the prosecutor dismissed the case against her.

Six months later, Woodruff filed a lawsuit for wrongful arrest against the Detroit police. The report highlights that hers is not the first false arrest attributed to faulty AI technology: Woodruff's case marks the sixth known instance of a Black person being falsely arrested after police used facial recognition technology to match an unidentified offender's face against a photo database.

Furthermore, the Detroit police are facing three lawsuits over wrongful arrests caused by the use of facial recognition technology. Notably, Woodruff is the first woman to come forward with such an account.

The flawed nature of AI technology raises serious concerns about the reliability of police investigations. Phil Mayor, a senior attorney at the American Civil Liberties Union of Michigan, condemns the use of inadequate technology in criminal cases, stating that police assurances of thorough investigations often fall short in such situations. “It’s deeply concerning that the Detroit Police Department knows the devastating consequences of using flawed facial recognition technology as the basis for someone’s arrest and continues to rely on it anyway,” he said in a statement.

Facial recognition technology, as described by Amazon Web Services, identifies a person by analysing their facial features. It works by comparing the features found in images or videos to determine whether the faces belong to the same individual.

In the above case, a city document reportedly reveals that the police department employs a facial recognition system known as DataWorks Plus. The technology compares an unfamiliar face against a collection of criminal mugshots and produces a list of potential matches ranked by likelihood of resemblance. A human analyst then makes the final call on whether any of these matches is a viable suspect.
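A common way such systems rank matches is by converting each face into a numeric "embedding" vector and scoring candidates by vector similarity. The sketch below illustrates that general idea only; it is not DataWorks Plus's actual algorithm, and the embeddings, names (`rank_candidates`, `suspect_a`), and scores are made up for demonstration. Real systems derive embeddings from face images with a neural network.

```python
import math

def cosine_similarity(a, b):
    """Similarity between two embedding vectors, in [-1, 1]."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def rank_candidates(probe, gallery):
    """Rank gallery (mugshot) embeddings by similarity to the probe face."""
    scored = [(name, cosine_similarity(probe, emb)) for name, emb in gallery.items()]
    return sorted(scored, key=lambda t: t[1], reverse=True)

# Hypothetical embeddings for demonstration only.
probe_face = [0.9, 0.1, 0.4]
mugshots = {
    "suspect_a": [0.88, 0.12, 0.35],  # similar vector -> high score
    "suspect_b": [0.10, 0.90, 0.20],  # dissimilar vector -> low score
}
ranking = rank_candidates(probe_face, mugshots)
# Even the top-ranked match is only a lead, not an identification;
# the Woodruff case shows why a human must still verify it.
```

The key point the sketch makes is that the output is a ranked list of look-alikes, not a confirmed identity, which is precisely where over-reliance becomes dangerous.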

While AI tech is becoming a powerful tool for identifying suspects and solving crimes, this incident highlights the risks of depending on untrustworthy tools. It emphasises the necessity for more careful oversight and rules, particularly within law enforcement, to ensure safe and responsible utilisation of such technology.

Published On:

Aug 9, 2023
