In a troubling incident that raises significant questions about the reliability of artificial intelligence in law enforcement, Angela Lipps, a 50-year-old grandmother from Tennessee, was incarcerated for nearly six months due to a misidentification by facial recognition software. Initially arrested in July 2025, Lipps was wrongfully linked to a bank fraud case in North Dakota, leading to serious repercussions in her life, including the loss of her home, car, and even her pet dog.
The Case of Misidentification
Lipps’ ordeal began when North Dakota police used AI technology to analyse surveillance footage from a bank. The software allegedly identified her as a suspect who had used a fraudulent military identification to withdraw significant sums of money. A detective then cross-referenced the AI-generated image with Lipps’ social media profiles and her driver’s licence photo, despite never having met her in person prior to her arrest.
Despite the lack of concrete evidence linking her to the crime, Lipps was charged with multiple counts of unauthorised use of personal identifying information and theft. It was only after spending 108 days in Cass County Jail that she was able to present her case in court. On Christmas Eve, the charges against her were dismissed after it was revealed that her bank records placed her over 1,200 miles away from the scene of the alleged crime at the time it was committed.
Consequences of Incarceration
The ramifications of this wrongful imprisonment have been devastating for Lipps. Following her release, she found herself homeless, without a vehicle, and separated from her beloved dog, as she was unable to manage her finances while incarcerated. “I had my summer clothes on, no coat, it was so cold outside, snow on the ground, scared, I wanted out but I didn’t know what I was going to do, how I was going to get home,” she recounted.

Lipps was forced to spend Christmas in a hotel room, as authorities in North Dakota declined to cover her travel expenses back to Tennessee. Her story not only highlights the personal toll of wrongful arrests but also casts a shadow over the increasing reliance on technology in criminal investigations.
The Broader Implications of AI in Law Enforcement
Lipps’ case is not an isolated incident. The use of facial recognition technology has been linked to numerous wrongful arrests. In January 2025, another individual in Southampton experienced a similar fate when an AI algorithm mistakenly associated him with a burglary suspect over a hundred miles away. Such occurrences raise critical concerns regarding the ethical implications of deploying AI in law enforcement without adequate safeguards.
As the technology is deployed more widely, the errors and biases inherent in these systems affect ever more people. Critics argue that the lack of transparency in how these algorithms operate can lead to significant injustices, particularly for vulnerable populations who may be disproportionately affected.
Why it Matters
The wrongful imprisonment of Angela Lipps serves as a stark reminder of the pitfalls of integrating AI into law enforcement practices. As society increasingly turns to technology for solutions, it is crucial to scrutinise these tools and ensure they are used responsibly. The implications of misidentification extend beyond individual cases; they challenge the very foundations of justice and trust in law enforcement. With the stakes so high, it is imperative that policymakers, technologists, and civil rights advocates collaborate to establish robust standards and practices that protect citizens from the consequences of technological failure.
