Tennessee grandmother Angela Lipps spent nearly six months in jail after Fargo police used facial recognition software to falsely identify her as a bank fraud suspect. On March 11, 2026, her case went public, exposing not just one AI failure but a systemic pattern. Bank records would later prove she was 1,200 miles away in Tennessee when the alleged crime occurred, yet she was arrested by U.S. Marshals at gunpoint, held without bail, and lost her home, car, and dog before charges were dismissed. This is the eighth documented wrongful arrest involving facial recognition in the United States.
Police Skipped Basic Detective Work
Fargo police used facial recognition software to identify Lipps as a bank fraud suspect. A detective looked at her social media and driver’s license photo, then wrote that she “appeared to be the suspect based on facial features, body type and hairstyle.” That visual comparison became an arrest warrant. Police never called her. Never checked her bank records. Never verified that she could have been in North Dakota when the alleged fraud occurred. On July 14, 2025, U.S. Marshals arrested her at her Tennessee home.
Bank records later proved Lipps was more than 1,200 miles away at the time of the alleged fraud. Police checked those records only after she had spent nearly six months in jail. Held without bail as a fugitive, she was flown to North Dakota to face charges that were dismissed months later. By then she had lost her home, car, and dog.
This reveals the core problem. Police are treating facial recognition as definitive evidence instead of an investigative lead. They’re skipping basic detective work—alibi checks, timeline verification, physical evidence—and arresting people based solely on algorithm output.
The Pattern: Eight Cases, Same Failures
Angela Lipps is the eighth person wrongfully arrested due to facial recognition errors in the United States. A 2025 Washington Post investigation found that across these eight cases, police failed to check alibis in six cases, ignored contradictory evidence in two, and neglected to collect key evidence in five. In all eight cases, police arrested someone “without independently connecting the person to the crime.”
Consider other documented failures. Robert Williams was wrongfully arrested in Detroit and later settled for $300,000. Detroit now requires independent evidence before facial recognition-based arrests. Jason Vernau spent three days in jail for allegedly cashing a fraudulent $36,000 check. He had actually cashed a legitimate $1,500 check at the same bank on the same day. Police never checked bank accounts or transaction timestamps. Quran Reid was arrested in Atlanta for a theft committed in Louisiana. He repeatedly told police he had never been to Louisiana. They never sought proof that he was at work in Georgia on the day of the crime.
This isn’t isolated human error—it’s a systemic pattern. Police departments are taking shortcuts with facial recognition, making arrests after getting a match without checking alibis or corroborating evidence.
Seven of Eight Victims Were Black
Seven of the eight people wrongfully arrested after a facial recognition match were Black. This isn’t coincidence; it’s algorithmic bias confirmed by federal research. A National Institute of Standards and Technology (NIST) study found that facial recognition systems are 10 to 100 times more likely to misidentify Black and Asian faces than white faces. African-American women have the highest false positive rates of all demographic groups.
The NIST study analyzed hundreds of facial recognition algorithms and found that “across demographics, false positive rates often vary by factors of 10 to beyond 100 times.” Real-world police use compounds this: officers pick one candidate from a ranked list of possible matches, and when those matches disproportionately include Black faces, Black people get arrested for crimes they didn’t commit.
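To see why a ranked candidate list is dangerous at database scale, here is a minimal back-of-the-envelope sketch in Python. The gallery size and error rates are entirely hypothetical illustrations (NIST reports rate ratios across algorithms, not these absolute numbers); the point is only how the arithmetic compounds.

```python
# A minimal sketch of the base-rate arithmetic behind candidate lists.
# All numbers are hypothetical illustrations, not NIST measurements.

GALLERY_SIZE = 1_000_000   # hypothetical mugshot/ID photo database
BASE_FPR = 1e-5            # hypothetical per-comparison false positive rate
DISPARITY = 10             # low end of the 10-100x range NIST reported

# A probe photo is compared against every gallery photo, so the expected
# number of false matches scales with gallery size times the error rate.
false_candidates_base = GALLERY_SIZE * BASE_FPR
false_candidates_skewed = GALLERY_SIZE * BASE_FPR * DISPARITY

print(f"False candidates expected at the base rate: {false_candidates_base:.0f}")
print(f"False candidates expected at a 10x rate:    {false_candidates_skewed:.0f}")
```

Even under these generous assumptions, an officer at the base rate is choosing among roughly ten wrong faces; at a tenfold higher rate, among a hundred. That is why an algorithm match can only ever be a lead, never proof.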
If you’re Black, you’re at significantly higher risk of being falsely identified by facial recognition and wrongfully arrested by police. This is a civil rights issue, not just a technology problem.
Regulation Lags Technology Deployment
Despite documented failures, only 15 states have enacted any legislation restricting police use of facial recognition as of 2025. Most of these laws are weak—they lack accuracy testing requirements, independent verification mandates, or real accountability. Only two states (Colorado and Virginia) require testing and accuracy standards. Five states require notice when facial recognition is used. Six states limit use to serious crimes.
Maryland has the strongest regulations: it prohibits use on sketches, bans real-time identification, requires notice to defendants, and limits use to serious crimes. Meanwhile, the Washington Post investigation found that at least 15 police departments in 12 states were arresting suspects based solely on AI matches, with no corroborating evidence. The gap between the strongest state law and everyday practice shows how far current regulation falls short.
The technology is being deployed faster than regulation can catch up. Without mandatory accuracy testing, alibi verification requirements, or accountability mechanisms, wrongful arrests will continue.
The Accountability Gap
Angela Lipps is now “working to get her life back” after six months of wrongful incarceration. She lost her home, car, and dog. Her lawyer says she’s considering legal action against the Fargo Police Department. But who’s actually liable when facial recognition causes wrongful arrests—police departments, AI vendors, both, or neither? The answer is unclear.
Detroit paid Robert Williams $300,000 and agreed to require independent evidence before facial recognition-based arrests. But most jurisdictions have no such settlements or policy changes. AI vendors like Clearview AI continue to market their systems, publishing claims like “The Myth of Facial Recognition Bias” despite NIST research documenting exactly that bias. Police departments claim they relied on the technology in good faith. Victims like Angela Lipps are left with destroyed lives and unclear legal recourse.
Without clear liability, there’s no deterrent. Police departments won’t change practices if they face no consequences. AI vendors won’t improve accuracy if they’re shielded from lawsuits. And people like Angela Lipps will continue spending months in jail for crimes they didn’t commit, losing everything they own, while everyone involved points fingers.
Key Takeaways
- Eight documented wrongful arrests from facial recognition errors, with police failing to check alibis in six of those cases—treating AI matches as definitive proof instead of investigative leads
- Racial bias is systemic: seven of eight victims were Black, consistent with NIST findings that facial recognition systems are 10-100 times more likely to misidentify Black and Asian faces
- Only 15 states have facial recognition legislation, and just two require accuracy testing—regulation lags far behind technology deployment
- Accountability remains unclear: Detroit’s $300,000 settlement is the exception, not the rule, leaving victims with destroyed lives and no clear legal recourse
- Angela Lipps spent six months in jail because police never verified she was 1,200 miles away—basic detective work that would have prevented the arrest entirely

