A student was sitting outside Kenwood High School with friends, eating a bag of chips after football practice on Monday. Around 20 minutes later, cops showed up with guns, walking toward the student. The latest: https://t.co/yd6bz5l6Si pic.twitter.com/qG2p1bWPUj
— WBAL NewsRadio 1090 and FM 101.5 (@wbalradio) October 22, 2025
Imagine this: a 16-year-old high school athlete, fresh from football practice, casually munching on a bag of Doritos to refuel. He crumples the empty packet, stuffs it in his pocket, and heads toward his bus—only to find himself surrounded by armed police officers, face-down on the ground, handcuffed at gunpoint. This wasn’t a scene from a dystopian thriller; it was the nightmare reality for Taki Allen, a student at Kenwood High School in Baltimore County, Maryland, on October 22, 2025. The culprit? An AI-powered gun detection system that mistook the innocent snack wrapper for a firearm, triggering a frantic police response.

What followed was 20 minutes of terror: officers patting him down, searching his backpack, and interrogating him as if he were a threat to public safety. No weapon was found, of course—only orange dust and a kid’s confusion.

This harrowing incident isn’t just a freak mishap; it’s a glaring indictment of our overreliance on artificial intelligence in high-stakes environments like law enforcement. AI makes mistakes—frequently, catastrophically—and when humans treat its outputs as gospel without verification, innocent lives are upended. From crumpled chip bags flagged as guns to facial recognition errors leading to wrongful arrests, the lesson is clear: before leaping to drastic actions like detaining a teenager, officers and decision-makers must pause, think, and confirm. In an age where algorithms hold sway over justice, blind faith in tech isn’t innovation—it’s a recipe for injustice.

The Doritos debacle unfolded at a Baltimore County school equipped with ZeroEyes, an AI surveillance tool marketed as a “guardian angel” for detecting concealed weapons in real-time video feeds. Continued below this next clip
FACIAL RECOGNITION: Wrong man arrested due to tech failure
Facial recognition systems are more likely to misidentify you if you’re a woman, younger person, or ethnic minority
We don’t want error-prone mass surveillance undermining the right to privacy & freedom of assembly pic.twitter.com/nIiQ2tAH1o
— Together (@Togetherdec) September 9, 2025
Installed to enhance security amid rising school threats, the system scans footage and alerts authorities within seconds if it spots what it deems a gun. In Allen’s case, the AI zeroed in on the way he held the bag—perhaps the crinkle of foil mimicking a barrel’s glint, or the casual grip resembling a draw. Within moments, a school resource officer radioed in a “man with a gun,” dispatching a rapid-response team. Allen, a straight-A student with no criminal record, was tackled, restrained, and searched in front of peers and passersby.
“I was scared for my life,” he told CTV News, his voice trembling in the interview. “They had guns pointed at me.” Officials later conceded it was a false positive, partly blaming “human error” in the response protocol, but the damage was done: Allen’s trust in authority shattered, his privacy violated, and a community left questioning the tech’s reliability.

This isn’t an isolated glitch; it’s symptomatic of AI’s inherent fallibility, especially in the chaotic, context-starved world of surveillance and policing. AI systems, trained on vast datasets, excel at pattern recognition but falter spectacularly when faced with real-world nuances—like distinguishing a snack from a semi-automatic. A 2023 report from the Innocence Project highlighted how unregulated AI tools have ensnared innocents, with at least six documented wrongful accusations from facial recognition alone—all involving Black individuals misidentified as suspects.
The problem? Algorithms inherit biases from flawed training data: systems trained mostly on white faces misfire far more often on everyone else, while mugshot databases overrepresent people of color, multiplying the chances of a bad match. A NIST study found false-positive rates up to 100 times higher for Black and Asian faces than for white ones. In law enforcement, where seconds count, these biases don’t just produce errors; they escalate into arrests, raids, and ruined lives. Consider Robert Williams, the first known American wrongfully arrested solely because of a facial recognition match, in January 2020. Arriving home from work in Detroit, Williams was arrested on his own front lawn after an AI match, run against grainy shoplifting footage, falsely pegged him as the thief, despite clear mismatches in age, build, and location.
Cuffed in front of his horrified wife and young daughters, he spent 30 hours in jail before being released. When Williams held the surveillance photo next to his own face during questioning, a detective conceded, “I guess the computer got it wrong.” The Detroit Police Department later paid him $300,000 in a historic settlement, but the trauma lingers: “It changes how you see the world,” Williams told NPR. His case was no outlier; by 2024, Detroit had racked up three known false arrests from the same tech, all of them Black, prompting a policy overhaul amid ACLU lawsuits.

Nijeer Parks fared little better in 2019: Woodbridge, New Jersey, police relied on a faulty facial recognition match to charge him with shoplifting and a string of related offenses he didn’t commit. Parks, who says he was some 30 miles away at the time, spent ten days in jail and months fighting the case before prosecutors dropped the charges; he later sued the city over the false arrest.

Like Allen’s Doritos scare, these cases stem from a dangerous overreliance on AI without human safeguards. Facial recognition, deployed in over 150 U.S. police departments, boasts 99% accuracy in controlled tests—but drops to 70% in real-world scenarios with poor lighting, angles, or masks, per a Brookings Institution analysis. Gun detection AI like ZeroEyes fares no better: a 2022 MIT study found false positives in 15-20% of alerts, often triggered by mundane objects like umbrellas or cell phones. The common thread? Rushed deployment without rigorous validation. As the ACLU warned in a 2024 report, at least seven U.S. wrongful arrests trace to unverified AI hits, disproportionately affecting minorities and the poor.
Links
- US student handcuffed after AI system apparently mistook bag of Doritos for gun
- Armed police handcuff teen after AI mistakes crisp packet for gun in US
- Student handcuffed after AI system mistook Doritos for gun
- AI software mistakes student’s bag of chips for a weapon
- ‘I was just holding Doritos’: US student handcuffed after AI system mistakes bag of chips for gun
- When Artificial Intelligence Gets It Wrong
- Flawed Facial Recognition Technology Leads to Wrongful Arrest and Historic Settlement
- Facial Recognition Led to Wrongful Arrests. So Detroit Is Making Changes
- AI gun detection system mishap was partly human error, official says
- How Wrongful Arrests Based on AI Derailed 3 Men’s Lives
- Facial Recognition Leads To False Arrest Of Black Man In Detroit
- Police Say a Simple Warning Will Prevent Face Recognition Wrongful Arrests. That’s Just Not True
- Wrong Guy: Florida suspect misidentified by AI facial recognition technology
- AI, facial recognition technology causing false arrests across nation
- Arrested by AI: Police ignore standards after facial recognition matches
- Biased Face Recognition Technology Used by Government
- Facial recognition in policing is getting state-by-state guardrails
- It’s time to address facial recognition, the most troubling law enforcement AI tool
- Police surveillance and facial recognition: Why data privacy is an imperative for communities of color
- Public views of police use of facial recognition technology
Ben and Mal Antoni at Whatfinger News
More…
Sam Altman outlines three major ways AI could go wrong. First, AI works exactly as intended but bad actors use it for warfare or bioweapons. Second, AI becomes agentic and resists human control, pursuing its goals unchecked. Third, and most subtly, AI takes over by becoming too helpful until we all just follow its advice. This last scenario may not feel like failure, but it quietly shifts control to the model.
Sam Altman outlines three major ways AI could go wrong.
First, AI works exactly as intended but bad actors use it for warfare or bioweapons.
Second, AI becomes agentic and resists human control, pursuing its goals unchecked.
Third, and most subtly, AI takes over by becoming… pic.twitter.com/L5DAnPE6hr
— Wes Roth (@WesRothMoney) October 3, 2025