Immigration crackdowns ordered by US President Donald Trump have turned deadly, with a second US citizen killed by Immigration and Customs Enforcement (ICE) agents this month.
Cell phone videos from eyewitnesses in Minneapolis show several ICE agents tackling 37-year-old ICU nurse Alex Pretti to the ground and then shooting him. He had not drawn his gun, as initially stated by the Department of Homeland Security. Footage from different cell phone recordings shows his gun on a belt, which an ICE officer removes before Pretti is killed.
Earlier this month, ICE agents had shot 37-year-old Renee Good as she was driving away in her car.
Videos taken from multiple angles also discredit the official statement that Good was trying to run over an officer; agents killed her point-blank by firing through her window.
Amid these incidents, social media has been flooded with a mix of authentic eyewitness videos and AI-generated fakes, complicating efforts to understand what actually happened. The Las Vegas Metropolitan Police Department warned it had seen a rise in AI-generated images and video in connection with its forces, adding it "does not participate in proactive immigration enforcement activities." DW Fact check investigated several viral clips.
ICE officers getting arrested by police?
Claim: ICE officers are getting arrested or beaten by police, as seen in several videos on different platforms and posts in different languages (here, here and here).
DW Fact check: Fake
Details such as garbled text in the videos give them away. The subway signs don't make sense ("Exit Ses"; "Sotreé Seet"; "42eet"), and logos on uniforms are wrong or misspelled ("pice"; "IICE").
Body movements appear unnatural, exaggerated, or stiff — for example, when the police officer grabs one of the ICE agents with his right hand, his left arm hangs down with little to no movement. Dialogue appears jumbled, as if the AI forgot to add a response from the other character.
ICE agents act like NPCs (non-playable characters) in a video game — background characters controlled by the game rather than a player: they don't seem to react to what's happening. Mouth movements while yelling seem abrupt and exaggerated.
Similar patterns also appear in other AI-generated content of protesters allegedly confronting ICE officers.
ICE officers entering classrooms or university campuses?
Claim: An ICE officer entered California State University looking for a student, while other agents showed up at a high school's soccer match.
DW Fact check: Fake
Both videos, published on the social media platform TikTok, are AI-generated. The logo of the AI video generator Sora pops up in the video of the university classroom — a telltale sign that the clip was created with the help of AI and does not show real, authentic footage.
The video of ICE agents watching a crowd at a soccer match has a strange, glossy look to it. A search for "Agleca soccer" returns no results. Faces in the crowd appear distorted, and writing on posters is gibberish. TikTok has also added a warning saying the video contains AI-generated content.
Are AI fakes drowning out real eyewitness footage?
“One of the problems with all of the AI-generated content and fake videos circulating among the real videos is it becomes very difficult to distinguish what is real,” said Courtney Radsch, director of the Center for Journalism and Liberty at the Open Markets Institute.
With AI tools now widely accessible, anyone can fabricate videos that appear real.
Radsch warns that disinformation campaigns may intentionally release fake videos to drown out accurate documentation of deadly ICE encounters.
Brittani Kollar, deputy director of MediaWise, Poynter's digital media literacy project that teaches people how to spot mis- and disinformation, says that when false information goes viral, the verified fact checks tend not to reach as many people.
“Viral deepfakes can, in fact, drown out real videos when it comes to algorithms because more people are watching the deepfakes,” she told DW.
It gets increasingly tough to decipher what is real and what actually happened, which “could undermine legal processes, undermine trust in video evidence, undermine the trust in eyewitness accounts,” Radsch warns.
Can we still spot AI-generated fakes?
It is still possible, but increasingly challenging, as generative tools advance. Although detection tools like Hive Moderation exist, AI generation tools evolve faster than the systems designed to identify them. Earlier giveaways, such as unnatural eye blinking or distorted reflections, now occur less frequently.
Kollar advises looking for clues in the video:
- Are there watermarks or AI-tool identifiers?
- Is there odd phrasing, distorted text, or inconsistent lighting?
- Is there audio in a language you understand?
- Do the captions seem sensational or lack context?
- Have reputable media outlets reported on it?
- Is there additional footage, or are there alternate camera angles?
- Can the source accounts be verified?
Radsch adds: “It’s virtually impossible to tell real from fake in many cases if the attempt is to make a deepfake. Even sophisticated experts can’t necessarily do that.”
She argues for stronger technical protocols to authenticate real footage.
Why create these videos? Disinformation — and profit
Experts say motivations vary: malicious actors aiming to disrupt public discourse, and trolls seeking chaos.
Creating AI content can also be driven by economic interests, according to Radsch. ICE raids could be a lucrative topic for attracting followers or boosting digital advertising revenue.
Ultimately, Radsch says, it doesn’t matter all that much who’s behind the latest wave of AI content; the deeper issue is collapsing trust: “People are losing faith that facts can be established.”
With virtually no guardrails on how AI-generated videos can be monetized, she says the social media ecosystem incentivizes disinformation for profit. The consequence is alarming: when people cannot discern fabricated videos from authentic ones, they often avoid the news altogether.
Rachel Baig and Ines Eisele contributed to this report.
Edited by: Silja Thoms