Artificial intelligence tools widely available online are increasingly being used to “enhance” images from breaking news events, according to an open-source intelligence expert.
Giancarlo Fiorella, director of research and training for Bellingcat, an online investigative group, said in an interview with CTV News Channel on Tuesday that the misuse of AI image upscaling tools played a significant role in spreading misinformation following recent shootings in Minneapolis, Minn., including the fatal shooting of Renee Good.

“What we saw was a large number of images online that had been so-called ‘upscaled’ with AI tools by ordinary people, who were wanting to find out exactly what happened in these cases,” Fiorella said.
Instead of clarifying what happened, Fiorella said, these tools often fabricate details that do not exist.
According to Fiorella, AI upscaling tools can “hallucinate” information, creating visual elements that were never present in the original image. Those fabricated details could be mistaken for factual evidence by online users, particularly in fast-moving and emotionally charged situations.
In one widely circulated example, users attempted to remove the face mask from an image of a U.S. Immigration and Customs Enforcement (ICE) officer allegedly involved in the fatal shooting of Good.
“The platform has no way of knowing what that individual actually looks like,” Fiorella said. “It fills in the missing data with what it thinks his face could look like based on other images of people that these platforms have been trained on.”

The result was a realistic-looking but entirely fabricated face, which Fiorella said easily misled people who are unaware of the limitations of AI-generated imagery.
According to reports from NPR, that altered image contributed to the false identification of the alleged shooter as Steve Grove, a publisher at the Minnesota Star Tribune. Online users began searching for and targeting Grove.
Court documents later identified the alleged shooter as Jonathan Ross.
The Star Tribune publicly denounced what it described as a coordinated online disinformation campaign, stressing that the ICE agent involved had no affiliation with the newspaper and urging the public to rely on reporting from trained journalists, rather than AI-generated content.
Statement from the Minnesota Star Tribune: We are currently monitoring a coordinated online disinformation campaign incorrectly identifying the ICE agent involved in yesterday’s shooting. To be clear, the ICE agent has no known affiliation with the Star Tribune. We encourage…
— The Minnesota Star Tribune (@StarTribune) January 8, 2026
Fiorella said the case highlights the real-world consequences of AI hallucinations when they are mistaken for evidence.
A similar pattern emerged following a second fatal shooting in Minnesota, this time involving Alex Pretti.
After the U.S. Department of Homeland Security released a photo of the confiscated weapon, online users attempted to match it to blurry video footage using AI enhancement tools.

Instead of revealing new details, the tools generated sharp, highly detailed images of a weapon that did not match the original.
“This is something that we’re seeing more and more of because of the availability of these AI upscaling tools,” Fiorella warned.
Fiorella also noted that responsibility largely falls on social media platforms themselves.
Some platforms apply labels to AI-generated content, while others rely on community-driven moderation programs. However, Fiorella said enforcement and consistency vary widely.
“It’s mostly up to the platforms themselves to decide whether or not they want to tag this kind of content, how they want to tag it and how strict they want to be with the rules for tagging,” Fiorella said.