Here’s how AI-enhanced photos created fake news surrounding the Minnesota shootings

Bellingcat Director of Research Giancarlo Fiorella explains how AI-generated images are being used to falsely identify the ICE agents who shot Renee Good.

Artificial intelligence tools widely available online are increasingly being used to “enhance” images from breaking news events, according to an open-source intelligence expert.

Giancarlo Fiorella, director of research and training for Bellingcat, an online investigative group, said in an interview with CTV News Channel on Tuesday that the misuse of AI image upscaling tools played a significant role in spreading misinformation following recent shootings in Minneapolis, Minn., including the fatal shooting of Renee Good.

People attend a candlelight vigil at US Embassy in London, Monday, Jan. 12, 2026, for US Citizen Renee Good, who was shot by ICE in Minneapolis. (AP Photo/Alastair Grant)

“What we saw was a large number of images online that had been so-called ‘upscaled’ with AI tools by ordinary people, who were wanting to find out exactly what happened in these cases,” Fiorella said.

Instead of clarifying what happened, Fiorella said, these tools often fabricate details that were never there.

According to Fiorella, AI upscaling tools can “hallucinate” information, creating visual elements that were never present in the original image. Those fabricated details can be mistaken for factual evidence by online users, particularly in fast-moving and emotionally charged situations.

In one widely circulated example, users attempted to remove the face mask from an image of a U.S. Immigration and Customs Enforcement (ICE) officer allegedly involved in the fatal shooting of Good.

“The platform has no way of knowing what that individual actually looks like,” Fiorella said. “It fills in the missing data with what it thinks his face could look like based on other images of people that these platforms have been trained on.”
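The distinction Fiorella describes can be illustrated in code. The sketch below (the author's illustration, not Bellingcat's workflow or any specific upscaling product) implements classical bilinear upscaling: every new pixel is a weighted average of pixels that already exist, so the output can never contain information the original lacked. Generative “AI upscalers” work differently, synthesizing plausible detail from their training data, which is where hallucination enters.

```python
def bilinear_upscale(img, factor):
    """Upscale a 2-D grayscale image (list of lists) by an integer factor."""
    h, w = len(img), len(img[0])
    new_h, new_w = h * factor, w * factor
    out = []
    for y in range(new_h):
        # Map the output coordinate back into the source grid.
        sy = y * (h - 1) / (new_h - 1) if new_h > 1 else 0
        y0, y1 = int(sy), min(int(sy) + 1, h - 1)
        fy = sy - y0
        row = []
        for x in range(new_w):
            sx = x * (w - 1) / (new_w - 1) if new_w > 1 else 0
            x0, x1 = int(sx), min(int(sx) + 1, w - 1)
            fx = sx - x0
            # Each output pixel is a weighted average of the four
            # surrounding source pixels -- nothing new is invented.
            top = img[y0][x0] * (1 - fx) + img[y0][x1] * fx
            bot = img[y1][x0] * (1 - fx) + img[y1][x1] * fx
            row.append(top * (1 - fy) + bot * fy)
        out.append(row)
    return out

small = [[0, 100], [100, 200]]
big = bilinear_upscale(small, 4)

# Every interpolated value stays within the range of the original pixels;
# a generative upscaler offers no such guarantee.
flat = [v for row in big for v in row]
print(min(flat), max(flat))
```

Because interpolation is bounded by the source data, a face hidden behind a mask stays hidden: no mathematical sharpening can recover pixels that were never captured. A generative model instead fills the gap with a statistically plausible face, exactly the failure mode Fiorella describes.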

Federal agents and police clash with protesters outside the Bishop Henry Whipple Federal Building, in Minneapolis on Thursday, Jan. 8, 2026. THE CANADIAN PRESS/Christopher Katsarov

The result was a realistic-looking but entirely fabricated face, which Fiorella said can easily mislead people who are unaware of the limitations of AI-generated imagery.

According to reports from NPR, that altered image contributed to the false identification of the alleged shooter as Steve Grove, a publisher at the Minnesota Star Tribune. Online users began searching for and targeting Grove.

Court documents later identified the alleged shooter as Jonathan Ross.

The Star Tribune publicly denounced what it described as a coordinated online disinformation campaign, stressing that the ICE agent involved had no affiliation with the newspaper and urging the public to rely on reporting from trained journalists rather than AI-generated content.

Fiorella said the case highlights the real-world consequences of AI hallucinations when they are mistaken for evidence.

A similar pattern emerged following a second fatal shooting in Minnesota, this time involving Alex Pretti.

After the U.S. Department of Homeland Security released a photo of the confiscated weapon, online users attempted to match it to blurry video footage using AI enhancement tools.

A photo of Alex Pretti is displayed during a vigil for Alex Pretti by nurses and their supporters outside VA NY Harbor Healthcare System, Thursday, Jan. 29, 2026, in New York. (AP Photo/Yuki Iwamura)

Instead of revealing new details, the tools generated sharp, highly detailed images of a weapon that did not match the original.

“This is something that we’re seeing more and more of because of the availability of these AI upscaling tools,” Fiorella warned.

Fiorella also noted that responsibility largely falls on social media platforms themselves.

Some platforms apply labels to AI-generated content, while others rely on community-driven moderation programs. However, Fiorella said enforcement and consistency vary widely.

“It’s mostly up to the platforms themselves to decide whether or not they want to tag this kind of content, how they want to tag it and how strict they want to be with the rules for tagging,” Fiorella said.