
The AI caricature trend has sinister risks, expert warns


An example of the AI caricature trend (Source: Claudiu Popa)

The AI caricature trend is the latest craze to spread across social media. It involves telling a chatbot like ChatGPT to create a personalized caricature of yourself by feeding it details of your interests.

While it may seem like a harmless bit of fun, one expert is warning that there are sinister aspects to the gamification of AI, which include risks to your privacy.

Claudiu Popa, a certified cybersecurity specialist and privacy professional, is the CEO of Data Risk Canada. He said that now that AI chatbots can generate images, it’s natural for people to want to create one based on themselves.

However, while it may seem like a legitimate use of resources, Popa claims the trend carries environmental and privacy impacts, as well as a risk of data leaks.

“These tools are all sharing information about ourselves, and ultimately, it’s a consent grab,” he told CTVNews.ca on Tuesday.

In terms of privacy, Popa said the chatbot is trained to draw as much information as possible out of the user, in the hope that the image will be accurate or amusing. Popa called that an “attention trap,” a tactic that a lot of young people fall for so they don’t miss out on the latest online trend.

A young student on a computer. (Photographer: Dhiraj Singh/Bloomberg)

However, he warns that there are major issues around privacy and consent, and that people are at risk of participating in their own privacy violations.

Popa said we shouldn’t be using chatbots to “gamify our existence” by creating viral tools that ultimately benefit for-profit companies. One example is data brokers, which use the personal details we share online for targeted advertising.

He added that the AI caricature trend is a good example for parents and teachers to use when talking to kids about how they’re being conditioned to share more information about themselves online.

“You’re building predictive tools that allow these platforms to know the types of needs that young people have at their age,” he said. “That’s one way of collecting this type of information, without telling people what you’re doing with it.

“That is the crux of the problem: it comes down to consent,” he added.

When Popa tried doing the caricature trend with AI, he found that it kept asking for more information, such as photos and emails, to build the image.

“It’s not its job to keep on reminding you of the privacy impact of this ongoing iterative activity,” he said. “Regardless of whether it’s presented as a caricature challenge or an online game … we need to be able to catch these things and empower everyone to put a stop to these things, before we find ourselves putting in sensitive information.”

The ChatGPT logo on a laptop computer, Thursday, March 9, 2023. (Photographer: Gabby Jones/Bloomberg)

What’s also concerning, Popa said, is the move towards “agentive” AI, which gives the tool agency over sensitive information, like email and banking accounts.

“Never consent to this kind of invasive access, if only because it removes the accountability on behalf of a platform,” he said.

If you’ve given an AI tool access to your online banking software and you get defrauded as a result, Popa said, no bank’s terms of service will cover you, because you’ve allowed an agentic tool access to your personal information through “your own access controls and access credentials.”

Finally, Popa stressed that there are concrete consequences to using AI, as seen in the environmental impact of AI data centres. He said that water is being prioritized for data centres rather than for local residents, who are paying the price environmentally.

“This type of viral activity (like the AI caricature trend) doesn’t do anyone any favours and it certainly contributes to a lot of the bad reputation that these AI chatbots increasingly have,” he said.