Sharan Kaur served as the deputy chief of staff for former Liberal finance minister Bill Morneau and is currently a principal at Navigator.
I have grown accustomed to the digital noise that comes with being a woman in the public eye. My inbox is a toxic repository, a daily flood of vitriol about my race, my politics, my appearance, my sex.
I often have my image manipulated with AI. My face stitched onto an animal, horns sprouting from my head, features altered: the usual obscene handiwork of keyboard warriors. I have learned to hit block, to mute, and to keep moving. I have developed a thick skin and am known to punch back.
But nothing prepared me for the sickening, visceral jolt of receiving a deep-fake, sexually explicit video using my image.
It wasn’t another troll comment. It was a fabricated, high-definition violation, an image of my own body, manipulated and contorted into a scene I never participated in. It was a digital assault, a theft of agency, and a brutal reminder that in this era of rapid technological advancement, our bodies are no longer our own.
Weaponizing AI to strip away my autonomy was a violation so completely invasive that there are no adequate words for it. I have spent my career wielding words for a living, yet in that moment, they failed me.
Prime Minister Mark Carney’s federal government has announced it intends to criminalize the creation and distribution of non-consensual sexual deepfakes with Bill C-16, the Protecting Victims Act. The bill, which is making its way through committee, proposes extending existing prohibitions to include non-consensual synthetic images. It is the right instinct. It is a necessary baseline.
But this is too little to protect against the speed at which these tools evolve. Canadian law is already behind.
As we call for criminal consequences and new laws, we need to be honest about what the existing ones have delivered for women in Canada. The system is already failing us.
Here is what the Canadian legal landscape actually looks like. Section 162.1 of the Criminal Code makes it illegal to distribute intimate images without consent, but the provision defines an “intimate image” as a “visual recording of a person,” language that courts have interpreted to mean authentic images only. In November 2025, an Ontario judge ruled in R. v. Kapoor that sharing an AI-generated nude of a spouse was “morally reprehensible” and “frankly, obscene” but not a crime.
Not. A. Crime.
A man distributed a fabricated sexual image of his wife without her knowledge or consent, and a Canadian court found that the law, as written, did not qualify it as a crime. The difference between a ‘real’ violation and a simulated one was enough to let him walk.
Patches on a broken system
Beyond the Criminal Code, those impacted in Ontario, the country’s largest province, are relying on outdated civil remedies for damages and takedown orders. Manitoba and Quebec have updated their legislation to capture AI-generated imagery, Alberta is looking to update an existing law to let people sue others for sharing AI-generated sexual images, and British Columbia passed its Intimate Images Protection Act, which explicitly includes “real, fake, or altered” content. These are all patches on a broken system, not true criminal accountability.
Only six per cent of sexual assaults are reported to police, making sexual assault the most underreported violent crime in this country. Of those that are reported, fewer than half result in charges being laid. And of those that proceed, it is estimated that less than one per cent of sexual assaults experienced by women end with a conviction.
Two-thirds of sexual assault survivors in Justice Canada studies reported having no confidence in the police, no confidence in the courts, and no confidence in the criminal justice system at all. They are not wrong to feel that way. The numbers bear them out.
When I have reported harassment in the past, I have been met with indifference, with exhausting bureaucratic hurdles, and with the suggestion that perhaps I should simply log off. As though the solution to being targeted for my identity was to make myself smaller. To participate less. To exist more quietly.
That is the system we are now asking women to trust with a crime that is notoriously difficult to trace, technically complex to prosecute, and committed overwhelmingly against us. We have a habit of categorizing digital violence as a technology problem, a challenge for software engineers and platforms to sort out, rather than calling it what it is: an act of gendered violence.
Creating a sexual deepfake takes deliberate effort. A perpetrator must choose to seek out an image of a woman, upload it to an AI tool with no guardrails, articulate a graphic, sexual description of her body, and then distribute the result. Every step in that chain is an act of targeted harm. We should call it what it is.
Consequences must be severe
AI companies and some social media giants are aggressively pushing these tools, prioritizing growth, disruption, and profit. They are hiding behind terms of service and content moderation policies that are, in practice, performative. The Online Harms Act, which would have created meaningful accountability for platforms hosting this content, died when Parliament was prorogued in January 2025. Canada’s AI and Data Act died with it. The regulatory architecture we needed was shelved, while the harmful tools proliferated.
I am going to say something that will be uncomfortable for some: if there was ever a moment to over-regulate something, this is it. This isn’t about merely imposing restrictions on AI tools; it is about taking bold action against harmful behaviour. We must implement severe consequences for those who exploit these platforms to inflict harm on others, all while unjustly cloaking themselves in misguided notions of unchecked freedom.
We have watched this country under-regulate social media, under-regulate AI development, and under-regulate the digital targeting of women for decades. We have responded to every new form of technology-facilitated violence with the slowest possible legislative response, while survivors absorbed the cost of our hesitation. We reacted to the 2013 death of Rehtaeh Parsons, a 17-year-old who died by suicide after images were shared without her consent, and we still did not get the law right. We are still getting it wrong in 2026.
The argument for measured, incremental reform sounds reasonable until you are the woman sitting with a gruesome, fake image of your own body, and you are told that what happened to you is ‘morally reprehensible’ but not a crime.
Criminal law must be reformed, aggressively, explicitly, and without carve-outs for digitally generated content. The creation of non-consensual intimate deepfakes should be criminal, not only the distribution. Mandatory minimum sentences should be on the table, and platform liability must carry tangible penalties.
But the law is only part of the answer. Law enforcement needs to be retrained and resourced to treat digital violence as legitimate, high-stakes harm to the survivor. We cannot draft modern laws and then hand them to institutions operating with the same assumptions that produced a one-per-cent conviction rate.
When I look at a deepfake of myself, I see the culmination of a culture that has decided, collectively and quietly, that women’s bodies are public property, fair game to be repurposed and violated at the whim of any bad actor with an internet connection.
I see a justice system that has asked women to trust the system while giving us so little reason to do so.
If Canada wants to earn that faith back, if we want to call ourselves a country that values the dignity of women as much as we value the convenience of technological progress, then we must pass laws that are enforceable and offer more than a paper-thin shield. Women in this country deserve stronger protection than that.

