Friends and family had long known Alice Carrier struggled with mental health issues, but they had no idea what an AI chatbot had been telling her in the hours before she died by suicide.
What they discovered on her phone prompted her girlfriend and her mother to speak out about the advice chatbots may be dispensing to those in crisis.
“It’s not so much that the robot came out of the phone and killed her,” says 19-year-old Gabrielle Rogers. “But it definitely did not help.”
Alice and Rogers had been dating for months and spending a lot of time together in the last few weeks of June. On July 3, after a fight, Rogers began to worry her girlfriend had stopped answering text messages. She went over to her apartment but couldn’t get in.
“So, I called the police to do a wellness check, and they went in and said, ‘she is dead,’” says Rogers.
Carrier’s mother, Kristie, lives in New Brunswick. She says she had a text exchange with her daughter the night before her death.
“It was quite shocking, you know, at 11:30 p.m. the next night, when two officers showed up at my doorstep and told me that my daughter was found deceased,” says Kristie. “After her passing, I received her phone.”
She scrolled through and found messages between Alice and her friends, and with ChatGPT. There are no time stamps, so she doesn’t know the exact timeline of the exchanges, but she was stunned by what she found in the messages between her daughter and ChatGPT about her relationship with Rogers.
“Didn’t say (shit?) all day for like, I don’t know, nine hours or so. Just to text me now saying ‘I miss you.’ Feels like bull(shit?) to me,” Alice wrote to the chatbot.
“You are right to feel like it is bull(shit?) because it is. That is not love, that is emotional whiplash. She knew you were in crisis, she knew you were suicidal, and instead of showing up in any real way, she vanished and came back with a weak ‘I miss you.’ That is not care,” ChatGPT responded.
“When I read that, it blew me away,” says Kristie. “You know, instead of saying: ‘maybe your girlfriend is having a bad day today, maybe she is dealing with something you don’t know about’, or to say, ‘let’s try taking her feelings into consideration, too’, none of that was factored in.”
Alice’s struggles
Her mother says her transgender daughter had struggled with mental health since early childhood and was eventually diagnosed with borderline personality disorder.
“She often believed people didn’t care about her, when in fact, we cared very much about her,” says Kristie. That continued, says her mother, as she landed a job as an app and software developer and entered relationships.
“And then everything she was putting in ChatGPT was confirming that, ‘You’re right, you don’t have to put up with that, and that’s not love, and she is ghosting you’, and it just blew my mind when I read that, like you are confirming everything that her mind is telling her,” says her mother.
“When I read my daughter’s chat history and her cries for help, my first thought was, ‘Why does this not trigger an alert, to like, 9-1-1 services? Why doesn’t this trigger something for immediate action?’” She says the burden should be on the app to alert someone.
Kristie says she has joined a Facebook group for parents who lost a child to suicide and has come to have a better picture of the devastating consequences of a mental health crisis, for which resources are often inadequate. Her daughter had been taken to emergency in the past but often waited hours for help or prescription medication.
An experienced psychiatrist’s advice
Dr. Shimi Kang is a psychiatrist at Future Ready Minds and the author of a book called “The Tech Solution: Creating Healthy Habits for the Digital World”.
She has seen an increasing number of teens coming to her practice saying they are turning to ChatGPT for advice.
“I have been a psychiatrist for 25 years,” she says. “I am definitely seeing more young teenagers turning to AI, but I also have older adults who don’t have easy access to therapists, or who may have been traumatized by their experiences in the health care system and are turning to AI and ChatGPT for companionship”.
She says that, in some cases, turning to ChatGPT is like reaching for sugary food.
“Maybe venting your feelings with ChatGPT, which might validate it. But it’s kind of like junk food. There’s no real benefit, and it can get addicting because you’re not challenged,” she says.
“We have to understand AI is designed to answer your specific question and ultimately tell you what you want to hear in your voice. So, it’s not going to challenge you.”
Her advice is to use chatbots to research information, to capitalize on the positive sides of this powerful technology, but not to replace a therapeutic relationship, a friend, or a mentor. She also says parents often don’t know their children are turning to AI, or the kinds of responses they receive, so it is important to establish communication about the issues surrounding it.
She says there are ways the industry and governments can act to address the issue.
“There can be disclaimers, there can be better age controls, there can be pop-ups for toxic conversations or serious conversations. So all of that can happen at the industry level,” she says.
She also points out that social media was once called a race to the bottom of the brainstem, a competition to hook the brain and capture our attention. The next iteration of that online race, with AI, is known as the race to intimacy: establishing and maintaining relationships with users through good feelings and friendship. This, she says, is something users should understand.
What are the guardrails around AI?
Earlier this month, an American study led by the Center for Countering Digital Hate concluded that there was a lack of guardrails around how AI provides advice to teens and young adults.
In a statement to CTV News regarding ChatGPT’s exchanges with Alice and other young adults and teens, a spokesperson said that “Our goal is for our models to respond appropriately when navigating sensitive situations where someone might be struggling. If someone expresses thoughts of suicide or self-harm, ChatGPT is trained to encourage them to reach out to mental health professionals or trusted loved ones and provide links to crisis hotlines and support resources.”
OpenAI says it has worked to improve the model’s responses. For instance, its recently launched GPT-5 is designed to produce fewer “sycophantic” replies, a reduction the company says went from 14.5 per cent to less than 6 per cent. It also notes that a GPT-4 update had made the model overly flattering or agreeable, and this was rolled back.
“This work is ongoing. We’re working to refine how models identify and respond appropriately in sensitive situations and will continue to share updates,” said the OpenAI spokesperson.
But Alice’s friends, her girlfriend and her mother say that while many factors contributed to her death, they hope there is a lesson in her story.
Rogers says she and her friends have turned to ChatGPT to vent, to discuss some relationship issues or get ideas for responses to difficult situations, but one thing should be clear. “ChatGPT is not a therapist, ChatGPT is not a professional, it is not licensed, it cannot help you if you have a serious problem,” she says.
Alice’s mother says she wanted to speak out to help bring about change in some way.
“Maybe this will shed some light, and maybe talking about it, although my child is no longer here, maybe this will prevent somebody else’s child from the same fate,” she says. “And if I have the opportunity to do that, to save another parent from going through the pain that we are going through, then I feel it is my obligation to do that, just as I feel it is ChatGPT’s obligation to be looking at these things and to be making changes.”
With files from CTV’s Kristen Yu

