As artificial intelligence (AI) becomes more common across a wide range of Canadian sectors and industries, doctors say the technology – if used carefully and thoughtfully – can help address some of the inefficiencies in Canada’s strained health-care system.
One of the most common AI tools currently being used by physicians is a scribe. Without it, conversations between patients and clinicians take longer and can be distracting, says Dr. Muhammad Mamdani, former VP of data science and advanced analytics at Unity Health Toronto.
A scribe uses AI to transcribe and summarize conversations between patients and doctors, creating medical notes that physicians otherwise would have to make themselves as they listen to their patient.
In an interview with CTVNews.ca in November, Mamdani said the idea of using an AI scribe at his family health practice was brought to him a number of years ago by a trainee doctor who argued that using the tool would save time while allowing him to provide better care to his patients.
Mamdani explained that when doctors have to take medical notes themselves, they must do so while simultaneously attempting to focus as much as possible on the patient and their health concerns.
“So, I’m constantly looking at the patient, saying, ‘hold on a minute,’ and then I’m typing in stuff, and then I go back and then I’m typing stuff in, and it’s really distracting, not only for you but for patients as well,” he said. “It’s not a great experience.”
‘Frees up 3 to 4 hours’
Since his practice adopted scribes, doctors have been able to focus more intently on their patients, said Mamdani.
“So, what happens is physicians spend much less time on this boring task of typing and note generation, and more time focusing on a patient, and studies have shown that it’ll actually free up about three to four hours per week for physicians,” he noted.
Mamdani acknowledged that the technology isn’t perfect: a scribe may mishear something a patient says and introduce errors into the notes it generates, which is why physicians still need to review any AI-created notes.
“But even with that, (doctors) found that there’s considerable time savings, and that 80 per cent-plus of physicians actually are totally fine; the benefit they gain is much more than the hassle of reviewing notes,” he said.
‘Predicts if somebody is going to die’
While tools like scribes aim to help ease the administrative burden faced by health-care professionals, there are clinical applications too, Mamdani said.
At St. Joseph’s Health Centre and St. Michael’s Hospital in Toronto, both operated by Unity Health, an AI model is used to monitor all of the hospitals’ general internal medicine and general surgery patients, he explained.
“It tracks patients and monitors them every hour, on the hour, and it typically ingests about 150 to 170 parameters, and it predicts if somebody is going to die or go to the ICU in the next 48 hours,” said Mamdani.
“As soon as it reaches a high-risk threshold, it’s automated to the patient medical team, so our protocol is the medical team has to come see the patient within one hour of being paged.”
The tool was implemented by Unity Health around five years ago, Mamdani said, and according to a paper published last year in the Canadian Medical Association Journal (CMAJ), it has led to a 26 per cent reduction in unexpected mortality.
“These (tools) essentially help save lives,” Mamdani said.
AI ‘cannot replace us’
AI has also changed the way patients seek medical advice. With the rise of popular AI chatbots like ChatGPT, some Canadians have begun using them to help diagnose certain health problems rather than going to a doctor.
But although chatbots can be a useful way to find information, AI is not a substitute for in-person care from a licensed professional, says Dr. Margot Burnell, president of the Canadian Medical Association (CMA).
“I think we need to remember that what is unique to health care is building a trusting relationship with a patient, and so from that perspective, AI will be a tool, but it cannot replace us,” she told CTVNews.ca in a November interview.
“We need that empathy, we need that ability to understand a patient’s goals and care, and to accompany them on the journey to make those decisions that are best for them.”

Burnell also argued that it’s when Canadians don’t have access to a primary care provider that they tend to seek out medical advice from other channels, such as AI chatbots or online sources.
“We need to encourage access to primary care health teams, because when individuals do not have access to a care provider that they trust and have built a relationship with, then they will go to social media to get their advice,” she said.
“We know in our surveys that about a third of people have done that, and we know that one in five of those individuals have reported some sort of harm; either a delay in diagnosis, complications of the information they decided to follow, or angst within their family unit of seeking advice from there.”
‘Inefficiency in the system’
Mamdani agrees that AI tools will never be able to replace physicians, but he said it’s important to acknowledge the strain Canada’s health-care system is currently under, and to look at ways AI could help solve problems.
“If you look at the current status of the health-care system, we have long wait times. A lot of people don’t even have a family doctor; it’s tough to get one,” he said.
“If we take a look at mental health supports, two-and-a-half million Canadians don’t receive the mental health supports they need.”
Mamdani also pointed to a 2024 report from the Canadian Institute for Health Information (CIHI), which found that 15 per cent of visits to emergency departments between April 2023 and March 2024 were “for conditions that could potentially have been managed in primary care.”
“There’s a lot of inefficiency in the system right now, and it’s hard to manage all of it,” he explained. “So, can AI actually help alleviate some of that? Because we’re already working the job of three or four people.”
However, in whatever capacity health-care providers adopt AI, they need to do so carefully, said Burnell, as the technology doesn’t come without risks.
One of those risks is privacy and the need to protect sensitive medical information. “Who owns the data? How has the data been obtained? Has it been obtained with appropriate consent? That is critical,” Burnell said.
She also noted that the impacts of widespread AI use in the health-care system need to be monitored closely, saying “we need to make sure that it does not cause increased inequities.”
The CMA points out on its website that AI is “only as good as the data it’s based on,” noting that if an AI model only has access to data from a group of people who share similar characteristics, such as ethnicity or age range, it will have blind spots.
“I think physicians are excited about AI, but they do have concerns that I think we all need to respect,” Burnell said. “Many are within the profession, but most of them are very common amongst our patients and our citizens, so we need to do this in a thoughtful manner.”