A wide-ranging report by Ontario’s Auditor General has found that the province’s artificial intelligence (AI) strategy still lacks several “key components,” that thousands of civil servants are using unsafe AI websites, and that AI notetaking software used by some family doctors and other healthcare professionals produces inaccuracies.
The audit period ran from January to November 2025 and took a broad look at the use of AI throughout the Ontario government. “OPS staff can access unsafe and unsecured AI websites. The Ministry had not implemented security controls to prevent its staff from inadvertently uploading personal or sensitive information to GenAI websites,” Ontario Auditor General Shelley Spence said in a report tabled Tuesday on the use of AI in the Ontario Government.
Legislation that came into effect in 2025 tasked the Ministry of Public and Business Service Delivery and Procurement with developing an AI playbook and policies for the Ontario Public Service (OPS). As of September 2025, key AI systems were in differing phases of implementation across the OPS.
But the report found a number of issues with the implementation of appropriate and secure AI tools across government.
AI notetaking systems for health care not always reliable
When it comes to AI Scribe, the AI notetaking software made available to healthcare professionals through government-approved vendors, the auditor found that the generated notes sometimes contain serious errors.
For example, most AI systems that were tested generated notes capturing a different drug than what was prescribed in test conversations. The AI systems also sometimes fabricated information and made incorrect suggestions for the patients’ treatment plans, such as referring the patient for therapy or ordering blood tests even though there was no mention of those steps in the simulated recordings.
Speaking with reporters, Spence noted that during a recent appointment with her own doctor, she noticed they were using the software and asked them to make sure they were checking the AI-generated notes.

When assessing vendors, the auditor noted in her report, the ministry assigned relatively low weighting to accuracy of notes.
“Inadequate weightings could result in the selection of vendors whose AI tools may produce inaccurate or biased medical records or lack adequate protection to safeguard sensitive personal health information,” the auditor said.
The auditor also noted that there was no requirement for users to confirm they had checked over the AI-generated notes after a conversation, and recommended that such a confirmation be made a mandatory part of the software.
Responding to the report, Minister of Public and Business Service Delivery and Procurement Stephen Crawford told reporters that humans are still part of the equation.
“The key to focus on here is that doctors make any final decisions on prescribing medications or anything to that effect,” Crawford said.
He emphasized that the audit caught serious AI “hallucinations” in notetaking during the testing phase for products rather than real-world use.
He also noted that the notetaking AI saves doctors around five hours per week of clerical work on average.
“It’s not about replacing workers. It’s about enhancing their jobs and making decisions more efficiently,” he said.
Government staff accessing unsafe AI sites
The auditor found that the ministry had not blocked OPS staff from accessing numerous unsafe and unsecured AI websites on their government devices.
The auditor also found the ministry had not implemented security controls to prevent OPS staff from inadvertently uploading Ontarians’ personal information – such as health cards, driver’s licences and credit card information, as well as potentially sensitive corporate data, such as vendor contracts and invoices – onto those AI websites.
That leaves the door open to those websites retaining and using personal or sensitive information entered by staff to train the sites’ large language model (LLM) software, the auditor said.
“As AI becomes more widely used, strong oversight is essential to maintain privacy, fairness and public trust,” Spence told reporters. “We found that OPS staff were accessing unsafe or unsecure AI websites on government devices without adequate controls to prevent sensitive information from being uploaded.”
Between April and August 2025, 12,000 OPS staff accessed approximately 400 AI-related websites, according to data collected for the OPS through Microsoft Defender.
“Of these websites, 244, or about 60%, were deemed unsafe or unsecured since they had a score of five or lower, and not all of these websites were work-related,” the auditor wrote. “We found that 15% of these websites with a score of five or lower were also not work-related and featured inappropriate content.”

While the ministry has had a comprehensive training course on the responsible use of AI since January 2024, the course is not mandatory and just three per cent of OPS staff had completed it as of August.
“By allowing staff to use unapproved, unsafe and unsecured AI websites, and not preventing the inadvertent uploading of data to these sites, the Ministry has not implemented sufficient controls to prevent potential serious data misuse by third parties,” the auditor wrote.
“Even a single incident of one employee uploading personal or sensitive data or clicking a malicious link on these websites can lead to data exposure, credential theft and system outage.”
While Microsoft Copilot Chat is the only GenAI website approved as secure for OPS staff, its use accounted for just six per cent of GenAI websites accessed by OPS staff.
The auditor also found there were security risks associated with the use of AI websites on non-default browsers, such as Google Chrome and Mozilla Firefox, rather than the default Microsoft Edge.
The government says it has since taken steps to block access to unsafe AI sites on OPS devices.
Crawford said that government employees are now using Copilot Chat and that they will be “spoken to” by IT if they are found using non-approved AI sites.
However, he added that the government is keen to continue expanding the use of AI in order to make government more efficient.
“This is transformational technology. We’re still in the very early stages. It’s a global arms race, and I believe Ontario needs to be a leader in this,” he said.
More testing required for AI system to verify identity
While the government is planning to roll out the use of Document Verification Service (DVS) for Ontarians to confirm their identities to access government services, the auditor found that the ministry has yet to address gaps identified through test reports of the system.
The system requires users to take a live video and perform actions such as smiling or moving their face to show that they are a real person.
But the auditor found the sample size used in testing was too small and was “not representative” of the diverse demographics of Ontario’s population.
“This omission could leave the system open to generating decisions that disadvantage certain demographic groups since it uses technologies such as the facial recognition of different demographic users,” the auditor found.
“As a result, certain groups may experience higher rejection rates or delays when verifying their identities online to access government services.”
Other gaps also had yet to be addressed and the auditor found the ministry had no plans to follow up with the vendor as of the audit period.
The auditor said the government has accepted most of her recommendations aimed at ensuring more secure and appropriate use of AI.


