The Royal Canadian Mounted Police confirmed Thursday that they have been using controversial facial recognition software to investigate online child sexual exploitation cases.

In a news release, the RCMP said that their National Child Exploitation Crime Centre (NCECC) has been using Clearview AI software for the past four months.

"The NCECC has two licenses for the Clearview AI application and has used it in 15 cases, resulting in the successful identification and rescue of two children," the RCMP said.

"Only trained victim identification specialists in the NCECC use the software primarily to help identify, locate and rescue children who have been or are victims of online sexual abuse."

The Mounties join a growing list of Canadian police services, including the Toronto police, that have confirmed using Clearview AI software.

The RCMP said it would work with the privacy commissioner to develop guidelines and policies that adhere to regulations.

"The Internet has changed the way child sexual exploitation offences are committed, investigated and prosecuted, and Clearview AI is only one of many tools/techniques that are used in the identification of victims of online child sexual abuse," the Mounties said in a release.

Canadian privacy officials have launched an investigation into the use of Clearview AI software in the wake of recent media reports.

Privacy officials said the reports had "raised questions and concerns about whether the company is collecting and using personal information without consent."

The news comes a day after the company behind Clearview AI was hit with a data breach. The Daily Beast reported Wednesday that the company sent a notification to its customers that an intruder had gained unauthorized access to its client list.

In a statement, the company's lawyer Tor Ekeland said: "Security is Clearview's top priority. Unfortunately, data breaches are part of life in the 21st century. Our servers were never accessed. We patched the flaw and continue to work to strengthen our security."

Clearview AI has been under scrutiny for collecting billions of public images from the Internet to build a proprietary image search tool, which it sells to law enforcement.

The company became the subject of a New York Times report last month, which revealed that more than 600 law enforcement agencies, including the FBI and the Department of Homeland Security, as well as a handful of financial companies, had been using the software.

Toronto police said they have halted their informal testing of the software while Ontario's Information and Privacy Commissioner Brian Beamish conducts a review.

"As we've said, the Toronto Police Service has been engaged in a comprehensive review of its use of Clearview AI. We continue to work with the Information and Privacy Commissioner and the Ministry of the Attorney General, and we will engage other stakeholders going forward if needed," Toronto police spokesperson Meaghan Gray said in a statement.

Police services in Halton, Peel, and Durham have also confirmed that they have stopped testing Clearview AI.

Beamish has said his office is collaborating with its federal and provincial counterparts "to develop guidance on the use of emerging biometric technologies, including facial recognition, by the private sector or public sector organizations."

According to a report by BuzzFeed News, Canada is Clearview's largest market outside of the U.S. The report said more than 30 law enforcement agencies in the country have access to the software.