Navigating AI Scribe Guidance Across Canada

Within the past year, the Privacy Commissioners of Alberta, Ontario, and British Columbia have each released comprehensive guidance on AI scribes in healthcare. Together, these documents establish a detailed framework for responsible implementation—and highlight where organizations commonly go wrong.


Why Regulators Are Paying Attention

AI scribes capture far more personal information than traditional notetaking methods. As BC's guidance explains, what distinguishes these tools is that "there are many processes taking place with an AI scribe that are more complex, potentially more privacy invasive, and less obvious to the average person."

The privacy implications include biometric data embedded in voice recordings that can identify speakers and reveal characteristics such as age, gender, and emotional state; extraneous personal details that clinicians taking handwritten notes would simply omit; information about companions and third parties present during appointments; and, in open-concept clinical environments, nearby conversations captured inadvertently.


Ontario's guidance adds another dimension: these tools are rapidly evolving. What began as transcription tools now increasingly offer diagnostic recommendations, referral letter generation, and treatment suggestions. This "function creep" creates ongoing compliance challenges that healthcare organizations must actively monitor.


Where All Three Provinces Agree 

Patient Consent Before Recording

All three commissioners require meaningful patient consent before activating an AI scribe. BC is particularly emphatic that implied consent is inappropriate given "the novelty, complexity, and rapid pace of change of AI scribe technologies." Alberta specifically requires written consent because AI scribes use a recording device that isn't visible to patients. Ontario emphasizes that consent must be "knowledgeable"—patients must genuinely understand the technology.


Consent processes must explain what the AI scribe is and how it works, all purposes for which information will be collected and used, whether recordings are retained and for how long, what information goes to the vendor for their own purposes, and that patients can decline without affecting their care. Critically, consent is ongoing—when capabilities change or vendor policies shift, patients need updated information.
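One way to operationalize these consent elements is to log each consent conversation as a structured record and flag when re-consent is needed. The sketch below is illustrative only; the field names and the version-based refresh trigger are assumptions, not prescribed by any of the three guidance documents.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class ScribeConsentRecord:
    """Illustrative record of an AI-scribe consent conversation."""
    patient_id: str
    consent_date: date
    tool_explained: bool            # what the scribe is and how it works
    purposes_explained: bool        # all collection and use purposes
    retention_explained: bool       # whether/how long recordings are kept
    vendor_use_explained: bool      # what the vendor receives for its own purposes
    decline_option_explained: bool  # care is unaffected if the patient declines
    written_consent_obtained: bool  # Alberta requires written consent
    tool_version: str               # version the patient consented to

    def is_complete(self) -> bool:
        # Consent is meaningful only if every required element was covered.
        return all([
            self.tool_explained, self.purposes_explained,
            self.retention_explained, self.vendor_use_explained,
            self.decline_option_explained,
        ])

    def needs_refresh(self, current_version: str) -> bool:
        # Consent is ongoing: when scribe capabilities change,
        # patients need updated information.
        return current_version != self.tool_version
```

In practice, the refresh check would also fire on vendor policy changes, not just version bumps; a simple version string is used here to keep the sketch self-contained.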


Mandatory Clinician Review

The commissioners share deep concern about AI scribe accuracy. Ontario's guidance notes that hallucination rates in large language models "may be getting worse, even as new systems become more powerful." BC lists specific errors to expect: "hallucinations (making up content that isn't correct to fill gaps in the data), omissions (leaving out relevant information), misinterpretations and misspellings (e.g., names, diseases, medications), and biases."


All three provinces warn about automation bias—the documented human tendency to over-rely on automated outputs. BC notes that "once clinicians become comfortable with an AI scribe, and if they find it to be accurate most of the time, there is a risk that the quality of their reviews may decline."


The practical implication: policies requiring review aren't enough. Organizations need ongoing audits to ensure clinicians are actually catching errors, not rubber-stamping AI outputs.


Vendor Verification Required

All three commissioners express skepticism about accepting vendor privacy claims at face value. BC advises organizations not to "take a vendor's word for it that they are PIPA compliant" and warns there is "no accreditation program in Canada that assesses or approves companies' claims of legal compliance." Alberta warns about contracts that reference laws that don't actually apply—like PIPEDA or HIPAA when provincial health information legislation governs.


Where the Guidance Differs

Privacy Impact Assessments

Alberta is the most prescriptive: PIAs must be submitted to the Commissioner before implementation, with detailed specifications including data flow diagrams and vendor agreements. Ontario and BC recommend PIAs as best practice to "identify risks early" and "demonstrate due diligence," but don't require submission for regulatory review.


Unique Provincial Concerns

Alberta uniquely addresses mobile device scenarios—tablets moving through practices could "potentially collect, use or disclose other non-essential information from patients waiting in the office." Ontario addresses "Shadow AI," describing a real breach in which a physician's personal AI scribe transcribed hospital rounds without authorization after the physician had left the hospital's employment. BC provides the most nuanced analysis of clinician privacy, distinguishing work product information from employee personal information requiring consent.


Bias Frameworks

Ontario provides the most comprehensive framework, identifying three categories: systemic biases embedded in training data reflecting historical inequities, human biases introduced by developers and data curators, and computational biases arising from model design. Ontario notes that models trained in specific regions "may not perform as well in different regions comprised of more diverse populations with varying languages and accents." Healthcare organizations should monitor for differential accuracy rates across patient populations.


Implementation Guidance

Before selecting an AI scribe, define specific purposes the tool will serve, assess what features are needed versus available (and whether unneeded features can be disabled), map how information will flow between patients, the scribe, the vendor, and clinical systems, and plan consent processes.


When evaluating vendors, request documentation of accuracy rates evaluated in diverse clinical settings, ask where data is processed and stored, review security assessment practices, clarify what happens to data when contracts end, and ensure vendors will notify you before updates that affect privacy practices.


For ongoing monitoring, conduct regular accuracy audits of AI outputs, assess staff adherence to review requirements, track reported errors and vendor responses, review vendor policy changes, and reassess privacy implications when scribe capabilities are updated.
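The monitoring steps above can be partly automated. Below is a minimal sketch of a random audit sampler and findings tally, using the error categories BC's guidance lists; the function names and the 5% sampling rate are assumptions for illustration, and a real program would stratify samples by clinician and patient population to detect differential accuracy rates.

```python
import random
from collections import Counter

# Error categories BC's guidance says to expect in AI-scribe output.
ERROR_TYPES = ("hallucination", "omission", "misinterpretation", "bias")

def sample_notes_for_audit(note_ids, rate=0.05, seed=None):
    """Randomly select a fraction of finalized notes for manual audit.

    A fixed seed makes the sample reproducible for audit trails.
    """
    rng = random.Random(seed)
    pool = list(note_ids)
    k = max(1, round(len(pool) * rate))
    return rng.sample(pool, k)

def summarize_findings(findings):
    """Tally audit findings, given as (note_id, error_type) pairs,
    into per-category counts for trend reporting."""
    counts = Counter(error for _, error in findings if error in ERROR_TYPES)
    return dict(counts)
```

Tracking these counts over time is one concrete way to detect the review-quality decline that automation bias predicts: a falling error count in audits while vendor-reported error rates hold steady suggests clinicians are rubber-stamping rather than catching errors.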


Conclusion

The guidance from Alberta, Ontario, and British Columbia reflects a shared understanding: AI scribes present genuine opportunities to reduce administrative burden, but also significant privacy risks that require careful management. The emphasis across all three provinces on meaningful consent, mandatory human review, vendor scrutiny, and ongoing monitoring provides a clear framework for responsible implementation.


Healthcare organizations that treat these requirements as fundamental to their AI scribe programs—rather than compliance checkboxes—will be better positioned to realize the technology's benefits while protecting patient privacy.


As these technologies become more pervasive, we will continue to watch how regulators' expectations and guidance evolve.
