The next time you go to your GP’s office, you may be asked for permission to let a generative AI (genAI) tool record your entire conversation with your doctor.

Get used to it: ‘AI scribes’ are set to become commonplace after Australia’s medical regulators recently endorsed them – with caveats.

That endorsement came last month as new formal guidance from the Royal Australian College of General Practitioners (RACGP) – the national body that sets practice standards for GPs – and a statement from the Australian Health Practitioner Regulation Agency (AHPRA), the national body that manages medical registrations and sets rules about doctor behaviour.

The RACGP, for its part, advised caution in embracing AI scribes – popular platforms include Lyrebird Health, PatientNotes, Otter.ai, TurboScribe, Nabla and DeepScribe – and warned that doctors should consider clinical, privacy, security, and workflow issues to head off “known issues with AI products” and “unforeseen legal problems [that] might also arise as their use increases.”

Such problems include the potential for scribes to “mishear” the names of symptoms, medicines, or conditions; the lack of relevant supporting information such as pathology reports; and the need for doctors to review the genAI systems’ output before committing it to the patient’s health record.

“As these tools gain popularity and their use increases,” the guidance warns, “there is potential for GPs to become over-reliant on their use and pay less attention to critical details or forgo the vital process of checking the output generated by the AI scribe, resulting in errors that could affect patient safety.”

What is an AI scribe?

AI scribes – aka digital scribes, virtual scribes, ambient AI scribes, AI documentation assistants, and digital/virtual/smart clinical assistants – use an app, either on the doctor’s computer or their smartphone, to record your consultation.

They’re different from conventional voice-to-text dictation services, which simply transcribe audio recordings into text: AI scribes use genAI tools to interpret what they’re hearing – extracting relevant clinical details, such as blood pressure readings and observations made during examinations, while ignoring the details of your recent trip to Europe.

Because they’re good at writing coherent text, the systems readily generate summaries of the consultation, referral letters to specialists such as psychiatrists and surgeons, and – if those specialists are also using the tools – correspondence to be sent back to GPs.

But AI makes stuff up – why would a doctor use this technology?

Once they’re done seeing patients, doctors spend hours per day on paperwork, with RACGP president Dr Nicole Higgins noting that “GPs are increasingly reporting the administrative workload and associated stress among their greatest concerns…. The administrative burden on GPs needs to be reduced urgently.”

One doctor gushed that the AI scribe was “a game changer” during a recent two-month study in which 10,000 clinicians and staff across the US Kaiser Permanente Medical Group used the tools in 303,266 patient encounters.

AI scribes produced “high-quality clinical documentation for physicians’ editing” – with overall quality rated at 48 out of 50.

And while genAI ‘hallucinations’ were rare in practice, they did happen: one doctor, for example, mentioned that a patient had issues with their hands, feet, and mouth – which the AI scribe recorded as a diagnosis of hand, foot, and mouth disease.

Avoiding such issues is the main reason doctors are advised to carefully review AI scribes’ output for accuracy before committing it to a medical record. Yet concerns about their accuracy “seem to assume that GPs’ current note-keeping is adequate and accurate,” Melbourne GP Dr Nicholas Francis Carr wrote as the RACGP guidelines were released, noting that such errors are hardly unique to AI.

As a 12-month Lyrebird user, Carr said, “I am a poor typist, and my notes were previously too brief and often omitted important information…. Lyrebird is (mostly) very accurate and records way more detail than I used to.”

“This is important not just from a clinical but also medico-legal perspective. And I get to properly listen to my patient, rather than incompetently pecking at the keyboard.”

But what about my privacy?

If you’ve been following the development of genAI technology, the thought of letting it hear your private medical conversations may be concerning: genAI, after all, can not only make up facts but can be manipulated and may leak confidential information to other users.

Unlike general-purpose tools such as OpenAI’s ChatGPT and Google’s Gemini, however, the genAI models behind medical scribes operate within closed, purpose-built platforms – and their makers are alert to privacy concerns.

Audio recordings are typically deleted once the transcription is created, meaning you don’t have to worry about the theft of audio of you revealing medical concerns to your doctor.

“AI transcription software has the potential to increase the speed and accuracy of medical records generation,” professional indemnity insurance provider MIPS wrote in its assessment of AI scribes, “however… it is important that potential medicolegal risks are recognised and addressed.”

The RACGP also warns of other security and privacy risks – most importantly, advising that doctors get your explicit written consent before recording your consultation, since recording a private conversation without consent can be a criminal offence.

Doctors should use multi-factor authentication to prevent outsider access to their AI scribes, and they must review software vendors’ T&Cs to ensure they’re not demanding blanket permission to collect and onsell patient data to third parties – something that many companies try to sneak past the keeper.

The RACGP also advises doctors to be aware of whether their AI scribe stores or sends data overseas – which could violate Australian laws protecting healthcare information if the host country’s privacy laws aren’t as strict as ours.

“This clarity from AHPRA and the RACGP is a game-changer for the healthcare industry,” said physiotherapist and PatientNotes CEO Darren Ross.

“It provides a solid framework for practitioners to confidently embrace AI tools that can significantly enhance patient care and reduce administrative burdens.”

“For patients, it means that when they encounter AI-powered tools they can be confident that these technologies are being used responsibly, with their privacy protected and their best interests in mind.”