The privacy watchdog is looking into a partnership between Australia’s largest radiology chain and a local startup that involved using patients’ medical information to train artificial intelligence models, potentially without their consent.

The Office of the Australian Information Commissioner (OAIC) has confirmed it has launched preliminary inquiries with the I-Med Radiology Network over its transfer of hundreds of thousands of patient medical scans to Australian startup Harrison.ai, following reports by Crikey that this was done without the consent of patients and potentially in breach of Australian privacy laws.

A preliminary inquiry involves information gathering and comes before the launch of a potential investigation.

I-Med, which operates 250 radiology clinics in Australia, partnered with Sydney-based tech firm Harrison.ai in 2019.

This saw the creation of Annalise.ai, which offers an AI-based tool that reads and analyses chest x-rays to detect certain conditions.

The partnership between the two organisations involved Harrison.ai using patient medical scans collected by I-Med to train this AI-based system.

Crikey reported that there is no public information showing that patients consented to their private health data being used for this purpose.

The federal government and the Greens have since raised concerns about these revelations, and the OAIC has launched an initial probe into I-Med’s data privacy practices.

“The OAIC is making preliminary inquiries with I-Med Radiology Network to ensure it is meeting its obligations under the Australian Privacy Principles in relation to reports it has provided private medical scans to a third-party entity for the purpose of training an artificial intelligence model,” a spokesperson for the OAIC told Information Age.

“Under the Australian Privacy Principles, entities governed by the Privacy Act must have a clearly expressed and up-to-date policy about their management of personal information, take reasonable steps to notify individuals of how their personal information is used, and can only use or disclose personal information for the primary purpose for which it was collected, or for a secondary purpose if an exception applies.”

Commonwealth Attorney-General Mark Dreyfus also flagged concerns about the use of private medical data for these purposes.

“The use of health information to train AI models raises privacy concerns about the lawful handling of personal information,” a spokesperson for Dreyfus told Crikey.

“Reform of Australia’s privacy laws is crucial to ensuring appropriate safeguards are in place for AI and other rapidly developing technologies.”

‘World-leading’ technology

Harrison.ai was founded by brothers Dr Aengus Tran and Dimitry Tran in 2018, and its first product used AI to assist with the selection of embryos for IVF.

The company says its Annalise.ai product was trained on 782,000 unique chest x-ray studies that were “sourced from broad datasets from three continents”, and describes it as the “world’s most comprehensive AI clinical decision-support solution” for chest x-rays, able to identify up to 124 findings.

This AI tool is now available to a third of all radiologists in Australia, and in clinics across APAC, Europe, the UK, the Middle East and the US.

At the start of 2022 Harrison.ai closed a $129 million Series B funding round, with investors including Blackbird Ventures, Atlassian co-founder Scott Farquhar’s Skip Capital, and Ramsay Health Care.

Last year Tesla chair Robyn Denholm, who is also an operating partner at Blackbird Ventures, joined the Harrison.ai board.

In announcing the partnership with I-Med, Harrison.ai said the Annalise.ai venture would “develop world-leading prediction engines for key imaging modalities”.

De-identified data

In a statement following the Crikey reports last week, a spokesperson for Harrison.ai said that all data used by the startup to train its AI models is de-identified.

“As a clinician-led company, Harrison.ai takes patient safety and data privacy very seriously,” the spokesperson said.

“Our products are improving quality of care for patients around the world by ensuring earlier and more accurate detection of diseases.

“The data we receive for research and development is de-identified, cannot be re-identified, and is encrypted.

“Such medical research and development is done in compliance with all relevant regulations, including privacy laws.”

While Harrison.ai maintains the data was de-identified, the OAIC spokesperson cautioned that “entities should be aware that de-identification is context dependent and may be difficult to achieve”.
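To see why de-identification is described as context dependent, consider a hypothetical linkage attack. The sketch below uses entirely invented records and names, and does not reflect any detail of I-Med’s or Harrison.ai’s systems: removing direct identifiers such as names and patient IDs can still leave quasi-identifiers (birth year, postcode, sex), and if an outside dataset pairs those same attributes with names, a unique match re-identifies the “anonymous” record.

```python
# Hypothetical illustration of linkage re-identification; all data is invented.
from dataclasses import dataclass


@dataclass(frozen=True)
class QuasiIdentifiers:
    """Attributes that often survive removal of names and patient IDs."""
    birth_year: int
    postcode: str
    sex: str


# "De-identified" scan metadata: direct identifiers already stripped.
deidentified_scans = [
    QuasiIdentifiers(1957, "2000", "F"),
    QuasiIdentifiers(1984, "3121", "M"),
]

# A hypothetical external dataset (e.g. a public register) that still
# links names to the same combination of attributes.
external_register = {
    QuasiIdentifiers(1957, "2000", "F"): ["Jane Citizen"],
    QuasiIdentifiers(1984, "3121", "M"): ["John Smith", "Lee Wong"],
}

for scan in deidentified_scans:
    candidates = external_register.get(scan, [])
    if len(candidates) == 1:
        # A unique match defeats the de-identification.
        print(f"{scan} re-identified as {candidates[0]}")
    else:
        print(f"{scan} matches {len(candidates)} people; not uniquely identifiable")
```

Whether such a match is possible depends on what other datasets exist and how rare each combination of attributes is, which is why de-identification that holds in one context can fail in another.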

Australian privacy laws restrict the disclosure of personal data unless it is for the purpose for which it was collected, or for a secondary purpose that is “reasonably expected”.

The use of personal data to train AI models is currently a legal grey area.

“Given the unique characteristics of AI technology, the significant harms that may arise from its use and the level of community concern around the use of AI, in many cases it may be difficult to establish that such a secondary use was within reasonable expectations,” the OAIC spokesperson said.