A parliamentary inquiry into generative artificial intelligence has suggested products of tech giants such as OpenAI, Amazon, Meta, and Google should be treated as “high-risk” under dedicated AI laws.
On Tuesday, the Senate Select Committee on Adopting Artificial Intelligence tabled a report on the “opportunities and impacts” coming from Australia’s uptake of AI.
The 222-page report put forward 13 key recommendations on the domestic adoption of AI, with notable consideration given to the definition and treatment of “high-risk” uses of AI.
The committee suggested the government include general-purpose AI models, such as large language models (LLMs), in a list of “high-risk AI uses”.
Labor senator and committee chair Tony Sheldon explained a key recommendation was for “general-purpose AI models” such as OpenAI’s ChatGPT, Meta’s Llama, and Google’s Gemini, to be explicitly designated as high-risk by default – a move which could see the tech giants’ products subject to legislative guardrails.
Sheldon said the committee was “particularly concerned” with the way developers of LLMs – including Amazon, Google and Meta – had “consistently refused to be transparent” when questioned about their products.
“The tech giants refused to answer direct questions about their use of Australians’ private and personal information, including information collected through the devices and platforms they manage, to train their AI products,” said Sheldon.
“Watching Amazon, Meta, and Google dodge questions during the hearings was like sitting through a cheap magic trick – plenty of hand-waving, a puff of smoke, and nothing to show for it in the end.”
The committee noted that by defining general-purpose AI models as high-risk, developers would be held to stricter testing and accountability requirements than those applying to “lower-risk” uses of AI.
As for what can warrant a “high-risk” designation, the committee pointed to the “massive market power” and “record of aversion to accountability and regulatory compliance” among the world’s leading LLM developers, as well as the “non-consensual scraping of personal and private information” and the “potential breadth and scale” of a given model’s applications.
The committee further advocated for new “whole-of-economy” dedicated legislation to regulate high-risk uses of AI – a measure which would mirror similar undertakings in the European Union and other jurisdictions.
“Without a whole-of-economy approach to AI regulation there is a risk of fragmentation and, as specific areas of law or uses of AI are prioritised for reform, there is a risk that certain rights and protections fall through the cracks,” the committee said.
8/ We need new standalone AI laws to rein in big tech and put strong protections in place for high-risk AI uses, while existing laws should be amended as necessary. #AI #BigTech
— Senator Tony Sheldon (@TonySheldonNSW) November 27, 2024
Laws needed to protect workers, creatives
The committee was also concerned with workers’ rights, adding that AI tools which surveil workers or track their output ought to be treated as high-risk.
Noting that AI was already being used to automate recruitment, layoffs, and rostering in workplaces, the committee recommended existing Work Health and Safety legislation be extended to cover AI risks.
AI developers were further scrutinised for their use of copyrighted works in training datasets, with the committee calling for appropriate licensing, payment, and a mechanism to ensure fair remuneration for AI outputs which are “based on copyrighted materials used to train AI systems”.
The Media, Entertainment and Arts Alliance (MEAA) union welcomed the committee’s call for greater transparency around copyrighted material in AI training datasets.
“The impunity with which big AI developers have systematically scraped and stolen creative work shows us that copyright laws on their own are no longer fit for purpose to protect the rights and payments of creative and media workers,” said MEAA chief executive Erin Madeley.
Sheldon further argued that where tech companies have scraped copyrighted data without consent or payment, the government “needs to intervene”.
“Creators deserve transparency, and developers should be forced to fairly license and pay for their work,” said Sheldon.
“There’s no denying the value AI can create, but we need to ensure that value flows to and benefits everyday Aussies, rather than being concentrated in the hands of global tech giants.”