Australia's parliamentary security authority has denied Labor MP Linda Burney's claim that artificial intelligence technology has already been used to breach parliamentary security protections.

Burney, who chairs parliament’s Joint Committee on Public Accounts and Audit, made the comments on Friday, 15 November, during a hearing at Parliament House in Canberra into public sector use of AI.

Audio of the hearing was streamed publicly.

“This is not for formal record, but I'm going to be honest with you,” Burney said during the hearing.

“One of the issues for us as parliamentarians is security, and we know that in some cases when security has been breached — it's been breached by no one in this room, but certainly that AI has been involved.

“But that's not on the record.”

The Department of Parliamentary Services, which runs the Parliamentary Security Service, told Information Age it was "not aware of any breaches that have occurred related to either physical or cyber security that have involved the use of artificial intelligence".

Burney’s office did not respond to a request for comment.

AI can be used to create computer code that assists criminals during cyberattacks, and it can also produce highly convincing text, audio, images, and video used to deceive people in social engineering attacks such as phishing and deepfake scams.

While generative AI (genAI) is increasingly being co-opted by threat actors, it can also be used to test and improve an entity’s cyber security, experts say.

The Department of Parliamentary Services said it had "undertaken a rapid modernisation of its ICT environments in recent years" as AI had brought "a range of positive benefits for efficiency and productivity" but could also be used maliciously.

In an October letter to the committee on public sector use of AI, Australian Federal Police manager of technology strategy and data, Ben Lamont, wrote that the technology could provide the AFP with opportunities to improve its work, including “to better inform human decision making and minimise risks to public safety”.

Some Australian law enforcement figures have also called for changes to privacy legislation which would allow them to use AI-powered facial recognition and decryption.

An unnamed foreign government, reported as China by some Australian media, was blamed for a cyberattack on Australia’s parliamentary networks in 2019.

The major breach was allegedly caused by a so-called watering hole attack, in which a website frequented by the targets was compromised to serve malware to visitors.

The current inquiry into the use and governance of AI systems by public sector entities began in September and is expected to produce a report for the government in the coming months.



Former government workers still had IT access

Rona Mellor, deputy auditor-general of the Australian National Audit Office (ANAO), told Friday’s hearing that government bodies should improve their security posture before implementing any significant uses of AI.

She said ANAO had seen instances of terminated government employees still having access to internal IT systems after leaving their organisations, and cited “a lot of control weaknesses in the sector”.

“We see weaknesses in cyber, weaknesses in authorities to operate — not all entities do this well,” she said.

“We see weakness in privacy impact assessments, weaknesses in change management.

“We’ve called out some very big weaknesses across the sector in access controls — terminations of people who’ve left the organisation who still have access to systems.

“Or, who is a privileged person in the entity?

“There’s a lot of entities so digitised that they have people who sit in the back end and make things work — are these the right people to do this, when you’re doing these big-scale new things?”

ANAO, which is currently auditing how the Australian Taxation Office (ATO) is deploying AI, would continue to monitor whether entities had controls which were “sufficiently robust” to allow for the deployment of new technologies, Mellor said.

In a report for the year to 30 June 2023, ANAO said 48 per cent of all audit findings reported to government entities related to “deficiencies in IT controls or entity IT environments”.

In its submission to the inquiry into government use of AI, ANAO said this trend had continued in recent years, "reflecting deficiencies in the fundamentals of IT governance, including security, change management and user access management”.

“The effective operation of controls, including change management, will be particularly important as entities implement emerging technologies,” the office said.