A group of Australian academics has “unreservedly apologised” for including factually incorrect allegations about big consulting firms, produced by a generative AI tool, in a Senate inquiry submission.

In a letter to the Senate, Emeritus Professor James Guthrie AM, a professor in the Department of Accounting and Corporate Governance at Macquarie University, admitted to having used Google Bard AI to research information for a submission to a Parliamentary inquiry into the conduct of the Big 4 consulting firms, a process that generated numerous false claims, as reported by The Guardian.

The other academics in the group are Professor John Dumay (Macquarie University), Professor Jane Andrew (University of Sydney Business School), and Dr Erin Twyford (University of Wollongong). They were quick to distance themselves from the scandal, making it clear in their revised submission that the blame lay squarely with Professor Guthrie.

“Our original submission contained factual errors due to Professor Guthrie’s use of the Google Bard Large Language model generator referenced in the original submission,” they wrote.

“These errors related to claims of Deloitte, KPMG, EY and PwC audit activities and involvement in financial scandals.”

Bard is Google’s conversational generative AI tool and a direct competitor to OpenAI’s ChatGPT.

What the academics said

The academics’ submission included case studies about alleged wrongdoing by large consulting firms that had been produced by Bard but were entirely fictional.

It is believed to be the first time a Parliamentary Committee has been forced to grapple with the use of generative AI in researching and writing submissions to inquiries, which are covered by Parliamentary Privilege and therefore free from any defamation action.

The submission contained a number of case studies about consulting and accounting giant Deloitte, claiming that it had been involved in a “NAB financial planning scandal”, was sued by the liquidators of collapsed construction firm Probuild, had audited cafe chain Patisserie Valerie, and was auditing Westpac at the time of a scandal.

All of these claims are false.

In a letter to the Senate, Deloitte general counsel Tala Bennett said that there had never been a “Deloitte NAB financial planning scandal”, that Deloitte was Probuild’s administrator rather than its auditor and is not being sued in relation to the collapse, and that it has never audited Patisserie Valerie or Westpac.

“Deloitte supports academic freedom and constructive discourse in relation to those matters currently before the Committee, however, it considers that it is important to have factually incorrect information corrected,” Bennett said in a letter to the Committee.

“It is disappointing that this has occurred, and we look forward to understanding the Committee’s approach to correcting this information.”

The submission also made false claims about Big 4 consultancy KPMG, likewise generated by Google Bard.

The submission falsely claimed that KPMG had been involved in a “KPMG 7-Eleven wage theft scandal” that led to the resignation of several of its partners.

It also incorrectly said that KPMG had audited the Commonwealth Bank during a financial planning scandal.

KPMG was involved in neither scandal.

It’s all my fault

Guthrie has admitted to using generative AI to produce these case studies.

“Given that the use of AI has largely led to these inaccuracies, the entire authorship team sincerely apologises to the Committee and the named Big Four partnerships in those parts of the two submissions that used and referenced the Google Bard Large Language model generator,” Guthrie’s letter said.

“Given we are also accounting academics, we are deeply invested in the public interest and ensuring accountability and transparency in the sector – which is why we unreservedly offer our apologies and deep regret.

“I now realise that AI can generate authoritative-sounding output that can be incorrect, incomplete or biased.”

Inquiry chair and Liberal Senator Richard Colbeck said the incident is a “salient reminder to everyone to check your sources”.

“This shows how out of control things can get if basic facts are not correct.”

Organisations and institutions across every sector worldwide are grappling with the widespread adoption of generative AI tools such as Bard and ChatGPT.

These generative AI tools are already being used widely in education, law and the public sector, among other areas.

Earlier this year, a US lawyer admitted to using ChatGPT for legal research, leading him to submit a number of “bogus” cases to the court.

A brief submitted by the lawyer cited a number of previous cases, but these had been entirely made up by ChatGPT.

The federal government itself has warned that generative AI tools should only be used in the public sector in “low-risk situations”, saying there is an “unacceptable risk” in using them for decision-making or writing application code.

And last week, The Guardian wrote to Microsoft President Brad Smith after the tech giant’s AI tools inserted a distasteful poll into the news publication’s story about the death of a woman in Sydney.