An Australian lawyer has been referred to a state regulator for submitting fake AI-generated cases to court, shortly after another lawyer was caught failing to check AI-generated documents.
Generative AI tools such as ChatGPT are being increasingly used in workplaces across all sectors and have proved particularly popular in the legal profession.
But the propensity of these tools to ‘hallucinate’ and confidently generate non-existent or inaccurate information has led to several cases in which lawyers have relied on the technology and submitted false information to courts.
A federal court judgement published this week confirmed that an anonymised lawyer in Western Australia had been referred to that state’s Legal Practice Board and ordered to pay costs of more than $8,000 after submitting AI-generated documents that cited four cases which either did not exist or were cited inaccurately.
The lawyer in the immigration-related case admitted he used Anthropic’s Claude AI as a “research tool”, and then Microsoft’s Copilot to validate the information Claude provided.
He admitted to an “overconfidence in relying on AI tools”, said he had “failed to adequately verify the generated results”, and apologised to the court.
“I had an incorrect assumption that content generated by AI tools would be inherently reliable, which led me to neglect independently verifying all citations through established legal databases,” the lawyer told the court.
Judge deems AI a ‘dangerous mirage’
The judge overseeing the case, Justice Arran Gerrard, said the use of incorrect information generated by AI was “becoming increasingly prevalent in this and other courts”.
“It demonstrates the inherent dangers associated with practitioners solely relying on the use of AI in the preparation of court documents and the way in which that interacts with a practitioner’s duty to the court,” Gerrard said.
The judge said the court “does not adopt a luddite approach to the use of generative AI tools” and that he understood its appeal when it came to complicated subjects such as immigration law.
“As demonstrated by this, and other recent cases, that attraction is presently a dangerous mirage,” he said.
“Not only is it not an appropriate substitute for legal research, it comes with considerable risks which, if not mitigated, have the capacity to lead to actions which could be construed as a contempt of court.”

Some Australian jurisdictions have released specific guidelines for the use of generative AI in the law. Image: Shutterstock
Beware the AI hallucinations
The case was just the latest in a series of incidents involving lawyers relying on ChatGPT and other generative AI tools to make submissions to a court.
Earlier this month a Victorian defence lawyer acting for a minor accused of murder included non-existent case citations and inaccurate quotes from a parliamentary speech in court filings, which they admitted were produced by AI.
“It is not acceptable for AI to be used unless the product of that use is independently and thoroughly verified,” the judge in the case said, as the Associated Press reported.
The submission containing the fake cases had been sent to prosecutors, who also failed to verify the information and prepared their own submission based on the false material.
The defence lawyer apologised for the error and re-filed the documents, but the new filings also contained AI-generated references to legislation that did not exist.
Earlier this month the Fair Work Commission also dismissed a man’s unfair dismissal case after it was revealed he had relied almost entirely on ChatGPT to prepare his application.
“So much was clear from the deficiencies in the application which failed to address the matters required to make a good claim that Part 3-1 of the Fair Work Act had been contravened,” the Commission said.
The man’s application included advice from ChatGPT that his employer had breached the Fair Work Act, something the Commission said had “no basis”.
The AI tool had also advised the man to consult a legal professional or union representative, but he did neither.
“He simply followed the suggestion made by ChatGPT and commenced the proceedings,” the Commission said.
“The circumstances highlight the obvious danger of relying on artificial intelligence for legal advice.”
Last month a Melbourne law firm was ordered to pay costs after it was caught filing fake AI-generated citations and references to incorrect cases.
Last year three Australian states issued explicit guidelines for the use of generative AI in the law, stating that its use should be limited to “lower-risk and easier to verify tasks” and that lawyers must verify all information it provides.