An Australian law firm has been ordered to pay costs after a junior solicitor submitted several fake and incorrect citations to court that were produced using AI.
In an April ruling published on 2 July, Justice Bernard Murphy ordered Melbourne-based firm Massar Briggs Law to pay costs for filing two documents containing citations which were made up or incorrectly included by a generative AI (GenAI) tool, as first reported by Lawyers Weekly.
The judge warned that GenAI was increasingly being used by the legal profession, and that proper scrutiny and checks needed to be in place to avoid the “growing problem” of false citations.
“It is apparent from the consultation with the profession which has taken place to date that many members of the legal profession use AI in some form, and that they see it as a useful tool in the conduct of litigation,” Murphy said.
“It is critical that legal practitioners use proper safeguards to verify the accuracy of the work product.”
Non-existent sources
Massar Briggs Law was acting for the applicant in a native title determination case being heard in the Federal Court of Australia.
The law firm submitted a summary of the applicant and its decision-making process earlier this year, including several footnotes referencing historical reports and papers.
First Nations Legal and Research Services (FNLRS) was then tasked with producing a footnoted document based on this, but after “extensive searches” of its database “concluded that most of the cited documents did not exist, and that others existed but were incorrectly cited”.
An “inexperienced junior solicitor” working for Massar Briggs Law had prepared the footnotes for the application while working from home, where she did not have access to the physical or electronic copies held in the law firm’s office, the court heard.
She instead produced the citations using Google Scholar’s search tool.
When FNLRS raised concerns that some of the references may have been made up by AI, the junior lawyer attempted to recreate them using Google Scholar, but was given different results.
“She could not explain how or why that was so,” Murphy said in the judgment.
“She stated that she had regularly used Google Scholar during her university studies and that this was not a problem she had previously experienced.
“She apologised to the parties and to the court for her error.”

The Federal Court's Justice Bernard Murphy says law firms should 'use proper safeguards' to verify the output of AI tools. Image: Shutterstock
Massar Briggs Law principal solicitor Jason Briggs and the junior solicitor both submitted affidavits to the court over the incident.
The court heard the work of the junior solicitor may not have been checked by others at the law firm.
Briggs deposed that he was “not aware that anyone had checked the junior solicitor’s work”.
“Mr Briggs accepted that it was an error on his part to allow collaborative work to be performed remotely and he described the failure to ensure that anyone checked the junior solicitor’s work as ‘an oversight error’,” the judge said.
“He expressed his regret for the inconvenience to the parties and to the court.”
AI and the law
There have been many examples of lawyers being caught misusing GenAI in recent years.
In early 2023 a US-based lawyer used ChatGPT for research and submitted a number of “bogus” cases to a court.
AI’s use has also proved dangerous in the child protection space: last year a worker entered confidential information into ChatGPT, something found to have risked serious ramifications for the future placement of the child involved.
In 2024, authorities in New South Wales, Victoria, and Western Australia warned that lawyers “cannot safely enter” confidential or commercially sensitive information into GenAI tools, and that they must limit their use to “lower-risk and easier to verify tasks”.
High Court chief justice Stephen Gageler recently told Capital Brief that AI poses an “existential” threat to the law, and that the rise of generative AI tools “calls into question the adversarial system”.
He also revealed the High Court would launch a “modest” trial of AI systems later this year.