A US lawyer has found out the hard way that the new generation of AI is not always accurate, after he used ChatGPT for research and submitted a number of “bogus” cases to the court.

The case in question involved a man suing Colombian airline Avianca after he said he was injured by a metal serving cart striking his knee during a flight, The New York Times reported.

When the airline tried to have his case thrown out, the man’s lawyer, Steven A Schwartz, submitted a brief including a number of previous cases.

But when the opposing lawyer and the judge pointed out that these cases did not actually exist, Schwartz was forced to admit he had used OpenAI’s ChatGPT tool to research them and had not checked their legitimacy beyond asking the AI platform whether they were real.

US District Judge Kevin Castel told the court that “six of the submitted cases appear to be bogus judicial decisions with bogus quotes and bogus internal citations”, and that he would be holding a hearing to consider sanctions against the lawyer in question.

The judge said the filing presented “unprecedented circumstances” and that the brief provided was “replete with citations to non-existent cases”.

For example, the brief cited a Varghese v China Southern Airlines case. While that case does not exist, the citation appears to draw on a real case, though it gives a decision date 12 years later than the actual one.

In an affidavit, Schwartz admitted using ChatGPT to conduct research for the brief and said the incorrect cases were provided by the AI tool, which he said had “revealed itself to be unreliable”.

This was the first time he had used ChatGPT, Schwartz said, and he was “unaware of the possibility that its content could be false”.

The lawyer also provided screenshots showing him asking ChatGPT whether the sources were real, with the chatbot responding that they were and could be found in legal journals.

Schwartz said he “greatly regrets having utilised generative artificial intelligence to supplement the legal research performed herein and will never do so in the future without absolute verification of its authenticity”.

A message at the bottom of the ChatGPT tool says that it “may produce inaccurate information about people, places or facts”, while OpenAI’s terms of use also include warnings about potentially inaccurate information.

“Given the probabilistic nature of machine learning, use of our services may in some situations result in incorrect output that does not accurately reflect real people, places or facts,” the terms state.

“You should evaluate the accuracy of any output as appropriate for your use case, including by using human review of the output.”

Significant concerns have been raised about generative AI since ChatGPT was launched in November 2022.

Recently, OpenAI CEO Sam Altman told a US Senate hearing that he is “nervous” about the future of AI and how it could manipulate people via “one-on-one…interactive disinformation”. At the hearing, Altman proposed a licensing scheme for companies that develop AI “above a certain scale of capabilities”.

“Given that we’re going to face an election next year and these models are getting better, I think this is a significant area of concern,” Altman told the hearing. “Some regulation would be quite wise on this topic.”

An Australian mayor is set to become the first person to sue OpenAI over false information generated by ChatGPT, saying the chatbot had incorrectly labelled him a criminal and claimed he had been imprisoned over a bribery scandal.