An Australian local council mayor may become the first person in the world to sue AI platform ChatGPT for defamation over false claims he was imprisoned over a bribery scandal.

The mayor of Hepburn Shire in Victoria, Brian Hood, discovered late last year that artificial intelligence chatbot ChatGPT was incorrectly claiming that he had pleaded guilty to conspiring to bribe a foreign official and had served time in prison over this.

Hood had instead blown the whistle on a bribery case more than a decade ago involving a subsidiary of the Reserve Bank of Australia.

On 21 March, Hood sent a letter of concern to ChatGPT owner OpenAI demanding the errors be fixed within 28 days or legal action would be launched.

The US-based company has not yet responded to the demands.

“I couldn’t believe it at first, but I went and made some enquiries myself and got this very incorrect information coming back,” Hood told ABC News.

“It told me that I’d been charged with very serious criminal offences, that I’d been convicted of them and that I’d spent 30 months in jail.

“It’s one thing to get something a little bit wrong, it’s entirely something else to be accusing someone of being a criminal and having served jail time when the truth is the exact opposite.

“I think this is a pretty stark wake-up call. The system is portrayed as being credible and informative and authoritative, and it’s obviously not.”

Hood was company secretary of Note Printing Australia, a subsidiary of the Reserve Bank, in 2005 when he told journalists and officials about bribery at the organisation linked to Securency, a firm part-owned by the Reserve Bank.

The company was eventually raided by the police in 2010, resulting in arrests and prison sentences for some involved.

Hood is represented by Gordon Legal, which has said he could claim more than $200,000 in damages.

“It would potentially be a landmark moment in the sense that it’s applying this defamation law to a new area of artificial intelligence and publication in the IT space,” Gordon Legal’s James Naughton told Reuters.

“He’s an elected official, his reputation is central to his role.”

Naughton said that ChatGPT gives users a “false sense of accuracy” because it does not include footnotes.

“It’s very difficult for somebody to look behind [ChatGPT’s response] to say, ‘how does the algorithm come up with that answer?’ It’s very opaque.”

A message at the bottom of the ChatGPT page reads, “ChatGPT may produce inaccurate information about people, places or facts”.

OpenAI’s terms of use also include warnings about potentially inaccurate information.

“Given the probabilistic nature of machine learning, use of our Services may in some situations result in incorrect Output that does not accurately reflect real people, places or facts,” the terms of use say.

“You should evaluate the accuracy of any Output as appropriate for your use case, including by using human review of the Output.”

Italy has already temporarily banned ChatGPT over data privacy and inaccuracy concerns. OpenAI has been barred from processing the data of Italian users while the Italian Data Protection Authority conducts an investigation.

“The information made available by ChatGPT does not always match factual circumstances,” the Italian Data Protection Authority said.

“There appears to be no legal basis underpinning the massive collection and processing of personal data in order to ‘train’ the algorithms on which the platform relies.”

ChatGPT also experienced its first data breach last month, when a bug in an open source library allowed some users to see the titles, and potentially the first messages, of conversations from other users’ chat histories.