A Sydney-based Java software developer who was fired after becoming “ungovernable”, partly due to his use of AI in workplace emails and meetings, has lost his unfair dismissal case at the Fair Work Commission.
The tech worker, Marcus Wibmer, admitted to having “collaborated” with AI to produce a series of emails about a workplace bullying complaint, and to using it for assistance during meetings.
The Fair Work Commission (FWC) found this contributed to communications that were “dense, repetitive and often rambling”, as well as “demanding and overbearing”.
According to the FWC, the use of AI escalated what began as a relatively simple bullying issue into a broader dispute involving the company’s dress code, sexual harassment allegations, “ill-informed” claims of whistleblower protections, and arguments about natural justice.
Ultimately, the commission found Wibmer had become “ungovernable”, in part due to his reliance on AI for workplace communication with his colleagues, and ruled that his dismissal was valid.
The senior Java developer had been employed by Fujifilm Data Management Solutions, a consulting subsidiary of the Japanese tech giant, for more than a decade when he was dismissed for alleged serious misconduct in August last year.
He later applied to the FWC seeking compensation for unfair dismissal.
A disputed message
The case stemmed from a disagreement in May last year, when a colleague criticised Wibmer’s work, saying it had “caused more pain than gain”.
Wibmer responded with “far q”, later claiming this was a typo for “far out”. The exchange over the message lasted about 45 minutes. Wibmer said he was “offended” by the criticism and requested an apology.
At a follow-up meeting, the colleague offered to apologise, but Wibmer did not accept.
“This is unfortunate – if he had done so, the events that followed would not have occurred,” FWC Deputy President Tony Slevin said.
Escalation through AI-assisted communication
Following this meeting, the software engineer began sending a series of emails to a number of managers at the company that were often “lengthy and perplexing”, the FWC found.
In these emails, the developer raised a previous workplace incident in 2019 in which he claimed another colleague had attempted to strike him, concerns about compliance with the company’s dress code, and accusations that he had not been afforded natural justice.
“His simple bullying complaint which had been the subject of an investigation and two meetings had spiralled into complaints about sex discrimination, sexual harassment, failures to provide natural justice, and failures to abide by workplace laws and company policies,” the FWC findings said.
The FWC found that the drafts of these emails were produced using generative AI tools, a claim the worker did not deny.
This led to emails that were “needlessly detailed”, written in a “combative and demanding tone”, that sought unnecessary information, and that were in many cases “inappropriate” for the workplace, Slevin said.
“They needlessly sought assurances of transparency, accountability, adherence to legal norms and company policies,” the findings said.
“They were disproportionate to the matters being dealt with. They lacked any reasonable regard to purpose or context. They lacked nuance.”
Inappropriate for the workplace
Wibmer acknowledged using AI as a “collaborator” and said he accepted responsibility for the communications, adding that he reviewed and edited the drafts himself. He also admitted referring to ChatGPT on his phone during some meetings.
The Commission found he also relied on AI to help prepare submissions in the case, which Slevin described as “unfortunate” and “counter-productive”.
“It led to his demise,” Slevin said.
“The use of AI appears to have given [him] a false sense of security that his communications … were appropriate and acceptable in a workplace setting. Objectively, they were not.”
Dismissal upheld
Wibmer’s employment was terminated in August, with the company stating it had “no confidence” he would behave appropriately and describing him as “ungovernable”.
Slevin agreed with that assessment, finding the dismissal was not harsh, unjust or unreasonable.
Separately, FWC president Justice Adam Hatcher has warned that the Commission’s operations have been “significantly disrupted” by the growing use of generative AI, contributing to a rise in caseloads and submissions that are inaccurate or unlikely to succeed.
The Commission is expected to soon require applicants to disclose whether they have used AI in proceedings and confirm that its outputs have been checked.