A former Spanish national team coach has been forced to deny extraordinary claims that he was effectively running a professional football club with ChatGPT.

Robert Moreno, who left Russian Premier League side PFC Sochi last year, is accused by the club’s former general director Andrei Orlov of relying on the generative AI tool to plan training sessions, manage players and even help decide transfer signings.

Speaking to Russian media, Orlov alleged that Moreno’s “over-reliance” on the chatbot was one of the reasons the coach was removed – claims later circulated internationally by beIN Sports.

According to Orlov, the coach used ChatGPT during an away trip to design a player schedule – one that allegedly required athletes to remain awake for 28 consecutive hours.

“I looked at the presentation and saw that the players couldn’t sleep for 28 hours,” Orlov said.
“I asked, ‘Robert, that’s all very well, but when are the lads going to sleep?’ The players didn’t understand why we had to wake up at five in the morning to train at seven.”

He also claimed Moreno fed the statistics of three potential striker signings into ChatGPT and used its response to decide which of them the club should recruit.

The chosen player reportedly failed to score in 10 league matches.

“It’s a tool like any other – why not use it?” Orlov said. “But for Moreno, GPT became one of the main ones.”

Moreno denies allegations

Moreno has strongly rejected the claims, saying he neither used AI to make football decisions nor was he dismissed for doing so.

“I have never used ChatGPT or any AI to prepare matches, decide lineups or choose players. That is completely false,” he said in an open letter to a Spanish newspaper.

He acknowledged using technology in his coaching, but said he did not rely on AI for decision-making.

“My career in football began through data and video analysis,” he wrote.
“Like any professional staff we use analysis tools – GPS, Wyscout, video, scouting platforms.

“Technology helps process information faster, but the sporting decisions are always made by the coaching staff.”

Moreno said his only use of ChatGPT was translating Spanish into Russian.

Regarding the striker signing, he said it followed a “club process” and that the player had scored in a cup competition before suffering an injury.

He also attributed his exit to internal disagreements and results rather than AI.

“My departure was by mutual agreement at a time of inconsistent results and disagreements over sporting planning, as is common in football,” he said.
“To present it as being ‘fired for using ChatGPT’ simplifies something much more complex and is also untrue.”

AI and professional decision-making

The dispute reflects a broader debate emerging across industries as generative AI tools move into professional workplaces.

Organisations are increasingly grappling with how much decision-making should be delegated to AI – and how transparent its use should be.

The issue has already surfaced in law, including cases in Australia where lawyers submitted court filings containing fabricated or misattributed legal precedents generated by AI systems, prompting regulatory scrutiny.

Earlier this year, a woman in Japan took things even further, marrying her ChatGPT-generated boyfriend and donning AI smart-glasses at the altar so she could see her groom.

OpenAI, the maker of ChatGPT, has also recently introduced dedicated AI-based health features designed to respond to health concerns, learn from users’ medical data and help healthcare professionals make diagnoses.