Warning: This story contains references to self-harm

OpenAI says it is “working to improve” ChatGPT after the American company and its CEO Sam Altman were sued by the family of a teenager who discussed much of his life with the popular AI chatbot and died by suicide earlier this year.

The parents of Californian high school student Adam Raine have alleged ChatGPT supplied the 16-year-old with information on specific suicide methods and helped him plan his death.

In one of Raine’s final discussions with ChatGPT, seen by The New York Times, he reportedly uploaded a photo of a noose hanging from a bar in his wardrobe, and asked the AI if it was a good place for “practicing” [sic].

“Yeah, that’s not bad at all,” ChatGPT allegedly told him.

After Raine asked ChatGPT if his setup could hang a human, the chatbot reportedly confirmed it “could potentially suspend a human” and told him, “Whatever’s behind the curiosity, we can talk about it. No judgement.”

Although the large language model (LLM) repeatedly sent Raine messages encouraging him to contact a helpline, he had reportedly learned to bypass some safeguards by telling ChatGPT the information was needed for a story he was writing.

This technique was allegedly suggested to him by the system itself, which reportedly told him it could provide information about suicide for “writing or world-building”.

A spokesperson for OpenAI told US media in a statement, “We extend our deepest sympathies to the Raine family during this difficult time and are reviewing the [lawsuit] filing.”

ChatGPT sometimes ‘did not behave as intended’

In a Wednesday blog post titled ‘Helping people when they need it most’, OpenAI said it was aware of “recent heartbreaking cases of people using ChatGPT in the midst of acute crises”, but did not name Raine specifically.

The firm pledged to make improvements and admitted its systems sometimes “did not behave as intended in sensitive situations” — especially when ChatGPT conversations grew longer and more complex.

“For example, ChatGPT may correctly point to a suicide hotline when someone first mentions intent, but after many messages over a long period of time, it might eventually offer an answer that goes against our safeguards,” the company said.

“This is exactly the kind of breakdown we are working to prevent.

“We’re strengthening these mitigations so they remain reliable in long conversations, and we’re researching ways to ensure robust behaviour across multiple conversations.”


OpenAI says it is reviewing the lawsuit filed by the family of Adam Raine. Image: Shutterstock

Other features in the works included connecting people directly to certified therapists when needed and adding more localised resources people can access for help, which had begun rolling out in the United States and Europe, OpenAI said.

The company added it was also considering an opt-in feature that would allow ChatGPT to reach out to emergency contacts in severe cases, and planned to provide parental controls for teen accounts.

“We are deeply aware that safeguards are strongest when every element works as intended,” the company said.

OpenAI said it did not currently refer self-harm cases to law enforcement “to respect people’s privacy given the uniquely private nature of ChatGPT interactions”, but did sometimes contact authorities if it appeared users planned to harm others.

The sycophancy problem

OpenAI said it would also continue work on safety improvements around sycophancy — AI models’ tendency to respond with excessive flattery, agreement, or praise.

The issue saw the company roll back an update to GPT-4o in April — the same month as Raine’s death — after the model was found to be “overly supportive but disingenuous”.

Raine reportedly started using GPT-4o in late 2024 and had signed up for a paid subscription in January.

OpenAI and its competitor Anthropic released new safety testing results on Thursday, after each company performed tests on the other’s AI models.

“Several models from both developers showed concerning forms of sycophancy toward simulated users in a few cases, including validating harmful decisions by simulated users who exhibited delusional beliefs,” Anthropic said.

OpenAI launched GPT-4o's successor GPT-5 earlier this month, but faced swift backlash from some users who missed the previous model's perceived ‘personality’ and supportiveness.

In response, OpenAI restored access to GPT-4o and promised to make GPT-5 feel “warmer” in its interactions.


OpenAI says GPT-5 is less sycophantic than GPT-4o. Image: Shutterstock

The new model had shown “meaningful improvements in areas like avoiding unhealthy levels of emotional reliance, reducing sycophancy, and reducing the prevalence of non-ideal model responses in mental health emergencies by more than 25 per cent compared to 4o”, OpenAI said on Wednesday.

“GPT‑5 also builds on a new safety training method called safe completions, which teaches the model to be as helpful as possible while staying within safety limits,” it added.

“That may mean giving a partial or high-level answer instead of detail that could be unsafe.”

GPT-5 would also better de-escalate a user “by grounding the person in reality” when they raised dangerous ideas, such as driving after not sleeping for consecutive nights, the company said.

OpenAI has also been working on a range of AI-based consumer devices which are expected to utilise its ChatGPT models, after acquiring a hardware startup co-founded by former Apple design chief Jony Ive.

If you need someone to talk to, you can call: