Sam Altman’s house was attacked twice over the weekend, just days after the publication of a major exposé about the OpenAI CEO that he labelled “incendiary”.
A man threw a Molotov cocktail at Altman’s San Francisco home early on Friday morning, local time, and another fired a gun towards the home on Sunday, according to police reports.
The alleged attacks followed the publication of a New Yorker article on Altman that painted him as a “pathological liar”, with several employees at OpenAI calling him “sociopathic”.
In a blog post over the weekend, Altman said someone had thrown a Molotov cocktail at his house at 3.45am on Friday, and that “thankfully it bounced off the house and no one got hurt”.
In the post, Altman pointed to what he described as the “incendiary” article published last week.
“Someone said to me yesterday they thought it was coming at a time of great anxiety about AI and that it made things more dangerous for me. I brushed it aside,” Altman wrote in the blog.
“Now I am awake in the middle of the night and pissed, and thinking that I have underestimated the power of words and narratives.”
A 20-year-old man from Texas has been arrested and charged with attempted murder in relation to the incident, according to local authorities.
Two days after the first incident, at about 1.40am on Sunday, a car with two people inside pulled up outside Altman’s house, having passed the same site just minutes earlier, according to an initial San Francisco Police Department report.
The person in the passenger seat of the car then put their hand out of the window and appeared to fire a gun towards the house, according to police.
Later that day, police announced that two suspects in their 20s had been arrested and charged with negligent discharge.

The San Francisco home of OpenAI CEO Sam Altman. Image: Google Maps
‘Accumulation of alleged deceptions and manipulations’
The New Yorker article, published last Monday, was based on new interviews and previously unreleased documents, and detailed internal doubts over Altman’s fitness to lead OpenAI.
Internal messages cited in the report suggested the AI company’s “executives and board members had come to believe that Altman’s omissions and deceptions might have ramifications for the safety of OpenAI’s products”.
Documents compiled by senior OpenAI employees detailed an “accumulation of alleged deceptions and manipulations”, the article said, including Altman offering the same job to two people, telling contradictory tales about who should appear on a livestream, and issues around safety requirements.
This “does not create an environment conducive to the creation of a safe Artificial General Intelligence [AGI]”, one message said.
Dario Amodei, an AI researcher who served as vice-president of research at OpenAI before leaving and co-founding AI rival Anthropic, compiled more than 200 pages of documents detailing his time at the tech company.
Included in the notes was Amodei saying that “the problem with OpenAI is Sam himself”.
The authors of the New Yorker article interviewed more than 200 individuals with firsthand knowledge of Altman, with “most people” reportedly saying he had a relentless will to power that set him apart from other tech figures.
One board member allegedly said he was “unconstrained by truth”, while several used the word “sociopathic” unprompted.
A secret memo from then-OpenAI chief scientist Ilya Sutskever questioned whether Altman was the right person to lead the tech giant.
“Any person working to build this civilisation-altering technology bears a heavy burden and is taking on unprecedented responsibility,” the memo said.
“I don’t think Sam is the guy who should have his finger on the button.”

Altman and OpenAI aim to create AGI with intelligence comparable to humans’. Image: Shutterstock
Altman responds
In his blog post, Altman admitted he had made “many” mistakes, that he is “conflict-averse”, and that this has “caused great pain” for him and OpenAI.
“I am not proud of handling myself badly in a conflict with our previous board that led to a huge mess for the company,” Altman said in the post, referring to when he was briefly ousted as CEO in 2023.
“I am a flawed person in the centre of an exceptionally complex situation, trying to get a little better each year, always working for the mission.”
Altman acknowledged that people have been hurt throughout the process of building OpenAI.
“We knew going into this how huge the stakes of AI were, and that the personal disagreements between well-meaning people I cared about would be amplified greatly,” he said.
“But it’s another thing to live through these bitter conflicts and often to have to arbitrate them, and the costs have been serious.
“I am sorry to people I’ve hurt and wish I had learned more faster.”
He said it has been an “extremely intense, chaotic and high-pressure few years” at OpenAI, and the company now needs to “operate in a more predictable way”.