Just days before the brief sacking of CEO Sam Altman, several researchers at OpenAI wrote to the board warning of a recent “powerful AI discovery” that could be dangerous to humanity, according to a new report.

Citing two people familiar with the matter, Reuters reported that staff researchers at OpenAI wrote to the AI company’s board earlier this month warning that an AI “breakthrough” made by the firm could have major and potentially dangerous consequences.

According to the report, the letter was a key factor in the board’s decision to oust Altman soon after.

Altman was sacked by the OpenAI board earlier this month after it said he had been “not consistently candid in his communications with the board” and that it “no longer had confidence in his ability to continue leading OpenAI”.

The decision led to an open revolt within OpenAI, with the company’s president Greg Brockman quickly resigning and several senior staff following suit.

In just days, nearly all of the company’s employees threatened to quit if Altman was not reinstated.

Four days later, Altman returned as CEO of OpenAI, along with a restructured three-person board consisting of Google Maps co-creator Bret Taylor, former US Treasury secretary Larry Summers and Quora CEO Adam D’Angelo.

According to the Reuters report, the staff researchers raised concerns with the board about the company commercialising AI advances before adequately understanding the consequences of these technologies.

The letter referred to a project within OpenAI known as Q* (pronounced Q-Star), understood by some at the company to be a breakthrough in the search for artificial general intelligence (AGI): autonomous systems that outperform humans at most economically valuable work.

Another outlet, the Platformer newsletter, has reported that the board never received a letter about Q*.

While current generative AI tools such as OpenAI’s ChatGPT are effective at writing and language translation, working by statistically predicting the next word, maths is often considered a frontier on the path to AGI.

This is because a maths problem has only one right answer, so solving it requires genuine reasoning rather than merely plausible prediction.

Unlike a calculator, which handles only a limited set of operations, a generative AI tool with maths reasoning close to human intelligence would imply the ability to generalise, learn and comprehend.

According to a source quoted by Reuters, Q* was able to solve certain mathematical problems at the level of a grade-school student, making researchers “optimistic” about its potential to reach AGI.

The letter to the board flagged both the system’s potential prowess and the danger it could pose to humanity.

OpenAI declined to comment on the Reuters report, but in an internal message to staff the company acknowledged the existence of a project called Q* and that a letter had been sent to its board in the lead-up to Altman’s sacking.

The day before his firing, Altman hinted publicly at a major development at OpenAI.

“Four times now in the history of OpenAI, the most recent time was just in the last couple of weeks, I’ve gotten to be in the room, when we sort of push the veil of ignorance back and the frontier of discovery forward, and getting to do that is the professional honour of a lifetime,” Altman said at the Asia-Pacific Economic Cooperation Summit.

OpenAI was formed in 2015 as a non-profit with a mission to “advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return”.

In 2019, under Altman’s leadership, the company created a “capped-profit” subsidiary called OpenAI LP, which remained answerable to the original non-profit board.

In November last year it launched the generative AI tool ChatGPT, which quickly gained mainstream popularity and triggered a global surge of interest and investment in generative AI.

Microsoft has invested billions of dollars in OpenAI and reportedly holds a 49 per cent stake in its for-profit arm, along with an exclusive agreement to offer OpenAI’s models through its Azure cloud platform.

This rapid rise in the use of generative AI has been accompanied by widespread concern within the tech sector about the existential threat it may pose to humankind, with some comparing the risk to nuclear war or a pandemic.

Earlier this year, more than 1,300 experts demanded the tech industry pause AI development to ensure its safety.