The United States’ National Telecommunications and Information Administration (NTIA) says it supports the development of open source artificial intelligence, arguing in a new report that restrictions are not necessary for existing AI models, but may be needed in the future.

The report released on Tuesday called for “embracing openness in AI” for the benefit of small companies, researchers, not-for-profits, and individuals, but also called for the “active monitoring of risks in powerful AI models”.

Open source software is shared publicly for free so that anyone can study or build upon its code.

US Secretary of Commerce, Gina Raimondo, said the US government was “pulling every lever to maximise the promise of AI” while also minimising the risk of harms and adapting to “potential challenges ahead”.

The NTIA report recommended the US government continually update its benchmarks for monitoring AI, and suggested restrictions could still be placed on certain models if benchmarks for intervention were met.

If actions or restrictions were needed, the report authors said the US government could also use risk mitigation measures such as “restricting access to material components in the case of bio-risk concerns, working with international partners to set norms, [and] investing in model research”.

NTIA administrator and US Assistant Secretary of Commerce for Communications and Information, Alan Davidson, said: “The openness of the largest and most powerful AI systems will affect competition, innovation and risks in these revolutionary tools.

“… Government has a key role to play in supporting AI development while building capacity to understand and address new risks.”

The report comes after US President Joe Biden gave his country’s commerce department until July to make recommendations on the benefits and risks of open source AI models.

Different AI approaches from companies, politicians

ChatGPT-maker OpenAI, one of the most prominent American AI companies, initially made several of its systems open source but has since closed off its more capable models, arguing they need to be protected against misuse.

Some industry figures have seen this as an attempt to protect the company’s intellectual property and market position, with Elon Musk even describing the company he helped found as “a lie”.

Musk (who owns X, formerly Twitter) and Meta CEO Mark Zuckerberg have largely been in favour of open source AI, and have argued it aids innovation.

In a blog post last week, Zuckerberg wrote open source AI would “ensure that more people around the world have access to the benefits and opportunities” of the technology.

He also argued open source models would be safer than closed-off ones.

“The United States’ advantage is decentralised and open innovation,” Zuckerberg wrote.

Meta CEO Mark Zuckerberg says open source AI is “the path forward”. Photo: Meta / Supplied

“Some people argue that we must close our models to prevent China from gaining access to them, but my view is that this will not work and will only disadvantage the US and its allies.

“Our adversaries are great at espionage, stealing models that fit on a thumb drive is relatively easy, and most tech companies are far from operating in a way that would make this more difficult.

“It seems most likely that a world of only closed models results in a small number of big companies plus our geopolitical adversaries having access to leading models, while startups, universities, and small businesses miss out on opportunities.”

AI policies have so far played a minor part in the US presidential race between former president Donald Trump and Vice President Kamala Harris.

Trump and his running mate JD Vance have both voiced concerns over regulations stifling AI development, while Harris has suggested a need for balance between protecting the public and encouraging innovation.

The Republican party has also committed to repealing Biden’s executive order on AI if it wins office in November, arguing it is a regulatory burden.

Meta this week blamed AI hallucinations, in which AI systems produce incorrect information, after its chatbot incorrectly said Trump was not recently the victim of an assassination attempt.

The company described such AI hallucinations as “an industry-wide issue”.