Sam Altman, CEO of OpenAI – the company behind ChatGPT – has told a US Senate hearing he is “nervous” about the ability of AI systems to manipulate people via “one-on-one … interactive disinformation”, and has proposed a licensing scheme for companies that develop AI “above a certain scale of capabilities”.

Altman’s remarks came at a hearing this week in which he, IBM executive Christina Montgomery, and AI entrepreneur Professor Gary Marcus offered suggestions to US lawmakers about how to regulate AI.

“Given that we’re going to face an election next year and these models are getting better, I think this is a significant area of concern,” Altman said.

“Some regulation would be quite wise on this topic.”

Concerns that AI could actively influence people en masse are beginning to be validated.

A preprint paper from a team of researchers in the US, Germany, Israel, and India recently tested how using a biased writing assistant affects people’s opinions.

They conducted an experiment in which participants wrote arguments for or against a given topic with the help of an AI assistant.

Participants who used a writing assistant with a baked-in bias were twice as likely as the control group to express strong opinions.

The researchers concluded that “using an opinionated language model affected participants’ writing such that the text they wrote was more likely to support the model’s preferred view”.

Professor Marcus warned the Senate hearing this week that, in light of these preliminary research findings, AI systems “are going to be destabilising” and argued for strict regulation at an international level.

“Democracy itself is threatened,” he said. “Chatbots will also clandestinely shape our opinions, potentially exceeding what social media can do.

“Choices about data sets that AI companies use will have enormous unseen influence. Those who choose the data will make the rules shaping society in subtle but powerful ways.”

Professor Marcus suggested the US government form an agency similar to the Food and Drug Administration that would provide some assurances “prior to widespread deployment”.

“If you’re going to introduce something to a hundred million people, somebody has to have their eyeballs on it,” he said.

Altman agreed, saying if he could dictate regulatory efforts, he “would form a new agency that licenses any effort above a certain scale of capabilities and can take that license away and ensure compliance with safety standards”.

When pressed for IBM’s position, however, Montgomery disagreed that there should be licenses for AI use, saying regulatory efforts ought to be nuanced and precise.

She told the hearing that “the strongest regulation should be applied to the use cases with the greatest risks to people and society” and not broadly.

“There must be clear guidance on AI uses or categories of AI-supported activity that are inherently high risk,” Montgomery said.

“This common definition is key to enabling a clear understanding of what regulatory requirements will apply in different use cases and contexts.”

She went on to say that consumers “should know when they’re interacting with an AI system” and that “no person should be tricked into interacting with an AI system”.

IBM’s position, Montgomery explained, is that governments’ role is to create “clear, reasonable policy and sound guardrails” which are matched with “meaningful steps by the business community”.

But if Professor Marcus had his way, tech companies would not be left alone to do the right thing when it comes to AI.

“The big tech companies’ preferred plan boils down to ‘trust us’, but why should we?” he said. “The sums of money at stake are mind-boggling.”

He noted that OpenAI’s original mission was to advance AI “in a way that is most likely to benefit humanity as a whole”, but that the company now has a US$10 billion deal with Microsoft and is one of the biggest players in the AI market.

“Seven years later, they’re largely beholden to Microsoft, embroiled in part in an epic battle of search engines that routinely make things up,” Professor Marcus continued.

“That’s forced Alphabet to rush out products and de-emphasise safety. Humanity has taken a back seat.”