If you believe the likes of Ray Kurzweil and Elon Musk, there will come a time when AI is so incredibly powerful that it creates a world too strange to comprehend.

Musk thinks this could be as little as ten years away; Kurzweil believes it could be sometime around 2045.

But the technological singularity is highly contentious, and not everyone – including science fiction writers – believes it's a certainty.

“I don’t see a singularity,” Apex author Ramez Naam told the recent Salesforce Dreamforce conference in San Francisco.

“I think we’re heading for a world of incredible intelligence all around us, but that is serving us and enriching our lives – mostly, with some caveats and concerns.”

Naam believed it would not be computing power that held AI back, but rather the extreme difficulty of making it as super-powerful as the singularity requires.

“While AI is amazing at playing Jeopardy or driving cars, it remains our servant rather than our master,” he said.

He also questioned what use super-powerful AI would be – questions that cut to the heart of whether anyone would bother to continue developing it.

“I don’t see the economic incentive to create AI that has its own opinions,” he said.

“Why would you want to make a self-driving car that says, ‘I don’t want to drive in that neighbourhood’?”

David Brin, best known for his Uplift novels, on the other hand, leans “slightly toward the direction that I think [singularity] will happen”.

“I personally believe we will see … entities that have motivations that rise above what they were originally given by us,” Brin said.

“There are some scary possibilities.”

However, like Naam, Brin believed humans would still be able to maintain the upper hand over AI.

“In the end I think we’ll be able to talk AI into merging with us,” he said.

“One reason is that we’re really good at setting goals. If we merge with computers they’d be good at strategising how to achieve goals.”

Brin also hypothesised that AI in this situation would be smart enough to learn from humankind’s mistakes and to self-limit its own power.

“We emerged out of an ecosystem that mixed cooperation and competition,” he said.

“What looks like cut-throat competition at one level often looks like cooperation when you look at it from the point of view of the next level.

“If an AI tries to be a single overlord that’s just like those pyramid social structures that ruined our progress for 6000 years called feudalism.

“And those AIs will be able to look at human history and say, ‘Eww. I better have rivals. I better have lots of rivals in a system that limits my power’.

“That takes maturity, but Ben Franklin had it, Washington had it, to some extent we have it, so why wouldn’t our new creations have it?”

Looking for Skynet

While Hollywood has long been preoccupied with emergent AI, in reality it is only a very small part of what many people are referring to when they talk about artificial intelligence.

“We use the word AI to mean almost two totally different things,” Naam noted.

“We use it to mean a self-driving car or a better search engine or machine translation, then we use it to mean these fictional, self-aware creatures with desires and so on.

“If you think about the people who work in AI, basically 99.9 percent of them are working on the first kind that is not intelligent in the way that we think about it but just something that pattern matches and does better work to make more money.

“The set of people who are actually working on general intelligence would fit on this stage and be lonely.”

Naam believed the risks of AI would continue to be those we already see – flash crashes in stock markets and security – rather than “the Terminator, Skynet sort of scenario” that singularity demands.

Brin wasn’t bothered by the emergence of a Skynet, particularly if the military was involved.

“I’m not afraid of the military creating Skynet because they love to have off switches and be able to pull plugs,” he said.

“But I believe if we get to such a level of [super-intelligent] AI it will come out of Wall Street, because that is where more money is being spent on AI research than at the top two dozen universities combined, and it’s all being done in secret.

“That’s an area where I am scared.”