Artificial intelligence (AI) may be enabling self-driving cars and revolutionising healthcare, but a former White House technical advisor believes adoption of the technology should be slowed down until industry can prove these and other platforms can’t be hacked or manipulated.

As a technical specialist seconded to the Pentagon to work on AI cybersecurity, and to the Department of Transportation to work on self-driving cars, under the Obama and Trump administrations, Davey Gibian learned first-hand how vulnerable emerging AI technologies were to manipulation.

The algorithms underlying these capabilities left them “incredibly easy to hack” by adversaries, he told Information Age – whether by tricking a deployed AI with deceptively crafted inputs, or by poisoning the model during training with “slightly bad” or biased data.
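To make the first of those techniques concrete, the sketch below shows how an evasion attack can work against a toy linear classifier: a small, deliberate nudge to every input feature pushes a confidently flagged sample back under the decision threshold. The three-feature “malware classifier”, its weights and the perturbation budget are all illustrative assumptions, not any real deployed system.

```python
import numpy as np

# Toy "malware classifier": a linear model over three handcrafted features.
w = np.array([1.5, -2.0, 0.8])        # illustrative trained weights
b = 0.1

def score(x):
    """The model's estimated probability that x is malicious."""
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

x = np.array([0.5, -0.4, 0.5])        # a genuinely malicious sample
print(f"original score: {score(x):.2f}")       # ~0.89: flagged

# FGSM-style evasion: step every feature against the gradient of the
# score (for a linear model, the gradient with respect to x is just w).
epsilon = 0.5                          # attacker's per-feature budget
x_adv = x - epsilon * np.sign(w)
print(f"perturbed score: {score(x_adv):.2f}")  # ~0.48: waved through
```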

This approach could not only cause AI to malfunction, but could also let cybercriminals sneak into sensitive systems by teaching increasingly common AI-powered defences to treat their activity as benign.

“By injecting small amounts of noise and bad data, bad behaviour is masked as good behaviour over time,” Gibian explained.
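A minimal sketch of how that slow poisoning can play out, using an assumed synthetic “traffic” dataset and scikit-learn’s off-the-shelf logistic regression: relabelling a slice of malicious training samples as benign teaches the retrained model to wave similar behaviour through. The data, the poison rate and the attack point are illustrative; as Gibian describes, a real attacker would drip the bad labels in gradually over many retraining cycles.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

# Synthetic traffic: benign behaviour clusters around -1 on both
# features, malicious behaviour around +1.
X = np.vstack([
    rng.normal(loc=-1.0, scale=0.5, size=(500, 2)),  # benign
    rng.normal(loc=+1.0, scale=0.5, size=(500, 2)),  # malicious
])
y = np.array([0] * 500 + [1] * 500)    # 0 = benign, 1 = malicious

attack = np.array([[0.3, 0.3]])        # behaviour the attacker wants hidden

clean = LogisticRegression().fit(X, y)
print("clean model:", clean.predict_proba(attack)[0, 1])     # flagged (>0.5)

# Poison: relabel the 100 malicious training samples nearest the attack
# pattern as "benign" -- Gibian's "slightly bad" data.
dist = np.linalg.norm(X - attack, axis=1)
flip = np.argsort(np.where(y == 1, dist, np.inf))[:100]
y_bad = y.copy()
y_bad[flip] = 0

poisoned = LogisticRegression().fit(X, y_bad)
print("poisoned model:", poisoned.predict_proba(attack)[0, 1])  # waved through
```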

Because AI models learn only from the data they’re given, an inability to detect such manipulation leaves AI-dependent ecosystems vulnerable to exploitation – and leaves their unquestioning adopters exposed to all manner of potential problems.

“The bigger issue is that people view this as a panacea,” he said, “and believe that if they have lots of data and supply it to the AI, the answers will become clear.”

Getting smarter about AI

While AI’s inarguable potential has seen it rapidly reshaping every area of business and government, its genesis in data science has left a blind spot around cybersecurity.

This means the integrity of critical transportation, health and other systems is being placed in the hands of data scientists for whom cybersecurity is generally of passing interest, or no concern at all.

“Most of these applications are built by data scientists that very rarely have a traditional cyber background,” Gibian explained, “and therefore the risks and challenges of adversarial thinking aren’t drilled into them the way they are for cyber professionals.”

Gibian’s engagements with senior government and military officials – he is a Presidential Innovation Fellow who works with the US military’s Defense Innovation Unit (DIU) – taught him that they are well aware of AI’s weaknesses.

“The need for explainability, security, and the fact that AI systems can be broken is heavily discussed by senior brass,” he said, “but what was missing was how we could do it.”


Gibian’s desire to fix the problem led him to leave the White House last year and found explainable-AI firm Calypso AI, in an effort to help companies make AI secure, resilient, ethical, and accountable.

The firm’s cloud-based explainability solution analyses how an AI algorithm operates, modelling its normal functioning and measuring its susceptibility to attack so that anomalies injected by cybercriminals become easier to detect.
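Calypso AI has not published its method, but the general baseline-then-flag idea can be sketched generically: profile the model’s inputs during known-good operation, then score later traffic against that baseline. The example below uses scikit-learn’s IsolationForest as one plausible stand-in detector; everything in it is an assumption for illustration, not the firm’s actual approach.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(7)

# "Normal functioning": input features recorded while the AI system
# serves known-good traffic, forming the behavioural baseline.
baseline = rng.normal(loc=0.0, scale=1.0, size=(1000, 4))

detector = IsolationForest(contamination=0.01, random_state=7)
detector.fit(baseline)

# Later traffic: mostly in-distribution, plus a few inputs far from the
# baseline, of the kind evasion or poisoning attempts tend to produce.
suspect = np.vstack([
    rng.normal(size=(5, 4)),           # looks like normal operation
    rng.normal(loc=4.0, size=(3, 4)),  # far outside the learned profile
])
print(detector.predict(suspect))       # +1 = normal, -1 = anomaly;
                                       # the last three come back -1
```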

Early trials with financial-services, insurance and other companies have helped explain what AI models are doing and why – but humans’ rapidly growing reliance on AI means that issues of trust and transparency need to become a far more prominent part of decision-making around its adoption.

“As we build more digitally centric systems and a digitally centric society, we need to build that trust in at the same time,” said Mark Pedersen, chief technology officer with strategic advisory firm KJR, which has been working with Calypso AI to trial its models in Australian businesses and is fielding “daily” enquiries about AI.

“As people look at where AI fits within their business, they are asking questions about its implications,” he continued.

“We are increasingly being asked to provide this kind of assurance, and where the explainability isn’t there, it is a huge threat to AI’s uptake.”

Ultimately, Gibian said, the key is being aware of the risks – and not taking anything for granted.

“We’re talking about life-and-death situations,” he explained, “and those areas are where we really have to slow down, and say we’re not necessarily going to do that until these criteria are met.”

“The only way to ensure [compromise] doesn’t happen is to be worried – because if we’re not always worried, that’s when bad things are going to happen.”