Hackers are turning to hardware exploits and AI as developments in tools, techniques and affordability bolster researchers and cyber criminals alike.

The annual Inside the Mind of a Hacker report by security crowdsourcing platform Bugcrowd found that some 81 per cent of hackers had encountered new hardware vulnerabilities in the last 12 months – suggesting manufacturers are overlooking swathes of security faults in products already on the market.

Hardware hacking refers to the exploitation of vulnerabilities in the physical components of a device, rather than in software alone.

This often means manipulating hardware facets such as clock signals, voltage or Bluetooth components to bypass security measures, gain unauthorised access, and ultimately tamper with a device’s intended functionality.
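To make the mechanics concrete, a voltage or clock “glitch” attack usually amounts to sweeping fault timings until a security check misfires. The Python sketch below is purely illustrative – the arm_glitch and reset_and_read_target helpers are hypothetical placeholders for real fault-injection hardware, not tools named in the report.

```python
import itertools
import random  # stands in for unpredictable hardware behaviour in this sketch

def arm_glitch(offset_ns: int, width_ns: int) -> None:
    """Hypothetical: configure a voltage glitch `offset_ns` after reset, lasting `width_ns`."""
    # Real code would drive a fault-injection board over USB or serial.

def reset_and_read_target() -> bytes:
    """Hypothetical: reset the target device and capture its boot output."""
    # A successful glitch might skip a signature or readout-protection check.
    return b"LOCKED" if random.random() > 0.001 else b"DEBUG SHELL"

# Sweep glitch timing and pulse width until the device misbehaves in a useful way.
for offset_ns, width_ns in itertools.product(range(0, 5000, 50), range(20, 200, 10)):
    arm_glitch(offset_ns, width_ns)
    output = reset_and_read_target()
    if b"LOCKED" not in output:
        print(f"Possible bypass at offset={offset_ns}ns, width={width_ns}ns: {output!r}")
        break
```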

Such has been the case for the popular Ecovacs Deebot X2s series of robot vacuums – which the ABC reports has been hacked multiple times to hurl obscenities at owners and remotely view footage from the devices’ in-built cameras.

Bugcrowd’s report – which aims to destigmatise hacking by exploring security trends through the lens of white-hat hackers and security researchers – surveyed nearly 1,300 self-identified ethical hackers on hardware hacking.

Sixty-four per cent said they observe more hardware vulnerabilities now than they did a year ago.

Erik De Jong, senior cyber security consultant and ethical hacker, told Information Age that hackers are increasingly turning to hardware hacking due to the “overwhelming amount of ‘smart’ Internet-of-Things devices” being brought to market.

“From affordable doorbell cameras to connected fridges and smart cat-feeders, we see products with very short research and development being rushed to market from sectors that often don't have a long track record in cyber security and privacy,” said De Jong.

“These products are both widely available and cheap, so they make for an enticing target for hackers.”

Bugcrowd’s report pointed to a recent “democratisation of hacking tools”, whereby hackers can now pick up increasingly affordable tools and the techniques to use them.

One such technique is the “side-channel” attack, where a threat actor monitors the information a device unintentionally leaks – such as its power consumption or electromagnetic emissions – to discern cryptographic keys or other confidential data.
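Power and electromagnetic analysis need lab equipment, but the underlying idea – inferring secrets from what a system unintentionally reveals – can be shown with a simple timing side channel. The Python sketch below is a generic illustration rather than anything described in the report, and it exaggerates the per-byte delay so the leak is easy to measure.

```python
import hmac
import time

SECRET = b"hunter2-api-key"

def insecure_check(guess: bytes) -> bool:
    # Returns as soon as a byte differs, so response time leaks how many
    # leading bytes of the guess were correct.
    if len(guess) != len(SECRET):
        return False
    for a, b in zip(guess, SECRET):
        if a != b:
            return False
        time.sleep(1e-3)  # exaggerated per-byte cost to make the leak obvious
    return True

def timed(guess: bytes, trials: int = 20) -> float:
    start = time.perf_counter()
    for _ in range(trials):
        insecure_check(guess)
    return time.perf_counter() - start

# A wrong first byte is rejected almost instantly; a correct first byte takes
# measurably longer – enough to recover the secret one byte at a time.
for candidate in (b"a" + b"x" * 14, b"h" + b"x" * 14):
    print(candidate[:1], f"{timed(candidate):.4f}s")

# The standard defence is a constant-time comparison such as hmac.compare_digest.
print(hmac.compare_digest(b"a" + b"x" * 14, SECRET))
```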

The report highlights that this dangerous tactic has not only been boosted by AI, but has also become more accessible as measuring equipment has grown more precise and affordable in recent years.

“Given the increase in cheaply made and often unnecessarily complex smart devices on the market and the advancements in tools for hacking, the conditions are ripe for hardware hacking,” wrote Bugcrowd.

“Unfortunately, this also means that the conditions are perfect for threat actors who target hardware, threatening consumers, companies, and governments.”

AI tools a hacker’s dream

Bugcrowd explained hardware security flaws go far beyond household devices, and that the consequences of vulnerabilities in sensitive devices – such as those in the medical industry – can be particularly severe.

In 2017, for example, the US Food and Drug Administration recalled half a million pacemakers over concerns they could be used by hackers to run down the batteries and alter a patient’s heartbeat.

The report further noted that using AI, a threat actor can “tap multiple webcams” for reconnaissance on targets, effectively creating “large, powerful intelligence networks”, before positing potential risks to physical security given AI’s growing efficiency in producing fake IDs and RFID badges.

Bugcrowd’s vice president of advanced services, Julian Brownlow Davies, told Information Age AI is not only “reshaping hacking techniques”, but has also become a “prime target” for hackers.

AI boosts hacks in equal measure

De Jong noted AI has matured so much that it’s not “just the early adopters” using the technology anymore, with both attackers and defenders adopting AI in an “equal manner”.

“There have been some high-profile incidents where employees were tricked into handing over money through deep fake video calls, but on the other hand, we are also seeing companies adopting machine learning to enhance anomaly detection in network traffic or account activity,” he said.
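As a purely illustrative example of the defensive side De Jong describes – not a reference to any specific vendor’s product – flagging account activity that sits far outside its historical baseline can be sketched in a few lines of Python.

```python
import numpy as np

rng = np.random.default_rng(seed=7)

# Simulated history: daily login counts for one account, ending with a burst
# that could indicate credential stuffing or a compromised session.
logins = np.concatenate([rng.poisson(lam=12, size=60), [85]])

baseline = logins[:-7]                      # build the baseline from older history
mean, std = baseline.mean(), baseline.std()

# Flag any day whose z-score exceeds three standard deviations.
z_scores = (logins - mean) / std
for day in np.flatnonzero(np.abs(z_scores) > 3):
    print(f"Day {day}: {logins[day]} logins (z = {z_scores[day]:.1f})")
```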

Shannon Davis, principal security strategist at Surge, the security research team of data analytics company Splunk, told Information Age that while his unit has not seen any new attack surfaces due to the rise of AI, the technology “can help adversaries do their jobs more effectively”.

“We have seen evidence of adversaries utilising AI to help scan the internet for vulnerable services, but this isn’t new, just a use case where AI has helped to do the adversary’s bidding,” he said.

“It isn’t the nuclear weapon we are seeing some people report it to be.

“It really is just a force multiplier both for good and bad.”