Many underpaid and overstressed cyber security professionals are moonlighting as dark web cyber criminals, according to a new study that creates more headaches for security leaders who already admit they are unprepared for the onslaught of AI-enhanced cyber crime.
The findings – part of a Chartered Institute of Information Security (CIISec) study described in the organisation’s latest State of the Profession report – come from the investigations of a former police officer turned cyber investigation specialist, who spent six months trawling dark web sites and job advertisements.
Many programmers advertised their skills at bargain-basement rates – including, for example, a Python developer and computer science university student asking just $48 (£25) per hour to develop cyber crime standbys like VoIP chatbots, AI chatbots, and hacking and phishing frameworks.
Another developer, offering scripting and ‘backend’ software development skills, claims to have “extensive experience with public and private clouds which includes maintaining said infrastructure”.
As well as posts from people purporting to be professional penetration testers and employees of global IT companies, the investigations uncovered an out-of-work voiceover artist offering to voice vishing campaigns, along with graphic designers, a public relations professional, and content writers – all willing to help cyber criminals develop more convincing and effective campaigns.
Professionals were relatively easy to differentiate from hardcore cyber criminals, the investigator said, with the former often alluding to “current legitimate professional roles” or writing their ads “in the same way as someone advertising their services on platforms like LinkedIn.”
“In an industry that is already struggling to stop adversaries,” he said, “it’s worrying to see that bright, capable people have been enticed to the criminal side.”
With cyber security jobs already among the highest-paid roles in the IT industry – one recent survey found Australian employers will pay an average salary premium of 27 per cent for skilled cyber security workers – CIISec CEO Amanda Finch said the posts suggest that work-related stress and burnout were driving workers to consider other avenues for their talents.
Citing Gartner’s recent prediction that a quarter of cyber security leaders will change jobs by 2025, Finch said that “salaries and long hours are contributing to this, and we’re starting to see the impact [as] highly skilled individuals are turning to cyber crime.”
Many of those leaving the cyber security industry “will be desperate enough to work in an area that promises large rewards for their existing skills and knowledge,” she said.
“Preventing this means ensuring we are doing all we can as an industry to attract and retain talent.”
The last thing the CISO needed
For chief information security officers (CISOs) and other executives fighting to protect their companies’ operations from cyber criminals – and already reeling from a year of escalating cyber criminal activity that saw 25 new ransomware gangs established – revelations that they could unknowingly be fighting their own employees, who bring both intimate knowledge of the business and legitimate access credentials, will come as a shock.
Fully 11 per cent of malicious attacks reported to the Office of the Australian Information Commissioner (OAIC) in the second half of 2023 involved rogue employees or insider threats – further challenging executives who are already struggling to deal with the surge in artificial intelligence (AI) assisted cyber security attacks.
A new Darktrace global survey of 1,773 security professionals in 14 countries, for example, found that 89 per cent believe AI-augmented threats will significantly impact their organisation within the next two years.
Some 60 per cent admit being “unprepared to defend against” such attacks, Darktrace found, with chief product officer Max Heinemeyer noting that “AI, automation, and cybercrime-as-a-service have increased the speed, sophistication and efficacy of cyber security attacks.”
Detections of phishing emails were 14 per cent higher in December than in September, Heinemeyer said, while phishing attacks using “novel social engineering techniques” grew by 35 per cent over the same period.
Surging use of generative AI (genAI) tools – used by around half of Darktrace customers – and a cyber crime-as-a-service ecosystem well stocked with pre-made malware and phishing templates had made it “inevitable that AI will increase the risks and threats facing an organisation,” Heinemeyer said, “but this is not an unsolvable challenge from a defensive perspective.”
Defensive AI systems are becoming an increasingly popular way of pushing back against surging cyber criminal activity – with the US FTC recently cracking down on AI impersonation, Google launching an AI Cyber Defence Initiative, and the European Union opening its formal AI Office.
Fully 1,098 patents related to AI cyber threat detection were lodged in 2023, and 53 new companies entered the market last year alone, according to GlobalData figures that highlight what the firm’s project manager of disruptive tech, Vaibhav Gundre, called “rapid progression in the field of defensive AI”.
The trend reflects “the recognised importance of having formidable cyber defence systems in place,” Gundre said, “signifying substantial R&D activities aimed at overcoming new cyber threats.”
“As organisations harness the power of AI cyber threat detection, they must also confront significant challenges.”