Cyber criminals are circumventing the security methods designed to block them by exploiting email filters, using voice generators to steal one-time passwords (OTPs), and deploying bots that, a recent study found, solve CAPTCHA tests better than humans.
OTP systems, for one, send unique, time-sensitive verification codes to a user’s registered smartphone or tablet – but a recent CloudSEK analysis found that hosted ‘vishing’ services are bundling OTP scanners with automatic voice-generation systems.
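For context on what the bots are harvesting: one-time codes are typically derived from a secret and the current time window, so anyone who learns a fresh code can use it before that window expires. A minimal Python sketch of the idea follows (loosely modelled on the RFC 6238 TOTP scheme; the secret and parameters are illustrative assumptions, and SMS-delivered OTPs may be generated differently):

```python
import hashlib
import hmac
import struct
import time

def one_time_code(secret: bytes, window: int = 30, digits: int = 6) -> str:
    """Derive a short-lived numeric code from a shared secret and the
    current time window (simplified TOTP, per RFC 6238)."""
    counter = int(time.time()) // window   # changes every `window` seconds
    msg = struct.pack(">Q", counter)       # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F             # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

# Illustrative only: both sides derive the same code from a shared secret,
# so the code a victim reads out to a vishing bot is all an attacker needs
# to complete a login within the current time window.
print(one_time_code(b"illustrative-shared-secret"))
```

The time-limited validity is exactly why vishing bots press victims to act immediately: a code relayed within seconds is still usable.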
Services such as SpoofMyAss.com offer “the escalation of OTP bots,” CloudSEK cyber threat intelligence researcher Shreya Talukdar noted in describing hosted services that offer OTP extraction; global calling with support for multiple languages; personalisation that includes speaking the victim’s name and account details; anonymous calling; and support for templates that bots use to make themselves sound more convincing.
The approach has proven effective in helping cyber criminals emulate technical support staff or bank helpdesks, with automated bots calling victims, pushing them to log into their accounts, and manipulating them into disclosing the OTPs they receive during the login process.
Threat actors “can pose as trusted entities to trick victims into revealing sensitive information,” Talukdar said, noting that the combination of counterfeit caller IDs and fake websites “ultimately [leads] to data theft and increased security risks.”
The approach has been implicated in the recent breach of MGM Resorts, which is already facing class-action lawsuits less than a fortnight after the attack. Experts have concluded the breach was facilitated when cyber criminals used social-media data to customise a vishing attack that helped them steal a systems administrator’s credentials.
Those credentials were then used to access, copy, and shut down over 100 of the company’s virtual servers, compromising critical operations across the gaming giant’s more than 30 casinos and hotels.
Using victims’ security tools against them
Such attacks highlight the ongoing risk as humans are targeted, with increasing accuracy, by cyber criminals tapping new AI technologies to create convincing attacks – even as defenders respond with new tools of their own, such as an AI-powered scam protection tool from McAfee that scans for malicious web links in text messages, emails, social media posts, and elsewhere.
“As scammers use AI to create increasingly sophisticated attacks,” the firm notes, the new platform “can help you tell what’s real and what’s fake.”
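McAfee hasn’t published the tool’s internals, but the general pattern such scanners follow – extract the links from a message, then check them against threat intelligence – can be sketched in a few lines of Python (the regex, domains, and blocklist below are illustrative assumptions, not McAfee’s implementation):

```python
import re

# Illustrative blocklist; a real product would query live threat-intelligence feeds.
KNOWN_BAD_DOMAINS = {"evil-login.example", "otp-reset.example"}

# Naive URL matcher that captures the host portion of http(s) links.
URL_PATTERN = re.compile(r"https?://([^/\s]+)", re.IGNORECASE)

def flag_suspicious_links(message: str) -> list[str]:
    """Return any linked domains in the message that appear on the blocklist."""
    domains = (match.group(1).lower() for match in URL_PATTERN.finditer(message))
    return [domain for domain in domains if domain in KNOWN_BAD_DOMAINS]

sms = "Your account is locked. Verify now: https://evil-login.example/reset"
print(flag_suspicious_links(sms))  # ['evil-login.example']
```

Production scanners layer on reputation scoring, redirect-following, and AI-based content analysis, but the extract-and-check loop is the core of the approach.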
Yet for each new defensive tactic, resourceful cyber criminals continue to find ways around it – turning security tools against their owners, much as many martial arts disciplines use an opponent’s own body weight to defeat them.
Cybercriminal gangs such as Kimsuky, LAPSUS$ and Silent Librarian, for example, have been caught reconfiguring victims’ email accounts, using filters and email-hiding rules – which flag, delete, or forward emails based on their content or other attributes – to send critical data to themselves.
Attackers also use rules to obscure the details of attacks: automatically moving emails with valuable information into obscure folders; deleting genuine emails from senior executives who are being impersonated in business email compromise (BEC) attacks; or deleting warning emails – whether confirmations of account-detail changes, alerts about unusual activity, or emails from IT helpdesks containing words like ‘hacked’, ‘phish’, or ‘malware’ – that might otherwise tip off users that their account has been compromised.
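Because these hiding rules are ordinary mailbox configuration, defenders can hunt for them. A minimal Python sketch of that kind of audit, assuming a simplified, hypothetical representation of exported inbox rules rather than any specific provider’s API:

```python
# Hypothetical, simplified representation of exported inbox rules; a real
# audit would pull these from the mail provider's admin API.
SUSPECT_KEYWORDS = {"hacked", "phish", "malware", "unusual activity"}
SUSPECT_ACTIONS = {"delete", "forward_external", "move_to_obscure_folder"}

def audit_rules(rules: list[dict]) -> list[dict]:
    """Flag rules that act on security-related keywords and hide or exfiltrate mail."""
    flagged = []
    for rule in rules:
        keywords = {k.lower() for k in rule.get("match_keywords", [])}
        if keywords & SUSPECT_KEYWORDS and rule.get("action") in SUSPECT_ACTIONS:
            flagged.append(rule)
    return flagged

mailbox_rules = [
    {"name": "tidy newsletters", "match_keywords": ["newsletter"], "action": "move"},
    {"name": "rss sync", "match_keywords": ["phish", "hacked"], "action": "delete"},
]
for rule in audit_rules(mailbox_rules):
    print(f"suspicious rule: {rule['name']!r}")
```

Innocuous-sounding rule names (like the ‘rss sync’ rule above) are part of the camouflage, which is why audits key on what a rule matches and does rather than what it is called.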
Email and OTPs aren’t the only user verification methods that cyber criminals have figured out how to subvert: thanks to improving AI and computer-vision techniques, a recent study found that humans are worse at completing CAPTCHA verification challenges than the bots those challenges are designed to stop.
Improvements in bots’ underlying AI mean automated attacks can complete even challenging CAPTCHAs with an accuracy of 85 to 100 per cent, with most proving at least 96 per cent accurate, the study’s authors note – results that “substantially exceed the human accuracy range” in a series of tests that found humans complete the tests accurately just 50 to 85 per cent of the time.
“Bots’ solving times are significantly lower in all cases”, the researchers concluded – with bots even solving image-based reCAPTCHAs, which prompt users to identify particular images out of a changing grid, in 17.5 seconds on average, compared to 18 seconds for humans.
Study participants preferred game-based CAPTCHAs over text- and image-based ones, the study found, and people who primarily use the Internet for work were slower to complete CAPTCHAs than those who use it for other tasks – a reality check for anyone hoping to rely on CAPTCHAs to keep bots off their networks.
“Automated bots pose a significant challenge for, and danger to, many website operators and providers,” the investigators warn. “If left unchecked, bots can perform nefarious actions at scale.”