Microsoft is continuing its mission to fit artificial intelligence (AI) into every one of its software products with the announcement of a cyber security chatbot that can reverse engineer malware and analyse log files.
In keeping with its GitHub and 365 AI tools, the product falls under Microsoft’s Copilot branding and is aimed at complementing a security team’s existing workloads and processes.
“Today the odds remain stacked against cyber security professionals. Too often, they fight an asymmetric battle against relentless and sophisticated attackers,” Vasu Jakkal, VP of Microsoft Security, said. “With Security Copilot, we are shifting the balance of power into our favor.”
The sudden proliferation of large language models has put fear into the hearts of security professionals, who worry that cheap, publicly accessible AI will dramatically scale up the capability of adversaries by, for instance, lowering the technical nous needed to engineer bespoke exploits.
Earlier this year, one cyber security expert explained to Information Age how he had used ChatGPT to deliberately obfuscate a piece of known malicious code in a way that effectively hid it from detection.
Microsoft is trying to level the playing field with what it claims is the “first and only generative AI security product enabling defenders to move at the speed and scale of AI”.
Videos of Security Copilot in action show it processing queries in natural language. Examples like easy-to-understand explanations of incidents or reverse-engineered malware samples show its potential to ease a small security team’s response and reporting burden.
Other than demo videos and plenty of marketing materials, details on Security Copilot’s availability are sparse, with Microsoft saying only that it is “in preview and not generally available”.
So far there is little detail on pricing, or on how Security Copilot will be bundled and integrated with Microsoft’s other security services like Windows Defender.
Microsoft also promises that your company’s data “is protected by the most comprehensive compliance and security controls in the industry,” adding that “it’s never used to train other AI models”.
Earlier this year, developers launched a class-action lawsuit against Microsoft for using open-source software hosted on its GitHub platform to train GitHub Copilot, the AI-assisted coding tool which GitHub then sold back to developers for a monthly subscription fee.
The success of ChatGPT, whose maker OpenAI has received at least $15 billion (US$10 billion) in Microsoft investment, has spurred a flurry of activity among Microsoft’s enterprise software divisions as it looks to capitalise on the AI hype and refresh its productivity tools.
A new update to video conferencing and messaging app Teams landed this week. The app has been completely rebuilt to be lighter and less resource-intensive, with an updated UI and, of course, more AI “to take the work out of working together”.
There are also reports, via Windows Central, that Microsoft wants to modernise the structure of its Windows operating system to better suit devices of different sizes, not just PCs.
Codenamed ‘CorePC’, the new version of Windows could be state separated – meaning some of it would be installed on read-only drive partitions – and could feature further AI integration that takes advantage of onboard hardware like graphics cards instead of calling out to the cloud.