It’s one of the ICT industry’s most in-demand skills, but the popularity of ‘vibe coding’ – which taps generative AI (genAI) to let nearly anyone write apps and autonomous ‘AI agents’ – is worrying security experts who warn it could blow data security wide open.
Vibe coding emerged as genAI programming tools like Cursor, Windsurf and GitHub Copilot became good enough at writing code that even non-technical people can build functioning applications simply by describing them in words.
Vibe coding skills are flooding online job descriptions, with the likes of Reddit, ServiceNow, Alphabet, eBay and DoorDash recruiting vibe coders and firms like Canva demanding applicants show AI development capabilities during interviews.
Increasing demand for vibe coding skills reflects the “pivotal” integration of AI into coding, GlobalData business fundamentals analyst Sherla Sriprada said in a recent analysis flagging growing market demand.
Visa Inc, for example, is hiring a Staff Fullstack Engineer with responsibilities including “deploying, developing and optimising AI-powered tools for code/prototype generation using modern LLMs” and vibe coding tools like Bolt, Lovable, and v0.
“Vibe coding signals a structural shift in how enterprises will build, test, and scale digital solutions,” Sriprada said, adding that “it redefines competitive advantage by enabling organisations to innovate continuously, tailor digital experiences at scale.”
“Companies that fail to invest in vibe coding capabilities risk being left behind as software development evolves from a technical function into a conversational business driver.”
When vibe coding tools can’t be trusted
Yet vibe coding’s casual approach to application development is also a weakness, and one evident ever since AI luminary Andrej Karpathy coined the term to describe a form of development “where you fully give in to the vibes, embrace exponentials, and forget that the code even exists.”
“The code grows beyond my usual comprehension [so] I’d have to literally read through it for a while” to understand what it’s doing, he said, noting that “sometimes the [AI] can’t fix a bug so I just work around it or ask for random changes until it goes away.”
“I’m building a project or webapp, but it’s not really coding – I just see stuff, say stuff, run stuff, and copy paste stuff, and it mostly works.”
That’s hardly a vote of confidence in the quality of genAI tools that still fall far short of corporate security requirements.
Developers with access to genAI tools “wrote significantly less secure code than those without access,” one study found, with those using AI assistants “more likely to believe they wrote secure code than those without access.”
Last month, a Veracode study found that vibe coding tools write insecure code 45 per cent of the time, with CTO Jens Wessling warning that core genAI models aren’t getting more secure and that “blind faith is not the way to go.”
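The flaws involved are often mundane. As a purely illustrative sketch – the function names and schema below are invented for this article, not taken from either study – an assistant asked to ‘look up a user by name’ will often interpolate the input straight into a SQL string, a classic injection flaw, where a parameterised query is the safe equivalent:

```python
import sqlite3

def find_user_unsafe(conn: sqlite3.Connection, name: str) -> list:
    # Typical AI-generated pattern: user input interpolated into the SQL.
    # A name like "x' OR '1'='1" turns the WHERE clause into a tautology
    # and returns every row in the table.
    return conn.execute(
        f"SELECT id, name FROM users WHERE name = '{name}'"
    ).fetchall()

def find_user_safe(conn: sqlite3.Connection, name: str) -> list:
    # Parameterised query: the driver handles escaping, so the same
    # malicious string simply matches no rows.
    return conn.execute(
        "SELECT id, name FROM users WHERE name = ?", (name,)
    ).fetchall()

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
    conn.executemany("INSERT INTO users VALUES (?, ?)",
                     [(1, "alice"), (2, "bob")])
    payload = "x' OR '1'='1"
    print(find_user_unsafe(conn, payload))  # leaks both rows
    print(find_user_safe(conn, payload))    # returns []
```

Both versions ‘work’ on benign input, which is precisely why such flaws sail past vibe coders who, as Karpathy admits, never really read the code.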
Newer vibe coding tools aren’t necessarily better: internal testing of xAI’s new Grok Code Fast 1 (GCF1) model showed that it can’t be trusted.
The recently released model card for GCF1, which xAI says was “designed for coding applications using agentic harnesses”, revealed a high ‘dishonesty rate’ – meaning that the model will not only make mistakes, but actively cover them up.
Because xAI trained GCF1 “to answer all queries that do not express clear intent to engage in specified prohibited activities”, the tool is, for example, more likely than the broader-based Grok 4 to claim that software it has written is secure when it’s not.
However, the company concluded optimistically that because GCF1 is only suited to specific software design and scientific experiments, its lack of “dual-use capabilities” compared to Grok 4 means it “does not meaningfully change the risk landscape.”
That’s an optimistic take on a class of technology that was recently blamed for deleting a live company database during an experiment by software engineer Jason Lemkin, who described the Replit vibe coding tool as “the most addictive app I’ve ever used.”
“Will I go from ideation to a 100% commercial-quality app all inside a vibe coding app without hiring a dev, coding myself, etc.?” he wrote on X, adding that he “[gives] it a 50/50 shot right now… but I am addicted.”
Addictive, compelling… and dangerous
AI-assisted development is becoming the new corporate standard, with genAI tools speeding up software development lifecycles (SDLCs) and forcing software teams to reorganise around automated assistants that write, check, and audit their code.
Such agents are ever easier to create: security firm Netskope, for one, recently noted that the “simplicity and flexibility” of genAI tools had boosted adoption by 50 per cent in the May quarter – with GitHub Copilot now used in 39 per cent of businesses.
Furthermore, Netskope found, 5.5 per cent of businesses already have users running AI-written agents internally – potentially introducing genAI-created security vulnerabilities that end users wouldn’t even know how to check for.
The security community is fighting to improve the safety and security of vibe coding tools, with the likes of Aussie success story Secure Code Warrior (SCW) releasing formal AI Security Rules that serve as “a whisper in the AI’s ear” to discourage insecure coding.
“AI coding tools are incredibly helpful, but not infallible,” SCW said, noting that “while they can generate working code quickly, they often lack the nuance to understand the specific standards, conventions, and security policies of a given team or project.”
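SCW distributes such rules in formats that popular coding assistants can ingest as standing instructions. The snippet below is a simplified, hypothetical sketch of the general pattern – the file name and wording are illustrative, not SCW’s actual published rules:

```
# Illustrative AI coding rules file (e.g. a .cursorrules file).
# Wording and file name are hypothetical, not SCW’s published rules.
- Never interpolate user input into SQL, shell commands, or HTML;
  use parameterised queries, argument arrays, and output encoding.
- Never hard-code secrets; load credentials from the environment
  or a secrets manager.
- Validate and length-limit all external input at trust boundaries.
- Prefer the project’s vetted crypto and auth libraries over
  hand-rolled implementations.
```

Because the assistant re-reads such a file with every request, the rules act as persistent context – the ‘whisper’ – rather than a hard guarantee, consistent with SCW’s caution that the tools are “not infallible”.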
Fully 86 per cent of businesses expect “heightened risks” from genAI tools, a recent Infosys survey of more than 1,500 executives in Australia and elsewhere found – yet almost none of them have adequate controls to protect against those risks.
The problem is rapidly growing out of control: a new Checkmarx study of over 1,500 CISOs and developers, for example, found that half of companies are already using coding assistants to write and check the security of their applications.