It attracted a record 1,010 students from over 20 universities and spawned 183 different projects – but as the dust settled and the winners of this year’s Unihack hackathon were announced, one other silent partner also shared line honours: generative AI (genAI).

Teams from Monash University dominated the winners' list, taking out all three podium spots with projects including the GPS-free tracking tool Antrum MK1, natural language Sentinel AI drones, and the Cowdar iOS app, which uses Light Detection and Ranging (LiDAR) to scan a cow and estimate its weight.

Also applauded were the Sydney/Macquarie Human Operating System, Melbourne/Monash SpeechMax public speaking trainer, BeeSafe (Monash), GridSim (USyd/UNSW), StudySphere (UNSW), Eco Quest (Macquarie), and Canary.ai (UNSW/USyd/WSU).

The EU Shared Future Prize (the Delegation of the European Union to Australia funded this year's Unihack) was awarded to Anchor, an emergency response app that not only delivers emergency alerts but also explains to recipients why taking action is necessary.

“So many awesome teams and awesome ideas being worked on,” judge Steve Rudakov wrote, adding that it’s “really inspiring to see the creativity and drive to build among the next generation of tech founders… you should all be very proud.”

Is it still innovative if AI is doing the coding?

While hackathons used to be about manually coding under pressure, many of the projects leaned heavily on AI coding tools – a practice that, Unihack founder and community organiser Terence Huynh told Information Age, was debated heavily as recently as last year.

Few teams were using AI tools then, but a year later their growing sophistication has made them common in job interviews, ubiquitous in dev teams, and widespread during Unihack, with Huynh noting that AI "has increased the calibre and quality of hacks this year."

“A lot more people managed to get things done within a short amount of time,” he said, noting that even he used AI tools to write Unihack apps “within hours versus days or weeks… and it’s definitely helpful in the way it allows us to showcase a diversity of ideas.”

The winning entry came from a team at Melbourne's Monash University – a GPS-free tracking tool that straps to the leg. Photo: Unihack

Yet even as some coding purists openly question the point of participating in hackathons if most contestants are just using AI, Huynh believes the bigger goal of hackathons is to help surface innovative ideas.

“It becomes really weird for us to ban something that we know everyone in the industry is actively using,” he explained, “and if the industry is looking for those skills in the future, it makes no sense for us to say ‘you cannot use this’.”

That said, the competition rules require teams to disclose “any and all third-party tools”, he pointed out, adding that “being very transparent about using AI is much more important for us.”

Towards a human-free hackathon

If ubiquitous genAI tools are changing how students participate in hackathons, others are writing humans out of the equation – with security firm Tenzai recently revealing that its autonomous hacking agent has been outclassing human competitors with aplomb.

Working completely on its own, the AI agent beat over 125,000 human competitors in six major capture-the-flag competitions – websec.fr, dreamhack.io, websec.co.il, hack.arrrg.de, pwnable.tw, and Lakera’s Agent Breaker – and placed in the top 1 per cent of each.

Tenzai takes it as a win, arguing that “the practical effect is that elite offensive security expertise is available on demand and at a much larger scale than previously possible” – and calling the wins “a milestone in demonstrating offensive cyber abilities of AI agents.”

Startup XBOW is taking the concept and running with it, recently achieving unicorn status after it secured $171 million (US$120 million) in financing to scale up its ‘autonomous hacker’ – which had previously hacked its way to the top of the HackerOne leaderboard.

Such technologies are expected to become ever more deeply embedded in security response, with Gartner recently predicting that AI applications will manage half of cybersecurity incident responses by 2028.

“Attackers are already using AI,” founder and CEO Oege de Moor said, and “defenders need to move just as fast.”

AI is still no substitute for innovation

With a recent Sonar survey finding that 72 per cent of developers who have tried AI coding tools use them every day, use of AI is now the rule rather than the exception – but it’s not always an intrinsic advantage.

Indeed, a newly released Canadian study of 11 large language models (LLMs) found that leading AI coding tools are only accurate 75 per cent of the time – meaning that, on average, one in every four of their outputs contains errors.

And when Anthropic – whose Claude Code tool has become a favourite of developers – put 52 mostly junior software engineers into AI and no-AI groups, it found heavy AI use was associated with “less independent thinking and more cognitive offloading”.

Coders used AI not only to write code but also to solve problems and debug apps – yet “productivity benefits,” Anthropic warned, “may come at the cost of skills necessary to validate AI-written code if junior engineers’ skill development has been stunted by using AI in the first place.”