The title of the blog post read:

‘I Will F**king Piledrive You If You Mention AI Again.’

Published in June by Australian software engineer Nikhil Suresh, it went viral for pretty obvious reasons.

Beyond its sweary tongue-in-cheek headline, the post possessed a seriously fed-up tone and some hilariously brutal barbs, including:

“It is with great regret that I announce that the next person to talk about rolling out AI is going to receive a complimentary chiropractic adjustment … i.e., I am going to f**king break your neck.”

The missive was said to have caused at least one reader to literally choke on their morning coffee, and it reportedly went on to garner more than 1 million views.

Here was a director of a software consultancy arguing the world of artificial intelligence was “largely fraudulent” and full of “grifters” using the hype around the technology “to inflate their headcount so they could get promoted, or be seen as thought leaders”.

The post called out businesses which rushed to implement off-the-shelf AI programs and chatbots, despite not being able to create “the most basic applications imaginable” themselves.

It argued that while some AI systems clearly had their uses, implementing what was essentially still experimental tech into an organisation was not just a recipe for disaster but “a cookbook for someone looking to prepare a twelve-course f**king catastrophe”.

“It’s genius. No notes,” says Emily Kager, a senior software engineer in San Francisco, after reading Suresh’s article.

“I completely agree there is a massive disconnect between the engineers on the ground working in the codebase day-to-day and the higher ups making huge decisions to pivot to or utilise AI,” she tells Information Age.

Kager, a self-described “AI sceptic”, says industry peers have warned that her scepticism could see her get “left behind”.

"But honestly, most of my engineering peers are also sceptical and we all make jokes about AI,” she admits.

“If you ask most people raving about AI what AI actually is, most of them can’t answer beyond a sentence or two.

“I saw the same cycles play out with blockchain, crypto[currencies] and NFTs (non-fungible tokens).

“Working in tech in San Francisco, you meet a lot of people riding this hype cycle.”

The concerns of people like Suresh and Kager fly in the face of the barrage of press releases which technology journalists like myself receive each day about AI — almost every single one of them touting world-altering benefits for businesses, individuals, and society at large.

Cutting through the fanfare, Suresh’s viral blog post felt like he had screamed out loud what so many have been quietly pondering — is there too much hype around AI?

‘The largest gold rush in the history of capitalism’

While AI technologies have been in use for decades, it was the introduction of OpenAI’s popular chatbot ChatGPT in November 2022 which set off the gold rush around what’s known as generative AI.

GenAI has enabled everything from faster data analysis to major medical breakthroughs, but has also given people the ability to create whatever manipulated text, images, videos, audio or computer code they want.

OpenAI, which is based in San Francisco, is now one of the fastest growing companies in history, after building on technology first introduced by Google researchers in 2017.

The money being poured into the industry is immense, despite many tech companies and startups cutting costs and headcounts in recent years.

Toby Walsh, a member of the Australian government’s AI Expert Group and the AI Ethics Committee at ACS (the publisher of Information Age), told a House of Representatives committee in July:

“I don't think it's fully appreciated that we're actually in the middle of what is the largest gold rush in the history of capitalism.

“I did the numbers recently, and some $500 million every day is being spent globally on artificial intelligence.

“We have never seen that scale of investment, I suspect, for any previous technology.”


Tech companies are investing billions of dollars into their AI products and services. Photo: Shutterstock

Even relatively young AI startups are now being valued in the billions of dollars.

Funding for AI companies reportedly made up almost half of the $83 billion ($US56 billion) which startups in the US received between April and June alone.

Chip-maker Nvidia, whose graphics processing units (GPUs) are helping to fuel AI development, was at one point the world’s most valuable company.

American software firm Domo estimates $444,000 ($US293,000) is spent globally on AI every 60 seconds, with more than 19,000 images and 7,000 minutes of video being generated by the tech in that same amount of time.

The Tech Council of Australia, which represents the likes of Microsoft, Google and Adobe, argues AI could create 200,000 Australian jobs by 2030 with the right support — as the companies it represents push to scale up their AI products and services.

A recent Adobe report found around two-thirds of companies in Australia and New Zealand intended to restructure in 2024 in order to allow for uses of generative AI and the introduction of AI leadership roles.

Some organisations even marked ‘Artificial Intelligence Appreciation Day’ on 16 July, and the first Australian AI Awards are due to take place in Sydney this November.

But for all the excitement and eye-watering amounts of money, generative AI has “little to show for it so far” when it comes to monetary returns, investment bank Goldman Sachs said in June.

Companies such as Google, Microsoft and Amazon Web Services have also reportedly tempered their AI sales expectations, as their customers grow more cautious about implementing the technology.

Facebook owner Meta has even scrapped its costly celebrity AI chatbots — its first big play in consumer AI, which was announced less than a year ago.

‘The honeymoon is over’

Brad Kasell is the principal technology strategist at Domo, and believes “the honeymoon is kind of over” for AI.

“That’s not necessarily a bad thing,” he tells Information Age.

“I think it's a necessary thing. And so now we'll see if AI can really deliver.

“People are questioning, ‘Where's the return on investment? Where's the impact?’”

Kasell, who previously worked at the likes of Microsoft, IBM and Oracle, expects to see “quite a consolidation of failed projects” in the next 12 to 18 months, including investors pulling their funds and some startups finding “they run out of runway”.

“A lot of customers and a lot of people I speak to are feeling the same way,” he says.

“It’s a bit of an inflection point — a bit of a turning point for AI.”

Given the risk of AI hallucinations (like Google’s AI suggesting glue belongs on pizza), bias in training data, copyright and intellectual property infringement, and the spreading of misinformation, Kasell says “the upside just isn't there in comparison to the potential risks” for many companies.

“I think there will be some substantial test cases or proving points where people are really made examples of,” he says.

“I think people are just nervous right now that they don't want to be the ones who are the proof point for these kinds of issues.”

Despite these concerns around the accuracy of generative AI products and the difficulty of monetising them, many companies are implementing AI systems and chatbots.

In June, the Real Estate Institute of New South Wales claimed it was the first Australian organisation to employ an AI bot as a board advisor — a move which AI expert Toby Walsh deemed “a terrible idea” and “a PR stunt”.

In a recent survey of more than 100 Australian “business decision makers”, enterprise AI company Pegasystems found more than two-thirds of respondents (68 per cent) admitted to having experienced a failed AI implementation in their organisation.

Around half of them expressed concern over AI’s impact on their brand.

“I think a lot of companies are kind of enamored with the potential,” Kasell says.

“They're rolling it out in a lot of cases without really thinking too hard about, ‘Is this the right solution?’”

Emily Kager says many of her engineering peers in the US often see AI investments as “quite misguided and wasteful”, given how volatile the tech industry has been in the past few years.

“People are concerned because companies are throwing huge amounts of money and resources into these unproven initiatives and often laying off scores of their workers in order to do so,” she says.

“It seems like every leader is throwing resources into AI for fear of being left out and most of them are asking the wrong questions.

“Asking ‘what can we do with AI in our product?’ will probably lead you down the wrong path.”

Novel AI-focused consumer products have also struggled to find their footing this year.

There was clear excitement around Rabbit’s r1 AI personal assistant device ahead of its launch in March, but reviewers questioned its worth in a world of smartphones, and some clever folk even discovered its software could be made to run on a smartphone.

Humane, a company founded by ex-Apple employees and backed by big investors, began shipping its AI pin in April, but it was reported to be seeking a buyer for the business in May following negative reviews.


The Rabbit r1 and Humane AI pin are AI-based consumer products which launched this year. Photos: Supplied

The ‘deceptive’ marketing of ✨AI✨

Kager admits she brings “a heavy dose of scepticism” to any marketing which suggests the tech is akin to magic — and quite a lot of AI marketing does exactly that.

Given how difficult it is to illustrate a generative AI system, companies have essentially co-opted the sparkle emoji to represent the seemingly magical vibe they want for their AI products.

“We’ve just seen so much deceptive marketing of AI in the past,” Kager says, before alluding to overblown corporate AI announcements and examples of cheap human labour discovered under the technology’s facade.

Deceptive AI marketing has already seen two investment firms — Delphia and Global Predictions — pay civil penalties in the US after being charged with “making false and misleading statements about their purported use of AI”.

The charges were seen as the first significant move to combat so-called ‘AI washing’, in which organisations intentionally overstate their use of AI (much like in ‘greenwashing’), thereby misleading investors and the public.

Angel Zhong, an associate professor of finance at RMIT University, says some companies have tried to “exploit [AI’s] market hype” by associating themselves with it.

“The financial market is built on trust,” Zhong says.

“If this keeps happening, it will erode trust on both sides of the market.

"If it erodes investor trust, that means they may be hesitant to invest.

“That would slow down capital flow in financial markets, which then would slow down the growth of companies, and then the growth of the economy."

Zhong also sees “a mismatch” between the small amount of AI talent available in Australia and the large number of local companies claiming to use or develop the technology.


(L-R) Emily Kager, Brad Kasell and Angel Zhong. Photos: Supplied

Brad Kasell from Domo says AI washing is “definitely a thing”, and argues it is partly driven by market expectations.

“You can’t be seen to be not having an AI capability,” he says.

“So you end up with AI-powered washing machines, AI-powered fridges, and AI-powered TVs — everything has to be AI, when it’s really not.

“It's really just an element of data being processed in a certain way.

“I think because AI is so vague, it's really easy to say everything is AI — so it becomes more of a marketing imperative than actually a reality.”

In fact, a study published in June by Washington State University researchers found that marketing products as using AI often made them less popular, and tended to lower consumers' trust.

Europe has already moved to more closely regulate AI products, and Australia’s financial services and markets regulator ASIC says developing regulations around the technology is among its top priorities.

The Australian Competition and Consumer Commission (ACCC) is also examining how generative AI will impact competition and search engine services, while Treasury has promised to strengthen existing consumer laws around the technology.

Zhong hopes ‘AI washing’ will be considered in future Australian regulations, but believes rules will need to keep evolving.

“I suggest that any regulations associated with emerging technology need to be frequently reviewed and updated, just to keep up to date with the development of the industry,” she says.

‘Not a magic pill yet’

Emily Kager says she worries about generative AI’s environmental impacts, as well as how it is changing the software engineering profession.

She believes AI “will continue to widen the divide between good and bad engineers” if some workers keep over-relying on it, and if systems continue to be trained on publicly available code of questionable quality.

“The other day I saw a student using these tools to complete an assignment and they were simply copying and pasting piece-by-piece exactly what was generated from ChatGPT,” she says.

“If [the code] didn’t compile, they deleted it all and told ChatGPT, ‘That didn’t work.’

“If we’re going to proliferate these tools, we need to use them well and still be able to use our brains.

“It’s good to struggle a bit — this is how engineers learn to design, build, and debug.”

Kager says too many leaders in tech believe they can replace engineers with systems like ChatGPT, and a significant proportion of them are too caught up in the AI hype cycle.

While she sees the wider AI field as exciting and accepts some of its tools are worthwhile, she says “they are not a magic pill yet and they are still tools that need human-driven expertise”.

"Some of these new products end up being something useful, but most of these investments and startups fail,” she says.

“The founders and executives jump onto whatever new hype, and the cycle starts again.

“It’s difficult to believe AI will be any different with what I’ve seen so far.”