Generative AI may be helping starry-eyed users write emails, prepare presentations and even apply for jobs – but with many businesses and individuals still leery of its very real risks, one expert warns that “slaphazard” adoption of AI threatens to sideline the technology.

Although technology vendors have fallen over themselves to add generative AI features to their products this year, many companies that embraced generative AI have already been burnt after facing its limitations head-on.

San Francisco-based serial entrepreneur and mentor Jeremy Shapiro – who has heard all kinds of AI success stories and disasters during his 14 years as a facilitator with business networking and scale-up advisory group Bay Area Mastermind – warns that just because the technology is available doesn’t mean it should be used everywhere.

“AI has previously been pretty inaccessible to entrepreneurs and founders to use in their businesses,” he told Information Age, but easy access to generative AI tools means “business owners are using AI far more readily because the barriers to entry are gone – and their teams are also using it, whether they know it or not.”

Absent formal workplace AI policies and training, many workers who embrace the technology may be unaware of its limitations and faults – leaving businesses to answer for outcomes that may be neither accurate nor aligned with company values.

“When a company doesn’t have a policy, there is an underlying policy that’s actually happening,” Shapiro said. “When there isn’t a policy, people don’t know what’s being used or how it’s being used – and that can lead to things being questioned.”

Many companies that rushed to implement AI-driven chatbots faced pushback against platforms that were focused more on helping the company than helping its customers, he said – serving as a warning for companies like DoorDash, which recently announced plans to build an AI-driven phone order system to reduce the burden on restaurant staff so overworked that they miss nearly half of incoming calls.

Making such systems work for customers requires introspection: “We’ve all had the experience as consumers of trying to connect with a company and dealing with an infuriating chatbot that is not actually helping us solve a problem at all,” Shapiro said.

“Companies believe that by putting this technology in the way they’re going to help their customers and deflect requests – and you are deflecting requests, but are customers happier with the experience they’re having? Generally not.”

Shapiro has heard similar reports from companies that have embraced ChatGPT as an omniscient copywriter, presentation maker, personal assistant, contract reviewer, and more – without critically evaluating the technology’s underlying limitations.

While ChatGPT can write convincing letters and marketing materials, its grasp of marketing communications nuances like brand voice can be shaky at best – making it less useful for producing customer-ready EDMs than for drafting rough content that must be refined and tweaked by human subject matter experts.

“Saying ‘we have AI powered content creation’ has become table stakes from a competitive [vendor] standpoint, but I’ve seen a lot more companies starting to just plug in AI slaphazardly,” said Shapiro.

“We do have AI powered content creation, but is it really valuable and on point?” Shapiro asked. “Generally not. You need to look at what your industry is doing, whether consumers are expecting AI to be in your product in certain ways, and where AI can be used to better help customers.”

Businesses curb their enthusiasm

Amidst unbridled industry enthusiasm, surveys suggest that both businesses and the general public are taking generative AI – which Gartner recently flagged as being on its Peak of Inflated Expectations – with a grain of salt.

Fully 57 per cent of the 1,481 respondents to a recent Roy Morgan survey, for example, said they believe AI ‘creates more problems than it solves’, citing concerns about job losses, the need for regulation, and AI’s ability to be misused.

Only 18 per cent said the technology can be beneficial if used correctly – with just 16 per cent agreeing that the pros of AI outweigh the cons.

“Australians are excited about the benefits that AI technology can bring to everyday life,” noted Roy Morgan CEO Michele Levine, “but on balance the majority of us feel the potential for job losses, misuse, and inaccuracy outweigh these benefits.”

Professional services body CPA Australia found similar reluctance amongst businesses: its recently released Business Technology Report, a survey of 697 accounting and finance professionals, found that the number of businesses using AI remained “static” over the past 12 months – even though 55 per cent said they had used ChatGPT and other tools.

Although AI-based tools like predictive text in email systems had become common, CPA Australia senior manager for business investment policy Gavan Ord said use of AI “is much less likely to involve the kinds of next-gen AI that we often see in the media.”

“It’s very fashionable to talk about using AI tools like ChatGPT,” Ord explained, “but the reality is that most small and medium sized businesses are still considering how to integrate them. Many are still unsure about how they can use AI in a safe and secure way.”

Recognising that ChatGPT, in particular, has been plagued by its intrinsic inaccuracies, OpenAI has been working hard to improve its results and recently launched an enterprise version designed to better address such issues.

Yet caution remains key to making AI work for customers, Shapiro warned.

“Depending on what your quality gate is and how you’re viewing your customer experience,” he said, “it starts to matter how and where you’re using AI within the business.”

“The technology has been democratised and it is far more accessible to everyone from the business owner, to the CMO, to anyone doing a job within the office. But if you trust it too much, it can really come back and bite you.”