Australian companies are far more likely than those in other countries to ban generative AI platforms like ChatGPT, according to a new survey that also found the technology can improve efficiency, enhance creativity, and boost a company’s appeal to younger workers.
Although the use of generative AI for security purposes was acceptable to 92 per cent of Australian executives responding to a new BlackBerry survey – which polled 2,000 IT decision makers across Australia and seven other countries – they were also more likely to be considering bans on the use of generative AI applications within the workplace.
Some 93 per cent of respondents said they have either banned ChatGPT and its ilk or were considering a ban – well above the 75 per cent global average – even though, paradoxically, 92 per cent agreed that such bans would signal “excessive control” over corporate devices.
Despite their concerns about generative AI – 67 per cent of global respondents cited data security and privacy concerns stemming from its application in corporate environments – Australian respondents were the most likely in the world to see its use as a way of attracting young talent, with 58 per cent agreeing.
Fully 54 per cent of respondents said embracing generative AI could increase efficiency, boost innovation, and enhance creativity.
Furthermore, Australian support for generative AI’s security benefits was well ahead of other countries, with 81 per cent of global respondents flagging the technology’s security value.
The seemingly contradictory findings highlight the difficulties of the love-hate relationship between generative AI and its potential users, with BlackBerry Cybersecurity APJ director Jonathan Jackson noting “growing concerns among ANZ organisations as they grapple with the impact of generative AI apps in the workplace.”
“In less than a year,” he said, “generative AI has accomplished what no other technology has – revolutionising content creation and the way people work – but there are also a lot of unknowns.”
AI is testing corporate governance
For all its potential benefits – a recent Technology Council of Australia report suggested responsible use of generative AI would be a “productivity shot in the arm” that could contribute from $45 billion to $115 billion annually to Australia’s economy by 2030 – lingering concerns over its unrestricted use in workplaces continue to temper expectations and trigger executives’ risk management alarms.
New research by the AI Infrastructure Alliance – involving over 1,000 respondents from large businesses in the US, UK, Canada, and Europe – found strong support for generative AI, with 67.2 per cent of respondents saying it is a top priority to implement the technology by year’s end.
“While LLMs have proven incredibly useful already for individuals and are starting to work their way into small business workflows,” the report notes, “we didn’t expect them to rate as highly on enterprise radar, mostly because they are hard to control and don’t fit the pattern of traditional deterministic IT applications.”
Fully 88.3 per cent of the AI Infrastructure Alliance survey respondents were planning to implement policies to control the internal usage of generative AI – echoing the BlackBerry results – with the most important concerns including data integrity (prioritised by 62.9 per cent), governance (60.2 per cent), and security and compliance (56.4 per cent).
Many early adopters had already been burnt by poor AI/ML projects, with 41 per cent saying a botched AI implementation had caused them to miss financial goals, 35 per cent suffering from “poor decision outcomes due to bad data or analysis”, and 32 per cent seeing customer experience suffer.
Some 54 per cent said they had incurred losses due to the failure to govern AI/ML applications – with 29 per cent citing financial losses of $US50 million to $US100 million, 24 per cent losing from $US100 million to $US200 million, and 10 per cent losing more than $US200 million.
Concerns about inappropriate generative AI usage recently led the Australian government to prohibit employees from using ChatGPT for decision-making, preparing tenders, or other critical tasks such as writing code to be integrated with government systems.
Even if they have been burnt by poorly conceived AI implementations in the past, businesses have little choice but to continue embracing the technology. They face the need to ‘fight fire with fire’ amidst recent revelations that cyber criminals have developed malicious generative AI tools like WormGPT and FraudGPT to automate their attacks.
Finding the balance, BlackBerry’s Jackson said, requires businesses to walk the fine line between innovation and good practice.
“Businesses can lose out on valuable innovation by taking a heavy-handed approach to company-wide bans on ‘risky apps’ when there is an easier path forward,” he said.
“We urge companies to keep up-to-date on the risks, but remain flexible, revisit policies, and ensure the right software tools are in place for visibility, monitoring, and management of apps used in the workplace.”