Privacy budgets are being cut even as workers feed generative AI (genAI) systems sensitive data about employees, customers, internal processes and non-public business details – with new data highlighting risks significant enough that a quarter of ANZ businesses have banned the technology outright.
Fully 53 per cent of respondents to the Cisco 2024 Data Privacy Benchmark Study – which surveyed over 2,600 security professionals in Australia and 11 other countries – said they had used internal process details when shaping queries to ChatGPT and other genAI engines.
Around a third of respondents said they had done the same with employee data, non-public company material, and customer information.
Doing so has become a growing privacy and security concern because genAI providers may use the content of user prompts to train future models – potentially regurgitating that information to others.
Indeed, Cisco found, 77 per cent of respondents believe data entered into genAI systems could be shared publicly or with competitors, with 69 per cent concerned that use of genAI could hurt their organisation’s legal and intellectual property rights.
This has led over half of surveyed organisations to limit the types of data that can be entered into genAI systems, restrict which genAI tools can be used at work, or ban the tools outright – as 27 per cent of the organisations Cisco surveyed have done.
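For organisations taking the first of those approaches, limiting data entry in practice often means screening prompts before they leave the network. The sketch below is a minimal, hypothetical illustration of that idea – the redact_prompt helper and its patterns are illustrative only, not drawn from the Cisco study or any specific vendor’s tooling:

```python
import re

# Illustrative redaction rules only -- real data-loss-prevention tools use
# far richer detection. These patterns and labels are hypothetical examples.
REDACTION_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE_AU": re.compile(r"\b(?:\+61|0)[2-478](?:[ -]?\d){8}\b"),
    "TFN": re.compile(r"\b\d{3}[ -]?\d{3}[ -]?\d{3}\b"),  # Tax File Number shape
}

def redact_prompt(prompt: str) -> str:
    """Replace sensitive-looking substrings with placeholders before a
    prompt is sent to an external genAI service."""
    for label, pattern in REDACTION_PATTERNS.items():
        prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt

if __name__ == "__main__":
    raw = "Chase the refund for jane.doe@example.com, mobile 0412 345 678."
    print(redact_prompt(raw))
    # Chase the refund for [EMAIL REDACTED], mobile [PHONE_AU REDACTED].
```

Even a filter like this only mitigates accidental leakage, of course; it does nothing to resolve the retention and training questions that drove the outright bans.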
That’s a sobering contrast to widespread enthusiasm for genAI tools – which, security firm Indusface found in a recent survey of over 2,000 UK businesses, are being used for tasks such as writing up reports, translations, research, client emails, and internal emails.
Such use cases may seem like clear productivity boosters to employees, but Indusface founder and president Venky Sundar warned that they can easily become something else altogether.
“When you share proprietary information into ChatGPT, there’s always a risk that this data is available for the general public,” he said, noting that 55 per cent of surveyed companies don’t trust working with another business that uses genAI tools in their workplace.
With 39 per cent of advertising, 38 per cent of legal, and 28 per cent of government and defence firms saying they utilise ChatGPT, Sundar warned of the risks of exposure and reputational damage through inadvertent data leakage.
“You may lose your IP,” he explained, “so never ask ChatGPT for documentation on proprietary documents including product roadmaps, patents, and so on… The maturity level of addressing the data, and ownership of trust, is still not well defined and businesses are right in not trusting it completely.”
Such warnings come too late for the Australian Public Service, which is redoubling its commitment to digitise operations and in January followed major corporates AGL, Bupa and NAB in beginning a six-month trial of Microsoft 365 Copilot productivity tools that, Prime Minister Anthony Albanese said, will allow APS staff “to trial new ways to innovate and enhance productivity.”
Exacerbating a privacy shortfall
Amidst concerns that lawmakers are not moving fast enough to regulate the technology’s safety, rampant and uncontrolled use of genAI – which is being integrated into all manner of business and personal technologies – is challenging the still-immature privacy practices of Australian businesses, which are staring down the barrel of the most significant Privacy Act changes since 2014.
Despite the potential impact of those looming changes, new figures point to a yawning gap in privacy awareness and practice – with just 39 per cent of Oceania respondents to ISACA’s Privacy in Practice 2024 survey calling their organisation’s privacy obligations “straightforward” and just 44 per cent confident of meeting data privacy obligations.
“Every organisation in ANZ and across the world has a responsibility to protect the privacy of its customer and stakeholder data,” said ISACA Oceania ambassador Jo Stewart-Rattray, who noted that governments are “updating legislation to ensure best practice… It is paramount that organisations understand what is expected of them to devise an effective privacy policy and implement accordingly.”
Jo Stewart-Rattray is Vice President of Community Boards at ACS, the publisher of Information Age.
Responding to the ISACA survey, DoorDash head of technical privacy Nandita Rao Narla noted that in 2024, ‘effective privacy’ will necessarily involve addressing AI risks, with privacy professionals “likely [to] see their roles expand to include responsible AI management” as they are directed to “build sustainable AI governance programs and extend safeguards for AI use cases.”
Yet even as CIOs increase their spending on AI systems and genAI tools, 51 per cent of privacy professionals said they expect their budgets to decrease – far more than a year ago, when just 12 per cent anticipated budget cuts.
Reconciling those two competing pressures will be a key issue as genAI becomes ever more capable during 2024, yet knowledge about how to do so is still evolving – with the Australian Cyber Security Centre the latest organisation to offer guidance on managing AI risk.
“Privacy has become inextricably tied to customer trust and loyalty,” noted Cisco vice president and chief privacy officer Harvey Jang. “They [customers] are looking for hard evidence the organisation can be trusted… This is even more true in the era of AI, where better investing in privacy positions organisations to leverage AI ethically and responsibly.”