Relentless cybercriminals and AI’s unchecked data consumption have made compliance with tightening privacy laws more important than ever – but shrinking budgets are piling stress onto the small, under-resourced privacy teams struggling to keep up.
The median size of privacy teams dropped from eight last year to five this year, ISACA’s newly released State of Privacy 2026 survey of 1,800 privacy professionals has found, leading 63 per cent of respondents to say their roles are more stressful today than five years ago.
Rapidly evolving technology was named as the leading source of stress by 71 per cent of respondents, up from 63 per cent last year – a rise that coincides with the continuing spread of AI into doctors’ offices, insurance companies, and email inboxes.
Fully 43 per cent said their privacy budget is underfunded but – despite growing efforts around privacy regulation reform in Australia and elsewhere – some 60 per cent of Oceania respondents are expecting further privacy budget cuts this year.
Significantly, that’s higher than the 50 per cent figure globally – suggesting Oceania companies are particularly reluctant to invest in privacy compliance, and are setting the pace when it comes to paring back essential privacy staff.
Companies in other countries are, it seems, more prepared to invest to stay ahead of changing technologies and laws – with 22 per cent of all respondents expecting a privacy budget increase this year, compared with just 8 per cent of Oceania respondents.
“Many organisations are asking small privacy teams to manage complex compliance obligations, emerging technologies like AI, and growing breach risk all at once,” ISACA board vice chair Jamie Norton said.
“Lower budgets can mean that organisations risk falling behind regulatory expectations as scrutiny continues to intensify.
“When investment doesn’t keep pace, privacy risk quickly becomes a broader business and governance issue.”
OAIC wants to plug holes in privacy’s leaky boat
Amidst efforts to promote ethical AI and put teeth behind Australia’s Privacy Act, privacy regulators at the Office of the Australian Information Commissioner (OAIC) are flat-out investigating tech giants and the privacy depredations of AI startups and invasive data matchers.
That includes OAIC’s first ever privacy compliance sweep, in which its staff will actively review the privacy policies of around 60 property, pharmacy, car rental, car sales, and pawnbroking firms as well as licensed venues.
In what has been called “the most significant data privacy challenge [businesses] have faced in years,” firms’ privacy policies will be evaluated against Australian Privacy Principle (APP) 1.4, the Privacy Act “backbone” that defines what information firms must provide and how.
OAIC “takes a risk-based and proportionate approach to regulation,” the agency advises, “and if non-compliance is detected as part of the sweep, the OAIC will consider its recently expanded regulatory toolkit in determining the most appropriate regulatory response.”
“We hope this will catalyse some reflection about how robust entities’ privacy practices are.”
AI upsetting data privacy’s apple cart
European companies have learned firsthand the risks of poor privacy compliance, with the gold-standard General Data Protection Regulation (GDPR) driving stiff fines and intensifying enforcement – including 2,731 privacy breach fines worth over €7 billion ($12 billion).
Last year EU data protection authorities managed 443 privacy breach notifications per day, on average – a rise of 22 per cent year on year – and handed out €1.2 billion ($2 billion) in fines.
Despite the potential financial penalties, declining investments in privacy staff and compliance may suggest Australian companies are willing to wear the risk.
“It’s possible that organisations are becoming a bit numb because they’ve seen businesses breached and it didn’t destroy them,” Norton said, adding that “maybe the fine is something that sits within their broader risk appetite – and they’re just managing to an outcome.”
New tech magnifies the risk, with Gartner urging companies to avoid new AI browsers and predicting that by next year “improper use” of generative AI (genAI) will cause 40 per cent of AI-related data breaches, as data leaks into countries with different privacy regulations.