Up to 3,000 flood victims in NSW have had their data exposed after a government contractor fed their private details to ChatGPT.
Following the severe floods which hit hundreds of postcodes in 2022, the NSW and Australian Governments co-funded the $920 million Resilient Homes Program to help victims in NSW’s Central West and Northern Rivers regions with home buybacks, rebuilds, and other property resilience measures.
Three years later, the NSW Reconstruction Authority (RA) has announced the personal and health information of up to 3,000 Northern Rivers residents may have been “disclosed” in a major data breach.
“We understand this news is concerning and we are deeply sorry for the distress it may cause for those who have engaged with the program,” RA said Monday.
The data breach occurred after a former RA contractor uploaded a Microsoft Excel spreadsheet to the AI tool ChatGPT.
This spreadsheet contained more than 12,000 rows of information across 10 columns, including at least names, home addresses, email addresses, phone numbers, and “some personal and health information”.
“Every row is being carefully reviewed to understand what information may have been compromised,” said RA.
The authority said it “took immediate steps” to contain any further risk once it understood the scope of the breach – which took place between 12 and 15 March 2025.
When asked when RA first became aware of the data breach, a spokesperson directed Information Age to an existing statement.
“This process has been complex and time-consuming and we acknowledge that it has taken time to notify people,” said RA.
“Our focus has been on ensuring we had the right information to contact every impacted person accurately and completely.”
Government monitors dark web for leak
Though RA believed the “risk of misuse” from the data breach was “low”, the authority said it was working with state security agency Cyber Security NSW to monitor the internet and dark web for “any signs that this information is accessible online.”
“There is no evidence that any uploaded data has been accessed or distributed by a third party,” said RA.
“We expect to have a complete understanding of the information uploaded within the coming days.”
Mandy Turner, adjunct lecturer in cyber criminology at The University of Queensland, said if the information was exposed to a threat actor, it could be used for attempted fraud.
“For example, contacting flood victims while pretending to be from the RA or another NSW government agency,” said Turner.
“[A threat actor could] use the information available to trick people into making payments, provid[ing] account login credentials, or hand[ing] over more information that could be used for identity theft or other crimes.”
RA urged those affected to stay alert for “suspicious emails or messages that ask for your personal details”, and said potential victims will be contacted “this week” to confirm whether they have been impacted.
Don’t feed sensitive data to AI
Although the risk of identity theft was “fairly low”, Jon Robertson, founder of Australian cybersecurity company Tarian Cyber, said the “main concern” of the breach was the upload itself.
“The data was uploaded to an AI tool, without consent, without an understanding of how it will be stored and used after the main task at hand is completed,” said Robertson.
Indeed, Turner echoed that ChatGPT’s potential use of chat prompts and uploads for training purposes meant the flood victims’ information was “now being used in a way they did not agree [to]”.
“It is heartbreaking that these people, who have already gone through so much with flooding, now have the added stress that their information was uploaded to a platform where they have no control of where their personal and sensitive data may go or be used,” said Turner.
Turner added “no online platform is immune from the threats of data exposure”, noting generative AI models have had data breaches in the past.
Indeed, ChatGPT suffered its first major data leak in March 2023 when a bug allowed users to potentially access limited parts of each other’s chat interactions.
More recently, users discovered ChatGPT conversations were in some cases being indexed by Google and other search engines, allowing them to be searched and read by strangers.
Robertson said the RA leak demonstrated a lack of awareness about feeding sensitive data to large language models, stating users are generally “far too trusting with online services”.
“It’s a problem that will continue to emerge over time.”
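For organisations that must handle spreadsheets like the one at the centre of this breach, one widely recommended safeguard is to strip or pseudonymise direct identifiers before a file ever leaves a controlled environment. The Python sketch below illustrates the idea using pandas; the file and column names are hypothetical, loosely based on the fields reported above rather than the actual (non-public) schema.

    import hashlib
    import pandas as pd

    # Hypothetical column names, loosely based on the fields reported in the
    # breached spreadsheet; adjust to the real schema.
    PII_COLUMNS = ["name", "home_address", "email", "phone", "health_notes"]

    def redact_pii(df: pd.DataFrame) -> pd.DataFrame:
        """Return a copy with direct identifiers replaced by one-way hashes."""
        out = df.copy()
        for col in PII_COLUMNS:
            if col in out.columns:
                # A truncated SHA-256 keeps rows distinguishable without
                # exposing the underlying value.
                out[col] = out[col].astype(str).map(
                    lambda v: hashlib.sha256(v.encode()).hexdigest()[:12]
                )
        return out

    # Redact before the data reaches any external service.
    safe = redact_pii(pd.read_excel("resilient_homes.xlsx"))
    safe.to_csv("resilient_homes_redacted.csv", index=False)

Even then, hashing is pseudonymisation rather than anonymisation: combinations of the remaining fields can still re-identify individuals, which is why guidance like RA’s simply keeps such datasets off unauthorised AI platforms altogether.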
RA has notified the NSW Privacy Commissioner of the breach and put “safeguards” in place to prevent a similar incident in the future.
“We’ve reviewed and strengthened our internal systems and processes and issued clear guidance to staff on the use of unauthorised AI platforms, like ChatGPT,” said RA.