Welcome to our three-part series on disinformation, misinformation, and malign influence (DMMI). We look at how DMMI is changing our everyday lives. Here, in part 3, we examine how much of our lives we spend under surveillance.
Western commentators have long painted China’s evolving social credit system (SoCS) as an example of how intrusive citizen monitoring creates a surveillance state – yet amid the many inaccuracies about China’s SoCS, few realised that Australia’s own surveillance is, in some ways, on par.
Although the idea of a SoCS was floated by Chinese authorities a decade ago, the intervening years have in truth been dominated less by automated surveillance networks and peer-reported behaviour scores that restrict access to services and shape people’s lifestyles à la Black Mirror, than by scattered pilot programs in which local governments experimented with how a social scoring system might reward behaviours like donating blood and punish ones like jaywalking.
It was only in late 2022 that the system – which, under the microscope of impartial analysis, seems to be more about building a universal, Western-style credit scoring system than a regime of social micro-controls – got a refresh, as new laws codified the lessons of the previous eight years and laid down a far less contentious vision for the SoCS, one backed by Chinese government support for UN guidelines banning the use of artificial intelligence (AI) for social scoring.
While feeding Western hysteria over China’s SoCS, however, many observers have been distracted from the creeping growth of AI in our own backyard – where AI-driven surveillance is already so well progressed that, for example, the Australian Federal Police (AFP) was forced into damage control after revelations it had trialled Clearview AI, a controversial face-matching tool that scraped millions of people’s images from the Internet so individuals could be identified from photos or even surveillance footage taken in public spaces.
Clearview AI – the investigation of which led tech giants to halt sales of AI systems to police amid calls for tighter regulation – was eventually ordered to delete its data and banned from making private sales, but its overreach opened eyes to the real threat of mass surveillance and information gathering, which has become even easier as the technology improves by leaps and bounds.
Yet many of the activities enabled by AI-driven surveillance “are unlikely to be compatible with Australia’s international human rights obligations with respect to privacy in particular, and indirectly also to freedom of speech and other associated rights,” notes Dr Adam Fletcher, an academic at the RMIT University Graduate School of Business and Law.
The Australian Government, he warned, “typically justifies such measures, which are far-reaching by the standards of many like-minded jurisdictions, by providing little more than cursory assertions of a need for greater operational agility…. Such assertions cannot legally assure the necessity and proportionality of the measures in question.”
They really are watching you
If you think this story ends with tech companies winding down their surveillance activities and Australians living happily and privately ever after, think again.
For all the hysteria and finger-pointing around everything that China’s SoCS is not, revelations that Australians are regularly tracked and identified during their everyday lives – something seen during the COVID-19 pandemic’s early days, and which continues in major grocery stores and in around 40 per cent of retail shops – are a chilling reminder that intrusive surveillance has become an everyday fact of life.
A data collection and matching service called Auror, it was revealed last year, is being quietly used by surveillance-heavy companies like Coles and Woolworths to collect details of more than 100,000 incidents per year – including theft, arguments, staff abuse, and assaults.
The system ingests a broad range of automatic data feeds, and staff are encouraged to log incidents that may include CCTV footage, descriptions of suspects, social media content, license plate details and more – all of which is assembled into a massive central database whose AI algorithms can build profiles of individual repeat offenders.
The system also enables proactive surveillance – type in the license plate of a stolen car, for example, and authorities will be notified if that car passes any Auror-connected surveillance camera – and it is so effective that NSW Police, Victoria Police, and the AFP’s ACT policing arm were last year revealed to have been using or trialling it, some without conducting formal privacy assessments of its impacts.
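To make the mechanics concrete, here is a minimal sketch of how such plate-watchlist matching could work in principle – every name, data structure, and the notification step below is an illustrative assumption, not Auror’s actual design or API:

```python
# A minimal, hypothetical sketch of plate-watchlist matching.
# Everything here (names, structures, the alert step) is an
# illustrative assumption, not Auror's actual design or API.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class CameraSighting:
    plate: str          # plate read by a connected ANPR camera
    camera_id: str      # which camera produced the read
    seen_at: datetime   # timestamp of the sighting

# Plates flagged by retailers or police, e.g. a car reported stolen
watchlist = {"ABC123"}

def check_sighting(sighting: CameraSighting) -> None:
    """Raise an alert if a flagged plate passes any connected camera."""
    if sighting.plate in watchlist:
        # A real system would push this to staff or police dashboards
        print(f"ALERT: {sighting.plate} seen by {sighting.camera_id} "
              f"at {sighting.seen_at:%Y-%m-%d %H:%M}")

check_sighting(CameraSighting("ABC123", "store-42-carpark", datetime.now()))
```

Connect alerts like these to incident logs and facial matches, and the profiling power of a shared retail-and-police database becomes easy to see.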
The AFP ultimately suspended its use of Auror, but other revelations – including that the agency was meeting with Clearview AI officials long after that platform was banned – confirm that authorities are far from stepping back in their quest to tap AI-driven facial recognition and information gathering that has become more powerful than ever.
Yet the biggest threat is not necessarily from law enforcement authorities – which are subject to procedural scrutiny and oversight even when they overstep their boundaries – but from a pervasive, increasingly connected ecosystem of data collected by private businesses that see you as a market to be targeted and your information as a commodity to be monetised.
Tracking your digital footprints
In a political and social climate where DMMI has become an everyday occurrence, it’s important to be aware of just how enthusiastically the companies you deal with are gathering video, photo, and textual information about you, your activities, your preferences, and your potentially exploitable characteristics.
While you likely came to appreciate the pervasive data collection online some time ago – companies like Google have for years been using cookies and trackers to profile your web surfing habits, even when they promise not to – you might be stunned by just how much information is collected, and about what.
Download your Netflix viewing history, for example, and you’ll get a comprehensive list not only of the titles you’ve watched but every time you’ve been shown an ad for a movie, how long you hovered over it, which titles you looked at before choosing what to watch, and so on.
Look at the offers on the websites of your Woolworths or Coles loyalty programs, and you’ll see an eerily familiar list of the products you’ve bought from them – with incentives to repurchase particular items.
Bunnings and Kmart, for their part, were shamed into stopping the use of facial recognition to monitor customers who generally had no idea they were being watched as they traversed the stores’ aisles.
Companies like Netflix, Woolworths and Bunnings collect oceans of data so they can track not only what you have done, but also what you are likely to do in the future – and what, based on your interests, you might be willing to pay extra for.
Private-sector interests see this as a good thing, with Center for Data Innovation policy analyst Patrick Grady arguing that a proposed EU ban on private-sector social scoring “would ultimately hurt consumers, businesses, and the economy.”
“Companies should be allowed to use AI to profile, incentivise, and evaluate users to improve their services and create more positive experiences,” Grady writes.
“Consumers should be able to vote with their money.”
Few consumers would argue with that sentiment – yet research shows that many are becoming increasingly unsettled by the license that the private sector has taken with personal information that is collected and amassed every time you go online, visit a shop, enter an airport, or even get into your car.
The challenges of pervasive data collection were writ large after recent scrutiny of connected cars’ data collection practices revealed that automobile makers were surreptitiously sharing information about customers’ driving habits with insurance companies – who, in turn, were increasing the cost of insurance or denying it altogether.
Indeed, where consumers see connected vehicles as a luxury or a convenience, automotive companies have used them as an excuse to gather every conceivable type of information about millions of customers – including race, physical characteristics, genetic information, location details and even, in a revelation that suggests in-car cameras are being used for more than tracking drivers’ attention, details of sexual activity.
From placid data lakes to treacherous data oceans
Even as the challenges of generative AI (genAI) push authorities to tighten controls on DMMI content, the application of more conventional AI techniques to mass data collection – a concept first popularised over a decade ago as companies began building ‘data lakes’ filled with the outputs of ‘big data’ systems – is fuelling the constant profiling of individuals throughout their daily activities.
Social media posts, for example, are regularly harvested and analysed to figure out what you like, who you consort with, and what your political beliefs might be – researchers long ago proved that individuals can be identified from just a few data points – and there’s no way to avoid it short of cutting yourself off from the modern world.
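The ‘few data points’ claim is easy to demonstrate: one landmark study by Latanya Sweeney found that most Americans could be uniquely identified by ZIP code, birth date, and sex alone. The toy sketch below, using entirely invented records, shows why combinations of even coarse attributes are so often unique:

```python
# Toy illustration of re-identification from a few coarse attributes.
# Records are entirely invented; the point echoes classic research showing
# that ZIP code, birth date, and sex uniquely identify most Americans.
from collections import Counter

# (postcode, birth_year, gender) - no names, yet mostly unique combinations
records = [
    ("2000", 1985, "F"),
    ("2000", 1985, "M"),
    ("3121", 1992, "F"),
    ("3121", 1992, "F"),  # the only combination shared by two people
    ("6005", 1978, "M"),
]

counts = Counter(records)
unique = sum(1 for n in counts.values() if n == 1)
print(f"{unique} of {len(records)} people have a unique combination "
      "of just three coarse attributes")
```

Cross-match a handful of such attributes between datasets – a loyalty card here, a social media profile there – and anonymity evaporates quickly.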
That companies freely accumulate, use, and sell such data – often defending unsavoury practices with the claim that consumers somehow consented to data collection because of a clause buried in voluminous Terms & Conditions documents – makes it more important than ever to consider just how far and wide your digital footprints extend, and just how much surveillance you consider acceptable.
A newly released auDA study, Digital Lives of Australians 2024, found that 64 per cent of 1,500 surveyed consumers said they avoid some online activities due to concerns about data security, while 61 per cent would feel more comfortable about AI if there were stronger regulatory safeguards around it.
Fully 81 per cent believe companies should do more to protect customers’ personal information from cyber attacks, with 83 per cent arguing that organisations should be penalised if they fail to protect customers’ personal information.
Such penalties are being applied in some cases – Medibank was ordered to hold an extra $250 million in capital over its breach, and the government recently sued Optus over its 2022 data breach after increasing the maximum fines for security lapses – yet in data breach after data breach, it is consumers who suffer the most.
Even if you tolerate the collection of data about your identity and activities, you’re unlikely to be happy about that data being published online or sold to cybercriminals eager to scam you with it – often using genAI technologies to develop highly convincing, personalised phishing and scam attacks that may even mimic the voices of loved ones.
The sensitive data collected through mass data gathering is being made public time and again, with millions of Australians’ personally identifiable information (PII) compromised in the mass hacks of companies like Optus, Medibank, Dymocks, Ticketmaster, MediSecure, Dell, and countless others.
Add the oceans of data that companies collect about you and cross-match from social media sites, and it has never been easier for companies, cyber criminals, and governments to find out nearly anything they want to know about you.
And while the Australian government is crawling towards a Privacy Act overhaul designed to modernise the protection of your data, in the short term we can expect more of the same, University of Sydney Law School professor Kimberlee Weatherall explained during an ACS Think Tank session.
“The Privacy Act was never designed to address the fundamental right to privacy, which is broader than just data protection,” she said.
“It was never designed to really address questions about surveillance – which is why we need recognition of a broader right to privacy where we can talk about what use of this kind of technology is reasonable.”
“We need law reform to affirm a fundamental right to privacy and start to deal with some of these questions around surveillance,” she said, “and we need to review data protection laws.
"There’s a lot of work to do.”
Read more:
Part 1: What is true anymore?