Australian workplaces have adopted AI and automated decision making (ADM) at a pace so unprecedented that HR functions using the tools should be considered high risk, according to a new report flagging employers’ “concerning and excessive” use of surveillance and data collection.

The Future of Work report – handed down by the House of Representatives Standing Committee on Employment, Education and Training nearly a year after it was tasked with examining the digital transformation of workplaces – is filled with warnings about unchecked use of AI and ADM.

“Although automation has driven the transformation of work since the industrial revolution, the complexity and speed of change in AI and ADM systems is unprecedented,” it notes, warning that “poor design, implementation and use of AI and ADM in the workplace can pose a range of risks.”

Those risks – including “excessive” collection of personal data and problematic workplace surveillance – have compounded longstanding concerns that AI and ADM systems will displace many traditionally human roles, feeding worker stress and dehumanising business operations.

“Employers are increasingly using emerging technologies to automate not only tasks but decisions traditionally made by humans,” the report notes.

“This is considerably impacting how work is organised and how the employment relationship is managed.”

The committee visited sites including Woolworths’ Melbourne South Regional Distribution Centre – where AI-driven surveillance recently drove a 17-day strike that disrupted the grocery giant’s supply chain, costing it $140 million and considerable reputational damage.

Woolworths’ experience highlights the potential repercussions for employers that, the report notes, have often had few compunctions about adopting “excessive and unreasonable” surveillance of employees, who are growing tired of being constantly under the microscope.

Rebalancing this dynamic should be a priority, the report advises, advocating “meaningful” consultation, transparency, accountability and procedural fairness with employees “to overcome the inadequate design, implementation, and use of AI and ADM systems in workplaces.”

AI and ADM also pose “specific” work health and safety risks, the committee notes, advising that the government and Safe Work Australia develop a code of practice to address them – including “establishing limits on the use of AI and ADM in workplaces to mitigate psychosocial risks.”

That includes a ban on the use of technologies like AI and ADM systems for “final decision-making without any human oversight” – in particular, when they are used for hiring, firing, and other HR decisions that have very real implications for workers’ lives.

Biased data leads to an unbalanced workforce

As well as creating problems at work, the report warns, “biased outputs” driven by AI training data are marginalising women and other groups – such as when AI video interview analysis systems disqualify job applicants with disabilities because of their involuntary facial movements.

“Algorithmic bias is a prevalent challenge in AI and ADM systems [and]… marginalised cohorts are at risk of being left behind” if HR blindly adopts the tools, it warns, arguing that “clear policies must be developed to ensure these cohorts are part of the digitally transformed workforce.”

Employers using AI and ADM systems must “take active measures to mitigate bias and discrimination based on protected attributes,” the report advises, noting that AI developers can do their part by ensuring that AI development teams are “diverse and representative.”

“It is important to create requirements like this on technology developers so that AI training data and models do not perpetuate social biases.”

Given these significant risks, the committee advises that employment functions be classified as “high-risk” – mandating Department of Industry, Science and Resources (DISR) policy guardrails on the use of AI and ADM for recruitment, referral, hiring, remuneration, promotion, training, apprenticeship, transfer, or termination of employees.

Recognising that these new technologies generally lack ‘guardrails’ to limit their bias, committee Chair Lisa Chesters called on the government to “urgently… establish and enforce proper management and safeguards, especially in high-risk settings.”

Existing legislation inadequate to manage the risks

Many of the report’s recommendations reflect concerns that “digital transformation has exposed some gaps in regulatory frameworks and workplace protections,” the committee noted.

This includes, for example, reviewing the Fair Work Act 2009, industry awards, and the National Employment Standards to ensure AI and ADM decision-making is covered – and that employers remain liable for their decisions – as well as federal and state collaboration to enhance employee protections.

AI developers should be obligated to demonstrate that their AI systems have been developed using “lawfully obtained data”, the report advises, with “explicit protections” for Indigenous Cultural Intellectual Property (ICIP) and an AI Fund established to support struggling Australian creatives.

The government should also work with universities, industry, employers, peak bodies, and education providers to develop microcredentials, curricula, and other materials that train workers in the use of AI and ADM systems and in how the tools relate to their jobs.

Noting that the rampant adoption of new technologies has created “particularly concerning… shortfalls in data and privacy protections,” the report advises that the government ban “high-risk uses of worker data.”

These include “disclosures” of data to technology developers, as well as the sale of workers’ personal data and “any data collected in connection to work or undertaken during employment” – protections the report deems crucial for the good-faith collaboration that will help employees and employers find the right balance for AI and ADM.