Australian government agencies will be able to implement artificial intelligence (AI) responsibly and consistently, thanks to a new Digital Transformation Agency (DTA) technical standard that the agency has called a “comprehensive lifecycle approach” to embedding AI into agencies’ operations.

Designed to guide agencies through the process of implementing an AI system, the new Technical Standard for Government’s Use of Artificial Intelligence (TSGUAI) provides “practical guidance” for implementation of machine learning, deep learning, and generative artificial intelligence (genAI).

Part of the Australian Government Architecture (AGA), it splits the lifecycle of AI systems into 42 ‘statements’ spread across three phases – discover, operate, and retire – and eight stages: design, data, training, evaluation, integration, deployment, monitoring, and decommissioning.
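As an illustration only – not the DTA’s own schema – the phase-and-stage breakdown could be modelled as simple structured data an agency uses to track its progress through the standard. The phase and stage names below come from the standard as reported; the grouping of stages under phases is an assumption made for this sketch:

```python
# Illustrative sketch of the TSGUAI lifecycle as a progress checklist.
# Phase/stage names are from the standard as reported; the grouping of
# stages under phases here is an assumption, not the DTA's mapping.
AI_LIFECYCLE = {
    "discover": ["design", "data", "training"],
    "operate": ["evaluation", "integration", "deployment", "monitoring"],
    "retire": ["decommissioning"],
}

def outstanding_stages(completed: set[str]) -> list[str]:
    """Return lifecycle stages not yet marked complete, in lifecycle order."""
    return [stage
            for stages in AI_LIFECYCLE.values()
            for stage in stages
            if stage not in completed]

print(outstanding_stages({"design", "data"}))
```

A structure like this makes it easy to report which of the standard’s stages an AI project has yet to address.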

Each statement has its own criteria and guidance, addressing a different aspect of the AI process from defining a reference architecture and building people capabilities to enabling AI auditing, providing explainability, managing bias, ensuring data quality, validating models, and more.

The DTA also advises that systems should be tested for specific behaviours, checked for intended and unintended consequences, integrated with agencies’ existing environments, supported with business continuity plans and rollback, and regularly tested and monitored to ensure performance.
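As a hedged illustration of that testing advice, the sketch below checks a hypothetical genAI summarisation call for one intended and one unintended behaviour – `generate_summary` is a placeholder stand-in for an agency’s real model call, not anything from the standard:

```python
# Hypothetical behavioural test for a genAI summarisation service.
# generate_summary() is a placeholder stand-in for an agency's real model
# call; here it simply truncates, so the checks below hold by construction.

def generate_summary(text: str, limit: int = 100) -> str:
    return text[:limit]

def test_summary_behaviour() -> None:
    doc = "Applicants must lodge the form within 28 days of notification. " * 5
    summary = generate_summary(doc)
    # Intended consequence: output is non-empty and shorter than the input.
    assert 0 < len(summary) < len(doc)
    # Unintended consequence: the system must not invent content absent from
    # the source document (trivially true for this truncating placeholder).
    assert summary in doc

test_summary_behaviour()
print("behaviour checks passed")
```

In a real deployment the same pattern would run against the live model in regular monitoring, with a failed check triggering the business continuity and rollback plans the DTA recommends.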

Developed in consultation with the Australian Public Service (APS), TSGUAI “is designed with public trust front of mind” and draws on international and domestic best practice, DTA general manager of digital strategy, policy and performance Lucy Poole explained in launching the new standard.

The technical standard “isn’t about adding more processes to its users,” she added, but “designed to integrate with what agencies already do… It allows agencies to embed responsible AI practices into existing governance, risk and delivery frameworks.”

Codifying ethics in AI deployments

Adoption of genAI within government agencies is soaring, with a newly released US Government Accountability Office (GAO) report finding the number of agency AI use cases nearly doubled from 2023 to 2024 – but the number of genAI use cases grew ninefold, from 32 to 282.

While the TSGUAI provides practical guidance for Australian agencies, it also reflects a broader effort to keep their AI deployments within the scope of the AI Ethics Principles (AIEP) – a set of eight “aspirational” and voluntary principles that complement existing regulations and practices.

Those principles consider AI’s impact in a range of contexts including human, societal and environmental wellbeing; human-centred values; fairness; privacy protection and security; reliability and safety; transparency and explainability; contestability; and accountability.

Human oversight of AI is essential, DTA says.

Since many uses of AI are low impact – for example, in email and accounting software – agencies need to make their own judgements as to when the principles apply, with a key trigger being whether the AI deployment will make decisions that affect people, the environment or society.

Backed by the Department of Industry, Science and Resources (DISR), the AI Ethics Principles are complemented by the Voluntary AI Safety Standard (VAISS) which was published last September and includes ten key guardrails focused on ensuring good AI governance and accountability.

For all the rhetoric of senior ministers – who have floated a slew of AI-related policies including making the VAISS guardrails mandatory or implementing a local AI Act similar to the one recently passed in the EU – guidance about AI safety and best practice has emerged only in fits and starts.

Whereas the expert-led AIEP and VAISS each target a more specific part of the AI lifecycle, the new TSGUAI covers an AI system’s entire lifecycle – broadening the advice on offer from a government that some worry is dragging the chain in the name of safety.

A year ago, for example, agencies were ordered to outline their use of AI by the end of 2024, and to appoint an accountable official to ensure the safe rollout of AI technology – yet standards like TSGUAI have taken much longer to lay down and codify.

Moving boldly on AI, but prudently

Even as the DTA’s broadening set of ethical and practical guidelines sets expectations for government agencies’ use of AI, business leaders have continued to warn against overregulation of a technology that is developing quickly and with fewer constraints overseas.

Yet the government’s work in clarifying ethical AI development may well have come at just the right point in AI’s development, Atlassian co-founder Scott Farquhar, who chairs industry group the Tech Council of Australia (TCA), noted in a recent speech at the National Press Club.

Farquhar has long advocated for technology to develop rapidly and free of policy constraints but, with AI rapidly maturing, he now believes government does have a role to play in setting standards, supporting data centre builds, building skills, and generally shepherding the technology’s growth.

Referencing his last major speech, the 2014 Bradfield Lecture, Farquhar said that back then “what we needed was for government to get out of the way” but that these days, “we need to partner with government to pave the way.”

Government leaders should take the same approach as private-sector leaders, he added, advising them to use AI themselves on a daily basis; to redesign internal processes; and to reimagine core offerings by asking how AI could solve their problems 10 times better.

“We should create digital agents for every interaction with governments,” Farquhar said, warning that “as businesses move at an AI pace, governments will increasingly become the bottleneck” and urging them to build APIs to facilitate government transformation.

By providing clear steps to manage AI through its lifecycle, the DTA is aiming to ensure that people issues don’t fall out of the mix as government bodies race to keep up.

“The DTA has strived to position Australia as a global leader in the safe and responsible adoption of AI, without stifling adoption,” Poole said.

“At every stage of the AI lifecycle, the standard helps agencies keep people at the forefront, whether that’s through human oversight, transparent decision-making or inclusive design.”