Under new Australian Public Service (APS) guidelines, government agencies must publicly outline their use of artificial intelligence within six months and appoint an accountable official to ensure the technology's safe rollout.
The federal government last week unveiled its policy for the responsible use of AI within government, covering all departments and agencies except Defence and the “national security community”, which includes ASIO and the ASD.
The policy sets out mandatory requirements for agencies’ use of AI, alongside strongly recommended practices, with the aim of making the government an “exemplar in the use of AI safely”.
The policy is built around how the APS will embrace the benefits of AI, strengthen public trust in the technology, and adapt over time as the technology and policy environment evolve.
“This policy will ensure the Australian government demonstrates leadership in embracing AI to benefit Australians,” said Digital Transformation Agency general manager for strategy, planning and performance Lucy Poole.
“Engaging with AI in a safe, ethical and responsible way is how we will meet community expectations and build public trust.”
AI is already being used in the public service, including through a controversial trial of Microsoft’s generative AI Copilot tool.
The new policy comes into effect at the start of September.
Reports and accountable officials
The key mandatory requirements under the guidelines are the appointment of a dedicated AI accountable official (AO) in every agency and the public documentation of all AI usage across the APS.
By the end of the year, all agencies must designate an individual or body as accountable for implementing the safe AI policy.
This person or body must notify the Digital Transformation Agency when a new high-risk AI use case is identified, act as the contact point for whole-of-government AI coordination, and engage in whole-of-government AI forums and processes.
“We’re encouraging AOs to be the primary point of partnership and cooperation inside their agency and between others,” Poole said.
“They connect appropriate internal areas to responsibilities under the policy, collect information and drive agency participation in cross-government activities.”
Agencies have also been given six months to publish a public statement outlining their approach to AI adoption and use.
This statement will be reviewed and updated annually, or sooner if any significant changes are made.
It will include information on the agency’s compliance with the AI policy, the measures used to monitor the effectiveness of AI systems, and efforts to protect the public from any negative consequences of their use.
“Statements must use clear, plain language and avoid technical jargon,” Poole said.
The policy also strongly recommends that all agencies deliver AI fundamentals training to all staff within the next six months, with additional training for staff more closely involved in the use of AI.
The government is currently piloting an AI assurance framework, and agencies are encouraged to participate in the pilot and provide feedback on it to the Digital Transformation Agency.
Earlier this year the Labor government unveiled its interim response to its industry consultation on the responsible use of AI.
In its response, the government outlined a policy aiming to mitigate the high risks associated with AI without stifling its innovative and economic potential.
The plan steered away from outright bans on AI use, instead looking to introduce mandatory safeguards for high-risk uses of the technology.