Generative AI tools like ChatGPT and Google Bard should only be used in “low-risk situations”, according to new government guidelines that warn public servants of the “unacceptable risk” of using the tools for decision making or writing application code.
Positioned as a baseline policy that will evolve over time, the Digital Transformation Agency’s (DTA’s) new Interim Guidance for Agencies on Government Use of Generative AI Platforms outlines four core principles that delineate its perception of the key risks posed by generative AI technologies.
These are responsible deployment of AI; transparency and explainability; privacy protection and security; and accountability and human-centred decision making.
“Due to the rapid evolution of technology, there is a growing demand for guidance when government staff members assess potential risks involved in its use,” DTA chief executive officer Chris Fechner said as the guidelines were released.
Users should be cautious about over-relying on generative AI engines to process information or make recommendations, the guidelines advise, and should both remain sceptical about their outputs and double-check any factual statements they make, as generative AI tools “can provide incorrect answers in a confident way”.
“The information provided by public AI tools is often not verified, may not be factual, or may be unacceptably biased,” the guidance notes, warning that users should “be able to explain and justify their advice and decisions” as well as critically examine outputs for inaccuracy.
Convincing users bewitched by generative AI to maintain their scepticism won’t be easy: one recent Capgemini Research Institute survey of 10,000 consumers, for example, found that 72 per cent of Australian respondents trust content written by generative AI systems – a result consistent with those across all 13 surveyed countries.
In the interests of transparency, the guidance recommends users consider adding “markings” in briefings and official communications to indicate whether AI was used to generate any of the information they contain.
Public servants must give particular consideration to privacy and security risks, the guidelines advise, and should avoid entering classified, sensitive, or personal information into generative AI systems – a restriction that also precludes entering information “that would allow AI platforms to extrapolate classified or sensitive information based on the aggregation of content you have entered over time.”
“Any data entered is stored externally to government and we do not know who has access to it,” the guidelines warn, advising users to disable the saving of chat histories for training purposes.
The guidelines’ advice on accountability – released contemporaneously with the damning findings of the Royal Commission into the former government’s algorithmic ‘Robodebt’ scheme – warns that “humans should remain as the final decision maker in government processes” even where generative AI tools are used to “brainstorm options or draft content”.
When reviewing generative AI platforms’ outputs, it says, “users should ensure the content aligns with their understanding of the issue, and if in doubt should fact check the content using reputable sources.”
Practical advice is evolving
Publication of the interim principles comes on the heels of new discussion papers designed to stimulate debate about generative AI’s risks – and the regulations that should control it – and follows last month’s adoption of a draft of the European Union’s new AI Act.
Yet even as experts, academics and ethicists float worst-case scenarios, promote restraint in AI’s development and consider just what AI actually is, the DTA’s interim guidance also offers real-world advice for users.
This includes avoiding creating a generative AI account where possible – or, where one is needed, signing in with corporate credentials – and confirming whether a particular department or agency requires explicit approval before users adopt the technology.
Users should not include any identifiable information in their generative AI prompts: while it might be acceptable to ask ChatGPT to generate a baseline project plan, for example, users must not supply details such as the project’s name, agency, systems or software details, high-level requirements, or the staff members involved.
Similarly, a user could ask ChatGPT to generate a generic presentation outline or tender document but could not feed it any sensitive information, enter any information that is not already on a public website, or even mention the word ‘tender’ or any other words that could hint that a particular tender was being created.
Application code created by generative AI platforms “must be considered as potentially malicious” and not distributed until it has been vetted and proven safe, according to the new guidelines, which also recommend that public servants follow other best-practice guidance in areas such as AI ethics and the Australian Government Architecture–AI guidelines.
Users should also avoid clicking on any links provided or generated by public AI platforms or bots, the guidance advises, noting that such links “could lead to phishing sites or malware downloads”.