Australia will soon have a nationwide approach to generative AI in schools, following the release last week of a draft framework that seeks to guide how students and teachers can use the technology positively, with human agency always in mind.

A dedicated 'AI in schools' taskforce developed the draft National AI in Schools Framework, which comprises six core elements: teaching and learning, human and social wellbeing, transparency, fairness, accountability, and privacy and security.

Those core elements are then broken down into sets of associated principles designed to set “clear expectations for how students can utilise these tools in safe, responsible and ethical ways”.

ChatGPT’s emergence late last year rattled the Australian education sector, triggering a spate of bans in state schools as educators quickly came to terms with what appeared to be a free, widely available, and virtually undetectable plagiarism tool.

Yet at the same time, businesses and workers were being urged to start using AI or risk being left behind, as the world caught a collective glimpse of what the future of work might look like.

Together, the Commonwealth, state and territory education ministers have recognised that using generative AI is a skill to be nurtured, not ignored; after all, “AI is not going away,” as federal Education Minister Jason Clare said.

“Like the calculator or the internet, we need to learn how to grapple with this new technology,” he said.

“There are lots of opportunities, but there are also challenges and risks.

“We need to make sure students use AI for good and get the marks they deserve and don’t use it to cheat, while also ensuring their privacy is protected.”

The draft Australian Framework for Generative AI in Schools offers 22 guiding principles to help teachers and curriculum designers figure out how to appropriately bring AI into classrooms.

It’s not only about students using AI for assignments and learning – the framework also covers the use of AI for marking and other forms of decision-making, with a view to ensuring accountability and contestability.

At a basic level, the principles state that generative AI tools should have “a positive impact on teaching and learning outcomes”; that they should uphold academic integrity by providing “a fair and unbiased evaluation of students’ performance, skills, and knowledge”; and that they ought to be “well understood before they are used”.

But the draft principles also cover bigger topics like human agency, copyright and cultural appropriation, accountability and contestability, and issues around privacy.

Some of the principles speak to the procurement and development of AI products for use in schools, pointing to the requirements that potential vendors will have to meet for their generative AI tools to be deemed acceptable.

“Developers of generative AI should be expected to source and build their datasets, products and services ethically and be transparent about their practices,” says a section explaining principle 2.3, which covers human rights.

The section on principle 2.2 – ensuring AI exposes users to a diversity of perspectives – mentions developers needing to be “aware of, and take reasonable steps to manage, potential bias in generative AI”.

Developers and product designers will need to “ensure that plain-English information about the workings and expected use of their products is available to consumers”, the draft framework says; that includes information “about the reliability of generative AI products” and “evidence that products are fit for purpose”.

You can provide feedback on the draft National AI in Schools Framework via an online form until Wednesday, 16 August, or by contacting AISecretariat@det.nsw.edu.au.