Students must be engaged in the normalisation of generative AI within schools and universities, academics have argued, amid revelations that Monash University has flagged around 250 suspected academic integrity breaches involving student misuse of generative AI (GenAI) this year alone.
While many breaches are “undetectable”, Monash acting deputy vice-chancellor for education Professor Allie Clemans told a House of Representatives committee that “there have been hunches, senses, or people have used their own techniques to consider whether there have been breaches.”
Rather than focusing exclusively on catching students out, however, Monash is working to teach students appropriate ways to use ChatGPT and other GenAI tools – and has created a new category of academic integrity breach covering, Clemans said, “inappropriate use of artificial intelligence in assessment… to allow us to take action should that occur.”
Section 1.17 of the Monash Assessment and Academic Integrity Policy advises students to use GenAI “with honesty and in a manner that is responsible and ethical” – acknowledging where they have used the technology, and refraining from using such tools where their use is proscribed in assessments.
“Our approach has been to educate our way out and to err on the side of opening up and proper use, rather than to set up the conditions for misuse,” Clemans explained during the recent ninth hearing of the Inquiry into the Use of Generative Artificial Intelligence in the Australian Education System, which featured a conga line of academics relating their institutions’ experiences as GenAI became ubiquitous this year.
“There should be clear signalling to students around what is allowable in their tasks,” she continued. “We then have to do this with fair process and with evidence to ensure that we can identify that there has been a breach – which of course is quite difficult to do.”
Universities need to consider not only where GenAI should not be used but where it can be legitimately useful for students, noted Professor Phillip Dawson, co-director of the Deakin University Centre for Research in Assessment and Digital Learning.
“For the moments where we want them to not use generative AI, it's actually really hard to get people to not use it,” he explained, “and to be completely sure that they haven't used it if you're not, in some way, supervising that.”
A recent voluntary survey of 4,600 Monash students found that many were using GenAI primarily to teach themselves and consolidate course content, rather than to have it write assessments for them.
Students in STEM-related faculties such as Engineering and IT reported using GenAI far more than those in Arts subjects – bolstering the contention that the technology is being used as a learning aid more than as a proxy to write assignments.
Within the IT faculty, “Generative AI is inevitable,” testified one student known only as ‘Witness A’, who argued that the faculty “has embraced that in a way to make the assessments even harder.”
“We have to code something,” the student said, “but we have to be tested on how much we actually understand our code, how much we can alter it ourselves and if we can fix a bug somewhere – so we can’t rely fully on GenAI. We do have to take responsibility for our learning as such.”
GenAI education should start early
The university community has dominated the committee’s first nine hearings – which began in early September after the inquiry accrued nearly 100 submissions – despite the secondary school community’s attempts to engage in the process.
Although they have yet to be called before the committee, submissions from the Australian Secondary Principals’ Association (ASPA), Independent Schools Australia, the Australian Science and Mathematics School, the Association of Heads of Independent Schools Australia and other bodies have acknowledged GenAI as both inevitable and transformative in schools.
“If students are supported to develop the skills to use AI safely, ethically and effectively they can take advantage of opportunities afforded with potentially more equitable access to improved learning outcomes,” ASPA’s submission argues – encouraging a “well supported, methodical integration of AI into schools” through a “nationally consistent approach” while warning that “restricting access to AI at school may significantly disadvantage their prospects in an AI integrated society.”
Today’s secondary students, after all, will be tomorrow’s university students and will soon enter a workforce where GenAI has already been normalised: one recent ISACA poll, for example, found that 63 per cent of regional employees are using GenAI even though just 36 per cent of employers permit its use – and just 11 per cent have formal policies around GenAI.
With a recent scandal showing how problematic GenAI remains, ensuring that learners build AI skills early in their education will be crucial, committee member Zoe Daniel MP noted: “If you haven’t learned the ethics [of GenAI] before you get to the university,” she said, “it’s going to be really difficult to suddenly teach these kids what the ethics of AI are.”
Ultimately, speakers agreed that universities must work with secondary schools as they reverse blanket bans and normalise GenAI technology – something already being trialled with the deployment of purpose-built GenAI engines in Queensland and South Australia.
“We’re all learning this together,” agreed Professor Margaret Bearman of Deakin University’s Research for Educational Impact Strategic Research Centre. “The students are learning it as we’re learning it, and there’s a humility that we all need to take on in this.”
“[Over time] it’s going to be impossible to distinguish between the human and the robot. It’s going to become naturalised and embedded.
“We will write like that. Artists will work like that. It's the future.”