Students at the University of Sydney will next year be permitted to use artificial intelligence tools such as OpenAI’s ChatGPT to assist with some assessments, under a new “sector-leading” policy.
Education is among the sectors most affected by the rapid rise of generative AI (genAI), with institutions around the world grappling with widespread use of the tools and the question of whether to ban the technology outright or incorporate it in some way.
While some schools have banned genAI tools entirely, the University of Sydney has instead unveiled new guidance that permits the use of genAI in most cases.
From the first semester of 2025, the default rule will be that AI use is allowed in assessments, except in exams and in-semester tests, or where the teacher expressly states it is not permitted.
From semester two 2025, the university will adopt a “two-lane approach” that will incorporate secure, in-person assessments where AI use is banned, and “open” assessments where all available tools, including genAI, can be used.
This will “ensure students can learn and prosper in the contemporary world”, University of Sydney deputy vice-chancellor of education Professor Joanne Wright said.
“This is a substantial change to our teaching, assessment and program design, and it is absolutely necessary to ensure our graduates are equipped with the tools they need for the modern workforce.
“Generative AI has already had a profound impact on workplaces and our graduates are expected to demonstrate skilled use of the relevant tools in job interviews.”
The government’s Tertiary Education Quality and Standards Agency (TEQSA) recently tasked two University of Sydney academics with investigating the risks and opportunities of AI use in assessments.
They demonstrated how easily a novice could complete many traditional university assessments using the technology.
“It’s no secret that any take-home assessment can be completed to a high level by AI, and it’s not practical, enforceable or desirable to implement a blanket ban on these tools, or even to restrict their use,” said University of Sydney’s pro vice-chancellor of educational innovation, Professor Adam Bridgeman.
“We have worked closely with the regulator, educators and students to chart a way forward in a world where AI is ubiquitous and that satisfies the principles and purposes of higher education.”
The university has also partnered with Microsoft to make its genAI tool Copilot for Web available to all students and staff for free, along with regular training sessions.
The move comes just weeks after a federal parliamentary inquiry into genAI recommended that tools developed by the likes of OpenAI, Amazon, Meta, and Google be treated as “high risk” under proposed new laws, which could subject them to mandatory guardrails.
GenAI was already in use among some University of Sydney students, with 330 instances of apparent AI-assisted plagiarism identified in 2023.
In a LinkedIn post, University of Adelaide AI coordinator Eddie Major said the university was currently seeking feedback on its own pilot “4-level” AI guidelines.
New AI rules for NSW lawyers
While the education sector may be beginning to embrace AI, the chief justice of New South Wales recently issued guidelines for the use of genAI in the state’s legal sector, banning it entirely for work relating to affidavits, witness statements, character references, or expert reports.
The guidelines also state that if genAI is used in preparing written submissions or skeleton arguments, lawyers must verify that all citations, legal and academic authorities, and case law and legislative references actually exist and are accurate.
The guidelines direct judges not to use genAI in formulating reasons or in assessing or analysing evidence ahead of delivering reasons for judgment, but they do permit use of the technology for secondary legal research.
This comes after a group of Australian academics was forced to “unreservedly apologise” for including factually incorrect, genAI-generated allegations about big consulting firms in a submission to a Senate inquiry late last year.