If you feel confident that you can discern real answers from those given by ChatGPT, then you may need to think again, as Anna* discovered.
Over the course of her long career in IT, Anna has interviewed and recruited many employees.
Late last year, Anna recruited Sue*, a Chinese national, for the role of cloud engineer.
As per the usual process, the initial interview was conducted over the phone. Anna was happy with the responses Sue provided. The interview progressed to the next stage, leading to Sue being employed by the company.
Later, Sue divulged to Anna that she had used ChatGPT to answer the questions Anna put to her during their phone interview.
At first, Anna was shocked and unsure how to respond, but she later realised that her new cloud engineer was at least being honest.
Sue’s reference checks and skill set all checked out. Even though she used ChatGPT, she did have the credentials to do the job.
Regardless, Anna had concerns that the new hire might not actually be able to carry out her responsibilities, and this weighed heavily on her because ultimately the success of the role lay with Anna, not Sue.
Anna has since spoken with Sue and told her she will need to demonstrate that she can work without ChatGPT.
Although this new employee is showing sincerity and dedication, it is clear to Anna that Sue will need closer monitoring to ensure success.
Use of AI is contextual
Professor Cath Ellis from the School of the Arts and Media at the University of New South Wales, who is researching academic integrity with a particular interest in contract cheating and ChatGPT, said AI in itself isn't a problem.
“If you've got face recognition on your phone, if you use predictive text on your email, or a GPS navigator to figure out the best route to get where you want to go; these things are using AI tools. They’re ubiquitous and we are using AI every day unconsciously – only we’ve stopped calling it AI.”
At the same time, there are people putting these tools to use in ways that might be considered unacceptable in some contexts.
Prof Ellis adds that the ethics of any situation are largely determined by what is considered acceptable and unacceptable assistance in that context.
“ChatGPT isn’t going away, with the updated GPT-3 about to be launched. Students today are using it and the question many are asking is, will they be able to do work without the assistance of ChatGPT or any other AI?”
One contributing factor to its uptake is that it is free.
“ChatGPT has risen sharply, largely because it’s free, making it accessible to students in schools and universities,” she adds.
States ban ChatGPT
The ongoing debate around the use of ChatGPT has fuelled much press and commentary across the nation.
Three states – Queensland, New South Wales and Western Australia – do not permit the use of ChatGPT in their schools and universities. To date, South Australia has given ChatGPT the green-light for students to use in schools.
The obvious concern is that ChatGPT could potentially be used by university and school students to cheat on written assignments without being detected.
It's been described as ‘students outsourcing their homework to robots’.
But now, the creators of ChatGPT have announced new software to detect text generated by the popular tool.
In response to ongoing concern within academic and teaching circles that its tool enables cheating, OpenAI has released software to identify text generated by artificial intelligence.
The developer warned that the system was not foolproof, stating that its method for detecting AI-written text “is imperfect” and would sometimes be wrong.
AI Law
Lyria Bennett Moses, Associate Dean of Research in the Faculty of Law and Justice at the University of New South Wales, said that presently there are no technology-specific laws around the use of ChatGPT.
However, there are laws that may be applicable. For example, a person who generates content through ChatGPT but claims that it is their own work (for which they are paid) may be guilty of the offence of fraud (obtaining a benefit by deception).
Or they may be in breach of a contract. A person promoting the use of ChatGPT for student ‘cheating’ at university may be guilty of an offence under the Tertiary Education Quality and Standards Agency Act 2011. In some cases, there may be breaches of copyright law.
“School and university policies can result in disciplinary action being taken against students. In some cases, this will have long term consequences, where a law student is no longer able to be admitted as a legal practitioner,” she said.
As to how these would hold up in a specific fact scenario, it depends on the context. None of this has been tested specifically in the courts, adds Prof Bennett Moses.
In the context of a fraudulent interview, she said this could constitute fraud (obtaining a benefit by deception) depending on the circumstances, for example whether they did in fact benefit.
“It also depends of course on whether they have been caught out. Most importantly, if the employer finds out and the person is sacked, I doubt there would be much of a case for unfair dismissal.”
Currently, Prof Bennett Moses is a member of a Standards Australia committee working on AI standards.
The standards are being developed at an international level through the joint technical committee of ISO (the International Organization for Standardization) and IEC (the International Electrotechnical Commission), including on issues related to management and governance as well as trustworthiness.
“Standards will deal with issues across the whole lifecycle for development and use of AI systems, and Australia is actively engaged in the drafting process.”
*names changed to protect the privacy of individuals