The convenience of artificial intelligence may come at the expense of real brain power, after researchers found that using ChatGPT can reduce cognitive engagement and diminish learning skills.

Over a four-month period, Massachusetts Institute of Technology (MIT) researchers asked 54 adults to write a series of three essays using either ChatGPT, search engines or nothing but their brain.

The MIT team then measured the results of these groups by examining electrical signals in the brain, performing language analysis on finished essays, and conducting post-session interviews.

According to the study, the cognitive engagement of those who used ChatGPT was markedly lower than those who didn’t.

The AI-using participants demonstrated “poor recall and incorrect quoting” of the essays they’d written, felt less ownership of their work, and were outperformed by their “brain-only” counterparts at every level measured: neural, linguistic and essay scoring.

They also appeared to make less of an active effort towards the end of the four-month study, employing “minimal editing” on essays that were “mostly copy-paste”.

“AI tools, while valuable for supporting performance, may unintentionally hinder deep cognitive processing, retention, and authentic engagement with written material,” read the study.

“If users rely heavily on AI tools, they may achieve superficial fluency but fail to internalise the knowledge.”

Those who wrote without AI or a search engine meanwhile demonstrated “stronger memory”, “greater semantic accuracy”, and “higher brain connectivity” compared to other groups.

A cognitive debt

Although the educational impact of using large language models (LLMs) is only beginning to “settle with the general population”, the study’s authors warned of “a likely decrease in learning skills” based on their results.

A preliminary finding was that participants accumulated a “cognitive debt” – a condition the authors say “defers mental effort in the short term” but results in long-term costs such as “diminished critical inquiry, increased vulnerability to manipulation [and] decreased creativity”.

Indeed, not only did repeated ChatGPT usage yield reduced brain connectivity over time, but when AI-using participants later switched roles with the brain-only group to write an essay on their own, they still performed worse.

Conversely, those who switched from the brain-only group to ChatGPT-assisted writing showed a “significant increase in brain connectivity” and performed better in memory tests – suggesting the technology can yield favourable outcomes depending on how it’s approached.

AI reshapes language

Compared to the “statistically homogeneous essays” produced by participants who used ChatGPT, the brain-only group exhibited “strong variability” in their approach to essay writing.

The initial ChatGPT-using group also demonstrated a “bias in LLM specific vocabulary” and reportedly “kept reusing same vocabulary and structure” in the essays they later wrote on their own.

Both ChatGPT and search engine users also showed less variability in word length, while, notably, brain-only participants later “gave in to LLM suggestions” when rewriting their essays with AI.

Late last year, researchers at the Max Planck Institute for Human Development observed a similar trend in AI’s influence on vocabulary, when analysis of some 280,000 academic YouTube videos showed speakers using words like “meticulous”, “delve”, “realm”, and “adept” up to 51 per cent more in the 18 months after ChatGPT was released.

“We haven’t had ChatGPT and LLMs for that long,” said Toby Walsh, chief scientist of the University of New South Wales AI Institute and member of the ACS AI and Ethics committee.

“There’ll need to be some longer-term research to observe the effects on, for example, children as they acquire language.

“But it is worrying… the brain is a muscle and if we don’t use it, we don’t develop strength.”

AI can rot your brain, but so can a calculator

University of South Australia associate professor Vitomir Kovanovic and lecturer Rebecca Marrone meanwhile criticised the MIT results, arguing brain-only participants were given the advantage of gradually becoming more familiar with the study task.

“When the AI group finally got to ‘use their brains’, they were only doing the task once,” they wrote for The Conversation.

Walsh ultimately suggested generative AI is a “double-edged sword” – he’d certainly encountered students using it to cheat, but argued those doing so were essentially missing the chance to use it as a “fantastic personal tutor”.

“When I was in school, calculators introduced a vigorous debate – if we relied on calculators too much, would anyone know how to do arithmetic?” said Walsh.

“I expect the debate around language models will unfold in a similar way.

“Students had to be taught their mathematics skills adequately before picking up a calculator.

“Similarly, it’s important people learn to communicate, to construct an argument and to analyse problems critically, and we’re probably going to have to require that people don’t use a language model until they’ve actually developed those skills for themselves.”