The English version of Wikipedia now prohibits editors from using generative artificial intelligence tools to generate or rewrite articles, following a vote by its volunteers.

“Text generated by large language models (LLMs) such as ChatGPT, Gemini, Claude, DeepSeek, etc. often violates several of Wikipedia's core content policies,” read Wikipedia’s guidelines, which were updated last week.

Under the site’s content policies, all articles must be written objectively, use reliable and verifiable third-party sources, and not include any original research.

Wikipedia's updated policy on the use of LLMs was accepted in a 40-to-2 vote among editors, 404 Media reported.

Editors will still be allowed to use LLMs “to suggest basic copyedits to their own writing”, according to the policy, but must not allow the AI to introduce content of its own.

“Caution is required, because LLMs can go beyond what is asked of them and can change the meaning of the text such that it is not supported by the sources cited,” the new policy stated.

Wikipedia editors will also be allowed to use LLMs to translate articles from other language editions for the English version of the site, but must follow further guidelines, as AI-based translations have reportedly introduced errors in the past.

The Australian arm of the Wikimedia Foundation, the non-profit organisation which hosts Wikipedia, said while it does not develop or enforce the site’s editorial policies, it “certainly supports the editing community in ensuring articles are accurate, written to a high standard and that sources are verifiable”.

“When Wikimedia Australia runs training for new editors, we ensure participants are aware of relevant policies, and will include this recent update on AI use,” a spokesperson told Information Age.

The international Wikimedia Foundation did not respond to a request for comment.

Wikipedia’s ‘big responsibility’ in the age of AI

Australian Alex Lum, who has been a Wikipedia editor for more than 20 years and a site administrator for most of that time, said the Wikipedia community had “a pretty big responsibility these days”, as its content was openly licensed and relied upon by everything from Google to personal assistants like Apple’s Siri and Amazon’s Alexa, as well as AI models and chatbots.

Incorrect or non-neutral text in a Wikipedia article could be “picked up by a web scraper or AI scraper bot and end up being returned to someone who happens to ask or put in a prompt in a particular way”, Lum said.

“There's a lot of responsibility for editors, and I guess if you're using LLMs and generative AI, it kind of risks creating a feedback loop of AI making something up.”


Wikipedia has previously stated it would not replace human editors with AI. Image: Shutterstock

Using generative AI, “particularly at a pretty large scale”, would not be in line with Wikipedia’s content policies, Lum added.

“There's risks in terms of bias, defamatory material, false material, things being completely made up and hallucinated and so on,” he said.

How can Wikipedia's AI ban be enforced?

Wikipedia’s updated policy does not detail how the community plans to enforce the generative AI ban, but notes that “some editors may have similar writing styles to LLMs”.

“More evidence than just stylistic or linguistic signs is needed to justify sanctions, and it is best to consider the text's compliance with core content policies and recent edits by the editor in question,” it added.

Lum said many Wikipedia volunteers pointed to the use of em dashes (—) or the capitalisation of titles to claim text or articles on the site had been generated using AI.

“But the Wikipedia style is to use em dashes,” he said.

Still, some Wikipedia contributors “do make it obvious” when they use AI, Lum added, such as by linking to ChatGPT instead of to other Wikipedia pages.

“Or they forget to cut out the ‘I can rewrite this if you like’ line that [AI chatbots] always helpfully put in,” he said.

Lum suggested Wikipedia editors were “certainly not against AI”, as they often used AI-based tools to detect vandalism on Wikipedia pages and to alert human administrators.

The Wikimedia Foundation released a new AI strategy in April 2025, saying it would not “replace Wikipedia's human-curated knowledge with AI”.

“For nearly 25 years, Wikipedia editors have researched, deliberated, discussed, built consensus, and collaboratively written the largest encyclopedia humankind has ever seen,” the Wikimedia Foundation said at the time.

“Their care and commitment to reliable encyclopedic knowledge is something AI cannot replace.”

Australian Wikipedia editors are expected to discuss how AI is impacting all aspects of Wikimedia’s platforms at WikiCon Australia 2026 in Canberra in April.