Wikipedia Halts AI Summary Pilot After Backlash From Editors

Updated: June 12, 2025

Reading Time: 2 minutes

Wikipedia has paused its pilot project for AI-generated article summaries after pushback from its volunteer editors.

Before the pause, the summaries appeared at the top of select Wikipedia articles. They were shown only to users who had installed a special browser extension and opted in.

Each summary carried a yellow “unverified” label, and readers had to click to view the content.

The Wikimedia Foundation introduced the trial earlier this month to test how AI could summarize lengthy articles. 

The Opposition 

Editors began to express concern over the accuracy of the AI summaries. They argued that the feature could damage the site’s reputation.

The main issue was trust: AI models often produce “hallucinations,” false or misleading statements presented as fact.

Even a small mistake could mislead millions of readers. Editors argued that Wikipedia should not repeat the errors other publishers have already made with AI summaries.

AI’s History of Errors

Wikipedia is not the first to test AI summaries; Bloomberg and CNET have done the same, and both faced similar issues. 

Apple ran into a similar problem with its Apple Intelligence news summary feature, which the company suspended after inaccurate summaries damaged public trust.

In several cases, the summaries contained factual mistakes, which led to public corrections and reduced trust.

AI can process text quickly and produce summaries in seconds. However, that speed comes at a cost.

The output often lacks nuance or context. This is especially dangerous for an information platform like Wikipedia.

A Project Pause

The Wikimedia Foundation has paused the experiment, but it has not ruled out future use of AI.

The team remains interested in exploring how the technology can support accessibility.

For instance, simplified summaries may help users with learning challenges or language barriers. 

However, any future use will likely involve stronger review processes.

Human Oversight Remains Key

Wikipedia relies on volunteers to read, review, and revise every article. This effort ensures the platform’s accuracy.

By contrast, AI cannot reason or verify facts. Instead, it predicts language patterns based on training data. While it can assist, it cannot replace editorial oversight.

Many editors feel strongly about this point. They believe that trust must come before convenience. AI, in their view, should support, not replace, human judgment.

Lolade

Contributor & AI Expert