The rise of AI tools like ChatGPT has sparked both excitement and trepidation in the Wikiverse. Will these innovative technologies enhance platforms like Wikipedia? Or do they pose an existential threat?

This was the subject of passionate discussion at Wikipedia Day, an event organized by Wikimedia CH as part of our annual General Assembly programme in Bern. The takeaways from the discussion mirrored the results of our chapter’s exploration into AI, namely, how to harness the power of AI to advance Wikimedia projects while remaining aligned with the Wikimedia principles of sustainability, transparency and equity. Below are the key points to consider as we look to the future of human knowledge in a time of machine-generated content.

Human-AI collaboration

While we see real potential for AI to support the research and creation of Wikipedia content, human curation and scrutiny must remain central. The very term "artificial intelligence" is misleading: these are tools created by human intelligence, trained on human-generated data. Simply put, AI is not independent from human agency.
There is an underlying concern about prioritizing AI outputs over nuanced depth and rigor. Generative AI tends to oversimplify, which raises the risk of diminishing quality and insight. Human editors are needed to provide nuance; overreliance on AI has the potential to "dumb things down," eroding the depth and rigor that distinguish Wikipedia and other open knowledge tools.

Another issue is curation and contextualization. Even a straightforward AI-generated summary involves an AI tool making inherently subjective decisions about what information to include or omit. Wikipedians remain the critical lens to frame and prioritize information. Machines can write content, but they can’t make sense of it for human readers.

Principle-based AI integration

There are also concerns within the community that AI may not fully align with the core Wikimedia principles of transparency, sustainability and equity. Many generative AI tools do not provide references for information, "hallucinate" to create misinformation, and generate biased content. If these tools are to become part of the editing and creation process, it's critical for human editors to verify the generated content for accuracy and fairness.

A key issue is transparency, which was much discussed during the Wikipedia Day debate. It was generally agreed that if an editor uses AI assistance, they must clearly indicate it to maintain full authorship transparency.

Broader impact assessment

Overall, we must balance AI's potential to enhance efficiency and counter misinformation with renewed investment in human discernment and critical thinking skills. The focus can't be solely on implementing AI, but also on improving our capacities to wield these tools ethically while cultivating nuanced knowledge.

It will also be essential to understand and mitigate the impact of AI tools on Wikipedia and other Wikimedia projects. Generative AI is reshaping the way people access information — a shift that is already impacting visits to Wikipedia. While ChatGPT and other tools divert direct traffic from Wikipedia, they heavily rely on its vast repository of human knowledge. Despite decreased visits, Wikipedia remains a cornerstone of verified, fact-based knowledge essential for both humans and machines.

Help ensure the sustainability of Wikimedia projects

Wikimedia CH is working hard to ensure the longevity and relevance of Wikipedia and other Wikimedia projects in the face of evolving technological and societal challenges. In particular, we work to empower volunteer editors, foster innovation around AI and other topics, and advocate for open knowledge as it relates to AI. In doing so, we’re safeguarding the future of knowledge for generations to come.

As a non-profit organisation, Wikimedia CH relies on donors like you to make all of this possible.

Please consider making a tax-deductible donation by clicking the button below.