AI Echo Chambers: ChatGPT Sources | Analysis by Brian Moineau

When one AI cites another: ChatGPT, Grokipedia and the risk of AI-sourced echo chambers

Information wants to be useful — but when the pipes that deliver it start to loop back into themselves, usefulness becomes uncertain. Last week’s revelation that ChatGPT has begun pulling answers from Grokipedia — the AI-generated encyclopedia launched by Elon Musk’s xAI — isn’t just a quirky footnote in the AI wars. It’s a reminder that where models get their facts matters, and that the next chapter of misinformation might not come from trolls alone but from automated knowledge factories feeding each other.

Why this matters right now

  • Grokipedia launched in late 2025 as an AI-first rival to Wikipedia, promising “maximum truth” and editing driven by xAI’s Grok models rather than human volunteer editors.
  • Reporters from The Guardian tested OpenAI’s GPT-5.2 and found it cited Grokipedia multiple times for obscure or niche queries, rather than for well-scrutinized topics. TechCrunch picked up the story and amplified concerns about politicized or problematic content leaking into mainstream AI answers.
  • Grokipedia has already been criticized for controversial content and lack of transparent human curation. If major LLMs start using it as a source, users could get answers that carry embedded bias or inaccuracies — with the AI presenting them as neutral facts.

What happened — a short narrative

  • xAI released Grokipedia in October 2025 to great fanfare and immediate controversy; some entries and editorial choices were flagged by journalists as ideological or inaccurate.
  • The Guardian published tests showing that GPT-5.2 referenced Grokipedia in several responses, notably on less-covered topics where Grokipedia’s claims differed from established sources.
  • OpenAI told reporters it draws from “a broad range of publicly available sources and viewpoints,” but the finding raised alarm among researchers who worry about an “AI feeding AI” dynamic: models trained or evaluated on outputs that themselves derive from other models.

The risk: AI-to-AI feedback loops

  • Repetition amplifies credibility. When a large language model cites a source — and users see that citation or accept the answer — the content’s perceived authority grows. If that content originated from another model rather than vetted human scholarship, the process can harden mistakes into accepted “facts.”
  • LLM grooming and seeding. Bad actors (or even well-meaning but sloppy systems) can seed AI-generated pages with false or biased claims; if those pages are scraped into training or retrieval corpora, multiple models can repeat the same errors, creating a self-reinforcing echo.
  • Loss of provenance and nuance. Aggregating sources without clear provenance or editorial layers makes it hard to know whether a claim is contested, subtle, or discredited — especially on obscure topics where there aren’t many independent checks.

Where responsibility sits

  • Model builders. Companies that train and deploy LLMs must strengthen source vetting and transparency, especially for retrieval-augmented systems. That includes weighting human-curated, primary, and well-audited sources more heavily.
  • Source operators. Sites like Grokipedia (AI-first encyclopedias) need clearer editorial policies, provenance metadata, and visible mechanisms for human fact-checking and correction if they want to be treated as reliable references.
  • Researchers and journalists. Ongoing audits, red-teaming and independent testing (like The Guardian’s probes) are essential to surface where models are leaning on questionable sources.
  • Regulators and platforms. As AI content becomes a larger fraction of web content, platform rules and regulatory scrutiny will increasingly shape what counts as an acceptable source for widespread systems.
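The source-weighting idea above can be sketched as a simple provenance-aware reranker that multiplies a document's retrieval relevance by a trust score for its origin. This is a minimal illustration, not any vendor's actual pipeline; the domains and trust values below are invented for the example.

```python
# Minimal sketch of provenance-aware reranking in a retrieval pipeline.
# The trust scores and domains below are invented for illustration only.

TRUST = {
    "wikipedia.org": 1.0,     # human-curated, heavily audited
    "nature.com": 1.0,        # peer-reviewed primary source
    "grokipedia.com": 0.3,    # AI-generated, little human vetting
}
DEFAULT_TRUST = 0.5           # unknown provenance gets a neutral weight


def rerank(results):
    """Re-score retrieved documents by source trust before answering.

    `results` is a list of dicts with 'domain' and 'relevance' keys.
    The final score multiplies relevance by a provenance weight, so an
    AI-only encyclopedia must be far more relevant than a well-audited
    source to outrank it.
    """
    for doc in results:
        doc["score"] = doc["relevance"] * TRUST.get(doc["domain"], DEFAULT_TRUST)
    return sorted(results, key=lambda d: d["score"], reverse=True)


docs = [
    {"domain": "grokipedia.com", "relevance": 0.9},
    {"domain": "wikipedia.org", "relevance": 0.7},
]
ranked = rerank(docs)
print(ranked[0]["domain"])  # wikipedia.org: 0.7 * 1.0 beats 0.9 * 0.3
```

The design point is that provenance becomes an explicit, inspectable input to ranking rather than an invisible property of the training or retrieval corpus.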

What users should do today

  • Ask for sources and check them. When an LLM gives a surprising or consequential claim, look for corroboration from reputable human-edited outlets, primary documents, or scholarly work.
  • Be extra skeptical on obscure topics. The reporting found Grokipedia influencing answers on less-covered matters — exactly the places where mistakes hide.
  • Prefer models and services that publish retrieval provenance or let you inspect the cited material. Transparency helps users evaluate confidence.

A few balanced considerations

  • Not all AI-derived content is inherently bad. Automated systems can quickly produce helpful summaries and context. The problem isn’t automation per se but opacity and the lack of corrective human governance.
  • Diversity of sources matters. OpenAI’s claim that it draws on a range of publicly available viewpoints is sensible in principle, but diversity doesn’t replace vetting. A wide pool of low-quality AI outputs is still a poor knowledge base.
  • This is a systems problem, not a single-company scandal. Multiple major models show signs of drawing from problematic corners of the web — the difference will be which organizations invest in safeguards and which don’t.

Things to watch next

  • Will OpenAI and other major model providers adjust retrieval weightings or add filters to downrank AI-only encyclopedias like Grokipedia?
  • Will Grokipedia publish clearer editorial processes, provenance metadata, and human-curation layers to be treated as a responsible source?
  • Will independent audits become standard industry practice, with third-party certifications for “trusted source” pipelines used by LLMs?

My take

We’re watching a transitional moment: the web is shifting from pages written by people to pages largely created or reworded by machines. That shift can be useful — faster updates, broader coverage — but it also challenges the centuries-old idea that reputable knowledge is rooted in accountable authorship and transparent sourcing. If we don’t insist on provenance, correction pathways, and human oversight, we risk normalizing an ecosystem where errors and ideological slants are amplified by the very tools meant to help us navigate information.

In short: the presence of Grokipedia in ChatGPT’s answers is a red flag about data pipelines and source hygiene. It doesn’t mean every AI answer is now untrustworthy, but it does mean users, builders and regulators need to treat the provenance of AI knowledge as a first-class problem.

Séance of Blake Manor: A Haunting | Analysis by Brian Moineau

The Séance of Blake Manor: A Halloween detective that’s already haunting my bookmarks

Turnips! Everywhere! As far as the eye can see! Well, not quite — but that cheeky image from Eurogamer’s piece captures the game’s mix of whimsy and creeping dread perfectly. The Séance of Blake Manor is the kind of spooky, intelligent detective game that slips into your brain the way a good ghost story slips under a door: slow, deliberate, and impossible to shake once it’s inside.

Why this one feels special

  • It’s a first-person detective mystery set on All Hallows’ Eve, 1897, in a remote Irish manor full of mystics, secrets, and theatrical supernatural trappings.
  • You play Declan Ward, a private investigator racing against time to find Evelyn Deane before a grand séance, and every action nudges the clock forward.
  • The game blends interrogation, deduction, and environmental exploration with a layered narrative that leans into Irish folklore and folk horror rather than cheap jump-scares.
  • The tone oscillates between wry and unsettling: characterful guest interactions, moral ambiguity, and symbolic artifacts (yes, including turnips and other evocative props) that root the hauntings in cultural and historical context.

Key takeaways

  • The game nails atmosphere: ornate, graphic-novel-inspired visuals and a dynamic soundtrack that supports the mood rather than hogging it.
  • Investigation systems reward curiosity: note-taking, cross-referencing clues, and interrogations let players feel like actual sleuths rather than passive observers.
  • The narrative aims beyond thrills: themes of cultural appropriation, colonial legacies, and trauma are woven into the mystery, giving the scares weight and relevance.
  • Short, focused design: with a clear 48-hour in-game time frame, the game promises tension and pacing suited to a Halloween playthrough.
  • Positive early reception: demos and early reviews show strong player and critic enthusiasm, positioning it as a standout indie release this autumn.

What I love (and what might ruffle you)

  • Atmosphere and craft: The manor is a character in its own right. Rooms, objects, and lighting are composed with purpose — you’ll pause in hallways just to take it all in.
  • Detective pleasures: The game puts deduction front and center. There’s delight in stitching together testimony, forensic details, and subtle environmental hints to build a coherent case.
  • Narrative ambition: Tackling topics like diaspora and historical injustice within a gothic context is bold for a game of this scale, and when it lands, it adds meaningful depth to otherwise familiar spooky tropes.
  • Time-pressure trade-off: The 48-hour countdown creates urgency, but that same constraint can frustrate players who prefer long, leisurely investigations.
  • Balance of supernatural and rational: The line between eerie atmosphere and outright horror is carefully walked; players expecting nonstop scares may instead find slow-burn unease and philosophical payoffs.

How it fits the season (and your library)

If you love detective games with character-driven narratives (think Return of the Obra Dinn, The Vanishing of Ethan Carter, or narrative-led indie mysteries) and also crave a game that leans into autumnal vibes, this is tailor-made for late-October gaming sessions. Shorter playtime and a single-location setting make it ideal for a focused weekend run — perfect for Halloween night with a cup of something warm and a dim lamp.

A few more reasons to care:

  • “The Séance of Blake Manor” offers a mix of folk horror and detective gameplay that taps into current interest in narrative-driven indie games.
  • It’s developer Spooky Doorway’s ode to gothic storytelling, backed by publisher Raw Fury — names that indie fans watch closely.
  • Steam demo impressions were positive, and launch coverage suggests the game already resonates with critics and players.

A short reflection

There’s something quietly radical about a game that invites you to interrogate more than suspects: interrogate assumptions. The Séance of Blake Manor uses the trappings of séance theatrics and haunted manors to point at deeper cultural questions, while still delivering the immediate satisfaction of solving puzzles and unmasking half-truths. It’s the sort of experience that lingers after you close the game: not just which twist you missed, but which stories get told and why.
