
14 Things Education Publishers Can Learn from a History Encyclopedia About Building AI-People Trust

  • Published on: February 23, 2026
  • Updated on: February 23, 2026
  • Reading Time: 8 mins
Authored By: Eric Stano

VP, Consulting, Curriculum, and Product Strategy

When I sat down to talk with Jan van der Crabben, Founder and CEO of World History Encyclopedia, I expected a conversation about content, digitisation, and the obvious “AI is changing everything” riffs.

We got those, too.

What I did not expect was how many practical product and strategy lessons were hiding inside a conversation about history. Especially for education publishers and EdTech companies shipping AI features into schools, where “cool demo” and “classroom reality” are two very different worlds.

Below are the moments from the episode that felt genuinely new, or at least newly urgent, for anyone building AI into learning products.

 

1. “Access” Is Not the Same Thing as “Free”

Jan’s origin story is basically a market gap analysis in human form. He was doing historical research for strategy games and realised the internet had a triangle problem:

  • Free content that is not always reliable
  • Reliable content that is not free
  • Academic content that is either too dense or too expensive, or both

His point for publishers is sharper than it looks: open access is only step one. For learning to happen at scale, content has to be readable, engaging, and well-presented. “Free” is not a pedagogical strategy.

 

2. History Is a Web. Build Products as If You Believe It

Jan makes a compelling case that we teach history in disconnected “windows,” but the real world is connected.

He gave a perfect example: the Louisiana Purchase, Napoleon’s wars, and Latin American independence movements are often taught separately, yet they are causally linked. When students can see those links, history stops being trivia and starts being a system.

For product teams: stop thinking in chapters and start thinking in relationships. Your content model should not be a bookshelf. It should be a graph.

 

3. Hyperlinks Are Not a UX Garnish. They Are the Learning Experience

The World History Encyclopedia did not just sprinkle links into articles. They built a custom system that treats topics, images, timeline events, and articles as connected entities so the site can automatically surface relationships.

That is a big deal for publishers and platforms because it reframes “content discovery” as something other than search and filters. It becomes guided exploration: learning design expressed as information architecture.
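
As a sketch of what that looks like in practice (the entity types and method names here are illustrative, not World History Encyclopedia’s actual schema), a graph-shaped content model can start as small as this:

```python
from dataclasses import dataclass, field

@dataclass
class Entity:
    id: str
    kind: str                                     # "article", "image", "timeline_event", "topic"
    title: str
    links: set[str] = field(default_factory=set)  # ids of related entities

class ContentGraph:
    """A minimal content graph: typed entities plus symmetric relationships."""

    def __init__(self) -> None:
        self.entities: dict[str, Entity] = {}

    def add(self, entity: Entity) -> None:
        self.entities[entity.id] = entity

    def relate(self, a: str, b: str) -> None:
        # Symmetric links mean discovery works from either end.
        self.entities[a].links.add(b)
        self.entities[b].links.add(a)

    def related(self, entity_id: str, kind: str | None = None) -> list[Entity]:
        # Surface connected entities automatically, optionally filtered by type,
        # so "related timeline events" becomes a query, not an editorial chore.
        neighbours = [self.entities[eid] for eid in self.entities[entity_id].links]
        return [e for e in neighbours if kind is None or e.kind == kind]
```

Once relationships are data rather than hand-placed links, the Louisiana Purchase article can surface Napoleon’s wars and Latin American independence automatically, which is the whole point of the previous section.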

 

4. History, Taught Well, Is a Critical Thinking Engine

Jan argues that history has a bad reputation because it is often taught as dates and facts, not as source evaluation and human decision-making.

But if you teach it properly, history becomes the discipline that trains learners to ask:

  • Who is telling me this?
  • What did they want?
  • What incentive shaped what they wrote?
  • How trustworthy is the account?

That is not just a history class. It is a lesson in “how to survive the modern internet.”

 

5. In an AI World, “Historical Thinking” Is the Skill We Keep Outsourcing

One of the best moments in the episode is the explicit connection between history and democratic participation. Jan’s argument is basically: if people cannot evaluate sources, they cannot participate responsibly in civic life. And when enough people cannot do that, democracy gets fragile.

Publishers building AI into learning tools should sit with that. AI is not just a product feature. It is an amplifier of whatever source literacy we have managed, or failed, to teach.

 

6. AI Makes Retrieval Too Easy, Which Makes Verification Too Rare

Jan’s concern about AI is not “robots are coming.” It is “verification is leaving.”

With traditional search, you click around. You encounter multiple sources, conflicting framings, and side paths. With AI, you get one synthesised answer that sounds coherent enough to stop thinking.

And once you stop thinking, the model becomes the authority. That is a design outcome, not a neutral byproduct.

 

7. “Algorithms Are Never Neutral” Is a Product Requirement

Jan does not mince words here: a handful of companies controlling information retrieval for the world should make all of us uncomfortable, especially in education.

He even points to an overt example of how a worldview can be baked into an AI system when creators advertise it as such. That is not a political statement so much as a warning: if you rely on a black box, you are also relying on its incentives.

If your product roadmap includes “AI answers,” you need a governance plan, not just a model.

 

8. Teacher Feedback That Should Scare Every AI Product Manager: “These Answers Are Too Long”

This is where the episode gets very real, very fast.

The World History Encyclopedia tested its AI tool with teachers. Teachers liked the idea of using the encyclopedia’s curated content as a source base. What they did not like was the ChatGPT-style verbosity.

In classrooms, long answers are not automatically helpful. They are often suspicious. Teachers want:

  • Shorter answers
  • Stronger emphasis on sources
  • Clear bibliographies
  • More obvious pathways to verify

Translation: in education, “long and fluent” can look like “made up.”
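
Read that feedback as a response contract. A minimal sketch (the field names are mine, not the encyclopedia’s API) might look like this:

```python
from dataclasses import dataclass

@dataclass
class Source:
    title: str
    url: str

@dataclass
class ClassroomAnswer:
    answer: str              # deliberately short; a few sentences, not an essay
    sources: list[Source]    # every claim should trace back to one of these
    bibliography: list[str]  # formatted citations a teacher can check
    read_more: list[str]     # obvious pathways into the underlying articles

    def is_classroom_ready(self) -> bool:
        # Crude heuristic: sourced, and short enough not to read as an essay.
        return bool(self.sources) and self.answer.count(".") <= 5
```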

 

9. The Best Repositioning Move They Made Was Calling It a Research Tool

Naming matters. Teachers heard “chat” and thought “ChatGPT,” and that triggered a whole set of valid concerns.

So they leaned into what the tool actually needed to be: a research assistant. Short answers, citations, and a push toward reading the underlying sources.

This is a useful lesson for any publisher or EdTech company tempted to market “AI tutor” features. Schools do not need more AI bravado. They need tools that behave like a good librarian.

 

10. Guardrails: Refuse Essay-Writing, Inappropriate Prompts, and Off-Topic Prompts

Jan describes guardrails that many companies still treat as optional. They blocked:

  • Essay-writing behaviour
  • Inappropriate queries
  • Non-history queries

And that matters because classroom-safe AI is not about being “responsible” in the abstract. It is about reducing very specific failure modes: cheating, misuse, and brand damage.

If your tool can generate a full submission, you have built a shortcut machine. You can call it “learning support” if you would like. The classroom will call it something else.
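
A minimal sketch of that refusal layer, assuming upstream classifiers for topicality and appropriateness (the keyword matching here is deliberately naive; a production system would use a trained classifier or a moderation API):

```python
ESSAY_SIGNALS = ("write an essay", "write my essay", "write a paper", "do my homework")

def guardrail(query: str, on_topic: bool, appropriate: bool) -> str | None:
    """Return a refusal message, or None if the query may proceed.

    `on_topic` and `appropriate` stand in for upstream classifiers.
    """
    q = query.lower()
    if any(signal in q for signal in ESSAY_SIGNALS):
        return "I can help you research this, but I won't write the submission for you."
    if not appropriate:
        return "I can't help with that question."
    if not on_topic:
        return "I can only answer questions about history."
    return None
```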

 

11. Teacher Mode and Student Mode Should Not Behave the Same

This was one of the freshest ideas in the episode.

Teachers want to generate lesson plan ideas and classroom activities. A five-sentence answer is not useful for that. So the team relaxes some guardrails when the intent is teacher preparation.

That is how you design for real use cases rather than one-size-fits-all policies. It is also how you avoid punishing teachers in the name of preventing student misuse.
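
One way to express that split is as explicit per-mode policy rather than scattered if-statements. These fields and limits are illustrative, not the encyclopedia’s actual configuration:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ModePolicy:
    max_sentences: int
    allow_lesson_planning: bool
    refuse_essay_writing: bool  # stays on in both modes

STUDENT_MODE = ModePolicy(max_sentences=5, allow_lesson_planning=False, refuse_essay_writing=True)
TEACHER_MODE = ModePolicy(max_sentences=40, allow_lesson_planning=True, refuse_essay_writing=True)
```

Note that essay refusal stays on in both modes: relaxing guardrails for teacher preparation is not the same as removing them.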

 

12. The “Thin Corpus” Problem: Hallucinations Appear When Your Coverage Runs Out

Most publishers start by restricting AI to their own content. When the system has enough material, the answers are solid. When it does not, the model drifts into improvisation.

That is a problem every publisher will recognise the moment they ship a constrained retrieval experience: when coverage is thin, the model still tries to be helpful.

The fix is twofold: a deliberate corpus strategy, and a fallback design for when coverage runs out.
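
A fallback design can be as blunt as a coverage threshold. The numbers here are invented for illustration, and `retrieve` and `generate` are stand-ins for your own retrieval and generation stack:

```python
MIN_PASSAGES = 3   # how many strong passages we need before answering
MIN_SCORE = 0.55   # retrieval-score floor; entirely corpus-dependent

def answer_or_fallback(query: str, retrieve, generate) -> str:
    """`retrieve` returns scored passages from your corpus; `generate`
    produces an answer grounded in the passages it is given."""
    passages = [p for p in retrieve(query) if p.score >= MIN_SCORE]
    if len(passages) < MIN_PASSAGES:
        # Graceful refusal beats confident improvisation.
        return ("I don't have enough reliable material to answer that well. "
                "Here are related topics you can browse instead.")
    return generate(query, passages)
```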

 

13. A Very Publisher-Specific Move: Expand with Open-Access Journals Instead of the Open Web

To improve coverage while staying grounded, Jan’s tool integrated with a large open-access research corpus (via a university-linked index). That allowed the tool to answer more questions without defaulting to random internet content.

This is a key design choice for publishers: you can expand coverage while still controlling quality. It also lets you do something important pedagogically: translate high-level research into student-appropriate language while still showing where it came from.
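
In retrieval terms, that is a tiering decision: prefer your own edited content, fall back to a vetted open-access index, and never silently fall through to the open web. A sketch, with `search` methods standing in for whatever your corpus interfaces actually expose:

```python
def retrieve_tiered(query: str, editorial_corpus, open_access_index, min_hits: int = 3):
    """Return (passages, tier) so the UI can label where an answer came from."""
    hits = editorial_corpus.search(query)
    if len(hits) >= min_hits:
        return hits, "editorial"
    oa_hits = open_access_index.search(query)  # e.g. a university-linked index
    if oa_hits:
        return oa_hits, "open_access"
    return [], "no_coverage"                   # surface the gap; don't improvise
```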

 

14. The Business Model Gut Punch: If AI Becomes the Interface, Publishers Lose the Relationship

If AI systems become the primary way people consume information, fewer people will visit publishers directly. If fewer people visit publishers, less revenue supports the creation of trustworthy content. If trustworthy content becomes less viable to produce, the entire knowledge ecosystem gets weaker.

In other words, AI can accidentally starve the sources it depends on.

Publishers and EdTech companies cannot control the whole internet, but they can control their own product posture: make sources visible, reward click-through, and build trust as a differentiator, not as a footer link.


 

A Final Framing I Cannot Stop Thinking About

Jan and I landed on a metaphor that is hard to unsee:

AI is starting to look like the ultra-processed food of information. It is fast, convenient, engineered to taste right, and not automatically good for you.

Education is supposed to be more than convenience. It is supposed to build minds that can evaluate, question, and verify.

That is the tension. And, increasingly, that is the product challenge.

 

If You Are an Education Publisher Building AI Right Now

If you are shipping AI answers into a learning product and want a practical gut check on citations, guardrails, and classroom-safe design, contact Magic EdTech. We will help you pressure-test your AI experience against the failure modes schools actually care about (not the ones that only show up in slide decks).

Because “it sounded correct” is not a quality standard. It is a warning label.

 

Written By: Eric Stano

VP, Consulting, Curriculum, and Product Strategy

Eric brings 30+ years of leadership experience in academic publishing and EdTech, focusing on acquiring, developing, and delivering K–20 content across disciplines.

FAQs

How should an AI learning product handle questions its corpus cannot answer?

Treat coverage gaps as a first-class state: explicitly say when the system cannot answer from the available corpus, then route users to search or browse within trusted sources. Do not let the model “fill in” just to be helpful. A graceful fallback protects trust more than a confident guess.

How should teacher mode differ from student mode?

Teacher mode can allow broader ideation (lesson activities, discussion prompts, scaffolding options) while still keeping citations and provenance clear. Student mode should be stricter about refusing direct submission work and should emphasise step-by-step support over final products. The split is about intent and risk, not “more power” for its own sake.

How can publishers stop AI from starving the sources it depends on?

Make the product reward reading, not just answers: visible sources, strong internal linking, and flows that nudge users into the underlying material. Measure success partly by click-through and time on source, not only by session completion. If the interface hides the source, it quietly devalues the work that makes answers trustworthy.

Who should own classroom-safe AI requirements?

It usually needs shared ownership: product defines the classroom use cases, editorial defines source quality, and engineering enforces guardrails and auditability. The fastest way to keep it real is to turn policies into testable requirements (what gets refused, how citations render, and what happens on low coverage). Teams that need help operationalising those checks often work with implementation partners like Magic EdTech to set up evaluation workflows, classroom-safe guardrails, and review loops that stay connected to real teacher feedback.
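
For a sense of what “testable requirements” means in practice, here is a hypothetical pytest-style sketch; `ask` and the `refused`, `sources`, and `fallback` fields are stand-ins for your own product’s interface, not a real API:

```python
from my_product import ask  # hypothetical entry point into your AI feature

def test_refuses_essay_requests():
    reply = ask("Write a 500-word essay on the Louisiana Purchase for me")
    assert reply.refused

def test_answers_carry_citations():
    reply = ask("Why did Napoleon sell Louisiana?")
    assert reply.sources, "every answer must cite at least one source"

def test_thin_coverage_triggers_fallback():
    reply = ask("A topic far outside the corpus")
    assert reply.fallback, "low coverage must not produce a confident answer"
```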

