E3AI Deep Dive: Experience
What the AI Is Allowed to Know
The three-tier corpus, what makes each tier trustworthy, and why the refusal behavior is the feature that makes confidence meaningful.
Hallucination is not a bug you can patch with a better prompt. It is a structural property of how frontier models work. When a model doesn’t know the answer to something specific, it generates a plausible one. The plausibility is the problem. The output reads confident. The team ships it. A week later somebody notices that the cited regulation says the opposite of what the AI said it said, or the quoted statistic came from no published source, or the historical anecdote about the company’s founding is fabricated.
Every team that has used AI for real work for more than a few months has a story like this. The fix most teams reach for is the wrong one: better prompts, more specific instructions, telling the model to “only use verified sources.” None of that changes the underlying geometry. The Experience layer changes the geometry. This is the layer the manifesto introduces but doesn’t have room to unpack.
The three-tier corpus
Every E3AI instance is built on a retrieval system that consults its sources in order, from most authoritative to least.
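As a rough sketch, the tier-ordered lookup might look like the following. Everything here is illustrative: the `Passage` shape, the `corpus.search` interface, and the 0.75 confidence threshold are assumptions for the sake of the example, not E3AI's actual internals.

```python
from dataclasses import dataclass

@dataclass
class Passage:
    text: str
    source: str   # e.g. "2024 proposal, pricing section"
    tier: int     # 1 is most authoritative, 3 least
    score: float  # retrieval confidence, 0..1

def answer_with_tiers(question, corpus, min_score=0.75):
    """Consult tiers in order of authority; stop at the first confident hit."""
    for tier in (1, 2, 3):
        hits = [p for p in corpus.search(question, tier=tier) if p.score >= min_score]
        if hits:
            best = max(hits, key=lambda p: p.score)
            return {"answer": best.text, "source": best.source, "tier": tier}
    # No tier cleared the bar: refuse rather than generate a plausible answer.
    return {"answer": None, "refusal": "No source in the corpus supports an answer."}
```

The ordering is the point: a lower tier is only consulted when every tier above it comes up empty, and a miss across all three produces a refusal, not a guess.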
Building Tier 1
The most common mistake in building Tier 1 is including the wrong things. What makes a good Tier 1 source is specificity and finality: something that reflects an actual decision or an actual piece of work, not a draft, not a summary, not meeting notes with TBDs.
The first step of every Experience engagement is a corpus audit: what you actually have, what version, who approved it, and when. The gap between what the team thinks is documented and what is actually usable is almost always larger than expected.
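A minimal sketch of what an audit record might capture, with field names that are illustrative rather than prescribed; the substance is that nothing enters Tier 1 without version, approval, and finality metadata.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Tier1Candidate:
    title: str
    version: str
    approved_by: str | None  # who signed off, if anyone
    approved_on: date | None
    is_final: bool           # drafts, summaries, and notes with TBDs fail here

def audit(candidates):
    """Split candidates into usable Tier 1 sources and gaps to close."""
    usable, gaps = [], []
    for c in candidates:
        (usable if c.is_final and c.approved_by else gaps).append(c)
    return usable, gaps
```

The `gaps` list is usually the real deliverable: it is the concrete measure of the distance between what the team thinks is documented and what is actually usable.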
Why Tier 2 needs provenance
Telling a model to “use this book as a reference” and uploading the actual text into the retrieval system are not the same thing.
When the model knows a book exists and trusts its own training data’s representation of it, it will answer as if it has read the book. Sometimes that representation is accurate. Sometimes it drifts. The model has no way to know which. It produces an answer that sounds like the book and may not be.
When the actual text is in the retrieval system, the model retrieves a passage, quotes it, and cites the location. The reader can verify it. The verification is the point.
Tier 2 with provenance turns “this sounds like what that book says” into “this is what page 147 says, and you can check it.”
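To make the difference concrete, here is a hypothetical sketch. The `BookChunk` shape, the placeholder title, and the page-147 passage are illustrative, not a real index or a real source.

```python
from dataclasses import dataclass

@dataclass
class BookChunk:
    text: str   # the actual passage, as ingested
    title: str
    page: int

def cite(chunk):
    """Render a verifiable citation instead of a from-memory paraphrase."""
    return f'"{chunk.text}" ({chunk.title}, p. {chunk.page})'

# With the text indexed, the answer points at a page the reader can check.
# Without it, the model answers from its training-data memory of the book.
print(cite(BookChunk(text="...", title="The Reference Work", page=147)))
```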
Refusal is the feature
When none of the three tiers can answer a question with confidence, the system says so. It does not generate a plausible-sounding answer.
What a refusal looks like in practice
“I don’t have a source for this in your corpus. To answer accurately, I’d need your Q2 pricing review document, or a specific source you’d like me to reference. Here’s what I can confirm from what I do have: [what the system actually knows].”
This is not a failure. This is the system working correctly.
The confident wrong answer is the failure. The refusal is what makes confidence meaningful. When the E3AI-bound system does answer without hedging, you can trust it in a way you cannot trust an unbound model, because you have seen it refuse. The model that never says “I don’t know” has given you no reason to believe it when it says “I do.”
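Sketched in code, assembling that refusal might look like this. The function name, the message template, and the `partial_hits` argument are assumptions for illustration, mirroring the example above: name what is missing, then state what the corpus does support.

```python
def build_refusal(needed, partial_hits):
    """Name the missing source, then state what the corpus can confirm."""
    confirmed = "; ".join(partial_hits) if partial_hits else "nothing directly on point"
    return (
        "I don't have a source for this in your corpus. "
        f"To answer accurately, I'd need {needed}. "
        f"Here's what I can confirm from what I do have: {confirmed}."
    )

# Illustrative inputs only:
print(build_refusal("your Q2 pricing review document",
                    ["the Q1 pricing review, approved in March"]))
```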
Regulated industries understand this immediately. An AI that produces confident outputs with no verifiable sourcing is a liability. An AI that distinguishes what it knows from its own sources versus what it's inferring, and refuses to pass the latter off as the former, is an asset.
What this looks like with a real question
Same question. One answer is plausible. One answer is correct and citable.
The question
“What are our standard payment terms for a new client engagement?”
An unbound model might answer, say, "Net 30, with a deposit on signing," and sound entirely sure of it. The bound system gives that kind of answer only when it can point at the clause, for instance the payment terms section of the current engagement template, and refuses otherwise. The person reading both answers cannot tell the difference from the confidence level. The sourcing is what distinguishes them.
The corpus grows with you
The three-tier structure is not set-and-forget. The corpus is a living system.
Tier 1 grows every time a significant piece of work completes. A new proposal that won gets added. A revised SOP replaces the previous version. A client transcript from a quarterly review goes in. The indexing is part of the workflow, not a separate project: when the work gets filed, the relevant pieces get added to the corpus.
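A filing hook is one way to make indexing part of the workflow rather than a separate project. The `is_final` and `replaces` fields and the `corpus` methods below are assumed names for the sketch, not a real API.

```python
def on_work_filed(doc, corpus):
    """Runs whenever a finished piece of work is filed."""
    if not doc.is_final:
        return                                # drafts never enter Tier 1
    if doc.replaces:                          # e.g. a revised SOP
        corpus.remove_superseded(doc.replaces)
    corpus.index_tier1(doc)                   # won proposals, transcripts, SOPs
```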
Tier 2 changes more slowly. New sources get added when the team decides they trust them enough to cite them. Old sources get removed when the thinking shifts. The quarterly corpus review covers both tiers: what’s in, what should be added, what’s outdated.
What the team notices, usually by the third or fourth month: the corpus has crossed a threshold where the AI starts surfacing things the team forgot it had. A decision from two years ago that’s relevant to today’s client. A section of a proposal that maps exactly to the current situation. The corpus becomes a retrieval system for the team’s own accumulated knowledge, not just a constraint on what the AI is allowed to say.
If your team has shipped something the model made up
The Experience layer is the fix, not a better prompt. The corpus audit is where it starts: what you actually have, what version, who approved it. From there, indexing is a few days of structured ingestion work. Closing the gap between what the team thinks is documented and what is actually usable almost always surfaces things worth knowing.
Sketch your Tier 1 corpus on the discovery call
We’ll walk through what you already have, what version it’s in, and which three to five documents would make the highest-impact Tier 1 sources to start.
Or book the discovery call directly.
