What the AI Is Allowed to Know

E3AI Deep Dive: Experience

The three-tier corpus, what makes each tier trustworthy, and why the refusal behavior is the feature that makes confidence meaningful.

Hallucination is not a bug you can patch with a better prompt. It is a structural property of how frontier models work. When a model doesn’t know the answer to something specific, it generates a plausible one. The plausibility is the problem. The output reads confident. The team ships it. A week later somebody notices that the cited regulation says the opposite of what the AI said it said, or the quoted statistic came from no published source, or the historical anecdote about the company’s founding is fabricated.

Every team that has used AI for real work for more than a few months has a story like this. The fix most teams reach for is the wrong one: better prompts, more specific instructions, telling the model to “only use verified sources.” None of that changes the underlying geometry. The Experience layer changes the geometry. This is the layer the manifesto introduces but doesn’t have room to unpack.

The three-tier corpus

Every E3AI instance is built with a retrieval system that sources answers in order, from most authoritative to least.

Tier 1: the client’s own corpus

This is the only source that cannot hallucinate about your business, because it is your business. SOPs. Transcripts of past client work, proposals, and delivery documents. Internal decisions and how they were made. Prior writing in your voice. The actual email the account executive sent last quarter, not a reconstruction of what they might have sent.

Tier 1 gets indexed first. Every output that can be sourced to Tier 1 material is sourced there.

Tier 2: vetted secondary material

The books and frameworks the client actually trusts. The analysts they read. The mentors whose thinking has shaped how they run the business. These go into the retrieval system with explicit provenance, meaning the actual text is in the system, not just a reference to it.

Provenance matters: when the actual text is indexed, citations can be verified. The AI quotes directly from what’s in the system.

Tier 3: the frontier model’s training data

Everything the model already knows from pre-training. This stays available as a fallback. Common knowledge, general context, and publicly established facts belong here.

Specific claims about your business, your clients, your decisions, your numbers belong to Tier 1 or they don’t belong in the output at all.
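The tier ordering can be sketched as a simple fallback loop. This is an illustrative Python sketch, not E3AI's actual implementation: the names, the tiers-as-integers convention, and the 0.75 threshold are all assumptions.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Passage:
    tier: int     # 1 = client corpus, 2 = vetted secondary, 3 = model prior
    source: str   # where the passage came from
    text: str
    score: float  # retrieval confidence, 0.0-1.0

def pick_source(passages: list[Passage], threshold: float = 0.75) -> Optional[Passage]:
    """Walk the tiers from most authoritative to least; refuse if none qualifies."""
    for tier in (1, 2, 3):
        hits = [p for p in passages if p.tier == tier and p.score >= threshold]
        if hits:
            return max(hits, key=lambda p: p.score)
    return None  # no tier clears the bar: refuse rather than guess
```

The point of the loop is the ordering: a strong Tier 3 match never outranks an adequate Tier 1 match, and an empty result is a legitimate outcome, not an error.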

Building Tier 1

The most common mistake is including the wrong things. What makes a good Tier 1 source is specificity and finality: something that reflects an actual decision or an actual piece of work, not a draft, not a summary, not meeting notes with TBDs.

Good Tier 1 sources
  • The MSA template legal approved and uses
  • The proposal that won last quarter
  • Transcript of the client onboarding call
  • SOPs your ops team actually follows
  • The pricing model with version number and approval date

Bad Tier 1 sources
  • The brainstorming doc from two years ago nobody resolved
  • The first draft of an SOP that got superseded
  • Slack threads where someone said “let’s discuss later”
  • Meeting notes that say “to be determined”
  • Summaries of documents rather than the documents

The first step of every Experience engagement is a corpus audit: what you actually have, what version, who approved it, and when. The gap between what the team thinks is documented and what is actually usable is almost always larger than expected.
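A corpus audit is easier to run against a checklist than against memory. One way to capture each candidate document is a record like the following; the field names and the qualification rule are illustrative assumptions, not a prescribed schema.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class AuditEntry:
    document: str
    version: Optional[str]        # None means nobody knows which version is current
    approved_by: Optional[str]    # None means no one signed off on it
    approved_on: Optional[date]

    def usable_as_tier1(self) -> bool:
        """Final, versioned, approved work qualifies; drafts and orphans don't."""
        return all(x is not None for x in (self.version, self.approved_by, self.approved_on))
```

Running every document through a gate like this is usually where the gap between "we have that documented" and "that is usable" becomes visible.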

Why Tier 2 needs provenance

Telling a model to “use this book as a reference” and uploading the actual text into the retrieval system are not the same thing.

When the model knows a book exists and trusts its own training data’s representation of it, it will answer as if it has read the book. Sometimes that representation is accurate. Sometimes it drifts. The model has no way to know which. It produces an answer that sounds like the book and may not be.

When the actual text is in the retrieval system, the model retrieves a passage, quotes it, and cites the location. The reader can verify it. The verification is the point.

Tier 2 with provenance turns “this sounds like what that book says” into “this is what page 147 says, and you can check it.”
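In practice this means every indexed Tier 2 passage carries its location, so a citation can be rebuilt mechanically instead of recalled. A minimal sketch, with illustrative names:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Tier2Chunk:
    work: str   # title of the indexed source
    page: int   # location a reader can check
    text: str   # the verbatim passage, not a paraphrase

def cite(chunk: Tier2Chunk) -> str:
    """A quote is only as trustworthy as the location attached to it."""
    return f'"{chunk.text}" ({chunk.work}, p. {chunk.page})'
```

Because the quote and the page travel together from indexing to output, "check page 147" is a one-step verification rather than an archaeology project.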

Refusal is the feature

When none of the three tiers can answer a question with confidence, the system says so. It does not generate a plausible-sounding answer.

What a refusal looks like in practice

“I don’t have a source for this in your corpus. To answer accurately, I’d need your Q2 pricing review document, or a specific source you’d like me to reference. Here’s what I can confirm from what I do have: [what the system actually knows].”

This is not a failure. This is the system working correctly.
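One way to see why this is correct behavior: a refusal is a structured response, not an error path. The sketch below is a hedged illustration; the template wording, threshold, and function shape are assumptions.

```python
from typing import Optional

def respond(best_score: Optional[float], confirmed: list[str],
            needed: str, threshold: float = 0.75) -> str:
    """Answer only above the confidence bar; otherwise refuse,
    name the gap, and surface what the corpus does support."""
    if best_score is not None and best_score >= threshold:
        return "[sourced answer with citation]"  # the normal path, elided here
    parts = ["I don't have a source for this in your corpus.",
             f"To answer accurately, I'd need {needed}."]
    if confirmed:
        parts.append("Here's what I can confirm from what I do have: "
                     + "; ".join(confirmed) + ".")
    return " ".join(parts)
```

Note that the refusal branch still does work: it names the missing source and returns whatever is actually confirmable, rather than returning nothing.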

The confident wrong answer is the failure. The refusal is what makes confidence meaningful. When the E3AI-bound system does answer without hedging, you can trust it in a way you cannot trust an unbound model, because you have seen it refuse. The model that never says “I don’t know” has given you no reason to believe it when it says “I do.”

Regulated industries understand this immediately. An AI that produces confident outputs with no verifiable sourcing is a liability. An AI that distinguishes what it knows from its own sources from what it is merely inferring, and refuses to pass off the latter as the former, is an asset.

What this looks like with a real question

Same question. One answer is plausible. One answer is correct and citable.

The question

“What are our standard payment terms for a new client engagement?”

Without Experience layer

“Standard payment terms are typically Net 30, with invoicing at project milestones.”

Sounds right. May not match your actual MSA. Account executive uses it on a client call. Discrepancy surfaces at contract signature.

With Tier 1 loaded

“Per your Master Service Agreement template (v2.3, approved January 2026): Net 45 from invoice date, with a 50% deposit required on engagements above $15,000. A 1.5% monthly late fee applies after 60 days. This is in Section 4.2.”

Correct, citable, verifiable.

The person reading both answers cannot tell the difference from the confidence level. The sourcing is what distinguishes them.

The corpus grows with you

The three-tier structure is not set-and-forget. The corpus is a living system.

Tier 1 grows every time a significant piece of work completes. A new proposal that won gets added. A revised SOP replaces the previous version. A client transcript from a quarterly review goes in. The indexing is part of the workflow, not a separate project: when the work gets filed, the relevant pieces get added to the corpus.

Tier 2 changes more slowly. New sources get added when the team decides they trust them enough to cite them. Old sources get removed when the thinking shifts. The quarterly corpus review covers both tiers: what’s in, what should be added, what’s outdated.

What the team notices, usually by the third or fourth month: the corpus has crossed a threshold where the AI starts surfacing things the team forgot it had. A decision from two years ago that’s relevant to today’s client. A section of a proposal that maps exactly to the current situation. The corpus becomes a retrieval system for the team’s own accumulated knowledge, not just a constraint on what the AI is allowed to say.

If your team has shipped something the model made up

The Experience layer is the fix, not a better prompt. The corpus audit is where it starts: what you actually have, what version, who approved it. From there, indexing is a few days of structured ingestion work. The gap between what the team thinks is documented and what is usable usually surfaces things worth knowing.

Sketch your Tier 1 corpus on the discovery call

We’ll walk through what you already have, what version it’s in, and which three to five documents would make the highest-impact Tier 1 sources to start.

Or book the discovery call directly.
