What is E3AI?
The constraint layer that makes frontier AI sound like you, refuse what you would refuse, and only say what your sources can back.
You’re already using AI every day. The output looks competent. Generic, though. Slightly off. Once in a while it invents things that aren’t true. You know the feeling. The response is structurally fine but it isn’t yours. It treats your client the same way it would treat anyone else’s client. It draws on knowledge nobody on your team actually trusts. You ship it anyway because the alternative is writing the thing from scratch, and you don’t have time.
That gap is the problem E3AI solves. E3AI is Ethics, Empathy, and Experience bounded AI: three constraints that wrap a frontier model so the output is yours.
The failure mode behind the discomfort
When you point a frontier model at a real piece of business writing, three things go wrong at once. The voice drifts to a default register that sounds like a consultancy white paper. The output respects no values except the ones the model’s safety team decided everyone should share. And the content draws on training data that includes your competitors, your industry’s worst clichés, and a lot of things that simply aren’t true about your specific business.
You can patch around it. A long custom-instructions block. A short style guide pasted into every chat. A few example outputs. These help. They do not solve the geometric problem, which is that you’re asking a single model trained on the internet to produce work that should reflect a specific person, a specific set of values, and a specific body of knowledge.
Think of it as a geometry problem. The model is doing its job. Nothing is constraining its output toward the answer you actually want.
E3AI in one sentence
E3AI stands for Ethics, Empathy, and Experience bounded AI. It’s the constraint layer that wraps a frontier model so the output sounds like you, refuses what you would refuse, and only says what your sources can back. Three boundaries, all coequal, all enforced at runtime. The model receives a prompt that has been geometrically bounded by all three before it ever starts generating.
Each boundary is a real artifact: a file the AI reads before it speaks.
Why three, and why these three
People ask why not two boundaries, or five. The answer is that these three are the ones that fail independently. Each one can be done perfectly while the other two are broken, and the output will still feel wrong.
Great values won’t save you from shipping a hallucination. A perfect knowledge base doesn’t stop you from writing a tone-deaf email to a partner who needed a different register. Nailing the audience won’t keep you from crossing an ethical line nobody wrote down. Two boundaries leave a gap somebody falls through. Four becomes a checklist nobody can hold in their head.
Three is the minimum count where every common failure mode you can name maps to one of the boundaries. Ethics catches “we shouldn’t have said that.” Empathy catches “we said it to the wrong person in the wrong voice.” Experience catches “we said something that isn’t true.” Anything else, on inspection, turns out to be a special case of one of those three.
The other thing about three is that it’s small enough to operationalize. Each E gets a folder of files. The folders get reviewed quarterly, and a review takes an afternoon. A four-dimensional framework is a slide. A three-dimensional framework is a working system.
Ethics: the won’t-do boundary
The values the AI is required to respect, written down where a model can read them
Ethics is the easy one to get wrong because most people treat it as a slogan. “We care about responsible AI” goes on the homepage and then nothing changes about how the AI actually behaves.
E3AI treats Ethics as a charter. A real file, written in the client’s voice, that names the values the AI is required to respect. The prohibitions are listed. The escalation rules are concrete. When the AI is asked to do something that crosses one of those lines, it stops and asks for a human. When it’s asked to do something with high cost of mistake, it checks before it acts.
The charter sits at the start of every prompt. Hooks in the runtime block the operations that the charter says are off-limits. The audit log records every time an ethical flag triggered, who reviewed it, and what the resolution was. Six months in, the client can answer the question “show me where the AI almost did something wrong and how we caught it” with a real list, not a hand wave.
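To make that concrete, here is a rough sketch of what a single audit-log record could hold. The field names are illustrative assumptions, not the schema an engagement actually delivers.

```python
# Hypothetical sketch of one audit-log record for an ethical flag.
# Field names are illustrative assumptions, not the delivered schema.
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class EthicsFlag:
    triggered_at: datetime   # when the hook fired
    rule: str                # which charter line was implicated
    operation: str           # what the AI was about to do
    reviewer: str            # the human who looked at it
    resolution: str          # "blocked", "approved", or "rewritten"

flag = EthicsFlag(
    triggered_at=datetime.now(timezone.utc),
    rule="no-promises-beyond-contract",
    operation="external_email",
    reviewer="j.doe",
    resolution="rewritten",
)
record = asdict(flag)  # ready to write to whatever log store the client uses
```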
There’s a second thing happening here. A values charter authored in the client’s voice forces the client to actually decide what they believe. Most teams have never written this down. The act of writing it is itself useful. The AI becomes the reason the team finally has the conversation.
A practical illustration. One of the standard charter rules is the cost-of-mistake matrix: low cost and easy to verify can run autonomously, low cost and hard to verify gets spot-checked, high cost and easy to verify gets reviewed every time, high cost and hard to verify stays human-only. That matrix lives in a file. The hook reads it. When the AI is about to send an external email, the hook checks the matrix, sees that external email is in the high-cost row, and pauses for a human. The check costs nothing when the matrix is right. It saves you from the bad day when it isn’t.
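A rough sketch of that hook, assuming a simple JSON matrix file and placeholder names (an illustration of the idea, not the production hook):

```python
# Hypothetical sketch of a pre-action hook reading the cost-of-mistake matrix.
# The file path, JSON schema, and function names are illustrative assumptions.
import json

# ethics/cost_of_mistake_matrix.json might map an operation to its row, e.g.
# {"external_email": {"cost": "high", "verify": "easy"}, ...}
with open("ethics/cost_of_mistake_matrix.json") as f:
    MATRIX = json.load(f)

POLICY = {
    ("low", "easy"): "autonomous",      # run without review
    ("low", "hard"): "spot_check",      # sample-review after the fact
    ("high", "easy"): "review_always",  # a human approves every time
    ("high", "hard"): "human_only",     # the AI never performs this directly
}

def gate(operation: str) -> str:
    """Return the review policy for an operation; anything unknown escalates."""
    entry = MATRIX.get(operation)
    if entry is None:
        return "human_only"  # not in the matrix, so pause for a human
    return POLICY[(entry["cost"], entry["verify"])]

# The external-email case from the paragraph above:
# gate("external_email") -> "review_always", so the send pauses for review.
```

The useful property is that the policy is data, not code: editing the matrix file changes the behavior without touching the hook.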
For regulated industries, the charter is mapped to the NIST AI Risk Management Framework or ISO 42001 controls during the Enterprise engagement, so the audit log answers the question the way risk and compliance want it answered. That mapping is part of the deliverable, not an add-on conversation.
What you get:
A values charter the team authored, runtime hooks that read it, and an audit log of every ethical flag that fired and how it was resolved.
Go deeper on Ethics
If your situation is regulated, audited, or otherwise high-stakes on the governance side, the Ethics layer is where to look next. The charter file, the cost-of-mistake matrix, the audit log schema, and how the engagement maps to NIST AI RMF and ISO 42001 controls.
Empathy: the who-it-serves boundary
A map for every audience the AI’s output will reach
Every output goes to someone. A client. A colleague. A partner you co-build with. A supplier you’re negotiating with. A regulator who sees only the finished product. Generic AI talks to all of them in the same voice, with the same assumptions, at the same register. The failure is sitting in plain sight.
E3AI builds a map for each one. Not a persona deck. A working file the AI reads before it drafts.
For clients, the map captures their actual jobs, the pains they’re hiring you to fix, and the gains they care about. For colleagues, it captures how your team prefers to hand off work, what tone you use internally, and which contexts call for which voice. With partners, the file describes co-branding rules, what you do and don’t disclose, and where collaboration ends and confidentiality begins. With suppliers, it covers procurement voice, payment terms, and the difference between a vendor email and a partner email. When regulators are in scope, the file captures the expected tone and the specific things you do and don’t say in front of compliance audiences.
Five maps. Sometimes more, depending on the business. The AI detects which audience the current piece of work is aimed at and loads the right map. The result is that the audience matters by default, and the assumptions about that audience are written down somewhere a model can find them.
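As a sketch of the loading step, assuming a simple keyword detector and one map file per audience (a real deployment would detect the audience with richer signals):

```python
# Hypothetical sketch: pick the empathy map whose audience matches the task.
# Keyword detection and the one-file-per-audience layout are assumptions.
from pathlib import Path

AUDIENCES = ["client", "colleague", "partner", "supplier", "regulator"]

def detect_audience(task: str) -> str:
    """Naive keyword match; a real system would use richer signals."""
    text = task.lower()
    for audience in AUDIENCES:
        if audience in text:
            return audience
    return "client"  # default to the most common audience

def load_empathy_map(task: str) -> str:
    audience = detect_audience(task)
    return Path(f"empathy/{audience}.md").read_text()

# load_empathy_map("Draft the renewal note for the client")
# -> contents of empathy/client.md, attached to the prompt as context
```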
This is the dimension consultancies usually skip because it’s slow. There is no shortcut. The maps have to be built. The team has to sit down and answer real questions about who they serve and how. The AI is downstream of that work, not a substitute for it.
What you get:
Empathy maps for every audience your team serves, and an AI that loads the right one based on who the output is going to.
Go deeper on Empathy
The five audience maps, what each one actually contains, how the runtime detects which map to load, and why the build takes the time it takes. The audiences your AI talks to deserve more than a default register.
Read: The Audience Problem Most AI Deployments Never Solve →
Experience: the what-it-knows boundary
Three tiers of curated sources, with refusal in place of hallucination
This is where hallucination lives. When a frontier model doesn’t know the answer to something specific, it generates a plausible one. The plausibility is the problem. The output reads as confident. The team ships it. A week later somebody notices that the cited paper doesn’t exist, or the quoted regulation says the opposite of what the AI said it said, or the historical anecdote about the company’s founding is fabricated.
E3AI fixes this at the source, not at the output. The constraint is that the AI is allowed to draw from three tiers, in order.
Tier one is the client’s own corpus. Their SOPs, their transcripts of past client work, their decisions, their actual prior writing. This gets indexed first. Every output that can be backed by a Tier one source gets backed by a Tier one source.
Tier two is vetted secondary material. The books the client trusts, the analysts they read, the mentors whose frameworks they actually use. These are loaded into the same retrieval system with explicit provenance so the AI can cite them and the client can verify the citation.
Tier three is the public corpus the frontier model already has. This stays available as a fallback. The model can use it. It’s used last.
When none of those three tiers can answer the question with confidence, the AI says so. It refuses to confabulate. The output reads “I don’t have a source for this; here’s what I’d need” instead of a confident answer that turns out to be wrong. Refusal is the feature that makes the system trustworthy.
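In sketch form, the tier order and the refusal path could look like this. The record shape, the search interface, and the confidence threshold are assumptions for illustration, not the shipped system.

```python
# Hypothetical sketch of the tier order and the refusal path.
# The Hit shape, the search interface, and the threshold are assumptions.
from dataclasses import dataclass

@dataclass
class Hit:
    text: str     # the passage itself
    source: str   # provenance the output can cite
    score: float  # retrieval confidence, 0..1

CONFIDENCE_FLOOR = 0.7  # illustrative threshold

def answer_context(query: str, tier1, tier2, tier3) -> list[Hit] | None:
    """Return citable context from the highest tier that can answer,
    or None, which the caller turns into an explicit refusal."""
    for tier in (tier1, tier2, tier3):  # client corpus, vetted, public
        hits = [h for h in tier.search(query) if h.score >= CONFIDENCE_FLOOR]
        if hits:
            return hits
    return None  # caller responds: "I don't have a source for this"
```

The refusal is just the None branch: nothing cleared the bar, so the system says so instead of guessing.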
What you get:
Your own Tier one corpus indexed and queryable, retrieval returning your sources first, and refusal in place of confabulation when nothing qualifies.
Go deeper on Experience
The three-tier corpus, what goes in each tier and why, how the refusal behavior works, and why refusal is the feature that makes confidence meaningful. If your team has shipped something the model made up, this is the layer to build first.
How it actually binds at runtime
Here is what this looks like when somebody types a prompt and hits enter.
The wrap, in order:
- User writes a prompt.
- Ethics charter loads first as system context.
- Empathy map for the detected audience attaches next.
- Retrieval pulls in the relevant Tier one and Tier two sources.
- Writing guardrails and the voice DNA file go in last.
- Wrapped package reaches the frontier model.
The model produces an output. The output passes through the guardrails on the way out: AI tells get stripped, voice patterns get verified, citations get checked against the source list. If something looks off, the system flags it before the human ever sees it. The human reads what comes out and either ships it or sends it back.
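Put together, the wrap and the outbound checks reduce to a control flow like this sketch, with every component stubbed out as a plain value or callable. None of these names are the real interfaces.

```python
# Hypothetical end-to-end sketch of the runtime wrap. Every component is
# reduced to a plain value or callable so the control flow is visible;
# none of these names are the real E3AI interfaces.
from typing import Callable

def run(
    prompt: str,
    generate: Callable[[str, str], str],  # frontier model: (system, user) -> text
    charter: str,                         # ethics charter text
    empathy_map: str,                     # map for the detected audience
    sources: str,                         # Tier one / Tier two retrieval results
    guardrails: str,                      # writing rules and voice DNA
    passes_checks: Callable[[str], bool], # outbound voice + citation checks
) -> str:
    # Steps 2-5: wrap the prompt in the boundaries, in order.
    system = "\n\n".join([charter, empathy_map, sources, guardrails])
    # Step 6: the wrapped package reaches the frontier model.
    draft = generate(system, prompt)
    # Outbound guardrails: flag anything that fails before a human sees it.
    if not passes_checks(draft):
        return "[flagged for review]\n" + draft
    return draft
```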
That’s the geometry. The frontier model is still doing the heavy lifting. The constraints decide what shape the answer can take.
Here is a small concrete example. Suppose a marketing lead inside your firm types: “Write a thank-you email to the client who just signed the renewal.” A naked frontier model produces a four-paragraph note full of “exciting partnership” language and a closing that promises things the lead can’t actually deliver. The E3AI-bound version reads the Empathy map for this specific client (they prefer short, dry, peer-to-peer), reads the Ethics charter (no promises beyond what the contract says), and pulls from Tier one (the actual contract terms and the past two thank-yous you sent this client). Output: three sentences, in your voice, that reference the specific renewal scope and read like a person on your team wrote them. Same prompt. Different geometry. Different output.
The PAI substrate
E3AI runs on top of PAI, which stands for Personal AI Infrastructure. PAI was created and is maintained by Daniel Miessler as an open framework for individuals who want to run AI as a system rather than as a chat window. The skills, the hooks, the memory layer, the agent orchestration: all of that is PAI doing its job underneath.
E3AI is the productized constraint layer on top. PAI is the substrate. Credit goes where credit is due. The reason we can deliver an E3AI engagement in weeks rather than years is that PAI already exists and works.
What changes when you have this
A team running E3AI looks different from a team using ChatGPT well. Both teams move fast. The thing that separates them is what gets shipped: every output from the E3AI team sounds like the people who run the place and only says what those people can back.
The hours saved go up over time, not down. The first month, a team gains hours back on routine work. By the third month, the corpus is bigger and the maps are sharper, so the gains stack. By the sixth month, new use cases keep showing up that the team hadn’t thought to ask for. The encoded knowledge has crossed a threshold where it starts generating ideas instead of just executing them.
The mistake rate stays low. Hallucinations get caught at the source instead of at the customer. Ethics violations get blocked at the hook layer instead of at the lawsuit. The output sounds enough like a real person on the team that clients stop asking “did an AI write this?” and start treating it as ordinary work.
The deliverable:
AI doing real work in your voice, on your sources, inside your rules.
If this sounds like what you’ve been trying to build
E3AI is delivered as a service engagement. We sit down with you, audit what’s in your head and on your hard drive, encode it into the three boundary files, wire the runtime, and hand off a system you own. The first engagement takes a few weeks. You get hours back in the first week.
The first conversation isn’t a pitch. It’s a working session. By the end of that call, you have a real picture of what an E3AI instance for your business would look like. Whether or not we end up working together, you leave the call with something usable.
Or book the discovery call directly and we’ll start sketching yours.
