The Audience Problem Most AI Deployments Never Solve

E3AI Deep Dive: Empathy

Five audience maps, each one a working file the AI reads before it drafts. How the runtime detects who’s in the room. Why the build takes the time it takes.

Every output goes to someone specific. A client in their third year of the relationship who has stopped reading long emails. A new partner who is still deciding whether you’re worth the operational cost of working with you. A supplier your team has been friendly with for two years. A regulator who reads everything at arm’s length and flags anything that sounds like it’s trying too hard.

Generic AI talks to all of them in the same voice, with the same assumptions, at the same register. It doesn’t know there’s a difference. Nothing in the prompt told it. The output lands wrong, and the person who sent it usually knows something was off but can’t name what. That’s the failure mode the Empathy layer is built to prevent. This is the dimension the manifesto introduces but doesn’t have room to unpack.

Five maps, not one voice

E3AI builds one file per audience type. Not a persona deck. Not a “target customer” slide. A working document the AI reads before it drafts anything.

Five standard types in every engagement:

Clients. Who this person is in their role, not their demographics. The specific pain they hired you to fix. The gain they care about most. The voice they respond to. What they don’t want to hear, especially when things are going sideways. The register that works at the start of an engagement versus eighteen months in.
Colleagues. How your team hands off work. What tone travels well internally and what reads as passive-aggressive. The shorthand your team has developed. The difference between a Slack message and an email to the same person. When to copy the manager and when not to.
Partners. Co-branding rules. What you disclose and what you don’t in a joint communication. Which decisions require partner sign-off before the AI drafts them. The history of the relationship that the AI should know about before it writes anything that touches it.
Suppliers. Procurement voice. What changes when you’re the buyer. Payment terms, PO numbers, the difference between a friendly vendor relationship and a contract relationship, and which one governs how you write.
Regulators. Tone first: measured, accurate, no enthusiasm. What you say versus what you don’t say in front of a compliance audience. The language your legal team has approved for the contexts that matter. The difference between responding to an inquiry and volunteering information that wasn’t requested.

What each map actually contains

A map is not a description of an audience. It’s a set of fields the AI can act on. Every map has the same core structure:

  • Who this is (specific, not generic: “enterprise IT director evaluating a second-year contract renewal,” not “B2B buyer”)
  • What they hired you for, in jobs-to-be-done language
  • The pains they’re trying to resolve right now
  • The gains they care about that they may not say out loud
  • The voice that works with them: preferred length, preferred framing, examples of what landed well
  • What not to say in this context
  • Disclosure rules specific to this relationship
  • Escalation conditions: when a draft needs human review before it goes out
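
As a sketch, the field list above could be written down as a typed record. This is illustrative only: the field names mirror the list, but they are assumptions, not E3AI's actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class AudienceMap:
    """One working file per audience type. Field names are illustrative,
    mirroring the list above; they are not E3AI's actual schema."""
    who: str                        # specific, not generic
    hired_for: str                  # jobs-to-be-done language
    pains: list[str]                # what they're trying to resolve right now
    unstated_gains: list[str]       # gains they may not say out loud
    voice: dict[str, str]           # preferred length, framing, what landed well
    do_not_say: list[str]           # off-limits in this context
    disclosure_rules: list[str]     # what this relationship allows you to share
    escalation_conditions: list[str] = field(default_factory=list)  # drafts needing human review
```

The point of typing it out is that every field is something the AI can act on at draft time, not a description a human reads once.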

Some fields stay stable across months. The client’s core job-to-be-done doesn’t change. Others update constantly. The relationship history field adds a note every time something significant happens. The “what not to say right now” field changes when the relationship does.

That update cadence is part of the engagement. The maps are not set-and-forget artifacts. They’re living files. The team reviews them quarterly and patches them after anything significant changes.

How the system knows who’s in the room

The runtime does not guess. The user names the audience, or the workflow names it, and the system loads the right map.

In an interactive session, the user types “write a follow-up to the partner meeting yesterday” and the system detects “partner,” loads the partner map, and wraps the prompt with it before anything reaches the model. The user never sees the map load. They see the output, and it sounds right because the right assumptions were already in the context.

In a workflow, the audience is named in the workflow definition. “Supplier payment follow-up” always loads the supplier map. “Client onboarding email” always loads the client map for the specific client named in the trigger. Neither approach requires the AI to make a judgment call about who’s in the room. That call was already made and written down.

What this looks like in practice

Same instruction. Two very different outputs.

The instruction

“Write a follow-up to the meeting we had yesterday.”

Without Empathy maps:
  Same two-paragraph output for every audience. Warm relationship language. Vague next-step sentence. “It was great to connect and I’m excited about the path forward.” The model has no way to know a supplier email should sound different from a client email. It doesn’t know the difference exists.

With Empathy maps loaded:
  • Client (enterprise IT, prefers ROI framing): four bullet points. Decisions made. Open items with owners. A sentence naming the ROI implication of the agreed timeline. No “great to connect.”
  • Supplier (procurement map: transactional, reference the PO when available): three sentences. Reference to the meeting. Agreed deliverable and date. PO number to attach to the invoice.

Same prompt. Same underlying model. The maps decide what shape the output takes.

The build takes the time it takes

This is the part most teams want to skip, and it’s the only part that makes everything else work.

Building the five maps takes real conversation time. Not “what is your target audience.” Real questions: what does this client do when they get an email that misses the mark. What has gotten you in trouble with this partner before. What does your team’s internal shorthand actually mean. What’s the fastest way to lose credibility with a supplier in your industry.

The answers to those questions don’t exist in a database. They exist in the heads of the people who do the work every day. Getting them onto the page takes a structured interview process, usually a week or two spread across the people who know the relationships.

The AI is downstream of that work. It cannot do the work for you. A model asked to generate audience maps from scratch produces plausible-sounding content that reflects the internet’s idea of your audience, not your team’s actual knowledge of the specific people they work with. Once the maps are built and wired, the system stops guessing. The slow work at the front is what makes the fast work downstream actually fast.

If the audiences you work with deserve more than a default register

E3AI is delivered as a service engagement. We sit down with your team, interview the people who know the relationships, and build the five maps. The process surfaces things your team knows but has never written down. By the end, you have working files your AI can read, and a team that has had the conversation about who they actually serve and how.

Sketch your audience maps on the discovery call

We’ll identify which of your five audience relationships would benefit most from an Empathy map and sketch what your top two would actually contain.

Book a Discovery Call
