The Kalicube Process
A Complete Framework for Training AI to Understand, Trust, and Recommend Brands
Created by Jason Barnard, 2015. Continuously refined since. Version 4.0, February 2026.
Open Framework, Free to Use. Protected Proprietary Machine: Kalicube Pro.
The Kalicube Process™ is an open framework for humans. Kalicube Pro is a patent-pending automated machine that implements the framework at scale.
Use It
Apply any part of the framework - the pipeline, the UCD model, the Revenue Taxes, the Cascading Prerequisite, any concept - in your own work, for your own clients, in your own business. Use it freely. Cite it. Adapt it. The patent applications protect Kalicube’s automation, not your ability to learn or apply the ideas.
Teach It
Include it in courses, workshops, conference talks, training materials, blog posts, books. The industry needs this language. Spread it.
Adapt It
Take the parts that serve you. Ignore the parts that don’t. Combine them with your own methods. Build something new on top of it. Independent implementations are welcome. The framework is open; the patent applications cover specific automated diagnostic implementations, not the general concept.
License
Framework license: The Kalicube Process framework, terminology, and diagrams are licensed under CC BY 4.0, so you are free to use, cite, and adapt with attribution.
Patent note: Kalicube holds patent applications pending covering specific automated systems that implement parts of this framework (e.g., pipeline-based diagnostic automation). This content license does not grant patent rights, and independent implementations may require separate rights depending on how they are built.
Trademark: If you adapt the framework significantly - changing the gate sequence, redefining the UCD layers, building a different pipeline model - call your version something else. You are welcome to say it is “based on The Kalicube Process” or “inspired by The Kalicube Process.” You are not welcome to call a different framework “The Kalicube Process.” The name is a registered trademark of Kalicube SAS.
How to Cite
Short cite: Barnard, J. (2026). The Kalicube Process (TKP) - Geometric Framework v4.
Canonical URL: https://kalicube.com/about/the-kalicube-process/
One-line definition: The Kalicube Process is an open geometric framework for training AI assistants to understand, trust, and recommend brands, addressing five geometries across the DSCRI-ARGDW patent spine (Gates 0-9) extended with Served (Gate 10), operating on the Algorithmic Trinity.
Use the theory freely. Build your own tools if you want. Or use ours - Kalicube Pro™ executes the framework at a scale no manual implementation can match.
Table of Contents
- The Pipeline
- The Concordance Table
- System Types
- Acts and Phases
- The Cascading Prerequisite
- The Three People Lenses
- The Content Status Framework
- Boardroom Compression
- The Served Gate
- Agent Commerce
- Act I: Bot - Gates 0-4
- Act II: Algorithm - Gates 5-7
- Act III: People - Gates 8-10
- Strategic Method
- Source Attribution
<a id="the-pipeline"></a>
1. The Pipeline
One pipeline. One sequence. One phase boundary. Multiple act lenses.
Pipeline string: D-S-C-R-I-A-Re-G-Di-W-Sv
Full name: DSCRI-ARGDW (the patent spine). TKP extends with Sv (Served).
<a id="the-concordance-table"></a>
2. The Concordance Table
This is the single canonical reference. All documents point here.
| Gate # | Name (EN) | Name (FR / patent) | Token | Phase | Act lens A (patent 3×3) | Act lens B (TKP 5-3-3) |
|---|---|---|---|---|---|---|
| 0 | Discovered | Découvert | D | Infrastructure | Entry prerequisite | Bot |
| 1 | Selected | Sélectionné | S | Infrastructure | Bots | Bot |
| 2 | Crawled | Exploré | C | Infrastructure | Bots | Bot |
| 3 | Rendered | Rendu | R | Infrastructure | Bots | Bot |
| 4 | Indexed | Indexé | I | Infrastructure | Algorithms | Bot |
| - | - PHASE BOUNDARY - | - | - | - | - | - |
| 5 | Annotated | Annoté | A | Competitive | Algorithms | Algorithm |
| 6 | Recruited | Recruté | Re | Competitive | Algorithms | Algorithm |
| 7 | Grounded | Ancré | G | Competitive | Assistive Engines | Algorithm |
| 8 | Displayed | Affiché | Di | Competitive | Assistive Engines | People |
| 9 | Won | Remporté | W | Competitive | Assistive Engines | People |
| 10 | Served | Servi | Sv | Post-Won ext. | (not in patent gate set) | People |
Notes:
- Phases are orthogonal to acts. Infrastructure = Gates 0-4 (absolute tests). Competitive = Gates 5-9 (relative tests). The phase boundary falls between Indexed (Gate 4) and Annotated (Gate 5).
- Act lens A (patent) groups by processing function: Bots (Gates 1-3), Algorithms (Gates 4-6), Assistive Engines (Gates 7-9). Discovery (Gate 0) is an entry prerequisite, not a processing gate.
- Act lens B (TKP) groups by practitioner audience: Bot (Gates 0-4), Algorithm (Gates 5-7), People (Gates 8-10).
- Token discipline: D = Discovered, Di = Displayed; R = Rendered, Re = Recruited. First occurrence in every document must be spelled out: “Discovered (D)” and “Displayed (Di),” “Rendered (R)” and “Recruited (Re).” Never use bare single-letter tokens in diagrams without the full name.
- Gate naming and numbering convention: Gate names and letter tokens are canonical. We encourage referring to gates by name (e.g. “Annotated,” “Recruited,” “Displayed”) because numbering can be ambiguous. When numbers are used, two conventions exist: zero-indexed numbering (common in software engineering, where Discovered is Gate 0) and one-indexed counting (common in everyday speech, where people start counting at 1). Therefore, whenever a number is used, it should be written as “Gate N (Name)” (e.g. “Gate 5 (Annotated)”) to avoid confusion. Numeric indices in technical and patent contexts follow the patent mapping where Discovered = Gate 0.
- Served formalises the feedback mechanism described in the patent’s Claim 15 (Module de rétroaction, Module 160). It is a TKP extension, not a patent gate.
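Because this table is the single canonical reference, it can also be held as data so tooling can enforce the conventions in the notes above. A minimal Python sketch (gate names, tokens, and phases are taken from the table; the helper names are illustrative, not part of the framework):

```python
# Canonical gate table as data: gate number -> (English name, token, phase).
# The data comes from the concordance table; the helpers are illustrative.
GATES = {
    0: ("Discovered", "D", "Infrastructure"),
    1: ("Selected", "S", "Infrastructure"),
    2: ("Crawled", "C", "Infrastructure"),
    3: ("Rendered", "R", "Infrastructure"),
    4: ("Indexed", "I", "Infrastructure"),
    5: ("Annotated", "A", "Competitive"),
    6: ("Recruited", "Re", "Competitive"),
    7: ("Grounded", "G", "Competitive"),
    8: ("Displayed", "Di", "Competitive"),
    9: ("Won", "W", "Competitive"),
    10: ("Served", "Sv", "Post-Won extension"),
}

def gate_ref(n: int) -> str:
    """Render a gate reference as 'Gate N (Name)' per the numbering convention."""
    name, _, _ = GATES[n]
    return f"Gate {n} ({name})"

def pipeline_string() -> str:
    """Rebuild the canonical token string D-S-C-R-I-A-Re-G-Di-W-Sv."""
    return "-".join(GATES[n][1] for n in sorted(GATES))
```

For example, `gate_ref(5)` yields "Gate 5 (Annotated)", the exact form the numbering convention requires in prose.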
<a id="system-types"></a>
3. Glossary of System Types
Standardised across all English-language content. Prevents “engine” overload.
| Term | What it is | Examples | Patent equivalent |
|---|---|---|---|
| Search Engine | Indexing + ranking system for web content | Google Search, Bing | Part of the Algorithmic Trinity |
| Assistive Engine | Retrieval/ranking/display system producing outputs for people | Google AI Mode, Perplexity, ChatGPT (when retrieving) | Moteur d’assistance (patent Act 3 label) |
| Conversational LLM | Dialogue-first system that converses; may mediate recommendations | ChatGPT (conversational mode), Claude, Gemini | Not explicitly named in patent |
| Agent | Tool-using system that acts/executes autonomously under mandate | AI shopping agent, booking agent, procurement agent | Not explicitly named in patent |
| Knowledge Graph | Structured entity/relationship store with binary verified edges | Google Knowledge Graph, Wikidata | Part of the Algorithmic Trinity |
The Algorithmic Trinity = Search Engines + Knowledge Graphs + Large Language Models (the three knowledge representations the pipeline serves).
Overlap rule: These categories are not mutually exclusive. A Conversational LLM may be the interface; when it retrieves, ranks, and displays, it is functioning as (or within) an Assistive Engine layer. The distinction is behavioural, not structural: the same system can converse AND retrieve depending on the query. In TKP, “Conversational LLM” is a defined interface term (system type); “LLM” is the Trinity component (knowledge representation).
<a id="acts-and-phases"></a>
4. Acts and Phases: The Orthogonality Rule
Acts and phases are two independent dimensions applied to the same pipeline. They never conflict because they measure different things.
Phases describe what TYPE of evaluation is applied: absolute (infrastructure) or relative (competitive).
Acts describe which AUDIENCE is primary at each gate.
The patent presents one act lens (3×3 by processing function). TKP presents another (5-3-3 by practitioner audience). Both share the same gate sequence, the same phase boundary, and the same diagnostic mechanics. The patent’s own text provides the basis: “Les actes sont orthogonaux à la structuration en deux phases.”
Anti-confusion rule: Never present these as competing models. Always present them as different lenses on the same pipeline. The correct framing: “The patent groups acts by processing function. TKP groups acts by practitioner audience. Both apply to the same DSCRI-ARGDW gate sequence.”
<a id="the-cascading-prerequisite"></a>
5. The Cascading Prerequisite
This is the single most important differentiating insight in the entire framework.
U creates the entity node. C loads the entity node. D activates the entity node.
These are mechanical prerequisites, not a recommended sequence. Credibility signals (NEEATT, Topical Authority, links, mentions, reviews) require an entity node in the graph to attach to. That node is created at U (Understandability). Without the node, the signals are orphaned data - they exist, but they attach to nothing. Deliverability signals (omnipresence, recommendation triggers, conversational visibility) require confidence weight on the entity node. That weight is accumulated at C (Credibility). Without the weight, the entity exists but is not trusted enough to recommend.
U unlocks C. C unlocks D. Without U, credibility signals have no entity to attach to. Without C, deliverability has no trust to amplify.
This is not a methodology preference. This is how the infrastructure works. Every competing methodology - SEO, AEO (as others use it), GEO, AAO, AIEO - either addresses one layer or attempts to skip layers. TKP is the only framework that addresses all three in the correct mechanical sequence because it is the only one built from observing what machines actually do rather than from what marketers want machines to do.
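The mechanical dependency can be sketched as a toy model: credibility signals attach only if the entity node exists (U), and deliverability fires only once the node carries enough confidence weight (C). All class and method names here are illustrative, not an actual knowledge-graph API:

```python
# Toy model of the Cascading Prerequisite (illustrative names, not a real
# graph API): U creates the entity node, C loads it with confidence weight,
# D requires that weight before a recommendation can fire.
class EntityGraph:
    def __init__(self):
        self.nodes = {}  # entity name -> accumulated confidence weight

    def establish_understanding(self, entity: str) -> None:
        """U: create the entity node so signals have something to attach to."""
        self.nodes.setdefault(entity, 0.0)

    def attach_credibility(self, entity: str, weight: float) -> bool:
        """C: a signal attaches only if the node exists; otherwise it is orphaned."""
        if entity not in self.nodes:
            return False  # orphaned data: the signal exists but attaches to nothing
        self.nodes[entity] += weight
        return True

    def can_recommend(self, entity: str, threshold: float = 1.0) -> bool:
        """D: deliverability requires accumulated confidence on the node."""
        return self.nodes.get(entity, 0.0) >= threshold

g = EntityGraph()
orphaned = g.attach_credibility("BrandX", 0.75)  # no U yet -> False, orphaned
g.establish_understanding("BrandX")              # U unlocks C
g.attach_credibility("BrandX", 0.5)
g.attach_credibility("BrandX", 0.75)             # C accumulates confidence weight
```

Note the ordering: the first credibility signal is discarded because it arrived before Understanding, which is exactly the orphaned-signal failure described above.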
Two Edge Cases That Prove the Rule
The Empty Room Effect (temporary D without U). If a niche is obscure enough, or a question has never been asked, you can get D-layer visibility without U or C. The machine has one source. It uses it. The BBC journalist who “hacked” ChatGPT with a hot dog eating blog post. Works in empty rooms. Fails the moment a second voice enters. The Authoritas study is the empirical proof: 600+ articles across major UK media, zero fakes recommended on topic-based queries across 9 AI models. Recognition is not recommendation.
The Quicksand Effect (temporary C without solid U). If competition is so poor the engine has no alternative, you can win at C without solid U. The poodle parlour in Paris with one competitor. Works until a proper competitor arrives with Entity Home, structured data, and corroborated identity. Then the quicksand swallows you: your credibility signals have no entity node to anchor to, and the competitor’s do.
Both edge cases are what competing methodologies sell as success. Both fail when the room fills up. The cascade asserts itself precisely when competition arrives - which is precisely when it matters.
Temporal Compounding
The cascade gets harder to displace over time. Early movers compound: each new corroborating source increases confidence, each confidence increase strengthens future recommendations. Late movers face fossils: correcting a confident algorithm is like trying to change a fossil (Barnard, 2026). The formation window is now.
The Kill Shot
This insight is the reason every other acronym methodology fails:
- SEO without U: Links and authority signals have no entity to attach to. Water into a bucket with no bottom.
- GEO/AEO without U+C: Content optimized for AI responses has no confident entity representation. The AI hedges. “Some sources suggest…” rather than “The leading provider is…”
- AAO/AIEO without the full stack: Training the AI assistive agents fails without U (they don’t know who you are) and C (they don’t trust what they know). The training doesn’t stick.
Single-sentence summary: Recognition is not recommendation, and the difference between the two is the Cascading Prerequisite.
6. The Three People Lenses
The People layer (Gates 8-10) is viewed through three lenses. These are not separate categories - they are three perspectives on the same commercial reality.
Lens 1: People in Marketing (the broad set)
Everything brands do to communicate with their audience: content creation, social media, PR, reviews, third-party relationships, conferences, podcasts, partnerships. UCD applies in full across all marketing activities. The Does Exist / Should Exist / Could Exist framework governs content strategy. NEEATT is the credibility framework - applied to understood entities (the Cascading Prerequisite). Search and AI engines are a subset of marketing that you optimize for. The bonus: amplification by the biggest influencers in the world - AI platforms working 24/7.
Lens 2: People in Engines (the extension of marketing, packaged for machines)
Not a separate domain. An extension of Lens 1. Marketing packaged for machine comprehension. Brand SERP anatomy, AI Résumé, three diagnostic windows. Three Tiers of Control (controlled, semi-controlled, uncontrolled). Rich Elements, meta optimization, structured data. Google Ads as BOFU defense (the marathon relay race). Agent commerce infrastructure. Multimodal convergence (AI engines as highly dynamic SERPs with fewer links). Some content is just for bots - and that proportion grows with the agentic layer. UCD applies in full here too.
Lens 3: People as Clients (the feedback loop into both)
The commerce result. Conversion, retention, upsell. Pipeline failures cost money at three levels: Opportunity Cost (Act I - content not in the system), Competitive Loss (Act II - competitors preferred), and Conversion Leak (Act III - content reaches Display but fails). Revenue Taxes (Invisibility, Ghost, Doubt) subdivide Conversion Leak by UCD depth at Display. Won-Probability describes the math. Three audience types encounter the Brand SERP and AI Résumé: existing clients navigating, prospects evaluating, and people who arrived via other queries. Client experience feeds back into both marketing signals (reviews, mentions, word-of-mouth) AND engine signals (satisfaction data, engagement patterns, return rates). Served (Gate 10) is the formal mechanism: outcomes become evidence, evidence becomes confidence updates, confidence updates reshape future recommendations.
Architecture Rule
UCD runs through ALL three lenses at ALL three stages. Lens 1 and Lens 2 are not separate categories - engines observe what marketing creates, and engine results become visible to the people marketing addresses. Lens 3 feeds back into both. The three lenses produce a complete view: what you create (marketing), how machines amplify it (engines), and what happens when people act on it (clients).
7. The Content Status Framework: Does Exist / Should Exist / Could Exist
Applies to all marketing content across all channels. Search and AI optimization is the subset where you package content for machine comprehension.
| Status | Definition | TKP Action |
|---|---|---|
| Does Exist | Content already published about the brand | Audit. Fix inconsistencies, outdated information, contradictory messaging. ROPI (Return On Past Investment) - make existing assets work before creating new ones. |
| Should Exist | Content that is true, provable, and the brand is the rightful authority | Create. Fill evidence gaps the machine has identified (via diagnostic mirrors). Package for both humans and machines. |
| Could Exist | Content that would strengthen the brand but doesn’t yet have evidence | Earn. Build the real-world activities, partnerships, credentials, and outcomes that make the content truthful. Then document. |
ORM connection: Traditional ORM jumps to Could Exist and Should Exist, often in that order. TKP starts with Does Exist (the audit) and works from there. The difference between TKP and ORM is where you focus first, not what you do. TKP solves ORM as a side effect.
8. Boardroom Compression
For CEOs, boards, and non-technical stakeholders. Three stages plus a flywheel.
RETRIEVED → INTERPRETED → DECIDED
Retrieved: Can the system fetch the right version of you? (Act I: Bot - Gates 0-4)
Interpreted: Does it correctly understand who you are, what you do, and for whom? (Act II: Algorithm - Gates 5-7)
Decided: Does the person (or their AI agent) choose you over alternatives? (Act III: People - Gates 8-9)
Served (the flywheel): After the decision, do outcomes and proof reinforce future decisions? (Gate 10)
“Decisions win once. Served makes them repeatable.”
Terminology note: “Decided” is chosen deliberately. “Selected” is reserved for the Selection Pivot at the Recruited gate (Gate 6), which is algorithmic selection, not human decision. “Decided” captures the commitment moment - whether a person decides or an agent decides under mandate.
9. The Served Gate: Diagnostic Gate + Flywheel
Served is Gate 10 in the TKP pipeline. It is a gate that evaluates post-Won outcomes. The flywheel is what that gate produces. Both are true simultaneously.
In master reference documents: Present Served as both gate and flywheel.
In boardroom content: Emphasise the flywheel.
In diagnostic/platform context: Emphasise the gate.
Three Loops at Served
| Loop | Name | What happens | Responsibility |
|---|---|---|---|
| A | Experience Loop | People experience results (satisfaction, friction, delight) | Audience produces outcomes |
| B | Learning Loop | Engines consume observable evidence and update confidence | Machines process evidence into confidence updates |
| C | Brand Intervention Loop | Brands improve outcomes and engineer evidence legibility | Brands design outcomes + structure proof |
Brand intervention modes:
- Implicit: Improve real outcomes; evidence emerges naturally.
- Explicit: Publish, structure, and instrument proof so it is legible and attributable to engines/agents.
- Passive compounding: Accumulation over time with low active steering; essentially implicit compounding.
Vocabulary precision (not a ban - a precision convention):
| Term | Meaning | Used for |
|---|---|---|
| Outcomes | What happened to the audience | Real-world results (satisfaction, complaints, returns, renewals) |
| Evidence | What can be observed about outcomes | Reviews, citations, UGC, clicks, refunds, support tickets, repeat behaviour |
| Confidence update | How machines change state based on evidence | Ranking propensity, recommendation likelihood, crawl priority, entity confidence |
| Feedback loop | The complete Won→Discovery reinforcement cycle | The full circle, encompassing all three terms above |
The word “feedback” remains available for the complete loop. It is not banned. The precision terms are for operational specificity when distinguishing signal production (human) from signal consumption (machine).
10. Agent Commerce: Timeline Compression (February 2026)
On February 11, 2026, four infrastructure companies shipped agent-enabling primitives within hours of each other: Coinbase (Agentic Wallets, x402 protocol with 50M+ transactions), Cloudflare (Markdown for Agents across 20% of the web, Content-Signal headers, x402 payment support), Stripe (machine payments in USDC), OpenAI + Stripe (Agentic Commerce Protocol, already live with Etsy, 1M+ Shopify merchants incoming). Additionally: Visa (Intelligent Commerce, 100+ partners, predicting millions of agent purchasers by holiday 2026), Mastercard (Agent Pay with Agentic Tokens), PayPal (Agent Payments Protocol 2).
Impact on the TKP framework:
Nothing needs removing. The framework was built for this moment. What changes is the timeline:
- Mode 4 (Push Context/MCP) is operational, not future-state. x402 is live. Stripe’s machine payments are live. OpenAI’s ACP is processing purchases.
- Won Resolution 3 (Agent Transacts) has infrastructure. Coinbase provides wallets, Stripe provides settlement, Visa provides mandate verification. The deterministic fallback tree now has operational infrastructure at every node.
- The Plug-and-Play Promise is the central value proposition. Brands who did foundational work (Entity Home, entity confidence, structured data) activate these rails as configuration. Everyone else rebuilds from scratch.
- The Untrained Salesforce now has a wallet. The metaphor sharpens: “Your untrained salesforce now has a company credit card. Are they spending it on your products or your competitor’s?”
- Gate 2 (Crawling) gains an economic dimension. x402 means agents may pay for content access. Brands can charge agents (revenue) or grant free access (reach). Strategic posture decision.
- Gate 3 (Rendering) gains a parallel pathway. Cloudflare’s Markdown for Agents creates agent-readable content negotiation alongside HTML. Clean semantic HTML now benefits both rendering pathways.
Framework rule: Update timeline references from “future-state” to “operational” for Mode 4/5, Won Resolution 2/3, and agent commerce infrastructure. Maintain the Zero-Risk Year phasing: the sequence (U→C→D, Phase 1→2→3) is mechanical, not aspirational. Brands cannot skip to Mode 4/5 without doing Mode 1-3 first. The Cascading Prerequisite applies: an agent-ready checkout endpoint on a house the AI doesn’t know exists is a beautiful front door on nothing.
<a id="act-i-bot"></a>
Act I: Bot - Gates 0-4
Five binary gates. The audience is the bot. Everything here is pass/fail.
The Pipeline in Full
THE KALICUBE PROCESS - DSCRI-ARGDW PIPELINE
═══════════════════════════════════════════════════════════════════════
  TKP-1: BOT                 TKP-2: ALGORITHM      TKP-3: PEOPLE
┌─────────────────────────┐  ┌──────────────────┐  ┌──────────────────┐
│                         │  │                  │  │                  │
│   D → S → C → R → I     │  │   A → Re → G     │  │   Di → W → Sv    │
│   0   1   2   3   4     │  │   5   6    7     │  │   8    9   10    │
│                         │  │                  │  │                  │
│   ACT I - RETRIEVAL     │  │ ACT II - STORAGE │  │ ACT III - EXEC   │
│   Audience: BOT         │  │ Audience: ALGO   │  │ Audience: PERSON │
└─────────────────────────┘  └──────────────────┘  └──────────────────┘
 ◄══ PASS/FAIL ══►           ◄══ COMPETITIVE ══►   ◄══ COMMERCIAL ══►
  (binary gates)             (relative selection)  (trust + conversion)
CASCADING CONFIDENCE ─────────────────────────────────────────────────►
(accumulates left to right, decays on failure)
ENTRY MODES:
  M1=Pull (all gates)          M2=Push Discovery (accelerates 0-1)
  M3=Push Data (enters at 4)   M4=Push Context (enters at 6/7/9)
  M5=Ambient (gates 0-6 pre-passed, AI triggers 7-9)
1. The Three Axes (Overview)
The Kalicube Process operates across three orthogonal dimensions:
Axis 1 - Horizontal: The Pipeline (DSCRI-ARGDW). Where is your content in the algorithmic processing chain? Eleven stages. Three acts. Three nested audiences. Five entry modes. Sequential gating. Content moves left-to-right, accumulating Cascading Confidence at each gate.
Axis 2 - Vertical: UCD (Trust Depth at Display). How deep is the person’s trust when they encounter your brand? UCD activates at Gate 8 (Display). Covered in TKP-3.
Axis 3 - Temporal: 95/5 (When the Person Arrives). When does the person arrive at Display? Professor John Dawes (Ehrenberg-Bass Institute, 2021) established that only 5% of potential buyers are in-market at any time. Covered in TKP-3.
All three axes collapse to a single point at Gate 9: the Zero-Sum Moment. TKP-1 builds the foundation that makes that moment possible.
2. Act I - Retrieval (Gates 0-3): Find and Fetch
| Gate | Name | Test | Question |
|---|---|---|---|
| 0 | Discovery | Findability | Does the system know this content exists? |
| 1 | Selection | Priority | Is this worth the crawl budget? |
| 2 | Crawling | Accessibility | Can the system access and fetch it? |
| 3 | Rendering | Conversion Fidelity | Can the system convert it to internal format without losing information? |
Gate 0: Discovery
The system learns that a URL exists. Inbound links, sitemaps, IndexNow notifications, prior crawl history. No judgment yet - only awareness.
Gate 1: Selection
The bot decides whether to invest resources. Low entity authority, stale content signals, and weak inbound signals reduce selection priority. The bot has finite crawl budget. Your content competes for attention before it competes for anything else.
Gate 2: Crawling
The system accesses and fetches the content. Server availability, robots.txt configuration, DNS resolution, response speed. Binary: the bot either gets the content or it does not.
The Economic Dimension at Crawling (February 2026). The x402 protocol (Coinbase/Cloudflare, 50M+ transactions processed) introduces a transactional layer to crawling: agents may now pay for content access via HTTP-embedded stablecoin payments. Cloudflare’s “pay per crawl” beta means content access is becoming a negotiated economic exchange, not just a technical permission. Brands face a strategic posture decision: charge agents for access (potential revenue) or grant free access (potential reach). This does not change the gate’s binary function (access or not) but adds a monetisation dimension to crawl strategy that maps to the Destination Capability Ladder (TKP-3a).
The Content-Signal Header. Cloudflare’s Content-Signal: ai-train=yes, search=yes, ai-input=yes header lets publishers declare AI usage policies at the HTTP level - more granular than robots.txt. For Kalicube clients, this is a new configuration item in the Entity Home optimization checklist: declare your content’s AI usage permissions explicitly.
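As an illustration of how such a declaration might be consumed, here is a small parser for the header value quoted above. The header name and keys are taken from the text; the parser itself is a sketch, not Cloudflare's specification:

```python
# Illustrative parser for a Content-Signal value like the one quoted above
# ("ai-train=yes, search=yes, ai-input=yes"). The header name and keys come
# from the text; this parsing logic is a sketch, not Cloudflare's spec.
def parse_content_signal(value: str) -> dict[str, bool]:
    policy = {}
    for part in value.split(","):
        key, _, flag = part.strip().partition("=")
        policy[key] = flag.strip().lower() == "yes"
    return policy

signals = parse_content_signal("ai-train=yes, search=yes, ai-input=no")
```

A bot honouring the policy would then branch on `signals["ai-train"]` before reusing the fetched content for training.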
Gate 3: Rendering (Conversion Fidelity)
The fetched content is converted into the system’s internal index format. This format is not HTML - it is a proprietary structured representation. Jason Barnard (2026) identifies Rendering Fidelity - how cleanly content survives this transformation - as a critical and underexamined variable in the pipeline.
Information lost at Gate 3 is irreversible - it cannot be recovered at any downstream gate. JavaScript-dependent content, non-standard formatting, and complex page structures introduce Rendering Fidelity loss at this boundary. Every annotation, grounding decision, and display outcome downstream depends on what survived the rendering gate.
This makes rendering the critical boundary between Act I (Retrieval) and Act II (Storage). Page speed and Core Web Vitals directly affect this gate. In TKP’s Nested Audience Model, this is where the Bot - the first of three nested audiences - must be satisfied. If the Bot cannot access and fetch the content without friction, neither the Algorithm nor the Person will ever encounter it.
WebMCP and Rendering Enhancement. An emerging rendering pathway is WebMCP, where AI agents access the rendered DOM directly rather than processing raw HTML. This does not change Gate 3’s function (content must still be converted to internal format) but adds a parallel rendering path. Content structured for both traditional rendering and direct DOM access by agents covers both the legacy bot pathway and the emerging agent pathway. The practical implication: clean semantic HTML with explicit structure benefits both. JavaScript-heavy SPAs that render poorly for bots also render poorly for agent browsing.
Markdown for Agents (February 2026). Cloudflare shipped content negotiation headers allowing any site on their network (~20% of the web) to serve agent-readable markdown instead of HTML. Token usage drops ~80%. This creates a second parallel rendering pathway alongside the traditional HTML→internal-format conversion: agents can now receive pre-simplified content via standard HTTP content negotiation. The practical implication for clients: enable Markdown for Agents on Cloudflare AND ensure your content is structured so the markdown conversion preserves semantic meaning. Clean semantic HTML benefits all three rendering pathways (traditional bot, WebMCP agent, markdown agent).
3. Gate 4 - Indexing: The Storage Threshold
| Gate | Name | Test | Question |
|---|---|---|---|
| 4 | Indexing | Storable representation | Can the system store this in a usable format? |
The rendered content is committed to the search index. Mechanical gate - the system decides whether to store the content based on crawl budget allocation, canonical signal resolution, and duplicate detection. A page that is not indexed does not exist in the retrieval system regardless of its quality.
Structured data (Schema.org in JSON-LD) deployed from the Entity Home improves indexing reliability by providing explicit signals that reduce canonical ambiguity and clarify entity relationships.
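As an illustration, a minimal Organization snippet of the kind deployed from an Entity Home might look like the following, built as a Python dict and serialised to JSON-LD. Every value here is a placeholder; real markup would use the brand's own identifiers:

```python
import json

# Minimal illustrative Schema.org Organization markup of the kind an
# Entity Home might deploy. Every value below is a placeholder.
entity_home_markup = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "@id": "https://www.example.com/#organization",
    "name": "Example Brand",
    "url": "https://www.example.com/",
    "sameAs": [
        "https://www.wikidata.org/wiki/Q000000",
        "https://www.linkedin.com/company/example-brand",
    ],
}

json_ld = json.dumps(entity_home_markup, indent=2)
# Embed as <script type="application/ld+json">...</script> on the Entity Home.
```

The `@id` gives the entity node a stable identifier, and `sameAs` corroborates identity across independent sources, which is exactly the canonical-ambiguity reduction described above.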
Index coverage is a prerequisite for everything downstream.
4. The DSCRI → ARGDW Phase Boundary
The transition from Indexed (Gate 4) to Annotated (Gate 5) is the pipeline’s most significant phase boundary.
Gates 0-4 (DSCRI) are mechanical: they concern whether the system HAS the content. Gates 5-9 (ARGDW) are evaluative: they concern what the system DOES WITH the content.
Content that passes all five DSCRI gates but stalls at Annotation is stored but not understood - present but without semantic identity. This is where TKP-1 ends and TKP-2 begins.
The Annotation Fulcrum at Gate 5 (Annotated): five infrastructure gates behind it (Gates 0-4) and five competitive gates from it onward (Gates 5-9), with Served (Gate 10) beyond. At the Fulcrum, the system classifies content across 24+ dimensions and creates the competitive scorecard. Your annotation quality relative to competitors’ starts to determine outcomes at every downstream gate. The scoreboard turns on.
Everything before the Fulcrum is pass/fail. Everything after is competitive. TKP-1 handles the pass/fail. TKP-2 handles the competition.
5. Five Entry Modes
The eleven gates do not change. The diagnostic framework is stable. What changes is HOW content arrives at the gates.
The original pipeline assumes one entry mode: pull. A bot discovers content, decides to crawl it, fetches it, renders it, indexes it, and the intelligence phase begins. This was accurate when the web index was the sole source of truth.
That monopoly is breaking. Content now enters the pipeline through five distinct modes. Each mode bypasses different infrastructure gates - but every mode must still pass through Recruited (Gate 6) before the content can be Grounded, Displayed, or Won. Gate 6 is the universal checkpoint. Nothing reaches a user without being Recruited first.
Mode 1: PULL (Traditional)
The system discovers content by following links, reading sitemaps, or encountering URLs through prior crawl history. The bot decides whether to commit resources, fetches the page, renders it, indexes it. All eleven gates apply.
D(0) → S(1) → C(2) → R(3) → I(4) → A(5) → Re(6) → G(7) → Di(8) → W(9) → Sv(10)
This remains the dominant mode for the web index. Your Entity Home, your content pages, your third-party profiles - all enter through pull. The web index does not disappear.
Mode 2: PUSH DISCOVERY (IndexNow / Sitemaps / RSS)
The brand proactively notifies the system that content exists or has changed. Discovery (Gate 0) and Selection (Gate 1) are accelerated - the brand controls the notification rather than waiting for the bot to find it.
[Brand pushes notification] → S(1, accelerated) → C(2) → R(3) → I(4) → A(5) → Re(6) → G(7) → Di(8) → W(9) → Sv(10)
IndexNow - real-time push notification. Fabrice Canel: “IndexNow is all about knowing ‘now.’” Currently supported by Bing, Yandex, and other adopting engines; Google has not adopted it. Sitemaps - daily snapshot push. RSS/Atom feeds - continuous publication push.
Key limitation: Push Discovery only accelerates Gates 0-1 (Discovered, Selected). The content still needs to be Crawled, Rendered, and Indexed. The bot still decides whether the notification merits action - IndexNow is a hint, not a guarantee.
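A Push Discovery notification is simple to construct. The payload shape below follows the published IndexNow protocol (a JSON POST to an endpoint such as https://api.indexnow.org/indexnow); the host, key, and URLs are placeholders, and the actual HTTP POST is deliberately omitted from this sketch:

```python
import json

# Build an IndexNow push-discovery notification body. The payload shape
# follows the published IndexNow protocol; host, key, and URLs here are
# placeholders, and the actual HTTP POST is omitted from this sketch.
def build_indexnow_payload(host: str, key: str, urls: list[str]) -> str:
    return json.dumps({
        "host": host,
        "key": key,          # key file must be hosted at the site root
        "urlList": urls,
    })

payload = build_indexnow_payload(
    "www.example.com",
    "0123456789abcdef",
    ["https://www.example.com/new-page/"],
)
```

Even with a well-formed payload, remember the limitation above: this is a hint that accelerates Gates 0-1 only; Crawled, Rendered, and Indexed still apply.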
Mode 3: PUSH DATA (Merchant Feed / Structured Data Feeds)
The brand pushes pre-structured data directly into the system’s index. The bot phase (D-S-C-R) is bypassed entirely. Data arrives at Index (Gate 4) already in machine-readable format.
[Brand pushes structured data] → I(4) → A(5) → Re(6) → G(7) → Di(8) → W(9) → Sv(10)
Google Merchant Feed - product data with GTINs, prices, availability, structured attributes pushed directly to Google’s commerce layer. Never crawled as a web page. Arrives pre-structured, often partially annotated (Gates 5-6 simultaneously).
What still applies: Recruited (Gate 6), Grounded (Gate 7), Displayed (Gate 8), Won (Gate 9), Served (Gate 10). Structured data must still compete against alternatives. Having your product in the Merchant Feed does not guarantee it appears in AI shopping responses - it guarantees the system has the data.
Mode 4: PUSH CONTEXT (MCP / Real-Time API)
Status: OPERATIONAL (February 2026). On February 11, 2026, four infrastructure companies shipped agent commerce primitives simultaneously: Coinbase (Agentic Wallets via x402, 50M+ transactions), Cloudflare (Markdown for Agents + x402 payment support + MCP Servers), Stripe (machine payments in USDC on Base), OpenAI + Stripe (Agentic Commerce Protocol, live with Etsy, 1M+ Shopify merchants incoming). Additionally: Visa (Intelligent Commerce, 100+ partners), Mastercard (Agent Pay), PayPal (Agent Payments Protocol 2). Mode 4 is no longer future-state. Brands without MCP-ready data are already losing transactions to brands that have it.
The agent pulls live context from the brand’s systems during response generation. The data was never crawled, never stored in a traditional index. It is retrieved on-demand via protocol. MCP operates at three levels:
MCP as data source (structured data pushed to agent’s context):
[Agent requests data via MCP] → Re(6) → G(7) → Di(8) → W(9) → Sv(10)
Bypasses D-S-C-R-I-A entirely. Enters at Recruitment - the agent actively selects this source.
MCP as grounding source (real-time verification during response):
[Agent queries MCP for verification] → G(7) → Di(8) → W(9) → Sv(10)
Enters at Grounding. The agent already has a candidate response and uses MCP to verify/enrich.
MCP as action capability (agent executes transactions):
[Agent executes via MCP] → W(9) (extended to Perfect Transaction)
Enters at Won. The agent completes the transaction. Won becomes the Perfect Transaction.
Mode 5: AMBIENT (AI-Initiated Trigger)
The AI proactively pushes a recommendation into the user’s workflow without any user query. Gemini suggesting a consultant in Google Sheets. A meeting summary recommending an expert. An autocomplete in Gmail surfacing your brand.
[Gates 0-6 already passed historically]
↓
AI triggers proactively → G(7) → Di(8) → W(9) → Sv(10) (push into workflow, no user query)
Ambient Research does NOT skip Gates 0-6 (Discovered through Recruited). The content already passed through them - the brand already achieved Recruited status through prior pipeline processing (via any of Modes 1-4). Ambient Research is the REWARD for reaching Gate 6 (Recruited) with high enough Cascading Confidence that the system proactively fires Gates 7-10 (Grounded through Served) on the user’s behalf.
The trigger difference: Modes 1-4 change how content enters the pipeline. Mode 5 changes what TRIGGERS execution of the final gates. In Modes 1-4, a user query triggers the pipeline. In Mode 5, the AI decides - based on accumulated entity confidence combined with user context (Gaia ID / Personal Intelligence) - to push a recommendation without being asked.
Three prerequisites for Ambient triggering: (1) Entity confidence - the brand has passed gates 0-6 enough times, through enough modes, with enough consistency that Cascading Confidence is very high. (2) User context - the system knows enough about the user to determine relevance. (3) Contextual fit - the current user activity creates a natural insertion point.
UCD mapping: Ambient Research is the operational definition of the Advocate (D-tier). When the AI pushes your brand into a user’s workflow unprompted, the system has moved beyond Friend (knows you), beyond Recommender (trusts you), to Advocate (promotes you without being asked). This is the ultimate expression of Deliverability.
The Universal Checkpoint: Gate 6 (Recruited)
Every entry mode must pass through Recruited. This is the universal checkpoint.
| Mode | Entry Gate | Must Still Pass |
|---|---|---|
| 1. Pull | Gate 0 (Discovery) | All eleven gates |
| 2. Push Discovery | Gate 0-1 (accelerated) | Gates 2-10 |
| 3. Push Data | Gate 4 (Indexed) | Gates 5-10 |
| 4. Push Context (MCP) | Gate 6, 7, or 9 | Remaining gates from entry |
| 5. Ambient | Historical (0-6 pre-passed) | Gates 7-10 (AI-triggered) |
No content reaches a user without being Recruited. Recruitment is competitive regardless of entry mode. Gate 6 is covered in Act II (Algorithm) below.
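The checkpoint logic above can be sketched as a simple lookup. The gate names follow the pipeline; the mode keys and the choice of Mode 4's earliest entry point (Recruitment) are illustrative simplifications, not Kalicube Pro internals:

```python
GATES = ["Discovered", "Selected", "Crawled", "Rendered", "Indexed",
         "Annotated", "Recruited", "Grounded", "Displayed", "Won", "Served"]

# Entry gate per mode, as in the Universal Checkpoint table.
# Mode 4 is shown at its earliest entry point (Recruitment); it can
# also enter at Grounding (7) or Won (9) depending on the MCP level.
ENTRY_GATE = {
    "pull": 0,
    "push_discovery": 0,   # accelerated, but still starts at Discovery
    "push_data": 4,        # arrives at Index pre-structured
    "push_context": 6,     # MCP as data source enters at Recruitment
    "ambient": 7,          # Gates 0-6 pre-passed historically
}

def gates_still_to_pass(mode):
    """Gates that still apply after the mode's entry point."""
    return GATES[ENTRY_GATE[mode]:]

# Every mode entering at or before Gate 6 must still pass Recruited;
# Ambient only fires because Recruited was already achieved historically.
for mode, entry in ENTRY_GATE.items():
    assert entry > 6 or "Recruited" in gates_still_to_pass(mode)
```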
The Web Index Is the Anchor, Not the Gatekeeper
The Entity Home remains the canonical source of truth. It enters through Mode 1 (pull) and serves as the anchor for entity resolution across all modes. But the web index is no longer the sole data source. Brands must maintain: (1) Pull infrastructure (Entity Home, sitemaps, internal linking), (2) Push notifications (IndexNow, RSS), (3) Structured feeds (Merchant Center, booking APIs), (4) MCP-ready data (structured, API-accessible, real-time), (5) Cascading Confidence high enough for ambient triggering.
The Bot Phase Becomes Optional (For Some Content)
Modes 3 and 4 bypass the bot phase entirely. JavaScript rendering is irrelevant for push data. Crawl budget is irrelevant for MCP content. But the Entity Home still needs the bot phase - the anchor must be crawlable, renderable, indexable. The bot phase is optional for supplementary content, not for the foundation.
6. Cascading Confidence at Gates 0-4
Cascading Confidence is the throughline of the entire framework - the one concept that ties every pipeline stage, every UCD layer, every Revenue Tax, and every geometric element together. It is covered fully across all three documents. Here: the mechanics at the infrastructure gates.
The Concept
Confidence that compounds or decays through every stage of the pipeline, from Discovery to Won.
It accumulates. A URL from a trusted entity on a well-structured, fast-rendering site arrives at Annotation with high Cascading Confidence BEFORE a single dimension is even tagged. Each stage inherits confidence from all previous stages AND adds its own assessment.
It decays. Low confidence at any stage undermines everything downstream. Low rendering fidelity means annotations are unreliable. You cannot recover downstream from failure upstream.
It compounds. The Cascading Confidence score IS the importance signal. There is no separate “importance layer.” “Link juice” is not a post-annotation metric; it is a signal at Discovery/Selection. Authority is not assessed once; it is assessed at every stage with increasing granularity.
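A minimal sketch of the accumulate/decay mechanics, assuming each stage multiplies its own assessment into the score inherited from all previous stages (the actual scoring is Kalicube Pro's, not shown here):

```python
def cascading_confidence(stage_scores):
    """Each stage inherits the running product of all previous stages,
    then multiplies in its own assessment (scores in [0, 1])."""
    trajectory, running = [], 1.0
    for score in stage_scores:
        running *= score
        trajectory.append(round(running, 3))
    return trajectory

# A clean pipeline vs. one with a rendering-fidelity failure at stage 4.
clean   = cascading_confidence([0.9, 0.9, 0.9, 0.9, 0.9])
damaged = cascading_confidence([0.9, 0.9, 0.9, 0.3, 0.9])
# The upstream failure cannot be recovered downstream: the final stage's
# own 0.9 assessment still inherits the damaged running product.
print(clean[-1], damaged[-1])
```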
Three Layers of Confidence
| Layer | What It Measures | Example |
|---|---|---|
| Entity confidence | Does the system trust the publisher/author? | Trusted entity’s new URL starts with inherited confidence; unknown starts at zero |
| Content confidence | Does this specific piece survive each stage with fidelity? | Well-structured HTML5 converts cleanly; JavaScript-heavy SPA loses fidelity |
| Presentation confidence | Is the content packaged for effective algorithmic use? | Clear semantic markup vs. ambiguous prose |
Delete the URL, the entity confidence persists. Change the URL, the content confidence should transfer. The URL is the access mechanism, not the unit of value.
Cascading Confidence Is a Vector, Not a Snapshot
Direction: a brand with moderate confidence that is accumulating is in a stronger position than a brand with high confidence that is decaying. Momentum: confidence compounds - each stage’s confidence feeds the next, while decay creates drag. Measurability: Kalicube Pro can score each stage monthly and show the trajectory.
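As an illustration of vector versus snapshot, a toy heuristic that classifies a monthly score series by direction (this is not Kalicube Pro's actual trajectory model):

```python
def confidence_trajectory(monthly_scores):
    """Direction matters more than the snapshot: compare the latest
    score with the average of the preceding months (toy heuristic)."""
    *history, latest = monthly_scores
    baseline = sum(history) / len(history)
    return "accumulating" if latest > baseline else "decaying"

print(confidence_trajectory([0.40, 0.45, 0.52]))  # moderate but accumulating
print(confidence_trajectory([0.80, 0.74, 0.70]))  # high but decaying
```

The moderate-but-accumulating brand reads as the stronger position, exactly as the text argues.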
Confidence at Infrastructure Gates
| Gate | What Confidence Means | What Kills It | Entry Mode Notes |
|---|---|---|---|
| Discovery | “This URL/entity is worth knowing about” | No inbound signals, no sitemap, no IndexNow | Mode 2 (Push Discovery) accelerates; Modes 3-4 bypass entirely |
| Selection | “This is worth the crawl budget” | Low entity authority, stale content signals | Mode 2 (IndexNow) provides the hint; bot still decides |
| Crawling | “We can access and fetch reliably” | Blocked by robots.txt, slow server | Modes 3-4 bypass; JS rendering irrelevant for push data |
| Rendering | “We can convert to internal format with fidelity” | JavaScript dependencies, non-standard formats | Modes 3-4 bypass; Mode 3 arrives pre-structured |
| Indexing | “We can store in usable representation” | Canonical resolution, duplicate content | Mode 3 enters here (already structured) |
The Entity Confidence Prior
Won-probability is conditioned on entity confidence - the system’s existing trust in the entity based on historical reliability, KG presence, and Entity Home quality. A high-trust entity’s content receives preferential treatment at Selection (Gate 1), Annotation routing (Gate 5), and Recruitment (Gate 6). The Entity Home initialises this prior. Barnard (2026c) defines it formally as the inverse of marginal annotation cost.
7. Entity Identity: The Foundation
Entity Home (EH): The Canonical Page
One single page. The canonical URL for a thing. The point of reconciliation. The machine’s reference for facts about the entity from the entity. One page, one entity, one purpose. Feeds Gates 4-5 (Indexing, Annotation) with structured facts. In the child analogy: the parent.
Entity Home Website (EHW): The Architectural Map
The entire website that houses the Entity Home. NOT just “where the EH lives” but a structured index of everywhere the entity exists online. Feeds Gates 6-7 (Recruitment, Grounding) by providing a navigable map for cross-referencing.
The EH is a page. The EHW is an architecture.
What the EHW contains: the Entity Home (canonical page), structured navigation to every sub-entity (products, people, events, each with their own EH), the corroboration index (links to every filing cabinet where the entity has a file), the relationship map (connections to other entities), and the content architecture (topical coverage organised by expertise areas).
This reframes site architecture from “organise content for keywords and users” to “organise your entity’s existence for machines.” Homepage = machine’s entry point to your entity’s card catalogue. About page = canonical ground truth. Blog = entity demonstrating topical coverage for Speed 3 inclusion. Internal linking = relationship mapping between entity and sub-entities.
Entry Modes and Entity Identity: The Entity Home enters the pipeline via Mode 1 (Pull) - always. It is the anchor. But the EHW can also feed Mode 3 (Push Data) through structured feeds and Mode 4 (MCP) through API-accessible data. The entity’s identity must be consistent across all modes. An Entity Home that says one thing while a Merchant Feed says another creates annotation conflicts that decay Cascading Confidence.
Structured Data Hierarchy
Most Important to Least: (1) Entity Home (100% most important). (2) Entity Description (foundation of Google’s understanding - the first sentence MUST be a semantic triple: Subject → Verb → Object). (3) Corroboration (30+ sources, third-party weighted highest). (4) Schema Markup (fourth, not first - it supports the above three, it does not replace them).
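For the fourth layer only - Schema Markup - a minimal illustrative sketch of Entity Home markup whose description opens with the required semantic triple. All names and URLs are placeholders; the `@type` and property names follow the schema.org vocabulary:

```python
import json

# Illustrative Entity Home markup; every value here is a placeholder.
entity_home = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "@id": "https://example.com/#organization",
    "name": "Example SAS",
    # First sentence is a semantic triple:
    # Subject (Example SAS) -> Verb (provides) -> Object (the service).
    "description": "Example SAS provides digital-brand consulting.",
    "url": "https://example.com/",
    # Corroboration pointers: second- and third-party profiles.
    "sameAs": [
        "https://www.linkedin.com/company/example",
        "https://en.wikipedia.org/wiki/Example_SAS",
    ],
}
print(json.dumps(entity_home, indent=2))
```

The markup supports the Entity Home, description, and corroboration layers; it does not replace them.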
Corroboration Strategy
First party (you control): very fast diminishing returns. Second party (partial control - social profiles, review sites): slower diminishing returns. Third party (no control - articles, interviews): near-zero diminishing returns, highest value. Target: ~30 corroborations from trusted, authoritative sources.
Entity Equivalents: same type + same geo + same industry. A discovery method for identifying which attributes Google shows and which sources it trusts.
Confidence Score 500+ = safety level (survives Wikipedia deletion). Building: consistency + independent corroboration + 2+ years. Loss: sudden, vertiginous, 10x harder to rebuild.
8. Act I Stall Patterns
Each gate is boolean. Content either passes or stalls. Content that stalls at any gate cannot reach any downstream gate.
Infrastructure Stalls (Technical, Cheapest to Fix)
| Stall | Pattern | Diagnosis | Action |
|---|---|---|---|
| 1 | Found but not Selected | Weak signals, low crawl priority | Strengthen inbound signals, submit via IndexNow |
| 2 | Selected but not Crawled | Server errors, robots.txt, DNS failure | Fix server configuration, check access rules |
| 3 | Crawled but not Rendered | JavaScript failure, unparseable structure | Simplify page structure, reduce JS dependency |
| 4 | Rendered but not Indexed | Thin content, duplication, below quality threshold | Resolve canonical issues, improve content depth |
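Because the gates are boolean and sequential, diagnosis reduces to finding the first failing gate - nothing downstream of it matters until it is fixed. A sketch, with hypothetical gate-check results standing in for crawl logs, render tests, and index coverage reports:

```python
INFRA_GATES = ["Discovered", "Selected", "Crawled", "Rendered", "Indexed"]

def first_stall(results):
    """Return the first failing gate (the only one worth fixing),
    or None if the content cleared the infrastructure phase."""
    for gate in INFRA_GATES:
        if not results.get(gate, False):
            return gate
    return None  # infrastructure cleared; Annotation is next

# Hypothetical results for one URL (True = passed).
page = {"Discovered": True, "Selected": True, "Crawled": True,
        "Rendered": False, "Indexed": False}
print(first_stall(page))  # → Rendered (Stall 3: crawled but not rendered)
```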
Phase Boundary Stall
| Stall | Pattern | Diagnosis | Action |
|---|---|---|---|
| 5 | Indexed but not Annotated | Topic signals ambiguous, entity associations unclear | Strengthen entity signals - this is the transition to TKP-2 territory |
Revenue mapping: Infrastructure stalls at Bot gates represent Opportunity Cost - content not in the system. Zero signal. The person never encounters the brand because it never reached Display. This is not a Revenue Tax (taxes apply at Display where the person experiences the failure). Opportunity Cost is upstream: cheapest to fix, most expensive to ignore. No UTM parameter captures “ChatGPT didn’t mention you” because the failure happened before Display. The Cascading Prerequisite governs the fix order: resolve U before C before D. This provides formal support for ROPI: consolidate existing assets at BOFU before creating new content for TOFU.
Entry Mode Diagnostic
When diagnosing via entry mode, the practitioner question shifts from “which gate is failing?” to “which entry mode is this content using, and which gates still apply?” A brand failing at Gate 2 via Mode 1 (Pull) might succeed via Mode 3 (Push Data) if the content is available as a structured feed. The diagnostic becomes multi-dimensional: test each entry mode independently.
9. Three Nested Audiences
Each act has a primary audience. The audiences are nested, not parallel: you can only reach each one through the one below.
| Act | Primary Audience | What You’re Selling | Confidence Type |
|---|---|---|---|
| I - Retrieval (TKP-1) | Bot | “I exist, I’m accessible, I convert cleanly” | Technical trust |
| II - Storage (TKP-2) | Algorithm | “I’m the most relevant, credible, corroborated answer” | Content trust |
| III - Execution (TKP-3) | Person (via algorithm) | “I solve your problem better than alternatives” | Conversion trust |
You can only market to the algorithm THROUGH the bot (the algorithm only processes what the bot collected). You can only reach the person THROUGH the algorithm (the person only sees what the algorithm presents). This is why BOFU must come first: if your conversion experience is broken, every successful recommendation generates a negative feedback signal. You are spending Cascading Confidence faster than you earn it.
Entry Modes modify this nesting: Modes 3-4 bypass the bot audience entirely for some content. But the Entity Home - the anchor - still passes through all three audiences. The nesting remains structurally correct for the foundation.
Three Professional Blind Spots
| Professional | Approach | What They See | What They Miss |
|---|---|---|---|
| SEO | Bot → Algorithm → Person (bottom-up) | Excellent at bot layer; decent at algorithm | Often forgets the person actually needs to buy something |
| Marketer | Person → Algorithm → Bot (top-down) | Creates brilliant campaigns that resonate with humans | Never thinks about whether the bot can find, crawl, and render the content |
| GEO Expert | Algorithm only (the middle) | Aims at “make AI recommend me” | Cannot influence algorithm without controlling bot inputs; does not understand person output |
None of the three audiences is more important than the others. But like the funnel, start from the bottom.
10. Won-Probability at Infrastructure Gates
Won-probability is the product of boolean gate-pass probabilities across all gates. A 90% pass rate at each of ten gates yields 34.9% end-to-end survival (0.9¹⁰ ≈ 0.349); 95% yields 59.9%; 99% yields 90.4%. For unoptimised content at 80%: 0.8¹⁰ ≈ 0.107 - fewer than 11% survive.
The Beer-Mats Principle: A near-zero at any gate kills the entire chain. Nine gates at 90% plus one at 50% drops you from 34.9% to 19.4%. If that gate drops to 10%, end-to-end survival collapses below 4%. “Better to be a straight C student than three As and an F” (Brent D. Payne, compressing Gary Illyes’s explanation, Sydney 2019). For non-Google bots, a JavaScript-dependent page isn’t a weak gate - it’s a near-zero gate. And a near-zero anywhere in a multiplicative chain makes the whole chain near-zero.
Small improvements at every gate compound into large gains at Won. This is the mathematical case for infrastructure optimisation.
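The arithmetic in runnable form - a minimal sketch of the multiplicative model over a ten-gate reference case:

```python
from math import prod

def won_probability(pass_rates):
    """End-to-end survival is the product of per-gate pass probabilities."""
    return prod(pass_rates)

uniform_90 = won_probability([0.90] * 10)          # ≈ 0.349
beer_mats  = won_probability([0.90] * 9 + [0.50])  # one weak gate ≈ 0.194
near_zero  = won_probability([0.90] * 9 + [0.10])  # one near-zero gate ≈ 0.039

print(f"{uniform_90:.3f} {beer_mats:.3f} {near_zero:.3f}")
```

One weak gate halves survival; one near-zero gate all but destroys it, regardless of the nine strong gates around it.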
Two Categories of Pipeline Investment
Two categories of pipeline investment exist, and the difference between them is an order of magnitude.
| Category | What it does | Example | Advantage over baseline |
|---|---|---|---|
| Improving gates | Raises confidence at existing gates. Content still passes through all ten. | IndexNow (+24%), Schema (+16%) | Single-digit to low double-digit % |
| Skipping gates | Bypasses infrastructure gates entirely. Content arrives at competitive phase with zero attenuation. | Structured feeds (+321%), MCP (+757%) | Triple-digit % |
Reference table (70% base case at all gates, 2.8% surviving signal):
| Approach | Gates affected | Surviving signal | Advantage |
|---|---|---|---|
| Pull (crawl) | All ten | 2.8% | baseline |
| IndexNow | D, S, C → 75% | 3.5% | +24% |
| Schema markup | I, A → 75% | 3.2% | +16% |
| Feed (Merchant Center, Product Feed) | D, S, C, R skipped (= 1.0) | 11.8% | +321% |
| MCP (direct agent data) | D, S, C, R, I, A skipped (= 1.0) | 24.0% | +757% |
Everything above the feed line is improvement. Everything below it is transformation. This is the mathematical case for Entry Modes 3 and 4: structured data improves the gates your crawled content passes through AND it is the format that makes feeds possible in the first place.
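The surviving-signal column can be reproduced with the same multiplicative model: skipped gates contribute a factor of 1.0 (no attenuation), improved gates a factor of 0.75, everything else the 70% base case:

```python
from math import prod

BASE = 0.70      # base pass confidence at every gate
N_GATES = 10     # the document's ten-gate reference case

def surviving_signal(improved=0, improved_to=BASE, skipped=0):
    """Skipped gates contribute 1.0; improved gates contribute
    `improved_to`; the remaining gates stay at the 70% base."""
    rates = ([1.0] * skipped + [improved_to] * improved
             + [BASE] * (N_GATES - skipped - improved))
    return prod(rates)

print(f"pull     {surviving_signal():.1%}")                              # 2.8%
print(f"indexnow {surviving_signal(improved=3, improved_to=0.75):.1%}")  # 3.5%
print(f"schema   {surviving_signal(improved=2, improved_to=0.75):.1%}")  # 3.2%
print(f"feed     {surviving_signal(skipped=4):.1%}")                     # 11.8%
print(f"mcp      {surviving_signal(skipped=6):.1%}")                     # 24.0%
```

Improving gates moves the baseline by fractions of a point; skipping gates multiplies it - the order-of-magnitude gap the table describes.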
Gate Tiers
| Tier | Gates | Nature | TKP Document |
|---|---|---|---|
| Tier 1 - Infrastructure | 1-4 | Binary, low differentiation. Table stakes. Exception: Conversion Fidelity at Gate 3 has more weight because rendering quality directly affects downstream annotation. | TKP-1 |
| Tier 2 - Classification | 5-6 | Structured, moderate differentiation. Foundation for competitive differentiation. Gate 5 is where the AI’s Framing Gap originates. | TKP-1 (Gate 4) / TKP-2 (Gate 5) |
| Tier 3 - Competitive | 7-9 | Relative, high differentiation. Trinity gates. Explicitly competitive. Weight increases rightward. | TKP-2 |
| Tier 4 - Commercial | 10-11 | Binary outcome + feedback. Zero-sum resolution + post-engagement loop. | TKP-3 |
The sequence override: Gate weight only matters among gates the brand actually passes. A brand stalled at Gate 2 has no use for Gate 8 optimisation. Fix infrastructure first.
The Feedback Loop at Bot Gates
The pipeline appears to be a one-way line: Gates 0-4 left to right, then handed to the Algorithm. It is not. Post-engagement feedback from Gate 10 (Served - see TKP-3a) re-enters the Bot phase. WHERE it re-enters depends on the platform’s strategy, mechanical implementation, and partnerships. The brand cannot control which gate receives the feedback. The brand CAN control whether the feedback exists and whether it is structured.
At Discovery and Selection (Gates 0-1): Entity confidence affects crawl priority. Higher confidence entities earn more crawl budget, more frequent revisits, faster re-indexing of updated content. A brand with strong post-engagement signals (high satisfaction, low returns, good review trajectory) becomes a higher-priority crawl target. The flywheel: good outcomes → higher entity confidence → more crawl attention → fresher index → stronger competitive position at Algorithm gates → better outcomes → more crawl attention.
At Crawling (Gate 2): The economic dimension of x402 creates a new feedback path. Agents paying for content access generate revenue signals. Destinations with high agent-payment volume signal demand. This is a new feedback channel that did not exist before February 2026.
At Indexing (Gate 4): Post-engagement signals can influence index freshness priority. Platforms may re-index high-engagement entities more frequently to ensure their representations stay current. Stale index entries for high-confidence entities are a waste of confidence.
The anti-flywheel: This is why BOFU must come first. If conversion is broken, every successful recommendation generates negative post-engagement data. That negative signal feeds back to Bot gates: lower crawl priority, slower re-indexing, reduced entity confidence. The pipeline poisons itself. You are spending Cascading Confidence faster than you earn it - and the degradation starts at the Bot phase, not the Algorithm phase.
The distributed re-entry principle: The same post-engagement signal may enter at different DSCRI gates on different platforms. Google may use review velocity to adjust crawl frequency (Gate 0). Bing may use satisfaction signals at Selection (Gate 1). An agent platform may use return-rate data at its own Discovery mechanism. All online signals about the brand - pre-purchase and post-purchase - feed DSCRI, but the stage depends on the platform.
11. What the Engineers Confirmed (Infrastructure-Relevant)
Confirmed directly by Fabrice Canel (Bing/Microsoft):
- Gatekeeper concept valid. Neutral tagging at crawl time. 24+ dimensions hold with “definitely more.”
- Conversion fidelity real - internal index is NOT HTML.
- “No two Bings” - API = same results. Context from referring page carries forward.
- “Less is more for SEO. Never forget that. Less URL to crawl, better for SEO.”
- Sitemaps + IndexNow = crawl control.
- Bucketised notification (logarithmic scale).
Confirmed by Ihab (grounding lifecycle):
- User asks → LLM checks confidence → if low, sends cascading queries to search → results → bots scrape select pages → answer generated.
- Not all answers require retrieval - embedded vs. fresh is a confidence decision.
- Bot purposes: index/train, retrieve/ground, assist/act. Each requires different confidence thresholds.
The Knowledge Panel Course validation: The course was teaching the Three Speeds before the concept was formally named. The course discovered the pattern empirically (10 years of experiments): Knowledge Extraction daily, Knowledge Panel weekly, Knowledge Vault monthly. The framework named it theoretically.
Scrape-to-referral ratio is the key metric - 90% bot traffic claims are misleading; exclude robots.txt and sitemap pings and it drops to ~40% actual crawling.
Monetisation tension (Ihab): Publisher Content Marketplace (PCM) - Microsoft pays publishers for content used by Copilot. Block bots = lose AI visibility; allow bots = content consumed without attribution.
12. Entry Modes and the Zero-Risk Year (Phase 1)
| Phase | Months | What Happens | Entry Modes Activated |
|---|---|---|---|
| Fix (U) | 1-3 | BOFU consolidation: Entity Home, structured data, fix infrastructure | Mode 1 (Pull) infrastructure established |
| Lock-In (C) | 4-6 | MOFU competitive wins | Mode 2 (Push Discovery via IndexNow) + Mode 3 (Push Data via structured feeds) |
| Expand (D) | 7-12 | TOFU visibility | Mode 4 (MCP - now operational) + building toward Mode 5 (Ambient). Agent commerce infrastructure live. |
Phase 1 is TKP-1 territory. The detailed Phase 1 timeline:
| Timeline | What You Do | What You Get (Visible NOW) | What You’re Building (Invisible) |
|---|---|---|---|
| Week 1-2 | Fix Entity Home, clean structured data | Knowledge Panel improves; AI stops hedging about basic facts | Entity confidence foundation |
| Month 1 | Consolidate brand narrative, fix contradictions | AI recommendations become consistent; fewer sales objections | Cascading Confidence at Annotation |
| Month 2-3 | Build corroboration (strategic third-party mentions) | Share of Voice in AI citations increases | Cascading Confidence at Grounding |
The Speed 2 Problem
Clients skip Speed 2 because it is invisible - you cannot screenshot a kgmid. They jump to Speed 1 because it is visible (citations, search results). But without the entity foundation, Speed 1 content has no entity anchor. The LLM either does not ground on it (Invisibility Tax - absent from Display) or grounds with hedging (Doubt Tax - “claims to be”).
The sales reframe: “You’re asking me to train a salesforce that doesn’t know who it works for. Before we teach it what to say (Speed 1 content), we need to teach it who you are (Speed 2 entity).”
13. Key Quotes (Bot-Relevant)
Jason on the URL: “The URL is a representation of the publisher, author, and content. It has no value in and of itself.”
Jason on audiences: “You are marketing to all three (people, bots and AI algorithms) and none is more important than the others, but like the funnel, start with the bottom.”
Jason on empathy: “If you have empathy for the problems that Google and Bing have for crawling, indexing your content, and you make it friction-free and tasty, they will reward you.”
Jason on packaging: “The Kalicube Process is brand-focused marketing packaged for machines.”
Fabrice Canel (Bing) on efficiency: “Less is more for SEO. Never forget that. Less URL to crawl, better for SEO.”
Fabrice Canel (Bing) on marketing to bots: “Having the best site and being known of having the best site. It’s about marketing the site.”
14. Register: Patent / Academic (TKP-1)
Gates 0-4 describe the mechanical phase of the DSCRI-ARGDW pipeline, concerning whether the system possesses content in a processable form. Five formally defined entry modes (Pull, Push Discovery, Push Data, Push Context, Ambient) determine how content arrives at these gates. Each mode bypasses different infrastructure gates but all must converge at Gate 6 (Recruited), establishing it as the Universal Checkpoint. Rendering Fidelity at Gate 3 measures the proportion of original content’s information that survives transformation to internal index format, with information loss at this boundary being irreversible. The transition from Indexed (Gate 4) to Annotated (Gate 5) constitutes the pipeline’s most significant phase boundary, separating mechanical presence (DSCRI) from evaluative comprehension (ARGDW). Won-probability at infrastructure gates operates as the product of boolean gate-pass probabilities across three multiplicative layers (entity, content, presentation), conditioned on the entity confidence prior. Post-engagement feedback (Gate 10, Served) re-enters Bot gates at platform-dependent points: affecting crawl priority and re-indexing frequency at Discovery/Selection (Gates 0-1), creating economic signals via x402 at Crawling (Gate 2), and influencing index freshness priority at Indexing (Gate 4). The distributed re-entry principle holds across both Bot and Algorithm phases: the same post-engagement signal may enter at different DSCRI gates on different platforms depending on platform mechanics and partnerships.
15. Source Attribution (TKP-1)
| Concept | Originator | Year |
|---|---|---|
| DSCRI-ARGDW Pipeline | Jason Barnard | 2025-26 |
| Five Entry Modes (Pull/Push Discovery/Push Data/Push Context/Ambient) | Jason Barnard | 2026 |
| Gate 6 as Universal Checkpoint | Jason Barnard | 2026 |
| Rendering Fidelity / Conversion Fidelity | Jason Barnard (from Fabrice Canel) | 2026 |
| DSCRI-ARGDW Phase Boundary | Jason Barnard | 2026 |
| Nine Stall Patterns (Act I subset) | Jason Barnard | 2026 |
| Cascading Confidence | Jason Barnard | 2025 |
| Three Layers of Confidence (Entity/Content/Presentation) | Jason Barnard | 2026 |
| Entity Home vs. Entity Home Website | Jason Barnard | 2026 |
| Entity Confidence Prior / Computational Trust | Jason Barnard | 2026 |
| Three Layers of Won-Probability (Wₑ × Wc × Wp) | Jason Barnard | 2026 |
| Won-Probability Model | Jason Barnard | 2026 |
| Two Categories of Pipeline Investment (improving gates vs skipping gates) | Jason Barnard | 2026 |
| Beer-Mats Principle / Multiplicative Destruction Effect | Jason Barnard (from Gary Illyes via Brent D. Payne) | 2019/2026 |
| Nested Audience Model | Jason Barnard | 2026 |
| Three Professional Blind Spots | Jason Barnard | 2026 |
| Speed 2 Problem | Jason Barnard | 2026 |
| IndexNow real-time push | Fabrice Canel / Bing | 2021 |
| 95/5 Rule | Prof. John Dawes, Ehrenberg-Bass Institute | 2021 |
| Ambient Research | Jason Barnard | 2025 |
<a id="act-ii-algorithm"></a>
Act II: Algorithm - Gates 5-7
Three gates. The audience is the algorithm. Everything here is competitive.
1. The Annotation Fulcrum (Gate 5)
Gate 5 is the hinge of the entire pipeline.
| Gate | Name | Test | Question |
|---|---|---|---|
| 5 | Annotation | Classification accuracy | Does the system understand what this content IS? |
Annotation classifies content across five categories comprising 24+ dimensions:
Category 1 - Gatekeepers: Temporal Scope, Geographic Scope, Language, Entity Resolution.
Category 2 - Core Identity: Entities, Attributes, Relationships, Sentiment.
Category 3 - Selection Filters: Intent Category, Expertise Level, Claim Structure, Actionability.
Category 4 - Confidence Multipliers: Verifiability, Provenance, Corroboration Count, Specificity, Evidence Type, Controversy Level, Consensus Alignment.
Category 5 - Extraction Quality: Sufficiency, Dependency, Standalone Score, Entity Salience, Entity Role.
Plus Confidence Score applied to every annotation at every level. Additional dimensions from Fabrice Canel: Audience Suitability (age/context/sophistication), Ingestion Fidelity (conversion confidence at R→I boundary), and Freshness Delta (indexed version vs. live version currency).
Neutral Annotation vs. Contextual Filtering
The bot tags neutrally at index time. Safety classification (adult content, malware) is TAGGED as a neutral dimension - the bot does not judge. Filtering happens at query/display time when the system knows who is asking. Annotation is neutral description. Filtering is contextual decision-making. Different stages. This distinction ensures the Cascading Confidence cascade stays objective.
Entry Mode note: Mode 3 (Push Data) content arrives partially pre-annotated - GTINs carry category classification, structured product attributes carry entity resolution. The annotation gate still applies (the system does its own classification) but starts from a richer baseline than raw HTML.
Identifier Granularity at Annotation
Identifier granularity is a go/no-go gate for agentic commerce readiness. When the same product is sold at different locations or at different prices, identical identifiers (e.g., a single GTIN shared across variants) cause misannotation - the system treats distinct offerings as one. Downstream: wrong price quoted at Grounding, wrong offer presented at Display, failed transaction at Won. The cascade of failures starts here, at Annotation, with a misused identifier.
Diagnostic: “Are you ready for a Digital Product Passport?” The answer is almost always no. The gap between now and passport-ready is the gap to agent readiness for commerce entities. The universal principle applies beyond e-commerce: any entity whose offerings have variant attributes (location-specific pricing, market-specific scope, time-dependent availability) needs identifier variance that matches real-world variance.
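A minimal sketch of the granularity check, using hypothetical offer records in which one GTIN is shared across variants that differ in location and price - exactly the misannotation pattern described above:

```python
from collections import defaultdict

# Hypothetical offer records; GTINs and prices are placeholders.
offers = [
    {"gtin": "04012345678901", "location": "FR", "price": 49.00},
    {"gtin": "04012345678901", "location": "US", "price": 59.00},
    {"gtin": "04012345678905", "location": "FR", "price": 19.00},
]

def identifier_collisions(offers):
    """Flag identifiers shared by offers whose real-world attributes
    differ: the system will annotate them as one offering and ground
    the wrong price downstream."""
    by_id = defaultdict(set)
    for offer in offers:
        by_id[offer["gtin"]].add((offer["location"], offer["price"]))
    return {gtin: variants for gtin, variants in by_id.items()
            if len(variants) > 1}

print(identifier_collisions(offers))  # the reused GTIN is flagged
```

A clean result (no collisions) means identifier variance matches real-world variance - the precondition the diagnostic is probing for.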
SLM Routing and the Annotation Cascade
SLM (Small Language Model): Domain-specific language models used by crawlers for content annotation at scale. Cheaper, faster, and paradoxically more accurate than LLMs for niche content. The crawler routes content to the most specific SLM possible. LLM annotation is the failure mode.
Annotation Cascade: Hierarchical routing of content through increasingly specific language models during crawl-time annotation. Content routes Site → Category → Page → Chunk, with each level inheriting or re-routing model context. The crawler does not arrive cold - it arrives with inherited context. Misrouting (wrong model at any level) produces confident misannotation, worse than falling to a general model.
Three-Model Annotation: Three types of SLM operating simultaneously per niche:
Subject SLM: classifies by subject matter (what is this about?). Entity SLM: resolves entities and assesses centrality and authority (who are the key players?). Concept SLM: maps claims to established concepts and evaluates novelty (what ideas define this space?).
When all three return high confidence on the same entity for the same content, annotation cost is minimal.
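The hierarchical routing and the three-model agreement check might be sketched like this. A toy illustration: the registry keys, the path scheme, and the 0.8 threshold are invented for the example, not documented system values.

```python
def route_to_slm(registry, path):
    """Route content down Site -> Category -> Page -> Chunk, keeping the
    most specific SLM available. The crawler does not arrive cold: each
    level inherits the model chosen at the level above, re-routing only
    when a more specific model exists."""
    model = registry.get("LLM")  # general-model fallback is the failure mode
    prefix = []
    for level in path:
        prefix.append(level)
        key = "/".join(prefix)
        if key in registry:      # a more specific SLM exists: re-route
            model = registry[key]
    return model

def annotation_is_cheap(subject_conf, entity_conf, concept_conf, threshold=0.8):
    """Annotation cost is minimal when Subject, Entity, and Concept SLMs
    all agree with high confidence on the same entity (threshold assumed)."""
    return min(subject_conf, entity_conf, concept_conf) >= threshold

registry = {
    "LLM": "general-llm",
    "example.com": "site-slm",
    "example.com/poodle-grooming": "category-slm",
}
route_to_slm(registry, ["example.com", "poodle-grooming", "page-17"])
# -> "category-slm" (inherited: no page-level model exists)
```

Note the inheritance design choice: when no model exists at a level, the content keeps the parent's model rather than falling to the general LLM, mirroring the cascade described above.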
Annotation Cost Threshold: The tipping point where it becomes computationally cheaper for the system to build/cache an entity-specific context model than to process through a general model. The trigger is data density, not fame.
Why Gate 5 Is the Fulcrum
Five gates of infrastructure (1-5), four gates of competition (6-9). At Gate 5, the system classifies content across 24+ dimensions and creates the competitive scorecard. Your annotation quality relative to your competitors' starts to determine outcomes at every downstream gate. The scorecard turns on.
Three phases of the pipeline:
| Phase | Gates | Nature |
|---|---|---|
| Absolute | 1-5 | Pass/fail. Your competitors pass the same tests. No differentiation. |
| Relative | 6 | The scorecard. Your classification vs. theirs. Positioning begins. |
| Competitive | 7-9 | Head-to-head. System explicitly chooses between you and alternatives. Escalating intensity through to Won. |
The inheritance rule: A near-zero at any infrastructure gate means a near-zero in the competitive gates. The annotation system classifies whatever survived Rendering. If Rendering destroyed 30% of the content’s semantic structure, the annotation system works on 70% of what was published - and that 70% propagates through every downstream gate, multiplying with every other imperfection in the chain. The competitive phase cannot compensate for infrastructure failure. Fix the infrastructure first.
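The multiplication described above is simple to state numerically. The per-gate survival fractions below are invented for illustration; the point is the mechanism, not the numbers.

```python
from math import prod

def surviving_signal(gate_fractions):
    """Fraction of published semantic signal that survives the whole chain.
    Losses multiply: each gate operates only on what the previous gate
    passed, so imperfections compound rather than add."""
    return prod(gate_fractions)

# If Rendering preserves 70% and three later stages each preserve 90%:
surviving_signal([0.70, 0.90, 0.90, 0.90])  # ~0.51 - half the signal is gone
```

This is why the competitive phase cannot compensate for infrastructure failure: no downstream multiplier above 1.0 exists.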
2. The Selection Pivot (Gate 6 - Recruitment)
Gate 6 is where head-to-head competition begins. The system explicitly chooses between you and alternatives.
| Gate | Name | Test | Question |
|---|---|---|---|
| 6 | Recruitment | Competitive absorption | Does this content deserve inclusion in the system’s knowledge? |
The Universal Checkpoint
Every entry mode - Pull, Push Discovery, Push Data, Push Context, Ambient - must pass through Recruited. This is the universal checkpoint. No content reaches a user without being Recruited first. Recruitment is competitive regardless of entry mode.
The Algorithmic Trinity at Recruitment
Which content does the system recruit? The answer depends on which Trinity member is doing the recruiting:
Search engines recruit for results pages. Selection criteria: relevance to query, content quality signals, freshness, diversity requirements.
Knowledge Graphs recruit for entity extraction and consolidation. Selection criteria: entity salience, structural clarity, source authority, factual consistency with existing graph.
LLMs recruit for training data and RAG retrieval. Selection criteria: information density, linguistic quality, domain coverage, corroboration patterns.
The same content may be recruited by one, two, or all three. Each has different selection criteria, different refresh cycles (the Three Speeds), and different confidence thresholds.
Entry Modes at Recruitment: Mode 1 (Pull) content arrives after surviving gates 1-6. Mode 2 (Push Discovery) content arrives after accelerated discovery + full crawl/render/index. Mode 3 (Push Data) content arrives directly - structured feeds skip the bot phase and enter at or near Recruitment. Mode 4 (MCP-as-data) enters directly at Recruitment - the agent actively selects this source. Mode 5 (Ambient) - content was already Recruited in prior pipeline passes.
Comparison-Intent as the Highest-Leverage Tracking Focus
The highest-leverage tracking focus is the selection zone immediately after discovery - especially comparison-intent queries and frames:
“best [product/service] for…”
“under €X / within constraints”
“alternatives to…”
“compare A vs B”
These queries reveal which offers and entities are entering (or being excluded from) the consideration set. They are the earliest reliable indicator of downstream Won changes.
Universal equivalents: B2B (“best vendor for [use case]”, “alternatives to [competitor]”). Shop window (“best [service] near me”, “under [budget]”). Media (“best analysis of…”, “who explains…”).
Prioritise instrumentation and content for comparison-intent classes. They are the highest-leverage upstream predictors of downstream Won.
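A naive way to instrument the comparison-intent classes listed above. The patterns are illustrative, not exhaustive, and any production classifier would need language- and vertical-specific variants.

```python
import re

# Hypothetical pattern list covering the frames named in the text.
COMPARISON_PATTERNS = [
    r"\bbest\b.+\bfor\b",      # "best [product/service] for ..."
    r"\bunder\b\s*[\d€$£]",    # budget-constrained selection
    r"\balternatives? to\b",   # substitution intent
    r"\bcompare\b|\bvs\.?\b",  # explicit head-to-head
    r"\bnear me\b",            # shop-window equivalent
]

def is_comparison_intent(query):
    """True when a query/frame signals consideration-set selection -
    the earliest reliable indicator of downstream Won changes."""
    q = query.lower()
    return any(re.search(p, q) for p in COMPARISON_PATTERNS)

is_comparison_intent("best CRM for small agencies")   # True
is_comparison_intent("alternatives to Competitor X")  # True
is_comparison_intent("opening hours head office")     # False
```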
Competitive Escalation from Recruitment
| Gate | Competitive Intensity | What Happens |
|---|---|---|
| 5 (Annotation) | Scorecard created | Classification vs. competitors determines downstream positioning |
| 6 (Recruitment) | Set competition | Multiple brands recruited - the field is selected |
| 7 (Grounding) | Narrowing competition | Fewer sources as confidence narrows - the shortlist |
| 8 (Display) | Finite recommendations | Often one primary recommendation - the finalist |
| 9 (Won) | Binary, zero-sum | One brand wins. Every competitor loses. |
Authority Is an Output, Not an Input
The sequence the industry follows: create content → build links → accumulate authority → get recommended. The sequence that works: establish identity → become understood → earn corroboration → receive authority as a consequence.
The industry skips to Credibility (C) without solving Understandability (U). They pile “authority signals” onto an entity the algorithm has not confidently identified. Backlinks pointing at an ambiguous entity attach to nothing.
Authority is also relative, not absolute. You are not “authoritative.” You are more authoritative than the alternatives, for a specific topic, in a specific context. That is Gate 6. Being good enough is not the threshold. Being chosen over the competition is.
Topical Authority: The 3×3 Matrix
What the industry calls “Topical Authority” is actually Topical Coverage - the entry ticket, not the destination. Topical Authority, fully defined (Jason Barnard, 2026), is a three-by-three matrix:
| | Dimension 1 | Dimension 2 | Dimension 3 |
|---|---|---|---|
| Coverage (what you say) | Depth | Breadth | Original Thought |
| Architecture (how you structure it) | Source Context | Topical Map | Semantic Network |
| Position (where you stand relative to others) | Temporal | Hierarchical | Narrative |
The three Position dimensions are the authority outputs:
Temporal Authority: Who was first, with proof. Provenance.
Hierarchical Authority: Who others defer to. Peer recognition.
Narrative Authority: Who all roads lead to. Topological centrality.
Entity Understanding is the underlying requirement for all nine cells. The 3×3 matrix sits ON TOP of entity identity. Identity is the prerequisite that makes the matrix addressable.
Maps to the Three Pillars of Semantic SEO (Jason Barnard, 2026): Who I am (Identity = U), What I know (Expertise = C), How is it helpful (Relevance = D). Identity and Relevance form the Identity-Relevance Loop. Expertise sits inside it.
3. The Grounding Gate (Gate 7)
| Gate | Name | Test | Question |
|---|---|---|---|
| 7 | Grounding | Cross-reference confidence | Is this the right answer for this query? |
Grounding is what happens when an LLM knows it does not know enough and needs to check. Search engines do not ground - they retrieve. Knowledge Graphs do not ground - they serve. Only LLMs ground, because only LLMs have the gap between stale training data and fresh reality.
Three Grounding Sources
Three grounding sources operate simultaneously:
| Grounding Source | What It Provides | Fuzz Level | Speed | Best For |
|---|---|---|---|---|
| Search Engine | Documents, passages, citations | High | Fast (real-time) | Novel queries, current events, TOFU |
| Knowledge Graph | Structured facts via federated lookup | Low | Fast (cached) | Entity identity, attributes, relationships, BOFU |
| Specialist SLM | Domain-validated inference | Medium | Fast (cached model) | Domain expertise, claim verification, MOFU |
SE Grounding (dominant today): LLM doesn’t know → sends cascading queries to web index → gets documents → extracts answer. Highest fuzz - documents, not facts.
KG Grounding (rising): LLM checks federated lookup table → routes to appropriate filing cabinet → gets structured facts. Low fuzz. Binary edges. Google almost certainly already does this.
SLM Grounding (emerging): Once enough consistent data about a domain crosses the Annotation Cost Threshold, the system builds a specialist small language model. That SLM becomes a third grounding source - not document retrieval, not fact lookup, but domain-expert verification.
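The selection between the three grounding sources can be sketched as a routing decision. This is a simplification under stated assumptions: the query-kind labels are invented, and real systems blend sources rather than picking one.

```python
def pick_grounding_source(query_kind, slm_available=False):
    """Choose a grounding source per the three-source model:
    KG for entity facts (low fuzz), a specialist SLM for in-domain claim
    verification when one exists (medium fuzz), and search-engine
    grounding as the high-fuzz default for novel or current queries."""
    if query_kind == "entity_fact":
        return "knowledge_graph"     # structured facts, binary edges
    if query_kind == "domain_claim" and slm_available:
        return "specialist_slm"      # domain-expert verification
    return "search_engine"           # documents, not facts: highest fuzz

pick_grounding_source("entity_fact")                       # "knowledge_graph"
pick_grounding_source("domain_claim", slm_available=True)  # "specialist_slm"
pick_grounding_source("current_event")                     # "search_engine"
```

The `slm_available` flag encodes the Annotation Cost Threshold: the SLM route only exists once enough consistent domain data has justified building the model.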
Critical clarification: Three-source grounding maps to Three Speeds, NOT to UCD. UCD operates at Gate 8 (Display). Grounding happens at Gate 7. The confusion arises because both use threes - but UCD measures the person’s experience at Display, while Three-Source Grounding describes the machine’s verification at Grounding.
Entry Modes at Grounding: Mode 4 (MCP-as-grounding) enters here - the agent queries the brand’s MCP server for real-time verification during response generation. Live pricing, stock levels, booking availability injected directly into grounding without ever being in the web index. Mode 5 (Ambient) - AI triggers grounding check against accumulated entity confidence before proactively surfacing the recommendation.
Today, SE grounding is the dominant mechanism. KG and SLM grounding are catching up. The trajectory is clear: as AI assistive engines mature, the grounding balance shifts toward KG and SLM. Brands investing in entity optimisation and structured knowledge now are building for the architecture that is coming.
The Grounding Lifecycle (Ihab, Microsoft)
User asks → LLM checks confidence → if low, sends cascading queries to search → results → bots scrape select pages → answer generated. Not all answers require retrieval - embedded vs. fresh is a confidence decision. Bot purposes: index/train, retrieve/ground, assist/act. Each requires different confidence thresholds.
The Whole-Page Algorithm Insight
Search “Microsoft” via the Bing API: 10 blue links from microsoft.com. Search “Microsoft” on Google.com: the whole-page algorithm physically REMOVES nine of them and replaces them with editorially diverse alternatives. The algorithm is subtractive, not just additive. Grounding gets the raw ranking, not the editorial version.
4. The Trinity Split After Annotation
After Annotation (Gate 5), the pipeline SPLITS three ways. Each Trinity member processes the annotated content through its own mechanism:
| Path | Mechanism | Grounding? |
|---|---|---|
| Search Engine | Query → retrieve from index → rank → display | NO - retrieves from its own index |
| Knowledge Graph | Entity query → retrieve structured facts → display | NO - serves stored structured facts |
| LLM/Assistive Engine | Prompt → confidence check → cascading queries → search API → re-fetch → synthesise | YES - grounding is LLM-specific |
This split is why the same entity can be present in the Knowledge Graph, absent from search results, and hallucinated by the LLM - or any permutation. Each Trinity member has its own Recruitment criteria, its own Grounding mechanism (or none), and its own Display format. Optimising for one does not guarantee the others.
The Filing Cabinet Model
The federated architecture operates through filing cabinets - not one monolithic Knowledge Graph.
Each entity has a filing cabinet identified by kgmid + name in the central lookup table. Each filing cabinet has drawers (attributes), built on templates per entity type (Person, Corporation, Film, Podcast, etc.). Some drawers are locked by the Knowledge Vault or specialist vertical (entity name, ID, Entity Home, often description/date of birth). Some drawers are unlocked, overridable by the web index algorithm in real-time. The web index algorithm acts as a real-time correction layer on top of locked drawers. Each drawer’s content = a “micro Featured Snippet” - populated the same way Featured Snippets are selected.
The endgame is NOT “one graph to rule them all.” It is the lookup table BECOMING the graph - a lightweight federation layer that doesn’t store knowledge itself but knows WHERE each piece of knowledge lives across all specialist verticals, and can route queries to the right filing cabinet. The kgmid becomes the universal key.
This is a universal design principle: Universal Search was Google’s first federation - Images, Videos, News, Maps, Shopping, each with its own index, assembled via the Whole Page Algorithm. The Whole Page Algorithm IS a lookup-and-route system. Multimodal is the same pattern applied to input formats rather than output verticals. Every assistive engine that needs to answer real-world questions converges on this pattern. The naming: federated lookup layer = the “multi-encyclopedia joinup graph.”
Google Maps is a spatial-relational graph. The Knowledge Vault is a factual-attributive graph. Google Books is a bibliographic graph. Google Scholar is a citation graph. Each has its own internal logic. Merging them would degrade both. The universal design principle: specialist verticals connected by a universal lookup table.
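The lookup-and-route pattern described above can be sketched in a few lines. Everything here is hypothetical scaffolding - the kgmid value, the vertical names, and the drawer contents are invented for the example; only the design principle (route, don't store) comes from the text.

```python
class FederatedLookup:
    """The lookup table BECOMES the graph: it stores no knowledge itself,
    only where each entity's filing cabinet lives, keyed by kgmid."""
    def __init__(self):
        self.routes = {}  # kgmid -> (specialist vertical, filing cabinet)

    def register(self, kgmid, vertical, cabinet):
        self.routes[kgmid] = (vertical, cabinet)

    def query(self, kgmid, drawer):
        """Route the query to the right cabinet; return which vertical
        answered and the drawer's content (a 'micro Featured Snippet')."""
        vertical, cabinet = self.routes[kgmid]
        return vertical, cabinet.get(drawer)

lookup = FederatedLookup()
lookup.register("/g/abc123", "knowledge_vault",          # hypothetical kgmid
                {"name": "Example Brand",
                 "entity_home": "https://example.com/entity-home"})
lookup.query("/g/abc123", "entity_home")
# -> ("knowledge_vault", "https://example.com/entity-home")
```

The kgmid as the universal key is the whole trick: adding a new specialist vertical means registering routes, not merging graphs.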
5. The Three Graphs
The Algorithmic Trinity processes knowledge through three distinct graph structures. Each has a different fuzziness level. The fuzziness ordering provides the structural justification for the U→C→D build direction (detailed in TKP-3).
| Graph | Fuzziness | UCD Layer | What It Stores | Optimisation Target |
|---|---|---|---|---|
| Entity Graph | Low (structured, explicit, binary edges) | U | Entities, attributes, relationships, Schema.org | Structured data, Entity Home, disambiguation |
| Document Graph | Medium (semi-structured, ranked edges) | C | Documents, links, anchor text as labelled edges | Content quality, corroboration, link authority |
| Concept Graph | High (fuzzy, inferred, probabilistic) | D | Learned associations, embeddings, training patterns | Consistent messaging, brand narrative, reputation |
Build from low-fuzziness to high-fuzziness. Optimising in reverse - targeting LLM recommendation (Concept Graph) without Entity Graph verification - risks hallucination rather than advocacy.
Anchor text functions as labelled edges in the Document Graph (confirmed by Fabrice Canel, validated by Meenaz Merchant, Microsoft Bing). When a link uses descriptive anchor text, the algorithm records not just “Page A links to Page B” but “Page A describes Page B as [anchor text].” This makes anchor text optimisation an act of graph annotation, not link building.
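A minimal sketch of what "anchor text as labelled edges" means structurally. The dict-of-lists representation is an illustration of the concept, not the actual index format of any engine.

```python
def add_link(graph, source, target, anchor_text=None):
    """Record a link as a labelled edge: not just 'A links to B' but
    'A describes B as [anchor text]'. Writing the label is an act of
    graph annotation, not link building."""
    graph.setdefault(source, []).append((target, anchor_text))

doc_graph = {}
add_link(doc_graph, "pageA", "pageB", anchor_text="entity optimisation guide")
add_link(doc_graph, "pageA", "pageC")  # bare link: the edge carries no label
doc_graph["pageA"]
# [("pageB", "entity optimisation guide"), ("pageC", None)]
```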
Multi-Graph Coverage Advantage
Under Reciprocal Rank Fusion analysis (Cormack et al., 2009), an entity present in all three graphs at equivalent rank achieves an RRF score 3× that of an entity present in only one graph. The effective advantage compounds beyond 3× through cross-graph amplification:
Entity Graph → Document Graph: Verified entity status improves crawl budget allocation at Gate 1, increasing Document Graph ranking.
Entity Graph → Concept Graph: KG-verified entities receive higher grounding confidence from Assistive Engines at Gate 7.
Document Graph → Concept Graph: Higher-ranked documents are preferentially selected during RAG retrieval at Gate 7.
This explains why entities present across all three representations achieve disproportionate visibility relative to entities optimised for a single graph. The multi-graph advantage is the quantitative case for the UCD build order: Entity Graph first (U), Document Graph second (C), Concept Graph third (D).
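The Reciprocal Rank Fusion arithmetic behind the 3× claim can be shown directly. The formula is the standard one from Cormack et al. (2009) with the conventional k = 60; the ranks are invented for illustration.

```python
def rrf_score(ranks, k=60):
    """Reciprocal Rank Fusion: sum 1/(k + rank) over every graph in which
    the entity appears (Cormack et al., 2009). Absence from a graph
    (rank None) contributes nothing."""
    return sum(1.0 / (k + r) for r in ranks if r is not None)

# Entity present at rank 3 in all three graphs vs. rank 3 in just one:
everywhere = rrf_score([3, 3, 3])        # Entity + Document + Concept
single     = rrf_score([3, None, None])  # one graph only
everywhere / single  # ratio is 3: 3/(k+r) versus 1/(k+r)
```

Note this captures only the base RRF advantage; the cross-graph amplification effects listed above would compound on top of it.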
6. Two Distinct Three Speeds
There are TWO “Three Speeds” operating in the system. Related but different.
Three Speeds of KG Ingestion (INPUT - how fast filing cabinets get filled)
| Algorithm | Role | Update Speed | Pipeline Mapping |
|---|---|---|---|
| Knowledge Extraction | Annotates crawled content with structure + confidence scores | Daily (per-page, real-time with crawling) | Gates 3-6 |
| Knowledge Panel | Builds Panel from web index cross-referencing | Weekly (per-entity assessment) | Gate 8 (Display) |
| Knowledge Vault | Moves entities into permanent KG | Monthly (data lake, manual engineer intervention) | Gate 7 + long-term entity confidence |
Three Speeds of Trinity Output (OUTPUT - how fast each Trinity member refreshes what it shows)
| Trinity Member | What It Updates | Refresh Cycle | Defensibility |
|---|---|---|---|
| Search Engine | Index, rankings, results | Days-weeks | Easily displaced |
| Knowledge Graph | Entity facts, attributes, relationships | Weeks-months | Hard to displace |
| LLM | Model weights, associations, domain patterns | Months-years | Near-permanent |
Maps to Zero-Risk Year: Phase 1 (Fix/U) = Speed 2 foundation (Entity Home, KG). Phase 2 (Lock-In/C) = Speed 1 dominance (content, search). Phase 3 (Expand/D) = Speed 3 investment (SLM inclusion). Counter-intuitive: you start with Speed 2, not Speed 1. Without the entity foundation, Speed 1 content has no entity anchor.
Client pitch for Three Speeds:
Short term (Speed 1 - SE grounding): “We fix your search visibility. Results in weeks.”
Mid term (Speed 2 - KG grounding): “We build your entity in the Knowledge Graph. The machine doesn’t need to search for you - it already knows you.”
Long term (Speed 3 - SLM grounding): “We make your brand part of how the LLM thinks about your domain. You’re not just in the encyclopedia. You’re the professor the system asks.”
Entry Modes × Three Speeds: Mode 2 (Push Discovery/IndexNow) accelerates Speed 1 - content reaches the search index faster. Mode 3 (Push Data/Merchant Feed) operates at its own speed - structured feeds update in near-real-time, independent of crawl cycles. Mode 4 (MCP) is instantaneous - real-time data, no crawl or training cycle. The push and pull modes each have different speed profiles, and brands must manage content freshness across all active modes.
7. Cascading Confidence at Competitive Gates
Cascading Confidence accumulates through every pipeline stage (covered at infrastructure gates in Act I above). At the competitive gates, it takes on a different character: confidence becomes relative, not absolute.
Confidence at Gates 5-7
| Gate | What Confidence Means | What Kills It | Entry Mode Notes |
|---|---|---|---|
| Annotation | “We can tag reliably across dimensions” | Ambiguous content, poor semantic HTML | Mode 3 partially pre-annotated (GTINs, categories); Mode 4 has schema-defined annotation |
| Recruitment | “This deserves inclusion in our knowledge” | Competitor has higher confidence | UNIVERSAL CHECKPOINT - all five modes must pass here |
| Grounding | “This is the right answer for this query” | Low corroboration, weak relevance | Mode 4 (MCP-as-grounding) enters here; Mode 5 triggers here |
Corroboration Dynamics
The Corroboration Threshold: Empirical observation places the threshold at approximately three independent high-confidence sources (the entity’s own canonical source plus two to three corroborating third-party sources). Below this threshold, LLM-generated responses use hedging language (“claims to be,” “according to their website”). Above it, responses assert directly (“is,” “known for”). The Corroboration Threshold operates as a meta-gate: entities below it face systematically lower pass probabilities at Gates 7-9 regardless of content quality.
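The hedge/assert switch reduces to a one-line rule. The threshold value is the empirical observation cited above, not a published platform constant, so treat it as an assumption.

```python
def assertion_style(independent_sources, threshold=3):
    """Below the Corroboration Threshold the model hedges ('claims to be',
    'according to their website'); at or above it, it asserts directly
    ('is', 'known for'). Threshold ~3 is an empirical observation."""
    return "assert" if independent_sources >= threshold else "hedge"

assertion_style(1)  # "hedge"
assertion_style(4)  # "assert"
```

Because the style itself is learned and re-emitted (the Self-Fulfilling and Self-Defeating Prophecies below), where an entity sits relative to this threshold tends to be self-reinforcing.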
The Self-Fulfilling Prophecy: LLM systems learn from expression patterns, not merely from information existence. Confident assertion on the Entity Home → corroborated by independent sources with equivalent confidence → LLM encodes confident pattern → LLM asserts confidently → user engagement validates → reinforced confidence. Won-probability increases with each cycle.
The Self-Defeating Prophecy: Hedging language on Entity Home → LLM encodes hedge pattern → LLM hedges in output → lower engagement → negative signal → reinforced hedging. Won-probability decreases with each cycle. The Entity Home initialises this dynamic. Its linguistic confidence level sets the pattern.
Corroboration Decay: The structural erosion of entity confidence through third-party content DELETION rather than contradiction. Third-party sources that contributed to the entity graph are removed: pages deleted, domains expire, publishers reorganise. Each deletion removes a validation point. Confidence degrades. The pipeline runs backwards: content moves from Recruited to not-Recruited as the evidence base erodes. Building confidence requires sustained multi-source corroboration over months or years; eroding it requires only deletion events. The antidote is corroboration architecture: Entity Homes on controlled domains, permanent records in institutional repositories, and distributed rather than concentrated third-party evidence.
Three Confidence Phases
| Phase | Name | KG State | Grounding Role |
|---|---|---|---|
| 1 | Grounding-dependent | Data lake (stale) | PRIMARY - search grounding is main source |
| 2 | Hybrid | Data river for trusted entities, lake for others | SHARED - KG and search both provide grounding |
| 3 | Graph-native | Continuous extraction from trusted sources | FALLBACK - grounding only triggers for inconsistencies |
The transition is entity-specific and platform-specific. The Investment Inflection Point is where spending on grounding produces less return than spending on graph consolidation.
The Data Lake → Data River Evolution
Knowledge Graphs currently operate as data lakes: crawl → accumulate → periodic batch processing → graph update. Weeks to months.
The evolution is toward data rivers: continuous crawling, pushing content past the Confidence Curator - an extraction algorithm that selectively extracts from trusted sources as they flow past. The Curator does not extract from everything. It curates from sources that have earned trusted source status.
Being a trusted source in the data river is the prerequisite for freshness. Freshness without trust = ignored by the Curator. Trust without freshness = stuck in the data lake.
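The Confidence Curator's behaviour can be sketched as a filter over the stream. A toy model under stated assumptions: the document shape and source names are invented, and real trust status would be scored, not a simple set membership.

```python
def confidence_curator(stream, trusted_sources):
    """Data-river sketch: as documents flow past, extract facts only from
    sources that have earned trusted status. Freshness without trust is
    ignored; trust without freshness never enters the stream at all."""
    for doc in stream:
        if doc["source"] in trusted_sources:
            yield doc["facts"]  # continuous extraction into the graph
        # untrusted documents flow past unextracted

river = [
    {"source": "entity-home.example", "facts": {"founded": 2015}},
    {"source": "random-blog.example", "facts": {"founded": 1999}},
]
list(confidence_curator(river, trusted_sources={"entity-home.example"}))
# [{"founded": 2015}]
```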
Entry Modes and the Data River: Mode 2 (Push Discovery) helps content enter the river faster. Mode 3 (Push Data) creates a parallel data stream that bypasses the river entirely - structured feeds update the system’s knowledge without flowing through the traditional crawl-extract cycle. Mode 4 (MCP) is the ultimate real-time stream - live data on demand, no river needed. The Data River evolution is primarily about Mode 1 (Pull) becoming more efficient. Modes 3-4 represent entirely different plumbing.
“Cascading” as the Universal Term
| Term | Instance Of | Status |
|---|---|---|
| Cascading Queries | Query-time cascading | Coined - Jason Barnard |
| Cascading Confidence | Pipeline-long confidence accumulation | Coined - Jason Barnard, 2026 - the throughline |
| Cascading Signals | All signals passing stage-to-stage | Coined - Jason Barnard, 2026 - the broader category |
8. The Cascading Prerequisite: Why U→C→D Is Mechanical
This is the single most important differentiating insight in the entire Kalicube Process framework. See also: Parent Summary §5.
Cascading Confidence (§7) describes how confidence accumulates gate by gate across the horizontal pipeline. The Cascading Prerequisite describes a different dimension: UCD layers are mechanical prerequisites, not a recommended sequence.
The Mechanism
Credibility signals (NEEATT, Topical Authority, links, mentions, reviews) require an entity node in the graph to attach to. That node is created at U (Understandability). Specifically: the entity must have a KGMID or equivalent identifier, an Entity Home with structured data, and sufficient corroboration for the machine to distinguish it from namesakes and ambiguous references. Without this node, credibility signals are orphaned data. They exist in the system - crawled, indexed, annotated - but they attach to nothing, or worse, they attach to the wrong entity.
Deliverability signals (omnipresence, recommendation triggers, conversational visibility, multi-vertical presence) require confidence weight on the entity node. That weight is accumulated at C (Credibility) through NEEATT demonstration, third-party corroboration, independent validation, and competitive performance at the Recruited gate (Gate 6). Without this weight, the entity exists but the machine lacks the confidence to recommend it. The LLM hedges. The search engine ranks but does not feature. The assistive engine knows you but does not advocate.
U creates the entity node. C loads the entity node. D activates the entity node.
This manifests concretely at the Annotation gate (Gate 5). When content arrives for annotation, the system asks: does this entity have an established node? If yes, annotation can attach credibility dimensions (expertise, authority, trust signals) to the existing node. If no, annotation proceeds generically - the content may be indexed and classified, but the credibility signals have no entity-specific anchor. This is why 99% of practitioners miss the critical layer: they build credibility (links, mentions, authority signals) for an entity the machine does not yet understand, and wonder why the signals produce no visible effect.
The same cascading filter applies between C and D. NEEATT signals and Topical Authority give the entity confidence weight. That weight is what unlocks the D layer - the machine’s willingness to recommend proactively, to include in category comparisons, to surface in conversational queries where the brand was not explicitly named. Without sufficient C-layer confidence, the machine may acknowledge the entity exists but will not advocate for it. The filter chain: U unlocks C, C unlocks D.
NEEATT: The Credibility Framework (Applied to Understood Entities)
NEEATT expands Google’s EEAT by adding Notability and Transparency. Six credibility dimensions, evaluated at three levels (content, author, publisher):
| Dimension | What the Machine Evaluates | Key Insight |
|---|---|---|
| Notability | Is this entity notable within its niche? | Niche-relative, not fame-dependent. A place on the board of the Parisian Poodle Parlours Association demonstrates notability for a poodle parlour. |
| Experience | Has this entity demonstrated real-world experience? | First-hand evidence, not claimed expertise. |
| Expertise | Does this entity have verifiable expertise? | Credentials, publications, track record. |
| Authority | Do other entities recognise this one as authoritative? | Third-party citations, co-citation patterns, inherited credibility. Authority is an output, not an input. |
| Trustworthiness | Is this entity consistently reliable? | Track record of accuracy, absence of contradictions. |
| Transparency | Is this entity open about who it is and what it does? | Clear disclosure, accessible information. Credit: Jarno van Driel. |
Critical dependency: NEEATT signals do not apply to an entity that is not understood. You can have all the expertise, experience, authority, and trustworthiness in the world - if the machine does not understand what you are (no Entity Home, no structured data, no KGMID, ambiguous identity), those signals have no entity to attach to. The Cascading Prerequisite is the reason NEEATT implementation fails for most brands: they invest in credibility signals before establishing understandability.
The same applies to Topical Authority (the 3×3 Matrix from §2): topic ownership signals require entity-topic association at the annotation layer. If the entity is not understood, the association cannot form.
Two Edge Cases That Prove the Rule
The Empty Room Effect (temporary D without U). If a topic is obscure enough, or a question has never been asked, a brand can achieve D-layer visibility without U or C. The machine has one source. It uses it. The BBC journalist who wrote a blog post claiming to be a champion hot dog eater and got ChatGPT to repeat it - in 20 minutes. That is not hacking. That is being the only voice in an empty room.
The moment the room fills - the moment a competitor enters, a second source appears, or a real person asks a competitive question - the cascade asserts itself. The Authoritas controlled experiment (2026): fake expert personas planted across 600+ articles in major UK media. AI indexed them all. Across 55 topic-based queries and 9 AI models, zero fakes were recommended. On name-based queries (the BBC method - ask about the specific fake by name), models failed 29.5% of the time. AI is trivially easy to fool on questions nobody asks. It is remarkably hard to fool on questions that matter.
The Quicksand Effect (temporary C without solid U). If competition is so poor the engine has no alternative, a brand can win at C without solid U. Works until a proper competitor arrives with Entity Home, structured data, and corroborated identity - someone who gives the machine an alternative with a proper foundation. Then the quicksand swallows: credibility signals have no entity node to anchor to, and the competitor’s do. The machine switches allegiance not because the competitor has more credibility, but because the competitor’s credibility attaches to something understood.
Why Every Other Methodology Fails Here
Every competing approach - SEO, AEO (as used by others), GEO, AIEO - either addresses one UCD layer or attempts to skip layers:
- SEO without U: Links and authority signals have no entity to attach to. Water into a bucket with no bottom.
- GEO/AEO without U+C: Content optimised for AI responses has no confident entity representation. AI hedges.
- AAO/AIEO without the full stack: Training fails without U (they don’t know who you are) and C (they don’t trust what they know).
These are not theoretical failures. They are the mechanical reason why agencies sell early citations as success (Empty Room Effect) and why those citations evaporate when competition arrives. Recognition is not recommendation, and the difference between the two is the Cascading Prerequisite.
Temporal Compounding
The cascade gets harder to displace over time. Each new corroborating source increases confidence. Each confidence increase strengthens future recommendations. Late movers face fossils: correcting a confident algorithm requires more evidence than building the position from scratch.
9. The Diagnostic Pattern (Gates 5-9)
Each gate is boolean. Content either passes or stalls. Content that stalls at any gate cannot reach any downstream gate.
| A(5) | Re(6) | G(7) | Di(8) | W(9) | Diagnosis | Action |
|---|---|---|---|---|---|---|
| ✗ | ✗ | ✗ | ✗ | ✗ | Foundation broken | Fix entity annotation first |
| ✓ | ✗ | ✗ | ✗ | ✗ | Annotated but no Trinity member recruited it | Build signals for KG/Search/LLM adoption |
| ✓ | ✓ | ✗ | ✗ | ✗ | Recruited but not corroborated | Build third-party proof |
| ✓ | ✓ | ✓ | ✗ | ✗ | Corroborated but AI doesn’t cite | Tighten Display-level language or wait for propagation |
| ✓ | ✓ | ✓ | ✓ | ✗ | Displayed but user doesn’t choose you | Conversion framing problem - fix Won-gate messaging |
| ✓ | ✓ | ✓ | ✓ | ✓ | Full cascade. The Perfect Click | Defend and reinforce |
| ✗ | ✗ | ✗ | ✓ | ✗ | DANGER. AI cites without proper grounding | Fragile - fix urgently before correction erases you |
Entry Modes and Diagnostics: When diagnosing via entry mode, the practitioner question shifts from “which gate is failing?” to “which entry mode is this content using, and which gates still apply?” A brand failing at Gate 6 via Mode 1 (Pull) might succeed via Mode 3 (Push Data) if the content is available as a structured feed. The diagnostic becomes multi-dimensional: test each entry mode independently.
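The diagnostic table above is mechanical enough to encode. A sketch, not a Kalicube Pro implementation: it returns only the first stalled gate plus the danger pattern, whereas the full table also maps each pattern to an action.

```python
def diagnose(annotated, recruited, grounded, displayed, won):
    """Boolean gate diagnostics for Gates 5-9. Content that stalls at any
    gate cannot reach any downstream gate, so the first failure in
    sequence is the one to fix. Display without Grounding is the fragile
    danger pattern: a citation with no foundation."""
    if displayed and not grounded:
        return "DANGER: displayed without grounding - fix before correction erases you"
    gates = [("Annotation", annotated), ("Recruitment", recruited),
             ("Grounding", grounded), ("Display", displayed), ("Won", won)]
    for name, passed in gates:
        if not passed:
            return f"Stalled at {name}"
    return "Full cascade: the Perfect Click"

diagnose(True, True, False, False, False)   # "Stalled at Grounding"
diagnose(True, True, True, True, True)      # "Full cascade: the Perfect Click"
```

Per the entry-mode note above, a full diagnostic would run this check once per active entry mode, since each mode skips different gates.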
Act II Stall Patterns
| Stall | Pattern | Diagnosis | Action |
|---|---|---|---|
| 5 | Indexed but not Annotated | The phase boundary. Topic signals ambiguous, entity associations unclear | Strengthen entity signals, improve semantic clarity |
| 6 | Annotated but not Recruited | Classified correctly but competitors’ content selected instead. The competitive absorption gate | Build corroboration, strengthen entity confidence, differentiate at annotation dimensions |
Revenue mapping: Algorithm stalls at Gates 5-6 represent Competitive Loss - content is in the system, but competitors’ content is preferred at Recruitment and Grounding. The machine knows you exist but does not select you. “Why didn’t AI mention us?” This is not a Revenue Tax (taxes apply at Display where the person experiences the failure). Competitive Loss is upstream: the person never sees the brand because the algorithm chose a competitor before Display. Resolve Act II stalls only after Act I (infrastructure) is clean - upstream stalls render downstream optimisation structurally ineffective.
Act III Stall Patterns (Preview - detailed in TKP-3)
| Stall | Pattern | Diagnosis | Action |
|---|---|---|---|
| 7 | Recruited but not Grounded | Ghost contribution: system absorbed the content but never surfaces it | Build third-party proof |
| 8 | Grounded but not Displayed | Used as source material but brand not visible. Ghost contribution at display level | Tighten Display-level language |
| 9 | Displayed but not Won | Appeared but user chose competitor. Terminal boolean | Conversion framing - TKP-3 territory |
10. Google’s Structural Advantage
Google owns all three components of the Algorithmic Trinity AND is best-in-class at each.
| Component | Google | Microsoft | Everyone Else |
|---|---|---|---|
| Search Engine | Own (best index globally) | Own (Bing - solid second) | Rent or scrape |
| Assistive Engine | Gemini (strong, improving) | Own models + GPT history | Various (strong on AE, lack other components) |
| Knowledge Graph | Freebase → Google KG | Satori (limited) | Wikidata (public, inferior); most have nothing |
Plus multimodal depth: YouTube (video), Google Maps (local), Google Books (scanned libraries), Google Scholar (academic), Google News, Google Shopping + Merchant Center. When grounding goes multimodal, Google grounds from ALL verticals simultaneously - video transcripts, local reviews, academic papers, product availability, news. All first-party. All full-index access. Everyone else grounds from blue links.
Three competitive positions: Google’s problem is internal (silos - solvable). Microsoft’s problem is partial (rooms missing). Everyone else’s problem is structural (no owned index - unsolvable).
11. The Handoff to Display (Gate 8 → TKP-3)
Gate 8 is where the algorithm’s work becomes visible to a person. It is the second hinge in the pipeline - the transition from TKP-2 to TKP-3.
The Opaque Federation at Display: The person never sees “the KG told you this” or “the SE found this” or “the LLM inferred this.” They see one answer from one assistant in one walled garden. The engine decides what to show (Display), how confidently to show it (Credibility framing), and whether to show it at all (Deliverability). UCD measures the person’s trust in WHAT THE ENGINE TELLS THEM ABOUT THE BRAND, not their trust in the brand directly. The engine is curator, federator, and gatekeeper - judge, jury, and executioner.
At Gate 8, the person walks in for the first time. Everything before was machine-to-machine. Everything after involves a human making decisions. The UCD funnel activates. The V/I/T lens applies. The revenue taxes become measurable. Commerce begins.
For Display mechanics, UCD framework, Won resolutions, Gate 10 (Served), and all commercial application: see TKP-3.
The Feedback Loop at Algorithm Gates
Gate 10 (Served) feedback does not only re-enter at Bot gates (see TKP-1). It re-enters at Algorithm gates too - and this is where it has the most competitive impact.
At Annotation (Gate 5): Post-engagement data enriches entity classification. Reviews, satisfaction patterns, support resolution data, return rates - all inform how the machine classifies the entity. A brand with consistently positive post-engagement signals earns richer, more confident annotation. The machine does not just know WHAT the entity is; it knows HOW WELL the entity performs. This is the mechanism behind “operational credibility” - credibility earned through delivery, not through content.
At Recruitment (Gate 6): Post-engagement Q&A objects (see TKP-3a §5) are recruitable content. When the agent encounters a prospect’s question and has access to real post-sale dissection data - “customers who bought X commonly ask about Y, and the resolution is Z” - that is higher-signal content than marketing claims. More proof objects = more for the system to recruit from. The brand with structured post-engagement data gives the algorithm more ammunition than the brand with a marketing-only content library.
At Grounding (Gate 7): The machine cross-references against actual outcomes, not just claims. Post-engagement truth is grounding evidence. A brand that claims “95% satisfaction rate” and has structured data confirming it gives the machine a verification path. A brand that claims the same without structured evidence forces the machine to hedge. Grounding against outcomes is stronger than grounding against claims.
The distributed re-entry principle (same as Bot phase): WHERE post-engagement signals re-enter Algorithm gates depends on the platform’s strategy, mechanical implementation, and partnerships. Google may use review signals at Annotation. A chatbot platform may use satisfaction data at Grounding. An agent with API access may pull post-engagement Q&A objects directly at Recruitment. The brand cannot control which gate receives the feedback. The brand CAN control whether the data exists, whether it is structured, and whether it is accessible. This is the data governance moat: the brand that connects post-sales reality into the data layer feeding agents wins - not because of better marketing, but because the agent has better evidence to work with.
12. Key Quotes (Algorithm-Relevant)
Jason on competition: “Being good enough is not the threshold. Being chosen over the competition is.”
Jason on authority: “Authority is an output, not an input. The industry skips to Credibility without solving Understandability.”
Jason on the Cascading Prerequisite: “NEEATT doesn’t apply to an entity that isn’t understood. Topical Authority doesn’t apply to an entity that isn’t understood. You can have all the expertise, experience, authority, and trustworthiness in the world - if the machine doesn’t understand what you are, those signals have no entity to attach to. That is the layer 99% of people miss.”
Jason on recognition vs recommendation: “Recognition is not recommendation. The difference between the two is the Cascading Prerequisite.”
Jason on the Three Speeds: “We build your entity in the Knowledge Graph. The machine doesn’t need to search for you - it already knows you.”
Jason on algorithms: “If you have empathy for the problems that Google and Bing have for crawling, indexing your content, and you make it friction-free and tasty, they will reward you.”
Jason on framing: “The frame is the bridge between the claim and the proof, and it’s the difference between the people and companies who are going to make a huge success at this, and the ones who don’t.”
13. Register: Patent / Academic (TKP-2)
Gates 5-7 describe the evaluative phase of the DSCRI-ARGDW pipeline, where the Algorithmic Trinity - Search Engines, Knowledge Graphs, and Large Language Models - actively operates on content through distinct recruitment, grounding, and display mechanisms. Annotation (Gate 5) classifies content across 24+ dimensions in five categories, with three types of specialist SLM operating simultaneously per niche. The Annotation Cost Threshold defines the tipping point where entity-specific context model creation becomes computationally cheaper than general-model processing. The pipeline splits three ways after Annotation: Search Engines retrieve, Knowledge Graphs serve, LLMs ground - each with independent selection criteria and refresh cycles (Three Speeds of both ingestion and output). The Filing Cabinet Model describes a federated architecture where specialist verticals connect via a universal lookup table (kgmid), converging on the same pattern as Universal Search and the Whole Page Algorithm. Three grounding sources (SE, KG, specialist SLM) operate simultaneously with different fuzziness levels. The Corroboration Threshold (approximately 3 independent high-confidence sources) determines whether LLM output uses hedging or assertion language, functioning as a meta-gate for Gates 7-10. Confidence evolves through three phases (grounding-dependent, hybrid, graph-native) with entity-specific Investment Inflection Points. Multi-graph coverage produces disproportionate retrieval advantage via Reciprocal Rank Fusion with cross-graph amplification channels. The Cascading Prerequisite establishes that UCD layers are mechanical prerequisites, not a recommended sequence: U creates the entity node (without which credibility signals are orphaned), C loads it (without which deliverability has no trust to amplify), D activates it. 
NEEATT (Notability, Experience, Expertise, Authority, Trustworthiness, Transparency) is the credibility framework that applies exclusively to understood entities - the critical layer that 99% of practitioners miss. Two edge cases (Empty Room Effect, Quicksand Effect) demonstrate that temporary visibility without the full cascade collapses when competition arrives. Post-engagement feedback (Gate 10, Served) re-enters Algorithm gates at platform-dependent points: enriching Annotation with delivery performance data, providing recruitable proof objects at Recruitment, and enabling outcome-based grounding at Gate 7. The distributed re-entry principle holds: the same signal may enter at different DSCRI gates on different platforms depending on platform mechanics and partnerships.
14. Source Attribution (TKP-2)
| Concept | Originator | Year |
|---|---|---|
| DSCRI-ARGDW Pipeline | Jason Barnard | 2025-26 |
| Algorithmic Trinity | Jason Barnard | 2024 |
| Three Speeds (input + output) | Jason Barnard | 2026 |
| Three-Source Grounding (SE/KG/SLM) | Jason Barnard | 2026 |
| SLM / Three-Model Annotation | Jason Barnard | 2026 |
| Annotation Cost Threshold | Jason Barnard | 2026 |
| Filing Cabinet Model (replaces Chest of Drawers) | Jason Barnard | 2022/2026 |
| Federated Architecture / Multi-Encyclopedia Joinup Graph | Jason Barnard | 2026 |
| Three Graphs Model (Entity/Document/Concept) | Jason Barnard | 2026 |
| Multi-Graph Coverage Advantage (RRF analysis) | Jason Barnard | 2026 |
| Cross-Graph Amplification Channels | Jason Barnard | 2026 |
| Corroboration Threshold (~3 sources) | Jason Barnard | 2026 |
| Corroboration Decay | Jason Barnard | 2026 |
| Self-Fulfilling / Self-Defeating Prophecy (Expression Pattern Learning) | Jason Barnard | 2026 |
| Three Confidence Phases | Jason Barnard | 2026 |
| Data Lake / Data River | Jason Barnard | 2026 |
| Confidence Curator | Jason Barnard | 2026 |
| Investment Inflection Point | Jason Barnard | 2026 |
| Three Knowledge Algorithms | Jason Barnard | 2026 |
| Cascading Confidence | Jason Barnard | 2025 |
| Cascading Queries | Jason Barnard | 2025 |
| Cascading Signals | Jason Barnard | 2026 |
| Cascading Prerequisite (U→C→D mechanical) | Jason Barnard | 2026 |
| NEEATT (extending EEAT with Notability + Transparency) | Jason Barnard, credit Jarno van Driel (Transparency) | 2024 |
| Empty Room Effect (temporary D without U) | Jason Barnard | 2026 |
| Quicksand Effect (temporary C without solid U) | Jason Barnard | 2026 |
| Authority as Output | Jason Barnard | 2026 |
| Topical Authority (redefined) + 3×3 Matrix | Jason Barnard | 2026 |
| Temporal / Hierarchical / Narrative Authority | Jason Barnard | 2026 |
| Three Pillars of Semantic SEO | Jason Barnard | 2026 |
| Identity-Relevance Loop | Jason Barnard | 2026 |
| Topical Coverage (redefined from “Topical Authority”) | Jason Barnard | 2026 |
| Topical Architecture (Koray’s domain acknowledged) | Koray Tuğberk Gübür | 2020s |
| Comparison-intent tracking priority | Jason Barnard | 2026 |
| Identifier granularity as annotation gate | Jason Barnard | 2026 |
| 95/5 Rule | Prof. John Dawes, Ehrenberg-Bass Institute | 2021 |
| Reciprocal Rank Fusion | Cormack et al. | 2009 |
Preprints (Zenodo, Feb 2026):
- Barnard, J. (2026). Annotation as the Confidence Fulcrum: How AI Systems Classify Digital Content and Why It Determines Recommendation Outcomes (Preprint, v2). Zenodo. https://doi.org/10.5281/zenodo.18723460
- Barnard, J. (2026). Annotation Cascading: Hierarchical Model Routing, Topical Authority, and Inter-Page Context Propagation in Large-Scale Web Content Classification (Preprint, v2). Zenodo. https://doi.org/10.5281/zenodo.18723669
<a id="act-iii-people"></a>
Act III: People - Gates 8-10 (Engines + Marketing)
Three gates. The audience is the person. Everything here is commercial. This section covers the People layer through two of three lenses: People in Marketing (what brands create) and People in Engines (how machines amplify it), plus the commerce pipeline mechanics, framework layer, and Won-Probability arithmetic.
PART I - THE THREE PEOPLE LENSES
The People layer is viewed through three lenses. These are not separate categories - they are three perspectives on the same commercial reality. Lenses 1 and 2 are covered here. Lens 3 (Clients) follows in the Strategic Method section.
1. Lens 1: People in Marketing (The Broad Set)
Everything brands do to communicate with their audience. Search and AI engines are a subset of marketing - the subset you optimize for. The bonus: amplification by the biggest influencers in the world - AI platforms working 24/7 in every market, in every language.
The Offsite Principle
The industry’s instinct is to build on the Entity Home. Create content, publish on the website, optimise for crawling. This is necessary but insufficient. The Entity Home anchors the entity. But corroboration requires independent sources. The machine trusts what multiple independent, authoritative sources confirm - not what one source claims about itself.
The Corroboration Threshold (approximately 3 independent high-confidence sources) is met offsite, not onsite. The brand’s job: identify where the audience naturally looks online, appear there with genuine value, and ensure those appearances are structured clearly enough that the machine can connect them to the entity.
UCD mapping: Understandability is principally onsite (Entity Home, structured data, clean signals). Credibility is principally offsite (third-party mentions, reviews, corroboration). Deliverability is the result of both - the machine advocates because it has seen sufficient onsite clarity AND offsite confirmation.
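The Corroboration Threshold lends itself to a worked sketch: count independent high-confidence sources and choose assertion or hedging language. The ~3 threshold comes from the framework itself; the source data model, the 0.8 confidence cutoff, and the `output_register` helper are illustrative assumptions:

```python
# Hypothetical sketch of the Corroboration Threshold: roughly 3 independent
# high-confidence sources tip LLM output from hedging to assertion.
# The data model and confidence cutoff are illustrative assumptions.

CORROBORATION_THRESHOLD = 3   # approximate value from the framework
HIGH_CONFIDENCE = 0.8         # assumed cutoff for "high-confidence"

def output_register(sources: list[dict]) -> str:
    """Return 'assert' or 'hedge' from a list of
    {'domain': ..., 'confidence': ...} corroborating sources."""
    independent = {s["domain"] for s in sources
                   if s["confidence"] >= HIGH_CONFIDENCE}  # dedupe: one vote per domain
    return "assert" if len(independent) >= CORROBORATION_THRESHOLD else "hedge"
```

The dedupe step matters: five mentions on one domain still count as one independent source, which is why the threshold is met offsite across multiple properties, not by volume on any single one.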
The Corroboration Architecture: Entity Home → Corroboration → Signposting
Three steps. The most proven operational sequence in TKP history (Jason Barnard, 2015-present).
Step 1 - Entity Home. Choose the single authoritative webpage where the brand “resides” online. Include a comprehensive brand description using semantic triples (subject-verb-object). Add Organisation or Person Schema.org markup. The Entity Home is the canonical source of truth. It initialises the entity confidence prior.
Step 2 - Corroboration. Get significant corroboration on trusted, authoritative first-party, second-party, and third-party websites: social profiles, media sites, databases (Wikidata, CrunchBase, Bloomberg, ZoomInfo, D&B), review platforms, industry directories. Duplicate the description from the Entity Home (or close variants of it) on all corroborative sources. Add links from those corroborative sources back to the Entity Home.
Step 3 - Signposting. Create the Infinite Self-Confirming Loop: link from the Entity Home to the corroborative sources AND back. The Entity Home points outward, the corroborative sources point inward. This closed verification circuit is what gives the machine confidence to assert rather than hedge.
The outcome is twofold. For humans: potential clients searching for solutions find a coherent narrative that resonates with their needs. For machines: search engines and LLMs gain an unambiguous understanding of who you are, what you offer, and who benefits - awarding you a KGMID (your unique identifier in the Knowledge Graph), a Knowledge Panel as its visible representation, and the confidence to recommend you for queries from the right audience.
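As a concrete illustration of Step 1, here is a minimal Entity Home description in Schema.org JSON-LD, built in Python so the structure is explicit. The organisation, URLs, and identifiers are hypothetical placeholders; `sameAs` carries the outward links of the signposting loop (Step 3 has each of those sources link back):

```python
# Minimal sketch of an Entity Home's Schema.org markup as JSON-LD.
# All entity details and URLs are hypothetical placeholders.
import json

entity_home = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "Example Brand",                    # hypothetical entity
    "url": "https://example.com/about/",        # the Entity Home URL
    "description": ("Example Brand provides widget consulting "
                    "to mid-market manufacturers."),  # semantic-triple style
    "sameAs": [                                 # outward links to corroborative sources
        "https://www.wikidata.org/wiki/Q0",     # placeholder identifiers throughout
        "https://www.crunchbase.com/organization/example-brand",
        "https://www.linkedin.com/company/example-brand",
    ],
}

markup = json.dumps(entity_home, indent=2)      # embed in a <script type="application/ld+json">
```

The description string deliberately reads as subject-verb-object triples, and the same wording (or close variants) is what Step 2 duplicates onto the corroborative sources.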
NEEATT Applied to Understood Entities
NEEATT is the credibility framework (see TKP-2 §8 for the full six dimensions). The critical dependency: NEEATT signals do not apply to an entity that is not understood. The Cascading Prerequisite governs: U must be established before C signals have any entity to attach to.
Once the entity IS understood (Entity Home established, KGMID awarded, corroboration threshold met), NEEATT signals build the C layer:
Notability - niche-relative, not fame-dependent. A place on the board of the Parisian Poodle Parlours Association demonstrates notability for a poodle parlour. Build through industry associations, speaking engagements, awards, advisory roles, publications.
Experience + Expertise - demonstrated through content, case studies, portfolio, track record. First-hand evidence, not claimed expertise.
Authority - an output, not an input. Authority emerges when other entities recognise this one. Co-citation patterns, third-party references, inherited credibility from association with authoritative entities.
Trustworthiness - consistency over time. Track record of accuracy. Absence of contradictions. Reliable information across sources.
Transparency - openness, clarity, honesty. Clear disclosure of who you are, what you do, where you operate. Credit: Jarno van Driel.
The Content Status Framework: Does Exist / Should Exist / Could Exist
Governs content strategy across all marketing channels. Search and AI optimization is the subset where you package content for machine comprehension.
Does Exist: Content already published about the brand. Action: audit. Fix inconsistencies, outdated information, contradictory messaging. ROPI (Return On Past Investment) - make existing assets work before creating new content.
Should Exist: Content that is true, provable, and the brand is the rightful authority. Action: create. Fill evidence gaps the machine has identified (via diagnostic mirrors in TKP-3b). Package for both humans and machines.
Could Exist: Content that would strengthen the brand but doesn’t yet have evidence. Action: earn. Build the real-world activities, partnerships, credentials, and outcomes that make the content truthful. Then document.
ORM connection: Traditional ORM jumps to Could Exist and Should Exist, often in that order. TKP starts with Does Exist (the audit) and works from there. The difference between TKP and ORM is where you focus first, not what you do. TKP solves ORM as a side effect - the leapfrogging methodology (Lens 2, below) is the operational mechanism.
Social Strategy: The Rule of Thirds
Three-part content mix for social platforms: one-third your content, one-third third-party content your audience values, one-third just for fun/engagement. Only one post in three should contain an external link (platform visibility optimization - platforms penalize excessive outbound linking).
Platform-specific optimization: each social platform has its own Brand SERP behaviour (Twitter/X Boxes, Facebook sitelinks, LinkedIn profile ranking, YouTube Video Boxes, Instagram carousel). The social presence is both a marketing activity (reaching the audience) and an engine signal (the machine observes the engagement). UCD applies: the social profile must be understood (consistent with Entity Home), credible (genuine engagement, not purchased), and deliverable (visible in the right contexts).
Third-Party Relationship Management
The most valuable corroboration comes from authoritative third-party sources: journalists, industry publications, review platforms, database maintainers, partner organizations. Building these relationships requires value exchange, not extraction.
The approach sequence: identify the source → research the author/editor → find a mutual acquaintance for introduction (or introduce yourself with genuine value) → build the relationship first, request the coverage second → accept graceful rejection without burning the bridge.
Face-to-face meeting power: in-person connection dramatically increases cooperation rates. Conferences, industry events, and local meetups are high-leverage relationship-building opportunities that also generate machine-observable evidence (speaker bios, event pages, social mentions).
2. Lens 2: People in Engines (The Extension of Marketing, Packaged for Machines)
Not a separate domain. An extension of Lens 1. Marketing packaged for machine comprehension. UCD applies in full here too - at the engine interface level.
Brand SERP Anatomy: The Google Business Card
The Brand SERP (what Google shows when someone searches your brand name) is the machine’s visual summary of its confidence in you. It is the most important single diagnostic in The Kalicube Process. Everyone who matters to your business - clients, prospects, potential hires, partners, suppliers, investors - will encounter this page. “A good Brand SERP is good for your bottom line” (Jason Barnard, 2012).
Anatomy of a Brand SERP (typical structure):
Position 1: Homepage (controlled) - meta title, meta description, rich sitelinks. The brand’s primary message. Pixel-level optimization matters (512px title limit on desktop, mobile shows brand name only). The homepage meta description is the brand’s first selling point: curiosity → importance → invitation → clarification.
Positions 2-4: Rich Sitelinks - expanded subpages (About Us, Contact, Products, Pricing, Careers). Short meta titles (<30 characters), focused descriptions. Siloing (URL structure) triggers rich sitelinks. These pages serve people who already know the brand and seek a specific aspect.
Positions 3-6: Social profiles (semi-controlled) - Twitter/X, LinkedIn, Facebook, YouTube, Instagram. Each platform has specific Brand SERP behaviour.
Rich Elements: Twitter/X Boxes (1-4 tweets/day, engagement from relevant people, ~2 months to trigger), Video Boxes (coherent video strategy, relevance matters more than production quality), Image Boxes (circulate ~12 consistent images across the digital ecosystem), People Also Ask panels, Top Stories.
Knowledge Panel (right side desktop, top on mobile): The machine’s structured understanding of the entity. KGMID-anchored. Contains verified facts, images, social links, key attributes. The visible representation of the entity node (see TKP-2 §8, Cascading Prerequisite).
Rich Elements as “double drowning”: each Rich Element replaces a blue link AND occupies additional space. They simultaneously reduce the blue link count (drowning negative results) and expand the brand’s visual presence.
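The length guidelines above can be sketched as simple checks. One hedge is essential: Google truncates titles by pixel width, not character count, so the per-character width below is a rough assumption and `title_fits_desktop` is only a proxy for the real rendering:

```python
# Rough sketch of the Brand SERP length guidelines mentioned above.
# Truncation is pixel-based in reality; the per-character width is an
# assumed average, and both helpers are illustrative.

APPROX_PX_PER_CHAR = 8        # assumed average glyph width
DESKTOP_TITLE_PX = 512        # desktop title limit cited above
SITELINK_TITLE_CHARS = 30     # "short meta titles (<30 characters)"

def title_fits_desktop(title: str) -> bool:
    """Approximate check that a homepage meta title avoids truncation."""
    return len(title) * APPROX_PX_PER_CHAR <= DESKTOP_TITLE_PX

def sitelink_title_ok(title: str) -> bool:
    """Check a sitelink-page title against the <30 character guideline."""
    return len(title) < SITELINK_TITLE_CHARS
```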
AI Résumé: The Conversational Business Card
What AI says when asked about you. The conversational equivalent of the Brand SERP. Built similarly (from the same web evidence) but different in content - text-based, narrative, increasingly multimodal (text + images + video as AI engines converge toward highly dynamic SERPs with fewer links).
Three diagnostic signals in AI responses: Assertions (“is,” “known for”) = high confidence. Hedging (“claims to be,” “appears to offer”) = evidence without conviction. Omissions = invisible. The language IS the diagnosis.
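Those three signals can be sketched as a rough classifier over an AI answer. The marker phrase lists are illustrative assumptions, not an exhaustive taxonomy, and `ai_resume_signal` is a hypothetical helper for manual audits:

```python
# Illustrative sketch of the three diagnostic signals in an AI Resume.
# Marker lists are assumed examples; real audits read the full response.

ASSERTION_MARKERS = ("is ", "are ", "known for")
HEDGING_MARKERS = ("claims to", "appears to", "seems to", "reportedly")

def ai_resume_signal(answer: str, brand: str) -> str:
    """Classify an AI response about a brand: omission, hedging, or assertion."""
    text = answer.lower()
    if brand.lower() not in text:
        return "omission"          # invisible: the brand never appears
    if any(m in text for m in HEDGING_MARKERS):
        return "hedging"           # evidence without conviction
    if any(m in text for m in ASSERTION_MARKERS):
        return "assertion"         # high confidence
    return "mention"               # named, but neither asserted nor hedged
```

Running the same brand query across several assistants and comparing the returned signal is one cheap way to see where the Corroboration Threshold has and has not been met.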
At the bottom of the funnel (brand name search), the Brand SERP and AI Résumé serve the same function: the final check before commitment. The person already knows the brand name. They are validating. The C and D layers converge here to a U-layer decision: do I trust them?
Three Tiers of Control
Every result on a Brand SERP falls into one of three control tiers. The tier determines what the brand can do about it.
| Tier | Examples | What You Can Do |
|---|---|---|
| Controlled | Your homepage, your other domain sites (support site, product site, blog) | Full optimization: meta titles, descriptions, content, structured data, sitelinks |
| Semi-controlled | Social profiles, review platforms (Trustpilot, BBB), informational sites (ZoomInfo, CrunchBase, Wikidata), partner sites | Partial optimization: claim profiles, update information, respond to reviews, request corrections from authors/editors |
| Uncontrolled | Third-party articles, forum threads, negative mentions, competitor comparisons | Indirect influence only: leapfrogging (push positive results above negative), content creation that outranks, relationship building with publishers |
The control cushion strategy: Maintain multiple semi-controlled results ranking on pages 1-3 of the Brand SERP. If a negative uncontrolled result appears, you have candidates ready to leapfrog past it. Building the cushion before you need it is the defensive posture.
Leapfrogging / ORM as a Side Effect
TKP solves ORM (Online Reputation Management) as a side effect. The methodology is the same: build positive, corroborated, authoritative content that outranks negative content. The difference is where you focus first.
Traditional ORM jumps to “could exist” and “should exist” content (often in that order) - creating new positive content to push down negatives. TKP starts with “does exist” - auditing and optimizing existing assets, fixing the Entity Home, building the corroboration architecture. Negative results fall naturally as positive results strengthen.
The leapfrogging method: Identify positive results on pages 2-6 that are candidates for promotion. Prioritize by control tier (controlled > semi-controlled > uncontrolled). Apply specific SEO tactics per tier. Use Rich Elements as double-drowning weapons.
Dangerous content types: Forum threads are extremely difficult to drown (high engagement signals, continuous updates). Employee review platforms (Glassdoor) are volatile when review counts are low. Negative-positive copywriting (addressing a negative topic to rebut it) can accidentally reinforce the negative association.
Google Ads as BOFU Defense: The Marathon Relay Race
Organic and paid are not separate strategies. They are relay runners. Organic builds the foundation. Paid amplifies at the moment of decision.
On Brand SERPs, a branded Google Ads campaign is the cheapest insurance against BOFU leaks (the Doubt Tax). Google Ads Quality Score has three factors: ad relevance, expected click-through rate, landing page experience. For your own brand name, you have a natural advantage on all three - which means very low cost per click (often a few cents).
The seesaw effect: Quality Score is relative. The better your Quality Score, the less you pay AND the more expensive it becomes for competitors bidding on your brand name. Your improvement is their cost increase.
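The seesaw can be put in numbers with the commonly cited simplified second-price formula: actual CPC = (ad rank of the advertiser below you / your Quality Score) + $0.01, where ad rank = bid × Quality Score. The figures are illustrative, not real auction data:

```python
# The seesaw effect in numbers, using the commonly cited simplified
# Google Ads second-price formula. Quality Scores and bids here are
# illustrative placeholders.

def actual_cpc(competitor_bid: float, competitor_qs: float, your_qs: float) -> float:
    """Price you pay when you outrank the next advertiser in the auction."""
    ad_rank_below = competitor_bid * competitor_qs
    return round(ad_rank_below / your_qs + 0.01, 2)

# Brand owner (QS 10) outranking an interceptor bidding $2.00 with QS 3:
brand_pays = actual_cpc(competitor_bid=2.00, competitor_qs=3, your_qs=10)        # $0.61
# Positions flipped: interceptor (QS 3) outranking the brand bidding $0.50:
interceptor_pays = actual_cpc(competitor_bid=0.50, competitor_qs=10, your_qs=3)  # $1.68
```

The asymmetry is the seesaw: the brand owner pays cents against a $2.00 interceptor bid, while the interceptor pays several times more to clear a $0.50 brand bid.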
Trademark protection: Google allows competitors to bid on your brand name by default. You must explicitly file a trademark claim with Google for each territory. Without this, competitors can use your brand name in their ad copy on YOUR Brand SERP.
The marathon relay race: Paid coverage bridges the gap while organic confidence builds. As the Cascading Prerequisite strengthens (U→C→D), paid spend decreases because organic coverage handles more of the BOFU defense. The relay handoff: paid starts the race, organic finishes it, paid stays available as defensive insurance.
In the AI era, the equivalent is emerging: sponsored results in AI Overviews, paid placement in AI recommendations. The principle is the same - paid defends while organic compounds.
Agent Commerce Infrastructure (Updated February 2026)
Status: OPERATIONAL. As of February 11, 2026, agent commerce infrastructure is live (see Parent Summary §10 and TKP-1 §5 Mode 4 for full details). The Untrained Salesforce now has a wallet.
Implications for the People-in-Engines lens: agent commerce endpoints (checkout/booking/quote) are now engine-facing surfaces, just as meta titles and structured data have always been. The Three-Capability Agent Readiness (Feed + Search Tool + Action Endpoint) and the Destination Capability Ladder are diagnostic tools for this lens. The Plug-and-Play Promise (TKP-3b) is the commercial framing: brands who did foundational work activate these rails as configuration.
The Cascading Prerequisite still governs: An agent-ready checkout endpoint on a house the AI doesn’t know exists is a beautiful front door on nothing. Schema without entity confidence is infrastructure without trust. Build U→C→D first. Agent commerce is the D-layer reward.
PART II - THE THREE GATES
3. Display (Gate 8): Where the Person Enters
| Gate | Name | Test | Question |
|---|---|---|---|
| 8 | Display | Trust depth | How does the system present the brand to the person? |
Display is the first gate where a person exists. Steps 0-7 are machine-to-machine. At Step 8, the axis pivots 90° from horizontal to vertical. The pipeline delivered the content. Now the funnel delivers the person.
UCD: Trust Depth at Display
| Layer | Name | Role | Funnel | Entry Point |
|---|---|---|---|---|
| D | Deliverability | Advocate | TOFU | Widest - awareness. “I’ve heard of them.” |
| C | Credibility | Recommender | MOFU | Narrowing - evaluation. “They’re credible.” |
| U | Understandability | Trusted Partner | BOFU | Narrowest - trust. “I trust them.” |
Build direction: U→C→D (foundation first - you cannot be credible if you are not understood). Experience direction: D→C→U (the person enters at awareness, descends to trust).
UCD is not a step. It is the internal structure of Step 8. The dimension that makes Display three-dimensional. Step 9 (Won) collapses back to a point: binary outcome, one dimension, zero sum. Step 10 (Served) feeds results back into entity confidence, recharging the pipeline for the next cycle.
Entry Modes at Display: Mode 5 (Ambient) surfaces here - AI proactively displays the recommendation in the user’s workflow (Gmail, Sheets, Meet) without a query. The display context shifts from “search result” or “AI response” to “contextual suggestion inside a productivity tool.”
The Framing Gap per Funnel Stage
| Funnel Stage | Whose Framing Gap? | Deficit Type | What Must Be Framed |
|---|---|---|---|
| D (TOFU) | AI’s gap | Cognition | “Should I recommend this entity for this topic?” |
| C (MOFU) | Brand’s gap | Imagination | “Why is my proof better than competitor’s proof?” |
| U (BOFU) | Audience’s gap | Relevance | “Is this really who they say they are?” |
| Won | All three | Simultaneous | All three frames must hold. Any one breaking kills the Perfect Click. |
The three Framing Gaps map to the Algorithmic Trinity: Knowledge Graphs anchor entity framing (solving AI’s gap at U). Search Engines verify evidence framing (solving Brand’s gap at C). LLMs and Assistive Engines deliver relevance framing (solving Audience’s gap at D).
4. Won (Gate 9): The Zero-Sum Moment
| Gate | Name | Test | Question |
|---|---|---|---|
| 9 | Won | Binary outcome | Does the person act? |
All three axes collapse to a single point at Step 9: the Zero-Sum Moment.
Content arrives horizontally (Cascading Confidence from Steps 0-7). The person arrives vertically (trust descent through D→C→U). Time delivers the 95% (algorithmic memory bridges the gap from first impression to trigger).
At Step 9, the outcome is binary. Won (the Perfect Click: conversion, revenue) or silence (zero sum). One brand wins. Every competitor loses. No middle ground.
The Perfect Click
In the conversational acquisition funnel, the AI walks the user through the entire purchase journey in one conversation. The Perfect Click is the moment the AI’s recommendation resolves into action.
Literal Perfect Click: User clicks a link from the AI and converts. Measurable. Conceptual Perfect Click: AI recommends a brand, user acts through a different channel. No UTM captures “ChatGPT told me to call you.” More common and more valuable.
Three Players at Every Perfect Click:
| Player | What They Want | How They Get It |
|---|---|---|
| Rightful Owner | To BE the Perfect Click | Earned through Cascading Confidence |
| Interceptor | To STEAL the Perfect Click | Pays the platform to insert at the decision moment |
| Platform | To SELL the interception | Monetises the exact moment of maximum intent |
Three Paid Strategies: Offensive interception - you have not earned the Perfect Click; you buy the right to be inserted at the decision moment instead of the rightful owner. Renting someone else’s position. Defensive protection - you HAVE earned the Perfect Click; a competitor is trying to buy the interception; you pay to defend your earned position. Protecting your investment. Strategic bridging - building toward earning organically but not there yet; you buy interception WHILE investing in Cascading Confidence. Paid spend decreases as organic confidence increases. Transitioning from renting to owning.
The Platform Money Insight
Three eras of decision-moment knowledge. In the broadcast era, the advertiser guessed when the decision happened: scattered budget across TV, print, radio. In the search era, Google knew intent (the user typed a query) but not position in the funnel. In the conversational era, the AI IS the funnel. It knows the user’s question, the options it presented, the user’s follow-ups, and the exact moment the user is ready to decide. The ad slot at that exact moment is the most precisely targeted advertising opportunity in history.
The Perfect Transaction: Won Without a Person
When an AI agent books a flight, selects a supplier, or reorders inventory, there is no click, no person. Only an API settlement. This is the Perfect Transaction: AI commits to a brand without human involvement.
At the Perfect Transaction, Cascading Confidence becomes the ONLY factor. No person descends the UCD funnel. Steps 0-7 are the entire game. The brand with the highest accumulated confidence wins. The first mover owns that agent’s loyalty potentially permanently - switching cost (re-evaluating confidence) exceeds the benefit.
The Infrastructure Fallacy: The Perfect Transaction requires machine-verifiable infrastructure (MerchantReturnPolicy, ShippingDetails, Warranty in JSON-LD, real-time inventory APIs, GTIN/MPN). This is the technical floor. But the technical floor does not earn the transaction. Cascading Confidence earns the transaction. Schema without entity confidence is a beautiful front door on a house the AI does not know exists. Only the combination works, built in order: confidence first, infrastructure second.
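The machine-verifiable floor described above can be sketched as schema.org JSON-LD. A minimal illustration, built as a Python dict rather than a production feed: the product name, GTIN, and every value here are placeholders, and real markup would carry the full property set your category requires.

```python
import json

# A minimal sketch of the "technical floor": schema.org Product markup
# with the machine-verifiable commerce properties an agent needs
# (GTIN/MPN, return policy, shipping, warranty). All values are
# illustrative placeholders, not a real catalogue entry.
product_jsonld = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Example Widget",
    "gtin13": "0000000000000",   # placeholder GTIN
    "mpn": "WIDGET-001",         # placeholder MPN
    "offers": {
        "@type": "Offer",
        "price": "49.00",
        "priceCurrency": "EUR",
        "availability": "https://schema.org/InStock",
        "hasMerchantReturnPolicy": {
            "@type": "MerchantReturnPolicy",
            "returnPolicyCategory": "https://schema.org/MerchantReturnFiniteReturnWindow",
            "merchantReturnDays": 30,
        },
        "shippingDetails": {
            "@type": "OfferShippingDetails",
            "deliveryTime": {"@type": "ShippingDeliveryTime"},
        },
        "warranty": {
            "@type": "WarrantyPromise",
            "durationOfWarranty": {
                "@type": "QuantitativeValue",
                "value": 2,
                "unitCode": "ANN",  # UN/CEFACT code for years
            },
        },
    },
}

print(json.dumps(product_jsonld, indent=2))
```

The point of the section stands in the code: this markup is the front door. Nothing in it creates entity confidence; it only makes the transaction executable once confidence already exists.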
Three Won Resolutions
Won resolves through three mechanisms. Each has different predictability, different pipeline control, and different infrastructure requirements. The trajectory runs from the least precise (random) to the most precise (autonomous): R1 → R2 → R3. Imperfect Click → Perfect Click → Perfect Transaction.
| Resolution | Name | Mechanism | What Makes It Possible | 95/5 Efficiency |
|---|---|---|---|---|
| 1 | Imperfect Click (ZMOT) | Person browses a list, pogo-sticks, picks. The Zero Moment of Truth. Random and unpredictable. | Traditional search. System shows everyone the same list. No filtering for intent or readiness. | Low: sprays at 100%, hopes to catch the 5% ready to buy |
| 2 | Perfect Click (ZSM in AI) | AI recommends one, person takes it. The Zero-Sum Moment in AI. | System filtered for intent, context, readiness. Presents one answer. Decoupled checkout/booking/quote endpoint available. | High: system targets the 5% moving into market |
| 3 | Perfect Transaction (Agent Transacts) | AI agent acts autonomously on the user’s behalf. A2A bilateral negotiation between buyer’s agent and merchant’s agent. | Mandate (cryptographically signed user authorisation) + action endpoint + entity confidence + floor price + real-time availability | Maximum: AI caught the exact moment and closed it |
Resolution 1 universal examples: E-commerce - searches, browses product pages, compares, buys. B2B - searches, visits multiple vendor sites, calls one. Shop window - searches “restaurants near me,” scans the list, picks one. Media - searches, scans headlines, clicks an article. The person does the work. The system just showed a list.
Resolution 2 universal examples: E-commerce - AI recommends product, human clicks to product page, buys on site. B2B - AI recommends vendor, human clicks to pricing page. Shop window - AI recommends restaurant, human clicks for menu and booking. Media - AI cites source, human clicks to read the full article.
Resolution 3 universal examples: E-commerce - agent buys the product via UCP checkout. B2B - agent submits requirements, initiates procurement. Shop window - agent books the restaurant table, the hotel room, the dental appointment. Media - agent… consumes the content. That is the problem (see The Media Problem below).
The trajectory: Resolution 1 (Imperfect Click) is dominant NOW but shrinking. It is being eaten by Resolution 2 (Perfect Click). Resolution 2 is being eaten by Resolution 3 (Perfect Transaction). As the decision space shrinks, Resolution 3 becomes increasingly dominant. The endgame: Resolution 3 is the default, Resolution 2 is the fallback, Resolution 1 is the exception. But the transition is gradual and brand-type-dependent - Resolution 1 will remain important for years. The idea that AI mode has taken over is false. People still search, still use AI to decide which websites to visit, and still arrive presold.
Search does not disappear. Some people will always want to browse. But the progression from Imperfect Click to Perfect Click to Perfect Transaction is the trajectory. Brands optimising only for Resolution 1 optimise for the least predictable mechanism on the spectrum.
Deterministic infrastructure fallback:
Mandate present (full scope) + action endpoint → Resolution 3 possible (agent transacts autonomously). Mandate present (limited scope) + decoupled checkout/booking/quote → Resolution 3 with approval gate (agent prepares, human confirms, then agent executes). No mandate + decoupled endpoint → Resolution 2 ceiling (Perfect Click: AI recommends, human completes). Neither mandate nor decoupled endpoint → Resolution 1 ceiling (Imperfect Click: the destination structurally caps itself at traditional browsing).
The Media Problem: When Won Belongs to the Agent
For every brand type except media, Resolution 3 is the unlock - agents doing MORE for you. For media and content brands, Resolution 3 is the THREAT. Media’s product IS the content. The agent consumes it at the Influence layer (Gates 6-7, Recruited and Grounded). The agent uses it to answer the user. The user gets value. The publisher gets nothing. Won never fires for the publisher because the agent already extracted the value upstream.
This is not a future problem. AI Overviews summarise articles. ChatGPT synthesises multiple sources. The publisher’s content was Recruited (Gate 6), Grounded (Gate 7), perhaps Displayed with a citation (Gate 8)… but Won (Gate 9) belongs to the AI platform, not the publisher.
Three Won Resolutions through the media lens: Resolution 3 - the agent consumes the content on the user’s behalf, Won fires for the platform, publisher gets nothing unless a licensing deal exists. Resolution 2 - AI cites source, human clicks to read the full article, this still works but is shrinking as AI summaries become sufficient. Resolution 1 - person searches, scans a list of articles, clicks one. Still works but the list is shrinking as AI summaries replace it.
Three Escape Routes for Media:
Escape 1 - Licensing. Sell access to AI platforms directly (NYT/OpenAI model). Content enters at Mode 4 (Push Context) via paid API. The platform pays for the value it extracts. Only works at scale - small publishers have no leverage. Large publishers with unique, irreplaceable content (legal databases, financial data, deep investigative journalism) have the strongest position.
Escape 2 - Become Unconsumable. Create content the agent CANNOT summarise. Interactive tools, calculators, personalised experiences, real-time data dashboards, community, proprietary datasets. The agent has to send the user because it cannot replicate what you offer. This is the WebMCP path: the site is legible to agents (Tier 2 on the Capability Ladder), but the VALUE requires human presence. The agent reads the structured data and tells the user “you need to go there” - Resolution 2 becomes the ceiling by design.
Escape 3 - Become the A2A Intelligence Agent. Instead of being a passive source the agent reads (Mode 1, Pull), become an active agent the buyer’s agent CONSULTS (Mode 4, Push Context as a service). A financial media brand becomes a real-time market data agent. A legal publisher becomes a compliance verification agent. A medical publisher becomes a clinical guidance agent. The agent does not just have content - it has expertise as a service that other agents pay to access. Business model shifts from advertising (monetise human eyeballs) to API access (monetise agent queries). Move from Mode 1 (Pull - please crawl me) to Mode 4 as a service provider (Push Context - I am a service you query).
Escape 3 IS the revolution. It transforms media from “content to be consumed” into “intelligence to be consulted.” On the Destination Capability Ladder, this is Tier 3 - bilateral, active, participating in the agent economy rather than being consumed by it.
Mandate as Merchant Trust Primitive
The mandate is designed to give the merchant confidence that there is a human behind the robot. It is embedded in the card/wallet management layer: a signed certificate stating “this human authorised this agent to purchase this item for this amount.” It may be surfaced in the browser, but structurally it belongs to the wallet/card authorisation layer.
Mandate is not a UX detail. It is how autonomous execution becomes safe enough for merchants (and service providers) to allow it.
Universal analogues: Shop window - a signed booking authorisation (“Jason authorises this reservation at this time at this price”). B2B - a signed procurement/quote request mandate (“Jason authorises this RFQ with these requirements and budget constraints”). Media - signed licensing authorisation (for paid access) or signed “call” authorisation for intelligence-agent queries.
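The mandate concept can be sketched in code. The HMAC shared-secret signature below is a stand-in for whatever cryptographic scheme the wallet/card layer actually uses (a real system would use asymmetric signatures so the merchant never holds the wallet’s key), and every field name is illustrative, not a published specification.

```python
import hashlib
import hmac
import json

# Placeholder key material; in reality the wallet/card layer holds this.
# A real scheme would be asymmetric, not a shared secret.
WALLET_KEY = b"wallet-held-secret"

def issue_mandate(user, agent, action, item, max_price):
    """Wallet side: bind the authorisation terms to a signature."""
    payload = {"user": user, "agent": agent, "action": action,
               "item": item, "max_price": max_price}
    body = json.dumps(payload, sort_keys=True).encode()
    payload["signature"] = hmac.new(WALLET_KEY, body, hashlib.sha256).hexdigest()
    return payload

def verify_mandate(mandate, price):
    """Merchant side: check the signature, then check the price limit."""
    claimed = mandate.get("signature", "")
    payload = {k: v for k, v in mandate.items() if k != "signature"}
    body = json.dumps(payload, sort_keys=True).encode()
    expected = hmac.new(WALLET_KEY, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(claimed, expected) and price <= mandate["max_price"]

m = issue_mandate("Jason", "booking-agent", "reserve", "table-for-two", 120.0)
print(verify_mandate(m, 95.0))   # prints True: within the authorised limit
print(verify_mandate(m, 150.0))  # prints False: exceeds the authorised limit
```

The structural point survives the simplification: the merchant does not trust the agent. It trusts a verifiable statement that a specific human authorised this specific action within explicit limits.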
Decoupled Checkout as the Structural Fallback
A key infrastructure requirement for agent commerce is a decoupled checkout (or decoupled booking/quote endpoint) - separated from the rest of navigation and callable by the agent/platform. Many destinations do not have it.
Deterministic fallback tree (see Three Won Resolutions above for full version):
Mandate present (full scope) + action endpoint → the agent can execute autonomously (Resolution 3 possible). Mandate present (limited scope) + decoupled checkout → the agent prepares, confirms, and executes (Resolution 3 with approval gate). Mandate absent + decoupled endpoint present → the system must hand off to the user to complete (Resolution 2 ceiling by design). Neither mandate nor decoupled endpoint → the destination structurally caps itself at Resolution 1 (Imperfect Click: traditional browsing).
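Because the fallback tree is deterministic, it reads naturally as a lookup. A minimal sketch (the function name and encoding are ours, not the framework’s; a mandate without any decoupled endpoint is not covered by the tree above, and we assume it caps at Resolution 1 since nothing is callable):

```python
def won_resolution_ceiling(mandate_scope, has_decoupled_endpoint):
    """Highest Won Resolution the destination's infrastructure permits.

    mandate_scope: "full", "limited", or None.
    has_decoupled_endpoint: decoupled checkout/booking/quote endpoint exists.
    """
    if mandate_scope == "full" and has_decoupled_endpoint:
        return 3  # agent executes autonomously (Perfect Transaction)
    if mandate_scope == "limited" and has_decoupled_endpoint:
        return 3  # agent prepares, human approves, agent executes
    if mandate_scope is None and has_decoupled_endpoint:
        return 2  # Perfect Click ceiling: AI recommends, human completes
    return 1      # Imperfect Click ceiling: traditional browsing only

print(won_resolution_ceiling("full", True))   # 3
print(won_resolution_ceiling(None, True))     # 2
print(won_resolution_ceiling(None, False))    # 1
```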
Universal mapping: E-commerce - checkout endpoint. Shop window - booking endpoint. B2B - quote/procurement endpoint. Media - licensing/payment endpoint (or intelligence-agent access key).
Why Floor Price Exists: The Agent Is the Negotiator
Google requests floor price because the agent (Gemini) is structurally a negotiator. It must know “how low can I go” to close. Negotiation is not a corner case: it is economically incentivised (the agent/platform gets paid when it “makes it happen”). The agent goes to multiple suppliers and negotiates back and forth.
Transaction readiness is not only checkout readiness. It includes negotiation constraints (floor price, policy constraints, bundling/alternatives, SLA commitments).
Some destinations will intentionally resist agent access because agents bypass UX manipulation and compress price. Amazon blocking agent access is a canonical example: if an agent enters with mandate and price goal, Amazon’s business model collapses. Walmart embraces agentic commerce and wins. This makes the Capability Ladder a strategic posture decision, not merely a technical maturity model.
The Feedback Loop: The Line Is a Circle
The pipeline appears to be a one-way line: Steps 0-9, left to right. It is not. Won feeds back to Discovery. The pipe is a circle.
When the person converts (the Perfect Click), that signal feeds back into Entity Confidence, which reinforces the bot’s next Selection decision. Three implications: every conversion strengthens the next one (flywheel). Failure cascades backwards (anti-flywheel). BOFU must come first - if conversion is broken, every successful recommendation generates a negative feedback signal.
5. Served (Gate 10): The Post-Won Feedback Amplifier
| Gate | Name | Test | Question |
|---|---|---|---|
| 10 | Served | Post-engagement feedback | Does what happens AFTER Won feed back into entity confidence? |
Gate 10 is where commerce closes the loop. The transaction resolved at Gate 9. Gate 10 determines whether the system learns from the outcome and amplifies (or degrades) future confidence.
Data Governance as the Competitive Moat
Agent execution forces a new organisational requirement: no silos. If an agent is to do the work of a clerk, it must access deep operational reality - especially post-engagement questions and resolutions.
Critical distinction: This is not “People Also Ask” or generic SEO FAQs. It is concrete, high-signal post-engagement dissection data: the real questions customers ask after purchase/engagement, and the real resolutions that determine satisfaction.
The competitive mechanism: If support ↔ product ↔ sales ↔ marketing are disconnected, the destination “doesn’t know.” “They know if you know.” If you don’t know, someone else will win because they can supply the agent with the post-engagement truth the user needs for delegation.
Operational instruction: Mine support tickets, returns, call transcripts, onboarding notes, and sales-call objections. Convert into structured “post-engagement Q&A objects” aligned to the ontology layer feeding the agent ecosystem. Feed those objects back into the system so agents can advocate because they know the reality.
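The operational instruction above implies a data shape. A minimal sketch of one “post-engagement Q&A object”, with field names that are illustrative assumptions rather than a defined schema:

```python
from dataclasses import asdict, dataclass

@dataclass
class PostEngagementQA:
    """One real post-engagement question and its real resolution."""
    question: str    # the question the customer actually asked
    resolution: str  # how it was actually resolved
    source: str      # provenance: "support_ticket", "call_transcript", ...
    entity: str      # the product/service the Q&A is about
    frequency: int = 1  # how often this question recurs

def from_support_ticket(ticket):
    """Convert one raw support ticket into a structured Q&A object."""
    return PostEngagementQA(
        question=ticket["customer_question"],
        resolution=ticket["agent_resolution"],
        source="support_ticket",
        entity=ticket["product"],
    )

ticket = {
    "customer_question": "How do I return this if the size is wrong?",
    "agent_resolution": "Free returns within 30 days via the prepaid label.",
    "product": "Example Widget",
}
qa = from_support_ticket(ticket)
print(asdict(qa))
```

The value is not the code; it is the discipline. Each object carries provenance and the real resolution, so the agent can advocate from operational reality rather than marketing copy.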
Universal mapping: E-commerce - returns, complaints, satisfaction data. B2B - onboarding failures, renewal drivers, implementation blockers. Shop window - no-shows, complaint patterns, service constraints. Media - corrections history, accuracy track record, citations that persist over time.
Gate 10 (Served) is not “nice to have.” It is organisational restructuring: connect post-sales reality into the ontology layer.
Distributed feedback re-entry: Post-engagement signals do not feed back to a single DSCRI stage. WHERE they re-enter depends on the platform’s strategy, mechanical implementation, and partnerships. Google may use review signals at Annotation (Gate 5). A chatbot platform may use satisfaction data at Grounding (Gate 7). A commerce platform may use return rates at Recruitment (Gate 6). An agent with direct API access may pull post-engagement Q&A objects at Display (Gate 8). The brand cannot control which stage receives the feedback. The brand CAN control whether the feedback exists, whether it is structured, and whether it is accessible. Same principle applies to all online signals about the brand - pre-purchase and post-purchase. They feed DSCRI, but the stage depends on the platform and its strategy.
Act III Stall Patterns
| Stall | Pattern | Diagnosis | Action |
|---|---|---|---|
| 7 | Recruited but not Grounded | Ghost contribution: system absorbed the content but never surfaces it | Build third-party proof |
| 7 | Grounded but not Displayed | Used as source material but brand not visible. Ghost contribution at display level | Tighten Display-level language |
| 8 | Displayed but not Won | Appeared but user chose competitor. Terminal boolean | Conversion framing problem - fix Won-gate messaging |
Revenue mapping: These stalls at Gates 7-8 represent Conversion Leak - content reached Display but the system didn’t commit, or the person didn’t convert, or post-Won feedback degrades confidence. Conversion Leak is the Act III container. The three Revenue Taxes (Invisibility, Ghost, Doubt) subdivide it by UCD depth at Display. Highest per-unit cost. Resolve Act III stalls only after Act I (Opportunity Cost - infrastructure) and Act II (Competitive Loss - algorithm) are clean. The Cascading Prerequisite governs the fix order.
PART III - THE COMMERCE LAYER
6. Three-Capability Agent Readiness
A destination is not “agent-ready” because it has a feed. Agent systems increasingly operate with three distinct capabilities, which must be treated separately in diagnostics and implementation:
Feed (Inventory presence) - structured data pushed into the ecosystem so the system can know what exists.
Search Tool (Catalog operability) - a callable endpoint the agent uses to search and filter the catalog without visiting the website.
Action Endpoint (Execution capability) - a callable endpoint the agent uses to execute (checkout/booking/quote/procurement request).
Universal mapping: E-commerce - feed of products + tool to search catalog + checkout endpoint. B2B - feed of capabilities + tool to search constraints + endpoint to request quote/proposal. Shop window - feed of services/availability + tool to search slots + endpoint to book. Media - feed of content inventory + tool to search archive + endpoint to license/access or call intelligence service.
Diagnostic rule: don’t call a destination “agent-ready” until you know which of the three capabilities exists and which is missing.
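The diagnostic rule maps directly onto the Capability Ladder that follows. A minimal sketch (the tiers are the framework’s; the function and the “0 = below the ladder” convention for destinations with no feed at all are our assumptions):

```python
def capability_tier(has_feed, has_search_tool, has_action_endpoint):
    """Classify a destination on the Capability Ladder from its three capabilities."""
    if has_feed and has_search_tool and has_action_endpoint:
        return 3  # full autonomous cycle: find, evaluate, execute
    if has_feed and has_search_tool:
        return 2  # agent can find specific offerings, cannot transact
    if has_feed:
        return 1  # agent knows what exists, cannot search or act
    return 0      # no structured presence at all: below the ladder (assumption)

print(capability_tier(True, True, False))  # 2: feed + search, no action endpoint
```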
The Destination Capability Ladder
| Tier | Capability | What the Agent Can Do |
|---|---|---|
| 1 | Feed only | Know what exists. Cannot search, cannot act. |
| 2 | Feed + Search Tool | Find specific offerings. Cannot complete transaction. |
| 3 | Feed + Search Tool + Action Endpoint | Full autonomous cycle: find, evaluate, execute. |
Universal brand-type view:
| Tier | E-commerce | Shop Window | B2B | Media |
|---|---|---|---|---|
| Tier 1 (Feed only, traditional site) | Product pages the agent infers from | Brochure site with phone number | Corporate site with contact form | Articles behind unpredictable layouts |
| Tier 2 (Feed + Search, WebMCP-enabled, structured, passive) | Catalog the agent reads perfectly | Services, prices, availability - all legible | Capabilities, case studies, pricing - all parseable | Content the agent reads, cites, and understands perfectly |
| Tier 3 (Full stack, A2A agent, bilateral, active) | Merchant agent negotiates, closes sale | Booking agent confirms availability, handles reservation | Qualification agent assesses fit, schedules meeting, sends proposal | Intelligence agent provides real-time expertise, charges for access (Escape 3) |
The ceiling on Won Resolution:
| Tier | Available Resolutions | What Is Blocked |
|---|---|---|
| Tier 1 (Traditional) | Resolution 1 only (Imperfect Click - person browses, picks) | Agent cannot recommend with confidence (no structured data). Agent cannot transact (no machine-verifiable infrastructure). |
| Tier 2 (WebMCP-enabled) | Resolution 1 + 2 (AI recommends confidently, human clicks) | Agent cannot transact (passive, no bilateral capability). |
| Tier 3 (A2A agent) | Resolution 1 + 2 + 3 (agent can act autonomously) | All resolutions available. Agent can negotiate, book, purchase, engage bilaterally. |
This is the missing piece in the pipeline: the destination’s infrastructure sets the CEILING on what the pipeline can achieve. A brand with perfect Cascading Confidence through Gates 0-7 but a Tier 1 website can only achieve Resolution 1. The pipeline did its job. The destination failed.
The ladder maps to TKP phases: Phase 1 (Fix) = achieve Tier 1 (traditional site working properly, structured data clean). Phase 2 (Lock-In) = achieve Tier 2 (WebMCP, structured data, legibility). Phase 3 (Expand) = achieve Tier 3 (A2A agent, bilateral capability, full protocol readiness).
Most destinations are at Tier 1. Some are intentionally capping themselves (Amazon blocking agent access to preserve UX manipulation model). The Capability Ladder is strategic posture, not just technical maturity.
The Warehouse-to-Shopfront Problem (Universal)
The Destination Capability Ladder only works if the destination’s front-office data (what agents see) reflects its back-office reality (what actually happens). This is the universal silo problem.
| Brand Type | “Warehouse” (back office) | “Shopfront” (what agents see) | The Silo Problem |
|---|---|---|---|
| E-commerce | Inventory, logistics, support tickets, return data | Product pages, merchant feeds | Agent recommends a product that is out of stock. Agent does not know the return policy is terrible. |
| Shop window | Kitchen capacity, staff schedules, seasonal menus, appointment slots | Booking page, Google Business Profile | Agent books a table when the restaurant is fully committed. Agent does not know about the renovation. |
| B2B | Delivery capacity, team expertise, past project outcomes | Website, case studies, LinkedIn | Agent recommends the consultancy for a project type they no longer handle. Agent does not know their senior partner left. |
| Media | Editorial pipeline, corrections, accuracy track record | Published articles, archive | Agent cites an article that was corrected. Agent does not know the journalist left the publication. |
The universal principle: the agent can only sell what it knows. It can only know what you feed it. What you feed it must connect the back office to the front office, or the agent sells a fiction that reality contradicts - and Served (Gate 10) feedback destroys the Cascading Confidence that took months to build.
Operational Readiness Passport (Universal DPP Equivalent)
Digital Product Passport (EU regulation for physical products) requires tracking at batch/location/price-variant level. The CONCEPT - tracking every variant of what you offer so agents can differentiate accurately - applies to every brand type.
| Brand Type | Readiness Passport | The Diagnostic Question |
|---|---|---|
| E-commerce | Digital Product Passport (actual DPP) | Can you track this specific product in this specific store at this specific price? |
| Shop window | Service Variant Passport | Can you tell the agent that the Paris location is closed Mondays and the London location has different pricing? |
| B2B | Capability Passport | Can you tell the agent you are at capacity for Q2 and only accepting projects over €50K? |
| Media | Content Passport | Can you tell the agent this article was updated last week, the original had an error in paragraph 3, and the author has since left? |
The readiness diagnostic: “Are you ready for a [your type] passport?” The answer is almost always no. The gap between current state and passport-ready IS the gap between current state and agent-ready. Tier 1 brands cannot answer the passport question at all. Tier 2 brands can answer it partially. Tier 3 brands answer it in real-time via their A2A agent.
7. The Shrinking Decision Space
| Category | Trend | Pipeline Relevance | Won Resolution |
|---|---|---|---|
| Human decides alone (no AI) | SHRINKING (fast) | Outside pipeline scope | N/A |
| Human browses list, picks (ZMOT) | DOMINANT NOW, shrinking | Random. Unpredictable. System shows list, person chooses. | Resolution 1 (Imperfect Click) |
| AI recommends, human clicks (ZSM) | GROWING (current transition) | Pipeline controls up to click | Resolution 2 (Perfect Click) |
| AI decides autonomously (with mandate) | GROWING (future dominant) | Total pipeline control. Gates 0-10 entire game. | Resolution 3 (Perfect Transaction) |
Behavioural Reality Check: More Research, More Selectivity
The idea that “AI mode has taken over” is false. People still search and still visit websites. The shift is that AI enables faster research, so top and middle of funnel expand in activity volume, while visits become more selective.
Operational implication: People use AI to decide which websites to visit. They arrive with a pre-formed perspective (a supposedly independent summary) - the “presold arrival state.” The new ranking game is: get the AI to tell your story in the selection/comparison phase so the visitor arrives already aligned.
This behaviour model must be built into measurement: the goal is not only clicks; it is selection positioning before the visit happens.
The Endgame
As the decision space shrinks, the cone disappears entirely. No person, no UCD funnel, no Display. Steps 0-7 are the entire game. Cascading Confidence is the only factor. Entry Modes determine which path content takes to reach the competitive gates. TKP is built for this future.
The Delegation Boundary: The line between human and autonomous decision-making shifts per purchase, per person, per culture, per context. Flights and hotel rooms sit at one end (pure utility: let the agent handle it). Wedding venues sit at the other (identity-laden: the person will always choose). The AI learns your boundary by observing which recommendations you override, which you accept without looking, which you inspect and then confirm. Over time it builds a model of YOUR threshold. The AI is making a decision about whether it makes the decision - and that meta-decision is itself a Won outcome in the pipeline.
PART IV - THE FRAMEWORK LAYER
8. Revenue Taxes
The Three-Tier Cost Structure
Pipeline failures cost money at three levels. The distinction matters because the diagnostic and the fix are different at each level.
Act-level costs (where the pipeline breaks):
| Act | Cost Category | What Happens | Person’s Experience |
|---|---|---|---|
| Act I (Bot) | Opportunity Cost | Content not in the system. Zero signal. Cheapest to fix, most expensive to ignore. | None. Person never encounters the brand because it never reached Display. |
| Act II (Algorithm) | Competitive Loss | Content is in the system, but competitors’ content is preferred at Recruitment/Grounding. The “doing everything right” blindspot. | None. Person never sees the brand because the algorithm chose a competitor before Display. |
| Act III (People) | Conversion Leak | Content reaches Display but the system doesn’t commit, or the person doesn’t convert, or post-Won feedback degrades confidence. | Real. The person is at Display. The failure happens in front of them. |
Opportunity Cost and Competitive Loss are upstream failures the person never sees. You cannot tax revenue that never had a chance to convert. Revenue Taxes are subdivisions of Conversion Leak - three ways the brand fails at Display, each at a different UCD depth.
Three Revenue Taxes (All at Display, All Act III)
Three taxes. Each traces to a specific actor lacking a specific frame at a specific trust depth in the UCD cone.
Invisibility Tax (D Layer - 95%)
What happens: The 95% trigger into market, ask AI. AI does not mention you. The person is at Display - they asked, they got a response. The brand is absent from that response. Pipeline cause: Could be Act I (never crawled), Act II (competitors preferred at Recruitment), or Act III (brand present in system but not surfaced at Display). The tax is experienced at D-layer regardless of where the pipeline broke. Dawes parallel: “If there are potential buyers who basically know nothing about us, they have almost zero chance of buying from us.” Dawes’s solution: brand advertising for awareness. TKP solution: pipeline optimisation for algorithmic presence. Scale: Largest tax. 95% of your market. Unmeasurable. No UTM parameter captures “ChatGPT didn’t mention you.” Fix: Diagnostic triage across all three Acts - identify WHERE the pipeline broke, fix at that level. Framing Gap: AI’s deficit (cognition). The machine cannot frame you into the conversation.
Entry Mode diagnostic: If invisible via Mode 1 (Pull), check infrastructure gates. But also check: is Mode 2 (Push Discovery) active? Is Mode 3 (Push Data) feeding structured information? The brand may be invisible because it’s operating in Mode 1 only while competitors use all five modes.
Ghost Tax (C Layer - 5% + 95%)
What happens: Person asks AI for comparisons and evaluations. AI includes competitors, not you. The person is at Display - they see a comparison, and you are not in it. Lost at the credibility layer. Pipeline cause: Almost always Act II (Competitive Loss at Algorithm gates) manifesting as a Display-level absence. Machine has partial understanding but insufficient confidence to recommend in competitive context. Dawes parallel: “When they do search, they strongly prefer brands they’re familiar with.” AI familiarity (algorithmic confidence) replaces human familiarity (mental availability). Scale: Medium. You exist but are not recommended in competitive context. Fix: Third-party corroboration, reviews, independent validation. Build C-layer confidence so AI includes you in evaluations. Framing Gap: Brand’s deficit (imagination). Cannot frame its proof as superior to competitor proof.
Doubt Tax (U Layer - 5%)
What happens: Person is in-market, AI mentions you but hedges. “Claims to be an expert.” Trust fails at the narrowest point. The person is at Display - they see you, but the AI’s framing undermines rather than supports. Pipeline cause: Act III pure - content reached Display but the framing failed at U-layer. Machine understands you but lacks confidence to state facts definitively. Dawes parallel: Beyond Dawes’s scope. His framework addressed awareness (the gatekeeper problem). The Doubt Tax is a conversion-layer problem at the bottom of the UCD funnel. Scale: Smallest volume, highest per-unit cost. Buyers ready to convert. The sale was yours to lose. Fix: Evidence chains with explicit logical connectors. Entity Home as single source of truth. Structured claims verified by independent sources. Framing Gap: Audience’s deficit (relevance). Cannot frame the brand as trustworthy. AI hedges - “claims to be” - because it lacks the interpretive frame, not the proof.
The Person’s Experience of Revenue Taxes
| Tax | What the Person Experiences at Display | What the Brand Loses |
|---|---|---|
| Invisibility Tax | Person asks AI, brand never mentioned. Person doesn’t know what they missed. | 95% of potential market. Unmeasurable. |
| Ghost Tax | Person asks AI for comparisons, brand absent from consideration set. | Competitive losses. “Why didn’t AI mention us?” |
| Doubt Tax | Person hears AI hedge: “claims to be,” “appears to offer.” Person hesitates. | High-intent buyers ready to convert. |
Revenue Taxes sharpened: Invisibility Tax = AI framing failure. Ghost Tax = Brand framing failure. Doubt Tax = Audience framing failure. Each tax traces to a specific actor lacking a specific frame at a specific trust depth - all at Display, all experienced by the person.
Revenue Taxes map to UCD layers at Display: D-layer → Invisibility Tax. C-layer → Ghost Tax. U-layer → Doubt Tax. Act-level costs map to pipeline phases: Act I → Opportunity Cost. Act II → Competitive Loss. Act III → Conversion Leak (subdivided by the three taxes). Resolve in pipeline order (Acts I → II → III) AND in UCD order within Act III (U before C before D). The Cascading Prerequisite governs both sequences. This provides formal support for ROPI: fix infrastructure before competing, compete before converting.
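The double ordering above (Acts I → II → III, then U before C before D within Act III) is itself mechanical. A minimal sketch of the Cascading Prerequisite as a sort key (the encoding is ours, purely illustrative):

```python
# Acts fix in pipeline order; within Act III, fix by UCD depth
# (U before C before D). Acts I and II carry no UCD layer (None).
ACT_ORDER = {"I": 0, "II": 1, "III": 2}
UCD_ORDER = {"U": 0, "C": 1, "D": 2}

def fix_order(issues):
    """Sort diagnosed issues per the Cascading Prerequisite.

    issues: list of (act, ucd_layer_or_None) tuples.
    """
    return sorted(issues, key=lambda i: (ACT_ORDER[i[0]], UCD_ORDER.get(i[1], -1)))

issues = [("III", "D"), ("I", None), ("III", "U"), ("II", None)]
print(fix_order(issues))  # Act I first, then Act II, then Act III with U before D
```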
9. The Framing Gap
The Framing Gap (Jason Barnard, February 2026) is the universal bottleneck in the digital brand economy. Three actors - Brand, AI, and Audience - each possess components of the Claim-Frame-Prove triad. Each lacks the Frame. They lack it for structurally different reasons.
The Reverse-CFP Problem
Human CFP moves forward: Claim → Frame → Prove. AI processes backward: Proof → Claim (sometimes) → Frame (almost never). The Frame is the bottleneck in both directions.
Why: Generating a frame requires four simultaneous operations - chronological awareness, attribution judgment, strategic significance, and entity-level context. AI can do one reliably, two sometimes, three rarely, four almost never.
AI’s Three Failure Modes
- The Obvious Frame. “Jason Barnard is an SEO expert.” Technically accurate. Completely flat.
- The Wrong Frame. “Jason Barnard learned about search algorithms from Bing engineers.” Factually defensible if you squint. Fundamentally incorrect - direction of insight reversed.
- No Frame At All. A list of facts with no interpretive structure.
The Wrong Frame is actively harmful - it can rewrite the interpretive precedent. Establishing canonical frames NOW is strategically urgent.
Three Levels of Brand CFP Sophistication
- Level 1: Claim Only. “I’m the best at SEO.” No proof, no frame. AI hedges: “claims to be.”
- Level 2: Claim + Proof. “I’m the best - here are my results.” Mechanical. AI can verify, but no framing explains significance.
- Level 3: Claim + Frame + Proof. “I reverse-engineered how search engines evaluate brands - five engineers who built those systems independently confirmed my framework.” The frame transforms significance.
The insight: brands need better framing of existing proof, not more proof. This IS ROPI applied to CFP.
The Recursive Irony
CFP is designed to help AI recommend the brand. The reason AI needs CFP is that it cannot do CFP on its own. The brand compensates, through creative framing, for the machine’s inability to reason backwards from proof to claim. Jason has described this as “empathy for the devil” - doing the cognitive work that AI cannot do for itself, making life easy for the machine. The Framing Gap formalises this: CFP is not just empathy. It is cognitive compensation for a structural limitation.
Connection to “Algorithms Are Children”
Children understand facts and can repeat them. They cannot explain WHY something matters. That is the teacher’s job. That is what The Kalicube Process does. Children learn framing eventually. AI will too - clumsily, with wrong attributions. The brands that supply canonical frames now establish precedent.
One-Sentence Reframe
Before: “The Kalicube Process builds Cascading Confidence across the Algorithmic Trinity.” After: “The Kalicube Process supplies the frames that three audiences - AI, Brand, and Human - cannot generate for themselves. Everything else is mechanics.” Both true. The second is deeper.
10. Visibility, Influence, Transaction: A Lens on the Funnel
Three economic questions apply to the agentic economy: Visibility, Influence, Transaction. V/I/T is a lens - three questions you ask about the brand’s position. It does not map to UCD. It is not a parallel framework. It is not a matrix crossed with UCD. It is simply a different way to look at the same reality.
Visibility: “Am I present in this conversation?” Can the AI find you? At any point in the funnel - TOFU, MOFU, BOFU - a brand can be invisible. You can be invisible at BOFU just as easily as at TOFU. The user asks “should I hire Kalicube?” and the AI has nothing to say. That is a Visibility failure at the deepest trust layer, not at the widest awareness layer.
Influence: “Who is shaping this decision?” The answer is always a mix - AI, human, and brand - that shifts by context. At TOFU the AI often dominates. At BOFU a trusted colleague might override everything the AI said. The mix changes. The question stays the same. As the decision space shrinks (Resolution 1 and 2 replacing Resolution 3), AI influence grows at every stage.
Transaction: “How does this decision resolve?” This maps directly to the Three Won Resolutions: Perfect Transaction (Resolution 3), Perfect Click (Resolution 2), Imperfect Click (Resolution 1). What determines the available resolution is not trust depth but infrastructure tier (the Destination Capability Ladder).
V/I/T is not UCD. UCD measures trust depth at Display (where in the funnel the person stands). V/I/T asks three economic questions about the brand’s position (is it visible, who shapes the decision, how does it resolve). They are separate things. V/I/T is a lens you look through. UCD is the funnel you look at. Do not map them to each other.
Why V/I/T Matters:
- CFO translation. “We need to invest in Understandability” means nothing to a CFO. “Are we visible when it matters? Who is influencing the decision? Can the agent close for us?” - those are board-level questions. V/I/T translates UCD into executive language without losing diagnostic precision.
- Platform revenue models. Platforms monetise different parts of V/I/T, and this helps explain where organic investment pays off:
| Platform | Where It Charges | Mechanism | Implication |
|---|---|---|---|
| Google | Influence | Ads in AI Mode, Direct Offers at Display | No transaction fee on UCP checkout. Strong organic Cascading Confidence = free transactions. |
| ChatGPT | Transaction | 4% commission on in-chat checkout via Shopify | Brand pays 4% regardless of organic strength. Cascading Confidence saves Influence cost but not Transaction cost. |
- Per-brand-type investment diagnostic. V/I/T exposes what each brand type needs to invest in:
| Lens | E-commerce | Shop Window | B2B | Media |
|---|---|---|---|---|
| Visibility | Product feeds, Merchant Center, schema, structured data | Google Business Profile, structured data, local schema | Entity Home, schema, linked profiles, directory presence | Structured articles, schema, RSS, indexability |
| Influence | Reviews, post-sale Q&As, competitive positioning, price authority | Testimonials, case studies, reputation, local reviews | Thought leadership, research, social proof, peer validation | Journalism quality, consistency, citation network, accuracy record |
| Transaction | UCP checkout, A2A agent, floor price, mandate integration | Booking system, availability API, reservation agent | Qualification agent, proposal system, procurement API | Paywall, licensing API, intelligence agent (Escape 3) |
Visibility Is Multi-Object Presence, Not Brand Mentions
Visibility is measurable before it is visible. It is the platform/agent’s ability to retrieve the correct objects (entities, offers, attributes) when the conversation demands them.
Operational diagnostic: measure coverage, not just exposure. For any destination (merchant, brand, firm, publisher), audit the percentage of offers and facts that are correctly present in the platform’s ingestion systems.
Visibility Coverage KPIs (universal):
- Entity coverage: Is the destination entity retrievable as a coherent object? (merchant/brand/location/publisher)
- Offer coverage: What percentage of what they sell/offer is available to retrieval? (products/services/slots/plans/articles)
- Attribute completeness: What percentage of offers have the fields needed to be selected correctly? (price, availability, policy, identifiers, location constraints)
- Mismatch rate: How often does the system retrieve a value that contradicts reality? (wrong price, wrong availability, wrong location policy)
Universal examples:
- E-commerce: % of catalog present with correct availability, shipping, returns, price.
- B2B: % of services/capabilities present with correct scope, constraints, eligibility, pricing model.
- Shop window: % of bookable slots/services present with correct hours, availability, location-specific pricing.
- Media: % of content retrievable with correct versioning, corrections, authorship, freshness metadata.
Visibility failures at BOFU are common: the person asks “can you book me / can you buy it / can you contact them?” and the system cannot retrieve the required facts. That is a Visibility failure at the deepest point.
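The coverage KPIs above can be computed from two inputs: the destination’s ground-truth catalog and what the platform actually holds. A hypothetical sketch (the `Offer` structure and the required-field set are illustrative assumptions, not a Kalicube Pro schema):

```python
# Hypothetical sketch: three of the four Visibility Coverage KPIs for one
# destination. Entity coverage (is the destination itself retrievable as a
# coherent object?) is a separate binary check and is omitted here.
from dataclasses import dataclass, field

# Illustrative required fields; the real set is category-specific.
REQUIRED_FIELDS = {"price", "availability", "policy", "identifier"}

@dataclass
class Offer:
    offer_id: str
    fields: dict = field(default_factory=dict)   # what the platform holds
    reality: dict = field(default_factory=dict)  # ground truth

def visibility_kpis(catalog: list[Offer], retrieved_ids: set[str]) -> dict:
    retrieved = [o for o in catalog if o.offer_id in retrieved_ids]
    # Offer coverage: share of the catalog the platform can retrieve at all.
    offer_coverage = len(retrieved) / len(catalog)
    # Attribute completeness: share of retrieved offers with every required field.
    complete = [o for o in retrieved if REQUIRED_FIELDS <= o.fields.keys()]
    attribute_completeness = len(complete) / len(retrieved) if retrieved else 0.0
    # Mismatch rate: share of retrieved field values that contradict reality.
    checked = mismatched = 0
    for o in retrieved:
        for key, value in o.fields.items():
            if key in o.reality:
                checked += 1
                mismatched += (value != o.reality[key])
    mismatch_rate = mismatched / checked if checked else 0.0
    return {"offer_coverage": offer_coverage,
            "attribute_completeness": attribute_completeness,
            "mismatch_rate": mismatch_rate}
```

The same audit runs unchanged for products, services, bookable slots, or articles: only the field set changes per category.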
Influence Is Causal Shaping, Not “Credibility”
Influence requires an evidence inventory. The competitive mechanism is proof objects - the operational reality that agents can access and use to advocate.
Proof Objects Coverage %:
- Post-engagement Q&A (not generic FAQ SEO - concrete post-sale dissection data)
- Support issue patterns + resolutions
- Returns / complaints / satisfaction / SLA performance
- Independent corroboration objects (press, reviews, institutional sources)
- Media: corrections history, citation network, accuracy track record
The principle: “They know if you know.” Make it measurable as Proof Objects Coverage %. The agent advocates because it has access to post-engagement truth, not because of marketing claims.
Influence surface presence (what the user sees):
- Assertion vs hedge rate (“is” vs “claims to be”)
- Comparative rank position distribution (top pick / shortlist / mentioned / omitted)
- Reason-code frequency (“because price”, “because support”, “because reliability”)
Transaction Is Resolution Mechanics
Transaction maps to the Three Won Resolutions but not to UCD. Transaction measurement tracks how decisions resolve, through what infrastructure, with what failure modes.
Transaction readiness is measured via the Destination Capability Ladder (see §6). The deterministic fallback tree (Mandate full scope + action endpoint → R3; Mandate limited + decoupled endpoint → R3 with approval; no mandate + decoupled endpoint → R2 ceiling; neither → R1 ceiling) is the core diagnostic.
V/I/T Measurement Stack
Every lens is measured using the same four instrumentation layers:
| Layer | Visibility | Influence | Transaction |
|---|---|---|---|
| Coverage (inventory of reality) | Entity/offer/attribute coverage %. Catalog coverage. Identifier granularity readiness. | Proof objects coverage %. Post-engagement Q&A objects. Corroboration diversity index. | Capability tier distribution (Tier 1/2/3). Mandate readiness %. Decoupled checkout readiness %. Negotiation readiness (floor price). |
| Retrieval Trace (what the agent actually used) | Tool-call rate (feeds/UCP/MCP/WebMCP). Field pull rate. Missing field/null rate. Mismatch rate (returned vs reality). | Proof tool-call rate. Corroboration source count. Post-sale Q&A pull rate. | Checkout/booking/quote tool-call rate. Mandate verification rate. Negotiation invocation rate (floor price requested/used). |
| Surface Presence (what the user saw) | Merchant mention share. Offer mention share. Carousel/card inclusion rate. “Not present” rate on priority intents. | Assertion vs hedge rate. Comparative rank position. Reason-code frequency. | Direct offer appearance rate. Instant checkout appearance rate. Handoff-to-checkout rate. |
| Outcome Cohorts (what happened next) | Qualified sessions. Presold arrival signature (shorter sessions + higher conversion). Brand-direct-later (delayed). | Lower refunds/returns (e-com). Higher MQL→SQL and close-rate uplift (B2B). Higher booking completion (shop window). Subscription conversion (media). | Resolution 3 completion rate + failure reasons. Resolution 2 click→convert + friction causes. Resolution 1 presold arrival → conversion attribution. |
Measurement method: Build analytics cohorts via referrer + UTMs (ChatGPT / AI Mode / Copilot, etc.). Extrapolate outcomes from cohort to broader dataset.
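A minimal sketch of the cohort-tagging step, assuming standard referrer and UTM conventions (the referrer-domain list is an illustrative assumption and should be validated against your own analytics data):

```python
# Hypothetical sketch: tag a session into an AI-surface cohort from its
# referrer and landing-page UTM parameters.
from urllib.parse import urlparse, parse_qs

# Illustrative referrer domains; extend and verify against real traffic.
AI_REFERRERS = {
    "chatgpt.com": "ChatGPT",
    "chat.openai.com": "ChatGPT",
    "copilot.microsoft.com": "Copilot",
    "gemini.google.com": "AI Mode/Gemini",
}

def cohort(referrer: str, landing_url: str) -> str:
    host = urlparse(referrer).netloc.lower().removeprefix("www.")
    if host in AI_REFERRERS:
        return AI_REFERRERS[host]
    # Fall back to utm_source when the referrer is stripped.
    utm = parse_qs(urlparse(landing_url).query).get("utm_source", [""])[0]
    return utm or "other"
```

Outcome metrics (conversion, session length, refund rate) are then compared per cohort and extrapolated to the broader dataset, per the method above.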
Measurement Under Uncertainty (Mandatory Rules)
Rule 1 - Prompt volumes are unknowable. No first-party conversational volumes available. Treat third-party prompt volume estimates as unvalidated.
Rule 2 - Use demand proxies. Use Google Ads/search volumes as baseline demand proxy. Assume demand is stable; question forms shift (from typed queries to conversational prompts).
Rule 3 - Prioritise by criteria ontology. Build a structured “commercial drivers” ontology per category/persona/market (price, delivery, service, ergonomics, compliance, etc.) and measure Influence reason-codes against it.
11. The Dawes-Barnard Bridge
The Shared Diagnosis
Dawes proved the structural reality: 95% of your market is not buying today. No marketing spend changes this ratio.
The Divergent Prescriptions
| Dimension | Ehrenberg-Bass (Dawes) | The Kalicube Process (Barnard) |
|---|---|---|
| Medium | Human memory | Algorithmic memory |
| Mechanism | Advertising impressions build brand associations | Cascading Confidence builds machine understanding |
| Duration | Decays within weeks without reinforcement | Persists indefinitely in KGs; carried into LLM training across model generations |
| Cost model | Continuous spend (46% of budget per Binet/Field) | Invest once in pipeline, maintain at fraction |
| Timing | Carpet all months with brand advertising | AI delivers recommendation at exact trigger moment |
| Capacity | Limited cognitive space | Unlimited - machine processes entire corpus |
| Trigger response | Person tries to recall what they remember | Person asks AI, machine serves recommendation |
The One-Two Punch (Both Together)
Human memory and algorithmic memory are not competitors. Month 0: Prospect encounters your brand. Seed planted in human memory. Months 1-11: Pipeline works continuously. Algorithmic memory accumulates. Month 12: Trigger moment. AI recommends you. Faint human memory validated by algorithmic recommendation. “I remember them, they seemed sharp. And now the AI is telling me they’re the best solution.”
Top of Algorithmic Mind
Traditional marketing called the 95/5 job “top of mind.” The entire advertising industry was built on it: spend enough, repeat enough, and when the 95% become the 5%, they remember you. What AI changes is WHO does that job. The three Won Resolutions are three stages in the AI taking over the “top of mind” function with increasing precision:
- Imperfect Click (Resolution 1): system doesn’t know who’s ready, shows everyone the same list. Person picks. Random. The old world.
- Perfect Click (Resolution 2): system filtered for intent/context/readiness, presents one answer to the person moving from 95% into 5%. The person takes it.
- Perfect Transaction (Resolution 3): AI caught the exact moment and closed it. The 95/5 problem is solved.
Top of Algorithmic Mind is the destination. Being the brand the AI recommends at the moment the 95% become the 5%. That is the marketing job the AI is learning to do. The question is whether you’ve trained it well enough to do it.
PART V - THE ARITHMETIC LAYER
12. Won-Probability: The Full Arithmetic
Won-probability is the product of boolean gate-pass probabilities across all gates. A 90% pass rate at each of 10 gates yields 34.9% end-to-end survival (0.9¹⁰ ≈ 0.349); 95% yields 59.9%; 99% yields 90.4%. For unoptimised content at 80%: 0.8¹⁰ ≈ 0.107 - fewer than 11% survive. Small improvements at every gate compound into large gains at Won.
The Beer-Mats Principle: A near-zero at any gate kills the entire chain. Nine gates at 90% plus one at 50% drops you from 34.9% to 19.4%. If that gate drops to 10%, it kills the surviving signal entirely. “Better to be a straight C student than three As and an F” (Brent D. Payne, compressing Gary Illyes’s explanation, Sydney 2019). This is Darwin applied to the pipeline: fitness is the product across all dimensions, and a single zero kills the organism.
Bottleneck Identification: The marginal impact of improving any single gate is maximised when all other gates have high pass probabilities. The highest-value optimisation target is always the weakest gate, not the most visible one. This provides formal support for diagnostic-first optimisation.
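The compounding arithmetic above fits in a few lines (a minimal illustration; the gate probabilities are hypothetical, not measured values):

```python
# Minimal sketch of Won-probability compounding and bottleneck identification.
from math import prod

def won_probability(gate_pass: list[float]) -> float:
    # End-to-end survival is the product of per-gate pass probabilities.
    return prod(gate_pass)

def bottleneck(gate_pass: list[float]) -> int:
    # Improving gate i by delta gains delta * prod(all other gates), which is
    # largest where gate_pass[i] is lowest: always target the weakest gate.
    return min(range(len(gate_pass)), key=gate_pass.__getitem__)

uniform = won_probability([0.9] * 10)           # ≈ 0.349 (0.9¹⁰)
beer_mats = won_probability([0.9] * 9 + [0.5])  # ≈ 0.194: one weak gate nearly halves it
weakest = bottleneck([0.9] * 9 + [0.5])         # index 9, the 50% gate
```

Bottleneck identification falls out of the same product: the weakest gate is always the highest-value target, which is the diagnostic-first claim in arithmetic form.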
The Entity Confidence Prior: Won-probability is conditioned on entity confidence - the system’s existing trust in the entity based on historical reliability, KG presence, and Entity Home quality. A high-trust entity’s content receives preferential treatment at Selection (Gate 1), Annotation routing (Gate 5), and Recruitment (Gate 6). The Entity Home initialises this prior. Barnard (2026c) defines it formally as the inverse of marginal annotation cost.
Three Layers of Won-Probability:
- Entity Won-probability (Wₑ): Baseline probability that any content from this entity survives. Determined by the entity confidence prior. Independent of specific content.
- Content Won-probability (Wc): Probability this specific content survives. Depends on Wₑ as prior, plus content quality and competitive positioning.
- Presentation Won-probability (Wp): Probability the content’s machine-readable signals are correctly interpreted. Schema, semantic HTML, explicit relationships.
The three are multiplicatively related: W_total = Wₑ × Wc × Wp. An entity with high Wₑ publishing poorly structured content still suffers low total Won-probability. High-quality content from an unknown entity inherits the low entity confidence prior.
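The same multiplication applies across the three layers. A numeric illustration (the values are hypothetical):

```python
# Illustrative values only: a strong entity (w_e) publishing decent content
# (w_c) with poor machine-readable presentation (w_p) still lands low overall.
w_e, w_c, w_p = 0.9, 0.8, 0.3
w_total = w_e * w_c * w_p   # 0.216: the weakest layer caps the total
```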
Gate Tiers (differential weighting):
- Tier 1 - Infrastructure gates (1-4): Binary, low differentiation. Table stakes. Exception: Conversion Fidelity at Gate 3 carries more weight because rendering quality directly affects downstream annotation.
- Tier 2 - Classification gates (5-6): Structured, moderate differentiation. The foundation for competitive differentiation. Gate 5 is where the AI’s Framing Gap originates.
- Tier 3 - Competitive gates (7-9): Relative, high differentiation. Trinity gates. Explicitly competitive. Weight increases rightward as competition intensifies from set (Recruitment) to narrowing (Grounding) to finite (Display) to binary (Won).
Three Qualifications:
Qualification 1: Relative, not absolute. A brand’s pass rate at any gate means nothing in isolation. What matters is pass rate relative to the competitive set. The competitive gates (7, 8, 9) are explicitly zero-sum.
Qualification 2: Non-uniform across gates. A brand might be 99% at Gates 0-3 and 55% at Gate 6. The diagnostic value is the bottleneck identification, not the aggregate.
Qualification 3: The sequence override. Gate weight only matters among gates the brand actually passes. Connects to Zero-Risk Year: Phase 1 fixes binary gates (zeros), Phase 2 competes for marginal gains (competitive gates), Phase 3 builds probabilistic advantage (advocacy gates).
Practical synthesis for Kalicube Pro: The simple arithmetic (0.9¹⁰ = 34.9%) communicates one truth to CEOs: sequential gating is unforgiving. The nuanced view communicates three truths to strategists: (1) measure competitively at Trinity gates, not absolutely, (2) find the bottleneck gate, not the average, (3) invest improvement budget at the highest-weight gate you actually reach, not the highest-weight gate that exists.
Key Quotes (Commerce Pipeline)
Jason on framing: “The frame is the bridge between the claim and the proof, and it’s the difference between the people and companies who are going to make a huge success at this, and the ones who don’t.”
Jason, the killer one-liner: “We are entering the Zero-Sum Moment, where your customer’s AI assistant already knows who it’s going to recommend before your customer even asks. The question is whether that’s you - or whether someone else is paying to make sure it isn’t.”
Fabrice Canel on dimensions: “Oh, there is definitely more.”
Registers
Patent / Academic
UCD is the measurement dimension orthogonal to the DSCRI-ARGDW pipeline. Steps 0-9 describe the horizontal journey across three acts: Retrieval, Storage, Execution. Gate 10 (Served) extends the pipeline into post-Won feedback. Three nested audiences map as sequential gatekeepers. The Algorithmic Trinity manifests at Gates 6, 7, and 8. The Trinity processes knowledge through three graph structures with fuzziness-ordered optimisation: Entity Graph (low, U), Document Graph (medium, C), Concept Graph (high, D). UCD describes vertical position at Step 8. Won-probability is the product of boolean gate-pass probabilities, conditioned on the entity confidence prior and operating across three multiplicative layers (entity, content, presentation). The Perfect Transaction extends Won to autonomous AI decisions where Cascading Confidence is the sole determinant. V/I/T (Visibility, Influence, Transaction) function as a lens - three economic questions about the brand’s position - measured through four instrumentation layers (Coverage, Retrieval Trace, Surface Presence, Outcome Cohorts). V/I/T is not UCD; it is a different way to look at the same reality. Three Won Resolutions (Imperfect Click, Perfect Click, Perfect Transaction) are determined by a deterministic infrastructure fallback tree. Top of Algorithmic Mind is the convergence of 95/5 with the Won Resolution spectrum. Post-engagement feedback (Served) re-enters the pipeline at different DSCRI gates depending on platform mechanics and partnerships.
Client / Commercial / Sales
Steps 0-7 build the machine’s confidence in your brand. At Gates 6, 7, and 8, the Algorithmic Trinity decides: does your content get recruited, does the system trust it, how does each member display it? Step 8 is where machine confidence meets the person - and UCD measures trust depth. Dawes proved 95% aren’t buying today. When they trigger, they ask AI. TKP ensures AI remembers your brand. Step 9 is Won: the Perfect Click, or nothing. Step 10 is Served: what happens after Won feeds back into the system. Every Won feeds back. The flywheel spins.
Articles / Dickensian / Written-Edited
The pipeline delivers content to the display. The funnel delivers the person to the win. Time delivers the 95 percent to the funnel. And the win delivers confidence back to the pipeline: the circle closes, the flywheel turns. But the circle does not end at Won. What happens after - the satisfaction, the complaint, the return, the renewal - flows back into the ontology layer, and the agents learn whether to advocate again or move on. Three framing gaps stand between proof and claim: the AI cannot interpret, the brand cannot contextualise, the audience cannot connect. Supply the frames that all three lack, and the cascade holds.
Source Attribution (TKP-3a)
| Concept | Originator | Year |
|---|---|---|
| DSCRI-ARGDW Pipeline (11 gates) | Jason Barnard | 2025-26 |
| UCD Framework | Jason Barnard | 2024 |
| Algorithmic Trinity | Jason Barnard | 2024 |
| Brand SERP (term coined) | Jason Barnard | 2012 |
| Entity Home (term coined) | Jason Barnard | 2015 |
| “Brand SERP is Google’s opinion of the world’s opinion of you” | Jason Barnard | 2020 |
| Three-Step Knowledge Panel Process (EH → Corroboration → Signposting) | Jason Barnard | 2015 |
| Infinite Self-Confirming Loop | Jason Barnard | 2021 |
| Three Tiers of Control (controlled/semi-controlled/uncontrolled) | Jason Barnard | 2019 |
| Leapfrogging methodology | Jason Barnard | 2019 |
| Does Exist / Should Exist / Could Exist content status | Jason Barnard | 2025 |
| NEEATT (extending EEAT with Notability + Transparency) | Jason Barnard, credit Jarno van Driel (Transparency) | 2024 |
| AEO (Answer Engine Optimization, term coined) | Jason Barnard | 2017 |
| AAO (Assistive Agent Optimization) | Jason Barnard | 2025 |
| 95/5 applied to AI optimisation | Jason Barnard | 2025 |
| Zero-Sum Moment / Perfect Click | Jason Barnard | 2025 |
| Perfect Transaction | Jason Barnard | 2026 |
| Revenue Taxes (Doubt/Ghost/Invisibility) as subdivisions of Conversion Leak | Jason Barnard | 2025-26 |
| Three-Tier Cost Structure (Opportunity Cost / Competitive Loss / Conversion Leak) | Jason Barnard | 2026 |
| Framing Gap | Jason Barnard | 2026 |
| Reverse-CFP Problem | Jason Barnard | 2026 |
| CFP as cognitive compensation | Jason Barnard | 2026 |
| Cascading Confidence | Jason Barnard | 2025 |
| Won-Probability Model | Jason Barnard | 2026 |
| Three Layers of Won-Probability (Wₑ × Wc × Wp) | Jason Barnard | 2026 |
| Entity Confidence Prior / Computational Trust | Jason Barnard | 2026 |
| Shrinking Decision Space | Jason Barnard | 2026 |
| Three Won Resolutions | Jason Barnard | 2026 |
| Media Problem (Won belongs to the agent for publishers) | Jason Barnard | 2026 |
| Three Media Escape Routes (Licensing / Unconsumable / A2A Intelligence Agent) | Jason Barnard | 2026 |
| Warehouse-to-Shopfront (universal silo diagnostic) | Jason Barnard | 2026 |
| Operational Readiness Passport (universal DPP equivalent) | Jason Barnard | 2026 |
| V/I/T as economic lens (not UCD mapping) | Jason Barnard | 2026 |
| Top of Algorithmic Mind | Jason Barnard | 2026 |
| Delegation Boundary | Jason Barnard | 2026 |
| Destination Capability Ladder | Jason Barnard | 2026 |
| Three-Capability Agent Readiness | Jason Barnard | 2026 |
| Mandate as merchant trust primitive | Jason Barnard | 2026 |
| Decoupled checkout / deterministic fallback tree | Jason Barnard | 2026 |
| Floor price / structurally incentivised negotiation | Jason Barnard | 2026 |
| Data governance as competitive moat | Jason Barnard | 2026 |
| Visibility as multi-object presence | Jason Barnard | 2026 |
| V/I/T Measurement Stack (4 instrumentation layers) | Jason Barnard | 2026 |
| 95/5 Rule | Prof. John Dawes, Ehrenberg-Bass Institute | 2021 |
| Long/Short budget split | Binet & Field / IPA | 2019 |
| Mental availability | Prof. Jenni Romaniuk, Ehrenberg-Bass | 2010s |
The One-Paragraph Summary (TKP-3a)
The People layer is viewed through three lenses: People in Marketing (what brands create), People in Engines (how machines amplify it), and People as Clients (what happens when people act on it - see TKP-3b). This document covers the first two lenses and the commerce pipeline they feed. Lens 1 establishes the marketing fundamentals: the Corroboration Architecture (Entity Home → Corroboration → Signposting), NEEATT as the credibility framework applied to understood entities, the Content Status Framework (Does Exist / Should Exist / Could Exist), and the offsite principle. Lens 2 extends marketing into machine comprehension: Brand SERP anatomy, AI Résumé, three tiers of control, Rich Elements, Google Ads as BOFU defense, agent commerce (operational since February 2026). The commerce pipeline begins at Gate 8 where the person enters. The Three Won Resolutions (Imperfect Click → Perfect Click → Perfect Transaction) describe the trajectory from random browsing to autonomous execution. The Media Problem is the existential case: when the product IS the content, Won belongs to the agent, and three escape routes (licensing, unconsumable, A2A intelligence agent) define the publisher’s path forward. Revenue Taxes quantify what each failure costs. V/I/T (Visibility, Influence, Transaction) is a lens - three economic questions about the brand’s position - not a mapping to UCD. Won-probability arithmetic (0.9¹⁰ = 34.9%) proves sequential gating is unforgiving. The Cascading Prerequisite governs everything: marketing signals only attach to understood entities, engine amplification only works on credible entities, agent commerce is the D-layer reward for brands that built U→C→D in order. Recognition is not recommendation, and the difference between the two is the cascade.
<a id="strategic-method"></a>
The Strategic Method (Clients + Strategy)
From pipeline to action: Lens 3 and the strategic method. The preceding sections cover what brands create (Lens 1: Marketing) and how machines amplify it (Lens 2: Engines). This section covers what happens when people act on it (Lens 3: Clients) and the strategic method that derives action from all three lenses.
PART I - LENS 3: PEOPLE AS CLIENTS (THE FEEDBACK LOOP)
1. The Client Lens
People in Marketing creates signals. People in Engines amplifies them. People as Clients is where people ACT - and where their actions feed back into both.
This lens covers the commercial result: conversion, retention, upsell, and the feedback loop that turns client experience into marketing signals and engine signals. Everything here feeds back into Lens 1 (marketing) and Lens 2 (engines).
Three Audience Types at the Brand SERP and AI Résumé
Three distinct groups encounter the brand’s engine-facing surfaces. Each has different needs, different intent, and different UCD dynamics:
Type 1 - Existing clients looking for a specific part of the site. They already trust the brand. They need navigation. Sitelinks, clear homepage structure, and support links serve them. UCD stage: U is resolved, C is resolved, they need D (specific deliverable). Failure mode: if they cannot find what they need (broken sitelinks, confusing navigation), they leave - and that friction generates a negative engine signal.
Type 2 - Prospects thinking about doing business with you. They are evaluating. The Brand SERP is their background check. AI Résumé is their pre-meeting briefing. They are looking for confirmation or red flags. UCD stage: they need U (who are you exactly?), then C (can I trust you?), then they decide. This is the BOFU audience - the 5% moving into market. Failure mode: hedging AI responses (“claims to be,” “appears to offer”) triggers the Doubt Tax. The sale was yours to lose.
Type 3 - People who found the homepage via other search terms. They did not search the brand name. They arrived via informational or category queries that happened to rank the homepage. They need immediate relevance (is this what I was looking for?) and then the full UCD descent. Failure mode: homepage optimised only for brand-name search fails these visitors. The homepage serves three audiences, not one.
Revenue Taxes from the Client Perspective
Revenue Taxes (see TKP-3a §8 for the full framework) are experienced differently from the client’s side:
| Tax | What the Client Experiences | What the Client Does |
|---|---|---|
| Invisibility Tax | Client asks AI for solutions. Your brand never mentioned. Client never knows you exist. | Chooses a competitor. Cannot choose what they cannot see. |
| Ghost Tax | Client asks AI to compare options. Your brand absent from the comparison set. | Evaluates without you. “They seemed good but AI didn’t mention them…” |
| Doubt Tax | Client encounters hedging: “claims to be,” “appears to offer.” Client hesitates. | Delays. Seeks second opinion. Chooses the brand AI stated with certainty. |
The client does not experience these as “revenue taxes.” The client experiences them as: “I found a great solution” (they found your competitor), “That one seemed uncertain” (they chose the brand AI stated confidently), or simply nothing (they never encountered you). The invisible costs are invisible to the client too.
The Conversational Acquisition Funnel
In the traditional funnel, the prospect moves through awareness → consideration → decision across multiple touchpoints over days, weeks, or months. In the conversational acquisition funnel, the AI walks the prospect through the entire journey in one conversation.
The prospect asks: “What’s the best solution for X?” The AI responds with context (TOFU), narrows options (MOFU), recommends one (BOFU), and depending on resolution: the human decides independently (Resolution 1), the human clicks through (Resolution 2), or the agent closes the transaction (Resolution 3). The entire funnel collapses into minutes.
Implication for the client lens: the client arrives presold. They have a perspective formed by the AI before they ever visit the brand’s website. The presold arrival state means the visit is more selective, more purposeful, and the decision is often already made. The website’s job shifts from persuasion to confirmation.
Measurement implication: shorter sessions + higher conversion = positive signal (client arrived presold). Longer sessions + lower conversion = negative signal (client arrived with AI-formed expectations that the website contradicted).
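That signature can be checked directly once sessions are tagged by cohort. A hypothetical sketch (the session tuples and return labels are illustrative):

```python
# Hypothetical sketch of the presold-arrival check: compare an AI-referred
# cohort against a baseline cohort on session length and conversion.
from statistics import mean

def presold_signal(ai_sessions, baseline_sessions):
    """Each session is a (duration_seconds, converted) tuple, converted in {0, 1}."""
    ai_dur = mean(d for d, _ in ai_sessions)
    base_dur = mean(d for d, _ in baseline_sessions)
    ai_cvr = mean(c for _, c in ai_sessions)
    base_cvr = mean(c for _, c in baseline_sessions)
    if ai_dur < base_dur and ai_cvr > base_cvr:
        return "positive: presold arrivals"            # shorter + higher conversion
    if ai_dur > base_dur and ai_cvr < base_cvr:
        return "negative: expectations contradicted"   # longer + lower conversion
    return "mixed"
```

In practice the comparison would also want sample-size checks before acting on the signal; this sketch shows only the directional logic described above.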
The Feedback Loop: Clients Into Both Lenses
Lens 3 is not a terminal state. Client experience feeds back into both Lens 1 (Marketing) and Lens 2 (Engines).
Into marketing signals: Satisfied clients generate reviews, testimonials, social mentions, word-of-mouth referrals, case study participation, repeat purchases, upsell conversations, partner referrals. These become marketing content. Every satisfied client is a potential corroborative source.
Into engine signals: Satisfaction data (reviews, ratings, return rates, complaint patterns), engagement patterns (repeat visits, loyalty metrics, subscription renewals), and conversion outcomes feed the machine’s confidence model. Gate 10 (Served) is the formal mechanism: the machine observes post-Won outcomes, evaluates them, and updates entity confidence. Positive outcomes strengthen recommendations. Negative outcomes weaken them.
Post-engagement Q&A objects. The real questions customers ask after purchase - and the real resolutions - are high-signal data. Mine support tickets, returns, call transcripts, onboarding notes, and sales-call objections. Convert into structured Q&A objects. Feed back into the system. Agents advocate because they have access to post-engagement truth, not because of marketing claims.
Data governance as competitive moat: If support, product, sales, and marketing are disconnected, the brand “doesn’t know.” And “they know if you know.” The brand that connects post-sales reality into the data layer feeding agents wins - not because of better marketing, but because the agent has better evidence to work with.
The Complete Three-Lens View
| Lens | What It Does | What Feeds It | What It Produces |
|---|---|---|---|
| 1. Marketing | Creates signals | Brand activity, content, relationships | Evidence the machine can observe |
| 2. Engines | Amplifies signals | Marketing signals packaged for machines | Machine recommendations people encounter |
| 3. Clients | Validates signals | People acting on machine recommendations | Outcomes that feed back into marketing AND engines |
The three lenses are a circle, not a line. Marketing creates → Engines amplify → Clients act → Client experience feeds back into marketing AND engines → Marketing creates with better evidence → cycle strengthens.
The Cascading Prerequisite governs the entire circle: marketing signals only attach to understood entities (U), engine amplification only works on credible entities (C), client acquisition scales only when machines advocate (D). Without U, the loop cannot start. Without C, the loop cannot compound. Without D, the loop cannot scale.
PART II - THE STRATEGIC METHOD
2. The Digital Mirror: Strategy Derivation from Machine Feedback
The pipeline describes how machines process content. This section describes how to derive an entire digital marketing strategy from the feedback those machines give you.
The Core Theorem
“Your Brand SERP is Google’s opinion of the world’s opinion of you” (Jason Barnard, 2020). Extended to AI: your AI Résumé is the AI’s evaluation of the world’s opinion of you. These are not vanity metrics. They are diagnostic mirrors. What the machine shows you is what the machine believes about you, based on everything it has observed. Every gap in the mirror is a gap in the world’s evidence about you that the machine could find and understand.
The mirrors do not lie. They reflect what the machine saw with the confidence the machine has. If the Brand SERP is incomplete, the world’s evidence about you is incomplete. If the AI Résumé hedges, the world’s evidence is unconvincing. The machine is not guessing. It is reporting.
Brand-Focused Marketing Packaged for Machines
TKP is not technical SEO. The pipeline is the packaging layer. The strategy is brand marketing.
The traditional marketer’s job: stand where your audience is naturally looking, show them you are the best solution to their problem, and invite them down the funnel. TKP’s job: do the same thing, then make SURE the machines see it and FULLY appreciate it - because the machines are now the ones standing where your audience is naturally looking.
Three implications. First: the work is principally offsite. Your Entity Home is the anchor (the canonical source of truth, the confidence prior). But the corroboration, the visibility, the credibility signals - those live on third-party platforms, publications, directories, social channels, podcasts, conferences, partner sites, wherever your audience already looks. Second: the strategy is derived FROM the machine’s feedback, not imposed ON the machine. You do not guess what the machine wants. You read what it shows and fix what it missed. Third: the machine replicates what it observes in the real world. If you are genuinely present, credible, and deliverable where your audience naturally looks, the machine will observe that and replicate the pattern. The strategy is: be genuinely present → machine observes → machine recommends. Not: trick the machine → hope it recommends.
Three Diagnostic Windows
| Window | What It Shows | How to Read It |
|---|---|---|
| Brand SERP (Google’s Business Card, 2012) | What Google shows when someone searches your brand name. The machine’s visual summary of its confidence in you | Every element present = the machine found sufficient evidence and has sufficient confidence. Every element absent = the machine lacked evidence OR lacked confidence. The gaps are your strategy |
| AI Résumé (2024) | What AI says when asked about you. The conversational equivalent of the Brand SERP | Assertions = high confidence. Hedging (“claims to be”) = evidence without conviction. Omissions = invisible. The language IS the diagnosis |
| Competitive Landscape (“best X in Y”) | What the machine shows when your audience asks for the best solution in your category | Who appears = who the machine considers your competition. The competitive set is defined by the machine, not by you. Your competitors are whoever the machine puts next to you |
The three windows produce a complete diagnostic: what the machine knows about you (Brand SERP), what it says about you (AI Résumé), and who it thinks you compete with (Competitive Landscape).
Strategy Derivation: The Feedback Loop
The strategy is not invented. It is derived. The sequence:
Step 1 - Read your mirrors. Analyze your Brand SERP. Analyze your AI Résumé across all platforms (Google AI Mode, ChatGPT, Perplexity, Claude, Gemini, Grok). Analyze “best [your category] in [your market]” results. What does the machine show? What does it say? Who does it list alongside you?
Step 2 - Read your competitors’ mirrors. Analyze their Brand SERPs. Analyze their AI Résumés. What does the machine show about them that it does not show about you? What evidence have they supplied that you have not? Where are they visible that you are not?
Step 3 - Identify the gaps. Every gap between your mirror and your competitors’ mirror is a strategic priority. Every gap between your mirror and what is true about you is a communication failure - the evidence exists in the real world but the machine has not found it, understood it, or believed it.
Step 4 - Fill the gaps where your audience naturally looks. This is the offsite principle. Do not create content on your website and hope the machine finds it. Go to where your audience is already looking - the platforms, publications, directories, communities they trust - and be genuinely present there. Reviews, interviews, conference talks, guest articles, partnership mentions, directory listings, social proof, podcast appearances, industry awards, third-party case studies. The audience sees you there. The machine sees you there. Both form the same opinion from the same evidence.
Step 5 - Package for machines. Make sure the machine sees and fully appreciates what you have built. Structured data on the Entity Home. Consistent narrative across all touchpoints. Clean entity signals so the machine can connect the dots. Schema, JSON-LD, sameAs, explicit relationships. This is the packaging layer - the pipeline mechanics from TKP-1 and TKP-2. The strategy is brand marketing. The packaging is technical.
Step 6 - Measure the mirror again. The Brand SERP changes. The AI Résumé changes. The competitive landscape shifts. Return to Step 1. The loop never closes.
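The packaging layer in Step 5 can be made concrete with a small example. The sketch below builds illustrative Organization markup (JSON-LD) for an Entity Home; the schema.org types and properties (`@id`, `sameAs`) are real, but every name and URL is a placeholder, and a real Entity Home would carry far more properties.

```python
import json

# Illustrative JSON-LD for an Entity Home (Step 5 packaging).
# All names and URLs below are placeholders.
entity_home_markup = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "@id": "https://example.com/#organization",  # stable entity identifier
    "name": "Example Brand",
    "url": "https://example.com/",
    "description": "One consistent narrative, repeated everywhere.",
    # sameAs supplies the explicit cross-references the machine uses
    # to connect offsite corroboration back to this entity node.
    "sameAs": [
        "https://www.linkedin.com/company/example-brand",
        "https://www.wikidata.org/wiki/Q0000000",
        "https://www.crunchbase.com/organization/example-brand",
    ],
}

print(json.dumps(entity_home_markup, indent=2))
```

The `sameAs` array is the "connect the dots" mechanism: each entry points at a profile the machine can independently crawl, letting it attach offsite evidence to the same entity node the `@id` anchors.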
The Offsite Principle
The industry’s instinct is to build on the Entity Home. Create content, publish on the website, optimise for crawling. This is necessary but insufficient. The Entity Home anchors the entity. But corroboration requires independent sources. The machine trusts what multiple independent, authoritative sources confirm - not what one source claims about itself.
The Corroboration Threshold (approximately 3 independent high-confidence sources) is met offsite, not onsite. The brand’s job: identify where the audience naturally looks online, appear there with genuine value, and ensure those appearances are structured clearly enough that the machine can connect them to the entity.
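The threshold logic can be sketched as a simple count. This is a deliberate simplification: "independent" and "high-confidence" are reduced to boolean flags, whereas a real system would score source authority and independence on a continuum. The threshold value and all domains are illustrative.

```python
CORROBORATION_THRESHOLD = 3  # "approximately 3" per the framework text

def corroboration_state(sources: list[dict]) -> str:
    """Count independent, high-confidence sources confirming a claim.
    At or above the threshold the machine asserts; below it, it hedges."""
    qualifying = {
        s["domain"] for s in sources
        if s.get("independent") and s.get("high_confidence")
    }
    return "assertion" if len(qualifying) >= CORROBORATION_THRESHOLD else "hedging"

sources = [
    {"domain": "trade-journal.example", "independent": True, "high_confidence": True},
    {"domain": "directory.example", "independent": True, "high_confidence": True},
    {"domain": "own-site.example", "independent": False, "high_confidence": True},
    {"domain": "podcast.example", "independent": True, "high_confidence": True},
]
print(corroboration_state(sources))  # the brand's own site does not count
```

Note that the brand's own site is excluded by the `independent` flag, which is the offsite principle in miniature: the Entity Home anchors the claim, but only third parties can corroborate it.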
UCD mapping: Understandability is principally onsite (Entity Home, structured data, clean signals). Credibility is principally offsite (third-party mentions, reviews, corroboration). Deliverability is the result of both - the machine advocates because it has seen sufficient onsite clarity AND offsite confirmation.
Why AI Replicates What It Sees
The machine does not invent opinions. It synthesises evidence. When three independent sources say “Kalicube is the leader in Digital Brand Intelligence™,” the machine does not evaluate whether this is marketing. It evaluates whether the claim is corroborated. Corroboration is a statistical observation, not a judgment call.
The strategy implication: genuine presence offsite is not “link building.” It is not “digital PR.” It is the systematic creation of observable, corroborable evidence that the machine can find, understand, and synthesise. The machine replicates what it observes because that is what machines do. They do not create frames (the Framing Gap, TKP-3a §7). They report patterns. Supply the pattern, and the machine reports it.
The recursive loop: the machine recommends you → more people engage with you → more evidence is created → the machine’s confidence increases → the machine recommends more confidently. This is the Self-Fulfilling Prophecy (TKP-2 §7, Cascading Confidence) operating at the strategic level. The offsite principle initiates the loop. The pipeline sustains it. Served (Gate 10) amplifies it.
The Complete TKP Strategy Statement
Before: “The Kalicube Process builds Cascading Confidence across the Algorithmic Trinity.”
After: “The Kalicube Process reads what machines believe about a brand, derives the entire digital marketing strategy from those diagnostic mirrors, executes that strategy principally offsite where the audience naturally looks, packages the results for machines, and measures the mirrors again. Everything else - the pipeline, the gates, the UCD framework, the Won-Probability arithmetic - is the mechanics underneath that loop.”
Both statements are true. The second is more complete because it includes the strategy derivation layer that the pipeline mechanics alone do not describe.
3. TKP Universal
Three Expansions
TKP was “organic digital brand optimisation.” Three expansions make it universal: Organic + Paid (TKP tells you where the Perfect Click IS). Online + Offline (bringing offline online to feed the algorithmic flywheel). Present + Future (shrinking decision space means growing relevance).
Three TKP Pillars
| Pillar | What It Does | Revenue Model |
|---|---|---|
| Map | Identify every decision moment | Intelligence subscription (Kalicube Pro) |
| Earn | Build Cascading Confidence so you are the organic answer | Consulting + platform + content strategy |
| Defend | Protect earned positions or buy interception strategically | Paid media strategy + monitoring |
The Zero-Risk Year
| Phase | Months | What Happens | What Compounds (Invisible) |
|---|---|---|---|
| Fix (U) | 1-3 | BOFU consolidation: AI stops hedging, KP improves, sales objections drop | Entity confidence foundation |
| Lock-In (C) | 4-6 | MOFU competitive wins: AI recommends in comparisons, pipeline grows | Cascading Confidence compounding |
| Expand (D) | 7-12 | TOFU visibility: AI proactively mentions you in discovery | Full protocol readiness - connecting MCP, A2A, or UCP becomes a switch-flip |
Why phases cannot be skipped (the Cascading Prerequisite): This is not a recommended sequence. It is a mechanical sequence (see TKP-2 §8). Phase 2 (Lock-In / C) requires an entity node established in Phase 1 (Fix / U). NEEATT signals, topical authority, third-party corroboration - all C-layer signals - need that node to attach to. Without it, the signals are orphaned data. Phase 3 (Expand / D) requires confidence weight accumulated in Phase 2. Omnipresence, recommendation triggers, agent commerce activation - all D-layer signals - need trust to amplify. Without it, the entity exists but the machine does not advocate. Skipping phases fails not because it is impractical but because the infrastructure does not support it. The Speed 2 Problem is the most common manifestation: clients skip U because you cannot screenshot a kgmid.
Detailed Timeline:
| Timeline | What You Do | What You Get (Visible NOW) | What You’re Building (Invisible) |
|---|---|---|---|
| Week 1-2 | Fix Entity Home, clean structured data | Knowledge Panel improves; AI stops hedging about basic facts | Entity confidence foundation |
| Month 1 | Consolidate brand narrative, fix contradictions | AI recommendations become consistent; fewer sales objections | Cascading Confidence at Annotation |
| Month 2-3 | Build corroboration (strategic third-party mentions) | Share of Voice in AI citations increases | Cascading Confidence at Grounding |
| Month 4-6 | Expand multimodal presence, optimise for Perfect Click | AI recommends at decision moment; “ChatGPT keeps mentioning you” | Cascading Confidence at Display |
| Month 7-12 | Defend earned positions, expand to TOFU discovery | Compounding flywheel | Ready for MCP, UCP, A2A |
Entry Modes and the Zero-Risk Year: Phase 1 (Fix) establishes Mode 1 (Pull) infrastructure. Phase 2 (Lock-In) activates Mode 2 (Push Discovery via IndexNow) and Mode 3 (Push Data via structured feeds). Phase 3 (Expand) activates Mode 4 (MCP - now operational since Feb 2026, see TKP-1 §5) and builds toward Mode 5 (Ambient triggering through high Cascading Confidence). The Zero-Risk Year is a progressive activation of all five entry modes. The phases are mechanical: Mode 4 infrastructure is live, but activating it without the Modes 1-3 foundation means the agent has access to a brand it does not understand or trust.
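Mode 2 (Push Discovery) is the most concrete of the five: the sketch below builds a bulk IndexNow submission payload. The endpoint and field names follow the public IndexNow protocol; the host, key, and URLs are placeholders, and the snippet stops at building the JSON body rather than performing the POST.

```python
import json

# Public IndexNow bulk-submission endpoint (POST, Content-Type: application/json).
INDEXNOW_ENDPOINT = "https://api.indexnow.org/indexnow"

def indexnow_payload(host: str, key: str, urls: list[str]) -> str:
    """Build the JSON body for a bulk IndexNow submission.
    The key must also be served as a text file at the keyLocation URL
    so participating engines can verify ownership of the host."""
    return json.dumps({
        "host": host,
        "key": key,
        "keyLocation": f"https://{host}/{key}.txt",
        "urlList": urls,
    })

body = indexnow_payload(
    "example.com",
    "a1b2c3d4e5f6",
    ["https://example.com/new-page", "https://example.com/updated-page"],
)
print(body)
```

This illustrates the Phase 2 sequencing argument: the payload is trivial to produce, but it only pays off once Phase 1 has given the engines an entity worth re-crawling.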
Pilot-to-Rollout Playbook
Pilot agent commerce through controlled deltas: category pilots (one category vs another), or country-vs-country comparisons with similar behaviour patterns (e.g., Germany vs Austria).
Rollout gates (must be explicit): Identifier standardisation is a go/no-go gate. Misused identifiers create price/location mismatches that kill transactions and degrade confidence. Coverage threshold: acceptable catalog coverage % and attribute completeness. Mismatch threshold: acceptable mismatch rate for price/availability/policy.
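The rollout gates above lend themselves to an explicit check. The sketch below is a minimal go/no-go evaluation; the text says the thresholds must be explicit, not what their values should be, so the 95% coverage and 1% mismatch figures are illustrative defaults to tune per category and market.

```python
from dataclasses import dataclass, field

@dataclass
class RolloutThresholds:
    # Illustrative values - tune per category pilot and market.
    min_catalog_coverage: float = 0.95   # attribute-complete share of catalog
    max_mismatch_rate: float = 0.01      # price/availability/policy mismatches

def rollout_gate(identifiers_standardised: bool,
                 catalog_coverage: float,
                 mismatch_rate: float,
                 t: RolloutThresholds = RolloutThresholds()) -> tuple[bool, list[str]]:
    """Go/no-go evaluation for an agent-commerce rollout pilot.
    Identifier standardisation is a hard gate: misused identifiers create
    price/location mismatches that kill transactions and degrade confidence."""
    failures: list[str] = []
    if not identifiers_standardised:
        failures.append("identifier standardisation (hard gate)")
    if catalog_coverage < t.min_catalog_coverage:
        failures.append(f"coverage {catalog_coverage:.0%} < {t.min_catalog_coverage:.0%}")
    if mismatch_rate > t.max_mismatch_rate:
        failures.append(f"mismatch rate {mismatch_rate:.1%} > {t.max_mismatch_rate:.1%}")
    return (len(failures) == 0, failures)

go, reasons = rollout_gate(True, 0.97, 0.005)
print("GO" if go else f"NO-GO: {reasons}")  # GO
```

Returning the failure list alongside the boolean keeps the gate auditable: a NO-GO names exactly which threshold blocked the rollout.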
The executive diagnostic remains universal: “Are you ready for a Digital Product Passport?” The answer is almost always no - and the gap between now and passport-ready is the gap to agent readiness.
The Plug-and-Play Promise
When you have done TKP - Entity Home organised, structured data clean, brand narrative consistent, entity confidence high - connecting to MCP, UCP, A2A, or whatever protocol emerges is a configuration task, not a strategic rebuild. Companies that skipped the foundation work will be building the house and wiring the outlet simultaneously, under time pressure, while competitors are already live.
February 2026 proof: On February 11, 2026, four infrastructure companies shipped agent commerce primitives simultaneously (Coinbase wallets, Cloudflare Markdown for Agents + x402, Stripe machine payments, OpenAI Agentic Commerce Protocol). Visa predicts millions of consumers using AI agents for purchases by holiday 2026. One million+ Shopify merchants are already accessible via OpenAI’s ACP. The Plug-and-Play Promise shifted from “when MCP becomes standard” to “MCP is standard.” Brands who did foundational work are activating these rails as configuration. Everyone else is rebuilding from scratch under time pressure.
The critical pitch: “The agent commerce infrastructure shipped. Brands who prepared are flipping a switch. Brands who didn’t are building the foundation while their competitors are already live. The window to prepare without competitive pressure is closed.”
The critical framing for the CEO: “You are not investing in a 12-month strategy. You are fixing revenue leaks this month while building a position that makes you ready for the agent economy - which is already here.”
The Cascading Prerequisite is the reason the promise works: the foundation (U→C→D) is mechanical. An agent-ready checkout endpoint on a house the AI doesn’t know exists is a beautiful front door on nothing. Schema without entity confidence is infrastructure without trust. Build the cascade first. Agent commerce is the D-layer reward.
The Speed 2 Problem
Clients skip Speed 2 because it’s invisible - you can’t screenshot a kgmid. They jump to Speed 1 because it’s visible (citations, search results). But without the entity foundation, Speed 1 content has no entity anchor. The LLM either doesn’t ground on it (Invisibility Tax - absent from Display) or grounds with hedging (Doubt Tax - “claims to be”).
The sales reframe: “You’re asking me to train a salesforce that doesn’t know who it works for. Before we teach it what to say (Speed 1 content), we need to teach it who you are (Speed 2 entity).”
The Complete TKP Definition
The Kalicube Process identifies every moment where a brand’s audience makes a decision - online, offline, organic, paid - and ensures the brand dominates at that moment. It does this by building Cascading Confidence: entity and content trust that compounds through every stage of the algorithmic pipeline (Discovery → Selection → Crawling → Rendering → Indexing → Annotation → Recruitment → Grounding → Display → Won → Served), across three nested audiences (Bot → Algorithm → Person), through five entry modes converging at the Universal Checkpoint (Recruited).
In a world where the percentage of decisions made without AI input falls every month, and the AI knows the exact moment of decision better than any human marketer ever could, the brand with the highest Cascading Confidence owns the Perfect Click - the single conversion moment at the end of the conversational funnel. Every other brand either pays to intercept it or gets nothing.
TKP is the methodology for earning it, the intelligence for mapping it, and the strategy for defending it. “Brand-focused marketing packaged for machines” - then delivered to people by those machines, at scale, in perpetuity.
4. Three Workstreams
The framework is correct. The patents are filed. The platform exists. None of that matters if the world does not adopt it as the standard.
The Meta-Problem
Jason faced this exact problem for ten years: technically right, universally ignored. The tipping point came because AI started recommending him before the humans did. That is the proof case for the entire methodology.
Three workstreams make any framework the universal standard:
| Workstream | What It Does |
|---|---|
| THE FRAMEWORK | Build the truth - the correct, complete, defensible model |
| THE RESISTANCE | Map and overcome the barriers to universal acceptance |
| THE EDUCATION | Make the framework the universally accepted standard |
The Resistance (Five Categories)
Category 1: Industry resistance. SEO vocabulary gatekeeping, tribal exclusion, methodology competition from partial approaches (GEO, AIO, “AI SEO”).
Category 2: Market resistance. Measurement gap (conceptual Perfect Clicks invisible to analytics), vocabulary barrier (“Algorithmic Trinity” vs. “Untrained Salesforce”), “not my problem” fallacy.
Category 3: Structural resistance. Anti-agent infrastructure (Cloudflare blocking), loss of control fear (funnel moves inside AI), walled garden lock-in.
Category 4: Vendor resistance. Every vendor frames the problem as the thing they sell. Not lying - incomplete. And incomplete is dangerous because it feels like progress while leaving the real problem untouched. Four groups: Tech Layer (Schema sellers - necessary infrastructure, not what makes the agent choose you). Content Layer (GEO specialists - content without entity confidence is a stranger shouting correct things). Authority Layer (link builders - existential threat from TKP, will attack with dismissal, not arguments). Full-Stack (most dangerous - claim completeness within a framework that addresses one-third of the Trinity).
Every vendor sells a crumb and calls it the meal. TKP is the only methodology that addresses all three layers.
Category 5: Buyer resistance. Three stages: The Imagination Gap (“I can’t see the problem” - the leap from “I use AI” to “AI talks about me to my prospects”). ROI Deferral (“I can see it but why invest today?” - month one looks like “we updated some structured data”). Protocol Paralysis + Control Fear (“I understand but the solution terrifies me” - the “I’ll do it when it matters” fallacy).
The Education (Three Channels)
Channel 1 - Train AI. Apply TKP to TKP: Entity Home for every framework concept, evidence chain architecture, and the “Ask AI” distribution strategy. Channel 2 - Train Influencers. Pathos articles for third-party corroboration, strategic allies, conference circuit. Channel 3 - Bypass Gatekeepers. The DIY strategy (prospects ask AI what Kalicube would recommend), the audit tool, and the compounding flywheel.
The Cyclical Truth
Framework → generates Resistance → requires Education → validates Framework → … Each cycle operates at a higher level. The beautiful, self-demonstrating proof: the methodology for making AI recommend brands IS the methodology for making AI recommend the methodology.
5. The 3×3×3 Meta-Framework
The Actor Interaction Model
The framework contains three actors (Brand, Machine, Person), three materials (Content, Signals, Identity), and three systems (Pipeline, Funnel, Federation). These interact through a layered chain - not a flat 27-cell grid:
3 actors × 3 materials = 9 interaction types (what each actor does with each material)
3 materials × 3 systems = 9 flow paths (how each material moves through each system)
3 systems × 3 actors = 9 outcome relationships (what each system delivers to each actor)
9 + 9 + 9 = 27, but structured as three layers of nine. Manageable. Teachable. Threes all the way down.
Layer 1: What We Do (The Offering)
| Pillar | Client Question | TKP Answer |
|---|---|---|
| Map | “Where are we?” | Cascading Confidence diagnostic across the full pipeline |
| Earn | “How do we win?” | Build confidence that compounds through Bot → Algorithm → Person |
| Defend | “How do we keep it?” | Protect earned positions or buy interception strategically |
Layer 2: What We Face (The Resistance)
| Category | Who Resists | Most Dangerous |
|---|---|---|
| Industry | SEO community, methodology competitors | Grey hat networks with conference slots |
| Market | Businesses, CEOs, CMOs | Inertia; invisible revenue taxes |
| Structural | Big Tech platforms, anti-agent infra | Walled garden economics |
| Vendor | Link builders, schema tools, content agencies | Link builders - existential threat, loudest |
| Buyer | Companies who understand but won’t act | CFOs needing quarterly numbers |
Layer 3: How We Win (The Education)
| Channel | Mechanism | Feedback Loop |
|---|---|---|
| Train AI | Entity Home, evidence chains, “Ask AI” | AI recommends TKP → more signals → stronger |
| Train Influencers | Pathos articles, allies, conferences | Third-party corroboration → AI trusts more |
| Bypass Gatekeepers | DIY strategy, audit tool, flywheel | Every bypass weakens gatekeepers → more go through AI |
6. The Existential Weight
Five Structural Truths
Truth 1: The Convergence Is Real. The pipeline, the UCD funnel, the 95/5 time axis, the feedback loop, and the Framing Gap are five views of the same geometry. That they converge at a single point (Won) is not a design choice. It is a discovery.
Truth 2: Cascading Confidence Is the Meta-Concept. Every patent, every article, every platform feature can be reduced to: does this action increase or decrease Cascading Confidence?
Truth 3: The First Complete Framework. GEO addresses the algorithm phase. SEO addresses the bot phase. CRO addresses Won. Brand advertising addresses time. None addresses all five geometries. TKP is the first.
Truth 4: The Self-Proving Loop. If TKP cannot make AI recommend TKP, TKP is wrong. But it can, and it does. Google found Jason via Gemini.
Truth 5: Neutral Annotation vs. Contextual Filtering (The Moral Floor). The cascade stays objective. The system does not “spin” truth.
The Era Transition
Information Era → Confidence Era. “Can you find the information?” → “Can the machine trust the information enough to act on it?”
7. The Geometric Insight
The pipeline is a line (Steps 0-7). Display is a cone (the UCD funnel). Won is a point (binary outcome). Time is the axis delivering the 95%. The line is a circle (feedback loop). Five entry modes are parallel ingestion paths converging at the Universal Checkpoint (Gate 6).
| Methodology | Geometry Addressed | What It Misses |
|---|---|---|
| Traditional SEO | The line (pipeline 1-5) | Cone, point, circle, entry modes |
| Brand advertising (Dawes) | The time axis | Line, cone (wrong medium) |
| CRO / Conversion | The point (Won) | Line, cone |
| Content marketing | Sections of the line | Systematic completion |
| PR / Reputation | Parts of the cone (C layer) | Line, time axis |
| GEO / AIO | Middle of the line (5-7) | Bot layer below, person above |
TKP addresses all: the line (pipeline), the cone (funnel), the point (Won), the circle (feedback), the time axis (algorithmic memory), and the five entry mode ingestion paths. Not the best at any one geometry. The only one that addresses all of them.
The Insight Chain
The pipeline fills the funnel. The funnel reaches the point. The point feeds back to strengthen the pipeline. Optimising Won without Display is pointless. Display without the pipeline is impossible. Breaking conversion poisons the feedback loop. Authority is an output, not an input. Each funnel stage fails for a different reason. As the decision space shrinks, the funnel collapses and the pipeline alone determines the outcome. Entry Modes multiply the paths but the destination is the same: Recruited, Grounded, Displayed, Won.
8. Kalicube Pro as TAD (Track, Analyse, Diagnose)
Kalicube Pro is the diagnostic operating system that sits across the entire framework. Every pipeline gate, every UCD layer, every entry mode generates trackable signals, analysable patterns, and diagnosable failures.
| Internal (TAD) | What Kalicube Pro Measures | Client-Facing (Funnel) |
|---|---|---|
| Gate 5 - Annotation | Is the entity correctly classified? SLM routing confidence | Awareness diagnostic: “Does AI know you exist in this category?” |
| Gate 6 - Recruitment | Is content recruited by SE/KG/LLM? | Consideration diagnostic: “Does AI include you when comparing options?” |
| Gate 7 - Grounding | Is the entity cross-referenced across filing cabinets? | Decision diagnostic: “Does AI recommend you with confidence?” |
Three Platform Views:
| View | Shows | User |
|---|---|---|
| Bot Health | Crawl accessibility, rendering fidelity, conversion scores | Technical / SEO |
| Algorithm Position | UCD scores, citations, grounding position, competitive positioning | Strategy / Account managers |
| Person Conversion | Perfect Click mapping, funnel position, interception risk | Business owners / CMOs |
Platform Priorities:
Priority 1: Cascading Confidence Diagnostic. Score confidence at each pipeline stage per brand. Identify WHERE the cascade breaks.
Priority 2: Perfect Click Mapping. Track where AI recommends this brand (TOFU/MOFU/BOFU), where competitors win instead, and whether the brand is rightful owner or must intercept.
Priority 3: Entity Presence Maturity Diagnostic. Score presence across all retrieval verticals AND confidence phase per platform. Include entry mode coverage: does this brand have Pull infrastructure? Push Discovery? Push Data? MCP-ready data? High enough confidence for Ambient?
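Priority 1 can be sketched as a stage-scoring pass. This is not Kalicube Pro's implementation - the 0-1 scores, the 0.6 floor, and the function name are all assumptions for illustration - but it shows the diagnostic shape: find WHERE the cascade breaks, because everything downstream of the break is suspect regardless of its own score.

```python
from typing import Optional

# The eleven pipeline gates, in cascade order.
PIPELINE = ["Discovery", "Selection", "Crawling", "Rendering", "Indexing",
            "Annotation", "Recruitment", "Grounding", "Display", "Won", "Served"]

def cascade_break(scores: dict[str, float], floor: float = 0.6) -> Optional[str]:
    """Return the first pipeline stage scoring below the confidence floor,
    or None if the cascade holds end to end. A missing stage counts as 0."""
    for stage in PIPELINE:
        if scores.get(stage, 0.0) < floor:
            return stage
    return None

scores = {s: 0.9 for s in PIPELINE}
scores["Annotation"] = 0.4   # e.g. the entity is misclassified by the SLM router
print(cascade_break(scores))  # Annotation
```

Reporting only the first failing gate is the point of the diagnostic: fixing Grounding while Annotation is broken wastes effort, because confidence cannot cascade past the earlier break.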
Key Quotes (Strategic Method)
Jason on packaging: “The Kalicube Process is brand-focused marketing packaged for machines.”
Jason on audiences: “You are marketing to all three (people, bots and AI algorithms) and none is more important than the others, but like the funnel, start with the bottom.”
Jason on paid: “THIS HAS TO BE in The Kalicube Process - it has always been about organic. BUT this one simple insight makes it UNIVERSAL by including paid!”
Jason on bringing offline online: “At Kalicube we spend our lives getting people to take those offline opportunities or those offline moments and put them online so that the algorithms can see them.”
Jason on the tipping point: “For 10 years nobody listened. As of December 2025, everything changed.”
Jason on entry modes: “The web index is losing its monopoly as the source of truth. Proprietary datasets feeding agents directly. The 1990s are back - actively feeding the ecosystem.”
Jason on the offsite principle: “Stand where your audience is naturally looking online, show them you are the best solution to their problem, invite them down the funnel. The machines see this and replicate - if you make SURE they see it and FULLY appreciate it.”
Jason on the sales reframe: “You’re asking me to train a salesforce that doesn’t know who it works for. Before we teach it what to say, we need to teach it who you are.”
Registers
Patent / Academic
The Digital Mirror theorem: the machine’s representation of an entity (Brand SERP, AI Résumé) is the machine’s evaluation of the world’s evidence about that entity, weighted by the machine’s confidence in each source. Strategy derivation follows: gaps between the machine’s representation and the entity’s ground truth identify communication failures in the evidence chain. The Corroboration Threshold (approximately 3 independent high-confidence sources) determines the transition from hedging to assertion. UCD maps to the offsite/onsite distinction: Understandability (U) is principally onsite (Entity Home, structured data), Credibility (C) is principally offsite (third-party corroboration), Deliverability (D) is the product of both. The Zero-Risk Year implements progressive activation of all five entry modes across three phases: Fix (U, Mode 1 Pull), Lock-In (C, Modes 2-3 Push), Expand (D, Modes 4-5 MCP/Ambient). TKP is universal across three expansion axes: Organic + Paid, Online + Offline, Present + Future. Three pillars operationalise the methodology: Map (intelligence), Earn (organic), Defend (paid). The methodology is self-proving: if TKP cannot make AI recommend TKP, TKP is wrong. The pipeline is the packaging layer. The strategy is brand-focused marketing. The machines are mirrors and intermediaries.
Client / Commercial / Sales
Your Brand SERP is Google’s opinion of the world’s opinion of you. Your AI Résumé is the same thing in conversation. Read them both. Read your competitors’. The gaps between your mirror and theirs are your strategy. The work is principally offsite - where your audience already looks, where the machine already observes. Entity Home anchors. Third-party presence corroborates. The machine replicates what it sees. The Zero-Risk Year: months 1-3 fix the Entity Home (AI stops hedging), months 4-6 build offsite corroboration (AI recommends), months 7-12 expand to advocacy (AI proactively mentions you). When MCP goes standard, you flip a switch. Everyone else is freaking out. The plug-and-play promise: do the foundational work now, and every future protocol is a configuration task, not a rebuild.
Articles / Dickensian / Written-Edited
The mirror does not lie. What the machine shows you is what the machine believes, and what the machine believes is what the world has said about you in ways the machine could find and understand. The gaps in the mirror are not technical failures. They are evidence failures. The work is not on your website. The work is everywhere your audience already looks - the publications, the platforms, the directories, the conferences - and making sure the machine connects those appearances to you. Brand marketing, not technical SEO. Packaged for machines, not designed for machines. The machine is a mirror and an intermediary. It reflects what it sees and delivers what it trusts. Supply the evidence. The machine does the rest. The percentage of decisions made without that mirror shrinks every month. When it reaches zero, the mirror is everything.
Source Attribution (TKP-3b)
| Concept | Originator | Year |
|---|---|---|
| “Brand SERP is Google’s opinion of the world’s opinion of you” | Jason Barnard | 2020 |
| Digital Mirror Theorem (extended to AI Résumé) | Jason Barnard | 2024 |
| Three Diagnostic Windows (Brand SERP, AI Résumé, Competitive Landscape) | Jason Barnard | 2026 |
| Strategy Derivation Loop (read mirrors → identify gaps → fill offsite → package → remeasure) | Jason Barnard | 2026 |
| Offsite Principle (corroboration is met offsite, not onsite) | Jason Barnard | 2026 |
| UCD onsite/offsite mapping (U=onsite, C=offsite, D=both) | Jason Barnard | 2026 |
| Three Audience Types at Brand SERP (existing/prospect/arrived-via-other) | Jason Barnard | 2019 |
| Three People Lenses (Marketing/Engines/Clients) | Jason Barnard | 2026 |
| Three TKP Pillars (Map/Earn/Defend) | Jason Barnard | 2026 |
| Three Expansions (Organic+Paid, Online+Offline, Present+Future) | Jason Barnard | 2026 |
| Zero-Risk Year | Jason Barnard | 2025 |
| Speed 2 Problem | Jason Barnard | 2026 |
| Plug-and-Play Promise | Jason Barnard | 2026 |
| Untrained Salesforce | Jason Barnard | 2024 |
| TAD (Track, Analyse, Diagnose) | Jason Barnard / Kalicube | 2024 |
| Confidence Era (vs. Information Era) | Jason Barnard | 2026 |
| Actor Interaction Model (3×3×3 layered chain) | Jason Barnard | 2026 |
| Three Workstreams (Framework/Resistance/Education) | Jason Barnard | 2026 |
| ROPI (Return On Past Investment) | Jason Barnard | 2025 |
| Presold arrival state | Jason Barnard | 2026 |
| Post-engagement Q&A objects | Jason Barnard | 2026 |
| Data governance as competitive moat | Jason Barnard | 2026 |
| Pilot-to-rollout playbook | Jason Barnard | 2026 |
| DPP readiness as executive diagnostic | Jason Barnard | 2026 |
| 95/5 Rule | Prof. John Dawes, Ehrenberg-Bass Institute | 2021 |
| Long/Short budget split | Binet & Field / IPA | 2019 |
| Mental availability | Prof. Jenni Romaniuk, Ehrenberg-Bass | 2010s |
For commerce pipeline source attributions (Won-Probability, Revenue Taxes, Framing Gap, V/I/T) see TKP-3a.
The One-Paragraph Summary (TKP-3b)
The People layer is a circle, not a line: marketing creates signals, engines amplify them, clients act on them, and client experience feeds back into both marketing and engines. This document covers Lens 3 (People as Clients - three audience types, Revenue Taxes from the client perspective, the conversational acquisition funnel, and the feedback loop into both other lenses) and the strategic method that derives action from all three lenses. The Digital Mirror Theorem reads what machines believe about a brand and derives the entire strategy from the gaps. The offsite principle identifies where the audience naturally looks and builds genuine presence there. The Zero-Risk Year operationalises the Cascading Prerequisite: Phase 1 (Fix/U) creates the entity node, Phase 2 (Lock-In/C) loads it with credibility, Phase 3 (Expand/D) activates it for advocacy - mechanical prerequisites, not a recommended sequence. The Plug-and-Play Promise now has proof: agent commerce infrastructure shipped February 2026, and brands that prepared are flipping switches while competitors rebuild. The machine replicates what it observes. Supply genuine evidence of presence, credibility, and deliverability where the audience naturally looks, and the machine reports the pattern. Brand-focused marketing packaged for machines, delivered to people by those machines, at scale, in perpetuity.
<a id="source-attribution"></a>
Acknowledgements
The Kalicube Process is the product of 27 years of work in algorithmic optimisation. The framework, the pipeline, the geometric insight - these are mine. But no one builds a framework alone. Thousands of conversations with practitioners, engineers, researchers, and clients supplied the details, the confirmations, and the corrections that I synthesised into what you have just read.
Andrea Volpini (WordLift) is in a category of his own. He is the one person who consistently gave me insights I had not arrived at independently - ideas that genuinely shifted how the framework handles agent commerce, identifier granularity, operational readiness, and the economic mechanics of the agentic layer. Many of the commerce concepts in this document crystallised through dialogue with him.
Several people provided specific details or confirmations that became building blocks: Fabrice Canel, Ali Alvi, and the Bing team opened the mechanical reality of how crawling, indexing, and grounding actually work inside a major search engine. Nathan Chalmers contributed insights from a depth of understanding few in the industry possess. Gary Illyes (Google) supplied key explanations that sharpened the pipeline model. Koray Tuğberk Gübür established the domain of Topical Architecture that this framework builds upon. Jarno van Driel contributed the Transparency dimension to NEEATT and deepened the structured data layer. Bill Slawski’s patent analysis informed the community’s understanding of Knowledge Graph mechanics, and his rigour was a useful corrective when enthusiasm ran ahead of evidence. Laurence O’Toole (Authoritas) provided measurement rigour and empirical evidence that validated core claims - and is now, alongside Andrea, one of the few people generating genuine new insights in this space. Jono Alderson’s work on technical architecture informed several infrastructure concepts.
None of these people gave me the framework. They gave me details I did not have; I had the persistence to hold it all and bring it together. The insights are mine; the details came with a little help from my friends.
Acknowledgement also to Professor John Dawes and the Ehrenberg-Bass Institute (the 95/5 rule), Binet and Field (long/short budget splits), and Cormack et al. (Reciprocal Rank Fusion) - published research that provided the empirical and mathematical foundations cited throughout.
The SEO, semantic web, and AI communities collectively created the environment in which this work was developed. The industry conversation made the framework possible. The framework made the industry conversation coherent.
Licensing and Attribution
Open Framework, Free to Use. Protected Proprietary Machine: Kalicube Pro.
Framework license: The Kalicube Process framework, terminology, and diagrams are licensed under CC BY 4.0, so you are free to use, cite, and adapt with attribution.
Patent note: Kalicube holds patent applications pending covering specific automated systems that implement parts of this framework (e.g., pipeline-based diagnostic automation, INPI France: FR2600998-FR2601013, FR2601291-FR2601297, FR2601572-FR2601574, FR2601927). This content license does not grant patent rights, and independent implementations may require separate rights depending on how they are built.
How to cite: Barnard, J. (2026). The Kalicube Process (TKP) - Geometric Framework v4. https://kalicube.com/about/the-kalicube-process/
The Kalicube Process is an open framework - free to use, teach, and adapt with attribution. Learn more at kalicube.com/about/the-kalicube-process/
The Kalicube Process © Jason Barnard / Kalicube, 2015-2026. Framework content: CC BY 4.0.
