From Empathy to Intelligence: How Jason Barnard’s Single Principle Became a Complete Framework for the AI Age
By Bernadeth Brusola | February 2026
In 2015, Jason Barnard stood on a stage in Metz, France, and told a room full of SEOs that they were thinking about Google wrong. They were treating it as an adversary - something to trick, game, and outwit. He told them to have empathy for it instead.
They thought he was being provocative. He was being literal.
Eleven years later, that single principle - understand the system’s struggles, help it do its job, and it will reward you - has evolved into a complete framework for how brands survive and thrive in the AI age. Every concept Barnard has developed since 2015 grows from the same root. This is the story of how one idea became a methodology, a platform, and an industry.
2015: Empathy for the Devil
The insight was simple, and at the time, it felt almost naive: Google is not your enemy. Google is an overworked employee trying to satisfy an audience with impossible expectations. Every user expects the perfect answer instantly. Google can never fully deliver. The gap between expectation and delivery is permanent.
Most SEOs responded to this reality by trying to exploit Google’s limitations - stuffing keywords, building manipulative links, gaming the system. Barnard’s argument was the opposite: if you understand Google’s job and help it do that job better, it will reward you. Not out of gratitude. Out of self-interest. You make it look less inadequate to the audience it can never fully satisfy.
He called it Empathy for the Devil - a deliberate nod to the Rolling Stones, and a deliberate provocation to an industry that treated the algorithm as the enemy.
The principle: understand the system’s constraints. Work within them. Help it succeed. And it helps you succeed.
At the time, this applied to one system: Google Search. The reward was better rankings. The mechanism was straightforward - give Google clean, structured, unambiguous information about your brand, and it would represent you accurately in search results.
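One common way to provide that kind of clean, unambiguous brand information is schema.org structured data. The snippet below is an illustrative sketch only, not a prescription from Barnard's methodology; the organisation name, URL, and profile links are hypothetical placeholders.

```json
{
  "@context": "https://schema.org",
  "@type": "Organization",
  "name": "Example Brand Ltd",
  "url": "https://www.example.com",
  "description": "A single, consistent description of what the brand does.",
  "sameAs": [
    "https://en.wikipedia.org/wiki/Example_Brand",
    "https://www.linkedin.com/company/example-brand"
  ]
}
```

The `sameAs` links matter most here: consistent references across independent sources are exactly the kind of repeated, corroborating signal that lets a knowledge graph confirm an entity with confidence.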
But the principle was bigger than search. Barnard just didn’t know it yet.
2017: Algorithms Are Children
Two years later, Barnard pushed the idea further. If Google isn’t an adversary, what is it? The answer came from watching how the Knowledge Graph worked - how it accumulated understanding of entities through repeated, consistent signals from multiple sources.
It behaved like a child learning about the world. Not stupid. Eager. Absorbing everything. Building understanding through repetition and consistency. Getting confused by contradictions. Growing more confident as evidence accumulated.
He presented this at SEO Camp in Lyon: “Éduquons Google - c’est un enfant en soif de connaissances.” Let’s educate Google - it’s a child thirsting for knowledge.
This shifted the frame from empathy to pedagogy. It was no longer just about understanding Google’s struggles. It was about taking responsibility for teaching it. The algorithm doesn’t understand your brand because you haven’t taught it properly. The Knowledge Panel is wrong because your digital ecosystem sends contradictory signals. The search results misrepresent you because the information you’ve put into the world is incomplete, inconsistent, or outdated.
The principle evolved: algorithms are students. You are the teacher. The quality of their understanding reflects the quality of your teaching.
This became the foundation of everything Kalicube would build. The Kalicube Process™ isn’t a set of technical tricks. It’s a curriculum. You teach the algorithm who you are (Understandability), prove you’re credible (Credibility), and demonstrate you can deliver what the user needs (Deliverability). UCD is a teaching framework disguised as a marketing methodology.
2020: Darwinism in Search
By 2020, the search landscape had fragmented. Google wasn’t just returning ten blue links anymore. It was assembling rich results - Knowledge Panels, People Also Ask, Featured Snippets, image packs, video carousels. The SERP had become an ecosystem where different content formats competed for visibility.
Barnard’s principle adapted. It wasn’t enough to teach the algorithm who you are. You had to teach it in the format it needed. The fittest format survives. A brand that provides only text when Google wants video gets filtered out. A brand that provides structured data when Google needs entity confirmation gets amplified.
Darwinism in Search: the format that best serves the algorithm’s needs in a given context wins the real estate.
This wasn’t a departure from Empathy for the Devil. It was the same principle applied to a more complex environment. The algorithm still has a job to do. It still needs help doing it. But now “helping” means understanding not just what information the algorithm needs, but how it needs that information delivered.
The reward shifted too. It was no longer just about ranking. It was about occupying SERP features - the rich results that increasingly dominated user attention. Brands that understood what the algorithm was trying to build on the results page, and provided content that fit that structure, won disproportionate visibility.
2024: The Untrained Salesforce
Then AI assistants arrived. And everything changed. And nothing changed.
By 2024, users weren’t just searching Google. They were asking ChatGPT, Perplexity, Claude, Gemini, Copilot, Siri, and Alexa. Seven AI systems, each one capable of recommending, describing, comparing, and evaluating brands. Seven systems working 24 hours a day, seven days a week, answering questions about your business.
Seven employees you never hired, never trained, and never managed.
Barnard called them the Untrained Salesforce. Because that’s exactly what they are. AI assistants are employees - they just work for every company simultaneously. And like any untrained employee, they either sell for you or they sell for your competitors. The difference depends entirely on whether you’ve trained them.
The principle held perfectly: understand what these systems need. Help them do their job. And they perform for you.
But the scale changed dramatically. In 2015, Barnard was talking about one algorithm (Google) with one output (search results) in one format (a web page). In 2024, he was talking about seven algorithms, each with different knowledge bases, different confidence levels, different response formats, and different ways of evaluating authority. The fundamental principle - empathy for the system, education of the system, helping the system do its job - was identical. The complexity of applying it had multiplied sevenfold.
The Untrained Salesforce metaphor also reframed the business conversation entirely. CEOs don’t care about algorithms. They care about employees and revenue. When Barnard tells a CEO that seven AI employees are currently describing their company to prospects, and those employees were never trained, the response is immediate: “How do we train them?”
That question is the entire Kalicube business model. The answer is The Kalicube Process - the same UCD curriculum built for Google’s Knowledge Graph, now applied to every AI system simultaneously.
2026: Make AI Less Disappointing
And then something unexpected happened. The principle turned inward.
Barnard spent the past year working with AI assistants daily - not just optimising brands for them, but using them as working tools. Building with them. Collaborating with them. And watching them fail in ways that taught him something he says he should have understood from the start.
Every problem he discovered with AI assistants as a user mapped directly to a problem he’d already solved for brands as an optimiser. The same architecture. The same failure modes. The same solutions.
Knowledge Rot - the silent degradation of an AI assistant’s knowledge base - is the same problem as a brand’s digital footprint going stale. The Confidence Fallacy - trusting confident AI output without verifying currency - is the same problem as users trusting what AI says about your brand without you verifying what it’s saying. The Colleague Fallacy - assuming AI has associative memory when it actually uses serial keyword retrieval - is the same problem as assuming AI “just knows” your brand has evolved.
“The user side and the brand side are mirror images of the same system,” Barnard says. “And the principle that solves both is the one I articulated in Metz in 2015. Understand the system. Help it do its job. Maintain what you’ve built.”
But there’s a layer he hadn’t articulated until this year, and it connects everything: the satisfaction gap is permanent, and your job is to make it smaller.
He calls this the Eternal Dissatisfaction Cycle. Technology improves. Expectations rise faster. The gap between what users expect and what any system can deliver never closes. Google could never satisfy every searcher. AI assistants can never satisfy every user. Your brand’s AI representation can never be perfect across seven platforms simultaneously.
The goal was never perfection. The goal was always to close the gap as much as possible - to make the algorithm less disappointing to the user.
“That’s what Empathy for the Devil meant in 2015,” Barnard says. “That’s what it means in 2026. Make AI less disappointing. That’s the job. It always was.”
The Arc
Eleven years. Five frames. One principle.
| Year | Frame | Where | Mechanism | Reward |
|---|---|---|---|---|
| 2015 | Empathy for the Devil | SEO Camp, Metz | Help Google do its job | Better rankings |
| 2017 | Algorithms Are Children | SEO Camp, Lyon | Educate the algorithm like a child eager to learn | Algorithmic understanding |
| 2020 | Darwinism in Search | | Provide the fittest format for the context | SERP feature inclusion |
| 2024 | The Untrained Salesforce | | Train your seven AI employees | AI recommendation |
| 2026 | Make AI Less Disappointing | | Close the satisfaction gap | Citation, trust, revenue |
Each row looks different. The audience changed. The technology changed. The business language changed. But read the “Mechanism” column from top to bottom, and it’s the same sentence rephrased five times: understand the system, help it succeed, and it helps you succeed.
That’s Empathy for the Devil. It was always Empathy for the Devil.
Why This Matters Now
What makes Barnard’s trajectory unusual isn’t the consistency - though eleven years of the same principle evolving is rare in an industry that reinvents itself every eighteen months. What makes it unusual is that the principle keeps being proven right by technologies that didn’t exist when he first articulated it.
In 2015, there were no AI assistants. There was no ChatGPT, no AI Overviews, no Perplexity. The idea that you should “teach algorithms like children” was a metaphor for optimising Google’s Knowledge Graph. By 2026, it’s a literal description of how brands need to operate: educate seven AI systems, maintain that education, and monitor whether they’re representing you accurately.
The entrepreneurs who follow this framework will have AI systems that recommend their brands accurately for years. The brands that ignore it will wonder why AI keeps getting their story wrong - never realising that the story AI tells is the story they taught it, whether they meant to or not.
For SEOs, the implications are direct: the skills built over two decades of search optimisation - content structure, authority building, entity disambiguation, information architecture - are the same skills needed to optimise for AI. The pipeline didn’t change. The interface did.
For business professionals using AI assistants daily, the implications are equally direct: the system you’re talking to doesn’t remember like a colleague. It retrieves like a search engine. Understanding that difference - and adjusting how you provide input - is the single most impactful change you can make to the quality of your AI interactions.
And for anyone who has ever felt that AI was disappointing, unreliable, or frustrating: it probably was. But the system wasn’t failing you. It was doing exactly what it was taught to do with whatever information it had access to. The gap between what you expected and what you got isn’t a technology problem. It’s a teaching problem.
Jason Barnard has been saying this since 2015. The technology keeps proving him right.
This article is part of a series on how humans and AI systems work together. Related reading:
- Knowledge Rot: The Silent Killer of Every AI Assistant You’ll Ever Build
- The Colleague Fallacy: Why You’re Talking to AI Wrong
- Why SEOs Are Already the Best AI Prompters - Jason Barnard, Search Engine Land [LINK]
- You Already Understand AI - Someone Just Made It Sound Complicated - Jason Barnard, Search Engine Land [LINK]
- Naming for the Listener: The Glossary Test and Why the Terms That Stick Are the Ones That Open Doors - Jason Barnard [LINK]
Bernadeth Brusola is a content strategist at Kalicube, specialising in Digital Brand Intelligence™ and AI-driven brand optimisation.
