The Revenue Signal — Issue 04

Two AEOs. One stack.
Alan Zhao is the co-founder of Warmly, a B2B SaaS company in visitor identification and intent data. In March 2026, he opened ChatGPT and asked it to recommend the best website visitor identification tools.
His own product was not mentioned. Not in the top five. Not in the "also consider" section. Nowhere.
He ran the same test on ChatGPT, Perplexity, Gemini, Claude, and Copilot, using twelve phrases actual buyers type. Warmly showed up in six of the twelve. Invisible for the other half.
Thousands of paying customers, real revenue, the exact category buyers were asking about. The fastest-growing research channel on the internet had no idea the company existed. Zhao documented it in March 2026.
What he learned in the next sixty days is worth sharing. The bigger lesson is why his company was invisible in the first place, and it is not the one most B2B marketing teams are working on.
This issue covers the two disciplines hiding behind the same three letters and why working on only one leaves pipeline on the table. The Signal is the split. The Build is what Warmly changed. The Move is a full two-layer audit with every instruction you need to run it. Reading time, about 14 minutes.
The Signal

AEO now refers to two different problems. Most B2B teams are solving one or neither.
For most of 2025, AEO meant Answer Engine Optimization — being named in AI chatbot responses when buyers ask for vendors. That problem is growing.
G2 surveyed 1,076 B2B software buyers in March 2026. Fifty-one percent now start research with an AI chatbot more often than Google, up from twenty-nine percent seven months earlier. Almost doubled. Sixty-nine percent chose a different vendor than originally planned because an AI chatbot recommended it. One in three bought from a company they had never heard of.
That is the top layer of the stack. Buyers open ChatGPT. The chatbot delivers a shortlist. The companies named are the companies considered.
On April 11, 2026, Addy Osmani, Director of Engineering at Google Cloud AI, published a separate definition of AEO: Agentic Engine Optimization. Same three letters. Completely different discipline. Osmani's AEO is about what happens after a chatbot fetches your site: whether the agent reading the page can make sense of it.
His argument rests on a number most B2B teams have never measured: token count. AI agents have context windows of roughly 100,000 to 200,000 tokens. The Cisco Secure Firewall Management Center REST API Quick Start Guide he cites comes in at 193,217 tokens — one document, 718,000 characters, consuming the agent's entire usable context. The agent then truncates, skips the page, or hallucinates from training data. Your analytics show a 400-millisecond page view with no scroll depth. Nothing tracked.
Osmani names six layers most teams treat as one or none. (1) Access control: whether robots.txt and the emerging agent-permissions.json spec actually let AI crawlers in. (2) Discovery: an llms.txt file at the root as a sitemap for agents. (3) Capability signaling: skill.md files that describe what an API does, not just how to call it. (4) Content formatting: Markdown over HTML, consistent headings, code directly after the claim it illustrates, tables instead of prose. (5) Token surfacing: page-level token counts exposed as metadata. (6) Copy for AI: a button that hands an agent clean Markdown instead of nav-heavy HTML.
Search Engine Land's coverage on April 15 was explicit: this is not the same AEO you have been reading about. The term collision is already causing confusion.
The two are stacked, not substitutes. Answer Engine Optimization decides whether your company is named in the response. Agentic Engine Optimization decides whether the agent fetching the page can actually read it. If the agent times out, your name comes out of the answer.
The second layer is invisible by design. No form fills. No click events. No referral traffic from the agent itself. For a contract signer reviewing the marketing budget, the dollars being spent on AEO might be funding only half the work.
The Build

How Warmly went from 5% to 30% of inbound demo requests from AI search in 60 days
Zhao and the Warmly team tested twelve buyer queries across five AI engines. They spent the next sixty days fixing what the tests revealed, working both layers at once.
The Answer Engine layer. They deployed FAQ schema across 312 blog posts via the Webflow API in a single afternoon, added full Organization schema with founders, aggregate rating, and social profiles, and put a Quick Answer block in the first 500 words of every important page — the tokens Osmani flags first. They fixed Core Web Vitals, refreshed every post so nothing was older than 60 days, and wrote dedicated pages for every query where Warmly was invisible.
The Agentic Engine layer. Structured data agents can parse without reading the HTML. Comparison tables with real numbers instead of "contact sales." Tables for parameter references instead of prose. Content with at least fifteen connected entities per page (named competitors, categories, concepts), because citation probability climbs sharply above that threshold.
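The entity-density claim is checkable with a crude script: count how many names from a hand-maintained list of competitors, categories, and concepts a page actually mentions. A minimal sketch in Python; the entity list below is hypothetical for a visitor-identification product, and the substring match is deliberately rough.

```python
# Hypothetical entity list: named competitors, categories, and concepts
# a visitor-identification page might be expected to connect.
ENTITIES = [
    "Warmly", "Clearbit", "6sense", "ZoomInfo", "Leadfeeder",
    "visitor identification", "intent data", "account-based marketing",
    "buyer intent", "firmographics", "IP enrichment", "lead scoring",
    "sales intelligence", "CRM", "reverse IP lookup", "demand generation",
]

def entity_density(page_text: str, entities=ENTITIES) -> list[str]:
    """Return the distinct entities mentioned on the page.

    Naive case-insensitive substring match; good enough for a first pass,
    but it will over-count short tokens that appear inside other words.
    """
    lowered = page_text.lower()
    return sorted(e for e in entities if e.lower() in lowered)

sample = ("Warmly layers intent data and visitor identification on top of "
          "your CRM, with IP enrichment and lead scoring built in.")
found = entity_density(sample)
print(len(found), found)
```

If a page comes back well under fifteen, that is the editing brief: name the competitors, the category, and the adjacent concepts explicitly instead of gesturing at them.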
The result, measured on their demo request form: in February 2026, AI search drove 5% of inbound demos. By the end of March, it was 30%. Six times in sixty days.
A second number worth keeping: AI-sourced traffic converted to demos at 14.2% versus 2.8% for Google organic, per Exposure Ninja's cross-platform analysis. Buyers from AI search arrive pre-qualified, already told by the AI that the company is a fit. The close rate follows.
Zhao was candid about what still was not working. Warmly remained invisible on several high-value queries. The point is not that they won. It is that they ran the two-layer audit at the same time, and the pipeline moved because both layers moved.
The Move

The full two-layer audit you can start this week
Step one — the Answer Engine audit. 30 minutes, no tools beyond a browser.
Open ChatGPT, Perplexity, Gemini, Claude, and Copilot in separate tabs. Run these ten queries in each tool, substituting your category, your top competitor, and the incumbent in your space:
Best [your category] tools 2026
[Your company] vs [your primary competitor]
Alternatives to [the incumbent in your category]
Best [your category] for [your typical customer size or segment]
[Your category] pricing comparison
How to choose a [your category] vendor
What are the risks of adopting [your category of solution]
Top [your category] companies for [your customer's industry]
[Your category] vs [an adjacent category]
What are the downsides of [your company]
For each query, record four things in a spreadsheet: which vendors are named, whether you appear, where you rank in the list, and how you are described. Fifty rows, one per query-engine pair, four columns each. The queries where you are invisible are your content roadmap. The queries where you are described incorrectly are your structured-data and reviews roadmap. The tenth query in particular — the negative one — will tell you which G2, Reddit, or forum posts AI models are currently citing about your product, word for word.
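If a blank spreadsheet helps, the grid above can be scaffolded in a few lines. A sketch that writes the empty query-by-engine matrix to CSV; the bracketed placeholders are the same ones from the query list and need filling in before you run the audit itself.

```python
import csv

# Engines and query templates from the audit above; substitute your own
# category, competitors, and incumbent for the bracketed placeholders.
ENGINES = ["ChatGPT", "Perplexity", "Gemini", "Claude", "Copilot"]
QUERIES = [
    "Best [your category] tools 2026",
    "[Your company] vs [your primary competitor]",
    "Alternatives to [the incumbent in your category]",
    "Best [your category] for [your typical customer size or segment]",
    "[Your category] pricing comparison",
    "How to choose a [your category] vendor",
    "What are the risks of adopting [your category of solution]",
    "Top [your category] companies for [your customer's industry]",
    "[Your category] vs [an adjacent category]",
    "What are the downsides of [your company]",
]

# One row per engine/query pair: 50 rows, four columns to fill by hand.
with open("aeo_audit.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["engine", "query", "vendors_named", "we_appear",
                     "our_rank", "how_described"])
    for engine in ENGINES:
        for query in QUERIES:
            writer.writerow([engine, query, "", "", "", ""])
```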
Step two — the Agentic Engine baseline. One hour, mostly a browser and a token counter.
First, robots.txt. Open yourcompany.com/robots.txt in any browser. Search the file for these user-agent strings: GPTBot, ClaudeBot, anthropic-ai, PerplexityBot, Google-Extended, and CCBot. If any of them appear under a Disallow: / directive, the AI crawler is silently blocked from your site. Either remove the block or confirm with legal and engineering that it is intentional. This is the change Osmani puts first because it is ten minutes of work that can erase months of content effort downstream.
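The same check can be scripted with Python's standard-library robots.txt parser, which is useful once you are auditing more than one domain. The sketch below parses a made-up robots.txt inline; to run it against your live file, swap the inline parse for `rp.set_url("https://yourcompany.com/robots.txt")` followed by `rp.read()`.

```python
from urllib.robotparser import RobotFileParser

# The AI crawler user agents named in the audit above.
AI_BOTS = ["GPTBot", "ClaudeBot", "anthropic-ai", "PerplexityBot",
           "Google-Extended", "CCBot"]

# Example robots.txt (hypothetical) that silently blocks one AI crawler.
sample = """\
User-agent: GPTBot
Disallow: /

User-agent: *
Allow: /
"""

rp = RobotFileParser()
rp.parse(sample.splitlines())

for bot in AI_BOTS:
    allowed = rp.can_fetch(bot, "https://yourcompany.com/pricing")
    print(f"{bot:16s} {'OK' if allowed else 'BLOCKED'}")
```

Any BLOCKED line is the ten-minute conversation with legal and engineering.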
Second, llms.txt. Open yourcompany.com/llms.txt. If the browser returns a 404, you do not have one. A minimum viable llms.txt is a Markdown file at the root of your domain with a one-line summary of what your company does and a linked list of the pages an agent would need to answer a buyer question. Hand engineering this template as the brief:
# [Your Company]
> [What you do, in one sentence.]
## Product
- [Pricing](/pricing): Plans, tiers, and what each includes
- [How it works](/how-it-works): Core workflow and data model
- [Integrations](/integrations): Available connectors and APIs
## Docs
- [Quick start](/docs/quickstart): 5-minute install and first call
- [API reference](/docs/api): Endpoints, auth, rate limits

Ship the file to yourcompany.com/llms.txt before the next sprint closes. Keep the full file under 5,000 tokens. Osmani calls this a few hours of work — it is the single highest-leverage technical change on this list.
Third, token counts. Paste your three most important pages — the homepage, the pricing page, and the top-converting product or solution page — into OpenAI's free tokenizer at platform.openai.com/tokenizer. If any page comes back over 25,000 tokens, flag it for the content team to chunk or shorten. That is a page an agent will truncate or skip.
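For a quick pre-screen before pasting pages into the tokenizer, the rough rule of thumb of about four characters per token is enough to flag obvious offenders. A sketch, assuming that heuristic; dense technical pages (code, tables, identifiers) run fewer characters per token, so treat the estimate as a lower bound on the real count.

```python
# Pre-screen pages by character count before running a real tokenizer.
# Assumes the common ~4-characters-per-token heuristic for English text,
# so the 25,000-token flag threshold maps to roughly 100,000 characters.

TOKEN_LIMIT = 25_000
CHARS_PER_TOKEN = 4  # rough heuristic; a lower bound for dense pages

def estimated_tokens(text: str) -> int:
    return len(text) // CHARS_PER_TOKEN

def needs_chunking(text: str, limit: int = TOKEN_LIMIT) -> bool:
    return estimated_tokens(text) > limit

page = "x" * 120_000  # stand-in for a fetched page's visible text
print(estimated_tokens(page), needs_chunking(page))
```

Anything the estimate flags goes to the tokenizer for a real count, then to the content team for chunking.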
Step three — ship the five quick wins. One to two weeks with content and engineering.
While the full diagnostic runs, ship these five changes this sprint. Every one is a published pattern from the Warmly playbook or Osmani's article.
Add FAQ schema to the top ten pages that rank for buyer-intent queries. Webflow, WordPress, and Contentful all have APIs for bulk deployment.
Add Organization schema to the site root with founders, aggregate review rating, social profiles, and founding date.
Put a Quick Answer block in the first 500 words of every important page — a 30- to 60-word direct answer to the title of the page, with bolded specifics. This is the 500-token rule Osmani flags.
Replace every "contact sales for pricing" on the pricing page with concrete numbers or ranges. AI models cite pages with concrete numbers and skip pages without them.
Refresh the publication date on any evergreen page that still reads 2024 or earlier and is still accurate. Warmly's team saw content updated within 60 days cited 1.9 times more often in AI answers.
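For reference, the FAQ and Organization schema in the first two items are small JSON-LD objects. A minimal sketch with placeholder company details (every bracketed value is yours to fill in); each object gets embedded in a `<script type="application/ld+json">` tag on the relevant page.

```python
import json

# Minimal FAQPage and Organization JSON-LD per schema.org vocabulary.
# All company-specific values below are placeholders.
faq_schema = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [
        {
            "@type": "Question",
            "name": "What does [Your Company] do?",
            "acceptedAnswer": {
                "@type": "Answer",
                "text": "[30- to 60-word direct answer with concrete specifics.]",
            },
        },
        # One Question/Answer pair per FAQ on the page.
    ],
}

org_schema = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "[Your Company]",
    "url": "https://yourcompany.com",
    "foundingDate": "[YYYY]",
    "founder": [{"@type": "Person", "name": "[Founder Name]"}],
    "sameAs": ["https://www.linkedin.com/company/yourcompany"],
    "aggregateRating": {
        "@type": "AggregateRating",
        "ratingValue": "[e.g. 4.6]",
        "reviewCount": "[e.g. 312]",
    },
}

print(json.dumps(faq_schema, indent=2))
```

The CMS APIs mentioned above let you inject the FAQ object per-page in bulk; the Organization object ships once, on the site root.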
Step four — find out where you sit.
Steps one through three cover maybe half of Osmani's six layers. The deeper question is whether your AI portfolio is pointed at growth or efficiency, and whether the foundation underneath it can support what you're building. The 20/80 AI Growth Benchmark scores you across five dimensions PwC's 2026 AI Performance Study identified as the differentiators between the 20% of companies capturing AI's economic returns and the 80% working harder with similar tools. Fourteen questions, twenty minutes, free. Scored personally by Elizabeta with one matched recommendation at the end — not a menu, not a sales pitch. If we are not a fit, we say so.
The hands-on work in steps one through three takes a few hours. The change in pipeline attribution takes weeks. Companies running both layers of the audit right now are the ones being named in the chatbots by Q3.
Elizabeta Kuzevska
Co-Founder, Revenue Experts AI
https://revenueexperts.ai
Sources
Alan Zhao / Warmly, "How B2B Buyers Use ChatGPT to Research Vendors (And How to Show Up)," March 2026 — https://www.warmly.ai/p/blog/b2b-buyers-chatgpt-geo-guide
Tim Sanders / G2, "In the Answer Economy, Don't Win the Click — Win the Answer," April 15, 2026 — https://company.g2.com/news/g2-research-the-answer-economy
Addy Osmani, "Agentic Engine Optimization (AEO)," April 11, 2026 — https://addyosmani.com/blog/agentic-engine-optimization/
Danny Goodwin / Search Engine Land, "Agentic engine optimization: Google AI director outlines new content playbook," April 15, 2026 — https://searchengineland.com/agentic-engine-optimization-google-ai-director-474358
OpenAI Tokenizer (free utility for measuring token counts on any page) — https://platform.openai.com/tokenizer
