The Revenue Signal — Issue 05

In late October 2025, Reddit created a hidden test post on the platform. The content was visible only to Google's search crawler, inaccessible anywhere else on the internet.
Within hours, the post appeared in Perplexity's AI search results.
The screenshots became key evidence in a federal complaint filed October 22, 2025 in the Southern District of New York. The complaint, Reddit Inc. v. SerpApi LLC (25-cv-08736), accused Perplexity AI and three scraping firms (SerpApi, Oxylabs UAB, and AWMProxy) of running what Reddit's chief legal officer Ben Lee called an "industrial-scale 'data laundering' economy."
Reddit had already licensed its data to Google for $60M a year and to OpenAI in May 2024. Perplexity was the holdout.
Three months after the filing, Perplexity's citation graph looked nothing like it did in mid-2025. The map most B2B teams set their AEO strategy against had already redrawn itself.
This week: how a lawsuit reshaped Perplexity's citation graph in 90 days, how Apollo.io reclaimed its category description in LLM answers, and four executive moves you can make before next Thursday. ~12 minutes to read.
The Signal

A lawsuit shifted Perplexity's source mix overnight. Most B2B AEO strategies are still planned against the old one.
Per CMSWire's reporting, Perplexity's Reddit citation share dropped 86% almost immediately after the October 2025 lawsuit, with YouTube absorbing most of the displaced share. Conductor's research on the same period reinforces the direction — Reddit's overall AI citation frequency across LLMs dropped roughly 50% over the four months that followed, and sole-source citations (where Reddit is the only source the model cites) rose 31%.
The pre-lawsuit baseline mattered. In mid-2025, Profound's analysis of 680 million AI citations had Reddit at 46.7% of Perplexity's top-10 source share — among the platforms Perplexity returned to most often, nearly half were Reddit. (Profound's separate total-citation figure for Reddit on Perplexity was 6.6%; the 46.7% measures concentration at the top of Perplexity's source pool, where buyer-intent queries actually surface.) By January 2026, Tinuiti measured Reddit at 24% of all Perplexity citations. Still significant, but in a meaningfully different competitive position from mid-2025.
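The gap between those two numbers is just a difference in denominator. A toy computation makes the distinction concrete; the domain counts below are invented for illustration and are not Profound's data.

```python
from collections import Counter

# Hypothetical citation log: one entry per cited source per answer.
# Counts are invented so the two metrics diverge visibly.
citations = (
    ["reddit.com"] * 66 + ["youtube.com"] * 40 + ["wikipedia.org"] * 30
    + ["g2.com"] * 20 + ["linkedin.com"] * 15 + ["medium.com"] * 12
    + ["github.com"] * 10 + ["quora.com"] * 9 + ["nytimes.com"] * 8
    + ["forbes.com"] * 7 + [f"longtail{i}.com" for i in range(783)]
)

counts = Counter(citations)
total = sum(counts.values())

# Metric 1: total-citation share. Reddit's slice of every citation,
# long tail included.
total_share = counts["reddit.com"] / total

# Metric 2: top-10 concentration. Reddit's slice of the citations that
# went to the ten most-cited domains, where buyer queries surface.
top10 = counts.most_common(10)
top10_total = sum(n for _, n in top10)
top10_share = counts["reddit.com"] / top10_total

print(f"total share: {total_share:.1%}, top-10 share: {top10_share:.1%}")
# → total share: 6.6%, top-10 share: 30.4%
```

Same citation log, two very different headline numbers. That is why Profound can report 6.6% and 46.7% for the same platform without contradiction.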
The visible trigger was legal pressure, not a model update. The speed was 90 days, not three years.
The mechanism is harder to pin down precisely. Reddit's complaint argued Perplexity pulled data through Google's index rather than from Reddit directly. After the filing, the citation share fell sharply — whether because Reddit hardened anti-scraping, Perplexity adjusted retrieval, or both is not publicly documented. The result was the same: Reddit got de-weighted in Perplexity's answers.
B2B citation graphs are concentrated. A query like "best revenue intelligence platforms for mid-market SaaS" pulls from a small set of high-authority sources, not the hundreds that surface for "best running shoes." When one source gets de-weighted, the competitive map redraws around the rest.
If your AEO strategy was set in mid-2025 and you have not measured since, three things are silently true now.
The Reddit-for-Perplexity play is depreciating. Reddit still gets cited by ChatGPT and Gemini, but the model-by-model variation is severe: per Tinuiti, Gemini cites Reddit in just 0.1% of responses, while Perplexity still pulls 31% of its citations from social media. A single Reddit-heavy plan is not optimizing across LLMs at all.
Quarterly content budgets built around the mid-2025 graph are spending against a depreciating channel. The dollars went out before the citation share moved.
Your competitor map is stale. The companies beating you on Perplexity in January 2026 are likely not the ones beating you in July 2025, even if neither side changed its content.
The teams that win AEO over the next three years will not be the ones with the cleverest one-time strategy. They will be the ones with the fastest measurement cycle.
The Build

How Apollo.io reclaimed its category description inside LLM answers.
The B2B company that has been most public about responding to AEO citation instability is Apollo.io, the sales engagement platform.
Brianna Chapman, who leads Reddit and community strategy at Apollo, started by checking whether Apollo appeared when ChatGPT, Perplexity, or Gemini was asked about sales tools. Apollo was being cited. It was being cited wrong.
"LLMs kept positioning us as 'just a B2B data provider' when we're actually a full sales engagement platform," Chapman said in HubSpot's 2026 AEO case study roundup. "Competitors were getting cited for capabilities we had, and sometimes did better."
The mechanism was simple. LLMs were pulling from old Reddit threads that described Apollo with incomplete or outdated information. The threads existed and were crawlable, so the LLMs treated them as truth.
Chapman's response was a measurement-and-correction loop, not a content campaign. She pulled first-party data from Enterpret (customer feedback), social listening, and prompts users were giving inside Apollo's own AI Assistant — building a list of about 200 buyer-intent prompts per topic. She tracked citations across LLMs in AirOps. Then she built r/UseApolloIO as an owned community resource and posted a detailed comparison thread on when teams should choose Apollo versus a competitor.
The new thread displaced the old one within a week. Apollo gained +3,000 citations across key buyer prompts in LLMs. The subreddit reached 1,100+ members and 33,400+ content views in over five months. Apollo's brand citation rate hit 63% on AI awareness prompts and 36% on category prompts.
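The core of a loop like Chapman's fits in a few lines. Everything below is a hypothetical sketch, not Apollo's tooling: `ask_engine` stands in for whatever API or tracking product (AirOps, in Apollo's case) actually returns answer text, and the prompts and wrong-framing strings are invented placeholders.

```python
from dataclasses import dataclass

PROMPTS = [  # in practice, ~200 buyer-intent prompts per topic
    "best sales engagement platforms for SMB teams",
    "Apollo.io vs alternatives for outbound prospecting",
]
ENGINES = ["chatgpt", "perplexity", "gemini"]
BRAND = "apollo.io"
# Outdated descriptions the loop should flag as misrepresentation.
WRONG_FRAMINGS = ["just a b2b data provider", "data provider only"]

def ask_engine(engine: str, prompt: str) -> str:
    """Hypothetical stub; replace with a real API or tracking-tool call."""
    return "Apollo.io is a full sales engagement platform ..."

@dataclass
class Result:
    engine: str
    prompt: str
    cited: bool       # does the brand appear at all?
    misframed: bool   # cited, but with the outdated description?

def audit() -> list[Result]:
    results = []
    for engine in ENGINES:
        for prompt in PROMPTS:
            answer = ask_engine(engine, prompt).lower()
            cited = BRAND in answer
            misframed = cited and any(w in answer for w in WRONG_FRAMINGS)
            results.append(Result(engine, prompt, cited, misframed))
    return results

results = audit()
citation_rate = sum(r.cited for r in results) / len(results)
print(f"citation rate: {citation_rate:.0%}")
```

The point of the `misframed` flag is Apollo's lesson in miniature: tracking only whether you appear misses the more expensive failure, appearing as the wrong category.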
What revenue leaders can take from this: misrepresentation is worse than invisibility. Being cited as the wrong category cost Apollo deals before the fix went in. Owned community surfaces, like a brand-run subreddit or a comparison hub the team controls, are leverage points LLMs respect. And citation tracking is a measurable capability now, not a hope.
The Move

Verify it yourself. Ask who owns it. Move the budget.
Four executive moves. Each takes 30 minutes or less. The first pass requires no new tools.
1. Tonight — open Perplexity yourself and run the three buyer queries that should surface your company in the top three results. If you do not appear, or appear in the wrong category (Apollo's exact problem), the conversation with your CRO tomorrow has a different opening. This is not a marketing task to delegate. It is the gut-check only you can run.
2. Tomorrow — ask one question in your leadership standup. "When was our last AEO citation audit, and who owns the next one?" Most B2B companies cannot name an owner because the work sits between marketing (content), product (the website), and sales (the buyer narrative). The fix is not more budget. It is one VP with the measurement cycle on their plate.
3. By Friday — pull AEO out of the marketing line item. If your AEO investment lives inside "content marketing" or "SEO," it is not a measurable channel. It is a vibe. Ask your CFO to break it out as its own line in next quarter's plan, with a 90-day citation rate target attached. A channel that cannot be measured cannot be defended to the board.
4. Next QBR — add the citation re-audit to executive cadence. Put the 90-day re-audit on the same forum where you review pipeline. The point is that "we ran the audit" becomes a yes-or-no question your CRO answers in front of you, not something that quietly slips off the marketing backlog between quarters.
If you want to put numbers behind step 1 instead of running it yourself, Revenue Experts AI runs two audits at different depths.
The free AI Visibility Audit is AI-powered, self-serve, and returns results in 60 seconds. It tests whether your site is structurally ready to be cited by AI engines — whether ChatGPT, Claude, Gemini, and Perplexity can parse your content, find your authority signals, and pass your technical accessibility checks. It tells you if your site is ready. It does not tell you if you are currently cited.
The $497 AI Visibility Audit runs the Revenue Experts AI Citation Audit Method — 50 buyer-intent prompts across ChatGPT, Claude, Perplexity, and Gemini, three runs each, 600 measured LLM calls. Human-delivered. 5-7 business days. It tells you whether you are currently being cited, who beats you when you are not, and exactly what to fix in what order.
Different jobs. The free audit tests readiness. The paid audit tests citation. A company can pass readiness and still be invisible in citations because no buyer is asking the kind of question that surfaces its content. A company can fail readiness and still have decent citation share because its authority signals are strong enough to compensate. Most B2B SaaS companies need to know both.
A B2B SaaS company can be cited 60% of the time on ChatGPT, 30% on Claude, 5% on Perplexity, and 0% on Gemini. The single-LLM number that gets reported as a generic "AEO score" misses the entire signal. The four-LLM coverage in the paid audit is what produces the diagnostic difference between "we are winning" and "we are third."
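Using the paid audit's shape (50 prompts, three runs each, so 150 calls per engine and 600 total), the arithmetic shows exactly how a blended score hides that spread. The cited counts below are invented to match the illustrative rates above.

```python
# Illustrative counts only: cited answers out of 150 calls per engine.
runs_per_engine = 50 * 3  # 50 prompts x 3 runs = 150 calls per engine
cited_counts = {"chatgpt": 90, "claude": 45, "perplexity": 8, "gemini": 0}

# Per-engine citation rates: the diagnostic view.
per_engine = {e: n / runs_per_engine for e, n in cited_counts.items()}

# Blended rate across all 600 calls: the generic "AEO score".
blended = sum(cited_counts.values()) / (runs_per_engine * len(cited_counts))

for engine, rate in per_engine.items():
    print(f"{engine:>10}: {rate:.0%}")
print(f"   blended: {blended:.0%}")
# The blended figure lands near 24%, which describes none of the
# four engines: not the 60% win on ChatGPT, not the 0% on Gemini.
```

A single 24% tells a board "mediocre everywhere." The per-engine breakdown tells it "winning on ChatGPT, absent on Gemini," which is an actionable difference.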
But the more important point is the cadence, not the audit. A 90-day measurement cycle is the floor for B2B SaaS in any fast-moving category. A 6-month or 12-month cycle leaves you executing against citation maps that have already shifted.
Elizabeta Kuzevska is the co-founder of Revenue Experts AI, where she builds curated knowledge libraries and multi-agent systems that decide whether AI retrieves the right answer — inside a client's own systems and inside the public AI search engines that cite them. Her methodology applies the same retrieval discipline to private RAG pipelines and AEO citation strategy, treating both as one engineered system. See the courses and try some agents · Connect on X: @ekuzevska · Connect on LinkedIn
Sources
CNBC, "Reddit accuses Perplexity of stealing user posts, expanding data rights battle with AI industry," October 23, 2025 — https://www.cnbc.com/2025/10/23/reddit-user-data-battle-ai-industry-sues-perplexity-scraping-posts-openai-chatgpt-google-gemini-lawsuit.html
TechCrunch, "Reddit says it's made $203M so far licensing its data," February 22, 2024 — https://techcrunch.com/2024/02/22/reddit-says-its-made-203m-so-far-licensing-its-data/
TechCrunch, "OpenAI inks deal to train AI on Reddit data," May 16, 2024 — https://techcrunch.com/2024/05/16/openai-inks-deal-to-train-ai-on-reddit-data/
CMSWire, "Reddit's Rise in AI Citations: What Marketers Must Know About AEO Strategy," 2026 — https://www.cmswire.com/digital-marketing/reddits-rise-in-ai-citations-what-marketers-must-know-about-aeo-strategy/
Conductor, "Reddit AI Citations Are Dropping. Here's How Brands Win the Visibility Back," March 2026 — https://www.conductor.com/academy/reddit-ai-citation-decline/
Profound, "AI Platform Citation Patterns: How ChatGPT, Google AI Overviews, and Perplexity Source Information," 2025 (analysis of 680M citations, Aug 2024–June 2025) — https://www.tryprofound.com/blog/ai-platform-citation-patterns
SaaS Intelligence (citing Tinuiti Q1 2026 AI Citations Trends Report), March 2026 — https://saasintelligence.substack.com/p/reddits-ai-citation-share-just-grew
HubSpot, Jenny Romanchuk, "Answer engine optimization case studies that prove the ROI of AEO in 2026," updated March 2026 — https://blog.hubspot.com/marketing/answer-engine-optimization-case-studies
Revenue Experts AI, "The Revenue Experts AI Citation Audit Method" — https://revenueexperts.ai/the-revenue-experts-ai-citation-audit-method/
Revenue Experts AI, "AI Visibility Audit" (free, self-serve) — https://aeovisibility.revenueexpertsai.com/
Revenue Experts AI, "$497 AI Visibility Audit" (full citation diagnostic, 5-7 business days) — https://meetings.hubspot.com/john2956
