The 4 reasons your competitor appears in Perplexity and not you — and how to reverse the situation
If your competitor systematically appears in AI answers and you don't, it's not a matter of luck or budget. It's the result of 4 precise structural advantages they've built, consciously or not, around the selection criteria LLMs use. Here are those 4 advantages and how to erase them one by one.
The moment everything becomes concrete
You ran the test. You opened ChatGPT or Perplexity, typed a query about your sector, and saw your competitor's name appear — not yours.
That moment is uncomfortable. It's also extremely useful, because it transforms an abstract concept — "AI visibility" — into a concrete, measurable reality. Your competitor is in the answer. You're not. There's a precise reason for that. Probably four.
The good news: none of these reasons relate to the actual quality of your offer, how long you've been in your market, or your marketing budget. LLMs don't distinguish between a large company and an agile SME. They select based on objective criteria you can work on methodically.
Reason 1 — Their entity is more consistent than yours
This is the most frequent reason and the least intuitive.
LLMs build their understanding of a brand by aggregating information from dozens of sources — your website, LinkedIn, Crunchbase, Clutch, G2, Google Business Profile, sector directories, article mentions. If these sources tell different stories about what you do, who you do it for and how you position yourself, the language model struggles to categorize you with confidence.
The competitor who appears in AI answers probably has, often without realizing it, a consistent description across all these sources: same wording, same positioning, same semantic keywords. This may seem trivial. For an LLM, it's a major reliability signal.
Check right now: type your competitor's name on Google, look at how they're described on their site, on LinkedIn, on Clutch, on Crunchbase. Now do the same for your brand. If your descriptions vary significantly from one source to another, you've found the first reason for your invisibility.
What you can do: unify your description into one precise sentence, repeated identically across the 8 priority sources. This action takes half a day and improves your entity score within 4 to 6 weeks. This is exactly what we did for Storyzee: from 5/100 to 52/100 in 6 weeks.
Reason 2 — Their content directly answers questions, yours describes your offer
This is the most important difference between classic SEO-optimized content and LLM-optimized content.
Classic SEO taught you to write pages that describe your offer, highlight your competitive advantages and drive action. This content is useful for convincing a prospect who has already landed on your site. It's useless for an LLM looking for a source to cite in response to a question.
LLMs select pages that directly answer a precise question, with the answer at the top of the page, verifiable factual data and a clear structure. This is the BLUF format: Bottom Line Up Front. The competitor who appears in AI answers probably has content structured this way, even if they've never heard the term.
A concrete example. If someone asks Perplexity "how to choose an international SEO agency", Perplexity will look for a page that directly answers this question — not a page that says "our agency is the best to support your international development." The first page will be cited. The second will be ignored.
Look at the 5 main pages of your site. Does each one start with a direct answer to a question your prospects actually ask? Or does each one start with a description of your company and values? If it's the second option, you've found the second reason for your invisibility.
What you can do: rewrite your 3 most important pages in BLUF format. Start with the direct 2-sentence answer, then develop. Add a FAQ section at the bottom of each page with 5 real questions and their answers. This rewrite takes 2 to 3 days and produces results within 6 to 10 weeks.
Reason 3 — Their schema markup speaks to AI engines, yours is absent or generic
Before even reading your content, AI engine crawlers read your site's structured code — schema markup. This is a technical layer invisible to your human visitors but decisive for LLMs.
Schema markup explicitly tells AI engines what your organization is, what it does, who leads it, where it is, what services it offers, and crucially — via the sameAs field — where it exists elsewhere on the web. This is the layer that connects your site to all your other digital presences and allows the LLM to build a consistent picture of your entity.
The competitor who appears in AI answers probably has a complete Organization schema with sameAs links to their Clutch, LinkedIn and Crunchbase profiles. They may also have FAQPage markup on their key pages; FAQs are the format LLMs cite most readily.
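To make this concrete, here is a minimal Organization schema sketch in JSON-LD. Every name and URL below is a placeholder to replace with your own; the description should be the exact sentence you use everywhere else:

```json
{
  "@context": "https://schema.org",
  "@type": "Organization",
  "name": "Your Agency Name",
  "url": "https://www.your-agency.example",
  "description": "The same one-sentence positioning you use on LinkedIn, Clutch and Crunchbase.",
  "sameAs": [
    "https://www.linkedin.com/company/your-agency",
    "https://clutch.co/profile/your-agency",
    "https://www.crunchbase.com/organization/your-agency"
  ]
}
```

This block goes inside a `<script type="application/ld+json">` tag in the `<head>` of your site, typically on the homepage and about page.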
And they may have done something very few players in France have yet done: created an llms.txt file at the root of their site. This file is the equivalent of robots.txt but for LLMs — it explicitly tells them who you are, what you do and how to cite you correctly.
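The llms.txt format is still an emerging convention rather than an enforced standard, so treat the following as a sketch. It follows the commonly proposed shape (a title, a one-line summary, then linked sections); all names, URLs and facts are placeholders:

```text
# Your Agency Name
> One-sentence description, identical to the one on your site, LinkedIn and Clutch.

## Services
- [International SEO](https://www.your-agency.example/services/international-seo): short factual summary
- [AI visibility audits](https://www.your-agency.example/services/ai-visibility): short factual summary

## Company
- Founded: 2018, Paris
- Contact: hello@your-agency.example
```

The file lives at the root of your domain, next to robots.txt.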
Check right now: open your browser's developer tools on your competitor's site, search for "application/ld+json" in the source code. If they have structured schema markup and you don't, you've found the third reason.
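If you'd rather script this check than dig through view-source, here is a small Python sketch that extracts JSON-LD blocks from a page's HTML. The sample HTML below is invented for illustration; in practice you would feed it the source of your competitor's homepage:

```python
import json
import re

def extract_json_ld(html: str) -> list:
    """Return every parsed JSON-LD block found in an HTML document."""
    blocks = []
    # Match <script type="application/ld+json"> ... </script>, across lines
    pattern = re.compile(
        r'<script[^>]*type=["\']application/ld\+json["\'][^>]*>(.*?)</script>',
        re.IGNORECASE | re.DOTALL,
    )
    for match in pattern.finditer(html):
        try:
            blocks.append(json.loads(match.group(1)))
        except json.JSONDecodeError:
            pass  # ignore malformed blocks
    return blocks

# Invented sample page for illustration
sample = '''<html><head>
<script type="application/ld+json">
{"@context": "https://schema.org", "@type": "Organization", "name": "Example Co"}
</script>
</head></html>'''

schemas = extract_json_ld(sample)
print([b.get("@type") for b in schemas])  # ['Organization']
```

An empty list on your own site, and a populated one on your competitor's, is the gap made visible.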
What you can do: implement Organization schema with complete sameAs, FAQPage schema on your priority pages, and create your llms.txt. If your site is on Webflow or a modern CMS, this implementation takes 1 to 2 days. This is the layer with the best effort/impact ratio in the entire method.
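For the FAQPage layer, the markup mirrors the FAQ visible on the page: one entry per question/answer pair. A minimal sketch, with a hypothetical question and answer you would replace with your real ones:

```json
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [
    {
      "@type": "Question",
      "name": "How long does an international SEO engagement take?",
      "acceptedAnswer": {
        "@type": "Answer",
        "text": "Most engagements run 3 to 6 months, depending on the number of target markets."
      }
    }
  ]
}
```

The answers in the markup must match the answers visible on the page; the markup describes the content, it doesn't replace it.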
Reason 4 — They're in the sources AI engines cite, you're not
This is the most powerful reason — and the longest to fix.
LLMs give maximum trust to sources they consider independent and authoritative. When Perplexity answers a recommendation question — "what is the best agency X" — it doesn't just crawl agency websites. It prioritizes third-party sources that have evaluated and ranked these agencies: Clutch, G2, DesignRush, editorial rankings published by recognized consultants, articles in industry media, discussions in professional forums.
The competitor who appears in these answers is present in those sources. They have verified reviews on Clutch. They're mentioned in a "top [sector] agencies 2026" article published on a high-authority sector blog. They may have appeared on a podcast or been interviewed in a specialized media outlet.
These third-party presences aren't built overnight, which is what makes this layer the hardest and slowest to work on. But it's also the one that creates the most durable advantage. Once you're cited in an authoritative editorial ranking, that citation works for you continuously, in every AI answer to recommendation queries in your category.
Check right now: search your competitor's name in quotes on Google. Count how many third-party sources mention them: articles, rankings, reviews, interviews. Now do the same for your brand. The gap between the two counts is the size of your visibility gap in AI answers to recommendation queries.
What you can do: identify the 5 priority third-party sources in your sector and build a presence plan for each. For review platforms (Clutch, G2): create or complete your profile and activate client review collection. For editorial rankings: identify the authors of these rankings and propose a relevant angle to be included. For sector media: propose a guest article on a topic where you have real expertise. Timeline to see results on recommendation queries: 8 to 16 weeks.
In what order to address these 4 reasons
The 4 reasons should not be addressed in just any order. There's a logical sequence that maximizes impact at each step.
Start with reason 1 — entity consistency. This is the absolute prerequisite. If LLMs don't recognize your brand as a reliable and consistent entity, optimizations in subsequent layers will have limited impact. Timeline: 1 to 2 days of work, results in 4 to 6 weeks.
Continue with reason 3 — schema markup. This is the fastest technical layer to implement once your positioning is unified. It immediately amplifies the entity signals from layer 1. Timeline: 1 to 2 days of work, impact in 4 to 6 weeks.
Then work on reason 2 — content citability. With a consistent entity and schema markup in place, your BLUF-restructured content will be much more easily indexed and cited. Timeline: 2 to 5 days of work depending on volume, results in 6 to 10 weeks.
Finally activate reason 4 — third-party sources. This is the layer that takes the most time but produces the most durable results. It builds in parallel with the first three but only produces measurable effects once layers 1, 2 and 3 are in place. Timeline: 4 to 12 weeks of continuous actions, results in 8 to 16 weeks.
This sequence is exactly what the Storyzee method follows — in this order, for this reason. See how our 8 agents measure each layer.
Benjamin Gievis
Founder of Storyzee. Former agency owner turned AI visibility specialist. Building the tool and methodology so SMEs exist in answers from ChatGPT, Perplexity, Gemini, Claude and Grok.
Talk to Benjamin — 30 min free