8 analysis agents. A score out of 100. Calibrated criteria.

The Storyzee platform runs 8 specialized agents that query ChatGPT, Perplexity, Gemini, Claude and Grok in real time. Each agent measures a precise dimension of your AI visibility using weighted criteria and numeric benchmarks. Together they generate your AI Visibility Score — a global indicator out of 100.

No subjective assessment. Calculated scores. Verifiable criteria. Measurable progress.

One global score, 8 weighted indicators

Your AI Visibility Score is calculated from 8 weighted indicators. Each indicator corresponds to a software agent that analyzes a specific dimension of your AI visibility, produces a score against calibrated criteria, and contributes to the global score in proportion to its weight.

Citation Tracker 22%
Authority Scanner 18%
Knowledge Probe 15%
Referral Scanner 13%
Content Analyzer 10%
Competitor Analyzer 10%
Social Scanner 7%
Technical Scanner 5%

Every point of the score is measurable, explainable and improvable.
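To make the aggregation concrete, here is a minimal sketch of the weighted-sum calculation, assuming each agent returns a score out of 100. The weights come from the list above; the per-agent scores and function names are illustrative, not the platform's actual code.

```python
# Weights from the indicator list above (they sum to 100%).
AGENT_WEIGHTS = {
    "citation_tracker": 0.22,
    "authority_scanner": 0.18,
    "knowledge_probe": 0.15,
    "referral_scanner": 0.13,
    "content_analyzer": 0.10,
    "competitor_analyzer": 0.10,
    "social_scanner": 0.07,
    "technical_scanner": 0.05,
}

def global_score(agent_scores: dict[str, float]) -> float:
    """Weighted sum of per-agent scores (each 0-100) -> global score out of 100."""
    return round(sum(AGENT_WEIGHTS[a] * s for a, s in agent_scores.items()), 1)

# Hypothetical per-agent results for one scan.
print(global_score({
    "citation_tracker": 62, "authority_scanner": 71, "knowledge_probe": 55,
    "referral_scanner": 48, "content_analyzer": 80, "competitor_analyzer": 58,
    "social_scanner": 40, "technical_scanner": 90,
}))  # 62.0
```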

The 8 platform agents

Citation Tracker

22%

Is your brand cited in AI answers?

Sends your target queries to ChatGPT, Perplexity, Gemini, Claude and Grok, then detects every mention of your brand — position, frequency, and which competitors appear in your place.

Share of Voice 35%
Multi-platform coverage 25%
Position in responses 20%
Citation quality 20%
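As an illustration of the detection step, here is a minimal sketch that locates each brand's first mention in a single answer, assuming simple word-boundary matching. The brand names and the answer text are hypothetical; the production agent is of course more involved.

```python
import re

def find_mentions(answer: str, brands: list[str]) -> dict[str, int | None]:
    """Return the character position of each brand's first mention (None if absent)."""
    positions = {}
    for brand in brands:
        m = re.search(rf"\b{re.escape(brand)}\b", answer, re.IGNORECASE)
        positions[brand] = m.start() if m else None
    return positions

# Made-up answer from one engine, with two competitors and one absent brand.
answer = "For CRM software, HubSpot and Salesforce are the most cited options."
print(find_mentions(answer, ["HubSpot", "Salesforce", "Acme CRM"]))
# {'HubSpot': 18, 'Salesforce': 30, 'Acme CRM': None}
```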

Authority Scanner

18%

Are you present and trusted on third-party platforms?

Checks your presence on platforms AI engines consult as trust signals — Trustpilot, G2, Capterra, Clutch, Crunchbase, Google Business Profile, Product Hunt — then evaluates listing quality, ratings and review volume.

Platform coverage 25%
Ratings & review volume 25%
Profile quality 20%
Review sentiment 15%
Google Knowledge Panel 10%
NAP consistency 5%

Knowledge Probe

15%

Do AI engines know and recognize your brand?

Asks each AI platform "What is [your brand]?" and analyzes the responses. Detects hallucinations, outdated information, confusion with similarly named entities, and measures knowledge depth across each engine.

Brand recognition 25%
Factual accuracy 25%
Cross-platform consistency 20%
Knowledge depth 15%
Entity clarity 15%
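One simple way to picture the accuracy check: compare each engine's answer against a reference fact sheet and flag what is contradicted or missing. The facts and answers below are made-up examples, not the agent's actual method.

```python
# Hypothetical reference facts about the audited brand.
REFERENCE_FACTS = {"founded": "2018", "hq": "Paris", "category": "consulting"}

def check_answer(answer: str) -> list[str]:
    """Return the reference facts that the answer omits or contradicts."""
    return [k for k, v in REFERENCE_FACTS.items() if v.lower() not in answer.lower()]

answers = {
    "engine_a": "Acme is a Paris-based consulting firm founded in 2018.",
    "engine_b": "Acme is a software company founded in 2012.",  # outdated / wrong
}
for engine, text in answers.items():
    print(engine, "missing or wrong:", check_answer(text))
# engine_a missing or wrong: []
# engine_b missing or wrong: ['founded', 'hq', 'category']
```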

Referral Scanner

13%

Is your brand referenced across the web?

Measures the web-wide referral signals that heavily influence AI training data: backlinks from high-authority domains, editorial mentions in news and publications, Wikipedia and Wikidata presence.

Referring domain volume 20%
Reference authority quality 25%
Wikipedia & Wikidata 20%
Editorial & news coverage 20%
Link quality (dofollow) 15%

Content Analyzer

10%

Is your content ready for AI engines?

Evaluates whether AI engines can extract, understand and cite your content. Checks heading structure, answer-first patterns (BLUF), FAQ blocks, entity clarity, citable claims, meta descriptions, and content freshness.

Structure & extractability 30%
Content quality & citability 25%
FAQ sections 20%
Entity clarity 15%
Content freshness 10%
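A minimal sketch of what a few of these checks could look like on raw HTML, assuming simple pattern matching. The page snippet and check names are illustrative; the real criteria are more detailed.

```python
import re

def quick_checks(html: str) -> dict[str, bool]:
    """Toy extractability checks: heading hierarchy, FAQ schema, meta description."""
    return {
        "single_h1": len(re.findall(r"<h1\b", html, re.I)) == 1,
        "has_h2_sections": bool(re.search(r"<h2\b", html, re.I)),
        "has_faq_schema": '"FAQPage"' in html,
        "has_meta_description": bool(
            re.search(r'<meta\s+name="description"', html, re.I)),
    }

page = ('<h1>Pricing</h1><h2>Plans</h2><script type="application/ld+json">'
        '{"@type": "FAQPage"}</script>')
print(quick_checks(page))
# {'single_h1': True, 'has_h2_sections': True,
#  'has_faq_schema': True, 'has_meta_description': False}
```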

Competitor Analyzer

10%

How do you compare to your competitors?

Head-to-head comparison against your competitors across all AI visibility dimensions. Probes AI knowledge, citation overlap, web presence, and content readiness for each competitor. Identifies where you lead and where you need to catch up.

AI knowledge gap 25%
Citation share gap 30%
Web presence gap 25%
Content readiness gap 20%

Social Scanner

7%

What is your social media presence?

Evaluates your presence on LinkedIn, X/Twitter, YouTube, Reddit and other sector-relevant platforms. These channels feed AI engines directly: LinkedIn is indexed by Bing/Copilot, YouTube transcripts end up in training data, and Reddit discussions shape AI recommendations.

Platform presence 30%
Follower authority 20%
Posting frequency 20%
Content quality 20%
Profile completeness 10%

Technical Scanner

5%

Is your site technically ready for AI?

Audits your site's technical foundation for AI engines: schema.org markup, robots.txt AI crawler policies (GPTBot, ClaudeBot), llms.txt, sitemap, canonical tags, Open Graph, security headers, and page speed.

Schema.org markup 25%
AI crawler access 25%
llms.txt 15%
Meta & Open Graph 15%
Performance & security 20%
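To illustrate the crawler-access check, here is a minimal sketch using Python's standard robots.txt parser, assuming a robots.txt that blocks GPTBot but allows everyone else. The file content is a made-up example; the real agent also audits llms.txt, schema.org markup, sitemaps and more.

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt: GPTBot blocked, all other crawlers allowed.
ROBOTS_TXT = """\
User-agent: GPTBot
Disallow: /

User-agent: *
Allow: /
"""

AI_CRAWLERS = ["GPTBot", "ClaudeBot", "PerplexityBot"]

parser = RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

for bot in AI_CRAWLERS:
    allowed = parser.can_fetch(bot, "https://example.com/")
    print(f"{bot}: {'allowed' if allowed else 'blocked'}")
# GPTBot: blocked
# ClaudeBot: allowed
# PerplexityBot: allowed
```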

The Analyst — strategic synthesis

Once the 8 agents have run, the Analyst combines their results to surface cross-agent patterns, prioritize actions by business impact, and produce your strategic AI visibility roadmap. The Analyst sees what no single agent can see on its own.

How the platform works

1

Multi-engine scan

All 8 agents simultaneously query ChatGPT, Perplexity, Gemini, Claude and Grok against a panel of target queries defined for your business and market.
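A minimal sketch of this fan-out, assuming a placeholder query_engine function (real engine APIs differ and are not shown here):

```python
from concurrent.futures import ThreadPoolExecutor

ENGINES = ["chatgpt", "perplexity", "gemini", "claude", "grok"]

def query_engine(engine: str, prompt: str) -> str:
    # Placeholder: a real implementation would call each engine's API.
    return f"[{engine}] answer to: {prompt}"

def scan(prompt: str) -> dict[str, str]:
    """Send one target query to all engines in parallel, collect the answers."""
    with ThreadPoolExecutor(max_workers=len(ENGINES)) as pool:
        futures = {e: pool.submit(query_engine, e, prompt) for e in ENGINES}
        return {e: f.result() for e, f in futures.items()}

print(scan("best CRM for small businesses"))
```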

2

Calibrated scoring

Each agent produces a weighted score against calibrated numeric criteria. The platform aggregates the results into a global AI Visibility Score out of 100.

3

Strategic synthesis

The Analyst cross-references results from all 8 agents, identifies cross-cutting patterns, and produces an action plan prioritized by impact and effort.

4

Execution & measurement

Our experts execute the identified actions. The platform re-runs the analysis every 2 weeks to measure real impact on your score.
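Conceptually, measuring impact is a matter of diffing per-agent scores between scans. A minimal sketch, with made-up scores for two scans two weeks apart:

```python
def score_deltas(before: dict[str, float], after: dict[str, float]) -> dict[str, float]:
    """Per-agent change between two scans; positive = improvement."""
    return {agent: round(after[agent] - before[agent], 1) for agent in before}

# Hypothetical scores from two consecutive scans.
scan_week_0 = {"citation_tracker": 54.0, "technical_scanner": 60.0}
scan_week_2 = {"citation_tracker": 61.0, "technical_scanner": 85.0}

for agent, delta in score_deltas(scan_week_0, scan_week_2).items():
    print(f"{agent}: {delta:+}")
# citation_tracker: +7.0
# technical_scanner: +25.0
```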

What sets Storyzee apart

Calibrated criteria, not opinions

Each agent measures precise criteria with numeric thresholds. A Citation Tracker score of 75 means exactly the same thing from one audit to the next. No subjectivity, no vague wording — comparable, reproducible data.

Empirically calibrated methodology

Every weight and threshold in the platform was calibrated against brand audits across B2B SaaS, consulting, legal and agency sectors. Scoring criteria are continuously validated against real results — not theoretical models.

5 AI engines covered simultaneously

The platform queries ChatGPT, Perplexity, Gemini, Claude and Grok in parallel. Your visibility can vary dramatically from one engine to another — a brand visible in Perplexity can be completely absent from ChatGPT. Our 8 agents measure each engine separately.

Want to see your 8 indicators in real time?

FAQ

How are the 8 indicator weights calculated?

Weights reflect the empirically measured impact of each dimension on overall AI visibility. Citation Tracker carries the highest weight (22%) because direct brand citation in AI answers is the most visible outcome and the strongest signal of visibility. Authority Scanner (18%) and Knowledge Probe (15%) follow because third-party trust signals and entity recognition are foundational to being cited. Technical Scanner carries the lowest weight (5%) because technical readiness is a prerequisite — once implemented, it yields no further gains. Every weight draws on 25 years of digital visibility expertise and is continuously validated against real client audit results.

How does the platform measure results concretely?

The platform runs all 8 specialized agents against a panel of 20 to 40 target queries defined for your business and market. Each agent produces a weighted score using calibrated numeric criteria — not subjective assessments. A Citation Tracker score of 75 means exactly the same thing from one scan to the next. Together the 8 agents generate a global AI Visibility Score out of 100. Every 2 weeks, the platform automatically re-runs the complete analysis across ChatGPT, Perplexity, Gemini, Claude and Grok to track progress. Our experts interpret the delta and adjust the GEO/AEO strategy accordingly.

Why does AI visibility vary so much between engines?

Each AI engine uses different training data, different retrieval augmentation methods and different ranking logic. ChatGPT relies heavily on web crawl data and has specific partnerships. Perplexity performs real-time web search and prioritizes recent, citable sources. Gemini draws from Google's search index and Knowledge Graph. Claude uses a distinct training corpus. Grok integrates X/Twitter data. A brand that scores well on Perplexity may score zero on ChatGPT — which is exactly why the Storyzee platform measures all 5 engines separately and produces per-engine breakdowns.

Do I need to rebuild my website to improve AI visibility?

Rarely. In most cases, improving AI visibility means optimizing what you already have: restructuring existing content in BLUF (Bottom Line Up Front) format so AI engines can extract clear answers, adding JSON-LD structured data (Organization, FAQPage, BreadcrumbList), creating an llms.txt file, opening robots.txt to AI crawlers (GPTBot, ClaudeBot, PerplexityBot), and fixing inconsistencies across third-party directory listings. A full site rebuild is only recommended when the architecture fundamentally prevents AI crawling or content extraction.
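As a concrete illustration of the structured-data step, here is a minimal sketch that builds JSON-LD Organization markup. The organization details are placeholders, and the fields shown are common schema.org properties, not a prescribed set.

```python
import json

# Placeholder organization details; sameAs links tie the entity to its
# third-party profiles, which helps engines disambiguate the brand.
organization = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "Example Co",
    "url": "https://www.example.com",
    "sameAs": [
        "https://www.linkedin.com/company/example-co",
        "https://www.crunchbase.com/organization/example-co",
    ],
}

snippet = ('<script type="application/ld+json">'
           f"{json.dumps(organization, indent=2)}"
           "</script>")
print(snippet)  # ready to embed in the page's <head>
```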

What is BLUF and why does it matter for AI citability?

BLUF — Bottom Line Up Front — is a content structuring method where every section leads with the answer before providing context and evidence. AI engines extract answers by identifying the first complete, factual statement that directly addresses a query. Content structured in BLUF format gives AI engines a clean, citable extract immediately — instead of making them parse through introductions, storytelling or background. Our Content Analyzer agent specifically measures BLUF compliance as part of the Structure & extractability criterion.
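A hypothetical before/after makes the pattern visible (both snippets are invented copy, not drawn from a real page):

```python
# Buried answer: an AI engine has to read past the preamble to find the fact.
before = ("Businesses today face many pricing challenges, and models have "
          "evolved considerably. After a decade in the market, we now offer "
          "plans starting at $49/month.")

# BLUF: the citable answer leads, context follows.
after = ("Plans start at $49/month, with no setup fee. Pricing has evolved "
         "considerably over our decade in the market.")
```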

How does the Analyst differ from the 8 agents?

The 8 agents each measure one specific dimension of AI visibility independently. The Analyst is a synthesis layer that combines all 8 agent results to surface cross-agent patterns that no single agent can detect. For example: the Analyst might identify that your Knowledge Probe score is low because your Authority Scanner detected inconsistent brand descriptions across third-party listings — a connection that neither agent would flag on its own. The Analyst produces the prioritized strategic roadmap that guides the entire GEO/AEO engagement.