Canonical definitions, scoring rubrics, and measurement methodology for every metric in the HEO measurement system. These six metrics are the only way to know whether your HEO architecture is working — and which layer to fix when it is not.
ERS
Entity Representation Score
M-01 · Range: 0 – 5
How accurately and prominently does the AI describe your entity?
Definition
Entity Representation Score (ERS) is the primary HEO metric. It measures the quality of how AI systems represent a business entity in generated responses — not merely whether the entity is mentioned, but how accurately, prominently, and favorably it is described. ERS is scored on a 0–5 integer scale and is measured independently for each AI platform tested.

A score of 0 means the entity is completely absent from AI responses to relevant queries. A score of 1 means the entity is occasionally mentioned but described inaccurately or incompletely. A score of 2 means the entity is mentioned with a broadly correct description but without specific credentials, differentiators, or recommendation language. A score of 3 means the entity is regularly cited with accurate credentials and some differentiating detail. A score of 4 means the entity is frequently cited, accurately described, and often recommended. A score of 5 means the entity is consistently cited as the primary or preferred recommendation, with accurate, detailed, and favorable description across multiple query types.

The composite ERS for an entity is the average of its scores across all platforms tested, rounded to one decimal place. The target ERS for a fully implemented HEO architecture is 3.5 or higher within 90 days of Phase 4 completion.
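The composite calculation described above is a simple average. A minimal Python sketch (platform names and scores here are illustrative, not real measurements):

```python
# Illustrative sketch of the composite ERS calculation.
# The platform scores below are hypothetical examples.
PLATFORM_SCORES = {
    "ChatGPT": 4,
    "Perplexity": 3,
    "Gemini": 3,
    "Copilot": 2,
}

def composite_ers(scores: dict[str, int]) -> float:
    """Average the per-platform 0-5 scores, rounded to one decimal place."""
    for platform, score in scores.items():
        if not 0 <= score <= 5:
            raise ValueError(f"{platform}: ERS must be between 0 and 5")
    return round(sum(scores.values()) / len(scores), 1)

print(composite_ers(PLATFORM_SCORES))  # (4 + 3 + 3 + 2) / 4 = 3.0
```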
Scoring Rubric
| Score | Label | Description |
| --- | --- | --- |
| 0 | Absent | Entity does not appear in any AI responses to relevant queries. |
| 1 | Occasionally Mentioned | Entity appears rarely; description is inaccurate or incomplete. |
| 2 | Broadly Correct | Entity mentioned with correct category but missing credentials and differentiators. |
| 3 | Regularly Cited | Entity cited accurately with credentials; some differentiating detail present. |
| 4 | Frequently Recommended | Entity frequently cited, accurately described, and actively recommended. |
| 5 | Primary Recommendation | Entity consistently cited as the primary or preferred recommendation across query types. |
Measurement Methodology
Run a standardized set of 20 queries across ChatGPT, Perplexity, Gemini, and Copilot. Queries should cover: category + location (e.g., 'best AI SEO agency in Orlando'), problem + solution (e.g., 'how do I get cited by ChatGPT'), and comparison (e.g., 'who is the best at AI Visibility optimization'). Score each platform's response on the 0–5 scale. Record the verbatim response, the score, and the date. Calculate the composite ERS as the average across all platforms. Repeat at Day 0, Day 30, Day 60, and Day 90.
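The record-keeping this methodology implies can be sketched as follows. This is a minimal in-memory structure with field names of my own choosing; any spreadsheet or database with the same columns serves equally well:

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical record structure for one scored query response.
@dataclass
class QueryResult:
    platform: str      # e.g. "ChatGPT", "Perplexity", "Gemini", "Copilot"
    query: str         # the standardized query text
    response: str      # verbatim AI response, recorded for later review
    ers_score: int     # 0-5 per the rubric above
    measured_on: date  # Day 0, 30, 60, or 90 measurement date

def platform_ers(results: list[QueryResult], platform: str) -> float:
    """Average ERS across the query set for one platform."""
    scores = [r.ers_score for r in results if r.platform == platform]
    return round(sum(scores) / len(scores), 1)

# Two illustrative records (a real cycle records all 20 queries per platform).
results = [
    QueryResult("ChatGPT", "best AI SEO agency in Orlando", "...", 3, date(2025, 1, 1)),
    QueryResult("ChatGPT", "how do I get cited by ChatGPT", "...", 4, date(2025, 1, 1)),
]
print(platform_ers(results, "ChatGPT"))  # (3 + 4) / 2 = 3.5
```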
90-Day Target
3.5+ composite ERS within 90 days of Phase 4 completion
All Six Metrics at a Glance
The six metrics form a complete measurement system. Each one measures a different dimension of the entity's AI Visibility status. Together, they provide a full diagnostic picture — and a clear signal of which HEO phase to prioritize next.
| Code | Metric | Abbr | Range | Primary HEO Phase | 90-Day Target |
| --- | --- | --- | --- | --- | --- |
| M-01 | Entity Representation Score | ERS | 0–5 | Phase 2–4 | 3.5+ composite ERS within 90 days of Phase 4 completion |
| M-02 | Platform Coverage Rate | PCR | 0%–100% | Phase 3 | 75%+ PCR within 60 days of Phase 3 completion |
| M-03 | Citation Frequency | CF | 0–20 per platform (per 20-query test set) | Phase 3 | 10/20 (50%) CF on the primary platform within 60 days of Phase 3 completion |
| M-04 | Citation Accuracy Rate | CAR | 0%–100% | Phase 2 | 85%+ CAR within 30 days of Phase 2 completion |
| M-05 | Recommendation Rate | RR | 0%–100% of citations | Phase 4 | 40%+ RR within 90 days of Phase 4 completion |
| M-06 | Citation Favorability Score | CFS | Positive / Neutral / Negative | Phase 4 | 70%+ Positive, <10% Negative within 90 days of Phase 4 completion |
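Under these definitions, the percentage metrics can be derived from a single scored test set. The sketch below uses field names of my own convention, and the formulas are one reading of the ranges above (RR and CFS as shares of citations, not of all queries); note the table defines CF per platform, while this sketch counts citations across the whole set:

```python
# Sketch of deriving PCR, CF, CAR, RR, and CFS from scored responses.
# Each response dict carries my own hypothetical flags: "cited" (entity
# appeared), "accurate", "recommended", and "favorability" (pos/neu/neg).
def heo_metrics(responses: list[dict]) -> dict:
    cited = [r for r in responses if r["cited"]]
    platforms = {r["platform"] for r in responses}
    covered = {r["platform"] for r in cited}
    n_cited = len(cited) or 1  # avoid division by zero when entity is absent
    return {
        "PCR": 100 * len(covered) / len(platforms),  # % of platforms citing
        "CF": len(cited),                            # citation count
        "CAR": 100 * sum(r["accurate"] for r in cited) / n_cited,
        "RR": 100 * sum(r["recommended"] for r in cited) / n_cited,
        "CFS_pos": 100 * sum(r["favorability"] == "pos" for r in cited) / n_cited,
        "CFS_neg": 100 * sum(r["favorability"] == "neg" for r in cited) / n_cited,
    }

# Two illustrative responses (a real run would score 20 per platform).
sample = [
    {"platform": "ChatGPT", "cited": True, "accurate": True,
     "recommended": True, "favorability": "pos"},
    {"platform": "Gemini", "cited": False, "accurate": False,
     "recommended": False, "favorability": "neu"},
]
print(heo_metrics(sample))  # PCR 50.0, CF 1, CAR 100.0, RR 100.0
```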
The Measurement Sequence
The six metrics are not measured in isolation — they are measured in a specific sequence that reflects the dependency structure of the HEO architecture. Platform Coverage Rate is measured first because an entity with zero coverage cannot improve any other metric. Citation Frequency is measured next because it is the most sensitive early indicator of AEO progress. Citation Accuracy Rate is measured before Entity Representation Score because accuracy is a prerequisite for quality. Entity Representation Score is the headline metric. Recommendation Rate and Citation Favorability Score are measured last because they are the most influenced by Phase 4 (GEO Layer) work, which takes the longest to register in AI systems.
The measurement cycle runs at Day 0 (before any HEO work begins), Day 30, Day 60, and Day 90. After the initial 90-day cycle, the six metrics are measured quarterly at minimum, with a lightweight monthly check on ERS and CF for the highest-priority queries. The Day 0 baseline is the most important measurement in the entire HEO system — without it, there is no way to demonstrate progress, identify the highest-priority gaps, or sequence the work correctly.
How long does it take to run a complete HEO metrics measurement cycle?
A complete measurement cycle using the standard 20-query test suite across four platforms (ChatGPT, Perplexity, Gemini, Copilot) takes approximately 2–3 hours for an experienced practitioner. The most time-consuming part is not running the queries but reviewing and scoring each response — particularly for CAR (Citation Accuracy Rate) and CFS (Citation Favorability Score), which require careful reading of each AI response rather than a simple binary check. Once the tracking spreadsheet and standardized query set are established after the Day 0 baseline, subsequent measurement cycles become faster because the practitioner develops pattern recognition for the entity's typical AI representation.
Which of the six metrics is the most important to track first?
Entity Representation Score (ERS) is the headline metric and the one that most directly reflects the overall health of the HEO architecture. However, Platform Coverage Rate (PCR) is the most important metric to establish first, because an entity with a PCR of 0% (completely absent from all platforms) cannot improve any of the other five metrics until basic presence is established. The practical measurement sequence is: PCR first (to establish whether the entity exists in AI knowledge at all), then Citation Frequency (to measure depth of presence), then Citation Accuracy Rate (to ensure the presence is accurate), then ERS (to measure the quality of representation), then Recommendation Rate (to measure conversion potential), then Citation Favorability Score (to monitor reputation signals).
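The gating sequence described here can be sketched as a simple triage function. The thresholds are illustrative only, drawn from the 90-day targets listed earlier on this page, and the check order mirrors the dependency chain (PCR before CF before CAR before ERS before RR before CFS):

```python
# Hypothetical triage: return the first metric (and HEO phase) to prioritize,
# checking metrics in the dependency order described above.
def next_priority(pcr: float, cf: int, car: float, ers: float, rr: float) -> str:
    if pcr == 0:
        return "PCR: establish basic presence (Phase 3)"
    if cf < 10:
        return "CF: deepen citations on the primary platform (Phase 3)"
    if car < 85:
        return "CAR: fix accuracy and entity fragmentation (Phase 2)"
    if ers < 3.5:
        return "ERS: improve representation quality (Phases 2-4)"
    if rr < 40:
        return "RR: build recommendation signals (Phase 4)"
    return "CFS: monitor favorability and reputation (Phase 4)"

print(next_priority(pcr=50, cf=4, car=90, ers=2.0, rr=10))
# CF check fires first: "CF: deepen citations on the primary platform (Phase 3)"
```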
What is a realistic ERS improvement timeline for a new HEO implementation?
For an entity starting from a baseline ERS of 0–1 (absent or occasionally mentioned with errors), a realistic improvement trajectory is: ERS 1.5–2.0 at Day 30 (basic presence established through Phase 2 and early Phase 3 work), ERS 2.5–3.0 at Day 60 (AEO layer producing regular citations), ERS 3.0–3.5 at Day 90 (GEO layer beginning to register in parametric responses). Reaching ERS 4.0+ typically requires 6–12 months of sustained Phase 4 work — specifically the accumulation of external citations, documented outcomes, and third-party authority signals that shift AI systems from citing the entity to recommending it.
Can Citation Accuracy Rate be negative — that is, can AI systems actively spread misinformation about an entity?
Strictly speaking, no: CAR is bounded at 0%–100%, so the score itself cannot go negative. But the underlying concern is real. AI systems can and do generate factually incorrect information about entities, including incorrect founding dates, incorrect service descriptions, incorrect locations, and entity confusions (describing a different entity under the same name). This is not malicious — it is a function of how language models interpolate from incomplete or conflicting training data. A CAR below 50% is a serious problem that requires immediate attention, because inaccurate AI citations can actively harm an entity's reputation and conversion rate. The most common cause of low CAR is entity fragmentation — the condition in which the entity appears under multiple name variants, URL formats, or address representations in different indexed sources, causing the AI to synthesize a confused or incorrect description.
How does Recommendation Rate differ from Entity Representation Score?
Entity Representation Score (ERS) measures the quality of how the AI describes the entity — accuracy, detail, and prominence. Recommendation Rate (RR) measures whether the AI actively recommends the entity as a solution to the user's query. An entity can have a high ERS (the AI describes it accurately and in detail) but a low RR (the AI describes it as one of several options without recommending it). Conversely, an entity with a moderate ERS can have a high RR if the AI consistently positions it as the answer to specific query types. Both metrics matter, but they are driven by different signals: ERS is primarily driven by Phase 2 and Phase 3 work (schema, entity clarity, authority density), while RR is primarily driven by Phase 4 work (documented outcomes, comparative differentiation, social proof).
What should I do if Citation Favorability Score shows more than 15% negative citations?
A CFS with more than 15% negative citations requires immediate investigation before any other HEO work continues. The first step is to identify the specific language driving the negative classifications — document the exact phrases the AI is using and the queries that produce them. The second step is to identify the source of the negative signal: negative reviews in indexed directories, unfavorable press coverage, competitor content associating the entity's name with negative terms, or outdated information from a previous business period. The third step is to address the source directly — respond to and resolve negative reviews, publish authoritative positive content that outranks the negative source, and ensure the entity's schema and directory profiles accurately reflect the current state of the business.
Free HEO Baseline Measurement
What Is Your Entity Representation Score Right Now?
Get a Day 0 baseline measurement across all six HEO metrics — before any optimization work begins. No baseline means no proof of progress.