HEO Case Study · 90-Day Engagement · Professional Services · Florida Market · Six-Metric Analysis

From AI-Invisible to AI-Recommended
in 90 Days.

A documented HEO engagement for a professional services firm in a competitive Florida market. Starting from a fragmented entity with a 40% Citation Accuracy Rate and zero AI recommendations, the client achieved all six 90-day HEO metric targets — including 100% Platform Coverage Rate and a 47% Recommendation Rate — through the five-phase NinjaAI HEO methodology.

ERS 3.8 (from 0.8) — Entity Score
PCR 100% (from 25%) — Platform Coverage
CF 15/20 (from 2/20) — Citation Freq.
CAR 94% (from 40%) — Accuracy Rate
RR 47% (from 0%) — Recommend Rate
CFS 76%+ (from 20%+) — Favorability

All metrics measured using the NinjaAI HEO standard test suite: 20 queries per platform, 4 platforms (ChatGPT, Perplexity, Gemini, Copilot), measured at Day 0, Day 30, Day 60, and Day 90. Client identity withheld at client request. Industry: Professional Services, Florida.

The Client · Starting Conditions

A Competitive Local Market. A Fragmented Entity. Zero AI Recommendations.

The client is an established professional services firm with over a decade of operation in a competitive Florida market. The firm had invested consistently in traditional SEO — maintaining a well-structured website, a Google Business Profile, and a modest content program — and ranked well in traditional search results for its primary service category. By every conventional digital marketing metric, the firm was performing adequately.

The problem emerged when the firm's principals began noticing that competitors — some with weaker traditional SEO profiles — were being named and recommended by ChatGPT and Perplexity when potential clients asked AI systems for recommendations in the firm's service category. The firm itself was absent from these responses entirely, or appeared with incorrect information when it did appear.

An initial NinjaAI AI Visibility Audit revealed the root cause: the firm had accumulated three distinct business name variants across 40+ directory listings over its decade of operation — a legacy DBA, a shortened colloquial name, and the current legal name. AI systems had no coherent model of the entity and were either omitting it entirely or citing it with the wrong name, wrong location, or wrong service category. The entity clarity problem was suppressing every other AI Visibility signal the firm had built.

Baseline Audit Findings

Business name variants found: 3 distinct variants
Directory listings audited: 40+ platforms
NAP conflicts identified: 27 conflicting entries
Structured schema on website: None
AI citations in 80-query test: 2 citations (both inaccurate)
AI recommendations: 0
Negative AI citations: 40% of all citations
Competitor AI citations (same category): 34 citations across 4 platforms

Six-Metric Analysis · 90-Day Progression

Every Target Met. Every Metric Documented.

Each metric's full progression data, measurement methodology, and the specific HEO actions that drove its improvement are documented below.

M-01 · ERS

Entity Representation Score

✓ Target Met · +375%
Baseline (Day 0): 0.8 / 5
Day 30: 1.9 / 5
Day 60: 3.1 / 5
Day 90: 3.8 / 5
90-Day Target: 3.5+

The client began with a fragmented entity — three different business name variants across 40+ platforms, no structured schema, and a Google Business Profile that contradicted the website's NAP data. AI systems had no coherent model of the entity and consistently omitted it from generated responses. By Day 90, the entity was consistently cited by name across ChatGPT, Perplexity, and Google AI Overviews for the client's primary service category.

View Full ERS Definition & Scoring Rubric →

The Five-Phase Engagement · Action Attribution

What Was Done, When, and Why It Worked.

Every metric improvement in this engagement is attributable to a specific set of actions executed in a specific phase. The following section documents the actions taken in each phase and the metric outcomes they produced.

01

Entity Audit

Days 1–14
  • Conducted a full AI Visibility Audit across ChatGPT, Perplexity, Gemini, and Copilot — 20 queries per platform, 80 total test queries
  • Documented all 40+ directory listings, identifying three distinct business name variants and two conflicting address formats
  • Established baseline scores for all six HEO metrics
  • Mapped the four primary retrieval pathways AI systems were using to reach the client's entity
  • Identified the three negative review aggregations driving the 40% Negative CFS baseline
  • Produced a prioritized remediation roadmap with 47 action items across five phases
Phase Outcome

Complete entity map with baseline metrics, conflict inventory, and prioritized action plan delivered to client.

02

SEO Foundation Build

Days 15–30
  • Standardized business name, address, and phone number across all 40+ directory listings to a single canonical NAP format
  • Implemented Organization, LocalBusiness, and Person JSON-LD schema on all primary website pages
  • Built a canonical entity definition page (500 words, DefinedTerm schema) establishing the business name, category, service area, and founding date
  • Added BreadcrumbList schema to all 12 website pages
  • Submitted updated sitemap.xml to Google Search Console and Bing Webmaster Tools
  • Resolved the Google Business Profile NAP conflict — updated to match canonical website data
  • Added SpeakableSpecification to the homepage and About page targeting the entity definition and service description sections
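The schema work in this phase can be sketched as a single JSON-LD block embedded in the site's pages. The client's identity is withheld, so every name, address, phone number, and URL below is a placeholder; the structure follows the schema.org LocalBusiness vocabulary, with the canonical NAP data expressed once and referenced everywhere.

```json
{
  "@context": "https://schema.org",
  "@type": "LocalBusiness",
  "@id": "https://www.example-firm.com/#organization",
  "name": "Example Firm LLC",
  "url": "https://www.example-firm.com/",
  "telephone": "+1-555-000-0000",
  "address": {
    "@type": "PostalAddress",
    "streetAddress": "123 Main St",
    "addressLocality": "Tampa",
    "addressRegion": "FL",
    "postalCode": "33600",
    "addressCountry": "US"
  },
  "foundingDate": "2012",
  "sameAs": [
    "https://www.google.com/maps/place/example-firm",
    "https://www.yelp.com/biz/example-firm"
  ]
}
```

The `sameAs` array is the mechanism that ties the 40+ directory listings back to a single entity: each standardized listing URL is declared as an alias of the canonical `@id`.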
Phase Outcome

CAR improved from 40% to 72% within 30 days. ERS moved from 0.8 to 1.9. PCR reached 50%.

03

AEO Layer Build

Days 31–60
  • Rewrote the homepage with six AEO-optimized definition blocks — each structured as a quotable, machine-extractable statement of 40–60 words
  • Built a dedicated FAQ page with 18 questions covering the client's primary service category, each answer structured as a standalone extractable unit
  • Added FAQPage schema to the FAQ page and to four high-traffic service pages
  • Published three long-form service pages (2,500 words each) with HowTo schema, step-by-step process descriptions, and embedded FAQ sections
  • Built a Glossary page defining 12 industry-specific terms with DefinedTermSet and DefinedTerm schema
  • Implemented Article schema with author Person entity on all three new long-form pages
  • Distributed the canonical entity definition to six industry directories and two local business citation networks
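The FAQ architecture in this phase pairs each visible question-and-answer unit with FAQPage markup. Below is a minimal sketch with one placeholder entry; the actual page carried 18 questions, each answer written as a standalone 40–60 word extractable unit.

```json
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [
    {
      "@type": "Question",
      "name": "What services does the firm provide?",
      "acceptedAnswer": {
        "@type": "Answer",
        "text": "A standalone, quotable answer of roughly 40 to 60 words that names the entity, the service category, and the service area, and that reads correctly when extracted from the page with no surrounding context."
      }
    }
  ]
}
```

Each additional question is appended as another object in the `mainEntity` array; the answer text in the markup should match the visible on-page answer verbatim.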
Phase Outcome

CF jumped from 2/20 to 11/20 — crossing the 50% threshold. PCR reached 75%. ERS reached 3.1.

04

GEO Layer Build

Days 61–80
  • Published three detailed case narratives documenting specific client outcomes with named metrics, timelines, and before/after comparisons
  • Added structured outcome data to the homepage — a statistics block with three verified performance numbers and source citations
  • Built a cross-platform testimonial footprint: 14 new verified reviews across Google Business Profile, Yelp, and two industry directories
  • Published a 'Why Us' comparison page documenting five specific differentiators with supporting evidence for each
  • Added Review and AggregateRating schema to the homepage and service pages
  • Distributed the three case narratives to two industry publications as guest posts, generating three external citations pointing to the canonical case study URLs
  • Built an llms.txt file indexing 34 canonical URLs across definitions, services, case studies, and FAQ pages
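The llms.txt file follows the emerging llms.txt convention: a Markdown index at the site root listing canonical URLs for AI crawlers. The sketch below uses placeholder URLs and shows the section structure only; the actual file indexed 34 canonical URLs.

```markdown
# Example Firm

> Professional services firm, Florida. Canonical entity definition,
> service pages, case studies, and FAQ content indexed below.

## Definitions
- [Entity Definition](https://www.example-firm.com/about): canonical 500-word entity definition

## Services
- [Primary Service](https://www.example-firm.com/services/primary): long-form service page

## Case Studies
- [Client Outcome](https://www.example-firm.com/case-studies/outcome-1): documented result with metrics

## FAQ
- [FAQ](https://www.example-firm.com/faq): 18 questions on the primary service category
```

Grouping URLs under labeled sections gives retrieval systems a compact, categorized map of the site's citable content.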
Phase Outcome

RR moved from 17% to 33%. CFS Positive share reached 62%. ERS continued climbing from its Day-60 score of 3.1 toward the Day-90 result of 3.8.

05

Measurement & Iteration

Days 81–90
  • Ran the full 80-query test suite across all four platforms to establish Day 90 baseline scores
  • Identified two remaining citation accuracy issues — one platform was still using the legacy DBA name in 6% of citations
  • Published a targeted correction page with explicit entity disambiguation language and cross-references to the canonical entity definition
  • Updated llms.txt with three new case study URLs and two new FAQ page entries
  • Delivered a 90-day HEO Performance Report documenting all six metric progressions with annotated timelines
  • Established a monthly monitoring cadence: 20-query spot-check per platform, monthly CFS sentiment review, quarterly full audit
Phase Outcome

All six HEO metrics met or exceeded 90-day targets. ERS: 3.8 (target 3.5+). PCR: 100% (target 75%+). CF: 15/20 (target 10/20+). CAR: 94% (target 85%+). RR: 47% (target 40%+). CFS Positive: 76% (target 70%+).

Key Findings · What This Engagement Proved

Five Findings That Apply to Every HEO Engagement.

01

Entity clarity is the prerequisite for everything else.

The single highest-impact action in this engagement was NAP standardization — resolving three conflicting business name variants across 40+ directories. This single intervention drove CAR from 40% to 72% in 30 days and eliminated the confusion-based Negative citations that were suppressing CFS. No amount of content, schema, or authority work will produce accurate AI citations if the entity itself is ambiguous.

02

AEO content architecture is the fastest path to Citation Frequency gains.

CF improved from 2/20 to 11/20 in 60 days — a 450% increase — driven almost entirely by the AEO content work in Phase 3. The client had strong topical authority but had never structured their content for AI extraction. Once the FAQ architecture, definition blocks, and quotable statements were in place, AI systems had extractable content to cite and CF responded within weeks.

03

Recommendation Rate requires documented specific outcomes.

RR was 0% at baseline and 17% at Day 30 — despite significant entity and content improvements. The breakthrough came in Phase 4 when three detailed case narratives with named metrics were published. AI systems consistently cite specific, documented outcomes when making recommendations. Generic service descriptions, no matter how well-structured, do not drive Recommendation Rate.

04

Reputation signals affect AI citations differently than traditional SEO.

The 40% Negative CFS baseline was partly driven by review aggregations that AI systems were pulling from Yelp and Google — but the mechanism was different from traditional SEO. AI systems were not ranking the client lower because of negative reviews; they were generating negative framing in their descriptions of the client. The fix required both review generation and structured review response architecture, not just star rating improvement.

05

Platform Coverage Rate is the leading indicator of HEO progress.

PCR moved from 25% to 75% by Day 60 — before ERS, RR, or CFS had reached their targets. This is consistent with the HEO model: PCR measures the first threshold (is the entity present at all?) and tends to respond first to entity clarity and AEO content work. Practitioners should monitor PCR weekly in the first 60 days as the primary leading indicator of whether the HEO foundation is working.

Frequently Asked Questions

Questions About This Case Study

HEO · Full Resource Cluster

Definition · What Is HEO?
The canonical 2,500-word definition of Hybrid Engine Optimization — coined by Jason Todd Wade.
ninjaai.com/heo

Implementation · HEO Checklist
Five-phase, 47-checkpoint implementation sequence from Entity Audit through Measurement.
ninjaai.com/heo-implementation-checklist

Measurement · HEO Metrics Tracker
The six core HEO metrics — ERS, PCR, CF, CAR, RR, CFS — with scoring rubrics and 90-day targets.
ninjaai.com/heo-metrics-tracker

Case Study · This Page
90-day documented engagement: all six metric targets met. Professional services, Florida market.
ninjaai.com/heo-case-study