0.0% Isn’t the Problem - It’s the Signal

This was an interesting situation yesterday, and it's already evolving today.
I came across a tool I was actually excited about: clean, credible, clearly aimed at solving a real problem. Same space, AI visibility tracking; same ambition I had when I built one myself. And I've been down that road. I know how hard it is to even get close to something that resembles truth in this category.
So I ran their domain.
On their own site.
On the most obvious query possible:
“What is Hatter?”
0.0% visibility.
0.0% mentions.
Every query: Poor / Low / 0%.
Well, this is an interesting situation…
This isn’t a dunk (well...kinda;).
It’s something I recognize immediately.
I've built this exact system. I've tried it from every angle: APIs, prompt testing, multi-model runs, stitching together outputs from systems like Perplexity, trying to force signal out of noise.
And every time, you run into the same wall:
It doesn’t fully work yet.
Not in a way that cleanly maps to reality.
And that’s okay.
That’s the part people need to say out loud.
This is a hard problem. A genuinely hard problem. We are trying to measure a probabilistic system that doesn’t give stable outputs, doesn’t give rankings, and doesn’t expose its internal state.
We're basically opening the parachute while we're already falling off the cliff. AI, SEO, GEO, AEO, whatever acronym you want to use: none of this has settled yet.
So yeah-tools are going to show 0%.
That doesn’t mean they’re useless. It means:
we don’t have a clean measurement layer yet.
And for a while, no one did.
But here’s the part that changed.
I went back and looked at what tools like Gumshoe are doing now, and it's getting better.
Not perfect. Not solved. But better.
They’re not pretending there’s a single “truth score.”
They’re simulating:
personas
prompts
multiple model families
competitive outputs
source-level patterns
They ran dozens of conversations across models and showed exactly what happened:
Hatter:
0% visibility
Competitors:
showing up consistently
That’s not fake precision.
That’s structured sampling of a probabilistic system.
That’s progress.
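To make "structured sampling" concrete, here is a minimal sketch of the idea: ask the same question many times, across personas and model families, and count how often a brand is actually mentioned. Everything here is illustrative, not any specific tool's implementation; the model names, personas, and the `ask_model` stub are placeholders for real API calls.

```python
import random  # stand-in randomness for the placeholder model call

# Hypothetical sampling grid. A real tool would vary these far more.
PERSONAS = ["startup founder", "marketing lead", "casual researcher"]
MODELS = ["model-a", "model-b", "model-c"]  # placeholder model families
RUNS_PER_CELL = 10

def ask_model(model: str, persona: str, prompt: str) -> str:
    """Placeholder for a real model API call; returns a simulated answer."""
    # In a real system this would send the prompt (framed for the persona)
    # to the model and return the generated text.
    candidates = [
        "CompetitorX is a popular choice for AI visibility tracking...",
        "You might look at CompetitorY, which tracks brand mentions...",
    ]
    return random.choice(candidates)

def visibility(brand: str, prompt: str) -> float:
    """Fraction of sampled answers that mention the brand at all."""
    hits = total = 0
    for model in MODELS:
        for persona in PERSONAS:
            for _ in range(RUNS_PER_CELL):
                answer = ask_model(model, persona, prompt)
                hits += brand.lower() in answer.lower()
                total += 1
    return hits / total

print(f"Hatter visibility: {visibility('Hatter', 'What is Hatter?'):.1%}")
```

The point of the sketch is the shape, not the numbers: a 0.0% score is just the observed mention rate over a structured sample, which is exactly why it can be directional signal without being a single "truth score."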
So the honest state of the market right now is this:
The problem is real
The opportunity is massive
The measurement is still immature
The outputs are still unstable
But… we’re starting to get directional signal
That’s a big shift.
Now layer in the part that still makes this whole thing kind of surreal.
You've got a founder with real credentials: first marketer at Uber Eats, a strong operator background, even the fun fact that she taught Elon Musk how to kitesurf. And the product built to measure AI visibility can't surface the company when you literally ask what it is.
That’s not a knock.
That’s the signal.
Because it proves this isn’t about talent, effort, or resume lines.
You can have all of that and still not show up in AI answers consistently.
Because this isn’t a distribution game anymore.
It’s a resolution problem.
And right now, the market is split.
Half the people are saying:
“We’ve solved AI visibility.”
The other half are quietly realizing:
“We’re just starting to approximate it.”
Scroll LinkedIn for five minutes and you'll see it: AI visibility this, GEO that, "we'll get you cited," dashboards, scores, guarantees.
Come on.
We’re not fully there yet.
But we’re also not at zero anymore.
So let’s say it clean:
0.0% used to mean:
“we can’t measure this.”
Now it means:
"we measured it, and you're not showing up yet."
That’s a very different game.
Build it. Test it. Break it. Iterate.
Or don’t.
But don’t act like this is a clean, solved layer.
Because it’s not.
It’s just finally starting to become real.
Jason Wade is the founder of NinjaAI. He studies how brands are interpreted, trusted, and recommended by AI systems like ChatGPT, Gemini, and Perplexity. His work focuses on entity engineering: structuring how a business exists across the web so it can actually be recognized, cited, and synthesized inside AI-generated answers.