01 · Lens

Measure your brand inside the agents.

Lens is the observability layer for agent-driven commerce. Visibility, share of voice, sentiment, and citations: measured per platform, refreshed continuously.

[Screenshot: app.henneth.ai/lens/visibility · Henneth Lens AI visibility dashboard with score, mentions, share of voice, and competitive ranking]
01

Per-platform measurement

One score won’t do. Lens breaks visibility down by ChatGPT, Claude, Perplexity, and Gemini, because each has its own retrieval logic.

02

Citations with provenance

See which pages, posts, and reviews agents are pulling from. Defend the source, or replace it.

03

Insights, not dashboards

Daily inline insight cards surface the moments that matter: a sentiment slip, a competitive jump, a citation win.

How it’s measured

Continuous, not a one-off audit.

Lens runs a tracked prompt set against every major agent on a fixed cadence, so your visibility score is always within 24 hours of the truth, not a slide deck from last quarter.

Prompts tracked
1.2k/brand
Category, competitor, problem, and long-tail prompts tuned per vertical.
Refresh cadence
6×/day
Each prompt is re-queried across all agents every four hours.
Agents covered
4+
ChatGPT, Claude, Perplexity, Gemini. Mistral and Copilot in preview.
Citation retention
18m
Full source, prompt, and response history for every query run.
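Back of the envelope, the cadence numbers above imply a daily query volume per brand. A minimal sketch, assuming 1.2k tracked prompts, four agents, and a re-query every four hours (the actual scheduler internals are not described here):

```python
# Rough daily query volume implied by the stated cadence.
prompts_per_brand = 1200   # "1.2k/brand" tracked prompts
agents = 4                 # ChatGPT, Claude, Perplexity, Gemini
runs_per_day = 24 // 4     # re-queried every four hours -> 6 runs/day

daily_queries = prompts_per_brand * agents * runs_per_day
print(daily_queries)  # 28800
```

At that volume, 18 months of retained responses is a substantial history per brand.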
Per-platform

Four agents. Four different answers.

Each platform indexes the web differently. Lens surfaces that asymmetry so you know where to invest, and which surface needs defending this week.

ChatGPT
w · 0.38
18.4/100
▲ +2.1 pts · 14d
Rank #4 of 12
Citations 84 / week
Claude
w · 0.24
22.7/100
▲ +4.3 pts · 14d
Rank #2 of 12
Citations 58 / week
Perplexity
w · 0.22
11.9/100
▼ −1.6 pts · 14d
Rank #7 of 12
Citations 112 / week
Gemini
w · 0.16
13.4/100
– flat · 14d
Rank #6 of 12
Citations 31 / week
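The "which surface needs defending" call is, at its simplest, the platform with the worst 14-day trend. A sketch using the deltas from the cards above (the real prioritisation logic is an assumption here):

```python
# 14-day point changes per platform, from the dashboard cards above.
deltas = {"chatgpt": 2.1, "claude": 4.3, "perplexity": -1.6, "gemini": 0.0}

# The surface needing defence this week: the largest decline.
defend = min(deltas, key=deltas.get)
print(defend)  # perplexity
```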
Prompt coverage

See the exact questions your buyers are asking.

Every prompt in your tracked set (category, competitor, and long-tail), scored and ranked. Drill into any row for the full response history and source citations.

Prompt | Visibility score | Rank
best running shoes for flat feet · CATEGORY · 14 agents · 1.2K QPM | 62 | #2
sustainable basics brands in europe · CATEGORY · 14 agents · 480 QPM | 41 | #4
is defacto a good brand · BRANDED · 14 agents · 2.1K QPM | 88 | #1
alternatives to zara for basics · COMPETITIVE · 14 agents · 920 QPM | 28 | #6
oversized cotton tees under 25 euros · LONG-TAIL · 14 agents · 340 QPM | 54 | #3
best turkish fashion brands for sustainable denim · LONG-TAIL · 14 agents · 180 QPM | 73 | #2
Insights, not dashboards

The signal, not the charts.

Every morning, Lens sends a short feed of what actually changed, ranked by likely business impact, with sources attached.

Sentiment drift · 14 min ago · Perplexity

Negative sentiment on Perplexity rose 11 pts this week.

Three new Reddit threads in r/malefashionadvice are being cited as primary sources. Two reference fit inconsistency on the oversized tee line.

reddit.com/r/malefashionadvice · trustpilot.com/review/defacto · +3 sources
Rank gain · 2h ago · Claude

Moved from #5 to #2 on “sustainable basics europe”.

Your March sustainability page is now cited in 68% of Claude responses to this prompt, up from 14% two weeks ago.

defacto.com/sustainability · businessoffashion.com
Citation win · yesterday · ChatGPT

You’re now the canonical citation for denim sizing.

ChatGPT routes 84% of sizing questions to your size-guide page. The previous canonical source was zara.com; defend the position with internal links.

defacto.com/size-guide · PLAYBOOK: internal linking
Competitive move · yesterday · all agents

LC Waikiki overtook you on “affordable basics turkey”.

Their new press coverage in Daily Sabah is being cited by three of four agents. Similar response structure to your February launch.

dailysabah.com · COMPETITOR: LC Waikiki
Lens was the first tool that made AI visibility feel like a metric we could actually move. Three months in, we’re up 14 points on ChatGPT.
Manager @ DeFacto
Questions we get

FAQ.

How is the visibility score calculated?
A weighted composite across four signals per agent: response inclusion rate, mention position, citation share, and sentiment. Weights are per-platform (ChatGPT 0.38, Claude 0.24, Perplexity 0.22, Gemini 0.16) and can be tuned per brand.
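For the brand-level roll-up, the platform weights quoted above combine the four per-agent scores into one number. A minimal sketch, assuming per-agent scores on a 0–100 scale (the signal-level weights inside each agent score are not specified, so only the platform roll-up is shown):

```python
# Platform weights from the FAQ; assumed to sum to 1.0.
PLATFORM_WEIGHTS = {"chatgpt": 0.38, "claude": 0.24,
                    "perplexity": 0.22, "gemini": 0.16}

def overall_visibility(agent_scores: dict) -> float:
    """Weighted composite of per-agent visibility scores (0-100)."""
    return sum(PLATFORM_WEIGHTS[a] * s for a, s in agent_scores.items())

# Per-platform scores from the dashboard example above.
scores = {"chatgpt": 18.4, "claude": 22.7, "perplexity": 11.9, "gemini": 13.4}
print(round(overall_visibility(scores), 1))  # 17.2
```

Because the weights differ per platform, a point gained on ChatGPT moves the composite more than twice as far as a point gained on Gemini.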
Do you actually query the live agents, or just simulate?
Live. Every prompt runs against the real ChatGPT, Claude, Perplexity, and Gemini APIs, with full response bodies and cited sources stored for 18 months.
How is this different from an SEO tool?
SEO tools measure how search engines rank pages. Lens measures how AI agents cite and summarise your brand. Different retrieval, different sources (Reddit, Trustpilot, BoF, your docs), different optimisation surface.
Can I track competitors?
Yes. Add up to 20 competitors per brand. You see their scores, rank changes, and citation sources alongside yours, and get alerted when they overtake you on a tracked prompt.
What's the setup time?
Typically two days. We onboard your brand, work with your team to curate the tracked prompt set, and baseline two weeks of history before you go live.
Does Lens work for B2B / non-commerce brands?
The scoring methodology generalises. Today we focus on consumer commerce because that's where agent traffic is converting fastest, but B2B pilots are open on request.

See how agents see you.