Periscope, launching soon
Who is ChatGPT recommending to your customers instead of you?
One founder I audited got 3 mentions across 47 buyer-intent prompts. A competitor they had never heard of got 19. Periscope shows you the gap across ChatGPT, Claude, Perplexity, and Gemini, and the fixes most likely to close it.
Get notified when audits open up
No spam. One email when Periscope is ready, and occasional notes on what I learn from the early runs.
What you get
- A visibility matrix. Your brand vs. the named competitors, scored across ChatGPT, Claude, Perplexity, and Gemini.
- The competitor surface. The exact products being recommended in the slot you want, including ones you have never heard of.
- A prioritized fix list. Ranked by effort: 15 minutes, one day, one week. Concrete, not generic SEO advice.
Delivered as a single Markdown or PDF report within two business days of URL submission.
How it works
1. You submit a URL. Your landing page or product page.
2. Periscope generates 30 to 50 buyer-intent prompts. Spanning discovery, comparison, and evaluation stages.
3. Every prompt runs across all four LLMs. ChatGPT, Claude, Perplexity, and Gemini.
4. Mentions are tallied per brand. Yours, your competitors', and anyone else showing up.
5. The site is inspected against the most likely causes. Positioning, schema, content coverage, third-party mentions.
6. Fixes are ranked by effort against expected lift.
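For the technically curious, the core tally in steps 3 and 4 can be sketched in a few lines of Python. This is an illustrative sketch, not Periscope's actual implementation: `ask` stands in for a real LLM API call, and the brand matching here is naive substring matching.

```python
from collections import defaultdict

# The four LLMs audited.
MODELS = ["chatgpt", "claude", "perplexity", "gemini"]

def run_audit(prompts, ask, brands):
    """Tally brand mentions across every prompt x model combination.

    `ask(model, prompt)` is a placeholder for a real LLM call that
    returns the model's answer as text.
    """
    tally = defaultdict(int)
    total = 0
    for prompt in prompts:
        for model in MODELS:
            answer = ask(model, prompt).lower()
            total += 1
            for brand in brands:
                if brand.lower() in answer:
                    tally[brand] += 1
    # Visibility = share of prompt-model combinations mentioning the brand.
    return {b: tally[b] / total for b in brands}, total
```

With 2 prompts and 4 models this runs 8 combinations; a brand named in every answer scores 1.0, one never named scores 0.0. The real scoring also has to handle aliases, misspellings, and partial matches, which is where naive substring checks break down.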
Early findings
From beta runs against two anonymized indie SaaS products. Small sample, but the patterns are consistent.
- An indie SaaS product 4 days post-launch: 0 mentions out of 115 prompt-LLM combinations. Cold start is real and measurable.
- A Shopify-adjacent tool with active users: 6 mentions out of 160 prompt-LLM combinations (3.75%). The audit also surfaced a positioning gap between the landing page and the marketplace listing that was invisible to the founder before the run.
- Cross-LLM brand variance: the same prompt to ChatGPT, Claude, Perplexity, and Gemini returns different brand sets. There is no single "AI search" channel to optimize for.
- Consumer vs. API divergence: the API answer for a buyer-intent prompt can list completely different brands than the consumer chat UI for the same prompt, because the consumer UI may pull live web search.
What I have learned about LLM visibility
- LLMs return different brand sets for the same prompt. You cannot optimize for "AI search" as a single channel.
- Vendor-published comparison content (the classic "X vs. Y vs. Z" listicle on the vendor's own blog) is currently the most reliable indexing mechanism.
- API outputs and consumer chat outputs diverge. Both need tracking if you care about being found.
- Effects of fixes take weeks to surface, which shapes how reporting cadence and re-runs should be priced.
AEO and GEO, briefly
- AEO (Answer Engine Optimization): making your product the answer when a user asks a buyer-intent question to an LLM, instead of just ranking on a search results page.
- GEO (Generative Engine Optimization): the same idea with broader framing, shaping what generative models say about your category and your product.
Want one when audits open up?
Drop your email. I will send one note when Periscope is ready.
About
I am Andrei. I build small, focused tools for SaaS founders and run experiments in public at surdu.de. Periscope is one of them.