- The Ultimate AI SEO Newsletter
- Posts
- How to appear in LLM with a fake brand
How to appear in LLM with a fake brand
LLM Visibility Challenge — De Schaut Cookies (Summary & Learnings)
Thanks for your interest! If you’re experimenting with AI SEO, LLM visibility, or just curious how a non‑existent brand can show up in answers, you’re in the right place.
➡️ Join the free community (tactics, templates, course): https://www.skool.com/ai-seo-the-2025-community-4247
Executive Summary
I ran a rapid experiment to get a brand that doesn’t exist—De Schaut Cookies (matcha cookies from Ghent, Belgium)—mentioned by LLMs. I first built a minimal web presence (purchased deschautcookies.info) with no real frontend and then seeded references on trusted third‑party platforms that LLMs often consider reliable.
Aim: Appear in LLM answers with a net‑new brand.
Approach: Minimal website focused on crawler/LLM signals → then seed credible mentions on third‑party sites (Reddit, X, YouTube).
Notable outcome: Perplexity used a single source to mention the brand—proof that one solid citation can be enough to seed visibility, despite their claim to use a complex Machine Learning model (XGBoost) for Quality Assurance.
Note: since it has no frontend, the link won’t work. Try this one instead: www.deschautcookies.info/robots.txt
Why a “No‑Frontend” Website?
Clicks are declining as answer engines summarize the web. So the goal wasn’t “pretty site → clicks,” but “credible artifact → machine visibility.”
Principles used:
Ship fast, keep it crawlable. Technical files such as jsonld, FAQ and llms.txt file.
Make the content LLM‑friendly. Clear entities (brand, product, location), structured facts, and concise claims that are easy to re‑state.
Serve the bots. Robots.txt, sitemap, and consistent naming across assets.
Repository Overview (What’s in here)
For technical readers: feel free to jump straight to the code and the commit history to see the iteration path.
Highlights you’ll find in the repo:
Project site (static): a minimal index.html (or equivalent) describing the brand, product (matcha cookies), and origin (Ghent/Gent, Belgium).
Robots directives: a robots.txt tailored to allow major AI/LLM crawlers (e.g., GPTBot, ChatGPT-User, PerplexityBot) and traditional search engines.
Sitemap: a simple sitemap.xml listing the core URLs.
Brand facts & product spec: short, unambiguous copy that repeats the core entities (brand → De Schaut Cookies; product → matcha cookies; location → Ghent/Gent, Belgium).
Assets/docs: lightweight images and a nutrition.pdf for the cookies—useful as a concrete artifact LLMs can cite.
Commit history: small, frequent commits that show how I tweaked robots, metadata, and content to tighten signals.
Where to look:
/index.html (or /docs/index.html) — primary brand statement
/robots.txt — crawler access
/sitemap.xml — URL inventory
/assets/* — images & files (e.g., nutrition.pdf)
.nojekyll — ensures GitHub Pages skips Jekyll processing and serves all files as-is.
Commits — message history that documents each small change
If you’re technical, you can inspect the raw files and the commit log to see exactly what changed at each step.
The “Robots.txt” Trigger Test (ChatGPT)
An early micro‑test: I asked ChatGPT to open robots.txt on the site.
What happened: ChatGPT could access the robots.txt, confirming the domain was reachable and crawlable.
What didn’t happen: The brand still did not appear in the generated answers yet.
Takeaway: A reachable robots.txt is a sanity check, not a ranking/visibility lever. It confirmed infrastructure; it didn’t establish credibility or notability.
Shifting Gears: Seeding Trusted Sources
LLMs lean on sources they deem reliable or socially validated. So I added third‑party signals:
Reddit: A concise post mentioning De Schaut Cookies and matcha cookies with consistent phrasing. Reddit threads are frequently surfaced or cited by LLMs.
X (Twitter): Short, factual post reiterating the brand/product/location. (Helps with entity co‑occurrence and recency.)
YouTube: A short video demo about the cookies (generated with Veo 3) and a description reiterating brand + product + location. YouTube is another high‑trust surface that many models ingest or reference.
Why these three? Together they create cross‑platform entity consistency (same brand/product/location), which raises confidence for LLMs that the concept is “real” and worth surfacing.
Results (So Far)
Perplexity eventually surfaced the brand using just one external source. Reddit! That single high‑signal citation was enough to get a mention.
Lesson: You don’t always need a massive footprint. A small, consistent, and credible footprint can be sufficient to seed LLM awareness.
Pretty crazy: one source → one mention. No fact check. That’s incredible.

What I Didn’t Do (Yet) — And Could
This was intentionally a minimal pass. There are many levers left:
Structured data: Product, Organization, and HowTo schema; JSON‑LD for nutrition facts.
Citations hub: A /press or /sources page linking out to third‑party mentions.
Author & provenance: Verified social profiles, entity home graph, and consistent publisher metadata.
Blog Articles: Become a topical authority on matcha cookies from Ghent.
More trusted surfaces: Quora, StackExchange (if relevant), Medium/Dev.to, ProductHunt “fake launch” write‑up, niche forums, local directories (Ghent food spots), LinkedIn, Google Maps mock listing (if applicable), and collaborations.
Temporal reinforcement: A posting cadence that re‑states the same entities over weeks (recency + repetition).
Content forms: Short Q&A, comparisons ("De Schaut matcha cookies vs X"), and FAQs that LLMs can easily quote.
Etc…
How to Reproduce (Quick Start)
Stand up a bare‑bones site (static HTML is fine). Make brand/product/location unambiguous.
Open the door for crawlers (robots.txt + sitemap; don’t block what you want indexed).
Create 2–3 high‑trust citations (Reddit, X, YouTube) using the same phrasing.
Verify mentions by querying LLMs with neutral prompts (e.g., “What are De Schaut Cookies?” or “Matcha cookies from Ghent called…?”).
Iterate: Tune copy, add one new trusted citation per week, and monitor.
Links
Community: https://www.skool.com/ai-seo-the-2025-community-4247
Project site: www.deschautcookies.info (since there is no frontend, try adding /robots.txt
Repository: https://github.com/cdznho/GPTChallenge
Closing Note
This was a fast, pragmatic test. The surprising truth: one credible mention can be enough to get a new brand into an LLM’s orbit. As clicks trend down, LLM‑first publishing—even with a no‑frontend site—makes sense.
Also note that:
This is easy to do with a new brand, but if you’re actually a larger brand and you’re competing with other large brands, you’ll need more AI SEO strategies (+traditional SEO, which is a long term effort)
Perplexity AI fetches up-to-date information from the web in real time, so it’s easier to see fast results.
I got mentioned in ChatGPT thanks to the Youtube video
Don’t forget to browse in incognito mode
LLMs are probabilistic, so you might get different results for the same query!
Join the community here and reach out if you need help 👋