3 months of AI optimization for zero citations: autopsy of a failure (and how to fix it)

A B2B SaaS company invested 3 months in AI visibility and got nothing. We analyzed what went wrong, identified the 5 critical mistakes, and explain how to avoid them.

Alan Schouleur
COO & CTO
22 March 2026
9 min read

A B2B SaaS company reached out to us after spending three months on AI optimization. They'd implemented llms.txt, added schema markup, published 12 "AI-optimized" articles, and set up monitoring. The result: zero new citations on ChatGPT, Perplexity, or Gemini.

They were frustrated. And they wanted to know: what went wrong?

We audited everything they'd done. The answer wasn't one big mistake -- it was five smaller ones that compounded into total invisibility. Here's the autopsy.

Mistake 1: generic content that AI can't distinguish

Their 12 articles were well-written, technically correct, and covered important topics in their industry. The problem? They read exactly like every other article on the same topic.

"5 ways to improve your project management" -- there are 47,000 articles with this exact angle. An LLM has no reason to cite your version over any other. It contains no original data, no unique perspective, no proprietary insight.

The fix: every article must contain at least one element that doesn't exist anywhere else. Your own data, your client results, your specific methodology, your contrarian opinion backed by evidence. The question to ask before publishing: "Why would AI cite THIS article instead of any other on the same topic?"

Mistake 2: schema markup with errors

They'd implemented Organization and FAQPage schemas. Good intent. But the Organization schema had a sameAs property pointing to their old LinkedIn URL, which redirected to a 404. The FAQPage schema listed questions that didn't appear on the actual page. And the Article schema on their blog posts listed "Admin" as the author, with no linked profile.

Technically, the schemas were there. Practically, they were either misleading or useless. Google's Rich Results Test showed warnings on every page. LLM crawlers likely ignored the signals entirely.

The fix: validate every schema with Google's testing tool AND manual review. Ensure sameAs links work, FAQ questions match visible content, and authors have real, linked profiles. Broken schema is worse than no schema -- it sends a negative signal.
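
To make this concrete, here is a minimal sketch of what a cleaned-up Article schema could look like as JSON-LD. Every name, URL, and identifier below is a placeholder for illustration, not the client's actual markup:

```json
{
  "@context": "https://schema.org",
  "@type": "Article",
  "headline": "How creative agencies choose a PM tool: data from 400 users",
  "datePublished": "2026-03-22",
  "author": {
    "@type": "Person",
    "name": "Jane Doe",
    "jobTitle": "Head of Product",
    "sameAs": "https://www.linkedin.com/in/janedoe"
  },
  "publisher": {
    "@type": "Organization",
    "name": "Example SaaS",
    "sameAs": [
      "https://www.linkedin.com/company/example-saas",
      "https://twitter.com/examplesaas"
    ]
  }
}
```

The details are the point: the author is a named person with a live profile URL, not "Admin," and every sameAs link must resolve. If that LinkedIn URL ever redirects to a 404, the markup becomes exactly the negative signal described above.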

Mistake 3: zero off-site presence

This was the biggest issue. In three months, they hadn't published a single guest article, participated in any Reddit discussion, obtained any press mention, or updated their review profiles. Their entire strategy was on-site.

On-site optimization is necessary but not sufficient. LLMs build trust through multi-source validation. If only your website says you're good at what you do, the AI has one source. If your website, three press articles, a Reddit thread, a Clutch profile, and a Wikidata entry all say the same thing, the AI has six sources. The difference in citation probability is enormous.

The fix: allocate at least 50% of your AI visibility effort to off-site activities. Guest posts, digital PR, community participation, review solicitation. On-site is the foundation; off-site is what gets you cited.

Mistake 4: wrong prompts to monitor

They were testing prompts like "What does [company name] do?" and "Is [company name] good?" These are vanity prompts. Nobody asks ChatGPT about a company they've never heard of.

The prompts that matter are the ones your prospects actually ask: "Best project management tool for agencies under 50 people," "Alternative to Monday.com for European teams," "How to choose a PM tool for creative agencies." These are the prompts where AI recommends products -- and where your company needs to appear.

The fix: build your prompt universe from customer research, not brand ego. Ask your sales team: "What questions do prospects ask before they buy?" Those are your target prompts.
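
As an illustration of what monitoring the right prompts might look like, here is a short Python sketch using the official OpenAI client. The model name, prompt list, and brand string are assumptions for the example, not part of the client's actual setup:

```python
# Hypothetical prompt-universe check: ask the model each buyer-intent
# prompt and record whether the brand is mentioned in the answer.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

TARGET_PROMPTS = [  # example prompts -- replace with your own universe
    "Best project management tool for agencies under 50 people",
    "Alternative to Monday.com for European teams",
    "How to choose a PM tool for creative agencies",
]
BRAND = "ExampleTool"  # placeholder brand name

for prompt in TARGET_PROMPTS:
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    answer = response.choices[0].message.content or ""
    status = "CITED" if BRAND.lower() in answer.lower() else "absent"
    print(f"{status:>6} | {prompt}")
```

A single run is noisy; averaging a few runs per prompt on a bi-weekly schedule gives you the trend line rather than the daily fluctuations covered in the next mistake.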

Mistake 5: impatience and inconsistency

They published 12 articles in the first month, then stopped. They added schema in week 2, then never checked whether it was working. They set up monitoring but checked it daily (where natural fluctuations create anxiety) rather than bi-weekly (where real trends emerge).

AI visibility is a compounding game. The work you do in month 1 may not show results until month 4. But only if you keep building in months 2 and 3. Stopping after a burst of activity is like planting seeds and pulling them up after a week to check if they're growing.

The fix: commit to a 6-month plan with consistent weekly activities. 1-2 articles per month (not 12 in month one then zero). Regular off-site contributions. Bi-weekly monitoring. Monthly strategy reviews. Consistency beats intensity every time in AI visibility.

The turnaround

After our audit, the company restarted with the corrected approach. In the following 4 months:

They published 8 articles with original data from their user base.
They fixed all schema errors and added Author schemas with real LinkedIn profiles.
They secured 4 guest posts in industry publications and 2 podcast appearances.
They actively participated in 3 relevant subreddits.
They grew their G2 profile from 8 reviews to 45.

The result: from 0/20 target prompts to 6/20 on Perplexity, 3/20 on ChatGPT, and 4/20 on Gemini. Not yet where they want to be, but a clear trajectory. And critically, two inbound leads that explicitly mentioned AI recommendations as their discovery channel.

AI visibility mistakes are fixable. But the sooner you identify them, the less time you waste. If you've been investing in AI optimization without seeing results, the answer is almost always in one of these five areas.

Alan Schouleur
COO & CTO

Co-founder and COO of AISOS. A technical specialist in SEO and AI visibility, he builds the optimization tools and methodologies for answer engines.