Stop Ranking. Start Getting Cited.
AI Overviews didn't kill search traffic — they redirected it toward whichever sources the model cites. The brands winning the zero-click SERP shifted their goal from ranking #1 to becoming the cited source. Here's the playbook.
Ranking #1 no longer wins the click — AI Overviews now sit above the blue links and route attention to whichever three to seven sources the model decides to cite. The metric that matters has shifted from rank position to citation share: whether your page is one of those cited sources across Google AIO, Perplexity, ChatGPT, and Gemini. If your KPI is still "rank #1," you're measuring a game that no longer exists at the top of the page.
What AI Overviews actually changed
Google rolled AI Overviews out to general availability in the United States in May 2024 and has since expanded them to more than a billion users globally. The mechanic is simple: a synthesized summary sits above the ten blue links, with three to seven cited sources rendered as link tiles.
This isn't a Featured Snippet (a single quoted passage from one ranking page) or a People Also Ask box (related-question dropdowns). It's a synthesized answer with multiple attributed sources — the model picks who to cite, and the user sees that decision before they ever scroll.
AI Overviews appear most on informational, definitional, and comparison queries: "what is webhook idempotency," "Stripe vs Adyen," "how does ARR differ from MRR." They appear far less on transactional ("buy red sneakers size 11") or navigational ("notion login") queries — those still drive direct clicks.
Implication
The SERP now has two layers: an answer layer (AI Overview + cited sources) and a results layer (the traditional ten blue links). For top-of-funnel queries, the answer layer is where the attention lives.
Zero-click reality
Pew Research's 2025 analysis found that users click a traditional search result roughly 8% of the time when a Google AI Overview appears, compared to about 15% on standard SERPs without one. That's roughly a halving of clickthrough on the queries where AI Overviews surface.
But this drop sits on top of a baseline that was already lopsided. SparkToro's 2024 study found roughly 60% of Google searches end without a click to a non-Google property. That includes searches "answered" by Google's own knowledge panels, image carousels, and direct answers — not just AI summaries.
Two different problems
There's a useful distinction here:
- Zero-click for the SERP: the user got their answer on Google and didn't click anywhere. The query is satisfied; the publisher is bypassed.
- Zero-click for the publisher: even when a user does click, they don't click you.
AI Overviews intensify both. But they also create a new dynamic: a user who reads the Overview and then clicks usually clicks one of the cited sources. The cited sources win disproportionately. The bottom of the page loses.
What gets cited vs what gets ignored
Princeton's GEO research is the most useful empirical work here. The team found that generative engines cite content with explicit statistics, quotations, and citation markers 30–40% more often than equivalent unstructured prose.
The pattern we've observed:
- Crisp definitional sentences get extracted. "ARR is annual recurring revenue, calculated as MRR multiplied by twelve" gets cited. "ARR is a critical metric for SaaS businesses to track over time as part of a holistic measurement strategy..." does not.
- Comparison tables get cited because they encode structured comparisons the model can lift directly.
- Original data — even small proprietary numbers — beats retold third-party stats. The model prefers a unique source.
- Domain authority still matters, but it's no longer sufficient. A high-authority domain with sloppy passage structure loses to a mid-authority domain with crisp, answer-shaped paragraphs.
The biggest loser is thin content: AI-rewritten listicles, content-mill pages, anything that looks like a remix of competitor articles. Citation engines de-duplicate. If your page says what three other pages already say, none of you get cited.
From rankings to citation share
Rank tracking alone now misses the story. Your page can rank #3, lose its click traffic to the AI Overview at the top, and still appear as a cited source inside that Overview — meaning your impressions stay flat or rise while your clicks fall. Three years ago that pattern was a problem. Today it's information.
KPIs worth tracking
| Metric | What it tells you |
|---|---|
| Citation rate | % of target queries where you appear in the AI Overview source list |
| Citation position | 1st, 2nd, 3rd cited source — first cited gets the most attention |
| Share-of-voice across engines | Same query measured on Google AIO, Perplexity, ChatGPT, Gemini |
| Branded entity attribution | Does the AI summary mention your brand by name, or just your URL? |
You can audit this manually by hand-checking 20–30 priority queries weekly across each engine. It's tedious, but it's data nobody else on your team has.
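If you want to keep score without a tool, here's a minimal sketch of how you might log those weekly hand-checks and compute citation rate and average citation position per engine. The CSV columns are our own convention for this sketch, not any standard export format:

```python
import csv
from collections import defaultdict

# Each row of the audit log is one hand-checked (query, engine) pair.
# Columns (our convention): query, engine, cited (1 or 0), position
AUDIT_LOG = "citation_audit.csv"

def citation_metrics(path: str) -> None:
    checks = defaultdict(int)      # engine -> queries checked
    cited = defaultdict(int)       # engine -> queries where we were cited
    positions = defaultdict(list)  # engine -> cited positions (1st, 2nd, ...)

    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            engine = row["engine"]
            checks[engine] += 1
            if row["cited"] == "1":
                cited[engine] += 1
                positions[engine].append(int(row["position"]))

    for engine in sorted(checks):
        rate = cited[engine] / checks[engine]
        avg_pos = (sum(positions[engine]) / len(positions[engine])
                   if positions[engine] else None)
        print(f"{engine}: citation rate {rate:.0%} "
              f"({cited[engine]}/{checks[engine]}), "
              f"avg cited position {avg_pos}")

citation_metrics(AUDIT_LOG)
```

Run it weekly against the same query set and the per-engine deltas become your share-of-voice trend line.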
Reading GSC differently
In Google Search Console, the new pattern looks like: impressions stable or up, clicks down, average position stable. A year ago that combination meant something was broken. Today it usually means an AI Overview appeared and you may or may not be cited inside it. Cross-reference with manual citation audits before reacting.
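Here's a sketch of how you might flag that pattern programmatically from two GSC query exports (last 28 days vs the 28 days before). The column names follow GSC's CSV export, but the thresholds are assumptions to tune for your site, not anything Google publishes:

```python
import pandas as pd

# Two GSC "Queries" exports: the last 28 days and the 28 days before that.
current = pd.read_csv("gsc_queries_current.csv")
previous = pd.read_csv("gsc_queries_previous.csv")

merged = current.merge(previous, on="Top queries", suffixes=("_now", "_prev"))

mask = (
    (merged["Impressions_now"] >= merged["Impressions_prev"])           # impressions flat or up
    & (merged["Clicks_now"] < merged["Clicks_prev"] * 0.8)              # clicks down >20%
    & ((merged["Position_now"] - merged["Position_prev"]).abs() < 2.0)  # rank stable
)

# These are the queries to hand-check for an AI Overview and citation status.
print(merged.loc[mask, ["Top queries", "Clicks_prev", "Clicks_now",
                        "Impressions_prev", "Impressions_now"]])
```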
A practical playbook
- Lead with a one-sentence answer. The first paragraph of every page should contain the definitional sentence the model would want to extract. Cut the throat-clearing.
- Add original data. Run a small survey of your customers. Ship a benchmark from your own product analytics. Publish a number nobody else has. Even N=80 beats retold third-party stats.
- Use structured data. Google's official guidance recommends Article and FAQPage structured data to help its systems understand and surface content. A minimal FAQPage block:
```json
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [{
    "@type": "Question",
    "name": "What is citation share?",
    "acceptedAnswer": {
      "@type": "Answer",
      "text": "Citation share is the percentage of target queries where your domain appears in an AI Overview's cited sources, across one or more generative engines."
    }
  }]
}
```
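Embed the block in a `<script type="application/ld+json">` tag in the page's HTML and validate it with Google's Rich Results Test before shipping.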
- Replace listicles with comparison frameworks. A "Top 10 X tools" post is a citation engine's least favorite shape. A side-by-side comparison for "Stripe vs Paddle for SaaS billing" is its favorite.
- Build branded entity signals. Use your brand name consistently in author bylines, About pages, and structured data so the model attributes claims to you, not just to your URL.
Side-by-side: same answer, two shapes
Thin version:
Webhooks can sometimes fire multiple times, which can cause issues for developers building integrations. There are several strategies to handle this, including various deduplication approaches that teams use depending on their architecture.
Citation-friendly version:
Webhook idempotency is the practice of designing webhook handlers to produce the same result whether a payload arrives once or three times. The standard implementation: store a hash of the event ID in your database before processing, and reject any event whose hash already exists.
The second version gets cited. The first gets paraphrased and forgotten.
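For completeness, here's a minimal sketch of the handler pattern that second version describes. An in-memory set stands in for the database table; a production system would use a unique-keyed insert so the check survives restarts and concurrent workers:

```python
import hashlib

processed = set()  # stands in for a DB table with a unique index on the hash

def handle_webhook(event: dict) -> str:
    # Hash the provider's event ID; reject anything we've already seen.
    key = hashlib.sha256(event["id"].encode()).hexdigest()
    if key in processed:
        return "duplicate, ignored"
    processed.add(key)
    process(event)  # the actual side effects go here
    return "processed"

def process(event: dict) -> None:
    print(f"charging customer for event {event['id']}")

# The same payload delivered twice produces exactly one side effect.
evt = {"id": "evt_123", "type": "invoice.paid"}
print(handle_webhook(evt))  # processed
print(handle_webhook(evt))  # duplicate, ignored
```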
What to stop doing immediately
- Stop chasing keyword volume on definitional queries you can't win. If "what is OAuth" is dominated by Auth0 and Okta, your blog post won't flip that.
- Stop publishing AI-rewritten competitor content. De-dup engines see right through it.
- Stop measuring top-of-funnel content purely in clicks. Track citation rate alongside.
- Stop gating primary answers behind email walls. Gated answers don't get cited.
The brands adapting fastest aren't writing more content. They're writing fewer, better, structurally citation-shaped pages — and tracking a different scoreboard.
More on tracking that scoreboard in our other GEO writing, and on how GEON measures citation share across AI engines.
Deniz
Content & GEO Strategy