A Guide to SEO, AEO, GEO, LLM-O... WTF-O?

Stuart Brameld

Founder


The Alphabet Soup of Modern Search Optimisation

If you’re feeling confused by all the acronyms being thrown around in marketing right now, you’re not alone. We’ve gone from SEO (Search Engine Optimisation) to AEO (Answer Engine Optimisation) to GEO (Generative Engine Optimisation) and now people are talking about LLM-O (Large Language Model Optimisation).

What’s actually happening? Search is fundamentally transforming from ranked lists to synthesised answers. As Kevin Indig puts it, search is moving from abundance to synthesis - with AI delivering single definitive responses rather than multiple options for users to evaluate.

AI search now represents at least one-third of all searches. AI Overviews appear in 21% of Google keywords as of November 2025, and ChatGPT handles roughly one-fifth of Google’s query volume. According to Growth Unhinged, AEO is the single biggest channel where B2B marketers are increasing investments this year.

There’s a lot of noise and hype in this space. Let’s cut through it and focus on the tactics that are actually backed by data and cited by multiple experts.

What Is Answer Engine Optimisation (AEO)?

Answer Engine Optimisation is the practice of structuring your content so that AI-powered search tools - ChatGPT, Perplexity, Google AI Overviews, Claude - can accurately surface it in their responses.

Unlike traditional SEO where you’re optimising for blue links and click-through rates, AEO is about becoming the source that AI systems reference and cite when providing answers to user queries.

The key difference: with SEO, you’re fighting for a click. With AEO, you’re fighting for a citation.

What Is Generative Engine Optimisation (GEO)?

GEO is closely related to AEO but focuses specifically on ensuring your brand and content appear prominently when large language models generate responses.

In practice, AEO and GEO are used interchangeably. Both address the same fundamental challenge: how do you maintain visibility when users increasingly get answers directly from AI rather than clicking through to your website?

Agent Browsers: A New Optimisation Surface

Beyond answer engines, a new class of AI tools is starting to interact with your website directly: agent browsers. Tools like ChatGPT Atlas, OpenAI Operator, Perplexity Comet, and Anthropic Computer Use navigate pages on a user’s behalf. They click buttons, fill forms, and complete tasks. And they do not see your site the way humans do.

A recent breakdown by Search Engine Journal explains the three perception methods these agents use to read a page - and none of them involves looking at your visual design.

The implication: the agent’s experience of your site is determined by your accessibility implementation, not your visual design. Missing ARIA labels, non-semantic divs masquerading as buttons, and JavaScript-only shells without server-side rendering all degrade agent task success.
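
To make the point concrete, here is a minimal sketch (not a production audit tool) that flags two of these patterns using only Python's standard library. The heuristics and messages are illustrative:

```python
from html.parser import HTMLParser

class AgentReadinessChecker(HTMLParser):
    """Heuristic check for markup patterns that degrade agent perception."""

    def __init__(self):
        super().__init__()
        self.issues = []

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        # A plain <div> with a click handler is not exposed as a button in
        # the accessibility tree; role="button" helps, but a real <button>
        # gets keyboard support and semantics for free.
        if tag == "div" and ("onclick" in a or a.get("role") == "button"):
            self.issues.append("div acting as a button: prefer <button>")
        # Images without alt text are invisible to screen readers and agents.
        if tag == "img" and not a.get("alt"):
            self.issues.append("img without alt text")

checker = AgentReadinessChecker()
checker.feed('<div onclick="buy()">Buy now</div>'
             '<img src="hero.png"><button>Add to cart</button>')
print(checker.issues)
```

A real audit would also check heading hierarchy, form labels, and server-side rendering, but even a heuristic like this surfaces the worst offenders.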

One data point from the same piece is worth flagging:

If your site is invisible to a screen reader, it is invisible to half the agents now browsing the web on behalf of buyers. Accessibility is no longer a compliance checkbox alone. It is a prerequisite for showing up in agent-led journeys.

The Numbers That Matter

Before diving into tactics, ground your strategy in the statistics from Kevin Indig’s State of AI Search Optimization 2026.

The implication is clear: traditional SEO still matters (ranking in the top 10 helps AI citations), but third-party mentions and technical performance are now equally critical.

Proven Tactics: What Actually Works

After reviewing the research from Kevin Indig, Ethan Smith (Graphite), Omniscient Digital, PostHog, Joost de Valk, and others, these are the tactics consistently cited as effective:

1. Technical Performance (High Confidence)

Multiple sources confirm that fast sites get more AI crawler attention.

This isn’t new advice, but it’s more important than ever. AI crawlers are time-constrained and favour fast-loading, well-structured pages.

2. Content Freshness (High Confidence)

Content updated within the last 3 months performs significantly better for AI citations. This aligns with what both Indig and Omniscient Digital have observed.

One simple lever within reach today: adding visible “last updated” dates to your articles yields an average 15% uplift in citations, according to Growth Unhinged.

The implication: a content refresh strategy is no longer optional. Prioritise updating your highest-value pages regularly.

3. Third-Party Validation (High Confidence)

This is perhaps the most important shift: 85% of brand mentions in AI search come from third-party sources, not your own website.

In practice, this means earning reviews, community mentions, and coverage on sites you don’t own - not just publishing more on your own blog.

A word of caution: don’t over-rely on any single platform. In September 2025, Reddit citations in ChatGPT crashed 80% after a single parameter change. Build broad presence, but always prioritise channels you own.

Aggregated citation data also hides huge vertical variation. Nate at Ten Speed points out that the June 2025 Profound study, which crowned Reddit the top citation source across 680 million answers, averaged across verticals. Drill into specific industries and the picture changes: healthcare answers cite PubMed and Mayo Clinic, DevOps answers cite GitHub, and Reddit can account for as little as 7% of citations in some B2B niches. In one edtech competitive set Ten Speed analysed, Reddit was absent from the top eight cited domains entirely, with YouTube acting as the primary user-generated source instead. Before investing in Reddit, run a prompt-level audit of the queries your buyers actually ask, and check which domains the AI is citing for your category.

Many Reddit-as-a-service offerings rely on offshore teams spinning up fresh accounts and karma-farming with fake engagement. That risks account bans and lasting brand damage if the manipulation is exposed. If you do invest in Reddit, have real employees and customers post in their own voice.

You cannot control AI citations through your own content alone. You need to build presence across the web.

4. Structured Content Formatting (Moderate Confidence)

Multiple sources agree that structured formatting - clear headings, bullet lists, tables, and FAQ markup - improves the probability of your content being excerpted.

5. Conversational Query Coverage (Moderate Confidence)

Ranking for head terms is less important than covering the conversational queries people actually ask AI systems. Think about how someone would phrase a question to ChatGPT, not just traditional keyword targeting.

The shift is concrete: the average Google query is 3.4 words, while the average ChatGPT query is 60 words (Growth Unhinged). People aren’t typing keywords into AI tools. They’re describing their full situation, role, constraints, and goal in a single prompt. Optimise for the long, specific questions your buyers actually ask.

6. Author Credentials (Emerging Evidence)

Author credentials, certifications, and expertise signals are increasingly important. This aligns with Google’s existing E-E-A-T framework but is now being picked up by AI systems too.

7. Bottom-Funnel SEO First (High Confidence)

The good news? AEO is, before anything else, smart bottom-funnel SEO on your own website. Control the narrative with your owned content, then worry about other channels.

Sam Dunning recommends prioritising what your dream clients search when they need your solution now and are comparing options. Omniscient Digital report that comparison and alternatives content was the single most-requested content type they saw across client programs this quarter. Create best-in-class comparison pages, alternatives pages, and “best [category]” listicles to hit this intent.

Nail these landing pages, articles and listicles before anything else. Not only are these pages crucial for prospects evaluating your solution, but many have solid chances to rank in classic organic search, AI Overviews, and LLMs.

PostHog also recommends starting with defensive SEO - competitor comparisons, integration guides, and “best tools” roundups. Another good option is low volume keywords. These are easier to rank for and can create a solid base of traffic. Writing dozens of these adds up - don’t underestimate them.

Do this first. Then worry about link building, brand mentions, and moving onto Reddit and other communities when you have the resource or budget ready.

8. Quality Over Gaming (High Confidence)

Google is dumber than you think. It doesn’t understand your content - it understands user behaviour. Users who click through and spend time reading signal that the content is good.

If your content doesn’t rank well, it’s probably because it just isn’t very good. Ask yourself: is this genuinely the best available answer to the query it targets?

If not, improve the content first before worrying about technical optimisation.

9. The Three Channels That Work for AEO (High Confidence)

Ethan Smith, CEO of Graphite, identifies three primary tactics that consistently drive AI citations:

  1. Landing pages - Comprehensive, original content addressing specific questions
  2. YouTube videos - AI models cite video content; descriptions are particularly important
  3. Reddit comments - Authentic participation (not spam) in relevant communities where your expertise applies. Caveat: confirm Reddit is actually showing up in citations for your vertical before committing time here, since the platform’s weight varies sharply by category.

Recent citation share data underscores why YouTube belongs at the top of the list. YouTube now accounts for roughly 16% of citations in LLM answers, compared with 10% for Reddit (Growth Unhinged). If you have to pick one channel to invest in first, video has the edge.

Smith also highlights an often-overlooked opportunity: help centre optimisation. Your documentation suddenly becomes your highest-converting content channel because AI systems heavily reference support content when answering product-related queries. Talal Syed (GrowthX) found that docs pages were 8.6x more likely to get cited than other page types when buyers were actively evaluating products.

10. Authenticity Over Optimisation (High Confidence)

This insight comes from multiple sources but Ethan Smith states it clearly: avoid hyper-SEOed content. AI models detect and penalise content optimised primarily for algorithms rather than genuine helpfulness.

The data backs this up. Graphite research, shared via Growth Unhinged, found that fewer than 20% of articles cited by ChatGPT are AI-generated. Roughly 80% are written by humans.

AI-generated content fails because models trained recursively on AI derivatives collapse in quality. Human expertise and firsthand experience matter. Original content is required - there’s no shortcut here.

The AI Platform Shift (Hard Data)

The pattern in the data is consistent across companies: AI traffic is low volume but converts at dramatically higher rates than traditional search. These visitors arrive with more context, clearer intent, and a specific recommendation from the AI. They are not browsing. They are buying.

Traditional SEO was built on backlinks — get other sites to link to yours, and you climb the rankings. In the age of AI search, the currency has changed. What matters now is being mentioned, not linked to.

Sarah Bedell makes a compelling case that community platforms now dominate as the primary source of brand mentions that feed LLM training data. Think Reddit, YouTube, dev.to, Medium, and Stack Overflow — platforms built on user-generated content, originating from that timeless growth lever: word of mouth.

This is a fundamental shift. Backlinks were something you could manufacture — outreach emails, guest posts, link exchanges. Mentions on community platforms are earned. They come from real people recommending your product in a Reddit thread, creating a YouTube tutorial, or answering a Stack Overflow question using your tool.

Bedell breaks down mention sources into four categories:

  1. Community forums (Reddit, Stack Overflow) — reflect overall brand sentiment and top-of-funnel awareness
  2. Long-form UGC (Medium, dev.to, YouTube) — the highest-volume citation source, mostly user-created
  3. First-party technical content (docs, guides) — enables specific implementation suggestions from AI
  4. Integrations and agent platforms (MCPs, plugins) — allow LLMs to directly recommend and use your product

The takeaway? It all starts and ends with word of mouth. Whether someone hears about you from a billboard, a friend, or a YouTuber, that awareness feeds the mentions that feed the training data that feeds the AI responses. You cannot shortcut this flywheel — you have to earn it.

The Ghost Citation Problem

There’s an important nuance here that most marketers miss. Kevin Indig’s research into the ghost citation problem reveals that getting cited and getting mentioned are two very different things.

Analysing 3,981 domains across 115 prompts, 14 countries, and 4 AI engines, Indig found that 74.9% of domains received citations (source links), but only 38.3% were actually mentioned by name in the AI’s response. That means 61.7% of all citations are “ghost citations”, where the AI links to your content but never says your brand name. Only 13.2% of appearances resulted in both a citation and a mention.

The gap varies wildly by platform. Gemini mentions brands 83.7% of the time but only cites sources 21.4% of the time. ChatGPT does the opposite: 87% citation rate but only 20.7% mention rate. In 22% of tested scenarios, different AI engines contradicted each other on whether to mention the same brand at all.

What does this mean for your strategy? Track citations and brand mentions as separate metrics. A ghost citation links to your content without naming your brand, so it builds little awareness; a mention without a citation builds awareness without traffic. You need to monitor, and optimise for, both.

What to Ignore (For Now)

There’s a lot of speculation in the AEO space, and many frequently discussed tactics still lack strong evidence.

Stick to the fundamentals until better evidence emerges.

The Unintended Consequences of SEO

It’s worth understanding how we got here. Joost de Valk, founder of Yoast SEO, wrote a thoughtful piece on the unintended consequences of SEO.

He acknowledges that while the foundational practices remain sound - readability, content structure, internal linking - the industry developed a checklist mentality. Writers optimised for green lights rather than value. The result was a flood of content that hit the right technical notes but didn’t add real value.

The current industry consensus is shifting toward genuine value over checklist compliance: content written for readers first, with optimisation as a finishing step rather than the goal.

This is good news if you’ve been creating genuinely helpful content all along.

Measuring AI Visibility with Microsoft Clarity

Here’s where things get practical. Microsoft Clarity introduced reporting that shows AI bot traffic and activity across websites, offering transparency into how automated agents crawl and interact with content.

The new “Bot Activity” report is a dashboard that tracks server-side signals to show exactly how AI agents access your site.

Unlike standard analytics that track human visits, this requires a CDN or server integration to capture the “upstream” activity - the scraping and crawling that happens before a user ever sees an answer.

The report breaks down traffic by “Bot Operator” (e.g., OpenAI, Anthropic, Google) and, crucially, “Bot Activity” type, distinguishing between an “AI Crawler” (scraping for training data) and an “AI Assistant” (fetching a live answer for a user).

Why This Matters

AI bots don’t behave like search bots - they don’t just index content, they consume it. What was once hidden in complex server logs is now visible, letting you easily track whether an AI is reading your site 10,000 times a day or ignoring it completely.

This dashboard democratises the “AI Request Share” metric, allowing you to quantify how much of your infrastructure is serving non-human agents versus actual customers without needing a data science team. It effectively separates “Training” (extractive) from “Inference” (potential visibility).
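
As a sketch of what a home-rolled version of this breakdown looks like, the snippet below classifies server-log user agents into operator and activity. The token strings are the crawler names the operators have published at the time of writing - verify them against each vendor’s current documentation before relying on this:

```python
from collections import Counter

# User-agent tokens the operators have documented publicly; these change,
# so check current docs. The "AI Crawler" (training) vs "AI Assistant"
# (live answer fetch) split mirrors Clarity's terminology.
BOT_SIGNATURES = {
    "GPTBot": ("OpenAI", "AI Crawler"),
    "ChatGPT-User": ("OpenAI", "AI Assistant"),
    "ClaudeBot": ("Anthropic", "AI Crawler"),
    "PerplexityBot": ("Perplexity", "AI Crawler"),
    "Google-Extended": ("Google", "AI Crawler"),
}

def classify(user_agent):
    for token, label in BOT_SIGNATURES.items():
        if token in user_agent:
            return label
    return ("unknown", "human or other")

def summarise(requests):
    """Count requests per (operator, activity) from (path, user_agent) pairs."""
    return Counter(classify(ua) for _path, ua in requests)

# Illustrative log entries, not real user-agent strings.
logs = [
    ("/pricing", "Mozilla/5.0; compatible; GPTBot/1.2"),
    ("/blog/aeo-guide", "Mozilla/5.0; ChatGPT-User/1.0"),
    ("/pricing", "Mozilla/5.0; compatible; GPTBot/1.2"),
]
print(summarise(logs))
```

Run against a day of CDN logs, this tells you who is reading you and whether they are training on your content or answering with it.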

The New Reality: It’s OK Not to Get Traffic

It’s okay not to get traffic. That’s the new reality we have to accept. The real breakthrough here is that now it’s much easier to know whether an AI actually used your content as the basis for those answers.

Previously, “Zero Click” was a black box - you had to guess if your content was fuelling the AI’s response or if you were just being ignored. Now, you have proof. If you see high AI consumption of your content, you know you are winning mindshare and influencing the answer, even if you aren’t getting the click.

This metric finally validates the strategy of “feeding the bot” to maintain brand relevance in a world where the user might never leave the chat interface.

How to Set Up Microsoft Clarity Bot Tracking

  1. Enable Server-Side Integration - You cannot get this data with just the JavaScript snippet. Connect your CDN (Cloudflare, etc.) to Clarity to see the server logs.

  2. Audit “Path Requests” - Identify what AI systems are reading. Are they scraping your high-value proprietary data (pricing, JSON endpoints) or your brand-building content (blog, about page)?

  3. Calculate Your “AI Conversion Rate” - Compare your AI Request Volume (from Clarity) to your AI Referral Traffic (from GA4). If the ratio is massively skewed, you need to rethink your content strategy for agents.
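
A rough way to sanity-check step 3, assuming you have the two numbers to hand. The metric names and example figures here are illustrative, not a standard:

```python
def agent_to_referral_ratio(ai_requests, ai_referrals):
    """How many times agents read your site per visit they send back.
    A very high ratio = heavy machine consumption, little click-through."""
    return ai_requests / ai_referrals if ai_referrals else float("inf")

def ai_request_share(bot_requests, human_sessions):
    """Share of total requests served to AI agents rather than people."""
    total = bot_requests + human_sessions
    return bot_requests / total if total else 0.0

# Example month: 12,000 agent requests (Clarity) vs 300 AI referral
# sessions (GA4) and 45,000 human sessions.
print(agent_to_referral_ratio(12_000, 300))        # 40.0 reads per referred visit
print(round(ai_request_share(12_000, 45_000), 3))  # 0.211
```

What counts as “massively skewed” is a judgment call for your business; the point is to track the ratio over time, not to hit a universal threshold.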

The Three Layers of AEO Measurement

Visibility metrics alone are not enough. Omniscient Digital recommend measuring AEO success across three distinct layers, and most teams stop at the first one.

1. AI Visibility

Are you being cited at all, and in what context? Tools like Peec AI and Profound track brand mentions and citations across ChatGPT, Perplexity, Gemini, and Google AI Mode.

This is the easy layer. It tells you whether you exist in the model’s answer space.

2. LLM Referral Traffic

How much traffic arrives from ChatGPT, Perplexity, and Gemini? Track this in GA4. It is not the strongest success metric on its own, but a clickthrough means you appeared in a prompt where the user had a reason to leave the chat. That is a meaningful signal.

3. Revenue Attribution

This is the layer most teams skip, and it is the only one that connects AI visibility to pipeline.

Add a self-reported attribution field to your contact and demo forms. A simple “How did you find us?” question. When a prospect names ChatGPT, Perplexity, or Gemini, brief your sales team to ask one follow-up on the call: what prompt did you use?

Log that prompt in your CRM. Map it to opportunities and closed-won deals. Now you have a feedback loop telling you which prompts drive revenue, not just which ones cite you.

What Prompts Should You Track?

Where do these prompts come from?

The best source is your customers and prospects, not your marketing team’s intuition. Discovery, demo, and onboarding call transcripts are gold mines for voice-of-customer research. The questions prospects ask on sales calls are the same ones they ask ChatGPT. The questions customers ask during onboarding are the questions someone might have before they purchase.

One principle matters above all: this is not a set-and-forget exercise. As your ICP and category evolve, the prompts they use evolve with them.
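
One lightweight way to maintain such a library is a flat log you re-run each month. The field names below are assumptions - adapt them to your CRM or spreadsheet:

```python
import csv
import io

# Illustrative schema for a prompt library: where the prompt came from,
# which engine you tested, and how your brand showed up.
FIELDS = ["prompt", "funnel_stage", "source_call", "engine",
          "mentioned", "recommended", "how_described"]

library = [
    {"prompt": "best work management tool for a 5-person growth team",
     "funnel_stage": "mid", "source_call": "2025-11-04 demo",
     "engine": "ChatGPT", "mentioned": "yes", "recommended": "no",
     "how_described": "listed among six options"},
]

def to_csv(rows):
    """Serialise the prompt library so it can live in a shared sheet."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=FIELDS)
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()

print(to_csv(library).splitlines()[0])
```

The structure matters more than the tooling: every row should trace back to a real customer conversation, and every monthly re-run should update the last three columns.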

The Three-Stage Pipeline: Retrieved, Cited, Trusted

Kevin Indig outlines a useful framework for thinking about AI search optimisation as a sequential process:

  1. Retrieval Stage - Your content must first enter the candidate pool. This requires proper crawling, indexing, and fast server response times. Without this, nothing else matters.

  2. Citation Stage - Models then select which sources to reference in their answers. This is where content quality, structure, and authority come into play.

  3. Trust Stage - Finally, users validate and act on citations. Your brand reputation and the accuracy of your content determine whether users trust the AI’s reference to you.

Most companies fail at stage one. Fix your technical foundation before worrying about content strategy.

SEO Experiments to Try

Nobody knows exactly what works in AI search yet. Here are some experiments worth running:

1. Build Question-and-Answer Pages

Create many highly specific pages where the title is the exact query someone might ask. Structure each page as:

  1. The question (as the title)
  2. A direct answer in the first 45 words
  3. A TL;DR summary with bullet points
  4. The full content

See AI Marketing Labs for examples of this format in action. Avoid the 500-word intro — just give AI the answer upfront.
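
The four-part structure can be templated. A sketch in Python with the 45-word guard built in (the markdown output format is an assumption - adapt to your CMS):

```python
def build_qa_page(question, direct_answer, tldr_points, body):
    """Render the four-part structure: question as title, direct answer
    in the first ~45 words, TL;DR bullets, then the full content."""
    if len(direct_answer.split()) > 45:
        raise ValueError("direct answer should fit in ~45 words")
    tldr = "\n".join(f"- {point}" for point in tldr_points)
    return f"# {question}\n\n{direct_answer}\n\n**TL;DR**\n\n{tldr}\n\n{body}\n"

page = build_qa_page(
    question="What is Answer Engine Optimisation?",
    direct_answer=(
        "Answer Engine Optimisation (AEO) is the practice of structuring "
        "content so AI-powered search tools can accurately surface and "
        "cite it in their answers."
    ),
    tldr_points=["Optimise for citations, not clicks",
                 "Answer first, detail later"],
    body="(full article body)",
)
print(page.splitlines()[0])  # "# What is Answer Engine Optimisation?"
```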

2. Inject Proprietary Data

AI is looking for new information it hasn’t seen before. Generic content gets ignored. Inject your own data, benchmarks, survey results, or original research. Case studies become citations — your internal knowledge, externalised, becomes the thing AI references.

3. Target the Massive Long Tail

The search landscape has shifted from “best CRM” to “what is the best CRM for 50-person manufacturing firms?” Low volume per query, but infinite variations. Manual writing cannot compete at this scale.

As Animalz puts it, the differentiator isn’t speed or volume — it’s originality. AI has exposed the cheapness of content that was always cheap.

4. Add JSON-LD FAQ Schema

Create FAQ sections on your pages and mark them up with JSON-LD structured data. AI systems can parse structured data more reliably than unstructured prose.
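
A minimal generator for that markup - the FAQPage, Question, and Answer types follow the schema.org vocabulary, and the output is ready to embed in a `<script type="application/ld+json">` tag:

```python
import json

def faq_jsonld(pairs):
    """Build schema.org FAQPage JSON-LD from (question, answer) pairs."""
    return json.dumps({
        "@context": "https://schema.org",
        "@type": "FAQPage",
        "mainEntity": [
            {
                "@type": "Question",
                "name": q,
                "acceptedAnswer": {"@type": "Answer", "text": a},
            }
            for q, a in pairs
        ],
    }, indent=2)

markup = faq_jsonld([
    ("What is AEO?",
     "Structuring content so AI search tools can surface and cite it."),
])
print(markup)
```

Validate the result with Google’s Rich Results Test before shipping it site-wide.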

5. Cite the Sources AI Already Trusts

Ask Perplexity: “What are the most trusted sources for [your topic]?” Then cite those sources in your content. AI trusts sources it already knows — use that to your advantage.

6. Make Content Scannable for Robots

If humans can skim it, robots can read it. Provide structure and hierarchy — clear H2 headers, bullet points, and comparison tables. AI systems love data tables for extracting structured information. List your competitors, highlight the differences.

7. Automate FAQ and Schema Markup

Josh Grant (formerly VP Growth at Webflow) automated AEO FAQ generation across six feature pages (Design, CMS, SEO, Shared Libraries, Interactions, Hosting). The workflow used AI to research Google’s “People Also Ask,” Reddit, and forums via Perplexity, identified FAQ gaps, generated Q&A pairs with on-brand answers, then auto-structured them into schema markup.

The results: +331 new AI citations (57% of all new citations across Webflow.com in that period), +149K SEO impressions, and a 24% lift. As Grant put it: “AI didn’t change the game. It raised the bar for answering existing questions.”

This is a repeatable playbook. Identify your highest-traffic feature or product pages, research the questions people are actually asking, and add structured FAQ sections with proper schema markup.

8. Publish Machine-Readable Product Truth

Most product pages are designed for humans. AI agents need something different. Dima Durah (Toloka.ai) recommends publishing structured, machine-readable versions of key pages alongside human-readable ones. Think /pricing.json, /product-facts.md, or a /llms.txt file that agents can reliably parse.

As Durah puts it: “Agents don’t want pages; they want structured truth they can act on. Teams that win make context explicit, machine-readable, impossible to misinterpret.”

This is still early, but as AI agents increasingly make purchasing decisions on behalf of buyers, having structured product data will become a significant advantage.
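
A sketch of what those files might contain. The product data and URLs are entirely hypothetical, and llms.txt is still only a proposed convention, so treat the format as provisional:

```python
import json

# Hypothetical product facts - in practice, generate these from the same
# source of truth that powers your human-readable pricing page.
PRODUCT = {
    "name": "ExampleApp",
    "plans": [
        {"name": "Starter", "price_usd_month": 29, "seats": 5},
        {"name": "Growth", "price_usd_month": 99, "seats": 25},
    ],
}

def pricing_json():
    """Serve at /pricing.json so agents can parse plans without scraping HTML."""
    return json.dumps(PRODUCT, indent=2)

def llms_txt():
    """Minimal file in the proposed llms.txt style: an H1, a one-line
    summary, then curated links agents can follow."""
    return (
        f"# {PRODUCT['name']}\n\n"
        "> Work management platform for growth marketers.\n\n"
        "## Key pages\n"
        "- [Pricing](https://example.com/pricing.json): machine-readable plans\n"
        "- [Docs](https://example.com/docs): implementation guides\n"
    )

print(json.loads(pricing_json())["plans"][0]["price_usd_month"])
```

The design choice that matters: both files derive from one canonical data source, so the machine-readable truth can never drift from what humans see.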

9. Build a Content System

The new role of a growth marketer is to build AI-powered systems that produce highly specific content at scale. Feed the AI your proprietary data so it cites you, not your competitors. Automate the full workflow — drafts, JSON schema, visual assets, metadata — using multiple LLMs for different strengths.

This is where marketing team structure starts to matter. As Adam Goyette argues, when execution is automated, the value shifts to the people who decide what systems to build and which audiences to target. The marketer who identifies which content system to build creates more value than the one who produces the content manually.

Action Plan: What to Do This Quarter

Week 1: Measure Your Starting Point

  1. Set up Microsoft Clarity with server-side integration
  2. Build a prompt library of 15-25 queries your ICP actually asks (pull these directly from discovery, demo, and onboarding call transcripts, not internal brainstorms), mapped across the funnel:
    • Top of funnel: “What is [category]?”, “How does [category] work?”, “Do I need a [category] tool?”
    • Mid-funnel: “Compare [you] vs [competitor]”, “Best [category] for [industry/use case]”, “Pros and cons of [you]”
    • Bottom of funnel: “[You] pricing”, “[You] vs [competitor] for enterprise”, “Is [you] worth it?”
  3. Run each prompt across ChatGPT, Perplexity, Gemini, and Google AI Mode. For each, track: are you mentioned? Are you recommended or just listed? How are you described?
  4. Audit your page load times (aim for under 1 second)
  5. Check your server response times (aim for under 200ms TTFB)
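
A quick way to sanity-check item 5, assuming Python is at hand (for a rigorous audit, use curl’s timing output or WebPageTest). The grading thresholds beyond the 200ms target are our own:

```python
import time
import urllib.request

def measure_ttfb(url, timeout=10.0):
    """Rough time-to-first-byte: seconds from request start until the first
    body byte arrives. Includes DNS and TLS, so treat it as an upper bound."""
    start = time.perf_counter()
    with urllib.request.urlopen(url, timeout=timeout) as response:
        response.read(1)
    return time.perf_counter() - start

def grade_ttfb(seconds):
    """Grade against the ~200ms target; further bands are illustrative."""
    if seconds <= 0.2:
        return "good"
    if seconds <= 0.5:
        return "needs work"
    return "poor"

# Requires network access:
#   print(f"{measure_ttfb('https://example.com/') * 1000:.0f}ms")
print(grade_ttfb(0.18))  # good
```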

Week 2-4: Fix Technical Issues

  1. Address any page speed issues identified
  2. Ensure semantic HTML structure across key pages
  3. Update content that’s older than 3 months
  4. Add structured data (tables, FAQs) where appropriate

Ongoing: Build Third-Party Presence

  1. Encourage customer reviews on relevant platforms
  2. Participate in industry forums and communities
  3. Seek guest posting and podcast opportunities
  4. Monitor brand mentions across the web

Metrics to Track

People and Companies to Follow

If you want to stay current on AEO and AI search optimisation, these are the voices worth following:

Individuals

Companies and Publications

Tools to Watch

Further Reading

The Bottom Line

The proliferation of acronyms - SEO, AEO, GEO, LLM-O - reflects a genuine shift in how people find information. But the proven tactics remain grounded in fundamentals:

  1. Fast, well-structured websites get more AI attention
  2. Fresh, helpful content gets cited more often
  3. Third-party validation matters more than your own claims
  4. Measurement is finally possible with tools like Microsoft Clarity

Don’t chase every new tactic that emerges. Focus on these fundamentals, measure your progress, and iterate based on data - not hype.

How Growth Method Helps

Growth Method is the only work management platform built for growth marketers. We help you track and optimise your marketing experiments across all channels - including the emerging world of AI search optimisation.

Our platform helps you plan, prioritise, and track exactly these kinds of experiments.

“We are on-track to deliver a 43% increase in inbound leads this year. There is no doubt the adoption of Growth Method is the primary driver behind these results.” — Laura Perrott, Colt Technology Services

We help companies implement a systematic approach to grow leads and revenue. Book a call today.

