Secture & Code

From SEO to AI-first search: how to position your brand on AI-generated answers (Part 1)


SEO isn't what it used to be. Google's AI Overviews, formerly known as Search Generative Experience (SGE), are transforming the way users interact with search results (AI-first search).

With 2 billion monthly users seeing these AI-generated responses and ChatGPT processing 37.5 million queries daily, the days of simple link lists are gone.

For years the big battle was to gain rankings on Google. Now, AI-generated summaries, powered by models like Gemini 2.0, appear directly at the top of the results page (SERP), reducing the CTR of the number-one position by a devastating 34.5%.

Today, users also ask AI assistants such as ChatGPT, Perplexity or Claude directly. And the big question is: how do I get my brand to appear in the answers given by an AI? This fundamental change means that SEO strategies must evolve towards what we call AI-first search.

Before we begin, let's ask ourselves the easy question:

What is SEO for AI?

When a person queries an AI model, it doesn't search in real time: it draws on its training corpus, indexed documents and search APIs. If your content is not clear, reliable and easy to interpret, the AI will simply ignore it.

💡 A training corpus (the massive set of texts, data and documents used to train an AI model) has a specific cut-off date that varies by platform: ChatGPT's knowledge extends to June 2024, while Claude's goes to October 2024.

But here comes the surprise that many SEO professionals have yet to digest: contrary to popular belief, ChatGPT, Claude and Perplexity have all had real-time web search capabilities since 2024-2025. Claude added this functionality in March 2025, ChatGPT integrated it with SearchGPT and Perplexity has had it since launch. This means that your fresh, up-to-date content can appear in AI responses, even if it was published yesterday.

The third source is proprietary APIs and databases, such as the Bing API used by ChatGPT or the PerplexityBot crawler that builds its own database. And here comes the data that should make you rethink your whole strategy: 95% of AI citations do not correlate with traditional SEO metrics like traffic or backlinks.

In fact, sites with just 1-9 backlinks average 2,160 citations, while those with more than 10 backlinks only reach 681. It's an upside-down world where the rules we used to know no longer apply.

In other words: you no longer optimize just for Google; now you have to think about how an AI understands, trusts and recommends your brand, and includes you in its overviews when it returns an answer.


What are AI overviews?

These are descriptions that synthesize information from multiple sources to provide concise, conversational answers. Their primary goal is to save users time by providing quick, detailed answers to complex questions. While a solid SEO foundation is still advantageous, source selection for these descriptions does not rely solely on traditional organic SEO.

The numbers are stark, and somewhat frightening for those who make their living from traditional SEO. AI Overviews currently appear in 30% of U.S. searches and 18% globally, with monthly growth of 72% showing no signs of slowing.

When these appear, the average organic CTR drops by 15.49%, and, most worryingly, 60% of Google searches now result in zero clicks. The user gets their answer directly in the SERP and does not need to visit any website.

What makes this even more complex is understanding what types of searches trigger these responses. 99.2% of AI Overviews triggers are informational queries, not transactional or navigational. Responses average 5,337 characters in length and can cite up to 95 different sources, creating a comprehensive synthesis that leaves little incentive to click on traditional organic results.

Key factors that drive inclusion in AI Overviews:

  • Content quality and contextual relevance: The content must directly and comprehensively satisfy the underlying intent of the user's query. AI prefers what it can “copy and paste” as a direct response. Use short paragraphs, numbered lists and clear headings.

    Forget about repeating the same keyword 50 times. Models understand synonyms, relationships and context. Work with semantic clusters: for example, instead of only “SEO in AI”, also talk about “AI-first search”, “content optimization for ChatGPT” or “positioning in AI-generated results”.
  • E-E-A-T Principles (Experience, Expertise, Authoritativeness and Trustworthiness): This framework is crucial for evaluating the quality of content and its creators, especially for certain topics. A high E-E-A-T translates into higher AI confidence, which makes content more likely to be selected as reliable information.

    How do we achieve this? Citing reliable sources. Publish case studies, internal papers, technical benchmarks or your own insights. The more solid your content is, the more likely it is to be cited.
  • Use of structured data and clear content formatting: Schema-structured data helps AI to accurately extract and present details. In addition, clear content formatting and a focus on user intent are critical for AI systems to use information effectively.

    Many AIs are updated with recent data via the web. Publish fresh, versioned content (e.g., 2025 guides, annual trends).
  • Constant experiments: the usual trial and error. Each model has different biases. What works in Perplexity does not work the same way in Gemini or ChatGPT.

    Test and document which prompts return your brand. You should check if your brand appears and measure mentions in Perplexity, ChatGPT Browsing or Gemini. You can also track clicks coming from generative summaries (Google SGE already gives some data in Search Console in beta).
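The structured-data point above can be illustrated with a minimal sketch. This builds the kind of schema.org FAQPage markup that AI systems can extract cleanly; the question, answer and values below are placeholders, not a prescription:

```python
import json

# Minimal sketch of FAQPage structured data (schema.org).
# The question and answer text are illustrative placeholders.
faq_schema = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [
        {
            "@type": "Question",
            "name": "What is SEO for AI?",
            "acceptedAnswer": {
                "@type": "Answer",
                "text": "Optimizing content so AI systems can understand, trust and cite it.",
            },
        }
    ],
}

# Serialize to the JSON-LD you would embed in a
# <script type="application/ld+json"> tag in the page's <head>.
print(json.dumps(faq_schema, indent=2))
```

The point is the shape, not the tooling: explicit types and short, self-contained answers give a crawler an unambiguous question-answer pair to lift into an AI-generated response.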


How to optimize for each AI platform

Each AI platform has its own preferences and peculiarities that you must understand in order to optimize effectively:

  • ChatGPT with SearchGPT shows a clear preference for long, detailed content, typically more than 1,500 words, with citation-ready paragraphs of between 60 and 100 words that can be easily extracted and cited. Interestingly, it disproportionately favors older domains: 45.8% of its citations come from sites more than 15 years old.

    Wikipedia dominates with 1.3 million citations, suggesting that ChatGPT values established authority and encyclopedic content. To optimize for ChatGPT, FAQ sections, detailed how-to guides and TL;DR (Too Long; Didn't Read) summaries at the beginning of long articles have proven particularly effective.
  • Perplexity AI takes a completely different approach. It uses multiple models simultaneously, including GPT-4o, Claude and Gemini, allowing it to provide more nuanced answers. This platform significantly prioritizes visual content: articles with original graphics, charts and infographics see a 40% increase in visibility.

    Reddit is its favorite source, with 3.2 million citations, reflecting its preference for user-generated content and authentic discussions. Its proprietary crawler, PerplexityBot, builds an independent database, which means it does not rely solely on third-party indexes.
  • Anthropic's Claude, which launched web search capabilities in March 2025, shows a marked preference for well-structured technical documentation and professional content. Its ability to process context windows of more than 200,000 tokens allows it to analyze extremely long and complex documents. A unique feature is its Model Context Protocol (MCP), which gives companies granular control over how Claude accesses and uses their content.
  • Google Gemini and AI Overviews are more agnostic about sources, but show a clear preference for YouTube and multimedia content. The current integration with Gemini 2.0 and the forthcoming upgrade to Gemini 2.5 promise even more sophisticated contextual understanding and information-synthesis capabilities.

The E-E-A-T framework is critical for AI

Experience, Expertise, Authoritativeness and Trustworthiness have evolved from being simple quality guidelines to become critical signals for content selection by AI systems.

Google has been explicit in stating that “untrusted pages have low E-E-A-T no matter how Experienced, Expert or Authoritative they are”, placing Trust as the central pillar of the entire framework.

  • For Experience, AIs look for demonstrable first-hand knowledge, not theories or compilations. Case studies with real metrics, original research with a clear methodology and proprietary benchmarks not found elsewhere are pure gold for AI algorithms. It is not enough to say you have experience; you must show it with concrete examples, your own data and specific situations you have faced.
  • Expertise is evaluated in a more traditional but equally rigorous manner. Clear author biographies with verifiable credentials, contributions from recognized experts in the field, and a consistent publication history on the topic are critical. AIs can cross-reference sources to verify whether an author really is who they claim to be and has the expertise they claim.
  • For Authoritativeness, AIs look beyond the individual site. Your presence in Google's Knowledge Graph, referrals and citations from other trusted sources, and recognition as an industry leader on multiple platforms all contribute to this signal. It's a digital-reputation game where every mention counts.
  • Trustworthiness, the central component, requires verifiable factual accuracy, with cited sources and external links to studies and data. Transparency about authorship, content-creation methods and any potential conflicts of interest is rigorously evaluated.

Interestingly, Google accepts AI-generated content as long as it demonstrates these E-E-A-T qualities, although it faces significantly more scrutiny in terms of factual verification.


Semantic clusters replace keywords

The understanding of semantic clusters has evolved beyond simple keyword research. Modern models use RAG (Retrieval-Augmented Generation), which combines information retrieval with text generation.

GraphRAG adds another layer with hierarchical knowledge graph extraction. Vector embeddings transform your text into multidimensional numerical representations that capture deep semantic meaning.

Vector databases such as Weaviate and Pinecone allow semantic search at impressive speeds. This means that repeating keywords is counterproductive.

Instead of hammering “SEO for AI” fifty times, you need to work with full semantic clusters: AI-first search, optimization for ChatGPT, ranking in generative results, GEO (Generative Engine Optimization), AEO (Answer Engine Optimization) and all the natural variations a human would use to describe the concept.

AI models don't look for keyword density; they look for deep contextual understanding. An article that explores all facets of a topic, using natural and varied language, will be infinitely more valuable than one optimized for a specific keyword. It's a fundamental shift in how we think about content optimization.
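As a rough illustration of why keyword density no longer matters, here is a toy sketch that compares texts by cosine similarity of word-count vectors. Real AI search stacks use learned neural embeddings (the multidimensional vectors mentioned above), not raw counts, but the comparison mechanics are the same:

```python
from collections import Counter
import math

def cosine_similarity(a: str, b: str) -> float:
    """Cosine similarity between word-count vectors of two texts.

    A toy stand-in for learned vector embeddings: production systems
    embed text with a neural model before comparing vectors like this.
    """
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    # Dot product over the words the two vectors share.
    dot = sum(va[w] * vb[w] for w in set(va) & set(vb))
    norm = (math.sqrt(sum(c * c for c in va.values()))
            * math.sqrt(sum(c * c for c in vb.values())))
    return dot / norm if norm else 0.0

# Phrases from the same semantic cluster score high even with no
# exact keyword match; unrelated text scores zero.
print(cosine_similarity("optimization of content for ai search",
                        "ai search content optimization"))  # high
print(cosine_similarity("optimization of content for ai search",
                        "chocolate cake recipe"))           # 0.0
```

With learned embeddings the effect is stronger still: “AI-first search” and “positioning in AI-generated results” land close together in vector space despite sharing almost no words, which is exactly why varied, natural phrasing beats repetition.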

The real impact on different industries

Not all industries are being impacted equally and understanding where your sector stands is
crucial to gauge the urgency of change.

  • Online dating and relationships are seeing more than half of their searches covered by AI Overviews.
  • B2B business and consulting face almost 40% of coverage.
  • The food and beverage industry, with its recipes and recommendations, sees more than a third of its searches transformed.
  • On the other hand, e-commerce has experienced a dramatic collapse, dropping from 29% to 4% of
    coverage. This might seem like good news for retailers, but in reality it reflects that Google
    is protecting its transactional search advertising revenues. Local searches,
    with a mere 0.14% of coverage, suggest that Google Maps and traditional local results
    maintain their dominance for the time being.
  • What all sectors have in common is that informational queries are the most affected. If your business relies on educational content to attract customers at the beginning of the funnel, you need to adapt urgently. If you rely primarily on direct transactional searches, you have more time, but don't get complacent: the trend is clear and accelerating.

What we are witnessing is not a gradual evolution of SEO but rather a complete revolution in
how information is discovered and consumed online.

Rules that worked for two decades are being rewritten in real time. 95% of AI citations do not follow traditional SEO logic. Sites with less traditional authority are outperforming established giants. Content is being evaluated for actual usefulness, not technical optimization.

In this first part, with the assistance of Antonio Romero, we have explored the what and why of the change. In the second part, which we will publish in a few weeks, we will cover the practical implementation. In the meantime, you can read other posts on our blog.

CMO

Raquel Pérez

Marketer who doesn't know how to code.
