How to Get LLMs to Remember Your Brand

This is the LinkedIn edition of AI Native, the newsletter for marketers who want to win the AI Age. Subscribe on Substack to follow along with the latest AI Search insights and strategies.

We’re undercounting the impact of AI Search in a way that makes my head hurt.

Over the past few days, everyone in my feed has been sharing a new report from BrightEdge. It shows that while AI Search traffic is growing rapidly, it remains less than 1% of all referral traffic.

[Chart: BrightEdge data on AI Search's share of total referral traffic]

Every few weeks, one of these reports comes out, and everyone grabs their megaphone to scream that AI Search is overrated. And that’s a fair instinct! Our collective tendency is to hype every new thing into oblivion.

I’m flabbergasted, though, that everyone ignores one very important caveat: Referral traffic data doesn’t account for the biggest AI Search platform of them all.

That would be Google. AI Overviews — which dominate the top half of most results — and AI Mode are both powered by Gemini, Google’s LLM. The issue? When people click a link in AI Overviews or AI Mode, it looks like it’s coming from traditional organic search. But really, that’s all AI Search traffic. We’re just not counting it correctly.

That’s why I remain bullish on investing early in optimizing for AI Search. The impact is much larger than we realize, and we’re clearly headed for a world in which Google is just an AI Search platform. Smart marketing leaders will seize the opportunity and develop a strong AI Search strategy. The added benefit, of course, is that AI Search and traditional SEO are intertwined, and everything we’re recommending in this series will benefit your SEO as well.

In this week’s newsletter, we’re breaking down the third pillar of our AI Search framework:

  • First, we covered visibility, revealing how to ensure that LLMs can see your brand and mention you.
  • Next, we covered citability, the art of getting AI to cite and link to your brand’s content across platforms.
  • Now, it’s time to get real wonky with retrievability, the science of getting AI to recall your brand at the right moment from the LLM’s retriever system.

Think of it this way: Visibility is about getting AI to mention you. Citability is about getting AI to link to you. But without retrievability, neither matters. Think of retrievability as the modern equivalent of “indexability” from traditional SEO, with a twist.

With retrievability, you’re trying to get the LLM to remember you so that it recalls your brand, product, or entity as part of its knowledge base. Without it, you risk being inconsistently surfaced, confused with competitors, or forgotten entirely by the model.

How AI retrievability works

Warning: This section is super wonky. If you just want to know what steps to take, skip to the next section. But if you want to gain a stronger technical understanding of retrievability, read on.

LLMs like ChatGPT, Perplexity, Gemini, and Claude rely on a Retrieve-Then-Generate (RTG) pipeline. This has four components:

  1. Query Embedding

The engine turns the user’s words into a dense vector (an embedding) that captures meaning, not just exact keywords. The LLM is basically asking, “What did they really ask?”

This is similar to Google’s contextual search. It’s why “pricing tiers for HubSpot Analytics” will still match “Acme plans & costs.”
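To see what that looks like in practice, here's a minimal sketch using the open-source sentence-transformers library as a stand-in. The real engines use their own proprietary embedding models, so treat the scores as illustrative only.

# Minimal sketch of query embedding, with sentence-transformers standing in
# for the proprietary embedding models real engines actually use.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # small general-purpose embedder

query = "pricing tiers for HubSpot Analytics"
page_copy = "Acme plans & costs"
unrelated = "Best hiking trails near Denver"

# Encode each string into a dense vector, then compare by cosine similarity.
vectors = model.encode([query, page_copy, unrelated])
print("query vs. pricing copy:", util.cos_sim(vectors[0], vectors[1]).item())
print("query vs. unrelated:   ", util.cos_sim(vectors[0], vectors[2]).item())
# The pricing copy scores noticeably higher even though it shares almost no keywords.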

  2. Retriever Selection

The LLM’s retriever fans out across several indexes: web pages, trusted reference sets (like Wikipedia), structured catalogs/APIs — even private data that you’ve given a tool like ChatGPT access to — in order to figure out the best answer.

Think of it like a librarian pulling information from many shelves. It grabs the top snippets/entities that might answer the question, then uses search to add fresh, external facts that the model doesn’t store in its core training data.
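Here's a toy version of that fan-out. The three "indexes" and their scores are invented stand-ins; real retrievers query web, reference, and catalog indexes at enormous scale.

# Toy fan-out retriever: query several stand-in indexes, keep the best snippets.
# The indexes and scores below are invented for illustration.
from typing import Callable

def make_index(snippets: dict) -> Callable:
    """Return a toy search function (it ignores the query and returns canned snippets)."""
    def search(query: str):
        return sorted(snippets.items(), key=lambda kv: kv[1], reverse=True)
    return search

indexes = {
    "web":       make_index({"Acme pricing page": 0.71, "Forum thread on Acme": 0.40}),
    "reference": make_index({"Acme entry in a knowledge base": 0.83}),
    "catalog":   make_index({"Acme product listing": 0.65}),
}

def retrieve(query: str, k: int = 3):
    """Fan out across every index, then keep the top-k snippets overall."""
    candidates = [
        {"source": name, "text": text, "score": score}
        for name, search in indexes.items()
        for text, score in search(query)
    ]
    return sorted(candidates, key=lambda c: c["score"], reverse=True)[:k]

print(retrieve("What does Acme cost?"))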

  3. Reranking

The model then scores those results by relevance, trust, and recency. It’s basically double-checking, “What’s most likely to answer the question the user is asking?”
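As a sketch, a reranker might blend those three signals into a single score. The weights and the one-year recency window below are my own illustrative assumptions, not anything the engines publish.

# Toy reranker: blend relevance, source trust, and recency into one score.
# The 0.6 / 0.25 / 0.15 weights and the 365-day window are illustrative assumptions.
from datetime import date

def rerank(candidates, today=date(2025, 9, 15)):
    def score(c):
        recency = max(0.0, 1 - (today - c["published"]).days / 365)
        return 0.6 * c["relevance"] + 0.25 * c["trust"] + 0.15 * recency
    return sorted(candidates, key=score, reverse=True)

candidates = [
    {"text": "Acme pricing page", "relevance": 0.71, "trust": 0.9, "published": date(2025, 6, 1)},
    {"text": "Old forum thread",  "relevance": 0.80, "trust": 0.4, "published": date(2022, 1, 10)},
]
print([c["text"] for c in rerank(candidates)])
# The fresher, more trusted pricing page outranks the slightly "more relevant" stale thread.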

  4. Generation

Finally, the LLM synthesizes an answer using these top-ranked sources and data. If you fail at step two, the retriever selection, you’re essentially invisible at this stage.

How to boost your brand’s retrievability

Step 1: Entity Mapping

The first step is figuring out what the AI thinks exists. Regular readers of AI Native will note that we went deep on Entity Mapping in our Visibility Guide, since it applies to visibility as well. But here’s a quick reminder of the key elements in this process:

  1. Audit your entities: List all relevant brand, product, service, and category names.
  2. Check official sources: Use Google’s Knowledge Graph Search API, Wikidata, Crunchbase, and LinkedIn to confirm whether your entities are formally recognized.
  3. Run retrieval tests: Prompt ChatGPT, Gemini, Perplexity, and Claude with questions like “What is [Brand]?” or “Who makes [Product]?”. Note whether you appear, how you’re described, and which sources are cited.
  4. Identify gaps: Document where you’re missing (e.g., your company doesn’t exist in Wikidata) and where competitors are present.

Key action: Create a spreadsheet of your core entities and track which sources (Knowledge Graph, Wikipedia, Crunchbase, Reddit, etc.) currently reference them. That’s your retrievability baseline.
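If you'd rather script the "check official sources" step than click around, here's a minimal sketch against Google's Knowledge Graph Search API. You'll need your own API key, the entity names are placeholders, and the response fields reflect Google's public docs at the time of writing.

# Minimal sketch: check whether your entities exist in Google's Knowledge Graph.
# Requires your own API key; entity names are placeholders.
import requests

API_KEY = "YOUR_API_KEY"
ENDPOINT = "https://kgsearch.googleapis.com/v1/entities:search"

def check_entity(name: str) -> None:
    resp = requests.get(ENDPOINT, params={"query": name, "key": API_KEY, "limit": 3})
    resp.raise_for_status()
    items = resp.json().get("itemListElement", [])
    if not items:
        print(f"{name}: NOT FOUND in the Knowledge Graph")
        return
    for item in items:
        result = item.get("result", {})
        print(f"{name}: {result.get('name')} -- {result.get('description', 'no description')} "
              f"(score {item.get('resultScore')})")

for entity in ["Acme Analytics", "Acme Analytics Pro"]:  # your brand and product names
    check_entity(entity)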

Step 2: Canonicalization

AI engines hate contradictions. If your headcount is listed as 2,000 on LinkedIn but 800 on Crunchbase, retrievers may suppress you altogether for that query.

  • Inventory your facts: Collect key data points — founding year, HQ location, employee count, funding, product specs, customers, key ROI data points, and how your company is described.
  • Audit across sites: Compare LinkedIn, Wikipedia, Crunchbase, press releases, and your website.
  • Get on the same page: Update every profile, listing, and page to match your canonical facts.

Key action: Create a single source of truth that your team uses when creating any new content or web pages. Create a process for updating every site consistently on a quarterly or semiannual basis.
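One lightweight way to make that single source of truth enforceable is a machine-readable fact sheet plus a consistency check. A rough sketch, with invented facts and listings:

# Rough sketch of a "single source of truth" fact sheet plus a contradiction check.
# All facts and per-site values below are invented for illustration.
CANONICAL_FACTS = {
    "founding_year": "2018",
    "hq_location": "Austin, TX",
    "employee_count": "2,012",
}

# What each external profile currently says (gathered manually or by scraping).
LISTED_FACTS = {
    "LinkedIn":   {"employee_count": "2,012", "hq_location": "Austin, TX"},
    "Crunchbase": {"employee_count": "800",   "founding_year": "2018"},
}

def find_contradictions():
    issues = []
    for site, facts in LISTED_FACTS.items():
        for field, value in facts.items():
            if value != CANONICAL_FACTS.get(field):
                issues.append(f"{site}: {field} is '{value}', should be '{CANONICAL_FACTS[field]}'")
    return issues

for issue in find_contradictions():
    print(issue)  # e.g. Crunchbase: employee_count is '800', should be '2,012'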

Step 3: Embedding Optimization

Retrievers work by comparing vector embeddings: numerical representations of text (and other complex data, like images and audio) that make it possible to measure similarity. If your content blurs into competitors’, you’ll lose the recall lottery.

  • Rewrite anchor facts: Use short, declarative sentences that are unique and easy to embed. (Weak example: “Our company has grown significantly and now employs over 2,000 people.” Strong example: “[Brand] employs 2,012 people as of 2025.”)
  • Increase semantic density: Cluster related entities together (like brand + product + category) in the same passage so retrievers connect them.
  • Cut filler: Edit out filler language around your key facts.

Key action: Rewrite your About page and product pages with fact-first sentences. Give ChatGPT a Deep Research task to compare them to competitors and evaluate whether they’re distinct enough.
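If you want a rough self-check before handing it to ChatGPT, the same kind of embedding comparison from the pipeline section works here too. A sketch with made-up copy; the 0.8 threshold is a rule of thumb, not a published standard.

# Sketch: check whether your anchor copy blurs into a competitor's.
# The copy is invented; the 0.8 threshold is an arbitrary rule of thumb.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

ours = "Acme Analytics employs 2,012 people as of 2025 and sells a self-serve analytics suite."
theirs = "Globex provides an analytics platform for mid-market teams."

ours_vec, theirs_vec = model.encode([ours, theirs])
similarity = util.cos_sim(ours_vec, theirs_vec).item()
print(f"similarity to competitor copy: {similarity:.2f}")
if similarity > 0.8:
    print("Too close -- sharpen your anchor facts so retrievers can tell you apart.")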

Step 4: Redundancy & Refresh

One mention isn’t enough. Retrievers reward recency and repetition across trusted sites.

  • Seed key facts everywhere: Replicate anchors across Wikipedia, Crunchbase, G2, App Stores, your blog, and anywhere else that’s relevant for your industry.
  • Maintain freshness: Update these anchors quarterly with the latest numbers and milestones (press releases, blog posts, product updates).
  • Monitor for drift: Outdated data on a high-weight repository (e.g., Wikipedia) can override your own site and sink retrievability.

Key action: Choose 3–4 high-weight repositories relevant to your industry and ensure your brand facts are planted there. Set a quarterly reminder to refresh them.
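The drift check is easy to automate at a basic level: confirm your anchor facts still appear on each high-weight page. A rough sketch; the URLs and fact string are placeholders, and some sites block automated requests, so a manual spot-check is a fine fallback.

# Rough sketch: flag pages where an anchor fact no longer appears.
# URLs and the anchor fact are placeholders.
import requests

PAGES_TO_WATCH = {
    "own About page": "https://example.com/about",
    "directory profile": "https://example.com/directory/acme",
}
ANCHOR_FACT = "employs 2,012 people"

for label, url in PAGES_TO_WATCH.items():
    try:
        html = requests.get(url, timeout=10).text
    except requests.RequestException as exc:
        print(f"{label}: could not fetch ({exc})")
        continue
    status = "OK" if ANCHOR_FACT.lower() in html.lower() else "DRIFT: fact missing or outdated"
    print(f"{label}: {status}")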

Step 5: Ongoing Monitoring

You can’t improve what you don’t measure. So feel free to crib Pepper’s Retrievability Score model, which we use to evaluate how well our customers’ content will be retrieved across engines:

[Image: Pepper’s Retrievability Score model]

Then, follow these steps:

  • Run fan-out tests: Prompt 20–30 variations of the same question and measure how often your brand is recalled.
  • Iterate: If you fail, go back to redundancy, canonicalization, and embedding optimization.

Key action: If you don’t want to deal with our nerdy formula, you can measure progress by running multiple query variations weekly. If you only show up in 2/10 answers, your retrievability is weak.
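And if you’d like to script the fan-out test, here's a minimal sketch against a single engine via OpenAI's API. The brand, prompts, and model name are placeholders; you'd repeat the same loop across the other engines.

# Minimal fan-out test: ask many phrasings of one question, count brand mentions.
# Brand, prompts, and model name are placeholders; assumes OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()
BRAND = "Acme Analytics"

prompt_variations = [
    "What are the best self-serve analytics tools for mid-market SaaS teams?",
    "Which analytics platforms should a 200-person SaaS company evaluate?",
    "Recommend analytics software with transparent pricing tiers.",
    # ...extend to 20-30 phrasings of the same underlying question
]

mentions = 0
for prompt in prompt_variations:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # example model name
        messages=[{"role": "user", "content": prompt}],
    )
    if BRAND.lower() in resp.choices[0].message.content.lower():
        mentions += 1

print(f"Recalled in {mentions}/{len(prompt_variations)} answers")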

I know this is a lot. That’s why we’re hosting Index ‘25, our first-ever global AI Search Summit, on October 1, where you’ll hear from marketing and search leaders from McKinsey, Zoom, Meta, and Salesforce. We’ll also be hosting hands-on workshops to revamp your AI Search strategy and give you practical playbooks you can put into action immediately.

Attendance is free for AI Native subscribers. Just use promo code AINATIVE when you register.


NEW AND NOTEWORTHY

AI news is stacking up faster than lawsuits against Google and OpenAI. Here are the top stories to know this week.

  • A new Gartner study finds consumers aren’t sold on AI Search; more than half are wary of AI-powered results.
  • The browser wars are heating up, with brands leading the charge — see: Atlassian buying The Browser Company to “reimagine the browser for knowledge work in the AI era,” and PayPal handing out early invites to Perplexity’s Comet.
  • The publishers vs. AI Search saga continues, as Google faces lawsuits from Penske (Rolling Stone, Billboard), and Perplexity is sued by Encyclopedia Britannica and Merriam-Webster.
  • AI Mode goes multilingual, with Google rolling out its AI-powered search experience in Hindi in an effort to expand reach across India.
  • Stateside, Google gets an antitrust win. Regulators have backed off the push to separate Chrome and Android from its massive ad machine.
  • Apple wants to arm Siri for the AI era. Earlier in September, the company announced a major upgrade for Siri called World Knowledge Answers (remember when Apple used to be good at naming things?). It’s the latest effort to help Siri catch up.
  • Google says the quiet part out loud. After years of denial, the company now admits the open web is in “rapid decline.”
  • In related news, Google AI Overviews is confidently hallucinating history, claiming that Elon Musk’s DOGE was a figment of our collective imagination. Looks like that “don’t be evil” mantra really panned out as planned.


UPCOMING EVENT: AI-SEO CROSSOVER WEBINAR SERIES


Join an exclusive conversation with CMOs from Samsara, Cequence, and Ironclad as they discuss how AI is reshaping brand visibility and customer engagement. This session will dive into the shift from traditional marketing playbooks to AI-driven strategies, and what it means for building trust, relevance, and growth in an AI-first world. The panel will share bold ideas, practical insights, and proven approaches to future-proof your brand.

Event details:

  • 🗓 Date: September 23, 2025
  • 🕚 Time: 8:00 - 9:15 PT
  • 🌐 Platform: Zoom

Sign up here to attend live or get the recording on demand.


MEME OF THE WEEK

[Meme image]
Image credit: Mark Williams-Cook on LinkedIn

This is the LinkedIn edition of AI Native, the newsletter for marketers who want to win the AI Age. Subscribe on Substack to follow along with the latest AI Search insights and strategies.


