GEO Cheat Sheet: How to Audit Your AI Search Visibility
The first step of tackling AI Search is figuring out where (and how) your brand is showing up.
Want to see a crazy stat? Semrush projects that LLM traffic will completely overtake traditional Google Search by 2028.
The driving force here isn’t just ChatGPT, which is still only about 1-2% of the search market. It’s Google, which is rapidly shifting to an AI Search experience by default and providing elaborate buying recommendations for long-tail queries.
Take a high-consideration B2B buying decision, like choosing a marketing automation platform. Five years ago, you’d need to sift through a maze of vendor websites and review platforms. Today, Google’s AI Overviews does the heavy lifting, delivering direct recommendations from the first query.
That’s why Generative Engine Optimization (GEO) — the practice of optimizing for AI Search — is a once-in-a-marketing-generation opportunity for CMOs. Do it well now, and you’ll have an unfair advantage over the competition. And as I wrote two weeks ago, there are three core pillars to an effective GEO strategy:
Visibility: Can LLMs see your content?
Citability: Can LLMs trust your content?
Retrievability: Can LLMs use your content?
Today, we’re going to take a deeper dive into how to get a handle on your AI Search visibility.
Cheat sheet: How we audit AI Search visibility
There’s an old dad joke about SEO: “Where’s the best place to hide a dead body? Page two of Google.” (ZING!)
AI Search ruins the bit. LLMs dig past page one — Semrush shows ~90% of ChatGPT citations come from pages that rank in positions 21+ (page three and beyond). In other words, nothing stays buried. Tony Soprano would not be pleased.
As ChatGPT would say, “In the rapidly evolving AI Search landscape, an entirely new paradigm has emerged.”
Mind-numbing jargon aside, the game has changed. With SEO, visibility was straightforward: a given URL held a relatively stable ranking for a given keyword. With AI, it’s trickier. LLMs serve a different answer every time and increasingly personalize results to each user.
While roughly two-thirds of your SEO strategy will translate to GEO, the remaining one-third will separate the winners from the losers. Most brands have no idea how or why they’re appearing in AI Search, and the first step to developing a strong AI Search strategy is to shed that shroud of darkness and step into the light.
At Pepper, we have a simple methodology for AI visibility that you can steal:
First, audit your brand’s visibility across a wide array of relevant prompts to understand where you’re showing up.
To build an effective strategy, you first need a clear picture of how you’re showing up in AI Search. Since generative AI tools are non-deterministic — the same prompt can return a different answer to each user each time — this is not easy! We’re used to static keyword rankings.
At Pepper, we use a combination of homegrown and third-party tools to run thousands of relevant prompts. The hard part is figuring out which prompts to run, since AI Search queries tend to be much more elaborate than traditional Google searches, phrased more like complex questions. One method that works for us is using a proxy to scrape Google’s People Also Ask feature, which helps uncover the most common long-tail and exploratory queries tied to your core topics. From there, we expand the list with “who,” “what,” “when,” “where,” “why,” and “how” questions connected to a brand’s keywords. This ensures coverage across both branded and category-level queries.
To enrich this dataset, we also pull insights directly from our sales and customer success teams. The questions they hear most often become valuable input into the prompt set — real-world signals of what buyers and users actually want to know.
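As a rough illustration of that “who/what/when/where/why/how” expansion, here’s a minimal Python sketch. The templates, seed keyword, and brand name are all hypothetical placeholders — this is the idea, not Pepper’s actual tooling:

```python
# Expand seed keywords into question-style AI Search prompts,
# covering both category-level and branded queries.
TEMPLATES = {
    "who":   "who are the leading {kw} providers?",
    "what":  "what is the best {kw} for a mid-sized team?",
    "when":  "when does it make sense to invest in a {kw}?",
    "where": "where can I compare {kw} options?",
    "why":   "why do companies switch {kw} vendors?",
    "how":   "how do I choose a {kw}?",
}

def expand_prompts(keywords, brand=None):
    """Build a list of exploratory prompts from seed keywords."""
    prompts = []
    for kw in keywords:
        for template in TEMPLATES.values():
            prompts.append(template.format(kw=kw))
        if brand:
            # Branded variants sit alongside the category-level questions
            prompts.append(f"is {brand} a good {kw}?")
            prompts.append(f"how does {brand} compare to other {kw} vendors?")
    return prompts

prompts = expand_prompts(["marketing automation platform"], brand="Acme")
print(len(prompts))  # 6 question templates + 2 branded variants = 8
```

In practice, each seed keyword multiplies quickly — which is how a modest keyword list turns into thousands of prompts to run.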
Next, analyze performance across your competitive set.
To understand why certain competitors show up in AI Search and you don’t, you need to look at the signals LLMs treat as trust markers. At Pepper, we track three in particular: brand mentions, site mentions, and citations.
Brand mentions indicate whether your company is part of the broader category conversation — if competitors are being named more often, they’re more likely to be surfaced by AI. GEO is kind of like SEO meets PR. LLMs weigh established media websites like Wikipedia, Forbes, Reddit, and Business Insider very highly. How your brand is mentioned — and in what context — is often significantly more influential than the content on your own website.
Site mentions reinforce domain authority by signaling repeated, consistent recognition of your website across the web. And citations from credible outlets or review sites act as validation loops: The more often trusted sources reference you, the more likely you are to appear in AI-generated answers.
Then, build out a custom GEO Source Weightage Table.
Next, you need to determine not only where your brand is surfacing but also which types of sources carry the most weight in your category.
At Pepper, we do this by analyzing more than 50,000 prompts across multiple LLMs to understand which categories of sources have the greatest influence on visibility. For each citation, we classify the underlying sources into buckets — PR (news media), community forums (like Reddit and Quora), analyst sites (G2, Gartner, etc.), brand websites, and more.
The outcome is a Source Weightage Table that serves as a roadmap for improving AI Search visibility. Basically, these are the sites where you need to try to get more mentions, whether that’s through a review strategy, guest op-eds, or strategically answering user questions on Reddit and Quora.
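Building the table itself is mostly counting. Here’s a minimal Python sketch of the idea — the bucket labels match the ones above, but the sample citations and function name are illustrative, not Pepper’s actual pipeline:

```python
from collections import Counter

def source_weightage(citations):
    """citations: list of (domain, bucket) pairs pulled from classified
    LLM answers. Returns each bucket's share of total citations,
    sorted from most to least influential."""
    counts = Counter(bucket for _domain, bucket in citations)
    total = sum(counts.values())
    return sorted(
        ((bucket, round(n / total, 2)) for bucket, n in counts.items()),
        key=lambda pair: pair[1],
        reverse=True,
    )

sample = [
    ("reddit.com", "community forums"),
    ("g2.com", "analyst sites"),
    ("forbes.com", "PR"),
    ("reddit.com", "community forums"),
]
print(source_weightage(sample))
# [('community forums', 0.5), ('analyst sites', 0.25), ('PR', 0.25)]
```

Run at scale across tens of thousands of prompts, those shares tell you which bucket — forums, analyst sites, news media — deserves the bulk of your outreach effort in your category.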
Finally, audit your brand’s content infrastructure to identify potential gaps.
Just like in traditional SEO, the first big question is whether a brand’s challenge comes from a lack of content or from the way content is structured. Here, we’re looking at things like schema markup, JSON-LD, llms.txt, FAQs, and machine-readability.
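One of these checks — whether a page carries valid JSON-LD — is easy to script. A rough Python sketch, assuming a simple regex pass is good enough for an audit (the sample page and helper name are hypothetical; a production check would use a real HTML parser):

```python
import json
import re

def extract_json_ld(html):
    """Pull JSON-LD blocks out of a page's raw HTML."""
    pattern = r'<script[^>]*type="application/ld\+json"[^>]*>(.*?)</script>'
    blocks = []
    for match in re.findall(pattern, html, flags=re.DOTALL | re.IGNORECASE):
        try:
            blocks.append(json.loads(match))
        except json.JSONDecodeError:
            pass  # malformed block: itself a gap worth flagging in the audit
    return blocks

page = """
<html><head>
<script type="application/ld+json">
{"@context": "https://schema.org", "@type": "FAQPage", "name": "Pricing FAQs"}
</script>
</head><body>...</body></html>
"""
schemas = extract_json_ld(page)
print([s["@type"] for s in schemas])  # ['FAQPage']
```

A page with zero blocks — or blocks that fail to parse — is an immediate candidate for the structured-data fixes discussed above.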
With this analysis in place, you can identify the big gaps and create a strategy for improving visibility. This game plan typically involves everything from the above-mentioned PR push to technical fixes to refreshing content so that it’s designed to be cited and retrieved by AI — which we’ll dig into in more depth next week.
In the meantime: If you really want to get ahead of the competition, snag a spot at the first-ever global GEO Summit on October 1: Index ‘25. You’ll hear from marketing and search leaders from McKinsey, Zoom, Meta, and Salesforce. We’ll be hosting hands-on workshops to revamp your AI Search strategy and give you practical playbooks you can put into action immediately.
Best of all? It’s free for AI Native subscribers. Just use promo code AINATIVE when you register.
Joe Lazer is the best-selling author of The Storytelling Edge and fractional CMO at Pepper.
AI news piles up faster than your unread Slack DMs. Here’s the recap from the past week.
Perplexity made a bold, all‑cash bid to buy Google Chrome, a move that would plug it into the popular browser’s 3+ billion user base. Unclear if the offer is a genuine strategic play or simply a PR stunt.
Publishers are feeling the heat. Over the past eight weeks, median year‑over‑year referral traffic from Google Search has dipped by 10%, a signal that AI‑generated summaries are reshaping the search-to-page‑view pipeline.
Google’s AI suggestions could steer you straight into a scam. Bad actors are early adopters of AI Search, manipulating it to surface phishing scams and “helpful” contact info.
A new survey reveals nearly three‑quarters of patients use AI tools to find medical providers, with about one-third crediting LLMs like ChatGPT with swaying their choice.
OpenAI just rolled out ChatGPT Go, a budget-friendly tier priced under $5/month. It’s launching exclusively in India for now, but will likely expand in the future.
OpenAI is now tweaking GPT-5 to be “warmer and friendlier” after a wave of user feedback over the new model’s “cold” outputs. Seems like users are looking for less HAL 9000, more sycophantic intern.
Claude (specifically Opus 4 and 4.1) now has the ability to end “harmful or abusive” conversations on its own — not to protect users, but for the model’s own welfare.
Sam Altman confirms suspicions about AI’s impact on the markets: Yes, we’re in an AI bubble. Maybe. He believes there’ll be a correction… just not for OpenAI.
Google’s Index Still Calls the Shots
Recently, technical marketer Chris Lever shared a few helpful insights into how AI platforms pull from websites and what that means for AI Search visibility.
One of his most important points is simple yet often overlooked: If Google can’t index your content, it won’t show up in Gemini, AI Overviews, or most other AI outputs. That’s because many chatbots lean heavily on Google’s index when generating answers, so strong rankings in Search remain the best path to inclusion in AI results.
Credit: Chris Lever on LinkedIn
The catch is that most AI crawlers don’t execute JavaScript. So even if Google can render, crawl, and rank your JS-heavy pages, those same pages may be invisible to AI models. The safest move is to make sure your most important pages are readable as plain HTML — using techniques like server-side rendering or pre-rendering — so both search engines and AI tools can reliably use them.
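You can approximate what a non-JS crawler sees with a crude tag-stripping pass. A minimal Python sketch — the sample pages are hypothetical, and a real audit would compare the rendered DOM against the raw response with a proper HTML parser:

```python
import re

def visible_text(raw_html):
    """Crude view of what a non-JS crawler gets: drop scripts and
    styles, then strip the remaining tags."""
    no_scripts = re.sub(r"<(script|style)[^>]*>.*?</\1>", " ", raw_html,
                        flags=re.DOTALL | re.IGNORECASE)
    text = re.sub(r"<[^>]+>", " ", no_scripts)
    return " ".join(text.split())

# A client-side app shell vs. a server-rendered page with the same content
js_shell = '<html><body><div id="root"></div><script src="app.js"></script></body></html>'
ssr_page = "<html><body><h1>Pricing</h1><p>Plans start at $49/month.</p></body></html>"

print(visible_text(js_shell))  # "" — nothing for an AI crawler to read
print(visible_text(ssr_page))  # "Pricing Plans start at $49/month."
```

If the stripped-down view of a key page comes back empty, that page is effectively invisible to most AI models regardless of how well it ranks.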
AI’s not taking all the jobs — some of the best ones are still up for grabs. Here are a few opportunities in AI and/or content that caught our eye.
Data Reporter @ GiveDirectly (New York, NY, salary range $75,000 - $108,000 USD)
Product Marketing Copywriter @ Stripe (remote, salary range $138,500 - $207,800)
AI Visual Content Producer @ Newsweek (remote, salary range $60,000 - $70,000)