What is Adobe LLM Optimizer and Do You Need It?
Almost every search interaction consumers make these days has its input or output shaped by an LLM, and that is FUNDAMENTALLY changing how websites need to present their content. These large language models (LLMs), and the advanced orchestration around them, try to make sense of your website content so they can present the answers your potential customers are searching for. But is your content organized and presented in such a way that the robots can actually use it?
That's precisely what we dive into and experiment with in this podcast: what Adobe's LLM Optimizer is, what problems it solves, and whether you need it at all.
And because I've got a natural distrust of fancy "sample sites" that synthesize data and results, in this episode we ran the experiments and showed results from our own website - so this is live, real data - in some cases damningly so. :)
Also available on Apple Podcasts and as an audio or video podcast on Spotify.
LLM Optimization: Basics & Our Experiment
There are a few main pieces to discuss in this topic:
- How do you find out what people are searching for in LLMs that MIGHT lead them to your content?
- How do you measure how many of them are typing those things?
- How do you gain insights from the interactions people are having with the various AI search tools like Google AI Mode, Perplexity, Grok, Copilot, and ChatGPT?
- How do you find out whether your content can even get picked up by LLMs? Is the content you've created even being surfaced to those search tools? THIS IS THE CRITICAL BIT
- If your content is not being surfaced to LLMs, what can you do about it?
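A quick way to start answering those last two questions yourself is to look at your page the way an LLM crawler does: a plain HTTP GET of the raw HTML, with no JavaScript execution. The sketch below is a minimal, hypothetical check (the function name, sample HTML, and phrases are mine, not from Adobe's tooling): it simply tests whether the key phrases you care about appear in the server-rendered HTML at all.

```python
def missing_from_raw_html(raw_html: str, key_phrases: list[str]) -> list[str]:
    """Return the key phrases that do NOT appear in the server-rendered HTML.

    LLM crawlers generally read the raw HTML response; anything injected
    later by client-side JavaScript is invisible to them.
    """
    lowered = raw_html.lower()
    return [p for p in key_phrases if p.lower() not in lowered]

# Demo with a placeholder response. In practice you'd fetch your own page
# (e.g. with urllib.request or curl) and pass the response body in here.
raw = "<html><body><h1>Resources</h1><div id='table'></div></body></html>"
print(missing_from_raw_html(raw, ["Resources", "Edge Delivery site list"]))
# → ['Edge Delivery site list']
```

If any phrase comes back missing, the content is being added client-side and a no-JavaScript crawler will never see it.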
This is exactly what we tried to take you through in our podcast and in our experiments. Edge Delivery Services is SO FUN to develop on: you can generate new, highly performant, highly compelling experiences in NO TIME, stitching together content from multiple sources. However, these rapid, client-side-heavy design patterns can sometimes leave CRITICAL CONTENT that LLMs CANNOT SEE.
An example that we discussed in the podcast was this Edge Delivery Services Resources page, where we use a simple sheet-based data-table approach: Adobe Edge Delivery sites are aggregated in a spreadsheet, and that data is rendered onto the page. The only problem: 100% of the SUBSTANTIVE content on the page that you'd want to surface to LLMs was COMPLETELY INVISIBLE to them. You can see this for yourself using the great AI Content Visibility Checker plugin that Adobe has released (linked below).
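To illustrate the pattern above, here is a minimal sketch (the HTML snippets and class names are hypothetical stand-ins, not the actual page markup): the server sends an empty container that a script later fills from the sheet, so extracting the visible text from the raw response shows what a no-JavaScript crawler sees versus what a browser user sees.

```python
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collect the visible text a no-JavaScript crawler would see."""
    def __init__(self):
        super().__init__()
        self.chunks = []
    def handle_data(self, data):
        text = data.strip()
        if text:
            self.chunks.append(text)

def visible_text(html: str) -> str:
    parser = TextExtractor()
    parser.feed(html)
    return " ".join(parser.chunks)

# Raw HTML as served: the table block is just an empty container that a
# client-side script later populates from a sheet (JSON) endpoint.
raw = "<main><h1>EDS Resources</h1><div class='site-table'></div></main>"

# The same page after the browser runs the script (rows are made up here):
hydrated = ("<main><h1>EDS Resources</h1><div class='site-table'>"
            "<table><tr><td>example-site.aem.live</td></tr></table></div></main>")

print(visible_text(raw))       # → EDS Resources
print(visible_text(hydrated))  # → EDS Resources example-site.aem.live
```

The heading survives in both cases, but every row of the table - the substantive content - only exists in the hydrated version, which is exactly the gap the visibility checker surfaces.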
Other points we discussed:
- 0:00 - Introduction to Adobe LLM Optimizer
- 2:30 - What is Generative Engine Optimization (GEO)?
- 5:45 - The problem: Why LLMs miss content (AEM/Edge Delivery examples)
- 9:20 - Demo: Plugging LLM Optimizer into the site/blog
- 14:00 - Running experiments – Before & after results
- 20:15 - How Edge Optimization makes content LLM-visible (PDFs, JS issues)
- 27:40 - Real test outcomes: LLM responses & citations improved
- 34:10 - Integration with CDNs and multiple LLMs (ChatGPT, Grok, Gemini)
- 40:25 - Key metrics: Readability scores, visibility insights, sentiment
- 46:50 - Who needs this? AEM Cloud users & free tier details
- 50:00 - Future of SEO/GEO and final thoughts
- 52:30 - Wrap-up & discussion highlights
Resources from the Podcast
The following resources were mentioned on the podcast:
- About Adobe LLM Optimizer
- What is Edge Optimization (Adobe Docs)
- The Adobe AI Content Visibility Checker Plugin discussed in the podcast
- Our LLMO Edge Optimization Experiment for surfacing EDS / DAM Integration
- The EDS / AEM Assets Integration & PDF List that we mentioned in the podcast
- Cedric's great Adobe DevLive talk on LLMO
Podcast Speakers
Like what you heard? Have questions about what’s right for you? We’d love to talk! Contact Us