Search is moving to LLMs

With ChatGPT recently crossing 1B searches per week, it’s clear that consumers are relying on LLMs more than ever to discover products and services.

As we’ve written about previously, the appeal for consumers is huge. Before, they may have had to navigate endless blue links, sift through hundreds of SKUs, browse multiple review sites, and eventually check out on a marketplace or brand’s website. Now, it’s easier than ever to ask a question and receive immediate, deeply contextual, and unbiased recommendations from an LLM that has done the work for them. LLMs are also starting to streamline the purchasing process, with agents transacting on behalf of users. 

This experience is more efficient and richer for consumers and, as a result, has put Google’s search dominance at risk. Publishers and brands have already reported declines in Google-driven traffic and an increase in traffic from AI sources. We believe this gap will widen as consumers increasingly adopt AI search for all their needs.

Marketers need new tools to reach consumers in LLMs

For the last two decades, marketers have relied heavily on search engine optimization (SEO) and paid ads to reach consumers searching for products on Google. As search moves to LLMs, brands will need entirely new tools to connect with those same consumers. We believe this creates three primary opportunities for the next generation of AI-native marketing tools:

  • Organic Marketing in LLMs: Brands will need to monitor, analyze, and optimize their presence within AI-generated search results to remain discoverable.
  • Data Syndication in LLMs: Brands will need to provide accurate, structured data directly to LLMs — both to improve how their products are represented and to enable autonomous purchases.
  • Performance Marketing in LLMs: As ads are integrated into the consumer AI experience, brands will need new tools to plan, place, and measure paid campaigns within AI-generated responses.

Organic Marketing: Cracking AI Engine Optimization (AIO)

A prerequisite for brands to reach customers organically within LLMs is to first understand how those models represent them. We’ve seen several organic AI marketing tools emerge recently from startups like Bluefish, Profound, Evertune, Otterly, Goodie, Geostar, Scrunch, Peec, Wilgot, Cognizo, Brandrank, Crossfill, Convertmate, GEOSurge, Athena, Brandlight, Revere AI, Gumshoe AI, and others. These platforms help brands monitor and analyze how frequently — and how favorably — they show up in AI-generated responses. Taco, by contrast, is a tech-enabled services business that combines tooling, content, technical optimizations, and citation building to deliver AI visibility as a service rather than through software alone.

Once brands understand how their products are performing in LLMs, they then need to actually improve and maintain their visibility. However, the techniques that helped brands rank at the top of a Google search, like keyword stuffing (i.e., packing content with specific keywords related to your business), don’t translate to success in AIO.

Instead of rewarding content packed with keywords, LLMs generate answers based on user intent (i.e., what users are genuinely looking for) and prioritize content that feels helpful, relevant, and directly answers the specific question being asked. Factors such as reliable sourcing, credible quotations, and relevant statistics have also proven to have an outsized impact on AI visibility.

Additionally, organic social content is becoming a core driver of AI visibility. Traditional SEO rarely accounted for conversations happening on platforms like YouTube, Reddit, or Twitter. But now, with LLMs pulling from a broader open web, that dynamic has shifted. 

AI-generated answers increasingly reflect what's being discussed in forums, comments, transcripts, and social captions — not just what's published on static websites. As a result, there’s a growing convergence between content, organic social, and video teams, all of which now shape how a brand is represented in AI summaries. In this new landscape, brands that show up where conversations are actually happening — and do so credibly — are far more likely to earn citations and visibility inside AI engines.

For example, if one were to search “Best AI Editing Tools” on Perplexity, a specific set of citations (referenceable sources) would inform the answer Perplexity delivers.
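
For teams that want to track those citations programmatically, here is a minimal sketch. It assumes Perplexity’s OpenAI-compatible chat completions endpoint and a citations field in the response; the exact request shape and field names are assumptions to verify against the current API documentation.

```python
# Minimal sketch: ask Perplexity a shopping-style question and list the
# citations (source URLs) behind its answer. Endpoint, model name, and the
# "citations" field are assumptions; check the current API docs before use.
import os
import requests

API_KEY = os.environ["PERPLEXITY_API_KEY"]  # hypothetical env var name

resp = requests.post(
    "https://api.perplexity.ai/chat/completions",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "model": "sonar",
        "messages": [{"role": "user", "content": "Best AI Editing Tools"}],
    },
    timeout=60,
)
resp.raise_for_status()
data = resp.json()

answer = data["choices"][0]["message"]["content"]
citations = data.get("citations", [])  # list of source URLs, if present

print(answer[:300])
for i, url in enumerate(citations, 1):
    print(f"[{i}] {url}")
```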

The influence of these citations varies across platforms. Perplexity and ChatGPT, for example, will assign value to and pull from different sources. Understanding what matters to each platform is increasingly important for brands to optimize their entire online presence. 

Opportunities

Some of the previously mentioned analytics-focused startups have started to offer basic optimization tools. These typically focus on improving product detail pages or tweaking content to better align with how LLMs retrieve information. While this tooling is still in its early days, we’re starting to see more sophisticated optimization strategies emerge, namely:

  • Synthetic Research: Instead of running traditional surveys, brands can now survey AI agents — modeled to behave like real people — to determine which content (e.g., product descriptions, data points, sources, etc.) resonates best with LLMs (this is detailed well in a recent post by Chemistry). It’s a faster, cheaper way to get a signal on what’s likely to perform well in AI-generated responses; a minimal sketch of the idea follows this list.
  • Sponsored Editorial Campaigns: Brands can improve their visibility/rankings by running sponsored promotions within publications that AI values highly.
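
As a concrete illustration of the synthetic-research idea above, the sketch below asks an LLM to role-play a defined buyer persona and choose between two candidate product descriptions. The persona, product names, model choice, and scoring format are all illustrative assumptions, not any specific vendor’s methodology.

```python
# Illustrative sketch of "synthetic research": have an LLM role-play a buyer
# persona and compare two candidate product descriptions. The persona, the
# fictional "Acme Edit" copy, and the model name are assumptions for
# illustration only.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

PERSONA = (
    "You are a 34-year-old freelance video editor comparing AI editing tools. "
    "You care about export quality, pricing transparency, and credible reviews."
)

description_a = "Acme Edit: AI-powered video editing with one-click exports."
description_b = (
    "Acme Edit: 4K exports in under 2 minutes, plans from $12/month, "
    "cited by three independent editing-tool roundups."
)

resp = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": PERSONA},
        {
            "role": "user",
            "content": (
                "Which description would make you more likely to shortlist the "
                f"product, A or B, and why?\nA: {description_a}\nB: {description_b}"
            ),
        },
    ],
)
print(resp.choices[0].message.content)
```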

Data Syndication: Powering Visibility and Agentic Commerce

While LLMs provide more contextual and unbiased recommendations than traditional search engines, a lot of generative AI content today still includes errors and outdated information because LLMs draw on content scraped from across the entire internet.

Much of that content is inaccurate, blocked by cybersecurity firewalls, or lacks the structure to be readable by an LLM. This creates significant risks for brands, including lost revenue opportunities and even a misrepresentation of their products and services. Providers like Google (Gemini), Perplexity, and OpenAI also differ in how their models access and use content: some pull from a trusted set of sources, while others fetch data on demand, creating wide variability in accuracy and freshness.

Brands can mitigate this risk by sending their own accurate data directly to LLMs, reducing the models’ reliance on scraping. Anthropic’s Model Context Protocol is a good example. It enables developers to build two-way connections between their data sources and Anthropic models, ensuring that products and services are represented accurately and updated in real time. 
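
As a rough sketch of what this can look like in practice, the example below uses the MCP Python SDK’s FastMCP helper to expose a small product catalog as a tool that an MCP-compatible client can query. The catalog contents and tool name are placeholders for a real connection into a brand’s systems.

```python
# Rough sketch: expose a brand's product data to MCP-compatible clients.
# Uses the MCP Python SDK's FastMCP helper; the catalog and tool name are
# placeholders standing in for the brand's live systems.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("brand-catalog")

# In practice this would query the brand's live product database.
CATALOG = {
    "acme-edit-pro": {"name": "Acme Edit Pro", "price_usd": 12.0, "in_stock": True},
    "acme-edit-team": {"name": "Acme Edit Team", "price_usd": 29.0, "in_stock": False},
}

@mcp.tool()
def get_product(sku: str) -> dict:
    """Return current name, price, and availability for a product SKU."""
    return CATALOG.get(sku, {"error": f"unknown sku: {sku}"})

if __name__ == "__main__":
    mcp.run()  # serves over stdio by default
```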

Opportunities

As more LLM providers launch similar protocols, we think there will be value in a platform-agnostic data syndication tool that can be used by both technical and non-technical users.

While reliable data syndication can help improve organic product visibility in LLMs, it will become even more important as agents start making autonomous purchases on behalf of users (i.e., agentic payments). 

We see a huge opportunity for agentic payments to disrupt the $24T digital commerce market; however, for agents to make informed purchase decisions, they will first need real-time structured inventory, pricing, and feature data that is machine-readable. The only way to ensure that data is accurate and up to date — so agents don’t make mistakes — is by connecting directly to a brand’s data sources instead of relying on scraped content.
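
One plausible shape for that machine-readable data is schema.org’s Product/Offer vocabulary serialized as JSON-LD, which the sketch below builds in Python. The SKU, price, and availability values are placeholders; a real feed would be generated from the brand’s inventory systems.

```python
# Sketch: serialize a product record as schema.org Product/Offer JSON-LD,
# one common machine-readable shape an agent could parse for price,
# availability, and features. All values below are placeholders.
import json

product = {
    "@context": "https://schema.org",
    "@type": "Product",
    "sku": "acme-edit-pro",
    "name": "Acme Edit Pro",
    "description": "AI-assisted video editing with 4K export.",
    "offers": {
        "@type": "Offer",
        "price": "12.00",
        "priceCurrency": "USD",
        "availability": "https://schema.org/InStock",
        "priceValidUntil": "2025-12-31",
    },
}

print(json.dumps(product, indent=2))
```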

Early-stage companies like Agent Ave and Bonafide are building the data-structuring tools and infrastructure that let merchants make the products on their websites accessible to, and transactable by, agents.

Performance Marketing: The Rise of the AI Ad Network

Many of the CMOs we’ve spoken with want to understand and improve their organic visibility within LLMs, but what they’re even more excited about is the opportunity to reach consumers with paid ads in AI-generated responses.

While the AI ad market is still early, large model providers like Perplexity and longer-tail apps like Liner and DeepAI have already integrated ads. Others will likely follow suit as a way to monetize their large, engaged user bases beyond subscription-only models. 

Opportunities

We think that there is a massive opportunity for a new ad network purpose-built for LLMs that enables brands to place targeted, context-aware ads directly within AI-generated responses. 

Looking back at the last two decades of marketing tech, the most value accrued to companies that monetized through ad dollars rather than software. For example, The Trade Desk ($27B, 26x EV/EBITDA) and AppLovin ($99B, 42x EV/Revenue) built massive businesses by aggregating and selling video and mobile ads, respectively. 

Beyond the historical context, what makes the AI Ad Network opportunity especially exciting is that, for the first time, AI-generated ads (tailored to a user’s intent, preferences, and the flow of conversation) can be placed within AI-generated responses. 

Just as LLMs personalize answers in real time, AI-generated ads can also be customized in real time to match the user’s needs and the context of the conversation. This unlocks a greater level of personalization and relevance, making ads feel like helpful recommendations rather than interruptions. 
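
To make the idea concrete, here is a toy sketch of context-aware ad selection: embed the conversation so far, score a small sponsored inventory by cosine similarity, and surface the closest match. Everything here is illustrative; a real ad network would add auctions, brand-safety checks, pricing, and measurement on top.

```python
# Toy sketch of context-aware ad selection inside an AI answer: embed the
# conversation, embed a small sponsored inventory, and pick the closest match.
# Model name and inventory are illustrative placeholders.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

INVENTORY = [
    "Acme Edit Pro: AI video editing with 4K export, plans from $12/month.",
    "Globex Travel: book multi-city trips with one prompt.",
    "Initech Tax: automated quarterly filings for freelancers.",
]

conversation = "User asked which AI tools can speed up editing YouTube videos."

def embed(texts: list[str]) -> list[list[float]]:
    """Return one embedding vector per input text."""
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return [item.embedding for item in resp.data]

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sum(x * x for x in a) ** 0.5
    norm_b = sum(x * x for x in b) ** 0.5
    return dot / (norm_a * norm_b)

query_vec = embed([conversation])[0]
ad_vecs = embed(INVENTORY)
best = max(range(len(INVENTORY)), key=lambda i: cosine(query_vec, ad_vecs[i]))
print("Sponsored suggestion:", INVENTORY[best])
```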

Startups like Koah, OpenAds, Nexad, Kontext, Ads4GPTs, and ProRata.ai are early players innovating in this space.

Conclusion

As LLMs reshape how consumers discover and buy products and services, organic marketing, data syndication, and performance marketing all need to evolve. The most valuable companies in this space will connect these pieces, helping brands understand how they’re represented in AI, improve their visibility, syndicate accurate data, and turn that visibility into action through high-performing, context-aware ads.