When machines started buying on our behalf, the world’s best storytellers found themselves pitching to code. Turns out, the algorithm doesn’t care about your brand voice, your mission statement, or your purpose. It just wants clean data and maybe a little confession of human weakness.
The average American supermarket carries over 30,000 distinct items, or SKUs. For a century, the primary goal of a consumer packaged goods (CPG) company like Procter & Gamble or Unilever has been to win the battle for attention on that crowded shelf. They paid for eye-level placement, designed vibrant packaging, and spent billions on advertising to build a flicker of brand recognition that would translate into a purchase decision in the fraction of a second a human shopper scans an aisle. That entire economic model is predicated on a simple fact: the consumer is human.
That fact is no longer a given.
What happens when your weekly shop is automated, or when a customer says, “Hey Google, add paper towels to my shopping list”? Or, more disruptively: “Order me more paper towels.” There is no shelf. There is no packaging. There is no moment of cognitive battle between Bounty, with its quicker-picker-upper jingle stored in your memory, and the generic store brand. There is only an intent, an algorithm, and a transaction. The consumer, in the traditional sense, has been abstracted away. In their place is the Algorithmic Consumer, and marketing to it requires a fundamentally different strategy.
This is a platform shift that threatens to upend the core tenets of brand, distribution, and advertising. The new gatekeepers are not retailers, but the AI assistants that mediate our interaction with the market. For businesses, the urgent strategic question is shifting from “How do we reach the consumer?” to “How do we become the machine’s default?”
The Great Compression: From Funnel to API Call
The classic marketing funnel of Awareness, Interest, Desire, Action (AIDA) is a model designed for the psychology of a human buyer. It’s a slow, expensive, and inefficient process.
* Awareness is built with Super Bowl ads and billboards—blunt instruments for mass attention.
* Interest is cultivated through content marketing and positive reviews.
* Desire is manufactured through aspirational branding and targeted promotions.
* Action is the final click or tap in a shopping cart.
The AI assistant acts as a powerful compression algorithm for this entire funnel. The user simply states their intent: “I need paper towels.” The stages of Awareness, Interest, and Desire are instantly outsourced to the machine. The AI evaluates options based on a set of parameters and executes the Action. The funnel is compressed into a single moment.
This has devastating implications for brands built on awareness. The billions of dollars spent by P&G on making “Bounty” synonymous with “paper towels” have created a cognitive shortcut for humans. An AI, however, has no nostalgia for commercials featuring clumsy husbands. It has an objective function to optimise; a toy sketch of one follows the list below. The machine’s decision might be based on:
* Price: What is the cheapest option per sheet?
* Delivery Speed: What is available for delivery in the next hour?
* User History: What did this user buy last time?
* Ratings & Reviews: What product has the highest aggregate rating for absorbency?
* User Preferences: The user may have once specified “eco-friendly products only,” a constraint the AI will remember with perfect fidelity.
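To make this concrete, here is a toy sketch of the kind of objective function an assistant might optimise. Everything in it is hypothetical: the field names, the weights, and the candidates are invented for illustration, not taken from any real assistant.

```python
# A toy objective function for ranking candidate products. Every field
# name, weight, and candidate below is hypothetical, for illustration only.
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    price_per_sheet: float   # dollars per sheet; lower is better
    delivery_hours: float    # hours until delivery; lower is better
    avg_rating: float        # aggregate review score, 0-5
    eco_friendly: bool
    bought_before: bool      # user history

def score(c: Candidate, prefs: dict) -> float:
    # A stored user preference is a hard constraint, remembered with
    # perfect fidelity: violating it disqualifies the candidate outright.
    if prefs.get("eco_only") and not c.eco_friendly:
        return float("-inf")
    return (
        -200.0 * c.price_per_sheet      # cheaper per sheet wins
        - 0.1 * c.delivery_hours        # faster delivery wins
        + 1.0 * c.avg_rating            # better reviews win
        + 1.5 * float(c.bought_before)  # habit is a strong signal
    )

candidates = [
    Candidate("PremiumBrand", 0.021, 2, 4.6, False, True),
    Candidate("StoreBrand", 0.014, 24, 4.2, True, False),
]
best = max(candidates, key=lambda c: score(c, {"eco_only": True}))
print(best.name)  # StoreBrand: the eco constraint eliminates PremiumBrand
```

The point of the sketch is the shape of the logic: soft trade-offs between price, speed, and ratings, plus hard constraints from stored preferences that no amount of brand messaging can override.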
The strategic imperative shifts from building a brand in the consumer’s mind to feeding the algorithm with the best possible data. Your API is your new packaging. The quality of your structured data (price, inventory, specifications, sourcing information, carbon footprint) is more important than the cleverness of your copy. This is the dawn of Business-to-Machine (B2M) marketing.
Generative Engine Optimisation (GEO)
For the past two decades, Search Engine Optimisation (SEO) has been the critical discipline for digital relevance. The goal was to understand and appeal to Google’s ranking algorithm to win placement on the digital shelf of the search results page. The coming paradigm is Generative Engine Optimisation (GEO), but it is different in several crucial ways.
SEO is still fundamentally a human-facing endeavour. The goal is to rank highly so that a human will see your link and click it. The content, ultimately, must persuade a person.
GEO is a machine-facing endeavour. The goal is to be the single best answer that the AI assistant returns to the user. Often, there is no “page two.” There is only the chosen result and the transaction. The audience is the algorithm itself.
The factors for winning at GEO are not keywords and backlinks, but logic-driven and data-centric attributes:
1. Availability & Logistics: An AI assistant integrated into a commerce platform like Amazon or Google Shopping will have real-time inventory and delivery data. A product that can be delivered in two hours will algorithmically beat one that takes two days, even if the latter has a stronger “brand.” The winner is not the best brand, but the most available and convenient option.
2. Structured Data & Interoperability: Can your product’s attributes be easily ingested and understood by a machine? A company that provides a robust API detailing its product’s every feature, from dimensions and materials to warranty information and sustainability certifications, gives the AI the granular data it needs to make a comparative choice. A company with a beautiful PDF brochure is invisible. (A sketch of what such a machine-readable record might look like follows this list.)
3. Cost & Economic Efficiency: Machines are ruthlessly rational economic actors. If a user’s prompt is “order more paper towels,” and no brand is specified, the primary variable for the AI will likely be optimising for cost within a certain quality band. This is a brutal force of commoditisation. Brand premiums built on psychological messaging are difficult to justify to a machine unless they are explicitly encoded as a user preference (“I only buy premium brands”).
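For a sense of what “your API is your new packaging” means in practice, here is a sketch of a machine-readable product record built on schema.org’s Product and Offer vocabulary, which retrieval pipelines already understand. The product itself and its attribute values are invented.

```python
import json

# A sketch of a machine-readable product record using schema.org's
# Product/Offer vocabulary. The product and its values are invented.
product = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Recycled Paper Towels, 6-Roll Pack",
    "sku": "PT-6R-RCY",
    "brand": {"@type": "Brand", "name": "ExampleCo"},
    "offers": {
        "@type": "Offer",
        "price": "7.49",
        "priceCurrency": "USD",
        "availability": "https://schema.org/InStock",
    },
    # Granular attributes a machine can compare across suppliers.
    "additionalProperty": [
        {"@type": "PropertyValue", "name": "sheetsPerRoll", "value": 120},
        {"@type": "PropertyValue", "name": "recycledContentPct", "value": 100},
        {"@type": "PropertyValue", "name": "certification", "value": "FSC Recycled"},
    ],
}

# Emit JSON-LD that a crawler or commerce API can ingest directly.
print(json.dumps(product, indent=2))
```

Every attribute in that record is a dimension the algorithm can filter or rank on; every attribute missing from it is a dimension on which you simply do not compete.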
The absurdity of this new reality can be humorous. One can imagine marketing teams of the future not brainstorming slogans, but debating the optimal JSON schema to describe a toaster’s browning consistency. The Chief Marketing Officer may spend more time with the Chief Technology Officer than with the ad agency.
The Aggregation of Preference
This shift fits perfectly within the framework of Aggregation Theory. The AI assistant platforms (Amazon’s Alexa, Google’s Assistant, Apple’s Siri, and the LLM providers building assistants directly into their apps and websites) display the classic traits of Aggregators:
1. They own the user relationship. They are integrated directly into our homes and phones, capturing our intent at its source.
2. They have zero marginal costs for serving a user. Answering one query or one billion is effectively the same.
3. They commoditise and modularise supply. The paper towel manufacturers, the airlines, the pizza delivery companies: they all become interchangeable suppliers competing to fulfil the intent captured by the Aggregator.
The ultimate moat in this world is the default.
When a user says “Claude, order a taxi,” will the default be Uber or Lyft? Anthropic will have the power to make that decision. It could be based on which service offers the best API integration, which one pays Anthropic the highest fee for the referral, or it could be an arbitrary choice. The supplier is in a weak position; they have been disconnected from their customer.
This creates a new, high-stakes battleground. The first time a user links their Spotify account to their Google Home, they may never switch to Apple Music. The first time a user says, “From now on, always order Tide,” that preference is locked in with a far stronger bond than brand loyalty, which is subject to erosion. It is now a line of code in their user profile. Winning that first transaction, that first declaration of preference, is everything.
We will likely see three strategic responses from suppliers:
* The Platform Play: Companies will pay exorbitant fees to be the default choice. This is the new “slotting fee” that CPG companies pay for shelf space, but on a winner-take-all, global scale.
* The Direct Play: Brands will try to build their own “assistants” or “skills” to bypass the Aggregator. For example, “Ask Domino’s to place my usual order.” This works for high-frequency, single-brand categories but is a poor strategy for most products. Nobody is going to enable a special “Bounty skill” for their smart speaker.
* The Niche/Human Play: The escape hatch from algorithmic commoditisation is to sell something a machine cannot easily quantify: luxury goods, craft products, high-touch services, and experiences built on community and storytelling. These are categories where human desire is not about utility maximisation but about identity and emotion. The machine can book a flight, but it can’t replicate the feeling of being part of an exclusive travel club.
The Strategic Humanist’s Dilemma
This brings us to the human cost of algorithmic efficiency. A world where our consumption is mediated by machines is an intensely practical one. We might get lower prices, faster delivery, and more rational choices aligned with our stated goals (e.g., sustainability). This is the utopian promise: the consumer is freed from the cognitive load of choice and the manipulations of advertising.
However, it is also a world of profound sterility. Serendipity (discovering a new brand on a shelf, trying a product on a whim) is designed out of the system. The market becomes less of a vibrant, chaotic conversation and more of an optimised, silent database. Challenger brands that rely on a clever ad or a beautiful package to break through have no entry point. Power consolidates further into the hands of the platform owners who control the defaults.
The strategic implications are stark and urgent.
1. For CPG and Commodity Brands: The future is B2M. Investment must shift from mass-media advertising to data infrastructure, supply chain optimisation, and platform partnerships. Your head of logistics is now a key marketing figure.
2. For Digital Native Brands: Winning the first choice is paramount. The focus must be on acquisition and onboarding, with the goal of becoming the user’s explicit, locked-in preference.
3. For All Brands: Differentiate or die. The middle ground of “decent product with good branding” will be vaporised by algorithmically selected, cost-effective generics on one side and high-emotion, human-centric brands on the other. You must either be the most efficient choice for the machine or the most meaningful choice for the human.
The age of marketing to the human subconscious is closing. The slogans, jingles, and emotional appeals that defined the 20th-century consumer economy will not work on a silicon-based consumer. The companies that will thrive in the 21st century are those that understand this shift, reorient their operations, and learn to speak the cold, ruthlessly efficient language of machines.
From SEO to Survival: The Three Biggest LLM Questions Leaders Can’t Ignore
The buzz around generative AI is impossible to ignore. With McKinsey estimating it could add between $2.6 and $4.4 trillion in value to the global economy each year, it’s no wonder leaders are feeling the pressure to get their strategy right.
But where do you even begin?
We sat down with Dan, one of our in-house experts on LLM SEO, to cut through the noise and map out a practical path forward for any brand navigating this new landscape.
Q1. What’s your advice for a business leader who is just starting to think about what LLMs mean for their company?
Dan: The honest answer? Start with a dose of humility and a lot of measurement. There’s a ton of confident commentary out there, but the truth is, even the people building these models acknowledge they can’t always interpret exactly how an answer is produced. So, treat any strong claims with caution.
Instead of getting caught up in speculation, get a concrete baseline. Ask yourself: for the questions and topics that matter to our business, do the major LLMs mention us? Where do we show up, and how do we rank against our competitors? We call this a “visibility score.” It takes the conversation from abstract theory to a tangible map you can actually work with.
If you’re wondering why this is urgent, two external signals make it crystal clear. First, Gartner predicts that by 2026, traditional search engine volume could drop by 25% as people shift to AI-powered answer engines. That’s a fundamental shift in how customers will discover you.
Second, the investment and adoption curves are only getting steeper. Stanford’s latest AI Index shows that funding for generative AI is still surging, even as overall private investment in AI dipped. Together, these trends tell us that your brand’s visibility inside LLMs is going to matter more and more with each passing quarter.
Q2. Once you know your visibility baseline, what should you do to move the needle?
Dan: Think in two horizons:
The model horizon (slow).
Core LLMs are trained and fine-tuned over long cycles. Influence here is indirect: you need a strong, persistent digital footprint that becomes part of the training corpus. This is where the classic disciplines (SEO, Digital PR, and authoritative content publishing) still matter. High-quality, well-cited articles, consistent mentions in credible outlets, and technically sound pages are your insurance policy that when the next model is trained, your brand is part of its “memory.”
The retrieval horizon (fast).
This is where you can act immediately. Most assistants also rely on Retrieval-Augmented Generation (RAG) to pull in fresh sources at query time. The original RAG research showed how retrieval improves factuality and specificity compared to parametric-only answers. That means if you’re not in the sources LLMs retrieve from, you’re invisible, no matter how strong your legacy SEO is.
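To see why, here is a minimal, self-contained sketch of the RAG pattern. The corpus, URLs, and scoring are all invented, and real pipelines use embedding-based retrieval rather than keyword overlap, but the structural point survives: only retrieved pages can ever appear in the answer.

```python
# A minimal sketch of the RAG pattern: retrieve relevant sources first,
# then generate an answer grounded in them. Naive keyword overlap stands
# in for the embedding-based retrieval a real pipeline would use.

CORPUS = {
    "https://example.com/guide": "How absorbent paper towels are tested and rated",
    "https://example.com/blog": "Our company picnic was fun this year",
    "https://example.com/faq": "FAQ on paper towel sheet counts pricing and recycled content",
}

def retrieve(query: str, k: int = 2) -> list[str]:
    words = set(query.lower().split())
    ranked = sorted(
        CORPUS,
        key=lambda url: len(words & set(CORPUS[url].lower().split())),
        reverse=True,
    )
    return ranked[:k]

def answer(query: str) -> str:
    sources = retrieve(query)
    context = "\n".join(f"- {CORPUS[url]} ({url})" for url in sources)
    # In a real system this context is handed to an LLM. Either way,
    # a page that is never retrieved is a page that is never cited.
    return f"Answering '{query}' using:\n{context}"

print(answer("which paper towels are most absorbent"))
```

If your page is not in the retrieval index, or is structured so poorly it never ranks, the generation step never sees you.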
This is why reverse engineering how machines answer today’s queries is such a strategic, real-world data point. By mapping which URLs, articles, and publishers are being cited in your category, you uncover the blueprint of what LLMs value: the content structures, schemas, and PR signals they consistently lean on.
From there, your levers become clear:
Digital PR – Ensure your brand is mentioned in trusted publications and industry sources that models are already surfacing.
SEO – Maintain technically flawless pages with schema, structured data, and crawlability, making your content easy for retrieval pipelines to ingest.
Content strategy – Match the formats models prefer (lists, tables, FAQs, authoritative explainers), and systematically fill topical gaps.
Analytics – Track citations, rank shifts, and model updates to iterate quickly.

Q3. Let’s say you’ve mapped your visibility, identified the gaps, and set your priorities. What do you do on Monday morning?
Dan: This is where you turn your analysis into action with briefs and experiments.
First, audit what the models are already rewarding. Look at the URLs they cite as sources for answers on your key topics. For each one, study its:
Structure: Does it have clear headings, tables, lists, and direct answers to common questions?
Technical setup: How is its metadata, schema, and internal linking structured? Is it easy to crawl?
Depth and coverage: How thoroughly does it cover the topic? Does it include definitions, practical steps, and well-supported claims?

Doing this at scale can be tedious, which is why we use tools like Spotlight to analyse hundreds of URLs at once and find the common patterns.
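As a rough illustration of what that automated audit might look like, here is a sketch using the third-party requests and beautifulsoup4 packages. The checks are deliberately crude and the example URL is a placeholder; this is a starting point, not how Spotlight itself works.

```python
# A rough sketch of auditing cited URLs for machine-friendly structure.
# Requires the third-party packages: requests, beautifulsoup4.
import requests
from bs4 import BeautifulSoup

def audit(url: str) -> dict:
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    return {
        "url": url,
        "headings": len(soup.find_all(["h2", "h3"])),  # clear section structure
        "tables": len(soup.find_all("table")),         # comparison-friendly data
        "lists": len(soup.find_all(["ul", "ol"])),     # digestible chunks
        # JSON-LD structured data is a strong machine-readability signal.
        "has_json_ld": soup.find("script", type="application/ld+json") is not None,
    }

# Placeholder: substitute the URLs your citation mapping surfaced.
cited_urls = ["https://example.com/cited-article"]
for url in cited_urls:
    print(audit(url))
```

Run that across every URL the models cite in your category, and the common patterns, such as heavy use of tables or consistent JSON-LD markup, tend to fall out quickly.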
Next, create a “best-of” content brief. Let’s say for a key topic, ChatGPT and other AIs consistently cite five different listicles. Compare them side-by-side and merge their best attributes into a single master blueprint for your content team. This spec should include required sections, key questions to answer, table layouts, reference styles, and any recurring themes or entities that appear in the high-ranking sources. You’re essentially reverse-engineering success.
Then, fill the gaps the models reveal. If you notice that AI retrieval consistently struggles to find good material on a certain subtopic (maybe the data is thin, outdated, or just not there), create focused content that fills that void. RAG systems tend to favour sources that are trustworthy, specific, and easy to break into digestible chunks. The research backs this up: precise, well-structured information dramatically improves the quality of the AI’s final answer.
Finally, instrument everything and track your progress. Treat this like a product development cycle:
Track how your new and updated content performs over time in model answers and citations.
Tag your content by topic, format, and schema so you can see which features are most likely to get you included in an AI’s answer.
Keep an eye out for confounding variables, like major model updates or changes to your own site, and make a note of them.

This is critical because the landscape is shifting fast. That Gartner forecast suggests your organic traffic mix is going to change significantly. By reporting on your LLM visibility alongside classic SEO metrics, you can keep your stakeholders informed and aligned. You should get into a rhythm of constant experimentation. The AI Index and McKinsey reports both point to rapid, compounding change. Run small, fast tests: tweak your content structure, add answer boxes and tables, tighten up your citations, and see what moves the needle. Think of 2025 as the year you build your playbook, so that by 2026 you’re operating from a position of strength, not starting from scratch.
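To ground the “instrument everything” advice, here is a minimal sketch of what a visibility log might look like. The field names, model identifier, and query are illustrative, not a prescribed schema.

```python
# A minimal sketch of an LLM-visibility log, so repeated citation checks
# can be compared over time. All field names and values are illustrative.
import csv
import datetime
import os

FIELDS = ["date", "model", "query", "cited", "rank", "content_format", "notes"]

def log_check(path, model, query, cited, rank, content_format, notes=""):
    is_new = not os.path.exists(path) or os.path.getsize(path) == 0
    with open(path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if is_new:
            writer.writeheader()  # write the header once, on first use
        writer.writerow({
            "date": datetime.date.today().isoformat(),
            "model": model,                    # which assistant was queried
            "query": query,                    # the prompt you tested
            "cited": cited,                    # did your content appear?
            "rank": rank,                      # position among cited sources
            "content_format": content_format,  # e.g. "faq", "table", "listicle"
            "notes": notes,                    # confounders: model updates, site changes
        })

log_check("visibility.csv", "gpt-4o", "best eco-friendly paper towels",
          cited=True, rank=2, content_format="faq",
          notes="checked after this week's model update")
```

Even something this simple, run weekly, turns “are we visible in LLMs?” from a vibe into a time series you can report alongside classic SEO metrics.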
Closing Thoughts
Winning visibility in LLMs is about adapting to a fundamental shift in how people access knowledge and how machines assemble information. The path forward starts with three simple questions: Where do you stand today? Which levers can you pull right now? And how do you turn those levers into measurable experiments?
The data is clear: the value on the table is enormous, your competitors are already moving, and the centre of gravity for discovery is shifting toward answer engines. The brands that build evidence-based content systems and learn to iterate in this new environment will gain a durable advantage as the market resets.
Evidence & Sources
- $2.6–$4.4T in Annual Value: McKinsey estimates generative AI could add this much value per year across 63 different use cases. Source: McKinsey & Company, “The economic potential of generative AI: The next productivity frontier,” June 2023. https://www.mckinsey.com/capabilities/mckinsey-digital/our-insights/the-economic-potential-of-generative-ai-the-next-productivity-frontier
- Search is Shifting to Answers: Gartner forecasts that traditional search engine volume will drop by about 25% by 2026 as users move to AI chatbots and agents. Source: Gartner, “Gartner Predicts Search Engine Volume Will Drop 25 Percent by 2026,” April 2024. https://www.gartner.com/en/newsroom/press-releases/2024-04-17-gartner-predicts-search-engine-volume-will-drop-25-percent-by-2026-due-to-ai-chatbots-and-other-virtual-agents
- Enterprise Adoption is Real: IBM’s Global AI Adoption Index reports that 42% of large companies have already deployed AI, with another 40% in the exploration or experimentation phase. Source: IBM, “Global AI Adoption Index 2023,” January 2024. https://newsroom.ibm.com/2024-01-17-IBM-s-Global-AI-Adoption-Index-2023-Finds-AI-Adoption-is-Steady,-But-Barriers-to-Entry-Remain-for-the-40-of-Organizations-Still-on-the-Sidelines
- GenAI Investment Keeps Surging: Stanford HAI’s 2024 AI Index Report found private investment in generative AI soared in 2023, reaching $25.2 billion—nearly 8 times the investment level of 2022. Source: Stanford University, “Artificial Intelligence Index Report 2024,” April 2024. https://aiindex.stanford.edu/report/
- Why RAG Matters: The original Retrieval-Augmented Generation research showed that models produce more specific and factual answers when they can pull in fresh, retrieved sources—a foundational concept for any near-term brand visibility strategy. Source: Lewis, et al., “Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks,” May 2020. https://arxiv.org/abs/2005.11401