Audience-First “AI Optimization”
Your brand’s future depends on semantic proximity, not search terms.
It’s becoming increasingly clear to both executives and communications professionals that generative AI chatbots—ChatGPT, Gemini, and their peers—are fast becoming the new discovery layer.
Traditional search’s old pattern of search → scan → click → repeat is giving way to the far more efficient act of having a conversation. We now have an emerging industry for “Generative Engine Optimization” (GEO) and “AI Optimization” (AIO), phrases I avoid because they focus on the platforms, not the users.
The shift is well underway. A YouGov poll indicated that 56% of Americans use AI tools, with 28% using them at least weekly.
Voice assistant usage, a related segment of conversational AI, also shows steady growth. In 2024, an estimated 48.7% of US internet users utilized voice assistants, with usage growing by 2.9% that year. The number of voice assistant users in the US is expected to reach 157.1 million by 2026. Smartphones are the primary access point for voice assistants, particularly for Gen Z.
The behavior patterns are unmistakable: people are turning to conversational systems to answer questions, solve problems, and guide decisions.
And conversational systems are evolving just as quickly.
AI Memory
Two years ago, I wrote:
Right now, these conversational AI bots do not remember across sessions. You can tell ChatGPT things about yourself that you want it to know at the start of each session, but if I want to pick up a conversation I had with ChatGPT or Pi a week ago, it doesn’t do it.
That limitation vastly reduces the utility of these tools. I have plenty of projects that span months; having to restart conversations about something I’m working on rather than simply referring to previous conversations is tedious and seems unnecessary.
Think about the nature of human conversations.
When you first meet someone, you exchange basic information about one another: Your name, what you do for a living, other basic biographical information.
Over time, as you continue to engage that person in conversation, you each gradually reveal more about yourselves, and much of that information is retained and referred to in subsequent conversations.
The more you learn about one another through conversation, the more likely you are to build trust. The more you trust the other person, the more candid you become and the more open and honest you will be with your thoughts and opinions.
Now, give these AI chatbots long-term memory and apply those conversational dynamics to these tools, and you can see how they would become the most personalized product the world has ever known.
Conversational AI Chatbots
I have been fascinated with the AI chatbots that provide a voice-activated interface and have been experimenting with this feature since it was released with ChatGPT in September.
That limitation is fading away.
Today, both ChatGPT and Gemini can store persistent, cross-session memory, either because users explicitly tell them what to remember or because the system infers stable information from repeated use. The effect is profound: these tools no longer respond as generic assistants. They respond as assistants who know you.
ChatGPT now remembers preferences, long-term projects, working style, creative direction, and biographical details, so long as they’re not sensitive and you allow it.
Now, consider Gemini, owned by Google.
Gemini has similar memory capabilities, and both tools allow you to connect to productivity apps like Microsoft’s Office suite and SharePoint or Google Workspace.
Google obviously has tighter integration with its apps.
Google takes pains to point out that it only accesses those tools on demand and doesn’t store what it learns in long-term memory.
For now.
But companies change their privacy policies all the time. You can see the value to Google (and to users, by the way) of incorporating persistent, long-term memory of users into its products.
Some form of this seems inevitable to me. As people use these tools more frequently and they become more deeply embedded in everyday life, users will grow frustrated that the assistants remember some things but not enough to make the experience easier and more enjoyable.
Such a feature enables Gemini (or any AI) to provide hyper-personalized, hyper-tailored, and hyper-relevant responses to that particular user’s needs.
My Ultimate Chrome Extension
Here’s an example.
The other day I wanted to test how good ChatGPT’s memory of me actually was. I asked: “Based on what you know about me, what kind of Chrome extension would be perfect for me?”
It recommended an “AI Brand Visibility Auditor” extension that doesn’t exist but is actually, as ChatGPT said, “Perfect for your AI Discoverability and AI Audit frameworks.”
Thanks, buddy!
AI Memory Is A User Reputation Algorithm
Memory isn’t simply a convenience feature. It’s the foundation of a user reputation algorithm.
In every conversation with an AI system, you are implicitly training it on:
Who you are,
What you care about,
What you prefer,
How you think, and
What outcomes you are trying to achieve.
Over time, the AI could build a profile of who you are so it can more effectively deliver what you need. Your reputation builds inside the model, potentially based on your whole history of use of any Google product:
This is the kind of person you are,
These are the kinds of solutions you typically want,
This is how to best serve you.
Reputation, in this context, means predictability. Predictability leads to personalization. Personalization leads to relevance.
And relevance determines what the model shows you.
In other words: as AI memory grows, the “answers” users receive will increasingly diverge. Two people asking the same question may—indeed, will—receive different recommendations, explanations, or solutions based on their conversational and personal reputation with the system.
This has enormous implications for discoverability.
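To make the mechanism concrete, here is a deliberately simplified Python sketch of a “reputation profile” that accumulates signals from conversations and re-ranks candidate answers. It is purely illustrative: the class, the scoring, and the candidate content are my own assumptions, not how ChatGPT or Gemini actually implement memory.

```python
# A toy illustration (not any vendor's actual implementation) of how signals
# accumulated across conversations could act as a "reputation profile" that
# re-ranks what a user is shown. All names and data here are hypothetical.
from collections import Counter
from dataclasses import dataclass, field


@dataclass
class UserReputationProfile:
    """Hypothetical profile inferred from repeated conversations."""
    interests: Counter = field(default_factory=Counter)

    def observe(self, message: str) -> None:
        """Accumulate topic signals from each user message."""
        for word in message.lower().split():
            self.interests[word.strip(".,?!")] += 1

    def score(self, answer_tags: list[str]) -> int:
        """Score a candidate answer by its overlap with remembered interests."""
        return sum(self.interests[tag] for tag in answer_tags)


profile = UserReputationProfile()
profile.observe("How do I audit my brand's AI discoverability?")
profile.observe("Draft an AI discoverability audit checklist for my brand")

candidates = {
    "Generic SEO checklist": ["seo", "keywords"],
    "AI discoverability audit guide": ["ai", "audit", "discoverability"],
}

# The same question yields a different ranking for a different history:
# relevance is driven by the remembered profile, not by the query alone.
ranked = sorted(candidates, key=lambda c: profile.score(candidates[c]), reverse=True)
print(ranked)  # ['AI discoverability audit guide', 'Generic SEO checklist']
```

The code itself is trivial; the point is that the ranking function takes the person, not just the question, as input.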
Hyper-Personalization Breaks the Old Rules of Discoverability
The internet we’ve known for 25 years produced a largely uniform reality. While search engines have customized results based on your search history, they have served roughly the same results to everyone, shaped by fairly well-understood ranking algorithms.
AI chatbots are hiding the customer journey and obliterating that relative uniformity of experience.
When memory steers the conversation, the model’s response is shaped by:
Who the user is,
What they’ve asked before,
What the model believes they value,
The patterns the user has reinforced, and
The model’s accumulated “reputation profile” of the user.
Which means your brand, product, idea, argument, or narrative cannot simply “rank.” It must become relevant to a person’s identity, needs, and conversational journey.
If you aren’t part of their journey, you won’t be part of their answers.
This is why approaching AI discoverability with a generic, search-era mindset is a strategic error. We are entering a world where:
Discovery is not a universal index.
It is a conversation shaped by the user’s reputation with the model.
And you only appear in that conversation if you map to the user’s identity and needs precisely.
Audience-First Approach
This is why it is folly to approach discoverability in large language models without understanding your target audience intimately.
If chatbots generate different answers for different users, then discoverability depends on understanding:
Who your audience is,
Their sources of truth,
What they believe they need,
What problems they’re trying to solve,
What language they use,
What context shapes their conversation,
What the model has already learned about them.
AI will tailor responses through the lens of the user’s reputation. If you don’t understand that user, intimately, your content, messaging, and expertise will never intersect with the AI’s personalized reasoning.
You will simply disappear from the conversation.
An audience-first approach means designing your communications for:
The user’s identity,
The user’s intent,
The user’s conversational behavior,
The model’s memory-driven personalization.
This is not SEO.
This is reputational alignment within a probabilistic conversational engine.
Optimizing for Semantic Proximity
If the AI is optimizing for the user’s reputation, your strategy must be to optimize for Semantic Proximity.
In the SEO era, we optimized for keywords to bridge the gap between a query and a URL. In the AI Memory era, we must optimize for concepts to bridge the gap between a user’s identity and your brand’s solution.
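Here is a minimal sketch of what semantic proximity means in practice. Production systems would use learned embeddings from a language model; a crude bag-of-words cosine similarity keeps the illustration self-contained, and the user description and content below are invented for the example.

```python
# Semantic proximity, illustrated: score content by how close it sits to a
# *description of the user*, not to the literal query. Bag-of-words vectors
# stand in for real embeddings; everything below is hypothetical.
import math
import re
from collections import Counter


def vectorize(text: str) -> Counter:
    """Crude stand-in for an embedding: lowercase term counts."""
    return Counter(re.findall(r"[a-z']+", text.lower()))


def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0


# What the model "remembers" about the user, rather than what they just typed.
user_identity = (
    "risk-averse decision-maker, six months into a digital transformation, "
    "wants data-backed implementation guidance"
)

brand_content = {
    "What is digital transformation? A 101 guide":
        "digital transformation basics definition introduction",
    "Implementation pitfalls in month six of a transformation":
        "digital transformation implementation guidance data-backed risk",
}

user_vec = vectorize(user_identity)
for title, body in sorted(
    brand_content.items(),
    key=lambda kv: cosine(user_vec, vectorize(kv[1])),
    reverse=True,
):
    print(f"{cosine(user_vec, vectorize(body)):.2f}  {title}")
```

The piece that wins is the one closest to who the model believes the user is and where they are in their journey, not the one that happens to repeat the most keywords.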
To survive the shift to hyper-personalized discovery, your communications strategy must pivot from broad “reach” to deep “resonance.” This requires a three-step inversion of the traditional model:
Reverse-Engineer the “Digital Twin”: Stop building personas primarily based on demographics. Start building them based on the data trail an AI would remember: their recurring problems, their preferred syntax, their trusted sources, and their ethical values. You are no longer creating for a “35-year-old male”; you are creating for a “highly conscientious and risk-averse decision-maker who values sustainability and prefers concise, data-backed answers.” (A rough sketch of such a persona, as data, follows this list.)
Establish Entity Authority: You cannot “keyword stuff” a conversation. You must establish your brand as a canonical entity within the model’s training data. This means creating high-value content that associates your brand inextricably with specific problems and solutions. You want the model to infer that you are the logical completion of the user’s thought process.
Map to the Journey, Not the Query: Users with long-term memory profiles don’t just ask questions; they pursue outcomes. Your content ecosystem must support that entire journey. If the AI knows the user is six months into a digital transformation project, it will prioritize content that speaks to implementation nuances, not generic “101” definitions.
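As promised, here is a rough sketch of what a “digital twin” persona and journey mapping might look like as data. The field names are my own assumptions for illustration, not a standard schema.

```python
# A hypothetical "digital twin" persona built from the data trail an AI would
# remember, plus a journey-aware content pick. Field names are illustrative.
from dataclasses import dataclass


@dataclass
class DigitalTwin:
    recurring_problems: list[str]
    preferred_syntax: str        # e.g. "concise, data-backed answers"
    trusted_sources: list[str]
    values: list[str]
    journey_stage: str           # e.g. "implementation", not "awareness"


twin = DigitalTwin(
    recurring_problems=["vendor lock-in", "change management"],
    preferred_syntax="concise, data-backed answers",
    trusted_sources=["industry analysts", "peer case studies"],
    values=["sustainability", "risk reduction"],
    journey_stage="implementation",
)

# Content mapped to journey stages, not to a one-off query.
content_by_stage = {
    "awareness": "What is digital transformation? (101 explainer)",
    "evaluation": "How to compare transformation platforms",
    "implementation": "Avoiding change-management pitfalls at month six",
}

print(content_by_stage[twin.journey_stage])
# -> Avoiding change-management pitfalls at month six
```

Every field here is something a model with memory would plausibly retain; none of it is a demographic.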
The winners of this new era won’t be the loudest brands. They will be those that are the most relevant to their key stakeholders. They will be the brands that the AI determines are the most helpful companions for the specific user it has grown to know.
It is no longer about being found by the algorithm. It is about being recommended by the assistant.
ICYMI
I posted several articles that were originally published on my LinkedIn profile during the time I was away. Here they are, if you’re interested:
Multi-Persona AI Chatbots
(This article was originally published on April 17, 2025 on my LinkedIn profile.)
AI Chatbots Are The New Google
(This article was originally published on May 25, 2025 on my LinkedIn profile.)
Midjourney Is Incapable Of Generating Images Of Ugly People
(This article was originally published on January 13, 2025 on my LinkedIn profile.)
The New Rules of Brand Discoverability
(This article was first published on November 25, 2024 on my LinkedIn profile.)
Fear & Loathing In SEOville
(This article was originally published on my LinkedIn profile on May 24, 2024.)