

Search Generative Experience
Digital Marketing News: Advertising, Analytics, AI, Social Media & Glorious Midjourney Mistakes

If you haven’t signed up for the Search Generative Experience (SGE) yet, you really should, so you can get a preview of what search is likely to look like in the age of generative AI.
<SIDE RANT>
What the hell is wrong with Google?!? Search Generative Experience? That’s awkward.
Google clearly doesn’t devote much payroll to branding professionals.
</SIDE RANT>
What Is Search Generative Experience?
Search generative experience is the application of Large Language Models (LLMs) to Search Engine Results Pages (SERPs) to generate an answer to a given search query on the fly. Similar to generative AI platforms like ChatGPT or Google’s Bard, it turns search engine results into a conversational chatbot experience.
LLMs are a type of artificial intelligence that are trained on a massive trove of articles, Wikipedia entries, books, internet-based resources, and other input to produce human-like responses to natural language queries.
The term “search generative experience” appears to have been coined by Google for its generative AI search experiment.
Discoverability Challenges In Generative Search World
This Search Engine Land article details some of the concerns marketers have that generative AI-powered search will make it that much more difficult to attract website traffic from organic search results. The article, however, is curiously dismissive of those concerns.
As I discussed in my post about the emerging brand discoverability crisis, generative AI search is pushing the traditional 10 links on search engine result pages further down the page to be practically out of sight on desktop and below the scroll on mobile.
Additionally, click-through rates from Google SERPs have been in decline on both desktop and mobile since 2016.
Discoverability in search for companies, brands and creators is directly tied to link attribution, which, of course, is how people visit webpages from SERPs. There is currently sparse link attribution within Google’s Search Generative Experience results.
The Search Engine Land article tries to make the case that we shouldn’t be so concerned about generative AI-powered search, comparing it to other on-page search features such as instant answers and People Also Ask modules. The author has a point but it’s just not that convincing.
I think we should be concerned.
Here’s a screenshot of Google’s SGE results for a search I knew would contain some of my content: “What is the reputation mismanagement industry?”
(Sharp-eyed readers will notice that the phrasing of the first paragraph, paraphrased from my blog post, is misleading: it would lead the searcher to believe that “reputation mismanagement industry” is actually the name of a company, which it is not, and that’s not what I said.)
As you can see, the traditional SERP link to my blog post is pushed way down below the AI-generated content. Yes, there is a carousel of open graph modules on the right side and one of them features my blog post, but it is placed outside of the F-shaped scanning area where most people begin their examination of search results.
The most likely behavior in this environment, especially for question-based searches such as this one, will be that the searcher will begin to scan the text for an answer to their question. They may glance at the visual carousel of links but if the intent of the search is to answer a question, they will likely satisfy that intent with the on-SERP explanation and then leave or execute another query without ever clicking on the source links.
Those little quotation marks after each paragraph are actually link attributions. Here’s what it looks like when they are clicked:
That little popup window, which cites the source of the content for that paragraph and includes a clickable URL, is likely to remain unclicked by the vast majority of searchers simply because the generative AI search result will have already satisfied the intent of the search query.
If anything, the mere fact that the link is included is likely to boost confidence in the generated result and psychologically reduce the need to click and investigate further.
This is the same psychological dynamic that produces greater confidence in a pharmaceutical when ads for that drug include a long list of side-effect warnings.
Traditional SERPs vs Search Generative Experience Results
Let’s do some before/after comparisons.
As you can see from the first screenshot, the traditional listing for my blog post below Google’s SGE result uses a lot more screen real estate, conveys a lot more information about me and my site, and begins with a long link that practically begs to be clicked.
Here’s what you get character-wise for a traditional search listing:
Title/Link = The first 600 pixels, or approximately 60-70 characters on desktop
Description = About 155 characters on desktop or 120 characters on mobile
The Title and Description will typically work in tandem to persuade the searcher to click.
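Those character counts are approximations (Google actually truncates titles by pixel width, not character count), but they’re a useful sanity check when writing titles and descriptions. Here’s a minimal sketch of that check, assuming the rough limits cited above:

```python
# Approximate SERP display limits from the breakdown above. These are
# rules of thumb, not official Google cutoffs; titles are really
# truncated by pixel width (~600px on desktop).
TITLE_MAX = 60          # desktop title, roughly 60-70 characters
DESC_MAX_DESKTOP = 155  # desktop description
DESC_MAX_MOBILE = 120   # mobile description

def truncate_snippet(text: str, limit: int) -> str:
    """Trim text to the limit, cutting at a word boundary and adding an ellipsis,
    roughly mimicking how a too-long title or description gets clipped on a SERP."""
    if len(text) <= limit:
        return text
    cut = text[:limit].rsplit(" ", 1)[0]  # avoid chopping mid-word
    return cut + "…"

# Hypothetical title, for illustration only
title = "Search Generative Experience: What Google's AI Answers Mean for Organic Traffic"
print(truncate_snippet(title, TITLE_MAX))
```

If the output ends in an ellipsis, the searcher never sees the rest of your pitch, which is why the title and description need to carry their persuasive weight early.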
Now, compare that to the single-character quotation mark you get with Google’s SGE.
This Is About Competition & Ad Sales
C’mon, now; let’s be real.
The reason Google rolled out its Search Generative Experience now is that Microsoft beat it to the punch with Bing Chat.
While the Search Engine Land article does have a point that generative AI-powered search is probably a good user experience, it will have the effect of depressing organic click-through rates even further.
And the predictable response to this impending discoverability crisis will be increased search ad budgets.
Is SEO Dead?
The proclamation that search engine optimization is dead has been used as a clickbait headline for decades. The idea has never come to fruition and I doubt it will this time around.
But let’s not kid ourselves: Generative search promises to be a profound change from what we’ve all come to expect from search so we’ll need to optimize differently.
So what’s a search marketer to do?
Determine if the search results you care about are accurate when displayed in a generative AI context.
Identify how to correct inaccurate results.
Think about search in terms of branding and awareness as well as a direct-response tactic.
If search becomes a conversational medium, either through text chat exchanges on the desktop or through vocal conversations through a mobile app or smart speaker, think about designing content in terms of conversations.
Harmonize your data.
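On the “design content in terms of conversations” point, one concrete step available today is marking up question-and-answer content with schema.org FAQPage structured data, which hands search engines an explicit question/answer structure to draw from. Here’s a minimal sketch; the question and answer text are hypothetical placeholders, and whether any given engine surfaces this markup is up to the engine:

```python
import json

def faq_jsonld(qa_pairs):
    """Build a schema.org FAQPage JSON-LD object from (question, answer) pairs,
    suitable for embedding in a page's <script type="application/ld+json"> tag."""
    return {
        "@context": "https://schema.org",
        "@type": "FAQPage",
        "mainEntity": [
            {
                "@type": "Question",
                "name": question,
                "acceptedAnswer": {"@type": "Answer", "text": answer},
            }
            for question, answer in qa_pairs
        ],
    }

# Hypothetical Q&A drawn from a blog post's content
markup = faq_jsonld([
    ("What is the Search Generative Experience?",
     "Google's experiment that uses large language models to answer queries "
     "directly on the results page."),
])
print(json.dumps(markup, indent=2))
```

The broader idea: if the SERP is going to answer questions conversationally, content that is already structured as explicit questions and answers gives you the best shot at being the cited source.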
Need help with this? Hit me up on LinkedIn. These are just the type of complicated communication problems my agency Tunheim was designed to help solve.
Digital Marketing News
Advertising
CNBC - Walmart is bringing ads to an aisle near you as retailers chase new moneymakers - Shoppers will soon see more third-party ads on screens in Walmart self-checkout lanes and TV aisles; hear spots over the store’s radio; and be able to sample items at demo stations.
Walmart’s push into advertising resembles similar moves by retailers like Kroger, which struck a deal to bring digital smart screens to cooler aisles in hundreds of its stores, and Target, which began testing in-store demos and giveaways, including a recent “Barbie” branded event with Mattel that took place at about 200 stores.
This is such a no-brainer for national retailers. Given all the customer data they are sitting on, the targeting options they can offer will have tremendous appeal to advertisers.
Analytics
The Verge - Google will switch on its cookie-replacing tools for Chrome developers next week - Google says that it will gradually begin enabling the Privacy Sandbox toolkit for Chrome developers set to replace third-party tracking cookies with privacy-preserving API alternatives.
There are still several stages to go until Google completes its Privacy Sandbox rollout, but shipping these APIs is a significant milestone toward the company’s goal of phasing out third-party cookies entirely. Google is still aiming to enable an opt-in testing mode that will allow advertisers to experiment with the Sandbox tools without cookies by late 2023 and to turn off third-party cookies for 1 percent of Chrome users sometime in Q1 2024. The company has set a goal to completely turn off third-party cookies by Q3 2024.
It’s time to start getting familiar with these APIs to understand how this transition will affect your interpretation of campaign and analytics data.
Ars Technica - Google’s nightmare “Web Integrity API” wants a DRM gatekeeper for the web - Perhaps the most telling line of the explainer is that it "takes inspiration from existing native attestation signals such as [Apple's] App Attest and the [Android] Play Integrity API." Play Integrity (formerly called "SafetyNet") is an Android API that lets apps find out if your device has been rooted. Root access allows you full control over the device that you purchased, and a lot of app developers don't like that. So if you root an Android phone and get flagged by the Android Integrity API, several types of apps will just refuse to run. You'll generally be locked out of banking apps, Google Wallet, online games, Snapchat, and some media apps like Netflix. You could be using root access to cheat at games or phish banking data, but you could also just want root to customize your device, remove crapware, or have a viable backup system. Play Integrity doesn't care and will lock you out of those apps either way. Google wants the same thing for the web.
And a contrary view.
Artificial Intelligence
Microsoft - Furthering our AI ambitions – Announcing Bing Chat Enterprise and Microsoft 365 Copilot pricing - …we’re significantly expanding Bing to reach new audiences with Bing Chat Enterprise, delivering AI-powered chat for work, and rolling out today in Preview – which means that more than 160 million people already have access. Second, to help commercial customers plan, we’re sharing that Microsoft 365 Copilot will be priced at $30 per user, per month for Microsoft 365 E3, E5, Business Standard and Business Premium customers, when broadly available; we’ll share more on timing in the coming months. Third, in addition to expanding to more audiences, we continue to build new value in Bing Chat and are announcing Visual Search in Chat, a powerful new way to search, now rolling out broadly in Bing Chat.
Looks like Microsoft is going to incorporate Large Language Models and generative AI into all of its products.
Axios - Google assistant getting AI makeover - Google plans to overhaul its Assistant to focus on using generative AI technologies similar to those that power ChatGPT and its own Bard chatbot, according to an internal e-mail sent to employees Monday and seen by Axios…Other companies including Amazon are making similar moves. The commerce giant is working on an AI-powered reboot for Alexa, its longtime digital assistant.
Incorporating generative AI into smart speakers is an obvious move to greatly improve the utility of these products. From my experience, Alexa has pretty limited use cases. I use mine as an alarm clock, ask it for the forecast in the morning, and use it to set timers in the kitchen.
Because it has access to Google’s search index, Google Home is much more valuable to me as a digital assistant; I can get answers to complex questions that Alexa just cannot answer. Turning that current capability into a much richer conversational medium with generative AI seems to be a tactic that can breathe new life into this product category.
PetaPixel - Adobe’s New Generative Expand Can Change a Photo’s Aspect Ratio - Photoshop Generative Expand is similar to Generative Fill in that it uses the same AI technologies, but slightly adjusts how it is implemented. Generative Expand will allow users to expand and resize any image with the Crop Tool to push an image beyond its original borders.
Pretty cool.
The Guardian - ‘A certain danger lurks there’: how the inventor of the first chatbot turned against AI - In 1966, an MIT professor named Joseph Weizenbaum created the first chatbot. He cast it in the role of a psychotherapist. A user would type a message on an electric typewriter connected to a mainframe. After a moment, the “psychotherapist” would reply…The software was relatively simple. It looked at the user input and applied a set of rules to generate a plausible response. He called the program Eliza, after Eliza Doolittle in Pygmalion. The cockney flower girl in George Bernard Shaw’s play uses language to produce an illusion: she elevates her elocution to the point where she can pass for a duchess. Similarly, Eliza would speak in such a way as to produce the illusion that it understood the person sitting at the typewriter.
I’ve used the example of Eliza for the past 10 years in a presentation about emerging technologies and how people can form what feels like personal relationships with technology. This is a great in-depth story of how Mr. Weizenbaum came to create Eliza and then advocate against the dangers of AI.
Social Media
New York Times - Twitter Threatens Legal Action Against Nonprofit That Tracks Hate Speech - X Corp., the parent company of the social media company, sent a letter on July 20 to the Center for Countering Digital Hate, a nonprofit that conducts research on social media, accusing the organization of making “a series of troubling and baseless claims that appear calculated to harm Twitter generally, and its digital advertising business specifically,” and threatening to sue.
The letter cited research published by the Center for Countering Digital Hate in June examining hate speech on Twitter, which Mr. Musk has renamed X.com. The research consisted of eight papers, including one that found that Twitter had taken no action against 99 percent of the 100 Twitter Blue accounts the center reported for “tweeting hate.” The letter called the research “false, misleading or both” and said the organization had used improper methodology.
I can’t remember who made this argument, but it wasn’t me, so if you know who it was, please say so in the comments. The argument is this: the only rational explanation for what Musk is doing as the owner of X.com, née Twitter, is that he’s using it as a tool with which to foment societal and political chaos. That makes more sense to me than any other explanation I’ve read.
The Verge - What is Reddit CEO Steve Huffman doing? - In a New Yorker article about ultra-wealthy doomsday preppers, Huffman’s the opening anecdote: having gotten laser eye surgery, he is prepared for “‘the temporary collapse of our government and structures,’ as he puts it. ‘I own a couple of motorcycles. I have a bunch of guns and ammo. Food. I figure that, with that, I can hole up in my house for some amount of time.’” This individualist ethos isn’t uncommon in Silicon Valley, but it’s also not actually how people survive disasters. Cooperation is key, as is community.
I pulled this quote because this seems to be a thing among tech billionaires and helps to shed light on their seeming lack of interest in protecting society from the harms their platforms create. I mean, if you believe societal collapse is inevitable, then what’s the incentive for preventing it? Might as well accumulate as many resources as you can to survive the impending doom.
Though the article is an examination of the current Reddit revolt, it also illustrates the digital signals generated when reputational expectations fail.
Glorious Midjourney Mistakes
I tried to illustrate the idea of having a conversation with a chatbot.








