We Are In The Pre-Android Era Of AI
Generative artificial intelligence has the potential to profoundly reshape not just computing but consumer behavior and our relationships with technology.
Welcome! I'm David Erickson and this is my digital marketing newsletter. If you've received it, then you either subscribed or someone forwarded it to you.
If this was forwarded to you and you want to subscribe, just click this button:
In this issue: Major AI news, Ubiquitous Computing, Effortless AI Conversations, AI Personas, AI Memory, AI Personalization, Synthetic Influencers, AI Generated Video Games, Generative AI + Robots = Androids; Digital Marketing News; Music Monday; and Glorious Midjourney Mistakes.
Leave it to Casey Newton to connect some interesting dots.
Newton’s recent article, The Synthetic Social Network is Coming, reports on the many truly profound advances in AI that recently dropped. Among them:
OpenAI updated ChatGPT to make the tool multimodal. You can upload an image to ChatGPT and ask it to analyze the picture for you. But most profoundly, you can now interact with ChatGPT with your voice and it will answer with its own voice. Not only that, it will answer with a voice that has personality, instead of the robotic-sounding voices we’ve come to expect from Alexa, Siri and Google Home.
Meta—otherwise known as Facebook—will be building generative AI into all its products. You’ll be able to have conversations with its AI within Facebook Messenger. You’ll be able to create images and stickers on the fly to add personality to your chats with friends.
The company will also release 28 chatbots with distinct personalities that will have their own Facebook pages and Instagram accounts.
Amazon is investing $4 billion in Anthropic, the company that developed the Claude AI chatbot.
Spotify has developed AI technology that will translate audio podcasts into other languages in the participants’ own voices.
But most profoundly, Newton notes:
More significantly, I think, is the idea that Meta plans to place its AI characters on every major surface of its products. They have Facebook pages and Instagram accounts; you will message them in the same inbox that you message your friends and family. Soon, I imagine they will be making Reels.
And when that happens, feeds that were once defined by the connections they enabled between human beings will have become something else: a partially synthetic social network.
This is where my love of science fiction kicked in and got me thinking about what is just over the horizon as a result of these advancements.
So, let’s game this out a bit.
One of the things that really set me off on this career trajectory was a computer science paper by Mark Weiser [PDF] that I read in 1992 which argued that computers need to become so ubiquitous that they essentially disappear.
It made the point that computers, like eyeglasses, are a tool to extend and enhance your power. Eyeglasses enhance and extend your power of sight.
Computers do the same for your mind or your productivity or what have you. The difference is that we look at computers, but when we put on a pair of prescription glasses, the glasses essentially disappear and all we’re left with is enhanced sight. We don’t look at them, we look through them.
The paper argued that computers need to evolve to the point that they disappear, that there is no longer a barrier between the tool and the power they give us.
If you think about the evolution of computers, we have been on this glide path toward ever more effortless input of data or information into the device.
Computers used to take up entire rooms and you had to feed them punch cards—literally paper cards with holes punched into them—to get a result out of a computer.
Eventually, they got smaller, so though corporations still needed a computer room, the machines weren’t lined wall-to-wall. Gone were the punch cards, replaced by a keyboard and a monochromatic screen, through which you typed cryptic commands to get the results you needed.
Then came the personal computer revolution, with its graphical interface and a mouse to accompany the keyboard as input devices.
Fast forward to the smartphone with its touch screen and voice activation.
The Reputation Algorithm is a reader-supported publication. To get new posts and support my work, consider becoming a free or paid subscriber.
Effortless AI Conversations
Now we have AI that not only is voice activated but also talks back.
That gives you the ability to have an effortless conversation with your computer and since you’re talking to an app and not a device, those conversations are portable and available to you wherever you go.
The addition of this call and response feature of AI is profound and will fundamentally change consumer behavior.
I’ve had Amazon Echo and Google Home smart speakers since they were first introduced to consumers.
One day after work, while on my way to catch my bus, I was thinking about playing football over the weekend. Without thinking, I nearly blurted out “Alexa, what’s the forecast for this weekend?” while walking the crowded streets of downtown Minneapolis.
Talking to my smart speakers had become second nature.
But that’s talking at technology. Conversing with a technology as powerful as generative AI opens up vast new opportunities and implications for individuals, organizations and society at large.
With generative AI in my smart speakers and ChatGPT on my phone and AirPods in my ears, I have an insanely powerful assistant at my beck and call.
ChatGPT offers five synthetic voices, both male and female with American accents, called Juniper, Sky, Cove, Ember and Breeze and they all have different personalities.
Meta upstaged ChatGPT with its announcement that the company would be rolling out more than two dozen AI personas across its platforms:
We’re introducing Meta AI in beta, an advanced conversational assistant that’s available on WhatsApp, Messenger, and Instagram, and is coming to Ray-Ban Meta smart glasses and Quest 3.[…]
We’re also launching 28 more AIs in beta, with unique interests and personalities. Some are played by cultural icons and influencers, including Snoop Dogg, Tom Brady, Kendall Jenner, and Naomi Osaka.
Over time, we’re making AIs for businesses and creators available, and releasing our AI studio for people and developers to build their own AIs.
Think about the dynamic that occurs when you have a conversation with someone. The more credible and genuine that person’s responses appear, and the more they come across as a unique personality, the more enjoyable the conversation will be and the more it will hold our interest.
Over time, as we engage in compelling conversations and as long as our conversational partner remains consistent, our natural guard will fall and we will express more candor in our own responses.
Conversation betrays the inner dialogue we hold with ourselves; our interests and passions, the challenges we struggle with, our fears, desires and aspirations.
With AI chatbots that sound human and have distinct personalities, it is inevitable that people will form emotional bonds with their AIs, as Newton surmised.
With generative AI’s ability to generate a more or less unique response to any given prompt, this technology enables one-to-one relationships at scale.
Generative AI companies are moving toward remembering users’ conversations across sessions because the chatbots are far more useful if they remember everything you’ve ever told them.
ChatGPT allows you to turn on or off chat histories.
ChatGPT also allows you to save “Custom Instructions” to “provide better responses”, such as:
Your geographic location
Your profession or the nature of your work
Your hobbies or interests
Bing Chat now remembers previous conversations by default
Google just added a toggle to let Bard remember your previous chats
Anthropic pointedly does not remember past conversations, citing several reasons that include security and privacy concerns. We’ll see if that changes now that Amazon has invested so heavily in the company.
The most obvious benefit to users is that you won’t have to constantly tell the AI things you want it to know about yourself in order to be a more efficient assistant.
Think about all the Google tools you use. For myself, I use Google daily to find information and Google has my entire search history at its disposal. I use Chrome as my default browser, so Google knows the websites I’ve visited.
I have had a Gmail account (including Contacts) since they first became available, so the messages I’ve received and sent are another source of data Google can tap into.
I’ve had a YouTube account since the site launched, so my video viewing behavior is available. I’ve been using Google Docs forever. I have a Google Home smart speaker. I’ve got many Google apps on my iPhone, including Maps, which I routinely use for navigation.
Google already has tons of data about me before it ever starts remembering my Bard chats.
Same goes for Meta.
I’ve had a Facebook account since 2004, an Instagram account since it first launched; I use Facebook Messenger occasionally and I have an Oculus.
Facebook knows who I’m connected with and the kind of content I like. So, again: Even before I start engaging with its AI chatbots, the company knows a lot about me.
As we become more comfortable interacting with AI bots and as the AI companies continue to improve multi-modal generative AI (text, images, audio and video), this technology will evolve ever closer to appearing human.
This will set the environment in which consumers respond to AI interactions with greater and greater candor. Our own proclivities to share coupled with the bot’s ability to remember past conversations will provide a personalization that will be so compelling it may be impossible to resist.
It is unlikely the AI platforms would store entire verbatim conversations on their servers. It is more likely that those conversations would be scored and a personal, ever-updating algorithm would be assigned to each user that would reflect topical interests, language usage, conversational style, demographics, personality type, and more.
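Purely as a thought experiment, here is a minimal Python sketch of what such conversation scoring could look like. The topics, keywords, and scoring rule are all invented for illustration; no platform has disclosed its actual method:

```python
from collections import Counter

# Hypothetical illustration: distill each conversation into an
# ever-updating interest profile instead of storing the raw text.
INTEREST_KEYWORDS = {
    "travel": {"flight", "hotel", "trip", "vacation"},
    "fitness": {"run", "gym", "workout", "football"},
    "cooking": {"recipe", "bake", "dinner"},
}

def update_profile(profile: Counter, conversation: str) -> Counter:
    """Score a conversation against topic keywords and fold the
    result into the user's running profile."""
    words = set(conversation.lower().split())
    for topic, keywords in INTEREST_KEYWORDS.items():
        profile[topic] += len(words & keywords)
    return profile

profile = Counter()
update_profile(profile, "What's a good workout before a morning run?")
update_profile(profile, "Find me a hotel near the gym for my trip")
print(profile.most_common(2))  # travel and fitness rise to the top
```

A real system would score far richer signals—tone, demographics, personality—but even this toy version shows how a compact profile can substitute for storing the conversation itself.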
This could signal the final nail in the coffin of privacy, at least when it comes to the tech companies. That is something that should give us all pause and I wonder if it is a dynamic regulators have even contemplated.
The benefits of such personalization would be enormous but the potential risks could be grave.
Now let’s talk influencers.
I worked with influencers for years before the phrase rose to prominence; influencer relations is at the core of the disciplines of public relations and public affairs.
It is just that influencer marketing used to be restricted to journalists, policy makers and celebrities. It has since expanded to webmasters (for link building), newsletter publishers, podcast hosts and YouTubers, and finally to what we now conceive of as influencers: the lifestyle-focused social media stars.
They are called influencers because they influence the information their followers consume and how they think about that information.
I have also tracked with fascination the rise of virtual influencers.
Social media platforms like Facebook and Instagram are where human influencers have flourished. But the problem with human influencers is that they are fallible and, as a result, can create public relations fiascos that do not reflect well on the platforms that host them and the brands that sponsor them.
Women’s Wear Daily has a rundown of just a few influencer controversies from 2021.
Plus, human relationships don’t scale.
But AI does.
I believe Meta’s announcement of 28 AI personas with which users can interact is an attempt to create their own homegrown influencers. It is telling that each AI will have their own Facebook and Instagram profiles.
If these AI bots can customize themselves to the person with whom they’re interacting, it will create a bond of trust that scales far beyond what any human influencer can achieve.
Instead of brands doing deals with human influencers to broadcast messaging to their followers, Meta can offer a self-serve system that tailors the messaging an AI influencer delivers to precisely the right audience the brand wants to reach, with exactly the right tone and language that resonates with specific individuals.
AI Generated Video Games
Video games are going to get really, really good.
But most video games require a heavy dose of suspension of disbelief to enjoy. Modeling of video game characters—while it has improved immensely over the years—has yet to achieve the kind of realism you get from movies. It’s in the eyes. Video game characters’ eyes don’t look or behave in a realistic fashion.
Generative AI holds the promise of changing that. With the amazing advances I’ve witnessed from AI image generation tools like DALL-E and Midjourney, this problem appears eminently solvable.
These image tools also promise to help streamline workflows in video game development and reduce costs.
But take generative AI image technology such as what Adobe Firefly can now do—filling in imagery where none previously existed—and you can imagine programming a video game that creates an infinite variety of environments and scenes, tailored to an individual player’s preferences.
So imagine an open world game like Red Dead Redemption that creates itself on the fly based on an individual user’s whims. Individual users could create their own storylines to direct and follow.
For the benefit of those of you who do not play video games, within games there are non-player characters. These are characters who play a role in the story of the game but are programmed by the video game developers and not controlled by a human.
These non-player characters may play prominent or even starring roles in cut-scenes, which are pre-programmed segments of a video game used to advance the plot.
But there are far more non-player characters in video games whose role amounts to extras on a movie set. Their role is simply to create the illusion of a lifelike, active environment within which humans can play.
For example, I may be walking through a town in Red Dead Redemption and the environment will include non-player characters lounging outside the town saloon, or visiting the general store, or parking a stagecoach to disgorge its passenger non-player characters. These characters create the ambiance within which the story unfolds.
As I’m walking down the street of this town, I may bump into one of these non-player characters and that character’s response might be a muttered “excuse me” or it could be “hey, watch where you’re going buddy!” But there’s not likely to be much variety in such responses.
Add generative AI to the equation and things get really fascinating.
Let’s rework the example I just cited, but with the addition of generative AI. Instead of responding to my clumsiness with an “excuse me” or “what the hell you doin’?” kind of response, what if I could hold an actual conversation with them?
I could say to the non-player character: “I apologize. That was very clumsy of me. By the way, I’m new to town and I’m looking for information about Low Down Dirty Ted (the villain of our game) who I believe passed through your town a couple of weeks ago. Do you have any idea who might have come in contact with him?”
This non-player character (or any non-player character) could respond appropriately. Or even walk me into the saloon and tell me about his own experience with Low Down Dirty Ted over drinks.
Video game programmers could use generative AI to give each non-player character their own unique back story, so their responses would be framed in terms of their past history. The video game engine would be aware of each non-player character’s back story, so each one would be unique.
Furthermore, generative AI could take into account a non-player character’s interactions with human players or even other non-player characters and add those experiences to their back story, which would then inform future interactions.
Non-player characters would then turn from pre-programmed automatons to ever-evolving players in their own right.
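A toy sketch of how that memory loop might work, with the generative model call stubbed out as an assembled prompt string. Every name and structure here is hypothetical, not drawn from any actual game engine:

```python
from dataclasses import dataclass, field

@dataclass
class NPC:
    """A non-player character whose back story grows with every
    interaction, so future dialogue can draw on past encounters."""
    name: str
    backstory: list[str] = field(default_factory=list)

    def remember(self, event: str) -> None:
        # Each interaction becomes part of the character's history.
        self.backstory.append(event)

    def build_prompt(self, player_line: str) -> str:
        # In a real game this prompt would be sent to a generative
        # model; here we only assemble it.
        history = " ".join(self.backstory)
        return (f"You are {self.name}. Your history: {history} "
                f"The player says: {player_line!r}. Respond in character.")

bartender = NPC("Sal the bartender",
                ["Saw Low Down Dirty Ted two weeks ago."])
bartender.remember("Player bumped into you on the street and apologized.")
prompt = bartender.build_prompt("Do you know who met Ted?")
print(prompt)
```

The key design point is that the prompt carries the character’s accumulated history, so the same question posed to two different NPCs—or to the same NPC at two different points in the story—would yield different, consistent answers.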
A video gaming environment in which you can interact on a deep level with even the bit-part non-player characters would completely revolutionize the medium.
Don’t count out the metaverse just yet.
Generative AI + Robots = Androids
The last mile, as it were, for generative AI is when it is given agency in the physical world.
We already have robots that can effortlessly navigate physical space and perform tasks in the real world, thanks to companies like Boston Dynamics.
Now, take everything I’ve thus far speculated about the potential of generative AI and apply it to robots.
The ability to have a conversation with a robot that you can direct to perform tasks for you in the real world, and which is increasingly customized to an individual’s unique needs, opens up the possibility of an android personal companion.
Imagine a person who needs round-the-clock in-home care due to physical or behavioral ailments and who may have no family and perhaps no friends; no real support system to speak of.
What if they can have a constant companion who can address their physical care while also being attentive to their emotional needs? You can see how such a companion could make such a person’s life immeasurably better.
I believe personal android companions will be inevitable. There are many potential dangers which I need not list here; science fiction has already done a fairly thorough job of that.
But I do believe we are in the pre-android phase of AI.
Digital Marketing News
Gizmodo by Kevin Hurler - OpenAI Employee Discovers Eliza Effect, Gets Emotional - The technological optimism of the 1960s bred some of the earliest experiments with “AI,” which manifested as trials in mimicking human thought processes using a computer. One of those ideas was a natural language processing computer program known as Eliza, developed by Joseph Weizenbaum from the Massachusetts Institute of Technology.
Eliza ran a script called Doctor which was modelled as a parody of psychotherapist Carl Rogers. Instead of feeling stigmatized and sitting in a stuffy shrink’s office, people could instead sit at an equally stuffy computer terminal for help with their deepest issues. Except that Eliza wasn’t all that smart, and the script would simply latch onto certain keywords and phrases and essentially reflect them back at the user in an incredibly simplistic manner, much the way Carl Rogers would. In a bizarre twist, Weizenbaum began to notice that Eliza’s users were getting emotionally attached to the program’s rudimentary outputs—you could say that they felt “heard & warm” to use Weng’s own words.
AI therapy should be taken with a salt mine but the notion that people will form emotional bonds with computer systems is not new.
Bloomberg by Rachel Metz - OpenAI Gives ChatGPT the Ability to Speak in Five Different Voices - Artificial intelligence startup OpenAI is rolling out a feature for its ChatGPT app that lets the chatbot respond to spoken questions and commands with speech of its own.
Starting over the next two weeks, users will be able to choose a voice in the chatbot app, picking from five personas with names like “Juniper,” “Breeze” and “Ember.” ChatGPT will then produce audio of the text it generates in that voice — for example, reading an AI-generated bedtime story out loud.
I have this feature on my iPad but maddeningly not yet on my iPhone where I could really put it to the test with my AirPods while walking around and talking naturally. I’ll report back after I get full access and can properly test it out.
The Verge by Emilia David - Microsoft to add DALL-E 3 to Bing Chat - The company also added new shopping features to Bing. Bing can ask users specific questions on how a product will be used or other more personalized questions so people can pinpoint the right product that suits their needs. Bing also lets people find and use discount codes. These more personalized answers work as Bing also now remembers previous chats.
Bing Chat is also displaying ads within its generative AI results.
Wired by Steven Levy - Smarter AI Assistants Could Make It Harder to Stay Human - To me, that’s the worry—once we get comfortable, we’re finished. When I sought validation in a scan of research papers, my attention was snared by the title “The Power to Harm: AI Assistants Pave the Way to Unethical Behavior.” Coauthored by University of Southern California scientists Jonathan Gratch and Nathanael Fast, it hypothesizes that intelligent agents can democratize an unsavory habit of rich people, who outsource their bad behavior through lawyers, spokespeople, and thuggish underlings. “We review a series of studies illustrating how, across a wide range of social tasks, people may behave less ethically and be more willing to deceive when acting through AI agents,” they write.
I mean, we already see this phenomenon every day in the news with Donald Trump.
Business Insider by Beatrice Nolan - Google is quietly handing out early demos of its GPT-4 rival called Gemini. Here's what we know so far about the upcoming AI model. - One person who had tested the tech told the outlet it may have an advantage on GPT-4 because it leverages Google's data from consumer products, as well as information gathered from the internet. The addition should mean the model can more accurately understand the user's intentions, the report said.
The person also said the model appeared to generate fewer incorrect answers, a common problem in artificial intelligence known as hallucinations.[…]
Researchers behind the SemiAnalysis blog have also predicted that Google's Gemini would likely outperform GPT-4 because of Google's access to top-flight chips.
Google has sooooo much data to train its AI models.
VentureBeat by Matt Marshall - Amazon bets $4 billion on Anthropic’s Claude, the chatbot platform rivaling ChatGPT and Google’s Bard - The investment was part of a significant partnership announced by the two companies, where Anthropic agreed to use Amazon’s cloud platform for “mission-critical workloads” in return for the investment. The backing is the first major connection by Amazon to a leading chatbot, at a time when cloud competitors Microsoft and Google have already bet big on their respective chatbot platforms.
It appears this supplants a previous deal with Google, which invested $300 million in Anthropic earlier this year.
The Decoder by Matthias Bastian - OpenAI releases new language model InstructGPT-3.5 - Instruct models are large language models that are refined through human feedback (RLHF) after being pre-trained with a large amount of data. In this process, humans evaluate the model's output in response to user-provided prompts and improve it to achieve a target result, which is then used to further train the model.
As a result, Instruct models are better able to understand and respond to human queries as expected, making fewer mistakes and spreading less harmful content. OpenAI's tests have shown that people prefer an InstructGPT model with 1.3B parameters to a GPT model with 175B parameters, even though it is 100 times smaller.
So improvements are on the way.
The Verge by Amrita Khalid - Spotify is going to clone podcasters’ voices — and translate them to other languages - The backbone of the translation feature is OpenAI’s voice transcription tool Whisper, which can both transcribe English speech and translate other languages into English. But Spotify’s tool goes beyond speech-to-text translation — the feature will translate a podcast into a different language and reproduce it in a synthesized version of the podcasters’ own voice.
Obviously, this will be a boon to podcasters by exposing vast new audiences to their shows but it will also create a massive new inventory of advertising for Spotify to sell.
VentureBeat by Ben Dickson - Microsoft’s AutoGen framework allows multiple AI agents to talk to each other and complete your tasks - As described by Microsoft, AutoGen is “a framework for simplifying the orchestration, optimization, and automation of LLM workflows.” The fundamental concept behind AutoGen is the creation of “agents,” which are programming modules powered by LLMs such as GPT-4. These agents interact with each other through natural language messages to accomplish various tasks.
Agents can be customized and augmented using prompt engineering techniques and external tools that enable them to retrieve information or execute code. With AutoGen, developers can create an ecosystem of agents that specialize in different tasks and cooperate with each other.
An agent that did scheduling for me would be nice. Or one that could sit in on a vendor demo and ask the questions I’d want answered; that’d be great.
Vanity Fair by Charlotte Klein - In the AI Age, The New York Times Wants Reporters to Tell Readers Who They Are - In August, staff on the Business desk of The New York Times got an email from the Trust team, alerting them that they would be rolling out new reporter bios and asking them to submit a new author page for themselves. “We want to get moving quickly on this,” the internal email, reviewed by Vanity Fair, states. “The masthead feels it’s especially important to highlight the human aspect of our work as misinformation and generative AI proliferates.”
“Readers tend to seek out information about a reporter in moments of doubt or agitation: when they encounter a viewpoint they dislike in our reporting, or they perceive inaccuracy or bias. In these moments, bios can play an important role in assuring readers that we are fair-minded, committed to a high standard of integrity, and free from conflicts of interest,” the email states, adding: “Bio pages also rank highly in Google.”
This is a point I’ve been making for some time. As generative AI content proliferates, people will start to look for evidence that the content they’re viewing was created by humans.
USA Today by Whitney Woodworth and Mary Walrath-Holdridge - Robot takeover? Agility Robotics to open first-ever factory to mass produce humanoid robots - The creators of Digit, a human-sized bipedal robot complete with "eyes," are bringing the world's first humanoid robot factory to Oregon, creatively named RoboFab.
Agility Robotics announced the opening in Salem, Oregon on Monday, saying they expect to soon have the capacity to produce 10,000 robots annually. Construction of the 70,000-square-foot facility began last year and is set to open in late 2023.
Creating advanced robots for sale to the public is a new development in the robotics industry, as access to such high-end tech has been generally reserved for entities such as businesses and government agencies in the past.
What’d I just say?
NBC News by Ben Collins - What was Elon Musk’s strategy for Twitter? - On the day that public records revealed that Elon Musk had become Twitter’s biggest shareholder, an unknown sender texted the billionaire and recommended an article imploring him to acquire the social network outright.
Musk’s purchase of Twitter, the 3,000-word anonymous article said, would amount to a “declaration of war against the Globalist American Empire.” The sender of the texts was offering Musk, the Tesla and SpaceX CEO, a playbook for the takeover and transformation of Twitter. As the anniversary of Musk's purchase approaches, the identity of the sender remains unknown.
The three texts were sent on April 4, 2022. In the nearly 18 months since then, many of the decisions Musk made after he bought Twitter appear to have closely followed that road map, up to and including his ongoing attacks against the Anti-Defamation League, a nonprofit organization founded by Jewish Americans to counter discrimination.
Important read. It became clear fairly quickly after Musk bought Twitter that his interest in the platform was simply to leverage it for political power.
Gizmodo by Nikki Main - Elon Musk Disables Option to Report Misinformation On X/Twitter - X, formerly called Twitter, disabled its misinformation feature on the platform, effectively removing the option for users to report false election information, research organization Reset.Tech Australia reported on Wednesday. The company has recently been criticized for achieving the dubious honor as the number one platform to spread online hate and increased levels of misinformation.[…]
“It would be helpful to understand why X have seemingly gone backwards on their commitments to mitigating the kind of serious misinformation that has translated into real political instability in the US, especially on the eve of the ‘bumper year’ of elections globally,” Alice Dawkins, executive director of Reset, told Reuters.
See the previous story.
The Wrap by Natalie Korach - Elon Musk’s X Strips Article Headlines on Shared Links - X (formerly known as Twitter) has officially removed article headlines on links shared to the platform and viewed on its official mobile app, after owner Elon Musk warned the change was coming in August.
Prior to the implemented change, when any account posted a link to X, the tweet would include a featured image, a headline and a brief description of the story. But now X cards only display the featured image from the article, without additional context.
Most people just read the headlines as they scroll through their social media feeds. This move has the effect of removing facts and truth from the platform, which is, of course, a primary goal of authoritarian figures. Beware of people who don’t want you to know things.
Bloomberg by Dave Lee - The Moral Case for No Longer Engaging With Elon Musk’s X - One thing the prior Twitter management didn’t do is actively make things worse. When Musk introduced creator payments in July, he splashed rocket fuel over the darkest elements of the platform. These kinds of posts always existed, in no small number, but are now the despicable main event. There’s money to be made. X’s new incentive structure has turned the site into a hive of so-called engagement farming — posts designed with the sole intent to elicit literally any kind of response: laughter, sadness, fear. Or the best one: hate. Hate is what truly juices the numbers.[…]
X is now an app that forcibly puts abhorrent content into users’ feeds and then rewards financially the people who were the most successful in producing it, egging them on to do it again and again and make it part of their living. Know this: As the scramble for attention increases, the content will need to become more violent, more tragic and more divisive to stand out. More car crashes, high school fights and public humiliation.
Journalists: Please. Please wean yourselves off this abomination. You are the ones propping it up right now.
Ars Technica by Benj Edwards - Meta launches consumer AI chatbots with celebrity avatars in its social apps - During a presentation at Meta Connect 2023, the company said it is launching its own "Meta AI" chat assistant and a selection of AI characters across its messaging platforms, including WhatsApp, Instagram, and Messenger.
Meta's new AI assistant will likely feel familiar to anyone who has used chatbots like ChatGPT or Claude. It is designed as a general-purpose chatbot that Meta says can help with planning trips, answering questions, and generating images from text prompts. The assistant will also integrate real-time results from Microsoft's Bing search engine, giving it access to current information—similar to Bing Chat, ChatGPT's browsing plugin, and Google Bard.
Just got access to the AI chatbots via WhatsApp. I’ll have more to say about them after I get some experience interacting with them.
The Verge by Jay Peters - Artifact is becoming Twitter, too - Artifact, the AI-powered news app from Instagram’s co-founders, is adding a major new feature: the ability to post. So far, the app has been an aggregator for news and links from around the internet, but you’re going to be able to add posts directly to the app.
Mike Krieger, one of the co-founders of Artifact, announced the new features onstage in a conversation with Casey Newton at the Code Conference on Wednesday. The new feature is a logical next step from Artifact’s recently launched update that lets users share links. This new feature means you won’t just be limited to links; your posts can include things like a title, text, and photos. The posts will also have unique URLs, which should make them easier to share on different apps and services.
This is an improvement, and I continue to play with Artifact a bit, but the new feature isn’t particularly useful if you don’t have much of a following, and unlike Threads, there’s no easy way to find the people you know. In that way, it’s a bit like a less-hyped Bluesky. My money’s still on Threads.
Vox by Allie Volpe - The messy art of posting through it - Each platform has its specific norms and users who have their own opinions on what content they consider too cringe or vulnerable for public consumption. For instance, when people express negative emotions on Facebook, it doesn’t seem so out of place, according to a 2017 study. Instagram, by contrast, is where users expect to see positive content — albeit content that isn’t particularly authentic. One study, from 2021, suggests the norms on TikTok empower users to embrace both difficult and positive experiences when they post.
However, as social media continues to occupy an increasingly intimate space in our lives, as Ysabel Gerrard, a senior lecturer in digital communication at the University of Sheffield, thinks it will, what we post — and how audiences interpret it — will shift. Gerrard, who studies young people’s experiences of social media and digital identities, says that when social platforms become a place to store meaningful memories, the way we post will only become more personal.
And even more personal when you add generative AI to the mix.
The Guardian by Zoe Corbyn - ‘Tech platforms haven’t been designed to think about death’: meet the expert on what happens online when we die - Our digital profiles and possessions are ever-expanding, but what happens to them after our deaths? Tech companies are yet to offer a satisfactory solution, says the technology researcher Tamara Kneese
Tamara Kneese studies how people experience technology. She is a senior researcher at New York-based nonprofit Data & Society Research Institute. Her new book, Death Glitch, examines what happens to our digital belongings when we die, and argues that tech companies need to improve how they deal with death on their platforms for the sake of all our digital posterity.
What happens to our digital profiles and assets is one thing, but throw generative AI into the mix and you have the potential for some approximation of your personality to persist beyond death. Who owns that personality?
Digiday by Kimeco McCoy - Marketers reconsider BeReal as it launches its first global brand marketing campaign
Do they, though?
Vice by Tess Owen - Schools Report Bomb Threats Following Libs of Tiktok Anti-LGBTQ Posts - At least eleven schools or school districts that were targeted by the account “Libs of TikTok” over anti-LGBTQ grooming conspiracies last month received bomb threats just days later.
“Libs of TikTok,” helmed by former real-estate agent Chaya Raichik, has positioned itself as a vigilante crusader against “wokeness” in schools and culture—and has been heavily criticized as being a smokescreen for denigrating LGBTQ people.
This headline is confusing; it doesn't make clear that the TikTok account in question is a far-right account.
This is yet another attack on one of the foundational pillars of our democracy: the public education system that produces the critical thinking upon which an informed electorate relies.
Do not view such attacks in isolation; they are part of a coordinated whole designed to destroy our democratic institutions and therefore the American experiment itself.
Glorious Midjourney Mistakes
Images I created for this article that ended up on the cutting room floor.