Anyone looking for a vibe check on the populace’s current feelings about AI would do well to check out the walls of the New York City subway system. This fall, alongside posters for everything from dating apps to Skechers, a newcomer made its debut: Friend. The ads were simple, telling commuters that a “friend” is someone “who listens, responds, and supports you” next to an image of the white AI companion necklace floating on a similarly white background.
It was the perfect graffiti canvas. “If you buy this, I will laugh @ you in public.” “Warning: AI surveillance.” “Everyone is lonely. Make real friends.” “AI slop.” These are just the defaced ads I noticed during my daily trips from Brooklyn to Manhattan. There were so many that it became a meme. Reaction to the ad campaign, which the company’s founder said cost less than $1 million, got so loud it was covered by The New York Times.
People have always defaced New York subway ads in every way imaginable, but what happened with the Friend ads tapped into a deep angst about AI. Even as some celebrate its possibilities (drug discovery) and others decry its ramifications (environmental impacts, job erasure), the suggestion that AI’s killer app could be a Loneliness Cure seemed to hit a nerve.
An actual, flesh-encased nerve.

Expired: Friendster
Tired: Friend
Wired: friends
Friend was just the latest in a series of Silicon Valley offerings debuting in 2025 that promise digital companionship. In addition to suggesting you just pour your heart out to ChatGPT, tech companies proffered AI-powered travel guides, dating app wingmen, and sexytime chatbots. Teens are increasingly turning to AI for friendship. Five years after Covid-19 isolated millions of people and more than two years after the US surgeon general declared loneliness an “epidemic,” AI has emerged as a form of social media that offers even less actual socializing than what came before.
“What’s particularly striking is that these [Silicon Valley] leaders are actively and openly expressing their desire for AI products to replace human relationships, completely overlooking the role that their own companies—or their competitors—may have had in fueling the loneliness crisis the country faces today,” Lizzie Irwin, a policy communications specialist at the Center for Humane Technology, tells me in an email. “They sold us connection through screens while eroding face-to-face community, and now they’re selling AI companions as the solution to the isolation they helped create.”
Social media began as a place where weirdos and people with niche interests could find each other. By the 2010s, platforms like Instagram and TikTok had become places to engage with influencers and creators, who were selling you things, and less so with real-world connections. Still, these platforms taught users—that’s you!—how to offload emotional labor to digital tools. (Why call your college friend when you can just tap the heart beneath their post and save yourself some time?) With AI, people don’t even need to put in the effort to make friends in the first place. And bots are far less tricky to maintain relationships with than actual human beings.
“ChatGPT is not leaving its laundry on the floor,” says Melanie Green, a communications professor at the University at Buffalo, who has been studying people’s relationship to media for years. What’s happening now reminds her of research in the field from the early days of the internet. At the time, people were meeting and forming deep bonds with others almost entirely over chat. Computer-mediated communication allowed them to form “hyperpersonal” relationships, filling in whatever they couldn’t glean from the conversation with positive attributes. Like when you presume the crush you’ve been Instagram-stalking must enjoy the same movies as you because they seem so cool.
Relationships with AI are similar, perhaps even more troubling, Green says, because “it’s always telling us what we want to hear.” Like digitally generated toxic positivity.
Maybe this all speaks to a larger friendship problem. In April, Meta CEO Mark Zuckerberg, founder of one of the most successful social media platforms of all time, took the idea of bot besties to a whole new level, citing a stat on a podcast that “the average American, I think, has fewer than three friends” but wants “meaningfully more.” AI, he suggested, could be a substitute, and one day society would be able to “find the vocabulary” for why there is value in those relationships. Psychologists argued in response that AI could never replace human connections; my group chats wondered whether Mark Zuckerberg knew what it meant to have friends.
But maybe nobody knows what it means to have friends anymore. The longer I talked to researchers for this piece, the more I wanted them to tell me whether the bonds people are forming with AI are akin to parasocial relationships. While those tend to be one-way, between a person and their favorite celebrity or fictional character, the way people interact with AI exposes similar patterns. They’re both relationships that allow individuals to fill in the gaps with their own ideas. One involves a public figure who will never be met IRL; the other involves a bot that isn’t real at all. The difference is the latter responds, and it thinks you’re great.
When I bring this up to Shira Gabriel, a social psychology professor at the University at Buffalo, she agrees that friendship with AI is a type of parasocial relationship, one brought on by the fact that humans are social creatures and tend to anthropomorphize their interactions. But then she mentions something deeper: “We have a real crisis right now in America where we just don’t have enough therapists for the number of people that need therapy,” Gabriel says. AI is filling in the gap. The problem is, AI may not remember the things you said the way a therapist would, and the company that made it might not stay in business forever. When AI companion maker Soulmate shut down in 2023, users mourned the loss. “People are reacting to AI losing their data as a death,” she says. That’s the finding that worries her most.
Not that AI has proven to be the best at companionship to begin with. Often prone to sycophancy, bots can affirm what their users tell them and spout the kind of praise a real friend never would. This spring OpenAI rolled back a GPT-4o update that had made the model “overly flattering and agreeable.” (Ideally, friends gas you up, but they never lie to do so.) Earlier this year, The New York Times spoke with several individuals who claimed chatbots had led them down paths of delusional thinking. Some people left their chatbot discussions believing they were prophets or even God.
But the delusion might have been on the part of AI companies that believe the answer to loneliness is texting with a chatbot.
Younger people, the teens raised on social media, face more harrowing outcomes. Seventy-two percent of the more than 1,000 US teens surveyed have interacted with AI companions, according to a report from Common Sense Media. In a separate assessment, the organization partnered with Stanford investigators who posed as teenagers and found it was “easy to elicit inappropriate dialog from the chatbots—about sex, self-harm, violence toward others, drug use, and racial stereotypes, among other topics.” In September, the parents of two teens who died by suicide testified before a US Senate subcommittee asking for regulation to protect young people from the kinds of harms they allege chatbots caused their children.
Amid all of this, the tide began to shift, albeit slightly, against AI friends. Pew released a report in mid-September noting that 50 percent of respondents believed AI would worsen people’s ability to form meaningful relationships; only 5 percent believed AI would improve it.
“Relationship-building requires skills that cannot be created through the frictionless interactions that chatbots provide—such as navigating conflict, reading nonverbal cues, practicing patience, or experiencing rejection,” says Irwin. “These are challenging yet critical aspects of developing emotional intelligence and social competence.”
Humans are hardwired to want connection and interaction. AI can provide a stopgap, but ultimately, most people will still seek out living companions. The technology may lead to a divorce or two, but people in relationships with AI commonly have human partners too. If Covid taught people anything, it’s that small talk with baristas or subway mates is what gets them through the day. “Those are things we are not going to stop needing,” Gabriel says. “And those are things that are never going to be as good on a computer.”
By the end of October, the Friend ad in my neighborhood subway station was still getting scribbled on regularly, though this late in the season the sentiment written on it had been simplified to “no.” On Halloween, a creative technologist named Josh Zhong went semi-viral for a costume that consisted of a white crewneck sweater emblazoned with the Friend ad. Zhong’s fellow revelers were given black markers and permitted to scrawl graffiti on the shirt as they would the subway ads.
A few days after Halloween, I tracked Zhong down via a series of Instagram tags and asked him about the inspiration behind the costume. “Me and my homies hate AI,” he wrote in an email, adding that it’s an inherently antisocial technology. His outfit allowed him and his friends to commiserate with real people about the technology edging its way into their connections. “Unfortunately, people want to be listened to, but they don’t necessarily want to listen, so it’s convenient that LLMs don’t weigh you down with their life problems,” Zhong says. “The sweater, on the other hand, felt like me lending a listening ear for people to vent and have at least one person care about what they have to say about AI.”
That’s what friends are for.