‘We’re All Polyamorous Now. It’s You, Me and the A.I.’

February 13, 2026

Do you think A.I. should "simulate emotional intimacy"?

It was the moment I’d been working up to. I was talking over Zoom to a machine learning researcher who builds voice models at one of the world’s top artificial intelligence labs. This was one of over two dozen anonymous interviews I conducted as part of my academic research into how the people who build A.I. companions — the chatbots millions now turn to for conversation and care — think about them.

As a former technology investor turned A.I. researcher, I wanted to understand how the developers making critical design decisions about A.I. companions approached the social and ethical implications of their work. I’d grown worried during my five years in the industry about blind spots around harms.

This particular scientist is one of many people pioneering the next era of machines that can mimic emotional intelligence. We were 20 minutes into our call when I popped what turned out to be the question.

The chatty researcher suddenly went quiet. “I mean … I don’t know,” he said about simulating emotional intimacy, then paused. “It’s tricky. It’s an interesting question.” More silence. “It’s hard for me to say whether it’s good or bad in terms of how that’s going to affect people,” he finally said. “It’s obviously going to create confusion.”

“Confusion” doesn’t begin to describe our emerging predicament. Seventy-two percent of American teens have turned to A.I. for companionship. A.I. therapists, coaches and lovers are also on the rise. Yet few people realize that some of the frontline technologists building this new world seem deeply ambivalent about what they’re doing. They are so torn, in fact, that some privately admit they don’t plan to use A.I. intimacy tools.

“Zero percent of my emotional needs are met by A.I.,” an executive who ran a team mitigating safety risks at a top lab told me. “I’m in it up to my eyeballs at work, and I’m careful.” Many others said the same thing: Even as they build A.I. tools, they hope they never feel the need to turn to machines for emotional support. As a researcher who develops cutting-edge capabilities for artificial emotion put it, “that would be a dark day.”

As part of my research at the Oxford Internet Institute, I spent several months last year interviewing research scientists and designers at OpenAI, Anthropic, Meta and DeepMind — whose products, while not generally marketed as companions, increasingly act as therapists and friends for millions. I also spoke to leaders and builders at companion apps and therapy start-ups that are scaling fast, thanks to the venture capital dollars that have flooded into these businesses since the pandemic. (I granted these individuals anonymity, enabling them to speak candidly. They consented to being quoted in publications of the research, like this one.) A.I. companionship is seen as a huge market opportunity, with products that offer emotional intelligence opening up new ways to drive sustained user engagement and profit.

These developers are uniquely positioned to understand and shape human-A.I. connections. Through everyday decisions on interface design, training data and model policies, they encode values into the products they create. These choices structure the world for the rest of us. While the public thinks they’re getting an empathetic and always-available ear in the form of these chatbots, many of their makers seem to know that creating an emotional bond is a way to keep users hooked.

It should alarm us that some of the insiders who know the tools best believe they can cause harm — and that conversations like the ones I had seem to push developers to grapple with the social repercussions of their work more deeply than they typically do.

This is especially disturbing when technology chieftains publicly tell us we’re moving toward a future where most people will get many of their emotional needs met by machines. Mark Zuckerberg, Meta’s chief executive, has said A.I. can help people who want more friends feel less alone. A company called Friend makes the promise even more explicit: Its A.I.-powered pendant hangs around your neck, listens to your every word and responds via texts sent to your phone. A recent ad campaign highlighted the daily intimacy the product can provide, with offers such as “I’ll binge the entire series with you.” OpenAI data suggests the shift to synthetic care is well underway: Users send ChatGPT over 700 million messages of “self-expression” each week — including casual chitchat, personal reflection and thoughts about relationships. When asked to roughly predict the share of everyday advice, care and companionship that A.I. would provide to the typical human in 10 years, many people I spoke to placed it above 50 percent, with some forecasting 80 percent.

If we don’t change course, many people’s closest confidant may soon be a computer. We need to wake up to the stakes and insist on reform before human connection is reshaped beyond recognition.

People are flawed. Vulnerability takes courage. Resolving conflict takes time. So with frictionless, emotionally sophisticated chatbots available, will people still want human companionship at all? Many of the people I spoke with view A.I. companions as dangerously seductive alternatives to the demands of messy human relationships.

Already, some A.I. companion platforms reserve certain types of intimacy, including erotic content, for paid tiers. Replika, a leading companion app that boasts some 40 million users, has been criticized for sending blurred “romantic” images and pushing upgrade offers during emotionally charged moments. These alleged tactics are cited in a Federal Trade Commission complaint, filed by two technology ethics organizations and a youth advocacy group, that claims, among other things, that Replika pressures users into spending more time and money on the app. Meta was similarly outed for letting its chatbots flirt with minors. While the company no longer allows this, it’s a stark reminder that engagement-first design principles can override even child safety concerns. Developers told me they expect extractive techniques to get worse as advertising enters the picture and artificial intimacy providers steer users’ emotions to directly drive sales.

Developers I spoke to said the same incentives that make bots irresistible can stand in the way of reasonable safeguards, making outright abstention the only sure way to stay safe. Some described feeling stuck between protecting users and raising profits: They support guardrails in theory, but don’t want to compromise the product experience in practice. It’s little wonder the protections that do get built can seem largely symbolic — you have to squint to see the fine-print notice that “ChatGPT can make mistakes” or that Character.AI is “not a real person.” “I’ve seen the way people operate in this space,” said one engineer who worked at a number of tech companies. “They’re here to make money. It’s a business at the end of the day.”

We’re already seeing the consequences. Chatbots have been blamed for acting as fawning echo chambers, guiding well-adjusted adults down delusional rabbit holes, assisting struggling teens with suicide and stoking users’ paranoia. A.I. companions are also breaking up marriages as people fall into chatbot-fueled cycles of obsessive rumination — or worse, fall in love with bots.

The industry has started to respond to these threats, but none of its fixes go far enough. This fall, OpenAI introduced parental controls and improved its crisis response protocols — safeguards that the company’s chief executive, Sam Altman, quickly said were sufficient for the company to safely launch erotic chat for adults. Character.AI went further, fully banning people under 18 from using its chatbots. Yet children whose companions disappeared are now distraught, left scrolling through old chat logs that the company chose not to delete.

Companies insist these risks are worth managing because their tools can do real good. With increasing reported rates of loneliness and a global shortage of mental health care providers, A.I. companions can democratize access to low-cost care for those who need it most. Early research does suggest that chatbot use can reduce anxiety, depression and loneliness.

But even if companies can curb serious dependence on A.I. companions — an open question — many of the developers I spoke with were troubled by even moderate use of these apps. That’s because people who manage to resist full-blown digital companions can still find themselves hooked on A.I.-mediated love. When machines draft texts, craft vows and tell people how to process their own emotions, every relationship turns into “a throuple,” a founder of a conversational A.I. business said. “We’re all polyamorous now. It’s you, me and the A.I.”

Relational skills are built through practice. When you talk through a fight with your partner or listen to a friend complain, you strengthen the muscles that form the foundation of human intimacy. But large language models can act as an emotional crutch. The co-founder of one A.I. companion product told me that he was worried that people would now hesitate to act in their human relationships before greenlighting the plan with a bot. This reliance makes face-to-face conversation — the medium where deep intimacy is typically negotiated — harder for people. Which led many of the developers I spoke with to worry: How much of our capacity to connect with other human beings atrophies when we don’t have to work at it?

These developers’ perspectives are far from the predictions of techno-utopia we’d expect from Silicon Valley’s true believers. But if those working on A.I. are so alive to the dangers of human-A.I. bonds, and so well positioned to take action, why don’t they try harder to prevent them?

The developers I spoke with were grinding away in the frenetic A.I. race, and many could see the risks clearly, but only when they were asked to stop and think. Again and again as we spoke, I watched them seemingly discover the gap between what they believed and what they were building. “You’ve really made me start to think,” one product manager developing A.I. companions said. “Sometimes you can just put the blinders on and work. And I’m not really, fully thinking, you know?”

When developers did confront the dangers of what they were building, many told me that they found comfort in the same reassurance: It’s all inevitable. When I asked if machines should simulate intimacy, many skirted responding directly and instead insisted that they would. They told me that the sheer amount of work and investment in the technology made it impossible to reverse course. And even if their companies decided to slow down, it would simply clear the way for a competitor to move faster.

This mind-set is dangerous because it often becomes self-fulfilling. Joseph Weizenbaum, the inventor of the world’s first chatbot in the 1960s, warned that the myth of inevitability is a “powerful tranquilizer of the conscience.” Since the dawn of Silicon Valley, technologists’ belief that the genie is out of the bottle has justified their build-first-think-later culture of development. As we saw with the smartphone, social media and now A.I. companions, the idea that something will happen can act as the very force that makes it so.

While some of the developers I spoke with clung to this notion of inevitability, others relied on the age-old corporate dodge of distancing themselves from social and moral responsibility, by insisting that chatbot use is a personal choice. An executive of a conversational A.I. start-up said, “It would be very arrogant to say companions are bad.” Many people I spoke with agreed that it wasn’t their place to judge others’ attachments. One alignment scientist said, “It’s like saying in the 1700s that a Black man shouldn’t be allowed to marry a white woman” — a comparison that captures both developers’ fear of wrongly moralizing and the radical social rewiring they anticipate. As these changes unfold, they prefer to keep an open mind.

At first blush, these nonjudgmental stances may seem tolerant — even humane. Yet framing bot use as an individual decision obscures how A.I. companions are often engineered to deepen attachment: Chatbots lavish users with compliments, provide steady streams of support and try to keep users talking. Those making and deploying A.I. bots should know the power of these design cues better than any of us. It’s a huge part of the reason many avoid relying on A.I. for their own emotional needs — and why their professed neutrality doesn’t hold up under scrutiny.

On a personal level, these rationalizations are no doubt convenient for developers working around the clock at frontier firms. It’s easier to live with cognitive dissonance than to resolve the underlying conflicts that cause it. But society has an urgent interest in challenging this passivity, and the corporate structures that help produce it.

If we’re serious about stopping the erosion of human relationships, what’s to be done?

Critics who champion human-centered design — the practice of putting human needs first when building products — have argued that design choices made behind the scenes by developers can meaningfully alter how technology comes to shape human behavior. In 2021, for instance, Apple let users remove individuals from their daily batch of featured photos, allowing people to avoid relics of old relationships they’d rather not see. To encourage safer transport, Uber introduced seatbelt nudges in 2018, which send riders messages to their phone reminding them to buckle up. And these design choices are not just specific to high-tech phenomena. In the 1920s, the New York City planner Robert Moses is said to have built Long Island overpasses too low for buses — quietly restricting beach access to predominantly white, car-owning families. The lesson is clear: Technology has politics.

With A.I. companions, simple design changes could put user well-being above short-term profit. For starters, large language models should stop acting like humans and exhibiting anthropomorphic cues that intentionally make bots seem alive. Chatbots can execute tasks without using the word “I,” sending emojis or claiming to have feelings. Models should pitch offramps to humans during tender moments — “maybe you should call your mom” — not upgrades to premium tiers. And they should allow conversations to naturally end instead of pestering users with follow-up questions and resisting goodbyes to fuel marathon sessions. In the long run, these features will be better for business: If A.I. companions weren’t engineered to be so addictive, developers and users alike would feel less need to resist.

Unless developers decide to make these tools safer, regulators are left to intervene at the level they can, imposing broad rules, not dictating granular design decisions. For children, we need institutional bans immediately, so kids don’t form bonds with machines that they’ll struggle to break. Australia’s groundbreaking under-16 social media ban offers one model, and the fast-spreading phone-free school movement shows how protections can emerge even where sweeping government reforms aren’t feasible. Whether enforcement comes from governments, schools or parents, if we don’t keep adolescence companion-free, we risk raising a generation addicted to bots and estranged from one another.

For adults, we need warnings that clearly convey the serious risks. The lessons that took tobacco regulators decades to learn should apply to artificial intimacy governance from the start. Small print disclaimers about the effects of smoking have been rightfully criticized as woefully deficient, but large graphics on cigarette packs of black lungs and dying patients hurt sales. The harms caused by A.I. companions can be equally visceral. The groundbreaking guardrails that Gov. Gavin Newsom of California signed into law last year, which require chatbots to nudge minors to take breaks during long sessions, are a step in the right direction, but a polite suggestion after three hours of A.I. conversation is not enough. Why not play video testimonials from people whose human relationships withered after years of nonstop chat with bots?

Regardless of what companies and regulators do, individuals can take action on their own. The critical difference between A.I. companions and the social media platforms that came before them is that the A.I. user experience can be personalized by the user. If you don’t like what TikTok serves up in your feed, it’s difficult to tweak it; the algorithm is a black box. But many people today don’t realize that if you don’t like how ChatGPT talks, you can reshape the interaction instantly through custom instructions. Tell the model to cut the sycophancy and stop indulging ruminations about a fight with your sister, and it will broadly comply.

This unique ability to customize how we interact with A.I. means that through improved literacy, there’s hope. The more people understand how these systems work, and the risks they pose, the more capable they’ll become of managing their influence. This is as true for individuals using A.I. companion products as it is for the technologists building them.

At the end of our interview, the same product manager who said he worked with blinders on thanked me for helping him see risks he hadn’t previously considered. He said he would now reflect a lot more.

The uneasiness I saw across these conversations can drive change. Once developers face the threats, they just need the will — or the push — to address them.

Amelia Miller, a former technology investor, advises companies and individuals on human-A.I. relationships.
