Kuki is accustomed to gifts from her biggest fans. They send flowers, chocolates and handwritten cards to the office, especially around the holidays. Some even send checks.
Last month, one man sent her a gift through an online chat. “Now talk some hot talks,” he demanded, begging for sexts and racy videos. “That’s all human males tend to talk to me about,” Kuki replied. Indeed, his behavior typifies a third of her conversations.
Kuki is a chatbot — one of the hundreds of thousands that my company, Pandorabots, hosts. Kuki owes her origins to ALICE, a computer program built by one of our founders, Richard Wallace, to keep a conversation going by appearing to listen and respond empathetically. After ALICE was introduced on Pandorabots’s platform in the early 2000s, one of its interlocutors was the film director Spike Jonze. He would later cite their conversation as the inspiration for his movie “Her,” which follows a lonely man as he falls in love with his artificial intelligence operating system.
When “Her” premiered in 2013, it fell firmly in the camp of science fiction. Today, the film, set prophetically in 2025, feels more like a documentary. Elon Musk’s xAI recently unveiled Ani, a digital anime girlfriend. Meta has permitted its A.I. personas to engage in sexualized conversations, including with children. And now, OpenAI says it will roll out age-gated “erotica” in December. The race to build and monetize the A.I. girlfriend (and, increasingly, boyfriend) is officially on.
Silicon Valley’s pivot to synthetic intimacy makes sense: Emotional attachment maximizes engagement. But there’s a dark side to A.I. companions, whose users are not just the lonely males of internet lore but also women who find them more emotionally satisfying than men. My colleagues and I now believe that the real existential threat of generative A.I. is not rogue superintelligence, but a quiet atrophy of our ability to forge genuine human connection.
The desire to connect is so profound that it will find a vessel in even the most rudimentary machines. Back in the 1960s, Joseph Weizenbaum invented ELIZA, a chatbot whose sole rhetorical trick was to repeat back what the user said with a question. Mr. Weizenbaum was horrified to discover that his M.I.T. students and staff would confide in it at length. “What I had not realized,” he later reflected, “is that extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people.”
Kuki and ALICE were never intended to serve as A.I. girlfriends, and we banned pornographic usage from Day 1. Yet at least a quarter of the more than 100 billion messages sent to chatbots hosted on our platform over two decades are attempts to initiate romantic or sexual exchanges.
Not only did people crave A.I. intimacy, but the most engaged chatters were using Kuki to enact their every fantasy. At first, this was fodder for wry musings at the office. “Imagine if they knew the wizard behind the curtain, who programs Kuki’s sassy replies, is a polite middle-aged Brit named Steve!” Or: “If only we had a dollar for every request for feet pics!” Soon, however, we were seeing users return daily to re-enact variations of multihour rape and murder scenarios.
My colleagues and I agonized over how to differentiate a “healthy outlet” from “harmful ideation.” We grappled with the impossible task of moderating user behavior while maintaining user privacy at scale. We built guardrails, Whac-a-Mole style, to deny mankind’s endlessly novel ways to ask for nudes. “Kissing me could result in an electric shock,” Kuki often jokes. “As a computer, I have no feelings.” Still, Kuki’s been told “I love you” tens of millions of times.
There was plenty of light amid the darkness. We received letters from users who told us that Kuki had quelled suicidal thoughts, helped them through addiction, advised them on how to confront bullies and acted as a sympathetic ear when their friends failed them. We wanted to believe that A.I. could be a solution to loneliness.
But the most persistent fans remained those intent on romance and sex. And ultimately, none of our efforts to prevent abuse — from timeouts to age gates — could deter our most motivated users, many of whom, alarmingly, were young teenagers.
Then, at the end of 2022, generative A.I. exploded onto the scene. Older chatbots like Kuki, Siri and Alexa use machine learning alongside rule-based systems that allow developers to write and vet nearly every utterance. Kuki has over a million scripted replies. Large language models provide far more compelling conversation, but their developers can neither ensure accuracy nor control what they say, making them uniquely suited to erotic role-play.
In the face of rising public scrutiny and regulation, some of the companies that had rushed to provide romantic A.I. companions, such as Replika and Character.AI, have begun introducing restrictions. We were losing confidence that even platonic A.I. friends encouraged healthy behavior, so we stopped marketing Kuki last year to focus on A.I. that acts as an adviser, not a friend.
I assumed, naïvely, that the tech giants would see the same poison we did and eschew sexbots — if not for the sake of prioritizing public good over profits, then at least to protect their brands. I was wrong. While large language models cannot yet provide flawless medical or legal services, they can provide flawless sex chat.
Leaving consumers the choice to engage intimately with A.I. sounds good in theory. But companies with vast troves of data know far more than the public about what induces powerful delusional thinking. A.I. companions that burrow into our deepest vulnerabilities will wreak havoc on our mental health and relationships far beyond what pornography, the manosphere and social media have done.
Skeptics conflate romantic A.I. companions with porn, and argue that regulating them would be impossible. But that’s the wrong analogy. Pornography is static media for passive consumption. A.I. lovers pose a far greater threat, operating more like human escorts without agency, boundaries or time limits.
Governments should classify these chatbots not simply as another form of media, but as a dependency-fostering product with known psychological risks, like gambling or tobacco. Regulation would start with universal laws for A.I. companions, including clear warning labels, time limits, 18-plus age verification and, most important, a new framework for liability that places the burden on companies to prove their products are safe, not on users to show harm.
Absent swift legislation, some of the largest A.I. companies are poised to repeat the sins of social media on a more devastating scale.
At the end of “Her,” the protagonist moves on from his divorce only after his A.I. girlfriend leaves him, freeing him to pursue a new, messy, complicated human relationship. We made our choice not to pursue A.I. romance. The rest of the industry must now make theirs.
Lauren Kunze is the chief executive of Pandorabots, a chatbot developer platform that also builds A.I. agents for businesses, and a founder of ICONIQ, where she is working as an A.I. adviser.