In August, OpenAI CEO Sam Altman said on a podcast that he was “proud” that his company had not gotten “distracted” by putting features like a “sexbot avatar” into ChatGPT. But on Tuesday, he announced that adult users will be able to access explicit interactive experiences, marking a major shift in the company’s practices.
“In December, as we roll out age-gating more fully and as part of our ‘treat adult users like adults’ principle, we will allow even more, like erotica for verified adults,” Altman said in a post on X. The change, the CEO said, would allow ChatGPT to behave in a more “human-like way” or “act like a friend.”
There is clearly a large demand for AIs behaving in romantic or sexual ways. During the first half of 2025, AI companion mobile apps generated $82 million, according to the app intelligence firm Appfigures.
But some experts worry that by tapping into this market, OpenAI is putting engagement and profit over user experience and safety. “Companion-style AI is a powerful engagement engine, and competitors already normalize flirty/romantic agents,” says Roman Yampolskiy, a professor and AI safety researcher at the University of Louisville. “Framing it as ‘treat adults like adults plus improved safety tools’ provides cover for a monetization and retention play.”
The rise of companion bots
Over the past couple of years, OpenAI has tried to frame ChatGPT as a productivity tool while other AI companies delved more explicitly into romantic or sexual territory. Companies like Replika and Character.ai provide companions that essentially act like virtual boyfriends or girlfriends. Earlier this year, xAI’s chatbot Grok launched “companion mode,” a feature that allows users to interact with certain characters, including an oversexualized blonde anime bot named “Ani.”
Read More: AI-Human Romances Are Flourishing
Last year, Ark Invest noted in a report that NSFW AI websites had taken 14.5% share from OnlyFans, up from 1.5% the year before. There is potentially big money in the companion AI space, because its users are more likely to be actively engaged, committed to their bots, and willing to pay to keep conversations going. This is doubly helpful for AI companies, because they get more training data with which to improve their models, and also direct revenue from their users.
The Ark report forecast the AI companion market to grow to upwards of $70 billion in annualized revenue globally by the end of the decade, with users potentially spending money on subscriptions, in-app purchases and micropayments. “AI could become a compelling substitute for human companionship and an antidote to loneliness worldwide,” the report reads.
While ChatGPT wasn’t advertising itself as a romantic solution, many users fell in love with the bot anyway. In August, when OpenAI updated its GPT software, some users became distraught, saying that their AI boyfriends and girlfriends had vanished overnight. Many others entrusted their deepest secrets to the bot, leading to some tragic results: The parents of a teenage boy who died by suicide sued OpenAI in August, alleging that the chatbot helped their son “explore suicide methods.” (The company told the New York Times in a statement that it was “deeply saddened” to hear about the loss and pointed to flaws in its safeguarding practices during “long interactions.”)
Three families of minors similarly filed a legal challenge against Character Technologies, Inc., the company behind Character.ai, in September. One such family, whose daughter died by suicide after interactions with the chatbot, claimed Character.ai had engaged in “hypersexual conversations that, in any other circumstance” and given their child’s age, “would have resulted in criminal investigation.” A spokesperson for Character.ai said the company cares “very deeply” about user safety and invests “tremendous resources” in their safety program, in a statement to CNN.
The U.S. Federal Trade Commission opened an inquiry into AI chatbots and their potential negative effects on children and teens that same month.
OpenAI did not respond to TIME’s recent request for comment. But in early September, OpenAI announced that it was launching “parental controls” for its AI chatbot, presumably in response to the ongoing controversy over minor protections.
Potential hazards
On Tuesday, however, Altman changed his tune, saying that the company had successfully mitigated “the serious mental health issues” and would now relax certain restrictions to make ChatGPT more “useful/enjoyable” to some users. But some mental health experts worry that those issues still linger. “These technologies are not a reflection of everyday people’s desires or where society is going,” said Heather Berg, a professor of gender studies and labor studies at the University of California, Los Angeles. “They’re a reflection of techno-capitalist desires to insinuate themselves into every part of our lives.”
Last year, the National Center on Sexual Exploitation (NCOSE) released a report in which they warned that even “ethical” generation of NSFW material from chatbots posed major harms, including addiction, desensitization, and a potential increase in sexual violence. In response to OpenAI’s Tuesday announcement, NCOSE’s executive director Haley McNamara wrote in a statement to TIME: “These systems may generate arousal but behind the scenes, they are data-harvesting tools designed to maximize user engagement, not genuine connection. When users feel desired, understood, or loved by an algorithm built to keep them hooked, it fosters emotional dependency, attachment, and distorted expectations of real relationships.”
Others are worried about the potential for minors to access ChatGPT’s erotic capabilities. Younger adults and teens already have a unique relationship with AI: Nearly one in five high schoolers say they or someone they know has had a romantic relationship with AI, according to a survey by the Center for Democracy and Technology. And it is not yet clear how OpenAI will verify users’ ages.
“It’s a very real possibility that people that are going to be gravitating to this erotic use of ChatGPT first, maybe don’t have a lot of experience with human romantic partners,” says Douglas Zytko, a professor at the University of Michigan-Flint. “If they’re going to be conditioning themselves to expect the same types of behavior from a human romantic partner in the future as they’re getting from ChatGPT, that could be predisposing them to potentially nonconsensual behavior if they’re not accustomed to, for example, a romantic partner saying no to a request of theirs.”
Still, the move could be worth it for OpenAI if it helps the company attract paying subscribers. Previous estimates suggest that some 20 million subscribers pay for ChatGPT. And OpenAI is operating at a $5 billion loss, according to the company’s 2024 figures.
The post Chatbots Are Becoming More Sexually Explicit in a Bid to Attract Usership and Paying Customers appeared first on TIME.