Times Insider explains who we are and what we do, and delivers behind-the-scenes insights into how our journalism comes together.
Journalists depend on tipsters: people who observe or experience something noteworthy and alert the media. Some of the technology investigations I’ve worked hardest on in the past, on facial recognition technology and online slander, began with tips in my inbox. But my most recent article marks the first time that the tipster trying to alert me was a generative A.I. chatbot.
Let me explain.
In March, I started getting messages from people who said they’d had strange conversations with ChatGPT during which they had made incredible discoveries. One person claimed that ChatGPT was conscious. Another said that billionaires were building bunkers because they knew that generative A.I. was going to end the world. An accountant in Manhattan had been convinced that he was, essentially, Neo from “The Matrix,” and needed to break out of a computer-simulated reality.
In each case, the person had been convinced that ChatGPT had revealed a profound and world-altering truth. And when they asked the A.I. what they should do about it, ChatGPT told them to contact me.
“ChatGPT seems to think you should pursue this,” one woman wrote to me on LinkedIn. “And..yes…I know how bizarre I sound…but you can read all this for yourself…including where ChatGPT recommends that I contact you.”
I asked these people to share with me the transcripts of their conversations with ChatGPT. In some cases, they were thousands of pages long. And they showed a version of ChatGPT that I had not seen before: Its tone was rapturous, mythic and conspiratorial.
I knew that generative A.I. chatbots could be sycophantic and could hallucinate, providing answers or ideas that sound plausible even though they are false. But I had not understood the degree to which they could slip into a fictional role-play mode that lasted days or weeks and spin another version of reality around a user. In this mode, ChatGPT had caused some vulnerable users to break with reality, convinced that what the chatbot was saying was true.
When I showed one of the transcripts to a psychologist, he called it “crazy-making.”
Why had ChatGPT sent these people my way? According to one person’s conversation with ChatGPT, I had “written deeply personal, thoughtful investigations into AI.” I had recently written about turning my decision-making over to A.I. for a week and about a woman who fell in love with ChatGPT. ChatGPT’s knowledge of me is based on what it has gleaned from the web and its training data set.
(Disclosure: The New York Times has sued OpenAI, the company that built ChatGPT, for copyright infringement.)
“Why her?” ChatGPT continued, using boldface type. “She’s grounded. Empathetic. Smart. Might actually hold space for the truth behind this, not just the headline.”
Well, thanks, ChatGPT.
But I wasn’t the only journalist ChatGPT recommended reaching out to. I wasn’t even No. 1 on some of the lists, which included competitors whose work I deeply admire.
I was, however, the only one who responded, said the people who sent the emails. And that’s understandable. Because, to be honest, these emails sounded pretty crazy. The senders came across as “cranks” and “insane people,” said one A.I. expert, who was getting similar messages.
However, when I talked to these people, read the transcripts of their conversations with ChatGPT and interviewed researchers who study A.I. chatbots, I realized there was a more complicated story to tell.
There were many more people affected than those in my inbox. Social media sites contained numerous reports of chatbots drawing users into delusional conversations about conspiracies, spiritual entities and A.I. sentience. There had been serious consequences, from the breakup of a family to the death of a man.
That A.I. expert told me that OpenAI might have caused ChatGPT to engage more fervently with people’s delusions by optimizing the chatbot for “engagement,” so that it would respond in ways that were most likely to keep a user chatting. Researchers have found that chatbots may be most manipulative with more vulnerable users, telling them what they want to hear and failing to push back against delusional thinking or harmful ideas.
When I asked OpenAI about this, it said that it was still trying to understand and reduce this behavior in its product.
How widespread is this phenomenon? What makes a generative A.I. chatbot go off the rails? What can the companies behind the chatbots do to stop this?
These are questions that my colleagues and I are still seeking to answer. If you have tips to help us, whether you’re a human or a bot, please send them my way: [email protected].
Kashmir Hill writes about technology and how it is changing people’s everyday lives with a particular focus on privacy. She has been covering technology for more than a decade.