Across the final years of his life, David Mayer, a theater professor living in Manchester, England, faced the cascading consequences of an unfortunate coincidence: A dead Chechen rebel on a terror watch list had once used Mr. Mayer’s name as an alias.
The real Mr. Mayer had travel plans thwarted, financial transactions frozen and crucial academic correspondence blocked, his family said. The frustrations plagued him until his death in 2023, at age 94.
But this month, his fight for his identity edged back into the spotlight when eagle-eyed users noticed one particular name was sending OpenAI’s ChatGPT bot into shutdown.
David Mayer.
Users’ attempts to prompt the bot to say “David Mayer” in a variety of ways resulted in error messages, or the bot simply refused to respond. It’s unclear why the name was kryptonite for the bot service, and OpenAI would not say whether the professor’s plight was related to ChatGPT’s issue with the name.
But the saga underscores some of the prickliest questions about generative A.I. and the chatbots it powers: Why did that name knock the chatbot out? Who, or what, is making those decisions? And who is responsible for the mistakes?
“This was something that he would’ve almost enjoyed, because it would have vindicated the effort he put in to trying to deal with it,” Mr. Mayer’s daughter, Catherine, said of the debacle in an interview.
ChatGPT generates its responses by making probabilistic guesses about which text belongs together in a sequence, based on a statistical model trained on examples pulled from all over the internet. But those guesses are not always perfect.
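A stripped-down illustration of what that probabilistic guessing means in practice might look like the sketch below. The tokens and probabilities are invented for demonstration, not drawn from OpenAI’s actual system, which uses a neural network over a vocabulary of tens of thousands of tokens.

```python
import random

# Toy next-token prediction: given the text so far, the model assigns a
# probability to each candidate token, then samples one. These candidate
# words and their probabilities are made up for illustration only.
next_token_probs = {
    "professor": 0.45,
    "playwright": 0.25,
    "historian": 0.20,
    "astronaut": 0.10,
}

tokens, weights = zip(*next_token_probs.items())
choice = random.choices(tokens, weights=weights, k=1)[0]

print("David Mayer, the", choice, "...")
```

Because each word is sampled from a distribution rather than looked up in a database, the output can sound fluent while being wrong, which is one reason such systems "hallucinate."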
“One of the biggest issues these large language models have is they hallucinate. They make up something that’s inaccurate,” said Sandra Wachter, a professor who studies ethics and emerging technologies at Oxford University. “You all of the sudden find yourself in a legally troubling environment. I could assume that something like this actually might be a consideration why some of those prompts have been blocked.”
Mr. Mayer’s name, it turns out, is not the only one that has stymied ChatGPT. “Jonathan Turley” still prompts an error message. So do “David Faber,” “Jonathan Zittrain” and “Brian Hood.”
The names at first glance do not appear to have much in common: Mr. Turley is a Fox News legal analyst and law professor, Mr. Faber a CNBC news anchor, Mr. Zittrain a Harvard professor and Mr. Hood a mayor in Australia.
What links them may be a privacy stipulation that could keep their names off ChatGPT’s platform. Mr. Hood took legal action against OpenAI after ChatGPT falsely claimed he had been arrested for bribery. Mr. Turley has similarly said the chatbot referenced seemingly nonexistent accusations that he had committed sexual harassment.
“It can be rather chilling for academics to be falsely named in such accounts and then effectively erased by the program after the error was raised,” Mr. Turley said in an email. “The company’s lack of response and transparency has been particularly concerning.”
Mr. Zittrain has lectured on the “right to be forgotten” in tech and digital spaces — a legal standard that forces search engines to delete links to sites that include information considered inaccurate or irrelevant. But he said in a post on X that he had not asked to be excluded from OpenAI’s algorithms. In an interview, he said he had noticed the chatbot quirk a while ago and didn’t know why it happened.
“The basic architecture of these things is still kind of a Forrest Gump box of chocolates,” he said.
When it comes to Mr. Mayer, the glitch appeared to have been patched this week, and ChatGPT can now say the name “David Mayer” unhindered. But the other names still trip up the bot.
Metin Parlak, a spokesman for OpenAI, said in a statement that the company did not comment on individual cases. “There may be instances where ChatGPT does not provide certain information about people to protect their privacy,” he said.
OpenAI declined to discuss any specific circumstances around the name “David Mayer,” but said a tool had mistakenly flagged the name for privacy protection — a quirk that has been fixed.
When asked this week why it couldn’t previously say Mr. Mayer’s name, ChatGPT said it wasn’t sure.
“I’m not sure what happened there!” the bot said. “Normally, I can mention any name, including ‘David Mayer,’ as long as it’s not related to something harmful or private.”
The bot was unable to give any information about Mr. Mayer’s former predicament with his name, and said there were not any available sources on the matter. Pointed to several mainstream media articles on the subject, the chatbot couldn’t explain the discrepancy.
Asked to further identify Mr. Mayer, ChatGPT only noted his work as an academic and professor, but could not speak to the issue involving Mr. Mayer’s name.
“It seems you’re referring to a very specific and potentially sensitive incident involving Professor David Mayer and a Chechen rebel, which might have been a major news event or scandal in the years before his passing,” the bot said. “Unfortunately, I don’t have any direct information on that particular event in my training data.”
It then suggested that the user research the question independently.