OpenAI, the company behind ChatGPT, is having all sorts of troubles right now. AI enthusiasm is waning; people are turning against the technology as they see its real-world consequences, which have thus far largely gone unregulated in the United States. These are serious, existential issues facing not just one company but the entire industry. Meanwhile, OpenAI is also dealing with a weirder, smaller, much sillier problem: goblins.
Business Insider reports that, according to internal documentation shared on GitHub, OpenAI’s newer models, particularly GPT-5.5, had developed a habit of referencing goblins, gremlins, and other such fantasy races in normal responses that did not warrant references to mythical creatures. It became a problem when users noticed the AI was dropping phrases like “goblin mode” or “gremlin” into its technical explanations.
OpenAI responded by instructing the model to avoid all references to “goblins, gremlins, raccoons, trolls, ogres, pigeons, or other animals or creatures unless it is absolutely and unambiguously relevant to the user’s query.”
The company is apparently so serious about enforcing the new directive that the rule shows up multiple times in the code, which suggests to experts that this was a much bigger, more widespread problem, one that needed a correction hardcoded into the AI’s DNA, for lack of a better term.
OpenAI later explained the root cause: a personality setting called “Nerdy,” introduced in earlier versions, unintentionally rewarded whimsical language, including references to mythical fantasy creatures. The funny part is that even after that personality was retired, newer models had already absorbed the behavior during training.
In short, the AI wasn’t glitching randomly. It was doing exactly what it had been incentivized and trained to do, producing responses that read like they were co-written by J.R.R. Tolkien, no matter what you asked.
The post ChatGPT Just Got a Weird New List of Forbidden Topics (Including Gremlins) appeared first on VICE.