New Delhi: ChatGPT 5.5, OpenAI’s newest update, is making waves, and not just for its enhanced features. According to users on social platforms such as X, the system is showing unexpected behaviour: the AI occasionally makes playful references to goblins, gremlins, and other mythical creatures. A handful of playful screenshots quickly went viral, with some users likening the output to something out of a fantasy dungeon or an anime.
The contrast between the system’s structured design and its quirky output has only fuelled the buzz. Even Sam Altman joined the conversation, posting tweets that played along with the trend. OpenAI, meanwhile, acknowledged the problem, saying the behaviour was unintended and traced back to training.
How did the ‘goblin’ behaviour start?
The problem, OpenAI said, goes back to earlier versions of the model, such as GPT-5.1. During training, the company applied personality tuning, including a “Nerdy” profile that encouraged metaphorical, playful language. This style sometimes favoured imaginative analogies, including comparisons to fictional creatures.
Over time, reinforcement learning amplified the tendency: words such as “goblin” and “gremlin” were favoured in some responses, so the model used them more frequently. Although the “Nerdy” profile was later removed, the language pattern had already spread into general responses through the repeated reuse of those outputs in training data and subsequent fine-tuning.
Why did it spread beyond one feature?
The habit didn’t remain restricted to the personality feature. OpenAI’s analysis found that rewarding a particular style can cause it to spread inadvertently throughout the system, because the model’s own responses are recycled during training.
In simple terms: playful language was rewarded → it appeared more often → those outputs were reused in training → the model learned to normalise the style. As a result, words like “goblin”, “troll”, and “gremlin” began appearing even when they were not contextually relevant.
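The loop above can be sketched as a toy simulation. To be clear, this is an illustration of the general feedback mechanism, not OpenAI’s actual pipeline: the vocabulary, corpus sizes, and reward multiplier are all invented for the example, and the “model” is just a word-frequency table.

```python
import random

random.seed(0)  # fixed seed so the toy run is reproducible

VOCAB = ["helpful", "detailed", "concise", "goblin", "gremlin"]
REWARDED = {"goblin", "gremlin"}  # the playful style that got rewarded
REWARD_BOOST = 1.5                # hypothetical reward multiplier

def train(corpus):
    """'Train' by counting word frequencies in the corpus."""
    freqs = {w: 1 for w in VOCAB}  # +1 smoothing so no word hits zero
    for word in corpus:
        freqs[word] += 1
    return freqs

def generate(freqs, n=1000):
    """Sample n words; rewarded words get a sampling boost."""
    weights = [freqs[w] * (REWARD_BOOST if w in REWARDED else 1.0)
               for w in VOCAB]
    return random.choices(VOCAB, weights=weights, k=n)

def simulate(generations=5):
    """Run the reward -> output -> training-data loop several times,
    tracking the share of 'goblin'/'gremlin' in each generation."""
    corpus = (["helpful"] * 400 + ["detailed"] * 400
              + ["concise"] * 150 + ["goblin"] * 30 + ["gremlin"] * 20)
    shares = []
    for _ in range(generations):
        freqs = train(corpus)
        corpus = generate(freqs)  # outputs become the next round's data
        rewarded = sum(corpus.count(w) for w in REWARDED)
        shares.append(rewarded / len(corpus))
    return shares

for gen, share in enumerate(simulate()):
    print(f"generation {gen}: goblin/gremlin share = {share:.0%}")
```

Each round, the boosted words make up a larger slice of the generated text, and because that text becomes the next round’s training data, the share keeps climbing even though no single step looks dramatic. This is the sense in which a style rewarded in one narrow feature can “leak” into general responses.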
OpenAI’s response and what’s next
OpenAI’s engineers have publicly confirmed the quirk, saying it was not intended. The company says subsequent training runs have already been adjusted to strip the reward signals for this type of language and to filter out irrelevant references to mythical creatures.
However, training for GPT-5.5 began before the issue became fully apparent, so the quirk persists in some responses. OpenAI has not yet said when a complete fix will be rolled out.
In the meantime, Altman has made the most of the opportunity, even invoking fantasy books and promising a small, exclusive developer event in San Francisco at 5:55 pm on May 5. What started as a potential bug has, for the time being, become a feature.