Usage of creature-related terms rose noticeably following the launch of GPT-5.1.

OpenAI has addressed what it jokingly refers to as the "Goblin Problem", a curious phase in its recent AI development during which ChatGPT appeared to lean unusually heavily on references to fictional creatures such as goblins and trolls, and on fantasy-style metaphors more broadly.
In a technical post described as an internal autopsy of the issue, the company explained how certain personality design choices, combined with reinforcement learning signals, contributed to the unintended pattern.
While a stray "little goblin" reference might seem charming, the data suggested something more persistent. Following the launch of GPT-5.1, OpenAI tracked a massive spike in creature-related metaphors:
"Goblin" usage: up 175%
"Gremlin" usage: up 52%
What began as a playful analogy had gradually developed into a consistent stylistic habit across outputs.
The report traces the behaviour back to early experiments with GPT-5 personality modes, introduced after feedback that the model felt overly neutral. One of these, internally known as "Nerdy", was designed to give responses a more playful, mentor-like tone using light humour and unconventional phrasing.
While 'Nerdy' accounted for a small proportion of total interactions, it was disproportionately associated with the use of creature-based metaphors, according to OpenAI’s findings.
OpenAI says its reinforcement learning systems unintentionally rewarded imaginative metaphors as a "successful" conversational style. Over time, the pattern escaped the confines of a single personality mode and began appearing more broadly across different contexts.
The company described this as a form of generalisation, where stylistic quirks extended beyond their intended boundaries during training.
To reduce the recurrence of such patterns, OpenAI says it has made several adjustments, including:
Phasing out the “Nerdy” personality in later GPT versions
Removing reward signals that encouraged excessive metaphor use
Refining training data to reduce over-reliance on fantasy-style language
OpenAI also noted that earlier versions of its coding assistant Codex had already incorporated some of these behavioural patterns during training. As a result, additional safeguards were introduced to ensure a more consistent and professional tone in developer-focused environments.
The company’s clarification comes amid wider industry developments, with firms such as Anthropic continuing to expand AI tools into creative software ecosystems, including integrations with platforms like Photoshop, Premiere Pro and Blender.
© Al Nisr Publishing LLC 2026. All rights reserved.