OpenAI is retiring the famous GPT-4o model, says GPT-5.2 is good enough
OpenAI has confirmed that it’s retiring GPT-4o, one of ChatGPT’s most beloved models, along with several others, including GPT-5 Instant, GPT-5 Thinking, GPT-4.1, GPT-4.1 mini, and o4-mini.
In a support document, OpenAI said it made the final decision to retire GPT-4o after GPT-5.2 began living up to expectations.
“On February 13, 2026, alongside the previously announced retirement of GPT-5 (Instant and Thinking), we will retire GPT-4o, GPT-4.1, GPT-4.1 mini, and OpenAI o4-mini from ChatGPT. In the API, there are no changes at this time,” OpenAI said in a statement.
GPT-4o was a special model because it felt more personal and warm. In fact, OpenAI had to bring it back after user backlash when it was initially removed.
“We brought GPT-4o back after hearing clear feedback from a subset of Plus and Pro users, who told us they needed more time to transition key use cases, like creative ideation, and that they preferred GPT-4o’s conversational style and warmth.”
OpenAI says feedback from those who love and still use GPT-4o shaped the development of GPT-5.1 and GPT-5.2, but now it’s time to say goodbye to the old model.
Ahead of the retirement, OpenAI has already rolled out the Personality feature, which makes it easier to customize your AI experience and bring it closer to GPT-4o’s style.
However, unlike GPT-5.2, which plays it safe, GPT-4o was unhinged, which explains why some users preferred it over newer models such as GPT-5, GPT-5.1, or even GPT-5.2.
“We’re announcing the upcoming retirement of GPT-4o today because these improvements are now in place, and because the vast majority of usage has shifted to GPT-5.2, with only 0.1% of users still choosing GPT-4o each day,” the company said.
OpenAI plans to continue working on ChatGPT personalization and on integrating new safeguards.
