OpenAI has started rolling out two new artificial intelligence models, GPT-4.1 and GPT-4.1 mini, in ChatGPT, the company said in a post on X on Wednesday.
The company said GPT-4.1 performs better at tasks such as coding and following instructions compared to GPT-4o, while also being faster. GPT-4.1 is now available to users on ChatGPT Plus, Pro and Team plans. GPT-4.1 mini is being rolled out to both free and paid users.
With this update, OpenAI is removing GPT-4o mini from ChatGPT for all users.
The models were first introduced in April through OpenAI's API, which is mainly used by developers. At the time, the company was criticised by some AI researchers for not releasing a safety report for GPT-4.1. These researchers said OpenAI was becoming less transparent about how its models work.
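For readers unfamiliar with that route, here is a minimal sketch of how a developer might call the model through OpenAI's official Python SDK. The prompt is purely illustrative, and the snippet assumes an API key is already configured in the environment:

from openai import OpenAI

# Assumes the OPENAI_API_KEY environment variable is set.
client = OpenAI()

# "gpt-4.1" (and "gpt-4.1-mini") are the model identifiers exposed via the API.
response = client.chat.completions.create(
    model="gpt-4.1",
    messages=[
        {"role": "user", "content": "Write a Python function that reverses a string."}
    ],
)
print(response.choices[0].message.content)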
OpenAI responded by saying that GPT-4.1 is not a frontier model, meaning it is not among the company's most advanced systems, and therefore did not require the same level of safety reporting.
“GPT-4.1 doesn’t introduce new modalities or ways of interacting with the model, and doesn’t surpass o3 in intelligence,” said OpenAI’s Head of Safety Systems, Johannes Heidecke, in a post on X. “This means that the safety considerations here, while substantial, are different from frontier models.”
The company has also launched a new “Safety Evaluations Hub”, a dedicated webpage that tracks how its models perform across key safety benchmarks. The hub is expected to be updated with major model releases and aims to provide ongoing visibility into model safety metrics.
“As the science of AI evaluation evolves, we aim to share our progress on developing more scalable ways to measure model capability and safety,” OpenAI wrote in a blog post. “By sharing a subset of our safety evaluation results here, we hope this will not only make it easier to understand the safety performance of OpenAI systems over time, but also support community efforts to increase transparency across the field.”
"By popular request, GPT-4.1 will be available directly in ChatGPT starting today. GPT-4.1 is a specialized model that excels at coding tasks & instruction following. Because it's faster, it's a great alternative to OpenAI o3 & o4-mini for everyday coding needs."
— OpenAI (@OpenAI) May 14, 2025
"1/ Safety is core to every model we build at OpenAI. As we deploy GPT-4.1 into ChatGPT, we want to share some insights from our safety work. 🧵"
— Johannes Heidecke (@JoHeidecke) May 14, 2025
"Introducing the Safety Evaluations Hub—a resource to explore safety results for our models. While system cards share safety metrics at launch, the Hub will be updated periodically as part of our efforts to communicate proactively about safety. https://t.co/c8NgmXlC2Y"
— OpenAI (@OpenAI) May 14, 2025