OpenAI’s Latest Machine Learning Model Arrives

OpenAI’s latest machine learning model, o3-mini, has arrived, and it’s available to try now. For the first time, OpenAI is making one of its "reasoning" models available to free users of ChatGPT. If you want to try it yourself, select "Reason" in the message composer to get started.

Key Features and Performance

According to OpenAI, o3-mini is faster and more accurate than its predecessor, o1-mini. In A/B testing, the company found o3-mini delivered responses 24 percent faster than o1-mini. Moreover, when set to its "medium" reasoning effort, the new model can approach the performance of the more expensive o1 system on some math, coding, and science benchmarks. Like OpenAI’s other reasoning models, o3-mini will show you how it arrived at an answer instead of simply responding to a prompt.
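
For developers calling the model through the API rather than ChatGPT, that reasoning effort is exposed as a request parameter. Here is a minimal sketch using the OpenAI Python SDK; it assumes the openai package is installed and an OPENAI_API_KEY environment variable is set, and the prompt is purely illustrative:

```python
# Minimal sketch: calling o3-mini at "medium" reasoning effort with the
# OpenAI Python SDK. Assumes the `openai` package is installed and the
# OPENAI_API_KEY environment variable is set; the prompt is illustrative.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="o3-mini",
    reasoning_effort="medium",  # accepts "low", "medium", or "high"
    messages=[
        {"role": "user", "content": "How many prime numbers are there below 100?"}
    ],
)

print(response.choices[0].message.content)
```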

Mission to Push the Boundaries of Cost-Effective Intelligence

"The release of OpenAI o3-mini marks another step in OpenAI’s mission to push the boundaries of cost-effective intelligence," OpenAI said. "By optimizing reasoning for STEM domains while keeping costs low, we’re making high-quality AI even more accessible." This model continues OpenAI’s track record of driving down the cost of intelligence, reducing per-token pricing by 95% since launching GPT-4, while maintaining top-tier reasoning capabilities.

Background and Timeline

When OpenAI first previewed o3 and o3-mini at the end of last year, CEO Sam Altman said the latter would arrive "around the end of January." Altman gave a more concrete timeline on January 17 when he wrote on X that OpenAI was "planning to ship in a couple of weeks." Now that it’s here, it’s safe to say o3-mini arrives with a sense of urgency.

Rise of DeepSeek and Its Impact on OpenAI

On January 20, the same day Altman was attending Donald Trump’s inauguration, China’s DeepSeek quietly released its R1 chain-of-thought model. By January 27, after going viral, the company’s chatbot had surpassed ChatGPT as the most-downloaded free app on the US App Store. DeepSeek’s overnight success wiped out $1 trillion in stock market value, and it almost certainly left OpenAI blindsided.

OpenAI’s Response to DeepSeek

In the aftermath of last week’s events, OpenAI said it was working with Microsoft to identify two accounts the company claims may have distilled its models. Distillation is the process of transferring the knowledge of an advanced AI system to a smaller, more efficient one. OpenAI’s terms of service allow for distillation as long as users don’t train competing models on the outputs of the company’s AI.
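
To make the term concrete, here is a generic sketch of how distillation is commonly implemented, not a claim about OpenAI’s or DeepSeek’s actual pipelines: a smaller student model is trained to match the softened output distribution of a larger teacher, typically with a KL-divergence loss. PyTorch and the temperature value are assumptions made for illustration.

```python
# Generic knowledge-distillation loss (illustrative only): the student learns
# to match the teacher's softened output distribution via KL divergence.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits: torch.Tensor,
                      teacher_logits: torch.Tensor,
                      temperature: float = 2.0) -> torch.Tensor:
    """KL divergence between softened teacher and student distributions."""
    student_log_probs = F.log_softmax(student_logits / temperature, dim=-1)
    teacher_probs = F.softmax(teacher_logits / temperature, dim=-1)
    # Scale by T^2 so gradient magnitudes stay comparable across temperatures.
    return F.kl_div(student_log_probs, teacher_probs,
                    reduction="batchmean") * temperature ** 2

# Usage sketch: the teacher is frozen; only the student is updated.
# with torch.no_grad():
#     teacher_logits = teacher_model(batch)
# loss = distillation_loss(student_model(batch), teacher_logits)
# loss.backward()
```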

OpenAI’s Stance on Distillation

OpenAI did not explicitly name DeepSeek. "We know [China]-based companies — and others — are constantly trying to distill the models of leading US AI companies," an OpenAI spokesperson told The Guardian recently. However, David Sacks, President Trump’s AI advisor, was more direct, claiming there was "substantial evidence" that DeepSeek had "distilled the knowledge out of OpenAI’s models."
