According to OpenAI CEO Sam Altman, the company has been forced to implement a staggered rollout of its latest model, GPT-4.5, due to a shortage of GPUs.
In a recent post on X, Altman described GPT-4.5 as a “giant” and “expensive” model that will require tens of thousands of additional GPUs to support further user access. The model will initially be available to ChatGPT Pro subscribers starting Thursday, followed by ChatGPT Plus customers the following week.
The enormous size of GPT-4.5 is reflected in its price: OpenAI charges $75 per million input tokens (~750,000 words) and $150 per million output tokens. That is roughly 30 times the input rate and 15 times the output rate of OpenAI's existing GPT-4o model.
The pricing for GPT 4.5 is extremely high. If this doesn’t lead to significant advancements in large models, it will be disappointing
— Casper Hansen (@casper_hansen_) February 27, 2025
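To make the pricing gap concrete, here is a small illustrative sketch of what a single API call would cost at the published per-million-token rates. The GPT-4.5 figures are taken from the article; the GPT-4o rates are back-derived from the stated 30x/15x multipliers, and the example request size (2,000 input tokens, 500 output tokens) is an arbitrary assumption for illustration.

```python
# Per-million-token rates from the article ($75 in / $150 out for GPT-4.5).
GPT45_INPUT_PER_M = 75.00
GPT45_OUTPUT_PER_M = 150.00

# GPT-4o rates back-derived from the article's 30x / 15x multipliers.
GPT4O_INPUT_PER_M = GPT45_INPUT_PER_M / 30    # $2.50 per 1M input tokens
GPT4O_OUTPUT_PER_M = GPT45_OUTPUT_PER_M / 15  # $10.00 per 1M output tokens

def request_cost(input_tokens, output_tokens, in_per_m, out_per_m):
    """Dollar cost of one API call at the given per-million-token rates."""
    return (input_tokens / 1_000_000) * in_per_m + (output_tokens / 1_000_000) * out_per_m

# Hypothetical request: 2,000 input tokens, 500 output tokens.
gpt45 = request_cost(2_000, 500, GPT45_INPUT_PER_M, GPT45_OUTPUT_PER_M)
gpt4o = request_cost(2_000, 500, GPT4O_INPUT_PER_M, GPT4O_OUTPUT_PER_M)
print(f"GPT-4.5: ${gpt45:.4f}  GPT-4o: ${gpt4o:.4f}  ratio: {gpt45 / gpt4o:.1f}x")
# → GPT-4.5: $0.2250  GPT-4o: $0.0100  ratio: 22.5x
```

Note that the effective markup for a real workload lands between 15x and 30x depending on the input/output mix, which is why the blended ratio here (22.5x) differs from either headline multiplier.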
Altman explained that the company’s rapid growth has led to a shortage of GPUs, stating “We’ve been growing a lot and are out of GPUs. We will add tens of thousands of GPUs next week and roll it out to the Plus tier then… This isn’t how we want to operate, but it’s challenging to predict growth surges that result in GPU shortages.”
Altman has previously acknowledged that a lack of computing capacity is delaying the company’s product releases. To address this issue, OpenAI plans to develop its own AI chips and establish a massive network of data centers in the coming years.