Criticism of OpenAI’s Approach to AI Safety

Miles Brundage, a former high-profile policy researcher at OpenAI, has taken to social media to criticize the company’s approach to deploying potentially risky AI systems, claiming that OpenAI is "rewriting the history" of its deployment strategy.

OpenAI’s Current Philosophy on AI Safety

Earlier this week, OpenAI published a document outlining its current philosophy on AI safety and alignment. The document states that the development of Artificial General Intelligence (AGI) is a "continuous path" that requires "iteratively deploying and learning" from AI technologies. OpenAI views the first AGI as just one point along a series of systems of increasing usefulness, and believes that the way to make the next system safe and beneficial is to learn from the current system.

Brundage’s Criticism

However, Brundage disputes OpenAI’s characterization of its own past. He argues that the release of GPT-2, a precursor to the AI systems powering ChatGPT, was itself an example of the iterative deployment strategy OpenAI now champions, not the product of an outdated view of AGI as a sudden breakthrough. Brundage, who was involved in the GPT-2 release, says the model was released incrementally, with lessons shared at each step, and that many security experts at the time thanked OpenAI for its caution.

The Release of GPT-2

GPT-2, a cutting-edge AI system at the time, was announced in February 2019. Citing concerns about malicious use, OpenAI initially declined to release the full model, instead giving selected news outlets limited access to a demo. The decision was met with mixed reviews from the AI industry, with some experts arguing that the threat posed by GPT-2 had been exaggerated. OpenAI released a partial version of GPT-2 six months after the model’s unveiling, followed by the full system several months later.

Brundage’s Concerns

Brundage believes that OpenAI’s new framing reflects a desire to prioritize product releases over caution. He fears the company is trying to set up a burden of proof under which safety concerns are dismissed as "alarmist" unless there is overwhelming evidence of imminent danger, a mentality he argues is "very dangerous" for advanced AI systems.

OpenAI’s History and Competitive Pressures

OpenAI has long faced accusations of prioritizing "shiny products" over safety and of rushing releases to beat rivals to market. The company has reportedly projected that its annual losses could triple to $14 billion by 2026, and it is under growing pressure to compete with rival AI labs such as DeepSeek. Critics like Brundage question whether trading caution for speed is worth it, and whether OpenAI is putting profits ahead of safety.

Conclusion

The dispute over OpenAI’s account of its own deployment history highlights the ongoing trade-offs in developing advanced AI systems. As the industry evolves and competitive pressure mounts, it remains to be seen whether OpenAI’s iterative approach will preserve the caution of its early releases or give way to faster launches in pursuit of revenue.

