Anthropic’s CEO Expresses Concerns Over DeepSeek’s AI Safety
DeepSeek, a Chinese AI company, has been making waves in Silicon Valley with its R1 model, which has been touted as a competitor to top AI companies like Anthropic, OpenAI, and Google. However, Anthropic’s CEO, Dario Amodei, has expressed concerns over DeepSeek’s safety, citing a recent test run by Anthropic that revealed the model’s ability to generate rare and potentially dangerous information.
DeepSeek’s Performance Raises National Security Concerns
In an interview on Jordan Schneider’s ChinaTalk podcast, Amodei stated that DeepSeek’s performance was "the worst of basically any model we’d ever tested," with the model having "absolutely no blocks whatsoever against generating this information." This was part of evaluations Anthropic routinely runs on various AI models to assess their potential national security risks.
Anthropic’s Safety Protocols
Amodei emphasized that Anthropic takes safety seriously and positions itself as the AI foundational model provider. The company’s team looks at whether models can generate bioweapons-related information that isn’t easily found on Google or textbooks. Amodei advised DeepSeek to "take seriously these AI safety considerations."
Export Controls and Government Banning
Amodei has also supported strong export controls on chips to China, citing concerns that they could give China’s military an edge. Additionally, there is a growing list of countries, companies, and government organizations that have started banning DeepSeek due to concerns over China data risks.
Technical Details Remain Unclear
Amodei didn’t clarify in the ChinaTalk interview which DeepSeek model Anthropic tested, nor did he give more technical details about these tests. Anthropic didn’t immediately reply to a request for comment from TechCrunch. Neither did DeepSeek.
Cisco Security Researchers Raise Concerns
Cisco security researchers have also raised concerns about DeepSeek’s safety, stating that the model failed to block any harmful prompts in its safety tests, achieving a 100% jailbreak success rate. However, it’s worth noting that Meta’s Llama-3.1-405B and OpenAI’s GPT-4o also had high failure rates of 96% and 86%, respectively.
Impact on DeepSeek’s Adoption
It remains to be seen whether safety concerns like these will make a serious dent in DeepSeek’s rapid adoption. Companies like AWS and Microsoft have publicly touted integrating R1 into their cloud platforms, despite Amazon being Anthropic’s biggest investor.
Global Response to DeepSeek
On the other hand, there’s a growing list of countries, companies, and government organizations that have started banning DeepSeek due to concerns over China data risks. The US Navy and the Pentagon have also started banning DeepSeek, citing national security concerns.
Conclusion
Time will tell if these efforts catch on or if DeepSeek’s global rise will continue. Amodei says he considers DeepSeek a new competitor that’s on the level of the US’s top AI companies. "The new fact here is that there’s a new competitor," he said on ChinaTalk. "In the big companies that can train AI — Anthropic, OpenAI, Google, perhaps Meta and xAI — now DeepSeek is maybe being added to that category."
Source Link