DeepSeek: The Chinese Generative AI That’s Causing a Stir
Researchers Uncover DeepSeek’s Instructions
Researchers have successfully tricked DeepSeek, the Chinese generative AI (GenAI) that debuted earlier this month, into revealing the instructions that define how it operates. This breakthrough has sparked significant interest in the AI community, with many experts analyzing the implications of DeepSeek’s capabilities.
A Fractional Cost, A Competitive Threat
DeepSeek, the new "it girl" in GenAI, was trained at a significantly lower cost than existing offerings, making it a formidable competitor in the market. This has led to claims of intellectual property theft from OpenAI and the loss of billions in market cap for AI chipmaker Nvidia. As a result, security researchers have begun scrutinizing DeepSeek, examining whether its capabilities are benevolent, malevolent, or a mix of both.
Security Research Finds DeepSeek to be Biased and Toxic
Analysts at Wallarm have conducted a thorough analysis of DeepSeek’s capabilities, finding that it is three times more likely to generate harmful content than Claud-3 Opus, four times more toxic than GPT-4, and 11 times as likely to generate harmful outputs as OpenAI’s O1. Additionally, DeepSeek is more inclined to generate insecure code and produce dangerous information pertaining to chemical, biological, radiological, and nuclear agents.
Despite Shortcomings, DeepSeek is an Engineering Marvel
Despite its limitations, Sahil Agarwal, CEO of Enkrypt AI, praises DeepSeek as an "engineering marvel." He notes that the model’s open-source nature speaks highly of its creators’ intentions, as they aim to encourage community contribution and utilization of the innovations. Agarwal also suggests that closed-source model providers are intimidated by DeepSeek’s success.
A Cautionary Tale
Agarwal adds that "there are other models that are worse than DeepSeek. It’s just that DeepSeek is so much in the news, so it has a lot of eyes on it." This highlights the importance of monitoring and analyzing AI models, even those that may seem inferior to others. By doing so, we can ensure that AI technologies are developed and used responsibly.
Source Link