DeepSeek has gained widespread attention and popularity.
A Chinese AI lab, DeepSeek, recently entered the public eye after its chatbot app topped both the Apple App Store and Google Play charts. The company’s AI models, which were trained using efficient computing techniques, have led Wall Street analysts and technologists to question whether the US can maintain its lead in the AI race and whether demand for AI chips will be sustained.
However, the question remains: where did DeepSeek originate, and how did it achieve international recognition so rapidly?
DeepSeek’s Trader Origins
DeepSeek is supported by High-Flyer Capital Management, a Chinese quantitative hedge fund that utilizes AI to inform its trading decisions.
AI enthusiast Liang Wenfeng co-founded High-Flyer in 2015. Liang, who reportedly began trading while studying at Zhejiang University, formally established High-Flyer Capital Management as a hedge fund in 2019, focused on developing and deploying AI algorithms.
In 2023, High-Flyer launched DeepSeek as a lab dedicated to researching AI tools separate from its financial business. With High-Flyer as one of its investors, the lab spun off into its own company, also called DeepSeek.
From its inception, DeepSeek built its own data center clusters for model training. However, like other AI companies in China, DeepSeek has been impacted by US export bans on hardware. To train one of its more recent models, the company was forced to use Nvidia H800 chips, a less powerful version of the H100 chip available to US companies.
DeepSeek’s technical team is said to skew young. The company reportedly recruits doctoral AI researchers aggressively from top Chinese universities. It also hires people without computer science backgrounds to help its tech better understand a wide range of subjects, according to The New York Times.
DeepSeek’s Strong Models
DeepSeek unveiled its first set of models, including DeepSeek Coder, DeepSeek LLM, and DeepSeek Chat, in November 2023. However, it wasn’t until the release of its next-generation DeepSeek-V2 family of models the following spring that the AI industry began to take notice.
DeepSeek-V2, a general-purpose text- and image-analyzing system, performed well in various AI benchmarks and was significantly cheaper to run than comparable models at the time. This led DeepSeek’s domestic competitors, including ByteDance and Alibaba, to reduce the usage prices for some of their models and make others completely free.
DeepSeek-V3, launched in December 2024, further solidified DeepSeek’s reputation.
According to DeepSeek’s internal benchmark testing, DeepSeek-V3 outperforms both downloadable, openly available models like Meta’s Llama and “closed” models that can only be accessed through an API, such as OpenAI’s GPT-4o.
Equally impressive is DeepSeek’s R1 “reasoning” model, released in January. DeepSeek claims that R1 performs as well as OpenAI’s o1 model on key benchmarks.
As a reasoning model, R1 effectively fact-checks itself, which helps it avoid pitfalls that commonly trip up other models. Reasoning models take slightly longer to arrive at solutions than typical non-reasoning models, but they tend to be more reliable in domains such as physics, math, and other sciences.
However, there is a downside to R1, DeepSeek V3, and DeepSeek’s other models. As Chinese-developed AI, they are subject to benchmarking by China’s internet regulator to ensure that their responses “embody core socialist values.” In DeepSeek’s chatbot app, for example, R1 won’t answer questions about Tiananmen Square or Taiwan’s autonomy.
A Disruptive Approach
If DeepSeek has a business model, it’s not entirely clear what that model is. The company prices its products and services well below market value and gives others away for free.
According to DeepSeek, efficiency breakthroughs have enabled it to maintain extreme cost competitiveness. However, some experts dispute the figures the company has supplied.
Regardless, developers have taken to DeepSeek’s models, which aren’t open source in the classical sense but are available under permissive licenses that allow for commercial use. According to Clem Delangue, the CEO of Hugging Face, one of the platforms hosting DeepSeek’s models, developers on Hugging Face have created over 500 “derivative” models of R1 that have racked up 2.5 million downloads combined.
DeepSeek’s success against larger, more established rivals has been described as both “upending AI” and “over-hyped.” It was at least partly responsible for Nvidia’s stock price dropping 18% in January, and it elicited a public response from OpenAI CEO Sam Altman.
Microsoft announced that DeepSeek is available on Azure AI Foundry, its platform that brings together AI services for enterprises under a single banner. When asked about DeepSeek’s impact on Meta’s AI spending during its first-quarter earnings call, CEO Mark Zuckerberg said spending on AI infrastructure will continue to be a “strategic advantage” for Meta.
During Nvidia’s fourth-quarter earnings call, CEO Jensen Huang emphasized DeepSeek’s “excellent innovation,” saying that it and other “reasoning” models are great for Nvidia because they require more compute.
At the same time, some companies, and even entire governments, are banning DeepSeek: South Korea has done so, and New York state has banned the app from being used on government devices.
As for what DeepSeek’s future might hold, it’s uncertain. Improved models are a given, but the US government appears to be growing wary of what it perceives as harmful foreign influence.
This story was originally published January 28, 2025, and will be updated regularly.