DeepSeek has gained widespread attention.
DeepSeek, a Chinese AI lab, recently entered mainstream consciousness after its chatbot app topped the charts on both the Apple App Store and Google Play. DeepSeek’s models, trained with compute-efficient techniques, have led Wall Street analysts and technologists to question whether the U.S. can maintain its lead in the AI race and whether demand for AI chips will hold up.
However, the question remains: where did DeepSeek originate, and how did it achieve international recognition so rapidly?
DeepSeek’s Trader Origins
DeepSeek is backed by High-Flyer Capital Management, a Chinese quantitative hedge fund that utilizes AI to inform its trading decisions.
Liang Wenfeng, an AI enthusiast, co-founded High-Flyer in 2015. Liang, who began exploring trading while a student at Zhejiang University, launched High-Flyer Capital Management as a hedge fund in 2019, focused on developing and deploying AI algorithms.
In 2023, High-Flyer established DeepSeek as a laboratory dedicated to researching AI tools separate from its financial operations. With High-Flyer as one of its investors, the lab spun off into its own company, also called DeepSeek.
From its inception, DeepSeek constructed its own data center clusters for model training. However, like other AI companies in China, DeepSeek has been affected by U.S. export bans on hardware. To train one of its more recent models, the company was forced to use Nvidia H800 chips, a less powerful version of the H100 chip available to U.S. companies.
DeepSeek’s technical team is said to be relatively young. The company reportedly recruits aggressively, hiring AI researchers with doctorates from top Chinese universities. DeepSeek also hires people without computer science backgrounds to help its technology better understand a wide range of subjects, according to The New York Times.
DeepSeek’s Strong Models
DeepSeek unveiled its first set of models, including DeepSeek Coder, DeepSeek LLM, and DeepSeek Chat, in November 2023. However, it wasn’t until the spring of 2024, with the release of its next-generation DeepSeek-V2 family of models, that the AI industry began to take notice.
DeepSeek-V2, a general-purpose text- and image-analyzing system, performed well in various AI benchmarks and was significantly cheaper to run than comparable models at the time. This forced DeepSeek’s domestic competitors, including ByteDance and Alibaba, to reduce the usage prices for some of their models and make others completely free.
The launch of DeepSeek-V3 in December 2024 further raised DeepSeek’s profile.
According to DeepSeek’s internal benchmark testing, DeepSeek-V3 outperforms both downloadable, openly available models like Meta’s Llama and “closed” models that can only be accessed through an API, like OpenAI’s GPT-4o.
Equally impressive is DeepSeek’s R1 “reasoning” model, released in January, which the company claims performs as well as OpenAI’s o1 model on key benchmarks.
As a reasoning model, R1 effectively fact-checks itself, which helps it avoid some of the pitfalls that typically trip up models. Reasoning models take slightly longer to arrive at answers than standard non-reasoning models, but they tend to be more reliable in domains such as physics, mathematics, and the sciences.
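For developers who want to experiment with R1, DeepSeek exposes the model through an OpenAI-compatible API. The sketch below shows roughly what such a call might look like; the endpoint, the model name ("deepseek-reasoner"), the environment variable, and the separate "reasoning_content" field are assumptions based on DeepSeek's public documentation and may change.

```python
import os

from openai import OpenAI

# Assumed configuration: endpoint, model name, and response fields follow
# DeepSeek's public API documentation at the time of writing and may change.
client = OpenAI(
    api_key=os.environ["DEEPSEEK_API_KEY"],  # hypothetical environment variable
    base_url="https://api.deepseek.com",
)

response = client.chat.completions.create(
    model="deepseek-reasoner",  # the hosted R1 reasoning model
    messages=[
        {
            "role": "user",
            "content": "A train covers 120 km in 90 minutes. What is its average speed in km/h?",
        }
    ],
)

message = response.choices[0].message
# The intermediate chain of thought is returned separately from the final answer,
# so an application can log or hide the reasoning without touching the reply itself.
print(getattr(message, "reasoning_content", None))
print(message.content)
```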
One downside to R1, DeepSeek-V3, and DeepSeek’s other models is that, being Chinese-developed AI, they are subject to benchmarking by China’s internet regulator to ensure that their responses “embody core socialist values.” For example, in DeepSeek’s chatbot application, R1 will not answer questions about Tiananmen Square or Taiwan’s autonomy.
A Disruptive Approach
DeepSeek’s business model is not clearly defined. The company prices its products and services significantly below market value and gives others away for free.
According to DeepSeek, efficiency breakthroughs have enabled the company to maintain extreme cost competitiveness. However, some experts dispute the figures supplied by the company.
Developers have taken to DeepSeek’s models, which are available under permissive licenses that allow for commercial use. According to Clem Delangue, the CEO of Hugging Face, one of the platforms hosting DeepSeek’s models, developers on Hugging Face have created over 500 “derivative” models of R1 that have racked up 2.5 million downloads combined.
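As an illustration of what that looks like in practice, the sketch below loads one of the smaller R1-distilled checkpoints from Hugging Face with the transformers library. The repository name is an assumption; any permissively licensed R1 derivative can be swapped in, and the larger variants require substantially more GPU memory.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed repository name; any permissively licensed R1 derivative works the same way.
model_id = "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

messages = [{"role": "user", "content": "How many prime numbers are there below 100?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Reasoning-style models emit their chain of thought before the final answer,
# so allow a generous generation budget.
outputs = model.generate(inputs, max_new_tokens=1024)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

Distilled checkpoints like this one trade some accuracy for the ability to run on a single consumer GPU rather than a data center cluster.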
DeepSeek’s success against larger, more established rivals has been hailed by some as “upending AI” and dismissed by others as “over-hyped.” The company’s rise was at least partly responsible for an 18% drop in Nvidia’s stock price in January, and it prompted a public response from OpenAI CEO Sam Altman.
Microsoft announced that DeepSeek is available on its Azure AI Foundry service, a platform that brings together AI services for enterprises under a single banner. During Meta’s first-quarter earnings call, CEO Mark Zuckerberg stated that spending on AI infrastructure will continue to be a “strategic advantage” for Meta.
During Nvidia’s fourth-quarter earnings call, CEO Jensen Huang emphasized DeepSeek’s “excellent innovation,” stating that it and other “reasoning” models are beneficial for Nvidia because they require more compute power.
Meanwhile, some companies and entire governments, including South Korea’s, are banning DeepSeek over concerns about China-related data risks. New York state has also banned DeepSeek from being used on government devices.
As for DeepSeek’s future, it is uncertain. While improved models are expected, the U.S. government appears to be growing wary of what it perceives as harmful foreign influence. In March, The Wall Street Journal reported that the U.S. will likely ban DeepSeek on government devices.
This story was originally published on January 28, 2025, and will be updated regularly.