DeepSeek: How a Small Chinese AI Company is Shaking up US Tech Heavyweights

January 30, 2025

Chinese artificial intelligence (AI) company DeepSeek has sent shockwaves through the tech community, with the release of extremely efficient AI models that can compete with cutting-edge products from US companies such as OpenAI and Anthropic. Founded in 2023, DeepSeek has achieved its results with a fraction of the cash and computing power of its competitors.

DeepSeek’s “reasoning” R1 model, released last week, provoked excitement among researchers, shock among investors, and responses from AI heavyweights. The company followed up on January 28 with a model that can work with images as well as text.

So what has DeepSeek done, and how did it do it?

What DeepSeek Did

In December, DeepSeek released its V3 model. This is a very powerful “standard” large language model that performs at a similar level to OpenAI’s GPT-4o and Anthropic’s Claude 3.5.

While these models are prone to errors and sometimes make up their own facts, they can carry out tasks such as answering questions, writing essays and generating computer code. On some tests of problem-solving and mathematical reasoning, they score better than the average human.

V3 was trained at a reported cost of about US$5.58 million. This is dramatically cheaper than GPT-4, for example, which cost more than US$100 million to develop.

DeepSeek also claims to have trained V3 using around 2,000 specialised computer chips, specifically H800 GPUs made by NVIDIA. This is again far fewer than other companies, which may have used up to 16,000 of the more powerful H100 chips.

On January 20, DeepSeek released another model, called R1. This is a so-called “reasoning” model, which tries to work through complex problems step by step. These models seem to be better at many tasks that require context and have multiple interrelated parts, such as reading comprehension and strategic planning.

The R1 model is a tweaked version of V3, modified with a technique called reinforcement learning. R1 appears to work at a similar level to OpenAI’s o1, released last year.

DeepSeek also used the same technique to make “reasoning” versions of small open-source models that can run on home computers.

This release has sparked a huge surge of interest in DeepSeek, driving up the popularity of its V3-powered chatbot app and triggering a massive price crash in tech stocks as investors re-evaluate the AI industry. At the time of writing, chipmaker NVIDIA has lost around US$600 billion in value.

How DeepSeek Did It

DeepSeek’s breakthroughs have been in achieving greater efficiency: getting good results with fewer resources. In particular, DeepSeek’s developers have pioneered two techniques that may be adopted by AI researchers more broadly.

The first has to do with a mathematical idea called “sparsity”. AI models have a lot of parameters that determine their responses to inputs (V3 has around 671 billion), but only a small fraction of these parameters is used for any given input.

However, predicting which parameters will be needed isn’t easy. DeepSeek used a new technique to do this, and then trained only those parameters. As a result, its models needed far less training than a conventional approach.
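To make the idea concrete, here is a minimal sketch in Python (using PyTorch) of sparse “mixture-of-experts” routing, the general family of techniques V3 is reported to use. Everything here (class name, sizes, layer choices) is invented for illustration, not DeepSeek’s actual code: a small “router” network predicts which groups of parameters (“experts”) each input needs, and only those are run.

```python
import torch
import torch.nn as nn

class ToySparseLayer(nn.Module):
    """Illustrative mixture-of-experts layer: a router predicts which
    experts (subsets of parameters) to use, so most parameters stay idle."""

    def __init__(self, dim: int, num_experts: int = 8, top_k: int = 2):
        super().__init__()
        self.experts = nn.ModuleList(nn.Linear(dim, dim) for _ in range(num_experts))
        self.router = nn.Linear(dim, num_experts)  # scores each expert per input
        self.top_k = top_k

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        scores = self.router(x)                            # (batch, num_experts)
        weights, chosen = scores.topk(self.top_k, dim=-1)  # keep only the top-k
        weights = weights.softmax(dim=-1)
        out = torch.zeros_like(x)
        for i in range(x.shape[0]):            # run just the chosen experts
            for slot in range(self.top_k):
                expert = self.experts[int(chosen[i, slot])]
                out[i] += weights[i, slot] * expert(x[i])
        return out

layer = ToySparseLayer(dim=16)
y = layer(torch.randn(4, 16))  # each input activates only 2 of the 8 experts
```

In this toy, only two of eight experts do any work for a given input, so roughly a quarter of the layer’s parameters are active at a time. V3 operates the same principle at vastly larger scale, reportedly activating around 37 billion of its 671 billion parameters per token.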

The other trick has to do with how V3 stores information in computer memory. DeepSeek has found a clever way to compress the relevant data, so it is easier to store and access quickly.
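DeepSeek’s technical reports describe their version of this idea as “multi-head latent attention”, which shrinks the “key/value” records a language model keeps in memory for every token it has seen. The sketch below is a heavily simplified, hypothetical illustration of the principle, not DeepSeek’s implementation: each past token is stored as a small latent vector and expanded back to full size only when it is needed.

```python
import torch
import torch.nn as nn

class ToyCompressedCache(nn.Module):
    """Illustrative low-rank cache: store a small latent vector per past
    token instead of full key/value vectors, and expand it on demand."""

    def __init__(self, dim: int = 64, latent_dim: int = 8):
        super().__init__()
        self.compress = nn.Linear(dim, latent_dim)  # down-project before caching
        self.expand_k = nn.Linear(latent_dim, dim)  # rebuild keys when attending
        self.expand_v = nn.Linear(latent_dim, dim)  # rebuild values when attending
        self.cache: list[torch.Tensor] = []

    def add_token(self, hidden: torch.Tensor) -> None:
        # Cache latent_dim numbers per token, not 2 * dim (keys plus values).
        self.cache.append(self.compress(hidden))

    def keys_values(self) -> tuple[torch.Tensor, torch.Tensor]:
        latents = torch.stack(self.cache)           # (num_tokens, latent_dim)
        return self.expand_k(latents), self.expand_v(latents)

cache = ToyCompressedCache()
for _ in range(10):                  # ten tokens arrive one by one
    cache.add_token(torch.randn(64))
k, v = cache.keys_values()           # (10, 64) each, rebuilt from the latents
```

In this toy, each token costs 8 stored numbers instead of 128 (64 for keys plus 64 for values), cutting memory use for long conversations by roughly 16 times, at the price of a small reconstruction step when attention runs.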

What It Means

DeepSeek’s models and techniques have been released under the free MIT License, which means anyone can download and modify them.

While this may be bad news for some AI companies – whose profits might be eroded by the existence of freely available, powerful models – it is great news for the broader AI research community.

At present, a lot of AI research requires access to enormous amounts of computing resources. Researchers like myself who are based at universities (or anywhere except large tech companies) have had limited ability to carry out tests and experiments.

More efficient models and techniques change the situation. Experimentation and development may now be significantly easier for us.

For consumers, access to AI may also become cheaper. More AI models may be run on users’ own devices, such as laptops or phones, rather than running “in the cloud” for a subscription fee.

For researchers who already have a lot of resources, more efficiency may have less of an effect. It is unclear whether DeepSeek’s approach will help to make models with better performance overall, or simply models that are more efficient.


* Tongliang Liu is Associate Professor of Machine Learning and Director of the Sydney AI Centre at the University of Sydney.


Source: https://theconversation.com/deepseek-how-a-small-chinese-ai-company-is-shaking-up-us-tech-heavyweights-248434
