China’s DeepSeek Said It Spent $294K On Training R1 Model


In Summary

  • DeepSeek’s release of what it said were lower-cost AI systems in January prompted global investors to dump tech stocks
  • The Nature article said DeepSeek’s reasoning-focused R1 model cost $294,000 to train and used 512 Nvidia H800 chips
  • Sam Altman said in 2023 that what he called “foundational model training” had cost “much more” than $100 million
  • After this initial phase, R1 was trained for a total of 80 hours on a cluster of 512 H800 chips


Catenaa, Thursday, September 18, 2025 – Chinese AI developer DeepSeek said it spent $294,000 on training its R1 model, far lower than the figures reported for its US rivals, as Beijing reignites the AI race with the US.

The rare update from the Hangzhou-based company, the first estimate it has released of R1’s training costs, appeared in a peer-reviewed article in the academic journal Nature published on Wednesday.

DeepSeek’s release of what it said were lower-cost AI systems in January prompted global investors to dump tech stocks as they worried the new models could threaten the dominance of AI leaders, including Nvidia.

The Nature article, which listed DeepSeek founder Liang Wenfeng as one of the co-authors, said DeepSeek’s reasoning-focused R1 model cost $294,000 to train and used 512 Nvidia H800 chips. A previous version of the article published in January did not contain this information.

Sam Altman, CEO of US AI giant OpenAI, said in 2023 that what he called “foundational model training” had cost “much more” than $100 million – though his company has not given detailed figures for any of its releases.

Training costs for the large-language models powering AI chatbots refer to the expenses incurred from running a cluster of powerful chips for weeks or months to process vast amounts of text and code.

Some of DeepSeek’s statements about its development costs and the technology it used have been questioned by US companies and officials.

The H800 chips it mentioned were designed by Nvidia for the Chinese market after the US barred the company in October 2022 from exporting its more powerful H100 and A100 AI chips to China.

“Regarding our research on DeepSeek-R1, we utilized the A100 GPUs to prepare for the experiments with a smaller model,” the researchers wrote. After this initial phase, R1 was trained for a total of 80 hours on the cluster of 512 H800 chips, they added.
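As a rough illustration of how such a figure can be sanity-checked, the sketch below derives the implied cost per GPU-hour from the numbers quoted in the article (512 H800 chips, 80 hours, $294,000). Treating the cost as GPU-hours multiplied by a rental rate is an assumption made here for illustration; it is not a description of DeepSeek’s actual accounting.

```python
# Back-of-envelope check of the training cost quoted in the article.
# Assumption: the $294,000 figure corresponds to the 80-hour run on
# 512 H800 chips, priced per GPU-hour at some rental rate. This is
# illustrative arithmetic, not DeepSeek's published accounting.

num_gpus = 512              # H800 chips, per the Nature article
run_hours = 80              # training hours reported for this phase
reported_cost_usd = 294_000  # training cost stated in the article

gpu_hours = num_gpus * run_hours
implied_rate = reported_cost_usd / gpu_hours  # USD per GPU-hour

print(f"GPU-hours: {gpu_hours:,}")                    # 40,960
print(f"Implied rate: ${implied_rate:.2f}/GPU-hour")  # ~$7.18
```

Under these assumptions, the chip count and run time are the variables that dominate such estimates, which is why a longer run on a larger cluster can quickly reach the far higher figures cited for US rivals.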
