DeepSeek R1 has been released as an open-source reasoning model that operates at a fraction of the cost of industry-leading models in the US. The Chinese AI lab also claims it can compete with OpenAI's o1 model on several benchmarks.
R1 is available on the Hugging Face platform under an MIT license, which permits unrestricted commercial use. According to DeepSeek, R1 beats o1 on the AIME, MATH-500, and SWE-bench Verified benchmarks.
For context, AIME (the American Invitational Mathematics Examination) tests competition-level mathematical reasoning, MATH-500 consists of word problems, and SWE-bench Verified evaluates programming capabilities.
As a reasoning model, R1 is capable of fact-checking itself, which helps it avoid the inconsistencies found in other models. The trade-off is speed: it takes anywhere from a few seconds to a few minutes to work through a prompt. While that is slower than other models, it makes R1 more reliable in subjects like math and physics.
DeepSeek revealed that the model contains a massive 671 billion parameters. Put simply, models with more parameters tend to perform better, since parameter count roughly tracks a model's capacity for problem-solving.
There are also smaller, distilled versions of DeepSeek R1, ranging from 1.5 billion to 70 billion parameters; the smallest of these can run on a laptop.
The full version of R1, available through the DeepSeek API, requires more capable hardware but is 90-95 percent cheaper to use than OpenAI's o1. There are already dozens of YouTube videos comparing the two models.
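Since the hosted R1 is reached through an HTTP API, trying it amounts to sending a standard chat-completions request. Below is a minimal sketch of building such a request. The endpoint URL and the model name `deepseek-reasoner` are assumptions based on DeepSeek's published OpenAI-compatible interface, so verify both against the official docs before sending real traffic.

```python
import json

# Assumed OpenAI-compatible endpoint for DeepSeek's hosted API.
API_URL = "https://api.deepseek.com/chat/completions"

def build_request(prompt: str) -> dict:
    """Build the JSON payload for a single-turn chat completion."""
    return {
        # "deepseek-reasoner" is assumed to be R1's API model name.
        "model": "deepseek-reasoner",
        "messages": [{"role": "user", "content": prompt}],
    }

# Serialize the payload; an HTTP client would POST this to API_URL
# with an Authorization header carrying your API key.
payload = build_request("Is 2027 a prime number? Explain your reasoning.")
print(json.dumps(payload, indent=2))
```

Because the interface mirrors OpenAI's, existing OpenAI client libraries can typically be pointed at the DeepSeek base URL without code changes.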
The main downside to R1 so far is that, as a Chinese model, it is subject to review by China's internet regulator, which requires that its responses embody "core socialist values." For example, it won't answer questions about Taiwan's autonomy or Tiananmen Square.