HACKER Q&A
📣 billconan

Could you explain DeepSeek R1 to me?


I want to gain a deeper understanding of this topic. I’ve read the paper and watched a few videos, but I lack the background and I’d really appreciate insights from someone with a solid grasp of it.

From what I understand, the key contribution lies in its cost efficiency during training. However, does this refer to the whole training pipeline (including pre-training), or just the reinforcement learning stage?

Additionally, it seems that the cost savings primarily come from the reward design: instead of training a separate learned reward model, they use rule-based rewards. The paper gives two examples: for math problems, the reward is computed by checking the model's final answer against a known fixed answer, and for LeetCode problems, they rely on a compiler and test cases.
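
For what it's worth, here's my mental model of those rule-based rewards as code. This is a minimal Python sketch of the idea, not DeepSeek's actual implementation; the "Answer:" output convention and the test harness are my own assumptions:

    import re
    import subprocess
    import sys
    import tempfile

    def math_reward(model_output: str, reference_answer: str) -> float:
        """Rule-based reward for math: compare the model's final answer
        to a known reference. No learned reward model is involved."""
        # Assumed convention (hypothetical): the model is prompted to end
        # its response with a line like "Answer: 42".
        match = re.search(r"Answer:\s*(.+)", model_output)
        if match is None:
            return 0.0
        return 1.0 if match.group(1).strip() == reference_answer.strip() else 0.0

    def code_reward(generated_code: str, test_file: str) -> float:
        """Rule-based reward for coding: run the generated program against
        predefined test cases; full reward only if every test passes."""
        with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
            f.write(generated_code)
            solution_path = f.name
        try:
            # Hypothetical harness: a test script that exits nonzero on failure.
            result = subprocess.run(
                [sys.executable, test_file, solution_path],
                capture_output=True,
                timeout=30,
            )
            return 1.0 if result.returncode == 0 else 0.0
        except subprocess.TimeoutExpired:
            return 0.0

The point, as I understand it, is that the reward is a cheap deterministic check rather than a judgment from another large model, which removes a big chunk of the RL cost.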

However, these examples cover only a narrow set of problem types. Not all logical challenges fall under math or coding. Can a model trained mainly on math and coding problems generalize well to other types of logical reasoning tasks?


  👤 proc0 Accepted Answer ✓
In short, DeepSeek R1 has nearly identical performance to OpenAI's o1 model on benchmarks (standardized tests used to measure a model's capabilities), AND it's open weight (the weights are freely released, though not the training data or code), AND it's more efficient to run (I think more than 50% cheaper in terms of compute, which means you pay less in server costs).

So it's simply a much better option: free and more efficient to run. And to add insult to injury, it was apparently trained for far less than what OpenAI paid for o1. The training cost is mostly irrelevant to end users, though, because the model weights are available for free.


👤 jaggs
It's a side project of a Chinese hedge fund, staffed by graduates and PhD students, that has embarrassed the US mightily. Because it's very good indeed.