People are claiming (I don't know much about RL) that DeepSeek-R1's training process is very simple (based on the paper: https://github.com/deepseek-ai/DeepSeek-R1/blob/main/DeepSeek_R1.pdf) - a boring, standard-ish (for LLMs) RL algorithm optimizing for reward on some ground-truth-verifiable tasks (they don't say exactly which). So why did o1 not happen until late 2024 (public release), or late 2023 if you count the Q* rumours? "Do RL on useful tasks" is a very obvious idea, and I think the relevant algorithms are older than that.
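For concreteness, here is roughly what that "simple" recipe looks like as I understand it: sample several CoT completions per prompt, score each with a rule-based verifier against the ground truth, and use group-normalized rewards as advantages (the GRPO setup the paper describes). Everything in the sketch below - the regex verifier, the stub sampler, the toy prompt, the format bonus - is my own illustrative stand-in, not DeepSeek's actual code or data.

```python
# Minimal sketch of "RL on ground-truth-verifiable tasks", loosely in the
# spirit of the GRPO setup described in the R1 paper. The sampler and task
# here are stand-in stubs, not DeepSeek's pipeline.
import re
import random
import statistics


def verifiable_reward(completion: str, ground_truth: str) -> float:
    """Rule-based reward: 1.0 if the final boxed answer matches the ground
    truth, else 0.0, plus a small bonus if the reasoning is wrapped in
    <think> tags (an assumed format reward, for illustration)."""
    answer = re.search(r"\\boxed\{([^}]*)\}", completion)
    correct = 1.0 if answer and answer.group(1).strip() == ground_truth.strip() else 0.0
    format_bonus = 0.1 if "<think>" in completion and "</think>" in completion else 0.0
    return correct + format_bonus


def group_advantages(rewards: list[float]) -> list[float]:
    """GRPO-style advantage: normalize each sample's reward against its
    group's mean/std instead of training a separate value model."""
    mean = statistics.mean(rewards)
    std = statistics.pstdev(rewards) or 1.0
    return [(r - mean) / std for r in rewards]


def sample_completions(prompt: str, n: int) -> list[str]:
    """Stand-in for sampling n long-CoT completions from the current policy."""
    return [f"<think>...reasoning about {prompt}...</think> \\boxed{{{random.choice(['42', '41'])}}}"
            for _ in range(n)]


if __name__ == "__main__":
    prompt, ground_truth = "What is 6 * 7?", "42"
    completions = sample_completions(prompt, n=8)
    rewards = [verifiable_reward(c, ground_truth) for c in completions]
    advantages = group_advantages(rewards)
    # A real run would now take a clipped policy-gradient step (plus a KL
    # penalty to a reference model) weighted by these advantages; here we
    # just print them.
    for r, a in zip(rewards, advantages):
        print(f"reward={r:.2f}  advantage={a:+.2f}")
```

Which is exactly why the question bites: nothing in that loop looks like it needed 2024-era secret sauce.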
The paper says they tried applying the same process to smaller models and it didn't work nearly as well, so "base models were bad back then" is a plausible explanation - but it's clearly not true: GPT-4-base is probably a generally better (if costlier) model than 4o, which o1 is based on (though o1 could be distilled from a secret bigger model); and LLaMA-3.1-405B is about as good a base model and went through a somewhat similar post-training process, but isn't competitive with o1 or R1. So I don't think it's that.
What's going on here? Is the process simple-sounding but actually full of pitfalls that DeepSeek don't mention? What changed between 2022/23 and now such that we have at least three decent long-CoT reasoning models around?