Comment by lossolo
6 hours ago
What's funny is that most of this "progress" is new datasets + post-training shaping the model's behavior (instruction + preference tuning). There is no moat besides that.
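For anyone who hasn't looked at what "instruction + preference tuning" actually involves: the core of a DPO-style preference objective fits in a few lines. A toy sketch, not any lab's actual pipeline (the tensor names and beta value are illustrative):

```python
import torch.nn.functional as F

def dpo_loss(pi_chosen_logp, pi_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.1):
    # DPO: widen the policy's log-prob margin for the human-preferred
    # completion over the rejected one, measured relative to a frozen
    # reference model (typically the instruction-tuned checkpoint).
    chosen = beta * (pi_chosen_logp - ref_chosen_logp)
    rejected = beta * (pi_rejected_logp - ref_rejected_logp)
    return -F.logsigmoid(chosen - rejected).mean()
```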
"post-training shaping the models behavior" it seems from your wording that you find it not that dramatic. I rather find the fact that RL on novel environments providing steady improvements after base-model an incredibly bullish signal on future AI improvements. I also believe that the capability increase are transferring to other domains (or at least covers enough domains) that it represents a real rise in intelligence in the human sense (when measured in capabilities - not necessarily innate learning ability)
What evidence do you base your opinion about capability transfer on?
> There is no moat besides that.
Compute.
Google didn't announce $185 billion in capex to do cataloguing and flash cards.
Google didn't buy 30% of Anthropic to starve them of compute.
Probably why it's selling them TPUs.
> is new datasets + post-training shaping the model's behavior (instruction + preference tuning). There is no moat besides that.
Sure, but acquiring/generating/curating that much high-quality data is still a significant moat.