Comment by lukan
20 hours ago
Why are there actually only a few people in the world able to do this?
The basic concept is out there.
Lots of smart people are studying hard to catch up and could also be poached. No shortage of those, I assume.
Good training data still seems the most important to me.
(and lots of hardware)
Or does the specific training still involve lots of smart decisions all the time?
And do those small or big decisions make all the difference?
The basic concept plus a lot of money spent on compute and training data gets you pretraining. After that, to get a really good model, there’s a lot more fine-tuning / RL steps that companies are pretty secretive about. That is where the “smart decisions” and the knowledge gained from training previous generations of state-of-the-art models come in.
We’d probably see more companies training their own models if it was cheaper, for sure. Maybe some of them would do very well. But even having a lot of money to throw at this doesn’t guarantee success, e.g. Meta’s Llama 4 was a big disappointment.
That said, it’s not impossible to catch up to close to state-of-the-art, as Deepseek showed.
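To make that staged pipeline concrete, here is a minimal sketch of a single supervised fine-tuning (SFT) step in PyTorch / Hugging Face. The model name, data, and hyperparameters are toy placeholders, and the RL / preference stages that labs actually differentiate on sit on top of this and aren’t public:

    # Minimal supervised fine-tuning (SFT) sketch at toy scale. Real post-training
    # adds RLHF / preference-optimization stages whose details labs keep private.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    name = "gpt2"  # placeholder; a real run starts from a large pretrained base
    tok = AutoTokenizer.from_pretrained(name)
    model = AutoModelForCausalLM.from_pretrained(name)
    opt = torch.optim.AdamW(model.parameters(), lr=1e-5)

    # Hypothetical instruction/answer pairs; real SFT sets are huge and curated.
    data = [("Q: What is 2+2?\nA:", " 4"), ("Q: Capital of France?\nA:", " Paris")]

    model.train()
    for prompt, answer in data:
        batch = tok(prompt + answer, return_tensors="pt")
        # Causal-LM loss over the full sequence; production SFT usually masks
        # the prompt tokens so only the answer contributes to the loss.
        loss = model(**batch, labels=batch["input_ids"]).loss
        loss.backward()
        opt.step()
        opt.zero_grad()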
Why are there so few people in the world able to run 100m in sub 10s?
The basic concept is out there: run very fast.
Lots of people running every day who could be poached. No shortage of those, I assume.
Good running shoes still seem the most important to me.
1. Cost to hire is now prohibitive. You're competing against companies like Meta paying tens of millions for top talent.
2. Cost to train is also prohibitive. xAI's Grok data centre has 200,000 H100 GPUs. Impossible for a startup to compete with this.
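(Back-of-envelope, and the unit price is an assumption: at roughly $25k per H100, 200,000 × $25,000 ≈ $5 billion for the GPUs alone, before networking, power, and the building.)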
"Impossible for a startup to compete with this."
It's funny to me, since xAI is literally the "youngest" player in this space and recently made Grok 4, which surpasses all the frontier models.
It's literally not impossible.
I mean, that's a startup backed by the richest man in the world, who was also involved with OpenAI at the beginning.
I assume "startup" here means the average one, with a bit less funding and fewer connections.
xAI isn’t young. The brand, maybe. Not the actual history / timeline. Tesla was working on AI long ago.
xAI was just spun out to raise more money / fix the X finance issues.
Most startups don't have Elon Musk's money.
Because it’s not about “who can do it”, it’s about “who can do it the best”.
It’s the difference between running a marathon (impressive) and winning a marathon (here’s a giant sponsorship check).
You need a person who can hit the ground running. Compute for LLMs is extremely capital-intensive, and you're always racing against time. Missing performance targets can mean life or death for the company.
I'd recommend reading some of the papers on what it takes to actually train a proper foundation model, such as the Llama 3 Herd of Models paper. It is a deeply sophisticated process.
Coding startups also try to fine-tune OSS models to their own ends. But that is very difficult as well, and usually done as a cost optimization, not as a way to get better functionality.
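For a sense of what that cost-optimization path looks like, here is a minimal parameter-efficient (LoRA) sketch using the Hugging Face peft library; the base model and target module names are placeholders:

    # LoRA sketch: freeze the base model and train small low-rank adapters instead.
    # This is the usual budget route; it rarely adds capability the base model lacks.
    from transformers import AutoModelForCausalLM
    from peft import LoraConfig, get_peft_model

    base = AutoModelForCausalLM.from_pretrained("gpt2")  # stand-in for a larger OSS model

    cfg = LoraConfig(
        r=8,                        # adapter rank, tiny next to the base weights
        lora_alpha=16,
        target_modules=["c_attn"],  # gpt2's fused attention projection
        task_type="CAUSAL_LM",
    )
    model = get_peft_model(base, cfg)
    model.print_trainable_parameters()  # typically well under 1% of all parameters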