Their audience is people who build stuff; tech's audience is enterprise CEOs and politicians, and anyone else happy to hype up all the questionably timed releases, warnings of danger or white-collar irrelevance, or promises of utopian paradise right before a funding round.
Insert obligatory "this is the way" Mando scene. Indeed!
Where's the training data and training scripts since you are calling this open source?
Edit: it seems "open source" was edited out of the parent comment.
doesn't it get tiring after a while? using the same (perceived) gotcha, over and over again, for three years now?
no one is ever going to release their training data because it contains every copyrighted work in existence. everyone, even the hecking-wholesome safety-first Anthropic, is using copyrighted data without permission to train their models. there you go.
There is an easy fix already in widespread use: "open weights".
It is very much a valuable thing already; no need to taint it with a false promise.
Though I disagree that it wouldn't be used if it were indeed open source: I might not do it inside my home lab today, but at least Qwen and DeepSeek would use and build on what e.g. Facebook was doing with Llama, and they might be pushing the open-weights model frontier forward faster.
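As a toy sketch of why "open weights" alone are already valuable (a hypothetical two-parameter linear model, nothing like a real LLM): anyone holding only the weights can run the model, even though the training data itself is never distributed.

```python
# Toy illustration of "open weights" (hypothetical, not any real model):
# the training data plays the role of source material, training distills
# it down, and the resulting weights are the shareable artifact.

def train(data):
    # "Training": reduce (x, y) pairs to two weights of a line y = w*x + b
    # via ordinary least squares.
    n = len(data)
    sx = sum(x for x, _ in data)
    sy = sum(y for _, y in data)
    sxx = sum(x * x for x, _ in data)
    sxy = sum(x * y for x, y in data)
    w = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - w * sx) / n
    return (w, b)  # the "open weights": usable without the data

def predict(weights, x):
    # Inference needs only the weights, not the training data.
    w, b = weights
    return w * x + b

data = [(0, 1), (1, 3), (2, 5), (3, 7)]  # "training data" (exactly y = 2x + 1)
weights = train(data)
print(predict(weights, 10))  # -> 21.0
```

The point of the sketch: releasing `weights` lets others run and build on the model, while the training set stays private, which is exactly the "open weights, not open source" split.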
2 replies →
Nvidia did with NeMo.
1 reply →
it's not a gotcha but people using words in ways others don't like.
1 reply →
They are exactly open source. The training data is the internet. Don't say it's on the internet. It IS the internet.
The training scripts are in Megatron and vLLM.
Aww yes, let me push a couple petabytes to my git repo for everyone to download...
An easier thing would be to say "open weights", yes.
Weights are the source, training data is the compiler.
You got it the wrong way round. It's more akin to:

1. Training data is the source.
2. Training is compilation/compression.
3. Weights are the compiled output, akin to optimized assembly.

However it's an imperfect analogy on so many levels. Nitpick away.
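To make the corrected analogy concrete, here is a minimal sketch using Python's own compiler (illustrative only; model weights are far lossier than bytecode and cannot be "decompiled" back to their training data):

```python
# "Training data is the source, training is compilation,
# weights are the compiled output" -- mapped onto Python's compile().

source = "result = 2 * x + 1"              # the "training data" / source
code = compile(source, "<model>", "exec")  # "training" produces the artifact

# The artifact can be shipped and run without the original source handy,
# just as weights can be shipped and run without the training set.
ns = {"x": 10}
exec(code, ns)
print(ns["result"])  # -> 21
```

The analogy breaks down in the direction of recoverability, as the comment above concedes: bytecode decompiles back to something close to its source, while weights do not reproduce their training data.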
1 reply →