Comment by atleastoptimal
1 day ago
They released a near-SOTA open-source model recently.
It's their prerogative to make money via closed-source offerings so they can afford safety work and their open-source releases. Ilya noted this near the beginning of the company. A company can't muster the capital needed to build SOTA models while giving everything away for free when its competitor is Google, a huge for-profit company.
As for your claim that they are scammy: what about them is scammy?
Their contribution to open source and open research is far behind other organisations like Meta and Mistral, as welcome as their recent model release is. Former safety researchers like Jan Leike commonly cite a lack of organisational focus on safety as a reason for leaving.
Not sure specifically what the commenter is referring to re: scammy, but things like the Scarlett Johansson / Her voice imitation and copyright infringement come to mind for me.
Oh yeah, that reminds me: the company did research on how to train a model that games the metrics, letting them tick the open-source box with a seemingly good score while releasing something that serves no real purpose. [1] [2]
GPT-OSS is not a near-state-of-the-art model: it was deliberately trained so that it looks great in evaluations, but in practice it is barely usable and far underperforms actual open-source models (the kind you'd run via Ollama). That's scammy.
[1] https://www.lesswrong.com/posts/pLC3bx77AckafHdkq/gpt-oss-is...
[2] https://huggingface.co/openai/gpt-oss-20b/discussions/14
That explains why gpt-oss wasn't working anywhere near as well for me as other similarly sized and smaller models. gemma3 27b, 12b, and phi4 (14b?) all significantly outperformed it when transforming unstructured data into structured data.
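For anyone who wants to run that kind of side-by-side check themselves, here's a minimal sketch. It assumes a local Ollama server on the default port with the listed model tags already pulled; the tags, the sample text, and the extraction prompt are all illustrative, not what the commenter actually used.

```python
# Compare several locally served models (via Ollama's HTTP API) on the same
# unstructured-to-structured extraction task, and check whose output parses as JSON.
import json
import requests

# Model tags are assumptions; adjust to whatever you have pulled locally.
MODELS = ["gpt-oss:20b", "gemma3:12b", "phi4"]

TEXT = "Order #4821 from Jane Doe, 3x USB-C cables at $7.99 each, shipped 2024-05-02 to Austin, TX."
PROMPT = (
    "Extract the following fields from the text as JSON with keys "
    "order_id, customer, item, quantity, unit_price, ship_date, city, state.\n\n"
    f"Text: {TEXT}\n\nRespond with JSON only."
)

for model in MODELS:
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": model, "prompt": PROMPT, "stream": False, "format": "json"},
        timeout=300,
    )
    resp.raise_for_status()
    output = resp.json()["response"]
    try:
        parsed = json.loads(output)
        print(f"{model}: OK -> {parsed}")
    except json.JSONDecodeError:
        print(f"{model}: returned non-JSON output:\n{output}")
```

A one-off prompt like this obviously isn't a benchmark, but it's usually enough to see the gap the commenter describes on simple extraction tasks.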