Comment by lolinder
1 month ago
Difficult to impossible. Their vendors are already working on AI features, so why would they risk adding a new vendor when a vendor they've already approved will have substantially the same capabilities soon?
Because a vendor that's just using AI tools will not soon achieve the same capabilities as a vendor that either is OpenAI or is backed by OpenAI.
I don't believe that to be true—OpenAI is plateauing on model capabilities and turning to scaling inference times instead. There's no moat to "just throw more tokens at the problem", and Meta and Anthropic are both hot on their heels on raw model capabilities. I see absolutely no evidence that OpenAI has a major breakthrough up their sleeve that will allow them to retake the lead.
In the end, models are fundamentally a commodity. Data is all that matters, and in the not too distant future you won't gain anything at all by sending your data to OpenAI versus just using the tooling provided by your existing vendors.
They're plateauing on pretraining returns, quite possibly (if rumors are to be trusted)… but they are just getting more sophisticated at real-world complex RL, which is still similar to throwing more tokens at the problem and is creating large returns.
I feel that the current artifact is already quite close to something that can operate competently if the downstream RL matches the task of interest well enough.