Comment by kbelder

10 hours ago

It might settle into a situation where cutting edge LLMs are a service, while older and smaller LLMs are self-hosted. So you are not at risk of being cut off, but of being degraded.

I hope you're right. I played around with a bunch of AI stuff recently, and that's roughly the conclusion I came to: use local AI for mission-critical work, if you're confident in it, and use the SOTA models for review.

Tap the latest general knowledge to ask "could this be improved?", but make the improvements with local systems and models. But then the obvious problem becomes finding new data to train the AIs. In my opinion, there's no way their plan doesn't involve stealing from everyone to keep training, so is it really going to be safe to use the cutting-edge models at all?