Comment by imoverclocked
14 hours ago
It’s pretty great that despite having large data centers capable of doing this kind of computation, Apple continues to make things work locally. I think there is a lot of value in being able to hold the entirety of a product in hand.
Google has a family of local models too! https://ai.google.dev/gemma/docs
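For concreteness, here's a minimal sketch of what running one of those Gemma models locally can look like, assuming the Hugging Face transformers library and access to the gated google/gemma-2-2b-it weights (my choice of model and route, not something from the linked docs):

    # Minimal local-inference sketch using Hugging Face transformers.
    # Assumes: pip install transformers torch, plus Hub access to the
    # gated google/gemma-2-2b-it weights (one of several Gemma variants).
    from transformers import pipeline

    generator = pipeline("text-generation", model="google/gemma-2-2b-it")
    result = generator("Why run an LLM on-device?", max_new_tokens=64)
    print(result[0]["generated_text"])

The whole thing runs on your own hardware; the only network dependency is the one-time weight download.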
It's very convenient for Apple to do this: lower spending on costly AI chips, and more reasons to push customers toward its latest hardware.
Users have to pay for the compute somehow. Maybe by paying for models run in datacenters. Maybe by paying for hardware capable enough to run models locally.
I can upgrade to a bigger LLM I use through an API with one click. If it runs on my device, I need to buy a new phone.
But also: even if Apple's approach works, it's incredibly wasteful.
Server-side means shared resources, shared upgrades, and shared costs. The privacy aspect matters, but at what cost?
With no company having a clear lead in everyday AI for the non-technical mainstream user, there is only going to be a race to the bottom on subscription and API pricing.
Local inference doesn't cost the company anything, and it raises the minimum hardware customers need to buy.