Comment by cbsmith

5 days ago

> Nobody wants to pay for a trillion dollar cloud bill.

Buying dedicated hardware as a way to keep your AI bill down seems like a tough proposition for your average consumer. Unless you're using AI constantly, renting AI capacity when you need it is just going to be cheaper. The win with an on-device model is that you never have to go out to the network in the first place.

You misunderstood what I meant: I mean make models that run on potatoes. Nobody wants to pay what ChatGPT's subscription probably SHOULD cost for OpenAI to turn a profit.

  • So the idea is that it SHOULD cost OpenAI a trillion dollars to do what you can accomplish with a potato?

    • No, and I'm not even sure how you arrived at that conclusion. The idea is that there are models out there that can run on small amounts of VRAM (see the sketch below). If all it costs is charging your phone, as opposed to a subscription to some overvalued AI company, people will choose ‘free’ first. We have models that can google things now: online, they only need to know so much; offline, only a specific subset.

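To ground the small-VRAM claim above: here is a minimal sketch of fully local inference using llama-cpp-python with a 4-bit quantized model. The specific model (a Phi-3-mini GGUF) and its path are illustrative assumptions, not something anyone in the thread specified; a file like this typically fits in a few GB of RAM or VRAM.

```python
# Minimal local-inference sketch using llama-cpp-python.
# Assumes a 4-bit quantized GGUF model was downloaded once beforehand;
# the path and model choice below are hypothetical, for illustration only.
from llama_cpp import Llama

llm = Llama(
    model_path="./models/phi-3-mini-4k-instruct-q4.gguf",  # hypothetical local file, ~2 GB
    n_ctx=2048,        # small context window keeps memory usage low
    n_gpu_layers=-1,   # offload all layers to a GPU if present; otherwise runs on CPU
)

out = llm(
    "Q: Why can small quantized models run on consumer hardware?\nA:",
    max_tokens=64,
    stop=["Q:"],
)
print(out["choices"][0]["text"])
```

After the one-time model download, nothing in this loop touches the network, which is the "free once your phone is charged" economics the reply is pointing at.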

The "dedicated hardware" will be an Apple TV in the Apple ecosystem for example if something centralised is needed.

Or just your phone or laptop. Fully local, nothing leaves the device.

  • So if your AI compute needs are handled by an Apple TV, I'd be really curious how those same needs, served by the cloud, work out to a trillion dollars.