Comment by Uehreka

5 days ago

I wouldn’t go as far as GP, but yes, absolutely, they must compete with large models on the internet. Customers are now used to being able to ask a computer a question and get something better than “I just ran a web search for what you said, here are the uncurated, unsummarized results”.

Yes, this is in fact what people want. Apple is the biggest company in the world (don’t quibble this y’all, you know what I mean) and should be able to deliver this experience. And sure, if they could do it on device that would be aces, but that’s not an item on the menu, and customers seem fine with web-based things like ChatGPT for now. To act like Apple is doing anything other than fumbling right now is cope.

Erm, have you heard of these things called apps? It’s this magical concept where other companies can run code on your iPhone and deliver all the features you just talked about.

I don’t really understand why Apple has to provide a ChatGPT product baked directly into their software. Why on earth would Apple want to get involved in the race to the bottom for the cheapest LLMs? Apple doesn’t produce commodity products; they package commodities into something much more unique that gives them a real competitive advantage, so people are willing to pay a premium for Apple’s product rather than just buying the cheapest commodity equivalent.

There is no point in Apple just delivering an LLM. OpenAI, Anthropic, Google, etc. already do that, and Apple is never going to get into the pay-per-call API business they all offer. Delivering AI experiences using on-device-only compute, that’s something OpenAI, Anthropic, and Google can’t build, which means Apple can easily charge a premium for it, assuming they build it.

  • > I don’t really understand why Apple has to provide a ChatGPT product

    Control. It boils down to control. If you own a platform, you want to make your "suppliers" (apps in this case) as substitutable as possible.

    If people start seeing ChatGPT or Claude or Gemini as the main reason to buy a phone, then at some point in the future they'll think - gee, most of what I'm doing on the phone is interacting with $app, and I can get $app elsewhere.

This use case is run of the mill for someone like Google, which used to store and show you your location forever, but it's not Apple's style.

It's hard to be like "uhhh privacy" when you send all requests to a remote server where they're stored in clear text for god knows how long.

As of right now, there is no way to run big LLMs in a privacy-preserving manner. It just doesn't exist. You can't end-to-end encrypt these services, because the compute is done on the server, so the server has to decrypt your data.
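
The point above can be sketched in a few lines. This is a toy illustration, not real cryptography: a throwaway XOR cipher stands in for E2EE, and a stub function stands in for the model, purely to show that the server side can't do anything useful with ciphertext and must see plaintext before inference.

```python
def xor_encrypt(data: bytes, key: bytes) -> bytes:
    # Toy XOR cipher standing in for real E2EE -- illustration only,
    # not secure, do not use for anything real.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def toy_model(prompt: str) -> str:
    # Stand-in for an LLM: like a real model, it only operates on
    # plaintext tokens.
    return f"echo: {prompt}"

key = b"not-a-real-key!!"  # hypothetical shared key
ciphertext = xor_encrypt(b"what's my schedule tomorrow?", key)

# The server can't run the model on ciphertext -- the bytes are opaque:
assert b"schedule" not in ciphertext

# To produce a useful answer, the server must decrypt first,
# at which point it has your data in the clear:
plaintext = xor_encrypt(ciphertext, key).decode()
response = toy_model(plaintext)
```

Homomorphic-style schemes aside, this is the structural problem: whatever the wire encryption, inference happens on decrypted data on someone else's machine.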

There are some services which will randomize your instance and things like that, but that defeats a big part of what makes LLMs useful: context. Until we can run these models locally, there's no way to get around the privacy-nightmare aspects of it.

> I wouldn’t go as far as GP, but yes, absolutely, they must compete with large models on the internet

The people running large models want to charge a monthly fee for that.

I'm fine with having a free model that runs on device without slurping up my data.