Comment by jonplackett
5 days ago
Every time I see a paper from Apple I just feel like, OK so why isn’t my iPhone actually doing any of this yet?
Why give this to developers if you haven’t been able to get Siri to use it yet? Does it not work or something? I guess we’ll find out when devs start trying to make stuff
> why isn’t my iPhone actually doing any of this yet?
What exactly are you referring to? Models do run on iPhone and there are features that take advantage of it, today.
None of those features are in any way interesting, though. Image Playground is a joke, Siri is a joke, that generative emoji thing is a joke.
The AI stuff in photography, sure, but that's more classic machine learning.
The photo touch-up thing is… usable? Sometimes?
What is it you’ve been so impressed with?
This is available to all applications in iOS 26; it's also exposed directly to the user through Shortcuts.
I'm currently on the beta, and I have a shortcut that pulls in various bits of context and feeds that directly to the on-device model.
The main features are text summarization, search, and writing tools.
> why isn’t my iPhone doing any of this yet?
> OK, it's doing it in 4 or 5 products, but that's a joke.
Not every AI product is a chatbot.
Yes, but why do I have to open a third-party app to do these things when Apple, the company that primarily popularized the entire genre of mobile voice assistants, could very feasibly bake all of that into theirs?
I mean, the thing even lets me ask ChatGPT things if I explicitly ask it to! But why do I need to ask in the first place?
Excellent question.
I don't speak for Apple, but surely you can appreciate that there's a fine balance between providing basic functionality and shipping full apps. Apple works to provide tools that developers can leverage while also trying not to step on those same developers. Defining the line between what should be built in and what should be an add-on needs to be done carefully, and it often happens organically.
> why isn’t my iPhone actually doing any of this yet?
Probably Apple is trying to distill the models so they can run on your phone locally. Remember, most, if not all, of Siri is running on your device. There's no round trip whatsoever for voice processing.
Also, for larger models, there will be throwaway VMs per request, so building that infra takes time.
It says there are two models, one of them local. The local one has already been released to app developers to use on-device, I think (it was in the WWDC keynote).
The model now available to developers (in beta, not in released versions of iOS) is the same model that powers stuff like the much-maligned notification summaries from iOS 18. So your phone does have features that are powered by this stuff… you may just not be particularly overwhelmed by those features.
They just launched "Private Cloud Compute" with much fanfare to enable server-side LLM processing, so between that and the fact that Siri has been server-based for most of its existence (local processing is fairly new), I don't think that's their main constraint at this point.
That said, "Private Cloud Compute" does run on proprietary Apple hardware, so availability might be a concern (assuming they don't want to start charging for it).
Apple silicon's unified memory is amazing for running things like Ollama. You don't have to wait for Apple to release its own applications.
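For example, here's a rough Swift sketch of talking to a locally running Ollama server; the model name is just a placeholder for whatever you've pulled, and 11434 is Ollama's default port:

    import Foundation

    // Ask a local Ollama server for a non-streaming completion via its
    // /api/generate endpoint. Pull a model first, e.g. `ollama pull llama3.2`.
    struct GenerateRequest: Codable {
        let model: String
        let prompt: String
        let stream: Bool
    }

    struct GenerateResponse: Codable {
        let response: String
    }

    func askLocalModel(_ prompt: String) async throws -> String {
        var request = URLRequest(url: URL(string: "http://localhost:11434/api/generate")!)
        request.httpMethod = "POST"
        request.setValue("application/json", forHTTPHeaderField: "Content-Type")
        request.httpBody = try JSONEncoder().encode(
            GenerateRequest(model: "llama3.2", prompt: prompt, stream: false)
        )

        let (data, _) = try await URLSession.shared.data(for: request)
        return try JSONDecoder().decode(GenerateResponse.self, from: data).response
    }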