
Comment by bayindirh

5 days ago

> why isn’t my iPhone actually doing any of this yet?

Apple is probably trying to distill the models so they can run locally on your phone. Remember, most, if not all, of Siri runs on your device; there's no round trip at all for voice processing.

Also, for larger models, there will be throwaway VMs per request, so building that infra takes time.

It says there are two models, one of them local. I think the local one has already been released to app developers (it was in the WWDC keynote).
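For what it's worth, the on-device path Apple showed at WWDC is the Foundation Models framework. Here's a minimal sketch of what calling it looks like, assuming the beta API names (SystemLanguageModel, LanguageModelSession, respond(to:)) haven't shifted; treat it as illustrative, not the final API:

```swift
import FoundationModels

// Hedged sketch: these names are from the WWDC25 beta and may change before release.
func summarizeLocally(_ text: String) async throws -> String {
    // The on-device model can be unavailable (unsupported hardware,
    // Apple Intelligence turned off, or the model still downloading).
    guard case .available = SystemLanguageModel.default.availability else {
        return "On-device model unavailable"
    }

    // One session is one conversation with the local model; nothing leaves the device.
    let session = LanguageModelSession(
        instructions: "Summarize the user's text in one short sentence."
    )
    let response = try await session.respond(to: text)
    return response.content
}
```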

  • The model now available to developers (in beta, not in released versions of iOS) is the same model that powers stuff like the much-maligned notification summaries from iOS 18. So your phone does have features that are powered by this stuff… you may just not be particularly overwhelmed by those features.

    • That’s kinda my point though - is it only capable of things like this? If it is capable of more, why isn’t there something more yet? It’s been a long time waiting…


They just launched "Private Cloud Compute" with much fanfare to enable server-side LLM processing, so between that and the fact that Siri has been server-based for most of its existence (local processing is fairly new), I don't think that's their main constraint at this point.

That said, "Private Cloud Compute" does run on proprietary Apple hardware, so availability might be a concern (assuming they don't want to start charging for it).