Comment by ninkendo
6 days ago
People have been giving Siri a few years for a decade now. Siri used to run in a data center (and still does for older hardware and things like HomePods) and it has never supported compound queries.
Siri needs to be taken out back and shot. The problem with “upgrading” it is the pull to maintain backwards compatibility for every little thing Siri did, which leads them to try to incorporate existing Siri functionality (and existing Siri engineers) to work alongside any LLM. That leads to disaster: none of it works, and it just makes everything slower. They’ve been trying to do an LLM-assisted Siri for years now, and it’s the most public-facing disaster the company has had in a while. Time to start over.
As a user, I'd gladly opt into a slightly less deeply integrated Siri that understands what I want from it.
Build a crude router in front of it, if you must, or give it access to "the old Siri" as a tool it can call, and let the LLM decide whether to return its own or a Siri-generated response!
I bet even smaller LLMs would be able to figure out, given a user input and Siri response pair, whether the request was reasonably answered, or whether the model itself could do better or should at least explain that the request is out of its capabilities for now.
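Rough sketch of what I mean; the model calls here are placeholders for whatever Apple would actually use, not real APIs:

```python
# Hypothetical judge/router: decide whether old Siri's answer is good enough.
# call_judge_model / call_big_model are placeholders, not real Siri or Apple APIs.
from typing import Callable

JUDGE_PROMPT = """\
User asked: {query}
Siri answered: {siri_response}
Did Siri reasonably satisfy the request?
Reply with exactly one of: keep_siri, answer_with_llm, explain_limitation"""


def route_response(
    query: str,
    siri_response: str,
    call_judge_model: Callable[[str], str],  # small/local model: prompt -> verdict
    call_big_model: Callable[[str], str],    # larger model: prompt -> answer
) -> str:
    verdict = call_judge_model(
        JUDGE_PROMPT.format(query=query, siri_response=siri_response)
    ).strip()
    if verdict == "keep_siri":
        return siri_response          # old Siri handled it fine, pass it through
    if verdict == "answer_with_llm":
        return call_big_model(query)  # the model thinks it can do better itself
    # default: be honest that the request is out of capabilities for now
    return "I can't do that reliably yet."
```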
> Build a crude router in front of it, if you must, or give it access to "the old Siri" as a tool it can call, and let the LLM decide whether to return its own or a Siri-generated response!
Both of these approaches were tried internally, including even the ability for the LLM to rewrite siri-as-a-tool's response, and none of them shipped, because they all suck. Putting a router in front of it makes multi-turn conversation (when Siri asks for confirmation or disambiguation) a nightmare to implement, and siri-as-a-tool suffers from the same problem. What happens when legacy Siri disambiguates? Does the LLM try to guess at an option? Does it proxy the prompt back to the user? What about all the "smart UI", like the countdown timer with Siri saying "I'll send this" when sending a text message? Does that just pass through? How does the LLM know when to intervene in the responses the Siri tool is giving?
This was all an integration nightmare and it's the main reason why none of it shipped. (Well, that and the LLM being underwhelming and the on-device models not being smart enough in the first place. It was just a slower, buggier siri without any new features.)
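To make that concrete, here's a rough sketch (made-up types, nothing Apple actually ships) of what the layer around a legacy-Siri tool ends up looking like; every result that isn't a plain answer forces one of the awkward choices above:

```python
# Invented types to illustrate the problem: a legacy-Siri tool call doesn't
# return plain text, it returns one of several interactive states, and each
# one forces a decision about who owns the conversation.
from dataclasses import dataclass, field


@dataclass
class SiriToolResult:
    kind: str                     # "answer" | "disambiguation" | "confirmation_ui"
    text: str
    options: list[str] = field(default_factory=list)  # e.g. "Which John did you mean?"


def handle_siri_tool_result(result: SiriToolResult) -> str:
    if result.kind == "answer":
        # Easy case: the LLM can pass this through or rewrite it.
        return result.text
    if result.kind == "disambiguation":
        # Who resolves this? If the LLM guesses an option it may guess wrong;
        # if it proxies the question back to the user, two dialogue managers
        # are now sharing one multi-turn conversation state.
        return "Siri needs a choice among: " + ", ".join(result.options)
    if result.kind == "confirmation_ui":
        # "Smart UI" like the send-message countdown isn't text at all; it has
        # to render natively, so the LLM loses visibility into what happened.
        return result.text
    raise ValueError(f"unknown Siri result kind: {result.kind}")
```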
The answer is that they need to renege on the entire promise of a "private" siri and admit that the only way they can get the experience they want is a _huge_ LLM running with a _ton_ of user context, in the cloud, not hindered by backwards compatibility with Siri. Give it a toolbox of things it can do on your device via MCP, bake in the stock tools with LoRA or whatever, and let it figure out the best user experience. If it's a frontier-quality LLM it'll be better than Siri on day one, without Apple having to really do anything other than figure out a good system prompt.
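Concretely, I'm picturing something like this; the tool names and schemas are invented for illustration (loosely in MCP's name/description/inputSchema shape), not a real Apple interface:

```python
# Hypothetical device toolbox exposed to a big cloud LLM. None of these are
# real Apple-exposed tools; the descriptor shape loosely follows MCP's
# name/description/inputSchema convention.
DEVICE_TOOLS = [
    {
        "name": "send_message",
        "description": "Send an iMessage or SMS to a contact on this device.",
        "inputSchema": {
            "type": "object",
            "properties": {
                "contact": {"type": "string"},
                "body": {"type": "string"},
            },
            "required": ["contact", "body"],
        },
    },
    {
        "name": "set_timer",
        "description": "Start a countdown timer on this device.",
        "inputSchema": {
            "type": "object",
            "properties": {"seconds": {"type": "integer", "minimum": 1}},
            "required": ["seconds"],
        },
    },
]

SYSTEM_PROMPT = (
    "You are the assistant on this device. Prefer calling a tool over describing "
    "an action. Ask a follow-up question only when a required argument is ambiguous."
)
```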
The problem is, Apple doesn't want to admit the whole privacy story is a dead end, so they're going to keep pursuing on-device models, and it's going to continue to be underwhelming and "not meeting our quality bar" for the foreseeable future.
Very good details on why just bolting on an LLM isn't that trivial, which I hadn't really considered before. Thank you!
But regarding Apple not wanting to admit that client-side compute isn't enough: haven't they essentially already done that, with Private Cloud Compute and all that? I believe not even proofreading and Safari summarization run fully on-device, at least according to my Private Cloud Compute privacy logs.
Those little things have been broken for a while now; it's best to bite the bullet and integrate an LLM into Siri now.