Comment by amluto

7 months ago

I would argue that the problem with Siri isn’t the model. Siri is perfectly fine at transcribing speech, although it seems to struggle with figuring out when an instruction ends. But Siri is awful at doing anything with even simple instructions:

- It regularly displays an accurate transcription with exactly the same text that usually works and then sits there, apparently indefinitely, doing nothing.

- Sometimes it’s very slow to react. This seems to be separate from the above “takes literally forever” issue.

- Siri is apparently incapable of doing a lot of things that ought to work. For example, for years, trying to use Siri on a watch to place a call on Bluetooth (while the phone is right there) would have nonsensical effects.

These won’t be fixed with a better model. They will be fixed with a better architecture. OpenAI and Anthopic can’t provide that except insofar as they might inspire Apple to wire useful functionality up to something like MCP to allow the model to do useful things.

> Even my son suggested things like "I wish your phone had ChatGPT and you could ask it to organize all your apps into folders" – we can all come up with really basic things they could've done so easily, with privacy built in.

I’m not convinced the industry knows how to expose uncontrolled data like one’s folders to an LLM without gaping exploit opportunities. Apple won’t exploit you deliberately, as that’s not their MO, but they are not immune to letting things resembling instructions that are in one of your folders exploit you.