Comment by huwsername
23 days ago
I don’t believe this was ever confirmed by Apple, but there was widespread speculation at the time[1] that the delay was due to the very prompt injection attacks OpenClaw users are now discovering. It would be genuinely catastrophic to ship an insecure system with this kind of data access, even with an ‘unsafe mode’.
These kinds of risks can only be _consented to_ by technical people who correctly understand them, let alone borne by them, but if this shipped there would be thousands of Facebook videos explaining to the elderly how to disable the safety features and open themselves up to identity theft.
The article also confuses me because Apple _are_ shipping this, it’s pretty much exactly the demo they gave at WWDC24, it’s just delayed while they iron this out (if that is at all possible). By all accounts it might ship as early as next week in the iOS 26.4 beta.
[1]: https://simonwillison.net/2025/Mar/8/delaying-personalized-s...
Exactly. Apple operates at a scale where it's very difficult to deploy this technology for its sexy applications. The tech is simply too broken and flawed at this point. (Whatever Apple does deploy, you can bet it will be heavily guardrailed.) With ~2.5 billion devices in active use, they can't take the Tesla approach of letting AI drive cars into fire trucks.
This is so obvious I'm kind of surprised the author used to be a software engineer at Google (based on his Linkedin).
OpenClaw is very much a greenfield idea and there's plenty of startups like Raycast working in this area.
Being good at leetcode grinding isn’t the same as being a good product person.
5 replies →
I'm not that surprised because of how pervasive the 'move fast and break things' culture is in Silicon Valley, and what is essentially AI accelerationism. You see this reflected all over HN as well, e.g. when Cloudflare goes down and it's a good thing because it gives you a break from the screen. Who cares that it broke? That's just how it is.
This is just not how software engineering goes in many other places, particularly where the stakes are much higher and can be life altering, if not threatening.
It is obvious if viewed through an Apple lens. It wouldn't be so obvious if viewed through a Google lens. Google doesn't hesitate to throw whatever its got out there to see what sticks; quickly cancelling anything that doesn't work out, even if some users come to love the offering.
Regardless of how Apple will solve this, please just solve it. Siri is borderline useless these days.
> Will it rain today? Please unlock your iphone for that
> Any new messages from Chris? You will need to unlock your iphone for that
> Please play youtube music Playing youtube music... please open youtube music app to do that
All settings and permission granted. Utterly painful.
You'll need to unlock your iPhone first. Even though you're staring at the screen and just asked me to do something, and you saw the unlocked icon at the top of your screen before/while triggering me, please continue staring at this message for at least 5 seconds before I actually attempt FaceID to unlock your phone to do what you asked.
I think half your examples are made up, or not Apple's fault, but it sounds like what you really want is to disable your passcode.
3 replies →
Do you want people being able to command your phone without unblocking? Maybe what you want is to disable phone blocking all together
10 replies →
Right, but you understand why allowing access to unauthenticated voice is bad for security right?
3 replies →
re: youtube music, I just tried it on my phone and it worked fine... maaaybe b/c you're not a youtube premium subscriber and google wants to shove ads into your sweet sweet eyeballs?
The one that kindof caught me off guard was asking "hey siri, how long will it take me to get home?" => "You'll need to unlock your iPhone for that, but I don't recommend doing that while driving..." => if you left your phone unattended at a bar and someone could figure out your home address w/o unlock.
...I'm kindof with you, maybe similar to AirTags and "Trusted Locations" there could be a middle ground of "don't worry about exposing rough geolocation or summary PII". At home, in your car (connected to a known CarPlay), kindof an in-between "Geo-Unlock"?
1 reply →
Its hard to come up with useful AI apps that aren't massive security or privacy risks. This is pretty obvious. For an agent to be really useful it needs to have access to [important stuff] but giving an AI access to [important stuff] is very risky. So you can get some janky thing like OpenClaw thats thrown together by one guy and has no boundaries and everyone on HN thinks is great, but its going to be very difficult for a big firm to make a product like that for mass consumption without it risking a massive disaster. You can see that Apple and Microsoft and Salesforce and everyone are all wrestling with this. Current LLMs are too easily hoodwinked.
I think you're being very generous. There's almost 0 chance they had this actually working consistently enough for general use in 2024. Security is also a reason, but there's no security to worry about if it doesn't really work yet anyway
The more interesting question I have is if such Prompt Injection Attacks can ever be actualy avoided, with how GenAI works.
Removing the risk for most jobs should be possible. Just build the same cages other apps already have. Also add a bit more transparency, so people know better what the machine is doing, maybe even with a mandatory user-acknowledge for potential problematic stuff, similar to how we have root-access-dialogues now. I mean, you don't really need access to all data, when you are just setting a clock, or playing music.
Perhaps not, and it is indeed not unwise from Apple to stay away for a while given their ultra-focus on security.
They could be if models were trained properly, with more carefully delineated prompts.
I'd be super interested in more information on this! Do you mean abandoning unsupervised learning completely?
Prompt Injection seems to me to be a fundamental problem in the sense that data and instructions are in the same stream and there's no clear/simple way to differentiate between the two at runtime.
1 reply →
The prompt injection thing is especially nasty for agents because they process untrusted input (web pages, emails, documents) and can take real actions. With a chatbot, prompt injection makes it say something dumb. With an agent that acts as you, a malicious payload hidden in an email could make it forward your contacts, reply on your behalf, whatever. You can't fix this in the model alone — you need an enforcement layer outside the model that limits what it can actually do regardless of what it thinks it should do. I'd bet Apple is working on exactly this and it's why they're taking their time.