Comment by TIPSIO
2 days ago
Have you ever used any Anthropic AI product? You literally cannot do anything without big permission prompts, warnings, or an annoying always-on popup warning you about safety.
Claude Code has a YOLO mode, and from what I've seen, a lot of heavy users use it.
Fundamentally, any security mechanism that relies on users reading and intelligently responding to approval prompts is doomed to fail over time, even if the prompts are well designed. Approval fatigue kicks in, and people either start clicking through without reading or switch to systems that let them disable the warnings entirely (which is exactly why YOLO mode is a thing in Claude Code).
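For context, this is what "disabling the warnings" looks like in practice: a minimal sketch, assuming the Claude Code CLI is installed (the flag name is Anthropic's; wrapping the launch in Python is just to keep the example self-contained):

    import subprocess

    # Launch Claude Code with every per-action approval prompt disabled
    # ("YOLO mode"); the agent can then edit files and run shell commands
    # without asking first.
    subprocess.run(["claude", "--dangerously-skip-permissions"])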
Yes, it basically does! My point was that I really doubt Anthropic will fail to make it clear to users that this is manipulating their computer.
Users are asking it to manipulate their computer for them, so I don't think that part is being lost.
No, of course not. Well... apart from their API. That is a useful thing.
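(For anyone who hasn't touched it, a minimal sketch using the official anthropic Python SDK; it assumes ANTHROPIC_API_KEY is set in your environment, and the model id is only an illustrative placeholder:)

    import anthropic

    # The client reads ANTHROPIC_API_KEY from the environment by default.
    client = anthropic.Anthropic()

    # One round trip through the Messages API.
    message = client.messages.create(
        model="claude-sonnet-4-20250514",  # illustrative model id; check current docs
        max_tokens=256,
        messages=[{"role": "user", "content": "Say hello."}],
    )
    print(message.content[0].text)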
But you're missing the point. It is doing all this stuff with user consent, yes. It's just that the user fundamentally cannot provide informed consent as they seem to be out of their minds.
So yeah, technically, all those compliance checkboxes are ticked. That's just entirely irrelevant to the point I am making.
> It's just that the user fundamentally cannot provide informed consent
The user is an adult. They are capable of consenting to whatever they want, no matter how irrational it may look to you.
Uh, yes?
What does that refute?