Comment by ranguna

5 hours ago

I get what you're saying, but this is resonating with me and making me feel for the author:

Cursor: we have top notch safeguards for destructive operations, you have our guarantee, we are the best

Author: uses their tools expecting their guarantees to be true (I would expect them to have a confirmation before destructive operation outside their prompt, as a coded system guardrail)

Cursor AI: Does destructive operation without asking

Author: feels betrayed.

So yeah, I think the author is right because they trusted Cursor to have better system guardrails, they didn't (agents shouldn't be able to delete a volume without having a meta-guardrail outside the prompt). Now the author knows and so do we: even if companies say they have good guardrails, never trust them. If it's not your code, you have no guarantees.
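To make the "meta-guardrail outside the prompt" idea concrete, here is a minimal hypothetical sketch (not Cursor's actual implementation, which isn't public): the application code intercepts every command the agent proposes and requires human confirmation for destructive ones, so the gate cannot be talked around inside the prompt.

```python
import re

# Hypothetical patterns for "destructive" commands; a real tool would
# need a much more careful classifier than this illustrative denylist.
DESTRUCTIVE_PATTERNS = [
    r"\brm\s+-rf\b",
    r"\bdocker\s+volume\s+rm\b",
    r"\bDROP\s+(TABLE|DATABASE)\b",
]

def is_destructive(command: str) -> bool:
    """Return True if the command matches a known destructive pattern."""
    return any(re.search(p, command, re.IGNORECASE) for p in DESTRUCTIVE_PATTERNS)

def run_agent_command(command: str, confirm) -> str:
    """Gate an agent-proposed command behind a hard-coded check.

    `confirm` is a callback that asks the human. Because this check
    lives in application code rather than in the system prompt, no
    amount of prompt injection or model misbehavior can skip it.
    """
    if is_destructive(command) and not confirm(command):
        return "blocked"
    return "executed"  # placeholder for the real execution path
```

The point of the sketch is only that such a gate is ordinary engineering, not "magic control over the LLM": a confirmation step coded outside the model's reach.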

Sorry - still the author's fault. They didn't understand how LLMs work. They thought Cursor had implemented some magic "I control every action the LLM takes" thing. That's impossible.

  • Right. But Cursor _said_ they had some magic. At some point you have to trust vendors. I don't know exactly how AWS guarantees eleven nines of durability on S3. But I sure hope that they do.

    • Yeah, and when you interview the junior dev who also convinces you they're smart and have something special, and they also delete prod, guess what... not that dev's fault either.

    • I mean, AWS doesn't really "guarantee" anything; they just say that if they can't meet the bar, they'll refund you in credits, which is effectively money.