Comment by spuz

1 month ago

There are a lot of people who don't know stuff. Nothing wrong with that. He says in his video "I love Google, I use all the products. But I was never expecting for all the smart engineers and all the billions that they spent to create such a product to allow that to happen. Even if there was a 1% chance, this seems unbelievable to me" and I honestly don't see how you can blame the average person for believing that.

I think there is far less than a 1% chance of this happening, but there are probably millions of Antigravity users at this point; even a one-in-a-million chance of it happening is already a problem.

We need local sandboxing for FS and network access (e.g. via Linux namespaces/`cgroups`, or similar mechanisms on non-Linux OSes) to run these kinds of tools more safely.

  • Codex does such sandboxing, fwiw. In practice it gets pretty annoying when, e.g., it wants to use the Go CLI, which uses a global module cache. Claude Code recently got something similar[0] but I haven't tried it yet.

    In practice I just use a Docker container when I want to run Claude with `--dangerously-skip-permissions`.

    [0]: https://code.claude.com/docs/en/sandboxing

  • We also need laws. Releasing an AI product that can (and does) do this should be like selling a car that blows your finger off when you start it up.

    • This is more akin to selling a car to an adult who cannot drive, who then proceeds to ram it through their garage door.

      It's perfectly within the capabilities of the car to do so.

      The burden of proof is much lower, though, since the worst that can happen is that you lose some money or, in this case, some hard drive contents.

      For the car, the seller would be investigated because there was a possible threat to life; for an AI, it's buyer beware.

    • Responsibility is shared.

      Google (and others) are, in my opinion, flirting with false advertising in how they market the capabilities of these "AI"s to mainstream audiences.

      At the same time, the user is responsible for their device and for what code and programs they choose to run on it, and any outcomes of those choices are their responsibility.

      Hopefully they've learned that you can't trust everything a big corporation tells you about their products.

    • This is an archetypal case where a law wouldn't help. The other side of the coin is that this is exactly a data-loss bug in a product that is perfectly capable of being modified to make it harder for a user to screw up this way. Have people forgotten how comically easy it was to do this without any AI involved? Then shells got just a wee bit smarter and it got harder to do this to yourself.

      LLM makers that make this kind of thing possible share the blame. It wouldn't take a lot of manual functional testing to find this bug. And it is a bug. It's unsafe for users. But it's unsafe in a way that doesn't call for a law, just like `rm -rf *` did not need a law.

    • There are laws about waiving liability for experimental products.

      Sure, it would be amazing if everyone had to do a 100-hour course on how LLMs work before interacting with one.

Didn't sound to me like GP was blaming the user; just pointing out that "the system" is set up in such a way that this was bound to happen, and is bound to happen again.