Comment by tacker2000
1 month ago
This guy is vibe-coding some React app, doesn't even know what "npm run dev" does, so he let the LLM just run commands. So basically a consumer with no idea of anything. This stuff is gonna happen more and more in the future.
There are a lot of people who don't know stuff. Nothing wrong with that. He says in his video "I love Google, I use all the products. But I was never expecting for all the smart engineers and all the billions that they spent to create such a product to allow that to happen. Even if there was a 1% chance, this seems unbelievable to me" and for the average person, I honestly don't see how you can blame them for believing that.
I think there is far less than a 1% chance of this happening, but there are probably millions of Antigravity users at this point; even a one-in-a-million chance of this happening is already a problem.
We need local sandboxing for FS and network access (e.g. via namespaces/`cgroups` on Linux, or similar mechanisms on non-Linux OSes) to run these kinds of tools more safely.
Codex does such sandboxing, fwiw. In practice it gets pretty annoying when, e.g., it wants to use the Go CLI, which uses a global module cache. Claude Code recently got something similar[0] but I haven't tried it yet.
In practice I just use a Docker container when I want to run Claude with `--dangerously-skip-permissions`.
[0]: https://code.claude.com/docs/en/sandboxing
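If anyone wants to copy that setup, it's roughly the following (a minimal sketch; the image tag and install steps are illustrative assumptions, and the agent keeps network access since it has to reach its API):

```sh
# Throwaway container: the agent can only touch the mounted project dir,
# so a runaway delete is confined to /workspace.
# (Image tag is illustrative; any image with node and bash works.)
docker run --rm -it \
  -v "$PWD":/workspace \
  -w /workspace \
  node:22 bash

# Inside the container, install and launch the agent, e.g.:
# (package name as Anthropic publishes it on npm; check their docs)
#   npm install -g @anthropic-ai/claude-code
#   claude --dangerously-skip-permissions
```

Worst case it wipes /workspace, which is the project you were already letting it edit; the rest of your filesystem stays out of reach.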
We also need laws. Releasing an AI product that can (and does) do this should be treated like selling a car that blows your finger off when you start it up.
Didn't sound to me like GP was blaming the user; just pointing out that "the system" is set up in such a way that this was bound to happen, and is bound to happen again.
Yup, 100%. A lot of the comments here are "people should know better" - but in fairness to the people doing stupid things, they're being encouraged by the likes of Google, ChatGPT, Anthropic etc., to think of letting an indeterminate program run free on your hard drive as "not a stupid thing".
The number of stupid things I've done, especially early on in programming, because tech companies, thought leaders etc. suggested they were not stupid, is much larger than I'd like to admit.
> but in fairness to the people doing stupid things, they're being encouraged by the likes of Google, ChatGPT, Anthropic etc., to think of letting an indeterminate program run free on your hard drive as "not a stupid thing".
> The number of stupid things I've done, especially early on in programming, because tech companies, thought leaders etc. suggested they were not stupid, is much larger than I'd like to admit.
That absolutely happens, and it still amazes me that anyone today would take at face value anything stated by a company about its own products. I can give young people a pass; then something like this will happen to them and hopefully they'll learn their lesson and be more skeptical of what companies say.
> I can give young people a pass
Or just anyone non-technical. They barely understand these things; if someone makes a claim, they kinda have to take it at face value.
What all the FAANG companies are doing is massively irresponsible...
And he's vibing replies to comments in the Reddit thread too. When commenters point out that he shouldn't run in YOLO/Turbo mode and should review commands before executing them, the poster replies that they didn't know they had to be careful with AI.
Maybe AI providers should give more warnings and not falsely advertise the capabilities and safety of their models, but it should be pretty common knowledge at this point that, despite marketing claims, the models are far from autonomous and need heavy guidance and review.
In Claude Code, the option is called `--dangerously-skip-permissions`; in Codex, it's `--dangerously-bypass-approvals-and-sandbox`. Google would do better to put a bigger warning label on it, but it's not a complete unknown to the industry.
This is engagement bait. It's been flooding Reddit recently; I think there's a firm or something that does it now. Seems very well lubricated.
Note how OP is very nonchalant about all the responses, mostly just agreeing with or mirroring the comments.
I often see it used for astroturfing.
I'd recommend you watch the video linked at the top of the Reddit post. Everything matches up with an individual learner who genuinely got stung.
The command it supposedly ran is not provided, and the "spaces" explanation is obvious nonsense. It is possible the user accidentally deleted their own files, or that they disappeared for some other reason.
Regardless of whether that was the case, it would be hilarious if the laid-off QA workers tested their former employers' software and raised strategic noise to tank the stock.
> So basically a consumer with no idea of anything.
Not knowing is sort of the purpose of AI. It's doing the 'intelligent' part for you. If we need to know, it's because the AI is currently NOT good enough.
Tech companies seem to be selling the following caveat: if it's not good enough today, don't worry, it will be in XYZ time.
It still needs guardrails, and some domain knowledge, at least to prevent it from using any destructive commands.
I don't think that's it at all.
> It still needs guardrails, and some domain knowledge, at least to prevent it from using any destructive commands
That just means the AI isn't adequate, which is the point I am trying to make. It should 'understand' not to issue destructive commands.
By way of crude analogy: when you're talking to a doctor, you're necessarily assuming he has domain knowledge, guardrails, etc.; otherwise he wouldn't be a doctor. With AI that isn't the case, as it doesn't understand. It's fed training data and given prompts so as to steer it in a particular direction.
Natural selection is a beautiful thing.
It will, especially with the activist trend towards dataset poisoning… some even know what they're doing.
Because that is exactly what the hype says "AI" can do for you.
There's a lot of power in letting an LLM run commands to debug and iterate.
Frankly, an unquoted space in a file path is going to be an incredibly easy thing to overlook, even if you're reviewing every command.
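To illustrate (the actual command from the incident was never shared, so this is a generic reconstruction of the bug class in plain shell):

```sh
# A path containing a space, held in an unquoted variable, splits into
# two arguments under shell word splitting:
PROJECT="$HOME/my project"

rm -rf $PROJECT/build    # expands to: rm -rf /home/user/my project/build
                         # i.e. tries to delete "/home/user/my" AND "project/build"

rm -rf "$PROJECT/build"  # quoted: removes only the intended directory
```

One missing pair of quotes turns a scoped delete into a delete of whatever the first half of the path happens to name, and that's exactly the kind of thing that sails past a quick review.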
I've recently been experimenting with Antigravity, writing a React app. I too didn't know how to start the server or what "npm run dev" is. I consider myself fairly technical, so I caught up as I went along.
While using the vibe coding tools it became clear to me that this is not something to be used by folks who are not technically inclined, because at some point they'll need to learn about context, tokens, etc.
I mean, this guy had a single window, 10k lines of code, and just kept burning tokens on the simplest, vaguest prompts. This whole issue might only have been possible because of Antigravity's free tokens. On Cursor the model might have just stopped and asked to be fed more money to start working again -- and then deleted all the files.
Well but 370% of code will be written by machines next year!!!!!1!1!1!!!111!
And the price will have decreased 600%!