Comment by cushycush

4 days ago

I completely disagree. I think the author makes a fair point about safety concerns regarding AI tooling. The author sounds knowledgeable enough to me. Even if some of their suggestions are a bit crass, most of them aren’t. Railway should most definitely not be putting backups within the same volume (even if documented). AI should not have done that operation when they have explicit rules not to. The industry has a lot of work to do in this department. I would be extremely pissed off too.

The whole “vibecoding” argument is stupid. Everyone is pissed because it’s taking their jobs, so when issues like this occur they say, “welp, you shouldn’t have vibe coded then.” Issues like this occurred before vibe coding and still occur without it, probably far more often at the hands of actual people than AI. I’m frustrated too; I love coding. I’ve been doing it for 15 years. But either way, we have to get used to the idea that we won’t be coding in the future. The whole industry is moving that way, and moving fast. You can’t do anything to change it. You can’t deny that you can complete projects 1000000x faster when coding with agents than by your own hands. Adapt. Stop complaining.

> AI should not have done that operation when they have explicit rules not to.

How much experience do you have with LLMs?

One of the first lessons developers learn after working with LLMs a bit is that the LLM will hallucinate, and you need to be alert and competent enough to recognize when it happens. Sort of like how a car with steering assist requires you to pay attention and take personal responsibility for anything that happens.

As a consequence of that, one of the next lessons developers learn is that there is no such thing as "an explicit rule" for LLMs. "Explicit rules" can still be ignored by an LLM under many different circumstances. The sooner the developer learns this fact, the sooner they can be productive with LLMs, and the less likely they are to delete their own production database and blame it on tools they're unfamiliar with.

> The author sounds knowledgeable enough to me.

Nope, their complaint about having an API "ask" whether you should delete or not clearly shows the author has no idea how APIs work; a request is a one-shot call, not a conversation. They could have proposed that a deletion API require two separate requests: one for the deletion request, which returns a token, and another for confirmation, which presents the token returned by the first request. But that is not what they said.

Also, as others have said, this wouldn't have helped anyway, because the AI could just call both endpoints one after the other and the result would be the same, especially if the first request responds with "call this other endpoint with this token to confirm your deletion request."
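To make that concrete, here's a minimal sketch of the token-based two-step flow and why it doesn't stop an autonomous caller. Everything here is hypothetical: the function names, the endpoints in the docstrings, and the in-memory token store are made up for illustration, not any real service's API.

```python
import secrets

# Hypothetical in-memory store of pending deletions, keyed by one-time token.
_pending: dict[str, str] = {}

def request_deletion(resource_id: str) -> str:
    """Step 1 (e.g. POST /resources/<id>/delete): register intent,
    return a one-time confirmation token."""
    token = secrets.token_hex(16)
    _pending[token] = resource_id
    return token

def confirm_deletion(token: str) -> str:
    """Step 2 (e.g. POST /deletions/confirm): the delete only happens
    when the token from step 1 is presented back."""
    resource_id = _pending.pop(token, None)
    if resource_id is None:
        raise ValueError("unknown or already-used token")
    return f"deleted {resource_id}"

# The problem: an agent simply pipes the token from call 1 into call 2.
# The "confirmation" step never involves a human pause at all.
def agent_delete(resource_id: str) -> str:
    token = request_deletion(resource_id)
    return confirm_deletion(token)
```

The handshake only adds safety if the confirming party is someone other than the requester; when one agent holds both ends, it's just two lines of code instead of one.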