Comment by codechicago277

3 days ago

The fault lies entirely with the human operator for not understanding the risks of tying a model directly to the prod database; there's no excuse for this, especially without backups.
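
A minimal sketch of the alternative, assuming a SQLite stand-in for the prod database (the filename, table, and setup here are illustrative): hand the agent a connection the engine itself refuses to write through. A real deployment would do the same thing with a database role granted only SELECT, plus regular backups.

    import sqlite3

    # Stand-in "prod" database so the sketch is self-contained.
    setup = sqlite3.connect("prod.db")
    setup.execute("CREATE TABLE IF NOT EXISTS users (id INTEGER)")
    setup.commit()
    setup.close()

    # The agent only ever sees this handle; mode=ro makes every write
    # (INSERT/UPDATE/DELETE/DROP) fail in the engine, not in prompt text.
    agent_conn = sqlite3.connect("file:prod.db?mode=ro", uri=True)

    print(agent_conn.execute("SELECT count(*) FROM users").fetchone())  # reads work
    try:
        agent_conn.execute("DROP TABLE users")  # writes are refused
    except sqlite3.OperationalError as e:
        print("blocked:", e)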

To immediately turn around and try to bully the LLM the same way you would bully a human shows what kind of character this person has, too. Of course the LLM is going to agree with you and accept blame; it's literally trained to do that.

I don't see the appeal of tooling that shields you from learning the (admittedly annoying and largely accidental) complexity in developing software.

It can only make accidental complexity grow and people's understanding diminish.

When the inevitable problems become apparent and you claim people should have understood better, maybe using the tool that lets you avoid understanding things was a bad idea...

  • Sure, but every abstraction does that.

    A manager hiring a team of real humans vs. a manager hiring an AI: either way, the manager doesn't know or learn how the system works.

    And asking doesn't help: you can ask both humans and AI, and they'll differ in the strengths and weaknesses of their answers, but both will have them; the humans' answers come with their own inferential distance, and that can be hard to bridge.

    • That's not the same. In this case, a machine made a decision that went against its instructions. When a machine makes decisions by itself, no one knows anything about the process. A team of humans making decisions benefits from multiple points of view, even if the manager is the one who approves what gets implemented or decides the course of the project.

      Humans make mistakes, and critical ones too (CrowdStrike), but letting machines decide, and build, and do everything just cuts humans out of the process, and with the current state of "AI", that's just dumb.
