Comment by senko

4 days ago

> This shows the core of the flaw in the argument.

> "The tool is great. If the result is not perfect, it is the user to blame."

That's not what the parent said. A tool can be useful without being perfect. A result can be good without being perfect.

LLMs are power tools and can be dangerous (to your codebase, your data, or your health) if used improperly[0].

If you hold the chainsaw wrong and saw off your foot, it's not the chainsaw's fault[1].

[0] In this case, "properly" means understanding that they are nondeterministic, that they can hallucinate, that their output will vary between runs and must be verified, and that GIGO still applies.
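That verification step is concrete, not hand-wavy. A minimal sketch in Python, assuming a hypothetical `llm_generate()` helper standing in for whatever client you actually use, and made-up required fields ("host", "port") purely for illustration:

    import json

    def llm_generate(prompt: str) -> str:
        """Hypothetical stand-in for whatever LLM client you actually use."""
        raise NotImplementedError

    def get_config(prompt: str, retries: int = 3) -> dict:
        """Ask the model for JSON, but verify before trusting it."""
        for _ in range(retries):
            raw = llm_generate(prompt)
            try:
                data = json.loads(raw)
            except json.JSONDecodeError:
                continue  # output varies between runs; retry
            # "host" and "port" are made-up required fields for this example
            if isinstance(data, dict) and {"host", "port"} <= data.keys():
                return data
        raise ValueError("no valid output after retries; fall back or escalate")

Treating the model as a fallible component with a validation gate around it is exactly the "hold the chainsaw right" part.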

[1] The Altmans of the industry do the technology a great disservice by claiming "AGI achieved", "will replace workers", "PhD-level intelligence", and "it just works". It's false marketing, plain and simple. When you set expectations sky-high, of course any tech will disappoint.