Comment by laszlojamf
9 hours ago
The way I see it, the problem with LLMs is the same as with self-driving cars: trust. You can ask an LLM to implement a feature, but unless you're pretty technical yourself, how will you know that it actually did what you wanted? How will you know that it didn't catastrophically misunderstand what you wanted, making something that works for your manual test cases, but then doesn't generalize to what you _actually_ want to do? People have been saying we'll have self-driving cars in five years for fifteen years now. And even if it looks like it might be finally happening now, it's going glacially slow, and it's one run-over baby away from being pushed back another ten years.
The self-driving car analogy is a good one, because what happens when you trust the car enough to do most of your driving, but it suddenly thrusts the controls upon you when it shits the bed and can't figure out what to do? You suddenly realise you've become a very rusty driver in a moment that requires fast recall of skill, but your car is already careening off a cliff while you have this realisation.
[The "children of the magenta line"](https://www.computer.org/csdl/magazine/sp/2015/05/msp2015050...) is a god explanation of this, and is partly why I often dissuade junior devs from pretty user friendly using tools that abstract away the logic beneath them.
People used to brush away this argument with plain statistics. Supposedly, if the death rate is below that of the average human driver, you are supposed to lean back and relax. I never bought this one. It's like saying LLMs write better text than the average human can, so you are supposed to use them, no matter how much you bring to the table yourself.