Comment by ryan_n

4 days ago

So you essentially trust the output of the model from beginning to end? Curious to know what type of application you're building where you can safely do that.

Edit: to clarify, I know these models have gotten significantly better. The output is pretty incredible sometimes, but trusting it end to end like that just seems super risky still.

I guarantee you whatever time you think you're saving is nothing quantifiable.

LLMs can't be responsible for deciding what code you use because they have no skin in the game. They don't even have skin.

If you type fast, it takes just as long to write the code yourself as it does to review the model's output. Plus you actually get flow time when you're coding.

For heaven's sake, people, have the robot write your unit tests and dashboards, not your production code. Otherwise delete yourself.