Comment by danielbln

9 days ago

Let's not put humans on too much of a pedestal, there are plenty of us who are not that reliable either. That's why we have tests, linting, types and various other validation systems. Incidentally, LLMs can utilize these as well.

Humans are unreliable in predictable ways. This makes review relatively painless since you know what to look for, and you can skim through the boilerplate and be pretty confident that it's right and isn't redundant/insecure, etc.

LLMs can use linters and type checkers too, but getting past them often leads an LLM down a path of mayhem and destruction: it will do pretty dumb things just to make the checks pass.