Comment by TeMPOraL

9 months ago

> how will you see the one time out of X where it confidently provides you false information that you happily use because it usually works?

You don’t. You treat it like you would a human worker: set your process to detect or tolerate wrong output. If you can't, don't apply this tool to your work.
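For illustration, here's a minimal sketch of what "detect or tolerate" can mean in practice: run the model's answer through a check you trust, and route anything that fails to a human instead of using it anyway. `call_llm` and `looks_valid` are hypothetical placeholders for whatever model client and domain-specific check you actually have, not any particular library's API.

```python
def call_llm(prompt: str) -> str:
    raise NotImplementedError  # plug in your actual model client here

def looks_valid(answer: str) -> bool:
    # Domain-specific check: schema validation, cross-checking against a
    # source of truth, running tests on generated code, etc.
    return bool(answer.strip())

def answer_with_checks(prompt: str, max_attempts: int = 3) -> str:
    # Retry a few times; if nothing passes the check, escalate rather than
    # silently using unverified output.
    for _ in range(max_attempts):
        answer = call_llm(prompt)
        if looks_valid(answer):
            return answer
    raise RuntimeError("Unverified LLM output; needs human review")
```

The shape of the check matters more than the retry loop: if you can't write *some* check like `looks_valid` for your task, that's the signal the tool doesn't fit that part of your work.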

This is true, but it misses a key fact: typical LLM errors are different from human errors. Not that they're worse or better, just that you need to understand where and when an LLM is more likely to make mistakes, and how to manage that.