Comment by godelski
5 days ago
> the math of the LLM doesn't matter to the point I'm making.
The point I'm making is that to make effective use of a tool, you should know what it can and can't do. It's really the "all models are wrong, but some models are useful" paradigm: to know which models are useful, you have to know how your models are wrong.
Sure, you can blindly trust too, but that can get pretty dangerous. While most of the time we operate on high levels of trust, I'm unconvinced our models have earned it. Unless we can strongly demonstrate that they are not optimized to trick us (in our domains of interest), they should be treated as untrustworthy, not trustworthy.
The part of the tool that I'm "blindly trusting" is the part any competent programmer can reason about.