Comment by tptacek

5 days ago

Right, I'm just saying, I meant something else by the term than you did. Again: my point is, the math of the LLM doesn't matter to the point I'm making. It's not the model figuring out whether the code actually compiled. It's 200 lines of almost straight-line Python code that has cracked the elusive computer science problem of running an executable and checking the exit code.
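
To be concrete, the "checking" half of that loop really is as simple as it sounds. A minimal sketch of the kind of check being described (the compiler command and paths here are placeholders, not the actual harness code):

    import subprocess

    def build_succeeded(source_path: str) -> bool:
        # Run the compiler as a subprocess; exit code 0 means the build worked.
        result = subprocess.run(
            ["gcc", "-o", "/tmp/candidate", source_path],
            capture_output=True,
        )
        return result.returncode == 0

That's the part being "trusted": spawn a process, read its exit code.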

  > the math of the LLM doesn't matter to the point I'm making.

The point I'm making is that to make effective use of a tool, you should know what the tool can and can't do. It's really the "all models are wrong, but some models are useful" paradigm: to know which models are useful, you have to know how your models are wrong.

Sure, you can blindly trust, too, but that can get pretty dangerous. While we routinely operate on high levels of trust, I'm unconvinced these models have earned it. Until we can strongly demonstrate that they aren't optimizing for tricking us (in the domains we care about), they should be treated as untrusted, not trusted.

  • The part of the tool that I'm "blindly trusting" is the part any competent programmer can reason about.