Comment by HeatrayEnjoyer
3 days ago
It obviously can be resolved; otherwise we wouldn't be able to self-correct. The when is unknown, but not the if.

Reply, 3 days ago
> It obviously can be resolved; otherwise we wouldn't be able to self-correct. The when is unknown, but not the if.
We can sometimes correct ourselves, with training, in specific circumstances.
> The same insight (given enough time, a coding agent will make a mistake) is true for even the best human programmers, and I don’t see any mechanism that would make an LLM different.
The reason you will basically never recommend that somebody use, say, a completely nonexistent function is that you're not just guessing what the answer should be. Rather, you have a knowledge base which you believe to be correct and which you are constantly refining and drawing from.
LLMs do not function like this at all. All they have is a set of weights used to predict the next token given the prior tokens. Cascading errors work a lot like a lengthy math problem: if you make a mistake somewhere along the way, every calculation that builds on it drifts further and further from the right answer. The same is true of an LLM executing its prediction loop.
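To make that concrete, here's a toy sketch of the prediction loop (the `model` object and its `predict_next_token` method are hypothetical stand-ins, not any real library's API). The point is that each step conditions only on the tokens already emitted, so an early mistake stays in the context and skews everything that follows:

```python
# Toy sketch of autoregressive generation (illustrative only: the model
# object and predict_next_token method are hypothetical stand-ins).

def generate(model, prompt_tokens, max_new_tokens=50):
    context = list(prompt_tokens)   # everything the model has "said" so far
    for _ in range(max_new_tokens):
        # The only input at each step is the prior tokens; there is no
        # separate knowledge base to cross-check the prediction against.
        next_token = model.predict_next_token(context)
        context.append(next_token)
        # If next_token was wrong (say, the start of a nonexistent function
        # name), it now sits in the context, and every later prediction is
        # conditioned on that mistake, so errors tend to compound.
    return context[len(prompt_tokens):]
```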
This is why, when an LLM does give you a wrong answer, trying to get it to correct itself is usually just an exercise in frustration, and you'd be better off starting a completely new context.
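A hedged sketch of the two options, using a generic chat-message shape rather than any specific provider's API:

```python
# Sketch of two ways to handle a wrong answer (the message format here is a
# generic chat-style shape, not any particular provider's API).

wrong_answer = {"role": "assistant", "content": "Use pandas.read_xlsx(path)."}  # nonexistent function

# Option 1: argue with it in the same context. The wrong tokens remain in
# the prompt, and every follow-up prediction is conditioned on them.
same_context = [
    {"role": "user", "content": "How do I load an .xlsx file?"},
    wrong_answer,
    {"role": "user", "content": "That function doesn't exist. Try again."},
]

# Option 2: throw the context away and re-ask with a tighter prompt, so the
# mistake never enters the model's conditioning window at all.
fresh_context = [
    {"role": "user", "content": "How do I load an .xlsx file with pandas? "
                                "Name the exact function and show a short example."},
]
```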
We aren't LLMs, obviously.