Comment by ryandrake

7 hours ago

This seems to be one of the huge weaknesses of current LLMs: despite the words "intelligence" and "machine learning" we throw around, they aren't really able to learn and improve their skills without someone changing the model. So they repeat the same mistakes and introduce new ones at random.

If I were tutoring a junior developer and he accidentally deleted the whole source tree or did something equally egregious, that would be a milestone learning moment in his career, and he would never ever do it again. But if the LLM does it accidentally, it will be apologetic, and yet after the next context window clear it has the same chance of doing it again.