Comment by simonw
6 days ago
It is entirely true that current LLMs do not learn from their mistakes, and that is a difference between, e.g., an LLM and a human intern.
It is we, the users of the LLMs, who need to learn from those mistakes.
If you prompt an LLM and it makes a mistake, you have to learn not to prompt it in the same way in the future.
It takes a lot of time and experimentation to find the prompting patterns that work.
My current favorite tactic is to dump sizable amounts of example code into the models every time I use them. I find this works extremely well: I take code I wrote previously that accomplishes a similar task, drop that in, and describe what I want it to build next.
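Roughly, the pattern looks something like this (the file name and the llm_call() helper here are just placeholders, not any particular model's API):

    # Minimal sketch of the "dump example code into the prompt" tactic.
    # fetch_and_cache.py and llm_call() are illustrative names only.
    from pathlib import Path

    def build_prompt(example_path: str, task: str) -> str:
        """Combine previously written code with a description of the next task."""
        example_code = Path(example_path).read_text()
        return (
            "Here is code I wrote earlier that solves a similar problem:\n\n"
            f"{example_code}\n\n"
            f"Using the same style and libraries, now build this: {task}"
        )

    prompt = build_prompt(
        "fetch_and_cache.py",
        "a version that also retries failed requests",
    )
    # response = llm_call(prompt)  # send to whichever model or CLI you use

The point is that the example code carries most of the context (style, libraries, conventions), so the written description of the new task can stay short.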
You seem to be assuming that the thing I'm learning is not "Stop using LLMs for this kind of work".