
Comment by quantummagic

3 days ago

I'm not sure. Look at what they're already doing with feedback in code generation: the LLM "hallucinates", generates the wrong idea, then tests its code only to find that it doesn't compile, so it revises its idea and tries again.
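
In pseudocode the loop is roughly this (a minimal sketch; `generate_code` is a hypothetical stand-in for whatever model API is actually used, and the "test" here is just Python's built-in `compile` as a parse check):

```python
# Sketch of a generate -> compile-check -> retry feedback loop.
# `generate_code` is a placeholder, not a real LLM client.

def generate_code(prompt: str, feedback: str | None = None) -> str:
    """Placeholder for an LLM call; a real system would query a model here."""
    if feedback is None:
        return "def add(a, b) return a + b"      # first attempt: syntax error
    return "def add(a, b):\n    return a + b"    # revised attempt after feedback


def generate_with_feedback(prompt: str, max_attempts: int = 3) -> str | None:
    feedback = None
    for _ in range(max_attempts):
        code = generate_code(prompt, feedback)
        try:
            compile(code, "<generated>", "exec")  # does it even parse?
            return code                           # success: keep this version
        except SyntaxError as err:
            # Feed the compiler error back so the next attempt can correct it.
            feedback = f"SyntaxError: {err}"
    return None


if __name__ == "__main__":
    print(generate_with_feedback("write an add function"))
```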