
Comment by littlestymaar

3 days ago

> > The thinking machines are still babies, whose ideas aren't honed by personal experience; but that will come, in one form or another.
>
> Some machines, maybe. But attention-based LLMs aren't these machines.

I'm not sure. Look at what they're already doing with feedback in code generation: the LLM "hallucinates" and generates a wrong idea, then tests its code only to find that it doesn't compile, so it revises the idea and tries again.
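
Roughly, the loop looks something like this (a minimal sketch, not any particular tool's implementation; `generate_code` is a hypothetical stand-in for whatever model API is in use, and I'm assuming a Rust toolchain just for the compile check):

```python
import subprocess
import tempfile


def generate_code(prompt: str, feedback: str | None = None) -> str:
    """Hypothetical LLM call; `feedback` carries the previous compiler error, if any."""
    raise NotImplementedError  # stand-in for whatever model API is actually used


def compiles(source: str) -> tuple[bool, str]:
    """Type-check the generated source with rustc and capture any error output."""
    with tempfile.NamedTemporaryFile("w", suffix=".rs", delete=False) as f:
        f.write(source)
        path = f.name
    result = subprocess.run(
        ["rustc", "--emit=metadata", path],  # check without full codegen
        capture_output=True,
        text=True,
    )
    return result.returncode == 0, result.stderr


def generate_with_feedback(prompt: str, max_attempts: int = 3) -> str | None:
    feedback = None
    for _ in range(max_attempts):
        source = generate_code(prompt, feedback)  # model proposes code
        ok, error = compiles(source)              # external check, not the model's own confidence
        if ok:
            return source
        feedback = error  # the compiler error becomes context for the next attempt
    return None
```

The point is that the compiler's error output, not the model's own confidence, is what drives the revision.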