Comment by quantummagic

3 days ago

But humans do the same thing. For how many eons did we make the mistake of attributing everything to God's will, without a scientific thought in our heads? It's really easy to be wrong when the consequences don't lead to your death, or are actually beneficial. The thinking machines are still babies, whose ideas aren't honed by personal experience; but that will come, in one form or another.

> The thinking machines are still babies, whose ideas aren't honed by personal experience; but that will come, in one form or another.

Some machines, maybe. But attention-based LLMs aren't those machines.

  • I'm not sure. Look at what they're already doing with feedback in code generation: the LLM "hallucinates", generates the wrong idea, then tests its code only to find it doesn't compile, and goes on to revise its idea and try again.
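
A minimal sketch of that compile-and-retry loop, assuming a hypothetical `ask_llm` helper standing in for whatever code-generating model you call:

```python
import os
import subprocess
import tempfile

def ask_llm(prompt: str) -> str:
    """Hypothetical stand-in for a call to any code-generating LLM API."""
    raise NotImplementedError("wire this up to your model of choice")

def generate_with_feedback(task: str, max_attempts: int = 3) -> str | None:
    """Generate C code, compile it, and feed compiler errors back to the model."""
    prompt = task
    for _ in range(max_attempts):
        code = ask_llm(prompt)
        with tempfile.NamedTemporaryFile(suffix=".c", delete=False, mode="w") as f:
            f.write(code)
            path = f.name
        result = subprocess.run(
            ["gcc", "-o", path + ".out", path],
            capture_output=True, text=True,
        )
        os.unlink(path)
        if result.returncode == 0:
            return code  # compiled cleanly; the "idea" survived the test
        # Append the compiler's complaint so the next attempt can revise.
        prompt = f"{task}\n\nPrevious attempt failed to compile:\n{result.stderr}"
    return None
```

The compiler here plays the role of "personal experience": a cheap, external check that kills bad ideas before they propagate, with the error text acting as the feedback signal.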