
Comment by RandomLensman

1 year ago

Not just logic puzzles but also applying information - and yes, I tried a few things.

Humans tend to be pretty poor at this too (though training can help), since it isn't easy to genuinely think a problem through and solve it - we don't have a general recipe to follow there, and it seems neither do LLMs (otherwise they wouldn't fail).

What I'm getting at is that I'd want a reasoning machine to be what a pocket calculator is for arithmetic: it fails only in rare exceptions, rather than inheriting human weaknesses.