Comment by joefourier

9 hours ago

What's your definition of intelligence? If you exclude LLMs, you might have to exclude quite a few humans as well.

LLMs are artificial intelligence illusion engines: they only "reason" insofar as there is an already-made answer in their training data that they can retrieve and, at best, tweak. Take them somewhere with no training data, give them new axioms for your specific problem, and watch them fail, serving up incorrect gibberish as a confident answer. Humans at any level of intelligence wouldn't behave like that.