Comment by famouswaffles
2 years ago
> because they don't understand anything at all about the world.
LLMs understand plenty, in any way that can be tested. It's really funny when making mistakes is taken as evidence of a lack of understanding. By that standard, people don't understand anything either.
> I often see ChatGPT stumped by simple variations of brain teasers
Only when everything else matches the original teaser exactly, and guess what? Humans fall for this too. They see something they've memorized and go full speed ahead. Simply changing the names is enough to get it to solve the variation.