Comment by Sankozi
20 hours ago
I have the opposite view - LLMs have many similarities with humans. A human, especially a poorly trained one, could have made the same mistake. A human with amnesia could have come up with reasoning similar to that LLM's.
While LLMs generate "plausible text", humans just generate "plausible thoughts".
Just because it sounds coherent doesn't mean it is. You can construct a false equivalence for anything if you try hard enough: a sheet of plywood also has many similarities with humans (made from carbon, contains water, breaks when hit hard enough), but that doesn't mean they are even remotely equal.
I didn't write that they were equal. I wrote that they are similar in many ways.
Comparing LLMs to humans makes much more sense than comparing them to computer programs.