Comment by bloaf
3 days ago
I'll make the following observation:
The contrapositive of "no LLM is thinking like a human" is "no human is thinking like an LLM."
And I do not believe we actually understand human thinking well enough to make that assertion.
Indeed, it is my deep suspicion that we will eventually achieve AGI not by totally abandoning today's LLMs for some other paradigm, but rather by embedding them in a loop with the right persistence mechanisms.
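If you want to picture what I mean, here's a minimal Python sketch, assuming a hypothetical llm() call (stubbed out so it actually runs) and a JSON file as the persistence mechanism; a real system would swap in an actual model API and durable storage:

    import json

    def llm(prompt):
        # Stand-in for a real model call; returns a canned reply so the sketch runs.
        return f"(model output for: {prompt[-40:]})"

    def agent_loop(goal, steps=3, memory_path="memory.json"):
        # Persistence mechanism: memory survives across runs of the loop.
        try:
            with open(memory_path) as f:
                memory = json.load(f)
        except FileNotFoundError:
            memory = []
        for _ in range(steps):
            # Feed the accumulated memory back into the model on every iteration.
            prompt = f"Goal: {goal}\nMemory: {memory}\nNext step?"
            thought = llm(prompt)
            memory.append(thought)
        with open(memory_path, "w") as f:
            json.dump(memory, f)
        return memory

    print(agent_loop("demonstrate an LLM embedded in a loop"))

The point isn't the plumbing; it's that the model itself stays frozen while the loop plus memory accumulates state across iterations.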
Given that LLMs are incapable of synthetic a priori knowledge and humans are capable of it, I would say that, as the tech stands currently, it's reasonable to make both of those statements.
The loop, or more precisely the "search," does the novel part of thinking; the brain is just optimizing that process. Evolution managed with the simplest possible model, copying with occasional errors, and in one run it made every one of us. The moral: if you scale the search, the model can be dumb.
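That "copying with occasional errors" recipe is trivial to demonstrate. A toy sketch, assuming the dumbest possible model (uniform random character mutation) plus selection pressure toward a target string; the names and parameters are illustrative, not from any real system:

    import random
    import string

    TARGET = "every one of us"
    ALPHABET = string.ascii_lowercase + " "

    def mutate(s, rate=0.05):
        # Copying with occasional errors: each character may be miscopied.
        return "".join(random.choice(ALPHABET) if random.random() < rate else c
                       for c in s)

    def fitness(s):
        return sum(a == b for a, b in zip(s, TARGET))

    # Start from a random string and keep the best of many error-prone copies.
    best = "".join(random.choice(ALPHABET) for _ in TARGET)
    generation = 0
    while best != TARGET:
        generation += 1
        best = max((mutate(best) for _ in range(200)), key=fitness)
    print(f"reached {TARGET!r} in {generation} generations")

The "model" here knows nothing; all the intelligence lives in the scaled-up search and the selection criterion.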
You may be right in principle, but let's not underestimate the scale of the search that led to us. In addition to deep time on Earth, we may well be just part of a tiny fruitful fraction of a universe-wide and mostly fruitless search.