Comment by jonplackett
7 months ago
I think we need to shift our idea of what LLMs do and stop thinking they are ‘thinking’ in any human way.
The best mental description I have come up with is they are “Concept Processors”. Which is still awesome. Computers couldn’t understand concepts before. And now they can, and they can process and transform them in really interesting and amazing ways.
You can transform the concept of ‘a website that does X’ into code that implements that website.
But it’s not thinking. We still gotta do the thinking. And actually that’s good.
Concept Processor actually sounds pretty good, I like it. That's pretty close to how I treat LLMs.
Are you invoking a 'god of the gaps' here? Is 'true' thinking whatever machines haven't mastered yet?
Not at all, I don’t think humans are magic at all.
But I don’t think even the ‘thinking’ LLMs are doing true thinking.
It’s like calling pressing the autocomplete suggestions on your iPhone ‘writing’. Yeah, kinda. It mostly forms sentences. But it’s not writing just because it follows the basic form of a sentence.
And an LLM, though now very good at writing, is just creating a very good impression of thinking. When you really examine what it’s outputting, it’s hard to call it true thinking.
How often does your LLM take a step back and see more of the subject than you prompted it to? How often does it have an epiphany that no human has ever had?
That’s what real thinking looks like - most humans don’t do tonnes of it most of the time either - but we can do it when required.