Comment by _dwt
7 hours ago
I have a question for all the "humans make those mistakes too" people in this thread, and elsewhere: have you ever read, or at least skimmed a summary of, "The Origin of Consciousness in the Breakdown of the Bicameral Mind"? Did you say "yeah, that sounds right"? Do you feel that your consciousness is primarily a linguistic phenomenon?
I am not trying to be snarky; I used to think that intelligence was intrinsically tied to or perhaps identical with language, and found deep and esoteric meaning in religious texts related to this (i.e. "in the beginning was the Word"; logos as soul as language-virus riding on meat substrate).
The last ~three years of LLM deployment have disabused me of this notion almost entirely, and I don't mean in a "God of the gaps" last-resort sort of way. I mean: I see the output of a purely-language-based "intelligence", and while I agree humans can make similar mistakes/confabulations, I overwhelmingly feel that there is no "there" there. Even the dumbest human has a continuity, a theory of the world, an "object permanence"... I'm struggling to find the right description, but I believe there is more than language manipulation to intelligence.
(I know this is tangential to the article, which is excellent as the author's usually are; I admire his restraint. However, I see exemplars of this take all over the thread so: why not here?)
>I am not trying to be snarky; I used to think that intelligence was intrinsically tied to or perhaps identical with language
I learned a long time ago that this wasn’t the case.
I can speak several languages, and many times when I remember something and want to search for it on Google or any other AI engine, I can’t recall which language I originally read it in.
So whatever mechanism the brain uses to store information, it’s certainly language‑agnostic. There are also many moments when you fully grasp a concept but forget the words to describe it, yet the concept itself remains clear in your mind.
>and while I agree humans can make similar mistakes/confabulations, I overwhelmingly feel that there is no "there" there.
What really opened my eyes a couple weeks ago (anyone can try this): I asked Sonnet to write an inference engine for Qwen3, from scratch, without any dependencies, in pure C. I gave it GGUF specs for parsing (to quickly load existing models) and Qwen3's architecture description. The idea was to see the minimal implementation without all the framework fluff, or abstractions. Sonnet was able to one-shot it and it worked.
And you know what, Qwen3's entire forward pass is just 50 lines of very simple code (mostly vector-matrix multiplications).
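For readers who haven't seen one, the inner loop that dominates such a forward pass really is this simple. Here's a hypothetical sketch (not code from the experiment above; the function name and layout are mine):

```c
#include <stddef.h>

/* Hypothetical sketch of the vector-matrix multiply that dominates a
 * transformer forward pass: out[r] = sum_c w[r*cols + c] * x[c].
 * Row-major weights, one output element per row. */
static void matvec(const float *w, const float *x, float *out,
                   size_t rows, size_t cols) {
    for (size_t r = 0; r < rows; r++) {
        float acc = 0.0f;
        for (size_t c = 0; c < cols; c++)
            acc += w[r * cols + c] * x[c];
        out[r] = acc;
    }
}
```

Nearly everything else in the pass (attention projections, the MLP blocks, the output head) is calls to a loop like this, interleaved with a handful of elementwise operations.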
The forward pass is only part of the story; you just get a list of token probabilities from the model, that is all. After the pass, you need to choose the sampling strategy: how to choose the next token from the list. And this is where you can easily make the whole model much dumber, more creative, more robotic, make it collapse entirely by just choosing different decoding strategies. So a large part of a model's perceived performance/feel is not even in the neurons, but in some hardcoded manually-written function.
Then I also performed "surgery" on this model by removing/corrupting layers and seeing what happens. If you do this exercise, you can see that it's not intelligence. It's just a text transformation algorithm, something like a "semantic template matcher": it generates output by finding, matching, and combining several prelearned semantic templates. A slight perturbation in one neuron can break the "finding" part and it collapses entirely: it can't find the correct template to match, and the whole illusion of intelligence breaks. Its corrupted output is what you'd expect from corrupting a pure text manipulation algorithm, not a truly intelligent system.
It feels like you've probably bought too deeply into the LLM bandwagon.
An LLM is a statistical next token machine trained on all stuff people wrote/said. It blends texts together in a way that still makes sense (or no sense at all).
Imagine you made a super simple program which answered yes/no to any question by generating a random number. It would get things right 50% of the time. You could then fine-tune it to say yes more often to certain keywords and no to others.
Just with a bunch of hardcoded paths, you'd probably fool someone into thinking that this AI has superhuman predictive capabilities.
This is what it feels like is happening; sure, it's not that simple, but you can code a basic GPT in an afternoon.
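The toy yes/no program described above fits in a few lines of C (a purely hypothetical illustration; the function name and keyword list are invented):

```c
#include <stdlib.h>
#include <string.h>

/* Hypothetical toy: answer yes/no at random, with a few hardcoded
 * keyword overrides bolted on as the "fine-tuning". rand() is left
 * unseeded here since this is only a sketch. */
static const char *toy_oracle(const char *question) {
    if (strstr(question, "sun"))  return "yes"; /* hardcoded path */
    if (strstr(question, "moon")) return "no";  /* hardcoded path */
    return (rand() % 2) ? "yes" : "no";         /* coin flip otherwise */
}
```

Ask it enough leading questions and the keyword paths start to look like understanding.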
If it were not "just a statistical next token machine", how different would it behave?
Can you find an example and test it out?
Wait, you're asking me to find and produce an example of a feasible and better alternative to LLMs when they are the current forefront of AI technology?
Anyway, just to play along: if it weren't just a statistical next token machine, the same question would always have the same answer and wouldn't be affected by a "temperature" value.
How do non-LLM-based world models behave?
If you look at different ancient traditions, you will notice how they struggle with the limitations of language, with its inability to represent certain things that are not just crucial for understanding the world, but also are even somehow communicable. Buddhists dug into that in a very analytical, articulate way, for instance.
Another perspective: cetaceans are considered to be as conscious as humans, but all attempts to interpret their communication as a language have failed so far. They can be taught simple languages to communicate with humans, as can chimps. But that apparently isn't how they process the world internally.
You're a little out of date. Cetaceans communicate images to each other in the form of ultrasonic chirps. They chirp, they hear a reflection, and they repeat the reflection.
Does this resemble human language, with syntax, the ability to define new notions based on known notions, etc?
> In the beginning were the words, and the words made the world. I am the words. The words are everything. Where the words end the world ends. You cannot go forward in an absence of space. Repeat: In the beginning were the words...
- a self-aware computer program in a video game, when you attempt to exceed the boundaries of its code
I think there are two types of discussions, when it comes to LLMs: Some people talk about whether LLMs are "human" and some people talk about whether LLMs are "useful" (ie they perform specific cognitive tasks at least as well as humans).
Both of those aspects are called "intelligence", and thus these two groups cannot understand each other.
> I'm struggling to find the right description
I think you're circling the concept of a "soul". It's the reason that we still see a life in non-communicative disabled people.
I've wanted to make an art piece. It would be a chatbox claiming to connect you to the first real intelligence, but that intelligence would be non-communicative. I'd assure you that it is the most intelligent being, that it had a soul, but that it just couldn't write back.
Intelligence and soul are not purely measurable phenomena. A man can do nothing but stupid things, say nothing but outright lies, and still be the most intelligent person. Intelligence is within.