Comment by CharlesW
6 months ago
> The flip side of all this is of course the idea that there is still something emergent, unplanned, and mind-like.
For people who have only a surface-level understanding of how they work, yes. A nuance of Clarke's law that "any sufficiently advanced technology is indistinguishable from magic" is that the bar sits differently for everybody, depending on the depth of their understanding of the technology in question. That bar is so low for our largely technologically illiterate public that a bothersome percentage of us have started to augment, and even replace, religious/mystical systems with AI-powered godbots (LLMs fed "God Mode"/divination/manifestation prompts).
(1) https://www.spectator.co.uk/article/deus-ex-machina-the-dang... (2) https://arxiv.org/html/2411.13223v1 (3) https://www.theguardian.com/world/2025/jun/05/in-thailand-wh...
> For people who have only a surface-level understanding of how they work, yes.
This is too dismissive, because it rests on the assumption that we have a mechanistic model of the brain accurate enough to know when something is or is not mind-like. That just isn't the case.
Nah. As a person who knows in detail how LLMs work, with a probably unique alternative perspective in addition to the commonplace one, I find any claim that they have no emergent behaviors to commit the same fallacy as claiming that crows can't be black because they have the DNA of a bird.
> the same fallacy as claiming that crows can't be black because they have the DNA of a bird.
What fallacy is that? I'm a fan of logical fallacies, but I've never heard that claim before, nor am I finding any reference with a quick search.
(Not the parent)
It doesn't have a name, but I have repeatedly noticed arguments of the form "X cannot have Y, because <explains in detail the mechanism that makes X have Y>". I wanna call it the "fallacy of reduction", maybe: the idea that because a trait can be explained by a process, the trait is thereby proven absent.
(I.e., in this case: "LLMs cannot think, because they just predict tokens." Yes, inasmuch as they think, they do so by predicting tokens. You have to actually show why predicting tokens is insufficient to produce thought.)
Good catch. No such fallacy exists. Contextually, the implied reasoning (though faulty) relies on the fallacy of denying the antecedent. Modus ponens - from "if A then B" and A, conclude B - does NOT license "not A, therefore not B". Seeing B doesn't establish A (affirming the consequent), any more than not seeing A establishes not-B (denying the antecedent). It's the difference between a necessary and a sufficient condition: A is a sufficient condition for B, but the conditional alone doesn't settle whether either A or B is a necessary condition for the other.
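For reference, here is a minimal sketch of the argument forms in question, written as propositional inference rules in LaTeX (the labels and layout are my own, not part of the original comment):

    % Valid: modus ponens
    \frac{A \to B \qquad A}{B}

    % Invalid: denying the antecedent
    \frac{A \to B \qquad \neg A}{\neg B}

    % Invalid: affirming the consequent
    \frac{A \to B \qquad B}{A}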
I think s/he meant swans instead (in ref. to Popperian epistemology).
Not sure though; the point s/he is making isn't really clear to me either.
I've seen some of the world's top AI researchers talk about the emergent behaviors of LLMs. It's been a major topic over the past couple of years, ever since Microsoft's famous paper on the unexpected capabilities of GPT-4. And they still have little understanding of how it happens.