Comment by dwallin
7 hours ago
Some people point at LLMs confabulating, as if this wasn’t something humans are already widely known for doing.
I consider it highly plausible that confabulation is inherent to scaling intelligence. To run computation on data whose dimensionality makes direct computation infeasible, you will most likely need to build a lower-dimensional representation and compute on that. Collapsing the dimensionality is lossy, which means there will be gaps between what the system thinks reality is and what it actually is.
The concern for me about LLMs confabulating is not that humans don't do it. It's that the massive scale at which LLMs will inevitably be deployed makes even the smallest confabulation extremely risky.
I don't understand this. Many small errors distributed across a large deployment sounds a lot like the normal failure mode of error-prone humans / cogs / whatever distributed across a wide deployment.
There's a difference between 1000 diverse humans with varied traits making errors that should cancel out by the law of large numbers vs 10 AIs with the same training data making errors that will likely correlate and compound upon each other.
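The statistical point here can be sketched with a toy simulation (hypothetical agents with Gaussian errors, nothing to do with any real model): independent errors shrink the aggregate error roughly like 1/sqrt(n), while perfectly correlated errors don't shrink at all.

```python
import random
import statistics

random.seed(0)

def trial(n_agents, correlated):
    # Each agent's judgement = truth (0.0) + an error term.
    if correlated:
        # Same bias for everyone, e.g. from shared training data.
        shared = random.gauss(0, 1)
        errors = [shared] * n_agents
    else:
        # Independent errors with the same spread.
        errors = [random.gauss(0, 1) for _ in range(n_agents)]
    # Error of the aggregate (averaged) estimate.
    return statistics.mean(errors)

def rms(values):
    # Root-mean-square error over many trials.
    return (sum(v * v for v in values) / len(values)) ** 0.5

indep = rms([trial(1000, correlated=False) for _ in range(2000)])
corr = rms([trial(1000, correlated=True) for _ in range(2000)])
# indep ends up near 1/sqrt(1000) ~ 0.03; corr stays near 1.0 —
# averaging 1000 copies of the same mistake buys you nothing.
```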
I have yet to see a comparison of human vs. LLM confabulation errors at scale.
"Many small errors" makes a presumption about LLM confabulation/hallucination that seems unwarranted. Pre-LLM humans (and our computers) have managed vast nuclear arsenals, bioweapons research, and ubiquitous global transport - as a few examples - without any catastrophic mistakes, so far. What can we reasonably expect as a likely worst-case scenario if LLMs replace all the relevant expertise and execution?
Let's say a given B2B system deployment typically requires 100 custom behaviours/scripts and 3 years worth of effort. A team of ten people can execute such a deployment in 3-4 months. The team has the capacity to fix up issues caused by small human errors as they arise, since they show up roughly once a week.
With the advent of LLMs, a new deployment now takes 3 days. Consequently, errors requiring human attention crop up several times a day.
Your project vue-skuilder has 6 GitHub Actions steps devoted to checking your work before it's allowed to go out. You do not trust yourself to get things right 100% of the time.
I am watching people trust LLM-based analysis and actions 100% of the time without checking.
> Some people point at LLMs confabulating, as if this wasn’t something humans are already widely known for doing.
I think we need to start rejecting anthropomorphic statements like this out of hand. They are lazy, typically wrong, and are always delivered as a dismissive defense of LLM failure modes. Anything can be anthropomorphized, and it's always problematic to do so - that's why the word exists.
This rhetorical technique always follows the form of "this LLM behavior can be analogized in terms of some human behavior, thus it follows that LLMs are human-like" which then opens the door to unbounded speculation that draws on arbitrary aspects of human nature and biology to justify technical reasoning.
In this case, you've deliberately conflated a technical term of art (LLM confabulation) with the concept of human memory confabulation and used that as a foundation to argue that confabulation is thus inherent to intelligence. There is a lot that's wrong with this reasoning, but the most obvious is that it's a massive category error. "Confabulation" in LLMs and "confabulation" in humans have basically nothing in common; they are comparable only in an extremely superficial sense. To then go on to suggest that confabulation might be inherent to intelligence isn't even really a coherent argument, because you've created ambiguity in the meaning of the word confabulate.
>this LLM behavior can be analogized in terms of some human behavior, thus it follows that LLMs are human-like
No, the argument is "this behavior is similar enough to human behavior that using it as evidence against <claim regarding LLM capability that humans have> is specious"
>"Confabulation" in LLMs and "confabulation" in humans have basically nothing in common
I don't know why you think this. They seem to have a lot in common. I call it sensible nonsense. Humans are prone to this when self-reflective neural circuits break down. LLMs are characterized by a lack of self-reflective information. When critical input is missing, the algorithm will craft a narrative around the available, but insufficient information resulting in sensible nonsense (e.g. neural disorders such as somatoparaphrenia)
> No, the argument is "this behavior is similar enough to human behavior that using it as evidence against <claim regarding LLM capability that humans have> is specious"
I'm not really following. LLM capabilities are self-evident, comparing them to a human doesn't add any useful information in that context.
> LLMs are characterized by a lack of self-reflective information. When critical input is missing, the algorithm will craft a narrative around the available, but insufficient information resulting in sensible nonsense (e.g. neural disorders such as somatoparaphrenia)
You're just drawing lines between superficial descriptions from disparate concepts that have a metaphorical overlap. It's also wrong. LLMs do not "craft a narrative around available information when critical input is missing", LLM confabulations are statistical, not a consequence of missing information or damage.
We shouldn’t try to build a worse version of a human. We should try to build a better compiler and encyclopedia.
We tried that. It was called Cyc. It never got even close to the level of capabilities a modern LLM has in an agentic harness — even on common sense and reasoning problems!
That sounds like a "get wealthy slowly" plan, while the LLM prophets are more focused on "get rich quick".
Humans can be reasoned with, though, and are capable of learning.
> Some people point at LLMs confabulating
No. LLMs do not confabulate, they bullshit. There is a big difference. AIs do not care, cannot care, have no capacity to care about the output. String tokens in, string tokens out. Even if they have all the data perfectly recorded, they will still fail to use it for a coherent output.
> Collapsing the dimensionality is going to be lossy, which means it will have gaps between what it thinks is the reality and what is.
Confabulation has to do with degradation of biological processes and information storage.
There is no equivalent in an LLM. Once the data is recorded it will be recalled exactly the same, down to the bit. An LLM representation is immutable. You can download a model 1000 times, run it for 10 years, etc., and the data is the same. The closest you get is if you store the data on a faulty disk, but that is not why LLM output is so awful; that would be a trivial problem to solve with current technology (like having a RAID and a few checksums).
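The "down to the bit" point is easy to demonstrate: model weights are just bytes, and hashing the same bytes always yields the same digest, no matter how often or where you run it (a minimal sketch; the byte string here is a stand-in for a downloaded weights file).

```python
import hashlib

# Stand-in for a downloaded weights file: any fixed byte string will do.
weights = b"\x00\x01\x02" * 1000

# Hash the same bytes twice; storage/recall is perfectly deterministic.
d1 = hashlib.sha256(weights).hexdigest()
d2 = hashlib.sha256(weights).hexdigest()

assert d1 == d2  # bit-identical recall: storage is not where LLM errors come from
```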
I don't even think they bullshit, since that requires conscious effort that they do not and cannot possess. They just simply interpret things incorrectly sometimes, like any of us meatbags.
They make incorrect predictions of text to respond to prompts.
The neat thing about LLMs is they are very general models that can be used for lots of different things. The downside is they often make incorrect predictions, and what's worse, it isn't even very predictable to know when they make incorrect predictions.
I think this is leaning on the "lies are when you tell falsehoods on purpose; bullshit is when you simply don't care at all whether what you're saying is true" definition of bullshit. Cf. On Bullshit.
So, they can't lie, but they can (and, in fact, exclusively do) bullshit.
> No. LLMs do not confabulate they bullshit. There is a big difference. AIs do not care, cannot care, have not capacity to care about the output. String tokens in, string tokes out. Even if they have all the data perfectly recorded they will still fail to use it for a coherent output.
Isn't "caring" a necessary pre-requisite for bullshitting? One either bullshits because they care, or don't care, about the context.
They're presumably referring to the Harry Frankfurt definition of bullshit: "speech intended to persuade without regard for truth. The liar cares about the truth and attempts to hide it; the bullshitter doesn't care whether what they say is true or false."
You seem confident. Can you get it to bullshit on GPT-5.4 thinking? Use a text prompt spanning 3-4 pages and let's see if it gets it wrong.
I haven't seen any counter examples, so you may give some examples to start with.
Here we go. Would this do?
https://chatgpt.com/share/69d6cc45-1678-8384-bd9c-0f313021ff...
The correct answer is that the U and _ in the mdstat output cannot be mapped to the rest of the output by either position or the indexes in square brackets, so you can't tell the exact nature of the failure from the mdstat output alone (for the record, the failed disk was sda).
So all of the "analysis" was bullshit, including "it's probably multiple partitions from multiple drives". But there are so many juicy numbered and indexed bits of info to pattern match on!
Notice how for the follow-up question it "thought" for 4 minutes, going in circles trying to impose some sort of order on an essentially random arrangement, and then bullshitted its way to "it is sdb".
There are AI researchers who wrote blog posts that made the HN front page about spiky spheres (I won't link the original blog post making that claim, to avoid hurt feelings). Here's 3blue1brown correcting those AI/ML researchers' intuitions.
https://www.youtube.com/watch?v=fsLh-NYhOoU&t=3238s
people can and do confabulate, but generally I trust my intern to tell me "I don't know" and "I think it was X but tbh I have no fuckin clue"
the LLM will just lie to me "Good idea! You're totally right, we should do Y"
It’s a failure mode of humans, it’s the entire mode of LLMs.
Yes, and to me the evolution of life sure looks like an evolution of more truthful models of the universe in service of energy profit. Better model -> better predictions -> better profit.
I'm extremely skeptical that all of life evolved intelligence to be closer to truth only for us to digitize intelligence and then have the opposite happen. Makes no sense.
My understanding is that this is the opposite of what is typically understood to be true - organisms with less truthful (more reductive/compressed) perception survive better than those with more complete perception. "Fitness beats truth."
I think we are maybe talking past each other?
Fitness is effective truth prediction, appropriately scoped.
A frog doesn't need to understand quantum physics to catch a fly. But if the frog's model of fly movement was trained on lies, it will have a model that predicts poorly, won't catch flies, and will die.
There is another level to this in that the more complex and changing the environment the more beneficial a wider scoped model / understanding of truth.
However, if you are going to lean fully into Hoffman and accept that by default consciousness constructs rather than approximates reality, I think we will have to agree to disagree. Personally I subscribe to Karl Friston's free energy principle.
If you want to call it that, I find the confabulation in LLMs extreme. That level of confabulation would most likely be diagnosed as dementia in humans.[0] Hence, it is considered a bug not a feature in humans as well.
Now imagine a high-skilled software engineer with dementia coding safety-critical software...
[0] https://www.medicalnewstoday.com/articles/confabulation-deme...
And is that considered a feature of humans or a bug?
Is it something we want to emulate?
The suggestion is that it is an intrinsic quality and therefore neither a feature nor a bug.
It's like saying, computation requires nonzero energy. Is that a feature or a bug? Neither, it's irrelevant, because it's a physical constant of the universe that computation will always require nonzero energy.
If confabulation is a physical constant of intelligence, then like energy per computation, all we can do is try to minimize it, while knowing it can never go to zero.
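The energy-per-computation analogy can be made concrete via Landauer's principle: erasing one bit dissipates at least k_B * T * ln(2). A back-of-the-envelope sketch (assuming room temperature, ~300 K):

```python
import math

k_B = 1.380649e-23  # Boltzmann constant, J/K
T = 300.0           # room temperature, K

# Landauer's limit: minimum energy to erase one bit of information.
E_min = k_B * T * math.log(2)
# ~2.9e-21 J per bit: strictly nonzero, a floor you can approach
# but never reach — the analogy being made for confabulation.
```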
The test isn’t whether humans also create bullshit, but whether an honest actor knows when they are doing this and doesn’t do it on purpose. As the article points out, LLMs don’t say “I don’t know.” If you demand they do something that never appears in the training data, they just forge ahead and generate words, making something up according to the statistical probabilities in the model weights. A human knows when he doesn’t know. That seems missing with current AIs.
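The "forge ahead and generate words" behaviour falls straight out of the decoding loop: a sampler over a softmax always returns some token, even from a nearly flat distribution that carries almost no information. A toy sketch (hypothetical logits, not any real model's decoder):

```python
import math
import random

random.seed(1)

def sample(logits):
    # Softmax over the logits, then sample one index.
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    probs = [e / total for e in exps]
    r, acc = random.random(), 0.0
    for i, p in enumerate(probs):
        acc += p
        if r < acc:
            return i
    return len(probs) - 1  # guard against floating-point rounding

# Near-uniform logits: the "model" has essentially no idea.
flat_logits = [0.01 * random.random() for _ in range(50)]
token = sample(flat_logits)
# A token comes out regardless — abstaining ("I don't know")
# is not built into the sampling step itself.
```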
> Some people point at LLMs confabulating, as if this wasn’t something humans are already widely known for doing.
Are you seriously making the argument that AI "hallucinations" are comparable and interchangeable to mistakes, omissions and lies made by humans?
You understand that calling AI errors "hallucinations" and "confabulations" is a metaphor relating them to human language? The technical term would be "mis-prediction", which is suddenly not something humans ever do when talking, because we don't predict words, we communicate with intent.
Yes, see Karl Friston's free energy principle
https://www.nature.com/articles/nrn2787