Comment by ben_w

3 days ago

The general argument you make is correct, but your conclusion "And this one doesn't." is as yet uncertain.

I will absolutely say that all known ML methods are literally too stupid to live, as in no living thing can get away with making so many mistakes before it's learned anything, but that's the rate of change of performance with respect to examples rather than what it learns by the time training is finished.

What is "abstract thought"? Is that even the same between any two humans who use that word to describe their own inner processes? Because "imagination"/"visualise" certainly isn't.

> no living thing can get away with making so many mistakes before it's learned anything

If you consider that LLMs have already "learned" more than any one human in this world is able to learn, and still make those mistakes, that suggests there may be something wrong with this approach...

  • Not so: "Per example" is not "per wall clock".

    To a limited degree, they can compensate for being such slow learners (by example) because the transistors doing this learning are faster (by the wall clock) than biological synapses, to roughly the same degree that you walk faster than continental drift. (Not a metaphor, it really is that scale of difference).

    However, this doesn't work in all domains. When there's not enough training data, when self-play isn't enough… well, this is why we don't have level-5 self-driving cars, just a whole bunch of anecdotes about various self-driving cars that work for some people and don't work for others: it didn't generalise, the edge cases are too many, and it's too slow to learn from them.

    So, are LLMs bad at… I dunno, making sure that all the references they use genuinely support the conclusions they draw before declaring their task complete (I think that's still a current failure mode)… specifically because they're fundamentally different to us*, or because they are really slow learners?

    * They *definitely are* fundamentally different to us, but is this causally why they make this kind of error?

  • But humans do the same thing. How many eons did we make the mistake of attributing everything to God's will, without a scientific thought in our heads? It's really easy to be wrong, when the consequences don't lead to your death, or are actually beneficial. The thinking machines are still babies, whose ideas aren't honed by personal experience; but that will come, in one form or another.

    • > The thinking machines are still babies, whose ideas aren't honed by personal experience; but that will come, in one form or another.

      Some machines, maybe. But attention-based LLMs aren't these machines.

> but that's the rate of change of performance with respect to examples rather than what it learns by the time training is finished.

It's not just that. The problem with “deep learning” is that we use the word “learning” for something that really has no similarity with actual learning: it's not just that it converges way too slowly, it's also that it just seeks to minimize the predicted loss for every sample during training, and that's not how humans learn. If you feed it enough flat-earther content as well as physics books, an LLM will happily tell you that the earth is flat, and also explain to you, with lots of physics, why it cannot be flat. It simply learned both “facts” during training and then spits them out during inference.
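
To make "minimize the predicted loss for every sample" concrete, here is a deliberately tiny sketch (my own toy illustration, assuming a PyTorch-style setup; the corpus, model and hyperparameters are invented): the loop nudges the weights toward predicting whatever the current sample says, contradictions included, and nothing in the objective asks for a consistent picture of the world.

    import torch
    import torch.nn as nn

    # Toy corpus containing mutually contradictory "facts".
    corpus = ["the earth is flat", "the earth is round"]
    chars = sorted(set("".join(corpus)))
    stoi = {c: i for i, c in enumerate(chars)}

    # A deliberately tiny next-character "language model": embedding -> GRU -> logits.
    class TinyLM(nn.Module):
        def __init__(self, vocab, dim=32):
            super().__init__()
            self.emb = nn.Embedding(vocab, dim)
            self.rnn = nn.GRU(dim, dim, batch_first=True)
            self.out = nn.Linear(dim, vocab)
        def forward(self, x):
            h, _ = self.rnn(self.emb(x))
            return self.out(h)

    model = TinyLM(len(chars))
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    loss_fn = nn.CrossEntropyLoss()

    for step in range(200):
        for text in corpus:
            ids = torch.tensor([[stoi[c] for c in text]])
            logits = model(ids[:, :-1])                     # predict each next character
            loss = loss_fn(logits.reshape(-1, len(chars)),  # loss on THIS sample only,
                           ids[:, 1:].reshape(-1))          # whether or not it contradicts the previous one
            opt.zero_grad()
            loss.backward()
            opt.step()

Scale that loop up by a few billion parameters and a few trillion tokens and you get a model that can fluently continue both the flat-earth text and the physics text, which is the failure mode described above.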

A human will learn one or the other first, and once that initial learning has taken hold, they will disregard all evidence to the contrary, until maybe at some point they don't, and switch sides entirely.

LLMs don't have an inner representation of the world and as such they don't have an opinion about the world.

Humans can't see reality in itself, but they at least know it exists, and they are constantly struggling to understand it. The LLM, by its nature, is indifferent to the world.

  • > If you feed it enough flat-earther content as well as physics books, an LLM will happily tell you that the earth is flat, and also explain to you, with lots of physics, why it cannot be flat.

    This is a terrible example, because it's what humans do as well. See religious, or indeed military, indoctrination. All propaganda is as effective as it is, because the same message keeps getting hammered in.

    And not just that: common misconceptions abound everywhere, not just in conspiracy theories, religion, and politics. My dad absolutely insisted that the water draining in toilets or sinks is meaningfully influenced by the Coriolis effect, citing as an example the one time he went to the equator and saw a demonstration of this on both sides of it. University education and a lifetime career in STEM; he should have been able to figure out from first principles why the Coriolis effect is exactly zero at the equator itself, but he didn't.

    > A human will learn one or the other first, and once that initial learning has taken hold, they will disregard all evidence to the contrary, until maybe at some point they don't, and switch sides entirely.

    We don't have any way to know what a human would do if they could read the entire internet, because we don't live long enough to try.

    The only bet I'd make is that we'd be more competent than any AI doing the same, because we learn faster from fewer examples, but that's about it.

    > LLMs don't have an inner representation of the world and as such they don't have an opinion about the world.

    There is evidence that they do have some inner representation of the world, e.g.:

    https://arxiv.org/abs/2506.02996

    https://arxiv.org/abs/2404.18202

    • > This is a terrible example, because it's what humans do as well. See religious, or indeed military, indoctrination. All propaganda is as effective as it is, because the same message keeps getting hammered in.

      You completely misread my point.

      The key thing with humans isn't that they cannot believe in bullshit. They definitely can. But we don't usually believe both the bullshit and the fact that the BS is actually BS at the same time. We have opinions on the BS. And we, as a species, routinely die or kill for these opinions, by the way. LLMs don't care about anything.
