Comment by reedlaw

18 days ago

This article, (https://michaelmangialardi.substack.com/p/the-celestial-mirr...), came to similar conclusions as the parent article, and includes some tests (e.g. https://colab.research.google.com/drive/1kTqyoYpTcbvaz8tiYgj...) showing that LLMs, while good at understanding, fail at intellectual reasoning. The fact that they often produce correct outputs has more to do with training and pattern recognition than with any ability to grasp necessity and abstract universals.

They neither understand nor reason. They don’t know what they’re going to say, they only know what has just been said.

Language models don’t output a response, they output a single token. We’ll use token==word shorthand:

When you ask “What is the capital of France?” it actually only outputs: “The”

That’s it. Truly, that IS the final output. It is literally a one-way algorithm that outputs a single word. It has no knowledge or memory, and it doesn’t know what comes next. As far as the algorithm is concerned, it’s done! It outputs ONE token for any given input.

Now, if you start over and put in “What is the capital of France? The” it’ll output “ “. That’s it. Between your two inputs came a million others from other users; none of them had a plan for the conversation. It’s just one token out for whatever input goes in.

But if you start over yet again and put in “What is the capital of France? The “ it’ll output “capital”. That’s it. You see where this is going?

Then someone uttered the words that have built and destroyed empires: “what if I automate this?” And so it was that the output was piped directly back into the input, probably using AutoHotKey. But oh no, it just kept adding one word at a time until it ran out of memory. The technology got stuck there for a while, until someone thought “how about we train it so that <DONE> is an increasingly likely output the longer the loop goes on? Then, when it eventually says <DONE>, we’ll stop pumping it back into the input and send it to the user.” Booya, a trillion dollars for everyone but them.
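The loop described above can be sketched in a few lines of Python. This is a toy stand-in, not a real model: `next_token` here is a hypothetical lookup table, and real systems use a trained end-of-sequence token rather than a literal <DONE>:

```python
def next_token(prompt):
    # Stand-in for the model: a pure function from input text to ONE token.
    # A real LLM is a neural network; this toy just looks up canned continuations.
    canned = {
        "What is the capital of France?": " The",
        "What is the capital of France? The": " capital",
        "What is the capital of France? The capital": " is",
        "What is the capital of France? The capital is": " Paris.",
        "What is the capital of France? The capital is Paris.": "<DONE>",
    }
    return canned.get(prompt, "<DONE>")

def generate(prompt):
    # "What if I automate this?" -- pipe the single-token output straight
    # back into the input until the model emits the stop token.
    while True:
        tok = next_token(prompt)
        if tok == "<DONE>":
            return prompt
        prompt = prompt + tok

print(generate("What is the capital of France?"))
# -> What is the capital of France? The capital is Paris.
```

Note that `generate` holds the only state; each call to `next_token` starts from zero with nothing but the text it is handed.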

It’s truly so remarkable that it gets me stuck in an infinite philosophical loop in my own head, but once you see how it works, the idea of ‘think’, ‘reason’, ‘understand’, or any of those words becomes silly. It’s amazing for entirely different reasons.

  • Yes, LLMs mimic a form of understanding partly through the way language embeds concepts that are preserved when embedded geometrically in vector space.

    • Your continued use of the word “understanding” hints at a lingering misunderstanding. They’re stateless one-shot algorithms that output a single word for any given input. Not even a single word: a single token. It isn’t continuing a sentence or thought it had; you literally have to feed the text back in as input and it’ll guess at the next partial word.

      By default that would be the same word every time you give the same input. The only reason it isn’t is that most providers crank the fuzzy randomized selector up to max (temperature plus a seed for randomized selection), but you can turn that back down through the API and get deterministic outputs. That’s not a party trick; that’s the default of the system. If you say the same thing, it will output the same single word (token) every time.

      You see the aggregate of running it through the stateless algorithm 200+ times before the collection of one-by-one guessed words is sent back to you as a response. I get it: you put something into the glowing orb, it shot back a long coherent response with personality, so it must be doing something. But the system truly only outputs one token, with zero memory. It’s stateless, meaning nothing internally changed, so there is no memory telling it to complete that thought or sentence. After it outputs “the”, the entire thing resets to zero and you start over.
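      The determinism point can be sketched like this (toy scores and a hypothetical `sample_token` helper; real providers expose the same knobs as `temperature` and `seed` parameters in their APIs):

```python
import math
import random

def sample_token(logits, temperature=1.0, seed=None):
    # logits: made-up scores per candidate token (a real model produces one
    # score per entry in a vocabulary of tens of thousands of tokens).
    tokens = list(logits)
    if temperature == 0:
        # Temperature 0 = greedy: always the single highest-scoring token.
        return max(tokens, key=lambda t: logits[t])
    # Otherwise: softmax at the given temperature, then a seeded random draw.
    scaled = [logits[t] / temperature for t in tokens]
    m = max(scaled)
    weights = [math.exp(s - m) for s in scaled]
    return random.Random(seed).choices(tokens, weights=weights, k=1)[0]

logits = {"Paris": 5.0, "Lyon": 2.0, "Nice": 1.0}

# Greedy is deterministic: same input, same single token, every time.
assert all(sample_token(logits, temperature=0) == "Paris" for _ in range(10))

# Even with sampling, a fixed seed reproduces the same pick; the run-to-run
# variety you see from providers comes from varying the seed.
assert sample_token(logits, seed=42) == sample_token(logits, seed=42)
```

      The randomness lives entirely in this last selection step, outside the deterministic scoring pass.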

      1 reply →

Isn't intellectual reasoning just pattern recognition plus a forward causal token-generation mechanism?

  • You can replicate an LLM:

    You and a buddy are going to play “next word”, though it’s probably already known by a better name than the one I made up.

    You start with one word, ANY word at all, and say it out loud, then your buddy says the next word in the yet unknown sentence, then it’s back to you for one word. Loop until you hit an end.

    Let’s say you start with “You”. Then your buddy says the next word out loud, also whatever they want. Let’s go with “are”. Then back to you for the next word, “smarter” -> “than” -> “you” -> “think.”

    Neither of you knew what you were going to say; you only knew what had just been said, so you picked a reasonable next word. There was no ‘thought’, only next-token prediction, and yet magically the final output was coherent. If you want to really get into the LLM simulation game, have a third person provide the first full sentence; then one of you picks up the first word of the next sentence and you two continue from there. As soon as you hit a breaking point, the third person injects another full sentence and you two continue the game.

    With no idea what either of you is going to say, no clue what the end result will be, and no thought or reasoning at all, it won’t be long before you’re sounding super coherent while explaining thermodynamics. But in one of the rounds someone’s going to mess it up, like “gluons” -> “weigh” -> “…more?…” -> “…than…(damnit Gary)…”, but you must continue the game and finish the sentence. Then sit back and think about how you just hallucinated an answer without thinking, reasoning, understanding, or even knowing what you were saying until it was finished.

    • That's not how LLMs work; study the transformer architecture. Every token is conditioned not just on the previous token: each layer's activations generate a query over the KV cache of the previous activations, which means each token's generation has access to any higher-order analytical conclusions and observations generated in the past. Information is not lost between tokens the way your thought exercise implies.
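      A toy illustration of that point (pure Python, a single head, made-up vectors, none of the learned weights or caching machinery of a real transformer): in causal self-attention, the query at position t scores against the keys and values of every earlier position, not just the one immediately before it:

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def causal_attention(queries, keys, values):
    # queries/keys/values: lists of equal-length vectors, one per position.
    # Position t attends to positions 0..t (the "causal" mask) -- that
    # includes ALL earlier positions, not only t-1.
    outputs = []
    for t in range(len(queries)):
        scores = [dot(queries[t], keys[i]) / math.sqrt(len(keys[i]))
                  for i in range(t + 1)]
        w = softmax(scores)
        out = [sum(w[i] * values[i][d] for i in range(t + 1))
               for d in range(len(values[0]))]
        outputs.append(out)
    return outputs

# Three positions; position 2's output is a weighted blend of the value
# vectors from positions 0, 1, AND 2, not just the most recent one.
q = k = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
v = [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]]
out = causal_attention(q, k, v)
print(out[2])
```

      Every layer of a real model repeats this lookup over the whole prefix, which is what keeps earlier conclusions in reach at every step.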

      1 reply →

  • Obviously not. In actual thinking, we can generate an idea, evaluate it for internal consistency and consistency with our (generally much more than linguistic, i.e. may include visual imagery and other sensory representations) world models, decide this idea is bad / good, and then explore similar / different ideas. I.e. we can backtrack and form a branching tree of ideas. LLMs cannot backtrack, do not have a world model (or, to the extent they do, this world model is solely based on token patterns), and cannot evaluate consistency beyond (linguistic) semantic similarity.

    • There's no such thing as a "world model". That is metaphor-driven development from GOFAI, where they'd just make up a concept and assume it existed because they made it up. LLMs are capable of approximating such a thing because they are capable of approximating anything if you train them to do it.

      > or, to the extent they do, this world model is solely based on token patterns

      Obviously not true because of RL environments.

      3 replies →

    • "LLMs cannot backtrack". This is exactly wrong. LLMs always see everything in the past. In this sense they are more efficient than Turing machines, because (assuming sufficiently large context length) every token sees ALL previous tokens. So, in principle, an LLM could write a bunch of exploratory shit, and then add a "tombstone" "token" that can selectively devalue things within a certain timeframe -- aka just devalue the exploratory things (as judged by RoPE time), and thus "backtrack".

      I put "token" in quotes because this would obviously not necessarily be an explicit token; it would more likely be a learned group of tokens, for example. But who knows: given that the thinking models have some weird pseudo-XML delimiters for thinking, it's not crazy to think that an LLM could shove this information into, say, the closing tag.
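      Purely as a sketch of that hypothetical mechanism (nothing like this is hard-coded in real models; the names and numbers here are made up, and a real model would have to learn the behavior): a "tombstone" could act as extra masking applied on top of the usual causal mask before the softmax:

```python
import math

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def attention_weights(scores, tombstoned=()):
    # scores: raw attention scores for positions 0..t.
    # tombstoned: positions a hypothetical "tombstone" marker has devalued,
    # e.g. an abandoned exploratory branch identified by its position range.
    masked = [s - 1e9 if i in tombstoned else s
              for i, s in enumerate(scores)]
    return softmax(masked)

# Devalue positions 1 and 2 (the "exploratory" span); they end up with
# essentially zero attention weight, so later tokens ignore them.
w = attention_weights([2.0, 1.5, 1.0, 3.0], tombstoned={1, 2})
print([round(x, 3) for x in w])
```

      The effect is a soft form of backtracking: the text is still in the context window, but it no longer influences the next token.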

      4 replies →