Comment by docjay
18 days ago
A fun and insightful read, but the idea that it isn’t “just a prompting issue” is objectively false, and I don’t mean that in the “lemme show you how it’s done” way. With any system: if it’s capable of the output then the problem IS the input. Always. That’s not to say it’s easy or obvious, but if it’s possible for the system to produce the output then it’s fundamentally an input problem. “A calculator will never understand the obesity epidemic, so it can’t be used to calculate the weight of 12 people on an elevator.”
> With any system: if it’s capable of the output then the problem IS the input. Always. [...] if it’s possible for the system to produce the output then it’s fundamentally an input problem.
No, that isn't true. I can demonstrate it with a small (and deterministic) program which is obviously "capable of the output":
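The original snippet isn't reproduced in this thread, but given the later reference to `predict_coin_flip()`, it was presumably something like this minimal sketch (the hashing scheme here is my assumption — any deterministic but practically opaque mapping from input to "heads"/"tails" makes the same point):

```python
import hashlib

def predict_coin_flip(prayer: str) -> str:
    # Deterministic: the same input always yields the same "prediction",
    # but the mapping from input to outcome is practically opaque.
    # The program is trivially "capable of the output" -- it always
    # emits "heads" or "tails" -- yet no input-crafting makes it a
    # good coin-flip predictor.
    digest = hashlib.sha256(prayer.encode("utf-8")).digest()
    return "heads" if digest[0] % 2 == 0 else "tails"

print(predict_coin_flip("hamburgers"))
```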
Is the "fundamental problem" here "always the input"? Heck no! While a user could predict all coin-tosses by providing "the correct prayers" from some other oracle... that's just, shall we say, algorithm laundering: Secretly moving the real responsibility to some other system.
There's an enormously important difference between "output which happens to be correct" versus "the correct output from a good process." Such as, in this case, the different processes of wor[l]d models.
I think you may believe what I said was controversial or nuanced enough to be worthy of a comprehensive rebuttal, but really it’s just an obvious statement when you stop to think about it.
Your code is fully capable of the output I want, assuming that’s one of “heads” or “tails”, so yes, that’s a succinct example of what I said. As I said, knowing the required input might not be easy, but if we KNOW it’s possible to do exactly what I want, and we KNOW that it’s entirely dependent on me putting the right input into it, then it’s just a flat-out silly thing to say “I’m not getting the output I want, but it could do it if I used the right input, thus input has nothing to do with it.” What? If I wanted all heads I’d need to figure out that “hamburgers” would do it, but that’s the ‘input problem’ - not “input is irrelevant.”
This reads like “if we have the solution, then we have the solution.” If I can model the system required to condition inputs such that outputs are desirable, haven’t I given the model the world model it required? More to the point, isn’t this just what the article argues? Scaling the model cannot solve this issue.
It’s like saying a pencil is a portrait-drawing device, as if it isn’t the artist who makes it a portrait-drawing device, whereas in the hands of a poet it’s a poem-generating machine.
> a comprehensive rebuttal [...] an obvious statement
I think what you said is obviously false, such that I spent a while trying to figure out if you'd accidentally typo'ed an is/isn't that flipped the logic. (If that did happen, now is a good time to check!) Any "comprehensiveness" comes from the awkward task of trying to unpack and explain things that seem like they ought to be intuitive.
> but that’s the ‘input problem’ - not “input is irrelevant.”
Not sure where that last phrase comes from, but it's a "this algorithm is bad" problem. The input is at best a secondary contributing factor.
You can manually brute-force inputs until an algorithm spits out a pre-chosen output--especially if you exploit a buffer overflow--but that doesn't mean the original algorithm is any good. I think this is already illustrated by the bad implementation of predict_coin_flip().
> Your code is fully capable of the output I want
This is even more capable, so therefore it must be even better, right?
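(The snippet isn't shown in this thread; judging from the reply below, it was presumably something like this, where `something` is whatever other code hands in:)

```python
def predict_coin_flip(something):
    # "Fully capable" of any output you want: it just parrots its input.
    # All real responsibility has been laundered out to whoever computes
    # `something` -- the algorithm itself does no predicting at all.
    return something
```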
Alas, this is just a more-blatant version of what I labeled as algorithmic laundering. It's bad code that doesn't do the job it's intended to do. Its quality is bad no matter how clever other code is that provides `something`.
This article, (https://michaelmangialardi.substack.com/p/the-celestial-mirr...), came to similar conclusions as the parent article, and includes some tests (e.g. https://colab.research.google.com/drive/1kTqyoYpTcbvaz8tiYgj...) showing that LLMs, while good at understanding, fail at intellectual reasoning. The fact that they often produce correct outputs has more to do with training and pattern recognition than ability to grasp necessity and abstract universals.
They neither understand nor reason. They don’t know what they’re going to say, they only know what has just been said.
Language models don’t output a response, they output a single token. We’ll use token==word shorthand:
When you ask “What is the capital of France?” it actually only outputs: “The”
That’s it. Truly, that IS the final output. It is literally a one-way algorithm that outputs a single word. It has no knowledge, no memory, and it doesn’t know what’s next. As far as the algorithm is concerned, it’s done! It outputs ONE token for any given input.
Now, if you start over and put in “What is the capital of France? The” it’ll output “ “. That’s it. Between your two inputs were a million others, none of them have a plan for the conversation, it’s just one token out for whatever input.
But if you start over yet again and put in “What is the capital of France? The “ it’ll output “capital”. That’s it. You see where this is going?
Then someone uttered the words that have built and destroyed empires: “what if I automate this?” And so it was that the output was piped directly back into the input, probably using AutoHotKey. But oh no, it just kept adding one word at a time until it ran out of memory. The technology got stuck there for a while, until someone thought “how about we train it so that <DONE> is an increasingly likely output the longer the loop goes on? Then, when it eventually says <DONE>, we’ll stop pumping it back into the input and send it to the user.” Booya, a trillion dollars for everyone but them.
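The loop described above can be sketched like this (using a canned stand-in for the model; a real LLM scores every token in a large vocabulary and samples one, but the control flow is the same):

```python
# Stand-in for the model: one token out for any given input.
CANNED = {
    "What is the capital of France? ": "The",
    "What is the capital of France? The": " ",
    "What is the capital of France? The ": "capital",
}

def next_token(context: str) -> str:
    # The model's entire job: map the current context to ONE token.
    return CANNED.get(context, "<DONE>")

def generate(prompt: str) -> str:
    context = prompt
    while True:
        tok = next_token(context)    # one token, then the algorithm is "done"
        if tok == "<DONE>":          # trained stop signal ends the loop
            return context[len(prompt):]
        context += tok               # pipe the output back into the input

print(generate("What is the capital of France? "))  # prints "The capital"
```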
It’s truly so remarkable that it gets me stuck in an infinite philosophical loop in my own head, but seeing how it works the idea of ‘think’, ‘reason’, ‘understand’ or any of those words becomes silly. It’s amazing for entirely different reasons.
Yes, LLMs mimic a form of understanding, partly through the way language encodes concepts that are preserved when embedded geometrically in vector space.
isn't intellectual reasoning just pattern recognition + a forward causal token generation mechanism?
You can replicate an LLM:
You and a buddy are going to play “next word”, but it’s probably already known by a better name than I made up.
You start with one word, ANY word at all, and say it out loud, then your buddy says the next word in the yet unknown sentence, then it’s back to you for one word. Loop until you hit an end.
Let’s say you start with “You”. Then your buddy says the next word out loud, also whatever they want. Let’s go with “are”. Then back to you for the next word, “smarter” -> “than” -> “you” -> “think.”
Neither of you knew what you were going to say, you only knew what was just said so you picked a reasonable next word. There was no ‘thought’, only next token prediction, and yet magically the final output was coherent. If you want to really get into the LLM simulation game then have a third person provide the first full sentence, then one of you picks up the first word in the next sentence and you two continue from there. As soon as you hit a breaking point the third person injects another full sentence and you two continue the game.
With no idea what either of you are going to say and no clue about what the end result will be, no thought or reasoning at all, it won’t be long before you’re sounding super coherent while explaining thermodynamics. But in one of the rounds someone’s going to mess it up, like “gluons” -> “weigh” -> “…more?…” -> “…than…(damnit Gary)…” but you must continue the game and finish the sentence, then sit back and think about how you just hallucinated an answer without thinking, reasoning, understanding, or even knowing what you were saying until it finished.
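The game can be simulated with a toy next-word table (a made-up bigram list, not anything trained):

```python
import random

# Toy "next word" table: for each word, some plausible followers.
NEXT = {
    "<START>": ["You", "The"],
    "You": ["are"],
    "are": ["smarter"],
    "smarter": ["than"],
    "than": ["you"],
    "you": ["think."],
    "The": ["game"],
    "game": ["continues."],
}

def play(seed: int = 0) -> str:
    # Neither "player" knows where the sentence is going; each step only
    # looks at the word just said and picks a reasonable next one.
    rng = random.Random(seed)
    words, word = [], "<START>"
    while word in NEXT:              # stop when no continuation is known
        word = rng.choice(NEXT[word])
        words.append(word)
    return " ".join(words)
```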
Obviously not. In actual thinking, we can generate an idea, evaluate it for internal consistency and consistency with our (generally much more than linguistic, i.e. may include visual imagery and other sensory representations) world models, decide this idea is bad / good, and then explore similar / different ideas. I.e. we can backtrack and form a branching tree of ideas. LLMs cannot backtrack, do not have a world model (or, to the extent they do, this world model is solely based on token patterns), and cannot evaluate consistency beyond (linguistic) semantic similarity.
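For what it's worth, the generate-evaluate-backtrack contrast can be sketched as a toy search (nothing here models real cognition; `evaluate` and `expand` are stand-ins for whatever consistency check and idea-generation you like):

```python
def explore(idea, evaluate, expand, depth=0, max_depth=3):
    # Generate an idea, evaluate it, and backtrack to sibling ideas when
    # it fails -- a branching tree, not a forward-only token stream.
    if not evaluate(idea):           # inconsistent with the world model: prune
        return None
    if depth == max_depth:
        return idea
    for child in expand(idea):       # explore similar / different ideas
        result = explore(child, evaluate, expand, depth + 1, max_depth)
        if result is not None:
            return result            # found a consistent line of thought
    return idea                      # every child failed; keep this idea
```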
But that's only true if the system is deterministic?
And in an LLM, the size of the inputs is vast and often hidden from the prompter. It is not something that you have exact control over in the way that you have exact control over the inputs that go into a calculator or into a compiler.
a system that copies its input into the output is capable of any output, no?
That would depend - is the input also capable of anything? If it’s capable of handling any input, and as you said the output will match it, then yes, of course it’s capable of any output.
I’m not pulling a fast one here, I’m sure you’d chuckle if you took a moment to rethink your question. “If I had a perfect replicator that could replicate anything, does that mean it can output anything?” Well…yes. Derp-de-derp? ;)
It aligns with my point too. If you had a perfect replicator that can replicate anything, and you know that to be true, then if you weren’t getting gold bars out of it you wouldn’t say “this has nothing to do with the input.”
It doesn't align with your point
My point is that your reasoning is too reductive - completely ignoring the mechanics of the system - and you claim the system is capable of _anything_ if prompted correctly. You wouldn't say the replicator system is capable of the reasoning outlined in the article, right?