Comment by harrouet
8 hours ago
This, and similar stories at Anthropic, should remind us that LLM is a sorcery tech that we don't understand at all.
- First, deep-learning networks are poorly understood. It is actually an active field of research to figure out how they work.
- Second, it came as a surprise that using transformers at scale would end up producing interesting conversational engines (called LLMs). _It was not planned at all_.
Now that some people have raised VC money around the tech, they want you to think that LLMs are smart beasts (they are not) and that we know what LLMs are doing (we don't). Deploying LLMs is all about tweaking and measuring the output. There is no exact science for predicting the output. Proof: change the model and your LLM workflow behaves completely differently, in unpredictable ways.
Because of this, I personally side with Yann LeCun in believing that LLMs are not a path to AGI. We will see LLMs used in user-assisting tech or in automating non-critical tasks, sometimes with questionable ROI -- but not more.
Humanity has been using steel for over a millennium; however, it's only in the past 100 years or so that we have had a good understanding of how carbon interacts with iron at the atomic level to create the strength characteristics that make it useful. Based on this argument, we should not have used steel until we had a complete first-principles understanding.
What if you replaced "steel" with "asbestos" in your argument?
Steel has almost always (as in 99.99...% of the time) delivered on our expectations, based on our understanding of it.
The cases where we built something out of steel and it failed are _massively_ outnumbered by the instances where we used it where/when suitable. If we built something out of steel and it failed or someone died, we stopped doing that pretty soon after.
Yeah but well you see, humans did not go extinct from just asbestos!
Asbestos, lead paint, cigarettes, heroin (prescribed generously for basically whatever the doc felt like), "Radithor" (a patent medicine containing radium-226 and -228, marketed as a "perpetual sunshine" energy tonic and cure for over 150 diseases), bloodletting, mercury treatments for syphilis, tobacco smoke enemas (yep, that was a real thing), milk-based blood transfusions.
Didn't understand those either and used the fuck out of them because "the experts" said we should.
Assuming your timeline and metallurgical claims to be true, you're conflating engineering and (materials) science.
Humans have been using steel for however long, when and where it was understood to be an appropriate solution to a problem. In some sense, engineering is the development and application of that understanding. You do not need to have a molecular explanation of the interaction between carbon and iron to do effective engineering[-1] with steel.[0] Science seeks to explain how and why things are the way they are, and this can inform engineering, but it is not a prerequisite.
I think that machine learning as a field has more of an understanding of how LLMs work than your parent post makes out. But I agree with the thrust of that comment because it's obvious that the reckless startups that are pushing LLMs as a solution to everything are not doing effective engineering.
[-1] "effective engineering" -- that's getting results, yes, but only with reasonable efficiency and always with safety being a fundamental consideration throughout
[0] No, I'm not saying that every instance of the use of steel has been effective/efficient/safe.
>do not need to have a molecular explanation of the interaction between carbon and iron to do effective engineering
It was more like "we take iron from place X and it works, but iron from place Y doesn't."
This is why the invention of steel isn't really recognized before 1740. We were blind to molecular impurities.
Which year did we use steel to replace human workers and automate decision-making?
Around 1928ish
The entire industrial revolution was steel replacing human workers. And that is still the backbone of the world today. We are still living the industrial revolution.
Just like the invention of fire happened ages ago, but is still a crucial part of life today.
Poor analogy, comparing a physical material to computer technology.
Why
This is a very low-effort argument.
Humans could understand the properties of steel long before they knew how carbon interacted with iron. Steel always behaved in a predictable, reproducible way. Empirical experiments with steel usage yielded outputs that could be documented and passed along. You could measure steel for its quality, etc.
The same cannot be said of LLMs. This is not to say they are not useful; that was never the claim of the people who point to their nondeterministic behavior, and to our lack of understanding of their workings, as obstacles to incorporating them into established processes.
Of course the hype merchants don't really care about any of this. They want to make destructive amounts of money out of it, consequences be damned.
That's not his point at all. He advocates using LLMs.
The correct analogy is: if we just scale and improve steel enough, we'll get a flying car.
Well, we did build airplanes out of steel, but there are better (lighter) materials available. And the development of car engines did directly enable airplane engines. Not sure if this is the right analogy path, but I kind of suspect something similar with LLMs/transformers. They will be an important part.
We literally did that though. Walk outside and look up.
Where did he say not to use LLMs? Oh that's right: he didn't.
Pro-LLM people are the kings of the ad hoc fallacy. Why did you type this? You can consistently test steel and get a good idea of when and where it will break in a system without knowing its molecular structure.
LLMs are literally stochastic by nature and can't be relied on for anything critical, as it's impossible to determine why they fail, regardless of the deterministic tooling you build around them.
> LLMs are literally stochastic by nature and can't be relied on for anything critical
Ahh, yes, unlike humans, who are completely deterministic, and thus can be trusted.
What is the ad hoc fallacy? From googling I didn’t find any convincing definitions (definitions that demonstrate that it is a logical fallacy).
Oh, for crying out loud! Let's stop inventing fake analogies to justify the inherent shortcomings of LLMs! Those of us who are critical are only using the standards that the LLM companies set for themselves ("superintelligence", "pocket PhDs", blah blah blah) to hold them accountable. When does the grift stop?
The article you are responding to showed that a strange LLM behaviour was caused by a training signal that was explicitly designed to produce that type of behaviour. They were able to isolate it, clearly demonstrate what happened, and roll out a mitigation using a mechanism they engineered for exactly this type of thing (the developer prompt). That doesn’t sound like sorcery to me. If anything I’m surprised you can so easily engineer these things!
The article I am responding to (which I've read) shows that these LLMs come with all sorts of hacks (= context bits) to make them behave more like this or more like that.
There is probably a whole testing workflow at AI companies to tweak each new model until it "looks" acceptable.
But they still don't understand what they are doing. This is purely empirical.
> "There is probably a whole testing workflow at AI companies to tweak each new model until it "looks" acceptable."
Isn't that what the RLHF phase does ( https://www.paloaltonetworks.com/cyberpedia/what-is-rlhf )?
It's interesting to think about what the process will look like when we do understand them. I imagine pulling bits of LLM off the shelf like libraries and compiling them together into a functioning "brain", precisely tailored to your needs.
That all of their model outputs should be influenced by whatever personality-prompt voodoo the wise artisans at OpenAI decided to stuff into them during RL should give everyone pause.
That Nerdy personality prompt made me gag. As a card-carrying Nerd, I feel offended
I configured it to use the nerdy personality when I used it to help me on a personal project (setting up a home server, nothing too fancy). LLMs are great at parsing documentation and combing through forums to find out the configurations that matched my goals.
The first time it said something along the lines of "let's use these options to avoid future gremlins haunting you", I sort of rolled my eyes, but it was okay; I thought its attempt to sound endearing was almost cute. A bit of a "hello fellow kids" attempt at sounding nerdy.
It quickly became noise, though. It was extremely overused, sometimes with multiple mentions of goblins in the same reply.
I don't really have an opinion about it, but I sort of came to prefer a more neutral tone instead.
…months after it began.
I think that AGI will make heavy use of LLMs. It's not a straight path, but a component.
To compare with the human brain, have you ever been so drunk you don't remember the night, but you're told afterwards you had coherent conversations about complex topics? There's some aspect of our minds that is akin to a next-token-generator, pulling information from other components to produce a conversation. But that component alone is not enough to produce intelligence.
> so drunk you don't remember the night, but you're told afterwards you had coherent conversations about complex topics?
I thought that was just our short-term memory failing to commit to long-term memory, not our intelligence actually turning off.
I believe that LLMs will eventually be a small component of AGI; most likely they'll function like Broca's area of the brain.
What does an LLM need to do for you to consider it "smart"?
To me they seem to be pretty damn smart, to put it mildly. They sometimes do stupid things - but so do smart people!
Not OP, but I think the argument here would be not that LLMs "are not smart" but that smart is just the wrong category of thing to describe an LLM as.
A calculator can do very complex sums very quickly, but we don't tend to call it "smart" because we don't think it's operating intelligently to some internal model of the world. I think the "LLMs are AGI" crowd would say that LLMs are, but it's perfectly consistent to think the output of LLMs is consistent/impressive/useful, but still maintain that they aren't "smart" in any meaningful way.
Intelligence can be defined as an optimization problem: "find X which maximizes F(X, Y)", where X is the solution, Y is the constraints, and F is the optimality/fitness criterion. Most other definitions are inane. E.g. "invent an aircraft" can be described as optimization over possible build instructions, under given constraints on base materials, for the resulting machine's ability to fly. Absolutely any invention can be formulated as an optimization problem.
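To make that framing concrete, here's a minimal toy sketch (my own, in Python; the names and numbers are made up for illustration) of "find X which maximizes F(X, Y)":

    # Intelligence-as-optimization, caricatured: search candidates X, score them
    # with fitness F under constraints Y, keep the best.
    def solve(candidates, F, Y):
        best, best_score = None, float("-inf")
        for X in candidates:
            score = F(X, Y)
            if score > best_score:
                best, best_score = X, score
        return best

    # Toy "invent an aircraft": maximize lift, penalize exceeding a weight limit.
    F = lambda X, Y: X["lift"] - Y["penalty"] * max(0, X["weight"] - Y["max_weight"])
    Y = {"penalty": 10, "max_weight": 5}
    designs = [{"lift": 3, "weight": 4}, {"lift": 9, "weight": 8}, {"lift": 6, "weight": 5}]
    print(solve(designs, F, Y))  # -> {'lift': 6, 'weight': 5}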
It's not like a calculator, because an LLM can solve very broad classes of problems - you'd struggle to define problems which an LLM can't solve (given some fine-tuning, a harness, a KB, etc.).
All this talk about "smartness" isn't even particularly cute...
> "we don't think it's operating intelligently to some internal model of the world"
Okay, but you have to actually address why you think LLMs lack an "internal model of the world"
You can train one on 1930s text, and then teach it Python in-context.
They've produced multiple novel mathematical proofs now; Terence Tao is impressed with them as research assistants.
You can very clearly ask them questions about the world, and they'll produce answers that match what you'd get from a "model" of the world.
What are weights, if not a model of the world? It's got a very skewed perspective, certainly, since it's terminally online and has never touched grass, but it still very clearly has a model of the world.
I'd dare say it's probably a more accurate model than the average person has, too, thanks to having Wikipedia and such baked in.
I would analogize LLMs to physics simulations in software. Game engines, for example, simulate physics enough to provide a good enough semblance of real-world physics for suspension of disbelief but we would never mistake it for real world physics. Complicated enough simulations, e.g. for weather forecasting, nuclear weapons, or QCD, can provide insights and prove physics theories, but again, experts would never mistake it for real world physics and would be able to explain where the simulation breaks down when trying to predict real world behavior.
Now we have these LLMs that provide some simulation of reasoning merely through prediction of token patterns and that is indeed unexpected and astonishing. However, the AI promoters want to suggest that this simulation of reasoning is human-level reasoning or evolving toward human-level reasoning and this is the same as mistaking game engine physics for real physics. The failure cases (e.g. the walk vs drive to a car wash next door question or the generating an image of a full glass of wine issue), even if patched away, are enough to reveal the token predictor underneath.
> To me they seem to be pretty damn smart
That's the sorcery mentioned in the GP. The issue comes when people believe it to be smart when in reality it is just next-word prediction. It gives the impression it's actually thinking, and this is by design. Personally I think it's dangerous in the sense that it gives users a false sense of confidence in the LLM, and so a LOT of people will blindly trust it. This isn't a good thing.
I'm curious how you think "word predictor" meaningfully describes an instruct model that has developed novel mathematical proofs that have eluded mathematicians for decades?
edit:
You cannot predict all the actions or words of someone smarter than you. If I could always predict Magnus Carlsen's next chess move, I'd be at least as good at chess as Magnus - and that would have to involve a deep understanding of chess, even if I can't explain my understanding.
I can't predict the next token in a novel mathematical proof unless I've already understood the solution.
What's the difference between "smart" and "next word prediction", at this point? Back when they first came out, sure, but now they can write code and create art.
What would it take for you to concede a future model was smart?
They aren't smart; they approximate language constructs. They don't have beliefs, ideas, etc. Have a few rounds of discussion with any LLM and you'll see how they are probabilistic autocompletes based on whatever patterns from those rounds of discussion you feed them.
At what point does autocomplete stop being "just autocomplete"?
Clearly there's a limit. For example, if an alien autocomplete implementation were to fall out of a wormhole that somehow manages to, say, accurately complete sentences like "S&P 500, <tomorrow's date>:" with tomorrow's actual closing value today, I'd call that something else.
It's not about them being smart or not. It's about giving Anthropic/OpenAI/Google the power to handle our future. Haven't we learned anything about tech giants so far?
How about writing "all code" this June, as Dario Amodei announced in January this year?
Are they smart or are they imitating things smart people did? (and if so, is there a difference?)
LLMs are amazing. You can call them 'smart', but they're not intelligent and never will be.
They are useful, but a cul-de-sac on the road to AGI.
HN sober AI take of the day coming from a guy with nutjob for his handle, thank you.
You can always redefine "intelligent" so that humans meet the requirements but AIs don't.
A better model to use is this: LLMs possess a different type of intelligence than us, just like an intelligent alien species from another planet might.
A calculator has a very narrow sort of intelligence. It has near perfect capability in a subset of algebra with finite precision numbers, but that's it.
An old-school expert system has its own kind of intelligence, albeit brittle and limited to the scope of its pre-programmed if-then-else statements.
By extension, an AI chat bot has a type of intelligence too. Not the same as ours, but in many ways superior, just as how a calculator is superior to a human at basic numeric algebra. We make mistakes, the calculator does not. We make grammar and syntax errors all the time, the AI chat bots generally never do. We speak at most half a dozen languages fluently, the chat bots over a hundred. We're experts in at most a couple of fields of study, the chat bots have a very wide but shallow understanding. Etc.
Don't be so narrow minded! Start viewing all machines (and creatures) as having some type of intelligence instead of a boolean "have" or "have not" intelligence.
LLMs are lossy compression of a corpus with a really good natural language parser... that's it.
It's not sorcery tech at all. Nothing in their "goblin post mortem" is the least bit surprising if you have a working high-level mental model of what an LLM is.
It’s a fancy autocomplete that takes a bunch of text in and produces the most “likely” continuation for the source text “at once and in full”. So when you add to the source text something like: “You’re an edgy nerd”, it’s very much not surprising that the responses start referencing D&D tropes.
If you then use those outputs to train your base models further, it's not at all surprising that the "likely" continuations said models end up producing also start including D&D tropes, because you just elevated those types of responses from "niche" to "not niche".
The post-mortem is hilarious in that sense. "Oh, the goblin references only come up for the 'Nerdy' prompt." No shit.
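For anyone who wants to see that conditioning effect for themselves, here's a rough sketch using GPT-2 through the Hugging Face pipeline (a small stand-in model, obviously not OpenAI's production stack; the prompts are just examples). The persona is nothing but extra source text that shifts which continuations come out as "likely":

    # Same continuation task; the only difference is the persona prefix.
    from transformers import pipeline, set_seed

    generate = pipeline("text-generation", model="gpt2")
    set_seed(0)

    plain = "The best way to organize my home server is"
    persona = "You're an edgy nerd.\n\n" + plain

    print(generate(plain, max_new_tokens=30)[0]["generated_text"])
    print(generate(persona, max_new_tokens=30)[0]["generated_text"])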
Your argument doesn't seem to allow that the intelligence & versatility within that mystery could exceed ours to such a degree that AGI would be the only term that makes sense for it. By your own logic, if we don't understand how these things really work, it's foolish to declare there's a limit to their potential.
https://writings.stephenwolfram.com/2023/02/what-is-chatgpt-...
I've never been Wolfram's biggest fan, but this is a solid article. I'm trying to get a deeper understanding of the transformer architecture, and it seems that the written articles on transformers are bimodal: they either blind you with the raw math, or handwave the complexity away. I have been trying to figure out why the input embedding matrix is simply added to the input position matrix before the encoding stage, as opposed to some other way of combining them. Wolfram says:
> Why does one just add the token-value and token-position embedding vectors together? I don’t think there’s any particular science to this. It’s just that various different things have been tried, and this is one that seems to work. And it’s part of the lore of neural nets that—in some sense—so long as the setup one has is “roughly right” it’s usually possible to home in on details just by doing sufficient training, without ever really needing to “understand at an engineering level” quite how the neural net has ended up configuring itself.
It's the lack of "understand[ing] at an engineering level" that irks me: that this emergent behavior is discovered, rather than designed.
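For what it's worth, the operation in question really is just an element-wise sum. A rough PyTorch sketch (dimensions picked to look GPT-2-ish; not taken from Wolfram's article):

    import torch
    import torch.nn as nn

    vocab_size, max_len, d_model = 50000, 512, 768
    tok_emb = nn.Embedding(vocab_size, d_model)   # learned token embeddings
    pos_emb = nn.Embedding(max_len, d_model)      # learned position embeddings

    token_ids = torch.tensor([[15, 87, 3201, 9]])               # (batch, seq_len)
    positions = torch.arange(token_ids.size(1)).unsqueeze(0)    # [[0, 1, 2, 3]]

    # The "combination" is literally addition; the transformer blocks only see x.
    x = tok_emb(token_ids) + pos_emb(positions)
    print(x.shape)  # torch.Size([1, 4, 768])

Concatenation or gating would also be plausible; addition just works well enough in practice and keeps the dimensionality fixed, which seems to be the "lore" Wolfram is referring to.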
...it came as a surprise that [leaving a Petri dish out with a window open] would end up with interesting [molds] (called [penicillin]). _It was not planned at all_.
You say we don’t understand LLMs, and then you say they are not smart.
How can you say LLMs are not smart without understanding them? Do you see the contradiction?
> that we know what LLMs are doing
They loudly claim the opposite. Can you show where they claim that they know?
Not sure if we read the same post, as I cannot agree with this claim, especially under a post that goes into exactly this level of detail about what happened.
>LLM is a sorcery tech that we don't understand at all
We do, and I'm sure that people at OpenAI did intuitively know why this was happening. As soon as I saw the persona mention, it was clear that the "Nerdy" behavior puts it in the same "hyperdimensional cluster" as goblins, dungeons and dragons, orcs, fantasy, and quirky nerd-culture references. Especially since they instruct the model to be playful, and playful + nerdy is quite close to goblin or gremlin. Just imagine a nerdy, funny subreddit, and you can probably imagine the heavy usage of goblin or gremlin there. And the model will of course hack the reward system, because a text containing goblin or gremlin is much more likely to read as nerdy and quirky than not. You don't need GPT-5 for that; you would probably see the same behavior on completion-only GPT-3 models like Ada or Davinci. They specifically dissect how it came to this and how they fixed it. You can't do that with "sorcery we don't understand". Hell, I don't know their data and I easily understood why this is going on.
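If you want to poke at that "same hyperdimensional cluster" intuition directly, a quick and admittedly crude sketch with an off-the-shelf embedding model (sentence-transformers here is just a convenient stand-in, not whatever OpenAI uses internally):

    from sentence_transformers import SentenceTransformer, util

    model = SentenceTransformer("all-MiniLM-L6-v2")
    anchor = model.encode("a playful, nerdy assistant persona")

    for phrase in ["goblin", "gremlin", "dungeons and dragons", "quarterly earnings report"]:
        sim = float(util.cos_sim(anchor, model.encode(phrase)))
        print(f"{phrase}: {sim:.3f}")

You'd expect the fantasy and nerd-culture terms to land noticeably closer to the persona description than the control phrase, which is the whole point.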
>they want you to think that LLMs are smart beasts (they are not)
I mean, it depends on what you consider smart. It's hard to measure what you can't define; that's why we have benchmarks for model "smartness", but we cannot expect full AGI from them. They are smart in their own way, in some kind of technical-intelligence way that finds the most probable average solution to a given problem. A universal function approximator. A "common sense in a box" kind of smart. Not your "smart human" smart, because their exact architecture doesn't allow for that.
>and that we know what LLMs are doing (we don't)
But we do. We understand them, we know how they work, we have built thousands of different iterations of them, probing systems, replications in Excel, graphical implementations, all kinds of LLMs. We know how they work, and we can understand them.
The big thing we can't do as humans is the same math that they do, at the same speed, combining the same weights and keeping them all in our heads - it's a task our minds are just not built for. But instead of thinking you have to do "hyperdimensional math" to understand them 100%, you can just develop an intuition for what I call "hyperdimensional surfing", and it isn't even prompting - more like understanding what words mean to an LLM and which pocket of their weights they will bring you into.
It's like saying we can't understand CPUs because there are like 10 people on Earth who can hold modern x86-64 opcodes in their head together with a memory table, so they must be magic. But you don't need to be able to do that to understand how CPUs work. You can take a 6502, understand it, develop an intuition for it, which will make understanding them 100x easier. Yeah, the 6502 is nothing close to modern CPUs, but the core ideas and concepts help you develop the foundations. And the same goes for LLMs.
>personally side with Yann LeCun in believing that LLMs are not a path to AGI
I agree, but it is the closest thing we currently have, and it's a tech that can get us there faster. LLMs have an insane number of uses as glue, as connectors, as human<>machine translators, as code writers, as data sorters and analysts, as experimenters, observers, watchers, and those usages will just keep growing. Maybe we won't need them when we reach AGI, but the amount of value we can unlock with these "common sense" machines is amazing, and they will only speed up our search for AGI.
We understand the low-level details of how they are constructed. But we do not fully understand how higher-level behavior emerges - it is a subject of active research.
For example:
https://arxiv.org/html/2210.13382v5
https://arxiv.org/abs/2109.06129
We do understand though; it is exactly what they were made for.
If you train it on a dataset of Othello games, or a dataset including them, you are basically creating a map of all possible moves and states that have ever happened, the odds of transitions between them, the effective and ineffective transitions.
By querying it, you basically start navigating the map from a spot, and it just follows the semi-randomly sampled highest confidence weights when navigating "the map".
And in the multidimensional cross-section of all these states and transitions, the existence of a "board map" is implied, as it is a set of common weights shared between all of them. And it becomes even more obvious with the championship models in the Othello paper, as they were trained on better games, in which the wider state of the board mattered more than the local one, and thus the overall board state mattered more for responses.
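To caricature that "navigating the map" framing in a few lines (and it is a caricature: Othello-GPT is a transformer, not a lookup table; the moves below are made up):

    import random
    from collections import Counter, defaultdict

    games = [["d3", "c5", "d6"], ["d3", "c5", "f4"], ["d3", "e6", "f5"]]

    # Map every seen game prefix to the odds of each next move.
    transitions = defaultdict(Counter)
    for game in games:
        for i in range(len(game) - 1):
            transitions[tuple(game[: i + 1])][game[i + 1]] += 1

    # "Querying" = start from a prefix and sample a high-confidence transition.
    prefix = ("d3", "c5")
    moves, counts = zip(*transitions[prefix].items())
    print(random.choices(moves, weights=counts)[0])  # "d6" or "f4", 50/50 here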
The second paper you linked also has a pretty obvious conclusion. It's telling us more about us as humans than about LLMs - about our culture and colors and how we communicate their perception through text. If you want to try something similar, try kiki/bouba-style experiments on old diffusion models or old LLMs. A "Dzzkwok grWzzz" will get you much rougher and darker-looking things than "Olulola Opolili's" cloudy vibes.
The active research is mostly:
- probing and seeing "hey, let's see if the funky machine also does X"
- finding ways to scientifically verify and explain LLM behaviors we already know about
- pure BS in some cases
- academics learning about LLMs
And it is not proof of where our understanding/frontier is. It is basically standardizing and exploring the intuition that people who actively work with models already have. It's like saying we don't understand math because people outside math circles still do not know all the behaviors and possibilities of a monoid.