Comment by striking
10 days ago
Even if interpretability of specific models or features within them is an open area of research, the mechanics of how LLMs work to produce results are observable and well-understood, and methods to understand their fundamental limitations are pretty solid these days as well.
Is there anything to be gained from following a line of reasoning that basically says LLMs are incomprehensible, full stop?
>Even if interpretability of specific models or features within them is an open area of research, the mechanics of how LLMs work to produce results are observable and well-understood, and methods to understand their fundamental limitations are pretty solid these days as well.
If you train a transformer on (only) lots and lots of addition pairs, e.g. '38393 + 79628 = 118021', and nothing else, the transformer will, during training, discover an algorithm for addition and employ it in service of predicting the next token, which in this instance is the sum of the two numbers.
We know this because of tedious interpretability research, the very limited problem space, and the fact that we knew exactly what to look for.
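For concreteness, the training setup described above can be sketched like this. `addition_corpus` is a hypothetical helper name, not from any particular paper; in the interpretability work alluded to, a corpus of exactly this shape is fed to a small transformer, which is trained to predict the digits after the '=' token by token.

```python
import random

def addition_corpus(n_examples, max_value=99999, seed=0):
    """Generate 'a + b = c' training strings of the kind described above.

    The model never sees an addition rule, only these strings; any
    addition algorithm it ends up with is discovered during training.
    """
    rng = random.Random(seed)  # fixed seed so the corpus is reproducible
    lines = []
    for _ in range(n_examples):
        a = rng.randint(0, max_value)
        b = rng.randint(0, max_value)
        lines.append(f"{a} + {b} = {a + b}")
    return lines
```

The point of the narrow setup is that the answer space is fully characterized in advance, which is exactly what made the interpretability analysis tractable.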
Alright, let's leave addition aside (SOTA LLMs are after all trained on much more) and think about another question. Any other question at all. How about something like:
"Take a capital letter J and a right parenthesis, ). Take the parenthesis, rotate it counterclockwise 90 degrees, and put it on top of the J. What everyday object does that resemble?"
What algorithm does GPT or Gemini or whatever employ to answer this and similar questions correctly? It's certainly not the one it learnt for addition. Do you know? No. Do the creators at OpenAI or Google know? Not at all. Can you or they find out right now? Also no.
Let's revisit your statement.
"the mechanics of how LLMs work to produce results are observable and well-understood".
Observable, I'll give you that, but how on earth can you look at the above and sincerely call it 'well-understood'?
It's pattern matching, likely from typography texts and descriptions of umbrellas. My understanding is that the model can attempt some permutations in its thinking, and eventually one permutation's tokens draw enough attention for it to attempt a solution; once it is attending to "everyday object", "arc", and "hook", it will reply with "umbrella".
Why am I confident that it's not actually doing spatial reasoning? At least in the case of Claude Opus 4.6, it also confidently replies "umbrella" even when you tell it to put the parenthesis under the J, with a handy diagram clearly proving itself wrong: https://claude.ai/share/497ad081-c73f-44d7-96db-cec33e6c0ae3. Here's me specifically asking for the three key points above: https://claude.ai/share/b529f15b-0dfe-4662-9f18-97363f7971d1
I feel like I have a pretty good intuition of what's happening here based on my understanding of the underlying mathematical mechanics.
Edit: I poked at it a little longer and I was able to get some more specific matches to source material binding the concept of umbrellas being drawn using the letter J: https://claude.ai/share/f8bb90c3-b1a6-4d82-a8ba-2b8da769241e
>It's pattern matching, likely from typography texts and descriptions of umbrellas.
"Pattern matching" is not an explanation of anything, nor does it answer the question I posed. You basically hand-waved the problem away with a conveniently vague, non-descriptive phrase. Do you think you could publish that in a paper, for example?
>Why am I confident that it's not actually doing spatial reasoning? At least in the case of Claude Opus 4.6, it also confidently replies "umbrella" even when you tell it to put the parenthesis under the J, with a handy diagram clearly proving itself wrong
I don't know what to tell you, but a J with the parenthesis underneath still resembles an umbrella, just a flipped one. To think that a machine would recognize it's simply a flipped umbrella while a human wouldn't is amazing, but here we are. It's doubly baffling because Claude quite clearly explains this in your transcript.
>I feel like I have a pretty good intuition of what's happening here based on my understanding of the underlying mathematical mechanics.
Yes I realize that. I'm telling you that you're wrong.
> I feel like I have a pretty good intuition of what's happening here based on my understanding of the underlying mathematical mechanics.
You should write a paper and release it and basically get rich.
From Gemini: "When you take those two shapes and combine them, the resulting image looks like an umbrella."
The concept “understand” is rooted in utility. It means “I have built a much simpler model which produces usefully accurate predictions, of the thing or behaviour I seek to ‘understand’”. This utility is “explanatory power”. The model may be in your head, may be math, may be an algorithm or narrative, it may be a methodology with a history of utility. “Greater understanding” is associated with models that are simpler, more essential, more accurate, more useful, cheaper, more decomposed, more composable, more easily communicated or replicated, or more widely applicable.
“Pattern matching”, “next token prediction”, “tensor math” and “gradient descent” or the understanding and application of these by specialists, are not useful models of what LLMs do, any more than “have sex, feed and talk to the resulting artifact for 18 years” is a useful model of human physiology or psychology.
My understanding, and I'm not a specialist, is there are huge and consequential utility gaps in our models of LLMs. So much so, it is reasonable to say we don't yet understand how they work.
You can't keep pushing the AI hype train if you consider it just a new type of software / fancy statistical database.
Yes, there is: the benefit of the doubt.