Comment by TeMPOraL

12 hours ago

Oh boy. Someone didn't get the memo that for LLMs, tokens are units of thinking. I.e. whatever feat of computation needs to happen to produce the results you seek, it needs to fit in the tokens the LLM produces. Being a finite system, there's only so much computation the LLM's internal structure can do per token, so the more you force the model to be concise, the more difficult the task becomes for it. In the worst case, you're guaranteed not to get a good answer, because it requires more computation than is possible with the tokens produced.

I.e. by demanding the model to be concise, you're literally making it dumber.

(Separating out "chain of thought" into "thinking mode" and removing user control over it definitely helped with this problem.)

What do you mean? The page explicitly states:

> cutting ~75% of tokens while keeping full technical accuracy.

I have no clue if this claim holds, but pretending they did not address the obvious criticism, when they did, is at the very least pretty lazy.

An explanation that explains nothing is not very interesting.

  • The burden of proof is on the author to provide at least one type of eval for making that claim.

    • I notice that the number of people confidently talking about "burden of proof" and whose it allegedly is in the context of AI has gone up sharply.

Nobody has to prove anything. Proof can give your claim credibility, but if you don't provide any, an opposing claim without proof doesn't become any more credible either.

      3 replies →

  • The author pretended they addressed the obvious criticism.

    You can read the skill. They didn't do anything to mitigate the issue, so the criticism is valid.

• In the age of vibe coding, and given that we are literally talking about a single markdown file, I am sure this has been well tested and achieves all of its goals with statistical accuracy, no side effects, and no issues.

• > I have no clue if this claim holds, but pretending they did not address the obvious criticism, when they did, is at the very least pretty lazy.

    But they didn't address the criticism. "cutting ~75% of tokens while keeping full technical accuracy" is an empirical claim for which no evidence was provided.

Yeah, but not all tokens are created equal. Some tokens are hard to predict and thus encode useful information; some are highly predictable and therefore don't. Spending an entire forward pass through the token-generation machine just to generate a very low-entropy token like "is" is wasteful. The LLM doesn't get to "remember" that thinking; it just gets to see a trivial grammar-filling token that a very dumb LLM could just as easily have produced. They aren't steganographically hiding useful computation state in words like "the" and "and".

• >They aren't steganographically hiding useful computation state in words like "the" and "and".

When producing a token, the model doesn't just emit the final token; it also produces the entire set of hidden states from the preceding attention blocks. These hidden states are mixed into the attention blocks of future tokens (so even though LLMs are autoregressive, with each token attending to previous tokens, in terms of the computational graph this means the hidden states of previous tokens are passed forward and used to compute the hidden states of future tokens).

So no, it's not wasteful; those low-perplexity tokens are precisely the spots that can instead be used to plan ahead and do useful computation.

Also, I would not be sure that even the output tokens are purely "filler". If you look at raw CoT, it often has patterns like "but wait!" emitted by the model at crucial pivot points. Who's to say that "you're absolutely right" doesn't serve some similar purpose of forcing the model in one direction of adjusting its priors?
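The forward flow described above can be sketched with a toy single-head causal attention in plain Python. This is purely illustrative (no learned projections; queries, keys and values are just the hidden states themselves), but it shows the mechanism: perturbing the first position's hidden state changes the output at the last position, because earlier states feed forward into later computations.

```python
import math

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def causal_attention(hidden_states):
    """Toy causal attention: each position attends to itself and
    all earlier positions, mixing their hidden states into its own."""
    outputs = []
    for t, q in enumerate(hidden_states):
        # causal mask: only positions <= t are visible
        scores = [dot(q, hidden_states[i]) for i in range(t + 1)]
        weights = softmax(scores)
        out = [
            sum(w * hidden_states[i][d] for i, w in enumerate(weights))
            for d in range(len(q))
        ]
        outputs.append(out)
    return outputs

# Three token positions with 2-d hidden states.
h = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
base = causal_attention(h)

# Perturb only the FIRST token's hidden state: the output at the
# LAST position changes too, because earlier states flow forward.
h2 = [[5.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
perturbed = causal_attention(h2)
print(base[2], perturbed[2])
```

Real models add learned projections, multiple heads and layers, but the data-flow point is the same: a later token's computation consumes the hidden states produced at earlier, even "easy", positions.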

• > They aren't steganographically hiding useful computation state in words like "the" and "and".

    Do you know that is true? These aren’t just tokens, they’re tokens with specific position encodings preceded by specific context. The position as a whole is a lot richer than you make it out to be. I think this is probably an unanswered empirical question, unless you’ve read otherwise.

    • I am quite certain.

      The output is "just tokens"; the "position encodings" and "context" are inputs to the LLM function, not outputs. The information that a token can carry is bounded by the entropy of that token. A highly predictable token (given the context) simply can't communicate anything.

      Again: if a tiny language model or even a basic markov model would also predict the same token, it's a safe bet it doesn't encode any useful thinking when the big model spits it out.

      1 reply →

  • can you prove this?

    train an LLM to leave out the filler words, and see it get the same performance at a lower cost? or do it at token selection time?

    • Low entropy is low entropy. You can prove it by viewing the logits of the output stream. The LLM itself will tell you how much information is encoded in each token.

      Or if you prefer, here's a Galilean thought experiment: gin up a script to get a large language model and a tiny language model to predict the next token in parallel; when they disagree, append the token generated by the large model. Clearly the large model will not care that the "easy" tokens were generated by a different model - how could it even know? Same token, same result. And you will find that the tokens that they agree on are, naturally, the filler words.

      To be clear, this observation merely debunks the idea that filler words encode useful information, that they give the LLM "room to think". It doesn't directly imply that an LLM that omits filler words can be just as smart, or that such a thing is trivial to make. It could be that highly predictable words are still important to thought in some way. It could be that they're only important because it's difficult to copy the substance of human thought without also capturing the style. But we can be very sure that what they aren't doing is "storing useful intermediate results".
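The entropy bound underlying this argument is easy to make concrete. A quick sketch with toy distributions (not real model logits): a near-deterministic "filler" position carries a tiny fraction of a bit, while a genuinely uncertain pivot position carries nearly two bits.

```python
import math

def entropy_bits(probs):
    """Shannon entropy in bits of a next-token distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A near-deterministic filler position: the model is ~99% sure
# the next token is "the". Hypothetical numbers for illustration.
filler = [0.99, 0.005, 0.005]

# A genuinely uncertain position: several plausible continuations.
pivot = [0.4, 0.3, 0.2, 0.1]

print(entropy_bits(filler))  # ~0.09 bits
print(entropy_bits(pivot))   # ~1.85 bits
```

Whatever computation happened inside the forward pass, the amount of it that survives into the token stream at a given position is capped by numbers like these.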

Yeah, I don't think that "I'd be happy to help you with that" or "Sure, let me take a look at that for you" carries much useful signal that can be used for the next tokens.

  • There is a study that shows that what the model is doing behind the scenes in those cases is a lot more than just outputting those tokens.

    For an LLM, tokens are thought. They have no ability to think, by whatever definition of that word you like, without outputting something. The token only represents a tiny fraction of the internal state changes made when a token is output.

Clearly there is an optimum for each task (not necessarily a global one), and a concrete model for a given task can be arbitrarily far from it. But you'd need to test it out for each case, not just assume that "less tokens = more better". You can be forcing your model to be dumber without realizing it if you're not testing.

  • They carry information in regular human communication, so I'm genuinely curious why you'd think they would not when an LLM outputs them as part of the process of responding to a message.

I’ve heard this, but I don’t automatically believe it, nor do I understand why it would need to be true; I’m still caught on the old-fashioned idea that the only “thinking” for autoregressive models happens during training.

But I assume this has been studied? Can anyone point to papers that show it? I’d particularly like to know what the curves look like. It’s clearly not linear, so if you cut out 75% of tokens, what do you expect to lose?

I do imagine there is not a lot of caveman speak in the training data, so results may be worse because they don’t fit the patterns that have been reinforcement-learned in.

  • We’re years into the industry leaning into “chain of thought” and then “thinking models” that are based on this premise, forcing more token usage to avoid premature conclusions and notice contradictions (I sometimes see this leak into final output). You may remember in the early days users themselves would have to say “think deeply” or after a response “now check your work” and it would find its own “one shot” mistakes often.

    So it must be studied and at least be proven effective in practice to be so universally used now.

    Someone else posted a few articles like this in the thread above but there’s probably more and better ones if you search. https://news.ycombinator.com/item?id=47647907

• I have seen a paper, though I can’t find it right now, showing that phrasing your prompt in expert language produces better results than layman language. The idea being that the answers that are actually correct will probably be closer to where experts are discussing the topic, so the training data associates those two things more closely, versus laymen talking about stuff and getting it wrong.

This is condescending and wrong at the same time (best combo).

LLMs do stumble into long prediction chains that don’t lead the inference in any useful direction, wasting tokens and compute.

I agree with this take in general, but I think we need to be prepared for nuance when thinking about these things.

Tokens are how an LLM works things out, but I think it's just as likely as not that LLMs (like people) are capable of overthinking things to the point of coming to a wrong answer when their "gut" response would have been better. I do not contend that this is the default mode, but that it is both possible, and that it's more or less likely on one kind of problem than another, problem categories to be determined.

A specific example of this was the era of chat interfaces that leaned too far in the direction of web search when responding to user queries. No, claude, I don't want a recipe blogspam link or summary - just listen to your heart and tell me how to mix pancakes.

More abstractly: LLMs give the running context window a lot of credit, and will work hard to post-hoc rationalize whatever is in there, including any prior low-likelihood tokens. I expect many problematic 'hallucinations' are the result of an unlucky run of two or more low probability tokens running together, and the likelihood of that happening in a given response scales ~linearly with the length of response.

  • The solution to that is turning off thinking mode or reducing thinking budget.

That was my first thought too -- instead of talking like a caveman you could turn off reasoning, with probably better results.

Additionally, LLMs do not actually operate on text; much of the thinking happens in a much higher-dimensional space that just happens to be decoded as text.

So unless the LLM was trained otherwise, making it talk like a caveman is more than just cosmetic: you may, in effect, be turning it into a caveman.

  • > much of the thinking happens in a much higher dimensional space that just happens to be decoded as text.

    What do you mean by that? It’s literally text prediction, isn’t it?

• It is text prediction. But to predict text, other things need to be calculated along the way. If you can step back just a minute, I can offer a very simple, adjacent idea that might help build intuition for the complexity of “text prediction”.

I have a list of numbers, 0 to 9, and the + and = operators. I will train my model on this dataset, except the model won’t get the list; it will get a bunch of addition problems. A lot of them. But every addition problem possible inside that space will not be represented, not by a long shot, and neither will every number. And still, the model will be able to solve any math problem you can form with those symbols.

      It’s just predicting symbols, but to do so it had to internalize the concepts.

      1 reply →

• There was a paper recently demonstrating that if you input different human languages, the middle layers of the model end up operating on the same probabilistic vectors. It's just the encoding/decoding layers that appear to do the language management.

So the conclusion was that these middle layers have their own language, and the model converts the text into this language and then decodes it. It explains why the models sometimes switch to Chinese when they have a lot of Chinese-language inputs, etc.

      7 replies →

  • > instead of talk like a caveman you could turn off reasoning, with probably better results

This is not how the feature called "reasoning" works in current models.

"Reasoning" simply lets the model output, and then consume, some "thinking" tokens before generating the actual output.

    All the "fluff" tokens in the output have absolutely nothing to do with "reasoning".

• You obviously do not speak other languages. Other cultures have different constraints and different grammar.

For example, thinking in modern US English generates many thoughts just to keep speech in the right cultural register (there is only one correct way to say People Of Color, it changes every year, and any typo makes it horribly wrong).

Some languages are far more expressive and specialized in logical conditions, conditionals, recursion and reasoning. Like the Eskimos' proverbial 100 words for snow, but for Boolean algebra.

It is well proven that thinking in Chinese needs far fewer tokens!

With this caveman mod you strip out most of the cultural complexities of the anglosphere, making it easier for foreigners and far simpler to digest.

    • >Some languages are far more expressive and specialized in logical conditions, conditionals, recursion and reasoning. Like eskimos have 100 words for snow, but for boolean algebra.

      This is simply not true.

      4 replies →

A fundamental (but sadly common) error behind “tokens are units of thinking” is anthropomorphising the model as a thinking being. That’s a pretty wild claim that requires a lot of proof, and possibly solving the hard problem of consciousness, before it can be taken seriously.

There’s a less magical model of how LLMs work: they are essentially fancy autocomplete engines.

Most of us probably have an intuition that the more context you give an autocomplete, the better the results it will yield. However, does this extend to the output of the autocomplete, i.e. the more tokens it uses for the result, the better?

It could well be true in context of chain of thought[0] models, in the sense that the output of a preceding autocomplete step is then fed as input to the next autocomplete step, and therefore would yield better results in the end. In other words, with this intuition, if caveman speak is applied early enough in the chain, it would indeed hamper the quality of the end result; and if it is applied later, it would not really save that many tokens.
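The feedback loop described here can be sketched minimally. The `complete` function below is a purely hypothetical stand-in for a real model call (it just labels which step ran, so the data flow is visible): each step's output is appended to the context and fed back in, which is why tokens produced early in the chain condition everything that follows.

```python
def complete(prompt: str) -> str:
    """Hypothetical stand-in for an LLM call: tags which step it
    ran so we can watch the output become the next step's input."""
    step = prompt.count("\n") + 1
    return f"step-{step} reasoning based on [{prompt[-30:]}]"

def chain_of_thought(question: str, steps: int = 3) -> str:
    # Each step's output is appended and fed back in, so early
    # tokens condition every later step.
    context = question
    for _ in range(steps):
        context = context + "\n" + complete(context)
    return context

trace = chain_of_thought("Why is the sky blue?")
print(trace)
```

Under this picture, degrading the style early in the chain corrupts the input of every subsequent step, while degrading only the final step saves few tokens, matching the parent's intuition.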

Willing to be corrected by someone more familiar with NN architecture, of course.

[0] I can see “thinking” used as a term of art, distinct from its regular meaning, when discussing “chain of thought” models; sort of like what “learning” is in “machine learning”.

  • IMO "thinking" here means "computation", like running matrix multiplications. Another view could be: "thinking" means "producing tokens". This doesn't require any proof because it's literally what the models do.

    As I understand it, the claim is: more tokens = more computation = more "thinking" => answer probably better.

That's going to depend on what model you're using with Claude Code. All of the more recent Anthropic models (4.5 and 4.6) support thinking, so the number of tokens generated ("units of thought") isn't directly tied to the verbosity of input and non-thought output.

However, another potential issue is that LLMs are continuation engines, and I'd have thought that talking like a caveman may be "interpreted" as meaning you want a dumbed down response, not just a smart response in caveman-speak.

It's a bit like asking an LLM to predict next move in a chess game - it's not going to predict the best move that it can, but rather predict the next move that would be played given what it can infer about the ELO rating of the player whose moves it is continuing. If you ask it to continue the move sequence of a poor player, it'll generate a poor move since that's the best prediction.

Of course there's not going to be a lot of caveman speak on stack overflow, so who knows what the impact is. Program go boom. Me stomp on bugs.

If this is true, shouldn't LLMs perform way worse when working in Chinese than in English? Seems like an easy thing to study, since there are so many Chinese LLMs that can work in both Chinese and English.

Do LLMs generally perform better in verbose languages than they do in concise ones?

  • Are you saying Chinese is more concise than English? Chinese poetry is concise, but that can be true in any language. For LLMs, it depends on the tokenizer. Chinese models are of course more Chinese-friendly and so would encode the same sentence with fewer tokens than Western models.

    • > Are you saying Chinese is more concise than English?

Yeah, definitely. It lacks case and verb conjugations, plus whole classes of filler words, and words themselves are on average substantially shorter. If you listen to or read a hyper-literal translation of Chinese speech into English (you can find fun videos of this on Chinese social media), it even resembles "caveman speech" for those reasons.

      If you look at translated texts and compare the English versions to the Chinese ones, the Chinese versions are substantially shorter. Same if you compare localization strings in your favorite open-source project.

      It's also part of why Chinese apps are so information-dense, and why localizing to other languages often requires reorganizing the layout itself— languages like English just aren't as information-dense, pixel for pixel.

      The difference is especially profound for vernacular Chinese, which is why Chinese people often note that text which "has a machine translation flavor" is over-specified and gratuitously prolix.

      Maybe some of this washes out in LLMs due to tokenization differences. But Chinese texts are typically shorter than English texts and it extends to prose as well as poetry.

      But yeah this is standard stuff: Chinese is more concise and more contextual/ambiguous. More semantic work is allocated in interpretation than with English, less is allocated in the writing/speaking.

      Do you speak Chinese and experience the differences between Chinese and English differently? I'm a native English speaker and only a beginner in Chinese but I've formed these views in discussion with Chinese people who know some English as well.

      2 replies →

Do you know of evals with default Claude vs caveman Claude vs politician Claude solving the same tasks? Hypothesis is plausible, but I wouldn’t take it for granted

I wonder if a language like Latin would be useful.

It's a significantly more succinct semantic encoding than English, while being able to express all the same concepts, since it encodes a lot of glue words into the grammar of the language and conventionally lets you drop many pronouns.

e.g.

"I would have walked home, but it seemed like it was going to rain" (14 words) -> "Domum ambulavissem, sed pluiturum esse videbatur" (6 words).

Grug says you quite right, token unit thinking, but empty words not real thinking and should avoid. Instead must think problem step by step with good impactful words.

When it comes to LLM you really cannot draw conclusions from first principles like this. Yes, it sounds reasonable. And things in reality aren't always reasonable.

Benchmark or nothing.

  • There have been papers about introducing thinking tokens in intermediary layers that get stripped from the output.

You are absolutely right! That is exactly the reason why more lines of code always produce a better program. Straight on, m8!

  • This might be not so far from the truth, if you count total loc written and rewritten during the development cycle, not just the final number.

    Not everybody is Dijkstra.

I remember a while back they found that replacing reasoning tokens with placeholders ("....") also boosted results on benchies.

But does talk like caveman make number go down? Less token = less think?

I also wondered, due to the way LLMs work, if I ask AI a question using fancy language, does that make it pattern match to scientific literature, and therefore increase the probability that the output will be true?

IIUC this doesn't make the LLM think in caveman (the thinking tokens). It just makes the final output come out in caveman.

CoT tokens are usually controlled via 'extended thinking' or 'adaptive thinking', and are usually not affected by the system prompt. There is an effort parameter, though, which is stated to have an effect on accuracy versus overall token consumption.

https://platform.claude.com/docs/en/build-with-claude/extend...
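For reference, extended thinking is a separate request parameter from the system prompt. A sketch of a Messages API request body, built as a plain dict (field names per the linked docs; the model name and values are illustrative):

```python
# Request body with extended thinking enabled. `budget_tokens`
# caps the chain of thought, independently of whatever style the
# system prompt imposes on the visible output.
request_body = {
    "model": "claude-sonnet-4-5",  # illustrative model name
    "max_tokens": 2048,
    "thinking": {
        "type": "enabled",
        "budget_tokens": 1024,  # reasoning budget: a separate knob
    },
    "system": "Talk like caveman.",  # styles the output, not the budget
    "messages": [
        {"role": "user", "content": "Summarize this diff."}
    ],
}
print(request_body["thinking"])
```

So shrinking the visible output and shrinking the reasoning budget are controlled in different places, which is the crux of this subthread.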

  • This helps, but the original prompt is still there. The system prompt is still influencing these thinking blocks. They just don’t end up clogging up your context. The system prompt sits at the very top of the context hierarchy. Even with isolated "thinking" blocks, the reasoning tokens are still autoregressively conditioned on the system instructions. If the system prompt forces "caveman speak" the model's attention mechanisms are immediately biased toward simpler, less coherent latent spaces. You are handicapping the vocabulary and syntax it uses inside its own thinking process, which directly throttles its ability to execute high-level logic.

    Nothing on that page indicates otherwise.

Ah, so obviously making the LLM repeat itself three times for every response will make it smarter.

You mention thinking tokens as a side note, but their existence invalidates your whole point. Virtually all modern LLMs use thinking tokens.

It's not "units of thinking", it's "units of reference"; as long as what it produces references the necessary probabilistic algorithms, it'll do just fine.

LLMs don't think at all.

Forcing it to be concise doesn't work because it wasn't trained on token strings that short.

  • > Forcing it to be concise doesn't work because it wasn't trained on token strings that short.

    This is a 2023-era comment and is incorrect.

  • They’re able to solve complex, unstructured problems independently. They can express themselves in every major human language fluently. Sure, they don’t actually have a brain like we do, but they emulate it pretty well. What’s your definition of thinking?

• When OP wrote about LLMs "thinking", he implied that they have an internal conceptual self-reflecting state. Which they don't; they *are* merely next-token-predicting statistical machines.

      2 replies →

[flagged]

  • Let’s see, I think these pretty much map out a little chronology of the research:

    https://arxiv.org/abs/2112.00114 https://arxiv.org/abs/2406.06467 https://arxiv.org/abs/2404.15758 https://arxiv.org/abs/2512.12777

    First that scratchpads matter, then why they matter, then that they don’t even need to be meaningful tokens, then a conceptual framework for the whole thing.

• I don’t see the relevance; the discussion is over whether boilerplate text that occurs intermittently in the output, purely for the sake of linguistic correctness or sounding professional, is of any benefit. Chain of thought doesn’t look like that to begin with; it’s a contiguous block of text.

      3 replies →

  • That "unproven claim" is actually a well-established concept called Chain of Thought (CoT). LLMs literally use intermediate tokens to "think" through problems step by step. They have to generate tokens to talk to themselves, debug, and plan. Forcing them to skip that process by cutting tokens, like making them talk in caveman speak, directly restricts their ability to reason.

• The fact that more tokens = more smart should be expected, given CoT / thinking / other techniques that increase model accuracy by using more tokens.

Did you test that "caveman mode" has similar performance to the "normal" model?

    • Yes but: If the amount is fixed, then the density matters.

      A lot of communication is just mentioning the concepts.

• That is part of it. They are also trained to think in very well-mapped areas of their model. All the RLHF, etc., tuned on their CoT and user feedback on responses.

• Looking at the skill.md, wouldn’t this actually increase token use, since the model now needs to reformat its output?

    Funny idea though. And I’d like to see a more matter-of-fact output from Claude.

  • I assume you're a human but wow this is the type of forum bot I could really get behind.

    Take it a step further and do kind of like that xkcd where you try to post and it rewrites it like this and if you want the original version you have to write a justification that gets posted too.

    Chef's kiss

  • Can't you know that tokens are units of thinking just by... like... thinking about how models work?

    • > Can't you know that tokens are units of thinking just by... like... thinking about how models work?

      Seems reasonable, but this doesn't settle probably-empirical questions like: (a) to what degree is 'more' better?; (b) how important are filler words? (c) how important are words that signal connection, causality, influence, reasoning?

      1 reply →

  > Someone didn't get the memo that for LLMs, tokens are units of thinking.

Where do you get this memo? It seems completely wrong to me. More computation does not translate to more "thinking" if you compute the wrong things (i.e. things that don't contribute significantly to the final sentence meaning).

  • That’s why you need filler words that contribute little to the sentence meaning but give it a chance to compute/think. This is part of why humans do the same when speaking.

    • The LLM has no accessible state beyond its own output tokens; each pass generates a single token and does not otherwise communicate with subsequent passes. Therefore all information calculated in a pass must be encoded into the entropy of the output token. If the only output of a thinking pass is a dumb filler word with hardly any entropy, then all the thinking for that filler word is forgotten and cannot be reconstructed.

    • Do you have any evidence at all of this? I know how LLMs are trained and this makes no sense to me. Otherwise you'd just put filler words in every input

e.g. instead of: "The square root of 256 is" you'd enter "errr The er square um root errr of 256 errr is" and it would miraculously get better? The model can't differentiate between words you entered and words it generated itself...

      4 replies →