Comment by gnatman
1 month ago
LLMs sure do love to burn tokens. It’s like a high schooler trying to meet the minimum word length on a take home essay.
I've always wondered about that. LLM providers could easily decimate the cost of inference if they got the models to just stop emitting so much hot air. I don't understand why OpenAI wants to pay 3x the cost to generate a response when two thirds of those tokens are meaningless noise.
Because they don't yet know how to "just stop emitting so much hot air" without also removing their ability to do anything like "thinking" (or whatever you want to call the transcript mode), which is hard because knowing which tokens are hot air is the hard problem itself.
They basically only started doing this because someone noticed you got better performance from the early models by straight up writing "think step by step" in your prompt.
I would guess that by the time a response is being emitted, 90% of the actual work is done. The response has been thought out, planned, drafted, the individual elements researched and placed.
It would actually take more work to condense that long response into a terse one, particularly if the condensing was user specific, like "based on what you know about me from our interactions, reduce your response to the 200 words most relevant to my immediate needs, and wait for me to ask for more details if I require them."
IMO it supports the framing that it's all just a "make the document longer" problem. Our brains are primed for a kind of illusion here: we perceive/infer a mind behind the text because, traditionally, a mind was the only thing that could produce language that fits this well.
An LLM uses constant compute per output token (one forward pass through the model), so the only computational mechanism to increase 'thinking' quantity is to emit more tokens. Hence why reasoning models produce many intermediary tokens that are not shown to the user, as mentioned in other replies here. This is also why the accuracy of "reasoning traces" is hotly debated; the words themselves may not matter so much as simply providing a compute scratch space.
Alternative approaches like "reasoning in the latent space" are active research areas, but have not yet found major success.
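A back-of-envelope sketch of what "constant compute per token" means in practice (my numbers, not from the thread; the ~2 FLOPs per parameter per generated token rule of thumb and the model size are illustrative assumptions for a dense model, ignoring attention over long contexts):

```python
# Rough sketch (assumptions noted above): a dense N-parameter transformer spends
# roughly 2*N FLOPs per generated token, so the only way for it to spend more
# compute on a problem is to emit more tokens.
def inference_flops(n_params: float, n_tokens: int) -> float:
    """Approximate FLOPs to generate n_tokens with an n_params-parameter model."""
    return 2 * n_params * n_tokens

N = 70e9  # hypothetical 70B-parameter model
print(f"{inference_flops(N, 200):.1e} FLOPs for a 200-token terse answer")
print(f"{inference_flops(N, 2000):.1e} FLOPs once 1800 'reasoning' tokens are added")
```

Under that approximation, a 10x longer transcript really is 10x the compute, which is the "compute scratch space" trade-off described above.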
My assumption has been that emitting those tokens is part of the inference, analogous to humans "thinking out loud".
You're absolutely right!
This is an active research topic - two papers on this have come out over the last few days, one cutting half of the tokens and actually boosting performance overall.
I'd hazard a guess that they could get another 40% reduction, if they can come up with better reasoning scaffolding.
Each advance over the last 4 years, from RLHF to o1 reasoning to multi-agent, multi-cluster parallelized CoT, has opened a new engineering scope, and the low-hanging fruit in each one gets explored over the course of 8-12 months. We probably still have a year or two of low-hanging fruit and hacking on everything that makes up current frontier models.
It'll be interesting if there's any architectural upsets in the near future. All the money and time invested into transformers could get ditched in favor of some other new king of the hill(climbers).
https://arxiv.org/abs/2602.02828 https://arxiv.org/abs/2503.16419 https://arxiv.org/abs/2508.05988
Current LLMs are going to get really sleek and highly tuned, but I have a feeling they're going to be relegated to a component status, or maybe even abandoned when the next best thing comes along and blows the performance away.
The 'hot air' is apparently more important than it appears at first, because those initial tokens are the substrate that the transformer uses for computation. Karpathy talks a little about this in some of his introductory lectures on YouTube.
Related are "reasoning" models, where there's a stream of "hot air" that's not being shown to the end-user.
I analogize it as a film noir script document: The hardboiled detective character has unspoken text, and if you ask some agent to "make this document longer", there's extra continuity to work with.
I can only imagine that someone's KPIs are tied to increasing rather than decreasing token usage.
The one that always gets me is how they're insistent on giving 17-step instructions to any given problem, even when each step is conditional and requires feedback. So in practice you need to do the first step, then report the results, and have it adapt, at which point it will repeat steps 2-16. IME it's almost impossible to reliably prevent it from doing this, however you ask, at least without severely degrading the value of the response.
Because for API users they get to charge for 3x the tokens on the same requests.
Because inference costs are negligible compared to training costs
The long incremental reasoning is how they arrive at higher quality answers.
Some applications hide the reasoning tokens from view, but then the final answer appears delayed.
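For what it's worth, the hidden tokens still show up in the usage accounting. A hedged illustration with the OpenAI Python SDK (the model name is just an example and the exact usage fields may differ by SDK version):

```python
# Sketch, not from the thread: reasoning models report the hidden "thinking"
# tokens separately in the usage object, even though they never appear in the
# visible answer you eventually get back.
from openai import OpenAI

client = OpenAI()
resp = client.chat.completions.create(
    model="o1-mini",  # example reasoning model; substitute whatever you use
    messages=[{"role": "user", "content": "What's 17 * 24? Answer tersely."}],
)
print(resp.choices[0].message.content)  # the short, delayed final answer
print(resp.usage.completion_tokens_details.reasoning_tokens)  # hidden tokens you still pay for
```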
I feel like this has gotten much worse since they were introduced. I guess they're optimizing for verbosity in training so they can charge for more tokens. It makes chat interfaces much harder to use IMO.
I tried using a custom instruction in ChatGPT to make responses shorter, but I found the output was often nonsensical when I did.
Yeah, ChatGPT has gotten so much worse about this since the GPT-5 models came out. If I mention something once, it will come back to it in every single message afterward, regardless of whether the topic has changed, and asking it to stop mentioning that specific thing works, except it then finds a new obsession. We also get the follow-up "if you'd like, I can also..." which is almost always either obvious or useless.
I occasionally go back to o3 for a turn (it's the last of the real "legacy" models remaining) because it doesn't have these habits as bad.
It's similar for me; it generates so much content without me asking. If I just ask for feedback or proofreading on something, it tends to regenerate the whole thing in a different style. Nothing is ever just good to go; there's always something it wants to add.
It's also annoying when it starts obsessing over stuff from other chats! Like I know it has a memory of me but geez, I mention that I want to learn more about systems design and now every chat, even recipes, is like "Architect mode - your garlic chicken recipe"
Like, no, stop that! Keep my engineering life separate from my personal life!
I'm suspicious it's something far worse: they're increasingly being trained on their own output scraped from the wild.
Because that's where the compute happens, in those "verbose" tokens. A transformer has a size, it can only do so many math operations in one pass. If your problem is hard, you need more passes.
Asking it to be shorter is like running fewer iterations of a numerical integration algorithm.
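To make that analogy concrete (my toy example, not the parent's): a trapezoid-rule integral gets less accurate as you cut iterations, the same way an answer can get worse when the model is squeezed into fewer tokens.

```python
# Toy illustration of the analogy: fewer iterations -> larger error.
import math

def trapezoid(f, a, b, n):
    """Approximate the integral of f over [a, b] with n trapezoids."""
    h = (b - a) / n
    total = 0.5 * (f(a) + f(b))
    total += sum(f(a + i * h) for i in range(1, n))
    return total * h

exact = 2.0  # integral of sin(x) from 0 to pi
for n in (4, 16, 64):
    approx = trapezoid(math.sin, 0.0, math.pi, n)
    print(f"n={n:3d}  approx={approx:.6f}  error={abs(exact - approx):.6f}")
```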
Yeah, but all the models that are live in ChatGPT have reasoning (IIRC) - they could use reasoning tokens to do the 'compute' and still show the user a succinct response that directly answers the query.
Oh good, it's not just me. Sometimes I'll have it draft an email or something, and the message seems perfect, but then it's like "tell me more about the recipient and I'll make it better."
Like, my guy, I don't want to keep prompting you to make shit better, if you're missing info, ask me, don't write a novel then say "BTW, this version sucked"
Yes, I know this could probably be resolved via better prompting or a system prompt, but it's still annoying.
well, they probably have quite a lot of text from high schoolers trying to meet the minimum word length on a take home essay in the training data
Solution: just add "no yapping" to the prompt.
Same. I usually add a "Be curt" in front of every prompt in Gemini.
Is that more effective than simply adding it to your user instructions?
I mean, their whole existence is about token prediction, so they just want to do their thing :)