Comment by roughly
9 hours ago
One thing I’m becoming curious about with these models is the token counts required to achieve these results - things like “better reasoning” and “more tool usage” aren’t “model improvements” in the colloquial sense; they’re techniques for using the model more to better steer it, and are closer to “spend more to get more” than “get more for less.” They’re still valuable, but they operate on a different economic tradeoff than the one we’re used to talking about in tech.
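To make that tradeoff concrete, here's a back-of-the-envelope sketch; the price and token counts are hypothetical, purely for illustration:

    # Hypothetical pricing and token budgets -- not any vendor's real numbers.
    price_per_mtok = 10.00      # $ per million output tokens (assumed)
    direct_tokens = 500         # plain answer
    reasoning_tokens = 5_000    # same answer preceded by chain-of-thought

    def cost(tokens):
        return tokens / 1_000_000 * price_per_mtok

    print(f"direct:    ${cost(direct_tokens):.4f}/query")     # $0.0050
    print(f"reasoning: ${cost(reasoning_tokens):.4f}/query")  # $0.0500
    # A 10x token budget is a 10x marginal cost: spend more to get more.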
I also find the implications of this for AGI interesting. If very compute-intensive reasoning is what leads to very powerful AI, the world might look much the same for at least a few years even after the breakthrough, because inference compute simply can't keep up.
You might want millions of geniuses in a data center, but perhaps you can only afford one and haven't built out enough compute? Might sound ridiculous to the critics of the current data center build-out, but doesn't seem impossible to me.
I've been pretty skeptical of LLMs as the solution to AGI already, mostly because the limits of what the models seem capable of appear lower than we were hoping (glibly: I think they're pretty good at replicating what humans do when we're running on autopilot, so they've hit the floor of human cognition, but I don't think they're capable of hitting the ceiling). That said, I think LLMs will be a component of whatever AGI winds up being - there's too much "there" there for them to be a total dead end. But, echoing the commenter below and taking an analogy to the brain, "many well-trained models, plus some as-yet-unknown coordinator process" feels like where we're going to land. In other words, to take the Kahneman & Tversky framing, I think LLMs are making a fair pass at "system 1" thinking, but we don't know what the "system 2" component is, and without something in that bucket we're not getting to AGI.
I'm no expert, and I actually asked Google Gemini a similar question yesterday: "how much more energy is consumed by running every query through Gemini AI versus traditional search?" It turns out that the AI result is actually on par with, if not more efficient (power-wise) than, traditional search. I think it said it's the equivalent power of watching 5 seconds of TV per search.
I also asked Perplexity to give a report of the most notable arXiv papers. This one was at the top of the list:
"The most consequential intellectual development on arXiv is Sara Hooker's "On the Slow Death of Scaling," which systematically dismantles the decade-long consensus that computational scale drives progress. Hooker demonstrates that smaller models—Llama-3 8B and Aya 23 8B—now routinely outperform models with orders of magnitude more parameters, such as Falcon 180B and BLOOM 176B. This inversion suggests that the future of AI development will be determined not by raw compute, but by algorithmic innovations: instruction finetuning, model distillation, chain-of-thought reasoning, preference training, and retrieval-augmented generation. The implications are profound—progress is no longer the exclusive domain of well-capitalized labs, and academia can meaningfully compete again."
I’m… deeply suspicious of Gemini’s ability to make that assessment.
I do broadly agree that smaller, better-tuned models are likely to be the future, if only because the economics of the large models seem somewhat suspect right now, and because the ability to run models on cheaper hardware is likely to expand their usability and the use cases they can profitably address.
It's a good paper by Hooker, but that specific comparison is shoddy. Llama and Aya were both trained by significantly more competent labs, on different datasets, than Falcon and BLOOM. The takeaway there is "it doesn't matter if you have loads of parameters if you don't know what you're doing."
If we compare apples to apples, e.g. across Claude models, the larger Opus still happily outperforms its smaller counterparts.
Conceptually, the training process is like building a massive, highly compressed index of all known results. You can't outright ignore the power used to build this index, but once you have it, traversing it could in theory be more efficient than the competing indexes that power Google search. It's a data structure that's perfectly tailored to semantic processing.
Though, once the LLM has to engage a hypothetical "google search" or "web search" tool to supplement its own internal knowledge, I think the efficiency obviously goes out the window. I suspect that Google is doing this every time you engage with Gemini in Search AI Mode.
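A toy illustration of the "index tailored to semantic processing" idea above; the embedding vectors are made-up stand-ins for a trained encoder's output:

    # Toy contrast: exact-keyword inverted index vs. a lookup keyed on meaning.
    docs = {
        "doc1": "how to fix a flat bicycle tire",
        "doc2": "repairing a punctured bike wheel",
    }

    # 1) Inverted index: exact token match only.
    index = {}
    for doc_id, text in docs.items():
        for tok in text.split():
            index.setdefault(tok, set()).add(doc_id)
    print(index.get("bike", set()))  # {'doc2'} -- misses the synonymous doc1

    # 2) Semantic lookup: rank by vector similarity, synonyms included.
    emb = {"doc1": (0.90, 0.10), "doc2": (0.85, 0.20)}  # assumed embeddings
    query = (0.88, 0.15)  # encoding of "bike tire repair" (assumed)

    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    ranked = sorted(emb, key=lambda d: dot(query, emb[d]), reverse=True)
    print(ranked)  # ['doc1', 'doc2'] -- both retrieved, ranked by similarity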
Some external context on those approximate claims:
- Run a 1,500 W US microwave for 10 seconds: 15,000 joules
- Llama 3.1 405B text generation: on average 6,706 joules per response
- Stable Diffusion 3 Medium generating a 1024 x 1024 pixel image with 50 diffusion steps: about 4,402 joules
[1] MIT Technology Review, 2025-05-20: https://www.technologyreview.com/2025/05/20/1116327/ai-energ...
A single Google search in 2009: about 1,000 joules
Couldn't find a more up-to-date number; everyone just keeps repeating that 0.0003 kWh figure from 2009:
https://googleblog.blogspot.com/2009/01/powering-google-sear...
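Putting those figures side by side (a quick sanity-check script; the TV wattage is my own assumption, not from either source):

    # Energy figures as quoted above (Llama/SD3 via MIT Technology Review [1];
    # the Google number is the oft-repeated 2009 figure, ~0.0003 kWh).
    MICROWAVE_10S_J = 15_000        # 1,500 W * 10 s
    LLAMA_405B_RESPONSE_J = 6_706   # per text response, on average
    SD3_IMAGE_J = 4_402             # per 1024x1024 image, 50 steps
    GOOGLE_SEARCH_2009_J = 1_000

    ratio = LLAMA_405B_RESPONSE_J / GOOGLE_SEARCH_2009_J
    print(f"One 405B response ~= {ratio:.1f} 2009-era Google searches")
    # "5 seconds of TV": a ~100 W set (my assumption) uses 500 J in 5 s,
    # an order of magnitude below the 405B figure -- so that comparison
    # only holds for much smaller or more optimized models.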
> the token counts to achieve these results
I've also been increasingly curious about better metrics to objectively assess relative model progress. In addition to the decreasing ability of standardized benchmarks to identify meaningful differences in the real-world utility of output, it's getting harder to hold input variables constant for apples-to-apples comparison. Knowing which model scores higher on a composite of diverse benchmarks isn't useful without adjusting for GPU usage, energy, speed, cost, etc.
Pareto frontier is the term you're looking for.
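For what it's worth, the computation itself is simple. A minimal sketch over (score, tokens-per-answer) pairs; the model names and numbers are invented:

    # Pareto frontier over (benchmark score, avg tokens per answer).
    # Higher score is better, fewer tokens is better. All data invented.
    models = {
        "A": (82.0, 1_200),
        "B": (79.0,   400),
        "C": (85.0, 9_000),
        "D": (78.0, 1_500),
    }

    def pareto_front(points):
        front = []
        for name, (s, t) in points.items():
            dominated = any(
                s2 >= s and t2 <= t and (s2 > s or t2 < t)
                for other, (s2, t2) in points.items() if other != name
            )
            if not dominated:
                front.append(name)
        return front

    print(pareto_front(models))  # ['A', 'B', 'C'] -- D is dominated by A

Which frontier point you then pick depends on how you price tokens (or energy, or latency) against accuracy.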
Yes. Reasoning has a lot of scammy features. Just look at the number of tokens needed to answer on benchmarks and you'll see that some models are just awful.