Comment by bodge5000
21 hours ago
> The LLM's are clearly useful for many things
I don't think that's in any doubt. Even beyond programming, imo especially beyond programming, there are a great many things they're useful for. The question is: is that worth the enormous cost of running them?
NFTs were cheap to produce, and the cost didn't really scale with the "quality" of the NFT. With an LLM, if you want to produce something at the same scale as OpenAI or Anthropic, the amount of money you need just to run it is staggering.
This has always been the problem: LLMs (as we currently know them) being a "pretty useful tool" is frankly not good enough for the investment put into them.
All of the professions it's trying to replace are very much at the bottom end of the tree: programmers, designers, artists, support, lawyers etc. Meanwhile you could already replace management and execs with it and save 50% of the costs, but no one is talking about that.
At this point the "trick" is to scare white-collar knowledge workers into submission with low pay and high workload, under the assumption that AI can do some of the work.
And do you know a better way to increase your output without giving OpenAI/Claude thousands of dollars? It's morale: improving morale would increase output in a much more holistic way. Scare the workers instead and you end up with spaghetti as everyone merges their crappy LLM-enhanced code.
"Just replace management and execs with AI" is an elaborate wagie cope. "Management and execs" are quite resistant to today's AI automation - and mostly for technical reasons.
The main reason being: even SOTA AIs of today are subhuman at highly agentic tasks and long-horizon tasks - which are exactly the kind of tasks the management has to handle. See: "AI plays Pokemon", AccountingBench, Vending-Bench and its "real life" test runs, etc.
The performance at long-horizon tasks keeps going up, mind - "you're just training them wrong" is in full force. But that doesn't change the fact that the systems available today aren't there yet. They don't have the executive function to be execs.
> even SOTA AIs of today are subhuman at highly agentic tasks and long-horizon tasks
This sounds like a lot of the work engineers do as well. We're not perfect at it (though execs aren't either), but the work you produce is expected to survive long term; that's why we spend time accounting for edge cases and so on.
Case in point: the popularity of docker/containerization. "It works on my machine" is generally fine in the short term, since you can replicate the conditions of the local machine relatively easily, but doing that again and again becomes a problem, so we prepare for that (a long-horizon task) by using containers.
Some management would be cut when the time comes. Execs, on the other hand, are not there for the work; they're in due to personal relationships, so they're impossible to fire. If you think someone like, let's say, Satya Nadella can't be replaced by a bot that takes different input streams and then makes decisions, you are joking. Even his recent end-of-2025 letter was mostly written by AI.
Yeah. Obviously. Duh. That's why we keep doing it.
Opus 4.5 saved me about 10 hours of debugging stupid issues in an old build system recently - by slicing through the files like a grep ninja and eventually narrowing in on a thing I surely would have missed myself.
If I were to pay for the tokens I used at API pricing, I'd pay about $3 for that feat. Now, come up with your best estimate: what's the hourly wage of a developer capable of debugging an old build system?
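To spell that back-of-the-envelope out (the $3 and the 10 hours are the figures above; the hourly rate is purely an assumed placeholder, not real data):

    # Rough cost comparison. token_cost_usd and hours_saved come from
    # the comment above; the developer rate is an assumed figure.
    hours_saved = 10
    token_cost_usd = 3.00
    dev_hourly_rate_usd = 75.00  # assumption: adjust to your market

    human_cost_usd = hours_saved * dev_hourly_rate_usd  # $750
    ratio = human_cost_usd / token_cost_usd             # 250x
    print(f"human: ${human_cost_usd:.0f} vs tokens: ${token_cost_usd:.0f} ({ratio:.0f}x)")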
For reference: by now, the lifetime compute use of frontier models is inference-dominated, at a rate of 1:10 or more. And API prices at all major providers represent selling the model with a good profit margin.
So could the company hiring you to do that work fire you and just use Opus instead? If not, then you can't compare an engineer's salary to what Opus costs, because the engineer is needed anyway.
> And API costs at all major providers represent selling the model with a good profit margin.
Though we don't know for certain, this is likely false. At best it's looking like break-even, and if you look at Anthropic, they cap API spend at just $5,000 a month, which sounds like a stop loss. If it were making a good profit, they'd have no reason to have a stop loss (and certainly not one that low).
> Yeah. Obviously. Duh. That's why we keep doing it.
I don't think so. I think it's what's promised that keeps spend on it so high. I'd imagine if all the major AI companies were to come out and say "this is it, we've gone as far as we can", investment would likely dry up.
But now instead of spending 10 hours working on that, he can go and work on something else that would otherwise have required another engineer.
It's not going to mean they can employ 0 engineers, but maybe they can employ 4 instead of 5 - and a 20% reduction in workforce across the industry is still a massive change.