Comment by falloutx
18 hours ago
All of the professions it's trying to replace are at the very bottom end of the tree: programmers, designers, artists, support, lawyers, etc. Meanwhile you could already replace management and execs with it and save 50% of the costs, but no one is talking about that.
At this point the "trick" is to scare white-collar knowledge workers into submission with low pay and high workload, under the assumption that AI can do some of the work.
And do you know a better way to increase your output without giving OpenAI/Claude thousands of dollars? It's morale: improving morale would increase output in a much more holistic way. Scare the workers and you end up with spaghetti as everyone merges their crappy LLM-enhanced code.
"Just replace management and execs with AI" is an elaborate wagie cope. "Management and execs" are quite resistant to today's AI automation - and mostly for technical reasons.
The main reason: even the SOTA AIs of today are subhuman at highly agentic, long-horizon tasks - which are exactly the kind of tasks management has to handle. See: "AI plays Pokemon", AccountingBench, Vending-Bench and its "real life" test runs, etc.
The performance at long-horizon tasks keeps going up, mind - "you're just training them wrong" is in full force. But that doesn't change that the systems available today aren't there yet. They don't have the executive function to be execs.
> even SOTA AIs of today are subhuman at highly agentic tasks and long-horizon tasks
This sounds like a lot of the work engineers do as well. We're not perfect at it (though execs aren't either), but the work you produce is expected to survive long term; that's why we spend time accounting for edge cases and so on.
Case in point: the popularity of Docker/containerization. "It works on my machine" is generally fine in the short term, since you can replicate the conditions of the local machine relatively easily, but doing that again and again becomes a problem, so we prepare for that (a long-horizon task) by using containers.
Some management would be cut when the time comes. Execs, on the other hand, are not there for the work; they're in due to personal relationships, so they're impossible to fire. If you think someone like, say, Satya Nadella can't be replaced by a bot that takes different input streams and then makes decisions, you are joking. Even his recent end-of-2025 letter was mostly written by AI.
If an AI exec reliably outperformed meatbag execs while demanding less $$$, many boards would consider that an upgrade. Why gamble on getting a rare high-performance super-CEO when you can get a reliable "good enough"?
The problem is: we don't have an AI exec that would outperform a meatbag exec on average, let alone reliably. Yet.