Comment by schnitzelstoat
19 hours ago
I agree that all the AI doomerism is silly (by which I mean fear of some Terminator-style machine uprising; the economic issues are quite real).
But it's clear that LLMs have some real value. Even if we always need a human in the loop to prevent hallucinations, they can still massively reduce the amount of human labour required for many tasks.
NFTs felt like a con, and in retrospect were a con. LLMs are clearly useful for many things.
Those aren’t mutually exclusive; something can be both useful and a con.
When a con man sells you a cheap watch at a high price, what you get is still useful (a watch that tells the time), but you were still conned, because what you paid for is not what was advertised. You overpaid because you were tricked about what you were buying.
LLMs are useful for many things, but they're also not nearly as beneficial and powerful as they're sold to be. Sam Altman, while entirely ignoring the societal issues raised by the technology (such as the spread of misinformation and unhealthy dependencies), repeatedly claims it will cure all cancers and other diseases, eradicate poverty, solve the housing crisis, fix democracy… Those claims are bullshit, so the con description applies.
https://youtu.be/l0K4XPu3Qhg?t=60
I think the following things can both be true at the same time:
* LLMs are a useful tool in a variety of circumstances.
* Sam Altman is personally incentivised to spout a great deal of hyped-up rubbish about both what LLMs are capable of now and what they could be capable of.
Yes, that’s the point I’m making. In the scenario you’re describing, that would make Sam Altman a con man. Alternatively, he could simply be delusional and/or stupid. But given his history of deceit with Loopt and Worldcoin, there is precedent for the former.
These are not independent hypotheses. If the second is true, it decreases the probability that the first is true, and vice versa.
The dependency here is that if Sam Altman is indeed a con man, it is reasonable to assume that he has in fact conned many people, who then report overinflated metrics of the usefulness of the stuff they just bought (people don't like to believe they were conned; cognitive dissonance).
In other words, if Sam Altman is indeed a con man, it is very likely that most metrics of the usefulness of his product are heavily biased.
LLMs of today advance in incremental improvements.
There is a finite amount of incremental improvements left between the performance of today's LLMs and the limits of human performance.
This alone should give you second thoughts on "AI doomerism".
That is not necessarily true. That would be like arguing there is a finite number of improvements between the rockets of today and Star Trek ships. To get warp technology you can’t simply improve combustion engines, eventually you need to switch to something else.
The same could apply to LLMs: there may be a hard wall that the current approach can't breach.
If that's the case, then, what's the wall?
The "walls" that stopped AI decades ago stand no more. NLP and CSR were thought to be the "final bosses" of AI by many - until they fell to LLMs. There's no replacement.
The closest thing to a "hard wall" LLMs have is probably online learning? And even that isn't really a hard wall. Because LLMs are good at in-context learning, which does many of the same things, and can do things like set up fine-tuning runs on themselves using CLI.
Pole-vaulting records improve incrementally too, and there is a finite distance left to the moon. Without deep understanding, experience, and numbers to back up the opinion, any progress can seem about to reach arbitrary goals.
AI doomerism was sold by the AI companies as a sort of "learn it or you'll fall behind". But they didn't think it through: AI is now widely seen as a bad thing by the general public (except by programmers who think they can deliver slop faster). Who would buy a $200/month subscription after getting laid off? I'm not sure the strategy of spreading fear was worth it. I also don't think this tech can ever be profitable. I hope it keeps burning money at this rate.
The employer buys the AI subscription, not the employee. An employee who sends company code to an external AI is somebody looking for trouble.
In the case of contractors, the contractors buy the subscription, but they need authorization to give the AI access to the code. That's obvious when the code is the customer's property, but there might be NDAs even if the contractor owns the code.
You can't get to the moon by learning to climb taller and taller trees.
I disagree with this perspective. Human labour is mostly inefficiency born of habitual repetition from experience. LLMs tend not to improve that. They look like they do, but instead they train the user to replace human repetition with machine repetition.
We had an "essential" reporting function in the business which was done in Excel. All SMEs seem to have little pockets of this. Hours were spent automating the task with VBA, to no avail. Then LLMs came in after the CTO became obsessed with them, and the task got hit with that hammer. This is four iterations of the same job: manual, Excel, Excel+VBA, Excel+Copilot. 15 years this went on.
No one actually bothered to understand why the work was being done, and the LLM did not have any context. The report was being emailed weekly to a distribution list with no subscribers, as the last subscriber had left the company 14 years ago. No one knew, cared, or even thought about it.
And I see the same in all areas where LLMs are used. They are merely papering over incompetence, bad engineering designs, poor abstractions, and low-knowledge situations. Literally no one cares about this as long as the work gets done and the world keeps spinning. No one really wants to make anything better, just do the bad stuff faster. If that's where something is useful, then we have fucked up.
Another one: I need to make a form to store some stuff in a database so I can do some analytics on it later. The discussion starts with how we can approach it with ReactJS+microservices+Kubernetes. That isn't the problem I need solving. People have been completely blinded to what a problem is and how to get rid of it efficiently.
> LLMs are clearly useful for many things
I don't think that's in any doubt. Even beyond programming, imo especially beyond programming, there are a great many things they're useful for. The question is: is that worth the enormous cost of running them?
NFTs were cheap enough to produce, and the cost didn't really scale with the "quality" of the NFT. With an LLM, if you want to produce something at the same scale as OpenAI or Anthropic, the amount of money you need just to run it is staggering.
This has always been the problem: LLMs (as we currently know them) being a "pretty useful tool" is frankly not good enough to justify the investment put into them.
All of the professions it's trying to replace are very much at the bottom end of the tree: programmers, designers, artists, support, lawyers, etc. Meanwhile you could already replace management and execs with it and save 50% of the costs, but no one is talking about that.
At this point the "trick" is to scare white-collar knowledge workers into accepting low pay and a high workload, on the assumption that AI can do some of the work.
And do you know a better way to increase your output without giving OpenAI/Claude thousands of dollars? It's morale: improving morale would increase output in a much more holistic way. Scare the workers and you end up with a spaghetti mess of everyone merging their crappy LLM-enhanced code.
"Just replace management and execs with AI" is an elaborate wagie cope. "Management and execs" are quite resistant to today's AI automation - and mostly for technical reasons.
The main reason being: even SOTA AIs of today are subhuman at highly agentic tasks and long-horizon tasks - which are exactly the kind of tasks the management has to handle. See: "AI plays Pokemon", AccountingBench, Vending-Bench and its "real life" test runs, etc.
The performance at long-horizon tasks keeps going up, mind - "you're just training them wrong" is in full force. But that doesn't change that the systems available today aren't there yet. They don't have the executive function to be execs.
Yeah. Obviously. Duh. That's why we keep doing it.
Opus 4.5 saved me about 10 hours of debugging stupid issues in an old build system recently - by slicing through the files like a grep ninja and eventually narrowing in on a thing I surely would have missed myself.
If I were to pay for the tokens I used at API pricing, I'd pay about $3 for that feat. Now, come up with your best estimate: what's the hourly wage of a developer capable of debugging an old build system?
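To make that estimate concrete, here's a back-of-envelope sketch in Python. The token counts, per-token prices, and hourly rate are all illustrative assumptions, not quoted figures; the only numbers taken from the anecdote are the ~$3 total and the ~10 hours saved:

```python
# Rough cost comparison for the debugging anecdote above.
# Every input below is an assumption for illustration only.

INPUT_TOKENS = 500_000      # assumed tokens the model read while searching the codebase
OUTPUT_TOKENS = 20_000      # assumed tokens it generated in its replies
PRICE_IN_PER_M = 5.00       # assumed $ per million input tokens
PRICE_OUT_PER_M = 25.00     # assumed $ per million output tokens

api_cost = (INPUT_TOKENS * PRICE_IN_PER_M
            + OUTPUT_TOKENS * PRICE_OUT_PER_M) / 1_000_000

HOURS_SAVED = 10            # from the anecdote
DEV_HOURLY_RATE = 100.00    # assumed fully loaded developer rate, $/hour

human_cost = HOURS_SAVED * DEV_HOURLY_RATE

print(f"API cost:   ${api_cost:.2f}")    # ~$3.00 with these assumptions
print(f"Human cost: ${human_cost:.2f}")  # ~$1000.00 with these assumptions
```

Even if the assumed hourly rate is off by a factor of two in either direction, the gap remains roughly two orders of magnitude.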
For reference: by now, the lifetime compute use of frontier models is inference-dominated, at a ratio of 1:10 or more. And API costs at all major providers represent selling the model with a good profit margin.
So could the company hiring you to do that work fire you and just use Opus instead? If not, then you cannot compare an engineer's salary to what Opus costs, because the engineer is needed anyway.
> And API costs at all major providers represent selling the model with a good profit margin.
Though we don't know for certain, this is likely false. At best, it's looking like break-even; and if you look at Anthropic, they cap their API spend at just $5,000 a month, which sounds like a stop-loss. If it were making a good profit, they'd have no reason to have a stop-loss (and certainly not that low).
> Yeah. Obviously. Duh. That's why we keep doing it.
I don't think so. I think what is promised is what keeps spend on it so high. I'd imagine that if all the major AI companies were to come out and say "this is it, we've gone as far as we can", investment would likely dry up.
> they can still massively reduce the amount of human labour required for many tasks.
I want to see some numbers before I believe this. So far my feeling is that, in the best-case scenario, it reduces the time needed for bureaucratic tasks: tasks that were not needed anyway and could have just been removed for an even greater boost in productivity. Maybe it's automating tasks from junior engineers, tasks which they need to perform in order to gain experience and develop their expertise. Although I'd need to see the numbers before I believe even that.
I have a suspicion that AI is not increasing productivity by any meaningful metric which couldn't be increased by much, much cheaper and easier means.