The original post did wind me up and I was hoping to see a good rebuttal from someone. Unfortunately, this is just as bad in the other direction. The expletives, the highly emotional language ('don't talk to me about my kids', etc.), and the unsubstantiated claims in the responses just devolve it into 'AI good' vs. 'AI bad'.
With the barrage of pro-AI content, I like to add some opposing views to my watch/read queue. Ed comes up a lot on the other side, but after watching him once or twice I have lost interest in his views, as he seems to be basically just AI-bashing rather than offering good counterarguments to the more bombastic claims.
It's a shame that middle-of-the-road, reasonable takes don't seem to cut through to the public's attention. I would love to see someone popular enough and sensible enough advocating for a measured approach to the rollout of new tech and an approach to manage the risks and capture the opportunities.
Is AI transformational and can it impact 'most' white-collar jobs? YES. Is it going to leave us all without jobs? 'Likely' NO, but it's worth assessing and preparing for if it does...
I truly feel our system of laws and government is failing to provide rapid responses and guardrails to safeguard the public from new and rapidly advancing tech. Technological advancement seems to be accelerating while our capacity to respond to it properly has not kept up. Microtransactions, BNPL, AI, ridesharing, prediction markets, etc. have all managed a form of regulatory arbitrage and have been a vast net negative for some segments of society (mostly those that most need help and support), yet it takes years to implement even the most basic protections.
Recent and related:
Something Big Is Happening - https://news.ycombinator.com/item?id=46973011 - Feb 2026 (73 comments)
As someone who uses Claude/Opus 4.6 every day: Zitron is full of shit. All the stuff he calls a bald-faced lie is... stuff I see every day.
This is in reply to this post the other day, which did numbers: https://x.com/mattshumer_/status/2021256989876109403
Also in reply (satirically):
Something Small is Happening
https://x.com/johnpalmer/status/2021966462198460849?s=12
> Commented [9]: This is fundamentally untrue. An LLM can certainly spit out thousands of lines of code, but "opens the app itself" is definitely up for question, as is "clicks the buttons" considering how unreliable basically every computer-use LLM is. "It iterates like a developer would, fixing and refining until it's satisfied" is just a bald-faced lie. What're you talking about? This is not what these models do, nor what Codex or Claude Code does. This is a clever and sinister way to write, because it abuses the soft edges of the truth: while coding LLMs can test products, or scan/fix some bugs, this suggests A) they do this autonomously without human input, B) they do this correctly every time (or ever!), C) that there is some sort of internal "standard" they follow and D) that all of this just happens without any human involvement
---
Ummm. Yeah, no. This actually works. No idea why bozos who obviously don't use the tools write about how the tools don't do this or that. Yes they do. I know because I use them. Today's best agentic harnesses can absolutely do all of the above. Not perfect by any means, not every time, but enough to be useful to me. As some people say "stop larping". If you don't know how a tool works, or what it can do, why the hell would you comment on something so authoritatively? This is very bad.
(I'll note that the original article was written by a 100% certified grifter. I happened to be online on LocalLLaMA when that whole debacle happened. He's a quack, no doubt about it. But judging from the quote I just pasted, so is the commenter. Quacks commenting on quacks. This is so futile.)
I hate leaving snarky ad hominem replies, but yes: Zitron is simply a joke. I am not sure why some journalists and professionals seem to take him seriously. It's odd.
The original article is also silly, as you say. It's just two tiresome cranks barking at each other. (Though I think I find Zitron's commentary less tethered to reality than what it's critiquing. Hypomanic exaggeration vs. deeply incurious pedantic skepticism.)
I'm not sure what to say about calling someone a "liar" for stating that AI can work for hours unattended. I can prompt AI and have it run for an hour+ at a time and get good results out of it. I have no reason to lie; this is just a factual statement, sort of like saying that my test suite runs for an hour or something. Yes, you need to prompt it correctly and have the right environment and so forth, but it is absolutely not a "lie".
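For what it's worth, the workflow being argued about here is not exotic: the harness runs the code, feeds any failure back to the model, and repeats until the tests pass or it gives up. A minimal runnable sketch of that loop (with the actual LLM call stubbed out by a placeholder `call_model`, since the real thing is an external API) looks like:

```python
# Sketch of an "iterate until tests pass" agent loop, as debated above.
# `call_model` is a placeholder for a real LLM API call; here it is stubbed
# so the example runs standalone and "fixes" one known bug.

def call_model(source: str, error: str) -> str:
    # A real harness would send `source` plus the test failure to an LLM
    # and receive a patched file back. This stub just applies the fix.
    return source.replace("a - b", "a + b")

def run_tests(source: str):
    # Execute the candidate code; return an error message, or None on success.
    ns = {}
    try:
        exec(source, ns)
        assert ns["add"](2, 3) == 5
        return None
    except AssertionError:
        return "test failed: add(2, 3) != 5"

def repair_loop(source: str, max_iters: int = 5) -> str:
    error = run_tests(source)
    for _ in range(max_iters):
        if error is None:
            return source           # tests pass; the loop stops itself
        source = call_model(source, error)
        error = run_tests(source)
    raise RuntimeError("gave up after max_iters")

buggy = "def add(a, b):\n    return a - b\n"
fixed = repair_loop(buggy)
```

This says nothing about how *well* a given model patches code, of course; it only shows that "runs unattended, fixing and refining" is an ordinary control-flow pattern, not a fantastical claim.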
Yes; and you can also find a bear that dances if you visit a circus. Therefore saying bears can't dance is a lie.
I don't really understand what you are trying to say with this comment.
4 replies →
If you actually read the post you'll see the reasons to call him a liar:
1) faking benchmarks and lying about a model he profited from commercially (ie. fraud)
2) implying that only a few people (like himself) saw COVID coming. This is a lie: it was the New York Times that published a huge article on the coronavirus at the time indicated, and he, of course, didn't see it coming
3) he doesn't just fail to disclose his commercial interests in what he's peddling, he denies them
4) he confidently states that AI builds the next generation of AI, which he can't know, and has not been stated anywhere
The list goes on.
I did actually read the post -- or at least the first two pages, until the increasingly unhinged comments started to get a little redundant and I figured I had gotten the gist.
> implying that only a few people (like himself) saw COVID coming
Nowhere does the post imply this. The post says COVID was an exponential curve, and he thinks that AI is a similar curve. There is nothing in there saying that only he was the one to see this. The comment, and you, are responding to a sentiment that doesn't exist in the document.
> he confidently states that AI builds the next generation of AI, which he can't know
Anthropic reported in December that 55% of engineers use Claude for debugging on a daily basis [1]. I am not sure how you come to the conclusion that this "has not been stated anywhere".
I would respond to your other points but I feel like these are so thoroughly incorrect that I should probably stop here.
[1] https://www.anthropic.com/research/how-ai-is-transforming-wo...
1 reply →
context?
It's a response to this: https://shumer.dev/something-big-is-happening
The post is silly, but I do not expect Zitron's commentary to be particularly illuminating as he is a charlatan himself. I could point to many examples, but here is a blog post I wrote about one case of him trying very hard to not understand a simple and familiar situation: https://crespo.business/posts/cost-of-inference/.
> ...as he is a charlatan himself.
What's the evidence for that?
6 replies →
Everyone's a charlatan until their claims come true. For that matter, your rebuttal comes with its own statements of faith, "I just don’t buy it."
5 replies →
I don't know whether Ed Zitron is telling the truth.
I do know that Suleyman, Altman, and Amodei have lied, lied, and lied repeatedly, whether intentional or not.
For that matter, I do not believe AGI will happen in our lifetimes. https://timdettmers.com/2025/12/10/why-agi-will-not-happen/
However, it already did? Interesting how everyone seems to have a different perspective on that.
There's a good article arguing that AGI is not happening and is instead the religion of Silicon Valley: https://fluxus.io/article/alchemy-2-electric-boogaloo. It's good despite being written by a promptfondler.