Comment by mikert89
5 days ago
The cynicism/denial on HN about AI is exhausting. Half the comments are some weird form of explaining away the ever-increasing performance of these models.
I've been reading this website for probably 15 years, and it's never been this bad. Many threads are completely unreadable; all the actual educated takes are on X. It's almost like there was a talent drain.
Cynicism and denial are two very different things, and have very different causes and justifications. I personally don't deny that LLMs are very powerful and capable of eliminating many jobs. At the same time I'm very cynical about the rollout and push for AI. I don't see it in any way as a push for a "better" society or towards some notion of progress, but rather as an enthusiastic effort to disempower employees, centralize power, expand surveillance, increase profits, etc.
AI is kerosene. A useful resource when applied with reason and compassion. Late stage capitalism is a dumpster full of garbage. AI in combination with late stage capitalism is a dumpster fire. Many, perhaps most people conflate the dumpster fire with "kerosene evil!"
Can be, but why not take them at their word? The people building these systems are directly stating the goal is to replace people. Should anyone blame people for being mad at not just the people building the systems, but the systems themselves?
> AI in combination with late stage capitalism
What's the alternative here?
Making an account just to point out how these comments are far more exhausting, because they don't engage with the subject matter. They are just agreeing with a headline and saying, "See?"
You say, "explaining away the increasing performance" as though that were a good-faith representation of arguments made against LLMs, or even of this specific article. Questioning the self-congratulatory nature of these businesses is perfectly reasonable.
But don't you think this might be a case where there is both self-congratulation and actual progress?
The level of proof for the latter is much higher, and IMO, OpenAI hasn't met the bar yet.
Something really funky is going on with newer AI models and benchmarks versus how they perform subjectively when I use them for my use-cases. I say this across the board[1], not just regarding OpenAI. I don't know if the frontier labs have run into Goodhart's law vis-à-vis benchmarks, or if my use-cases are atypical.
1. I first noticed this with Claude 3.5 vs Claude 3.7
That's a fair question, and I agree. I just find it odd how we shout across the aisle, whether in favor or against. It's a case of thinking the tech is neat, while cringing at all the money-people and their ideations.
Probably because both sides have strong vested interests and it’s next to impossible to find a dispassionate point of view.
The pro-AI crowd, VCs, tech CEOs, etc. have a strong incentive to claim humans are obsolete. Many tech employees see threats to their jobs and want to pooh-pooh any way AI could be useful or competitive.
That's huge hyperbole. I can assure you many people find the entire thing genuinely fascinating, without having any vested interest and without buying the hype.
Sure, but it's still a gold rush, with a lot of exaggeration pushed by tech executives to acquire investors. There's a lot of greed and fear to go around.
I think LLMs are fascinating and cool myself, having grown up with Eliza and crappy expert systems, but I'm more interested in deep-learning outcomes like AlphaFold than in general-purpose LLMs. You don't hear enough about non-LLM AI because of all the money riding on LLM-based tech.
It's hard not to see the bad behavior that has arisen due to all the money being thrown about. So that is to say, it makes sense that there is some skepticism, as you can't take what these companies say at face value. It'd be nice to have a toned-down discussion about what LLMs can and can't do, but there is a lot of obfuscation and hype. Also, there is the conversation about what they should or shouldn't be doing, which is completely fair to have.
That's just another way to state that everybody is almost always self-serving when it comes to anything.
Or some can spot a euphoric bubble when they see it, with lots of participants who have over-invested in the 90% of these so-called AI startups that are not frontier labs.
What does this have to do with the math Olympiad? Why would it frame your view of the accomplishment?
2 replies →
dude we have computers reasoning in English to solve math problems, what are you even talking about
1 reply →
Accepting OpenAI at face value is just the lazy stance.
Finding a critical perspective and trying to understand why it could be wrong is more fun. You just say "I was wrong" when proven wrong.
Two things can happen at the same time: Genuine technological progress and the “hype machine” going into absolute overdrive.
The problem with the hype machine is that it provokes an opposite reaction and the noise from it buries any reasonable / technical discussion.
> I've been reading this website for probably 15 years, its never been this bad.
People here were pretty skeptical about AlexNet, when it won the ImageNet challenge 13 years ago.
https://news.ycombinator.com/item?id=4611830
Ouch, that thread makes me quite sad about the state of discourse on HN today. It's a lot better than this thread.
That thread was skeptical, but it's still far more substantive than what you find here today.
I think that's because that announcement actually told you something technically interesting. This one just presents a result (which is cool), when the actual method is the really cool part!
Indeed it's a very unsophisticated and obnoxious audience around here. They are so conservative and unadventurous that anything possibly transformative is labeled as hype. I feel bad for them.
Enthusiastically denouncing or promoting something is much, much easier and more rewarding in the short term for people who want to appear hip to their chosen in-group - or profit center.
And then, it's likewise easy to be a reactionary to the extremes of the other side.
The middle is a harder, more interesting place to be, and people who end up there aren't usually chasing money or power, but some approximation of the truth.
I agree that there's both cynicism and denial, but when I've explained my views I have usually been able to get through to the complainers.
Usually my go-to example of LLMs doing more than mass memorization is Charton and Lample's LLM trained on function expressions and their derivatives, which is able to go from the derivatives back to the original functions and thus perform integration. But at the same time I know that LLMs are essentially completely crazy, with no understanding of reality: just ask one to write some fiction and you'll have the model outputting discussions where characters who have never met before address each other by name, or getting other similarly basic things wrong, and when something genuinely is not in the model you will end up in hallucination land. So the people saying that the models are bad are not completely crazy.
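For the curious, here's a minimal sketch of that setup (my own toy illustration, assuming sympy is installed; not Charton and Lample's actual pipeline): generate random functions, differentiate them symbolically, and train a seq2seq model on (derivative -> function) pairs, so that predicting the "original" amounts to symbolic integration.

    # Toy generator of (derivative, function) training pairs.
    # Names and the expression generator are illustrative only.
    import random
    import sympy as sp

    x = sp.symbols("x")
    ATOMS = [x, x**2, sp.sin(x), sp.cos(x), sp.exp(x)]

    def random_function(depth=2):
        # Build a small random expression in x by combining atoms.
        expr = random.choice(ATOMS)
        for _ in range(depth):
            other = random.choice(ATOMS)
            expr = expr + other if random.random() < 0.5 else expr * other
        return expr

    def make_pair():
        # (input, target): the derivative, and the function it came from.
        f = random_function()
        return sp.diff(f, x), f

    for _ in range(3):
        dfdx, f = make_pair()
        # A seq2seq model trained on str(dfdx) -> str(f) learns to
        # "undo" differentiation, i.e. to integrate.
        print(f"d/dx: {dfdx}  ->  f: {f}")

The neat part is that data generation is cheap in the easy direction (differentiation), while what the model learns is the hard inverse (integration).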
With the wrong codebase I wouldn't be surprised if you need a finetune.
It's caught in a kind of feedback loop. There are only so many times you can see "stochastic parrot" or "fancy autocomplete" or "can't draw hands" or "just a bunch of matmuls, it can't replicate the human soul" lines before you decide to just not engage. This leads to more of the content being exactly that, driving more people away.
At this point, there are much better places to find technical discussion of AI, pros and cons. Even Reddit.
yeah a lot of the time now, i will draft a comment, and then not even publish it. like what's the point
This sounds like a version of "HN hates X and I am tired of it". In the last 10 years or so that I have been reading HN, X has been crypto, Musk/Tesla, and many more.
So, as much as I get the frustration, comments like these don't really add much. It's complaining about others complaining. Instead, this should be taken as a signal that maybe HN is not the right forum to read about these topics.
GP is exaggerating, but this thread in particular is really bad.
It's healthy to be skeptical, and it's even healthier to be skeptical of OpenAI, but there are commenters who clearly have no idea what IMO problems are, saying that this somehow means nothing?
Makes sense. Everyone here has their pride and identity tied to their ability to code. HN likes to upvote articles related to IQ because coding correlates with IQ and HNers like to think they are smart.
AI is of course a direct attack on the average HNer's identity. The response you see is like the one you get when you attack a Christian on his religion.
The pattern of defense is typical. When someone’s identity gets attacked they need to defend their identity. But their defense also needs to seem rational to themselves. So they begin scaffolding a construct of arguments that in the end support their identity. They take the worst aspects of AI and form a thesis around it. And that becomes the basis of sort of building a moat around their old identity as an elite programmer genius.
A telltale sign that you or someone else is doing this: you are talking about AI and someone just comments about how they aren't afraid of AI taking over their own job, when that wasn't even directly the topic.
If you say something like "AI is going to lessen the demand for software engineering jobs", the typical thing you hear is "I'm not afraid of losing my job", and I'm like: bro, I'm not talking about your job specifically, I'm not talking about you or your fear of losing a job, I'm just talking about the economics of the job market. This is how you know it's an identity thing more than a technical topic.
[flagged]
Please don't cross into personal attack. We ban accounts that do that.
Also, please don't fulminate. This is in the site guidelines: https://news.ycombinator.com/newsguidelines.html.
2 replies →
Your anger about his comment suggests that it actually is about pride and identity. I simply don’t buy that most people here argue against AI because they’re worried about software quality and lowering the user experience. It’s the same argument the American Medical Association made in order to allow them to gatekeep physician jobs and limit openings. We’ve had developers working on adtech directly intended to reduce the quality of the user experience for decades now.
> The fact we're now promoting and discussing fucking Twitter threads is absurd.
The ML community is really big on Twitter. I'm honestly quite surprised that you're angry or surprised at this. That means either you're very disconnected from the actual ML community, which is fine of course, but then maybe you should hold your opinions a bit less tightly. Alternatively, you're ideologically against Twitter, which brings me to:
> It's not about pride and identity, you dingus.
Maybe it is? There's a very-online-tech-person identity that I'm familiar with that hates Twitter because they think that Twitter's short post length and other cultural factors on the site contributed to bad discourse quality. I used to buy it, but I've stopped because HN and Reddit are equally filled with terrible comments that generate more heat than light.
FWIW, a bunch of ML researchers tried to switch to Bluesky but got so much hate, including death threats, sent their way that they all noped back to Twitter. That's the other identity portion of it: post-Musk, there is a set of folks who hate Twitter ideologically and have built an identity around that. Unfortunately this identity is also anti-AI enough that it's willing to act with toxicity toward ML researchers. Tech cynicism and anti-capitalism have some tie-ins with this as well.
So IMO there is an identity aspect to this. It might not be the "true hacker" identity the GP talks about, but I do very much think this pro- vs. anti-AI fight has turned into another culture-war axis on HN, one that has more to do with your identity or tribe than with any reasoned arguments.
Based on the past history with FrontierMath and AIME 2025 [1],[2], I would not trust announcements that can't be independently verified. I am excited to try it out, though.
Also, the performance of LLMs on IMO 2025 was not even bronze [3].
Finally, this article shows that LLMs were mostly just bluffing [4] on USAMO 2025.
[1] https://www.reddit.com/r/slatestarcodex/comments/1i53ih7/fro...
[2] https://x.com/DimitrisPapail/status/1888325914603516214
[3] https://matharena.ai/imo/
[4] https://arxiv.org/pdf/2503.21934
The solutions were publicly posted to GitHub: https://github.com/aw31/openai-imo-2025-proofs/tree/main
Did humans formalize the inputs, or was the exact natural-language input provided to the LLM? A lot of detail is missing about the methodology used, not to mention any independent validation.
My skepticism stems from the past FrontierMath announcement, which turned out to be a bluff.
1 reply →
It's obvious why, though. Typical "tech" culture values human ingenuity, creativity, intelligence, and agency, owing to its history. Someone coming up with a new algorithm in their garage can build a billion-dollar business; it is an indie-hacker culture that has historically valued "human intelligence".
i.e., it is a culture of meritocracy, where no matter your social connections or your political or financial capital, if you are smart and driven you can make it.
AI flips that around. It devalues human intelligence and moves the moats back to the ol' school things of money, influence, and power. The big winners are no longer the hardest working or the most intelligent. Intelligence is devalued: as a wealthy person I now have intelligence at my fingertips, making it a commodity rather than a virtue. Money, power, and connections are now the moat.
If all you have is your talent, the future could look quite scary in an AI world long term. Money buys the best models; connections, wealth, and power become the remaining moats. That doesn't typically gel with the "indie hacker" culture of most tech forums.
Basically this. Not sure why people here love to doubt AI progress when it is clearly making strides.
Because, per the corporations' statements, AI is now top 0.1% of PhDs in math, coding, physics, law, medicine, etc., yet when I try it myself for my work it makes stupid mistakes. So I suspect the corps are very pushy about manipulating metrics/benchmarks.
I don't doubt the genuine progress in the field (from like, a research perspective) but my experience with commercial LLM products comes absolutely nowhere close to the hype.
It's reasonable to be suspicious of self aggrandizing claims from giant companies hyping a product, and it's hard not to be cynical when every forced AI interaction (be it Google search or my corporate managers or whatever) makes my day worse.
HN feels very low signal, since it's populated by people who barely interact with the real world
X is higher signal, but very group thinky. It's great if you want to know the trends, but gotta be careful not to jump off the cliff with the lemmings.
Highest signal is obviously non digital. Going to meetups, coffee/beers with friends, working with your hands, etc.
it used to be high signal though. you have to wonder if the type of people posting on here is different than it used to be
Meh. Some overhype, some underhype. People like you whine and then don't want to listen to any technical concerns.
Some of us are implementing things in relation to AI, so we know it's not about "increasing performance of models" but actually about the right solution for the right problem.
If you think Twitter has "educated takes", then maybe go there and stop being a pretentious schmuck over here.
Talent drain, lol. I'd much rather have skeptics and good tips than usernames, follows and social media engagement.
Both sides are not equally wrong, clearly. Until yesterday, prediction markets were saying the probability of an AI getting a gold medal at the IMO in 2025 was <20%. So clearly we should be more hyped, not less.
Prediction markets are companies and people trying to make money on volatility. Who cares? Why do people treat them as some prescient being?
cynacism -> cynicism
It may be a talent drain too, but at the least it's selection bias. People just have enough and go away, or stop commenting. At the extreme, that leads to a downward spiral in the epistemology of the site. Look at how AI fares on Bluesky.
As a partially separate issue, there are people trying to punish comments quoting AI with downvotes. The reply doesn't need to be uninformative; just sourcing it to AI is enough. A random internet dude saying the same thing with less justification or detail is fine by them.
It's because hackers are fed up with being conned by corporations that steal our code, ideas, and data. They start out "open" only to rug-pull. "Pissing in the pool of open source."
As hackers, we have more responsibility than the general public because we understand the tech and its side effects. We are the first line of defense, so it is important to speak out, not only to be on the right side of history but also to protect society.
It’s the same with anything related to cryptocurrency. HN has a hate boner for certain topics.
The overconfidence/short-sightedness on HN about AI is exhausting. Half the comments are some weird form of explaining how developers will be obsolete in five years and how close we are to AGI.
> Half the comments are some weird form of explaining how developers will be obsolete in five years and how close we are to AGI.
I do not see that at all in this comment section.
There is a lot of denial and cynicism like the parent comment suggested. The comments trying to dismiss this as just “some high school math problem” are the funniest example.
[flagged]
3 replies →
I went through the thread and saw nothing that looked like this.
I don’t think developers will be obsolete in five years. I don’t think AGI is around the corner. But I do think this is the biggest breakthrough in computer science history.
I worked on accelerating DNNs a little less than a decade ago and had you shown me what we’re seeing now with LLMs I’d say it was closer to 50 years out than 20 years out.
it's very clearly a major breakthrough for humanity
1 reply →
[flagged]
3 replies →
Greatest breakthrough in compsci.
You mean the one that paves the way for ancient Egyptian slave worker economies?
Or totalitarian rule that 1984 couldn't imagine?
Or...... Worse?
The intermediate classes of society have always relied on intelligence and competence to extract money from the powerful.
AI means those classes no longer have power.
2 replies →
I don't typically find this to be true. There is a definite cynicism on HN, especially when it comes to OpenAI. You already know what you will see: low-quality garbage like "I remember when OpenAI was open", "remember when they used to publish research", "sama cannot be trusted". It's an endless barrage of garbage.
it's honestly ruining this website, you can't even read the comment sections anymore
2 replies →
Nobody likes the idea that this is only "economically superior AI": not as good as humans, but a LOT cheaper.
The "it will just get better" line is bait for the investors feeding the bubble. The tech companies learned from the past, and they are riding and managing the bubble to extract maximum ROI before it pops.
The reality is that a lot of work done by humans can be replaced by an LLM, with lower quality and nuance. The loss in sales/satisfaction/etc. is more than offset by the reduced cost.
The current crop of LLMs are enshittification accelerators, and that will have real effects.
Incredible how many HNers cannot see this comment for what it is.
> I've been reading this website for probably 15 years, its never been this bad... all the actual educated takes are on X
Almost every technical comment on HN is wrong (see for example essentially all the discussion of Rust async, in which people keep making up silly claims that Rust maintainers then attempt to patiently explain are wrong).
The idea that the "educated" takes are on X though... that's crazy talk.
With regard to AI and LLMs, Twitter/X is actually the only place where all of the industry people are discussing it.
There are a bunch of great accounts to follow that are only really posting content to X.
Karpathy, nearcyan, kalomaze, all of the OpenAI researchers (including the author of the thread this discussion is about), many Anthropic researchers. It's such a meme that you see people discuss reading the Twitter thread plus the paper, because the thread gives useful additional context.
HN still has great comment sections on maker-style posts and on networking stuff, but I no longer enjoy the discussions w.r.t. AI here. It's too hyperbolic.
that people on here don't know a lot of the leading researchers only post on X is a tell in itself
I see the same effect regarding macroeconomic discussions. Great content on X that is head and shoulders (says me) above discussions on other platforms.
I'm unconvinced Twitter is a very good medium for serious technical discussion. I imagine a lot of this happens on the sidelines at conferences, on mailing threads, and inside the organisations actually doing work on AI (e.g. universities, Anthropic). The people who are doing the work are also often not the people who have time for Twitter.
1 reply →
Too hyperbolic for, against, or either way?
1 reply →
Yup! People here are great hackers, but it's almost like they have their heads up their own asses when it comes to AI/ML.
Most of HN was very wrong about LLMs.
This is true of every forum and every topic. When you actually know something about the topic you realize 90% of the takes about it are garbage.
But in most other sites the statistic is 99%, so HN is still doing much better than average.
Not on AI; here it's really a fringe environment of relatively uninformed commenters compared to X. X has its craziness, but you can curate your feed by using lists. Here I can't choose whom to follow.
And like I said, the researchers themselves are on X; even Gary Marcus is there. ;)
[dead]
Software that mangles data on the regular should be thrown away.
How is it rational to 10x the budget over and over again when it mangles data every time?
The mind blowing thing is not being skeptical of that approach, it's defending it. It has become an article of faith.
It would be great to have AI chatbots. But responding to chatbots that mangle data by increasing their budgets by orders of magnitude is just doubling down on the same mistake over and over again.
HN doesn't have a strong enough protection against bots, so foreign influence campaign bots with the goal of spreading negative sentiment about American technology companies are, I believe, very common here.
"Please don't post insinuations about astroturfing, shilling, bots, brigading, foreign agents and the like. It degrades discussion and is usually mistaken. If you're worried about abuse, email hn@ycombinator.com and we'll look at the data."
https://news.ycombinator.com/newsguidelines.html
https://hn.algolia.com/?sort=byDate&dateRange=all&type=comme...