Comment by thomassmith65
13 hours ago
The hype around AI is admittedly annoying - especially from the Wall St crowd who don't know how to pronounce 'Nvidia' correctly, and who haven't managed to internalize the fact that the chatbots they use hallucinate.
It really is 'different', though, in the same way the Internet was.
It took about 20 years (i.e., since The World ISP) for the Internet to work its way into every facet of life. And the dot-com bubble popped halfway through that period.
AI might 'underwhelm' for another five or ten years. And then it won't. Whether that's good or bad, I don't know.
The only people underwhelmed by AI in February 2026 are people who have formed an identity around being AI skeptics over the last couple of years and are struggling to shed it. I haven't met anyone who has seriously used the new models who isn't at least a bit awed and disturbed.
What disturbs me is the speed of improvement, more so than the capability.
Maybe it will plateau in the next 6-24 months, in which case it will “only” be as disruptive as the computer or industrial revolutions, albeit at a faster pace.
If not, I don’t think anyone can predict.
That's very true in terms of how capable these chatbots clearly are, but I believe the author was using 'underwhelming' to refer to the societal impact.
So far, life goes on roughly the same as it did five years ago. This can feel 'underwhelming' in contrast to the onslaught of public discussion about, and huge investments in, AI.
Most of us here on HN are programmers, and we all know how radically LLMs have changed our code projects. Even so, the change to our everyday lives (aside from our work or hobby projects) is not, just yet, glaringly obvious. This year, it's mainly... every website shoving an AI box at us that nobody seems to want!
There is also that contrast about it being genuinely useful for work/programming and the fact that, for now, it changes the rest of my life in a negative way - by making PC hardware unavailable, by hearing every day I'll be out of work in 6-24 months, and by having to deal with people taking the information from Chat for granted.
Not true. I'm a really heavy user of AI, and it's improved my productivity dramatically as a developer, but it doesn't work in every situation, even in programming. I see it as an indispensable tool, but it's not, right now, a tool that will replace me as a programmer, or product manager, or salesperson, or marketer, or (in my case) an owner and investor.
Will that happen in the future? Maybe. But I don't have enough insight into how AI is evolving in the labs to make a judgement on that.
This statement is really annoying and getting boring. There are A LOT of us who have built careers evaluating technology with healthy skepticism, finding where it works and where it doesn't, excited to share and learn - and we've heard "this time it's different" many times. Now, because we refuse to jump in without that same nuance and thought and proclaim "everything's different overnight!", we're branded as Luddites when we're really trying to find a balance.
I don't hear people saying "nothing is going to change", but I do hear questions about the timeline and whether the current levels of investment match the returns. Branding these people as stuck in some sort of negative identity is bullshit.
What is your position on AI?
You’re creating a false dichotomy to alienate perceived opponents. Frankly, it’s really annoying and closed-minded, and you haven’t contributed anything to the conversation.
You're likely to find more nuance in opposing views than your "underwhelmed by AI" generalisation could represent.
"AI is a bubble!"
"AI will change everything!"
Few seem to understand that both of the above can be true. The parallel you draw to the internet revolution is apt; dot-coms were both a bubble and changed everything.
It literally describes the Gartner hype cycle. This article is pointless; the only thing that matters is what survives the cycle with over 1M users. AI will have billions of users when the GHC is on the back end.
I think a good analogy will be the way word processors changed printing. Suddenly anyone with access to a computer had the ability to do professional-level editing and layout. Most of them didn’t have the taste or skills to use the tools to the fullest, but it still opened up a ton of possibilities that weren’t available before, because it was never practical to hire an actual professional to do a poster for a dinky church bake sale. But now, church bake sales can have pretty slick-looking posters (and websites), depending on whether any of the volunteers cares enough to make one.
The stuff LLMs will democratize will be a lot more impactful than nice posters for car wash fundraisers, though. So in that sense it will be different, but I don’t think it will crack the market for proficient experts in the field, in the same way Photoshop didn’t destroy graphic design and CAD didn’t destroy drafting. It may get rid of the market for a lot of the second-tier bootcamp-grad talent, though, so I wouldn’t be getting into that right now if I could help it.
I think this is exactly right. I've been thinking of "this time" as similar to the advent of digital spreadsheets. Paper spreadsheets existed for centuries, but spreadsheet programs transformed work that took hours or weeks into work that takes seconds. You still had to know what you were doing, and if you knew what you were doing, you were easily 10x faster than those who didn't.
I think we are in a similar situation with code generation now; the only difference in my mind is that LLMs come with a massive platform risk. Who's to say that one day Anthropic decides my company is too much of a competitor to use their tool (like they've already done with OpenAI)? Or what if they decide that, instead of pulling their product from my use, they just make it generate worse code, or even insert malicious payloads? A dependence on these tools is wildly more risky than a dependency on a word processor or a spreadsheet program. It reminds me of the arguments around net neutrality, and I cannot fathom how people building on top of, and with, these tools do not see the mountain of risks around them.
We have a generation of computer programmers who have known nothing but building on top of AWS. Vendor lock-in at a career level. Most were building on top of Microsoft before that. Platform agnosticism, open source, and specifically the ownership and control they bring were mostly niche concerns.
I don’t see that changing.
What world are you living in where AI is underwhelming currently? I can’t even comprehend this. Are you just not using it or something?