← Back to context

Comment by Wissenschafter

3 days ago

[flagged]

It’s primarily about confidence and motivations. People with high confidence at what they do are supremely unmotivated to use something like AI to solve problems they don’t have.

People with low confidence will be super excited for AI because it solves problems they weren’t even thinking about.

Executives that don’t write code are super excited about AI because hopefully it means they can continue to high low confidence people, which are plentiful and cost less.

I am sitting on the sidelines watching in disbelief. I don’t use AI and don’t plan to. I used to write JavaScript for a living and still get JavaScript job alerts from a lot of job boards. The compensation for JavaScript work is starting to shoot through the roof as employers are moving away from garbage like React and Angular. The recent jobs are becoming fewer and are more reliant upon people with tons of experience that can actually program. Clearly AI is not replacing positions for higher talent with greater than 8-12 years experience.

  • "I refuse to pick up the magic hammer that nails things in by me just thinking about it while holding it in my hand; nosiree, give me that old fashioned hammer so I can sit here and nail some nails into a 2x4 while the guy using the other tool is building whole slop neighborhoods. Ha, that guy is so dumb and I'm too cool because I won't ever use that hammer."

    I don't get it. Proudly saying you don't plan to use better tools is not some 'cool' look or the brag you think it is. You're just making yourself less valuable and being ignorant on purpose.

    Cool.

Blindly dismissing everyone not impressed by the AI hype only serves to further delegitimize the AI hype.

  • [flagged]

    • Totally right! The folks who were very recently telling us we were all going to be trading NFTs in the metaverse are the clear eyed optimists not motivated by anything but rational consideration for the truth.

      2 replies →

    • It seems like you get personally offended by people using their critical reasoning abilities.

      I know a folk who did a PhD in the area, and work at one of those frontier labs as a researcher, and privately he is as sceptical as the most "stubborn" HN denizen you mention.

      Unbounded enthusiasm for AI without any reservations is something that can only be born out of minds utterly deprived of imagination and creativity.

      1 reply →

As a senior dev who has been using these tools to their fullest effectiveness in production environments, until AI can reduce the entropy of a codebase while still adding capability I will continue to be underwhelmed.

  • Did you ever ask it to do that?

    • Did one ever ask if AI can reduce entropy of a growing code base? You mustn’t have read that famous short story by Asimov.

    • "They Don't Think It Be Like It Is But It Do", is all I can think.

      The simple truth is, these senior devs have no idea how to use these new tools to their capabilities.

      It's a simple case of PEBCAK; ironic considering most of them would be the ones throwing that term around 20 years ago.

      5 replies →

When you use the term "luddite" in the way you do, you reveal that you aren't aware of who the Luddites actually were. Luddites weren't anti-technology; many of them were experts at using advanced machinery. What they opposed was the poor quality output of automated factories and the use of machinery to circumvent apprenticeships and decent wages.

As for your promise of a great leap at some vague point in the future, that's such a widely-mocked AI industry trope at this point that it's a little embarrassing you went there.

  • The only thing that will be embarrassing is how badly your comments, and those like yours will age.

    I don't know what happened to this place, but it went from actual young people sharing information on the newest things in tech, tech philosophy, interesting stuff; to now old men yelling at the clouds about the new tech.

    • I agree with your basic point, but it’s not just an age thing. There are plenty of older people enthusiastically using AI for software development now. Just as an example, Steve Yegge, who vibe-coded the Beads and Gas Town AI projects, is around 57. I’m a bit older than him, and I’m working with Claude, Gemini, and Codex on a daily basis, having great fun and learning tons.

      What we seem to be seeing with AI is that the prospect of completely changing the way you work is threatening for a lot of people, and of course so is the prospect of losing your job. When people are faced with something threatening, a common reaction is to criticize it in every possible way - you can’t admit anything about it is good because that risks encouraging the threat. It’s not exactly rational, but it’s what people often do.

      HN has never been exempt from that, it’s just that AI is a big change that brings out this instinct in many more people.

> You won't be 'underwhelmed' long.

This has been your constant mantra for 3 years now and is part of the reason people are underwhelmed.

> You won't be 'underwhelmed' long.

"Yes, it sucks now, but believe me it won't be for long" spiel has been hyped for several years now.

Oh, don't get me wrong, these tools are amazing. But just yesterday a very small refactoring resulted in 480 fully duplicated lines in a 5000-line codebase (on top of extremely bad DB access patterns) despite all the best shamanic rituals this world has to offer [1].

So yeah, senior engineers especially use these tools daily, and keep being completely honest about their issues and shortcomings. Unlike the hype and scam artists.

[1] Oh, sorry. I meant to say skills, context engineering and management, memory, prompt engineering.

  • " But just yesterday a very small refactoring resulted in 480 fully duplicated lines in a 5000-line codebase (on top of extremely bad DB access patterns) despite all the best shamanic rituals this world has to offer."

    Get better rituals. PEBCAK.

    • Yeah yeah. There exists some secret ritual known only to the selected elite that make these tools perfect, no mistakes.

      (The only people who say that are scammers and peddlers, and the only people who believe that are juniors or don't know how to code at all)

      2 replies →

Extraordinary claims require extraordinary evidence.

And even staying within the comfort of AI enthusiasm: Google wasn't exactly leading in this race. If you have this much confidence in what those presenters and engineers at Google told you, you now have some opportunities to make a lot of money.

  • https://github.com/gca-americas/way-back-home

    Anyone here who is currently 'underwhelmed'; please get through all 5 levels here and then say the same thing.

    This is just the beginning. I seriously can't believe this place turned into neo-boomerism ideology on tech. I honestly don't get it, just makes me think everyone here talking about being seniors and architecture and blah blah; don't actually know shit, and aren't actually good at what they do.

The same could have been said to someone who had yet to encounter generative AI. "Wait until you try it, you won't be underwhelmed for long".

But here we are.

Over time and usage the limitations of a thing become apparent.

Even SOTA models when used in agents in simple NLP tasks such as text classification still fail more times than acceptable when evaluated against a realistic evaluation dataset with sufficient example variety and with some adversarial prompts included.

Improving such uses cases is mostly an artisanal endeavor, sometimes a few-shot prompt improves things, sometimes it improves things at the expense of kind of overfitting it, sometimes structured reasoning works, sometimes it doesn't, or sometimes it works and then the latency and token explodes, etc etc....

And yet a lot of teams don't see this problem because they don't care much about evaluations, and will only find this issues in production a few months after deployment.

Are those who care about evaluation luddites?

Is it just me or is there a growing disconnect between AI insiders and everyone…

  • There are no "AI-insiders".

    "AI-insiders" are trying to market their tools to you. See Anthropic's continuous lithany of "all programmers will be replaced in 6 months" while they struggle to make their TUI API wrapper consume less than 2-4 GB of RAM (they brought it down from 68 GB[1]), or have a decent uptime.

    [1] Yes, you read it right. They had to buy a team of actual engineers to do the job: https://x.com/jarredsumner/status/2026497606575398987

> When did Hacker news start becoming a luddite, bad takes everywhere I look, feels like everyone is '50 year old burnt out guy' that has no idea what is going on vibe?

Much to the opposite, I think healthy skepticism is a sign of maturity. The overeager embracing of hype cycles is extremely cringe.

> I just got back from a SAIRS conference at UCLA and talked directly with some of the presenters and engineers at Google.

Cringe, as I was saying.

Conferences are just mutual fart smelling, swagger, and expensing trips on company momey. I am not against it, but treating your participation in some conference as a sign of the future is very silly.

Every conference I participated always overhyped every current bullshit.

tl;dr - "I will dismiss this because of the time I've been spending in a pro-AI bubble".

  • I never needed to be in a pro-AI bubble to dismiss bullshit; I wrote my capstone Philosophy paper on AI and Existentialism back in 2014.

    I am dismissing the neo-luddites because they are stupid and wrong, not because I am in a pro-AI bubble.

    • There is emotional bias and stubbornness in nearly all of your responses in this thread, the very same traits you lambasted HN broadly for in another comment. Rather than calling people, "stupid and wrong", why don't you make your case?

      If you don't want to be bothered to argue your points, and this place truly chaps your ass to the degree it does, why even waste your time commenting at all when, according to you, there's a more fun place with bigger brains that-a-way, as far as you're concerned? points

      I mean, it takes more energy and effort to be angry and annoyed than to just move on and leave us luddites in the dust.

      6 replies →

This place actually hates all technology after the invention of Lisp. And there's the common online incentive to dunk on things that also exists here. Hence the infamous Dropbox comment and others.

But it's also been anti-Javascript, anti-cloud, anti-social-media, anti-crypto, anti-React, and so on.

I would therefore not in a million years expect it to be pro-LLM, and this is so obvious to me that I'm a bit suspicious of your motives for acting confused about it, as if it was ever any different.

  • > But it's also been anti-Javascript, anti-cloud, anti-social-media, anti-crypto, anti-React, and so on.

    It was never any of these things, and you're misremembering if you think it was. There's never been a mono-opinion held by some all-encompassing hivemind.

    • I'm not misremembering. You can easily find monoculturey threads about all of these things. Just because there's a small slice of counter views, doesn't mean the average HN positions on these things isn't or wasn't decidedly negative.

  • It's literally unbearable now. I don't know how the place that once used to be exciting and deep in the know; is now old-man-yells-at-clouds ignorant of what is happening. It's actually really sad. /g/ and /r/accelerate seem like the last bastions of actual intelligent people discussing these things.

  • It's the same as the sad drunk man talking bad about the King at a dirty bar. It makes them feel better to compare or say they're better in some way.