Comment by ike2792

3 days ago

This has mirrored what I've seen in my company. People in the data science/ML part of the company are super excited about AI and are always giving presentations on it and evangelizing it. Most engineers in other areas, though, are generally underwhelmed every time they try using it. It's being heavily pushed by AI "experts" and senior leaders, but the enthusiasm on the ground is lacking as results rarely live up to the extremely rosy promises that the "experts" keep making. Meanwhile, everyone can read the news about layoffs attributed to AI and can see that hiring (especially of junior engineers) has slowed to a trickle. You can only fool people for so long.

> Meanwhile, everyone can read the news about layoffs attributed to AI and can see that hiring (especially of junior engineers) has slowed to a trickle.

According to FRED/Indeed[1], software job openings have been roughly flat for 2-3 years, and they've actually been slightly increasing again. What data source are you looking at?

[1] https://fred.stlouisfed.org/series/IHLIDXUSTPSOFTDEVE

  • Flat at 60% of pre-covid hiring while the number of graduates continue to increase and there's still a backlog of people who were laid off. That's not a particularly optimism inducing hiring market.

    • Do not with a straight face act like pre-COVID hiring levels were a Good Thing. They weren’t. They were a symptom of a broken economy that you personally happened to pretty directly benefit from.

      11 replies →

  • There have been a lot of headlines the past couple years about companies stating they are doing layoffs or slowing hiring because of AI. I would bet the average adult pays way more attention to news headlines than FRED reports.

    I also don't see why everyone would dismiss the statements of large company CEOs about why they are making hiring/firing decisions, regardless of what some statistics say.

    • Because the messaging of the CEOs is intentional, to both do 'damage control' and influence stock price / valuation. It's not neutral messaging.

      1 reply →

  • The companies doing the layoffs are themselves stating AI as a reason; that’s the news people are responding to. The parent didn’t claim that it’s based on reality, but it informs public opinion.

  • >According to FRED/Indeed[1], software job openings have been roughly flat for 2-3 years, and they've actually been slightly increasing again.

    None of this contradicts OP's claim, because at least anecdotally, juniors/interns are getting disproportionately squeezed by AI. Why hire an intern to write random scripts/tests for you, when claude code does the same thing? Therefore overall job posting could be flat or slightly rising, but that's only because everyone is rushing to hire senior/principals staff to wrangle all the AI agents, offsetting the junior losses.

    • We haven't hired for about 6 months, but the value of a junior is that they eventually become not junior, and if you value them, you pay them what they're worth and they stay.

      My guess is that AI can now assist juniors just as it can assist seniors, and they'll become competent in the correct skillset needed for the future, just as everyone before them has.

  • They are increasing, but the level is still lower than it's been since Oct 2020. In my experience at two different companies since 2020, hiring more or less stopped sometime in 2022 to early 2023. In early 2025, some hiring started again but it's still a very low rate compared to pre-COVID, particularly for new college grads. While I don't believe that AI has actually taken any significant number of jobs in the software field, I do think it's being used as a convenient excuse by executives to lay people off. Regardless of the actual numbers though, the general perception in tech is "lots of layoffs are happening with not so much hiring" and "AI has something to do with it (either directly or as an excuse)."

  • "Opening" doesn't mean anything. An actual job where someone is working and is being paid a salary means something.

  • Software job openings are mostly bullshit. Companies post ghost jobs en masse, while refusing to hire people. You can ask anyone that's had to look for a job recently and see how bad the market is.

  • Is that data useful at all? Indeed postings are a poor proxy for how many people actually get hired. One of the major problems we have is that employment statistics are largely just estimates, and don’t reflect reality on the ground. Factor in the Trump admin firing most of the BLS and other agencies for not giving him the numbers he wants, and there really is no reliable data.

I feel like the junior problem contributes more heavily than people might think. The people on top see juniors as replaceable since they view them as cheap menial labor, whereas most seniors at least acknowledge the human element as part of the benefit

  • Today's juniors are tomorrow's seniors, or more importantly today's seniors' yesterday.

    They do the dirty, repetitive work, learn the systems inside out, take note of the flaws, and fix them if they are motivated and the system/process allows.

    Thinking them as replaceable, worthless gears is allowing your organization rot from inside. I can't believe people can't see it.

    • Plenty of people see it - but, to a hiring team, a junior is an extremely risky investment. They demand a high cost relative to when they can start contributing actual value, may not work out, or may hop ship the moment they become competent. It is rational for a business to want to eliminate this risk. It's possible that everyone is acting rationally here, knowing it will lead to a result that is not favorable down the line - because the immediate benefit is too great to consider the latter.

      In other words the gamble of hiring expensive juniors with shiny degrees is greater to them than the gamble of not having competent seniors a few years down the line. And that risk may be overblown - people are still hiring some juniors, it's not like it has stopped entirely - so future seniors will likely just be worth more than they are currently. To some, that may be worth the risk, especially if you believe AI will continue to get stronger.

      I am not saying I agree with this decision making, more pointing out the thought process. We have had to have similar discussions where I am but are still hiring juniors, FYI. That's basically all we're hiring right now, actually, because the market for strong juniors is very good right now.

      7 replies →

    • If people need to be retained for many years, is the solution to give a bunch of stock that vests over many years? It would be interesting if such incentives (the need to hang onto talent that has been incubated for many years) could bring about a return to one-company careers.

    • This would have been a great revelation for decision-makers across the economy to have had about 20 years ago. Instead, they took every opportunity to turn the job market into the Hunger Games. Congrats to the people who survived the Cornucopia; the rest of us have been bleeding out for well over a decade.

  • It's not that juniors are replaceable, but that hiring them is a high variance move. Few, if any, know whether a candidate is just memorizing leetcode and is going to be a dud, costing you effort before they get a PIP, or you are hiring a very talented individual that will be contributing in 2 weeks . With seniors, you risk less, just because the track record makes the very worst candidates unlikely.

    • maybe they should have seniors choose juniors, they tend to be good at recognizing the worth someone could bring

> but the enthusiasm on the ground is lacking

Using claude and friends takes all the fun out of the job, so I'm not surprised engineers are not enthusiastic. It's cool for 1 month then you realize we went from solving problems and implementing algos and optimizing slow code and fixing security issues and other fun stuff, to writing prompts all day long.

The use-cases for data science and other engineers are different. AI is not uniformly good at all kinds of development.

There is an issue with execs pushing it though. You have people at the top of the company with little to no idea how people work attempting to micromanage tool usage. It is as if you had a group of execs determining what IDEs people could use.

No-one is getting fired because of AI. The start of this year is the start of companies beginning to use AI. The reason layoffs are happening is because of the massive overhiring after Covid.

  • > overhiring after Covid

    How long after COVID are we going to be able to keep using this excuse? This is starting to feel like the politician blaming his predecessor even though he's been in office for years. In the year 2033, Company X lays off another 10,000, just as it did each year since 2023, again blaming massive over-hiring during COVID, ten years ago.

    • > How long after COVID are we going to be able to keep using this excuse?

      I am with you but if you look what happened after COVID it is a big line going waaaaay up. COVID was a significant event and there is no way around it, no? the OPs comment is invalid because we below the pre-COVID (by miles) but COVID should be taken into account (everyone seems to use it to further some agenda by looking at just one particular aspect of what happened post-COVID)

  •   > It is as if you had a group of execs determining what IDEs people could use.
    

    its worse than that; its more like determining what ide you use and also mandating how much time you spend in it, and then chewing you out at review time because you used jira and confluence too much instead of writing md files in the blessed ide of their choice

I have a similar experience. I seldom use it to test to see its current state, and it generally (85% of the time) gives wrong answers. Then I discuss this with a couple of friends:

    Me: I tried $AI recently, I asked $question, it hallucinated.
    Them: But it sucks at that.
    Me: Then what's good at? It's useful if it helps me out of a ditch.
    Them: It depends on the domain...

These guys are not evangelists or anything, but colleagues who want to reduce their workloads. If it can't help with what I need, then how can it help me at all?

At the end of the day, I don't plan to use this at daily capacity, but with all the resources poured into this, it's still underwhelming.

  • A friend of mine has copilot integrated with his storage appliance that all the business docs are hosted on for his firm. He says it's amazing.

    My company uses Sharepoint, and can digest all of the documents I have access to on that, one drive, teams, outlooks, etc. across my tenant. Most of the time, it's pretty useless.

    There must be some reason for these two disparate experiences. It's the same product offering. I couldn't tell you.

    • Reminds me of a bounty I received recently. Someone essentially exposed a bedrock agent that had access to the companies internal documents to the internet unauthenticated. They actually had the reports and notes for other bug bounties that had been reported to them as well.

  • Tell claude what you do and ask it where it can be the most helpful. It is true that the tool has to be learned, and won't help everywhere. If you are doing web dev just to make a tool, it is purely magical. I've found it to be mostly useless in making geed helm charts.

    • I generally use them for researching things which I was unable to find anywhere else. For example, for Gemini I have two extreme examples:

      I asked for a concept in Tango music, with a long prompt explaining what I'm looking for. It brought me back a single, Spanish YouTube Video explaining it perfectly alongside its slightly wrong summary, but the video was spot on, and I got what I needed.

      Then I asked for something else about a musical instrument, again with a very detailed prompt, and it gave me a very confident answer suggesting that mine is broken and needs to be serviced. After an e-mail to the maker of the said instrument, giving the same model number (and providing a serial) and asking the same question, I got a reply saying that it's supposed to that and it's perfectly fine, it turned out that Gemini hallucinated pretty wildly.

      For programming I don't use AI at all. I have a habit of reading library references and writing code directly by RTFM'ing the official docs of what I'm working with. It provides more depth, and I do nail the correct usage in less time.

      2 replies →

Funny, I was supposed to be the expert in my company, but I was run over by the demo folks, while I was uselessly preaching about evaluation, safeguards, guardrails, observability.

For mine it’s worse because we have new leadership who believes in it to a far larger extent than it can deliver. Now a massive amount of our workforce is building up proofs of concepts and spitting out tons of effectively useless output to look good because of how strongly they’ve signaled it’s good for careers here to fully embrace it. It’s a massive mess and there’s nobody to clean it up, and the voices advocating for rigor or good engineering practices are being sidelined.

It’s full out mania. As someone raised in and who escaped a cult, I am having to use every tool in my very large toolbox to stay sane while I wait for this to pass and die down or make my move towards a place that still cares whether their product works.

  • If the majority of engineers decide to rot their brains and abandon best practices, the industry will eventually implode. Stay true to your beliefs and use the bare minimum of AI to keep your job.

    We’re in what I would call the “dark ages” of tech. There will be a new renaissance led by those who used this as an opportunity to build skills and tools that are genuinely useful and ingenious.

    If you keep a long-term horizon this is the perfect opportunity to work on a solo project in stealth mode. Or build professional connections with others who see things the way you do.

    • “What I would call […]”.

      When people talk about one’s salary being an imperative to them understanding something, they are talking about exactly you. “This’ll all wash over and we’ll be back to the good old days that I’m used to” has never happened. Ever.

      2 replies →

  • [flagged]

    • AI isn't going away, but leadership expectations to (say) increase "efficiency" by 50% in the next 6 months through "AI" will. Eventually. After lots of fudging of numbers and general reluctance to admit that the Emperor's clothes are looking awfully translucent.

      1 reply →

    • If LOC and tokenmaxxing is the future, nobody will have a job.

      I use AI all day every day, I’m not a luddite, I’m someone who has seen people take the same shitty shortcuts to working systems they are now. They’re wasting tons of money and smarter competitors who can actually think clearly about the benefits and costs are gonna eat their lunch.

  • Early stages of any major disruptive technology will have hype due to get-rich-quick folks. Dot-com boom & bust of 2000 is similar. But the underlying technology (internet) defined our lives forever.

    I don't know why people are comparing the Day-1 of one technology with the Day-1000 of another. Yes, AI is useless in many fields - NOW. But you can't imagine doing any work without in a couple years.

    Like the kids used to ask - 'How did they build Google without Google?'

    Now their kids will ask - 'How did they build chatGPT without chatGPT'?

    • ChatGPT has been around for 4 years at this point. Not very long, but I’ve heard of the ‘imagine what it’ll do in one year’ spiel quite a few times by now.

      2 replies →

    • 2 things - it’s not day 1 for AI, and it’s also not dot-com (which dropped the nasdaq 80% btw). It’s the entire American economy right now. When it can’t deliver anything approaching its hype, just like all the data centers that can’t deliver on power, the profit margins that can’t deliver, and the promises of massive 500% revenue increases this fiscal year… sorry, I was raised in a cult and know what the fuck I’m seeing, sadly among a lot of otherwise intelligent people here.

      I expect I’ll be using LLMs now and in the future, but the public is far more right about the companies and the people running them than the tech “insiders” here.

How can the AI be lacking in results and also at the same time responsible for layoffs and slowing of hiring?

Wouldn't it be one or the other?

  • You replace half of a team with AI. Salary cost go immediately down, but team output can keep up for some time. You don't see the technical debt, the security issues and the prompt injection which will result in wrong invoices being sent. In six months suddenly there will be a big problem, but this quarter a lot of shareholders are happy about the cost-cutting. You may even be promoted by the time shit hits the fan, and it won't even be your problem anymore.

    On the other hand there probably also is a general correction in the market after the covid hiring spree.

  • You assume CEOs are completely rational actors.

    The reality is most of them are so divorced from reality that they think they are infallible and AI will pick up the slack because they want it to be true.

  • No, layoffs happen due to leadership’s expectations. It’s never been about reality, layoffs and both massive hiring to ramp teams are based on vibes.

    The lack of results is felt by those using it to assist with their work daily.

    • And specifically, their expectations as to what will positively impact the stock price. Shareholder value this quarter is more important than keeping the company afloat next quarter.

  • It can be both if for the majority of layoffs, AI is just a scapegoat to act as cover for cuts made for financial reasons or offshoring and not the actual cause.

  • AI could be a huge net benefit, and justify large layoffs.

    AI could be a huge short-term benefit, justify layoffs now, so long as you (the exec doing the laying off) don't have to worry about the long term

    AI could have middling net benefit, but be a great excuse to justify layoffs now. In this scenario, the people laid off and those that remain bear the cost (one, losing their jobs; those that remain, burning out with the extra workload) etc etc, many scenarios to consider...

  • When Block laid off 40% of their staff Jack Dorsey said it was because of AI. Whether or not you believe him is a different question.

    • He went all in on block chain, so it would be consistent with previous hyped tech. In this case I would believe him.

  • Not if the actual vs stated reasons for the layoffs have nothing to do with AI.

  • and surely, all of those companies are being entirely truthful about the reason for the layoffs.

  • can happen at the same time when businesses speculatively fire workers to replace with AI. The lack of results might bite them in the ass and the bubble might pop. Or not, but they are going long on their AI position

  • AI hype -> layoffs -> AI underperforms -> ????

    Hilariously, it's the exact same playbook as the big third-world-country-outsourcing hype from a few years ago.

I totally understand where you are coming from and my personal take is LLMs are to "stuff" as a drill driver is to a screw driver. They are a tool, just a tool. ... bear with ...

I over floored several rooms in my house (UK, '20s build) with plywood before laying insulation, heating mats and laminate floor boards for the final finish. I don't have a staple gun so I screwed the boards down at roughly 600mm c/c across the floorboards and 300mm along them.

What the blazes has that got to do with LLMs?

Well, I used a nearly inappropriate method for a job and blasted through it nearly as fast as the best method! If I had used a manual screwdriver I would have been at it nearly forever and ended up with a very limp wrist. I do own an old school ratchet screwdriver and that would have speeded things up but still been slow. I did use yellow passivated screws with sharp threads and a notch to initiate biting into the wood - rather more expensive than a staple or a nail.

So I burned through my tokens (screws instead of nails/staples) faster than if I had used a pneumatic nail/staple gun.

Anyway. LLMs are tools. They can be good tools in the right hands or rip your fingers off in the wrong hands.

  • Running with this analogy, the two sides of the AI argument are the people who think they can fire their plumber and electrician now that they have a drill driver, and the people who know it doesn't work that way...

    • Quite. My larger drill driver will wrench your wrist unless you know how to set the speed/mode/etc correctly and know how to brace yourself correctly.

      At the moment, I think that a LLM needs skilled hands too. Have a casual chat - that's fine but for work ... be aware.

      I recently dumped a wikimedia (our knowledge base is a wiki) formatted table into a LLM (on prem) and asked it to sort the list on the first column. It lost a few rows for some reason. No problem - I know how my tools work but it was a bit odd!

Your statement is a bit contradictory. That is, the article about "the growing disconnect between AI insiders and everyone else" pretty clearly states that "everyone else" is scared about job losses and the extreme inequality they see advanced AI causing. This is in line with your second to last sentence.

But the first part of your comment is basically saying "AI insiders think the tech is super awesome and powerful, while other engineers think it doesn't stand up to the hype." Well, if the AI is indeed not as good a tech as its boosters are saying, well, this would be great news for everyone scared about job losses and widening inequality if AI turned out to be a nothing burger.

  • No and it has been said already elsewhere in this thread: decision makers are not entirely rational, they might fire entire departments even if the AI revolution isn’t here quite yet

That's likely because it takes an entirely different approach to make it work. Augmenting your existing flow with "sophisticated auto complete" isn't as interesting and isn't actually using the tools how they were designed to be used.

I'm not going to pass judgement either way; we'll see how it all shakes out.

I just know for me, personally, I love computers and making them do what I want and in the AI era I am somehow using them even more and doing even more.

I feel like this will change as people move around. It’s definitely a skill.

What about karpathy though?

  • Smart guy phoning it in now - realized a few weeks ago that he “notices” something interesting to share, but is really paraphrasing a recently released paper that found it - without giving paper credit.

  • Wasn't Karpathy the guy who used to work for tesla and that tried to convince everyone that you only need cameras for self-driving and that by 2025 there wouldn't be anymore cars without self-driving capabilities to sell?

Underwhelmed is the absolute correct word to use here.

Absolutely everyone raves about this but other than a few basic computer related tasks I’ve not seen compelling use cases that justify the billions being lit on fire trying to pursue it.

My cynical take is the crypto bro’s needed something to do with their useless GPU’s after the crash and found the perfect answer in LLM’s.

  • [flagged]

    • It’s primarily about confidence and motivations. People with high confidence at what they do are supremely unmotivated to use something like AI to solve problems they don’t have.

      People with low confidence will be super excited for AI because it solves problems they weren’t even thinking about.

      Executives that don’t write code are super excited about AI because hopefully it means they can continue to high low confidence people, which are plentiful and cost less.

      I am sitting on the sidelines watching in disbelief. I don’t use AI and don’t plan to. I used to write JavaScript for a living and still get JavaScript job alerts from a lot of job boards. The compensation for JavaScript work is starting to shoot through the roof as employers are moving away from garbage like React and Angular. The recent jobs are becoming fewer and are more reliant upon people with tons of experience that can actually program. Clearly AI is not replacing positions for higher talent with greater than 8-12 years experience.

      12 replies →

    • As a senior dev who has been using these tools to their fullest effectiveness in production environments, until AI can reduce the entropy of a codebase while still adding capability I will continue to be underwhelmed.

      8 replies →

    • When you use the term "luddite" in the way you do, you reveal that you aren't aware of who the Luddites actually were. Luddites weren't anti-technology; many of them were experts at using advanced machinery. What they opposed was the poor quality output of automated factories and the use of machinery to circumvent apprenticeships and decent wages.

      As for your promise of a great leap at some vague point in the future, that's such a widely-mocked AI industry trope at this point that it's a little embarrassing you went there.

      4 replies →

    • > You won't be 'underwhelmed' long.

      This has been your constant mantra for 3 years now and is part of the reason people are underwhelmed.

    • > You won't be 'underwhelmed' long.

      "Yes, it sucks now, but believe me it won't be for long" spiel has been hyped for several years now.

      Oh, don't get me wrong, these tools are amazing. But just yesterday a very small refactoring resulted in 480 fully duplicated lines in a 5000-line codebase (on top of extremely bad DB access patterns) despite all the best shamanic rituals this world has to offer [1].

      So yeah, senior engineers especially use these tools daily, and keep being completely honest about their issues and shortcomings. Unlike the hype and scam artists.

      [1] Oh, sorry. I meant to say skills, context engineering and management, memory, prompt engineering.

      4 replies →

    • Extraordinary claims require extraordinary evidence.

      And even staying within the comfort of AI enthusiasm: Google wasn't exactly leading in this race. If you have this much confidence in what those presenters and engineers at Google told you, you now have some opportunities to make a lot of money.

      6 replies →

    • The same could have been said to someone who had yet to encounter generative AI. "Wait until you try it, you won't be underwhelmed for long".

      But here we are.

      Over time and usage the limitations of a thing become apparent.

    • Even SOTA models when used in agents in simple NLP tasks such as text classification still fail more times than acceptable when evaluated against a realistic evaluation dataset with sufficient example variety and with some adversarial prompts included.

      Improving such uses cases is mostly an artisanal endeavor, sometimes a few-shot prompt improves things, sometimes it improves things at the expense of kind of overfitting it, sometimes structured reasoning works, sometimes it doesn't, or sometimes it works and then the latency and token explodes, etc etc....

      And yet a lot of teams don't see this problem because they don't care much about evaluations, and will only find this issues in production a few months after deployment.

      Are those who care about evaluation luddites?

    • > When did Hacker news start becoming a luddite, bad takes everywhere I look, feels like everyone is '50 year old burnt out guy' that has no idea what is going on vibe?

      Much to the opposite, I think healthy skepticism is a sign of maturity. The overeager embracing of hype cycles is extremely cringe.

      > I just got back from a SAIRS conference at UCLA and talked directly with some of the presenters and engineers at Google.

      Cringe, as I was saying.

      Conferences are just mutual fart smelling, swagger, and expensing trips on company momey. I am not against it, but treating your participation in some conference as a sign of the future is very silly.

      Every conference I participated always overhyped every current bullshit.

    • This place actually hates all technology after the invention of Lisp. And there's the common online incentive to dunk on things that also exists here. Hence the infamous Dropbox comment and others.

      But it's also been anti-Javascript, anti-cloud, anti-social-media, anti-crypto, anti-React, and so on.

      I would therefore not in a million years expect it to be pro-LLM, and this is so obvious to me that I'm a bit suspicious of your motives for acting confused about it, as if it was ever any different.

      5 replies →

Shocking that people who are in data science/ML are excited about data science/ML, and people in jobs not interested in that area are not interested in it.

It's like a programmer being surprised that a worker in $random_job wants to keep doing their job, and not learn how to be a programmer instead.

There's this weird unspoken assumption in a lot of these HN posts that any layoffs or lack of hiring is due to companies shirking on providing the cushy jobs they owe software engineers. Actually, they hire engineers to get stuff done. If it's true that AI is just a big 'ol scam and it doesn't even work, then I guess we'll see the companies that insist on nothing but the finest artisanily hand-typed organic code rocket to the top of the charts on app downloads, sales, revenue, and market cap.

  • This is basically how most engineers talk to their managers, politely implying - "can you see how this decision has a short term payoff but a long term consequence?"

    Before LLMs I only worked at one place that "only hired seniors and above" and now its the most commonplace thing in the world.

    Nobody owes me anything, I already have the skills I need, where will the juniors come from that these companies are going to need in a few years? We don't need extremist stances in either camp, we need balance.

    • > Nobody owes me anything, I already have the skills I need, where will the juniors come from that these companies are going to need in a few years? We don't need extremist stances in either camp, we need balance.

      Seems a bit like asking where the bread will come from, if no-one is forced to bake it.

      4 replies →

  • > If it's true that AI is just a big 'ol scam and it doesn't even work, then I guess we'll see the companies that insist on nothing but the finest artisanily hand-typed organic code rocket to the top of the charts on app downloads, sales, revenue, and market cap.

    AI works fine to get a vibe coded BS version of the app. No doubt there. But eventually, especially once scale hits your app, it will devolve into an unholy mess of low performance and (extremely) high cost if you do not have a bunch of senior talent able and willing to clean up after the AI mess.

    Unfortunately, our capitalist economy only rewards the metrics you mentioned... but by the time the house of cards collapses, either from financial issues stemming from the above or because the tech debt explodes, it's too late to turn the ship around.

    • And I've even heard rumors of software engineers that don't even write apps or write code that runs on the internet at all. They say some of them don't even use javascript or python! The horror.

I get it, but as a "AI expert and senior leader" myself in my 1,000 people organization (in relative terms), the disconnect I have is:

A lot of what non-believers say matches "enthusiasm on the ground is lacking as results rarely live up to the extremely rosy promises". They would then say they need 2 weeks to work on a specific project, the good old way, maybe with some light AI use along the way.

But then I'm like "hmm actually let me try this real quick" and I prompt Claude for 3 minutes, and 30 minutes later it has one-shotted the whole "two weeks project". It then gets reviewed and merged by the "non-believers". This happens repeatedly.

So overall, I think the lack of enthusiasm is largely a skill issue. Not having the skill is fine, but not being willing to learn the skill is the real issue.

I see things changing, as "non-believers" eventually start to realize that they need to evolve or be toast. But it's slower than I imagined.

  • I am a strong believer and selected as power user because of AI usage metrics, but I also see perverse incentives -- a colleague was desperately searching for me on the Claude token usage leaderboard (I was part of a different group he did not have access to) -- it was clear he was actively trying to climb that leaderboard.

    Meanehile our average PR loc balooned to ~2000loc -- generated with Claude, reviewed with copilot but colleagues also review it with Claude because it gives valid nitpicks that bump up your github stats, while missing glaring functional/architectural issues, overenginerring issues.

    No way this doesn't blow up down the road with the massive bloat we're creating while getting high on the "good progress" we're making.

    Yes, your 3 minutes prompt got merged. So was my friends(ex-programmer now manager) non-ai generated PR that a technical TL got stuckstuck on for 2 weeks. Different perspective? Survivor bias? High authority?

    • Blame your engineering culture not AI if metrics such as Github stats, number of nitpick reviews and token usage is what is used to judge one's performance.

      In a sane engineering culture, actual customer-visible impact is what is measured, and AI is just a tool to improve that metric, but to improve it massively.

  •   > But then I'm like "hmm actually let me try this real quick" and I prompt Claude for 3 minutes, and 30 minutes later it has one-shotted the whole "two weeks project". It then gets reviewed and merged by the "non-believers". This happens repeatedly.
    

    this is a nice anecdote but i think the real issue is the forcing and kpi-nization of llms top-down for nearly everything

    there are still code-quality issues, prompting issues for long-running tasks, some things are just faster and more deterministic with normal code generators or just find-and-replace etc

    people are annoyed at the force-feeding of llms/ai into everything even when its not needed

    somethings can be one-shotted and some things cant, and that is fine and perfectly normal but execs don't like that because its not the new hotness

    • > somethings can be one-shotted and some things cant

      True but my point is that people vastly underestimate what is one-shottable.

      In my experience, 80% of the times an average "non-believer" SW engineer with 7 years experience says something is not one-shottable, I, with my 15 years of experience, think it is fact one-shottable. And 20% of the time, I do verify that by one-shotting it on my free time.

  • I believe that this has happened in some cases but am very skeptical that it is widespread and generalizeable at this point. My own experience is that software engineers thinking they can easily solve a problem in a domain they know nothing about overrate their ability to do so ~99% of the time.

    • I'm not talking about coding in domains I know nothing about. I'm talking about coding in domains I've worked in for 15 years

  • Well "non-believers" don't see any gain from being faster, right? That'll just set expectations of "do a lot more for same". Fear of being "toast" will get you the loyalty you'd expect from fear.

  • the best way I found to deal with non-believers is to have claude run code reviews on their own work. I’ll point it to an older commit and get like 3-page markdown file :) works really, really well.

    on one-shotting 3 minute prompt in 30 minutes though, software is a living organism and early gains can (and often result) in later pains. I do not use this type of argument as it relates to AI as the follow-up as the organism spreads its wings to production seldom makes its way to HN (if this 30 minute one-shot results in a huge security breach I doubt you would be back here with a follow-up, you will quietly handle it…)

    • You can get it to generate a 3-page markdown file for any random code, or its own code it just generated. If requested it will produce a seemingly plausible looking review with recommendations and possible issues.

      How impressed someone get from that will depend on the recipient.

      2 replies →

  • Unsure of this really tracks tho. How are you evaluating for the bias that they’re not merging it because you’re “their leader of 1000 people org” and not because you’re actually an engineer deep in the trenches that knows the second or third order effects of slop?

    This is a genuine question btw, I see plenty of instances of this in my own org.

    • I see your point, but

      1. I am also on the receiving end of this. My boss often codes and vibecodes, and no one feels like they have to merge their stuff. We only merge it if it meets the high quality standard we have. And there is no drama for blocking a PR in our culture. 2. I am fairly deep in the trenches myself and I know when my PRs are high quality and when they are not. And that does not correlate with use of AI in my experience.

  • I've been on this ride about three or four times over decades. Every new major wave of technology takes a surprisingly long time to be adopted, despite advantages that seem obvious to the evangelists.

    I had the exact same experience with, for example, rolling out fully virtualized infrastructure (VMware ESXi) when that was a new concept.

    The resistance was just incredible!

    "That's not secure!" was the most common push-back, despite all evidence being that VM-level isolation combined with VLANs was much better isolation than huge consolidated servers running dozens of apps.

    "It's slower!" was another common complaint, pointing at the 20% overheads that were the norm at the time (before CPU hardware offload features such as nested page tables). Sure, sure, in benchmarks, but in practice putting a small VM on a big host meant that it inherited the fast network and fibre adapters and hence could burst far above the performance you'd get from a low end "pizza box" with a pair of mechanical drives in a RAID10.

    I see the same kind of naive, uninformed push-back against AI. And that's from people that are at least aware of it. I regularly talk to developers that have never even heard of tools like Codex, Gemini CLI, or whatever! This just hasn't percolated through the wider industry to the level that it has in Silicon Valley.

    Speaking of security, the scenarios are oddly similar. Sure, prompt injection is a thing, but modern LLMs are vastly "more secure" in a certain sense than traditional solutions.

    Consider Data Loss Prevention (DLP) policy engines. Most use nothing more than simple regular expression patterns looking for things like credit card numbers, social security numbers, etc... Similarly, there are policy engines that look for swearwords, internal project code names being sent to third-parties, etc...

    All of those are trivially bypassed even by accident! Simply screenshot a spreadsheet and attach the PNG. Swear at the customer in a language other than English. Put spaces in between the characters in each s w e a r word. Whatever.

    None of those tricks work against a modern AI. Even if you very carefully phrase a hurtful statement while avoiding the banned word list, the AI will know that's hurtful and flag it. Even if you use an obscure language. Even if you embed it into a meme picture. It doesn't matter, it'll flag it!

    This is a true step change in capability.

    It'll take a while for people to be dragged into the future, kicking and screaming the whole way there.