Comment by jstummbillig

1 day ago

> how every new iteration is going to spell doom/be a paradigm shift/change the entire tech industry etc.

It's much like the dynamic between parents and a child. The child, with limited hindsight, almost zero insight, and no ability to forecast, is annoyed by their parents. Nothing bad ever happens! Why won't the parents stop being so worried all the time and making a fuss over nothing?

The parents, as the child somewhat starts to realize but not fully, have no clue what they are doing. There is a lot they don't know and are going to be wrong about, because it's all new to them. But what they do have is a visceral idea of how bad things could be, and that's something they have to talk to their child about too.

In the eyes of the parents the child is some % dead all the time. Assigning the wrong % makes you look like an idiot, and so does not being able to handle any % at all. In the eyes of the child, actions leading to death are not even a concept. Hitting the right balance is probably hard, but not for the reasons the child thinks.

Disagree - we’re being told on one hand that we are 6 months away from AI writing all code, and 3 months into that the tools are unusable for complex engineering [0]. Every time I mention this I’m told “but have you tried the latest model and this particular tool” - yes, I have, but if I need to be on the hottest new model for it to be functional, that means the last time you claimed it was solved, it wasn’t solved.

[0] https://news.ycombinator.com/item?id=47660925

  • > Every time I mention this

    I feel like there’s a bunch of factors for why it will never be the same for many folks, from the models and harnesses, to the domains and existing tests/tooling.

    I feel bad for the people for whom it doesn’t work, but Claude Opus has written most of my code in 2026 so far. I had to build some tools around linting entire projects, and most of my tokens are probably spent referencing existing stuff and on parallel review iterations and tests, but it’s pretty nice, and even seeing legacy code doesn’t make me want to move to a farm and grow potatoes.

    It might be counterproductive to say "Oh, just do X!", which works for the person suggesting it, and then have to ask "But have you tried Y?" when it doesn't work for the other person, if it just keeps being a never-ending string of what works for one person not working for another.

    • > I feel like there’s a bunch of factors for why it will never be the same for many folks

      Yeah, and the problem arises simply because some people are unable to accept that fact. They insist that if LLM-assisted coding doesn't work for someone, it's because “you're holding it wrong”.

    • > I feel like there’s a bunch of factors for why it will never be the same for many folks, from the models and harnesses, to the domains and existing tests/tooling.

      If the argument is “you have to use the right model, harness, test and tooling for it to work” then it’s not replacing software engineers any time soon.

      The other thing is - where are all the web apps, mobile apps, games, and desktop apps from these 100x productivity multipliers? We’re 1-2 years into these tools being widely mainstream and available, and I’m not seeing applications that took years to ship before appearing at 100x the rate, or games being shipped by tiny teams, or new ideas for mobile apps coming out at 100x the rate. What we do see is vibe-coded slop, stability issues at massive companies (Windows, AWS, for example), and mass layoffs back to pre-COVID levels blamed on AI, when everyone knows it’s a regression to the mean after massive overhiring when money was cheap.

      It’s like the emperor has no clothes on this topic to me.

      10 replies →

    • Even Copilot writes most of my code in April 2026.

      Further, I don't trust code anymore that hasn't been reviewed 3x or more by Copilot.

      If you had asked me 6 months ago, I wouldn't have expected this change so soon.

    • It’s because the model’s response is conditioned on the prompt. They are as intelligent as the person using them.

      In some sense it’s a lot like a Google search. There’s this big box of knowledge and you are choosing tokens to pluck out of it. The quality of the tokens depends on how intelligent you are.

      2 replies →

    • > I had to build some tools around linting entire projects

      OK, everybody is doing that. And everybody is doing their best to make LLMs more reliable on non-trivial tasks. Yet it looks like nobody has come up with a universal solution, particularly for non-trivial projects.

  • Check out from this point onwards and the following one. You get a nice summary at the top right. Mind that Anthropic alone is doing 30B/y annualized already.

    Take a snapshot and check again in a few months. It's not perfect but it's much more falsifiable than a lot of the noise.

    https://ai-2027.com/#narrative-2026-04-30

    • > Mind that Anthropic alone is doing 30B/y annualized already

      How many crypto exchanges were pulling in hundreds of millions in funding and doing billions in trades in 2021/2022?

      That blog post is… really something, I’ll give you that. I’m not entirely sure what else to say about it other than that.

      1 reply →

  • > “I think… I don’t know… we might be six to twelve months away from when the model is doing most, maybe all of what SWEs (software engineers) do end to end.”

    I think it's disingenuous (as disingenuous as you're accusing these marketing teams of being) to paraphrase that as "being told on one hand that we are 6 months away from AI writing all code". It's merely stating that it's a real possibility. (It's also disingenuous to use a post complaining about a behavioral regression bug as evidence that it's not progressing.)

    Dismissing it as impossible is silly, considering how close it already is to a junior dev. Keep in mind that 14 months prior to that statement was before we even had any public reasoning models. Things really are moving that fast, it's just, at the moment, unclear how fast.

    • We’ve been told that programmers are going to be replaced by simpler programming languages, GUI programming tools, no-code tools, low-code tools, and now AI. The real big step was when Claude Code came out and introduced the agentic loop, where it could self-validate against tests/linters/tooling, but everything after that has been pitched as miraculous when IME it’s a new iteration of the same thing - wild hallucinations, getting stuck in deep loops, ignoring explicit instructions and guard rails, wild tangents, and just generating stuff that doesn’t work or solve the problem.

      > I think it's disingenuous (as disingenuous as you're accusing these marketing teams of being) to paraphrase that as "being told on one hand that we are 6 months away from AI writing all Code". It's merely stating that it's a real possibility

      No - you don’t get to make wild predictions and then say “oh, I didn’t actually mean that, but look how successful we are”. These teams aren’t saying “hey, we think we’re going to majorly influence programming in 6-12 months”; they’re saying “we’re going to replace programmers”. If you can’t stand over your claims, don’t make them. _That’s_ disingenuous.

      2 replies →

That feels like a very complex way of looking at it. Another way would be to say “profit-seeking companies potentially have an incentive to oversell products even if they’re good”.

  • Is Anthropic lying about model capabilities? If not, where is the overselling?

    • In March 2025, Anthropic was claiming that 90% of code would be written by LLMs within three to six months, and "essentially all" code within twelve months. This was one week after closing a $3.5 billion Series E round, just as they began working on a $13 billion Series F. You shouldn't need more than that to understand what's going on here.

      The Claude Code leak revealed that Anthropic runs Claude-operated bots on the internet. One should be very cautious about getting swept up in the fund-raising process if they are not seeing first-hand the fruition of all the flattering claims being presented by strangers on the internet.

      2 replies →

  • [flagged]

    • Homie, chill. I use Opus every day and I love it. I’m not saying it’s all hype, just that these companies are here to make money and that every advertisement should be taken with a grain of salt, yeah?

      Also maybe consider what this kind of visceral reaction indicates on a personal level :/

      6 replies →

The parents in this case are profiteering corporations on a mission to exploit the child for everything they can get away with, almost by definition.

It's a slightly different dynamic.

I feel like you’re muddying two different arguments here. Or rather, two different positions.

You’re asserting that people who are tired of this line being wheeled out hold a position analogous to “what’s the big deal, nothing bad happens, just relax”. In reality, that’s only one position. The other position is “I fully understand the consequences, but the relentless doomer language is tiring in the face of them continuing to not eventuate”.

  • What do you think of people who say that about climate change? It seems you don’t understand it fully. This is not the time to get tired, right before this actually starts impacting jobs and people in other ways.

It’s more like the abusive parents telling the child that they’ll sell him to the scary man at the bus stop every time they want to coerce the child into doing what they want.

Eventually the child develops disrespect for authority.

This is just a really bad analogy. It doesn't address the fact that there are multiple sources, the incentives for telling us about it, or the spectrum between disaster-mitigation heroes and snake-oil salesmen.

Did you compare AI companies to parents and engineers actually delivering value to toddlers? AI companies cannot, in any capacity, be regarded as caretakers.

Don’t take it personally, but this amount of fear and paranoia about death around every corner sounds like a mental illness to me - generalised anxiety disorder, to be precise. Maybe I am just not a parent.

In any case, there are substances and reliable methods that fix whatever paralyzing existential dread anyone struggles with daily.

Probably best to use the conventional route, but I personally use special low-THC, high-CBG weed once a week with a medical-grade vaporizer, and once a year (early autumn) a moderate dose of Golden Teacher mushrooms. Although I understand that most people perhaps couldn’t, since they aren’t managing their own business but are on a strict employment contract with urine tests.

Are you suggesting these researchers somehow have wisdom and aren’t just guessing, and that everyone else are children too naive to understand the technology? It certainly sounds that way from the description you are attempting to apply.

This is two parents disagreeing on whether their child will automatically grow up to be a psychopath with one parent constantly remarking “if you teach that child how to cut bread, they will stab everyone later. If you teach that child to drive, they will run over everyone later”, not the “parents know better” situation you describe.

An analogy that’s, quite literally, an appeal to paternalism to trust the motivations and pernicious incentive structures of the big AI labs.

This is literally one of the most infantilizing and simultaneously insulting analogies I've ever come across on this site. Do you really think consumers of the latest AI tools have no ability to forecast? The parents in this analogy have every incentive to lie.