
Comment by threethirtytwo

1 month ago

>Wait, so we can infer the future from “trendlines”, but not from past events? Either past events are part of a macro trend, and are valuable data points, or the micro data points you choose to focus on are unreliable as well. Talk about selection bias…

If past events can be dismissed as “noise,” then so can selectively chosen counterexamples. Either historical outcomes are legitimate inputs into a broader signal, or no isolated datapoint deserves special treatment. You cannot appeal to trendlines while arbitrarily discarding the very history that defines them without committing selection bias.

When large numbers of analogous past events point in contradictory directions, individual anecdotes lose predictive power. Trendlines are not an oracle, but once the noise overwhelms the signal, they are the best approximation we have.
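
To make that concrete, here is a toy sketch (purely illustrative, with made-up numbers, not real benchmark data) of why a trend fitted across many noisy observations is more informative than any single cherry-picked datapoint:

```python
# Toy illustration only: synthetic "capability" scores with heavy noise,
# invented for the sake of the argument.
import numpy as np

rng = np.random.default_rng(42)
years = np.arange(2015, 2026)
signal = 2.0 * (years - 2015) + 10                 # assumed underlying improvement
observed = signal + rng.normal(0, 6, len(years))   # noisy yearly observations

# A single pair of adjacent observations can point the "wrong" way...
print("one-year delta:", observed[6] - observed[5])

# ...but a least-squares fit over the whole series recovers the direction of travel.
slope, _ = np.polyfit(years, observed, 1)
print(f"fitted slope: {slope:.2f} per year (true slope: 2.0)")
```

The numbers are invented; the point is that once individual observations are this noisy, the fitted direction is the only stable quantity left to reason from.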

>What is hypothetical is what will happen to all this software and the companies that produced it a few years down the line. How reliable is it? How maintainable is it? How many security issues does it have? What has the company lost because those issues were exploited? Will the same people who produced it using these new tools be able to troubleshoot and fix it? Will the tools get better to allow them to do that?

These are legitimate questions, and they are all speculative. My expectation is that code quality will decline while simultaneously becoming less relevant. As LLMs ingest and reason over ever-larger bodies of software, human-oriented notions of cleanliness and maintainability matter less. LLMs are far less constrained by disorder than humans are.

>Really? Everything? There is no chance that some people are simply pointing out the flaws of this technology, and that the marketing around it is making it out to be far more valuable than it actually is, so that a bunch of tech grifters can add more zeroes to their net worth?

The flaws are obvious. So obvious that repeatedly pointing them out is like warning that airplanes can crash while ignoring that aviation safety has improved to the point where you are far more likely to die in a car than in a metal tube moving at 500 mph.

Everyone knows LLMs hallucinate. That is not contested. What matters is the direction of travel. The trendline is clear. Just as early aviation was dangerous but steadily improved, this technology is getting better month by month.

That is the real disagreement. Critics focus on present day limitations. Proponents focus on the trajectory. One side freezes the system in time; the other extrapolates forward.

>I don’t get how anyone can speak about trends and what’s currently happening with any degree of confidence. Let alone dismiss the skeptics by making wild claims about their character. Do better.

Because many skeptics are ignoring what is directly observable. You can watch AI generate ultra-complex, domain-specific systems that have never existed before, in real time, and still hear someone dismiss it entirely because it failed a prompt last Tuesday.

Repeating the limitations is not analysis. Everyone who is not a skeptic already understands them and has factored them in. What skeptics keep doing is reciting known flaws while refusing to reason about what is no longer a limitation.

At that point, the disagreement stops being about evidence and starts looking like bias.

Respectfully, you seem to love the sound of your writing so much you forget what you are arguing about. The topic (at least for the rest of the people in this thread) seems to be whether AI assistance can truly eliminate programmers.

There is one painfully obvious, undeniable historical trend: making programmer work easier increases the number of programmers. I would argue a modern developer is 1000x more effective than one working in the times of punch cards - yet we have roughly 1000x more software developers than back then.

I'm not an AI skeptic by any means, and I use it every day at my job, where I am gainfully employed to develop production software used by paying customers. The overwhelming consensus among those similar to me (I've put down all of these qualifiers very intentionally) is that the currently existing modalities of AI tools are a massive productivity boost mostly for the "typing" part of software (yes, I use the latest SOTA tools, Claude Opus 4.5 thinking, blah, blah, so do most of my colleagues). But the "typing" part hasn't been the hard part for a while already.

You could argue that there is a "step change" coming in the capabilities of AI models, which will entirely replace developers (so software can be "willed into existence", as elegantly put by OP), but we are no closer to that point now than we were in December 2022. All the success of AI tools in actual, real-world software has come from tools specifically designed to assist existing, working, competent developers (e.g. Cursor, Claude Code), while the tools that positioned themselves to replace them have failed (Devin).

  • There is no respectful way of telling someone they like the sound of their own voice. Let’s be real, you were objectively and deliberately disrespectful. Own it if you are going to break the rules of conduct. I hate this sneaky shit. Also I’m not off topic, you’re just missing the point.

    I responded to another person in this thread and it’s the same response I would throw at you. You can read that as well.

    Your "historical trend" is just applying an analogy and thinking that an analogy can take the place of reasoning. There are about a thousand examples of careers where automation technology increased the need for human operators, and thousands of examples where automation eliminated human operators. Take pilots, for example. Automation didn't lower the need for pilots. Take IntelliSense and autocomplete… those didn't lower the demand for programmers.

    But then take a look at Waymo. You have to be next-level stupid to think that because cruise control increased automation in cars without lowering the demand for drivers, all car-related businesses, including Waymo, will always need physical drivers.

    As anyone can see, this idea of using analogy as reasoning fails here. Waymo needs zero physical drivers thanks to automation. There is zero demand, and your methodology of reasoning fails.

    Analogies are a form of manipulation. They only help you elucidate and understand things via some thread of connection: you understand A, so understanding A can help you understand B. But you can't use analogies as the basis for forecasting or reasoning, because although A can be similar to B, A is not in actuality B.

    For AI coders it's the same thing. You just need to use your common sense rather than rely on the inaccurate crutch of analogies and hope everything will play out in the same way.

    If AI becomes as good and as intelligent as a human SWE, then your job is going out the fucking window, replaced by a single prompter. That's common sense.

    Look at the actual trendline of the actual topic: AI taking over our jobs, not automation in other sectors of engineering or other types of automation in software. What happened with AI in the last decade? We went from zero to movies, music, and coding. What does your common sense tell you the next decade will bring?

    If the improvement of AI from the last decade keeps going or keeps accelerating, the conclusion is obvious.

    Sometimes the delusion a lot of SWEs have is jarring. Like literally, if AGI existed, thousands of jobs would be displaced. That's common sense, but you still see tons of people clinging to some irrelevant analogy as if that exact analogy will play out against common sense.

    • How ironic of you to call my argument an analogy when it isn't one, yet analogies are exactly all you have to offer: analogies to pilots, to drivers, to "a thousand examples of careers".

      My argument isn't an analogy - it's an observation based on the trajectory of SWE employment specifically. It's you who's trying to reason about what's going to happen with software based on what happened to three-field crop rotation or whatever, not me.

      I argued that a developer today is 1000x more effective than in the days of punch cards, yet we have 1000x more developers today. Not only that, but this correlation has tracked fairly linearly over the past several decades.

      I would also argue that the productivity improvement between FORTRAN and C, or between C and Python was much, much more impactful than going from JavaScript to JavaScript with ChatGPT.

      Software jobs will be redefined, they will require different skill sets, they may even be called something else - but they will still be there.


> The trendline is clear. Just as early aviation was dangerous but steadily improved, this technology is getting better month by month.

I'm yet to be convinced of this. I keep hearing it, but every time I look at the results they're basically garbage.

I think LLMs are useful tools, but I haven't seen anything convincing that they will be able to replace even junior developers any time soon.

  • Look at the past decade. We went from zero AI to AI that codes and makes movies, still inferior when matched against humans.

    What does common sense tell you the next decade will bring? Does the trendline predict a flatline, with LLMs or AI in general no longer improving? Or will the trendline continue, as trendlines typically do? What is the most logical conclusion?

> You cannot appeal to trendlines while arbitrarily discarding the very history that defines them without committing selection bias.

> When large numbers of analogous past events point in contradictory directions, individual anecdotes lose predictive power. Trendlines are not an oracle, but once the noise overwhelms the signal, they are the best approximation we have.

I'm confused. So you're agreeing with me, up until the very last part of the last sentence...? If the "noise overwhelms the signal", why are "trendlines the best approximation we have"? We have reliable data of past outcomes in similar scenarios, yet the most recent noisy data is the most valuable? Huh?

(Honestly, your comments read suspiciously like they were LLM-generated, as others have mentioned. It's like you're jumping on specific keywords and producing the most probable tokens without any thought about what you're saying. I'll give you the benefit of the doubt for one more reply, though.)

To be fair, I think this new technology is fundamentally different from all previous attempts at abstracting software development. And I agree with you that past failures are not necessarily indicative that this one will fail as well. But it would be foolish to conclude anything about the value of this technology from the current state of the industry, when it should be obvious to anyone that we're in a bull market fueled by hype and speculation.

What you're doing is similar to speculative takes during the early days of the internet and WWW. How it would transform politics, end authoritarianism and disinformation, and bring the world together. When the dust settled after the dot-com crash, the actual value of the technology became evident, and it turns out that none of the promises of social media came true. Quite the opposite, in fact. That early optimism vanished along the way.

The same thing happened with skepticism about the internet being a fad, that e-commerce would never work, and so on. Both groups were wrong.

> What skeptics keep doing is reciting known flaws while refusing to reason about what is no longer a limitation. At that point, the disagreement stops being about evidence and starts looking like bias.

Skepticism and belief are not binary states, but a spectrum. At extreme ends there are people who dismiss the technology altogether, and there are people who claim that the technology will cure diseases, end poverty, and bring world prosperity[1].

I think neither of these viewpoints is worth paying attention to. As usual, the truth is somewhere in the middle. I'm leaning towards the skeptic side simply because the believers are far louder, more obnoxious, and have more to gain from pushing their agenda. The only sane position at this point is to evaluate the technology based on personal use, discuss your experience with other rational individuals, and wait for the hype to die down.

[1]: https://ai-2027.com/

  • >I'm confused. So you're agreeing with me, up until the very last part of the last sentence...? If the "noise overwhelms the signal", why are "trendlines the best approximation we have"? We have reliable data of past outcomes in similar scenarios, yet the most recent noisy data is the most valuable? Huh?

    Let me help you untangle the confusion. Historical data on other phenomena is not a trendline for AI taking over your job. It's a typical logical mistake people make: reasoning via analogy. Because this trend happened for A, and A fits B like an analogy, therefore what happened to A must happen to B.

    Why is that stupid logic? Because there are thousands of things that fit B as an analogy, and out of those thousands of things, some failed and some succeeded. What you're doing, without realizing it, is SELECTIVELY picking the analogy you like and using it as evidence.

    When I speak of a trendline, it's dead simple. Literally look at AI as it is now and as it was in the past, and use that to project into the future. Look at exact data on the very thing you are measuring rather than trying to graft some analogous thing onto the current thing and make a claim from that.

    >What you're doing is similar to speculative takes during the early days of the internet and WWW. How it would transform politics, end authoritarianism and disinformation, and bring the world together. When the dust settled after the dot-com crash, the actual value of the technology became evident, and it turns out that none of the promises of social media came true. Quite the opposite, in fact. That early optimism vanished along the way.

    Again, same thing. The early days of the internet are not what's happening with AI currently. You need to look at what has happened to AI and software from the beginning until now. Observe the trendline of the topic being examined.

    >I think neither of these viewpoints is worth paying attention to. As usual, the truth is somewhere in the middle. I'm leaning towards the skeptic side simply because the believers are far louder, more obnoxious, and have more to gain from pushing their agenda. The only sane position at this point is to evaluate the technology based on personal use, discuss your experience with other rational individuals, and wait for the hype to die down.

    Well, if you look at the pace and progress of AI, the quantitative evidence points against your middle-ground opinion here. It's fashionable to take the middle ground because moderates and grey areas seem more level-headed and reasonable than extremism. But this isn't really applicable to reality, is it? Extreme events that overload systems happen in nature all the time; taking the middle ground without evidence pointing to the middle ground is pure stupidity.

    So all you need to look at is this: the progress we've made over the past decade. A decade ago, AI via ML was non-existent. Now AI generates movies, music, and code, and unlike AI in music and movies, the code is actually being used by engineers.

    That's ZERO to coding in a decade. What do you think the next decade will bring? Coding to what? That is reality and the most logical analysis. Sure, it's OK to be a skeptic, but to ignore the trendline is ignorance.