
Comment by lolinder

17 hours ago

I remember when I was first getting started in the industry, the big fear of the time was that offshoring was going to take all of our jobs and drive down the salaries of those who remained. In fact the opposite happened: it was in the next 10 years that salaries ballooned and tech had a hiring bubble.

Companies always want to reduce staff and bad companies always try to do so before the solution has really proven itself. That's what we're seeing now. But having deep experience with these tools over many years, I'm very confident that this will backfire on companies in the medium term and create even more work for human developers who will need to come in and clean up what was left behind.

(Incidentally, this also happened with offshoring— many companies ended up with large convoluted code bases that they didn't understand and that almost did what they wanted but were wrong in important ways. These companies needed local engineers to untangle the mess and get things back on track.)

> But having deep experience with these tools over many years, I'm very confident...

No one has had deep experience with these tools for any amount of time, let alone many years. They're literally just now hitting the market and are rapidly expanding their capabilities. We're at a fundamentally different place than we were just twelve months ago, and there's no reason to think 2025 will be any different.

  • I was building things with GPT-2 in 2019. I have as much experience engineering with them as anyone who wasn't an AI researcher before then.

    And no, we're not at a fundamentally different place than we were just 12 months ago. The last 12 months had much slower growth than the 12 months before that, which had slower growth than the 12 months before that. And in the end these tools have the same weaknesses that I saw in GPT-2, just to a lesser degree.

    The only aspect in which we are in a fundamentally different place is that the hype has gone through the roof. The tools themselves are better, but not fundamentally different.

    • It’s genuinely difficult to take seriously a claim that coding using Sonnet has “the same weaknesses” as GPT-2, which was effectively useless for the task. It’s like suggesting that a flamethrower has the same weaknesses as a matchstick because they both can be put out by water.

      We’ll have to agree to disagree about whether the last 12 months have had as much innovation as the preceding 12 months. We started 2024 with no models better than GPT-4, and we ended the year with multiple open source models that beat GPT-4 and can run on your laptop, not to mention a bunch of models that trounce it. Plus tons of other innovations: dramatically cheaper training and inference costs, reasoning models, expanded multi-modal capabilities, etc, etc.

      I’m guessing you’ve already seen and dismissed it, but in case you’re interested in an overview, this is a good one: https://simonwillison.net/2024/Dec/31/llms-in-2024/


I think it's qualitatively different this time.

Unlike with offshoring, this is a technological solution, which understandably is received more enthusiastically on HN. I get it. It's interesting as tech! And it's achieved remarkable things. But unlike with offshoring (which is a people thing) or magical NOCODE/CASE/etc "solutions", it seems the consensus is that AI coding assistants will eventually get there. Even on HN, at least a portion seems to think so. And some are cheering!

The coping mechanism seems to be "it won't happen to me" or "my knowledge is too specialized," but I think this will become increasingly false. And even if your knowledge is too specialized to be replaced by AI, most engineers aren't in that position. "Well, become more specialized" is unrealistic advice, and in any case, the employment pool will shrink.

PS: I am offshoring (in a way). I'm not based in the US but I work remotely for a US company.

  • > But unlike with offshoring (which is a people thing) or magical NOCODE/CASE/etc "solutions", it seems the consensus is that AI coding assistants will eventually get there.

    There's no consensus on that point. There are a few loud hype artists, most of whom are employed in AI and so have conflicts of interest, and who are also pre-filtered to be true believers. Their logic is basically "See this trend? Trends continue, so this is inevitable!"

    That's bad logic. Trends do not always continue; they often slow or reverse, and this one is showing all the signs of doing so already. OpenAI has come straight out and said that they don't expect to see another jump like GPT-3 to GPT-4, and have resorted to throwing more tokens at the problem, which works but with diminishing returns. I do not expect to see a return to the rapid growth we had for a year or two there.

    > PS: I am offshoring (in a way). I'm not based in the US but I work remotely for a US company.

    Yes, and this is a good example: there's a place for offshoring, but it didn't replace US devs. The same thing will happen here.

      > Trends do not always continue, they often slow or reverse, and this one is showing all signs of doing so already. OpenAI has come straight out and said that they don't expect to see another jump like GPT-3 to 4, and have resorted to throwing more tokens at the problems, which works with diminishing returns. I do not expect to see a return to the rapid growth we had for a year or two there.

      This feels like the declaration of someone who has spent almost no time playing with these models or keeping up with AI over the last two years. Go look at the benchmarks and leaderboards for the last 18 months and tell me we're not progressing far beyond GPT-4. Meanwhile, models are also getting faster, cheaper to run, cheaper to train for a given capability, gaining multi-modal capabilities, etc.

      And of course there are diminishing returns, the latest public models are in the 90s on many of their benchmarks!