Comment by sarchertech

2 months ago

>2025-2035

Depending on other people to maintain backward compatibility so that you can keep coding like it’s 2025 is its own problematic dependency.

You could certainly do it but it would be limiting. Imagine that you had a model trained on examples from before 2013 and your boss wants you to take over maintenance for a React app.

You're all entertaining the strange idea of a world where no open-weight coding models would be trained in the future. Even in a world where VC spending vanished completely, coding models are such a valuable utility that, at the very least, companies and individuals would surely crowdsource them on a recurring basis, keeping them up to date.

The value of this technology has been established; it's not going away anytime soon.

  • SOTA models cost hundreds of millions to train. I doubt anyone is crowdsourcing that.

    And that’s assuming you already have a lot of the infrastructure in place.

    • I think FAANG and the like would probably crowdsource it, given that (according to the hypothesis presented) they would only have to do it every few years, and ostensibly they are realizing improved developer productivity from these models.

2013 was pre-LLM. If devs continue relying on LLMs but their training stops (which I find unlikely), the tools around the LLMs will still continue to evolve, and new language features will get less attention, used only by people who prefer not to use LLMs. Then it would become a popularity contest between new languages (and language features) and LLM-steered 'old' programming languages and APIs. It's not always the best technology that wins; often it's the most popular one. You know what happened during the browser wars.