
Comment by yuri91

5 days ago

So using agents forces (or at least nudges) you to use go and tailwind, because they are simple enough (and abundant in the training data) for the AI to use correctly.

Does this mean that eventually in a world where we all use this stuff, no new language/framework/library will ever be able to emerge?

Competing with the existing alternatives will be too hard. You won't even be able to ask real humans for help on platforms like StackOverflow because they will be dead soon.

> Does this mean that eventually in a world where we all use this stuff, no new language/framework/library will ever be able to emerge?

I highly doubt it. These things excel at translation.

Even without training data, if you have an idiosyncratic-but-straightforward API or framework, they pick it up no problem just looking at the codebase. I know this from experience with my own idiosyncratic C# framework that no training data has ever seen, that the LLM is excellent at writing code against.

I think something like Rust lifetimes would have a harder time getting off the ground in a world where everyone expects LLM coding to work off the bat. But something like Go would have an easy time.

Even with the Rust example though, maybe the developers of something that new would have to take LLMs into consideration, in design choices, tooling choices, or documentation choices, and it would be fine.

> Does this mean that eventually in a world where we all use this stuff, no new language/framework/library will ever be able to emerge?

That's a very good question.

Rephrased: as good training data will diminish exponentially with the Internet being inundated by LLM regurgitations, will "AI savvy" coders prefer old, boring languages and tech because there's more low-radiation training data from the pre-LLM era?

The most popular language/framework combination of the early 2020s is JavaScript/React. It'll be the new COBOL, but in the 2100s you won't need an expensive consultant to maintain it, because LLMs can do it for you.

Corollary: to escape the AI craze, let's keep inventing new languages. Lisps with pervasive macro usage and custom DSLs will be safe until actual AGIs arrive that can macroexpand better than you.

  • > Rephrased: as good training data will diminish exponentially with the Internet being inundated by LLM regurgitations

    I don't think the premise is accurate in this specific case.

    First, if anything, training data for newer libs can only increase. Presumably code reaches GitHub in an "at least it compiles" state. So you have lots of people fighting the AIs and pushing code that at least compiles. You can then filter for the newer libs and train on that.

    Second, pre-training is already mostly solved. The value now seems to be in post-training. And for coding, a lot of post-training is done with RL and other unsupervised techniques. You get enough signal from generate -> check loops to do that reliably.

    The idea that "we're running out of data" is way overblown IMO, especially considering the advances we've seen over the last ~6mo-1y. Keep in mind that the better your "generation" pipeline becomes, the better later models will be. And the current "agentic" loop-based systems are getting pretty darn good.
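    The generate -> check idea can be sketched in a few lines. Everything below is a toy stand-in (the hard-coded "candidates" play the role of model samples), not any real training pipeline:

```python
# Toy sketch of a generate -> check loop for harvesting post-training data.
# propose_candidates is a stand-in for sampling from a code model; the
# checker executes each candidate and keeps only those that pass, producing
# verified (prompt, solution) pairs without any human labeling.

def propose_candidates(prompt: str) -> list[str]:
    # Hypothetical model samples: one buggy, one correct.
    return [
        "def add(a, b):\n    return a - b",   # fails the checks
        "def add(a, b):\n    return a + b",   # passes the checks
    ]

def passes_checks(candidate_src: str) -> bool:
    ns: dict = {}
    try:
        exec(candidate_src, ns)      # run the candidate in a fresh namespace
        fn = ns["add"]
        return fn(2, 3) == 5 and fn(-1, 1) == 0
    except Exception:
        return False

def collect_pairs(prompts: list[str]) -> list[tuple[str, str]]:
    return [
        (p, cand)
        for p in prompts
        for cand in propose_candidates(p)
        if passes_checks(cand)
    ]

pairs = collect_pairs(["write add(a, b) returning the sum"])
```

    Only candidates that survive the checker become training pairs; scaled up, that is the kind of unsupervised signal the comment describes.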

    • > First, if anything, training data for newer libs can only increase.

      How?

      Presumably, in the "every coder is using AI assistants" future, there will be an incredible amount of friction in getting people to adopt languages that AI assistants don't know anything about.

      So how does the training data for a new language get made, if no programmers are using the language, because the AI tools that all programmers rely on aren't trained on the language?

      The snake eating its own tail


A traditional digital stack's lifecycle is:

1. The previous gen has become bloated and complex because it widened its scope to cover every possible niche scenario and got infiltrated by 'expert' language and framework specialists who went on an architecture binge.

2. As a result a new stack is born, much simpler and back to basics than the poorly aged incumbent. It doesn't cover every niche, but it does a few newly popular things really easily and well, and rises on the coattails of this new thing as the default environment for it.

3. Over time the new stack ages just as poorly as the old stack for all the same reasons. So the cycle repeats.

I do not see this changing with AI-assisted coding, as context enrichment is getting better, allowing a full stack specification in post-training.

  • > It doesn't cover every niche, but it does a few newly popular things really easily and well, and rises on the coattails of this new thing as the default environment for it

    How will it ever rise on the coattails of anything if it isn't in the AI training data so no one is ever incentivized to use it to begin with?

    • AI-legible documentation. If you optimize for a "1-pager" doc you can add to an LLM's context, and that doc is all it needs to use your package or framework ... people will use it if it has some kind of non-technical advantage. deepwiki.com is sort of an attempt to automate something like this.
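      As a concrete sketch, such a one-pager might look like the following. The library, its API, and the rules are all made up for illustration:

```markdown
# FooQueue — quick reference for AI agents (hypothetical library)

Install: `npm install fooqueue`

Core API:
- `createQueue(name, opts)` — returns a Queue; `opts.retries` defaults to 3
- `queue.push(job)` — enqueue a job; throws `QueueFullError` at capacity
- `queue.drain(handler)` — calls `handler(job)` until the queue is empty

Rules for generated code:
- Always `await queue.close()` on shutdown
- Never construct `Queue` directly; use `createQueue`
```

      The point is that the whole surface area fits in one screenful of context, so the model never has to guess at APIs it was not trained on.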

> So using agents forces (or at least nudges) you to use go and tailwind

Not even close, and the article betrays the author's biases more than anything else. The fact that their Claude Code (with Sonnet) setup has issues with the `cargo test` CLI, for instance, is hardly a categorical issue with AIs or cargo, let alone Rust in general. Junie can't seem to use its built-in test runner tool on PHP tests either; that doesn't mean AI has a problem with PHP. I just wrote a `bin/test-php` script for it to use instead, and it figures out that it has to use that (telling it so in the guidelines helps, but it still keeps trying its built-in tool first).

As for SO, my AI assistant doesn't close my questions as duplicates. I appreciate what SO is trying to do in terms of curation, but the approach to it has driven people away in droves.

  • I tried Junie in PyCharm and it had big problems with running tests or even using the virtual environment set up in PyCharm for that project.

    You'd expect more from the company that is developing both the IDE and the AI agent...

    • JB's product strategy is baffling. The AI assistant is way more featureful, but it's a lousy agent. Junie is pretty much only good as an agent, but it's hardwired to one model, doesn't support MCP, but does have a whole lot of internal tools ... which it can't seem to use reliably. They really need to work on having just one good AI product that does it all.

      I really liked Augment, except for its piggish UI. Then they revealed the price tag, and back to Junie I went.

Just yesterday I gave Claude (via Zed) a project brief and a fresh Elixir Phoenix project. It had zero problems. It did opt for Tailwind for the CSS, but Phoenix already sets that up when using `mix phx.new`, so that's probably why.

I don't buy that it pushes you into using Go at all. If anything I'd say they push you towards Python a lot of the time when asking it random questions with no additional context.

The Elixir community is probably only a fraction of the size of Go's or Python's, but I've never had any issues getting it to use Elixir.

I'm wondering whether we may see programming languages that are either unreadable to humans or at least designed for use by LLMs.

  • Yes, and an efficient tokenizer designed only for that language. As the ratio of synthetic data to human data grows this will become more plausible.

> Does this mean that eventually in a world where we all use this stuff, no new language/framework/library will ever be able to emerge?

If you truly believe in the potential of agentic AI, then the logical conclusion is that programming languages will become the assembly languages of the 21st century. This may or may not become the unfortunate reality.

  • I'd bet money that in less than six months, there'll be some buzz around a "programming language for agents".

    Whether that's going to make sense, I have some doubts, but as you say: For an LLM optimist, it's the logical conclusion. Code wouldn't need to be optimised for humans to read or modify, but for models, and natural language is a bit of an unnecessary layer in that vision.

    Personally I'm not an LLM optimist, so I think the popular stack will remain focused on humans. Perhaps tilting a bit more towards readability and less towards typing efficiency, but many existing programming languages, tools and frameworks already optimise for that.

My best results have been with Ruby/Rails and either vanilla Bootstrap or something like Tabler UI. Tailwind seems to be fine as well, but I'm still not a fan of the verbosity.

With a stable enough boilerplate you can come up with outstanding results in a few hours. Truly production ready stuff for small size apps.

  • How are you getting results when Ruby has no type system? That seems like where half the value of LLM coding agents is (dumping in type errors and having it solve them).

    • A bunch of unit, functional, and E2E tests, just like before LLMs :) Haven't tried with Ruby specifically, but it works well with JavaScript and other dynamic languages, so it should work fine with Ruby too.
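      As a toy illustration (Python standing in here for Ruby or JavaScript), an ordinary unit test catches the class of mistake a type checker would otherwise flag:

```python
# Without static types, the test suite is the guardrail: a wrong return
# value surfaces as a failing assertion the agent can be fed back, instead
# of a compile-time type error.

def parse_price(raw: str) -> int:
    """Parse a price string like '12.34' into integer cents."""
    dollars, _, cents = raw.partition(".")
    return int(dollars) * 100 + int(cents.ljust(2, "0")[:2])

def test_parse_price():
    assert parse_price("12.34") == 1234
    assert parse_price("5") == 500      # no decimal point
    assert parse_price("0.5") == 50     # single trailing digit

test_parse_price()
```

      The failing-test output plays the same role as the type errors in the parent comment: a mechanical signal the agent can iterate against.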


With maturing synthetic data pipelines, can't they just take one base LLM and fine-tune it for 20 different niches, and let the user access a niche with a string parameter in the API call? Even if a new version of a language was released only yesterday, they could quickly generate enough synthetic training data to bake the new syntax into that niche, and roll it out.

If AI really takes over coding, programming languages will be handled the same way we currently handle assembly code.

Right now, languages are the interface between human and computer. When LLMs take over, their ideal programming language will probably be less verbose than what we are currently using. Maybe keywords could become 1 token long, etc. Just some quick thoughts here :D.

> no new language/framework/library will ever be able to emerge?

Here is a YouTube video that makes the same argument: React is / will be the last JavaScript framework, because it is the dominant one right now. Even if people publish new frameworks, LLM coding assistants will not be able to assist coding using the new frameworks, so the new frameworks will not find users or popularity.

And even for React, it will be difficult to add any new features, because LLMs only assist in writing code that uses the features the LLMs know about, which are the old, established ways of writing React.

https://www.youtube.com/watch?v=P1FLEnKZTAE

  • > LLM coding assistants will not be able to assist coding using the new frameworks

    Why not? When my coding agent discovers that it used the wrong API or used the right API wrong, it digs up the dependency source on disk (this works at least with Rust and with JavaScript) and looks up the new details.

    I also have it use my own private libraries the same way, and those are guaranteed not to be in any training data.

    I guess if whatever platform/software you use doesn't have tool calling, you're kind of right, but you're also missing something that's pretty commonplace today.

  • My theory is that it will not be the case.

    New frameworks can be created, but they will be different from before:

    - AI-friendly syntax, AI-friendly error handling

    - Before being released, we will have to spend hundreds of millions of tokens on agents reading the framework and writing documentation and working example code with it, basically creating the dataset that other AIs can reference when using the new framework.

    - Create a way to make that documentation/example code easily available to AI agents (via MCP or a new paradigm)

Speaking of which, anyone had success using these tools for coding Common Lisp?

  • Agents no, LLMs yes. Not for generating code per se, but for answering questions. Common Lisp doesn't seem to have a strong influx of n00bs like me, and even though there's pretty excellent documentation, I sometimes find it hard to know what I'm looking for. LLMs definitely helped me a few times by answering n00b questions I would otherwise have had to ask online.

  • Joe Marshall had a couple of posts about... No: https://funcall.blogspot.com/2025/05/vibe-coding-common-lisp...

    • Vibe coding Common Lisp could probably work well with additional tool support. Even a good documentation lookup and search tool, exposed in an AGENTS.md file, could go a long way toward fixing the problem Joe ran into of the model generating bogus symbols. If you provide a small MCP server or other tool to introspect a running image containing your application, it could be even better.

      LLMs can handle the syntax of basically any language, but the library knowledge is significantly improved by having a larger corpus of code than Common Lisp tends to have publicly available.
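      A documentation lookup tool of that sort can be very small. This Python sketch uses a made-up two-entry index (the Alexandria symbol names are real; the one-line summaries are paraphrased) just to show the shape:

```python
# Hypothetical symbol-lookup tool an agent could call (e.g. wired up via
# AGENTS.md or a tiny MCP server) before emitting Common Lisp code, so it
# verifies that a symbol exists instead of inventing one.

DOC_INDEX = {
    "alexandria:when-let": "Bind values; run body only if all are non-nil.",
    "alexandria:if-let": "Bind values; branch on whether all are non-nil.",
}

def lookup(query: str) -> list[tuple[str, str]]:
    """Return (symbol, summary) pairs whose symbol name contains the query."""
    q = query.lower()
    return [(sym, doc) for sym, doc in DOC_INDEX.items() if q in sym]

hits = lookup("when-let")            # existing symbol -> one match
misses = lookup("alexandria:letrec") # bogus symbol -> empty result
```

      An empty result for a made-up name is exactly the signal the bogus-symbol sessions were missing.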

  • Not CL specifically, but it works well with Clojure, which fits better than non-Lisp languages (imo) once you give the LLM direct access to the REPL