Comment by ninjis
3 months ago
The "Learning" point drives home a concern my brother-in-law and I were talking about recently. As LLMs become more entrenched as a tool, they may inevitably become the crutch that actually holds back innovation. Individuals and teams may be hesitant to explore or adopt bleeding edge technologies specifically because LLMs don't know about them or don't know enough about them yet.
It was all in science fiction in 1957: "Profession" by Isaac Asimov http://employees.oneonta.edu/blechmjb/JBpages/m360/Professio...
Excellent read
I see this quite a bit with Rust. I honestly cringe when people get up in arms about someone taking their project out of the Rust community.
The same can be said of books as of programming languages:
"Not every ___ deserves to be read/used"
If the learning curve is so steep or the documentation so convoluted that it discourages newcomers, then perhaps it's just not a language that's fit for widespread adoption. That's actually fine.
"Thanks for your work on the language, but this one just isn't for me" "Thanks for writing that awfully long book, but this one just isn't for me"
There's no harm in saying either of those statements. You shouldn't be disparaged for saying that Rust just didn't work out for your case. More power to the author.
Rust attracts a religious fervour that you'll almost never see associated with any other language. That's why posts like this make the front page and receive over 200 comments.
If you switched from Java to C# or vice versa, nobody would care.
A religious fervor against it: no one is in the comments telling the OP he’s wrong.
How is that different from choosing not to adopt a technology because it’s not widely used therefore not widely documented? It’s the timeless mantra of “use boring tech” that seems to resurface every once in a while. It’s all about the goal: do you want to build a viable product, quickly, or do you want to learn and contribute to a specific tech stack? That’s the trade off most of the time.
It's a lot worse. A high quality project can have great documentation and guides that make it easy to use for a human, but an LLM won't until there's a lot of code and documents out there using it.
And if it's not already popular, that won't happen.
No, this doesn't ring true: long before there were LLMs, people were selecting languages and stacks because of the quality and depth of their community.
But also: there is a lot of Rust code out there! And a cubic fuckload of high-quality written material about the language, its idioms, and its libraries, much of it pretty famous. I don't think this issue is as simple as it's being made out to be.
I was actually meaning to post this as an Ask HN question, but never found the time to word it well. Basically: what happens to new frameworks and technologies in the age of widespread LLM-assisted coding? Will users be reluctant to adopt bleeding-edge tools because the LLMs can't assist as well? Will companies behind the big frameworks put more resources towards documenting them in a way that makes it easy for LLMs to learn from?
Actually, here in my corner of the EU, only the prominent, big-tech-backed, well-documented, battle-tested tools count as marketable skills. So: React? 50 new jobs. But you worked with Svelte/Solidjs? What is that? Java/PHP/Python/Ruby/JS: adequate jobs. Go/Rust/Zig/Crystal/Nim: what are these? Go has gained some popularity in recent years, and I can spot Rust once in a blue moon. Anything requiring near-the-metal work is always C/C++.
Availability of documentation and tooling, widespread adoption, and access to people already trained on someone else's dime are what make a technology feel safe for hiring decisions. Sometimes a niche tech is spotted in the wild, but mostly because some senior/staff engineer wanted to experiment and it became part of production when management saw no issue. That can open doors for practitioners of those stacks, but the probability is akin to getting hit by a lightning strike.
This is just reality outside of the early-stage startup world. The US tech industry and its social networks are dominated by trendy startup ideas, but the reality is still the major tried-and-true platforms.
Maybe it is not the regulations that are holding the EU back.
Another way to look at it: working on the bleeding edge will become a competitive advantage and a signal of how competent the team is. "Do they consume it" vs. "do they own it".
Or a signal that someone did not think about the bus factor and the future of the project when most of the team jumped ship.
Constantly chasing the latest tech trends has probably done more harm than good, because more often than not, it turns out that the latest hype technology actually does not deliver what the marketing had promised. Look at NoSQL and MongoDB especially as recent examples. Most people who blindly jumped on the MDB bandwagon would have probably been better off just using Postgres, and they later had to spend a lot of resources migrating away from Mongo.
To me, constantly chasing the latest trends means a lack of experience in a team and an absence of focus on what is actually important, which is delivering the product.
This already happens. "Is your new framework popular on GitHub and Stack Overflow?" is a metric people use. LLMs are currently mostly capable of just adapting documentation, blog posts, and answers on SO, so they add a thin veneer on top of those resources.
This is already happening.
On one hand, yes, when it comes to picking tools for new projects, LLM awareness of them is now a consideration in large companies.
And at the same time, those same companies are willing to spend time and effort to ensure that their own tooling is well-represented in the training sets for SOTA models. To the point where they work directly with the corresponding teams at OpenAI etc.
And yes, it does mean that the barrier to entry for new competitors is that much higher, especially when they don't have the resources to do the same.
I expect it will wind up like search engines where you either submit urls for indexing/inclusion or wait for a crawl to pick your information up.
Until the tech catches up, it will have a stifling effect on progress toward, and adoption of, new things (which imo is pretty common with new/immature tech; e.g., culture more generally has kind of stagnated since the early 2000s).
Hopefully, tools can adapt to integrate documentation better. I've already run into this with GitHub Copilot: trying to use Svelte 5 with it is a battle, despite Svelte 5 having been released the better part of a year ago.
There's another future where reasoning models get better with larger context windows, and you can throw a new programming language or framework at them and they will do a pretty good job.
We already have quite a lot of that effect with tooling. A language can't really get much traction until it's got build tooling, packaging, and all the IDE support we expect; however productive the language is, it loses out in practice because it's hard to work with and doesn't just fit into our CI/CD systems.
What innovation? Languages with curly braces versus BEGIN/END? There is no innovation going on in computer languages. Rust is C with better ergonomics and rigorous memory management, made practical by better processors that can run more elaborate compilers. It all gets compiled by LLVM down to the same object code.
I think we are moving to an era of "read-only" languages: languages with horrible writing ergonomics that are nevertheless easy to understand when read. Humans won't write code. They will review code.
Doesn't this mean that new tech will have to demonstrate material advantages, ones that outweigh the LLM inertia, in order to be adopted? That sounds good to me; so much framework churn seems to be fashion rather than function. Now if someone releases a new framework, they need to demonstrate real value first. People who are smart enough to read the docs and absorb the material of a new, better framework will have a competitive advantage; this all seems good.
I think it's a good point, and I experienced the same thing when playing with SDL3 the other day. So even established languages with new APIs can be problematic.
However, I had a different takeaway when playing with Rust+AI. Having a language that has strict compile-time checks gave me more confidence in the code the AI was producing.
I did see Cursor get in an infinite loop where it couldn't solve a borrow checker problem and it eventually asked me for help. I prefer that to burying a bug.
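For illustration, here's a minimal sketch (reconstructed, not the exact code from that session) of the kind of borrow-checker knot it looped on: holding a reference into a Vec while mutating it.

    fn main() {
        let mut items = vec![1, 2, 3];
        let first = &items[0]; // immutable borrow of `items` begins here
        items.push(4);         // error[E0502]: cannot borrow `items` as mutable
        println!("{first}");   // ...because the immutable borrow is still in use
    }

Copying the value out instead (`let first = items[0];`) ends the borrow and compiles; the loop started when the model kept reshuffling references rather than stepping back.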
I had the same issue a few months ago when I was trying to ask LLMs about Box2D 3.0. I kept getting answers that were either for Box2D 2.x, or some horrific mashup of 2.x and 3.0.
Now Box2D 3.1 has been released and there's zero chance any of the LLMs are going to emit any useful answers that integrate the newly introduced features and changes.
Almost every time I've run into similar problems with LLMs, I've managed to solve them by uploading the documentation for the specific version of the library I'm using and instructing the LLM to use that documentation when answering questions about the library.
It's not even about innovation. I had a new Laravel project that I was chopping around in to play with some new library, and I couldn't get the dumbest stuff to work. Of course I went back to read the docs and, ah, Laravel 19 or whatever is using config/bootstrap.php again, and no matter what ChatGPT or I had figured, neither of us could understand why it wasn't working.
Unfortunately, with a lot of libraries and services, I don't think ChatGPT understands the differences between versions, or it would be hard for it to. At least I have found that when writing scriptlets for RT, PHP tooling, etc. The web world moves fast enough (and RT moves hella slow) that it confuses libraries and interfaces across versions.
It'd really need wider project context, where it can go look at how those includes, or functions, or whatever actually work, instead of relying on "built-in" knowledge.
"Assume you know nothing, go look at this tool, api endpoint or, whatever, read the code, and tell me how to use it"
I have that worry as well, but it may not be as bad as I feared. I am currently developing a Python serialization/deserialization library based on advanced multiple dispatch, so it is fairly different from how existing libraries work. Nonetheless, if I ask LLMs (using Cursor) to write new functionality or plugins within my framework, they are surprisingly adept at it, even with limited guidance. I expect it'll only get better in the next few years. Perhaps a set of AI directives and examples for new technologies would suffice.
In any case, there has always been a strong bias towards established technologies that have a lot of available help online. LLMs will remain better at using them, but as long as they are not completely useless on new technologies, they will also help enthusiasts and early adopters work with them and fill in the gaps.
I don't think we will have a lack of people who explore and who know, better than others, how to do things.
LLMs will make people productive. But at the same time they will elevate those with the real skill and passion to create good software. In the meantime there will be some market confusion, and some mediocre engineers might find themselves in demand like top-end engineers. But over time, companies and markets will catch on, and top dollar will go to those select engineers who know how to do things with and without LLMs.
Lots of people are afraid of LLMs and think they are the end of the software engineer. It is and it is not. It's the end of the "CLI engineer" or the "front-end engineer" and all those specializations that were an attempt to require less skill and pay less. But the systems engineers who know how computers work, who can take all week describing what happens when you press Enter on google.com, will only be pushed into higher demand. This is because the single-skill "engineer" won't really be a thing.
tl;dr: LLMs won't kill software engineering; it's a reset. It will cull those who chose this path only because it paid well.
I've noticed this effect even with well-established tech, just in degrees of popularity. I've recently been working on a Swift/SwiftUI project, and the experience with LLMs compared to web dev stuff with React etc. is noticeably different/worse, which I mostly attribute to there probably being at least 20 times less Swift-specific content on the web.
There are a ton of Swift/SwiftUI tutorials out there for every new technology.
The problem is, they're all blogspam rehashes of the same few WWDC talks, so they all have the same blind spots and limitations, usually very surface-level.
Is that different from what is happening already? A lot of people won't adopt a language/technology unless it has a huge repository of answers on StackOverflow, mature tooling, and a decent hiring pool.
I'm not saying you're definitely wrong, but if you think that LLMs are going to bring qualitative change rather than just another thing to consider, then I'm interested in why.
New languages / packages / frameworks may need to collaborate with LLM providers to provide good training material. LLM-able training material may be the next important documentation thing.
Another potentially interesting avenue of research would be to explore allowing LLMs to use "self-play" to explore new things.
How can it compete with the vast amount of already-ingested codebases on GitHub? For LLMs, more data equals better results, so people will naturally be driven toward the better completions they get with already-established frameworks and languages. It would be hard to produce organic data on all the ways your technology can be (ab)used.
Allegedly one of the ways they've been training LLMs to get better at logic and reasoning, as well as factual accuracy, is to use LLMs themselves to generate synthetic training data. The idea here would be similar: generate synthetic training data. Generating this could be aided by LLMs, perhaps with a "playground" of some sort where LLMs could compile / run / render various things, to help select out things that work and things that don't work (as well as if you see error X, what the problem might be).
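As a toy sketch of that selection step (assumed details: a local rustc on PATH; the paths and function name are made up for illustration), a generated snippet only enters the synthetic dataset if it actually compiles:

    use std::fs;
    use std::process::Command;

    // Keep a generated snippet only if rustc accepts it. Failures could
    // also be kept, paired with their errors, as "what not to do" data.
    fn keep_candidate(snippet: &str) -> std::io::Result<bool> {
        let path = "/tmp/candidate.rs";
        fs::write(path, snippet)?;
        let status = Command::new("rustc")
            .args(["--edition", "2021", "--crate-type", "lib", path, "--out-dir", "/tmp"])
            .status()?;
        Ok(status.success())
    }

    fn main() -> std::io::Result<()> {
        let ok = keep_candidate("pub fn add(a: i32, b: i32) -> i32 { a + b }")?;
        println!("keep: {ok}");
        Ok(())
    }

Running and rendering would need a sandbox on top of this, but the principle is the same: the toolchain itself becomes the labeler.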
In a similar way to how this works for natural languages. Turns out that if you train the model on e.g. vast quantities of English, teaching it other languages doesn't require nearly as much, because it has already internalized all the "shared" parts (and there's a lot more of those than there are surface differences).
But, yes, it does mean that new things that are drastic breaks with old practices are much harder to teach compared to incremental improvements.
Unity was a better choice of game engine long before the existence of LLMs.
It's the same now. I've arguably spent too much effort trying to avoid Python, and it has cost me a whole lot of time. You keep running into bugs and have to implement much more yourself if you go off the beaten path (see also [1]). I don't regret it, since I learned a lot, but it's definitely not always the easiest path. To this day I wonder whether I should have taken the simple route.
[1]: https://huijzer.xyz/posts/killer-domain/
A showerthought I had recently was that newly-written software may have a perverse incentive to be intentionally buggy such that there will be more public complaints/solutions for said software, which gives LLMs more training data to work with.