Comment by bluefirebrand

5 days ago

> First, if anything, training data for newer libs can only increase.

How?

Presumably, in the "every coder is using AI assistants" future, there will be an incredible amount of friction getting people to adopt languages that AI assistants know nothing about

So how does the training data for a new language get made, if no programmers are using the language, because the AI tools that all programmers rely on aren't trained on the language?

The snake eating its own tail

You can code today with new libs; you just need to tell the model what to use. Things like context7 work, as does downloading docs, llms.txt, or whatever else pops up in the future. The idea that LLMs can only generate what they were trained on is about 3 years out of date. They can do pretty neat things today with material provided in context.
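A minimal sketch of what "tell the model what to use" can look like in practice. The library `newlib` and its API are hypothetical, purely for illustration; the only real convention assumed is that docs (e.g. an llms.txt file) get prepended to the prompt:

```python
# Sketch of the "put the docs in context" workflow: prepend a library's
# documentation to the request so the model can use an API it was never
# trained on. "newlib" and its functions are made up for this example.

def build_prompt(docs: str, task: str) -> str:
    """Assemble a prompt that pairs library docs with the user's task."""
    return (
        "You are a coding assistant. Use ONLY the library API described "
        "in the documentation below.\n\n"
        "--- LIBRARY DOCS ---\n"
        f"{docs}\n"
        "--- END DOCS ---\n\n"
        f"Task: {task}\n"
    )

if __name__ == "__main__":
    # In practice `docs` would be read from the library's llms.txt or
    # generated reference pages, e.g. open("llms.txt").read().
    docs = "newlib.connect(url) -> Client\nClient.fetch(id) -> Record"
    prompt = build_prompt(docs, "Fetch record 42 and print it.")
    print(prompt)  # this string is what gets sent to the model
```

The point is that the model never needed `newlib` in its training set; the relevant slice of the API travels with every request.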

  • The context would have to be massive in order to ingest an entire new programming language, plus its associated design patterns, best practices, and so on, wouldn't it?

    I'm not an expert here by any means, but I'm not seeing how this makes much sense versus just using languages that the LLM is already trained on.