Comment by suddenlybananas
19 hours ago
How could an LLM learn a programming language sufficiently well unless there is already a large corpus of human-written examples of that language?
I'm pretty sure ChatGPT could write a program in any language that is similar enough to existing languages. So you could start by translating existing programs.
An LLM could generate such a corpus, right? With feedback mechanisms such as side-by-side tests.
So… the LLM learns from a corpus it has created?
Yes. The learning comes from running tests on the program and ensuring they pass, i.e. running as an agent. Tests and the compiler give hard feedback; that's the data outside the model that it learns from.
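Roughly, each generation attempt gets graded by something external to the model. A minimal sketch of that loop, assuming the tests live in tests/ and import the candidate from solution.py, and with generate_program as a hypothetical stand-in for the model call:

```python
import subprocess


def passes_tests(candidate_code: str) -> bool:
    """Hard, external feedback: write the candidate out and run the test suite."""
    with open("solution.py", "w") as f:   # assumed layout: tests import from solution.py
        f.write(candidate_code)
    result = subprocess.run(["pytest", "tests/", "-q"], capture_output=True, text=True)
    return result.returncode == 0


def collect_example(prompt: str, generate_program):
    """Turn one attempt into a (prompt, program, reward) triple for training."""
    candidate = generate_program(prompt)            # hypothetical LLM call
    reward = 1.0 if passes_tests(candidate) else 0.0
    # The reward comes from the test runner / compiler, not from the model itself,
    # so the resulting corpus is graded by data outside the model.
    return prompt, candidate, reward
```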
I think modern RLHF schemes use reward models that train LLMs. LLMs teaching each other isn't new.
My knowledge is limited, just based on a read of https://huyenchip.com/2023/05/02/rlhf.html though.
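For what it's worth, the "model that trains the LLM" in RLHF is the reward model: it scores the LLM's outputs, and that score, rather than a fresh human label per sample, drives the update. A schematic sketch only; the llm / reward_model / optimizer interfaces here are hypothetical stand-ins, not a real library API:

```python
def rlhf_update(llm, reward_model, prompts, optimizer):
    """One schematic RLHF-style step: the reward model supplies the training signal."""
    for prompt in prompts:
        response = llm.generate(prompt)               # sample from the current policy
        score = reward_model.score(prompt, response)  # learned model of human preference
        # Policy-gradient flavour: make high-scoring responses more likely.
        loss = -score * llm.log_prob(prompt, response)
        optimizer.apply(loss)
```

Real implementations (e.g. PPO-based) add a KL penalty against the base model and a lot of other machinery, but the core loop is the reward model grading the policy's samples.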
It’s basically called “reinforcement learning” and it’s a common machine-learning technique.
You provide a goal as a big reward (e.g. tests passing), and smaller rewards for any particular behaviours you want to encourage, and then leave the machine to figure out the best way to achieve those rewards through trial and error.
After a few million attempts, you generally either have a decent result, or more data about which additional reward weightings you need to apply before iterating on the training again.
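A toy illustration of that reward structure, not any particular RL algorithm; the integer candidate here just stands in for whatever the model is producing, and the target value is made up:

```python
import random


def reward(candidate: int, target: int = 42) -> float:
    """Big reward for hitting the goal (think: tests pass), smaller shaping reward otherwise."""
    if candidate == target:
        return 100.0
    return -abs(candidate - target)   # smaller signal nudging behaviour toward the goal


# Trial and error: mutate the current candidate, keep the change only if the reward improves.
candidate = random.randint(0, 1000)
for _ in range(100_000):
    mutated = candidate + random.choice([-10, -1, 1, 10])
    if reward(mutated) > reward(candidate):
        candidate = mutated

print(candidate, reward(candidate))
```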