Comment by didierbreedt
21 hours ago
I’m waiting for an LLM-focused language. We’re already seeing that AI does better with strongly typed languages. If we treat how an agent can ensure correctness, as instructed by a human, as the top priority, things could get interesting. The question is: will humans actually be able to make sense of it? Do we need to?
How could an LLM learn a programming language sufficiently well unless there is already a large corpus of human-written examples of that language?
I'm pretty sure ChatGPT could write a program in any language that is similar enough to existing languages. So you could start by translating existing programs.
An LLM could generate such a corpus, right? With feedback mechanisms such as side-by-side tests.
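A minimal sketch of what "side-by-side tests" could mean here: differential testing, where a trusted reference implementation and the LLM's translation into the new language are run on the same inputs and their outputs compared. Everything below (the functions, the input generator) is hypothetical illustration, not anything from an actual pipeline.

```python
import random

def reference_sum_of_squares(xs):
    # Stand-in for a trusted, human-written original program.
    return sum(x * x for x in xs)

def translated_sum_of_squares(xs):
    # Stand-in for the LLM's translation into the new language
    # (here just a different Python implementation for illustration).
    total = 0
    for x in xs:
        total += x * x
    return total

def differential_test(ref, candidate, trials=1000, seed=0):
    # Run both implementations side by side on random inputs;
    # any disagreement is a counterexample to feed back to the model.
    rng = random.Random(seed)
    for _ in range(trials):
        xs = [rng.randint(-100, 100) for _ in range(rng.randint(0, 20))]
        if ref(xs) != candidate(xs):
            return False, xs
    return True, None

ok, counterexample = differential_test(reference_sum_of_squares,
                                       translated_sum_of_squares)
print(ok)  # True when the translation agrees on all sampled inputs
```

Passing tests like this could act as the reward signal for keeping a generated program in the training corpus; failing cases become feedback for regeneration.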
So… the LLM learns from a corpus it has created?
8 replies →
I've wondered about this too. What would a language look like if it were designed with tokenization in mind? Could you have a denser, more efficient encoding of expressions? At the same time, the language could be more verbose and exacting where it helps, because a human wouldn't bemoan reading or writing it.
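As a toy illustration of the density idea: the same expression in a verbose, human-oriented surface syntax versus a compact one, with whitespace-delimited chunks as a crude stand-in for tokens. Real LLM tokenizers (BPE and friends) would count differently, so this is only a rough proxy; both snippet strings are invented for the example.

```python
# Same expression, two hypothetical surface syntaxes.
verbose = "let result = add(multiply(x, y), multiply(x, z)) ;"
dense = "r=x*y+x*z"

def rough_token_count(s):
    # Crude proxy: whitespace-separated chunks. A real tokenizer
    # would split differently, but the relative gap is the point.
    return len(s.split())

print(rough_token_count(verbose), rough_token_count(dense))  # prints: 8 1
```

The trade-off the comment hints at: a token-dense form saves context-window budget, while a verbose form can carry more explicit structure for the model to condition on.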
I don't know if you've seen this: https://github.com/toon-format/toon
I saw some memes about it being CSV, but it actually makes a compelling case for yet another format.
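For anyone who hasn't clicked through: TOON (Token-Oriented Object Notation) compresses JSON-style data into a tabular, indentation-based form to cut token counts. Roughly, going from memory of the project's README (check the repo for exact syntax):

```
# JSON:
#   {"users": [{"id": 1, "name": "Alice"}, {"id": 2, "name": "Bob"}]}
#
# TOON (declares the array length and field names once, then rows):
users[2]{id,name}:
  1,Alice
  2,Bob
```

Hence the CSV memes: uniform arrays of objects do end up looking like a CSV with a header, but the format keeps nesting and type information that plain CSV drops.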