Comment by pmarreck
2 months ago
At that point, why not develop a custom language or IL that is specifically designed for LLM use and which compiles to good native code?
I propose WASM, or an updated version of it
Because LLMs would have no concept of that IL. They only have a model of what they have seen.
Oh? I've had great luck with LLMs and homemade ILs. It has become my favourite trick for getting LLMs to do complex things without overly complicating my side of the equation (i.e. the parsing, sandboxing, etc. that are much harder to deal with if you have it hand you code in a general-purpose language meant for humans to read).
There is probably a point where the ideas get so wild, so unlike anything it has seen before, that it starts to break down. But as long as they stay within the realm of what the LLM can handle in most common languages, my experience is that it picks up and applies the same ideas in the IL quite well.
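A minimal sketch of what the commenter is describing, using a hypothetical stack-based IL of my own invention (not anything from the thread): the LLM is asked to emit this IL instead of general-purpose code, so "parsing" is one split() per line and "sandboxing" is just an opcode whitelist.

```python
# Tiny hypothetical IL an LLM could be asked to emit instead of Python/JS.
# Parsing is trivial and nothing the model produces can reach the host system.

OPS = {
    "push": lambda stack, arg: stack.append(float(arg)),
    "add":  lambda stack, _: stack.append(stack.pop() + stack.pop()),
    "mul":  lambda stack, _: stack.append(stack.pop() * stack.pop()),
    "neg":  lambda stack, _: stack.append(-stack.pop()),
}

def run_il(program: str) -> float:
    """Execute IL text (e.g. produced by an LLM) and return the top of the stack."""
    stack: list[float] = []
    for lineno, line in enumerate(program.strip().splitlines(), start=1):
        line = line.split("#", 1)[0].strip()   # strip comments and blank lines
        if not line:
            continue
        op, *args = line.split()
        if op not in OPS:                       # whitelist = the sandbox
            raise ValueError(f"line {lineno}: unknown opcode {op!r}")
        OPS[op](stack, args[0] if args else None)
    return stack[-1]

# The kind of output you'd ask the LLM for:
print(run_il("""
push 2
push 3
add        # 2 + 3
push 4
mul        # (2 + 3) * 4
"""))      # -> 20.0
```

The point of the design is that rejecting bad output is cheap: anything outside the whitelist raises, and there is no general-purpose runtime to escape from.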
100%
People are still confusing an AI stitching together scraps of text it has seen that correlate with the input, with the idea that the AI understands causation and provides actual answers.
And people are also still clearly confusing "isn't human or conscious" with "can't possibly create new logical thoughts or come to new logical conclusions, i.e. do intellectual labor", when there is a plethora of evidence at this point that the latter is, in fact, the truth.
It is trained on WASM btw, but if we invented an IL specifically for it, it could easily be trained or fine-tuned on it. I've already had some success just handing it a language guide and letting it run with it.
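A hedged sketch of the "hand it a language guide" idea: the guide is just a short spec placed in the system prompt, and whatever comes back is validated by the whitelist interpreter sketched earlier. `call_llm` is a hypothetical stand-in for whichever chat API you actually use, not a real library call.

```python
# The "language guide" is a few lines of spec; the interpreter is the validator.

LANGUAGE_GUIDE = """\
You emit programs in a tiny stack IL, one instruction per line.
Opcodes: push <number>, add, mul, neg. Comments start with '#'.
Reply with IL only, no prose."""

def solve_with_il(task: str, call_llm) -> float:
    # call_llm(system=..., user=...) -> str is assumed, not a specific API.
    il_text = call_llm(system=LANGUAGE_GUIDE, user=task)
    return run_il(il_text)   # raises on any opcode outside the guide
```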