
Comment by skydhash

16 hours ago

Because LLMs will have no concept of that IL. They only have a model of what they have seen.

Oh? I've had great luck with LLMs and homemade ILs. It has become my favourite trick for getting LLMs to do complex things without overly complicating my side of the equation (i.e. the parsing, sandboxing, etc. that are much harder to deal with when you have it hand you code in a general-purpose language meant for humans to read).

There is probably a point where you go so wild with ideas it has never seen before that it starts to break down, but as long as things stay within the realm of what the LLM can handle in most common languages, my experience is that it picks up and applies the same ideas in the IL quite well.
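For what it's worth, the trick looks roughly like this. A minimal sketch in Python with a made-up three-op IL; the op names and the parse_il/run_il helpers are purely illustrative, not anything standard:

    # Instead of asking the LLM for general-purpose code, have it emit a tiny
    # line-based IL you fully control. Everything here (the ops, the IL itself)
    # is hypothetical.
    ALLOWED_OPS = {"LOAD", "ADD", "PRINT"}  # whitelist: anything else is rejected

    def parse_il(text):
        """Parse one instruction per line: OP arg1 arg2 ..."""
        program = []
        for lineno, line in enumerate(text.strip().splitlines(), start=1):
            op, *args = line.split()
            if op not in ALLOWED_OPS:
                raise ValueError(f"line {lineno}: unknown op {op!r}")  # cheap to reject and retry
            program.append((op, args))
        return program

    def run_il(program, out):
        """Interpret the IL. No eval/exec, so the output is sandboxed by construction."""
        regs = {}
        for op, args in program:
            if op == "LOAD":        # LOAD reg value
                regs[args[0]] = float(args[1])
            elif op == "ADD":       # ADD dst src1 src2
                regs[args[0]] = regs[args[1]] + regs[args[2]]
            elif op == "PRINT":     # PRINT reg
                out.append(regs[args[0]])

    # Pretend this string came back from the model:
    llm_output = """
    LOAD a 2
    LOAD b 3
    ADD c a b
    PRINT c
    """

    results = []
    run_il(parse_il(llm_output), results)
    print(results)  # [5.0]

Because the interpreter only knows a handful of whitelisted ops, the parsing and sandboxing collapse into a few dozen lines, and malformed output is trivial to detect and retry.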

100%

People are still confusing AI stitching together scraps of text it has seen that correlate with its model of the input with the idea that AI understands causation and provides actual answers.