
Comment by JambalayaJimbo

2 days ago

> AI is a very leaky abstraction. You will always be worried

LLMs on their own are leaky abstractions. LLMs wrapped in the right scaffolding can be made mostly correct.

For example: define the range of inputs and the expected outputs, ask the LLM to write code, automatically run that code against the input range, evaluate the results, and ask the LLM to fix any cases where the output doesn't match what's expected.
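That loop can be sketched in a few lines. This is a hypothetical illustration, not a real API: `ask_llm` stands in for an actual model call and is stubbed here to return a buggy first draft and then a corrected one, so the wrapper logic itself runs end to end.

```python
# Generate-test-fix loop: ask for code, run it against known
# (input, expected-output) pairs, and feed failures back to the model.

TEST_CASES = [((2,), 4), ((3,), 9), ((-1,), 1)]  # (args, expected) for square(x)

# Stubbed model responses: a wrong first attempt, then a fixed one.
_drafts = iter([
    "def square(x):\n    return x + x",  # buggy draft
    "def square(x):\n    return x * x",  # corrected draft
])

def ask_llm(prompt: str) -> str:
    """Stand-in for a real LLM call; a real wrapper would send `prompt` to a model."""
    return next(_drafts)

def run_candidate(code: str):
    """Execute the candidate and collect every (input, got, expected) mismatch."""
    ns = {}
    exec(code, ns)
    fn = ns["square"]
    return [(args, fn(*args), want) for args, want in TEST_CASES if fn(*args) != want]

def generate_until_correct(spec: str, max_rounds: int = 5) -> str:
    code = ask_llm(f"Write code for: {spec}")
    for _ in range(max_rounds):
        failures = run_candidate(code)
        if not failures:
            return code  # every input produced the expected output
        code = ask_llm(f"These cases failed: {failures}\nFix this code:\n{code}")
    raise RuntimeError("no correct candidate within the round budget")

result = generate_until_correct("square an integer")
print(result)
```

The key point is that the model never has to be right on the first try; the harness supplies concrete counterexamples, and the model only has to improve against them.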

That whole process can be made fast without the need for huge models. The model doesn't need to be trained on everything in CS, because it doesn't need to get the code correct on the first try; it just needs to be trained on enough code to understand how a change affects the output, and then iterate on that. I.e., basically making the model do smart guided search. This was done with MuZero to great success; not sure why nobody is focusing on it now.