
Comment by empressplay

21 days ago

While large language models may lack the nuance needed for AGI, there is still some promise in multimodal models, or in models trained purely on other high-bandwidth data such as video. So probabilistic token-based models aren't entirely out of the running yet.

Part of the problem with LLMs specifically is ambiguity -- it's poisonous to a language model, and English is full of it. So another avenue being explored is translating everything (with the nuance preserved) into a language that is more precise, or rewriting the training data in more exact English to eliminate the ambiguities.
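To make the rewriting idea concrete, here's a minimal sketch of rule-based disambiguation applied to training text. The rules and sentences are hypothetical, purely for illustration -- real systems would need far more sophisticated sense and coreference resolution:

```python
# Minimal sketch: rewriting ambiguous training text into more exact English.
# All rules and example sentences are hypothetical, for illustration only.
import re

# Each rule maps an ambiguous construction to a more explicit paraphrase.
DISAMBIGUATION_RULES = [
    # "bank" is lexically ambiguous; when "river" appears later in the
    # sentence, make the intended sense explicit.
    (re.compile(r"\bbank\b(?=.*\briver\b)"), "riverbank"),
    # A pronoun with an unclear antecedent can be replaced once the
    # referent is resolved (here, hard-coded for one known sentence).
    (re.compile(r"\bThe council refused the marchers a permit because they\b"),
     "The council refused the marchers a permit because the council"),
]

def rewrite(text: str) -> str:
    """Apply each rule in order, producing a less ambiguous variant."""
    for pattern, replacement in DISAMBIGUATION_RULES:
        text = pattern.sub(replacement, text)
    return text

print(rewrite("We sat on the bank and watched the river."))
# → We sat on the riverbank and watched the river.
```

The hard part, of course, is that doing this at scale amounts to solving word-sense and coreference disambiguation up front -- which is much of what we hoped the models would learn for themselves.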

So there are ideas, and people are still at it. After all, it usually takes decades to fully exploit any new technology; I don't expect models to be any different.