
Comment by gwervc

7 months ago

We need to find a Plato's cave analogy for people who believe LLM output is anything more than syntactically correct and somewhat semantically correct text.

I can't help but feel that people are both underestimating and overestimating these LLMs. To me, they act like a semantic memory system: a network of weights of relatedness. They can help us find facts, but they are subject to averaging, or to errors toward category exemplars, and they get more precise when given context to aid retrieval.

But expecting a network of semantic weights to make inferences about something new takes other kinds of engines: for example, an ability to focus attention on a general domain heuristic (or a low-dimensional embedding), judge whether that heuristic might be applicable to another information domain, apply it naively, and then assess the result. Focusing on the details of a domain can often preclude applying otherwise useful heuristics, because it directs attention to differences rather than similarities, when the first step in creation (or a startup) is unreasonable faith, just as children learn fast by holding unreasonable beliefs about their own abilities.
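
As a toy illustration of the "network of weights of relatedness" picture, here is a minimal sketch of similarity-based retrieval. The fact list, queries, and bag-of-words scoring are all invented for the example (not anything an LLM actually does internally); it only shows how a bare query lands on whichever stored item happens to score highest, while added context sharpens retrieval toward the intended sense.

```python
from collections import Counter
import math

# Invented "facts", standing in for items in a semantic memory.
facts = [
    "mercury is the planet closest to the sun",
    "mercury is a metallic chemical element that is liquid at room temperature",
    "freddie mercury was the lead singer of queen",
]

def vectorize(text):
    # Bag-of-words counts stand in for a proper embedding.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm

def retrieve(query):
    qv = vectorize(query)
    return max(facts, key=lambda f: cosine(qv, vectorize(f)))

# A bare, ambiguous query lands on whichever fact scores highest
# (an exemplar-like error)...
print(retrieve("mercury"))
# ...while extra context steers retrieval toward the intended sense.
print(retrieve("mercury liquid metal element"))
```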

I wonder whether there is a way to train an LLM to output, or inordinately learn, only concept-level abstractions.

If the model is called by a program that takes the model's output, runs the commands the model asks for, and then passes the commands' output back to the model, then the model has an effect in the real world. A sketch of such a loop is below.
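
Here is a minimal sketch of that loop, under assumptions not stated in the comment: a placeholder call_model() function standing in for whatever LLM API is used, and a literal "DONE" reply as the stop marker.

```python
import subprocess

def call_model(transcript: str) -> str:
    """Placeholder for a real LLM API call; assumed to return either a shell
    command to run or the literal word DONE. Not a real library function."""
    raise NotImplementedError("wire this up to an actual model")

def agent_loop(task: str, max_turns: int = 10) -> str:
    transcript = f"Task: {task}\n"
    for _ in range(max_turns):
        reply = call_model(transcript).strip()
        if reply == "DONE":
            break
        # The model's text is executed as a real command, then its output is
        # appended to the transcript and shown to the model on the next turn.
        result = subprocess.run(
            reply, shell=True, capture_output=True, text=True, timeout=60
        )
        transcript += f"\n$ {reply}\n{result.stdout}{result.stderr}"
    return transcript
```

Because every reply is executed verbatim, this is exactly what gives the model effects outside the conversation, and also why such loops are normally sandboxed.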