Comment by TeMPOraL

2 years ago

> But so far, most of the proposals seem to involve bolting something on the outside of the black box of the LLM itself.

This might be the only way. I maintain that, if we're making analogies to humans, then LLMs best fit as the equivalent of one's inner voice - the thing sitting at the border between the conscious and the (un/sub)conscious, which surfaces thoughts in the form of language - the "stream of consciousness". The instinctive, gut-feel responses which... you typically don't voice, because they tend to sound right but usually aren't. Much as we do extra processing, conscious or otherwise, to turn that stream of consciousness into something reasonably correct, I feel the future of LLMs is to be a component of a larger system - surrounded by additional layers that process the LLM's output, or do a back-and-forth with it, until something reasonably certain and free of hallucinations is reached.