Comment by qgin
2 days ago
We didn’t design these models to be able to do the majority of the stuff they do. Almost ALL of their abilities are emergent. Mechanistic interpretability is only beginning to understand how these models do what they do. It’s much more a field of discovery than traditional engineering.
> We didn’t design these models to be able to do the majority of the stuff they do. Almost ALL of their abilities are emergent
Of course we did. Today's LLMs are a result of extremely aggressive refinement of training data and RLHF over many iterations targeting specific goals. "Emergent" doesn't mean it wasn't designed. None of this is spontaneous.
GPT-1 produced barely coherent nonsense, but it was more statistically similar to human language than random noise. With a larger parameter count, GPT-2's increased statistical power was apparent, though what it produced was still obviously nonsense. GPT-3 had enough statistical power to maintain coherence over multiple paragraphs, and that really impressed people. With GPT-4 and its successors, the statistical power became so strong that people started to forget it still produces nonsense if you let the sequence run long enough.
Now we're well beyond just RLHF and into a world where "reasoning models" are explicitly designed to produce sequences of text that resemble logical statements. We say they're reasoning for practical purposes, but it's the exact same statistical process that was already obvious at GPT-1 scale.
The corollary to all this is that a phenomenon like consciousness has absolutely zero reason to exist in this design history. It's a baseless suggestion that people make because the statistical power makes the text easy to anthropomorphize, not because there's any actual evidence for it.
Right, but RLHF is mostly reinforcing answers that people prefer. Even if you don't believe sentience is possible, it shouldn't be a stretch to believe that sentience might produce answers that people prefer. In that case it wouldn't need to be an explicit goal.
>it shouldn't be a stretch to believe that sentience might produce answers that people prefer
Even if that were true, there's no reason to believe that training LLMs to produce answers people prefer leads them toward sentience.