Comment by enahs19

3 months ago

Everyone misses the forest for the trees. Deep learning applications built from neural units in layered topologies excel at revealing and exploiting correlations (what people flippantly dismiss as "pattern matching") at many scales within a data set. Commercial LLMs are simply deep learning applications purpose-built to reveal the correlations in the corpus of internet-available language.

The impressive thing about LLMs is not that they seem to give such comprehensive answers to queries, but rather what that fact says about human language itself. Human language uses thousands of tokens (degrees of freedom) which could, in theory, be combined in an effectively infinite number of ways to encode information. Yet LLMs show us that we actually use our tokens in a very limited, highly correlated manner.

Taking it a step further, this also demonstrates the limits of deep learning: that an LLM requires a trillion parameters and $100B to characterize the much, much lower dimensionality of this data set should be a clear signal that LLMs, and likely all deep learning approaches based on data alone, are not a viable path to "intelligence".

Anyway, I'm just a valet (yes, FSD fans, this still exists), so what do I know?
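To make the "highly correlated tokens" point concrete, here is a toy sketch: the empirical per-token entropy of a text sample sits below the uniform bound log2(V), and conditioning on just one previous token drops it further, which is exactly what "limited, highly correlated use of tokens" means in information-theoretic terms. The sample sentence and the whitespace "tokenization" are illustrative assumptions on my part, not how commercial LLM tokenizers actually work.

```python
import math
from collections import Counter

# Illustrative sample text and naive whitespace tokenization (assumption,
# not a real LLM tokenizer).
text = ("the cat sat on the mat the dog sat on the rug "
        "the cat and the dog sat together on the mat")
tokens = text.split()

def entropy(counter):
    """Shannon entropy in bits of the empirical distribution in `counter`."""
    n = sum(counter.values())
    return -sum((c / n) * math.log2(c / n) for c in counter.values())

# Unigram entropy: bits per token, ignoring context entirely.
h_uni = entropy(Counter(tokens))

# Bigram conditional entropy H(next | prev): bits per token given one
# token of context. Correlations between adjacent tokens push this down.
pairs = list(zip(tokens, tokens[1:]))
by_prev = {}
for prev, nxt in pairs:
    by_prev.setdefault(prev, Counter())[nxt] += 1
h_cond = sum(sum(c.values()) * entropy(c) for c in by_prev.values()) / len(pairs)

# Uniform upper bound: every distinct token equally likely, no correlations.
h_max = math.log2(len(set(tokens)))

print(f"uniform bound log2(V): {h_max:.2f} bits/token")
print(f"unigram entropy:       {h_uni:.2f} bits/token")
print(f"bigram conditional:    {h_cond:.2f} bits/token")
```

Even on this tiny sample the ordering h_cond < h_uni < h_max holds; on real corpora the gap is what an LLM's trillion parameters are spent characterizing.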