
Comment by sriku

1 year ago

An analogy I made to a friend: language models capture the constraints in the arrangement of tokens in streams of communication. LLMs that model the constraints placed by human intelligence on token streams can no more be said to have attained (human) intelligence than physicists who decode the constraints placed by a god-like intelligence on the universe can be said to have attained god-like intelligence themselves. (Alluding to comments by theist physicists along the lines of "deciphering the mind of God".)

> language models capture the constraints in the arrangement of tokens in streams of communication

Yes, but ultimately that includes all of math, logic, science, physics, etc., which as far as we can tell are fundamental truths of the universe. And if a large enough LLM can capture enough of those constraints, what's the functional difference between its intelligence and ours?