
Comment by hackinthebochs

6 days ago

Linear regression has well characterized mathematical properties. But we don't know the computational limits of stacked transformers. And so declaring what LLMs can't do is wildly premature.
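As an aside on the "well characterized" point: ordinary least squares admits a closed-form solution via the normal equations, so its behavior can be verified exactly — the kind of guarantee we lack for stacked transformers. A minimal sketch with made-up data (NumPy assumed; the coefficients here are purely illustrative):

```python
import numpy as np

# Ordinary least squares is fully characterized: the fit is given in
# closed form by the normal equations, beta = (X^T X)^{-1} X^T y.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
true_beta = np.array([2.0, -1.0, 0.5])          # hypothetical coefficients
y = X @ true_beta + rng.normal(scale=0.01, size=100)

# Closed-form solution via the normal equations.
beta_closed = np.linalg.solve(X.T @ X, X.T @ y)

# Numerical least-squares solver agrees with the closed form.
beta_lstsq, *_ = np.linalg.lstsq(X, y, rcond=None)

assert np.allclose(beta_closed, beta_lstsq)
```

No comparable closed-form account exists for what a deep transformer computes, which is the asymmetry the comment is pointing at.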

> And so declaring what LLMs can't do is wildly premature.

The opposite is true as well: declaring what LLMs *can* do is just as premature. Emergent complexity isn’t limitless. Just as early physicists tried to explain the emergent complexity of the universe through experimentation and theory, so should we try to explain the emergent complexity of LLMs through experimentation and theory.

Specifically not pseudoscience, though.

  • > so should we try to explain the emergent complexity of LLMs through experimentation and theory.

    Physicists had the real world to verify theories and explanations against.

    So far, anyone 'explaining the emergent complexity of LLMs through experimentation and theory' is essentially just making stuff up that nobody can verify.

  • Sure, that's true as well. But I don't see this as a substantive response given that the only people making unsupported claims in this thread are those trying to deflate LLM capabilities.

    • So, to review this thread:

        - OP asked for someone to make a logical argument for the separation of “training” from “model”
        - I made the argument
        - You cherry-picked an argument against my specific example and made an appeal to emergent complexity
        - I pointed out that emergent complexity isn’t limitless
        - “the only people making unsupported claims in this thread are those trying to deflate LLM capabilities”
