Comment by seedpi

5 days ago

What I find interesting is the supposition that weights must change. The connections of my motherboard do not change, yet it can simulate any system.

Perhaps there is an architecture that is write-once-read-forever, and all that matters is context.

There's almost certainly some of this in the human mind, and I bet there is much more of it than we are willing to admit. No amount of mental gymnastics is going to let you visualize 6D structures.

  • > supposition that weights must change

    The thing is that's where most of the learning and 'intelligence' is. If you don't change them, the model doesn't really get smarter.

    • > The thing is that's where most of the learning and 'intelligence' is

      The question is: Is it required for AGI that the model changes its weights _during deployment_, or can we train up and deploy like we do now and manage learning via context?

      Taken to the extreme, "context" could be defined as "the change in weights since training time", in which case the answer is trivially "yes", but that seems like cheating.

      2 replies →
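
The frozen-weights-plus-context idea discussed above can be illustrated with a toy. The sketch below is my own construction, not anything from the thread, and is nothing like a real transformer: its single parameter is fixed at construction ("write-once"), and all adaptation comes from the (x, y) examples placed in its context, loosely in the spirit of attention over a prompt.

```python
import math

class FrozenModel:
    """Toy model whose only 'weight' (a temperature) is frozen at
    construction. It predicts by taking a softmax-weighted average
    of the labels of example pairs supplied purely in context."""

    def __init__(self, temperature=0.1):
        self.temperature = temperature  # fixed forever after init

    def predict(self, context, x):
        # Weight each context label by exp(-distance / temperature),
        # so nearby examples dominate the prediction.
        scores = [math.exp(-abs(x - cx) / self.temperature) for cx, _ in context]
        total = sum(scores)
        return sum(s * cy for s, (_, cy) in zip(scores, context)) / total

model = FrozenModel()
# "Learning" the function y = 2x happens only by growing the context:
context = [(0.0, 0.0), (1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]
print(round(model.predict(context, 2.0), 3))  # close to 4.0
```

The point of the toy is the separation of concerns: everything the model "knows" about y = 2x lives in the context list, and improving it means appending examples, never touching the parameter.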