
Comment by cs702

21 days ago

Our brains, which are organic neural networks, are constantly updating themselves. We call this phenomenon "neuroplasticity."

If we want AI models that are always learning, we'll need the equivalent of neuroplasticity for artificial neural networks.

Not saying it will be easy or straightforward. There's still a lot we don't know!

I wasn't explicit about this in my initial comment, but I don't think you can equate more forward passes with neuroplasticity. For one thing, we (humans) also /prune/. And pushing new weights is in the same camp as RL, which simply overwrites the policy: you no longer have the previous state. We, with our neuroplasticity, still know the previous states even after we've "updated our weights".
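The "overwriting" point can be shown with a toy sketch (my own illustration, not from the comment above): a single-weight linear model trained with plain SGD on one task, then fine-tuned on a second. The weight converges to the new task and the old solution is simply gone; nothing in the optimizer retains the previous state.

```python
import numpy as np

rng = np.random.default_rng(0)

def grad(w, x, y):
    # gradient of the squared error 0.5 * (w*x - y)^2 with respect to w
    return (w * x - y) * x

w = 0.0
xs = rng.uniform(-1, 1, 200)

# Task A: learn y = 2x with plain SGD.
for x in xs:
    w -= 0.1 * grad(w, x, 2 * x)
w_after_a = w  # converges near 2

# Fine-tune on Task B: learn y = -3x. The same weight is overwritten.
for x in xs:
    w -= 0.1 * grad(w, x, -3 * x)
w_after_b = w  # converges near -3; nothing of Task A remains

# Task A error after Task B training: large, i.e. catastrophic forgetting.
loss_a = np.mean((w_after_b * xs - 2 * xs) ** 2)
```

Continual-learning methods (e.g. regularizing weights toward their previous values, as in elastic weight consolidation) try to mitigate exactly this, but they still don't give the model access to its earlier states the way the comment describes for humans.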

How would you keep controls (safety restrictions, IP restrictions, etc.) with that, though? The companies selling models right now probably want to keep those fairly tight.

  • This is why I’m not sure most users actually want AGI. They want special purpose experts that are good at certain things with strictly controlled parameters.

    • I agree. The fundamental problem is that we wouldn't be able to understand it ("AGI"), which makes it useless. Either it's useless, or you let it run unleashed and it's useful; either way you still can't understand it or predict it, and it's dangerous/untrustworthy. A constrained useful thing is great, but it fundamentally has to be constrained or it doesn't make sense.
