If done right, one step closer to actual AGI.
That is the end goal after all, but all the potential VCs seem to forget that almost every conceivable outcome of real AGI involves the current economic system falling to pieces.
Which is sorta weird. It's like if VCs in Old Regime France had started funding the revolution.
I think VCs end up in one of four categories:
1. They're too stupid to understand what they're truly funding.
2. They understand but believe they can control it for their benefit, basically want to "rule the world" like any cartoon villain.
3. They understand but are optimists who believe AGI will be a benevolent construct that will bring us to a post-scarcity society. There are a lot of rich entrepreneurs who still believe they are working to make the world a better place... (one SaaS at a time, but alas, they believe it)
4. They don't believe that AGI is close or even possible
If it makes the models smarter, someone will do it.
For anyone from an individual up to an entire country, not participating does nothing except ensure you don't have a card to play when it happens.
There is a very strong element of the principles of nature and life (as in survival, not nightclubs or hobbies) happening here that can't be shamed away.
The resource feedback for AI progress is immense (and it doesn't matter how much is earned today versus forward-looking investment). Very few things have ever had that level of relentless force behind them. And beyond the business need, keeping up is rapidly becoming a security issue for everyone.
If Moore's Law had fully kicked over twice more, we'd all have 64 GB GPUs, enthusiasts would have 2x64 GB, and data center build-outs wouldn't be needed.
Eventually GPU memory is going to creep up and local models will be powerful enough.
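The back-of-envelope behind "twice more" can be sketched, assuming a 16 GB consumer card as the baseline (the baseline is my assumption, not a quoted spec):

```python
# Two more Moore's-Law doublings from an assumed 16 GB baseline card.
baseline_gb = 16                              # assumption, not a quoted spec
doublings = 2
consumer_gb = baseline_gb * 2 ** doublings    # everyday card: 64 GB
enthusiast_gb = 2 * consumer_gb               # a 2x64 GB enthusiast box
```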
Yes the planet got destroyed. But for a beautiful moment in time we created a lot of value for shareholders.
And as for your comparison: they did fund the American Revolution, which in turn was one of the sparks for the French Revolution (or was that exactly the point you were making?)
The funding of the American Revolution is a fun topic, but most people don't know about it so I don't bother dropping references to it. :D
1. Progress is unstoppable. Refusing to fund it won't make it disappear.
2. Most VCs are normal people who just want a bigger slice of pie, not necessarily a bigger share of the pie. See the fixed-pie fallacy.
Our brains, which are organic neural networks, are constantly updating themselves. We call this phenomenon "neuroplasticity."
If we want AI models that are always learning, we'll need the equivalent of neuroplasticity for artificial neural networks.
Not saying it will be easy or straightforward. There's still a lot we don't know!
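A minimal sketch of what "always learning" could look like in software, assuming plain online SGD on a one-parameter linear model (all names and values are illustrative, not anyone's actual training setup):

```python
# Online SGD: the weight is nudged after every single observation,
# a crude software analogue of continuous plasticity.
def online_sgd(stream, w=0.0, lr=0.1):
    for x, y in stream:                 # each example seen once, in order
        grad = 2 * (w * x - y) * x      # gradient of squared error (w*x - y)^2
        w -= lr * grad                  # weights change on every step
    return w

# Observations drawn from y = 3x; w should drift toward 3.
stream = [(x, 3 * x) for x in (1.0, 2.0, 1.5, 0.5)] * 20
w_final = online_sgd(stream)
```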
I wasn't explicit about this in my initial comment, but I don't think you can equate more forward passes to neuroplasticity. For one, we humans also /prune/. And, similar to RL, which just overwrites the policy, pushing new weights is in the same camp: you don't have the previous state anymore. But we humans, with our neuroplasticity, still know the previous states even after we've "updated our weights".
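The "previous state is gone" point can be made concrete: after an in-place weight update, the old policy survives only if you explicitly snapshotted it first. A toy sketch with made-up values:

```python
import copy

weights = {"w1": 0.5, "w2": -1.2}     # the current "policy"

snapshot = copy.deepcopy(weights)     # the ONLY way back is a saved copy

# An update step overwrites the state in place.
weights["w1"] += 0.1
weights["w2"] -= 0.05

# Without `snapshot`, the values 0.5 and -1.2 are simply unrecoverable.
```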
How would you keep controls (safety restrictions, IP restrictions, etc.) with that, though? The companies selling models right now probably want to keep those fairly tight.
This is why I'm not sure most users actually want AGI. They want special-purpose experts that are good at certain things with strictly controlled parameters.
Tay the chatbot says hi from 2016.
How about we just put them to bed once in a while?
Please elaborate on this one
I think they mean the model should have a sleep period where it updates itself with what it learnt that day.
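One way to read that: the model stays frozen while "awake", buffers what it sees, and consolidates during an offline "sleep" pass. A toy sketch under those assumptions (all names hypothetical):

```python
class SleepyModel:
    """Frozen while awake; fine-tunes on the day's buffer while asleep."""

    def __init__(self):
        self.w = 0.0
        self.buffer = []                  # today's (input, target) pairs

    def respond(self, x, y):
        self.buffer.append((x, y))        # log the interaction
        return self.w * x                 # weights untouched while awake

    def sleep(self, lr=0.05, epochs=20):
        for _ in range(epochs):           # offline consolidation pass
            for x, y in self.buffer:
                grad = 2 * (self.w * x - y) * x
                self.w -= lr * grad
        self.buffer.clear()               # wake with updated weights

model = SleepyModel()
for x in (1.0, 2.0, 0.5):
    model.respond(x, 3 * x)               # day's data drawn from y = 3x
model.sleep()                             # w moves toward 3 overnight
```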
It is interesting.
Please elaborate