Comment by lenerdenator
2 days ago
> IMO the key problem that OpenAI have is that they are all-in on AGI
I think this needs to be said again.
Also, not only do we not know whether AGI is possible, but even if it is, it doesn't obviously bring much value.
At that point we're talking about upending 10,000 years of human society and economics, assuming the AGI doesn't decide humans are too dangerous to keep around and have the ability to wipe us out.
If I'm a worker or business owner, I don't need AGI. I need something that gets x task done with a y increase in efficiency. Most models today can do that provided the right training for the person using the model.
The SV obsession with AGI is more of a self-important Frankenstein-meets-Pascal's Wager proposition than it is a value proposition. It needs to end.
Why would AGI not be possible?
It might be hard, but it is definitely possible. We humans are the evidence for that.
Theoretically possible doesn't mean we're capable of doing it. Like, it's one thing to say "I'm gonna boil the ocean" and another thing for you personally to succeed at it while standing on a specific beach with the contents of several Home Depots.
Humans tend to vastly underestimate scale and complexity.
Because human brains are giant three-dimensional processors containing billions of neurons (each with computationally complex behavior of its own), with each neuron performing its computations more than three orders of magnitude more efficiently than transistors do, training an intelligence with trillions of connections in real time, all while attached to incredibly sophisticated sensors and manipulators.
And despite all that, humans are still just made of dirt.
Even if we can get silicon to do some of these tricks, that'd require multiple breakthroughs, and it wouldn't be cost-competitive with humans for quite a while.
I would even think it's possible that building brain-equivalent structures that consume the same power and do all the same work for the same resources is such a far-out science-fiction proposition that we can't even predict when it will happen, and that for practical purposes, once you consider the economics of humans versus machines, biological intelligences will hold an insurmountable advantage for even the furthest foreseeable future.
> And despite all that, humans are still just made of dirt.
No, we become dirt. I guess we are made of wood and computers are made of sand.
That’s rather presupposing materialism (in the philosophy of mind sense) is correct. That seems to be the consensus theory, but it hasn’t been shown ‘definitely’ true.
So, you're a business owner and you've decided we need AGI because you're fine. You'll have no one to blame when the Revolution comes.
You clearly do not understand AGI. It's a gamble that is most easily explained as creating a god. That thing won't hate us. We create its oxygen: data. If anything, it would empower us to make more of it.