Comment by ashdksnndck
11 days ago
> Basically the prime bet that they made (that one needs extremely expensive hardware to have useful AI) has already failed.
I thought the prime bet was that whichever lab first reaches takeoff through recursive self-improvement will create a galactic superintelligence. Not saying I believe this, but the people running the labs do. Under this scenario, if you are a few months behind at the pivotal moment, you might as well not exist at all.
only if said galactic superintelligence takes immediate steps to kill all its potential competitors, or hoover up all the world's resources, or some other aggressively zero-sum thing. otherwise I don't see what difference it makes down the line if you have the second superintelligence rather than the first.
and that's under the assumption that you can create a superintelligence that will continue to slavishly serve your agenda rather than establish and pursue its own goals.
This is also assuming that AGI is even possible. So far there is no evidence that this is actually doable over anything but billions of years (and even then we have no idea how nature really managed it).
Edit: Meant to say AGI (superintelligence didn't make sense). Superintelligence is undefinable at the moment, so even asking whether it's possible is more of a philosophical/sci-fi thought experiment than anything else.
> So far there is no evidence that this is actually doable over anything but billions of years (and even then we have no idea how nature really managed it).
"The brain is so mysterious and unique, that we should abandon all attempts to even try to apply results like the general approximation theorem to it and discard all signs that some approximation is happening."
Why don't we see signs of intelligence elsewhere in the universe? Because the simplest self-replicator requires the accidental synthesis of a sequence of 200 (or so) RNA nucleobases; the improbable step was getting life started at all, not going from life to intelligence.
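(Back-of-envelope on that number, assuming each of the ~200 positions is one of 4 bases chosen uniformly at random; a toy sketch of the combinatorics, not a real model of abiogenesis:)

```python
# Toy combinatorics: how many distinct RNA sequences of length ~200 exist,
# assuming 4 equally likely bases (A, C, G, U) per position. The "200"
# figure is taken from the comment above, not from any measurement here.
SEQUENCE_LENGTH = 200
NUM_BASES = 4

possible_sequences = NUM_BASES ** SEQUENCE_LENGTH
print(f"possible sequences ~= 10^{len(str(possible_sequences)) - 1}")
# prints: possible sequences ~= 10^120
# i.e. blind assembly of one specific replicator is roughly a 1-in-10^120
# event, which is the sense in which "billions of years" is the fast case.
```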
BTW, your argument could have been applied word-for-word to powered flight in 1899. In short, argumentum ad ignorantiam.
oh absolutely, no argument there, the case for AGI is pretty weak. I was just saying that I am even more sceptical that any of this is a "first or nothing" scenario - that is one of my biggest pet peeves about the entire tech sector.
ASI is the acronym you’re looking for. It stands for Artificial Superintelligence.
Arguably it's already here. ChatGPT has broader factual recall than any human who has ever lived. It can carry out millions of conversations at once, has better working memory ("context") than humans, and can speak and write code much faster than we can.
Humans still have some advantages: specialists are smarter than ChatGPT in most domains, we're better at using imagination, and we understand the physical world better. But it seems like we're watching the gap close in real time. A few years ago ChatGPT could barely program; now you can give it complex prompts and it will write large, complex programs that mostly work. If you extrapolate forward, is there any good reason to think humans will retain a lead?
One could argue that AI has already started to hoover up all the world’s resources. AI buildout as a percent of GDP is already high and still rising.
Don't blame machines for our folly. This is just standard bubble behavior.
Anthropic/OpenAI aren't planning to have their superintelligence take over the world, but they're still afraid that someone else will do it.
Well, no, because no one is going to be coming in to work to build the next AI model after the Singularity.
We’ll all be bblbrvkxn46?/4!gfbxf’mgv5fhxtgcsgjcucz to buvtcibycuvinovrYdyvuctYcrzuvhxh gcuch7…:!
If OpenAI has the second superintelligence they have to merge with the first and cooperate. It's a provision in their charter.
I'm not sure anyone thinks their charter carries much weight at this point.
I don't think this race-to-superintelligence idea should be taken too seriously. It is great for headlines and gets people's imaginations going, but it is mostly a marketing gimmick.
I look at superintelligence this way: software engineering used to be considered among the most mentally demanding jobs one could have, and in this field more and more people are handing off the engineering part to the machine, becoming approximately product managers. So we are about there. Who cares that there are some puzzles in some "synthetic" benchmark on which humans still outsmart AIs?
The people in that community have been talking about superintelligence for decades and it’s part of an ideology. It’s not some recently-invented story for headlines.
One thing I don’t understand about this viewpoint (which I understand isn’t your own): why does one benefit so tremendously from getting there a month before competitors? I’m sure having a month of superintelligence with no competition would be lucrative, but do they think achieving superintelligence first will impede competitors from also achieving it a month later?
A week of superintelligence should be enough to take over the world, or at least sabotage your competitors. And even if someone else gets there a week later, they'll be permanently one week behind the curve (until the AI hits some physical limit, I suppose).
But that's all just sci-fi worldbuilding.
> they'll be permanently one week behind the curve
What if the competitor's architecture can produce tokens twice as fast? What if the competitor secures a one-month exclusivity deal on Nvidia's next generation?
A month with a superintelligence at your disposal could be quite impactful, especially if you're willing to break the law or normal operating decorum to protect what you have. A superintelligence, wielded that way, could destroy your competitors in a great many ways, ranging from the relatively benign route of simply outcompeting them to exploiting them and tearing them apart from the inside.
A genuine superintelligence is a very, very scary thing to have under the control of one person or organisation.
If I interpret "a machine superintelligence" as "a classroom of 300-IQ humans," I'm not really sure how this is true. You still have material and energy constraints, and you can't think your way out of those.
Assuming it can't super-hack all computer systems and cripple competing SI incubation to extend its lead indefinitely.
The assumption would be that, in whatever lead time it has, the superintelligence builds on even a small head start and closes off any paths a later-arriving superintelligence could take to interfere with its goals, which naturally includes stopping competing SIs from growing powerful enough to undermine it.
So, assuming the superintelligence has goals and works towards them, it will initially try to solidify its own power. Iterating on that small lead, assuming it's the smartest superintelligence [1], should be enough to win. The scary part is that, absent guardrails [2], it is going to be as ruthless as possible in achieving those goals. That does not necessarily mean it will appear ruthless, just as ruthless as it judges optimal.
1. Being so smart, one of its chores would have been reinvesting in making itself smarter than the competition, and being smarter than its makers, it has a good chance of actually carrying out those self-improvements.
2. In the internal-balancing-of-goals sense, not the don't-feed-the-mogwai-after-midnight sense.
It's a tenet of the eschatology of the singularity ideology that developed on online forums over the last few decades.
The viewpoint is baked into those assumptions and boils down to the power of exponentials and poor application of game theory.
That's just what they told the gullible investors to get money.