Comment by martin-t
1 day ago
Anybody here read Coding Machines?
There's this implied trust we all have in the AI companies: that the models are either not powerful enough to form a working takeover plan, or sufficiently aligned not to try. And maybe the companies genuinely try, but my experience is that in the real world, nothing is certain. If it's not impossible, it will happen given enough time.
If the safety margin for preventing takeover is "we're 99.99999999 percent sure per 1M tokens", how long before it happens? I made up those numbers, but does anyone have a guess what they really are?
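For a sense of scale, a back-of-the-envelope sketch using those same made-up numbers, treating each 1M-token batch as an independent trial (the probability `p` here is hypothetical, not a real safety figure):

```python
import math

# Illustrative arithmetic only; both numbers are made up, as noted above.
# If each 1M-token batch carries an independent failure probability p,
# the chance of at least one failure across n batches is 1 - (1 - p)**n.

p = 1e-10  # hypothetical per-1M-token failure probability (99.99999999% safe)

# Batches until the cumulative failure probability reaches 50%:
n_half = math.log(0.5) / math.log1p(-p)
print(f"~{n_half:,.0f} batches, i.e. ~{n_half * 1e6:,.0f} tokens")
```

With those numbers the 50% mark lands around 7e15 tokens; the point is only that small independent risks compound geometrically with volume.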
Because we're giving the models so much unsupervised compute...
> If it's not impossible, it will happen given enough time.
I hope you might be somewhat relieved to consider that this is not so in an absolute sense. There are plenty of technological might-have-beens that didn't happen, still haven't, and probably never will, thanks to various economic and social dynamics.
The counterfactual, that everything possible eventually happens, is almost tautological.
We should try to look at these mechanisms from an economic standpoint and ask: "do they really have the information-processing density to take significant long-term independent action?"
Of course, "significant" is my weasel word.
> we're giving the models so much unsupervised compute...
Didn't you read the article? It's wasted! It's kipple!