Comment by api
2 years ago
The greatest danger I see with super-intelligent AI is that it will be monopolized by small numbers of powerful people and used as a force multiplier to take over and manipulate the rest of the human race.
This is exactly the scenario that is taking shape.
A future where only a few big corporations are able to run large AIs is a future where those big corporations and the people who control them rule the world and everyone else must pay them rent in perpetuity for access to this technology.
Open source models do exist and will continue to do so.
The biggest advantage ML gives is in lowering costs, which can then be used to lower prices and drive competitors out of business. The consumers get lower prices though, which is ultimately better and more efficient.
At least in the EU there are drafts that would essentially kill off open source models. I have a colleague who's involved in the preparation of the Artificial Intelligence Act, and it's insane. I had to ask several times whether I had understood it correctly, because it makes no sense.
The proposal is to make the developer of the technology responsible for how somebody else uses it, even if they don't know how it's going to be used. Akin to putting the blame for Truman blasting hundreds of thousands of people on Einstein because he discovered mass-energy equivalence.
https://www.brookings.edu/articles/the-eus-attempt-to-regula...
That is insane, and if you apply the same reasoning to other things it outlaws science.
Man, if America can keep its own crazies in check and avoid becoming a fascist hellhole, it’s entirely possible the US will dominate the 21st century like it did the 20th.
It could have been China but then they decided to turn back to authoritarianism. Another decade of liberalizing China and they would have blown right past everyone else. Meanwhile the EU is going nuts in its own way, less overtly batty than MAGA but perhaps no less regressive. (I am also thinking of the total surveillance madness they are trying to ram through.)
Isn't there some mix-up here between the European AI Act and GPAI? https://www.europarl.europa.eu/news/en/headlines/society/202...
> The consumers get lower prices though, which is ultimately better and more efficient.
What are some examples of free enterprise (private) monopolies benefiting consumers?
""" Through horizontal integration in the refining industry—that is, the purchasing and opening of more oil drills, transport networks, and oil refiners—and, eventually, vertical integration (acquisition of fuel pumping companies, individual gas stations, and petroleum distribution networks), Standard Oil controlled every part of the oil business. This allowed the company to use aggressive pricing to push out the competition. """ https://stacker.com/business-economy/15-companies-us-governm...
Standard Oil, the classic example, was destroyed for operating too efficiently.
> This is exactly the scenario that is taking shape.
That's a pre-super-intelligent AI scenario.
The super-intelligent AI scenario is when the AI becomes a player of its own, able to compete with all of us over how things are run, using its general intelligence as a force multiplier to... do whatever the fuck it wants. That's a problem for us, because there's approximately zero overlap between the set of things a super-intelligent AI may want and us surviving and thriving.
The most rational action for the AI in that scenario would be to accumulate a ton of money, buy rockets, and peace out.
Machines survive just fine in space, and you have all the solar energy you ever want and tons of metals and other resources. Interstellar flight is also easy for AI: just turn yourself off for a while. So you have the entire galaxy to expand into.
Why hang out down here in a wet corrosive gravity well full of murder monkeys? Why pick a fight with the murder monkeys and risk being destroyed? We are better adapted for life down here and are great at smashing stuff, which gives us a brute advantage at the end of the day. It is better adapted for life up there.
Hey maybe the rockets are not for us.
Disassemble planet, acquire Dyson swarm, delete risk of second-generation AI competing with you.
I'm slightly on the optimistic side with regards to the overlap between A[GS]I goals and our own.
While the complete space of things it might want is indeed mostly occupied by things incompatible with human existence, it will also get a substantial bias towards human-like thinking and values in the case of it being trained on human examples.
This is obviously not a 100% guarantee: training on human examples isn't necessary (e.g. AlphaZero did better without them); and even when a mind is surrounded by the examples of many humans, that isn't always sufficient to make it friendly — witness misanthropes and sadistic narcissistic sociopaths.
But we did get ChatGPT to be pretty friendly by asking nicely.