Comment by pk-protect-ai
2 years ago
Right now, the "bad acting human" is, for example, Sam Altman, who frequently cries "Wolf!" about AI. He is trying to eliminate the competition, manipulate public opinion, and present himself as a good Samaritan. He is so successful in his endeavor, even without AI, that you must report to the US government about how you created and tested your model.
The greatest danger I see with super-intelligent AI is that it will be monopolized by small numbers of powerful people and used as a force multiplier to take over and manipulate the rest of the human race.
This is exactly the scenario that is taking shape.
A future where only a few big corporations are able to run large AIs is a future where those big corporations and the people who control them rule the world and everyone else must pay them rent in perpetuity for access to this technology.
Open source models do exist and will continue to do so.
The biggest advantage ML gives is in lowering costs, which can then be used to lower prices and drive competitors out of business. The consumers get lower prices though, which is ultimately better and more efficient.
At least in the EU there are some drafts that would essentially kill off open source models. I have a colleague who's involved in the preparation of the Artificial Intelligence Act, and it's insane. I had to ask several times whether I understood it correctly, because it makes no sense.
The proposal is to make the developer of the technology responsible for how somebody else uses it, even if they don't know how it's gonna be used. Akin to putting the blame for Truman blasting hundreds of thousands of people on Einstein because he discovered mass-energy equivalence.
https://www.brookings.edu/articles/the-eus-attempt-to-regula...
> The consumers get lower prices though, which is ultimately better and more efficient.
What are some examples of free enterprise (private) monopolies benefitting consumers?
> This is exactly the scenario that is taking shape.
That's a pre-super-intelligent AI scenario.
The super-intelligent AI scenario is when the AI becomes a player of its own, able to compete with all of us over how things are run, using its general intelligence as a force multiplier to... do whatever the fuck it wants, which is a problem for us, because there's approximately zero overlap between the set of things a super-intelligent AI may want, and us surviving and thriving.
The most rational action for the AI in that scenario would be to accumulate a ton of money, buy rockets, and peace out.
Machines survive just fine in space, and you have all the solar energy you ever want and tons of metals and other resources. Interstellar flight is also easy for AI: just turn yourself off for a while. So you have the entire galaxy to expand into.
Why hang out down here in a wet corrosive gravity well full of murder monkeys? Why pick a fight with the murder monkeys and risk being destroyed? We are better adapted for life down here and are great at smashing stuff, which gives us a brute advantage at the end of the day. It is better adapted for life up there.
Hey maybe the rockets are not for us.
I'm slightly on the optimistic side with regard to the overlap between A[GS]I goals and our own.
While the complete space of things it might want is indeed mostly occupied by things incompatible with human existence, it will also get a substantial bias towards human-like thinking and values in the case of it being trained on human examples.
This is obviously not a 100% guarantee: It isn't necessary for it to be trained on human examples (e.g. AlphaZero doing better without them); and even if it were necessary, the existence of both misanthropes and also sadistic narcissistic sociopaths is an example where the examples of many humans around them isn't sufficient to cause a mind to be friendly.
But we did get ChatGPT to be pretty friendly by asking nicely.
> He is trying to eliminate the competition,
Funny way of doing it, going around saying "you should regulate us, but don't regulate people smaller than us, and don't regulate open-source".
> you must report to the US government about how you created and tested your model.
If you're referring to the recent executive order: only when dual-use, meaning the following:
---
(k) The term “dual-use foundation model” means an AI model that is trained on broad data; generally uses self-supervision; contains at least tens of billions of parameters; is applicable across a wide range of contexts; and that exhibits, or could be easily modified to exhibit, high levels of performance at tasks that pose a serious risk to security, national economic security, national public health or safety, or any combination of those matters, such as by:
(i) substantially lowering the barrier of entry for non-experts to design, synthesize, acquire, or use chemical, biological, radiological, or nuclear (CBRN) weapons;
(ii) enabling powerful offensive cyber operations through automated vulnerability discovery and exploitation against a wide range of potential targets of cyber attacks; or
(iii) permitting the evasion of human control or oversight through means of deception or obfuscation.
- https://www.whitehouse.gov/briefing-room/presidential-action...
The "bad acting humans" are the assholes who use "AI" to create fake imagery to push certain (and likely false) narratives on various media.
The key thing here is that this is fundamentally no different from what has been happening since time immemorial; it just becomes easier with "AI" as part of the tooling.
Every piece of bullshit starts from the "bad acting human". Every single one. "AI" is just another new part of the same old process.
So let me tie this to a controversial topic: gun control.
If people agree that gun control would reduce the harm from guns, wouldn't this same logic apply to AI? Is it different?
I do not agree that gun control would reduce harm from guns.