Comment by rqtwteye
2 years ago
I always wonder: "safe" for whom? If the current economic system continues, we may end up with a lot of people out of jobs. Add to that improvements in robotics, and many people will end up with nothing to contribute to the economy. I am not getting the impression that the people who push for AI safety are thinking about this. It seems they are most worried about not losing their position of privilege.
The Industrial Revolution devalued manual labor. Sure, new jobs were created, but on the whole this looked like a shift to knowledge work and away from manual labor.
Now AI is beginning to devalue knowledge work. Although the limits of current technology are obvious in many cases, AI is already doing a pretty good job at replacing illustrators and copywriters. It will only get better.
Who owns the value created by AI productivity? Ultimately it will be shareholders and VCs. It’s no surprise that the loudest voices in techno-optimism are VCs. In this new world they win.
Having said all this, I think Ilya’s concerns are more of the existential type.
> The Industrial Revolution devalued manual labor.
Only some types. It caused a great number of people to be employed in manual labour, in the new factories. The shift to knowledge work came much later as factory work (and farming) became increasingly automated.
> In this new world they win.
If history is any indication, not really. There's an obvious dialectical nature to this, where technological advance initially delivers returns to its beneficiaries, but they usually end up being swallowed by their own creation. The industrial revolution didn't devalue labor; it empowered labor to act collectively for the first time, laying the groundwork for what ultimately replaced the pre-industrial powers that were.
It is a strange thing. We are nowhere near "super intelligence", yet the priority is safety.
If the Wright brothers had focused on safety, I am not sure they would have flown very far.
The consequences of actual AI are far-reaching compared to a plane crash.
No they are not.
Not unless you connect a machine gun to your neural net.
Otherwise we are talking about a chat bot. Yes, if there is no safety it will say something racist, or write a virus for you that you would otherwise have had to search 4chan for.
None of this is any more dangerous than what you can find on the far corners of the internet.
And it will not bring the end of the world.
AIUI, with superalignment it merely means "the AI does what the humans instructing it want it to do". It's a different kind of safety than the built-in censoring that most LLMs have.