Comment by AlexandrB

2 years ago

Fictional depictions of AI risk are like thought experiments. They have to assume that the technology reaches a certain level of capability and develops in a certain direction for the events of the story to be possible. Neither of these assumptions is a given. For example, we've also had many sci-fi stories featuring flying taxis and the like, but there's no point debating "flying taxi risk" when it seems flying cars are simply not going to happen for practical reasons.

So sure, it's possible that we'll have to reckon with scenarios like those in Neuromancer, but it's more likely that reality will be far more mundane.

Flying cars are a really bad example... We have them: they're called airplanes, and airplanes are regulated to hell and back twice. We debate the risks around airplanes all the time when making regulations! The "flying cars" you're talking about are just a different form of airplane, and they don't exist because we don't want to hand most people their own cruise missile.

So, please, come up with a better analogy, because the one you used failed so badly that it negated the point you were trying to make.