Comment by andrewmutz
2 years ago
Fortunately, these regulations don't seem too extreme. I hope it stays at this point and doesn't escalate to regulations that severely impact the development of AI technology.
Many people spend time talking about the lives that may be lost if we don't act to slow the progress of AI tech. There are just as many reasons to fear the lives lost if we do slow down the progress of AI tech (drug cures, scientific breakthroughs, etc).
> There are just as many reasons to fear the lives lost if we do slow down the progress of AI tech (drug cures, scientific breakthroughs, etc).
While I’m cautious about over-regulation, and I do think there’s a lot of upside potential, I think there’s an asymmetry between the potentially good outcomes and the potentially catastrophic ones.
What worries me is that there seem to be far more ways it can harm us than ways it will save us, and it’s not clear that the benefits would counteract the potential harms.
We could cure cancer and solve all of our energy problems, but this could all be nullified by runaway AGI or even more primitive forms of AI warfare.
I think a lot of caution is still warranted.
It's literally a First Amendment violation. Seems pretty extreme to me.
> Fortunately, these regulations don't seem too extreme. I hope it stays at this point and doesn't escalate to regulations that severely impact the development of AI technology.
The details matter. The parts being publicized refer to using AI assistance to do things that are already illegal. But what else is being restricted?
The weapons issue is becoming real. The difference between a crappy unguided Hamas rocket that hits something at random and a computer-vision-guided Javelin that can take out tanks is the guidance package. That guidance package is simpler than a smartphone and could be built from smartphone parts. Is that being discussed?