Comment by nradov

10 hours ago

Some of you have been watching too many sci-fi movies. The whole notion of "AI safety regulations" is so silly and misguided. If a safety-critical system is connected to public networks with an exposed API or any security vulnerabilities, then there is a safety risk regardless of whether AI is being used or not. This is exactly why nuclear weapon control systems are air-gapped and have physical interlocks.

The existence of network-connected robots or drones isn't inherently a security vulnerability. AI control of the robots specifically is a problem in the same way that piping in instructions from /dev/urandom would be, except worse: AI output isn't purely random, so it is more likely to direct the machine to cause actual harm.
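To make the analogy concrete, here is a minimal sketch (the `ALLOWED` set and the `execute`/`gated_execute` functions are invented for illustration, not any real robot API) of why the source of an untrusted instruction stream matters less than the gate in front of the actuator, and why well-formed model output gets through such a gate far more often than raw noise:

```python
import os

ALLOWED = {"stop", "home", "open_gripper"}  # explicit safety envelope

def execute(command: str) -> None:
    # Stand-in for a real actuator call; invented for illustration.
    print(f"executing {command!r}")

def gated_execute(command: str) -> None:
    """Refuse anything outside the safety envelope, whatever the source."""
    if command in ALLOWED:
        execute(command)
    else:
        print(f"rejected {command!r}")

# Source 1: /dev/urandom-style noise almost never parses into a valid command.
noise = os.urandom(16).decode("ascii", errors="ignore").strip()
gated_execute(noise)

# Source 2: a language model emits well-formed commands, so far more of its
# output reaches the actuator, which is why it's the more dangerous stream.
model_output = "open_gripper"  # stand-in for an LLM completion
gated_execute(model_output)
```

The gate itself is software-agnostic; the two sources differ only in how often plausible, and therefore executable, commands arrive at its input.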

Are you saying you're opposed to letting AI perform physical labor, or that you're opposed to requiring safeguards that allow humans to physically shut it off?

  • I am opposed to regulating any algorithms, including AI/LLMs. We can certainly have safety regulations for equipment with the potential to cause physical harm, such as industrial robots. But the regulation needs to focus on preventing injury to humans regardless of what software the equipment is running.

    • If that's the case, then it sounds like we largely agree with each other. There's no need for personal attacks implying that I'm somehow detached from reality.

      Ultimately, this isn't strictly an issue specific to genAI. If a "script roulette" program that downloaded and executed random GitHub Gist files somehow became popular, or if someone created a web app that allowed anyone to anonymously pilot a fleet of robots, I'd suggest that those be subject to exactly the same types of safety regulations I proposed (see the sketch at the end of this comment).

      Any such regulations should be generically written, not narrowly targeted at AI algorithms. I'd still call that "AI safety", because in practice it's a much more useful definition of AI safety than the one being pushed today. "Non-determinism safety" doesn't really have the same ring to it.
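      To illustrate the former, here is a minimal sketch of that hypothetical "script roulette" pattern against GitHub's real public-Gists endpoint; the fetch runs as written, and the step that turns it into a safety hazard is left commented out:

      ```python
      import json
      import urllib.request

      # Fetch the most recent public Gist and pull out its first file.
      with urllib.request.urlopen("https://api.github.com/gists/public") as resp:
          gist = json.load(resp)[0]
      first_file = next(iter(gist["files"].values()))
      with urllib.request.urlopen(first_file["raw_url"]) as raw:
          code = raw.read().decode("utf-8", errors="replace")

      print(code[:200])  # inspect today's roulette "prize"

      # The step that makes this a safety problem with zero AI involved:
      # import subprocess
      # subprocess.run(["python", "-c", code])  # executes untrusted code locally
      ```

      Nothing in it is AI-specific: the hazard is executing arbitrary untrusted instructions, which is exactly what a generically written regulation would have to target.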

> The whole notion of "AI safety regulations" is so silly and misguided.

Here are some real-world AI issues that have already happened due to the lack of AI safety:

- Being flagged "high risk" for parole in the US because you are black, while white defendants from rural areas were flagged "low risk" regardless of their crimes.

- Being denied an ICU bed because you are diabetic. (Thankfully that one never went into production.)

- Having your resume rejected because you are a woman.

- Having photos of black people classified as "Gorilla". (Google couldn't fix it at the time and just removed the label.)

- Radicalizing users by promoting extreme content for engagement.

- Denying prestigious scholarships to black students who live in black neighbourhoods.

- Helping someone who is clearly suicidal to commit suicide: explaining how to end their life and writing the suicide note for them.

... and the list is huge!