Comment by sschueller

2 years ago

There is no way to prevent AI research or to make it safe through government oversight, because the rest of the world has places that don't care.

What does work is passing laws that prohibit certain kinds of automation, such as automated insurance claims or life-and-death decisions. These laws are needed even without AI, since automation is already doing such things to a concerning degree, like banning people due to a mistake with no recourse.

Is the White House going to ban the use of AI in decision-making when dropping a bomb?

>not permit certain automation such as insurance claims

I don't see any problem with automation that makes mistakes; humans do too. The real problem is that it's often an impenetrable wall with no way to protest or appeal, and nobody is held accountable while victims' lives are ruined. So if any law is passed in this field, it should not be about banning AI but about mandatory compensation for those affected by errors. Facing monetary losses, insurers and banks will fix themselves.

  • Agreed,

    This doesn't just apply to insurance, etc, of course. Inaccessible support and the inability to appeal automated decisions for products we use are widespread and inexcusable.

    This shouldn't just apply to products you pay for, either. Products like Facebook and Gmail shouldn't get away with inaccessible support just because they are "free" when we all know they're still making plenty of money off us.

Just because the rest of the world has lawless areas doesn't mean we shouldn't pass laws. If you do something that threatens our national security, or various other things, we can extradite and try you in court.

They're not suggesting banning anything; they're requiring that you make it safe and prove how you did so. That's not unreasonable.

[0] https://en.m.wikipedia.org/wiki/Extradition_law_in_the_Unite... [1] https://en.m.wikipedia.org/wiki/Personal_jurisdiction_over_i...

  • Right, but in some areas of AI regulation, the existence of other countries might undermine unilateral regulation.

    For example, imagine LLMs improve to the point where they can double programmer productivity while lowering bug counts. If Country A decides to Protect Tech Jobs by banning such LLMs but Country B doesn't, all the tech jobs could well move to Country B, where programmers are twice as productive.

I mean, isn't automating important decisions like insurance claims or life-and-death decisions a beneficial thing? Sure, the tech isn't ready yet, but I think even now AI overseen by a human who has the power to override the system would give people a better experience.