
Comment by trod1234

21 days ago

Regulation with active enforcement is the only civil way.

The whole point of regulation is to step in when the profit motive drives companies toward ends that are destructive for the majority of society. Absent regulation, companies are legally obligated to seek profit above all else.

> Regulation with active enforcement is the only civil way.

What regulation? What enforcement?

These terms are useless without details. Are we going to fine LLM providers every time their output is wrong? That’s the kind of proposition that sounds good as a passing angry comment but obviously has zero chance of becoming a real regulation.

Any country that instituted a regulation like that would see all of the LLM advancements and research instantly leave and move to other countries. People who use LLMs would sign up for VPNs and carry on with their lives.

  • Regulations exist to override the profit motive when corporations are unable to police themselves.

    Enforcement ensures accountability.

    Fines don't do much in a fiat money-printing environment.

    Enforcement is the kind of accountability that stakeholders actually pay attention to.

    Something appropriate would be this: if AI is used in a safety-critical or life-sustaining environment and harm or loss results, those who chose to use it are guilty until they prove they are innocent, and not just civilly but also criminally. The person and the decision to use it must be documented ahead of time. I think that would be sufficient.

    > Any country that instituted a regulation like that would see all of the LLM advancements and research instantly leave and move to other countries.

    This is a fallacy. It's a spectrum: research would still occur, tempered by law and accountability, instead of the wild west where it's much more profitable to destroy everything through chaos. Chaos is quite profitable until it spreads systemically and ends everything.

    Integrating AI to the point where it can affect the operation of nuclear power plants through interference (perceptual or otherwise) is just asking for a short path to extinction.

    It's quite reasonable that the needs of national security trump private businesses making profit in a destructive way.

    • > Something appropriate would be this: if AI is used in a safety-critical or life-sustaining environment and harm or loss results, those who chose to use it are guilty until they prove they are innocent, and not just civilly but also criminally

      Would this guilty-until-proven-innocent rule also apply to non-ML code and manual decisions? If not, it feels like it arbitrarily deters certain approaches, potentially at the cost of safety ("sure, this CNN blows traditional methods out of the water in terms of accuracy, but the legal risk isn't worth it").

      In most cases I think it'd make more sense to have fines and incentives for above-average and below-average incident rates (and liability for negligence in the worst cases), and then let methods win or fail on their own merits.


  • A very simple example would be a mandatory mechanism for correcting mistakes in prebaked LLM outputs, and an ability to opt out of things like Gemini AI Overview on pages about you. Regulation isn't all or nothing; viewing it like that is reductive.