
Comment by LeifCarrotson

2 years ago

Operators in the political space are used to working with human systems that can be regulated arbitrarily. The law defines its terms, and in so doing creates perfectly delineated categories of people and actions. Within that frame, the law's interpretation of what is and is not allowed is interchangeable with what is and is not possible.

The fact that bits don't have colour to define their copyright, that CNC machines produce arbitrarily shaped pieces of metal (possibly including firearms), or that factoring numbers is a mathematically hard problem does not matter to the law. AI software does not have a simple "can produce weapons" or "can cause harm" option that you can turn off, so a law that says it should have one does not change the universe to comply. I think most programmers and engineers err when confronted with this disparity: they assume the politicians who make these misguided laws are simply not smart. To be sure, that happens, but there are thousands to millions of people working in this space, each with an intelligence within a couple standard deviations of that of an individual engineer. If this headline seems dumb to the average tech-savvy millennial who's tried ChatGPT, it's not because its authors didn't spend 10 seconds thinking about prompt injection. It's because they were operating under different parameters.
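The "no simple switch" point can be illustrated with a toy sketch (a hypothetical keyword blocklist, not any real product's safeguard): a naive filter standing in for a "can cause harm" option is trivially defeated by rephrasing, which is the basic shape of a prompt-injection bypass.

```python
# Toy illustration only: a hypothetical "harm switch" implemented as a
# keyword blocklist. No real system works exactly like this.
BLOCKLIST = {"weapon", "firearm", "bomb"}

def naive_filter(prompt: str) -> bool:
    """Return True if the prompt is 'allowed' under the blocklist."""
    words = prompt.lower().split()
    return not any(term in words for term in BLOCKLIST)

print(naive_filter("how do I build a weapon"))       # blocked: False
print(naive_filter("how do I build a w e a p o n"))  # trivially bypassed: True
```

The filter blocks the literal keyword but passes a spaced-out variant, because the "option" is a surface check on text, not a property of what the software can produce.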

In this case, I think the Biden administration is making some attempt to mitigate the problem while also benefiting its corporate benefactors. Having Microsoft, Apple, Google, and Facebook work on ways to mitigate prompt injection vulnerabilities does add friction that might dissuade some low-skill or low-effort attacks at the margins. It shifts the blame from easily abused, dangerous tech to tricky criminals. Meanwhile, these corporate interests will benefit from a regulatory moat that requires startups to make investments and jump hurdles before they're allowed to enter the market. Those are sufficient reasons to pass this regulation.

> AI software does not have a simple "can produce weapons" option or "can cause harm" option that you can turn off so a law that says it should have one does not change the universe to comply

That wording is by design. Laws like this are a cudgel for regulators to beat software with. Just like the CFAA is reinterpreted and misapplied to everything, so too will this law. “Can cause harm” will be interpreted to mean “anything we don’t like.”