
Comment by sfRattan

6 days ago

> The author has some valid points, but dismissing this entire class of arguments so flippantly is intellectually lazy.

Agree 100%. Programmers generally have a poor understanding of the law, especially common law as it applies in America (the country whose legal system most software licenses, and copyleft licenses in particular, were written to integrate with).

American common law is an institution and a continuity of practice dating back centuries. Everything written by jurists within that tradition, while highly technical, is nonetheless targeted at human readers who are expected to apply common sense and good faith in reading. Where programmers declare something in law insufficiently specified or technically a loophole, the answer is usually: this was written for humans to interpret using human reason, not for computers to compile using limited, literal algorithms.

Codes of law are not computer code and do not behave like computer code.

And following the latest AI boom, here is what the bust will look like:

1. Corporations and the state use AI models and tools in a collective attempt to obfuscate, diffuse, and avoid accountability. This responsibility two-step is happening now.

2. When bad things happen (e.g. a self-driving car kills someone, a predictive algorithm produces discriminatory policy, vibe coding leads to data leaks or cyberattacks), litigation will follow.

3. The judges overseeing the litigation will not accept that AI has somehow magically diffused and obfuscated all liability out of existence. They will look at the parties at hand, look at relevant precedents, pick out accountable humans, and fine them or---if the bad is bad enough---throw them in cages.

4. Other companies will then look at the fines and the caged humans, roll back their AI tools in a panic, and re-discover the humans they need to hold accountable, filling those humans back in on all the details they had pawned off on AI tools.

The AI tools will survive, but in a role circumscribed by human accountability. This is how common law has worked for centuries. Most of the strange technicalities of our legal system are in fact immune reactions to past attempts by humans to avoid accountability or exploit the system. The law may not be fast, but it will grow an immune response to AI tools, and life will go on.

I agreed with this comment until the second half, which is just one scenario: one that is contingent on many things happening in specific ways.