Comment by Ajedi32, 2 years ago

To me, these arguments bear a strong resemblance to those of the free software movement.

> There are areas in society where trustworthiness is of paramount importance, even more than usual. Doctors, lawyers, accountants…these are all trusted agents. They need extraordinary access to our information and ourselves to do their jobs, and so they have additional legal responsibilities to act in our best interests. They have fiduciary responsibility to their clients.

> We need the same sort of thing for our data.

IMO this is equally applicable to "non-AI" software. Modern corporate-controlled software does not have "fiduciary responsibility to [its] clients", and you can see the results everywhere. Even in physical products that consumers presumably own, the software that controls those products frequently acts _against_ the interests of the device owner whenever those interests conflict with the interests of the company that wrote the software.

Schneier says:

> It’s not even an open-source model that the public is free to examine and modify.

But, to me at least, that's exactly what the following passage describes:

> A public model is a model built by the public for the public. It requires political accountability, not just market accountability. This means openness and transparency paired with a responsiveness to public demands. It should also be available for anyone to build on top of. This means universal access. And a foundation for a free market in AI innovations.

Two of the Free Software Foundation's four freedoms encapsulate this goal quite nicely:

> Freedom 1: The freedom to study how the program works, and change it so it does your computing as you wish. Access to the source code is a precondition for this.

> [...]

> Freedom 3: The freedom to distribute copies of your modified versions to others. By doing this you can give the whole community a chance to benefit from your changes. Access to the source code is a precondition for this.

The problem is that, as it stands, most people do not have practical access to freedoms 1 and 3. As a result, most software you interact with, even software running on devices you supposedly own, does not "do your computing as you wish"; it does your computing as the software's authors wish.

Perhaps AI will exacerbate the problem because of the psychological differences Schneier outlines in the article, but the problem itself is hardly new.

Reply:

Strongly agree, and I am sure the resemblance is conscious. Unfortunately, most people don't know or care about software freedom, so for a general audience Schneier has to make these points from scratch. But yeah, it would be nice if this push for regulation happened in a way that also strengthened software freedom.