
Comment by adrian_b

1 day ago

All of known human history is evidence against the possibility of "beneficial surveillance" existing.

This is a utopian idea of the same kind as theoretical communism.

Communist theory argued that, because the owners of assets can use their power in nefarious ways against others, this could easily be solved by dispossessing them and converting all such private assets into common property owned by all people. All assets would then be used for the welfare of the entire society.

The fallacy of this theory was that when something belongs to everyone, it is impossible for everyone to manage it directly. So there must be a layer of relatively few middlemen who manage the assets on everyone's behalf.

In every communist society, instead of managing the assets for the common good, those middlemen succeeded in becoming the de facto owners of the assets, despite not being their de jure owners. They then managed the assets according to their personal interests, like any capitalist billionaire.

The only difference was that the communist elite was much less secure in its position than rich capitalists: not being the legal owner of a company or of other such valuable assets meant they could lose their privileges at any time, if their boss in the communist party hierarchy no longer liked them and demoted them to an inferior position.

This hierarchical dependence ensured that the communist elite had to obey, more or less, whatever the supreme leader ordered. Apart from this obedience, there was no real difference between a communist economy and the extreme stage of monopolistic capitalism, despite what the naive theory of communism hoped to achieve by nationalizing everything of value.

Similarly, I see no hope for a theory of "beneficial surveillance". Such surveillance could exist only if it were controlled by well-intentioned people. But that will never happen: as with communism in practice, some of the worst people will be the ones who succeed in controlling it.

I'm intrigued by Michael Nielsen's thoughts on cryptography applied to synthetic biology risk.

I'll quote his notes on using cryptography to maintain a balance of privacy and safety:

>To help address such concerns, it's been proposed that synthesis screening should use cryptographic ideas to help preserve customer privacy, while still ensuring safety. Let me mention three such ideas, some of which have already been implemented in a prototype system built by the SecureDNA collaboration. The first idea is that the screening itself should be done with an encrypted version of the sequence data, to help preserve customer privacy. The synthesis step would still require the raw sequence data, but such encryption would at least prevent centralized screening services from learning the sequence being synthesized. Second, as mentioned above, screening for exact matches and homologous sequences won't catch everything, especially as de novo design becomes possible. So it's also been proposed that an encrypted form of the sequence data should be logged and kept after synthesis. That data could not routinely be read by the synthesis company or screening service. However, suppose some later event occurs – say, some new pandemic agent is found in the wild. Then it should be possible to check whether that agent matches anything in the encrypted synthesis records. In the event such a check was needed, a third party authority could provide a kind of "search warrant" (a private key of some sort) to decrypt the data, and identify the responsible party. The third idea is to use cryptography to ensure the screening list remains private, and can even be updated privately by trusted third parties, without anyone else learning the contents of the update. Taken together, these three ideas would help preserve the balance of power between customers and the synthesis companies, while contributing to public safety and enabling imaginative new synthesis work to be done.

>Indeed, cryptographers are so clever that they've devised many techniques you might a priori deem impossible, or not even consider at all. Ideas like zero knowledge proofs, homomorphic encryption, and secret sharing are remarkable. As software (and AI) eats the world, cryptography will increasingly define the boundaries of law.
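To make the first and third ideas above concrete, here is a minimal toy sketch in Python. It blinds both the customer's sequence windows and the hazard list with a keyed hash (HMAC-SHA256), so matching can happen without either side exchanging raw data. This is a deliberate simplification: the real SecureDNA protocol uses oblivious/encrypted matching rather than a single shared key, the window sizes and hazard signature here are made up, and key management (the hard part) is waved away.

```python
import hashlib
import hmac

K = 6  # toy window size; real screening uses windows of roughly 30-50 bases


def blind(key: bytes, fragment: str) -> str:
    # Keyed hash (HMAC-SHA256): without `key`, the fragment is unreadable.
    return hmac.new(key, fragment.encode(), hashlib.sha256).hexdigest()


def windows(seq: str, k: int = K) -> set:
    # All overlapping length-k windows of the ordered sequence.
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}


key = b"screening-service-key"  # illustrative; in practice, securely managed

# Idea 3: the hazard list is stored and compared only in blinded form.
blinded_hazards = {blind(key, "TTGACA")}  # stand-in for a real pathogen signature


def screen(order: str) -> bool:
    # Idea 1: the customer submits blinded windows, not the raw sequence.
    submitted = {blind(key, w) for w in windows(order)}
    return bool(submitted & blinded_hazards)


print(screen("AAATTGACAGGG"))  # True  -> order flagged for review
print(screen("AAACCCGGGTTT"))  # False -> order passes
```

Note the obvious weakness of this toy: whoever holds the shared key can brute-force short windows, which is exactly why the real proposals reach for heavier machinery like oblivious PRFs and homomorphic encryption.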

You mentioned communism, and I'll add to that since I've lived under communism. It's a great idea in theory that doesn't work in practice because of human limitations.

It doesn't work because of A) the reason you gave: government officials favor themselves, and B) the knowledge problem: the economy is far too complex for a small group of officials to plan what everyone else should be doing.

An interesting idea emerging now is AI-moderated socialism. If A) the AI can be trusted not to favor itself, and B) the AI has perfect knowledge of each human (our needs, what we're good at, etc.), I can imagine AI-moderated socialism working.

An ideal future I can imagine is a world with many AI-moderated polities, with humans free to move between them. The polities would share some global standards on safety, trade, and conflict resolution, but would otherwise have differing policies, so humans have the freedom to find the one they most prefer.