Comment by vostrocity
1 day ago
One idea I haven't seen much discussion on is "provably beneficial surveillance" [1], which builds off of Nick Bostrom's vulnerable world hypothesis. It seems like the best path forward.
>We can turn that conventional wisdom on its head, by reframing it as a question: is it possible to do surveillance and consequent policing in a way that is (a) compatible with or enhances liberal values, i.e., improving the welfare of all, except those undermining the common good; and also (b) sufficient to prevent catastrophic threats to society? I call this possibility Provably Beneficial Surveillance. It's a concept expanding on an old tradition of ideas, including search warrants, due process, habeas corpus, and Madisonian separation of powers, all of which help improve the balance of power between institutions and individuals. In particular, all those ideas help enable surveillance in service of safety, while also taking steps to prevent abuses of that power.
Salt Typhoon is the refutation of this. Building and enforcing a "lawful intercept" system formally codifies an exploit chain for your adversaries to use. If you don't want your politicians and dignitaries being blackmailed by foreign opposition, don't even consider this type of system for widespread deployment.
Let America be the canary in this particularly toxic coal mine, and refuse similar systems wherever you are locally.
No discussion because it's a bad idea
Try a little harder. You got this
Nope. That's not how any of this is trending. Optimism is good for getting through tough times, sometimes, and it might help people sleep at night, but sleeping our way into technofascism won't make anything better for us or our children.
Did you have a better path forward?
I point to Michael Nielsen's commentary on the Vulnerable World Hypothesis [1] again:
>do you think inexpensive, easy-to-follow recipes for building catastrophic technologies will one day be found, given sufficient understanding of science and technology?
With every increase in technology and science, the probability increases, and as a result, society will necessitate ever more surveillance. The reason provably beneficial surveillance is important to discuss is that we need a careful middle path between totalitarianism and outright catastrophe. It is the opposite of "sleeping our way" into technofascism.
1. https://michaelnotebook.com/vwh/index.html
I disagree on a fundamental level. Crime is down. It's been trending down since the 90s, and as far as the common person is concerned, the 90s to the early 2000s ushered in more technological change than the century prior.
There's no need for mass surveillance and there never will be.
"Those who would give up essential Liberty, to purchase a little temporary Safety, deserve neither Liberty nor Safety.", spoken by someone who knew better and just so happened to help found this country.
"Provably beneficial surveillance" is the wrong framing.
What you're trying to say is that the harms of surveillance are diminished when the underlying power is distributed enough that cops have to justify themselves in order to access the surveillance powers. That's why we have a 4th Amendment that demands cops get warrants before doing searches and seizures. Think of the difference between a store with a security camera that records to a local network DVR, and the same store but they bought some Ring cameras and send it to Amazon's servers. The former is the necessary amount of surveillance to prove a crime happened, the latter is just enabling abuse.
I think it is a new framing that merits discussion.
An example case is the school shooter in Canada whom OpenAI knew about but chose not to report to authorities (presumably because OpenAI wants to balance safety and privacy).
OpenAI (or any other big tech) has extreme concentration of power and knows more about its users than any government authority.
At what point should OpenAI alert authorities?
I would much rather have "provably beneficial surveillance" than OpenAI enforcing an arbitrary black-box policy, or a government authority having a direct backdoor to all OpenAI data.
All of known human history is evidence against the possibility of "beneficial surveillance" existing.
This is a utopian idea of the same kind as theoretical communism.
Communist theory argued that because the owners of assets can use their power in nefarious ways against others, the problem could be solved by dispossessing them and transforming all such private assets into common property owned by all people. Then all assets would be used for the welfare of the entire society.
The fallacy of this theory was that when something belongs to all people, it is impossible for all people to manage it directly. So there must be a layer of relatively few middlemen who manage the assets directly.
In every communist society, instead of managing the assets for the common good, those middlemen succeeded in becoming the de facto owners of the assets, despite not being their owners de jure. They then managed the assets according to their personal interests, like any capitalist billionaire.
The only difference was that the communist elite was much less secure in its position than rich capitalists: not being the legal owners of a company or other valuable assets meant they could lose their privileges at any time, if their boss in the communist party hierarchy no longer liked them and demoted them to an inferior position.
This hierarchical dependence ensured that the communist elite had to obey more or less whatever the supreme leader ordered. Except for this obedience, there was no real difference between a communist economy and the extreme stage of monopolistic capitalism, despite what the naive theory of communism hoped to achieve by nationalizing everything of value.
Similarly, I see no hope for a theory of "beneficial surveillance". Such surveillance could exist only if it were controlled by well-intentioned people. But that will never happen; as in practical communism, some of the worst people will be the ones who succeed in controlling it.
I'm intrigued by Michael Nielsen's thoughts on cryptography applied to synthetic biology risk.
I'll quote his notes on using cryptography to maintain a balance of privacy and safety:
>To help address such concerns, it's been proposed that synthesis screening should use cryptographic ideas to help preserve customer privacy, while still ensuring safety. Let me mention three such ideas, some of which have already been implemented in a prototype system built by the SecureDNA collaboration. The first idea is that the screening itself should be done with an encrypted version of the sequence data, to help preserve customer privacy. The synthesis step would still require the raw sequence data, but such encryption would at least prevent centralized screening services from learning the sequence being synthesized. Second, as mentioned above, screening for exact matches and homologous sequences won't catch everything, especially as de novo design becomes possible. So it's also been proposed that an encrypted form of the sequence data should be logged and kept after synthesis. That data could not routinely be read by the synthesis company or screening service. However, suppose some later event occurs – say, some new pandemic agent is found in the wild. Then it should be possible to check whether that agent matches anything in the encrypted synthesis records. In the event such a check was needed, a third party authority could provide a kind of "search warrant" (a private key of some sort) to decrypt the data, and identify the responsible party. The third idea is to use cryptography to ensure the screening list remains private, and can even be updated privately by trusted third parties, without anyone else learning the contents of the update. Taken together, these three ideas would help preserve the balance of power between customers and the synthesis companies, while contributing to public safety and enabling imaginative new synthesis work to be done.
>Indeed, cryptographers are so clever that they've devised many techniques you might a priori deem impossible, or not even consider at all. Ideas like zero knowledge proofs, homomorphic encryption, and secret sharing are remarkable. As software (and AI) eats the world, cryptography will increasingly define the boundaries of law.
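The "search warrant" idea in the quote above, a key that only a third-party authority can use to unlock encrypted synthesis logs, can be sketched with secret sharing. Below is a minimal, purely illustrative Python sketch using n-of-n XOR secret sharing: the log entry is encrypted, and the key is split so that *all* authority shares are needed to decrypt. All names are my own invention, and a real system (such as SecureDNA's) would use proper authenticated encryption and threshold schemes like Shamir's, not this toy construction.

```python
import os
import hashlib

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def keystream(key: bytes, n: int) -> bytes:
    # Hash-based stream for illustration only; not a real AEAD cipher.
    out, counter = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def escrow_log(entry: bytes, n_parties: int):
    """Encrypt a log entry and split the key into n-of-n XOR shares."""
    key = os.urandom(32)
    ciphertext = xor_bytes(entry, keystream(key, len(entry)))
    # n-1 random shares; the last share is chosen so all n XOR back to the key.
    shares = [os.urandom(32) for _ in range(n_parties - 1)]
    last = key
    for s in shares:
        last = xor_bytes(last, s)
    shares.append(last)
    return ciphertext, shares

def warrant_decrypt(ciphertext: bytes, shares) -> bytes:
    """Reconstruct the key from ALL shares ('warrant granted') and decrypt."""
    key = bytes(32)
    for s in shares:
        key = xor_bytes(key, s)
    return xor_bytes(ciphertext, keystream(key, len(ciphertext)))

record = b"ATGC-synthesis-record"
ct, shares = escrow_log(record, 3)
assert warrant_decrypt(ct, shares) == record        # all parties cooperate
assert warrant_decrypt(ct, shares[:-1]) != record   # any missing share fails
```

The point of the design is the last assertion: no single party, not the synthesis company, not the screening service, not one authority alone, holds enough material to read the logs, which is the balance-of-power property the quoted passage is after.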
You mentioned communism, and I'll add to that since I've lived under communism. It's a great idea in theory that doesn't work in practice because of human limitations.
It doesn't work, because of A) the reason you gave: government officials favor themselves, and B) the knowledge problem: the economy is far too complex for a small group of officials to plan what everyone else should be doing.
An interesting idea emerging now is AI-moderated socialism. If A) AI can be trusted not to favor itself, and B) AI has perfect knowledge of each human (our needs, what we're good at, etc.), I can imagine AI-moderated socialism working.
An ideal future I can imagine is a world with many AI-moderated polities, and humans have freedom to move between them. AI-moderated polities share some global standards on safety, trade, and conflict resolution but otherwise have differing policies so humans have the freedom to find the one that they most prefer.