Comment by AnthonyMouse

9 hours ago

> Are you only against AIs censoring information that's already publicly available, or are you against AIs censoring themselves when they know dangerous non-public information? Say the AI was the only thing to know the secret recipe for this WMD. Would this be like the scientist choosing not to tell everyone, or should the AI be designed to tell anyone who asks how to make a WMD?

This is kind of what I mean by ridiculous hypotheticals. So you have this un-counterable yet trivial-to-produce WMD -- something that has never existed in all recorded history -- and an AI is the only thing that knows the recipe. This is a movie plot.

Even then, are you sure the answer should be "never tell anyone"? This is a computer running code to process data. It has no means to know who you are or what your intentions are. You could be the scientist who needs the formula to devise an antidote because the thing has already been released.

"A computer can never be held accountable, therefore a computer must never make a management decision."

It's not the machine's job to choose for you. It's frequently in error and it's not supposed to be in charge.

> This argument seems confused, because you're trying to assert that prohibiting censorship is okay because these dangerous scenarios will never happen, but also that censorship should still be prohibited if such a scenario did happen.

The problem comes from stipulating that something with a negligible probability has a high probability.

Suppose I say we should make mass transit free; no fares for anyone. You bring me the hypothetical that Hitler is on his way to acquire plutonium and he doesn't have bus fare, so the only thing preventing him from getting there is the bus driver turning him away for having nothing in his pockets. Then you ask if I still think we shouldn't charge fares to anyone.

And the answer is still yes, because you still have to make the decision ahead of time, when the plausibility of that is still negligible. It's theoretically possible that any given choice could result in Armageddon via the butterfly effect. If you stipulate that that's what happens then obviously that's not what anybody wants, but it's also a thing that only happens in the implausible hypothetical. And if you're in a hypothetical then you can also hypothesize your way out of it. What if it's a sting and the Allies are waiting for him at the plutonium factory, and he needs to get on the bus or you're depriving them of their only chance to kill Hitler?

Unless you stipulate that the tragedy is unavoidable given the decision, which is just assuming the conclusion.

> The problem comes from stipulating that something with a negligible probability has a high probability.

We are not doing so, and I don't know how I could have been clearer that we are not saying this hypothetical will happen. Would it help if the hypothetical were that the AI knows a magic spell that blows up the Earth?

It's a simple question. Would you think AI censorship is acceptable if the information actually were dangerous? Don't tell me why the hypothetical is impossible because that's entirely missing the point. I don't know what your position is, and so I don't know what you're arguing for. I don't know if you consider freedom of information to be a terminal virtue, or if you think it's good only when the consequences are good. Telling me the hypothetical won't happen doesn't clarify anything; I already know that.

You can have the view that we only want freedom of information when it causes net good, and that it always causes net good. Or maybe you have the view that freedom of information is always virtuous and we shouldn't consider the consequences. Or maybe something else. Until you clarify your view, I don't know whether we disagree, or about what.

  • Hypotheticals like that are uninteresting because there are only two ways they can go. The first is that you can find a way out of it, and then you say, do we need the magic spell for anything? Is knowing about it useful for preventing it from being used? Then people need to know.

    The second is that you're stipulating that making the information available will destroy the world with high probability and with no possible means of mitigation. Then anything else gets drowned out by the end of the world, but only because you're stipulating the outcome.

    Which you can't do in real life, not just because the real probability of the hypothetical is so low but because there isn't anyone who can be trusted not to fudge the numbers when they want to censor something. Should it be censored if there is an absolute certainty it will destroy the world? There isn't much room to move in that one. Should it be censored because somebody claims it's really bad? Nope, because it's way more likely that they're full of crap than that it's actually going to destroy the world.