
Comment by airgapstopgap

2 years ago

This is comically naive and literal. Have you read any moral philosophy before?

The point of his argument is not that under certain exceptional conditions surgeons should kill people to harvest their organs to save more lives, but precisely that any sort of formal pledge or personal obligation or non-utilitarian moral code can be betrayed if that leads to higher expected utility; and that it is prudent to lie about your true intentions and convictions if you think that is a precondition to achieving greater total utility. It is very much an argument in favor of a fundamentally untrustworthy and conspiratorial mindset, and not just specifically on the issue of saving lives – like the trolley problem, this is only an illustration. This applies to utility in general, and thus to all instrumental preconditions for creating it: to money, power, anything; therefore, any act of a Singerian should be suspected as part of an instrumentally useful scheme to secure a position to achieve more utility. This applies the most to pretenses of having integrity, valuing promises or even some kind of sentimental loyalty.

People who profess to believe in Singerian doctrine cannot be trusted to mean what they are saying, because you cannot know what sort of convoluted scheme to maximize total utility they have imagined that could necessitate deception in a particular case.

Again, his follower Sam Bankman-Fried has demonstrated this very clearly by defrauding his clients and appropriating the money for the purposes of the Effective Altruism and AI Alignment movements, and then by piling absurd lie upon absurd lie. Singer defends the teaching by claiming, contrary to his somewhat more sophisticated argument, that "honesty is the best policy"[0]. This is what he, in his article, describes as morality for children – that is, for the immature people who cannot be trusted to make consequentialist decisions and should instead be taught deontology.

> and he believes you could never recommend the action to others

Oh. Okay, so he says that it is the morally correct course of action logically following from the moral philosophy he has been advancing and propagandizing all his life, but [generic] you should not recommend it to others. How is that very claim not itself such a recommendation? What is the meaning of this sophistry?

Perhaps it serves to separate those who can practice the shallowest Straussian reading from those who are effectively children.

0. https://www.theguardian.com/education/2022/dec/24/giving-goo...

> It is very much an argument in favor of a fundamentally untrustworthy and conspiratorial mindset

That's a misreading of the paper and a misrepresentation of the position that Singer holds. It is also a misrepresentation of what utilitarians more generally think about practical ethics and the virtue of truth-telling. The following text is in my experience fairly representative of the views held by real world utilitarian philosophers: https://www.utilitarianism.net/guest-essays/virtues-for-real...

  • > That's a misreading of the paper and a misrepresentation of the position that Singer holds

    It's not. However, utilitarians are inevitably compelled to argue that it is, because their efficacy depends on it. This amounts to gaslighting about plainly obvious positions they have committed to paper, which is an act of violence in and of itself.

    > While it may seem that utilitarians should engage in norm-breaking instrumental harm, a closer analysis reveals that it often carries large costs. It would lead to people taking precautions to safeguard against these kinds of harms, which would be costly for society. And it could harm utilitarians’ reputation, which in turn could impair their ability to do good.

    Your link proposes a number of contingent reasons for utilitarians not to act like defect bots. It does not bite the bullet on cases where defection is clearly optimal, and those cases are plentiful. This is cheap and disingenuous rhetoric. His paper's very clear implication is that killing the patient is a valid move if perfect secrecy can be ensured, so strategic arguments about reputation are irrelevant. Most importantly, this ethos breaks down in non-iterated games, e.g. if Utilitarians do build their God AI to subjugate the world and remake it according to their moral code, as many in the rationalist community now intend to do.

    > We have a proof of concept in the effective altruism community, which does collaborate relatively well.

    Again, EA does very well at processing SBF's loot into anti-AI propaganda and funding for "AI safety" labs, but that's still a defection against broader society.

    • I quoted you claiming "It is very much an argument in favor of a fundamentally untrustworthy and conspiratorial mindset".

      Nothing in your reply, nor in any of your other comments, supports that claim. Your claim does not follow from the fact that, in rare, exceptional cases, rule-breaking, perhaps in secret, is what an agent has most reason to do according to act utilitarianism – a well-known feature of the view. The act utilitarian's reasons to be honest, not to defect, and so on are, on philosophical reflection, instrumental to the core utilitarian goal, but such virtues, once habituated, are nonetheless real features of the utilitarian's psychology, just as they are in other people.

      Do you have any empirical evidence showing that real-world utilitarian adherents are less likely to uphold everyday norms against lying, stealing, and so on? In my experience, real-world utilitarians (I've known a fair number of them so far in life) tend to be overrepresented among people working for or donating to effective charities and organizations that work to eradicate global health problems, poverty and factory farming, while being no less conscientious about common-sense norms of honesty, keeping your word, not stealing and so on.

      You haven't described what alternative moral view you yourself adhere to. Does it have an absolute prohibition against secret rule-breaking? If the only way to prevent the end of the world and the death of everyone were to secretly break some everyday rule once, would you say your obligation in that case is to let the world end? If not, then we have identified a case where your own moral view promotes secret rule-breaking. Would that warrant saying that your own view obligates you to have a "fundamentally untrustworthy and conspiratorial mindset"? If not, why not?


> Have you read any moral philosophy before?

yes a fair bit, thanks for showing an interest in me!

> Again, his follower Sam Bankman-Fried has demonstrated this very clearly

I think it's much clearer that Bankman-Fried had absolutely no expectation that his rule-breaking would meet Singer's requirements and, you know, he was just lying for the many normal reasons people lie.

> People who profess to believe in Singerian doctrine can not be trusted to mean what they are saying

in which case, nobody who believes in (this) Singerian doctrine should reveal that they do so.

Anyone telling you they follow this doctrine severely compromises their ability to actually execute on it. The rational thing for a Singerian secrecy advocate to do would be to publicly attack the doctrine, as you are doing.