Comment by airgapstopgap

2 years ago

> That's a misreading of the paper and a misrepresentation of the position that Singer holds

It's not. However, utilitarians are inevitably compelled to argue that it is, because their efficacy depends on it. This amounts to gaslighting about plainly obvious positions they have committed to paper, which is an act of violence in and of itself.

> While it may seem that utilitarians should engage in norm-breaking instrumental harm, a closer analysis reveals that it often carries large costs. It would lead to people taking precautions to safeguard against these kinds of harms, which would be costly for society. And it could harm utilitarians’ reputation, which in turn could impair their ability to do good.

Your link proposes a number of contingent reasons for utilitarians not to act like defect bots. It does not bite the bullet on cases where defection is clearly optimal, and those cases are plentiful. This is cheap and disingenuous rhetoric. His paper's very clear implication is that killing the patient is a valid move if perfect secrecy can be ensured, so strategic arguments about reputation are irrelevant. Most importantly, this ethos breaks down in non-iterated games, e.g. if utilitarians do build their God AI to subjugate the world and remake it according to their moral code, as many in the rationalist community now intend to do.

> We have a proof of concept in the effective altruism community, which does collaborate relatively well.

Again, EA does very well at processing SBF's loot into anti-AI propaganda and funding for "AI safety" labs, but that's still a defection against broader society.

I quoted you claiming "It is very much an argument in favor of a fundamentally untrustworthy and conspiratorial mindset".

Nothing in your reply now, nor in any of your other comments, supports that claim. Your claim does not follow from the fact that, in rare, exceptional cases, rule-breaking, perhaps in secret, is what an agent has most reason to do according to act utilitarianism; that is a well-known feature of the view. The act utilitarian's reasons to be honest, not defect, and so on are, on philosophical reflection, instrumental to the core utilitarian goal, but such virtues, once habituated, are nonetheless real features of the utilitarian's psychology, just as in other people.

Do you possess any empirical evidence showing that real-world utilitarian adherents are less likely to uphold everyday norms against lying, stealing, and so on? In my experience, real-world utilitarians (I've known a bunch of them so far in life) tend to be overrepresented among those working for or donating to effective charities or organizations that work to eradicate global health problems, poverty, and factory farming, and at the same time are no less conscientious with regard to common-sense norms about honesty, keeping your word, not stealing, and so on.

You haven't described what alternative moral view you yourself adhere to. Does it have an absolute prohibition against secret rule-breaking? If the only way to prevent the end of the world and the death of everyone would be to secretly break some everyday rule once, do you think your obligation in that case is to let the world end? If not, then we have identified a case where your own moral view promotes secret rule-breaking. Would that warrant saying that your own view obligates you to have a "fundamentally untrustworthy and conspiratorial mindset"? If not, why not?

  • The rational thing for a secret rule-breaker to do would be to publicly argue against secret rule-breaking at any realistic opportunity. So the more airgapstopgap argues against it, the more fundamentally untrustworthy we should assume they are.

  • > Nothing in your reply now, nor in any of your other comments, supports that claim

    Singer's insistence that a utilitarian doctor is morally bound to kill a patient to save others is sufficient.

    > Your claim does not follow from the fact that, in rare, exceptional cases, rule-breaking, perhaps in secret

    No perhaps about it: secrecy is a part that can only really be discarded in non-iterated settings, in the endgame.

    > but such virtues, once habituated

    Of course you know that "habit", contextualized within a moral framework where habitual action is itself merely instrumental, is a categorically weaker insurance against rule-breaking than habit plus belief in the principle according to which those habitual decisions are generally correct.

    > Do you possess any empirical evidence showing that real-world utilitarian adherents are less likely to uphold everyday norms against lying, stealing, and so on?

    Yes, for example the effective altruism movement is composed of generic totalitarian scum, which is well reflected in their consensus position on AI safety. I notice you flinching from the example of SBF and his little club of Singerians, too.

    The problem with utilitarianism, however, lies precisely at the margins. It is rational for utilitarians to build up reputation and influence with charities and such nonsense, then expend it on a massive power grab. SBF's only fault is that he moved too early, isn't it?

    > You haven't described what alternative moral view you yourself adhere to.

    Intuitive deontology.

    > If the only way to prevent the end of the world and the death of everyone would be to secretly break some everyday rule once

    Jaywalking isn't what the "pivotal act" theory entails, and your theory about total death is specious and motivated by the political benefits of such an act.

    Moreover, if [you believed that] the only way to prevent the eternal suffering of everyone would be to secretly work towards the extinction of humanity, would you not work on it? A consistent utilitarian would.

    > Would that warrant saying that your own view obligates you to have a "fundamentally untrustworthy and conspiratorial mindset"? If not, why not?

    Normal people (i.e. not effective altruists/utilitarians) have ad hoc "decision theories". I am not an exception. The existence of a world is good qualitatively, not as an ultimate expression of the Singerian principle. I believe that lying is wrong qualitatively too, so for me, the logic of habitually being honest applies in the way it cannot apply to a utilitarian, even if you can design a hypothetical where I would have to agree that lying is justified. A world of utilitarians is not a morally good world; a surgeon should commit to losing more lives than otherwise possible in that scenario, because it would not befit a surgeon to kill patients; utilitarian calculations are invalid, so I do not engage in them.

    In more contingent terms: habituated, as you put it, utilitarian reasoning leads to the justification of your preferred policies via nonsense utility estimates. Proclaim the "vulnerable world hypothesis", and now anything you'd want is justified by saving the world, and you get to ape the respectable adult too.

    It's risible.

    • > Singer's insistence ...

      ... in rare, exceptional cases which are extremely unlikely given the teamwork and paper trails of modern-day health care systems, as others have already pointed out to you, but which you keep dropping. Once that context is added, your claim (the one I quoted in my previous reply) does not follow.

      > No perhaps about it: secrecy is a part that can only really be discarded in non-iterated settings, in the endgame.

      No idea what that means.

      > Of course you know that "habit" ... is a categorically weaker insurance against rule-breaking than habit plus belief in the principle ...

      Actually I don't know that. Do you have empirical evidence for it? Evidence with regard to moral views with norms in a multi-level structure, like what act utilitarianism tends to have, compared to views with a single such level or fewer?

      > Yes, for example the effective altruism movement is composed of generic totalitarian scum ...

      I hear you reporting your contempt and your intuition. I'm waiting for supporting evidence: studies.

      > SBF

      A badly behaved billionaire. But I don't know if his behaviour was worse on average than that of billionaire peers who hold other moral belief systems.

      > Jaywalking isn't what the "pivotal act" theory entails, and your theory about total death is specious and motivated by the political benefits of such an act.

      No idea what that means.

      > I believe that lying is wrong qualitatively too, so for me, the logic of habitually being honest applies in the way it cannot apply to a utilitarian, even if you can design a hypothetical where I would have to agree that lying is justified.

      What does "wrong qualitatively" mean, and how does it differ from "wrong non-qualitatively" or "wrong" simpliciter? So far I don't see anything in that sentence that gets you out of the bind of the hypothetical I presented.

      edit: It is clear that you hold the "EA consensus position on AI safety" in contempt. But it isn't clear what you think that position is, what other position you think is better, or how that other position is reached from intuitive deontology or some other normative theory.