Comment by MajimasEyepatch

4 days ago

I feel this way about some of the more extreme effective altruists. There is no room for uncertainty or recognition of the way that errors compound.

- "We should focus our charitable endeavors on the problems that are most impactful, like eradicating preventable diseases in poor countries." Cool, I'm on board.

- "I should do the job that makes the absolute most amount of money possible, like starting a crypto exchange, so that I can use my vast wealth in the most effective way." Maybe? If you like crypto, go for it, I guess, but I don't think that's the only way to live, and I'm not frankly willing to trust the infallibility and incorruptibility of these so-called geniuses.

- "There are many billions more people who will be born in the future than those people who are alive today. Therefore, we should focus on long-term problems over short-term ones because the long-term ones will affect far more people." Long-term problems are obviously important, but the further we get into the future, the less certain we can be about our projections. We're not even good at seeing five years into the future. We should have very little faith in some billionaire tech bro insisting that their projections about the 22nd century are correct (especially when those projections just so happen to show that the best thing you can do in the present is buy the products that said tech bro is selling).

The "longtermism" idea never made sense to me: So we should sacrifice the present to save the future. Alright. But then those future descendants would also have to sacrifice their present to save their future, etc. So by that logic, there could never be a time that was not full of misery. So then why do all of that stuff?

  • At some point in the future, there will no longer be more people yet to live than people alive in the present, at which point you are allowed to improve conditions today. Of course, by that point the human race is nearly finished, but hey.

    That said, if they really thought hard about this problem, they would have come to a different conclusion:

    https://theconversation.com/solve-suffering-by-blowing-up-th...

  • To me it is a disguised way of saying the ends justify the means. Sure, we murder a few people today, but think of the utopian paradise we are building for the future.

    • From my observation, that "building the future" isn't something any of them are actually doing. Instead, the concept that "we might someday do something good with the wealth and power we accrue" seems to be the thought that allows the pillaging. It's a way to feel morally superior without actually doing anything morally superior.

  • A bit of longtermism wouldn't be so bad. We could sacrifice the convenience of burning fossil fuels today so that our descendants have a habitable planet.

    • But that's the great thing about Longtermism. As long as a catastrophe is not going to lead to human extinction or otherwise specifically prevent the Singularity, it's not an X-Risk that you need to be concerned about. So AI alignment is an X-Risk we need to work on, but global warming isn't, so we can keep burning as much fossil fuel as we want. In fact, we need to burn even more of it in order to produce the Singularity. The misery of a few billion present/near-future people doesn't matter compared to the happiness of sextillions of future post-humans.

  • Well, there's a balance to be had. Do the most good you can while still being able to survive the rat race.

    However, people are bad at that.

    I'll give an interesting example.

    Hybrid cars. Modern, proper HEVs [0] usually benefit their owners, both through better fuel economy and, in most cases, by being more reliable overall than a normal car.

    And they are better on CO2 emissions and lower our oil consumption.

    And yet most carmakers, as well as consumers, have been very slow to adopt them. On the consumer side, we finally have hybrid trucks that get 36-40 MPG and can tow 4,000 pounds or haul over 1,000 pounds in the bed [1]; hybrid minivans capable of 35 MPG for transporting groups of people; hybrid sedans getting 50+ MPG; and small SUVs getting 35-40+ MPG for people who need a more normal 'people' car. And while they are selling better, it's insane that it took as long as it did to get here.

    The main 'misery' you experience at that point is that you're driving the same car as a lot of other people, and it's not as exciting [2] as something with more power than most people know what to do with.

    And hell, as they say in investing, sometimes the market can be irrational longer than you can stay solvent. E.g., was it truly worth it for Hydro-Quebec to sit on its LiFePO4 patents the way it did, versus working out licensing terms that earned it a little money while properly accelerating the adoption of hybrids/EVs/etc.?

    [0] - By this I mean something like Toyota's HSD-style setup used by Ford and Subaru, or Honda's or Hyundai/Kia's setup where there's still a more normal transmission involved.

    [1] - Ford advertises up to 1500 pounds, but I feel like the GVWR allows for a 25 pound driver at that point.

    [2] - I feel like there are ways to make an exciting hybrid, but until there's a critical mass or Stellantis gets their act together, it won't happen...

    • > [2] - I feel like there are ways to make an exciting hybrid, but until there's a critical mass or Stellantis gets their act together, it won't happen...

      Many hybrids are already way more exciting than a regular ICE, because they provide more torque, and many consumers buy hybrids for exactly that reason.

    • Not that these technologies have nothing to bring to the table, but any discussion that still presupposes that cars/trucks(/planes) as we know them have a future is (mostly) a waste of time.

      P.S.: The article mentions the "normal error-checking processes of society"... but what makes them so sure cults aren't part of them?

      It's not like society is particularly good at this either, or immune from groupthink (see the issue above) - and who do you think is more likely to kick-start a strong enough alternative?

      (Or are they just sad about all the failures? But it's questionable whether the "process" can work, with all its vivacity, without the "failures"...)

  • "I came up with a step-by-step plan to achieve World Peace, and now I am on a government watchlist!"

  • It goes along with the "taking ideas seriously" part of [R]ationalism. They committed to the idea of maximizing expected quantifiable utility, and they imagined scenarios with big enough numbers (of future population) that the probability of the big-number future coming to pass didn't matter anymore. Normal people stop taking an idea seriously once it's clearly a fantasy, but [R]ationalists can't do that if the fantasy is both technically possible and involves imagined numbers big enough to overwhelm its improbability, because of their commitment to "shut up and calculate". A toy version of that arithmetic is sketched below.
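
    To be concrete, here's a minimal Python sketch of the naive expected-value reasoning being described. Every number is invented purely for illustration; nothing here comes from an actual EA calculation.

        # Naive expected utility: probability times payoff. A big enough
        # imagined payoff swamps any tiny probability, so the "calculation"
        # always favors the fantasy scenario.
        def expected_utility(probability: float, payoff: float) -> float:
            return probability * payoff

        # A mundane, near-certain intervention: 90% chance of saving 1,000 lives.
        mundane = expected_utility(0.9, 1_000)

        # A speculative scenario: a one-in-a-billion chance of securing
        # 10^21 (a sextillion) happy future post-humans.
        speculative = expected_utility(1e-9, 1e21)

        print(f"mundane:     {mundane:,.0f}")      # 900
        print(f"speculative: {speculative:,.0f}")  # 1,000,000,000,000

    The speculative branch "wins" by nine orders of magnitude, and it would keep winning even if its probability estimate were off by a factor of a million - which is exactly the failure mode described above.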

"I should do the job that makes the absolute most amount of money possible, like starting a crypto exchange, so that I can use my vast wealth in the most effective way."

This has always really bothered me because it assumes that there are no negative impacts from the work you did to get the money. If you do a million dollars' worth of damage to the world and earn $100k (or a billion dollars' worth of damage to earn a million dollars), then even if you spend all of the money you earned on making the world a better place, you aren't even going to fix 10% of the damage you caused (and that's ignoring the fact that it's usually easier/cheaper to break things than to fix them).
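
Spelled out in code (a back-of-the-envelope sketch using the hypothetical numbers above, nothing more):

    # Even donating 100% of earnings cannot offset the harm when the
    # harm-to-earnings ratio exceeds 1.
    damage_caused = 1_000_000  # harm done by the job, in dollars
    earnings = 100_000         # what the job paid
    repair_cost_factor = 1.0   # optimistic: $1 donated repairs $1 of damage

    fraction_repaired = earnings / (damage_caused * repair_cost_factor)
    print(f"{fraction_repaired:.0%} of the damage repaired")  # 10%

    # And if fixing things is, say, 3x more expensive than breaking them,
    # the same donation repairs only ~3.3% of the harm.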

  • > If you do a million dollars' worth of damage to the world and earn $100k (or a billion dollars' worth of damage to earn a million dollars), then even if you spend all of the money you earned on making the world a better place, you aren't even going to fix 10% of the damage you caused (and that's ignoring the fact that it's usually easier/cheaper to break things than to fix them).

    You kinda summed up a lot of the post-industrial-revolution world there, at least as far as things like toxic waste (Superfund, anyone?) and climate change go. I mean, for goodness' sake, just think about TEL (tetraethyl lead) and how they knew ethanol could work, but it just wasn't 'patentable'. [0] Or the "we don't even know the dollar amount because we don't have a workable solution" problem of PFAS.

    [0] - I still find it shameful that a university is named after the man who enabled this to happen.

  • And not just that: the very fact that someone considers it valid to try to accumulate billions of dollars so they can have an outsized influence on the direction of society seems questionable in itself.

    Even with 'good' intentions, there is the implied claim that your ideas are better than everyone else's and so deserve to be imposed on everyone. The whole thing is a self-satisfied ego trip.

    • Well, it's easy to do good. Or rather, it's easy to plan on doing good once your multi-decade plan to become a billionaire comes to fruition.

  • There's a hidden (or not-so-hidden) assumption in EAs' "calculations" that capitalism is great and climate change isn't a big deal. (You pretty much have to believe the latter to believe the former.)