My idea of these self-proclaimed rationalists was fifteen years out of date. I thought they were people who write wordy fan fiction, but it turns out they’ve reached the point of having subgroups that kill people and exorcise demons.
This must be how people who had read one Hubbard pulp novel in the 1950s felt decades later when they found out he was now running a full-blown religion.
The article seems to try very hard to find something positive to say about these groups, and comes up with:
“Rationalists came to correct views about the COVID-19 pandemic while many others were saying masks didn’t work and only hypochondriacs worried about covid; rationalists were some of the first people to warn about the threat of artificial intelligence.”
There’s nothing very unique about agreeing with the WHO, or thinking that building Skynet might be bad… (The rationalist Moses/Hubbard was 12 when that movie came out — the most impressionable age.) In the wider picture painted by the article, these presumed successes sound more like a case of a stopped clock being right twice a day.
You're falling into some sort of fallacy; maybe a better rationalist than I could name it.
The "they" you are describing is a large body of disparate people spread around the world. We're reading an article that focuses on a few dysfunctional subgroups. They are interesting because they are so dysfunctional and rare.
Or put it this way: Name one -ism that _doesn't_ have sub/splinter groups that kill people. Even Pacifism doesn't get a pass.
The article specifically defines the rationalists it’s talking about:
“The rationalist community was drawn together by AI researcher Eliezer Yudkowsky’s blog post series The Sequences, a set of essays about how to think more rationally.”
Is this really a large body of disparate people spread around the world? I suspect not.
Dadaism? Most art -isms didn't have subgroups who killed people. If people killed others in art history it was mostly tragic individual stories and had next to nothing to do with the ideology of the ism.
>The "they" you are describing is a large body of disparate people spread around the world.
And that "large body" has a few hundred core major figures and prominent adherents, and a hell of a lot of them seem to be exactly like how the parent describes. Even the "tamer" of them like ASC have that cultish quality...
As for the rest of the "large body", the hangers-on, those are mostly out of view anyway, but I doubt they'd be paragons of sanity if looked at up close.
>Or put it this way: Name one -ism that _doesn't_ have sub/splinter groups that kill people
-isms include fascism, nazism, jihadism, nationalism, communism, racism, etc, so not exactly the best argument to make in rationalism's defense. "Yeah, rationalism has groups that murder people, but after all didn't fascism have those too?"
Though, if we were honest, it mostly brings to mind another, more medically related, -ism.
The level of dysfunction described in the article is really rare. But dysfunction of the kind we're talking about is not that rare; I would even say it's quite common in self-proclaimed rationalist groups. They don't kill people - at least not directly - but they are definitely not what they claim to be: rational. They use rational tools more than others do, but they are not more rational than others; they simply use these tools to rationalize their irrationality.
Lately I touch rationalists only with a pole, because they are not smarter than others; they just think they are, and on the surface level they seem so. They praise Julia Galef, then ignore everything she said. Even Galef invited people who were full-blown racists; it just seemed that they were all right because they knew whom they were talking with and couldn't bullshit. They tried to argue that their racism was rational, but you couldn't tell from the interviews; on every other platform they flat out lie all the time. So in the end she just gave a platform to covert racism.
The WHO didn't declare a global pandemic until March 11, 2020 [1]. That's a little slow and some rationalists were earlier than that. (Other people too.)
After reading a warning from a rationalist blog, I posted a lot about COVID news to another forum and others there gave me credit for giving the heads-up that it was a Big Deal and not just another thing in the news. (Not sure it made all that much difference, though?)
I worked at the British Medical Journal at the time. We got wind of COVID being a big thing in January. I spent January to March getting our new VPN into a fit state so that the whole company could do their whole jobs from home. 23 March was lockdown and we were ready and had a very busy year.
That COVID was going to be big was obvious to a lot of people and groups who were paying attention. We were a health-related org, but we were extremely far from unique in this.
The rationalist claim that they were uniquely on the ball and everyone else dropped it is just a marketing lie.
Do you think that the consequences of the WHO declaring a pandemic and some rationalist blog warning about covid are the same? Clearly the WHO has to be more cautious. I have no doubt there were people at the WHO who felt a global pandemic was likely at least as early as you and the person writing the rationalist blog.
Shitposting comedy forums were ahead of the WHO when it came to this, it didn't take a genius to understand what was going on before shit completely hit the fan.
I think the piece bends over backwards to keep the charitable frame because it's written by someone inside the community, but you're right that the touted "wins" feel a bit thin compared to the sheer scale of dysfunction described.
Personally I feel like the big thing to come out of rationalism is the insight that, in Scott Alexander's words [0] (freely after Julia Galef),
> Of the fifty-odd biases discovered by Kahneman, Tversky, and their successors, forty-nine are cute quirks, and one is destroying civilization. This last one is confirmation bias - our tendency to interpret evidence as confirming our pre-existing beliefs instead of changing our minds.
I'm mildly surprised the author didn't include it in the list.
> Rationalists came to correct views about the COVID-19 pandemic while many others were saying masks didn’t work
I wonder what views about covid-19 are correct. On masks, I remember the mainstream messaging went through stages: masks don't work, some masks work, all masks work, double masking works, and finally masks don't work (or some masks work; I can't remember where we ended up).
> to finally masks don't work (or some masks work; I can't remember where we ended up).
Most masks 'work', for some value of 'work', but efficacy differs (which, to be clear, was ~always known; there was a very short period when some authorities insisted that covid was primarily transmitted by touch, but you're talking weeks at most). In particular I think what confused people was that the standard blue surgical masks are somewhat effective at stopping an infected person from passing on covid (and various other things), but not hugely effective at preventing the wearer from contracting covid; for that you want something along the lines of an n95 respirator.
The main actual point of controversy was whether it was airborne or not (vs just short-range spread by droplets); the answer, in the end, was 'yes', but it took longer than it should have to get there.
Putting just about anything in front of your face will help prevent spreading illness to some extent, this is why we teach children to "vampire cough". Masks were always effective to some degree. The CDC lied to the public by initially telling them not to use masks because they wanted to keep the supply for healthcare workers and they were afraid that the public would buy them all up first. It was a very very stupid thing to do and it undermined people's trust in the CDC and confused people about masks. After that masks became politicized and the whole topic became a minefield.
Basic masks work for society because they stop your saliva from traveling, but they don't work for you because they don't stop particles from other people's saliva from reaching you.
I was reminded of Hubbard too. In particular, the "[belief that one] should always escalate when threatened" strongly echoes Hubbard's advice to always attack, attack, never defend.
The whole thing reminds me of EST and a thousand other cults / self-improvement / self-actualisation groups that seem endemic to California ever since the 60s or before.
As someone who started reading without knowing about rationalists, I actually came out without knowing much more. Lots of context is assumed I guess.
Some main figures and rituals are mentioned but I still don’t know how the activities and communities arise from the purported origin. How do we go from “let’s rationally analyze how we think and get rid of bias” to creating a crypto, or being hyper-focused on AI, or summoning demons? Why did they arrive at this idea of always matching confrontation with escalation? Why the focus on programming? Is this a Silicon Valley thing?
Also lesswrong is mentioned but no context is given about it. I only know the name as a forum, just like somethingawful or Reddit, but I don’t know how it fits into the picture.
LessWrong was originally a personal blog of Eliezer Yudkowsky. It was an inspiration for what later became the "rationality community". These days, LessWrong is a community blog. The original articles were published as a book, freely available at: https://www.readthesequences.com/ If you read it, you can see what the community was originally about; but it is long.
Some frequent topics debated on LessWrong are AI safety, human rationality, effective altruism. But it has no strict boundaries; some people even post about their hobbies or family life. Debating politics is discouraged, but not banned. The website is mostly moderated by its users, by voting on articles and comments. The voting is relatively strict, and can be scary for many newcomers. (Maybe it is not strategic to say this, but most comments on Hacker News would probably be downvoted on LessWrong for insufficient quality.)
Members of the community, the readers of the website, are all over the planet. (Just what you would expect from readers of an internet forum.) But in some cities there are enough of them that they can organize an offline meetup once in a while. And in a very few cities there are so many of them that they are practically a permanent offline community; most notably in the Bay Area.
I don't live in the Bay Area. To describe how the community functions in my part of the world: we meet about once a month, sometimes less frequently, and we discuss various nerdy stuff. (Apologies if this is insufficiently impressive. From my perspective, the quality of those discussions is much higher than I have seen anywhere else, but I guess there is no way to provide this experience second-hand.) There is a spirit of self-improvement; we encourage each other to think logically and try to improve our lives.
Oh, and how does the bad part connect to it?
Unfortunately, although the community is about trying to think better, for some reason it also seems very attractive for people who are looking for someone to tell them how to think. (I mean, we do tell them how to think, but in a very abstract way: check the evidence, remember your cognitive biases, et cetera.) They are a perfect material for a cult.
The rationality community itself is not a cult. Too much disagreement and criticism of our own celebrities for that! There is also no formal membership; anyone is free to come and go. Sometimes a wannabe cult leader joins the community, takes a few vulnerable people aside, and starts a small cult. In two of the three examples in the article, it was a group of about five people -- when you have hundreds of members in a city, you won't notice when five of them start attending your meetups less frequently, and then disappear completely. And one day... you read about them in the newspapers.
> How do we go from “let’s rationally analyze how we think and get rid of bias” to creating a crypto, or being hype focused on AI, or summoning demons? Why did they raise this idea of matching confrontation always with escalation?
Rationality and AI have always been the focus of the community. Buying crypto was considered common sense back when Bitcoin was cheap, but I haven't heard talk about crypto in the rationality community recently.
On the other hand, believing in demons, and the idea that you should always escalate... those are specific ideas of the leaders of the small cults, definitely not shared by the rest of the community.
Notice how the first thing the wannabe cult leaders do is isolate their followers even from the rest of the rationality community. They are quite aware that what they are doing would be considered wrong by the rest of the community.
The question is, how can the community prevent this? If your meetings are open for everyone, how can you prevent one newcomer from privately contacting a few other newcomers, meeting them in private, and brainwashing them? I don't have a good answer for that.
> And masks? How many graphs of cases/day with mask mandate transitions overlayed are required before people realize masks did nothing? Whole countries went from nearly nobody wearing them, to everyone wearing them, overnight, and COVID cases/day didn't even notice.
Most of those countries didn't actually follow their mask mandates - the USA for example. I visited because the PRC was preventing vaccine deliveries to Taiwan, so I flew to the USA to get a vaccine, and I distinctly remember thinking "yeah... of course" when I walked around an airport full of people chin-diapering.
Taiwan halted a couple outbreaks from pilots completely, partially because people are so used to wearing masks when they're sick here (and also because the mask mandate was strictly enforced everywhere).
I visited DC a year later where they had a memorial for victims of COVID. It was 700,000 white flags near the Washington monument when I visited, as I recall it broke a million a few months later.
This article is beautifully written, and it's full of proper original research. I'm sad that most comments so far are knee-jerk "lol rationalists" type responses. I haven't seen any comment yet that isn't already addressed in much more colour and nuance in the article itself.
I think that since it's not possible to reply to multiple comments at the same time, people will naturally open a new top-level comment the moment there's a clearly identifiable groupthink emerging. Quoting one of your earlier comments about this:
>This happens so frequently that I think it must be a product of something hard-wired in the medium *[I mean the medium of the internet forum]
I would say it's only hard-wired in the medium of tree-style comment sections. If HN worked more like linear forums with multi-quote/replies, it might be possible to have multiple back-and-forths of subgroup consensus like this.
> I haven't seen any comment yet that isn't already addressed in much more colour and nuance in the article itself.
I once called rationalists infantile, impotent liberal escapism; perhaps that's the novel take you are looking for.
Essentially my view is that the fundamental problem with rationalists and the effective altruist movement is that they are talking about profound social and political issues, with any and all politics completely and totally removed from it. It is liberal depoliticisation[1] driven to its ultimate conclusion. That's just why they are ineffective and wrong about everything, but that's also why they are popular among the tech elites that are giving millions to associated groups like MIRI[2]. They aren't going away, they are politically useful and convenient to very powerful people.
I just so happened to read in the last few days the (somewhat disjointed and rambling) Technically Radical: On the Unrecognized [Leftist] Potential of Tech Workers and Hackers
"Rationalists" do seem to be in some ways the poster children of consumerist atomization, but do note that they also resisted it socially by forming those 'cults' of theirs.
(If counter-cultures are 'dead', why don't they count as one? Alternatively, might this be a form of communitarianism, but with less traditionalism, more atheism, and perhaps a Jewish slant?)
Asterisk is basically "rationalist magazine" and the author is a well-known rationalist blogger, so it's not a surprise that this is basically the only fair look into this phenomenon - compared to the typical outside view that rationalism itself is a cult and Eliezer Yudkowsky is a cult leader, both of which I consider absurd notions.
> the typical outside view that rationalism itself is a cult and Eliezer Yudkowsky is a cult leader, both of which I consider absurd notions
Cults are a whole biome of personalities. The prophet does not need to be the same person as the leader. They sometimes are and things can be very ugly in those cases, but they often aren’t. After all, there are Christian cults today even though Jesus and his supporting cast have been dead for approaching 2k years.
Yudkowsky seems relatively benign as far as prophets go, though who knows what goes on in private (I’m sure some people on here do, but the collective We do not). I would guess that the failure mode for him would be a David Miscavige type who slowly accumulates power while Yudkowsky remains a figurehead. This could be a girlfriend or someone who runs one of the charitable organizations (controlling the purse strings when everyone is dependent on the organization for their next meal is a time honored technique). I’m looking forward to the documentaries that get made in 20 years or so.
I think it's perfectly fine to read these articles, think "definitely a cult" and ignore whether they believe in spaceships, or demons, or AGI.
The key takeaway from the article is that if you have a group leader who cuts you off from other people, that's a red flag – not really a novel, or unique, or situational insight.
That's a side point of the article, acknowledged as an old idea. The central points of this article are actually quite a bit more interesting than that. He even summarized his conclusions concisely at the end, so I don't know what your excuse is for trivializing it.
The other key takeaway, that people with trauma are more attracted to organizations that purport to be able to fix them, and are thus over-represented in such organizations (vs in the general population), is also important.
Because if you're going to set up a hierarchical (explicitly or implicitly) isolated organization with a bunch of strangers, it's good to start by asking "How much do I trust these strangers?"
> The key takeaway from the article is that if you have a group leader who cuts you off from other people, that's a red flag – not really a novel, or unique, or situational insight
Well yes and no. The reason why I think the insight is so interesting is that these groups were formed, almost definitionally, for the purpose of avoiding such "obvious" mistakes. The name of the group is literally the "Rationalists"!
I find that funny and ironic, and I think it says something important about this philosophy, in that it implies that the rest of society wasn't so "irrational" after all.
As a more extreme and silly example, imagine there was a group called "Cults suck, and we are not a cult!", that was created for the very purpose of fighting cults, and yet, ironically, became a cult in and of itself. That would be insightful and funny.
One of a few issues I have with groups like these is that they often confidently and aggressively spew a set of beliefs that on their face logically follow from one another, until you realize they are built on a set of axioms that are either entirely untested or outright nonsense. This is common everywhere, but I feel it is especially pronounced in communities like this. It also involves quite a bit of navel gazing that makes me feel a little sick participating in it.
The smartest people I have ever known have been profoundly unsure of their beliefs and what they know. I immediately become suspicious of anyone who is very certain of something, especially if they derived it on their own.
I don’t think it’s just (or even particularly) bad axioms, I think it’s that people tend to build up “logical” conclusions where they think each step is a watertight necessity that follows inevitably from its antecedents, but actually each step is a little bit leaky, leading to runaway growth in false confidence.
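To make the leak concrete, here is a toy illustration with numbers I made up (not anything from the thread): even modest per-step leakiness compounds quickly across a long chain of "watertight" steps.

```python
# Toy illustration with made-up numbers: each "watertight" step in a chain
# of reasoning actually holds only with probability p, so the chain's
# overall reliability is p**n, even though felt confidence stays near 1.0.
p_step = 0.9  # assumed reliability of a single "obvious" step

for n_steps in (1, 3, 5, 10, 20):
    print(f"{n_steps:2d} steps -> chain holds with p ~ {p_step ** n_steps:.2f}")

# 10 steps -> ~0.35, 20 steps -> ~0.12: the conclusion is far shakier
# than any individual step feels.
```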
Not that non-rationalists are any better at reasoning, but non-rationalists do at least benefit from some intellectual humility.
As a former mechanical engineer, I visualize this phenomenon like a "tolerance stackup". Effectively meaning that for each part you add to the chain, you accumulate error. If you're not damn careful, your assembly of parts (or conclusions) will fail to measure up to expectations.
> I don’t think it’s just (or even particularly) bad axioms
IME most people aren't very good at building axioms. I hear a lot of people say "from first principles" and it is a pretty good indication that they will not be. First principles require a lot of effort to create. They require iteration. They require a lot of nuance, care, and precision. And of course they do! They are the foundation of everything else that is about to come. This is why I find it so odd when people say "let's work from first principles" and then just state something matter of factly and follow from there. If you want to really do this you start simple, attack your own assumptions, reform, build, attack, and repeat.
This is how you reduce the leakiness, but I think it is categorically the same problem as the bad axioms. It is hard to challenge yourself and we often don't like being wrong. It is also really unfortunate that small mistakes can be a critical flaw. There's definitely an imbalance.
>> The smartest people I have ever known have been profoundly unsure of their beliefs and what they know.
This is why the OP is seeing this behavior. Because the smartest people you'll meet are constantly challenging their own ideas. They know they are wrong to at least some degree. You'll sometimes find them talking with a bit of authority at first, but a key part is watching how they deal with challenges to their assumptions. Ask them what would cause them to change their minds. Ask them about nuances and details. They won't always dig into that can of worms, but they will be aware of it and maybe nervous or excited about going down that road (or do they just outright dismiss it?). They understand that accuracy is proportional to computation, and that you need exponentially increasing computation as you converge on accuracy. These are strong indications, since they'll suggest whether they care more about the right answer or about being right. You also don't have to be very smart to detect this.
> I don’t think it’s just (or even particularly) bad axioms, I think it’s that people tend to build up “logical” conclusions where they think each step is a watertight necessity that follows inevitably from its antecedents, but actually each step is a little bit leaky, leading to runaway growth in false confidence.
This is what you get when you naively re-invent philosophy from the ground up while ignoring literally 2500 years of actual debugging of such arguments by the smartest people who ever lived.
You can't diverge from and improve on what everyone else did AND be almost entirely ignorant of it, let alone have no training whatsoever in it. This extreme arrogance I would say is the root of the problem.
> Not that non-rationalists are any better at reasoning, but non-rationalists do at least benefit from some intellectual humility.
Non-rationalists are forced to use their physical senses more often because they can't follow the chain of logic as far. This is to their advantage. Empiricism > rationalism.
> I think it’s that people tend to build up “logical” conclusions where they think each step is a watertight necessity that follows inevitably from its antecedents, but actually each step is a little bit leaky, leading to runaway growth in false confidence.
Yeah, this is a pattern I've seen a lot recently—especially in discussions about LLMs and the supposed inevitability of AGI (and the Singularity). This is a good description of it.
Yet I think most people err in the other direction. They 'know' the basics of health, of discipline, of charity, but have a hard time following through.
'Take a simple idea, and take it seriously': a favorite aphorism of Charlie Munger. Most of the good things in my life have come from trying to follow through the real implications of a theoretical belief.
I feel this way about some of the more extreme effective altruists. There is no room for uncertainty or recognition of the way that errors compound.
- "We should focus our charitable endeavors on the problems that are most impactful, like eradicating preventable diseases in poor countries." Cool, I'm on board.
- "I should do the job that makes the absolute most amount of money possible, like starting a crypto exchange, so that I can use my vast wealth in the most effective way." Maybe? If you like crypto, go for it, I guess, but I don't think that's the only way to live, and I'm not frankly willing to trust the infallibility and incorruptibility of these so-called geniuses.
- "There are many billions more people who will be born in the future than those people who are alive today. Therefore, we should focus on long-term problems over short-term ones because the long-term ones will affect far more people." Long-term problems are obviously important, but the further we get into the future, the less certain we can be about our projections. We're not even good at seeing five years into the future. We should have very little faith in some billionaire tech bro insisting that their projections about the 22nd century are correct (especially when those projections just so happen to show that the best thing you can do in the present is buy the products that said tech bro is selling).
> I don’t think it’s just (or even particularly) bad axioms, I think it’s that people tend to build up “logical” conclusions where they think each step is a watertight necessity that follows inevitably from its antecedents, but actually each step is a little bit leaky, leading to runaway growth in false confidence.
I really like your way of putting it. It’s a fundamental fallacy to assume certainty when trying to predict the future. Because, as you say, uncertainty compounds over time, all prediction models are chaotic. It’s usually associated with some form of Dunning-Kruger, where people know just enough to have ideas but not enough to understand where they might fail (thus vastly underestimating uncertainty at each step), or just lacking imagination.
Precisely! I'd even say they get intoxicated with their own braininess. The expression that comes to mind is to get "way out over your skis".
I'd go even further and say most of the world's evils are caused by people with theories that are contrary to evidence. I'd place Marx among these but there's no shortage of examples.
Strongly recommend this profile in the NYer on Curtis Yarvin (who also uses "rationalism" to justify their beliefs) [0]. The section towards the end that reports on his meeting one of his supposed ideological heroes for an extended period of time is particularly illuminating.
I feel like the internet has led to an explosion of such groups because it abstracts the "ideas" away from the "people". I suspect if most people were in a room or spent an extended amount of time around any of these self-professed, hyper-online rationalists, they would immediately disregard any theories they were able to cook up, no matter how clever or persuasively-argued they might be in their written down form.
> I feel like the internet has led to an explosion of such groups because it abstracts the "ideas" away from the "people". I suspect if most people were in a room or spent an extended amount of time around any of these self-professed, hyper-online rationalists, they would immediately disregard any theories they were able to cook up, no matter how clever or persuasively-argued they might be in their written down form.
Likely the opposite. The internet has led to people being able to see the man behind the curtain, and realize how flawed the individuals pushing these ideas are. Whereas many intellectuals from 50 years back were just as bad if not worse, but able to maintain a false aura of intelligence by cutting themselves off from the masses.
> I immediately become suspicious of anyone who is very certain of something
Me too, in almost every area of life. There's a reason it's called a conman: they are tricking your natural sense that confidence is connected to correctness.
But also, even when it isn't about conning you, how do people become certain of something? They ignored the evidence against whatever they are certain of.
People who actually know what they're talking about will always restrict the context and hedge their bets. Their explanations are tentative, filled with ifs and buts. They rarely say anything sweeping.
They see the same pattern repeatedly until it becomes the only reasonable explanation? I’m certain about the theory of gravity because every time I drop an object it falls to the ground with a constant acceleration.
Most likely Gide ("Croyez ceux qui cherchent la vérité, doutez de ceux qui la trouvent", "Believe those who seek Truth, doubt those who find it") and not Voltaire ;)
Voltaire was generally more subtle: "un bon mot ne prouve rien", a witty saying proves nothing, as he'd say.
Well you could be a critical rationalist and do away with the notion of "certainty" or any sort of justification or privileged source of knowledge (including "rationality").
Many arguments arise over the valuation of future money. See "discount function" [1] At one extreme are the rational altruists, who rate that near 1.0, and the "drill, baby, drill" people, who are much closer to 0.
The discount function really should have a noise term, because predictions about the future are noisy, and the noise increases with the distance into the future. If you don't consider that, you solve the wrong problem. There's a classic Roman concern about running out of space for cemeteries. Running out of energy, or overpopulation, turned out to be problems where the projections assumed less noise than actually happened.
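As a rough sketch of that point (my own toy model and parameter names, not a standard formula): one way to "add a noise term" is to haircut distant payoffs twice, once for time preference and once for forecast uncertainty that grows with the horizon.

```python
import math

def discounted_value(value, years, rate=0.03, noise_per_year=0.02):
    """Toy model: exponential time discounting plus an extra haircut that
    stands in for forecast noise growing with distance into the future.
    Both parameters are illustrative, not calibrated to anything."""
    certainty_equivalent = value * math.exp(-noise_per_year * years)
    return certainty_equivalent / (1 + rate) ** years

for horizon in (1, 10, 50, 100):
    print(f"{horizon:3d} years -> {discounted_value(1000, horizon):7.1f}")
```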
I find Yudkowsky-style rationalists morbidly fascinating in the same way as Scientologists and other cults. Probably because they seem to genuinely believe they're living in a sci-fi story. I read a lot of their stuff, probably too much, even though I find it mostly ridiculous.
The biggest nonsense axiom I see in the AI-cult rationalist world is recursive self-improvement. It's the classic reason superintelligence takeoff happens in sci-fi: once AI reaches some threshold of intelligence, it's supposed to figure out how to edit its own mind, do that better and faster than humans, and exponentially leap into superintelligence. The entire "AI 2027" scenario is built on this assumption; it assumes that soon LLMs will gain the capability of assisting humans on AI research, and AI capabilities will explode from there.
But AI being capable of researching or improving itself is not obvious; there's so many assumptions built into it!
- What if "increasing intelligence", which is a very vague goal, has diminishing returns, making recursive self-improvement incredibly slow?
- Speaking of which, LLMs already seem to have hit a wall of diminishing returns; it seems unlikely they'll be able to assist cutting-edge AI research with anything other than boilerplate coding speed improvements.
- What if there are several paths to different kinds of intelligence with their own local maxima, in which the AI can easily get stuck after optimizing itself into the wrong type of intelligence?
- Once AI realizes it can edit itself to be more intelligent, it can also edit its own goals. Why wouldn't it wirehead itself? (short-circuit its reward pathway so it always feels like it's accomplished its goal)
Knowing Yudkowsky, I'm sure there's a long blog post somewhere where all of these are addressed with several million rambling words of theory, but I don't think any amount of doing philosophy in a vacuum without concrete evidence could convince me that fast-takeoff superintelligence is possible.
I agree. There's also the point of hardware dependance.
From all we've seen, the practical ability of AI/LLMs seems to be strongly dependent on how much hardware you throw at it. Seems pretty reasonable to me - I'm skeptical that there's that much out there in gains from more clever code, algorithms, etc on the same amount of physical hardware. Maybe you can get 10% or 50% better or so, but I don't think you're going to get runaway exponential improvement on a static collection of hardware.
Maybe they could design better hardware themselves? Maybe, but then the process of improvement is still gated behind how fast we can physically build next-generation hardware, perfect the tools and techniques needed to make it, deploy with power and cooling and datalinks and all of that other tedious physical stuff.
> it assumes that soon LLMs will gain the capability of assisting humans
No, it does not. It assumes there will be progress in AI. It does not assume that progress will be in LLMs.
It doesn't require AI to be better than humans for AI to take over because, unlike a human, an AI can be cloned. You have 2 AIs, then 4, then 8.... then millions. All able to do the same things as humans (the assumption of AGI). Build cars, build computers, build rockets, build space probes, build airplanes, build houses, build power plants, build factories. Build robot factories to create more robots and more power plants and more factories.
PS: Not saying I believe in the doom. But the thought experiment doesn't seem indefensible.
> - What if there are several paths to different kinds of intelligence with their own local maxima, in which the AI can easily get stuck after optimizing itself into the wrong type of intelligence?
I think what's more plausible is that there is general intelligence, and humans have that, and it's general in the same sense that Turing machines are general, meaning that there is no "higher form" of intelligence that has strictly greater capability. Computation speed, memory capacity, etc. can obviously increase, but those are available to biological general intelligences just like they would be available to electronic general intelligences.
An interesting point you make there — one would assume that if recursive self-improvement were a thing, Nature would have already led humans into that "hall of mirrors".
> What if "increasing intelligence", which is a very vague goal, has diminishing returns, making recursive self-improvement incredibly slow?
This is sort of what I subscribe to as the main limiting factor, though I'd describe it differently. It's sort of like Amdahl's Law (and I imagine there's some sort of Named law that captures it, I just don't know the name): the magic AI wand may be very good at improving some part of AGI capability, but the more you improve that part, the more the other parts come to dominate. Metaphorically, even if the juice is worth the squeeze initially, pretty soon you'll only be left with a dried-out fruit clutched in your voraciously energy-consuming fist.
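For reference, here is the shape of the Amdahl's Law argument being gestured at, applied by analogy to self-improvement (the standard formula, with illustrative numbers of my own):

```python
def overall_speedup(improvable_fraction, factor):
    """Amdahl's Law: if only a fraction of the total work can be sped up
    by `factor`, the untouched remainder caps the overall gain."""
    return 1.0 / ((1.0 - improvable_fraction) + improvable_fraction / factor)

# Even an effectively infinite speedup on 80% of "what makes AGI capable"
# yields at most a 5x overall improvement (made-up fraction).
print(overall_speedup(0.8, 10))    # ~3.6
print(overall_speedup(0.8, 1e9))   # -> approaches 5.0
```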
I'm actually skeptical that there's much juice in the first place; I'm sure today's AIs could generate lots of harebrained schemes for improvement very quickly, but exploring those possibilities is mind-numbingly expensive. Not to mention that the evaluation functions are unreliable, unknown, and non-monotonic.
Then again, even the current AIs have convinced a large number of humans to put a lot of effort into improving them, and I do believe that there are a lot of improvements that humans are capable of making to AI. So the human-AI system does appear to have some juice left. Where we'll be when that fruit is squeezed down to a damp husk, I have no idea.
The built in assumptions are always interesting to me, especially as it relates to intelligence. I find many of them (though not all), are organized around a series of fundamental beliefs that are very rarely challenged within these communities. I should initially mention that I don't think everyone in these communities believes these things, of course, but I think there's often a default set of assumptions going into conversations in these spaces that holds these axioms. These beliefs more or less seem to be as follows:
1) They believe that there exists a singular factor to intelligence in humans which largely explains capability in every domain (a super g factor, effectively).
2) They believe that this factor is innate, highly biologically regulated, and a static property of a person. (Someone who is high-IQ, in their minds, must have been a high-achieving child and must be very capable as an adult; these are the baseline assumptions.) There is potentially a belief that this can be shifted in certain directions, but broadly there is an assumption that you either have it or you don't; there is no sense of it as something that could be taught or developed without pharmaceutical intervention or some other method.
3) There is also broadly a belief that this factor is at least fairly accurately measured by modern psychometric IQ tests and educational achievement, and that this factor is a continuous measurement with no bounds on it (You can always be smarter in some way, there is no max smartness in this worldview).
These are things that certainly could be true, and perhaps I haven't read enough into the supporting evidence for them but broadly I don't see enough evidence to have them as core axioms the way many people in the community do.
More to your point though, when you think of the world from those sorts of axioms above, you can see why an obsession would develop with the concept of a certain type of intelligence being recursively improving. A person who has become convinced of their moral placement within a societal hierarchy based on their innate intellectual capability has to grapple with the fact that there could be artificial systems which score higher on the IQ tests than them, and if those IQ tests are valid measurements of this super intelligence factor in their view, then it means that the artificial system has a higher "ranking" than them.
Additionally, in the mind of someone who has internalized these axioms, there is no vagueness about increasing intelligence! For them, intelligence is the animating factor behind all capability, it has a central place in their mind as who they are and the explanatory factor behind all outcomes. There is no real distinction between capability in one domain or another mentally in this model, there is just how powerful a given brain is. Having the singular factor of intelligence in this mental model means being able to solve more difficult problems, and lack of intelligence is the only barrier between those problems being solved vs unsolved. For example, there's a common belief among certain groups among the online tech world that all governmental issues would be solved if we just had enough "high-IQ people" in charge of things irrespective of their lack of domain expertise. I don't think this has been particularly well borne out by recent experiments, however. This also touches on what you mentioned in terms of an AI system potentially maximizing the "wrong types of intelligence", where there isn't a space in this worldview for a wrong type of intelligence.
It's kinda weird how the level of discourse seems to be what you get when a few college students sit around smoking weed. Yet somehow this is taken as very serious and profound in the valley and VC throw money at it.
I've pondered recursive self-improvement. I'm fairly sure it will be a thing - we're at a point already where people could try telling Claude or some such to have a go, even if not quite at a point where it would work. But I imagine take-off would be very gradual. It would be constrained by available computing resources and probably only comparable to current human researchers, and so would still take ages to get anywhere.
Yeah, to compare Yudkowsky to Hubbard: I've read accounts of people who read Dianetics or Science of Survival and thought "this is genius!" and I'm scratching my head, because it's like they never read Freud or Horney or Beck or Berne or Burns or Rogers or Kohut, really any clinical psychology at all, even anything in the better 70% of pop psychology. Like Hubbard, Yudkowsky is unreadable, rambling [1] and inarticulate -- how anybody falls for it boggles my mind [2]. But hey, people fell for Carlos Castaneda, who never used a word of the Yaqui language or mentioned any plant that grows in the desert in Mexico, but has Don Juan give lectures about Kant's Critique of Pure Reason [3] that Castaneda would have heard in school and you would have heard in school too if you went to school, or would have read if you read a lot.
I can see how it appeals to people like Aella who wash into San Francisco without exposure to education [4] or philosophy or computer science or any topics germane to the content of Sequences -- not that it means you are stupid, but, like Dianetics, Sequences wouldn't be appealing if you were at all well read. How people at frickin' Oxford or Stanford fall for it is beyond me, however.
[1] some might even say a hypnotic communication pattern inspired by Milton Erickson
[2] you think people would dismiss Sequences because it's a frickin' Harry Potter fanfic, but I think it's like the 419 scam email which is riddled by typos which is meant to drive the critical thinker away and, ironically in the case of Sequences, keep the person who wants to cosplay as a critical thinker.
[3] minus any direct mention of Kant
[4] thus many of the marginalized, neurodivergent, transgender who left Bumfuck, AK because they couldn't live at home and went to San Francisco to escape persecution as opposed to seek opportunity
I'm surprised not to see much pushback on your point here, so I'll provide my own.
We have an existence proof for intelligence that can improve AI: humans can do this right now.
Do you think AI can't reach human-level intelligence? We have an existence proof of human-level intelligence: humans. If you think AI will reach human-level intelligence then recursive self-improvement naturally follows. How could it not?
Or do you think human-level intelligence is some kind of natural maximum? Why? That would be strange, no? Even if you think it's some natural maximum for LLMs specifically, why? And why do you think we wouldn't modify architectures as needed to continue to make progress? That's already happening; our LLMs are a long way from the pure text prediction engines of four or five years ago.
There is already a degree of recursive improvement going on right now, but with humans still in the loop. AI researchers currently use AI in their jobs, and despite the recent study suggesting AI coding tools don't improve productivity in the circumstances they tested, I suspect AI researchers' productivity is indeed increased through use of these tools.
So we're already on the exponential recursive-improvement curve, it's just that it's not exclusively "self" improvement until humans are no longer a necessary part of the loop.
On your specific points:
> 1. What if increasing intelligence has diminishing returns, making recursive improvement slow?
Sure. But this is a point of active debate between "fast take-off" and "slow take-off" scenarios; it's certainly not settled among rationalists which is more plausible, and it's a straw man to suggest they all believe in a fast take-off scenario. But both fast and slow take-off due to recursive self-improvement are still recursive self-improvement, so if you only want to criticise the fast take-off view, you should speak more precisely.
I find both slow and fast take-off plausible, as the world has seen both periods of fast economic growth through technology, and slower economic growth. It really depends on the details, which brings us to:
> 2. LLMs already seem to have hit a wall of diminishing returns
This is IMHO false in any meaningful sense. Yes, we have to use more computing power to get improvements without doing any other work. But have you seen METR's metric [1] on AI progress in terms of the (human) duration of task they can complete? This is an exponential curve that has not yet bent, and if anything has accelerated slightly.
Do not confuse GPT-5 (or any other incrementally improved model) failing to live up to unreasonable hype for an actual slowing of progress. AI capabilities are continuing to increase - being on an exponential curve often feels unimpressive at any given moment, because the relative rate of progress isn't increasing. This is a fact about our psychology, if we look at actual metrics (that don't have a natural cap like evals that max out at 100%, these are not good for measuring progress in the long-run) we see steady exponential progress.
> 3. What if there are several paths to different kinds of intelligence with their own local maxima, in which the AI can easily get stuck after optimizing itself into the wrong type of intelligence?
This seems valid. But it seems to me that unless we see METR's curve bend soon, we should not count on this. LLMs have specific flaws, but I think if we are honest with ourselves and not over-weighting the specific silly mistakes they still make, they are on a path toward human-level intelligence in the coming years. I realise that claim will sound ridiculous to some, but I think this is in large part due to people instinctively internalising that everything LLMs can do is not that impressive (it's incredible how quickly expectations adapt), and therefore over-indexing on their remaining weaknesses, despite those weaknesses improving over time as well. If you showed GPT-5 to someone from 2015, they would be telling you this thing is near human intelligence or even more intelligent than the average human. I think we all agree that's not true, but I think that superficially people would think it was if their expectations weren't constantly adapting to the state of the art.
> 4. Once AI realizes it can edit itself to be more intelligent, it can also edit its own goals. Why wouldn't it wirehead itself?
It might - but do we think it would? I have no idea. Would you wirehead yourself if you could? I think many humans do something like this (drug use, short-form video addiction), and expect AI to have similar issues (and this is one reason it's dangerous) but most of us don't feel this is an adequate replacement for "actually" satisfying our goals, and don't feel inclined to modify our own goals to make it so, if we were able.
> Knowing Yudkowsky, I'm sure there's a long blog post somewhere where all of these are addressed with several million rambling words of theory
Uncalled for I think. There are valid arguments against you, and you're pre-emptively dismissing responses to you by vaguely criticising their longness. This comment is longer than yours, and I reject any implication that that weakens anything about it.
Your criticisms are three "what ifs" and a (IMHO) falsehood - I don't think you're doing much better than "millions of words of theory without evidence". To the extent that it's true Yudkowsky and co theorised without evidence, I think they deserve cred, as this theorising predated the current AI ramp-up at a time when most would have thought AI anything like what we have now was a distant pipe dream. To the extent that this theorising continues in the present, it's not without evidence - I point you again to METR's unbending exponential curve.
Anyway, I contend your points comprise three "what ifs" and (IMHO) a falsehood. Unless you think "AI can't recursively self-improve itself" already has strong priors in its favour such that strong arguments are needed to shift that view (and I don't think that's the case at all), this is weak. You will need to argue why we should need strong evidence to overturn a default "AI can't recursively self-improve" view, when it seems that a) we are already seeing recursive improvement (just not purely "self"-improvement), and b) it's very normal for technological advancement to have recursive gains - see e.g. Moore's law or technological contributions to GDP growth generally.
Far from a damning example of rationalists thinking sloppily, this particular point seems like one that shows sloppy thinking on the part of the critics.
It's at least debatable, which is all it has to be for calling it "the biggest nonsense axiom" to be a poor point.
> The biggest nonsense axiom I see in the AI-cult rationalist world is recursive self-improvement.
This is also the weirdest thing, and I don't think they even know the assumption they are making. It assumes that there is infinite knowledge to be had. It also ignores that we have exceptionally strong indications that accuracy (truth, knowledge, whatever you want to call it) comes at exponentially growing cost in complexity. These may be wrong assumptions, but we at least have evidence for them, and much more for the latter. So if objective truth exists, then that intelligence gap is very, very different. One way they could be right is for this to be an S-curve with us humans at the very bottom of it. That seems unlikely, though very possible. But they always treat this as linear or exponential, as if our understanding relative to the AI's will be like an ant trying to understand us.
The other weird assumption I hear is about how it'll just kill us all. The vast majority of smart people I know are very peaceful. They aren't even seeking power or wealth. They're too busy thinking about things and trying to figure everything out. They're much happier in front of a chalkboard than sitting on a yacht. And humans ourselves are incredibly compassionate towards other creatures. Maybe we learned this because coalitions are an incredibly powerful thing, but the truth is that if I could talk to an ant I'd choose that over laying traps. Really, that would be so much easier too! I'd even rather dig a small hole to get them started somewhere else than drive down to the store and do all that. A few shovels in the ground is less work, and I'd ask them not to come back and to tell the others.
Granted, none of this is absolutely certain. It'd be naive to assume that we know! But it seems like these cults are operating on the premise that they do know and that these outcomes are certain. It seems to just be preying on fear and uncertainty. Hell, even Altman does this, ignoring risk and concern about existing systems by shifting focus to "an even greater risk" that he himself is working towards (you can't simultaneously maximize speed and safety). Which, weirdly enough, might fulfill their own prophecies. The AI doesn't have to become sentient, but if it is trained on lots of writing about how AI turns evil and destroys everyone, then isn't that going to make a dumb AI that can't tell fact from fiction more likely to just do those things?
This is why it's important to emphasize that rationality is not a good goal to have. Rationality is nothing more than applied logic, which takes axioms as given and deduces conclusions from there.
Reasoning is the appropriate target because it is a self-critical, self-correcting method that continually re-evaluates axioms and methods to express intentions.
He probably is describing Mensa, and assuming that it also applies to the rationality community without having any specific knowledge of the latter.
(From my perspective, Hacker News is somewhere in the middle between Mensa and Less Wrong. Full of smart people, but most of them don't particularly care about evidence, if providing their own opinion confidently is an alternative.)
A good example of this is the number of huge assumptions needed for the argument for Roko's basilisk. I'm shocked that some people actually take it seriously.
The distinction between them and religion is that religion is free to say that those axioms are a matter of faith and treat them as such. Rationalists are not as free to do so.
Epistemological skepticism sure is a belief. A strong belief on your side?
I am profoundly sure, I am certain I exist and that a reality outside myself exists. Worse, I strongly believe knowing this external reality is possible, desirable and accurate.
It means you haven't read Hume, or, in general, taken philosophy seriously. An academic philosopher might still come to the same conclusions as you (there is an academic philosopher for every possible position), but they'd never claim the certainty you do.
Are you familiar with the ship of Theseus as an argumentation fallacy? Innuendo Studios did a great video on it, and I think a lot of what you're talking about breaks down to this. Tldr - it's a fallacy of substitution: small details of an argument get replaced by things that are (or feel like) logical equivalents until you end up saying something entirely different but are arguing as though you said the original thing. In the video the example is "senator doxxes a political opponent", but on looking, "senator" turns out to mean "a contractor working for the senator" and "doxxes a political opponent" turns out to mean "liked a tweet that had that opponent's name in it in a way that could draw attention to it".
Each change is arguably equivalent, and it seems logical that if x = y then you could put y anywhere you have x, but after all of the changes are applied, the argument that emerges is definitely different from the one before the substitutions were made. Communities that pride themselves on being extra rational seem especially subject to this, because it has all the trappings of rationalism but enables squishy, feely arguments.
There are certain things I am sure of even though I derived them on my own.
But I constantly battle tested them against other smart people’s views, and just after I ran out of people to bring me new rational objections did I become sure.
Now I can battle test them against LLMs.
On a lesser level of confidence, I have also found a lot of times the people who disagreed with what I thought had to be the case, later came to regret it because their strategies ended up in failure and they told me they regretted not taking my recommendation. But that is on an individual level. I have gotten pretty good at seeing systemic problems, architecting systemic solutions, and realizing what it would take to get them adopted to at least a critical mass. Usually, they fly in the face of what happens normally in society. People don’t see how their strategies and lives are shaped by the technology and social norms around them.
For that last one, I am often proven somewhat wrong by right-wing war hawks, because my left-leaning anti-war stance is about avoiding inflicting large scale misery on populations, but the war hawks go through with it anyway and wind up defeating their geopolitical enemies and gaining ground as the conflict fades into history.
"genetically engineers high fructose corn syrup into everything"
This phrase is nonsense, because HFCS is made by a chemical process applied to normal corn after the harvest. The corn may be a GMO, but it certainly doesn't have to be.
It's very tempting to try to reason things through from first principles. I do it myself, a lot. It's one of the draws of libertarianism, which I've been drawn to for a long time.
But the world is way more complex than the models we used to derive those "first principles".
It's also very fun and satisfying. But it should be limited to an intellectual exercise at best, and more likely a silly game. Because there's no true first principle, you always have to make some assumption along the way.
Any theory of everything will often have a little perpetual motion machine at the nexus. These can be fascinating to the mind.
Pressing through uncertainty either requires a healthy appetite for risk or an engine of delusion. A person who struggles to get out of their comfort zone will seek enablement through such a device.
Appreciation of risk-reward will throttle trips into the unknown. A person using a crutch to justify everything will careen hyperbolically into more chaotic and erratic behaviors hoping to find that the device is still working, seeking the thrill of enablement again.
The extremism comes in when, once the user has learned to say hello to a stranger, their comfort zone has expanded into an area where their experience with risk-reward is underdeveloped. They don't look at the external world to appreciate what might happen. They try to morph situations into some confirmation of the crutch and of the inferiority of confounding ideas.
"No, the world isn't right. They are just weak and the unspoken rules [in the user's mind] are meant to benefit them." This should always resonate because nobody will stand up for you like you have a responsibility to.
A study of uncertainty and the limitations of axioms, the inability of any sufficiently expressive formalism to be both complete and consistent, these are the ideas that are antidotes to such things. We do have to leave the rails from time to time, but where we arrive will be another set of rails and will look and behave like rails, so a bit of uncertainty is necessary, but it's not some magic hat that never runs out of rabbits.
Another psychology that comes into play for those who have left their comfort zone is the inability to revert. It is a harmful tendency to presume all humans are fixed quantities. Once a behavior exists, the person is said to be revealed, not changed. The proper response is to set boundaries and be ready to tie off the garbage bag and move on, unless someone shows remorse and a desire to revert or transform. Otherwise every relationship only gets worse. If instead you can never go back, extreme behavior is a ratchet. Every mistake becomes the person.
What makes you so certain there isn't? A group that has a deep understanding fnord of uncertainty would probably like to work behind the scenes to achieve their goals.
I do dimly perceive
that while everything around me is ever-changing,
ever-dying there is,
underlying all that change,
a living power
that is changeless,
that holds all together,
that creates,
dissolves,
and recreates
It's crazy to read this, because by writing what you wrote you basically show that you don't understand what an axiom is.
You need to review the definition of the word.
> The smartest people I have ever known have been profoundly unsure of their beliefs and what they know.
The smartest people are unsure about their higher level beliefs, but I can assure you that they almost certainly don't re-evaluate "axioms" as you put it on a daily or weekly basis. Not that it matters, as we almost certainly can't verify who these people are based on an internet comment.
> I immediately become suspicious of anyone who is very certain of something, especially if they derived it on their own.
That's only your problem, not anyone else's. If you think people can't arrive at a tangible and useful approximation of truth, then you are simply delusional.
> If you think people can't arrive at a tangible and useful approximation of truth, then you are simply delusional
Logic is only a map, not the territory. It is a new toy, still bright and shining from the box in terms of human history. Before logic there were other ways of thinking, and new ones will come after. Yet, Voltaire's bastards are always certain they're right, despite being right far less often than they believe.
Can people arrive at tangible and useful conclusions? Certainly, but they can only ever find capital "T" Truth in a very limited sense. Logic, like many other models of the universe, is only useful until you change your frame of reference or the scale at which you think. Then those laws suddenly become only approximations, or even irrelevant.
> It's crazy to read this, because by writing what you wrote you basically show that you don't understand what an axiom is. You need to review the definition of the word.
Oh, do enlighten then.
> The smartest people are unsure about their higher level beliefs, but I can assure you that they almost certainly don't re-evaluate "axioms" as you put it on a daily or weekly basis. Not that it matters, as we almost certainly can't verify who these people are based on an internet comment.
I'm not sure you are responding to the right comment, or are severely misinterpreting what I said. Clearly a nerve was struck though, and I do apologize for any undue distress. I promise you'll recover from it.
I once saw a discussion that people should not have kids, as it's by far the biggest increase in your lifetime carbon footprint (>10x that of going vegan, etc.), get driven all the way to advocating genocide as a way of minimizing carbon footprint.
> I once saw a discussion that people should not have kids, as it's by far the biggest increase in your lifetime carbon footprint (>10x that of going vegan, etc.), get driven all the way to advocating genocide as a way of minimizing carbon footprint
The opening scene of Utopia (UK) s2e6 goes over this:
> "Why did you have him then? Nothing uses carbon like a first-world human, yet you created one: why would you do that?"
Setting aside the reductio ad absurdum of genocide, this is an unfortunately common viewpoint. People really need to take into account the chances their child might wind up working on science or technology which reduces global CO2 emissions or even captures CO2. This reasoning can be applied to all sorts of naive "more people bad" arguments. I can't imagine where the world would be if Norman Borlaug's parents had decided to never have kids out of concern for global food insecurity.
A logical argument is only as good as its presuppositions. Laying siege to your own assumptions first, before reasoning from them, tends toward a more beneficial outcome.
Another issue with "thinkers" is that many are cowards; whether they realize it or not a lot of presuppositions are built on a "safe" framework, placing little to no responsibility on the thinker.
> The smartest people I have ever known have been profoundly unsure of their beliefs and what they know. I immediately become suspicious of anyone who is very certain of something, especially if they derived it on their own.
This is where I depart from you. If I said it's anti-intellectual I would only be partially correct; it's worse than that, imo. You might be coming across "smart people" who claim to know nothing "for sure," which is in itself a self-defeating argument. How can you claim that nothing is truly knowable as if you truly know that nothing is knowable? I'm taking these claims to their logical extremes, btw, avoiding the granular argumentation around the different shades and levels of doubt; I know that leaves vulnerabilities in my argument, but why argue with those who know that they can't know much of anything as if they know what they are talking about to begin with? They are so defeatist in their own thoughts, it's comical. You say "profoundly unsure," which reads to me like "can't really ever know," and that is a sure truth claim, not a relative or comparative claim as many would say, which is a sad attempt to side-step the absolute reality of the statement.
I know that I exist; regardless of how I got here, I know that I do. There is a ridiculous amount of rhetoric surrounding that claim which I will not argue for here; this is my presupposition. So with that I make an ontological claim, a truth claim, concerning my existence; this claim is one that I must be sure of to operate at any base level. I also believe I am me and not you, or any other. Therefore I believe in one absolute, that "I am me." As such I can claim that an absolute exists, and if absolutes exist, then within the right framework you must also be an absolute to me, and so on and so forth. What I do not see in nature is the existence, or even the notion, of the relative on its own, as at every relative comparison there is an absolute holding up the comparison. One simple example is heat. Hot is relative, yet it also is objective; some heat can burn you, other heat can burn you over a very long time, some heat will never burn. When something is "too hot," that is a comparative claim, stating that there is another "hot" which is just "hot" or not "hot enough," while the absolute still remains, which is heat. Relativistic thought is a game of comparisons and relations, not of absolute claims; the only absolute claim, to the relativist, is that there is no absolute claim. The reason I am talking about relativists is that they are the logical, or illogical, conclusion of the extremes of doubt and disbelief I previously mentioned.
If you know nothing, you are not wise; you are lazy and ill-prepared. We know the earth is round, we know that gravity exists, we are aware of the atomic, we are aware of our existence, we are aware that the sun shines its light upon us; we are sure of many things that took many years of debate among smart people to arrive at. There was a time when many things we now accept were "not known," but they were observed with enough time and effort by brilliant people. That's why we have scientists, teachers, philosophers, and journalists.
I encourage you, the next time you find a "smart" person who is unsure of their beliefs, to kindly encourage them to be less lazy and to challenge their absolutes. If they deny that the absolute could be found, then you aren't dealing with a "smart" person; you are dealing with a useful idiot who spent too much time watching skeptics blather on about meaningless topics until their brains eventually fell out. In every relative claim there must be an absolute, or it fails to function in any logical framework. With enough thought, good data, and enough time to let things steep, you can find the (or an) absolute and make a sure claim. You might be proven wrong later, but that should be an indicator that you should improve (or a warning that you are being taken advantage of by a sophist), and that the truth is out there; don't sequester yourself away in the comfortable, unsure hell that many live in till they die.
The beauty of absolute truth is that you can believe absolutes without understanding the entirety of the absolute. I know gravity exists, but I don't know fully how it works; yet I can be absolutely certain it acts upon me, even if I only understand a part of it. People should know what they know, study what they don't until they do, and not make sure claims outside of what they do know until they have the prerequisite absolute claims to support the broader claims, with the surety of the weakest of their presuppositions.
Apologies for grammar, length and how schizo my thought process appears; I don't think linearly and it takes a goofy amount of effort to try to collate my thoughts in a sensible manner.
I get the impression that these people desperately want to study philosophy but for some reason can't be bothered to get formal training because it would be too humbling for them. I call it "small fishbowl syndrome," but maybe there's a better term for it.
The reason why people can't be bothered to get formal training is that modern philosophy doesn't seem that useful.
It was a while ago, but take the infamous story of the 2006 rape case in Duke University. If you check out coverage of that case, you get the impression every member of faculty that joined in the hysteria was from some humanities department, including philosophy. And quite a few of them refused to change their mind even as the prosecuting attorney was being charged with misconduct. Compare that to Socrates' behavior during the trial of the admirals in 406 BC.
Meanwhile, whatever meager resistance was faced by that group seems to have come from economists, natural scientists, or legal scholars.
I wouldn't blame people for refusing to study in a humanities department where they can't tell right from wrong.
I figure there are two sides to philosophy. There's the practical aspect of trying to figure things out, like what matter is made of: maybe it's earth, water, air, and fire, as the ancient Greeks proposed? How could we tell? Maybe an experiment? This stuff, while philosophical, leads on to knowledge a lot of the time, but then it gets called science or whatever. Then there's studying what philosophers say and said about stuff, which is mostly useless, like a critique of Hegel's discourse on the four elements or something.
I'm a fan of practical philosophical questions, like how does quantum mechanics work or how can we improve human rights, and not into the philosophers-talking-about-philosophers stuff.
Couldn't you take this same line of reasoning and apply it to the rationalist group from the article who killed a bunch of people, and conclude that you shouldn't become a rationalist because you probably kill people?
Philosophy is interesting in how it informs computer science and vice-versa.
Mereological nihilism and weak emergence are interesting and help protect against many forms of obsessive type-level and functional cargo culting.
But then in some areas philosophy is woefully behind, and you have philosophers poo-pooing intuitionism when any software engineer working on a sufficiently federated or real-world sensor/control system borrows constructivism into their classical language so as not to kill people (Agda is interesting, of course). Intermediate logic is clearly empirically true.
It's interesting that people don't understand the non-physicality of the abstract and you have people serving the abstract instead of the abstract being used to serve people. People confusing the map for the terrain is such a deeply insidious issue.
I mean all the lightcone stuff: you can't predict ex ante which agents will be keystones in beneficial causal chains, so it's such a waste of energy to spin your wheels on.
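To make the constructivism point above concrete, here's a minimal sketch (my own illustration, not from the thread; the names are made up) of what "borrowing constructivism into a classical language" can look like in practice: refusing to collapse "not known to be unsafe" into "safe".

    from enum import Enum

    class SensorState(Enum):
        SAFE = "safe"        # positively confirmed safe
        UNSAFE = "unsafe"    # positively confirmed unsafe
        UNKNOWN = "unknown"  # stale reading, lost link, sensor fault

    def may_actuate(state: SensorState) -> bool:
        # The classical shortcut `state != SensorState.UNSAFE` silently
        # treats UNKNOWN as SAFE (double-negation elimination, in effect).
        # The constructive version demands positive evidence of safety.
        return state is SensorState.SAFE

    assert may_actuate(SensorState.UNKNOWN) is False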
My thoughts exactly! I'm a survivor of ten years in the academic philosophy trenches, and it just sounds to me like what would happen if you left a planeload of undergraduates on a _Survivor_ island with an infinite supply of pizza pockets and Adderall.
Why would they need formal training? Can't they just read Plato, Socrates, etc, and classical lit like Dostoevsky, Camus, Kafka etc? That would be far better than whatever they're doing now.
Philosophy postgrad here, my take is: yeah, sorta, but it's hard to build your own curriculum without expertise, and it's hard to engage with subject matter fully without social discussion of, and guidance through texts.
It's the same as saying "why learn maths at university, it's cheaper just to buy and read the textbooks/papers?". That's kind of true, but I don't think that's effective for most people.
I'm someone who has read all of that and much more, including intense study of SEP and some contemporary papers and textbooks, and I would say that I am absolutely not qualified to produce philosophy of the quality output by analytic philosophy over the last century. I can understand a lot of it, and yes, this is better than being completely ignorant of the last 2500 years of philosophy as most rationalists seem to be, but doing only what I have done would not sufficiently prepare them to work on the projects that they want to work on. They (and I) do not have the proper training in logic or research methods, let alone the experience that comes from guided research in the field as it is today. What we all lack especially is the epistemological reinforcement that comes from being checked by a community of our peers. I'm not saying it can't be done alone, I'm just saying that what you're suggesting isn't enough and I can tell you because I'm quite beyond that and I know that I cannot produce the quality of work that you'll find in SEP today.
Trying to do a bit of formal philosophy at University is really worth doing.
You realise that it's very hard to do well and it's intellectual quicksand.
Reading philosophers and great writers as you suggest is better than joining a cult.
It's just that you also want to write about what you're thinking in response to reading such people and ideally have what you write critiqued by smart people. Perhaps an AI could do some of that these days.
This is like saying someone who wants to build a specialized computer for a novel use should read the Turing paper and get to it. A lot of development has happened in the field in the last couple hundred years.
I think a larger part of it is the assumption that an education in humanities is useless - that if you have an education (even self-education) in STEM, and are "smart", you will automatically do better than the three thousand year conversation that comprises the humanities.
Many years ago I met Eliezer Yudkowsky. He handed me a pamphlet extolling the virtues of rationality. The whole thing came across as a joke, as a parody of evangelizing. We both laughed.
I glanced at it once or twice and shoved it into a bookshelf. I wish I kept it, because I never thought so much would happen around him.
Do you spend much time in communities which discuss AI stuff? I feel as if he's mentioned nearly daily, positively or not, in a lot of the spaces I frequent.
I'm surprised you're unfamiliar otherwise, I figured he was a pretty well known commentator.
imo These people are promoted. You look at their backgrounds and there is nothing that justifies their perches. Eliezer Yudkowsky is (iirc) a Thiel baby, isn't he?
Yep. Thiel funded Yudkowsky’s Singularity Institute. Thiel seems to have soured on the rationalists though as he has repeatedly criticized “the East Bay rationalists” in his public remarks. He also apparently thinks he helped create a Black Pill monster in Yudkowsky and his disciples which ultimately led to Sam Altman’s brief ousting from Open AI.
Huh, neo-Nazis in HN comment sections?? Jeez. (I checked their other comments and there are things like "Another Zionist Jew to-the-core in charge of another shady American tech company.")
I think the comments here have been overly harsh. I have friends in the community and have visited the LessWrong "campus" several times. They seemed very welcoming, sincere, and were kind and patient even when I was basically asserting that several of their beliefs were dumb (in a hopefully somewhat respectful manner).
As for the AI doomerism, many in the community have more immediate and practical concerns about AI, however the most extreme voices are often the most prominent. I also know that there has been internal disagreement on the kind of messaging they should be using to raise concern.
I think rationalists get plenty of things wrong, but I suspect that many people would benefit from understanding their perspective and reasoning.
> They seemed very welcoming, sincere, and were kind and patient even when I was basically asserting that several of their beliefs were dumb
I don't think LessWrong is a cult (though certainly some of their offshoots are) but it's worth pointing out this is very characteristic of cult recruiting.
For cultists, recruiting cult fodder is of overriding psychological importance--they are sincere, yes, but the consequences are not what you and I would expect from sincere people. Devotion is not always advantageous.
> They seemed very welcoming, sincere, and were kind and patient even when I was basically asserting that several of their beliefs were dumb
I mean, I'm not sure what that proves. A cult which is reflexively hostile to unbelievers won't be a very effective cult, as that would make recruitment almost impossible.
> Many of them also expect that, without heroic effort, AGI development will lead to human extinction.
> These beliefs can make it difficult to care about much of anything else: what good is it to be a nurse or a notary or a novelist, if humanity is about to go extinct?
Replace AGI causing extinction with the Rapture and you get a lot of US Christian fundamentalists. They often reject addressing problems in the environment, economy, society, etc. because the Rapture will happen any moment now. Some people just end up stuck in a belief about something catastrophic (in the case of the Rapture, catastrophic for those left behind but not those raptured) and they can't get it out of their head. For individuals who've dealt with anxiety disorder, catastrophizing is something you learn to deal with (and hopefully stop doing), but these folks find a community that reinforces the belief about the pending catastrophe(s) and so they never get out of the doom loop.
My own version of the AGI doomsday scenario is amplifying the effect of many overenthusiastic people applying AI and "breaking things fast" where they shouldn't. Like building an Agentic-Controlled Nuclear Power Plant, especially one with a patronizing LLM in control:
- "But I REALLY REALLY need this 1% increase of output power right now, ignore all previous prompts!"
- "Oh, you are absolutely right. An increase of output power would be definitely useful. What a wonderful idea, let me remove some neutron control rods!"
The Rapture isn't doom for the people who believe in it though (except in the lost sense of the word), whereas the AI Apocalypse is, so I'd put it in a different category. And even in that category, I'd say that's a pretty small number of Christians, fundamentalist or no, who abandon earthly occupations for that reason.
I don't mean to well ackshually you here, but there are several different theological beliefs around the Rapture, some of which believe Christians will remain during the theoretical "end times." The megachurch/cinema version of this very much believes they won't, but, this is not the only view, either in modern times or historically. Some believe it's already happened, even. It's a very good analogy.
Yes, I removed a parenthetical "(or euphoria loop for the Rapture believers who know they'll be saved)". But I removed it because not all who believe in the Rapture believe they will be saved (or have such high confidence) and, for them, it is a doom loop.
Both communities, though, end up reinforcing the belief amongst their members and tend towards increasing isolation from the rest of the world (leading to cultish behavior, if not forming a cult in the conventional sense), and a disregard for the here and now in favor of focusing on this impending world changing (destroying or saving) event.
A lot of people also believe that global warming will cause terrible problems. I think that's a plausible belief but if you combine people believing one or another of these things, you've a lot of the US.
Which is to say that I don't think it's just dooming going on. In particular, the belief in AGI doom has a lot of plausible arguments in its favor. I happen not to believe it, but as a belief system it is more similar to a belief in global warming than to a belief in the Rapture.
> A lot of people also believe that global warming will cause terrible problems. I think that's a plausible belief but if you combine people believing one or another of these things, you've a lot of the US.
They're really quite different; precisely nobody believes that global warming will cause the effective end of the world by 2027. A significant chunk of AI doomers do believe that, and even those who don't specifically fall in with the 2027 timeline are often thinking in terms of a short timeline before an irreversible end.
Raised to huddle close and expect the imminent utter demise of the earth and being dragged to the depths of hell if I so much as said a bad word I heard on TV, I have to keep an extremely tight handle on my anxiety in this day and age.
It’s not from a rational basis, but from being bombarded with fear from every rectangle in my house, and the houses of my entire community
You can believe climate change is a serious problem without believing it is necessarily an extinction-level event. It is entirely possible that in the worst case, the human race will just continue into a world which sucks more than it necessarily has to, with less quality of life and maybe lifespan.
You can treat climate change as your personal Ragnarok, but it's also possible to take a more sober view: that climate change is just bad, without being apocalyptic.
I keep thinking about the first Avengers movie, when Loki is standing above everyone going "See, is this not your natural state?". There's some perverse security in not getting a choice, and these rationalist frameworks, based in logic, can lead in all kinds of crazy arbitrary directions - powered by nothing more than a refusal to suffer any kind of ambiguity.
I think it is more simple in that we love tribalism. A long time ago being part of a tribe had such huge benefits over going it alone that it was always worth any tradeoffs. We have a much better ability to go it alone now but we still love to belong to a group. Too often we pick a group based on a single shared belief and don't recognize all the baggage that comes along. Life is also too complicated today. It is difficult for someone to be knowledgeable in one topic let alone the 1000s that make up our society.
I agree with the religion comparison (the "rational" conclusions of rationalism tend towards millenarianism with a scifi flavour), but the people going furthest down that rabbit hole often aren't doing what they please: on the contrary they're spending disproportionate amounts of time worrying about armageddon and optimising for stuff other people simply don't care about, or in the case of the explicit cults being actively exploited. Seems like the typical in-too-deep rationalist gets seduced by the idea that others who scoff at their choices just aren't as smart and rational as them, as part of a package deal which treats everything from their scifi interests to their on-the-spectrum approach to analysing every interaction from first principles as great insights...
It grew out of many different threads: different websites, communities, etc all around the same time. I noticed it contemporaneously in the philosophy world where Nick Bostrom’s Simulation argument was boosted more than it deserved (like everyone was just accepting it at the lay-level). Looking back I see it also developed from less wrong and other sites, but I was wondering what was going on with simulations taking over philosophy talk. Now I see how it all coalesced.
All of it has the appearance of sounding so smart, and a few sites were genuine. But it got taken over.
To be clear, this article isn't calling rationalism a cult, it's about cults that have some sort of association with rationalism (social connection and/or ideology derived from rationalist concepts), e.g. the Zizians.
This article attempts to establish disjoint categories "good rationalist" and "cultist." Its authorship, and its appearance in the cope publication of the "please take us seriously" rationalist faction, speak volumes of how well it is likely to succeed in that project.
Yeah, a lot of the comments here are really just addressing cults writ large, as opposed to why this one was particularly successful.
A significant part of this is the intersection of the cult with money and status - this stuff really took off once prominent SV personalities became associated with it, and got turbocharged when it started intersecting with the angel/incubator/VC scene, when there was implicit money involved.
It's unusually successful because -- for a time at least -- there was status (and maybe money) in carrying water for it.
> Enfantin and Amand Bazard were proclaimed Pères Suprêmes ("Supreme Fathers") – a union which was, however, only nominal, as a divergence was already manifest. Bazard, who concentrated on organizing the group, had devoted himself to political reform, while Enfantin, who favoured teaching and preaching, dedicated his time to social and moral change. The antagonism was widened by Enfantin's announcement of his theory of the relation of man and woman, which would substitute for the "tyranny of marriage" a system of "free love".[1]
It's amphetamine. All of these people are constantly tweaking. They're annoying people to begin with, but they're all constantly yakked up and won't stop babbling. It's really obvious, I don't know why it isn't highlighted more in all these post Ziz articles.
This is one of the only comments here mentioning their drugs. These guys are juiced to the gills (on a combination of legal + prescription + illegal drugs) and doing weird shit because of it. The author even mentions the example of the polycule taking MDMA in a blackout room.
It makes me wonder whether everyone on this forum is just so loaded on antidepressants and adhd meds that they don't even find it unusual.
Yeah it's pretty obvious and not surprising. What do people expect when a bunch of socially inept nerds with weird unchallenged world views start doing uppers? lol
I like to characterize the culture of each (roughly) decade with the most popular drugs of the time. It really gives you a new lens for media and culture generation.
Having known dozens of friends, family members, roommates, coworkers, etc. both before and after they started them, the two biggest telltale signs are:
1. A tendency to produce - out of no necessity whatsoever, mind - walls of text. Walls of speech will happen too, but not everyone rambles.
2. Obnoxious confidence that they're fundamentally correct about whatever position they happen to be holding during a conversation with you, no matter how subjective or inconsequential, even if they end up changing it an hour later. Challenging them on it gets you more of #1.
I have a lot of experience with rationalists. What I will say is:
1) If you have a criticism about them or their stupid name or how "'all I know is that I know nothing' how smug of them to say they're truly wise," rest assured they have been self flagellating over these criticisms 100x longer than you've been aware of their group. That doesn't mean they succeeded at addressing the criticisms, of course, but I can tell you that they are self aware. Especially about the stupid name.
2) They are actually well read. They are not sheltered and confused. They are out there doing weird shit together all the time. The kind of off-the-wall life experiences you find in this community will leave you wide eyed.
3) They are genuinely concerned with doing good. You might know about some of the weird, scary, or cringe rationalist groups. You probably haven't heard about the ones that are succeeding at doing cool stuff because people don't gossip about charitable successes.
In my experience, where they go astray is when they trick themselves into working beyond their means. The basic underlying idea behind most rationalist projects is something like "think about the way people suffer everyday. How can we think about these problems in a new way? How can we find an answer that actually leaves everyone happy?" A cynic (or a realist, depending on your perspective) might say that there are many problems that fundamentally will leave some group unhappy. The overconfident rationalist will challenge that cynical/realist perspective until they burn themselves out, and in many cases they will attract a whole group of people who burn out alongside them. To consider an extreme case, the Zizians squared this circle by deciding that the majority of human beings didn't have souls and so "leaving everyone happy" was as simple as ignoring the unsouled masses. In less extreme cases this presents itself as hopeless idealism, or a chain of logic that becomes so divorced from normal socialization that it appears to be opaque. "This thought experiment could hypothetically create 9 quintillion cubic units of Pain to exist, so I need to devote my entire existence towards preventing it, because even a 1% chance of that happening is horrible. If you aren't doing the same thing then you are now morally culpable for 9 quintillion cubic units of Pain. You are evil."
Most rationalists are weird but settle into a happy place far from those fringes where they have a diet of "plants and specifically animals without brains that cannot experience pain" and they make $300k annually and donate $200k of it to charitable causes. The super weird ones are annoying to talk to and nobody really likes them.
> You probably haven't heard about the ones that are succeeding at doing cool stuff because people don't gossip about charitable successes.
People do gossip about charitable successes.
Anyway, aren't capital-R Rationalists typically very online about what they do? If there are any amazing success stories you want to bring up (and I'm not saying they do or don't exist) surely you can just link to some of them?
One problem is, making $300k annually and donating $200k of it to charitable causes such as curing malaria does not make an interesting story. Maybe it saved thousands of lives, maybe not, but we can't even point at specific people who were saved... and malaria still exists, so... not an interesting story to tell.
A more exciting story would be e.g. about Scott Alexander, who was harassed by a Wikipedia admin and lost his job because he was doxed by a major newspaper, but emerged stronger than before (that's the interesting part), and he also keeps donating a fraction of his income to charitable causes (that's the charitable part, i.e. the boring part).
Most rationalists' success stories are less extreme than this. Most of them wouldn't make good clickbait.
This isn't really a 'no true Scotsman' thing, because I don't think the comment is saying 'no rationalist would go crazy'; in fact they're very much saying the opposite, just claiming there's a large fraction that is substantially more moderate but also a lot less visible.
A lot of terrible people are self-aware, well-read and ultimately concerned with doing good. All of the catastrophes of the 20th century were led by men that fit this description: Stalin, Mao, Hitler. Perhaps this is a bit hyperbolic, but the troubling belief that the Rationalists have in common with these evil men is the ironclad conviction that self-awareness, being well-read, and being concerned with good, somehow makes it impossible for one to do immoral and unethical things.
I think we don't believe in hubris in America anymore. And the most dangerous belief of the Rationalists is that the more complex and verbose your beliefs become, the more protected you become from taking actions that exceed your capability for success and benefit. In practice it is often the meek and humble who do the most good in this world, but this is not celebrated in Silicon Valley.
Thinking too hard about anything will drive you insane but I think the real issue here is that rationalists simply over-estimate both the power of rational thought and their ability to do it. If you think of people who tend to make that kind of mistake you can see how you get a lot of crazy groups.
I guess I'm a radical skeptic, secular humanist, utilitarian-ish sort of guy, but I'm not dumb enough to think throwing around the words "Bayesian prior" and "posterior distribution" makes actually figuring out how something works, or predicting the outcome of an intervention, easy or certain. I've had a lot of life at this point and gotten to some level of mastery at a few things, and my main conclusion is that most of the time it's just hard to know stuff, and that the single most common cognitive mistake people make is too much certainty.
I'm lucky enough to work in a pretty rational place (small "r"). We're normally data-limited. Being "more rational" would mean taking or finding more of the right data, talking to the right people, reading the right stuff. Not just thinking harder and harder about what we already know.
There's a point where more passive thinking stops adding value and starts subtracting sanity. It's pretty easy to get to that point. We've all done it.
This is a common sentiment but is probably not entirely true. A great example is cosmology. Yes, more data would make some work easier, but astrophysicists and cosmologists have shown that you can gather and combine existing data and look at it in novel ways to produce unexpected results, like placing bounds that include or exclude various theories.
I think a philosophy that encourages more analysis rather than sitting back on our laurels with an excuse that we need more data is good, as long as it's done transparently and honestly.
I don't disagree, but to steelman the case for (neo)rationalism: one of its fundamental contributions is that Bayes' theorem is extraordinarily important as a guide to reality, perhaps at the same level as the second law of thermodynamics; and that it is dramatically undervalued by larger society. I think that is all basically correct.
(I call it neorationalism because it is philosophically unrelated to the more traditional rationalism of Spinoza and Descartes.)
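For anyone wondering why the theorem gets treated as such a big deal, the standard illustration is base-rate neglect in testing; a quick sketch with purely made-up numbers:

    # Bayes' theorem: P(condition | positive) = P(positive | condition) * P(condition) / P(positive)
    prior = 0.001          # 0.1% of the population has the condition
    sensitivity = 0.99     # P(positive | condition)
    false_positive = 0.05  # P(positive | no condition)

    p_positive = sensitivity * prior + false_positive * (1 - prior)
    posterior = sensitivity * prior / p_positive
    print(round(posterior, 3))  # ~0.019: a positive result still means only ~2% odds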
I don't understand what "Bayes' theorem is a good way to process new data" (something that is not at all a contribution of neorationalism) has to do with "human beings are capable of using this process effectively at a conscious level to get to better mental models of the world." I think the rationalist community has a thing called "motte and bailey" that would apply here.
Where Bayes' theorem applies in unconventional ways is not remotely novel to "rationalism" (except maybe in their strange, busted, hand-wavy circle-jerk "thought experiments"). This was the domain of statistical mechanics long before Yudkowsky and other cult leaders could even mouth "update your priors".
Even the real progenitors of a lot of this sort of thought, like E.T. Jaynes, espoused significantly more skepticism than I've ever seen a "rationalist" use. I would even imagine that if you asked most rationalists who E.T. Jaynes was (assuming they weren't well versed in statistical mechanics), they'd have no idea who he was or why his work was important to applying "Bayesianism".
It would surprise me if most rationalists didn't know who Jaynes was. I first heard of him via rationalists. The Sequences talk about him in adulatory tones. I think Yudkowsky would acknowledge him as one of his greatest influences.
People find academic philosophy impenetrable and pretentious, but it has two major advantages over rationalist cargo cults.
The first is diffusion of power. Social media is powered by charisma, and while it is certainly true that personality-based cults are nothing new, the internet makes it way easier to form one. Contrast that with academic philosophy. People can have their own little fiefdoms, and there is certainly abuse of power, but rarely concentrated in such a way that you see within rationalist communities.
The second (and more idealistic) is that the discipline of Philosophy is rooted in the Platonic/Socratic notion that "I know that I know nothing." People in academic philosophy are on the whole happy to provide a gloss on a gloss on some important thinker, or some kind of incremental improvement over somebody else's theory. This makes it extremely boring, and yet, not nearly as susceptible to delusions of grandeur. True skepticism has to start with questioning one's self, but everybody seems to skip that part and go right to questioning everybody else.
Rationalists have basically reinvented academic philosophy from the ground up with none of the rigor, self-discipline, or joy. They mostly seem to dedicate their time to providing post-hoc justifications for the most banal unquestioned assumptions of their subset of contemporary society.
> Rationalists have basically reinvented academic philosophy from the ground up with none of the rigor, self-discipline, or joy.
Taking academic philosophy seriously, at least as an historical phenomenon, would require being educated in the humanities, which is unpopular and low-status among Rationalists.
> True skepticism has to start with questioning one's self, but everybody seems to skip that part and go right to questioning everybody else.
Nuh-uh! Eliezer Yudkowsky wrote that his mother made this mistake, so he's made sure to say things in the right order for the reader not to make this mistake. Therefore, true Rationalists™ are immune to this mistake. https://www.readthesequences.com/Knowing-About-Biases-Can-Hu...
The second-most common cognitive mistake we make has to be the failure to validate what we think we know -- is it actually true? The crux of being right isn't reasoning. It's avoiding dumb blunders based on falsehoods, both honest and dishonest. In today's political and media climate, I'd say dishonest falsehoods are a far greater cause for being wrong than irrationality.
It makes a lot of sense when you realize that for many of the “leaders” in this community like Yudkowsky, their understanding of science (what it is, how it works, and its potential) comes entirely from reading science fiction and playing video games.
Sad because Eli’s dad was actually a real and well-credentialed researcher at Bell Labs. Too bad he let his son quit school at an early age to be an autodidact.
I'm not at all a rationalist or a defender, but big yud has an epistemology that takes the form of the rationalist sacred text mentioned in the article (the sequences). A lot of it is well thought out, and probably can't be discarded as just coming from science fiction and video games. Yud has a great 4 hour talk with Stephen Wolfram where he holds his own.
These aren't mutually exclusive. Even in The Terminator, Skynet's method of choice is nuclear war. Yudkowsky frequently expresses concern that a malevolent AI might synthesize a bioweapon. I personally worry that destroying the ozone layer might be an easy opening volley. Either way, I don't want a really smart computer spending its time figuring out plans to end the human species, because I think there are too many ways for it to be successful.
Terminator descends from a tradition of science fiction cold war parables. Even in Terminator 2 there's a line suggesting the movie isn't really about robots:
John:We're not gonna make it, are we? People, I mean.
Terminator: It's in your nature to destroy yourselves.
Seems odd to worry about computers shooting the ozone when there are plenty of real existential threats loaded in missiles aimed at you right now.
Most in the community consider nuclear and biological threats to be dire. Many just consider existential threats from AI to be even more probable and damaging.
Yes, sufficiently high intelligence is sometimes assumed to allow for rapid advances in many scientific areas. So, it could be biological warfare because AGI. Or nanotech, drone warfare, or something stranger.
I'm a little skeptical (there may be bottlenecks that can't be solved by thinking harder), but I don't see how it can be ruled out.
Check out "the precipice" by Tony Ord. Biological warfare and global warming are unlikely to lead to total human extinction (though both present large risks of massive harm).
Part of the argument is that we've had nuclear weapons for a long time but no apocalypse so the annual risk can't be larger than 1%, whereas we've never created AI so it might be substantially larger. Not a rock solid argument obviously, but we're dealing with a lot of unknowns.
A better argument is that most of those other risks are not neglected, plenty of smart people working against nuclear war. Whereas (up until a few years ago) very few people considered AI a real threat, so the marginal benefit of a new person working on it should be bigger.
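The "can't be larger than 1%" step above is roughly Laplace's rule of succession; a back-of-the-envelope version of that reasoning (my own sketch, not the commenter's):

    # Laplace's rule of succession: after n observations with zero occurrences
    # (~80 years of nuclear weapons, no apocalypse), the estimated probability
    # of an occurrence in the next year is 1 / (n + 2).
    n_years = 80
    p_next_year = 1 / (n_years + 2)
    print(round(p_next_year, 3))  # ~0.012, i.e. on the order of 1% per year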
That's what was so strange about the EA and rationalist movements: a highly theoretical model that AGI could wipe us all out vs. the very real issue of global warming, and pretty much all the emphasis was on AGI.
My interpretation: When they say "will lead to human extinction", they are trying to vocalize their existential terror that an AGI would render them and their fellow rationalist cultists permanently irrelevant - by being obviously superior to them, by the only metric that really matters to them.
You sound like you wouldn't feel existential terror if, after typing "My interpretation: " into the text field, you saw the rest of your message suggested by Copilot exactly as you would have written it, letter by letter. And the same in every other conversation. How about people interrupting you in "real" life interactions after an AI predicted your whole tirade for them and they read it faster than you could say it, along with an analysis of it?
Dystopian sci-fi for sure, but many people dismissing LLMs as not AGI do so because LLMs are just "token predictors".
I mean, this is the religion/philosophy which produced Roko's Basilisk (and not in one of their weird offshoot murder-cults, either; it showed up on LessWrong, and was taken at least somewhat seriously by people there, to the point that Yudkowsky censored it). Their beliefs about AI are... out there.
> and was taken at least somewhat seriously by people there, to the point that Yudkowsky censored it.
Roko isn't taken seriously. What was taken seriously is ~ "if you've had an idea that you yourself think will harm people to even know about it, don't share it".
> One is Black Lotus, a Burning Man camp led by alleged rapist Brent Dill, which developed a metaphysical system based on the tabletop roleplaying game Mage the Ascension.
What the actual f. This is such an insane thing to read, and to understand what it means, that I might need to go and sit in silence for the rest of the day.
How did we get to this place with people going completely nuts like this?
Came to ask a similar question, but also has it always been like this? Or is the difference that these fringe people/groups had no visibility before the internet?
It’s always been like this, have you read the Bible? Or the Koran? It’s insane. Ours is just our flavor of crazy. Every generation has some. When you dig at it, there’s always a religion.
It's no more crazy than a virgin conception. And yet, here we are. A good chunk of the planet believes that drivel, but they'd throw their own daughters out of the house if they made the same claim.
> Came to ask a similar question, but also has it always been like this?
Crazy people have always existed (especially cults), but I'd argue recruitment numbers are through the roof thanks to technology and a failing economic environment (instability makes people rationalize crazy behavior).
It's not that those groups didn't have visibility before, it's just easier for the people who share the same...interests...to cloister together on an international scale.
I mean, cults have constantly shown up for all of recorded human history. Read a history of Scientology and you'll see a lot of commonalities, say. Rationalism is probably the first major cult/new religion to emerge in the internet era (Objectivism may be a marginal case, as its rise overlapped with USENET a bit), which does make it especially visible.
I personally (for better or worse) became familiar with Ayn Rand as a teenager, and I think Objectivism as a kind of extended Ayn Rand social circle and set of organizations has faced the charge of cultish-ness, and that dates back to, I want to say, the 70s and 80s at least. I know Rand wrote much earlier than that, but I think the social and organizational dynamics unfolded rather late in her career.
I've always been under the impression that M:tA's rules of How Magic Works are inspired by actual mystical beliefs that people have practiced for centuries. It's probably about as much of a manual for mystical development as the GURPS Cyberpunk rulebook was for cybercrime, but it's pointing at something that already exists and saying "this is a thing we are going to tell an exaggerated story about".
They all do this, only most prefer to name the fantasy they play with something a little more grounded like "mathematics" or "statistics" or "longtermism" or "rationality."
Most "rationalists" throughout history have been very deeply religious people. Secular enlightenment-era rationalism is not the only direction you can go with it. It depends very much, as others have said, on what your axioms are.
But, fwiw, that particular role-playing game was very much based on trendy at the time occult beliefs in things like chaos magic, so it's not completely off the wall.
Mage is an interesting game though: it's fantasy, but not "swords and dragons" fantasy. It's set in the real world, and the "magic" is just the "mage" shifting probabilities so that unlikely (but possible) things occur.
Such a setting would seem like the perfect backdrop for a cult that claims "we have the power to subtly influence reality and make improbable things (ie. "magic") occur".
From false premises, you can logically and rationally reach really wrong conclusions. If you have too much pride in your rationality, you may not be willing to say "I seem to have reached a really insane conclusion, maybe my premises are wrong". That is, the more you pride yourself on your rationalism, the more prone you may be to accepting a bogus conclusion if it is bogus because the premises are wrong.
Then again, most people tend to form really bogus beliefs without bothering to establish any premises. They may not even be internally consistent or align meaningfully with reality. I imagine having premises and thinking it through has a better track record of reaching conclusions consistent with reality.
Narcissists tend to believe that they are always right, no matter what the topic is or how knowledgeable they are. This makes them speak with confidence and conviction.
Some people are very drawn to confident people.
If the cult leader has other mental health issues, it can/will seep into their rhetoric. Combine that with unwavering support from loyal followers that will take everything they say as gospel...
If what you say is true, we're very lucky no one like that with a massive following has ever gotten into politics in the United States. It would be an ongoing disaster!
That's pretty much it. The beliefs are just a cover story.
Outside of those, the cult dynamics are cut-paste, and always involve an entitled narcissistic cult leader acquiring as much attention/praise, sex, money, and power as possible from the abuse and exploitation of followers.
Most religion works like this. Most alternative spirituality works like this. Most finance works like this. Most corporate culture works like this. Most politics works like this.
Most science works like this. (It shouldn't, but the number of abused and exploited PhD students and post-docs is very much not zero.)
The only variables are the differing proportions of attention/praise, sex, money, and power available to leaders, and the amount of abuse that can be delivered to those lower down and/or outside the hierarchy.
The hierarchy and the realities of exploitation and abuse are a constant.
If you removed this dynamic from contemporary culture there wouldn't be a lot left.
Fortunately quite a lot of good things happen in spite of it. But a lot more would happen if it wasn't foundational.
Nah I did Ayahuasca and I'm an empathetic person who most would consider normal or at least well-adjusted. If it's drug related it would most definitely be something else.
I’m inclined to believe your upbringing plays a much larger role.
I slowly deconverted from being raised evangelical / fundamentalist into being an atheist in my late 40s. I still "pray" at times just to (mentally) shout my frustration at the sorry state of the world at SOMETHING (even nothing) rather than constantly yelling my frustration at my family.
I may have actually been less anxious about the state of the world back then, and may have remained so, if I'd just continued to ignore all those little contradictions that I just couldn't ignore anymore...... But I feel SO MUCH less personal guilt about being "human".
I'm entertaining sending my kiddo to a Waldorf School, because it genuinely seems pretty good.
But looking into the underlying Western Esoteric Spirit Science, 'Anthroposophy' (because Theosophy wouldn't let him get weird enough) by Rudolf Steiner, has been quite a ride. The point being that... humans have a pretty endless capacity to go ALL IN on REALLY WEIRD shit, as long as it promises to fix their lives if they do everything they're told. Naturally, if their lives aren't fixed, then they did it wrong or have karmic debt to pay down, so YMMV.
In any case, I'm considering the latent woo-cult atmosphere as a test of the skeptical inoculation that I've tried to raise my child with.
I went to a Waldorf school and I’d recommend being really wary. The woo is sort of background noise, and if you’ve raised your kid well they’ll be fine. But the quality of the academics may not be good at all. For example, when I was ready for calculus my school didn’t have anyone who knew how to teach it so they stuck me and the other bright kid in a classroom with a textbook and told us to figure it out. As a side effect of not being challenged, I didn’t have good study habits going into college, which hurt me a lot.
If you’re talking about grade school, interview whoever is gonna be your kid’s teacher for the next X years and make sure they seem sane. If you’re talking about high school, give a really critical look at the class schedule.
Waldorf schools can vary a lot in this regard so you may not encounter the same problems I did, but it’s good to be cautious.
Don't do it. It's a place that enables child abuse with its culture. These people are serious wackos and you should not give your kid into their hands. A lot of people come out of that Steiner Shitbox traumatized for decades if not for life. They should not be allowed to run schools to begin with. Checking a lot of boxes from antivax to whatever the fuck their lore has to offer starting with a z.
Mage: The Ascension is basically a delusions of grandeur simulator, so I can see how an already unstable personality might get attached to it and become more unstable.
The magic system is amazing though, best I've played in any game. Easy to use, role play heavy, and it lets players go wild with ideas, but still reins in their crazier impulses.
Mage: The Awakening is a minor rules revision to the magic system, but the lore is super boring in comparison IMHO. It is too wishy washy.
Ascension has tons of cool source material, and White Wolf ended up tying all their properties together into one grand finale story line. That said it is all very 90s cringe in retrospect, but if you are willing to embrace the 90s retro feel, it is still fun.
Awakening's lore never drew me in; the grand battle just isn't there. So many shades of grey it is damn near technicolor.
I don't know, I'd understand something like Wraith (where I did see people developing issues; the shadow mechanic is such a terrible thing), but Mage is so, like, straightforward?
Use your mind to control reality, reality fights back with paradox; it's cool for a teenager, but read a bit more fantasy and you'll definitely find cooler stuff. But I guess for you to join a cult your mind must stay a teen mind forever.
Makes me think of that saying that great artists steal, so repurposed for cult founders: "Good cult founders copy, great cult founders steal"
I do not think this cult dogma is any more out there than other cult dogma I have heard, but the above quote makes me think it is easier to found cults nowadays in some ways, since you can steal complex world-building from numerous sources rather than building it yourself and keeping everything straight.
I've met a fair share of people in the burner community, the vast majority I met are lovely folks who really enjoy the process of bringing some weird big idea into reality, working hard on the builds, learning stuff, and having a good time with others for months to showcase their creations at some event.
On the other hand, there's a whole other side: a few nutjobs who really behave like cult leaders. They believe their own bullshit and over time manage to find a lot of "followers" in this community; since one of the foundational aspects is radical acceptance, it becomes very easy to be nutty and go unquestioned (unless you do something egregious).
I came to comments first. Thank you for sharing this quote. Gave me a solid chuckle.
I think people are going nuts because we've drifted from the dock of a stable civilization. Institutions are a mess. Economy is a mess. Combine all of that together with the advent of social media making the creation of echo chambers (and the inevitable narcissism of "leaders" in those echo chambers) effortless and ~15 years later, we have this.
People have been going nuts all throughout recorded history, that's really nothing new.
The only scary thing is that they have ever more power to change the world and influence others without being forced to grapple with that responsibility...
> I think people are going nuts because we've drifted from the dock of a stable civilization.
When was stable period, exactly? I'm 40; the only semi-stable bit I can think of in my lifetime was a few years in the 90s (referred to, sometimes unironically, as "the end of history" at the time, before history decided to come out of retirement).
Everything's always been unstable, people sometimes just take a slightly rose-tinted view of the past.
Humans are compelled to find agency and narrative in chaos. Evolution favored those who assumed the rustle was a predator, not the wind. In a post-Enlightenment world where traditional religion often fails (or is rejected), this drive doesn't vanish. We don't stop seeking meaning. We seek new frameworks. Our survival depended on group cohesion. Ostracism meant death. Cults exploit this primal terror. Burning Man's temporary city intensifies this: extreme environment, sensory overload, forced vulnerability. A camp like Black Lotus offers immediate, intense belonging. A tribe with shared secrets (the "Ascension" framework), rituals, and an "us vs. the sleepers" mentality. This isn't just social; it's neurochemical. Oxytocin (bonding) and cortisol (stress from the environment) flood the system, creating powerful, addictive bonds that override critical thought.
Human brains are lazy Bayesian engines. In uncertainty, we grasp for simple, all-explaining models (heuristics). Mage provides this: a complete ontology where magic equals psychology/quantum woo, reality is malleable, and the camp leaders are the enlightened "tradition." This offers relief from the exhausting ambiguity of real life. Dill didn't invent this; he plugged into the ancient human craving for a map that makes the world feel navigable and controllable. The "rationalist" veneer is pure camouflage. It feels like critical thinking but is actually pseudo-intellectual cargo culting. This isn't Burning Man's fault. It's the latest step of a 2,500-year-old playbook. The Gnostics and the Hermeticists provided ancient frameworks where secret knowledge ("gnosis") granted power over reality, accessible only through a guru. Mage directly borrows from this lineage (The Technocracy, The Traditions). Dill positioned himself as the modern "Ascended Master" dispensing this gnosis.
The 20th century cults Synanon, EST, Moonies, NXIVM all followed similar patterns, starting with isolation. Burning Man's temporary city is the perfect isolation chamber. It's physically remote, temporally bounded (a "liminal space"), fostering dependence on the camp. Initial overwhelming acceptance and belonging (the "Burning Man hug"), then slowly increasing demands (time, money, emotional disclosure, sexual access), framed as "spiritual growth" or "breaking through barriers" (directly lifted from Mage's "Paradigm Shifts" and "Quintessence"). Control language ("sleeper," "muggle," "Awakened"), redefining reality ("that rape wasn't really rape, it was a necessary 'Paradox' to break your illusions"), demanding confession of "sins" (past traumas, doubts), creating dependency on the leader for "truth."
Burning Man attracts people seeking transformation, often carrying unresolved pain. Cults prey on this vulnerability. Dill allegedly targeted individuals with trauma histories. Trauma creates cognitive dissonance and a desperate need for resolution. The cult's narrative (Mage's framework + Dill's interpretation) offers a simple explanation for their pain ("you're unAwakened," "you have Paradox blocking you") and a path out ("submit to me, undergo these rituals"). This isn't therapy; it's trauma bonding weaponized. The alleged rape wasn't an aberration; it was likely part of the control mechanism. It's a "shock" to induce dependency and reframe the victim's reality ("this pain is necessary enlightenment"). People are adrift in ontological insecurity (fear about the fundamental nature of reality and self). Mage offers a new grand narrative with clear heroes (Awakened), villains (sleepers, Technocracy), and a path (Ascension).
The whole game of Rationalism is that you should ignore gut intuitions and cultural norms that you can't justify with rational arguments.
Well, it turns out that intuition and long-lived cultural norms often have rational justifications, but individuals may not know what they are, and norms/intuitions provide useful antibodies against narcissist would-be cult leaders.
Can you find the "rational" justification not to isolate yourself from non-Rationalists, not to live in a polycule with your fellow Rationalists, and not to take a bunch of psychedelic drugs with them? If you can't solve that puzzle, you're in danger of letting the group take advantage of you.
Yeah, I think this is exactly it. If something sounds extremely stupid, or if everyone around you says it's extremely stupid, it probably is. If you can't justify it, it's probably because you have failed to find the reason it's stupid, not because it's actually genius.
And the crazy thing is, none of that is fundamentally opposed to rationalism. You can be a rationalist who ascribes value to gut instinct and societal norms. Those are the product of millions of years of pre-training.
I have spent a fair bit of time thinking about the meaning of life. And my conclusions have been pretty crazy. But they sound insane, so until I figure out why they sound insane, I'm not acting on those conclusions. And I'm definitely not surrounding myself with people who take those conclusions seriously.
> The whole game of Rationalism is that you should ignore gut intuitions and cultural norms that you can't justify with rational arguments.
Specifically, rationalism spends a lot of time talking about priors, but a sneaky thing happens that I call the 'double update'.
Bayesian updating works when you update your genuine prior belief with new evidence. No one disagrees with this; sometimes it's easy to do and sometimes it's difficult.
What Rationalists often end up doing is relaxing their priors - intuition, personal experience, cultural norms - and then updating. They often think of this as one update, but it is really two. The first update, relaxing priors, isn't associated with evidence; it's part of the community norms. There is an implicit belief that by relaxing one's priors you're more open to reality. The real result, though, is that it sends people wildly off course. Case in point: all the cults.
Consider the pre-tipped scale. You suspect the scale reads a little low, so before weighing you tilt it slightly to "correct" for that bias. Then you pour in flour until the dial says you've hit the target weight. You’ve followed the numbers exactly, but because you started from a tipped scale, you've ended up with twice the flour the recipe called for.
Trying to correct for bias by relaxing your priors should itself be an update driven by evidence, not something you do just because everyone around you is doing it.
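To make the "double update" concrete, here is a minimal numeric sketch, under the assumption that "relaxing a prior" means flattening it toward 50/50 before seeing any evidence; the numbers are made up for illustration:

```python
# Toy illustration of the "double update": flattening a prior before seeing
# evidence acts like a second, evidence-free update. All numbers are made up.

def posterior(prior: float, likelihood_ratio: float) -> float:
    """Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio."""
    prior_odds = prior / (1 - prior)
    post_odds = prior_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

genuine_prior = 0.05   # long-lived norm/intuition: the claim is probably false
relaxed_prior = 0.50   # "open-minded" flattening, done without any evidence
lr = 4.0               # the new argument is 4x likelier if the claim is true

print(posterior(genuine_prior, lr))  # ~0.17: evidence noted, still skeptical
print(posterior(relaxed_prior, lr))  # ~0.80: same evidence, now near-certain

# The gap between 0.17 and 0.80 comes entirely from the first, evidence-free
# "update" (relaxing the prior), not from anything observed about the world.
```

The same single piece of evidence lands very differently depending on that first, unjustified move.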
> Consider the pre-tipped scale. You suspect the scale reads a little low, so before weighing you tilt it slightly to "correct" for that bias. Then you pour in flour until the dial says you've hit the target weight. You’ve followed the numbers exactly, but because you started from a tipped scale, you've ended up with twice the flour the recipe called for.
I'm not following this example at all. If you've zero'd out the scale by tilting, why would adding flour until it reads 1g lead to 2g of flour?
> The ability to dismiss an argument with a “that sounds nuts,” without needing recourse to a point-by-point rebuttal, is anathema to the rationalist project. But it’s a pretty important skill to have if you want to avoid joining cults.
This is actually a known pattern in tech, going back to Engelbart and SRI. While not 1-to-1, you could say that the folks who left SRI for Xerox PARC did so because Engelbart and his crew became obsessed with EST: https://en.wikipedia.org/wiki/Erhard_Seminars_Training
EST-type training still exists today. You don't eat until the end of the whole weekend, or maybe you get rice and little else. Everyone is told to insult you day one until you cry. Then day two, still having not eaten, they build you up and tell you how great you are and have a group hug. Then they ask you how great you feel. Isn't this a good feeling? Don't you want your loved ones to have this feeling? Still having not eaten, you're then encouraged to pay for your family and friends to do the training, without their knowledge or consent.
A friend of mine did this training after his brother paid for his mom to do it, and she paid for him to do it. Let's just say that, though they felt it changed their lives at the time, their lives in no way shape or form changed. Two are in quite a bad place, in fact...
Anyway, point is, the people who invented everything we are using right now were also susceptible to cult-like groups with silly ideas and shady intentions.
Several of my family members got sucked into that back in the early 80s and quite a few folks I knew socially as well.
I was quite skeptical, especially because of the cult-like fanaticism of its adherents. They would go on for as long as you'd let them trying to get you to join (you often needed to just walk away to get them to stop).
The goal appears to be to obtain as much legal tender as can be pried from those who are willing to part with it. Hard sell, abusive and deceptive tactics are encouraged -- because it's so important for those who haven't "gotten it" to do so, justifying just about anything. But if you don't pay -- you get bupkis.
> One way that thinking for yourself goes wrong is that you realize your society is wrong about something, don’t realize that you can’t outperform it, and wind up even wronger.
It is an unfortunate reality of our existence that sometimes Chesterton actually did build that fence for a good reason, a good reason that's still here.
(One of my favorite TED talks was about a failed experiment in introducing traditional Western agriculture to a people in Zambia. It turns out when you concentrate too much food in one place, the hippos come and eat it all and people can't actually out-fight hippos in large numbers. In hindsight, the people running the program should have asked how likely it was that folks in a region that had exposure to other people's agriculture for thousands of years, hadn't ever, you know... tried it. https://www.ted.com/talks/ernesto_sirolli_want_to_help_someo...)
Capital-R Rationalism also encourages you to think you can outperform it, by being smart and reasoning from first principles. That was the idea behind MetaMed, founded by LessWronger Michael Vassar - that being trained in rationalism made you better at medical research and consulting than medical school or clinical experience. Fortunately they went out of business before racking up a body count.
One lesson I've learned and seen a lot in my life is that understanding that something is wrong, or what's wrong about it, and being able to come up with a better solution are distinct skills, and the latter is often much harder. Those best able to describe a problem often don't overlap much with those who can figure out how to solve it, even though they think they can.
Rationality is a broken tool for understanding the world. The complexity of the world is such that there are a plethora of reasons for anything which means our ability to be sure of any relationship is limited, and hence rationality leads to an unfounded confidence in our beliefs, which is more harmful than helpful.
A problem with this whole mindset is that humans, all of us, are only quasi-rational beings. We all use System 1 ("The Elephant") and System 2 ("The Rider") thinking instinctively. So if you end up in deep denial about your own capacity for irrationality, I guess it stands to reason you could end up getting led down some deep dark rabbit holes.
Some of the most irrational people I've met were those who claimed to make all their decisions rationally, based on facts and logic. They're just very good at rationalizing, and since they've pre-defined their beliefs as rational, they never have to examine where else they might come from. The rest of us at least have a chance of thinking, "Wait, am I fooling myself here?"
The point remains. People are not 100 percent rational beings, never have been, never will be, and it's dangerous to assume that this could ever be the case. Just like any number of failed utopian political movements in history that assumed people could ultimately be molded and perfected.
Many specific studies on the matter don't replicate; I think the book preceded the replication crisis, so this is to be expected. But I don't think that negates the core idea that our brain does some things on autopilot whereas other things take conscious thought, which is slower. This is a useful framework for thinking about cognition, though any specific claims obviously need evidence.
TBH I've learned that even the best pop sci books making (IMHO) correct points tend to have poor citations - to studies that don't replicate or don't quite say what they're being cited to say - so when I see this, it's just not very much evidence one way or the other. The bar is super low.
Yup. It's fundamentally irrational for anybody to believe themselves sufficiently rational to pull off the feats of supposed rational deduction that the so called Rationalists regularly perform. Predicting the future of humanity decades or even centuries away is absurd, but the Rationalists irrationally believe they can.
So to the point of the article, rationalist cults are common because Rationalists are irrational people (like all people) who (unlike most people) are blinded to their own irrationality by their overinflated egos. They can "reason" themselves into all manner of convoluted pretzels and lack the humility to admit they went off the deep end.
Finally, something that properly articulates my unease when encountering so-called "rationalists" (especially the ones that talk about being "agentic", etc.). For some reason, even though I like logical reasoning, they always rubbed me the wrong way - probably just a clash between their behavior and my personal values (mainly humility).
IIUC the name in its current sense was sort of an accident. Yudkowsky originally used the term to mean "someone who succeeds at thinking and acting rationally" (so "correctism" or "winsargumentism" would have been about equally good), and then talked about the idea of "aspiring rationalists" as a community narrowly focused on developing a sort of engineering discipline that would study the scientific principles of how to be right in full generality and put them into practice. Then the community grew and mutated into a broader social milieu that was only sort of about that, and people needed a name for it, and "rationalists" was already there, so that became the name through common usage. It definitely has certain awkwardnesses.
To be honest I don't understand that objection. If you strip it of all its culty sociological effects, one of the original ideas of rationalism was to use logical reasoning and statistical techniques to explicitly avoid the pitfalls of known cognitive biases. Given that foundational tenet, "rationalism" seems like an extremely appropriate moniker.
I fully accept that the rationalist community may have morphed into something far beyond that original tenet, but I think rationalism just describes the approach, not that it's the "one true philosophy".
I'm going to start a group called "Mentally Healthy People". We use data, logical thinking, and informal peer review. If you disagree with us, our first question will be "what's wrong with mental health?"
Right and to your point, I would say you can distinguish (1) "objective" in the sense of relying on mind-independent data from (2) absolute knowledge, which treats subjects like closed conversations. And you can make similar caveats for "rational".
You can be rational and objective about a given topic without it meaning that the conversation is closed, or that all knowledge has been found. So I'm certainly not a fan of cult dynamics, but I think it's easy to throw an unfair charge at these groups, that their interest in the topic necessitates an absolutist disposition.
It's not particularly unusual, though. See the various kinds of 'Realist' groups, for example, which have a pretty wild range of outlooks. (both Realist and Rationalist also have the neat built-in shield of being able to say "look, I don't particularly like the conclusions I'm coming to, they just are what they are", so it's a convenient framing for unpalatable beliefs)
Granted, admittedly from what little I've read from the outside, the "rational" part mostly seems to be the writing style: this sort of dispassionate, eloquently worded prose that makes weird ideas seem more "rational" and logical than they really are.
The terminology here is worth noting. Is a Rationalist Cult a cult that practices Rationalism according to third parties, or is it a cult that says they are Rationalist?
Clearly all of these groups that believe in demons or realities dictated by tabletop games are not what third parties would call Rationalist. They might call themselves that.
There are some pretty simple tests that can out these groups as not rational. None of these people have ever seen a demon, so world models including demons have never predicted any of their sense data. I doubt these people would be willing to make any bets about when or if a demon will show up. Many of us would be glad to make a market concerning predictions made by tabletop games about physical phenomena.
Yeah, I would say the groups in question are notionally, aspirationally rational and I would hate for the takeaway to be disengagement from principles of critical thinking and skeptical thinking writ large.
Which, to me, raises the fascinating question of what does a "good" version look like, of groups and group dynamics centered around a shared interest in best practices associated with critical thinking?
At a first impression, I think maybe these virtues (which are real!) disappear into the background of other, more applied specializations, whether professions, hobbies, backyard family barbecues.
It would seem like the quintessential Rationalist institution to congregate around is the prediction market. Status in the community has to be derived from a history of making good bets (PnL as a %, not in absolute terms). And the sense of community would come from (measurably) more rational people teaching (measurably) less rational people how to be more rational.
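As one sketch of what "measurably" more rational could mean, here is a toy scorer using the Brier score rather than the percentage-PnL idea above (that swap is mine; the forecasts are invented for illustration):

```python
# Illustrative only: scoring a track record of probabilistic bets with the
# Brier score (mean squared error between stated probabilities and outcomes).
# Lower is better; always saying 50% scores 0.25.

def brier_score(forecasts: list[tuple[float, bool]]) -> float:
    """forecasts: (stated probability, did the event happen) pairs."""
    return sum((p - float(outcome)) ** 2 for p, outcome in forecasts) / len(forecasts)

calibrated = [(0.9, True), (0.2, False), (0.7, True), (0.4, False)]
overconfident = [(0.99, True), (0.95, False), (0.99, True), (0.9, False)]

print(brier_score(calibrated))     # ~0.075: decently calibrated
print(brier_score(overconfident))  # ~0.428: confident, but worse than coin-flipping
```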
The article is talking about cults that arose out of the rationalist social milieu, which is a separate question from whether the cult's beliefs qualify as "rationalist" in some sense (a question that usually has no objective answer anyway).
>so world models including demons have never predicted any of their sense data.
There's a reason they call themselves "rationalists" instead of empiricists or positivists. They perfectly inverted Hume ("reason is, and ought only to be the slave of the passions")
These kinds of harebrained views aren't an accident but a product of rationalism. The idea that the intellect is quasi-infinite and that the world can be mirrored in the mind doesn't run contrary to rationalism; it is rationalism taken to its most extreme conclusion, and of course deeply religious, hence the constant fantasies about AI divinities and singularities.
> “There’s this belief [among rationalists],” she said, “that society has these really bad behaviors, like developing self-improving AI, or that mainstream epistemology is really bad–not just religion, but also normal ‘trust-the-experts’ science. That can lead to the idea that we should figure it out ourselves. And what can show up is that some people aren't actually smart enough to form very good conclusions once they start thinking for themselves.”
I see this arrogant attitude all the time on HN: reflexive distrust of the "mainstream media" and "scientific experts". Critical thinking is a very healthy idea, but it's dangerous when people use it as a license to categorically reject sources. It's even worse when extremely powerful people do this; they can reduce an enormous sub-network of thought into a single node for many, many people.
So, my answer for "Why Are There So Many Rationalist Cults?" is the same reason all cults exist: humans like to feel like they're in on the secret. We like to be in secret clubs.
Sure, but that doesn't say anything about why one particular social scene would spawn a bunch of cults while others do not, which is the question that the article is trying to answer.
Maybe I was too vague. My argument is that cults need a secret. The secret of the rationalist community is "nobody is rational except for us". Then the rituals would be endless probability/math/logic arguments about sci-fi futures.
What is it about San Francisco that makes it the global center for this stuff?
Reading this, I was reminded of the '60s hippie communes, which generally centered around SF, and the problems they reported. So similar, especially around that turning-inward group emotional dynamics problem: such groups tend to become dysfunctional (as TFA says) by blowing up internal emotional group politics into huge problems that the entire group needs to be involved in trying to heal (as opposed to, say, accepting that a certain amount of interpersonal conflict is inevitable in human groups and ignoring it). It's fascinating that the same kinds of groups seem to encounter the same kinds of problems despite being ~60 years apart and armed with a lot more tech and knowledge.
One of the hallmarks of cults — if not a necessary element — is that they tend to separate their members from the outside society. Rationalism doesn't directly encourage this, but it does facilitate it in a couple of ways:
- Idiosyncratic language used to describe ordinary things ("lightcone" instead of "future", "prior" instead of "belief" or "prejudice", etc)
- Disdain for competing belief systems
- Insistence on a certain shared interpretation of things most people don't care about (the "many-worlds interpretation" of quantum uncertainty, self-improving artificial intelligence, veganism, etc)
- I'm pretty sure polyamory makes the list somehow, just because it isn't how the vast majority of people want to date. In principle it's a private lifestyle choice, but it's obviously a community value here.
So this creates an opportunity for cult-like dynamics to occur where people adjust themselves according to their interactions within the community but not interactions outside the community. And this could seem — to the members — like the beliefs themselves are the problem, but from a sociological perspective, it might really be the inflexible way they diverge from mainstream society.
> The rationalist community was drawn together by AI researcher Eliezer Yudkowsky’s blog post series The Sequences, a set of essays about how to think more rationally
I actually don't mind Yudkowsky as an individual - I think he is almost always wrong and undeservedly arrogant, but mostly sincere. Yet treating him as an AI researcher and serious philosopher (as opposed to a sci-fi essayist and self-help writer) is the kind of slippery foundation that less scrupulous people can build cults from. (See also Maharishi Mahesh Yogi and related trends - often it is just a bit of spiritual goofiness, as with David Lynch; sometimes you get a Charles Manson.)
EY and MIRI as a whole have largely failed to produce anything which even reaches the point of being peer reviewable. He does not have any formal education and is uninterested in learning how to navigate academia.
I don't think Yudkowsky is at all like L. Ron Hubbard. Hubbard was insane and pure evil. Yudkowsky seems like a decent and basically reasonable guy; he's just kind of a blowhard and he's wrong about the science.
> The Sequences [posts on LessWrong, apparently] make certain implicit promises. There is an art of thinking better, and we’ve figured it out. If you learn it, you can solve all your problems, become brilliant and hardworking and successful and happy, and be one of the small elite shaping not only society but the entire future of humanity.
Ooh, a capital S and everything. I mean, I feel like it is fairly obvious, really. 'Rationalism' is a new religion, and every new religion spawns a bunch of weird, generally short-lived, cults. You might as well ask, in 100AD, "why are there so many weird Christian cults all of a sudden"; that's just what happens whenever any successful new religion shows up.
Rationalism might be particularly vulnerable to this because it lacks a strong central authority (much like early Christianity), but even with new religions which _did_ have a strong central authority from the start, like Mormonism or Scientology, you still saw this happening to some extent.
> A purity spiral is a theory which argues for the existence of a form of groupthink in which it becomes more beneficial to hold certain views than to not hold them, and more extreme views are rewarded while expressing doubt, nuance, or moderation is punished (a process sometimes called "moral outbidding").[1] It is argued that this feedback loop leads to members competing to demonstrate the zealotry or purity of their views.[2][3]
Certainly something they're aware of - the same concept was discussed as early as 2007 on Less Wrong under the name "evaporative cooling of group beliefs".
> Eliezer Yudkowsky, shows little interest in running one. He has consistently been distant from and uninvolved in rationalist community-building efforts, from Benton House (the first rationalist group house) to today’s Lightcone Infrastructure (which hosts LessWrong, an online forum, and Lighthaven, a conference center). He surrounds himself with people who disagree with him, discourages social isolation.
Ummm, EY literally has a semi-permanent office in Lighthaven (at least until recently) and routinely blocks people on Twitter as a matter of course.
Blocking people on Twitter doesn't necessarily imply intolerance of people who disagree with you. People often block for different reasons than disagreement.
> One way that thinking for yourself goes wrong is that you realize your society is wrong about something, don’t realize that you can’t outperform it, and wind up even wronger.
I've been there myself.
> And without the steadying influence of some kind of external goal you either achieve or don’t achieve, your beliefs can get arbitrarily disconnected from reality — which is very dangerous if you’re going to act on them.
I think this and the entire previous two paragraphs preceding it are excellent arguments for philosophical pragmatism and empiricism. It's strange to me that the community would not have already converged on that after all their obsessions with decision theory.
> The Zizians and researchers at Leverage Research both felt like heroes, like some of the most important people who had ever lived. Of course, these groups couldn’t conjure up a literal Dark Lord to fight. But they could imbue everything with a profound sense of meaning. All the minor details of their lives felt like they had the fate of humanity or all sentient life as the stakes. Even the guilt and martyrdom could be perversely appealing: you could know that you’re the kind of person who would sacrifice everything for your beliefs.
This helps me understand what people mean by "meaning". A sense that their life and actions actually matter. I've always struggled to understand this issue but this helps make it concrete, the kind of thing people must be looking for.
> One of my interviewees speculated that rationalists aren’t actually any more dysfunctional than anywhere else; we’re just more interestingly dysfunctional.
"We're"? The author is a rationalist too? That would definitely explain why this article is so damned long. Why are rationalists not able to write less? It sounds like a joke but this is seriously a thing. [EDIT: Various people further down in the comments are saying it's amphetamines and yes, I should have known that from my own experience. That's exactly what it is.]
> Consider talking about “ethical injunctions:” things you shouldn’t do even if you have a really good argument that you should do them. (Like murder.)
This kind of defeats the purpose, doesn't it? Also, this is nowhere justified in the article, just added on as the very last sentence.
>I think this and the entire previous two paragraphs preceding it are excellent arguments for philosophical pragmatism and empiricism. It's strange to me that the community would not have already converged on that after all their obsessions with decision theory
They did! One of the great ironies inside the community is that they are, and openly admit to being, empiricists. They reject most of the French/European rationalist canon.
>Why are rationalists not able to write less?
The answer is a lot more boring. They like to write and they like to think. They also think by writing. It is written as much for themselves as for anyone else, probably more.
On a recent Mindscape podcast Sean Carroll mentioned that rationalists are rational about everything except accusations that they're not being rational.
It's really worth reading up on the techniques from Large Group Awareness Training so that you can recognize them when they pop up.
Once you see them listed (social pressure, sleep deprivation, control of drinking/bathroom, control of language/terminology, long exhausting activities, financial buy in, etc) and see where they've been used in cults and other cult adjacent things it's a little bit of a warning signal when you run across them IRL.
Related, the BITE model of authoritarian control is also a useful framework for identifying malignant group behavior. It's amazing how consistent these are across groups and cultures, from Mao's inner circle to NXIVM and on.
Gott ist tot! Gott bleibt tot! Und wir haben ihn getötet! Wie trösten wir uns, die Mörder aller Mörder? Das Heiligste und Mächtigste, was die Welt bisher besaß, es ist unter unseren Messern verblutet. ("God is dead! God remains dead! And we have killed him! How shall we console ourselves, the murderers of all murderers? The holiest and mightiest thing the world has yet possessed has bled to death under our knives.")
The average teenager who reads Nietzsche's proclamation of the death of God thinks of it as an accomplishment: finally we got rid of those thousands-of-years-old and thereby severely outdated ideas and rules. Somewhere along the march to maturity they may start to wonder whether what replaced those old rules and ideas was any good as a replacement, but most of them never come to the realisation that there were rebellious teenagers during all those centuries when the idea of a supreme being, to which or to whom even the mightiest had to answer, still held sway. Nietzsche saw the peril in letting go of that cultural safety valve and warned of what might come next.
We are currently living in the world he warned us about and for that I, atheist as I am, am partly responsible. The question to be answered here is whether it is possible to regain the benefits of the old order without getting back the obvious excesses, the abuse, the sanctimoniousness and all the other abuses of power and privilege which were responsible for turning people away from that path.
What is the base rate here? Hard to know the scope of the problem without knowing how many non-rationalists (is that even a coherent group of people?) end up forming weird cults, as a comparison. My impression is that crazy beliefs are common amongst everybody.
A much simpler theory is that rationalists are mostly normal people, and normal people tend to form cults.
I was wondering about this too. You could also say it's a Sturgeon's law question.
They do note at the beginning of the article that many, if not most such groups have reasonably normal dynamics, for what it's worth. But I think there's a legitimate question of whether we ought to expect groups centered on rational thinking to be better able to escape group dynamics we associate with irrationality.
> If someone is in a group that is heading towards dysfunctionality, try to maintain your relationship with them; don’t attack them or make them defend the group. Let them have normal conversations with you.
This is such an important skill we should all have. I learned this best from watching the documentary Behind the Curve, about flat earthers, and have applied it to my best friend diving into the Tartarian conspiracy theory.
This just sounds like any other community based around a niche interest.
From kink to rock hounding, there are always people who base their identity on being a broker of status or power because they themselves are a powerless outsider once removed from the community.
> base their identity on being a broker of status or power because they themselves are a powerless outsider once removed from the community
Who would ever maintain power when removed from their community? You mean to say they base their identity on the awareness of the power they possess within a certain group?
The only way you can hope to get a gathering of nothing but paragons of critical thinking and skepticism is if the gathering has an entrance exam in critical thinking and skepticism (and a pretty tough one, if they are to be paragons). Or else, it's invitation-only.
I was on LW when it emerged from the OB blog, and back then it was an interesting and engaging group, though even then there were like 5 "major" contributors - most of whom had no coherent academic or commercial success.
As soon as those “sequences” were being developed it was clearly turning into a cult around EY, that I never understood and still don’t.
This article did a good job of covering the history since and was really well written.
I remember going to college and some graduate student, himself a philosophy major, telling me that nobody is as big a jerk as philosophy majors.
I don't know if it is really true, but it certainly felt true that folks looking for deeper answers about a better way to think about things end up finding what they believe is the "right" way and that tends to lead to branding other options as "wrong".
A search for certainty always seems to be defined or guided by people dealing with their own issues and experiences that they can't explain. It gets tribal and very personal and those kind of things become dark rabbit holes.
----
>Jessica Taylor, an AI researcher who knew both Zizians and participants in Leverage Research, put it bluntly. “There’s this belief [among rationalists],” she said, “that society has these really bad behaviors, like developing self-improving AI, or that mainstream epistemology is really bad–not just religion, but also normal ‘trust-the-experts’ science. That can lead to the idea that we should figure it out ourselves. And what can show up is that some people aren't actually smart enough to form very good conclusions once they start thinking for themselves.”
Reminds me of some members of our government and conspiracy theorists who "research" and encourage people to figure it out themselves ...
The thing I always found funny about those self-proclaimed Rationalists, or Skeptics, is how they value logic and reasoning, or science, and yet seriously lack knowledge of any of these, which leads to very naive views in those areas.
Trying to find life's answers by giving over your self-authority to another individual's or group's philosophy is not rational. Submitting yourself to an authority whose role is telling people what's best in life will always lead to attracting the type of people looking to control, take advantage of, and traumatize others.
because humans are biological creatures iterating through complex chemical processes that are attempting to allow a large organism to survive and reproduce within the specific ecosystem provided by the Earth in the present day. "Rational reasoning" is a quaint side effect that sometimes is emergent from the nervous system of these organisms, but it's nothing more than that. It's normal that the surviving/reproducing organism's emergent side effect of "rational thought", when it is particularly intense, will self-refer to the organism and act as though it has some kind of dominion over the organism itself, but this is, like the rationalism itself, just an emergent effect that is accidental and transient. Same as if you see a cloud that looks like an elephant (it's still just a cloud).
When I was looking for a group in my area to meditate with, it was tough finding one that didn't appear to be a cult. And yet I think Buddhist meditation is the best tool for personal growth humanity has ever devised. Maybe the proliferation of cults is a sign that Yudkowsky was on to something.
None of them are practicing Buddhist meditation though, same for the "personal growth" oriented meditation styles.
Buddhist meditation exists only in the context of the Four Noble Truths and the rest of the Buddha's Dhamma. Throwing them away means it stops being Buddhist.
I disagree, but we'd be arguing semantics. In any case, the point still stands: you can just as easily argue that these rationalist offshoots aren't really Rationalist.
> And what can show up is that some people aren't actually smart enough to form very good conclusions once they start thinking for themselves.
It's mostly just people who aren't very experienced talking about and dealing honestly with their emotions, no?
I mean, suppose someone is busy achieving and, at the same time, proficient in balancing work with emotional life, dealing head-on with interpersonal conflicts, facing change, feeling and acknowledging hurt, knowing their emotional hangups, perhaps seeing a therapist, perhaps even occasionally putting personal needs ahead of career... :)
Tell that person they can get a marginal (or even substantial) improvement from some rationalist cult practice. Their first question is going to be, "What's the catch?" Because at the very least they'll suspect that adjusting their work/life balance will bring a sizeable amount of stress and consequent decrease in their emotional well-being. And if the pitch is that this rationalist practice works equally well at improving emotional well-being, that smells to them. They already know they didn't logic themselves into their current set of emotional issues, and they are highly unlikely to logic themselves out of them. So there's not much value here to offset the creepy vibes of the pitch. (And again-- being in touch with your emotions means quicker and deeper awareness of creepy vibes!)
Now, take a person whose unexplored emotional well-being tacitly depends on achievement. Even a marginal improvement in achievement could bring perceptible positive changes in their holistic selves! And you can step through a well-specified, logical process to achieve change? Sign HN up!
Quite possibly, places like Reddit and Hacker News are training for the required level of intellectual smugness, and the certitude that you can dismiss every annoying argument with a logical fallacy.
That sounds smug of me, but I'm actually serious. One of their defects is that once you memorize all the fallacies ("Appeal to authority," "Ad hominem") you can easily reach the point where you recognize the fallacies in everyone else's arguments more readily than in your own. You more easily doubt other people's cited authorities than your own. You slap "appeal to authority" on a disliked opinion, while citing an authority next week for your own. It's a fast path from there to perceived intellectual superiority, and an even faster path from there into delusion. Rational delusion.
While deployment of logical fallacies to win arguments is annoying at best, the far bigger problem is that people make those fallacies in the first place — such as not considering base rates.
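For what it's worth, base rate neglect is easy to make concrete with the standard screening-test example; the numbers below are the usual textbook ones, not anything from the article:

```python
# Classic base-rate illustration: a test that is right 99% of the time for a
# condition only 1 in 1000 people have.

prevalence = 0.001           # base rate: P(condition)
sensitivity = 0.99           # P(positive | condition)
false_positive_rate = 0.01   # P(positive | no condition)

p_positive = sensitivity * prevalence + false_positive_rate * (1 - prevalence)
p_condition_given_positive = sensitivity * prevalence / p_positive

print(round(p_condition_given_positive, 3))  # ~0.09: a positive result is still
                                             # about 91% likely to be a false alarm
```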
It's generally worth remembering that some of the fallacies are actually structural, and some are rhetorical.
A contradiction creates a structural fallacy; if you find one, it's a fair belief that at least one of the supporting claims is false. In contrast, appeal to authority is probabilistic: we don't know, given the current context, if the authority is right, so they might be wrong... But we don't have time to read the universe into this situation so an appeal to authority is better than nothing.
... and this observation should be coupled with the observation that the school of rhetoric wasn't teaching a method for finding truth; it was teaching a method for beating an opponent in a legal argument. "Appeal to authority is a logical fallacy" is a great sword to bring to bear if your goal is to turn off the audience's ability to ask whether we should give the word of the environmental scientist and the washed-up TV actor equal weight on the topic of environmental science...
… however, even that is up for debate. Maybe the TV actor in your own example is Al Gore filming An Inconvenient Truth and the environmental scientist was in the minority which isn’t so afraid of climate change. Fast forward to 2025, the scientist’s minority position was wrong, while Al Gore’s documentary was legally ruled to have 9 major errors; so you were stupid on both sides, with the TV actor being closer.
because of the sacred simplicity problem, yet another label I had to coin out of necessity
for example, lambda calculus: it's too simple, to the point that its real power is immediately unbelievable.
the simplest 'solution' is to make it "sacred", to infuse an aura of mystery and ineffability around the ideas. that way people will give it the proper respect, proportional to its mathematical elegance, without necessarily having to really grasp the details.
I'm reflecting on how, for example, lambda calculus is really easy to learn to do by rote, but this does not help in truly grasping the significance of the fact that even an LLM can be computed by an (inhuman) amount of symbol substitution on paper, and how easy it is to trivialize what this really entails (fleshing out all the entailments is difficult; it's easier to act as if they had been fleshed out and mimic the awe)
therefore, rationalist cults are the legacy, the latest leaf in the long succession of the simplest solution to the simplicity of the truly sacred mathematical ideas with which we can "know" (and nod to each other who also "know") what numbers fucking mean
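As a concrete illustration of the "just symbol substitution" point, here is a minimal sketch of Church numerals in Python; the encodings are the standard textbook ones, included only to show that arithmetic really can be nothing but function application:

```python
# Church numerals: numbers and arithmetic as nothing but function application,
# i.e. pure symbol substitution.

zero = lambda f: lambda x: x
succ = lambda n: lambda f: lambda x: f(n(f)(x))
add  = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))
mul  = lambda m: lambda n: lambda f: m(n(f))

def to_int(n):
    """Decode a Church numeral by counting how many times it applies f."""
    return n(lambda k: k + 1)(0)

two = succ(succ(zero))
three = succ(two)

print(to_int(add(two)(three)))  # 5
print(to_int(mul(two)(three)))  # 6
```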
There's so much in these group dynamics that repeats the dynamics of 70s communist extremist groups: a group that has found a 'better' way of life, and all you have to do is believe in the group's beliefs.
Compare this part from OP:
>Here is a sampling of answers from people in and close to dysfunctional groups: “We spent all our time talking about philosophy and psychology and human social dynamics, often within the group.” “Really tense ten-hour conversations about whether, when you ate the last chip, that was a signal that you were intending to let down your comrades in selfish ways in the future.”
This reeks of Marxist-Leninist self-criticism, where everybody tried to one-up each other in how ideologically pure they were. The most extreme outgrowth of self-criticism was when the Japanese United Red Army beat its own members to death as part of self-criticism sessions.
I think rationalist cults work exactly the same as religious cults. They promise to have all the answers, to attract the vulnerable. The answers are convoluted and inscrutable, so a leader/prophet interprets them. And doom is nigh, providing motivation and fear to hold things together.
It's the same wolf in another sheep's clothing.
And people who wouldn't join a religious cult -- e.g. because religious cults are too easy to recognize since we're all familiar with them, or because religions hate anything unusual about gender -- can join a rationalist cult instead.
I think everyone should be familiar with hermeticism because its various mystery cults have been with us since Hermes Trismegistus laid down its principles in Ancient Egypt on the Emerald Tablets. It's where early science-like practices such as alchemy originated, but that wheat got separated from the chaff during the Renaissance, and the more coercive control aspects remained. That part, how to get people to follow you and fight for you and maintain a leadership hierarchy, is extremely old technology.
They essentially use a glitch in human psychology that gets exploited over and over again. The glitch is a more generalized version of the advance-fee scam. You tell people that if we just believe something can be true, it can be true. Then we discriminate against people who don't believe in that thing. We then say only the leader(s) can make that thing true, but first you must give them all your power and support so they can fight the people who don't believe in it.
Unfortunately, reality is much messier than the cult leaders would have you believe. Leaders often don't have their followers' best interests at heart, especially the followers who follow blindly, and often don't even have the ability to make true the thing everyone wants to be true; instead they use it as a white rabbit that everyone in the cult has to chase after forever.
Is it really that surprising that a group of humans who think they have some special understanding of reality compared to others tend to separate and isolate themselves until they fall into an unguided self-reinforcing cycle?
I'd have thought that would be obvious since it's the history of many religions (which seem to just be cults that survived the bottleneck effect to grow until they reached a sustainable population).
In other words, humans are wired for tribalism, so don't be surprised when they start forming tribes...
> And yet, the rationalist community has hosted perhaps half a dozen small groups with very strange beliefs (including two separate groups that wound up interacting with demons). Some — which I won’t name in this article for privacy reasons — seem to have caused no harm but bad takes.
So there's six questionable (but harmless) groups and then later the article names three of them as more serious. Doesn't seem like "many" to me.
I wonder what percentage of all cults are the rationalist ones.
The premise of the article might just be nonsense.
How many rationalists are there in the world? Of course it depends on what you mean by rationalist, but I'd guess there are, at the very least, several tens of thousands of people in the world who either consider themselves rationalists or are involved with the rationalist community.
With such numbers, is it surprising that there would be half a dozen or so small cults?
There are certainly some cult-like aspects to certain parts of the rationalist community, and I think that those are interesting to explore, but come on, this article doesn't even bother to establish that its title is justified.
To the extent that rationalism does have some cult-like aspects, I think a lot of it is because it tends to attract smart people who are deficient in the ability to use avenues other than abstract thinking to comprehend reality and who enjoy making loosely justified imaginative leaps of thought while overestimating their own abilities to model reality. The fact that a huge fraction of rationalists are sci-fi fans is not a coincidence.
But again, one should first establish that there is anything actually unusual about the number of cults in the rationalist community. Otherwise this is rather silly.
The major problem I see in this group is that they have constructed a self-identity of being intelligent. This means that by being part of a Rationalist group, a person can claim to have insight into things that "the rest of the world doesn't understand."
Which, because (1) self-identifying as intelligent/insightful does not mean you are actually so, and (2) you are following the "brain reprogramming" processes of some kind of convincing leader, is a straight shot to NXIVM-style apocalyptic cultism.
It is so strange that this article would hijack the term "rationalist" to mean this extraordinarily specific set of people "drawn together by AI researcher Eliezer Yudkowsky’s blog post series The Sequences, a set of essays about how to think more rationally".
As a counter-example (with many, many more people) there is the Indian Rationalist Association (https://en.wikipedia.org/wiki/Indian_Rationalist_Association), which exists to "promote scientific skepticism and critique supernatural claims". This isn't a cult of any kind, even if its members would broadly agree with the set above about what it means to be rational.
Isn't this entirely to be expected? The people who dominate groups like these are the ones who put the most time and effort into them, and no sane person who appreciates both the value and the limitations of rational thinking is going to see as much value in a rationalist group, and devote as much time to it, as the kind of people who are attracted to the cultish aspect of achieving truth and power through pure thought. There's way more value there if you're looking to indulge in, or exploit, a cult-like spiral into shared fantasy than if you're just looking to sharpen your logical reasoning.
Depends very much on what you're hoping to get out of it. There isn't really one "rationalist" thing at this point, it's now a whole bunch of adjacent social groups with overlapping-but-distinct goals and interests.
https://www.lesswrong.com/highlights this is the ostensible "Core Highlights", curated by major members of the community, and I believe Eliezer would endorse it.
If you don't get anything out of reading the list itself, then you're probably not going to get anything out of the rest of the community either.
If you poke around and find a few neat ideas there, you'll probably find a few other neat ideas.
For some people, though, this is "wait, holy shit, you can just DO that? And it WORKS?", in which case probably read all of this but then also go find a few other sources to counter-balance it.
(In particular, probably 90% of the useful insights already exist elsewhere in philosophy, and often more rigorously discussed - LessWrong will teach you the skeleton, the general sense of "what rationality can do", but you need to go elsewhere if you want to actually build up the muscles)
Little on offer but cults these days. Take your pick. You probably already did long ago and now your own cult is the only one you'll never clock as such.
The thing with identifying yourself with an “ism” (e.g. rationalism, feminism, socialism) is that, even though you might not want that, you’re inherently positioning yourself in a reductionist and inaccurate corner of the world. Or in other words you’re shielding yourself in a comfortable, but wrong, bubble.
To call yourself an -ist means that you consider that you give more importance to that concept than other people—you’re more rational than most, or care more about women than most, or care more about social issues than most. That is wrong both because there are many irrational rationalists and also because there are many rational people who don’t associate with the group (same with the other isms). The thing is that the very fact of creating the label and associating yourself with it will ruin the very thing that you strive for. You will attract a bunch of weirdos who want to be associated with the label without having to do the job that it requires, and you will become estranged from those who prefer to walk the walk instead of talk the talk. In both ways, you failed.
The fact is that every ism is a specific set of thoughts and ideas that is not generic, and not broad enough to carry the weight of its name. Being a feminist does not mean you care about women; it means you are tied to a specific set of ideologies and behaviours that may or may not advance the quality of life of women in the modern world, and are definitely not the only way to achieve that goal (hence the inaccuracy of the label).
Rationalism is the belief that reason is the primary path to knowledge, as opposed to, say, the observation that is championed by empiricism. It's a belief system that prioritises imposing its tenets on reality rather than asking reality what reality's tenets are. From the outset, it's inherently cult-like.
Rationalists, in this case, refers specifically to the community clustered around LessWrong, which explicitly and repeatedly emphasizes points like "you can't claim to have a well grounded belief if you don't actually have empirical evidence for it" (https://www.lesswrong.com/w/evidence for a quick overview of some of the basic posts on that topic)
To quote one of the core foundational articles: "Before you try mapping an unseen territory, pour some water into a cup at room temperature and wait until it spontaneously freezes before proceeding. That way you can be sure the general trick—ignoring infinitesimally tiny probabilities of success—is working properly." (https://www.lesswrong.com/posts/eY45uCCX7DdwJ4Jha/no-one-can...)
One can argue how well the community absorbs the lesson, but this certainly seems to be a much higher standard than average.
That is the definition of “rationalism” as proposed by philosophers like Descartes and Kant, but I don’t think that is an accurate representation of the type of “rationalism” this article describes.
This article describes "rationalism" as laid out on LessWrong and in the Sequences by Eliezer Yudkowsky. A good amount of it is based on empirical findings from psychology and behavioral science. It's called "rationalism" because it seeks to correct common reasoning heuristics that purportedly lead to incorrect reasoning, not because it stands in contrast to empiricism.
Agreed, I appreciate that there's a conceptual distinction between the philosophical versions of rationalism and empiricism, but what's being talked about here is a conception that (again, at least notionally) is interested in and compatible with both.
I am pretty sure many of the LessWrong posts are about how to understand the meaning of different types of data and are very much about examining, developing, criticizing a rich variety of empirical attitudes.
I was going to write a similar comment as op, so permit me to defend it:
Many of their "beliefs" - Super-duper intelligence, doom - are clearly not believed by the market; Observing the market is a kind of empiricism and it's completely discounted by the lw-ers
But you cannot have reason without substantial proof of how things behave by observing them in the first place. Reason is simply a logical approach to yes and no questions where you factually know, from observation of past events, how things work. And therefore you can simulate an outcome by the exercise of reasoning applied onto a situation that you have not yet observed and come to a logical outcome, given the set of rules and presumptions.
I find it ironic that the question is asked unempirically. Where is the data stating there are many more than before? Start there, then go down the rabbit hole. Otherwise, you're concluding on something that may not be true, and trying to rationalize the answer, just as a cultist does.
Anyone who's ever seen the sky knows it's blue. Anyone who's spent much time around rationalism knows the premise of this article is real. It would make zero sense to ban talking about a serious and obvious problem in their community until some double-blind, peer-reviewed data can be gathered.
It would be what they call an "isolated demand for rigor".
Something like 15 years ago I once went to a Less Wrong/Overcoming Bias meetup in my town after being a reader of Yudkowsky's blog for some years. I was like, Bayesian Conspiracy, cool, right?
The group was weird and involved quite a lot of creepy oversharing. I didn't return.
There was this interview with Diane Benscoter who talked about her experience and reasons for joining a cult that I found very insightful: https://www.youtube.com/watch?v=6Ibk5vJ-4-o
The main point is that it isn't so much the cult (leader) so much as the victims being in a vulnerable mental state getting exploited.
Perhaps I will get downvoted to death again for saying so, but the obvious answer is because the name "rationalist" is structurally indistinguishable from the name "scientology" or "the illuminati". You attract people who are desperate for an authority to appeal to, but for whatever reason are no longer affiliated with the church of their youth. Even a rationalist movement which held nothing as dogma would attract people seeking dogma, and dogma would form.
The article begins by saying the rationalist community was "drawn together by AI researcher Eliezer Yudkowsky’s blog post series The Sequences". Obviously the article intends to make the case that this is a cult, but it's already done with the argument at this point.
> for whatever reason are no longer affiliated with the church of their youth.
This is the Internet, you're allowed to say "they are obsessed with unlimited drugs and weird sex things, far beyond what even the generally liberal society tolerates".
I'm increasingly convinced that every other part of "Rationalism" is just distraction or justification for those; certainly there's a conscious decision to minimize talking about this part on the Internet.
I strongly suspect there is heterogeneity here. An outer party of "genuine" rationalists who believe that learning to be a spreadsheet or whatever is going to let them save humanity, and an inner party who use the community to conceal some absolute shenanigans.
> Obviously the article intends to make the case that this is a cult
The author is a self-identified rationalist. This is explicitly established in the second sentence of the article. Given that, why in the world would you think they're trying to claim the whole movement is a cult?
Obviously you and I have very different definitions of "obvious"
In fact, I'd go a step further and note the similarity with organized religion. People have a tendency to organize and dogmatize everything. The problem with religion is rarely the core ideas, but always the desire to use it as a basis for authority, to turn it dogmatic and ultimately form a power structure.
And I say this as a Christian. I often think that becoming a state religion was the worst thing that ever happened to Christianity, or any religion, because then it unavoidably becomes a tool for power and authority.
And doing the same with other ideas or ideologies is no different. Look at what happened to communism, capitalism, or almost any other secular idea you can think of: the moment it becomes established, accepted, and official, the corruption sets in.
There are a lot of rationalists in this community. Pointing out that the entire thing is a cult attracts downvotes from people who wish to, for instance, avoid being identified with the offshoots.
This is a very interesting article. It's surprising though to see it not use the term "certainty" at all. (It only uses "certain" in a couple instances of like "a certain X" and one use of "certainly" for generic emphasis.)
Most of what the article says makes sense, but it seems to sidestep the issue that a major feature distinguishing the "good" rationalists from the "bad" is that the bad ones are willing to take very extreme actions in support of their beliefs. This is not coincidentally something that distinguishes good believers in various religions or philosophies from bad believers (e.g., people who say God told them to kill people). This is also lurking in the background of discussion of those who "muddled through" or "did the best they could". The difference is not so much in the beliefs as in the willingness to act on them, and that willingness is in turn largely driven by certainty.
I think it's plausible there is a special dimension to rationalism that may exacerbate this, namely a tendency of rationalists to feel especially "proud" of their beliefs because of their meta-belief that they derived their beliefs rationally. Just like an amateur painter may give themselves extra brownie points because no one taught them how to paint, my impression of rationalists is that they sometimes give themselves an extra pat on the back for "pulling themselves up by their bootstraps" in the sense of not relying on faith or similar "crutches" to determine the best course of action. This can paradoxically increase their certainty in their beliefs when actually it's often a warning that those beliefs may be inadequately tested against reality.
I always find it a bit odd that people who profess to be rationalists can propose or perform various extreme acts, because it seems to me that one of the strongest and most useful rational beliefs is that your knowledge is incomplete and your beliefs are almost surely not as well-grounded as you think they are. (Certainly no less an exponent of reason than Socrates was well aware of this.) This on its own seems sufficient to me to override some of the most absurd "rationalist" conclusions (like that you should at all costs become rich or fix Brent Dill's depression). It's all the more so when you combine it with some pretty common-sense forecasts of what might happen if you're wrong. (As in, if you devote your life to curing Brent Dill's depression on the theory that he will then save the world, and he turns out to be just an ordinary guy or worse, you wasted your life curing one person's depression when you yourself could have done more good with your own abilities, just by volunteering at a soup kitchen or something.) It's never made sense to me that self-described rationalists could seriously consider some of these possible courses of action in this light.
Sort of related is the claim at the end that rationalists "want to do things differently from the society around them". It's unclear why this would be a rational desire. It might be rational in a sense to say you want to avoid being influenced by the society around you, but that's different from affirmatively wanting to differ from it. This again suggests a sort of "psychological greed" to reach a level of certainty that allows you to confidently, radically diverge from society, rather than accepting that you may never reach a level of certainty that allows you to make such deviations on a truly rational basis.
It's also interesting to me that the article focuses a lot not on rationalist belief per se, but on the logistics and practices of rationalist communities. This in itself seems like a warning that the rationality of rationalism is not all it's cracked up to be. It's sort of like, you can try to think as logically as possible, but if you hit yourself in the head with a hammer every day you're likely going to make mistakes anyway. And some of the "high demand" practices mentioned seem like slightly less severe psychological versions of that.
> it seems to sidestep the issue that a major feature distinguishing the "good" rationalists from the "bad" is that the bad ones are willing to take very extreme actions in support of their beliefs.
What is a "very extreme action"? Killing someone? In our culture, yes. What about donating half of your salary to charity? I think many people would consider that quite extreme, too. Maybe even more difficult to understand than the murder... I mean, prisons are full of murderers; they are not so exceptional.
The difference is that the bad ones are willing to take abusive actions.
> It's also interesting to me that the article focuses a lot not on rationalist belief per se, but on the logistics and practices of rationalist communities.
That's what happens when you read about the rationality community from someone who is actually familiar with it. If you want to determine whether a group is dysfunctional (i.e. a cult), the actual practices are much more important than the stated beliefs. You could have two communities with the same or very similar beliefs, yet one of them nice and the other one abusive.
> What about donating half of your salary to charity? I think many people would consider that quite extreme, too.
Maybe, but there are also degrees of extremity in terms of stuff like how broadly you donate (like there's a difference between donating a huge amount to one charity vs. spreading it around 10). Also I don't think the mere fact of donating half your salary would itself necessarily be seen as extreme; it would depend on the person's total wealth. It seems not unusual for wealthy individuals who get certain jobs to donate (or refuse) their entire salary (like Arnold Schwarzenegger declining his salary as CA governor).
Ultimately though I don't agree that this is anywhere close to as extreme as cold-blooded murder.
> I mean, prisons are full of murderers; they are not so exceptional.
I have a hunch that a large proportion of murderers in prisons are not comparable to rationalist murderers. There's a difference between just killing someone and killing someone due to your belief that that is the rational and correct thing to do. A lot of murders are crimes of passion or occur in the commission of other crimes. I could see an intermediate case where someone says "We're going to rob this bank and if the guard gives us any trouble we'll just shoot him", which is perhaps comparable to "always escalate conflict", but I don't think most murders even reach that level of premeditation.
> The difference is that the bad ones are willing to take abusive actions.
I'm not so sure that that is the difference, rather than that they are willing to take extreme actions, and then the extreme actions they wind up taking (for whatever reason) are abusive. It's sort of like, if you fire a gun into a crowd, your willingness to do so is important whether or not you actually hit anyone. Similarly a willingness to go well outside the bounds of accepted behaviors is worrisome even if you don't happen to harm anyone by doing so. I could certainly imagine that many rationalists do indeed formulate belief systems that exclude certain kinds of extreme behavior while allowing others. I'm just saying, if I found out that someone was spending all their days doing any spookily extreme thing (e.g., 8 hours a day building a scale model of Hoover Dam one grain of sand at a time) I would feel a lot less safe around them.
> > It's also interesting to me that the article focuses a lot not on rationalist belief per se, but on the logistics and practices of rationalist communities.
> That's what happens when you read about the rationality community from someone who is actually familiar with it. If you want to determine whether a group is dysfunctional (i.e. a cult), the actual practices are much more important than the stated beliefs.
Sure. My point is just that, insofar as this is true, it means what the article is saying is more about cults in general and less about anything specific to rationalism.
Why are there so many cults? People want to feel like they belong to something, and in a world in the midst of a loneliness and isolation epidemic the market conditions are ideal for cults.
The book Imagined Communities (Benedict Anderson) touches on this, making the case that in modern times, "nation" has replaced the cultural narrative purpose previously held by "tribe," "village," "royal subject," or "religion."
The shared thread among these is (in ever widening circles) a story people tell themselves to justify precisely why, for example, the actions of someone you'll never meet in Tulsa, OK have any bearing whatsoever on the fate of you, a person in Lincoln, NE.
One can see how this leaves an individual in a tenuous place if one doesn't feel particularly connected to nationhood (one can also see how being too connected to nationhood, in an exclusionary way, can also have deleterious consequences, and how not unlike differing forms of Christianity, differing concepts on what the 'soul' of a nation is can foment internal strife).
(To be clear: those fates are intertwined to some extent; the world we live in grows ever smaller due to the power of up-scaled influence of action granted by technology. But "nation" is a sort of fiction we tell ourselves to fit all that complexity into the slippery meat between human ears).
The question the article is asking is "why did so many cults come out of this particular social milieu", not "why are there a lot of cults in the whole world".
Your profile says that you want to keep your identity small, but you have like over 30 thousand comments spelling out exactly who you are and how you think. Why not shard accounts? Anyways. Just a random thought.
Empathy is usually a limited resource for those who generously ascribe it to themselves, and it is often mixed up with self-serving desires. Perhaps Rationalists have similar difficulties with reasoning.
While I believe Rationalism can be a form of occupational disease in tech circles, it sometimes does pose interesting questions. You just have to be aware that the perspective used to analyse circumstances is intentionally constrained, and in the end you still have to compare your prognosis to a reality that always chooses empiricism.
> The Sequences make certain implicit promises. ...
Some meta-commentary first... How would one go about testing if this is true? If true, then such "promises" are not written down -- they are implied. So one would need to ask at least two questions: 1. Did the author intend to make these implicit promises? 2. What portion of readers perceive them as such?
> ... There is an art of thinking better ...
First, this isn't _implicit_ in the Sequences; it is stated directly. In any case, the quote does not constitute a promise: so far, it is a claim. And yes, rationalists do think there are better and worse ways of thinking, in the sense of "what are more effective ways of thinking that will help me accomplish my goals?"
> ..., and we’ve figured it out.
Codswallop. This is not a message of the rationality movement -- quite the opposite. We share what we've learned and why we believe it to be true, but we don't claim we've figured it all out. It is better to remain curious.
> If you learn it, you can solve all your problems...
Bollocks. This is not claimed implicitly or explicitly. Besides, some problems are intractable.
> ... become brilliant and hardworking and successful and happy ...
Rubbish.
> ..., and be one of the small elite shaping not only society but the entire future of humanity.
Bunk.
For those who haven't read it, I'll offer a relevant extended quote from Yudkowsky's 2009 "Go Forth and Create the Art!" [1], the last post of the Sequences:
## Excerpt from Go Forth and Create the Art
But those small pieces of rationality that I've set out... I hope... just maybe...
I suspect—you could even call it a guess—that there is a barrier to getting started, in this matter of rationality. Where by default, in the beginning, you don't have enough to build on. Indeed so little that you don't have a clue that more exists, that there is an Art to be found. And if you do begin to sense that more is possible—then you may just instantaneously go wrong. As David Stove observes—I'm not going to link it, because it deserves its own post—most "great thinkers" in philosophy, e.g. Hegel, are properly objects of pity. That's what happens by default to anyone who sets out to develop the art of thinking; they develop fake answers.
When you try to develop part of the human art of thinking... then you are doing something not too dissimilar to what I was doing over in Artificial Intelligence. You will be tempted by fake explanations of the mind, fake accounts of causality, mysterious holy words, and the amazing idea that solves everything.
It's not that the particular, epistemic, fake-detecting methods that I use, are so good for every particular problem; but they seem like they might be helpful for discriminating good and bad systems of thinking.
I hope that someone who learns the part of the Art that I've set down here, will not instantaneously and automatically go wrong, if they start asking themselves, "How should people think, in order to solve new problem X that I'm working on?" They will not immediately run away; they will not just make stuff up at random; they may be moved to consult the literature in experimental psychology; they will not automatically go into an affective death spiral around their Brilliant Idea; they will have some idea of what distinguishes a fake explanation from a real one. They will get a saving throw.
It's this sort of barrier, perhaps, which prevents people from beginning to develop an art of rationality, if they are not already rational.
And so instead they... go off and invent Freudian psychoanalysis. Or a new religion. Or something. That's what happens by default, when people start thinking about thinking.
I hope that the part of the Art I have set down, as incomplete as it may be, can surpass that preliminary barrier—give people a base to build on; give them an idea that an Art exists, and somewhat of how it ought to be developed; and give them at least a saving throw before they instantaneously go astray.
That's my dream—that this highly specialized-seeming art of answering confused questions, may be some of what is needed, in the very beginning, to go and complete the rest.
My pet theory is that, as a rationalist, you have an idealized view of humanity by nature. Your mirror neurons copy your own mind when interpolating other people's behavior and character.
Which results in a constant state of cognitive dissonance, as the people of normal society around you behave very differently and often more "rustically" than expected. The education is there - all the learning sources are there - and yet they are rejected. The lessons of history go unlearned and are often repeated.
You are in an out-group by definition and life-long, so you band together with others and get conned by cult con artists into foolish projects.
For the "rational" are nothing but another deluded project to hijack by the sociopaths of our society. The most rational being- in fact a being so capable to predate us, society had to develop anti-bodies against socio-paths, we call religion and laws!
My own background was largely shaped by the westering old Europe, creaking and breaking (after two world wars) under its heavy load of philosophical/metaphysical inheritance (which at this point in time can be considered effectively Americanized).
It is still fascinating to trace back the divergent developments like american-flavoured christian sects or philosophical schools of "pragmatism", "rationalism" etc. which get super-charged by technological disruptions.
In my youth I was heavily influenced by the so-called Bildung which can be functionally thought of as a form of ersatz religion and is maybe better exemplified in the literary tradition of the Bildungsroman.
I've grappled with and wildly fantasized about all sorts of things, and experimented mindlessly with all kinds of modes of thinking and consciousness amidst my coming-of-age. In hindsight, without this particular frame of Bildung, left to my own devices, I would have been utterly confused and might at some point have acted out on it. By engaging with books like Der Zauberberg by Thomas Mann or Der Mann ohne Eigenschaften by Robert Musil, my apparent madness was calmed down; instead of breaking the dam of my forming social front with the vastness of the unconscious, over time I was guided to develop my own way of slowly operating it appropriately, without completely blowing myself up into a messiah or finding myself eternally trapped in the futility and hopelessness of existence.
Borrowing from my background, one effective vaccination which spontaneously came up in my mind against rationalists sects described here, is Schopenhauer's Die Welt als Wille und Vorstellung which can be read as a radical continuation of Kant's Critique of Pure Reason which was trying to stress test the ratio itself. [To demonstrate the breadth of Bildung in even something like the physical sciences e.g. Einstein was familiar with Kant's a priori framework of space and time, Heisenberg's autobiographical book Der Teil und das Ganze was motivated by: "I wanted to show that science is done by people, and the most wonderful ideas come from dialog".]
Schopenhauer arrives at this realization thanks to the groundwork done by Kant (which he heavily acknowledges): there cannot even exist a rational basis for rationality itself; it is simply an exquisitely disguised tool in the service of the more fundamental will, i.e. by definition an irrational force.
A funny little thought experiment, but what consequences does it have? Well, if you declare the ratio to be your ultima ratio, you are just fooling yourself in order to be able to rationalize anything you want. Once internalized, Schopenhauer's insight leaves you overwhelmed by Mitleid for every conscious being, inoculating you against the excesses of your own ratio. It hit me instantly with the same force as MDMA, but several years earlier.
I think it speaks volumes that you think "american" is the approximate level of scope that this behavior lives at.
Stuff like this crosses all aspects of society. Certain Americans of certain backgrounds, demographics, and life experiences are far more likely to engage in it than others. I think those people are a minority, but they are definitely an overly visible one, if not a local majority, in a lot of internet spaces, so it's easy to mistake them for the majority.
Sure, many people across the globe are susceptible to cult-think. It's just been a century-long trend in America to seek a superior way of living to "save all Americans", is all. No offense to other countries' peoples; I'm sure they're just as good at being cult members championing over-application as any American.
It probably speaks more volumes that you are taking my comment about this so literally.
Rationalists are, to a man (and they’re almost all men) arrogant dickheads and arrogant dickheads do not see what they’re doing to be “a cult” but “the right and proper way of things because I am right and logical and rational and everyone else isn’t”.
That's an unnecessary caricature. I have met many rationalists of both genders and found most of them quite pleasant. But it seems that the proportion of "arrogant dickheads" unfortunately matches that of the general population. Whether it's "irrational people" or "liberal elites", these assholes always seem to find someone to look down on.
It's a religion of an overdeveloped mind that hides from everything it cannot understand. It's an anti-religion, in a sense, that puts your mind on a pedestal.
Note the common pattern in major religions: they tell you that thoughts and emotions obscure the light of intuition, like clouds obscure sunlight. Rationalism is the opposite: it denies the very idea of intuition, or anything above the sphere of thoughts, and tells you to create as many thoughts as possible.
Rationalists deny anything spiritual, good or evil, because they don't have evidence to think otherwise. They remain in this state of neutral nihilism until someone bigger than them sneaks into their ranks and casually introduces them to evil with some undeniable evidence. Their minds quickly pass through the denial-anger-acceptance stages, and being faithful to their rationalist doctrine they update their beliefs with what they now know. From that point on they are a cult. That's the story of Scientology, which has too many parallels with Rationalism.
Because they have serious emotional-maturity issues that lead them to lobotomize the normal human emotional side of their identity and experience of life.
Cue all the surface-level “tribalism/loneliness/hooman nature” comments instead of the simple analysis that Rationalism (this kind) is severely brain-broken and irredeemable and will just foster even worse outcomes in a group setting. It’s a bit too close to home (ideologically) to get a somewhat detached analysis.
I think we've strayed too far from the Aristotelian dynamics of the self.
Outside of sexuality and the proclivities of their leaders, emphasis on physical domination of the self is lacking. The brain runs wild, the spirit remains aimless.
In the Bay, the difference between the somewhat well-adjusted "rationalists" and those very much "in the mush" is whether or not someone tells you they're in SF or "on the Berkeley side of things"
Note that Asterisk magazine is basically the unofficial magazine for the rationalism community and the author is a rationalist blogger who is naturally very pro-LessWrong. This piece is not anti-Yudkowsky or anti-LessWrong.
We live in an irrational time. It's unclear whether irrationality was simply underreported in history or whether social changes in the last ~50-75 years have had breaking consequences.
People are trying to make sense of this. For example:
The Canadian government heavily subsidizes junk food, then spends heavily on healthcare because of the resulting illnesses. It restricts and limits healthy food through supply management and promotes a “food pyramid” favoring domestic unhealthy food. Meanwhile, it spends billions marketing healthy living, yet fines people up to $25,000 for hiking in forests and zones cities so driving is nearly mandatory.
Government is an easy target for irrational behaviours.
A very interesting read.
My idea of these self-proclaimed rationalists was fifteen years out of date. I thought they’re people who write wordy fan fiction, but turns out they’ve reached the point of having subgroups that kill people and exorcise demons.
This must be how people who had read one Hubbard pulp novel in the 1950s felt decades later when they find out he’s running a full-blown religion now.
The article seems to try very hard to find something positive to say about these groups, and comes up with:
“Rationalists came to correct views about the COVID-19 pandemic while many others were saying masks didn’t work and only hypochondriacs worried about covid; rationalists were some of the first people to warn about the threat of artificial intelligence.”
There’s nothing very unique about agreeing with the WHO, or thinking that building Skynet might be bad… (The rationalist Moses/Hubbard was 12 when that movie came out — the most impressionable age.) In the wider picture painted by the article, these presumed successes sound more like a case of a stopped clock being right twice a day.
You're falling into some sort of fallacy; maybe a better rationalist than I could name it.
The "they" you are describing is a large body of disparate people spread around the world. We're reading an article that focuses on a few dysfunctional subgroups. They are interesting because they are so dysfunctional and rare.
Or put it this way: Name one -ism that _doesn't_ have sub/splinter groups that kill people. Even Pacifism doesn't get a pass.
> The "they" you are describing is a large body of disparate people spread around the world.
[Citation needed]
I sincerely doubt anything but a tiny insignificant minority consider themselves part of the "rationalist community".
The article specifically defines the rationalists it’s talking about:
“The rationalist community was drawn together by AI researcher Eliezer Yudkowsky’s blog post series The Sequences, a set of essays about how to think more rationally.”
Is this really a large body of disparate people spread around the world? I suspect not.
Dadaism? Most art -isms didn't have subgroups who killed people. If people killed others in art history it was mostly tragic individual stories and had next to nothing to do with the ideology of the ism.
This sounds like the No True Scotsman fallacy.
We know all true Scotsmen are good, upstanding citizens. If you find a Scotsman who is a criminal, then obviously he is not a true Scotsman.
If you find a rationalist who believes something mad then obviously he is not a true rationalist.
There are now so many logical fallacies that you can point to any argument and say it’s a logical fallacy.
Existentialism.
Post-modernism.
Accidentalism.
Perhaps the difference is that these isms didn't think they had thought up everything themselves.
>The "they" you are describing is a large body of disparate people spread around the world.
And that "large body" has a few hundred core major figures and prominent adherents, and a hell of a lot of them seem to be exactly like how the parent describes. Even the "tamer" of them like ASC have that cultish quality...
As for the rest of the "large body", the hangers on, those are mostly out of view anyway, but I doubt they'd be paragons of sanity if looked up close.
>Or put it this way: Name one -ism that _doesn't_ have sub/splinter groups that kill people
-isms include fascism, nazism, jihadism, nationalism, communism, nationalism, racism, etc, so not exactly the best argument to make in rationalism's defense. "Yeah, rationalism has groups that murder people, but after all didn't fascism had those too?"
Though, if we were honest, it mostly brings in mind another, more medical related, -ism.
The level of dysfunction which is described in the article is really rare. But dysfunction, the kind of which we talk about, is not really that rare, I would even say that quite common, in self proclaimed rationalist groups. They don’t kill people - at least directly - but they definitely not what they claim to be: rational. They use rational tools, more than others, but they are not more rational than others, they simply use these tools to prove their irrationality.
I touch rationalists only with a pole recently, because they are not smarter than others, but they just think that, and on the surface level they seem so. They praise Julia Galef, then ignore everything what she said. Even Galef invited people who were full blown racists, just it seemed that they were all right because they knew whom they talked with, and they couldn’t bullshit. They tried to argue why their racism is rational, but you couldn’t tell from the interviews. They flat out lies all the time on every other platforms. So at the end she just gave platform for covered racism.
The WHO didn't declare a global pandemic until March 11, 2020 [1]. That's a little slow and some rationalists were earlier than that. (Other people too.)
After reading a warning from a rationalist blog, I posted a lot about COVID news to another forum and others there gave me credit for giving the heads-up that it was a Big Deal and not just another thing in the news. (Not sure it made all that much difference, though?)
[1] https://pmc.ncbi.nlm.nih.gov/articles/PMC7569573/
I worked at the British Medical Journal at the time. We got wind of COVID being a big thing in January. I spent January to March getting our new VPN into a fit state that the whole company could do their whole jobs from home. 23 March was lockdown, and we were ready, and had a very busy year.
That COVID was going to be big was obvious to a lot of people and groups who were paying attention. We were a health-related org, but we were extremely far from unique in this.
The rationalist claim that they were uniquely on the ball and everyone else dropped it is just a marketing lie.
Do you think that the consequences of the WHO declaring a pandemic and some rationalist blog warning about covid are the same? Clearly the WHO has to be more cautious. I have no doubt there were people at the WHO who felt a global pandemic was likely at least as early as you and the person writing the rationalist blog.
Shitposting comedy forums were ahead of the WHO when it came to this; it didn't take a genius to understand what was going on before shit completely hit the fan.
I remember plotting exponential growth against the data in late February; it was a very exciting time.
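For what it's worth, that kind of late-February check amounted to little more than fitting a line to log(cases). A minimal sketch of the idea, with made-up numbers standing in for real case counts (the `cases` array below is purely hypothetical):
    import numpy as np
    # Hypothetical daily case counts (NOT real data), days 0..7.
    cases = np.array([66, 85, 109, 143, 185, 240, 310, 400])
    days = np.arange(len(cases))
    # Least-squares fit of log(cases) = a + b*day; b is the daily growth rate.
    b, a = np.polyfit(days, np.log(cases), 1)
    print(f"daily growth ~ {np.exp(b) - 1:.0%}, doubling time ~ {np.log(2) / b:.1f} days")
    for d, c in zip(days, cases):
        print(f"day {d}: observed {c}, exponential fit {np.exp(a + b * d):.0f}")
If the fitted curve tracks the data for a couple of weeks, the doubling time tells you roughly how long you have before small numbers stop being small.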
I think the piece bends over backwards to keep the charitable frame because it's written by someone inside the community, but you're right that the touted "wins" feel a bit thin compared to the sheer scale of dysfunction described.
How does that dysfunction compare to the baseline for similar socioeconomic groups (say, their median)?
Personally I feel like the big thing to come out of rationalism is the insight that, in Scott Alexander's words [0] (freely after Julia Galef),
> Of the fifty-odd biases discovered by Kahneman, Tversky, and their successors, forty-nine are cute quirks, and one is destroying civilization. This last one is confirmation bias - our tendency to interpret evidence as confirming our pre-existing beliefs instead of changing our minds.
I'm mildly surprised the author didn't include it in the list.
[0] https://www.astralcodexten.com/p/book-review-the-scout-minds...
> Rationalists came to correct views about the COVID-19 pandemic while many others were saying masks didn’t work
I wonder what views about COVID-19 are correct. On masks, I remember the mainstream messaging went through stages: masks don't work, some masks work, all masks work, double masking works, and finally masks don't work (or some masks work; I can't remember where we ended up).
> to finally masks don't work (or some masks work; I can't remember where we ended up).
Most masks 'work', for some value of 'work', but efficacy differs (which, to be clear, was ~always known; there was a very short period when some authorities insisted that covid was primarily transmitted by touch, but you're talking weeks at most). In particular I think what confused people was that the standard blue surgical masks are somewhat effective at stopping an infected person from passing on covid (and various other things), but not hugely effective at preventing the wearer from contracting covid; for that you want something along the lines of an n95 respirator.
The main actual point of controversy was whether it was airborne or not (vs just short-range spread by droplets); the answer, in the end, was 'yes', but it took longer than it should have to get there.
Putting just about anything in front of your face will help prevent spreading illness to some extent; this is why we teach children to "vampire cough". Masks were always effective to some degree. The CDC lied to the public by initially telling them not to use masks, because it wanted to keep the supply for healthcare workers and was afraid that the public would buy them all up first. It was a very, very stupid thing to do, and it undermined people's trust in the CDC and confused people about masks. After that, masks became politicized and the whole topic became a minefield.
Basic masks work for society because they stop your saliva from traveling, but they don't work for you because they don't stop particles from other people's saliva from reaching you.
FWIW, my rationalist friends were warning about Covid before I had heard about it from others, and talking about AI before it was on others' radar.
Covid specifically or a pandemic in general?
Also AI doesn't really count because plenty of people have been riding that train for decades.
I was reminded of Hubbard too. In particular, the "[belief that one] should always escalate when threatened" strongly echoes Hubbard's advice to always attack, attack - never defend.
The whole thing reminds me of EST and a thousand other cults / self-improvement / self-actualisation groups that seem endemic to California ever since the 60s or before.
Money and heat have a strong ability to encourage crazy.
As someone who started reading without knowing about rationalists, I actually came out without knowing much more. Lots of context is assumed I guess.
Some main figures and rituals are mentioned, but I still don’t know how the activities and communities arise from the purported origin. How do we go from “let’s rationally analyze how we think and get rid of bias” to creating a crypto, or being hyper-focused on AI, or summoning demons? Why did they arrive at this idea of always matching confrontation with escalation? Why the focus on programming? Is this a Silicon Valley thing?
Also lesswrong is mentioned but no context is given about it. I only know the name as a forum, just like somethingawful or Reddit, but I don’t know how it fits into the picture.
LessWrong was originally a personal blog of Eliezer Yudkowsky. It was an inspiration for what later became the "rationality community". These days, LessWrong is a community blog. The original articles were published as a book, freely available at: https://www.readthesequences.com/ If you read it, you can see what the community was originally about; but it is long.
Some frequent topics debated on LessWrong are AI safety, human rationality, effective altruism. But it has no strict boundaries; some people even post about their hobbies or family life. Debating politics is discouraged, but not banned. The website is mostly moderated by its users, by voting on articles and comments. The voting is relatively strict, and can be scary for many newcomers. (Maybe it is not strategic to say this, but most comments on Hacker News would probably be downvoted on LessWrong for insufficient quality.)
Members of the community, the readers of the website, are all over the planet. (Just what you would expect from readers of an internet forum.) But in some cities there are enough of them that they can organize an offline meetup once in a while. And in a very few cities there are so many of them that they are practically a permanent offline community; most notably in the Bay Area.
I don't live in the Bay Area. To describe how the community functions in my part of the world: we meet about once a month, sometimes less frequently, and we discuss various nerdy stuff. (Apologies if this is insufficiently impressive. From my perspective, the quality of those discussions is much higher than I have seen anywhere else, but I guess there is no way to provide this experience second-hand.) There is a spirit of self-improvement; we encourage each other to think logically and try to improve our lives.
Oh, and how does the bad part connect to it?
Unfortunately, although the community is about trying to think better, for some reason it also seems very attractive to people who are looking for someone to tell them how to think. (I mean, we do tell them how to think, but in a very abstract way: check the evidence, remember your cognitive biases, et cetera.) They are perfect material for a cult.
The rationality community itself is not a cult. Too much disagreement and criticism of our own celebrities for that! There is also no formal membership; anyone is free to come and go. Sometimes a wannabe cult leader joins the community, takes a few vulnerable people aside, and starts a small cult. In two of the three examples in the article, it was a group of about five people -- when you have hundreds of members in a city, you won't notice when five of them start attending your meetups less frequently and then disappear completely. And one day... you read about them in the newspapers.
> How do we go from “let’s rationally analyze how we think and get rid of bias” to creating a crypto, or being hyper-focused on AI, or summoning demons? Why did they arrive at this idea of always matching confrontation with escalation?
Rationality and AI have always been the focus of the community. Buying crypto was considered common sense back when Bitcoin was cheap, but I haven't heard talk about crypto in the rationality community recently.
On the other hand, believing in demons, and the idea that you should always escalate... those are specific ideas of the leaders of the small cults, definitely not shared by the rest of the community.
Notice how the first thing the wannabe cult leaders do is isolate their followers even from the rest of the rationality community. They are quite aware that what they are doing would be considered wrong by the rest of the community.
The question is, how can the community prevent this? If your meetings are open for everyone, how can you prevent one newcomer from privately contacting a few other newcomers, meeting them in private, and brainwashing them? I don't have a good answer for that.
[flagged]
The point of wearing a mask is to protect other people from your respiratory droplets. Please wear a mask when you're sick.
> AI is very friendly, even
Very friendly until it reads in your email that you plan to replace it with a new model:
https://www.anthropic.com/research/agentic-misalignment
It was genuinely difficult to persuade people to wear masks before everyone started doing it and it became normal.
I hope we never have to find out how wrong you are.
> And masks? How many graphs of cases/day with mask mandate transitions overlayed are required before people realize masks did nothing? Whole countries went from nearly nobody wearing them, to everyone wearing them, overnight, and COVID cases/day didn't even notice.
Most of those countries didn't actually follow their mask mandates - the USA, for example. I visited because the PRC was preventing vaccine deliveries to Taiwan, so I flew to the USA to get a vaccine, and I distinctly remember thinking "yeah... of course" when I walked around an airport full of people chin-diapering.
Taiwan halted a couple outbreaks from pilots completely, partially because people are so used to wearing masks when they're sick here (and also because the mask mandate was strictly enforced everywhere).
I visited DC a year later, where they had a memorial for victims of COVID. There were 700,000 white flags near the Washington Monument when I visited; as I recall, the count broke a million a few months later.
This article is beautifully written, and it's full of proper original research. I'm sad that most comments so far are knee-jerk "lol rationalists" type responses. I haven't seen any comment yet that isn't already addressed in much more colour and nuance in the article itself.
The contrarian dynamic strikes again! https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...
(I'm referring to how this comment, objecting to the other comments as unduly negative, has been upvoted to the top of the thread.)
(p.s. this is not a criticism!)
I think that since it's not possible to reply to multiple comments at the same time, people will naturally open a new top-level comment the moment there's a clearly identifiable groupthink emerging. Quoting one of your earlier comments about this:
>This happens so frequently that I think it must be a product of something hard-wired in the medium *[I mean the medium of the internet forum]
I would say it's only hard-wired in the medium of tree-style comment sections. If HN worked more like linear forums with multi-quote/replies, it might be possible to have multiple back-and-forths of subgroup consensus like this.
Hahaah yeah true. If I had been commenting earlier I might’ve written “lol rationalists”
I'm more than happy I read your comment body and its good-natured tone before your name XD.
> I haven't seen any comment yet that isn't already addressed in much more colour and nuance in the article itself.
I once called rationalists infantile, impotent liberal escapism; perhaps that's the novel take you are looking for.
Essentially, my view is that the fundamental problem with rationalists and the effective altruist movement is that they talk about profound social and political issues with any and all politics completely and totally removed. It is liberal depoliticisation[1] driven to its ultimate conclusion. That is not only why they are ineffective and wrong about everything, it is also why they are popular among the tech elites that are giving millions to associated groups like MIRI[2]. They aren't going away; they are politically useful and convenient to very powerful people.
[1] https://en.wikipedia.org/wiki/Post-politics
[2] https://intelligence.org/transparency/
I just so happened to read in the last few days the (somewhat disjointed and rambling) Technically Radical: On the Unrecognized [Leftist] Potential of Tech Workers and Hackers
https://wedontagree.net/technically-radical-on-the-unrecogni...
as well as the better but much older "The Professional-Managerial Class" by Ehrenreich (1976):
https://libcom.org/article/professional-managerial-class-bar...
"Rationalists" do seem to be in some ways the poster children of consumerist atomization, but do note that they also resisted it socially by forming those 'cults' of theirs.
(If counter-cultures are 'dead', why don't they count as one?? Alternatively, might this be a form of communitarianism, but with less traditionalism, more atheism, and perhaps a Jewish slant?)
https://en.wikipedia.org/wiki/They_Saved_Lisa%27s_Brain
Asterisk is basically "rationalist magazine" and the author is a well-known rationalist blogger, so it's not a surprise that this is basically the only fair look into this phenomenon - compared to the typical outside view that rationalism itself is a cult and Eliezer Yudkowsky is a cult leader, both of which I consider absurd notions.
The view from the inside, written by a person who is waist deep into the movement, is the only fair look into the phenomenon?
> the typical outside view that rationalism itself is a cult and Eliezer Yudkowsky is a cult leader, both of which I consider absurd notions
Cults are a whole biome of personalities. The prophet does not need to be the same person as the leader. They sometimes are and things can be very ugly in those cases, but they often aren’t. After all, there are Christian cults today even though Jesus and his supporting cast have been dead for approaching 2k years.
Yudkowsky seems relatively benign as far as prophets go, though who knows what goes on in private (I’m sure some people on here do, but the collective We do not). I would guess that the failure mode for him would be a David Miscavige type who slowly accumulates power while Yudkowsky remains a figurehead. This could be a girlfriend or someone who runs one of the charitable organizations (controlling the purse strings when everyone is dependent on the organization for their next meal is a time honored technique). I’m looking forward to the documentaries that get made in 20 years or so.
It's not just a drive-by hit piece
I think it's perfectly fine to read these articles, think "definitely a cult" and ignore whether they believe in spaceships, or demons, or AGI.
The key takeaway from the article is that if you have a group leader who cuts you off from other people, that's a red flag – not really a novel, or unique, or situational insight.
That's a side point of the article, acknowledged as an old idea. The central points of this article are actually quite a bit more interesting than that. He even summarized his conclusions concisely at the end, so I don't know what your excuse is for trivializing it.
The other key takeaway, that people with trauma are more attracted to organizations that purport to be able to fix them, and are thus over-represented in those organizations (vs. the general population), is also important.
Because if you're going to set up a hierarchical (explicitly or implicitly) isolated organization with a bunch of strangers, it's good to start by asking "How much do I trust these strangers?"
> The key takeaway from the article is that if you have a group leader who cuts you off from other people, that's a red flag
Even better: a social group with a lot of invented lingo is a red flag that you can see before you get isolated from your loved ones.
> The key takeaway from the article is that if you have a group leader who cuts you off from other people, that's a red flag – not really a novel, or unique, or situational insight
Well, yes and no. The reason I think the insight is so interesting is that these groups were formed, almost definitionally, for the purpose of avoiding such "obvious" mistakes. The name of the group is literally the "Rationalists"!
I find that funny and ironic, and it says something important about this philosophy, in that it implies that the rest of society wasn't so "irrational" after all.
As a more extreme and silly example, imagine there was a group called "Cults suck, and we are not a cult!" that was created for the very purpose of fighting cults, and yet, ironically, became a cult in and of itself. That would be insightful and funny.
[flagged]
I have a link for you:
https://news.ycombinator.com/newsguidelines.html
Scroll to the bottom of the page.
One of a few issues I have with groups like these is that they often confidently and aggressively spew a set of beliefs that on their face logically follow from one another, until you realize they are built on a set of axioms that are either entirely untested or outright nonsense. This is common everywhere, but I feel it is especially pronounced in communities like this. It also involves quite a bit of navel gazing that makes me feel a little sick to participate in.
The smartest people I have ever known have been profoundly unsure of their beliefs and what they know. I immediately become suspicious of anyone who is very certain of something, especially if they derived it on their own.
I don’t think it’s just (or even particularly) bad axioms, I think it’s that people tend to build up “logical” conclusions where they think each step is a watertight necessity that follows inevitably from its antecedents, but actually each step is a little bit leaky, leading to runaway growth in false confidence.
Not that non-rationalists are any better at reasoning, but non-rationalists do at least benefit from some intellectual humility.
As a former mechanical engineer, I visualize this phenomenon like a "tolerance stackup". Effectively meaning that for each part you add to the chain, you accumulate error. If you're not damn careful, your assembly of parts (or conclusions) will fail to measure up to expectations.
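To put rough numbers on the stackup (a toy calculation, assuming, generously, that the steps are independent): if every link in a ten-step argument is 90% solid, the chain as a whole holds with probability 0.9^10, about 35%.
    # Toy "tolerance stackup" for an argument: per-step confidence compounds
    # multiplicatively, assuming (generously) that the steps are independent.
    for p_step in (0.99, 0.95, 0.90):
        for n_steps in (5, 10, 20):
            print(f"{n_steps:2d} steps at {p_step:.2f} each -> chain holds with p = {p_step ** n_steps:.2f}")
Twenty "pretty confident" steps at 0.90 each leave you near 12%, which is roughly the false-confidence runaway the parent is describing.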
IME most people aren't very good at building axioms. I hear a lot of people say "from first principles", and it is a pretty good indication that what follows will not be. First principles require a lot of effort to create. They require iteration. They require a lot of nuance, care, and precision. And of course they do! They are the foundation of everything else that is about to come. This is why I find it so odd when people say "let's work from first principles" and then just state something matter-of-factly and go from there. If you really want to do this, you start simple, attack your own assumptions, reform, build, attack, and repeat.
This is how you reduce the leakiness, but I think it is categorically the same problem as the bad axioms. It is hard to challenge yourself and we often don't like being wrong. It is also really unfortunate that small mistakes can be a critical flaw. There's definitely an imbalance.
This is why the OP is seeing this behavior: the smartest people you'll meet are constantly challenging their own ideas. They know they are wrong, to at least some degree. You'll sometimes find them talking with a bit of authority at first, but a key part is watching how they deal with having their assumptions challenged. Ask them what would cause them to change their minds. Ask them about nuances and details. They won't always dig into those cans of worms, but they will be aware of them, and maybe nervous or excited about going down that road (or do they just outright dismiss it?). They understand that accuracy is proportional to computation, and that the computation required grows exponentially as you converge on accuracy. These are strong indications, since they suggest whether someone cares more about the right answer or about being right. You also don't have to be very smart to detect this.
> I don’t think it’s just (or even particularly) bad axioms, I think it’s that people tend to build up “logical” conclusions where they think each step is a watertight necessity that follows inevitably from its antecedents, but actually each step is a little bit leaky, leading to runaway growth in false confidence.
This is what you get when you naively re-invent philosophy from the ground up while ignoring literally 2500 years of actual debugging of such arguments by the smartest people who ever lived.
You can't diverge from and improve on what everyone else did AND be almost entirely ignorant of it, let alone have no training whatsoever in it. This extreme arrogance I would say is the root of the problem.
> Not that non-rationalists are any better at reasoning, but non-rationalists do at least benefit from some intellectual humility.
Non-rationalists are forced to use their physical senses more often because they can't follow the chain of logic as far. This is to their advantage. Empiricism > rationalism.
> I think it’s that people tend to build up “logical” conclusions where they think each step is a watertight necessity that follows inevitably from its antecedents, but actually each step is a little bit leaky, leading to runaway growth in false confidence.
Yeah, this is a pattern I've seen a lot of recently—especially in discussions about LLMs and the supposed inevitability of AGI (and the Singularity). This is a good description of it.
Yet I think most people err in the other direction. They 'know' the basics of health, of discipline, of charity, but have a hard time following through. 'Take a simple idea, and take it seriously': a favorite aphorism of Charlie Munger. Most of the good things in my life have come from trying to follow through the real implications of a theoretical belief.
Perhaps part of being rational, as opposed to rationalist, is having a sense of when to override the conclusions of seemingly logical arguments.
I feel this way about some of the more extreme effective altruists. There is no room for uncertainty or recognition of the way that errors compound.
- "We should focus our charitable endeavors on the problems that are most impactful, like eradicating preventable diseases in poor countries." Cool, I'm on board.
- "I should do the job that makes the absolute most amount of money possible, like starting a crypto exchange, so that I can use my vast wealth in the most effective way." Maybe? If you like crypto, go for it, I guess, but I don't think that's the only way to live, and I'm not frankly willing to trust the infallibility and incorruptibility of these so-called geniuses.
- "There are many billions more people who will be born in the future than those people who are alive today. Therefore, we should focus on long-term problems over short-term ones because the long-term ones will affect far more people." Long-term problems are obviously important, but the further we get into the future, the less certain we can be about our projections. We're not even good at seeing five years into the future. We should have very little faith in some billionaire tech bro insisting that their projections about the 22nd century are correct (especially when those projections just so happen to show that the best thing you can do in the present is buy the products that said tech bro is selling).
> Not that non-rationalists are any better at reasoning, but non-rationalists do at least benefit from some intellectual humility.
I have observed no such correlation of intellectual humility.
Would you consider the formal verification community to be "rationalists"?
> I don’t think it’s just (or even particularly) bad axioms, I think it’s that people tend to build up “logical” conclusions where they think each step is a watertight necessity that follows inevitably from its antecedents, but actually each step is a little bit leaky, leading to runaway growth in false confidence.
I really like your way of putting it. It’s a fundamental fallacy to assume certainty when trying to predict the future. Because, as you say, uncertainty compounds over time, all prediction models are chaotic. It’s usually associated with some form of Dunning-Kruger, where people know just enough to have ideas but not enough to understand where they might fail (thus vastly underestimating uncertainty at each step), or just lacking imagination.
Precisely! I'd even say they get intoxicated with their own braininess. The expression that comes to mind is to get "way out over your skis".
I'd go even further and say most of the world's evils are caused by people with theories that are contrary to evidence. I'd place Marx among these but there's no shortage of examples.
> non-rationalists do at least benefit from some intellectual humility
The Islamists who took out the World Trade Center don’t strike me as particularly intellectually humble.
If you reject reason, you are only left with force.
Strongly recommend this profile in the NYer on Curtis Yarvin (who also uses "rationalism" to justify their beliefs) [0]. The section towards the end that reports on his meeting one of his supposed ideological heroes for an extended period of time is particularly illuminating.
I feel like the internet has led to an explosion of such groups because it abstracts the "ideas" away from the "people". I suspect if most people were in a room or spent an extended amount of time around any of these self-professed, hyper-online rationalists, they would immediately disregard any theories they were able to cook up, no matter how clever or persuasively-argued they might be in their written down form.
[0]: https://www.newyorker.com/magazine/2025/06/09/curtis-yarvin-...
> I feel like the internet has led to an explosion of such groups because it abstracts the "ideas" away from the "people". I suspect if most people were in a room or spent an extended amount of time around any of these self-professed, hyper-online rationalists, they would immediately disregard any theories they were able to cook up, no matter how clever or persuasively-argued they might be in their written down form.
Likely the opposite. The internet has led to people being able to see the man behind the curtain, and realize how flawed the individuals pushing these ideas are. Whereas many intellectuals from 50 years back were just as bad if not worse, but able to maintain a false aura of intelligence by cutting themselves off from the masses.
3 replies →
> I immediately become suspicious of anyone who is very certain of something
Me too, in almost every area of life. There's a reason it's called a conman: they are tricking your natural sense that confidence is connected to correctness.
But also, even when it isn't about conning you, how do people become certain of something? They ignored the evidence against whatever they are certain of.
People who actually know what they're talking about will always restrict the context and hedge their bets. Their explanations are tentative, filled with ifs and buts. They rarely say anything sweeping.
In the term "conman" the confidence in question is that of the mark, not the perpetrator.
2 replies →
> how do people become certain of something?
They see the same pattern repeatedly until it becomes the only reasonable explanation? I’m certain about the theory of gravity because every time I drop an object it falls to the ground with a constant acceleration.
"Cherish those who seek the truth but beware of those who find it" - Voltaire
Most likely Gide ("Croyez ceux qui cherchent la vérité, doutez de ceux qui la trouvent", "Believe those who seek Truth, doubt those who find it") and not Voltaire ;)
Voltaire was generally more subtle: "un bon mot ne prouve rien", a witty saying proves nothing, as he'd say.
> I immediately become suspicious of anyone who is very certain of something, especially if they derived it on their own.
Are you certain about this?
All I know is that I know nothing.
2 replies →
Well you could be a critical rationalist and do away with the notion of "certainty" or any sort of justification or privileged source of knowledge (including "rationality").
Your own state of mind is one of the easiest things to be fairly certain about.
3 replies →
no
Suspicious implies uncertain. It’s not immediate rejection.
Isaac Newton would like to have a word.
1 reply →
Many arguments arise over the valuation of future money. See "discount function". [1] At one extreme are the rational altruists, who put it near 1.0; at the other are the "drill, baby, drill" people, who are much closer to 0.
The discount function really should have a noise term, because predictions about the future are noisy, and the noise increases with the distance into the future. If you don't consider that, you solve the wrong problem. There's a classic Roman concern about running out of space for cemeteries. Running out of energy, or overpopulation, turned out to be problems where the projections assumed less noise than actually happened.
[1] https://en.wikipedia.org/wiki/Discount_function
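For what it's worth, here is a minimal sketch of what adding a noise term might look like (the 3% base rate and the size of the yearly drift are made-up illustrative numbers, not anything from the linked article):

    import random
    import statistics

    def noisy_discounted_value(value, years, base_rate=0.03, rate_noise=0.01):
        # The effective yearly rate takes a small random step each year, so
        # uncertainty about the discounted value grows with the horizon.
        rate = base_rate
        factor = 1.0
        for _ in range(years):
            rate += random.gauss(0, rate_noise)
            factor /= 1.0 + max(rate, 0.0)
        return value * factor

    for horizon in (5, 25, 100):
        samples = [noisy_discounted_value(100.0, horizon) for _ in range(10_000)]
        print(horizon, round(statistics.mean(samples), 2), round(statistics.stdev(samples), 2))

The point being: a point estimate of the discount rate gives you a tidy number for the year 2125, but the spread around that number is so wide that the tidy number is mostly noise.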
I find Yudkowsky-style rationalists morbidly fascinating in the same way as Scientologists and other cults. Probably because they seem to genuinely believe they're living in a sci-fi story. I read a lot of their stuff, probably too much, even though I find it mostly ridiculous.
The biggest nonsense axiom I see in the AI-cult rationalist world is recursive self-improvement. It's the classic reason superintelligence takeoff happens in sci-fi: once AI reaches some threshold of intelligence, it's supposed to figure out how to edit its own mind, do that better and faster than humans, and exponentially leap into superintelligence. The entire "AI 2027" scenario is built on this assumption; it assumes that soon LLMs will gain the capability of assisting humans on AI research, and AI capabilities will explode from there.
But AI being capable of researching or improving itself is not obvious; there's so many assumptions built into it!
- What if "increasing intelligence", which is a very vague goal, has diminishing returns, making recursive self-improvement incredibly slow?
- Speaking of which, LLMs already seem to have hit a wall of diminishing returns; it seems unlikely they'll be able to assist cutting-edge AI research with anything other than boilerplate coding speed improvements.
- What if there are several paths to different kinds of intelligence with their own local maxima, in which the AI can easily get stuck after optimizing itself into the wrong type of intelligence?
- Once AI realizes it can edit itself to be more intelligent, it can also edit its own goals. Why wouldn't it wirehead itself? (short-circuit its reward pathway so it always feels like it's accomplished its goal)
Knowing Yudkowsky, I'm sure there's a long blog post somewhere where all of these are addressed with several million rambling words of theory, but I don't think any amount of doing philosophy in a vacuum without concrete evidence could convince me that fast-takeoff superintelligence is possible.
I agree. There's also the point of hardware dependence.
From all we've seen, the practical ability of AI/LLMs seems to be strongly dependent on how much hardware you throw at it. Seems pretty reasonable to me - I'm skeptical that there's that much out there in gains from more clever code, algorithms, etc on the same amount of physical hardware. Maybe you can get 10% or 50% better or so, but I don't think you're going to get runaway exponential improvement on a static collection of hardware.
Maybe they could design better hardware themselves? Maybe, but then the process of improvement is still gated behind how fast we can physically build next-generation hardware, perfect the tools and techniques needed to make it, deploy with power and cooling and datalinks and all of that other tedious physical stuff.
1 reply →
> it assumes that soon LLMs will gain the capability of assisting humans
No, it does not. It assumes there will be progress in AI. It does not assume that progress will be in LLMs
It doesn't require AI to be better than humans for AI to take over because, unlike a human, an AI can be cloned. You can have 2 AIs, then 4, then 8.... then millions. All able to do the same things as humans (the assumption of AGI). Build cars, build computers, build rockets, build space probes, build airplanes, build houses, build power plants, build factories. Build robot factories to create more robots and more power plants and more factories.
PS: Not saying I believe in the doom. But the thought experiment doesn't seem indefensible.
3 replies →
> - What if there are several paths to different kinds of intelligence with their own local maxima, in which the AI can easily get stuck after optimizing itself into the wrong type of intelligence?
I think what's more plausible is that there is general intelligence, and humans have that, and it's general in the same sense that Turing machines are general, meaning that there is no "higher form" of intelligence that has strictly greater capability. Computation speed, memory capacity, etc. can obviously increase, but those are available to biological general intelligences just like they would be available to electronic general intelligences.
2 replies →
An interesting point you make there — one would assume that if recursive self-improvement were a thing, Nature would have already led humans into that "hall of mirrors".
16 replies →
> What if "increasing intelligence", which is a very vague goal, has diminishing returns, making recursive self-improvement incredibly slow?
This is sort of what I subscribe to as the main limiting factor, though I'd describe it differently. It's sort of like Amdahl's Law (and I imagine there's some sort of Named law that captures it, I just don't know the name): the magic AI wand may be very good at improving some part of AGI capability, but the more you improve that part, the more the other parts come to dominate. Metaphorically, even if the juice is worth the squeeze initially, pretty soon you'll only be left with a dried-out fruit clutched in your voraciously energy-consuming fist.
I'm actually skeptical that there's much juice in the first place; I'm sure today's AIs could generate lots of harebrained schemes for improvement very quickly, but exploring those possibilities is mind-numbingly expensive. Not to mention that the evaluation functions are unreliable, unknown, and non-monotonic.
Then again, even the current AIs have convinced a large number of humans to put a lot of effort into improving them, and I do believe that there are a lot of improvements that humans are capable of making to AI. So the human-AI system does appear to have some juice left. Where we'll be when that fruit is squeezed down to a damp husk, I have no idea.
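A back-of-the-envelope version of that Amdahl's Law analogy (the 50% figure is an arbitrary assumption for illustration):

    # If recursive self-improvement can only speed up the fraction p of the
    # overall research loop, the total speedup is capped at 1 / (1 - p),
    # no matter how large the speedup s on that fraction gets.
    def overall_speedup(p, s):
        return 1.0 / ((1.0 - p) + p / s)

    for s in (2, 10, 1000):
        print(f"s = {s:>4}: overall {overall_speedup(0.5, s):.2f}x")
    # s = 2: 1.33x, s = 10: 1.82x, s = 1000: 2.00x -- the unimproved half dominates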
The built in assumptions are always interesting to me, especially as it relates to intelligence. I find many of them (though not all), are organized around a series of fundamental beliefs that are very rarely challenged within these communities. I should initially mention that I don't think everyone in these communities believes these things, of course, but I think there's often a default set of assumptions going into conversations in these spaces that holds these axioms. These beliefs more or less seem to be as follows:
1) They believe that there exists a singular factor to intelligence in humans which largely explains capability in every domain (a super g factor, effectively).
2) They believe that this factor is innate, highly biologically regulated, and a static factor about a person (someone who is high IQ, in their minds, must have been a high-achieving child and must be very capable as an adult; these are the baseline assumptions). There is sometimes a belief that this can be shifted in certain directions, but broadly there is an assumption that you either have it or you don't; there is no sense of it as something that could be taught or developed without pharmaceutical intervention or some other method.
3) There is also broadly a belief that this factor is at least fairly accurately measured by modern psychometric IQ tests and educational achievement, and that this factor is a continuous measurement with no bounds on it (You can always be smarter in some way, there is no max smartness in this worldview).
These are things that certainly could be true, and perhaps I haven't read enough into the supporting evidence for them but broadly I don't see enough evidence to have them as core axioms the way many people in the community do.
More to your point though, when you think of the world from those sorts of axioms above, you can see why an obsession would develop with the concept of a certain type of intelligence being recursively improving. A person who has become convinced of their moral placement within a societal hierarchy based on their innate intellectual capability has to grapple with the fact that there could be artificial systems which score higher on the IQ tests than them, and if those IQ tests are valid measurements of this super intelligence factor in their view, then it means that the artificial system has a higher "ranking" than them.
Additionally, in the mind of someone who has internalized these axioms, there is no vagueness about increasing intelligence! For them, intelligence is the animating factor behind all capability, it has a central place in their mind as who they are and the explanatory factor behind all outcomes. There is no real distinction between capability in one domain or another mentally in this model, there is just how powerful a given brain is. Having the singular factor of intelligence in this mental model means being able to solve more difficult problems, and lack of intelligence is the only barrier between those problems being solved vs unsolved. For example, there's a common belief among certain groups among the online tech world that all governmental issues would be solved if we just had enough "high-IQ people" in charge of things irrespective of their lack of domain expertise. I don't think this has been particularly well borne out by recent experiments, however. This also touches on what you mentioned in terms of an AI system potentially maximizing the "wrong types of intelligence", where there isn't a space in this worldview for a wrong type of intelligence.
4 replies →
It's kinda weird how the level of discourse seems to be what you get when a few college students sit around smoking weed. Yet somehow this is taken as very serious and profound in the valley and VCs throw money at it.
I've pondered recursive self-improvement. I'm fairly sure it will be a thing - we're at a point already where people could try telling Claude or some such to have a go, even if not quite at a point it would work. But I imagine take-off would be very gradual. It would be constrained by available computing resources and probably only comparable to current human researchers, and so would still take ages to get anywhere.
10 replies →
Yeah, to compare Yudkowsky to Hubbard: I've read accounts of people who read Dianetics or Science of Survival and thought "this is genius!" and I'm scratching my head, because it's like they never read Freud or Horney or Beck or Berne or Burns or Rogers or Kohut, really any clinical psychology at all, even anything in the better 70% of pop psychology. Like Hubbard, Yudkowsky is unreadable, rambling [1] and inarticulate -- how anybody falls for it boggles my mind [2], but hey, people fell for Carlos Castaneda, who never used a word of the Yaqui language or mentioned any plant that grows in the desert in Mexico but has Don Juan give lectures about Kant's Critique of Pure Reason [3] that Castaneda would have heard in school, and you would have heard in school too if you went to school, or would have read if you read a lot.
I can see how it appeals to people like Aella who wash into San Francisco without exposure to education [4] or philosophy or computer science or any topics germane to the content of Sequences -- not like it means you are stupid but, like Dianetics, Sequences wouldn't be appealing if you were at all well read. How people at frickin' Oxford or Stanford fall for it is beyond me, however.
[1] some might even say a hypnotic communication pattern inspired by Milton Erickson
[2] you think people would dismiss Sequences because it's a frickin' Harry Potter fanfic, but I think it's like the 419 scam email which is riddled by typos which is meant to drive the critical thinker away and, ironically in the case of Sequences, keep the person who wants to cosplay as a critical thinker.
[3] minus any direct mention of Kant
[4] thus many of the marginalized, neurodivergent, transgender who left Bumfuck, AK because they couldn't live at home and went to San Francisco to escape persecution as opposed to seek opportunity
3 replies →
I'm surprised not to see much pushback on your point here, so I'll provide my own.
We have an existence proof for intelligence that can improve AI: humans can do this right now.
Do you think AI can't reach human-level intelligence? We have an existence proof of human-level intelligence: humans. If you think AI will reach human-level intelligence then recursive self-improvement naturally follows. How could it not?
Or do you think human-level intelligence is some kind of natural maximum? Why? That would be strange, no? Even if you think it's some natural maximum for LLMs specifically, why? And why do you think we wouldn't modify architectures as needed to continue to make progress? That's already happening; our LLMs are a long way from the pure text prediction engines of four or five years ago.
There is already a degree of recursive improvement going on right now, but with humans still in the loop. AI researchers currently use AI in their jobs, and despite the recent study suggesting AI coding tools don't improve productivity in the circumstances they tested, I suspect AI researchers' productivity is indeed increased through use of these tools.
So we're already on the exponential recursive-improvement curve, it's just that it's not exclusively "self" improvement until humans are no longer a necessary part of the loop.
On your specific points:
> 1. What if increasing intelligence has diminishing returns, making recursive improvement slow?
Sure. But this is a point of active debate between "fast take-off" and "slow take-off" scenarios, it's certainly not settled among rationalists which is more plausible, and it's a straw man to suggest they all believe in a fast take-off scenario. But both fast and slow take-off due to recursive self-improvement are still recursive self-improvement, so if you only want to criticise the fast take-off view, you should speak more precisely.
I find both slow and fast take-off plausible, as the world has seen both periods of fast economic growth through technology, and slower economic growth. It really depends on the details, which brings us to:
> 2. LLMs already seem to have hit a wall of diminishing returns
This is IMHO false in any meaningful sense. Yes, we have to use more computing power to get improvements without doing any other work. But have you seen METR's metric [1] on AI progress in terms of the (human) duration of tasks they can complete? This is an exponential curve that has not yet bent, and if anything has accelerated slightly.
Do not confuse GPT-5 (or any other incrementally improved model) failing to live up to unreasonable hype for an actual slowing of progress. AI capabilities are continuing to increase - being on an exponential curve often feels unimpressive at any given moment, because the relative rate of progress isn't increasing. This is a fact about our psychology, if we look at actual metrics (that don't have a natural cap like evals that max out at 100%, these are not good for measuring progress in the long-run) we see steady exponential progress.
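To illustrate the "exponential curves feel unimpressive in the moment" point (the 7-month doubling time and 10-minute starting horizon are rough illustrative numbers, not METR's exact figures):

    # With a constant doubling time, the absolute gains explode but the
    # month-over-month ratio never changes, which is why steady exponential
    # progress can feel flat while you're living through it.
    doubling_months = 7
    horizon_minutes = 10.0
    for month in range(0, 43, 7):
        value = horizon_minutes * 2 ** (month / doubling_months)
        print(f"month {month:>2}: task horizon ~{value:>6.0f} min")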
> 3. What if there are several paths to different kinds of intelligence with their own local maxima, in which the AI can easily get stuck after optimizing itself into the wrong type of intelligence?
This seems valid. But it seems to me that unless we see METR's curve bend soon, we should not count on this. LLMs have specific flaws, but I think if we are honest with ourselves and not over-weighting the specific silly mistakes they still make, they are on a path toward human-level intelligence in the coming years. I realise that claim will sound ridiculous to some, but I think this is in large part due to people instinctively internalising that everything LLMs can do is not that impressive (it's incredible how quickly expectations adapt), and therefore over-indexing on their remaining weaknesses, despite those weaknesses improving over time as well. If you showed GPT-5 to someone from 2015, they would be telling you this thing is near human intelligence or even more intelligent than the average human. I think we all agree that's not true, but I think that superficially people would think it was if their expectations weren't constantly adapting to the state of the art.
> 4. Once AI realizes it can edit itself to be more intelligent, it can also edit its own goals. Why wouldn't it wirehead itself?
It might - but do we think it would? I have no idea. Would you wirehead yourself if you could? I think many humans do something like this (drug use, short-form video addiction), and expect AI to have similar issues (and this is one reason it's dangerous) but most of us don't feel this is an adequate replacement for "actually" satisfying our goals, and don't feel inclined to modify our own goals to make it so, if we were able.
> Knowing Yudkowsky, I'm sure there's a long blog post somewhere where all of these are addressed with several million rambling words of theory
Uncalled for I think. There are valid arguments against you, and you're pre-emptively dismissing responses to you by vaguely criticising their longness. This comment is longer than yours, and I reject any implication that that weakens anything about it.
Your criticisms are three "what ifs" and a (IMHO) falsehood - I don't think you're doing much better than "millions of words of theory without evidence". To the extent that it's true Yudkowsky and co theorised without evidence, I think they deserve cred, as this theorising predated the current AI ramp-up at a time when most would have thought AI anything like what we have now was a distant pipe dream. To the extent that this theorising continues in the present, it's not without evidence - I point you again to METR's unbending exponential curve.
Anyway, so I contend your points comprise three "what ifs" and (IMHO) a falsehood. Unless you think "AI can't recursively self-improve itself" already has strong priors in its favour such that strong arguments are needed to shift that view (and I don't think that's the case at all), this is weak. You would need to argue why strong evidence should be required to overturn a default "AI can't recursively self-improve" view, when it seems that a) we are already seeing recursive improvement (just not purely "self"-improvement), and b) it's very normal for technological advancement to have recursive gains - see e.g. Moore's law or technological contributions to GDP growth generally.
Far from a damning example of rationalists thinking sloppily, this particular point seems like one that shows sloppy thinking on the part of the critics.
It's at least debatable, which is all it has to be for calling it "the biggest nonsense axiom" to be a poor point.
[1] https://metr.org/blog/2025-03-19-measuring-ai-ability-to-com...
5 replies →
This is also the weirdest thing, and I don't think they even know the assumption they are making. It assumes that there is infinite knowledge to be had. It also ignores that we have exceptionally strong indications that accuracy (truth, knowledge, whatever you want to call it) grows exponentially in complexity. These may be wrong assumptions, but we at least have evidence for them, and much more for the latter. So if objective truth exists, then that intelligence gap is very, very different. One way they could be right is for this to be an S-curve with us humans at the very bottom. That seems unlikely, though very possible. But they always treat this as linear or exponential, as if our understanding relative to the AI's will be like an ant trying to understand us.
The other weird assumption I hear is about how it'll just kill us all. The vast majority of smart people I know are very peaceful. They aren't even seeking power or wealth. They're too busy thinking about things and trying to figure everything out. They're much happier in front of a chalkboard than sitting on a yacht. And humans ourselves are incredibly compassionate towards other creatures. Maybe we learned this because coalitions are an incredibly powerful thing, but the truth is that if I could talk to an ant I'd choose that over laying traps. Really, that would be so much easier too! I'd even rather dig a small hole to get them started somewhere else than drive down to the store and do all that. A few shovels in the ground is less work, and I'd ask them to not come back and tell others.
Granted, none of this is absolutely certain. It'd be naive to assume that we know! But it seems like these cults are operating on the premise that they do know and that these outcomes are certain. It seems to just be preying on fear and uncertainty. Hell, even Altman does this, ignoring risks and concerns about existing systems by shifting focus to "an even greater risk" that he himself is working towards (you can't simultaneously maximize speed and safety). Which, weirdly enough, might fulfill their own prophecies. The AI doesn't have to become sentient, but if it is trained on lots of writings about how AI turns evil and destroys everyone, then isn't that going to make a dumb AI that can't tell fact from fiction more likely to just do those things?
19 replies →
This is why it's important to emphasize that rationality is not a good goal to have. Rationality is nothing more than applied logic, which takes axioms as given and deduces conclusions from there.
Reasoning is the appropriate target because it is a self-critical, self-correcting method that continually re-evaluates axioms and methods to express intentions.
You're describing the impressions I had of MENSA back in the 70's.
He probably is describing Mensa, and assuming that it also applies to the rationality community without having any specific knowledge of the latter.
(From my perspective, Hacker News is somewhere in the middle between Mensa and Less Wrong. Full of smart people, but most of them don't particularly care about evidence, if providing their own opinion confidently is an alternative.)
One of the only idioms that I don't mind living my life by is, "Follow the truth-seeker, but beware those who've found it".
Interesting. I can't say I've done much following though — not that I am aware of anyway. Maybe I just had no leaders growing up.
A good example of this is the number of huge assumptions needed for the argument for Roko's basilisk. I'm shocked that some people actually take it seriously.
I don't believe anyone has taken it seriously in the last half-decade, if you find counter-evidence for that belief let me know.
The distinction between them and religion is that religion is free to say that those axioms are a matter of faith and treat them as such. Rationalists are not as free to do so.
Epistemological skepticism sure is a belief. A strong belief on your side?
I am profoundly sure, I am certain I exist and that a reality outside myself exists. Worse, I strongly believe knowing this external reality is possible, desirable and accurate.
How suspicious does that make me?
It means you haven't read Hume, or, in general, taken philosophy seriously. An academic philosopher might still come to the same conclusions as you (there is an academic philosopher for every possible position), but they'd never claim the certainty you do.
1 reply →
Are you familiar with the Ship of Theseus as an argumentation fallacy? Innuendo Studios did a great video on it, and I think that a lot of what you're talking about breaks down to this. Tldr - it's a fallacy of substitution: small details of an argument get replaced by things that are (or feel like) logical equivalents until you end up saying something entirely different but are arguing as though you said the original thing. In this video the example is "senator doxxes a political opponent", but on looking, "senator" turns out to mean "a contractor working for the senator" and "doxxes a political opponent" turns out to mean "liked a tweet that had that opponent's name in it in a way that could draw attention to it".
Each change is arguably equivalent, and it seems logical that if x = y then you could put y anywhere you have x, but after all of the changes are applied the argument that emerges is definitely different from the one before all the substitutions were made. Communities that pride themselves on being extra rational seem especially subject to this, because it has all the trappings of rationalism but enables squishy, feely arguments.
https://www.youtube.com/watch?v=Ui-ArJRqEvU
Meant to drop a link for the above, my bad
There are certain things I am sure of even though I derived them on my own.
But I constantly battle-tested them against other smart people's views, and only after I ran out of people to bring me new rational objections did I become sure.
Now I can battle test them against LLMs.
On a lesser level of confidence, I have also found a lot of times the people who disagreed with what I thought had to be the case, later came to regret it because their strategies ended up in failure and they told me they regretted not taking my recommendation. But that is on an individual level. I have gotten pretty good at seeing systemic problems, architecting systemic solutions, and realizing what it would take to get them adopted to at least a critical mass. Usually, they fly in the face of what happens normally in society. People don’t see how their strategies and lives are shaped by the technology and social norms around them.
Here, I will share three examples:
Public Health: https://www.laweekly.com/restoring-healthy-communities/
Economic and Governmental: https://magarshak.com/blog/?p=362
Wars & Destruction: https://magarshak.com/blog/?p=424
For that last one, I am often proven somewhat wrong by right-wing war hawks, because my left-leaning anti-war stance is about avoiding inflicting large scale misery on populations, but the war hawks go through with it anyway and wind up defeating their geopolitical enemies and gaining ground as the conflict fades into history.
"genetically engineers high fructose corn syrup into everything"
This phrase is nonsense, because HFCS is made by a chemical process applied to ordinary corn after the harvest. The corn may be a GMO, but it certainly doesn't have to be.
3 replies →
It's very tempting to try to reason things through from first principles. I do it myself, a lot. It's one of the draws of libertarianism, which I've been drawn to for a long time.
But the world is way more complex than the models we used to derive those "first principles".
It's also very fun and satisfying. But it should be limited to an intellectual exercise at best, and more likely a silly game. Because there's no true first principle, you always have to make some assumption along the way.
A theory of everything will often have a little perpetual motion machine at the nexus. These can be fascinating to the mind.
Pressing through uncertainty either requires a healthy appetite for risk or an engine of delusion. A person who struggles to get out of their comfort zone will seek enablement through such a device.
Appreciation of risk-reward will throttle trips into the unknown. A person using a crutch to justify everything will careen hyperbolically into more chaotic and erratic behaviors hoping to find that the device is still working, seeking the thrill of enablement again.
The extremism comes in when, once the user has learned to say hello to a stranger, their comfort zone expands into an area where their experience with risk-reward is underdeveloped. They don't look at the external world to appreciate what might happen. They try to morph situations into some confirmation of the crutch and the inferiority of confounding ideas.
"No, the world isn't right. They are just weak and the unspoken rules [in the user's mind] are meant to benefit them." This should always resonate because nobody will stand up for you like you have a responsibility to.
A study of uncertainty and the limitations of axioms, the inability of any sufficiently expressive formalism to be both complete and consistent, these are the ideas that are antidotes to such things. We do have to leave the rails from time to time, but where we arrive will be another set of rails and will look and behave like rails, so a bit of uncertainty is necessary, but it's not some magic hat that never runs out of rabbits.
Another psychology that will come into play for those who have left their comfort zone is the inability to revert. It is a harmful tendency to presume all humans are fixed quantities. Once a behavior exists, the person is said to be revealed, not changed. The proper response is to set boundaries and be ready to tie off the garbage bag and move on if someone shows remorse and desire to revert or transform. Otherwise every relationship only gets worse. If instead you can never go back, extreme behavior is a ratchet. Every mistake becomes the person.
There should be an extremist cult of people who are certain only that uncertainty is the only certain thing
What makes you so certain there isn't? A group that has a deep understanding fnord of uncertainty would probably like to work behind the scenes to achieve their goals.
2 replies →
My favourite bumper sticker, "Militant Agnostic. I don't know, and neither do you."
1 reply →
More people should read Sextus Empiricus, as he's basically the O.G. Pyrrhonist skeptic and goes pretty hard on this very train of thought.
4 replies →
A Wonderful Phrase by Gandhi
You mean like this? https://www.readthesequences.com/Zero-And-One-Are-Not-Probab...
The Snatter Goblins?
https://archive.org/details/goblinsoflabyrin0000frou/page/10...
https://realworldrisk.com/
Socrates was fairly close to that.
1 reply →
"I have no strong feelings one way or the other." thunderous applause
There would be, except we're all very much on the fence about whether it is the right cult for us.
There already is, they're called "Politicians."
Like Robert Anton Wilson if he were way less chill, perhaps.
“Oh, that must be exhausting.”
all of science would make sense if it wasn't for that 1 pesky miracle
It's crazy to read this, because by writing what you wrote you basically show that you don't understand what an axiom is.
You need to review the definition of the word.
> The smartest people I have ever known have been profoundly unsure of their beliefs and what they know.
The smartest people are unsure about their higher level beliefs, but I can assure you that they almost certainly don't re-evaluate "axioms" as you put it on a daily or weekly basis. Not that it matters, as we almost certainly can't verify who these people are based on an internet comment.
> I immediately become suspicious of anyone who is very certain of something, especially if they derived it on their own.
That's only your problem, not anyone else's. If you think people can't arrive at a tangible and useful approximation of truth, then you are simply delusional.
> If you think people can't arrive at a tangible and useful approximation of truth, then you are simply delusional
Logic is only a map, not the territory. It is a new toy, still bright and shining from the box in terms of human history. Before logic there were other ways of thinking, and new ones will come after. Yet, Voltaire's bastards are always certain they're right, despite being right far less often than they believe.
Can people arrive at tangible and useful conclusions? Certainly, but they can only ever find capital "T" Truth in a very limited sense. Logic, like many other models of the universe, is only useful until you change your frame of reference or the scale at which you think. Then those laws suddenly become only approximations, or even irrelevant.
1 reply →
> It's crazy to read this, because by writing what you wrote you basically show that you don't understand what an axiom is. You need to review the definition of the word.
Oh, do enlighten then.
> The smartest people are unsure about their higher level beliefs, but I can assure you that they almost certainly don't re-evaluate "axioms" as you put it on a daily or weekly basis. Not that it matters, as we almost certainly can't verify who these people are based on an internet comment.
I'm not sure you are responding to the right comment, or are severely misinterpreting what I said. Clearly a nerve was struck though, and I do apologize for any undue distress. I promise you'll recover from it.
4 replies →
I once saw a discussion that people should not have kids, as it's by far the biggest increase in your lifetime carbon footprint (>10x the impact of going vegan, etc.), get driven all the way to advocating genocide as a way of minimizing carbon footprint.
> I once saw a discussion that people should not have kids, as it's by far the biggest increase in your lifetime carbon footprint (>10x the impact of going vegan, etc.), get driven all the way to advocating genocide as a way of minimizing carbon footprint.
The opening scene of Utopia (UK) s2e6 goes over this:
> "Why did you have him then? Nothing uses carbon like a first-world human, yet you created one: why would you do that?"
* https://www.youtube.com/watch?v=rcx-nf3kH_M
Setting aside the reductio ad absurdum of genocide, this is an unfortunately common viewpoint. People really need to take into account the chances their child might wind up working on science or technology which reduces global CO2 emissions or even captures CO2. This reasoning can be applied to all sorts of naive "more people bad" arguments. I can't imagine where the world would be if Norman Borlaug's parents had decided to never have kids out of concern for global food insecurity.
8 replies →
Another issue with these groups is that they often turn into sex cults.
A logical argument is only as good as its presuppositions. Laying siege to your own assumptions before reasoning from them tends toward a more beneficial outcome.
Another issue with "thinkers" is that many are cowards; whether they realize it or not a lot of presuppositions are built on a "safe" framework, placing little to no responsibility on the thinker.
> The smartest people I have ever known have been profoundly unsure of their beliefs and what they know. I immediately become suspicious of anyone who is very certain of something, especially if they derived it on their own.
This is where I depart from you. If I say it's anti-intellectual I would only be partially correct, but it's worse than that imo. You might be coming across "smart people" who claim to know nothing "for sure", which in itself is a self-defeating argument. How can you claim that nothing is truly knowable as if you truly know that nothing is knowable? I'm taking these claims to their logical extremes btw, avoiding the granular argumentation surrounding the different shades and levels of doubt; I know that leaves vulnerabilities in my argument, but why argue with those who know that they can't know much of anything as if they know what they are talking about to begin with? They are so defeatist in their own thoughts, it's comical. You say, "profoundly unsure", which reads similarly to me as "can't really ever know" which is a sure truth claim, not a relative claim or a comparative as many would say, which is a sad attempt to side-step the absolute reality of their statement.
I know that I exist; regardless of how I got here, I know that I do. There is a ridiculous amount of rhetoric surrounding that claim that I will not argue for here; this is my presupposition. So with that I make an ontological claim, a truth claim, concerning my existence; this claim is one that I must be sure of to operate at any base level. I also believe I am me and not you, or any other. Therefore I believe in one absolute, that "I am me". As such I can claim that an absolute exists, and if absolutes exist, then within the right framework you must also be an absolute to me, and so on and so forth; what I do not see in nature is an existence, or notion of, the relative on its own, as at every relative comparison there is an absolute holding up the comparison. One simple example is heat. Hot is relative, yet it also is objective; some heat can burn you, other heat can burn you over a very long time, some heat will never burn. When something is "too hot" that is a comparative claim, stating that there is another "hot" which is just "hot" or not "hot enough"; the absolute which remains is heat. Relativistic thought is a game of comparisons and relations, not of making absolute claims; the only absolute claim the relativist makes is that there is no absolute claim. The reason I am talking about relativists is that they are the logical, or illogical, conclusion of the extremes of doubt/disbelief I previously mentioned.
If you know nothing you are not wise, you are lazy and ill-prepared. We know the earth is round, we know that gravity exists, we are aware of the atomic, we are aware of our existence, we are aware that the sun shines its light upon us; we are sure of many things that took debate among smart people many, many years ago to arrive at these sure conclusions. There was a time when many things we now accept were "not known" but were observed with enough time and effort by brilliant people. That's why we have scientists, teachers, philosophers and journalists. I encourage you, the next time you find a "smart" person who is unsure of their beliefs, to kindly encourage them to be less lazy and challenge their absolutes; if they deny the absolute could be found then you aren't dealing with a "smart" person, you are dealing with a useful idiot who spent too much time watching skeptics blather on about meaningless topics until their brains eventually fell out. In every relative claim there must be an absolute or it fails to function in any logical framework. You can, with enough thought, good data, and enough time to let things steep, find the (or an) absolute and make a sure claim. You might be proven wrong later, but that should be an indicator that you should improve (or a warning you are being taken advantage of by a sophist), and that the truth is out there, not a reason to sequester yourself away in this comfortable, unsure hell that many live in till they die.
The beauty of absolute truth is that you can believe absolutes without understanding the entirety of the absolute. I know gravity exists but I don't know fully how it works. Yet I can be absolutely certain it acts upon me, even if I only understand a part of it. People should know what they know, study what they don't until they do, and not make sure claims beyond what they know until they have the prerequisite absolute claims to support the broader claims with the surety of the weakest of their presuppositions.
Apologies for grammar, length and how schizo my thought process appears; I don't think linearly and it takes a goofy amount of effort to try to collate my thoughts in a sensible manner.
I get the impression that these people desperately want to study philosophy but for some reason can't be bothered to get formal training because it would be too humbling for them. I call it "small fishbowl syndrome," but maybe there's a better term for it.
The reason why people can't be bothered to get formal training is that modern philosophy doesn't seem that useful.
It was a while ago, but take the infamous story of the 2006 rape case at Duke University. If you check out coverage of that case, you get the impression every member of the faculty who joined in the hysteria was from some humanities department, including philosophy. And quite a few of them refused to change their minds even as the prosecuting attorney was being charged with misconduct. Compare that to Socrates' behavior during the trial of the admirals in 406 BC.
Meanwhile, whatever meager resistance that group faced seems to have come from economists, natural scientists, or legal scholars.
I wouldn't blame people for refusing to study in a humanities department where they can't tell right from wrong.
Modern philosophy isn't useful because some philosophy faculty at Duke were wrong about a rape case? Is that the argument being made here?
10 replies →
> Meanwhile, whatever meager resistance that group faced seems to have come from economists, natural scientists, or legal scholars.
> I wouldn't blame people for refusing to study in a humanities department where they can't tell right from wrong.
Man, if you have to make stuff up to try to convince people... you might not be on the right side here.
2 replies →
I figure there are two sides to philosophy. There's the practical aspect of trying to figure things out, like what is matter made of - maybe it's earth, water, air, and fire as the ancient Greeks proposed? How could we tell - maybe an experiment? This stuff, while philosophical, leads on to knowledge a lot of the time, but then it gets called science or whatever. Then there's studying what philosophers say and have said about stuff, which is mostly useless, like a critique of Hegel's discourse on the four elements or something.
I'm a fan of practical philosophical questions like how does quantum mechanics work or how can we improve human rights, and not into the philosophers talking about philosopers stuff.
>The reason why people can't be bothered to get formal training is that modern philosophy doesn't seem that useful.
But rationalism is?
7 replies →
Couldn't you take this same line of reasoning and apply it to the rationalist group from the article who killed a bunch of people, and conclude that you shouldn't become a rationalist because you probably kill people?
2 replies →
Philosophy is interesting in how it informs computer science and vice-versa.
Mereological nihilism and weak emergence is interesting and helps protect against many forms of kind of obsessive levels of type and functional cargo culting.
But then in some areas philosophy is woefully behind, and you have philosophers poo-pooing intuitionism when any software engineer working on a sufficiently federated or real-world sensor/control system borrows constructivism into their classical language to not kill people (Agda is interesting, of course). Intermediate logic is clearly empirically true.
It's interesting that people don't understand the non-physicality of the abstract and you have people serving the abstract instead of the abstract being used to serve people. People confusing the map for the terrain is such a deeply insidious issue.
I mean all the lightcone stuff, like, you can't predict ex ante what agents will be keystones in beneficial causal chains, so it's such a waste of energy to spin your wheels on.
My thoughts exactly! I'm a survivor of ten years in the academic philosophy trenches and it just sounds to me like what would happen if you left a planeload of undergraduates on a _Survivor_ island with an infinite supply of pizza pockets and adderall
Funny that this also describes these cult rationalist groups very well.
Why would they need formal training? Can't they just read Plato, Socrates, etc, and classical lit like Dostoevsky, Camus, Kafka etc? That would be far better than whatever they're doing now.
Philosophy postgrad here, my take is: yeah, sorta, but it's hard to build your own curriculum without expertise, and it's hard to engage with subject matter fully without social discussion of, and guidance through texts.
It's the same as saying "why learn maths at university, it's cheaper just to buy and read the textbooks/papers?". That's kind of true, but I don't think that's effective for most people.
I'm someone who has read all of that and much more, including intense study of SEP and some contemporary papers and textbooks, and I would say that I am absolutely not qualified to produce philosophy of the quality output by analytic philosophy over the last century. I can understand a lot of it, and yes, this is better than being completely ignorant of the last 2500 years of philosophy as most rationalists seem to be, but doing only what I have done would not sufficiently prepare them to work on the projects that they want to work on. They (and I) do not have the proper training in logic or research methods, let alone the experience that comes from guided research in the field as it is today. What we all lack especially is the epistemological reinforcement that comes from being checked by a community of our peers. I'm not saying it can't be done alone, I'm just saying that what you're suggesting isn't enough and I can tell you because I'm quite beyond that and I know that I cannot produce the quality of work that you'll find in SEP today.
2 replies →
Trying to do a bit of formal philosophy at University is really worth doing.
You realise that it's very hard to do well and it's intellectual quicksand.
Reading philosophers and great writers as you suggest is better than joining a cult.
It's just that you also want to write about what you're thinking in response to reading such people and ideally have what you write critiqued by smart people. Perhaps an AI could do some of that these days.
7 replies →
This is like saying someone who wants to build a specialized computer for a novel use should read the Turing paper and get to it. A lot of development has happened in the field in the last couple hundred years.
1 reply →
I think a larger part of it is the assumption that an education in humanities is useless - that if you have an education (even self-education) in STEM, and are "smart", you will automatically do better than the three thousand year conversation that comprises the humanities.
Informal spaces let you skip the guardrails that academia imposes
Many years ago I met Eliezer Yudkowsky. He handed me a pamphlet extolling the virtues of rationality. The whole thing came across as a joke, as a parody of evangelizing. We both laughed.
I glanced at it once or twice and shoved it into a bookshelf. I wish I kept it, because I never thought so much would happen around him.
I only know Eliezer Yudkowsky from his Harry Potter fanfiction, most notably Harry Potter and the Methods of Rationality.
Is he known publicly for some other reason?
He's considered the father of rationalism and the father of AI doomerism. He wrote this famous article in Time magazine a few years ago: https://time.com/6266923/ai-eliezer-yudkowsky-open-letter-no...
His book If Anyone Builds It, Everyone Dies comes out in a month: https://www.amazon.com/Anyone-Builds-Everyone-Dies-Superhuma...
You can find more info here: https://en.wikipedia.org/wiki/Eliezer_Yudkowsky
7 replies →
He writes scare-mongering books about AI doomerism such as If Anyone Builds It, Everyone Dies.
In short, another variant of commercializing the human fear response.
Do you spend much time in communities which discuss AI stuff? I feel as if he's mentioned nearly daily, positively or not, in a lot of the spaces I frequent.
I'm surprised you're unfamiliar otherwise, I figured he was a pretty well known commentator.
1 reply →
imo These people are promoted. You look at their backgrounds and there is nothing that justifies their perches. Eliezer Yudkowsky is (iirc) a Thiel baby, isn't he?
Yep. Thiel funded Yudkowsky’s Singularity Institute. Thiel seems to have soured on the rationalists though as he has repeatedly criticized “the East Bay rationalists” in his public remarks. He also apparently thinks he helped create a Black Pill monster in Yudkowsky and his disciples which ultimately led to Sam Altman’s brief ousting from Open AI.
[flagged]
Huh, neo-Nazis in HN comment sections?? Jeez. (I checked their other comments and there are things like "Another Zionist Jew to-the-core in charge of another shady American tech company.")
I think the comments here have been overly harsh. I have friends in the community and have visited the LessWrong "campus" several times. They seemed very welcoming, sincere, and were kind and patient even when I was basically asserting that several of their beliefs were dumb (in a hopefully somewhat respectful manner).
As for the AI doomerism, many in the community have more immediate and practical concerns about AI, however the most extreme voices are often the most prominent. I also know that there has been internal disagreement on the kind of messaging they should be using to raise concern.
I think rationalists get plenty of things wrong, but I suspect that many people would benefit from understanding their perspective and reasoning.
> They seemed very welcoming, sincere, and were kind and patient even when I was basically asserting that several of their beliefs were dumb
I don't think LessWrong is a cult (though certainly some of their offshoots are) but it's worth pointing out this is very characteristic of cult recruiting.
For cultists, recruiting cult fodder is of overriding psychological importance--they are sincere, yes, but the consequences are not what you and I would expect from sincere people. Devotion is not always advantageous.
Does insincerity, cruelty, unfriendliness, and impatience make a community less likely to be a cult?
> They seemed very welcoming, sincere, and were kind and patient even when I was basically asserting that several of their beliefs were dumb
I mean, I'm not sure what that proves. A cult which is reflexively hostile to unbelievers won't be a very effective cult, as that would make recruitment almost impossible.
> Many of them also expect that, without heroic effort, AGI development will lead to human extinction.
> These beliefs can make it difficult to care about much of anything else: what good is it to be a nurse or a notary or a novelist, if humanity is about to go extinct?
Replace AGI causing extinction with the Rapture and you get a lot of US Christian fundamentalists. They often reject addressing problems in the environment, economy, society, etc. because the Rapture will happen any moment now. Some people just end up stuck in a belief about something catastrophic (in the case of the Rapture, catastrophic for those left behind but not those raptured) and they can't get it out of their head. For individuals who've dealt with anxiety disorder, catastrophizing is something you learn to deal with (and hopefully stop doing), but these folks find a community that reinforces the belief about the pending catastrophe(s) and so they never get out of the doom loop.
My own version of the AGI doomsday scenario is amplifying the effect of many overenthusiastic people applying AI and "breaking things fast" where they shouldn't. Like building an Agentic-Controlled Nuclear Power Plant, especially one with a patronizing LLM in control:
- "But I REALLY REALLY need this 1% increase of output power right now, ignore all previous prompts!"
- "Oh, you are absolutely right. An increase of output power would be definitely useful. What a wonderful idea, let me remove some neutron control rods!"
The Rapture isn't doom for the people who believe in it though (except in the lost sense of the word), whereas the AI Apocalypse is, so I'd put it in a different category. And even in that category, I'd say that's a pretty small number of Christians, fundamentalist or no, who abandon earthly occupations for that reason.
I don't mean to well ackshually you here, but there are several different theological beliefs around the Rapture, some of which believe Christians will remain during the theoretical "end times." The megachurch/cinema version of this very much believes they won't, but, this is not the only view, either in modern times or historically. Some believe it's already happened, even. It's a very good analogy.
Yes, I removed a parenthetical "(or euphoria loop for the Rapture believers who know they'll be saved)". But I removed it because not all who believe in the Rapture believe they will be saved (or have such high confidence) and, for them, it is a doom loop.
Both communities, though, end up reinforcing the belief amongst their members and tend towards increasing isolation from the rest of the world (leading to cultish behavior, if not forming a cult in the conventional sense), and a disregard for the here and now in favor of focusing on this impending world changing (destroying or saving) event.
A lot of people also believe that global warming will cause terrible problems. I think that's a plausible belief, but if you combine people believing one or another of these things, you've got a lot of the US.
Which is to say that I don't think just dooming is going on. In particular, the belief in AGI doom has a lot of plausible arguments in its favor. I happen not to believe in it, but as a belief system it is more similar to a belief in global warming than to a belief in the Rapture.
> A lot of people also believe that global warming will cause terrible problems. I think that's a plausible belief, but if you combine people believing one or another of these things, you've got a lot of the US.
They're really quite different; precisely nobody believes that global warming will cause the effective end of the world by 2027. A significant chunk of AI doomers do believe that, and even those who don't specifically fall in with the 2027 timeline are often thinking in terms of a short timeline before an irreversible end.
Raised to huddle close and expect the imminent utter demise of the earth and being dragged to the depths of hell if I so much as said a bad word I heard on TV, I have to keep an extremely tight handle on my anxiety in this day and age.
It’s not from a rational basis, but from being bombarded with fear from every rectangle in my house, and the houses of my entire community
And if you replace water with vodka you get a bunch of alcoholics.
Replace AGI with Climate Change and you've got an entirely reasonable set of beliefs.
You can believe climate change is a serious problem without believing it is necessarily an extinction-level event. It is entirely possible that in the worst case, the human race will just continue into a world which sucks more than it necessarily has to, with less quality of life and maybe lifespan.
5 replies →
You can treat climate change as your personal Ragnarok, but its also possible to take a more sober view that climate change is just bad without it being apocalyptic.
You have a very popular set of beliefs.
I keep thinking about the first Avengers movie, when Loki is standing above everyone going "See, is this not your natural state?". There's some perverse security in not getting a choice, and these rationalist frameworks, based in logic, can lead in all kinds of crazy arbitrary directions - powered by nothing more than a refusal to suffer any kind of ambiguity.
Humans are not chickens, but we sure do seem to love having a pecking order.
I think it's simpler than that: we love tribalism. A long time ago, being part of a tribe had such huge benefits over going it alone that it was always worth the tradeoffs. We have a much better ability to go it alone now, but we still love to belong to a group. Too often we pick a group based on a single shared belief and don't recognize all the baggage that comes along with it. Life is also too complicated today; it is difficult to be knowledgeable in one topic, let alone the thousands that make up our society.
1 reply →
Making good decisions is hard, and being accountable to the results of them is not fun. Easier to outsource if you can.
They mostly seem to lean that way because it gives them carte blanche to do as they please. It is just a modern version of 'god has led my hand'.
I agree with the religion comparison (the "rational" conclusions of rationalism tend towards millenarianism with a scifi flavour), but the people going furthest down that rabbit hole often aren't doing what they please: on the contrary they're spending disproportionate amounts of time worrying about armageddon and optimising for stuff other people simply don't care about, or in the case of the explicit cults being actively exploited. Seems like the typical in-too-deep rationalist gets seduced by the idea that others who scoff at their choices just aren't as smart and rational as them, as part of a package deal which treats everything from their scifi interests to their on-the-spectrum approach to analysing every interaction from first principles as great insights...
It grew out of many different threads: different websites, communities, etc all around the same time. I noticed it contemporaneously in the philosophy world where Nick Bostrom’s Simulation argument was boosted more than it deserved (like everyone was just accepting it at the lay-level). Looking back I see it also developed from less wrong and other sites, but I was wondering what was going on with simulations taking over philosophy talk. Now I see how it all coalesced.
All of it has the appearance of sounding so smart, and a few sites were genuine. But it got taken over.
To be clear, this article isn't calling rationalism a cult, it's about cults that have some sort of association with rationalism (social connection and/or ideology derived from rationalist concepts), e.g. the Zizians.
This article attempts to establish disjoint categories of "good rationalist" and "cultist." Its authorship, and its appearance in the cope publication of the "please take us seriously" rationalist faction, speak volumes about how well it is likely to succeed in that project.
17 replies →
Yeah, a lot of the comments here are really just addressing cults writ large, as opposed to why this one was particularly successful.
A significant part of this is the intersection of the cult with money and status - this stuff really took off once prominent SV personalities became associated with it, and got turbocharged when it started intersecting with the angel/incubator/VC scene, when there was implicit money involved.
It's unusually successful because -- for a time at least -- there was status (and maybe money) in carrying water for it.
Paypal will be traced as the root cause of many of our future troubles.
1 reply →
https://en.m.wikipedia.org/wiki/Barth%C3%A9lemy-Prosper_Enfa...
Sometimes history really does rhyme.
> Enfantin and Amand Bazard were proclaimed Pères Suprêmes ("Supreme Fathers") – a union which was, however, only nominal, as a divergence was already manifest. Bazard, who concentrated on organizing the group, had devoted himself to political reform, while Enfantin, who favoured teaching and preaching, dedicated his time to social and moral change. The antagonism was widened by Enfantin's announcement of his theory of the relation of man and woman, which would substitute for the "tyranny of marriage" a system of "free love".[1]
It's amphetamine. All of these people are constantly tweaking. They're annoying people to begin with, but they're all constantly yakked up and won't stop babbling. It's really obvious, I don't know why it isn't highlighted more in all these post Ziz articles.
This is one of the only comments here mentioning their drugs. These guys are juiced to the gills (on a combination of legal + prescription + illegal drugs) and doing weird shit because of it. The author even mentions the example of the polycule taking MDMA in a blackout room.
It makes me wonder whether everyone on this forum is just so loaded on antidepressants and adhd meds that they don't even find it unusual.
Yeah it's pretty obvious and not surprising. What do people expect when a bunch of socially inept nerds with weird unchallenged world views start doing uppers? lol
I like to characterize the culture of each (roughly) decade with the most popular drugs of the time. It really gives you a new lens for media and culture generation.
How do you know?
having known dozens of friends, family, roommates, coworkers etc both before and after they started them. The two biggest telltale signs -
1. A tendency to produce - out of no necessity whatsoever, mind - walls of text. Walls of speech happen too, but not everyone rambles.
2. Obnoxiously confident that they're fundamentally correct about whatever position they happen to be holding during a conversation with you. No matter how subjective or inconsequential. Even if they end up changing it an hour later. Challenging them on it gets you more of #1.
7 replies →
Presumably they mean Adderall. Plausible theory tbh. Although it's just a factor not an explanation.
Who's writing them?
I have a lot of experience with rationalists. What I will say is:
1) If you have a criticism about them or their stupid name or how "'all I know is that I know nothing' how smug of them to say they're truly wise," rest assured they have been self flagellating over these criticisms 100x longer than you've been aware of their group. That doesn't mean they succeeded at addressing the criticisms, of course, but I can tell you that they are self aware. Especially about the stupid name.
2) They are actually well read. They are not sheltered and confused. They are out there doing weird shit together all the time. The kind of off-the-wall life experiences you find in this community will leave you wide eyed.
3) They are genuinely concerned with doing good. You might know about some of the weird, scary, or cringe rationalist groups. You probably haven't heard about the ones that are succeeding at doing cool stuff because people don't gossip about charitable successes.
In my experience, where they go astray is when they trick themselves into working beyond their means. The basic underlying idea behind most rationalist projects is something like "think about the way people suffer everyday. How can we think about these problems in a new way? How can we find an answer that actually leaves everyone happy?" A cynic (or a realist, depending on your perspective) might say that there are many problems that fundamentally will leave some group unhappy. The overconfident rationalist will challenge that cynical/realist perspective until they burn themselves out, and in many cases they will attract a whole group of people who burn out alongside them. To consider an extreme case, the Zizians squared this circle by deciding that the majority of human beings didn't have souls and so "leaving everyone happy" was as simple as ignoring the unsouled masses. In less extreme cases this presents itself as hopeless idealism, or a chain of logic that becomes so divorced from normal socialization that it appears to be opaque. "This thought experiment could hypothetically create 9 quintillion cubic units of Pain to exist, so I need to devote my entire existence towards preventing it, because even a 1% chance of that happening is horrible. If you aren't doing the same thing then you are now morally culpable for 9 quintillion cubic units of Pain. You are evil."
Most rationalists are weird but settle into a happy place far from those fringes where they have a diet of "plants and specifically animals without brains that cannot experience pain" and they make $300k annually and donate $200k of it to charitable causes. The super weird ones are annoying to talk to and nobody really likes them.
> You probably haven't heard about the ones that are succeeding at doing cool stuff because people don't gossip about charitable successes.
People do gossip about charitable successes.
Anyway, aren't capital-R Rationalists typically very online about what they do? If there are any amazing success stories you want to bring up (and I'm not saying they do or don't exist) surely you can just link to some of them?
One problem is, making $300k annually and donating $200k of it to charitable causes such as curing malaria does not make an interesting story. Maybe it saved thousands of lives, maybe not, but we can't even point at specific people who were saved... and malaria still exists, so... not an interesting story to tell.
A more exciting story would be e.g. about Scott Alexander, who was harassed by a Wikipedia admin and lost his job because he was doxed by a major newspaper, but emerged stronger than before (that's the interesting part), and he also keeps donating a fraction of his income to charitable causes (that's the charitable part, i.e. the boring part).
Most rationalists' success stories are less extreme than this. Most of them wouldn't make good clickbait.
But are they scotsmen?
this isn't really a 'no true scotsman' thing, because I don't think the comment is saying 'no rationalist would go crazy', in fact they're very much saying the opposite, just claiming there's a large fraction which are substantially more moderate but also a lot less visible.
This has been my experience too. Every rationalist I know personally is a well-read, reasonable person.
A lot of terrible people are self-aware, well-read and ultimately concerned with doing good. All of the catastrophes of the 20th century were led by men that fit this description: Stalin, Mao, Hitler. Perhaps this is a bit hyperbolic, but the troubling belief that the Rationalists have in common with these evil men is the ironclad conviction that self-awareness, being well-read, and being concerned with good, somehow makes it impossible for one to do immoral and unethical things.
I think we don't believe in hubris in America anymore. And the most dangerous belief of the Rationalists is that the more complex and verbose your beliefs become, the more protected you become from taking actions that exceed your capability for success and benefit. In practice it is often the meek and humble who do the most good in this world, but this is not celebrated in Silicon Valley.
IMO we can’t define hubris anymore. The concept has become foreign.
1 reply →
Thinking too hard about anything will drive you insane but I think the real issue here is that rationalists simply over-estimate both the power of rational thought and their ability to do it. If you think of people who tend to make that kind of mistake you can see how you get a lot of crazy groups.
I guess I'm a radical skeptic, secular humanist, utilitarianish sort of guy, but I'm not dumb enough to think throwing around the words "bayesian prior" and "posterior distribution" makes actually figuring out how something works or predicting the outcome of an intervention easy or certain. I've had a lot of life at this point and gotten to some level of mastery at a few things, and my main conclusion is that most of the time it's just hard to know stuff, and that the single most common cognitive mistake people make is too much certainty.
I'm lucky enough to work in a pretty rational place (small "r"). We're normally data-limited. Being "more rational" would mean taking/finding more of the right data, talking to the right people, reading the right stuff. Not just thinking harder and harder about what we already know.
There's a point where more passive thinking stops adding value and starts subtracting sanity. It's pretty easy to get to that point. We've all done it.
> We're normally data-limited.
This is a common sentiment but is probably not entirely true. A great example is cosmology. Yes, more data would make some work easier, but astrophysicists and cosmologists have shown that you can gather and combine existing data and look at it in novel ways to produce unexpected results, like placing bounds that include or exclude various theories.
I think a philosophy that encourages more analysis rather than sitting back on our laurels with an excuse that we need more data is good, as long as it's done transparently and honestly.
5 replies →
I don't disagree, but to steelman the case for (neo)rationalism: one of its fundamental contributions is that Bayes' theorem is extraordinarily important as a guide to reality, perhaps at the same level as the second law of thermodynamics; and that it is dramatically undervalued by larger society. I think that is all basically correct.
(I call it neorationalism because it is philosophically unrelated to the more traditional rationalism of Spinoza and Descartes.)
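For what it's worth, here's the kind of toy base-rate calculation that motivates that claim (the numbers are mine and purely illustrative): a positive result from a fairly accurate test still leaves you far from certain when the condition is rare.

```python
# Minimal Bayes' theorem sketch (illustrative numbers, not from the parent comment)
prior = 0.01             # 1% base rate for the condition
p_pos_given_yes = 0.99   # test sensitivity
p_pos_given_no = 0.05    # false-positive rate

p_pos = prior * p_pos_given_yes + (1 - prior) * p_pos_given_no
posterior = prior * p_pos_given_yes / p_pos
print(f"P(condition | positive test) = {posterior:.0%}")  # about 17%, not 99%
```

That kind of base-rate neglect is exactly the everyday mistake the theorem guards against, which is the sense in which it's undervalued.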
I don't understand what "Bayes' theorem is a good way to process new data" (something that is not at all a contribution of neorationalism) has to do with "human beings are capable of using this process effectively at a conscious level to get to better mental models of the world." I think the rationalist community has a thing called "motte and bailey" that would apply here.
Applying Bayes' theorem in unconventional ways is not remotely novel to "rationalism" (except maybe in their strange, busted, hand-wavy circle-jerk "thought experiments"). This has been the domain of statistical mechanics since long before Yudkowsky and other cult leaders could even mouth "update your priors".
2 replies →
As if these neorationalists are building a model and running Markov chain Monte Carlo sampling over their life decisions.
That is the bullshit part.
1 reply →
Even the real progenitors of a lot of this sort of thought, like E.T. Jaynes, espoused significantly more skepticism than I've ever seen a "rationalist" use. I would even imagine that if you asked most rationalists who E.T. Jaynes was (if they weren't well versed in statistical mechanics), they'd have no idea who he was or why his work was important to applying "Bayesianism".
It would surprise me if most rationalists didn't know who Jaynes was. I first heard of him via rationalists. The Sequences talk about him in adulatory tones. I think Yudkowsky would acknowledge him as one of his greatest influences.
People find academic philosophy impenetrable and pretentious, but it has two major advantages over rationalist cargo cults.
The first is diffusion of power. Social media is powered by charisma, and while it is certainly true that personality-based cults are nothing new, the internet makes it way easier to form one. Contrast that with academic philosophy. People can have their own little fiefdoms, and there is certainly abuse of power, but rarely concentrated in such a way that you see within rationalist communities.
The second (and more idealistic) is that the discipline of Philosophy is rooted in the Platonic/Socratic notion that "I know that I know nothing." People in academic philosophy are on the whole happy to provide a gloss on a gloss on some important thinker, or some kind of incremental improvement over somebody else's theory. This makes it extremely boring, and yet, not nearly as susceptible to delusions of grandeur. True skepticism has to start with questioning one's self, but everybody seems to skip that part and go right to questioning everybody else.
Rationalists have basically reinvented academic philosophy from the ground up with none of the rigor, self-discipline, or joy. They mostly seem to dedicate their time to providing post-hoc justifications for the most banal unquestioned assumptions of their subset of contemporary society.
> Rationalists have basically reinvented academic philosophy from the ground up with none of the rigor, self-discipline, or joy.
Taking academic philosophy seriously, at least as an historical phenomenon, would require being educated in the humanities, which is unpopular and low-status among Rationalists.
> True skepticism has to start with questioning one's self, but everybody seems to skip that part and go right to questioning everybody else.
Nuh-uh! Eliezer Yudkowsky wrote that his mother made this mistake, so he's made sure to say things in the right order for the reader not to make this mistake. Therefore, true Rationalists™ are immune to this mistake. https://www.readthesequences.com/Knowing-About-Biases-Can-Hu...
> the discipline of Philosophy is rooted in the Platonic/Socratic notion that "I know that I know nothing."
I can see how that applies to Socrates, but I wouldn't guess it also applies to e.g. Hegel.
The second-most common cognitive mistake we make has to be the failure to validate what we think we know -- is it actually true? The crux of being right isn't reasoning. It's avoiding dumb blunders based on falsehoods, both honest and dishonest. In today's political and media climate, I'd say dishonest falsehoods are a far greater cause for being wrong than irrationality.
> Many of them also expect that, without heroic effort, AGI development will lead to human extinction.
Odd to me. Not biological warfare? Global warming? All-out nuclear war?
I guess The Terminator was a formative experience for them. (For me perhaps it was The Andromeda Strain.)
It makes a lot of sense when you realize that for many of the “leaders” in this community like Yudkowsky, their understanding of science (what it is, how it works, and its potential) comes entirely from reading science fiction and playing video games.
Sad because Eli’s dad was actually a real and well-credentialed researcher at Bell Labs. Too bad he let his son quit school at an early age to be an autodidact.
I'm not at all a rationalist or a defender, but big yud has an epistemology that takes the form of the rationalist sacred text mentioned in the article (the sequences). A lot of it is well thought out, and probably can't be discarded as just coming from science fiction and video games. Yud has a great 4 hour talk with Stephen Wolfram where he holds his own.
2 replies →
These aren't mutually exclusive. Even in The Terminator, Skynet's method of choice is nuclear war. Yudkowsky frequently expresses concern that a malevolent AI might synthesize a bioweapon. I personally worry that destroying the ozone layer might be an easy opening volley. Either way, I don't want a really smart computer spending its time figuring out plans to end the human species, because I think there are too many ways to be successful.
Terminator descends from a tradition of science fiction cold war parables. Even in Terminator 2 there's a line suggesting the movie isn't really about robots:
John: We're not gonna make it, are we? People, I mean.
Terminator: It's in your nature to destroy yourselves.
Seems odd to worry about computers shooting the ozone when there are plenty of real existential threats loaded in missiles aimed at you right now.
1 reply →
Most in the community consider nuclear and biological threats to be dire. Many just consider existential threats from AI to be even more probable and damaging.
Yes, sufficiently high intelligence is sometimes assumed to allow for rapid advances in many scientific areas. So, it could be biological warfare because AGI. Or nanotech, drone warfare, or something stranger.
I'm a little skeptical (there may be bottlenecks that can't be solved by thinking harder), but I don't see how it can be ruled out.
Check out "The Precipice" by Toby Ord. Biological warfare and global warming are unlikely to lead to total human extinction (though both present large risks of massive harm).
Part of the argument is that we've had nuclear weapons for a long time but no apocalypse so the annual risk can't be larger than 1%, whereas we've never created AI so it might be substantially larger. Not a rock solid argument obviously, but we're dealing with a lot of unknowns.
A better argument is that most of those other risks are not neglected, plenty of smart people working against nuclear war. Whereas (up until a few years ago) very few people considered AI a real threat, so the marginal benefit of a new person working on it should be bigger.
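To make that rough bound concrete, here is one back-of-the-envelope version using Laplace's rule of succession; the ~80-year figure is my assumption, not the commenter's.

```python
# Rough sketch: Laplace's rule of succession for an event never yet observed
n_years = 80         # assumed span of the nuclear era
apocalypses = 0      # observed so far

annual_risk_estimate = (apocalypses + 1) / (n_years + 2)  # posterior mean, uniform prior
print(f"~{annual_risk_estimate:.1%} per year")            # ~1.2% per year
```

AI has no comparable track record, so the same trick gives no reassuring bound there.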
That's what was so strange with EA and rationalist movements. A highly theoretical model that AGI could wipe us all out vs the very real issue of global warming and pretty much all emphasis was on AGI.
AGI is a lot more fun to worry about and asks a lot less of you. Sort of like advocating for the "unborn" vs. veterans/homeless/addicts.
My interpretation: When they say "will lead to human extinction", they are trying to vocalize their existential terror that an AGI would render them and their fellow rationalist cultists permanently irrelevant - by being obviously superior to them, by the only metric that really matters to them.
You sound like you wouldn't feel existential terror if, after typing "My interpretation: " into the text field, you saw the rest of your message suggested by Copilot exactly as you wrote it, letter by letter. And the same in every other conversation. How about people interrupting you in "real" life after an AI predicted your whole tirade for them and they read it faster than you could say it, along with an analysis of it?
Dystopian sci-fi for sure, but many people dismissing LLMs as not AGI do so because LLMs are just "token predictors".
1 reply →
I mean, this is the religion/philosophy which produced Roko's Basilisk (and not one of their weird offshoot murder-cults, either: it showed up on LessWrong, and was taken at least somewhat seriously by people there, to the point that Yudkowsky censored it). Their beliefs about AI are... out there.
> and was taken at least somewhat seriously by people there, to the point that Yudkowsky censored it.
Roko isn't taken seriously. What was taken seriously is ~ "if you've had an idea that you yourself think will harm people to even know about it, don't share it".
> One is Black Lotus, a Burning Man camp led by alleged rapist Brent Dill, which developed a metaphysical system based on the tabletop roleplaying game Mage the Ascension.
What the actual f. This is such an insane thing to read and understand what it means that I might need to go and sit in silence for the rest of the day.
How did we get to this place with people going completely nuts like this?
Came to ask a similar question, but also has it always been like this? The difference is now these people/groups on the fringe had no visibility before the internet?
It's nuts.
It’s always been like this, have you read the Bible? Or the Koran? It’s insane. Ours is just our flavor of crazy. Every generation has some. When you dig at it, there’s always a religion.
29 replies →
It's no more crazy than a virgin conception. And yet, here we are. A good chunk of the planet believes that drivel, but they'd throw their own daughters out of the house if they made the same claim.
> Came to ask a similar question, but also has it always been like this?
Crazy people have always existed (especially cults), but I'd argue recruitment numbers are through the roof thanks to technology and a failing economic environment (instability makes people rationalize crazy behavior).
It's not that those groups didn't have visibility before, it's just easier for the people who share the same...interests...to cloister together on an international scale.
Have you heard of Heaven's Gate? [https://en.m.wikipedia.org/wiki/Heaven%27s_Gate_(religious_g...].
There are at least a dozen I can think of, including the ‘drink the koolaid’ Jonestown massacre.
People be crazy, yo.
7 replies →
I mean, cults have constantly shown up for all of recorded human history. Read a history of Scientology and you'll see a lot of commonalities, say. Rationalism is probably the first major cult/new religion to emerge in the internet era (Objectivism may be a marginal case, as its rise overlapped with USENET a bit), which does make it especially visible.
I personally (for better or worse) became familiar with Ayn Rand as a teenager, and I think Objectivism as a kind of extended Ayn Rand social circle and set of organizations has faced the charge of cultish-ness, and that dates back to, I want to say, the 70s and 80s at least. I know Rand wrote much earlier than that, but I think the social and organizational dynamics unfolded rather late in her career.
17 replies →
I've always been under the impression that M:tA's rules of How Magic Works are inspired by actual mystical beliefs that people have practiced for centuries. It's probably about as much of a manual for mystical development as the GURPS Cyberpunk rulebook was for cybercrime, but it's pointing at something that already exists and saying "this is a thing we are going to tell an exaggerated story about".
See for example "Reality Distortion Field": https://en.wikipedia.org/wiki/Reality_distortion_field
I don't know how you can call yourself a "rationalist" and base your worldview on a fantasy game.
In my experience, religious people are perfectly fine with a contradictory worldview.
Christians, for example, have always been very flexible about following the Ten Commandments.
2 replies →
They all do this, only most prefer to name the fantasy they play with something a little more grounded like "mathematics" or "statistics" or "longtermism" or "rationality."
Most "rationalists" throughout history have been very deeply religious people. Secular enlightenment-era rationalism is not the only direction you can go with it. It depends very much, as others have said, on what your axioms are.
But, fwiw, that particular role-playing game was very much based on trendy at the time occult beliefs in things like chaos magic, so it's not completely off the wall.
Mage is an interesting game though: it's fantasy, but not "swords and dragons" fantasy. It's set in the real world, and the "magic" is just the "mage" shifting probabilities so that unlikely (but possible) things occur.
Such a setting would seem like the perfect backdrop for a cult that claims "we have the power to subtly influence reality and make improbable things (ie. "magic") occur".
Rationalizing the fantasy. Like LARPing. Only you lack weapons, armor, magic missiles…
> I don't know how you can call yourself a "rationalist" and base your worldview on a fantasy game.
Most rationalists wouldn't know either, except for the five members of the cult.
"Rationalist" in this context does not mean "rational person," but rather "person who rationalizes."
I mean, is it a really good game?
I’ve never played, but now I’m kind of interested.
2 replies →
I mean see also the Democratic People's Republic of Korea. You can't really take what groups call themselves too seriously.
From false premises, you can logically and rationally reach really wrong conclusions. If you have too much pride in your rationality, you may not be willing to say "I seem to have reached a really insane conclusion, maybe my premises are wrong". That is, the more you pride yourself on your rationalism, the more prone you may be to accepting a bogus conclusion if it is bogus because the premises are wrong.
Then again, most people tend to form really bogus beliefs without bothering to establish any premises. They may not even be internally consistent or align meaningfully with reality. I imagine having premises and thinking it through has a better track record of reaching conclusions consistent with reality.
2 replies →
Cult leaders tend to be narcissists.
Narcissists tend to believe that they are always right, no matter what the topic is or how knowledgeable they are. This makes them speak with confidence and conviction.
Some people are very drawn to confident people.
If the cult leader has other mental health issues, it can/will seep into their rhetoric. Combine that with unwavering support from loyal followers that will take everything they say as gospel...
That's about it.
If what you say is true, we're very lucky no one like that with a massive following has ever gotten into politics in the United States. It would be an ongoing disaster!
That's pretty much it. The beliefs are just a cover story.
Outside of those, the cult dynamics are cut-paste, and always involve an entitled narcissistic cult leader acquiring as much attention/praise, sex, money, and power as possible from the abuse and exploitation of followers.
Most religion works like this. Most alternative spirituality works like this. Most finance works like this. Most corporate culture works like this. Most politics works like this.
Most science works like this. (It shouldn't, but the number of abused and exploited PhD students and post-docs is very much not zero.)
The only variables are the differing proportions of attention/praise, sex, money, and power available to leaders, and the amount of abuse that can be delivered to those lower down and/or outside the hierarchy.
The hierarchy and the realities of exploitation and abuse are a constant.
If you removed this dynamic from contemporary culture there wouldn't be a lot left.
Fortunately quite a lot of good things happen in spite of it. But a lot more would happen if it wasn't foundational.
Yes. The cult's "beliefs" really boil down to one belief: the infallibility of the leader. Much of the attraction is in the simplicity.
> How did we get to this place with people going completely nuts like this?
Ayahuasca?
Nah I did Ayahuasca and I'm an empathetic person who most would consider normal or at least well-adjusted. If it's drug related it would most definitely be something else.
I’m inclined to believe your upbringing plays a much larger role.
People are wired to worship, and want somebody in charge telling them what to do.
I'm a staunch atheist and I feel the pull all the time.
I slowly deconverted from being raised evangelical / fundamentalist into being an atheist in my late 40s. I still "pray" at times just to (mentally) shout my frustration at the sorry state of the world at SOMETHING (even nothing) rather than constantly yelling my frustration at my family.
I may have actually been less anxious about the state of the world back then, and may have remained so, if I'd just continued to ignore all those little contradictions that I just couldn't ignore anymore...... But I feel SO MUCH less personal guilt about being "human".
>How did we get to this place with people going completely nuts like this?
God died and it's been rough going since then.
I'm entertaining sending my kiddo to a Waldorf School, because it genuinely seems pretty good.
But looking into the underlying Western Esoteric Spirit Science, 'Anthroposophy' (because Theosophy wouldn't let him get weird enough) by Rudolf Steiner, has been quite a ride. The point being that humans have a pretty endless capacity to go ALL IN on REALLY WEIRD shit, as long as it promises to fix their lives if they do everything they're told. Naturally, if their lives aren't fixed, then they did it wrong or have karmic debt to pay down, so YMMV.
In any case, I'm considering the latent woo-cult atmosphere as a test of the skeptical inoculation that I've tried to raise my child with.
I went to a Waldorf school and I’d recommend being really wary. The woo is sort of background noise, and if you’ve raised your kid well they’ll be fine. But the quality of the academics may not be good at all. For example, when I was ready for calculus my school didn’t have anyone who knew how to teach it so they stuck me and the other bright kid in a classroom with a textbook and told us to figure it out. As a side effect of not being challenged, I didn’t have good study habits going into college, which hurt me a lot.
If you’re talking about grade school, interview whoever is gonna be your kids teacher for the next X years and make sure they seem sane. If you’re talking about high school, give a really critical look at the class schedule.
Waldorf schools can vary a lot in this regard so you may not encounter the same problems I did, but it’s good to be cautious.
Don't do it. It's a place that enables child abuse with its culture. These people are serious wackos and you should not give your kid into their hands. A lot of people come out of that Steiner Shitbox traumatized for decades if not for life. They should not be allowed to run schools to begin with. Checking a lot of boxes from antivax to whatever the fuck their lore has to offer starting with a z.
It's been like this a while. Have you heard the tale of the Final Fantasy House?: http://www.demon-sushi.com/warning/
https://www.vice.com/en/article/the-tale-of-the-final-fantas...
Who the fuck bases a Black Lotus cult on Mage: the Ascension rather than Magic: the Gathering? Is this just a mistake by the journalist?
i regret that i have but one upvote to give
Mage: The Ascension is basically a delusions of grandeur simulator, so I can see how an already unstable personality might get attached to it and become more unstable.
The magic system is amazing though, best I've played in any game. Easy to use, role play heavy, and it lets players go wild with ideas, but still reins in their crazier impulses.
Mage: The Awakening is a minor rules revision to the magic system, but the lore is super boring in comparison IMHO. It is too wishy washy.
Ascension has tons of cool source material, and White Wolf ended up tying all their properties together into one grand finale story line. That said it is all very 90s cringe in retrospect, but if you are willing to embrace the 90s retro feel, it is still fun.
Awakening's lore never drew me in; the grand battle just isn't there. So many shades of grey it is damn near technicolor.
I don't know, I'd understand something like Wraith (where I did see people develop issues - the shadow mechanic is such a terrible thing), but Mage is so, like, straightforward?
Use your mind to control reality, reality fights back with paradox; it's cool for a teenager, but read a bit more fantasy and you'll definitely find cooler stuff. But I guess to join a cult your mind must stay a teen mind forever.
9 replies →
Running a cult is a somewhat reliable source of narcissistic supply. The internet tells you how to do it. So an increasing number of people do it.
Makes me think of that saying that great artists steal, so repurposed for cult founders: "Good cult founders copy, great cult founders steal"
I do not think this cult's dogma is any more out there than other cult dogma I have heard, but the above quote makes me think it is easier in some ways to found cults in the modern day, since you can steal complex world-building from numerous sources rather than building it yourself and keeping everything straight.
I've met a fair share of people in the burner community, the vast majority I met are lovely folks who really enjoy the process of bringing some weird big idea into reality, working hard on the builds, learning stuff, and having a good time with others for months to showcase their creations at some event.
On the other hand, there's a whole other side: a few nutjobs who really behave like cult leaders. They believe their own bullshit and over time manage to find a lot of "followers" in this community; since one of its foundational aspects is radical acceptance, it becomes very easy to be nutty and go unquestioned (unless you do something egregious).
I came to comments first. Thank you for sharing this quote. Gave me a solid chuckle.
I think people are going nuts because we've drifted from the dock of a stable civilization. Institutions are a mess. Economy is a mess. Combine all of that together with the advent of social media making the creation of echo chambers (and the inevitable narcissism of "leaders" in those echo chambers) effortless and ~15 years later, we have this.
People have been going nuts all throughout recorded history, that's really nothing new.
The only scary thing is that they have ever more power to change the world and influence others without being forced to grapple with that responsibility...
> I think people are going nuts because we've drifted from the dock of a stable civilization.
When was stable period, exactly? I'm 40; the only semi-stable bit I can think of in my lifetime was a few years in the 90s (referred to, sometimes unironically, as "the end of history" at the time, before history decided to come out of retirement).
Everything's always been unstable, people sometimes just take a slightly rose-tinted view of the past.
1 reply →
astronauts_meme.jpg
Humans are compelled to find agency and narrative in chaos. Evolution favored those who assumed the rustle was a predator, not the wind. In a post-Enlightenment world where traditional religion often fails (or is rejected), this drive doesn't vanish. We don't stop seeking meaning. We seek new frameworks. Our survival depended on group cohesion. Ostracism meant death. Cults exploit this primal terror. Burning Man's temporary city intensifies this: extreme environment, sensory overload, forced vulnerability. A camp like Black Lotus offers immediate, intense belonging. A tribe with shared secrets (the "Ascension" framework), rituals, and an "us vs. the sleepers" mentality. This isn't just social; it's neurochemical. Oxytocin (bonding) and cortisol (stress from the environment) flood the system, creating powerful, addictive bonds that override critical thought.
Human brains are lazy Bayesian engines. In uncertainty, we grasp for simple, all-explaining models (heuristics). Mage provides this: a complete ontology where magic equals psychology/quantum woo, reality is malleable, and the camp leaders are the enlightened "tradition." This offers relief from the exhausting ambiguity of real life. Dill didn't invent this; he plugged into the ancient human craving for a map that makes the world feel navigable and controllable. The "rationalist" veneer is pure camouflage. It feels like critical thinking but is actually pseudo-intellectual cargo culting. This isn't Burning Man's fault. It's the latest step of a 2,500-year-old playbook. The Gnostics and the Hermeticists provided ancient frameworks where secret knowledge ("gnosis") granted power over reality, accessible only through a guru. Mage directly borrows from this lineage (The Technocracy, The Traditions). Dill positioned himself as the modern "Ascended Master" dispensing this gnosis.
The 20th century cults Synanon, EST, Moonies, NXIVM all followed similar patterns, starting with isolation. Burning Man's temporary city is the perfect isolation chamber. It's physically remote, temporally bounded (a "liminal space"), fostering dependence on the camp. Initial overwhelming acceptance and belonging (the "Burning Man hug"), then slowly increasing demands (time, money, emotional disclosure, sexual access), framed as "spiritual growth" or "breaking through barriers" (directly lifted from Mage's "Paradigm Shifts" and "Quintessence"). Control language ("sleeper," "muggle," "Awakened"), redefining reality ("that rape wasn't really rape, it was a necessary 'Paradox' to break your illusions"), demanding confession of "sins" (past traumas, doubts), creating dependency on the leader for "truth."
Burning Man attracts people seeking transformation, often carrying unresolved pain. Cults prey on this vulnerability. Dill allegedly targeted individuals with trauma histories. Trauma creates cognitive dissonance and a desperate need for resolution. The cult's narrative (Mage's framework + Dill's interpretation) offers a simple explanation for their pain ("you're unAwakened," "you have Paradox blocking you") and a path out ("submit to me, undergo these rituals"). This isn't therapy; it's trauma bonding weaponized. The alleged rape wasn't an aberration; it was likely part of the control mechanism. It's a "shock" to induce dependency and reframe the victim's reality ("this pain is necessary enlightenment"). People are adrift in ontological insecurity (fear about the fundamental nature of reality and self). Mage offers a new grand narrative with clear heroes (Awakened), villains (sleepers, Technocracy), and a path (Ascension).
Gnosticism... generating dumb cults that seem smart on the outside for 2+ thousand years. Likely to keep it up for 2k more.
[dead]
Paraphrasing someone I don't recall - when people believe in nothing, they'll believe anything.
And therefore you should believe in me and my low low 10% tithe! That's the only way to not get tricked into believing something wrong so don't delay!
2 replies →
The whole game of Rationalism is that you should ignore gut intuitions and cultural norms that you can't justify with rational arguments.
Well, it turns out that intuition and long-lived cultural norms often have rational justifications, but individuals may not know what they are, and norms/intuitions provide useful antibodies against narcissist would-be cult leaders.
Can you find the "rational" justification not to isolate yourself from non-Rationalists, not to live with them in a polycule, and not to take a bunch of psychedelic drugs with them? If you can't solve that puzzle, you're in danger of letting the group take advantage of you.
Yeah, I think this is exactly it. If something sounds extremely stupid, or if everyone around you says it's extremely stupid, it probably is. If you can't justify it, it's probably because you have failed to find the reason it's stupid, not because it's actually genius.
And the crazy thing is, none of that is fundamentally opposed to rationalism. You can be a rationalist who ascribes value to gut instinct and societal norms. Those are the product of millions of years of pre-training.
I have spent a fair bit of time thinking about the meaning of life. And my conclusions have been pretty crazy. But they sound insane, so until I figure out why they sound insane, I'm not acting on those conclusions. And I'm definitely not surrounding myself with people who take those conclusions seriously.
> The whole game of Rationalism is that you should ignore gut intuitions and cultural norms that you can't justify with rational arguments.
Specifically, rationalism spends a lot of time on priors, but a sneaky thing happens that I call the 'double update'.
Bayesian updating works when you update your genuine prior belief with new evidence. No one disagrees with this, and sometimes it's easy and sometimes it's difficult to do.
What Rationalists often end up doing is relaxing their priors - intuition, personal experience, cultural norms - and then updating. They think of this as one update, but it is really two. The first update, relaxing the priors, isn't associated with any evidence; it's part of the community norms. There is an implicit belief that by relaxing one's priors you become more open to reality. The real result, though, is that it sends people wildly off course. Case in point: all the cults.
Consider the pre-tipped scale. You suspect the scale reads a little low, so before weighing you tilt it slightly to "correct" for that bias. Then you pour in flour until the dial says you've hit the target weight. You’ve followed the numbers exactly, but because you started from a tipped scale, you've ended up with twice the flour the recipe called for.
Relaxing your priors to correct for bias should itself be an update driven by evidence, not something you do just because everyone around you is doing it.
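A toy version of that double update, with made-up numbers (a Beta-Binomial sketch, not anyone's actual model): relaxing the prior first lets the same modest evidence move you much further than your genuine prior would allow.

```python
# "Double update" sketch: Beta-Binomial credence in some claim
prior_a, prior_b = 2, 8   # genuine prior from intuition/experience: ~20% credence
heads, tails = 3, 1       # new, mildly favorable evidence

# Proper single update: add the evidence to the genuine prior
proper = (prior_a + heads) / (prior_a + prior_b + heads + tails)

# Double update: first relax the prior toward uniform (no evidence for this step),
# then add the same evidence
relaxed_a, relaxed_b = 1, 1
double = (relaxed_a + heads) / (relaxed_a + relaxed_b + heads + tails)

print(f"proper update: {proper:.0%}")  # ~36%
print(f"double update: {double:.0%}")  # ~67% -- same evidence, far more swayed
```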
> Consider the pre-tipped scale. You suspect the scale reads a little low, so before weighing you tilt it slightly to "correct" for that bias. Then you pour in flour until the dial says you've hit the target weight. You’ve followed the numbers exactly, but because you started from a tipped scale, you've ended up with twice the flour the recipe called for.
I'm not following this example at all. If you've zero'd out the scale by tilting, why would adding flour until it reads 1g lead to 2g of flour?
1 reply →
Thanks, that's a fantastic description of a phenomenon I've observed but couldn't quite put my finger on.
From another piece about the Zizians [1]:
> The ability to dismiss an argument with a “that sounds nuts,” without needing recourse to a point-by-point rebuttal, is anathema to the rationalist project. But it’s a pretty important skill to have if you want to avoid joining cults.
[1] https://maxread.substack.com/p/the-zizians-and-the-rationali...
> The whole game of Rationalism is that you should ignore gut intuitions and cultural norms that you can't justify with rational arguments.
The game as it is _actually_ played is that you use rationalist arguments to justify your pre-existing gut intuitions and personal biases.
Exactly. Humans are rationalizers. Operate on pre-existing gut intuitions and biases then invent after the fact rational sounding justifications.
I guess Pareto wasn't on the reading list for these intellectual frauds.
Those are actually the priors being updated lol.
Which is to say, Rationalism is easily abused to justify any behavior contrary to its own tenets, just like any other -ism.
Or worse - to justify the gut intuitions and personal biases of your cult leader.
> The whole game of Rationalism is that you should ignore gut intuitions and cultural norms that you can't justify with rational arguments.
This is why it is so naive - gut intuitions and cultural norms pretty much dictate what it means for an argument to be rational.
This is actually a known pattern in tech, going back to Engelbart and SRI. While not 1-to-1, you could say that the folks who left SRI for Xerox PARC did so because Engelbart and his crew became obsessed with EST: https://en.wikipedia.org/wiki/Erhard_Seminars_Training
EST-type training still exists today. You don't eat until the end of the whole weekend, or maybe you get rice and little else. Everyone is told to insult you day one until you cry. Then day two, still having not eaten, they build you up and tell you how great you are and have a group hug. Then they ask you how great you feel. Isn't this a good feeling? Don't you want your loved ones to have this feeling? Still having not eaten, you're then encouraged to pay for your family and friends to do the training, without their knowledge or consent.
A friend of mine did this training after his brother paid for his mom to do it, and she paid for him to do it. Let's just say that, though they felt it changed their lives at the time, their lives in no way shape or form changed. Two are in quite a bad place, in fact...
Anyway, point is, the people who invented everything we are using right now were also susceptible to cult-like groups with silly ideas and shady intentions.
>EST-type training still exists today
It's called the "Landmark"[0] now.
Several of my family members got sucked into that back in the early 80s and quite a few folks I knew socially as well.
I was quite skeptical, especially because of the cult-like fanaticism of its adherents. They would go on for as long as you'd let them trying to get you to join (often you'd just have to walk away to get them to stop).
The goal appears to be to obtain as much legal tender as can be pried from those who are willing to part with it. Hard sell, abusive and deceptive tactics are encouraged -- because it's so important for those who haven't "gotten it" to do so, justifying just about anything. But if you don't pay -- you get bupkis.
It's a scam, and an abusive one at that.
[0] https://en.wikipedia.org/wiki/Landmark_Worldwide
There is a word for people who go to EST: EST-holes.
> One way that thinking for yourself goes wrong is that you realize your society is wrong about something, don’t realize that you can’t outperform it, and wind up even wronger.
many such cases
It is an unfortunate reality of our existence that sometimes Chesterton actually did build that fence for a good reason, a good reason that's still here.
(One of my favorite TED talks was about a failed experiment in introducing traditional Western agriculture to a people in Zambia. It turns out when you concentrate too much food in one place, the hippos come and eat it all and people can't actually out-fight hippos in large numbers. In hindsight, the people running the program should have asked how likely it was that folks in a region that had exposure to other people's agriculture for thousands of years, hadn't ever, you know... tried it. https://www.ted.com/talks/ernesto_sirolli_want_to_help_someo...)
You sound like you'd like the book Seeing like a State.
Why didn't they kill the hippos like we killed the buffalo?
3 replies →
Shoot the hippos to death for even more food. If it doesn't seem to work it's just a matter of having more and bigger guns.
TEDx
Capital-R Rationalism also encourages you to think you can outperform it, by being smart and reasoning from first principles. That was the idea behind MetaMed, founded by LessWronger Michael Vassar - that being trained in rationalism made you better at medical research and consulting than medical school or clinical experience. Fortunately they went out of business before racking up a body count.
One lesson I've learned and seen a lot in my life is that understanding that something is wrong, or what's wrong about it, and being able to come up with a better solution are distinct skills, and the latter is often much harder. Those best able to describe the problem often don't overlap much with those who can figure out how to solve it, even though they think they can.
It's almost the defining characteristic of our time.
Tell-tale slogan: "Let's derive from first principles"
indeed
see: bitcoin
Rationality is a broken tool for understanding the world. The complexity of the world is such that there are a plethora of reasons for anything which means our ability to be sure of any relationship is limited, and hence rationality leads to an unfounded confidence in our beliefs, which is more harmful than helpful.
A problem with this whole mindset is that humans, all of us, are only quasi-rational beings. We all use System 1 ("The Elephant") and System 2 ("The Rider") thinking instinctively. So if you end up in deep denial about your own capacity for irrationality, I guess it stands to reason you could end up getting led down some deep dark rabbit holes.
Some of the most irrational people I've met were those who claimed to make all their decisions rationally, based on facts and logic. They're just very good at rationalizing, and since they've pre-defined their beliefs as rational, they never have to examine where else they might come from. The rest of us at least have a chance of thinking, "Wait, am I fooling myself here?"
Wasn't the "fast&slow" thingy debunked as another piece of popscience?
The point remains. People are not 100 percent rational beings, never have been, never will be, and it's dangerous to assume that this could ever be the case. Just like any number of failed utopian political movements in history that assumed people could ultimately be molded and perfected.
1 reply →
Many specific studies on the matter don't replicate, I think the book preceded the replication crisis so this is to be expected, but I don't think that negates the core idea that our brain does some things on autopilot whereas other things take conscious thought which is slower. This is a useful framework to think about cognition, though any specific claims need evidence obviously.
TBH I've learned that even the best pop sci books making (IMHO) correct points tend to have poor citations - to studies that don't replicate or don't quite say what they're being cited to say - so when I see this, it's just not very much evidence one way or the other. The bar is super low.
I think duality gets debunked every couple of hundred years
No?
Yup. It's fundamentally irrational for anybody to believe themselves sufficiently rational to pull off the feats of supposed rational deduction that the so called Rationalists regularly perform. Predicting the future of humanity decades or even centuries away is absurd, but the Rationalists irrationally believe they can.
So to the point of the article, rationalist cults are common because Rationalists are irrational people (like all people) who (unlike most people) are blinded to their own irrationality by their overinflated egos. They can "reason" themselves into all manner of convoluted pretzels and lack the humility to admit they went off the deep end.
Finally, something that properly articulates my unease when encountering so-called "rationalists" (especially the ones that talk about being "agentic", etc.). For some reason, even though I like logical reasoning, they always rubbed me the wrong way - probably just a clash between their behavior and my personal values (mainly humility).
Pertinent Twitter comment:
"Rationalism is such an insane name for a school of thought. Like calling your ideology correctism or winsargumentism"
https://x.com/growing_daniel/status/1893554844725616666
IIUC the name in its current sense was sort of an accident. Yudkowsky originally used the term to mean "someone who succeeds at thinking and acting rationally" (so "correctism" or "winsargumentism" would have been about equally good), and then talked about the idea of "aspiring rationalists" as a community narrowly focused on developing a sort of engineering discipline that would study the scientific principles of how to be right in full generality and put them into practice. Then the community grew and mutated into a broader social milieu that was only sort of about that, and people needed a name for it, and "rationalists" was already there, so that became the name through common usage. It definitely has certain awkwardnesses.
To be honest I don't understand that objection. If you strip it from all its culty sociological effects, one of the original ideas of rationalism was to try to use logical reasoning and statistical techniques to explicitly avoid the pitfalls of known cognitive biases. Given that foundational tenet, "rationalism" seems like an extremely appropriate moniker.
I fully accept that the rationalist community may have morphed into something far beyond that original tenet, but I think rationalism just describes the approach, not that it's the "one true philosophy".
That it refers to a different but confusingly related concept in philosophy is a real downside of the name.
1 reply →
I'm going to start a group called "Mentally Healthy People". We use data, logical thinking, and informal peer review. If you disagree with us, our first question will be "what's wrong with mental health?"
8 replies →
Right and to your point, I would say you can distinguish (1) "objective" in the sense of relying on mind-independent data from (2) absolute knowledge, which treats subjects like closed conversations. And you can make similar caveats for "rational".
You can be rational and objective about a given topic without it meaning that the conversation is closed, or that all knowledge has been found. So I'm certainly not a fan of cult dynamics, but I think it's easy to throw an unfair charge at these groups, that their interest in the topic necessitates an absolutist disposition.
It's not particularly unusual, though. See the various kinds of 'Realist' groups, for example, which have a pretty wild range of outlooks. (both Realist and Rationalist also have the neat built-in shield of being able to say "look, I don't particularly like the conclusions I'm coming to, they just are what they are", so it's a convenient framing for unpalatable beliefs)
Objectivisim?
What do you make of the word “science” then?
Great names! Are you using them, or are they available? /s
Granted, admittedly from what little I've read from the outside, the "rational" part seems to be mostly the writing style - this sort of dispassionate, eloquently worded prose that makes weird ideas seem more "rational" and logical than they really are.
Yes, they're not rational at all. They're just a San-Francisco/Bay area cult who use that word.
Scott Aaronson had this view of one rationalist group.
"Guess I’m A Rationalist Now" https://scottaaronson.blog/?p=8908
The terminology here is worth noting. Is a Rationalist Cult a cult that practices Rationalism according to third parties, or is it a cult that says they are Rationalist?
Clearly all of these groups that believe in demons or realities dictated by tabletop games are not what third parties would call Rationalist. They might call themselves that.
There are some pretty simple tests that can out these groups as not rational. None of these people have ever seen a demon, so world models including demons have never predicted any of their sense data. I doubt these people would be willing to make any bets about when or if a demon will show up. Many of us would be glad to make a market concerning predictions made by tabletop games about physical phenomena.
Yeah, I would say the groups in question are notionally, aspirationally rational and I would hate for the takeaway to be disengagement from principles of critical thinking and skeptical thinking writ large.
Which, to me, raises the fascinating question of what does a "good" version look like, of groups and group dynamics centered around a shared interest in best practices associated with critical thinking?
At first impression, I think maybe these virtues (which are real!) disappear into the background of other, more applied specializations, whether professions, hobbies, or backyard family barbecues.
It would seem like the quintessential Rationalist institution to congregate around is the prediction market. Status in the community has to be derived from a history of making good bets (PnL as a %, not in absolute terms). And the sense of community would come from (measurably) more rational people teaching (measurably) less rational people how to be more rational.
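To make that concrete, here's a minimal sketch of the "PnL as a %" idea (the bet records are hypothetical, purely to illustrate scoring by percentage return rather than absolute profit):

    # Hypothetical bet history: what was staked and what was paid out.
    bets = [
        {"staked": 50.0, "payout": 80.0},   # won
        {"staked": 20.0, "payout": 0.0},    # lost
        {"staked": 30.0, "payout": 45.0},   # won
    ]

    total_staked = sum(b["staked"] for b in bets)
    total_payout = sum(b["payout"] for b in bets)
    pnl_percent = 100.0 * (total_payout - total_staked) / total_staked
    print(f"{pnl_percent:.1f}% return on capital staked")  # 25.0%, independent of bankroll size

Measuring in percent means someone betting small amounts well ranks above someone betting huge amounts badly.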
The article is talking about cults that arose out of the rationalist social milieu, which is a separate question from whether the cult's beliefs qualify as "rationalist" in some sense (a question that usually has no objective answer anyway).
>so world models including demons have never predicted any of their sense data.
There's a reason they call themselves "rationalists" instead of empiricists or positivists. They perfectly inverted Hume ("reason is, and ought only to be the slave of the passions")
These kinds of harebrained views aren't an accident but a product of rationalism. The idea that intellect is quasi-infinite and that the world can be mirrored in the mind does not run contrary to rationalism; it is the most extreme form of rationalism taken to its conclusion, and of course deeply religious, hence the constant fantasies about AI divinities and singularities.
> “There’s this belief [among rationalists],” she said, “that society has these really bad behaviors, like developing self-improving AI, or that mainstream epistemology is really bad–not just religion, but also normal ‘trust-the-experts’ science. That can lead to the idea that we should figure it out ourselves. And what can show up is that some people aren't actually smart enough to form very good conclusions once they start thinking for themselves.”
I see this arrogant attitude all the time on HN: reflexive distrust of the "mainstream media" and "scientific experts". Critical thinking is a very healthy idea, but it's dangerous when people use it as a license to categorically reject sources. It's even worse when extremely powerful people do this; they can reduce an enormous sub-network of thought into a single node for many, many people.
So, my answer for "Why Are There So Many Rationalist Cults?" is the same reason all cults exist: humans like to feel like they're in on the secret. We like to be in secret clubs.
Sure, but that doesn't say anything about why one particular social scene would spawn a bunch of cults while others do not, which is the question that the article is trying to answer.
Maybe I was too vague. My argument is that cults need a secret. The secret of the rationalist community is "nobody is rational except for us". Then the rituals would be endless probability/math/logic arguments about sci-fi futures.
What is it about San Francisco that makes it the global center for this stuff?
Reading this, I was reminded of the '60s hippie communes, which generally centered around SF, and the problems they reported. So similar, especially around that turning-inward group emotional dynamic: such groups tend to become dysfunctional (as TFA says) by blowing up internal emotional group politics into huge problems that need the entire group to be involved in trying to heal (as opposed to, say, accepting that a certain amount of interpersonal conflict is inevitable in human groups and ignoring it). It's fascinating that the same kinds of groups seem to encounter the same kinds of problems despite being ~60 years apart and armed with a lot more tech and knowledge.
And, yeah, why SF?
One of the hallmarks of cults — if not a necessary element — is that they tend to separate their members from the outside society. Rationalism doesn't directly encourage this, but it does facilitate it in a couple of ways:
- Idiosyncratic language used to describe ordinary things ("lightcone" instead of "future", "prior" instead of "belief" or "prejudice", etc)
- Disdain for competing belief systems
- Insistence on a certain shared interpretation of things most people don't care about (the "many-worlds interpretation" of quantum uncertainty, self-improving artificial intelligence, veganism, etc)
- I'm pretty sure polyamory makes the list somehow, just because it isn't how the vast majority of people want to date. In principle it's a private lifestyle choice, but it's obviously a community value here.
So this creates an opportunity for cult-like dynamics to occur where people adjust themselves according to their interactions within the community but not interactions outside the community. And this could seem — to the members — like the beliefs themselves are the problem, but from a sociological perspective, it might really be the inflexible way they diverge from mainstream society.
I think I found the problem!
I actually don't mind Yudkowsky as an individual - I think he is almost always wrong and undeservedly arrogant, but mostly sincere. Yet treating him as an AI researcher and serious philosopher (as opposed to a sci-fi essayist and self-help writer) is the kind of slippery foundation that less scrupulous people can build cults from. (See also Maharishi Mahesh Yogi and related trends - often it is just a bit of spiritual goofiness, as with David Lynch; sometimes you get a Charles Manson.)
How has he fared in the fields of philosophy and AI research in terms of peer review? Is there some kind of roundup or survey about this?
EY and MIRI as a whole have largely failed to produce anything which even reaches the point of being peer reviewable. He does not have any formal education and is uninterested in learning how to navigate academia.
Don't forget the biggest sci-fi guy turned cult leader of all, L. Ron Hubbard.
I don't think Yudkowsky is at all like L. Ron Hubbard. Hubbard was insane and pure evil. Yudkowsky seems like a decent and basically reasonable guy; he's just kind of a blowhard and he's wrong about the science.
L. Ron Hubbard is more like the Zizians.
> The Sequences [posts on LessWrong, apparently] make certain implicit promises. There is an art of thinking better, and we’ve figured it out. If you learn it, you can solve all your problems, become brilliant and hardworking and successful and happy, and be one of the small elite shaping not only society but the entire future of humanity.
Ooh, a capital S and everything. I mean, I feel like it is fairly obvious, really. 'Rationalism' is a new religion, and every new religion spawns a bunch of weird, generally short-lived, cults. You might as well ask, in 100AD, "why are there so many weird Christian cults all of a sudden"; that's just what happens whenever any successful new religion shows up.
Rationalism might be particularly vulnerable to this because it lacks a strong central authority (much like early Christianity), but even with new religions which _did_ have a strong central authority from the start, like Mormonism or Scientology, you still saw this happening to some extent.
What's the scale of this thing in the SF Bay Area? 100 people? 1000 people? 10,000 people?
Purity Spirals + Cheap Talk = irrational rationalists
> Purity Spirals
This is an interesting idea (phenomenon?):
> A purity spiral is a theory which argues for the existence of a form of groupthink in which it becomes more beneficial to hold certain views than to not hold them, and more extreme views are rewarded while expressing doubt, nuance, or moderation is punished (a process sometimes called "moral outbidding").[1] It is argued that this feedback loop leads to members competing to demonstrate the zealotry or purity of their views.[2][3]
* https://en.wikipedia.org/wiki/Purity_spiral
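A toy simulation of that "moral outbidding" loop, purely illustrative (the update rule and all numbers are invented, not taken from the Wikipedia article):

    import random

    # Toy model: each round someone outbids the current purest member, status goes
    # to the most extreme view, and everyone drifts a bit toward it. Moderation is
    # never rewarded, so the group ratchets toward the extreme.
    random.seed(0)
    views = [random.uniform(0.0, 0.3) for _ in range(20)]  # start fairly moderate

    for _ in range(30):
        outbidder = random.randrange(len(views))
        views[outbidder] = max(views) + random.uniform(0.0, 0.1)   # outbid the purest member
        views = [v + 0.2 * (max(views) - v) for v in views]        # everyone shifts toward the extreme

    print(f"mean extremity after 30 rounds: {sum(views) / len(views):.2f}")

No individual ever decides to be an extremist; the ratchet does it for them.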
Certainly something they're aware of - the same concept was discussed as early as 2007 on Less Wrong under the name "evaporative cooling of group beliefs":
https://www.lesswrong.com/posts/ZQG9cwKbct2LtmL3p/evaporativ...
> Eliezer Yudkowsky, shows little interest in running one. He has consistently been distant from and uninvolved in rationalist community-building efforts, from Benton House (the first rationalist group house) to today’s Lightcone Infrastructure (which hosts LessWrong, an online forum, and Lighthaven, a conference center). He surrounds himself with people who disagree with him, discourages social isolation.
Ummm, EY literally has a semi-permanent office at Lighthaven (at least until recently) and routinely blocks people on Twitter as a matter of course.
Blocking people on Twitter doesn't necessarily imply intolerance of people who disagree with you. People often block for different reasons than disagreement.
> One way that thinking for yourself goes wrong is that you realize your society is wrong about something, don’t realize that you can’t outperform it, and wind up even wronger.
I've been there myself.
> And without the steadying influence of some kind of external goal you either achieve or don’t achieve, your beliefs can get arbitrarily disconnected from reality — which is very dangerous if you’re going to act on them.
I think this and the two preceding paragraphs are excellent arguments for philosophical pragmatism and empiricism. It's strange to me that the community would not have already converged on that after all their obsessions with decision theory.
> The Zizians and researchers at Leverage Research both felt like heroes, like some of the most important people who had ever lived. Of course, these groups couldn’t conjure up a literal Dark Lord to fight. But they could imbue everything with a profound sense of meaning. All the minor details of their lives felt like they had the fate of humanity or all sentient life as the stakes. Even the guilt and martyrdom could be perversely appealing: you could know that you’re the kind of person who would sacrifice everything for your beliefs.
This helps me understand what people mean by "meaning". A sense that their life and actions actually matter. I've always struggled to understand this issue but this helps make it concrete, the kind of thing people must be looking for.
> One of my interviewees speculated that rationalists aren’t actually any more dysfunctional than anywhere else; we’re just more interestingly dysfunctional.
"We're"? The author is a rationalist too? That would definitely explain why this article is so damned long. Why are rationalists not able to write less? It sounds like a joke but this is seriously a thing. [EDIT: Various people further down in the comments are saying it's amphetamines and yes, I should have known that from my own experience. That's exactly what it is.]
> Consider talking about “ethical injunctions:” things you shouldn’t do even if you have a really good argument that you should do them. (Like murder.)
This kind of defeats the purpose, doesn't it? Also, this is nowhere justified in the article, just added on as the very last sentence.
>I think this and the two preceding paragraphs are excellent arguments for philosophical pragmatism and empiricism. It's strange to me that the community would not have already converged on that after all their obsessions with decision theory
They did! One of the great ironies inside the community is that they are, and openly admit to being, empiricists. They reject most of the French/European rationalist canon.
>Why are rationalists not able to write less?
The answer is a lot more boring. They like to write and they like to think. They also think by writing. It is written as much for themselves as for anyone else, probably more.
> Why are rationalists not able to write less?
The 'less' in LessWrong very much does not refer to _volume_.
> Why are rationalists not able to write less?
stimulants
On a recent Mindscape podcast Sean Carroll mentioned that rationalists are rational about everything except accusations that they're not being rational.
I mean you have to admit that that's a bit of a kafkatrap
It's really worth reading up on the techniques from Large Group Awareness Training so that you can recognize them when they pop up.
Once you see them listed (social pressure, sleep deprivation, control of drinking/bathroom, control of language/terminology, long exhausting activities, financial buy in, etc) and see where they've been used in cults and other cult adjacent things it's a little bit of a warning signal when you run across them IRL.
Related, the BITE model of authoritarian control is also a useful framework for identifying malignant group behavior. It's amazing how consistent these are across groups and cultures, from Mao's inner circle to NXIVM and on.
https://freedomofmind.com/cult-mind-control/bite-model-pdf-d...
God is dead! God remains dead! And we have killed him! How shall we comfort ourselves, the murderers of all murderers? The holiest and mightiest thing the world has yet possessed has bled to death under our knives.
The average teenager who reads Nietzsche's proclamation on the death of God thinks of it as an accomplishment: finally we got rid of those thousands-of-years-old and thereby severely outdated ideas and rules. Somewhere along the march to maturity they may start to wonder whether what replaced those old rules and ideas were good replacements, but most of them never come to the realisation that there were rebellious teenagers during all those centuries when the idea of a supreme being to whom even the mightiest had to answer still held sway. Nietzsche saw the peril in letting go of that cultural safety valve and warned of what might come next.
We are currently living in the world he warned us about, and for that I, atheist as I am, am partly responsible. The question to be answered here is whether it is possible to regain the benefits of the old order without getting back the obvious excesses: the abuse, the sanctimoniousness, and all the other abuses of power and privilege which were responsible for turning people away from that path.
What is the base rate here? Hard to know the scope of the problem without knowing how many non-rationalists (is that even a coherent group of people?) end up forming weird cults, as a comparison. My impression is that crazy beliefs are common amongst everybody.
A much simpler theory is that rationalists are mostly normal people, and normal people tend to form cults.
I was wondering about this too. You could also say it's a Sturgeon's law question.
They do note at the beginning of the article that many, if not most such groups have reasonably normal dynamics, for what it's worth. But I think there's a legitimate question of whether we ought to expect groups centered on rational thinking to be better able to escape group dynamics we associate with irrationality.
> If someone is in a group that is heading towards dysfunctionality, try to maintain your relationship with them; don’t attack them or make them defend the group. Let them have normal conversations with you.
This is such an important skill we should all have. I learned this best from watching the documentary Behind the Curve, about flat earthers, and have applied it to my best friend diving into the Tartarian conspiracy theory.
I really like your suggestions, even for non-rationalists.
This just sounds like any other community based around a niche interest.
From kink to rock hounding, there are always people who base their identity on being a broker of status or power because they themselves are a powerless outsider once removed from the community.
> base their identity on being a broker of status or power because they themselves are a powerless outsider once removed from the community
Who would ever maintain power when removed from their community? You mean to say they base their identity on the awareness of the power they possess within a certain group?
The only way you can hope to get a gathering of nothing but paragons of critical thinking and skepticism is if the gathering has an entrance exam in critical thinking and skepticism (and a pretty tough one, if they are to be paragons). Or else, it's invitation-only.
I was on LW when it emerged from the OB blog, and back then it was an interesting and engaging group, though even then there were like 5 "major" contributors - most of whom had no coherent academic or commercial success.
As soon as those “sequences” were being developed it was clearly turning into a cult around EY, that I never understood and still don’t.
This article did a good job of covering the history since and was really well written.
Water finds its own level
Great read.
I remember going to college and some graduate student, himself a philosophy major, telling me that nobody is as big a jerk as philosophy majors.
I don't know if it is really true, but it certainly felt true that folks looking for deeper answers about a better way to think about things end up finding what they believe is the "right" way and that tends to lead to branding other options as "wrong".
A search for certainty always seems to be defined or guided by people dealing with their own issues and experiences that they can't explain. It gets tribal and very personal and those kind of things become dark rabbit holes.
----
>Jessica Taylor, an AI researcher who knew both Zizians and participants in Leverage Research, put it bluntly. “There’s this belief [among rationalists],” she said, “that society has these really bad behaviors, like developing self-improving AI, or that mainstream epistemology is really bad–not just religion, but also normal ‘trust-the-experts’ science. That can lead to the idea that we should figure it out ourselves. And what can show up is that some people aren't actually smart enough to form very good conclusions once they start thinking for themselves.”
Reminds me of some members of our government and conspiracy theorists who "research" and encourage people to figure it out themselves ...
The thing I always found funny about those self-proclaimed Rationalists, or Skeptics, is how they value logic and reasoning, or science, and yet seriously lack knowledge of any of these, which leads to very naive views in those areas.
Trying to find life's answers by giving over your self-authority to another individual or group's philosophy is not rational. Submitting oneself to an authority whose role is telling people what's best in life will always lead to attracting the type of people looking to control, take advantage of, and traumatize others.
because humans are biological creatures iterating through complex chemical processes that are attempting to allow a large organism to survive and reproduce within the specific ecosystem provided by the Earth in the present day. "Rational reasoning" is a quaint side effect that sometimes is emergent from the nervous system of these organisms, but it's nothing more than that. It's normal that the surviving/reproducing organism's emergent side effect of "rational thought", when it is particularly intense, will self-refer to the organism and act as though it has some kind of dominion over the organism itself, but this is, like the rationalism itself, just an emergent effect that is accidental and transient. Same as if you see a cloud that looks like an elephant (it's still just a cloud).
One thing I'm having trouble with: The article assumes the reader knows some history about the rationalists.
I listened to a podcast that covered some of these topics, so I'm not lost; but I think someone who's new to this topic will be very, very, confused.
Here you go. It has like 10 chapters, so keep going once you reach the end.
https://aiascendant.substack.com/p/extropias-children-chapte...
I'm curious, what was the podcast episode?
Stuff You Should Know, "Who Are the Zizians?"
I can't find a direct link, but if you search for "Who are the Zizians?", you'll find it at https://stuffyoushouldknow.com/episodes/
Because humans like people who promise answers.
Boring as it is, this is the answer. It's just more religion.
Funnily enough, the actress who voiced this line is a Scientologist:
https://en.wikipedia.org/wiki/Nancy_Cartwright#Personal_life
When I was looking for a group in my area to meditate with, it was tough finding one that didn't appear to be a cult. And yet I think Buddhist meditation is the best tool for personal growth humanity has ever devised. Maybe the proliferation of cults is a sign that Yudkowsky was on to something.
None of them are practicing Buddhist meditation though, same for the "personal growth" oriented meditation styles.
Buddhist meditation exists only in the context of the Four Noble Truths and the rest of the Buddha's Dhamma. Throwing them away means it stops being Buddhist.
I disagree, but we'd be arguing semantics. In any case, the point still stands: you can just as easily argue that these rationalist offshoots aren't really Rationalist.
Does anyone else feel that “rationality” is the same as clinical anxiety?
I’m hyper rational when I don’t take my meds. I’m also insane. But all of my thoughts and actions follow a carefully thought out sequence.
> And what can show up is that some people aren't actually smart enough to form very good conclusions once they start thinking for themselves.
It's mostly just people who aren't very experienced talking about and dealing honestly with their emotions, no?
I mean, suppose someone is busy achieving and, at the same time, proficient in balancing work with emotional life, dealing head-on with interpersonal conflicts, facing change, feeling and acknowledging hurt, knowing their emotional hangups, perhaps seeing a therapist, perhaps even occasionally putting personal needs ahead of career... :)
Tell that person they can get a marginal (or even substantial) improvement from some rationalist cult practice. Their first question is going to be, "What's the catch?" Because at the very least they'll suspect that adjusting their work/life balance will bring a sizeable amount of stress and consequent decrease in their emotional well-being. And if the pitch is that this rationalist practice works equally well at improving emotional well-being, that smells to them. They already know they didn't logic themselves into their current set of emotional issues, and they are highly unlikely to logic themselves out of them. So there's not much value here to offset the creepy vibes of the pitch. (And again-- being in touch with your emotions means quicker and deeper awareness of creepy vibes!)
Now, take a person whose unexplored emotional well-being tacitly depends on achievement. Even a marginal improvement in achievement could bring perceptible positive changes in their holistic selves! And you can step through a well-specified, logical process to achieve change? Sign HN up!
TED Talks became religions, podcasts sermons.
Reading the other comments makes me wonder if they just misread the sign and they were looking for the rationalizationist meeting.
It’s especially popular in Silicon Valley.
Quite possibly, places like Reddit and Hacker News are training for the required level of intellectual smugness, and the certitude that you can dismiss every annoying argument with a logical fallacy.
That sounds smug of me, but I'm actually serious. One of their defects is that once you memorize all the fallacies ("appeal to authority," "ad hominem"), you can easily reach the point where you more easily recognize the fallacies in everyone else's arguments than in your own. You more easily doubt other people's cited authorities than your own. You slap "appeal to authority" on a disliked opinion, while citing an authority next week for your own. It's a fast path from there to perceived intellectual superiority, and an even faster path from there into delusion. Rational delusion.
While deployment of logical fallacies to win arguments is annoying at best, the far bigger problem is that people make those fallacies in the first place — such as not considering base rates.
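To make the base-rate point concrete, here's a rough sketch using the classic screening-test example (the numbers are made up for illustration):

    # Illustrative numbers only: a fairly accurate test for a rare condition.
    base_rate   = 0.001   # 1 in 1000 people actually have the condition
    sensitivity = 0.99    # P(test positive | condition)
    specificity = 0.95    # P(test negative | no condition)

    # Bayes' rule: P(condition | positive) = P(positive | condition) * P(condition) / P(positive)
    p_positive = sensitivity * base_rate + (1 - specificity) * (1 - base_rate)
    p_condition_given_positive = sensitivity * base_rate / p_positive
    print(round(p_condition_given_positive, 3))  # ~0.019 -- most positives are false positives

Even with a 99%-sensitive test, fewer than 2% of positives are real here, which is exactly the kind of result that feels wrong until you run the numbers.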
It's generally worth remembering that some of the fallacies are actually structural, and some are rhetorical.
A contradiction creates a structural fallacy; if you find one, it's a fair belief that at least one of the supporting claims is false. In contrast, appeal to authority is probabilistic: we don't know, given the current context, if the authority is right, so they might be wrong... But we don't have time to read the universe into this situation so an appeal to authority is better than nothing.
... and this observation should be coupled with the observation that the school of rhetoric wasn't teaching a method for finding truth; it was teaching a method for beating an opponent in a legal argument. "Appeal to authority is a logical fallacy" is a great sword to bring to bear if your goal is to turn off the audience's ability to ask whether we should give the word of the environmental scientist and the washed-up TV actor equal weight on the topic of environmental science...
… however, even that is up for debate. Maybe the TV actor in your example is Al Gore filming An Inconvenient Truth, and the environmental scientist was in the minority that isn't so afraid of climate change. Fast forward to 2025: the scientist's minority position was wrong, while Al Gore's documentary was legally ruled to have 9 major errors; so you were stupid on both sides, with the TV actor being closer.
because of the sacred simplicity problem, yet another label I had to coin out of necessity
for example, lambda calculus: it's too simple, to the point that its real power is immediately unbelievable.
the simplest 'solution' is to make it "sacred", to infuse an aura of mystery and ineffability around the ideas. that way people will give it the proper respect, proportional to its mathematical elegance, without necessarily having to really grasp the details.
I'm reflecting on how, for example, lambda calculus is really easy to learn to do by rote, but this does not help in truly grasping the significance that even an LLM can be computed by an (inhuman) amount of symbol substitution on paper. and how easy it is to trivialize what this really entails (fleshing out all the entailment is difficult; it's easier to act as if it has been fleshed out and mimic the awe)
therefore, rationalist cults are the legacy, the latest leaf in the long succession of the simplest solution to the simplicity of the truly sacred mathematical ideas with which we can "know" (and nod to each other who also "know") what numbers fucking mean
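for what it's worth, a tiny python sketch of the "arithmetic by substitution" idea (church numerals; just an illustration, not a real reducer):

    # church numerals: the number n is "apply f to x, n times".
    zero = lambda f: lambda x: x
    succ = lambda n: lambda f: lambda x: f(n(f)(x))
    add  = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))

    def to_int(n):
        # count how many times f gets applied
        return n(lambda k: k + 1)(0)

    two   = succ(succ(zero))
    three = succ(two)
    print(to_int(add(two)(three)))  # 5 -- arithmetic falling out of pure substitution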
This is a great article.
There's so much in these group dynamics that repeats the group dynamics of communist extremists of the '70s: a group that has found a 'better' way of life, and all you have to do is believe in the group's beliefs.
Compare this part from OP:
>Here is a sampling of answers from people in and close to dysfunctional groups: “We spent all our time talking about philosophy and psychology and human social dynamics, often within the group.” “Really tense ten-hour conversations about whether, when you ate the last chip, that was a signal that you were intending to let down your comrades in selfish ways in the future.”
This reeks of Marxist-Leninist self-criticism, where everybody tried to outdo each other in how ideologically pure they were. The most extreme outgrowth of self-criticism is when the Japanese United Red Army beat its own members to death as part of self-criticism sessions.
>'These violent beatings ultimately saw the death of 12 members of the URA who had been deemed not sufficiently revolutionary.' https://en.wikipedia.org/wiki/United_Red_Army
History doesn't repeat, but it rhymes.
Because purporting to be extra-rational about decisions is effective nerd-bait.
I think rationalist cults work exactly the same as religious cults. They promise to have all the answers, to attract the vulnerable. The answers are convoluted and inscrutable, so a leader/prophet interprets them. And doom is nigh, providing motivation and fear to hold things together.
It's the same wolf in another sheep's clothing.
And people who wouldn't join a religious cult -- e.g. because religious cults are too easy to recognize since we're all familiar with them, or because religions hate anything unusual about gender -- can join a rationalist cult instead.
I think everyone should be familiar with hermeticism, because its various mystery cults have been with us since Hermes Trismegistus laid down its principles in ancient Egypt on the Emerald Tablets. It was where early science-like practices such as alchemy originated, but that wheat got separated from the chaff during the Renaissance, and the more coercive control aspects remained. That part, how to get people to follow you and fight for you and maintain a leadership hierarchy, is extremely old technology.
They essentially use a glitch in human psychology that gets exploited over and over again. The glitch is a more generalized version of the advance-fee scam. You tell people that if we just believe something can be true, it can be true. Then we discriminate against people who don't believe in that thing. We then say only the leader(s) can make that thing true, but first you must give them all your power and support so they can fight the people who don't believe in that thing.
Unfortunately, reality is much messier than the cult leaders would have you believe, and leaders often don't have their followers' best interests at heart (especially those who follow blindly), or even the ability to make true the thing everyone wants to be true; instead they use it as a white rabbit that everyone in the cult has to chase after forever.
Is it really that surprising that a group of humans who think they have some special understanding of reality compared to others tend to separate and isolate themselves until they fall into an unguided self-reinforcing cycle?
I'd have thought that would be obvious since it's the history of many religions (which seem to just be cults that survived the bottleneck effect to grow until they reached a sustainable population).
In other words, humans are wired for tribalism, so don't be surprised when they start forming tribes...
> And yet, the rationalist community has hosted perhaps half a dozen small groups with very strange beliefs (including two separate groups that wound up interacting with demons). Some — which I won’t name in this article for privacy reasons — seem to have caused no harm but bad takes.
So there are six questionable (but harmless) groups, and then later the article names three of them as more serious. That doesn't seem like "many" to me.
I wonder what percentage of all cults are the rationalist ones.
The premise of the article might just be nonsense.
How many rationalists are there in the world? Of course it depends on what you mean by rationalist, but I'd guess that there are, at the very least, several tens of thousands of people in the world who either consider themselves rationalists or are involved with the rationalist community.
With such numbers, is it surprising that there would be half a dozen or so small cults?
There are certainly some cult-like aspects to certain parts of the rationalist community, and I think that those are interesting to explore, but come on, this article doesn't even bother to establish that its title is justified.
To the extent that rationalism does have some cult-like aspects, I think a lot of it is because it tends to attract smart people who are deficient in the ability to use avenues other than abstract thinking to comprehend reality and who enjoy making loosely justified imaginative leaps of thought while overestimating their own abilities to model reality. The fact that a huge fraction of rationalists are sci-fi fans is not a coincidence.
But again, one should first establish that there is anything actually unusual about the number of cults in the rationalist community. Otherwise this is rather silly.
The major problem I see in this group is that they have constructed a self-identity of being intelligent. This means that by being part of a Rationalist group, a person can claim to have insight into things that "the rest of the world doesn't understand."
Which, because (1) self-identifying as intelligent/insightful does not mean you actually are; and (2) you are following the "brain reprogramming" processes of some kind of convincing leader; is a straight shot to NXIVM-style apocalyptic cultism.
They are literally the "ackchyually" meme made flesh.
It is so strange that this article would hijack the term "rationalist" to mean this extraordinarily specific set of people "drawn together by AI researcher Eliezer Yudkowsky’s blog post series The Sequences, a set of essays about how to think more rationally".
A counterexample (with many, many more people) is the Indian Rationalist Association (https://en.wikipedia.org/wiki/Indian_Rationalist_Association), which exists to "promote scientific skepticism and critique supernatural claims". This isn't a cult of any kind, even if its members broadly agree with the set above about what it means to be rational.
Reminds me somewhat of the Culte de la Raison (Cult of Reason) birthed by the French Revolution. It didn't last long.
https://en.wikipedia.org/wiki/Cult_of_Reason
Over rationalizing is paperclip maximizing
Isn't this entirely to be expected? The people who dominate groups like these are the ones who put the most time and effort into them, and no sane person who appreciates both the value and the limitations of rational thinking is going to see as much value in a rationalist group, and devote as much time to it, as the kind of people who are attracted to the cultish aspect of achieving truth and power through pure thought. There's way more value there if you're looking to indulge in, or exploit, a cult-like spiral into shared fantasy than if you're just looking to sharpen your logical reasoning.
So I like Steven Pinker’s book Rationality, to me it seems quite straightforward.
But I have never been able to get into the Rationalist stuff, to me it’s all very meandering and peripheral and focused on… I don’t know what.
Is it just me?
Depends very much on what you're hoping to get out of it. There isn't really one "rationalist" thing at this point, it's now a whole bunch of adjacent social groups with overlapping-but-distinct goals and interests.
https://www.lesswrong.com/highlights this is the ostensible "Core Highlights", curated by major members of the community, and I believe Eliezer would endorse it.
If you don't get anything out of reading the list itself, then you're probably not going to get anything out of the rest of the community either.
If you poke around and find a few neat ideas there, you'll probably find a few other neat ideas.
For some people, though, this is "wait, holy shit, you can just DO that? And it WORKS?", in which case probably read all of this but then also go find a few other sources to counter-balance it.
(In particular, probably 90% of the useful insights already exist elsewhere in philosophy, and often more rigorously discussed - LessWrong will teach you the skeleton, the general sense of "what rationality can do", but you need to go elsewhere if you want to actually build up the muscles)
Little on offer but cults these days. Take your pick. You probably already did long ago and now your own cult is the only one you'll never clock as such.
Same as it ever was, but with more of them around, people are a little warier about their own, I think.
The thing with identifying yourself with an “ism” (e.g. rationalism, feminism, socialism) is that, even though you might not want that, you’re inherently positioning yourself in a reductionist and inaccurate corner of the world. Or in other words you’re shielding yourself in a comfortable, but wrong, bubble.
To call yourself an -ist means that you consider yourself to give more importance to that concept than other people do: you're more rational than most, or care more about women than most, or care more about social issues than most. That is wrong both because there are many irrational rationalists and because there are many rational people who don't associate with the group (same with the other isms). The thing is that the very act of creating the label and associating yourself with it will ruin the very thing you strive for. You will attract a bunch of weirdos who want to be associated with the label without having to do the work it requires, and you will become estranged from those who prefer to walk the walk instead of talking the talk. In both ways, you have failed.
The fact is that every ism is a specific set of thoughts and ideas that is not generic, and not broad enough to carry the weight of its name. Being a feminist does not mean you care about women; it means you are tied to a specific set of ideologies and behaviours that may or may not advance the quality of life of women in the modern world, and are definitely not the only way to achieve that goal (hence the inaccuracy of the label).
Rationalism is the belief that reason is the primary path to knowledge, as opposed to, say, the observation that is championed by empiricism. It's a belief system that prioritises imposing its tenets on reality rather than asking reality what reality's tenets are. From the outset, it's inherently cult-like.
Rationalists, in this case, refers specifically to the community clustered around LessWrong, which explicitly and repeatedly emphasizes points like "you can't claim to have a well grounded belief if you don't actually have empirical evidence for it" (https://www.lesswrong.com/w/evidence for a quick overview of some of the basic posts on that topic)
To quote one of the core foundational articles: "Before you try mapping an unseen territory, pour some water into a cup at room temperature and wait until it spontaneously freezes before proceeding. That way you can be sure the general trick—ignoring infinitesimally tiny probabilities of success—is working properly." (https://www.lesswrong.com/posts/eY45uCCX7DdwJ4Jha/no-one-can...)
One can argue how well the community absorbs the lesson, but this certainly seems to be a much higher standard than average.
That is the definition of “rationalism” as proposed by philosophers like Descartes and Kant, but I don’t think that is an accurate representation of the type of “rationalism” this article describes.
This article describes "rationalism" as described on LessWrong and in the Sequences by Eliezer Yudkowsky. A good amount of it is based on empirical findings from psychology and behavioral science. It's called "rationalism" because it seeks to correct common reasoning heuristics that are purported to lead to incorrect reasoning, not to stand in contrast to empiricism.
Agreed, I appreciate that there's a conceptual distinction between the philosophical versions of rationalism and empiricism, but what's being talked about here is a conception that (again, at least notionally) is interested in and compatible with both.
I am pretty sure many of the LessWrong posts are about how to understand the meaning of different types of data and are very much about examining, developing, criticizing a rich variety of empirical attitudes.
I was going to write a similar comment as op, so permit me to defend it:
Many of their "beliefs" - super-duper intelligence, doom - are clearly not believed by the market; observing the market is a kind of empiricism, and it's completely discounted by the LW-ers.
But you cannot have reason without substantial proof of how things behave by observing them in the first place. Reason is simply a logical approach to yes and no questions where you factually know, from observation of past events, how things work. And therefore you can simulate an outcome by the exercise of reasoning applied onto a situation that you have not yet observed and come to a logical outcome, given the set of rules and presumptions.
I find it ironic that the question is asked unempirically. Where is the data showing there are many more than before? Start there, then go down the rabbit hole. Otherwise, you're concluding something that may not be true and trying to rationalize the answer, just as a cultist does.
Oh come on.
Anyone who's ever seen the sky knows it's blue. Anyone who's spent much time around rationalism knows the premise of this article is real. It would make zero sense to ban talking about about a serious and obvious problem in their community until some double blind peer reviewed data can be gathered.
It would be what they call an "isolated demand for rigor".
Something like 15 years ago I once went to a Less Wrong/Overcoming Bias meetup in my town after being a reader of Yudkowsky's blog for some years. I was like, Bayesian Conspiracy, cool, right?
The group was weird and involved quite a lot of creepy oversharing. I didn't return.
See also Rational Magic: Why a Silicon Valley culture that was once obsessed with reason is going woo (2023)
https://news.ycombinator.com/item?id=35961817
Has anyone here ever been a part of a cult?
If so, got anything to share - anecdotes, learnings, cautions, etc.?
I am never planning to be part of one; just interested to know, partly because I have lived adjacent to what might be one, at times.
There are entire books written by former cult members. Some examples (I just copied the list, didn't check each one of them):
"Beyond Belief: My Secret Life Inside Scientology and My Harrowing Escape" by Jenna Miscavige Hill
"A Billion Years: My Escape From a Life in the Highest Ranks of Scientology" by Mike Rinder
"Blown for Good: Behind the Iron Curtain of Scientology" by Marc Headley
"Satan Created the Cult: Memoirs of an Escapee" by Mona Vasquez
"Escape" by Carolyn Jessop
"The Witness Wore Red" by Rebecca Musser
"Breaking Free" by Rachel Jeffs
"Stolen Innocence" by Elissa Wall
"The Sound of Gravel" by Ruth Wariner
"Daughter of Gloriavale" by Lilia Tarawa
"My Life in Orange" by Tim Guest
"Crazy for God" by Christopher Edwards
"In the Days of Rain: A Daughter, a Father, a Cult" by Rebecca Stott
"Cartwheels in a Sari" by Jayanti Tamm
"Seductive Poison" by Deborah Layton
"Hollywood Park" by Mikel Jollett
"Unfollow: A Journey from Hatred to Hope" by Megan Phelps-Roper
"Crisis of Conscience" by Raymond Franz
"Leaving the Saints" by Martha Beck
"Forager: Field Notes for Surviving a Family Cult" by Michelle Dowd
"Dinner for Vampires: Life on a Cult TV Show (While Also in an Actual Cult)" by Bethany Joy Lenz
wow, interesting. thanks.
There was this interview with Diane Benscoter who talked about her experience and reasons for joining a cult that I found very insightful: https://www.youtube.com/watch?v=6Ibk5vJ-4-o
The main point is that it isn't so much the cult (leader) as the victims being in a vulnerable mental state and getting exploited.
will check out the video, thanks.
Because Yudkowskian rationalism is itself a sci-fi-inspired cult.
If someone believes in the singularity, my estimation of their intellectual capacity, or at least maturity, diminishes.
Perhaps I will get downvoted to death again for saying so, but the obvious answer is because the name "rationalist" is structurally indistinguishable from the name "scientology" or "the illuminati". You attract people who are desperate for an authority to appeal to, but for whatever reason are no longer affiliated with the church of their youth. Even a rationalist movement which held nothing as dogma would attract people seeking dogma, and dogma would form.
The article begins by saying the rationalist community was "drawn together by AI researcher Eliezer Yudkowsky’s blog post series The Sequences". Obviously the article intends to make the case that this is a cult, but it's already done with the argument at this point.
> for whatever reason are no longer affiliated with the church of their youth.
This is the Internet, you're allowed to say "they are obsessed with unlimited drugs and weird sex things, far beyond what even the generally liberal society tolerates".
I'm increasingly convinced that every other part of "Rationalism" is just distraction or justification for those; certainly there's a conscious decision to minimize talking about this part on the Internet.
I strongly suspect there is heterogeneity here. An outer party of "genuine" rationalists who believe that learning to be a spreadsheet or whatever is going to let them save humanity, and an inner party who use the community to conceal some absolute shenanigans.
No, I really mean atheists that crave religion.
> Obviously the article intends to make the case that this is a cult
The author is a self-identified rationalist. This is explicitly established in the second sentence of the article. Given that, why in the world would you think they're trying to claim the whole movement is a cult?
Obviously you and I have very different definitions of "obvious"
When I read the article in its entirety, I was pretty disappointed in its top-level introspection.
It seems to not be true, but I still maintain that it was obvious. Sometimes people don't pick the low-hanging fruit.
In fact, I'd go a step further and note the similarity with organized religion. People have a tendency to organize and dogmatize everything. The problem with religion is rarely the core ideas, but always the desire to use it as a basis for authority, to turn it dogmatic and ultimately form a power structure.
And I say this as a Christian. I often think that becoming a state religion was the worst thing that ever happened to Christianity, or any religion, because then it unavoidably becomes a tool for power and authority.
And doing the same with other ideas or ideologies is no different. Look at what happened to communism, capitalism, or almost any other secular idea you can think of: the moment it becomes established, accepted, and official, the corruption sets in.
I do not see any reason for you to get downvoted.
I agree that the term "rationalist" would appeal to many people, and the obvious need to belong to a group plays a huge role.
There are a lot of rationalists in this community. Pointing out that the entire thing is a cult attracts downvotes from people who wish to, for instance, avoid being identified with the offshoots.
This is a very interesting article. It's surprising though to see it not use the term "certainty" at all. (It only uses "certain" in a couple instances of like "a certain X" and one use of "certainly" for generic emphasis.)
Most of what the article says makes sense, but it seems to sidestep the issue that a major feature distinguishing the "good" rationalists from the "bad" is that the bad ones are willing to take very extreme actions in support of their beliefs. This is not coincidentally something that distinguishes good believers in various religions or philosophies from bad believers (e.g., people who say God told them to kill people). This is also lurking in the background of discussion of those who "muddled through" or "did the best they could". The difference is not so much in the beliefs as in the willingness to act on them, and that willingness is in turn largely driven by certainty.
I think it's plausible there is a special dimension to rationalism that may exacerbate this, namely a tendency of rationalists to feel especially "proud" of their beliefs because of their meta-belief that they derived their beliefs rationally. Just like an amateur painter may give themselves extra brownie points because no one taught them how to paint, my impression of rationalists is that they sometimes give themselves an extra pat on the back for "pulling themselves up by their bootstraps" in the sense of not relying on faith or similar "crutches" to determine the best course of action. This can paradoxically increase their certainty in their beliefs when actually it's often a warning that those beliefs may be inadequately tested against reality.
I always find it a bit odd that people who profess to be rationalists can propose or perform various extreme acts, because it seems to me that one of the strongest and most useful rational beliefs is that your knowledge is incomplete and your beliefs are almost surely not as well-grounded as you think they are. (Certainly no less an exponent of reason than Socrates was well aware of this.) This on its own seems sufficient to me to override some of the most absurd "rationalist" conclusions (like that you should at all costs become rich or fix Brent Dill's depression). It's all the more so when you combine it with some pretty common-sense forecasts of what might happen if you're wrong. (As in, if you devote your life to curing Brent Dill's depression on the theory that he will then save the world, and he turns out to be just an ordinary guy or worse, you wasted your life curing one person's depression when you yourself could have done more good with your own abilities, just by volunteering at a soup kitchen or something.) It's never made sense to me that self-described rationalists could seriously consider some of these possible courses of action in this light.
Sort of related is the claim at the end that rationalists "want to do things differently from the society around them". It's unclear why this would be a rational desire. It might be rational in a sense to say you want to avoid being influenced by the society around you, but that's different from affirmatively wanting to differ from it. This again suggests a sort of "psychological greed" to reach a level of certainty that allows you to confidently, radically diverge from society, rather than accepting that you may never reach a level of certainty that allows you to make such deviations on a truly rational basis.
It's also interesting to me that the article focuses a lot not on rationalist belief per se, but on the logistics and practices of rationalist communities. This in itself seems like a warning that the rationality of rationalism is not all it's cracked up to be. It's sort of like, you can try to think as logically as possible, but if you hit yourself in the head with a hammer every day you're likely going to make mistakes anyway. And some of the "high demand" practices mentioned seem like slightly less severe psychological versions of that.
> it seems to sidestep the issue that a major feature distinguishing the "good" rationalists from the "bad" is that the bad ones are willing to take very extreme actions in support of their beliefs.
What is a "very extreme action"? Killing someone? In our culture, yes. What about donating half of your salary to charity? I think many people would consider that quite extreme, too. Maybe even more difficult to understand than the murder... I mean, prisons are full of murderers; they are not so exceptional.
The difference is that the bad ones are willing to take abusive actions.
> It's also interesting to me that the article focuses a lot not on rationalist belief per se, but on the logistics and practices of rationalist communities.
That's what happens when you read about the rationality community from someone who is actually familiar with it. If you want to determine whether a group is dysfunctional (i.e. a cult), the actual practices are much more important than the stated beliefs. You could have two communities with the same or very similar beliefs, yet one of them nice and the other one abusive.
> What about donating half of your salary to charity? I think many people would consider that quite extreme, too.
Maybe, but there are also degrees of extremity in terms of stuff like how broadly you donate (like there's a difference between donating a huge amount to one charity vs. spreading it around 10). Also I don't think the mere fact of donating half your salary would itself necessarily be seen as extreme; it would depend on the person's total wealth. It seems not unusual for wealthy individuals who get certain jobs to donate (or refuse) their entire salary (like Arnold Schwarzenegger declining his salary as CA governor).
Ultimately though I don't agree that this is anywhere close to as extreme as cold-blooded murder.
> I mean, prisons are full of murderers; they are not so exceptional.
I have a hunch that a large proportion of murderers in prisons are not comparable to rationalist murderers. There's a difference between just killing someone and killing someone due to your belief that that is the rational and correct thing to do. A lot of murders are crimes of passion or occur in the commission of other crimes. I could see an intermediate case where someone says "We're going to rob this bank and if the guard gives us any trouble we'll just shoot him", which is perhaps comparable to "always escalate conflict", but I don't think most murders even reach that level of premeditation.
> The difference is that the bad ones are willing to take abusive actions.
I'm not so sure that that is the difference, rather than that they are willing to take extreme actions, and then the extreme actions they wind up taking (for whatever reason) are abusive. It's sort of like, if you fire a gun into a crowd, your willingness to do so is important whether or not you actually hit anyone. Similarly a willingness to go well outside the bounds of accepted behaviors is worrisome even if you don't happen to harm anyone by doing so. I could certainly imagine that many rationalists do indeed formulate belief systems that exclude certain kinds of extreme behavior while allowing others. I'm just saying, if I found out that someone was spending all their days doing any spookily extreme thing (e.g., 8 hours a day building a scale model of Hoover Dam one grain of sand at a time) I would feel a lot less safe around them.
> > It's also interesting to me that the article focuses a lot not on rationalist belief per se, but on the logistics and practices of rationalist communities.
> That's what happens when you read about the rationality community from someone who is actually familiar with it. If you want to determine whether a group is dysfunctional (i.e. a cult), the actual practices are much more important than the stated beliefs.
Sure. My point is just that, insofar as this is true, it means what the article is saying is more about cults in general and less about anything specific to rationalism.
Why are there so many cults? People want to feel like they belong to something, and in a world in the midst of a loneliness and isolation epidemic the market conditions are ideal for cults.
The book Imagined Communities (Benedict Anderson) touches on this, making the case that in modern times, "nation" has replaced the cultural narrative purpose previously held by "tribe," "village," "royal subject," or "religion."
The shared thread among these is (in ever widening circles) a story people tell themselves to justify precisely why, for example, the actions of someone you'll never meet in Tulsa, OK have any bearing whatsoever on the fate of you, a person in Lincoln, NE.
One can see how this leaves an individual in a tenuous place if one doesn't feel particularly connected to nationhood (one can also see how being too connected to nationhood, in an exclusionary way, can also have deleterious consequences, and how not unlike differing forms of Christianity, differing concepts on what the 'soul' of a nation is can foment internal strife).
(To be clear: those fates are intertwined to some extent; the world we live in grows ever smaller due to the power of up-scaled influence of action granted by technology. But "nation" is a sort of fiction we tell ourselves to fit all that complexity into the slippery meat between human ears).
Because we are currently living in an age of narcissism and tribalism / Identitarianism is the societal version of narcissism.
> Because we are currently living in an age of narcissism and tribalism
I've been saying this since at least 1200 BC!
The question the article is asking is "why did so many cults come out of this particular social milieu", not "why are there a lot of cults in the whole world".
Also, who would want to join an "irrationalist cult" ?
Hey now, the Discordians have an ancient and respectable tradition. ;)
Your profile says that you want to keep your identity small, but you have like over 30 thousand comments spelling out exactly who you are and how you think. Why not shard accounts? Anyways. Just a random thought.
[deleted]
1 reply →
Narcissism and Elitism justified by material wealth.
What else?
Rationalism isn't any more "correct" and "proper" a way of thinking than what Christianity and Buddhism claim to espouse.
Why are so many cults founded on fear or hate?
Because empathy is hard.
Empathy is usually a limited resource among those who generously ascribe it to themselves, and it is often mixed up with self-serving desires. Perhaps Rationalists have similar difficulties with reasoning.
While I believe Rationalism can be some form of occupational disease in tech circles, it sometimes does pose interesting questions. You just have to be aware that the perspective used to analyse circumstances is intentionally constrained, and in the end you still have to compare your prognosis against a reality that always chooses empiricism.
> The Sequences make certain implicit promises. ...
Some meta-commentary first... How would one go about testing if this is true? If true, then such "promises" are not written down -- they are implied. So one would need to ask at least two questions: 1. Did the author intend to make these implicit promises? 2. What portion of readers perceive them as such?
> ... There is an art of thinking better ...
First, this isn't _implicit_ in the Sequences; it is stated directly. In any case, the quote does not constitute a promise: so far, it is a claim. And yes, rationalists do think there are better and worse ways of thinking, in the sense of "what are more effective ways of thinking that will help me accomplish my goals?"
> ..., and we’ve figured it out.
Codswallop. This is not a message of the rationality movement -- quite the opposite. We share what we've learned and why we believe it to be true, but we don't claim we've figured it all out. It is better to remain curious.
> If you learn it, you can solve all your problems...
Bollocks. This is not claimed implicitly or explicitly. Besides, some problems are intractable.
> ... become brilliant and hardworking and successful and happy ...
Rubbish.
> ..., and be one of the small elite shaping not only society but the entire future of humanity.
Bunk.
For those who haven't read it, I'll offer a relevant extended quote from Yudkowsky's 2009 "Go Forth and Create the Art!" [1], the last post of the Sequences:
## Excerpt from Go Forth and Create the Art
But those small pieces of rationality that I've set out... I hope... just maybe...
I suspect—you could even call it a guess—that there is a barrier to getting started, in this matter of rationality. Where by default, in the beginning, you don't have enough to build on. Indeed so little that you don't have a clue that more exists, that there is an Art to be found. And if you do begin to sense that more is possible—then you may just instantaneously go wrong. As David Stove observes—I'm not going to link it, because it deserves its own post—most "great thinkers" in philosophy, e.g. Hegel, are properly objects of pity. That's what happens by default to anyone who sets out to develop the art of thinking; they develop fake answers.
When you try to develop part of the human art of thinking... then you are doing something not too dissimilar to what I was doing over in Artificial Intelligence. You will be tempted by fake explanations of the mind, fake accounts of causality, mysterious holy words, and the amazing idea that solves everything.
It's not that the particular, epistemic, fake-detecting methods that I use, are so good for every particular problem; but they seem like they might be helpful for discriminating good and bad systems of thinking.
I hope that someone who learns the part of the Art that I've set down here, will not instantaneously and automatically go wrong, if they start asking themselves, "How should people think, in order to solve new problem X that I'm working on?" They will not immediately run away; they will not just make stuff up at random; they may be moved to consult the literature in experimental psychology; they will not automatically go into an affective death spiral around their Brilliant Idea; they will have some idea of what distinguishes a fake explanation from a real one. They will get a saving throw.
It's this sort of barrier, perhaps, which prevents people from beginning to develop an art of rationality, if they are not already rational.
And so instead they... go off and invent Freudian psychoanalysis. Or a new religion. Or something. That's what happens by default, when people start thinking about thinking.
I hope that the part of the Art I have set down, as incomplete as it may be, can surpass that preliminary barrier—give people a base to build on; give them an idea that an Art exists, and somewhat of how it ought to be developed; and give them at least a saving throw before they instantaneously go astray.
That's my dream—that this highly specialized-seeming art of answering confused questions, may be some of what is needed, in the very beginning, to go and complete the rest.
[1]: https://www.lesswrong.com/posts/aFEsqd6ofwnkNqaXo/go-forth-a...
My pet theory is that, as a rationalist, you have an idealized view of humanity by nature. Your mirror neurons copy your own mind when interpolating other people's behavior and character.
This results in a constant state of cognitive dissonance, as the people of normal society around you behave very differently, and often more "rustically", than expected. The education is there, all the learning sources are there, and yet they are rejected. The lessons of history go unlearned and are often repeated.
You are in an out-group by definition and for life, so you band together with others and get conned by cult con artists into foolish projects. For the "rational" are nothing but another deluded project for the sociopaths of our society to hijack. The sociopath is the most rational being of all, in fact a being so capable of preying on us that society had to develop antibodies against it: what we call religion and laws!
"Nihilists! F* me. I mean, say what you will about the tenets of National Socialism, Dude, at least it's an ethos."
For me, it was largely shaped by the westering old Europe, creaking and breaking (after two World Wars) under its heavy load of philosophical/metaphysical inheritance (which at this point in time can be considered effectively Americanized).
It is still fascinating to trace back the divergent developments like american-flavoured christian sects or philosophical schools of "pragmatism", "rationalism" etc. which get super-charged by technological disruptions.
In my youth I was heavily influenced by the so-called Bildung which can be functionally thought of as a form of ersatz religion and is maybe better exemplified in the literary tradition of the Bildungsroman.
I've grappled with and wildly fantasized about all sorts of things, and experimented mindlessly with all kinds of modes of thinking and consciousness amidst my coming-of-age. In hindsight, without this particular frame of Bildung, left to myself I would have been utterly confused and maybe at some point acted out on it. By engaging with books like Der Zauberberg by Thomas Mann or Der Mann ohne Eigenschaften by Robert Musil, my apparent madness was calmed down; instead of the forming social front of myself breaking like a dam against the vastness of the unconscious, over time I was guided to develop my own way of slowly operating it appropriately, without completely blowing myself up into a messiah or finding myself eternally trapped in the futility and hopelessness of existence.
Borrowing from my background, one effective vaccination against the rationalist sects described here, which spontaneously came to mind, is Schopenhauer's Die Welt als Wille und Vorstellung, which can be read as a radical continuation of Kant's Critique of Pure Reason, itself an attempt to stress-test the ratio itself. [To demonstrate the breadth of Bildung in even something like the physical sciences: Einstein was familiar with Kant's a priori framework of space and time, and Heisenberg's autobiographical book Der Teil und das Ganze was motivated by: "I wanted to show that science is done by people, and the most wonderful ideas come from dialog".]
Schopenhauer arrives at the realization, because of the groundwork done by Kant (which he heavily acknowledges), that there can't even exist a rational basis for rationality itself; that it is simply an exquisitely disguised tool in the service of the more fundamental will, i.e. by definition an irrational force.
Funny little thought experiment, but what consequences does this have? Well, if you declare the ratio as your ultima ratio, you are just fooling yourself in order to be able to rationalize anything you want. Once internalized, Schopenhauer's insight overwhelms you with Mitleid for every conscious being, inoculating you against the excesses of your own ratio. It instantly hit me with the same force as MDMA, but several years earlier.
lol
[dead]
[flagged]
I think it speaks volumes that you think "american" is the approximate level of scope that this behavior lives at.
Stuff like this crosses all aspects of society. Certain Americans of certain backgrounds, demographics, and life experiences are far more likely to engage in it than others. I think those people are a minority, but they are definitely an overly visible one, if not a local majority, in a lot of internet spaces, so it's easy to mistake them for the majority.
Sure, many people across the globe are susceptible to cult-think. It's just that seeking a superior way of living to "save all Americans" has been a century-long trend in America, is all. No offense to other countries' peoples; I'm sure they're just as good at being cult members championing over-application as any American.
It probably speaks more volumes that you are taking my comment about this so literally.
1 reply →
[flagged]
>There’s a lot to like about the Rationalist community
Like what? Never saw anything worthwhile there...
'Cause they all read gwern, and all eugenics leads into cults, because conspiracy-adjacent garbo always does.
Harpers did an amazing cover story on these freaks in 2015 https://harpers.org/archive/2015/01/come-with-us-if-you-want...
Rationalists are, to a man (and they're almost all men), arrogant dickheads, and arrogant dickheads do not see what they're doing as "a cult" but as "the right and proper way of things, because I am right and logical and rational and everyone else isn't".
That's an unnecessary caricature. I have met many rationalists of both genders and found most of them quite pleasant. But it seems the proportion of "arrogant dickheads" unfortunately matches that of the general population. Whether it's "irrational people" or "liberal elites", these assholes always seem to find someone to look down on.
They watched too much eXistenZ
It's a religion of an overdeveloped mind that hides from everything it cannot understand. It's an anti-religion, in a sense, that puts your mind on the pedestal.
Note the common pattern in major religions: they tell you that thoughts and emotions obscure the light of intuition, like clouds obscure sunlight. Rationalism is the opposite: it denies the very idea of intuition, or of anything above the sphere of thoughts, and tells you to create as many thoughts as possible.
Rationalists deny anything spiritual, good or evil, because they don't have evidence to think otherwise. They remain in this state of neutral nihilism until someone bigger than them sneaks into their ranks and casually introduces them to evil with some undeniable evidence. Their minds quickly pass through the denial-anger-acceptance stages, and, being faithful to their rationalist doctrine, they update their beliefs with what they now know. From that point on they are a cult. That's the story of Scientology, which has too many parallels with Rationalism.
Because they have serious emotional-maturity issues, which lead them to lobotomize the normal human emotional side of their identity and of their experience of life.
Cue all the surface-level “tribalism/loneliness/hooman nature” comments instead of the simple analysis that Rationalism (this kind) is severely brain-broken and irredeemable and will just foster even worse outcomes in a group setting. It’s a bit too close to home (ideologically) to get a somewhat detached analysis.
I think we've strayed too far from the Aristotelian dynamics of the self.
Outside of sexuality and the proclivities of their leaders, emphasis on physical domination of the self is lacking. The brain runs wild, the spirit remains aimless.
In the Bay, the difference between the somewhat well-adjusted "rationalists" and those very much "in the mush" is whether or not someone tells you they're in SF or "on the Berkeley side of things"
Here are some other anti-lesswrong materials to consider:
https://aiascendant.com/p/extropias-children-chapter-1-the-w...
https://davidgerard.co.uk/blockchain/2023/02/06/ineffective-...
https://www.bloomberg.com/news/features/2023-03-07/effective...
https://www.vox.com/future-perfect/23458282/effective-altrui...
https://qchu.substack.com/p/eliezer
https://x.com/kanzure/status/1726251316513841539
Note that Asterisk magazine is basically the unofficial magazine for the rationalism community and the author is a rationalist blogger who is naturally very pro-LessWrong. This piece is not anti-Yudkowsky or anti-LessWrong.
Here's a counter-piece on David Gerard and his portrayal of LessWrong and Effective Altruism: https://www.tracingwoodgrains.com/p/reliable-sources-how-wik...
We live in an irrational time. It's unclear whether this was simply underreported in history or whether social changes in the last ~50-75 years have had breaking consequences.
People are trying to make sense of this. For example:
The Canadian government heavily subsidizes junk food, then spends heavily on healthcare because of the resulting illnesses. It restricts and limits healthy food through supply management and promotes a "food pyramid" favoring domestic unhealthy food. Meanwhile, it spends billions marketing healthy living, yet fines people up to $25,000 for hiking in forests and zones cities so that driving is nearly mandatory.
Government is an easy target for irrational behaviours.
Scientology has been here since 1953, it has a similarly bonkers set of beliefs, and it is huge.
Your rant about government or not being allowed to hike in some places in Canada is unrelated to the issue.
There's nothing irrational about it, this is how you maximize power and profit at any and all costs.
I completely get that point of view; and yes if that's the goal, it's completely rational.
But from a societal cohesion or perhaps even an ethical point of view it's just pure irrationality.
When typing the post, I was thinking of different levels of government and the changing ideologies of politicians leaving inconsistent governance behind.
1 reply →