Comment by analog31
5 days ago
Perhaps part of being rational, as opposed to rationalist, is having a sense of when to override the conclusions of seemingly logical arguments.
In philosophy grad school, we described this as 'being reasonable' as opposed to 'being rational'.
That said, big-R Rationalism (the LessWrong/Yudkowsky/Ziz social phenomenon) has very little in common with what we've standardly called 'rationalism'; trained philosophers tend to wince a little when we come into contact with these groups (who are nevertheless chock-a-block with fascinating personalities and compelling aesthetics).
From my perspective (and I have only glancing contact), these mostly seem to be _cults of consequentialism_, an epithet I'd also use for Effective Altruists.
Consequentialism has been making young people say and do daft things for hundreds of years -- Dostoevsky's _Crime and Punishment_ being the best character sketch I can think of.
While there are plenty of non-religious (and thus, small-r rationalist) alternatives to consequentialism, none of them seem to make it past the threshold in these communities.
The other codesmell these big-R rationalist groups have for me, and that which this article correctly flags, is their weaponization of psychology -- while I don't necessarily doubt the findings of sociology, psychology, etc, I wonder if they necessarily furnish useful tools for personal improvement. For example, memorizing a list of biases that people can potentially have is like numbering the stars in the sky; to me, it seems like this is a cargo-cultish transposition of the act of finding _fallacies in arguments_ into the domain of finding _faults in persons_.
And that's a relatively mild use of psychology. I simply can't imagine how annoying it would be to live in a household where everyone had memorized everything from connection theory to attachment theory to narrative therapy and routinely deployed hot takes on one another.
In actual philosophical discussion, back at the academy, psychologizing was considered 'below the belt', and would result in an intervention by the ref. Sometimes this was explicitly associated with something we called 'the Principle of Charity', which is that, out of an abundance of epistemic caution, you commit to always interpreting the motives and interests of your interlocutor in the kindest light possible, whether in 'steel manning' their arguments, or turning a strategically blind eye to bad behaviour in conversation.
The importance of the Principle of Charity is probably the most enduring lesson I took from my decade-long sojourn among the philosophers, and mutual psychological dissection is anathema to it.
I actually think that the fact that rationalists use the term "steel manning" betrays a lack of charity.
If the only thing you owe your interlocutor is to use your "prodigious intellect" to restate their own argument in the way that sounds the most convincing to you, maybe you are in fact a terrible listener.
I have tried to tell my legions of fanatic brainwashed adherents exactly this, and they have refused to listen to me because the wrong way is more fun for them.
https://x.com/ESYudkowsky/status/1075854951996256256
Listening to other viewpoints is hard. Restating is a good tool to improve listening and understanding. I don't agree with this criticism at all, since that "prodigious intellect" bit isn't inherent to the term.
Just so. I hate this term, and for essentially this reason, but it has undeniable currency right now; I was writing to be understood.
> While there are plenty of non-religious (and thus, small-r rationalist) alternatives to consequentialism, none of them seem to make it past the threshold in these communities.
I suspect this is because consequentialism is the only meta-ethical framework that has any leg to stand on other than "because I said so". That makes it very attractive. The problem is you also can't build anything useful on top of it, because if you try to quantify consequences, and do math on them, you end up with the Repugnant Conclusion or worse. And in practice - in Effective Altruism/Longtermism, for example - the use of arbitrarily big numbers lets you endorse the Very Repugnant Conclusion while patting yourself on the back for it.
> to me, it seems like this is a cargo-cultish transposition of the act of finding _fallacies in arguments_ into the domain of finding _faults in persons_.
Well put, thanks!
I am interested in your journey from philosophy to coding.