Comment by jl6
4 days ago
I don’t think it’s just (or even particularly) bad axioms, I think it’s that people tend to build up “logical” conclusions where they think each step is a watertight necessity that follows inevitably from its antecedents, but actually each step is a little bit leaky, leading to runaway growth in false confidence.
Not that non-rationalists are any better at reasoning, but non-rationalists do at least benefit from some intellectual humility.
As a former mechanical engineer, I visualize this phenomenon as a "tolerance stackup": for each part you add to the chain, you accumulate error. If you're not damn careful, your assembly of parts (or conclusions) will fail to measure up to expectations.
I like this approach. Also having dipped my toes in the engineering world (professionally), I think it naturally follows that you should be constantly rechecking your designs. Those tolerances were fine to begin with, but are they now that things have changed? It also makes you think about failure modes: what can make this all come down, and if it does, which way will it fail? That's really useful because you can then leverage it to design things to fail in certain ways, and now you've got a testable hypothesis. It won't create proof, but it at least helps in finding flaws.
The example I heard was to picture the Challenger shuttle, where the O-rings used worked 99% of the time. Well, what happens to the failure rate when you have 6 O-rings in a booster rocket and only one needs to fail for disaster? Now you only have a roughly 94% success rate.
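To make that arithmetic concrete, here's a back-of-the-envelope sketch (it treats the six seals as independent, which a real failure analysis wouldn't):

    # chance that all six O-rings hold, if each holds with probability 0.99
    p_single = 0.99
    n_rings = 6
    p_all_hold = p_single ** n_rings
    print(round(p_all_hold, 3))  # 0.941, i.e. roughly a 94% success rate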
Basically the same as how dead reckoning your location works worse the longer you've been traveling?
Dead reckoning is a great analogy for coming to conclusions based on reason alone. Always useful to check in with reality.
I saw an article recently that talked about stringing likely inferences together but ending up with an unreliable outcome because enough 0.9 probabilities one after the other lead to an unlikely conclusion.
Edit: Couldn't find the article, but AI referenced the Bayesian "chain of reasoning fallacy".
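The arithmetic there is easy to sketch (a toy example, assuming each inference is about 90% likely and the steps are independent):

    # confidence left after chaining n inferences that are each ~0.9 likely
    for n in (1, 3, 5, 7, 10):
        print(n, round(0.9 ** n, 3))
    # prints 0.9, 0.729, 0.59, 0.478, 0.349
    # a chain of seven "pretty likely" steps is already worse than a coin flip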
I think you have this oversimplified. Stringing together inferences can take us in either direction. It really depends on how things are being done, and this isn't always so obvious or simple. But just to show both directions, I'll give two simple examples (the real world holds many more complexities).
It is all about what is being modeled and how the inferences string together. If these are being multiplied, then yes, confidence is going to decrease, since xy < x and xy < y for every 0 < x, y < 1.
But a good counterexample is the classic Bayesian inference example[0]. Suppose you have a test that detects vampirism with 95% accuracy (Pr(+|vampire) = 0.95) and has a false positive rate of 1% (Pr(+|mortal) = 0.01). But vampirism is rare, affecting only 0.1% of the population. This ends up meaning a positive test only gives us an 8.7% likelihood of the subject being a vampire (Pr(vampire|+) = 0.087). The solution here is to repeat the testing. On our second test Pr(vampire) changes from 0.001 to 0.087 and Pr(vampire|+) goes to about 90%, and a third test gets us to about 99%.
[0] Our equation is Pr(vampire|+) = Pr(+|vampire)Pr(vampire) / Pr(+), and the crux is Pr(+) = Pr(+|vampire)Pr(vampire) + Pr(+|mortal)(1 - Pr(vampire)).
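Here's a small sketch of that calculation (same numbers as above; repeating the test assumes the results are independent given the subject's true status, which is the idealized part):

    # Bayesian update for the vampire test, repeated three times
    p_pos_given_vampire = 0.95   # sensitivity
    p_pos_given_mortal = 0.01    # false positive rate
    prior = 0.001                # base rate of vampirism

    for test in range(1, 4):
        p_pos = p_pos_given_vampire * prior + p_pos_given_mortal * (1 - prior)
        posterior = p_pos_given_vampire * prior / p_pos
        print(test, round(posterior, 3))   # ~0.087, then ~0.9, then ~0.999
        prior = posterior                  # today's posterior is tomorrow's prior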
I like this analogy.
I think of a bike's shifting systems; better shifters, better housings, better derailleur, or better chainrings/cogs can each 'improve' things.
I suppose where that becomes relevant here is that you can have very fancy parts on various ends, but if there's a piece in the middle that's wrong, you're still gonna get shit results.
You're only as strong as the weakest link.
Your SCSI devices are only as fast as the slowest device in the chain.
I don't need to be faster than the bear, I only have to be faster than you.
This is what I hate about real life electronics. Everything is nice on paper, but physics sucks.
I think the reason this is true is mostly because of how people do things "on paper". We can get much more accurate with "on paper" modeling, but the amount of work increases very fast. So it tends to be much easier to just calculate things as if they were spherical chickens in a vacuum and account for error than it is to calculate while including things like geometry, drag, resistance, and all that other fun jazz (for which you'll still need error/uncertainty terms, though they can now be smaller).
At the end of the day, I think the important lesson is that simple explanations can be good approximations that get us most of the way there, but the details and nuances shouldn't be so easily dismissed. With this framing we can choose how we pick our battles: is it cheaper/easier/faster to run a very accurate sim, or to iterate in physical space?
IME most people aren't very good at building axioms. I hear a lot of people say "from first principles", and it's a pretty good indication that what follows won't be. First principles require a lot of effort to create. They require iteration. They require a lot of nuance, care, and precision. And of course they do! They are the foundation of everything else that is about to come. This is why I find it so odd when people say "let's work from first principles" and then just state something matter-of-factly and follow from there. If you want to really do this, you start simple, attack your own assumptions, reform, build, attack, and repeat.
This is how you reduce the leakiness, but I think it is categorically the same problem as the bad axioms. It is hard to challenge yourself and we often don't like being wrong. It is also really unfortunate that small mistakes can be a critical flaw. There's definitely an imbalance.
This is why the OP is seeing this behavior. Because the smartest people you'll meet are constantly challenging their own ideas. They know they are wrong to at least some degree. You'll sometimes find them talking with a bit of authority at first, but a key part is watching how they deal with challenges to their assumptions. Ask them what would cause them to change their minds. Ask them about nuances and details. They won't always dig into those cans of worms, but they will be aware of them, and maybe nervous or excited about going down that road (or do they just outright dismiss it?). They understand that accuracy is proportional to computation, and that the computation required grows exponentially as you converge on accuracy. These are strong indications, since they'll suggest whether someone cares more about finding the right answer or about being right. You also don't have to be very smart to detect this.
> IME most people aren't very good at building axioms.
It seems you implying that some people are good at building good axiom systems for the real world. I disagree. There are a few situations in the world where you have generalities so close to complete that you can use simple logic on them. But for the messy parts of the real world, there is simply no set of logical claims which can provide anything like certainty, no matter how "good" someone is at "axiom creation".
I don't even know what you're arguing.
How do you go from "most people aren't very good" to "this implies some people are really good"? First, that is just a really weird interpretation of how people speak (btw, "you're", not "you" ;). Phrasing it that way is nicer and will be received better than "making axioms is hard and people are shit at it." Second, you've assumed a binary condition. Here's an example: "Most people aren't very good at programming." That is an objectively true statement, right?[0] I'll also make the claim that no one is a good programmer, but some programmers are better than others. There's no contradiction in those two claims, even if you don't believe the latter is true.
Now, there are some pretty good axiom systems. ZF and ZFC seem to be working pretty well. There are others too, and they are used for pretty complex stuff. They all work at least for "simple logic."
But then again, you probably weren't thinking of things like ZFC. But hey, that was kinda my entire point.
I agree. I'd hope I agree, considering my username... But you've jumped to a much stronger statement. I hope we both agree that just because there are things we can't prove, it doesn't mean there aren't things we can prove. Similarly, I hope we agree that even if we can't prove anything to absolute certainty, that doesn't mean we can't establish things to an incredibly high level of certainty, or show that something is more right than something else.
[0] Most people don't even know how to write a program. Well... maybe everyone can write a Perl program but let's not get into semantics.
> I don’t think it’s just (or even particularly) bad axioms, I think it’s that people tend to build up “logical” conclusions where they think each step is a watertight necessity that follows inevitably from its antecedents, but actually each step is a little bit leaky, leading to runaway growth in false confidence.
This is what you get when you naively re-invent philosophy from the ground up while ignoring literally 2500 years of actual debugging of such arguments by the smartest people who ever lived.
You can't diverge from and improve on what everyone else did AND be almost entirely ignorant of it, let alone have no training whatsoever in it. This extreme arrogance I would say is the root of the problem.
> Not that non-rationalists are any better at reasoning, but non-rationalists do at least benefit from some intellectual humility.
Non-rationalists are forced to use their physical senses more often because they can't follow the chain of logic as far. This is to their advantage. Empiricism > rationalism.
That conclusion presupposes that rationality and empiricism are at odds or mutually incompatible somehow. Any rational position worth listening to, about any testable hypothesis, is hand in hand with empirical thinking.
In traditional philosophy, rationalism and empiricism are at odds; they are essentially diametrically opposed. Rationalism prioritizes a priori reasoning while empiricism prioritizes a posteriori reasoning. You can prioritize both equally but that is neither rationalism nor empiricism in the traditional terminology. The current rationalist movement has no relation to that original rationalist movement, so the words don't actually mean the same thing. In fact, the majority of participants in the current movement seem ignorant of the historical dispute and its implications, hence the misuse of the word.
Good rationalism includes empiricism though
> I think it’s that people tend to build up “logical” conclusions where they think each step is a watertight necessity that follows inevitably from its antecedents, but actually each step is a little bit leaky, leading to runaway growth in false confidence.
Yeah, this is a pattern I've seen a lot of recently—especially in discussions about LLMs and the supposed inevitability of AGI (and the Singularity). This is a good description of it.
Another annoying one is the simulation theory group. They know just enough about Physics to build sophisticated mental constructs without understanding how flimsy the foundations are or how their logical steps are actually unproven hypotheses.
Agreed. This one is especially annoying to me and dear to my heart, because I enjoy discussing the philosophy behind this, but it devolves into weird discussions and conclusions fairly quickly without much effort at all. I particularly enjoy the tenets of certain sects of buddhism and how they view these things, but you'll get a lot of people that are doing a really pseudo-intellectual version of the Matrix where they are the main character.
You might have just explained the phenomenon of AI doomsayers overlapping with ea/rat types, which I otherwise found inexplicable. EA/Rs seem kind of appallingly positivist otherwise.
I mean, that's also because of their mutual association with Eliezer Yudkowsky, who is (AIUI) a believer in the Singularity, as well as being one of the main wellsprings of "Rationalist" philosophy.
Yet I think most people err in the other direction. They 'know' the basics of health, of discipline, of charity, but have a hard time following through. 'Take a simple idea, and take it seriously': a favorite aphorism of Charlie Munger. Most of the good things in my life have come from trying to follow through the real implications of a theoretical belief.
And “always invert”! A related mungerism.
I always get weird looks when I talk about killing as many pilots as possible. I need a new example of the always invert model of problem solving.
Perhaps part of being rational, as opposed to rationalist, is having a sense of when to override the conclusions of seemingly logical arguments.
In philosophy grad school, we described this as 'being reasonable' as opposed to 'being rational'.
That said, big-R Rationalism (the Lesswrong/Yudkowsky/Ziz social phenomenon) has very little in common with what we've standardly called 'rationalism'; trained philosophers tend to wince a little bit when we come into contact with these groups (who are nevertheless chockablock with fascinating personalities and compelling aesthetics.)
From my perspective (and I have only glancing contact), these mostly seem to be _cults of consequentialism_, an epithet I'd also use for Effective Altruists.
Consequentialism has been making young people say and do daft things for hundreds of years -- Dostoevsky's _Crime and Punishment_ being the best character sketch I can think of.
While there are plenty of non-religious (and thus, small-r rationalist) alternatives to consequentialism, none of them seem to make it past the threshold in these communities.
The other code smell these big-R rationalist groups have for me, and one which this article correctly flags, is their weaponization of psychology -- while I don't necessarily doubt the findings of sociology, psychology, etc., I wonder whether they furnish useful tools for personal improvement. For example, memorizing a list of biases that people can potentially have is like numbering the stars in the sky; to me, it seems like this is a cargo-cultish transposition of the act of finding _fallacies in arguments_ into the domain of finding _faults in persons_.
And that's a relatively mild use of psychology. I simply can't imagine how annoying it would be to live in a household where everyone had memorized everything from connection theory to attachment theory to narrative therapy and routinely deployed hot takes on one another.
In actual philosophical discussion, back at the academy, psychologizing was considered 'below the belt', and would result in an intervention by the ref. Sometimes this was explicitly associated with something we called 'the Principle of Charity', which is that, out of an abundance of epistemic caution, you commit to always interpreting the motives and interests of your interlocutor in the kindest light possible, whether in 'steel manning' their arguments, or turning a strategically blind eye to bad behaviour in conversation.
The Principle of Charity is probably the most enduring lesson I took from my decade-long sojourn among the philosophers, and mutual psychological dissection is anathema to it.
I actually think that the fact that rationalists use the term "steel manning" betrays a lack of charity.
If the only thing you owe your interlocutor is to use your "prodigious intellect" to restate their own argument in the way that sounds the most convincing to you, maybe you are in fact a terrible listener.
> While there are plenty of non-religious (and thus, small-r rationalist) alternatives to consequentialism, none of them seem to make it past the threshold in these communities.
I suspect this is because consequentialism is the only meta-ethical framework that has any leg to stand on other than "because I said so". That makes it very attractive. The problem is you also can't build anything useful on top of it, because if you try to quantify consequences, and do math on them, you end up with the Repugnant Conclusion or worse. And in practice - in Effective Altruism/Longtermism, for example - the use of arbitrarily big numbers lets you endorse the Very Repugnant Conclusion while patting yourself on the back for it.
> to me, it seems like this is a cargo-cultish transposition of the act of finding _fallacies in arguments_ into the domain of finding _faults in persons_.
Well put, thanks!
I am interested in your journey from philosophy to coding.
I feel this way about some of the more extreme effective altruists. There is no room for uncertainty or recognition of the way that errors compound.
- "We should focus our charitable endeavors on the problems that are most impactful, like eradicating preventable diseases in poor countries." Cool, I'm on board.
- "I should do the job that makes the absolute most amount of money possible, like starting a crypto exchange, so that I can use my vast wealth in the most effective way." Maybe? If you like crypto, go for it, I guess, but I don't think that's the only way to live, and I'm not frankly willing to trust the infallibility and incorruptibility of these so-called geniuses.
- "There are many billions more people who will be born in the future than those people who are alive today. Therefore, we should focus on long-term problems over short-term ones because the long-term ones will affect far more people." Long-term problems are obviously important, but the further we get into the future, the less certain we can be about our projections. We're not even good at seeing five years into the future. We should have very little faith in some billionaire tech bro insisting that their projections about the 22nd century are correct (especially when those projections just so happen to show that the best thing you can do in the present is buy the products that said tech bro is selling).
The "longtermism" idea never made sense to me: So we should sacrifice the present to save the future. Alright. But then those future descendants would also have to sacrifice their present to save their future, etc. So by that logic, there could never be a time that was not full of misery. So then why do all of that stuff?
At some point in the future, there won't be more people who will live in the future than live in the present, at which point you are allowed to improve conditions today. Of course, by that point the human race is nearly finished, but hey.
That said, if they really thought hard about this problem, they would have come to a different conclusion:
https://theconversation.com/solve-suffering-by-blowing-up-th...
To me it is disguised way of saying the ends justify the means. Sure, we murder a few people today but think of the utopian paradise we are building for the future.
A bit of longtermism wouldn’t be so bad. We could sacrifice the convenience of burning fossil fuels today for our descendants to have an inhabitable planet.
Zeno's poverty
Well, there's a balance to be had. Do the most good you can while still being able to survive the rat race.
However, people are bad at that.
I'll give an interesting example.
Hybrid Cars. Modern proper HEVs[0] usually benefit their owners, both by virtue of better fuel economy and, in most cases, by being overall more reliable than a normal car.
And, they are better on CO2 emissions and lower our oil consumption.
And yet most carmakers, as well as consumers, have been very slow to adopt. On the consumer side we are finally at the point where we have hybrid trucks that get 36-40 MPG while being capable of towing 4000 pounds or hauling over 1000 pounds in the bed [1], hybrid minivans capable of 35 MPG for transporting groups of people, hybrid sedans getting 50+ MPG, and small SUVs getting 35-40+ MPG for people who need a more normal 'people' car. And while they are selling better, it's insane that it took as long as it has to get here.
The main 'misery' you experience at that point is that you're driving the same car as a lot of other people and it's not as exciting [2] as something with more power than most people know what to do with.
And hell, as they say in investing, sometimes the market can stay irrational longer than you can stay solvent. E.g., was it truly worth it to Hydro-Quebec to sit on LiFePO4 patents the way they did, versus just figuring out licensing terms that got them a little bit of money while properly accelerating adoption of hybrids/EVs/etc.?
[0] - By this I mean something like Toyota's HSD-style setup used by Ford and Subaru, or Honda's or Hyundai/Kia's setup where there's still a more normal transmission involved.
[1] - Ford advertises up to 1500 pounds, but I feel like the GVWR allows for a 25 pound driver at that point.
[2] - I feel like there's ways to make an exciting hybrid, but until there's a critical mass or Stellantis gets their act together, it won't happen...
"I came up with a step-by-step plan to achieve World Peace, and now I am on a government watchlist!"
It goes along with the "taking ideas seriously" part of [R]ationalism. They committed to the idea of maximizing expected quantifiable utility, and imagined scenarios with big enough numbers (of future population) that the probability of the big-number future coming to pass didn't matter anymore. Normal people stop taking an idea seriously once it's clearly a fantasy, but [R]ationalists can't do that if the fantasy is both technically possible and involves big enough imagined numbers to overwhelm its probability, because of their commitment to "shut up and calculate".
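A toy version of the expected-value move being described (the specific numbers here are invented purely for illustration):

    # "shut up and calculate" caricature: expected lives affected
    p_future, n_future = 1e-10, 1e35   # vanishingly unlikely scenario, astronomically many future people
    p_now, n_now = 0.9, 1e4            # near-certain help for people alive today

    print(p_future * n_future)   # 1e+25
    print(p_now * n_now)         # 9000.0
    # a big enough imagined number overwhelms any small probability,
    # so the fantasy scenario "wins" the calculation every time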
"I should do the job that makes the absolute most amount of money possible, like starting a crypto exchange, so that I can use my vast wealth in the most effective way."
Has always really bothered me because it assumes that there are no negative impacts of the work you did to get the money. If you do a million dollars worth of damage to the world and earn 100k (or a billion dollars worth of damage to earn a million dollars), even if you spend all of the money you earned on making the world a better place, you aren't even going to fix 10% of the damage you caused (and that's ignoring the fact that it's usually easier/cheaper to break things than to fix them).
> If you do a million dollars worth of damage to the world and earn 100k (or a billion dollars worth of damage to earn a million dollars), even if you spend all of the money you earned on making the world a better place, you aren't even going to fix 10% of the damage you caused (and that's ignoring the fact that it's usually easier/cheaper to break things than to fix them).
You kinda summed up a lot of the world post industrial revolution there, at least as far as stuff like toxic waste (Superfund, anyone?) and climate change go. I mean, for goodness' sake, let's just think about TEL (tetraethyl lead) and how they knew ethanol could work, but it just wasn't 'patentable'. [0] Or the "We don't even know the dollar amount because we don't have a workable solution" problem of PFAS.
[0] - I still find it shameful that a university is named after the man who enabled this to happen.
And not just that, but the very fact that someone considers it valid to try to accumulate billions of dollars so they can have an outsized influence on the direction of society, seems somewhat questionable.
Even with 'good' intentions, there is the implied statement that your ideas are better than everyone else's and so should be pushed like that. The whole thing is a self-satisfied ego-trip.
There's a hidden (or not so hidden) assumption in the EA's "calculations" that capitalism is great and climate change isn't a big deal. (You pretty much have to believe the latter to believe the former).
> Not that non-rationalists are any better at reasoning, but non-rationalists do at least benefit from some intellectual humility.
I have observed no such correlation of intellectual humility.
Would you consider the formal verification community to be "rationalists"?
> I don’t think it’s just (or even particularly) bad axioms, I think it’s that people tend to build up “logical” conclusions where they think each step is a watertight necessity that follows inevitably from its antecedents, but actually each step is a little bit leaky, leading to runaway growth in false confidence.
I really like your way of putting it. It’s a fundamental fallacy to assume certainty when trying to predict the future. Because, as you say, uncertainty compounds over time, all prediction models are chaotic. It’s usually associated with some form of Dunning-Kruger, where people know just enough to have ideas but not enough to understand where they might fail (thus vastly underestimating uncertainty at each step), or just lacking imagination.
Deep Space 9 had an episode dealing with something similar. Superintelligent beings determine that a situation is hopeless and act accordingly. The normal beings take issue with the actions of the Superintelligents. The normal beings turn out to be right.
Precisely! I'd even say they get intoxicated with their own braininess. The expression that comes to mind is to get "way out over your skis".
I'd go even further and say most of the world's evils are caused by people with theories that are contrary to evidence. I'd place Marx among these but there's no shortage of examples.
> non-rationalists do at least benefit from some intellectual humility
The Islamists who took out the World Trade Center don’t strike me as particularly intellectually humble.
If you reject reason, you are only left with force.
Are you so sure the 9/11 hijackers rejected reason?
Why Are So Many Terrorists Engineers?
https://archive.is/XA4zb
Self-described rationalists can and often do rationalize acts and beliefs that seem baldly irrational to others.
Here's the thing, the goals of the terrorists weren't irrational.
People confuse "rational" with "moral". Those aren't the same thing. You can perfectly rationally do something immoral in pursuit of a bad goal.
For example, if you value your life above all others, then it would be perfectly rational to slaughter an orphanage if a more powerful entity made that your only choice for survival. Morally bad, rationally correct.
I now feel the need to comment that this thread does illustrate an issue I have with the naming of the philosophical/internet community of rationalism.
One can very clearly be a rational individual or an individual who practices reason and not associate with the internet community of rationalism. The median member of the group defined as "not being part of the internet-organized movement of rationalism and not reading lesswrong posts" is not "religious extremist striking the world trade center and committing an atrocious act of terrorism", it's "random person on the street."
And to preempt a specific response some may make to this, yes, the thread here is talking about rationalism as discussed in the blog post above, organized around Yudkowsky or Slate Star Codex, and not the rationalist movement of, like, Spinoza and company. Very different things philosophically.
Islamic fundamentalism and cult rationalism are both involved in a “total commitment”, “all or nothing” type of thinking. The former is totally committed to a particular literal reading of scripture, the latter, to logical deduction from a set of chosen premises. Both modes of thinking have produced violent outcomes in the past.
Skepticism, in which no premise or truth claim is regarded as above dispute (or, in which it is always permissible and even praiseworthy to suspend one's judgment on a matter), is the better comparison with rationalism-fundamentalism. It is interesting that skepticism today is often associated with agnostic or atheist religious beliefs, but I consider many religious thinkers in history to have been skeptics par excellence when judged by the standard of their own time. E.g. William Ockham (of Ockham's razor) was a 14C Franciscan friar (and a fascinating figure) who denied papal infallibility. I count Martin Luther as belonging to the history of skepticism as well, along with much of the humanist movement that returned to the original Greek sources for the Bible, away from the Latin Vulgate translation by Jerome.
The history of ideas is fun to read about. I am hardly an expert, but you may be interested in the history of Aristotelian rationalism, which gained prominence in the medieval west largely through the works of Averroes, a 12C Muslim philosopher who heavily favored Aristotle. In the 13C, Thomas Aquinas wrote a definitive Catholic systematic theology, rejecting Averroes but embracing Aristotle. To this day, Catholic theology is still essentially Aristotelian.
True skepticism is rare. It's easy to be skeptical only about beliefs you dislike or at least don't care about. It's hard to approach the 100th self-professed psychic with an honest intention to truly test their claims rather than to find the easiest way to ridicule them.
The only absolute above questioning is that there are no absolutes.