Comment by JohnMakin
4 days ago
One of a few issues I have with groups like these is that they often confidently and aggressively spew a set of beliefs that on their face logically follow from one another, until you realize they are built on a set of axioms that are either entirely untested or outright nonsense. This is common everywhere, but I feel it is especially pronounced in communities like this. It also involves quite a bit of navel-gazing that makes me feel a little sick about participating.
The smartest people I have ever known have been profoundly unsure of their beliefs and what they know. I immediately become suspicious of anyone who is very certain of something, especially if they derived it on their own.
I don’t think it’s just (or even particularly) bad axioms, I think it’s that people tend to build up “logical” conclusions where they think each step is a watertight necessity that follows inevitably from its antecedents, but actually each step is a little bit leaky, leading to runaway growth in false confidence.
Not that non-rationalists are any better at reasoning, but non-rationalists do at least benefit from some intellectual humility.
As a former mechanical engineer, I visualize this phenomenon as a "tolerance stackup": for each part you add to the chain, you accumulate error. If you're not damn careful, your assembly of parts (or conclusions) will fail to measure up to expectations.
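To put rough numbers on the analogy, here's a minimal sketch (my own, with hypothetical tolerances) comparing a worst-case stackup against the statistical root-sum-square estimate:

```python
# Hypothetical part tolerances in mm (made-up numbers for illustration).
tolerances = [0.05, 0.10, 0.02, 0.08, 0.05]

# Worst case: every part's error lines up in the same direction.
worst_case = sum(tolerances)

# Statistical (root-sum-square): errors assumed independent and random.
rss = sum(t ** 2 for t in tolerances) ** 0.5

print(f"worst case: +/-{worst_case:.3f} mm")  # +/-0.300 mm
print(f"RSS:        +/-{rss:.3f} mm")         # +/-0.148 mm
```

Either way, the more parts in the chain, the bigger the accumulated error.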
I like this approach. Also, having dipped my toes in the engineering world (professionally), I think it naturally follows that you should be constantly rechecking your designs. Those tolerances were fine to begin with, but are they now that things have changed? It also makes you think about failure modes: what can make this all come down, and if it does, which way will it fail? That's really useful, because you can then leverage it to design things to fail in certain ways, and now you've got a testable hypothesis. It won't create proof, but it at least helps in finding flaws.
2 replies →
Basically the same as how dead reckoning your location works worse the longer you've been traveling?
3 replies →
I saw an article recently that talked about stringing likely inferences together and ending up with an unreliable outcome, because enough 0.9 probabilities chained one after the other lead to an unlikely conclusion.
Edit: Couldn't find the article, but an AI referenced the Bayesian "chain of reasoning" fallacy.
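The arithmetic is easy to check; a quick sketch (my own numbers, not from the article):

```python
# Each inference step looks safe on its own (p = 0.9)...
p_step = 0.9
for n in (1, 3, 5, 10):
    # ...but the chance that ALL n steps hold shrinks geometrically.
    print(f"n={n:2d}: {p_step ** n:.2f}")

# Output: n= 1: 0.90, n= 3: 0.73, n= 5: 0.59, n=10: 0.35
```

Ten "pretty likely" steps in a row and the conclusion is more likely false than true.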
21 replies →
I like this analogy.
I think of a bike's shifting systems; better shifters, better housings, better derailleur, or better chainrings/cogs can each 'improve' things.
I suppose where that becomes relevant here is that you can have very fancy parts on various ends, but if there's a piece in the middle that's wrong, you're still gonna get shit results.
2 replies →
This is what I hate about real life electronics. Everything is nice on paper, but physics sucks.
1 reply →
IME most people aren't very good at building axioms. I hear a lot of people say "from first principles", and it's a pretty good indication that they won't actually be working from them. First principles require a lot of effort to create. They require iteration. They require a lot of nuance, care, and precision. And of course they do! They are the foundation of everything else that is about to come. This is why I find it so odd when people say "let's work from first principles" and then just state something matter-of-factly and follow from there. If you really want to do this, you start simple, attack your own assumptions, reform, build, attack, and repeat.
This is how you reduce the leakiness, but I think it is categorically the same problem as the bad axioms. It is hard to challenge yourself and we often don't like being wrong. It is also really unfortunate that small mistakes can be a critical flaw. There's definitely an imbalance.
This is why the OP is seeing this behavior: the smartest people you'll meet are constantly challenging their own ideas. They know they are wrong to at least some degree. You'll sometimes find them talking with a bit of authority at first, but a key part is watching how they deal with challenges to their assumptions. Ask them what would cause them to change their minds. Ask them about nuances and details. They won't always dig into those cans of worms, but they will be aware of them, and maybe nervous or excited about going down that road (or do they just outright dismiss it?). They understand that accuracy is proportional to computation, and that the computation required grows exponentially as you converge on accuracy. These are strong indications, since they suggest whether someone cares more about finding the right answer or about being right. You also don't have to be very smart to detect this.
> IME most people aren't very good at building axioms.
It seems you're implying that some people are good at building good axiom systems for the real world. I disagree. There are a few situations in the world where you have generalities so close to complete that you can use simple logic on them. But for the messy parts of the real world, there is simply no set of logical claims that can provide anything like certainty, no matter how "good" someone is at "axiom creation".
6 replies →
> I don’t think it’s just (or even particularly) bad axioms, I think it’s that people tend to build up “logical” conclusions where they think each step is a watertight necessity that follows inevitably from its antecedents, but actually each step is a little bit leaky, leading to runaway growth in false confidence.
This is what you get when you naively re-invent philosophy from the ground up while ignoring literally 2500 years of actual debugging of such arguments by the smartest people who ever lived.
You can't diverge from and improve on what everyone else did AND be almost entirely ignorant of it, let alone have no training whatsoever in it. This extreme arrogance I would say is the root of the problem.
> Not that non-rationalists are any better at reasoning, but non-rationalists do at least benefit from some intellectual humility.
Non-rationalists are forced to use their physical senses more often because they can't follow the chain of logic as far. This is to their advantage. Empiricism > rationalism.
That conclusion presupposes that rationality and empiricism are at odds or somehow mutually incompatible. Any rational position worth listening to, about any testable hypothesis, goes hand in hand with empirical thinking.
3 replies →
Good rationalism includes empiricism though
> I think it’s that people tend to build up “logical” conclusions where they think each step is a watertight necessity that follows inevitably from its antecedents, but actually each step is a little bit leaky, leading to runaway growth in false confidence.
Yeah, this is a pattern I've seen a lot of recently—especially in discussions about LLMs and the supposed inevitability of AGI (and the Singularity). This is a good description of it.
Another annoying one is the simulation theory group. They know just enough about physics to build sophisticated mental constructs without understanding how flimsy the foundations are or how their logical steps are actually unproven hypotheses.
2 replies →
You might have just explained the phenomenon of AI doomsayers overlapping with EA/rat types, which I otherwise found inexplicable. EA/Rs seem kind of appallingly positivist otherwise.
1 reply →
Yet I think most people err in the other direction. They 'know' the basics of health, of discipline, of charity, but have a hard time following through. 'Take a simple idea, and take it seriously': a favorite aphorism of Charlie Munger. Most of the good things in my life have come from trying to follow through the real implications of a theoretical belief.
And “always invert”! A related mungerism.
1 reply →
Perhaps part of being rational, as opposed to rationalist, is having a sense of when to override the conclusions of seemingly logical arguments.
In philosophy grad school, we described this as 'being reasonable' as opposed to 'being rational'.
That said, big-R Rationalism (the Lesswrong/Yudkowsky/Ziz social phenomenon) has very little in common with what we've standardly called 'rationalism'; trained philosophers tend to wince a little bit when we come into contact with these groups (who are nevertheless chockablock with fascinating personalities and compelling aesthetics.)
From my perspective (and I have only glancing contact), these mostly seem to be _cults of consequentialism_, an epithet I'd also use for Effective Altruists.
Consequentialism has been making young people say and do daft things for hundreds of years -- Dostoevsky's _Crime and Punishment_ being the best character sketch I can think of.
While there are plenty of non-religious (and thus, small-r rationalist) alternatives to consequentialism, none of them seem to make it past the threshold in these communities.
The other code smell these big-R rationalist groups have for me, and one which this article correctly flags, is their weaponization of psychology -- while I don't necessarily doubt the findings of sociology, psychology, etc., I wonder whether they furnish useful tools for personal improvement. For example, memorizing a list of biases that people can potentially have is like numbering the stars in the sky; to me, it seems like a cargo-cultish transposition of the act of finding _fallacies in arguments_ into the domain of finding _faults in persons_.
And that's a relatively mild use of psychology. I simply can't imagine how annoying it would be to live in a household where everyone had memorized everything from connection theory to attachment theory to narrative therapy and routinely deployed hot takes on one another.
In actual philosophical discussion, back at the academy, psychologizing was considered 'below the belt', and would result in an intervention by the ref. Sometimes this was explicitly associated with something we called 'the Principle of Charity', which is that, out of an abundance of epistemic caution, you commit to always interpreting the motives and interests of your interlocutor in the kindest light possible, whether in 'steel manning' their arguments, or turning a strategically blind eye to bad behaviour in conversation.
The Principle of Charity is probably the most enduring lesson I took from my decade-long sojourn among the philosophers, and mutual psychological dissection is anathema to it.
10 replies →
I feel this way about some of the more extreme effective altruists. There is no room for uncertainty or recognition of the way that errors compound.
- "We should focus our charitable endeavors on the problems that are most impactful, like eradicating preventable diseases in poor countries." Cool, I'm on board.
- "I should do the job that makes the absolute most amount of money possible, like starting a crypto exchange, so that I can use my vast wealth in the most effective way." Maybe? If you like crypto, go for it, I guess, but I don't think that's the only way to live, and I'm not frankly willing to trust the infallibility and incorruptibility of these so-called geniuses.
- "There are many billions more people who will be born in the future than those people who are alive today. Therefore, we should focus on long-term problems over short-term ones because the long-term ones will affect far more people." Long-term problems are obviously important, but the further we get into the future, the less certain we can be about our projections. We're not even good at seeing five years into the future. We should have very little faith in some billionaire tech bro insisting that their projections about the 22nd century are correct (especially when those projections just so happen to show that the best thing you can do in the present is buy the products that said tech bro is selling).
The "longtermism" idea never made sense to me: So we should sacrifice the present to save the future. Alright. But then those future descendants would also have to sacrifice their present to save their future, etc. So by that logic, there could never be a time that was not full of misery. So then why do all of that stuff?
16 replies →
"I should do the job that makes the absolute most amount of money possible, like starting a crypto exchange, so that I can use my vast wealth in the most effective way."
Has always really bothered me, because it assumes that there are no negative impacts from the work you did to get the money. If you do a million dollars' worth of damage to the world and earn 100k (or a billion dollars' worth of damage to earn a million dollars), then even if you spend all of the money you earned on making the world a better place, you aren't even going to fix 10% of the damage you caused (and that's ignoring the fact that it's usually easier/cheaper to break things than to fix them).
4 replies →
> Not that non-rationalists are any better at reasoning, but non-rationalists do at least benefit from some intellectual humility.
I have observed no such correlation with intellectual humility.
Would you consider the formal verification community to be "rationalists"?
> I don’t think it’s just (or even particularly) bad axioms, I think it’s that people tend to build up “logical” conclusions where they think each step is a watertight necessity that follows inevitably from its antecedents, but actually each step is a little bit leaky, leading to runaway growth in false confidence.
I really like your way of putting it. It’s a fundamental fallacy to assume certainty when trying to predict the future. Because, as you say, uncertainty compounds over time, all prediction models are chaotic. It’s usually associated with some form of Dunning-Kruger, where people know just enough to have ideas but not enough to understand where they might fail (thus vastly underestimating uncertainty at each step), or just lacking imagination.
Deep Space 9 had an episode dealing with something similar. Superintelligent beings determine that a situation is hopeless and act accordingly. The normal beings take issue with the actions of the Superintelligents. The normal beings turn out to be right.
Precisely! I'd even say they get intoxicated with their own braininess. The expression that comes to mind is to get "way out over your skis".
I'd go even further and say most of the world's evils are caused by people with theories that are contrary to evidence. I'd place Marx among these but there's no shortage of examples.
> non-rationalists do at least benefit from some intellectual humility
The Islamists who took out the World Trade Center don’t strike me as particularly intellectually humble.
If you reject reason, you are only left with force.
Are you so sure the 9/11 hijackers rejected reason?
Why Are So Many Terrorists Engineers?
https://archive.is/XA4zb
Self-described rationalists can and often do rationalize acts and beliefs that seem baldly irrational to others.
2 replies →
I now feel the need to comment that this thread does illustrate an issue I have with the naming of the philosophical/internet community of rationalism.
One can very clearly be a rational individual, or an individual who practices reason, and not associate with the internet community of rationalism. The median member of the group defined as "not being part of the internet-organized movement of rationalism and not reading LessWrong posts" is not "religious extremist striking the World Trade Center and committing an atrocious act of terrorism"; it's "random person on the street."
And to preempt a specific response some may make to this: yes, the thread here is talking about rationalism as discussed in the blog post above, as organized around Yudkowsky or Slate Star Codex, and not the rationalist movement of, like, Spinoza and company. Very different things philosophically.
Islamic fundamentalism and cult rationalism are both involved in a “total commitment”, “all or nothing” type of thinking. The former is totally committed to a particular literal reading of scripture, the latter, to logical deduction from a set of chosen premises. Both modes of thinking have produced violent outcomes in the past.
Skepticism, in which no premise or truth claim is regarded as above dispute (or, in which it is always permissible and even praiseworthy to suspend one's judgment on a matter), is the better comparison with rationalism-fundamentalism. It is interesting that skepticism today is often associated with agnostic or atheist religious beliefs, but I consider many religious thinkers in history to have been skeptics par excellence when judged by the standard of their own time. E.g. William Ockham (of Ockham's razor) was a 14C Franciscan friar (and a fascinating figure) who denied papal infallibility. I count Martin Luther as belonging to the history of skepticism as well, along with much of the humanist movement that returned to the original Greek sources for the Bible, away from the Latin Vulgate translation by Jerome.
The history of ideas is fun to read about. I am hardly an expert, but you may be interested in the history of Aristotelian rationalism, which gained prominence in the medieval West largely through the works of Averroes, a 12C Muslim philosopher who heavily favored Aristotle. In the 13C, Thomas Aquinas wrote a definitive Catholic systematic theology, rejecting Averroes but embracing Aristotle. To this day, Catholic theology is still essentially Aristotelian.
2 replies →
Strongly recommend this profile in the NYer on Curtis Yarvin (who also uses "rationalism" to justify his beliefs) [0]. The section towards the end, which reports on his meeting one of his supposed ideological heroes for an extended period of time, is particularly illuminating.
I feel like the internet has led to an explosion of such groups because it abstracts the "ideas" away from the "people". I suspect if most people were in a room or spent an extended amount of time around any of these self-professed, hyper-online rationalists, they would immediately disregard any theories they were able to cook up, no matter how clever or persuasively argued they might be in their written-down form.
[0]: https://www.newyorker.com/magazine/2025/06/09/curtis-yarvin-...
> I feel like the internet has led to an explosion of such groups because it abstracts the "ideas" away from the "people". I suspect if most people were in a room or spent an extended amount of time around any of these self-professed, hyper-online rationalists, they would immediately disregard any theories they were able to cook up, no matter how clever or persuasively argued they might be in their written-down form.
Likely the opposite. The internet has led to people being able to see the man behind the curtain, and realize how flawed the individuals pushing these ideas are. Whereas many intellectuals from 50 years back were just as bad if not worse, but able to maintain a false aura of intelligence by cutting themselves off from the masses.
Hard disagree. People use rationality to support the beliefs they already have, not to change those beliefs. The internet allows everyone to find something that supports anything.
I do it. You do it. I think a fascinating litmus test is asking yourself this question: “When did I last change my mind about something significant?” For most people the answer is “never”. If we lived in the world you described, most people’s answers would be “relatively recently”.
2 replies →
> I immediately become suspicious of anyone who is very certain of something
Me too, in almost every area of life. There's a reason it's called a conman: they are tricking your natural sense that confidence is connected to correctness.
But also, even when it isn't about conning you, how do people become certain of something? They ignore the evidence against whatever they are certain of.
People who actually know what they're talking about will always restrict the context and hedge their bets. Their explanations are tentative, filled with ifs and buts. They rarely say anything sweeping.
In the term "conman" the confidence in question is that of the mark, not the perpetrator.
Isn't confidence referring to the alternate definition of trust, as in "taking you into his confidence"?
1 reply →
> how do people become certain of something?
They see the same pattern repeatedly until it becomes the only reasonable explanation? I’m certain about the theory of gravity because every time I drop an object it falls to the ground with a constant acceleration.
"Cherish those who seek the truth but beware of those who find it" - Voltaire
Most likely Gide ("Croyez ceux qui cherchent la vérité, doutez de ceux qui la trouvent", "Believe those who seek Truth, doubt those who find it") and not Voltaire ;)
Voltaire was generally more subtle: "un bon mot ne prouve rien", a witty saying proves nothing, as he'd say.
> I immediately become suspicious of anyone who is very certain of something, especially if they derived it on their own.
Are you certain about this?
All I know is that I know nothing.
How do you know?
1 reply →
Well you could be a critical rationalist and do away with the notion of "certainty" or any sort of justification or privileged source of knowledge (including "rationality").
Your own state of mind is one of the easiest things to be fairly certain about.
The fact that this is false is one of the oldest findings of research psychology
1 reply →
said no one familiar with their own mind, ever!
no
Suspicious implies uncertain. It’s not immediate rejection.
Isaac Newton would like to have a word.
I am not a big fan of alchemy, thank you though.
Many arguments arise over the valuation of future money; see "discount function" [1]. At one extreme are the rational altruists, who rate it near 1.0; at the other, the "drill, baby, drill" people, who are much closer to 0.
The discount function really should have a noise term, because predictions about the future are noisy, and the noise increases with the distance into the future. If you don't consider that, you solve the wrong problem. There's a classic Roman concern about running out of space for cemeteries. Running out of energy, or overpopulation, turned out to be problems where the projections assumed less noise than actually happened.
[1] https://en.wikipedia.org/wiki/Discount_function
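A minimal sketch of the noise-term idea (my own toy model; every parameter value is made up, not from the Wikipedia article):

```python
import random

def discounted_value(value, years, rate=0.03, noise_per_year=0.05):
    """Toy discount function with a noise term that grows with the
    forecast horizon (rate and noise_per_year are assumed numbers)."""
    discount = (1 + rate) ** -years
    # Projection error compounds with distance into the future.
    noise = random.gauss(0, noise_per_year * years)
    return value * discount * max(0.0, 1 + noise)

# The same fixed benefit, 5 vs 50 years out: the mean discount is
# predictable, but the 50-year projections scatter wildly around it.
for horizon in (5, 50):
    samples = [discounted_value(1000, horizon) for _ in range(5)]
    print(horizon, [round(s) for s in samples])
```

Run it a few times: the short-horizon estimates cluster tightly, while the long-horizon ones are all over the place, which is exactly the "wrong problem" failure mode if you ignore the noise.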
I find Yudkowsky-style rationalists morbidly fascinating in the same way as Scientologists and other cults. Probably because they seem to genuinely believe they're living in a sci-fi story. I read a lot of their stuff, probably too much, even though I find it mostly ridiculous.
The biggest nonsense axiom I see in the AI-cult rationalist world is recursive self-improvement. It's the classic reason superintelligence takeoff happens in sci-fi: once AI reaches some threshold of intelligence, it's supposed to figure out how to edit its own mind, do that better and faster than humans, and exponentially leap into superintelligence. The entire "AI 2027" scenario is built on this assumption; it assumes that soon LLMs will gain the capability of assisting humans on AI research, and AI capabilities will explode from there.
But AI being capable of researching or improving itself is not obvious; there's so many assumptions built into it!
- What if "increasing intelligence", which is a very vague goal, has diminishing returns, making recursive self-improvement incredibly slow?
- Speaking of which, LLMs already seem to have hit a wall of diminishing returns; it seems unlikely they'll be able to assist cutting-edge AI research with anything other than boilerplate coding speed improvements.
- What if there are several paths to different kinds of intelligence with their own local maxima, in which the AI can easily get stuck after optimizing itself into the wrong type of intelligence?
- Once AI realizes it can edit itself to be more intelligent, it can also edit its own goals. Why wouldn't it wirehead itself? (short-circuit its reward pathway so it always feels like it's accomplished its goal)
Knowing Yudkowsky, I'm sure there's a long blog post somewhere where all of these are addressed with several million rambling words of theory, but I don't think any amount of doing philosophy in a vacuum, without concrete evidence, could convince me that fast-takeoff superintelligence is possible.
I agree. There's also the point of hardware dependence.
From all we've seen, the practical ability of AI/LLMs seems to be strongly dependent on how much hardware you throw at it. Seems pretty reasonable to me - I'm skeptical that there's that much out there in gains from more clever code, algorithms, etc on the same amount of physical hardware. Maybe you can get 10% or 50% better or so, but I don't think you're going to get runaway exponential improvement on a static collection of hardware.
Maybe they could design better hardware themselves? Maybe, but then the process of improvement is still gated behind how fast we can physically build next-generation hardware, perfect the tools and techniques needed to make it, deploy with power and cooling and datalinks and all of that other tedious physical stuff.
I think you can get a few more gigantic step functions' worth of improvement on the same hardware. For instance, LLMs don't have any kind of memory, short or long term.
> it assumes that soon LLMs will gain the capability of assisting humans
No, it does not. It assumes there will be progress in AI. It does not assume that progress will be in LLMs
It doesn't require AI to be better than humans for AI to take over, because unlike a human, an AI can be cloned. You have 2 AIs, then 4, then 8... then millions. All able to do the same things as humans (the assumption of AGI): build cars, build computers, build rockets, build space probes, build airplanes, build houses, build power plants, build factories. Build robot factories to create more robots and more power plants and more factories.
PS: Not saying I believe in the doom. But the thought experiment doesn't seem indefensible.
> It does not assume that progress will be in LLMs
If that's the case then there's not as much reason to assume that this progress will occur now, and not years from now; LLMs are the only major recent development that gives the AI 2027 scenario a reason to exist.
> You have 2 AIs, then 4, then 8... then millions
The most powerful AI we have now is strictly hardware-dependent, which is why only a few big corporations have it. Scaling it up or cloning it is bottlenecked by building more data centers.
Now it's certainly possible that there will be a development soon that makes LLMs significantly more efficient and frees up all of that compute for more copies of them. But there's no evidence that even state-of-the-art LLMs will be any help in finding this development; that kind of novel research is just not something they're any good at. They're good at doing well-understood things quickly and in large volume, with small variations based on user input.
> But the thought experiment doesn't seem indefensible.
The part that seems indefensible is the unexamined assumptions about LLMs' ability (or AI's ability more broadly) to jump to optimal human ability in fields like software or research, using better algorithms and data alone.
Take https://ai-2027.com/research/takeoff-forecast as an example: it's the side page of AI 2027 that attempts to deal with these types of objections. It spends hundreds of paragraphs on what the impact of AI reaching a "superhuman coder" level will be on AI research, on the difference between the effectiveness of an organization's average and best researchers, and on the impact of an AI closing that gap and having the same research effectiveness as the best humans.
But what goes completely unexamined and unjustified is the idea that AI will be capable of reaching "superhuman coder" level, or developing peak-human-level "research taste", at all, at any point, with any amount of compute or data. It's simply assumed that it will get there because the exponential curve of the recent AI boom will keep going up.
Skills like "research taste" can't be learned at a high level from books and the internet, even if, like ChatGPT, you've read the entire Internet and can see all the connections within it. They require experience, trial and error. Probably the same amount that a human expert would require, but even that assumes we can make an AI that can learn from experience as efficiently as a human, and we're not there yet.
1 reply →
> No, it does not. It assumes there will be progress in AI. It does not assume that progress will be in LLMs
I mean, for the specific case of the 2027 doomsday prediction, it really does have to be LLMs at this point, just given the timeframes. It is true that the 'rationalist' AI doomerism thing doesn't depend on LLMs, and in fact predates transformer-based models, but for the 2027 thing, it's gotta be LLMs.
> - What if there are several paths to different kinds of intelligence with their own local maxima, in which the AI can easily get stuck after optimizing itself into the wrong type of intelligence?
I think what's more plausible is that there is general intelligence, and humans have that, and it's general in the same sense that Turing machines are general, meaning that there is no "higher form" of intelligence that has strictly greater capability. Computation speed, memory capacity, etc. can obviously increase, but those are available to biological general intelligences just like they would be available to electronic general intelligences.
I agree that general intelligence is general. But increasing computation speed 1000x could still be something that is available to the machines and not to the humans, simply because electrons are faster than neurons. Also, how specifically would you 1000x increase human memory?
1 reply →
An interesting point you make there — one would assume that if recursive self-improvement were a thing, Nature would have already led humans into that "hall of mirrors".
I often like to point out that Earth was already consumed by Grey Goo, and today we are hive-minds in titanic mobile megastructure-swarms of trillions of the most complex nanobots in existence (that we know of), inheritors of tactics and capabilities from a zillion years of physical and algorithmic warfare.
As we imagine the ascension of AI/robots, it may seem like we're being humble about ourselves... But I think it's actually the reverse: It's a kind of hubris elevating our ability to create over the vast amount we've inherited.
2 replies →
There's a variant of this that argues that humans are already as intelligent as it's possible to be. Because if it's possible to be more intelligent, why aren't we? And a slightly more reasonable variant that argues that we're already as intelligent as it's useful to be.
9 replies →
Well, arguably that's exactly where we are, but machines can evolve faster.
And that's an entirely new angle that the cultists are ignoring... because superintelligence may just not be very valuable.
And we don't need superintelligence for smart machines to be a problem anyway. We don't even need AGI. IMO, there's no reason to focus on that.
2 replies →
> What if "increasing intelligence", which is a very vague goal, has diminishing returns, making recursive self-improvement incredibly slow?
This is sort of what I subscribe to as the main limiting factor, though I'd describe it differently. It's sort of like Amdahl's Law (and I imagine there's some sort of Named law that captures it, I just don't know the name): the magic AI wand may be very good at improving some part of AGI capability, but the more you improve that part, the more the other parts come to dominate. Metaphorically, even if the juice is worth the squeeze initially, pretty soon you'll only be left with a dried-out fruit clutched in your voraciously energy-consuming fist.
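Concretely, Amdahl's Law itself captures the shape of it; a quick sketch (my own, with an assumed 80% improvable fraction):

```python
def overall_speedup(p, s):
    """Amdahl's Law: p = fraction of the system the improvement touches,
    s = speedup factor achieved on that fraction."""
    return 1 / ((1 - p) + p / s)

# Suppose self-improvement applies to 80% of what makes the system capable.
for s in (2, 10, 100, 1_000_000):
    print(f"component speedup {s:>9}x -> overall {overall_speedup(0.8, s):.2f}x")

# No matter how large s gets, the whole system never exceeds 1/(1-p) = 5x.
```

The untouched 20% comes to dominate almost immediately, which is the dried-out-fruit effect.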
I'm actually skeptical that there's much juice in the first place; I'm sure today's AIs could generate lots of harebrained schemes for improvement very quickly, but exploring those possibilities is mind-numbingly expensive. Not to mention that the evaluation functions are unreliable, unknown, and non-monotonic.
Then again, even the current AIs have convinced a large number of humans to put a lot of effort into improving them, and I do believe that there are a lot of improvements that humans are capable of making to AI. So the human-AI system does appear to have some juice left. Where we'll be when that fruit is squeezed down to a damp husk, I have no idea.
The built-in assumptions are always interesting to me, especially as they relate to intelligence. I find many of them (though not all) are organized around a series of fundamental beliefs that are very rarely challenged within these communities. I should mention up front that I don't think everyone in these communities believes these things, of course, but there's often a default set of assumptions going into conversations in these spaces that holds these axioms. The beliefs more or less seem to be as follows:
1) They believe that there exists a singular factor to intelligence in humans which largely explains capability in every domain (a super g factor, effectively).
2) They believe that this factor is innate, highly biologically regulated, and a static factor about a person (someone who is high-IQ in their minds must have been a high-achieving child, must be very capable as an adult; these are the baseline assumptions). There is potentially belief that this can be shifted in certain directions, but broadly there is an assumption that you either have it or you don't; there is no sense of it as something that could be taught or developed without pharmaceutical intervention or some other method.
3) There is also broadly a belief that this factor is at least fairly accurately measured by modern psychometric IQ tests and educational achievement, and that this factor is a continuous measurement with no bounds on it (You can always be smarter in some way, there is no max smartness in this worldview).
These are things that certainly could be true, and perhaps I haven't read enough into the supporting evidence for them but broadly I don't see enough evidence to have them as core axioms the way many people in the community do.
More to your point, though: when you see the world through those axioms, you can see why an obsession would develop with the idea of a certain type of intelligence recursively improving itself. A person who has become convinced of their moral placement within a societal hierarchy based on their innate intellectual capability has to grapple with the fact that there could be artificial systems which score higher on the IQ tests than they do, and if those IQ tests are valid measurements of this super intelligence factor in their view, then it means that the artificial system has a higher "ranking" than them.
Additionally, in the mind of someone who has internalized these axioms, there is no vagueness about increasing intelligence! For them, intelligence is the animating factor behind all capability, it has a central place in their mind as who they are and the explanatory factor behind all outcomes. There is no real distinction between capability in one domain or another mentally in this model, there is just how powerful a given brain is. Having the singular factor of intelligence in this mental model means being able to solve more difficult problems, and lack of intelligence is the only barrier between those problems being solved vs unsolved. For example, there's a common belief among certain groups among the online tech world that all governmental issues would be solved if we just had enough "high-IQ people" in charge of things irrespective of their lack of domain expertise. I don't think this has been particularly well borne out by recent experiments, however. This also touches on what you mentioned in terms of an AI system potentially maximizing the "wrong types of intelligence", where there isn't a space in this worldview for a wrong type of intelligence.
I think you'll indeed find, if you were to seek out the relevant literature, that those claims are more or less true, or at least, are the currently best-supported interpretation available. So I don't think they're assumptions so much as simply current state of the science on the matter, and therefore widely accepted among those who for whatever reason have looked into it (or, more likely, inherited the information from someone they trust who has read up on it).
Interestingly, I think we're increasingly learning that although most aspects of human intelligence seem to correlate with each other (thus the "singular factor" interpretation), the grab-bag of skills this corresponds to are maybe a bit arbitrary when compared to AI. What evolution decided to optimise the hell out of in human intelligence is specific to us, and not at all the same set of skills as you get out of cranking up the number of parameters in an LLM.
Thus LLMs continuing to make atrocious mistakes of certain kinds, despite outshining humans at other tasks.
Nonetheless I do think it's correct to say that the rationalists think intelligence is a real measurable thing, and that although in humans it might be a set of skills that correlate and maybe in AIs it's a different set of skills that correlate (such that outperforming humans in IQ tests is impressive but not definitive), that therefore AI progress can be measured and it is meaningful to say "AI is smarter than humans" at some point. And that AI with better-than-human intelligence could solve a lot of problems, if of course it doesn't kill us all.
3 replies →
It's kinda weird how the level of discourse seems to be what you get when a few college students sit around smoking weed. Yet somehow this is taken as very serious and profound in the valley and VC throw money at it.
I've pondered recursive self-improvement. I'm fairly sure it will be a thing - we're already at a point where people could try telling Claude or some such to have a go, even if not yet at a point where it would work. But I imagine take-off would be very gradual. It would be constrained by available computing resources, and would probably only be comparable to current human researchers, and so would still take ages to get anywhere.
I honestly am not trying to be rude when I say this, but this is exactly the sort of speculation I find problematic and that I think most people in this thread are complaining about. Being able to tell Claude to have a go has no relation at all to whether it may ever succeed, and you don't actually address any of the legitimate concerns the comment you're replying to points out. There really isn't anything in this comment but vibes.
9 replies →
Yeah, to compare Yudkowsky to Hubbard: I've read accounts of people who read Dianetics or Science of Survival and thought "this is genius!", and I'm scratching my head, because it's like they never read Freud or Horney or Beck or Berne or Burns or Rogers or Kohut, really any clinical psychology at all, even anything in the better 70% of pop psychology. Like Hubbard, Yudkowsky is unreadable, rambling [1] and inarticulate -- how anybody falls for it boggles my mind [2]. But hey, people fell for Carlos Castaneda, who never used a word of the Yaqui language or mentioned any plant that grows in the desert in Mexico, but has Don Juan give lectures about Kant's Critique of Pure Reason [3] that Castaneda would have heard in school and you would have heard in school too, if you went to school or read a lot.
I can see how it appeals to people like Aella, who wash into San Francisco without exposure to education [4] or philosophy or computer science or any topics germane to the content of Sequences -- not that it means you are stupid but, like Dianetics, Sequences wouldn't be appealing if you were at all well read. How people at frickin' Oxford or Stanford fall for it is beyond me, however.
[1] some might even say a hypnotic communication pattern inspired by Milton Erickson
[2] you'd think people would dismiss Sequences because it's a frickin' Harry Potter fanfic, but I think it's like the 419 scam email riddled with typos, which are meant to drive the critical thinker away and, ironically in the case of Sequences, keep the person who wants to cosplay as a critical thinker.
[3] minus any direct mention of Kant
[4] thus many of the marginalized, neurodivergent, and transgender people who left Bumfuck, AK because they couldn't live at home, and went to San Francisco to escape persecution as opposed to seeking opportunity
I thought Sequences was the blog posts and the fanfic was kept separate, to nitpick.
> like Dianetics, Sequences wouldn't be appealing if you were at all well read.
That would require an education in the humanities, which is low status.
1 reply →
I'm surprised not to see much pushback on your point here, so I'll provide my own.
We have an existence proof for intelligence that can improve AI: humans can do this right now.
Do you think AI can't reach human-level intelligence? We have an existence proof of human-level intelligence: humans. If you think AI will reach human-level intelligence then recursive self-improvement naturally follows. How could it not?
Or do you think human-level intelligence is some kind of natural maximum? Why? That would be strange, no? Even if you think it's some natural maximum for LLMs specifically, why? And why do you think we wouldn't modify architectures as needed to continue to make progress? That's already happening; our LLMs are a long way from the pure text-prediction engines of four or five years ago.
There is already a degree of recursive improvement going on right now, but with humans still in the loop. AI researchers currently use AI in their jobs, and despite the recent study suggesting AI coding tools don't improve productivity in the circumstances they tested, I suspect AI researchers' productivity is indeed increased through use of these tools.
So we're already on the exponential recursive-improvement curve, it's just that it's not exclusively "self" improvement until humans are no longer a necessary part of the loop.
On your specific points:
> 1. What if increasing intelligence has diminishing returns, making recursive improvement slow?
Sure. But this is a point of active debate between "fast take-off" and "slow take-off" scenarios; it's certainly not settled among rationalists which is more plausible, and it's a straw man to suggest they all believe in a fast take-off scenario. But both fast and slow take-off due to recursive self-improvement are still recursive self-improvement, so if you only want to criticise the fast take-off view, you should speak more precisely.
I find both slow and fast take-off plausible, as the world has seen both periods of fast economic growth through technology, and slower economic growth. It really depends on the details, which brings us to:
> 2. LLMs already seem to have hit a wall of diminishing returns
This is IMHO false in any meaningful sense. Yes, we have to use more computing power to get improvements without doing any other work. But have you seen METR's metric [1] on AI progress, in terms of the (human) duration of tasks the models can complete? This is an exponential curve that has not yet bent, and if anything has accelerated slightly.
Do not confuse GPT-5 (or any other incrementally improved model) failing to live up to unreasonable hype for an actual slowing of progress. AI capabilities are continuing to increase - being on an exponential curve often feels unimpressive at any given moment, because the relative rate of progress isn't increasing. This is a fact about our psychology, if we look at actual metrics (that don't have a natural cap like evals that max out at 100%, these are not good for measuring progress in the long-run) we see steady exponential progress.
> 3. What if there are several paths to different kinds of intelligence with their own local maxima, in which the AI can easily get stuck after optimizing itself into the wrong type of intelligence?
This seems valid. But it seems to me that unless we see METR's curve bend soon, we should not count on this. LLMs have specific flaws, but I think if we are honest with ourselves and not over-weighting the specific silly mistakes they still make, they are on a path toward human-level intelligence in the coming years. I realise that claim will sound ridiculous to some, but I think this is in large part due to people instinctively internalising that everything LLMs can do is not that impressive (it's incredible how quickly expectations adapt), and therefore over-indexing on their remaining weaknesses, despite those weaknesses improving over time as well. If you showed GPT-5 to someone from 2015, they would be telling you this thing is near human intelligence or even more intelligent than the average human. I think we all agree that's not true, but I think that superficially people would think it was if their expectations weren't constantly adapting to the state of the art.
> 4. Once AI realizes it can edit itself to be more intelligent, it can also edit its own goals. Why wouldn't it wirehead itself?
It might - but do we think it would? I have no idea. Would you wirehead yourself if you could? I think many humans do something like this (drug use, short-form video addiction), and expect AI to have similar issues (and this is one reason it's dangerous) but most of us don't feel this is an adequate replacement for "actually" satisfying our goals, and don't feel inclined to modify our own goals to make it so, if we were able.
> Knowing Yudkowsky, I'm sure there's a long blog post somewhere where all of these are addressed with several million rambling words of theory
Uncalled for I think. There are valid arguments against you, and you're pre-emptively dismissing responses to you by vaguely criticising their longness. This comment is longer than yours, and I reject any implication that that weakens anything about it.
Your criticisms are three "what ifs" and (IMHO) a falsehood - I don't think you're doing much better than "millions of words of theory without evidence". To the extent that it's true that Yudkowsky and co theorised without evidence, I think they deserve cred, as this theorising predated the current AI ramp-up, at a time when most would have thought AI anything like what we have now was a distant pipe dream. To the extent that this theorising continues in the present, it's not without evidence - I point you again to METR's unbending exponential curve.
Anyway, so I contend your points comprise three "what ifs" and (IMHO) a falsehood. Unless you think "AI can't recursively self-improve itself" already has strong priors in its favour such that strong arguments are needed to shift that view (and I don't think that's the case at all), this is weak. You will need to argue why we should need to have strong evidence to overturn a default "AI can't recursively self-improve" view, when it seems that a) we are already seeing recursive improvement (just not purely "self"-improvement), and that it's very normal for technological advancement to have recursive gains - see e.g. Moore's law or technological contributions to GDP growth generally.
Far from a damning example of rationalists thinking sloppily, this particular point seems like one that shows sloppy thinking on the part of the critics.
It's at least debatable, which is all it has to be for calling it "the biggest nonsense axiom" to be a poor point.
[1] https://metr.org/blog/2025-03-19-measuring-ai-ability-to-com...
Yudkowsky seems to believe in fast take off, so much so that he suggested bombing data centers. To more directly address your point, I think it’s almost certain that increasing intelligence has diminishing returns and the recursive self improvement loop will be slow. The reason for this is that collecting data is absolutely necessary and many natural processes are both slow and chaotic, meaning that learning from observation and manipulation of them will take years at least. Also lots of resources.
Regarding LLM’s I think METR is a decent metric. However you have to consider the cost of achieving each additional hour or day of task horizon. I’m open to correction here, but I would bet that the cost curves are more exponential than the improvement curves. That would be fundamentally unsustainable and point to a limitation of LLM training/architecture for reasoning and world modeling.
Basically I think the focus on recursive self improvement is not really important in the real world. The actual question is how long and how expensive the learning process is. I think the answer is that it will be long and expensive, just like our current world. No doubt having many more intelligent agents will help speed up parts of the loop but there are physical constraints you can’t get past no matter how smart you are.
3 replies →
> If you think AI will reach human-level intelligence then recursive self-improvement naturally follows. How could it not?
Humans have a lot more going on than just an intelligence brain. The two big ones are: bodies, with which to richly interact with reality, and emotions/desire, which drive our choices. The one that I don't think gets enough attention in this discussion is the body. The body is critical to our ability to interact with the environment, and therefore learn about it. How does an AI do this without a body? We don't have any kind of machine that comes close to the level of control, feedback, and adaptability that a human body offers. That seems very far away. I don't think that an AI can just "improve itself" without being able to interact with the world in many ways and experiment. How does it find new ideas? How does it test its ideas? How does it test its abilities? It needs an extremely rich interface with the physical world, that external feedback is necessary for improvement. That requirement would put the prospect of a recursive self-improving AI much further into the future than many rationalists believe.
And of course, the "singularity" scenario does not only make "recursive self-improvement" the only assumption, it assumes exponential recursive self-improvement all the way to superintelligence. This is highly speculative. It's just as possible that the curve is more logarithmic, sinusoid, or linear. The reason to believe that fully exponential self-improvement is the likely scenario, based on curve of some metric now that hasn't existed for very long, does not seem solid enough to justify a strong belief. It is just as easy to imagine that intelligence gains get harder and harder as intelligence increases. We see many things that are exponential for a time, and then they aren't anymore, and basing big decisions on "this curve will be exponential all the way" because we're seeing exponential progress now, at the very early stages, does not seem sound.
Humans have human-level intelligence, but we are very far away from understanding our own brain such that we can modify it to increase our capacity for intelligence (to any degree significant enough to be comparable to recursive self-improvement). We have to improve the intelligence of humanity the hard way: spend time in the world, see what works, the smart humans make more smart humans (as do the dumb humans, which often slows the progress of the smart humans). The time spent in the world, observing and interacting with it, is crucial to this process. I don't doubt that machines could do this process faster than humans, but I don't think it's at all clear that they could do so, say, 10,000x faster. A design needs time in the world to see how it fares in order to gauge its success. You don't get to escape this until you have a perfect simulation of reality, which if it is possible at all is likely not possible until the AI is already superintelligent.
Presumably a superintelligent AI has a complete understanding of biology - how does it do that without spending time observing the results of biological experiments and iterating on them? Extrapolate that to the many other complex phenomena that exist in the physical world. This is one of the reasons that our understanding of computers has increased so much faster than our understanding of many physical sciences: to understand a complex system that we didn't create and don't have a perfect model of, we must do lots of physical experiments, and those experiments take time.
The crucial assumption that the AI singularity assumption relies on is that once intelligence hits a certain threshold, it can gaze at itself and self-improve to the top very quickly. I think this is fundamentally flawed, as we exist in a physical reality that underlies everything and defines what intelligence is. Interaction and experimentation with reality is necessary for the feedback loop of increasing intelligence, and I think this both severely limits how short that feedback loop can be, and makes the bar for an entity that can recursively self-improve itself much higher, as it needs a physical embodiment far more complex and autonomous than any robot we've managed to make.
This is also the weirdest thing, and I don't think they even know the assumptions they are making. It assumes that there is infinite knowledge to be had. It also ignores that in reality we have exceptionally strong indications that accuracy (truth, knowledge, whatever you want to call it) grows exponentially in complexity. These may be wrong assumptions, but we at least have evidence for them, and much more for the latter. So if objective truth exists, then that intelligence gap is very, very different. One way they could be right is for this to be an S-curve with us humans at the very bottom of it. That seems unlikely, though very possible. But they always treat this as linear or exponential, as if our understanding of the AI would be like an ant trying to understand us.
The other weird assumption I hear is about how it'll just kill us all. The vast majority of smart people I know are very peaceful. They aren't even seeking power or wealth. They're too busy thinking about things and trying to figure everything out. They're much happier in front of a chalkboard than sitting on a yacht. And humans ourselves are incredibly compassionate towards other creatures. Maybe we learned this because coalitions are an incredibly powerful thing, but the truth is that if I could talk to an ant, I'd choose that over laying traps. Really, that would be so much easier too! I'd even rather dig a small hole to get them started somewhere else than drive down to the store and do all that. A few shovels in the ground is less work, and I'd ask them not to come back and to tell the others.
Granted, none of this is absolutely certain. It'd be naive to assume that we know! But it seems like these cults are operating on the premise that they do know and that these outcomes are certain. It seems to just be preying on fear and uncertainty. Hell, even Altman does this, ignoring the risks and concerns of existing systems by shifting focus to "an even greater risk" that he himself is working towards (you can't simultaneously maximize speed and safety). Which, weirdly enough, might fulfill their own prophecies: the AI doesn't have to become sentient, but if it is trained on lots of writings about how AI turns evil and destroys everyone, then isn't a dumb AI that can't tell fact from fiction more likely to just do those things?
I think of it more like visualizing a fractal on a computer. The more detail you try to dig down into the more detail you find, and pretty quickly you run out of precision in your model and the whole thing falls apart. Every layer further down you go the resource requirements increase by an exponential amount. That's why we have so many LLMs that seem beautiful at first glance but go to crap when the details really matter.
So many things make no sense in this comment that I feel like there's a 20% chance this is a mid-quality GPT. And so much interpolation effort, but starting from hearsay instead of primary sources. Then the threads stop just before seeing the contradiction with the other threads. I imagine this is how we all reason most of the time, just based on vibes :(
17 replies →
This is why it's important to emphasize that rationality is not a good goal to have. Rationality is nothing more than applied logic, which takes axioms as given and deduces conclusions from there.
Reasoning is the appropriate target because it is a self-critical, self-correcting method that continually re-evaluates axioms and methods to express intentions.
You're describing the impressions I had of Mensa back in the '70s.
He probably is describing Mensa, and assuming that it also applies to the rationality community without having any specific knowledge of the latter.
(From my perspective, Hacker News is somewhere in the middle between Mensa and Less Wrong. Full of smart people, but most of them don't particularly care about evidence, if providing their own opinion confidently is an alternative.)
One of the only idioms that I don't mind living my life by is, "Follow the truth-seeker, but beware those who've found it".
Interesting. I can't say I've done much following though — not that I am aware of anyway. Maybe I just had no leaders growing up.
A good example of this is the number of huge assumptions needed for the argument for Roko's basilisk. I'm shocked that some people actually take it seriously.
I don't believe anyone has taken it seriously in the last half-decade; if you find counter-evidence for that belief, let me know.
The distinction between them and religion is that religion is free to say that those axioms are a matter of faith and treat them as such. Rationalists are not as free to do so.
Epistemological skepticism sure is a belief. A strong belief on your side?
I am profoundly sure, I am certain I exist and that a reality outside myself exists. Worse, I strongly believe knowing this external reality is possible, desirable and accurate.
How suspicious does that make me?
It means you haven't read Hume, or, in general, taken philosophy seriously. An academic philosopher might still come to the same conclusions as you (there is an academic philosopher for every possible position), but they'd never claim the certainty you do.
why so aggressive chief
I am certain that your position "All academic philosophers never claim complete certainty about their beliefs" is not even wrong or falsifiable.
Are you familiar with the Ship of Theseus as an argumentation fallacy? Innuendo Studios did a great video on it, and I think a lot of what you're talking about breaks down to this. Tl;dr: it's a fallacy of substitution, where small details of an argument get replaced by things that are (or feel like) logical equivalents until you end up saying something entirely different but are arguing as though you said the original thing. In the video the example is "senator doxxes a political opponent", but on closer look "senator" turns out to mean "a contractor working for the senator" and "doxxes a political opponent" turns out to mean "liked a tweet that had that opponent's name in it in a way that could draw attention to it".
Each change is arguably equivalent, and it seems logical that if x = y then you could put y anywhere you have x, but after all of the changes are applied the argument that emerges is definitely different from the one before the substitutions were made. Communities that pride themselves on being extra rational seem especially subject to this, because it has all the trappings of rationalism but enables squishy, feely arguments.
https://www.youtube.com/watch?v=Ui-ArJRqEvU
Meant to drop a link for the above, my bad
There are certain things I am sure of even though I derived them on my own.
But I constantly battle-tested them against other smart people's views, and only after I ran out of people to bring me new rational objections did I become sure.
Now I can battle test them against LLMs.
On a lesser level of confidence: I have also often found that the people who disagreed with what I thought had to be the case later came to regret it, because their strategies ended up in failure and they told me they regretted not taking my recommendation. But that is on an individual level. I have gotten pretty good at seeing systemic problems, architecting systemic solutions, and realizing what it would take to get them adopted at least to a critical mass. Usually, they fly in the face of what happens normally in society. People don't see how their strategies and lives are shaped by the technology and social norms around them.
Here, I will share three examples:
Public Health: https://www.laweekly.com/restoring-healthy-communities/
Economic and Governmental: https://magarshak.com/blog/?p=362
Wars & Destruction: https://magarshak.com/blog/?p=424
For that last one, I am often proven somewhat wrong by right-wing war hawks, because my left-leaning anti-war stance is about avoiding inflicting large scale misery on populations, but the war hawks go through with it anyway and wind up defeating their geopolitical enemies and gaining ground as the conflict fades into history.
"genetically engineers high fructose corn syrup into everything"
This phrase is nonsense, because HFCS is a chemical process applied to normal corn after the harvest. The corn may be a GMO but it certainly doesn't have to be.
Agreed, that was phrased wrong. The fruits across the board have been genetically engineered to be extremely sweet (fructose, not the syrup): https://weather.com/news/news/2018-10-03-fruit-so-sweet-zoo-...
While their nutritional quality has gone down tremendously, for vegetables too: https://pmc.ncbi.nlm.nih.gov/articles/PMC10969708/
2 replies →
It's very tempting to try to reason things through from first principles. I do it myself, a lot. It's one of the draws of libertarianism, which I've been drawn to for a long time.
But the world is way more complex than the models we used to derive those "first principles".
It's also very fun and satisfying. But it should be limited to an intellectual exercise at best, and more likely a silly game. Because there's no true first principle, you always have to make some assumption along the way.
Any theory of everything will often have a little perpetual motion machine at the nexus. These can be fascinating to the mind.
Pressing through uncertainty either requires a healthy appetite for risk or an engine of delusion. A person who struggles to get out of their comfort zone will seek enablement through such a device.
Appreciation of risk-reward will throttle trips into the unknown. A person using a crutch to justify everything will careen hyperbolically into more chaotic and erratic behaviors hoping to find that the device is still working, seeking the thrill of enablement again.
The extremism comes in once the user has learned, say, to say hello to a stranger: their comfort zone has expanded into an area where their experience with risk-reward is underdeveloped. They don't look at the external world to appreciate what might happen. They try to morph situations into some confirmation of the crutch and of the inferiority of confounding ideas.
"No, the world isn't right. They are just weak and the unspoken rules [in the user's mind] are meant to benefit them." This should always resonate because nobody will stand up for you like you have a responsibility to.
A study of uncertainty and the limitations of axioms, the inability of any sufficiently expressive formalism to be both complete and consistent, these are the ideas that are antidotes to such things. We do have to leave the rails from time to time, but where we arrive will be another set of rails and will look and behave like rails, so a bit of uncertainty is necessary, but it's not some magic hat that never runs out of rabbits.
Another psychology that will come into play for those who have left their comfort zone is the inability to revert. It is a harmful tendency to presume all humans are fixed quantities. Once a behavior exists, the person is said to be revealed, not changed. The proper response is to set boundaries and be ready to tie off the garbage bag and move on, while still allowing that someone may show remorse and a desire to revert or transform. Otherwise every relationship only gets worse. If instead you can never go back, extreme behavior is a ratchet. Every mistake becomes the person.
There should be an extremist cult of people who are certain only that uncertainty is the only certain thing
What makes you so certain there isn't? A group that has a deep understanding fnord of uncertainty would probably like to work behind the scenes to achieve their goals.
The Fnords do keep a lower profile.
One might even call them illuminati? :D
My favourite bumper sticker, "Militant Agnostic. I don't know, and neither do you."
I heard about this the other day! I think I need one.
More people should read Sextus Empiricus, as he's basically the O.G. Pyrrhonist skeptic and goes pretty hard on this very train of thought.
If I remember my Gellius, it was the Academic Skeptics who claimed that the only certainty was uncertainty; the Pyrrhonists, in opposition, denied that one could be certain about the certainty of uncertainty.
Cool. Any specific recs or places to start with him?
2 replies →
A Wonderful Phrase by Gandhi
You mean like this? https://www.readthesequences.com/Zero-And-One-Are-Not-Probab...
The Snatter Goblins?
https://archive.org/details/goblinsoflabyrin0000frou/page/10...
https://realworldrisk.com/
Socrates was fairly close to that.
My thought as well! I can't remember names at the moment, but there were some cults that spun off from Socrates. Unfortunately they also adopted his practice of never writing anything down, so we don't know a whole lot about them
"I have no strong feelings one way or the other." thunderous applause
There would be, except we're all very much on the fence about whether it is the right cult for us.
There already is, they're called "Politicians."
Like Robert Anton Wilson if he were way less chill, perhaps.
“Oh, that must be exhausting.”
All of science would make sense if it wasn't for that 1 pesky miracle.
It's crazy to read this, because by writing what you wrote you basically show that you don't understand what an axiom is.
You need to review the definition of the word.
> The smartest people I have ever known have been profoundly unsure of their beliefs and what they know.
The smartest people are unsure about their higher level beliefs, but I can assure you that they almost certainly don't re-evaluate "axioms" as you put it on a daily or weekly basis. Not that it matters, as we almost certainly can't verify who these people are based on an internet comment.
> I immediately become suspicious of anyone who is very certain of something, especially if they derived it on their own.
That's only your problem, not anyone else's. If you think people can't arrive at a tangible and useful approximation of truth, then you are simply delusional.
> If you think people can't arrive at a tangible and useful approximation of truth, then you are simply delusional
Logic is only a map, not the territory. It is a new toy, still bright and shining from the box in terms of human history. Before logic there were other ways of thinking, and new ones will come after. Yet, Voltaire's bastards are always certain they're right, despite being right far less often than they believe.
Can people arrive at tangible and useful conclusions? Certainly, but they can only ever find capital "T" Truth in a very limited sense. Logic, like many other models of the universe, is only useful until you change your frame of reference or the scale at which you think. Then those laws suddenly become only approximations, or even irrelevant.
There is no (T)ruth, but there is a useful approximation of truth for 99.9% things that I want to do in life.
YMMV.
> It's crazy to read this, because by writing what you wrote you basically show that you don't understand what an axiom is. You need to review the definition of the word.
Oh, do enlighten then.
> The smartest people are unsure about their higher level beliefs, but I can assure you that they almost certainly don't re-evaluate "axioms" as you put it on a daily or weekly basis. Not that it matters, as we almost certainly can't verify who these people are based on an internet comment.
I'm not sure you are responding to the right comment, or are severely misinterpreting what I said. Clearly a nerve was struck though, and I do apologize for any undue distress. I promise you'll recover from it.
> Oh, do enlighten then.
Absolutely. Just in case your keyboard wasn't working to arrive at this link via Google.
https://www.merriam-webster.com/dictionary/axiom
First definition, just in case it still isn't obvious.
> I'm not sure you are responding to the right comment, or are severely misinterpreting what I said. Clearly a nerve was struck though, and I do apologize for any undue distress.
Someone was wrong on the Internet! Just don't want other people getting the wrong idea. Good fun regardless.
3 replies →
I once saw a discussion arguing that people should not have kids, since it's by far the biggest increase in your lifetime carbon footprint (>10x the impact of going vegan, etc.), get driven all the way to advocating genocide as a way of minimizing carbon footprint.
> I once saw a discussion arguing that people should not have kids, since it's by far the biggest increase in your lifetime carbon footprint (>10x the impact of going vegan, etc.), get driven all the way to advocating genocide as a way of minimizing carbon footprint.
The opening scene of Utopia (UK) s2e6 goes over this:
> "Why did you have him then? Nothing uses carbon like a first-world human, yet you created one: why would you do that?"
* https://www.youtube.com/watch?v=rcx-nf3kH_M
Setting aside the reductio ad absurdum of genocide, this is an unfortunately common viewpoint. People really need to take into account the chances their child might wind up working on science or technology which reduces global CO2 emissions or even captures CO2. This reasoning can be applied to all sorts of naive "more people bad" arguments. I can't imagine where the world would be if Norman Borlaug's parents had decided to never have kids out of concern for global food insecurity.
It also entirely subjugates the economic realities that we (at least currently) live in to the future health of the planet. I care a great deal about the Earth and our environment, but the more I've learned about this stuff, the more I've realized that anyone advocating for focusing on one without considering the impact on the other is primarily following a religion.
3 replies →
> this is an unfortunately common viewpoint
Not everyone believes that the purpose of life is to make more life, or that having been born onto team human automatically qualifies team human as the best team. It's not necessarily unfortunate.
I am not a rationalist, but rationally that whole "the meaning of life is human fecundity" shtick is after school special tautological nonsense, and that seems to be the assumption buried in your statement. Try defining what you mean without causing yourself some sort of recursion headache.
> their child might wind up..
They might also grow up to be a normal human being, which is far more likely.
> if Norman Borlaug's parents had decided to never have kids
Again, this would only have mattered if you consider the well being of human beings to be the greatest possible good. Some people have other definitions, or are operating on much longer timescales.
Insane to call "more people bad" naive but then actually try and account for what would otherwise best be described as hope.
1 reply →
> People really need to take into account the chances their child might wind up working on science or technology which reduces global CO2 emissions or even captures CO2.
All else equal, it would be better to spread those chances across a longer period of time at a lower population with lower carbon use.
Another issue with these groups is that they often turn into sex cults.
A logical argument is only as good as its presuppositions. Laying siege to your own assumptions first, before reasoning from them, tends toward a more beneficial outcome.
Another issue with "thinkers" is that many are cowards; whether they realize it or not a lot of presuppositions are built on a "safe" framework, placing little to no responsibility on the thinker.
> The smartest people I have ever known have been profoundly unsure of their beliefs and what they know. I immediately become suspicious of anyone who is very certain of something, especially if they derived it on their own.
This is where I depart from you. If I say it's anti-intellectual I would only be partially correct, but it's worse than that imo. You might be coming across "smart people" who claim to know nothing "for sure", which in itself is a self-defeating argument. How can you claim that nothing is truly knowable as if you truly know that nothing is knowable? I'm taking these claims to their logical extremes btw, avoiding the granular argumentation surrounding the different shades and levels of doubt; I know that leaves vulnerabilities in my argument, but why argue with those who know that they can't know much of anything as if they know what they are talking about to begin with? They are so defeatist in their own thoughts, it's comical. You say, "profoundly unsure", which reads similarly to me as "can't really ever know" which is a sure truth claim, not a relative claim or a comparative as many would say, which is a sad attempt to side-step the absolute reality of their statement.
I know that I exist; regardless of how I got here, I know that I do. There is a ridiculous amount of rhetoric surrounding that claim that I will not argue for here; this is my presupposition. So with that I make an ontological claim, a truth claim, concerning my existence; this claim is one that I must be sure of to operate at any base level. I also believe I am me and not you, or any other. Therefore I believe in one absolute, that "I am me". As such I can claim that an absolute exists, and if absolutes exist, then within the right framework you must also be an absolute to me, and so on and so forth. What I do not see in nature is the existence, or even the notion, of the relative on its own, as at every relative comparison there is an absolute holding up the comparison. One simple example is heat. Hot is relative, yet it is also objective; some heat can burn you, other heat can burn you over a very long time, some heat will never burn. When something is "too hot", that is a comparative claim, stating that there is another "hot" which is just "hot" or not "hot enough"; the absolute still remains, which is heat. Relativistic thought is a game of comparisons and relations, not of absolute claims; the only absolute claim of the relativist is that there are no absolute claims. The reason I am talking about relativists is that they are the logical, or illogical, conclusion of the extremes of doubt/disbelief I previously mentioned.
If you know nothing you are not wise, you are lazy and ill-prepared. We know the earth is round, we know that gravity exists, we are aware of the atomic, we are aware of our existence, we are aware that the sun shines its light upon us; we are sure of many things that took many years of debate among smart people to arrive at these sure conclusions. There was a time when many of the things we now accept were "not known", but they were observed with enough time and effort by brilliant people. That's why we have scientists, teachers, philosophers and journalists. I encourage you, the next time you find a "smart" person who is unsure of their beliefs, to kindly encourage them to be less lazy and challenge their absolutes; if they deny that the absolute can be found, then you aren't dealing with a "smart" person, you are dealing with a useful idiot who spent too much time watching skeptics blather on about meaningless topics until their brains eventually fell out. In every relative claim there must be an absolute, or it fails to function in any logical framework. You can, with enough thought, good data, and enough time to let things steep, find the (or an) absolute and make a sure claim. You might be proven wrong later, but that should be an indicator that you should improve (or a warning that you are being taken advantage of by a sophist), and that the truth is out there, not a reason to sequester yourself away in the comfortable, unsure hell that many live in till they die.
The beauty of absolute truth is that you can believe absolutes without understanding the entirety of the absolute. I know gravity exists, but I don't fully know how it works. Yet I can be absolutely certain it acts upon me, even if I only understand a part of it. People should know what they know, study things until they do, and not make sure claims beyond what they do know until they have the prerequisite absolute claims to support the broader ones, which will only ever be as sure as the weakest of their presuppositions.
Apologies for grammar, length and how schizo my thought process appears; I don't think linearly and it takes a goofy amount of effort to try to collate my thoughts in a sensible manner.