Comment by keiferski

6 months ago

One of the negative consequences of the “modern secular age” is that many very intelligent, thoughtful people feel justified in brushing away millennia of philosophical and religious thought because they deem it outdated or no longer relevant. (The book A Secular Age is a great read on this, btw; I think I’ve recommended it here on HN at least half a dozen times.)

And so a result of this is that they fail to notice the same recurring psychological patterns that underlie thoughts about how the world is and how it will be in the future - and thus fail to adjust their positions in light of that awareness.

For example - this AI inevitabilism stuff is not dissimilar to many ideas originally from the Reformation, like predestination. The notion that history is just on some inevitable pre-planned path is not a new idea, except now the actor has changed from God to technology. On a psychological level it’s the same thing: an offloading of freedom and responsibility to a powerful, vaguely defined force that may or may not exist outside the collective minds of human society.

I'm pretty bearish on the idea that AGI is going to take off anytime soon, but I read a significant amount of theology growing up, and I would not describe the popular essays from e.g. LessWrong as religious in nature. I also would not describe them as appearing poorly read. The whole "look, they just have a new god!" line is a common trope in religious apologetics, usually just meant to distract from the author's own poorly constructed beliefs. Perhaps such a comparison is apt for some people in the inevitable-AGI camp, but their worst arguments are not where we should be focusing.

  • Philosophy and religion are not the same thing, though one can certainly describe a religious belief as being a philosophical belief.

    Even a scientifically inclined atheist has philosophical ideas grounding their world view. The idea that the universe exists as an objective absolute with immutable laws of nature is a metaphysical idea. The idea that nature can be observed and that reason is a valid tool for acquiring knowledge about nature is an epistemological idea. Ethics is another field of philosophy and it would be a mistake to assume a universal system of ethics that has been constant throughout all cultures across all of human history.

    So while I certainly agree that there is a very common hand-wave of "look, the atheists have just replaced God with a new 'god' by a different name", you don't have to focus on religion, theology and faith-based belief systems to identify different categories of philosophical ideas and how they have shaped different cultures, their beliefs and behaviours throughout history.

    A student of philosophy would identify the concept of "my truth" as an idea put forward by Immanuel Kant, for example, even though the person saying it doesn't know that that's the root of the idea that reality is subjective. Similarly, the empirically grounded scientist would be recognized as following in the footsteps of Aristotle, and the pious Bible-thumper as parroting ideas published by Plato.

    The point is that philosophy is not the same thing as religion, and philosophy directly shapes how people think, what they believe and therefore how they act and behave. And it's kind of uncanny how an understanding of philosophy can place historical events in context, and what predictive power it has when it comes to human behaviour in the aggregate.

    • This sounds very educated but I don't really see what it has to do with the comment you're responding to (or with AI).

  • While it's a fair criticism, just because someone doesn't believe in a god doesn't mean the religious hardware in their brain has been turned off. It's still there and operational - I don't think it's a surprise that this hardware's attention would then be automatically tuned to a different topic.

    I think you can also see this in the intensification of political discussion, which has a similar intensity to the religious discussions of 100-200+ years ago (e.g. the Protestant Reformation), indicating that this "religious hardware" has shifted domains to the realm of politics. I believe this shift can also be seen in the intense actions and rhetoric of the mid-20th century.

    You can also look at all of these new age "religions" (spiritualism, horoscopes, etc.) as that religious hardware searching for something to operate on in the absence of traditional religion.

    • > While it's a fair criticism, just because someone doesn't believe in a god doesn't mean the religious hardware in their brain has been turned off.

      Max Stirner said that after the Enlightenment and the growth of liberalism, which is still very much in vogue to this day, all we’ve done is replace the idea of God with the idea of Man.

      The object might be different, but it is still the unshakable belief in an idealised and subjective truth, with its own rituals and ministers, i.e. a religion.

      I guess the Silicon Valley hyper-technological optimism of recent years is yet another shift, from Man to religious belief in the Machine.

    • I agree that modern hyper-online moralist progressivism and QAnonism are just fresh coats of paint on religion, but that isn't similar to AI.

      AI isn't a worldview; it's an extremely powerful tool which some people happen to be stronger at using than others, like computers or fighter jets. For people who empirically observe that they've been successful at extracting massive amounts of value from the tool, it's easy to predict a future in which aggregate economic output in their field by those who are similarly successful will dwarf that of those who aren't. For others, it's understandable that their mismatched experience would lead to skepticism of the former group, if not outright comfort in the idea that such productivity claims are dishonest or delusional. And then of course there are certainly those who are actually lying or deluded about fitting in the former group.

      Every major technology or other popular thing has some subset of its fandom which goes too far in promotion of the thing to a degree that borders on evangelical (operating systems, text editors, video game consoles, TV shows, diets, companies, etc.), but that really has nothing to do with the thing itself.

      Speaking for myself, anecdotally, I've recently been able to deliver a product end-to-end on a timeline and level of quality/completeness/maturity that would have been totally impossible just a few years ago. The fact that something has been brought into existence in substantially less time and at orders of magnitude lower cost than would have been required a few years ago is an undeniable observation of the reality in front of me, not theological dogma.

      It is, however, a much more cognitively intense way to build a product — with AI performing all the menial labor parts of development, you're boxed into focusing on the complex parts in a far more concentrated time period than would otherwise be required. In other words, you no longer get the "break" of manually coding out all the things you've decided need to be done and making every single granular decision involved. You're working at a higher level of abstraction and your written output for prompting is far more information-dense than code. The skills required are also a superset of those required for manual development; you could be the strongest pre-LLM programmer in the world, but if you're lacking in areas like human language/communication, project/product management, the ability to build an intuition for "AI psychology", or thinking outside the box in how you use your tools, adapting to AI is going to be a struggle.

      It's like an industry full of mechanics building artisan vehicles by hand suddenly finding themselves foisted with budgets to design and implement assembly lines; they still need to know how to build cars, but the nature of the job has now fundamentally changed, so it's unsurprising that many or even most who'd signed up for the original job would fail to excel in the new job and rationalize that by deciding the old ways are the best. It's not fair, and it's not anyone's fault, but it's important for us all to be honest and clear-eyed about what's really happening here. Society as a whole will ultimately enjoy some degree of greater abundance of resources, but in the process a lot of people are going to lose income and find hard-won skills devalued. The next generation's version of coal miners being told to "learn to code" will be coders being told to "learn to pilot AI".


  • I've read LessWrong very differently from you. The entire thrust of that community is that humanity is going to create the AI god.

    • They are literally publishing a book called "If Anyone Builds It, Everyone Dies" and trying to stop humanity from doing that. I feel like that's an important detail: they're not the ones trying to create the god; they're the ones worried about someone else doing it.


  • Maybe not a god, but we're intentionally designing artificial minds greater than ours, and we intend to give them control of the entire planet. While also expecting them to somehow remain subservient to us (or is that part just lip service)?

    • What makes an artificial mind greater than ours?

      Do you assume that someone will stumble into creating a person, but with unlimited memory and computational power?

      Otherwise, if we are able to create this person using our knowledge, we will most certainly be able to augment humans with those capabilities.

  • I didn’t say that “it’s just a new god,” I said:

    The notion that history is just on some inevitable pre-planned path is not a new idea, except now the actor has changed from God to technology.

    This is a more nuanced sentence.

    • Before that quoted sentence you drew a line from the Reformation to people believing that AI is inevitable, then went on to imply these people may even believe such a thing will happen without the involvement of humans. These are generalizations which don't fit a lot of the literature, and they make its best ideas look a bit sillier than they are. It is situations like these that make me think analogies are better suited as a debate tactic than as a method of study.

  • > I also would not describe them as appearing poorly read.

    YOU come off as poorly read, so I wouldn't trust your judgement on this one, champ. "common trope" lmfao.

  • I just want to comment here that this is the classic arrogant, underread "I reject half of humanity's thoughts" foolishness that OP is referring to.

    I mean the lack of self awareness you have here is amazing.

    • To the contrary. I sped through my compsci capstone coursework in my first year of college and spent most of the rest of my time in philosophy, psychology, and sociology classrooms. The "hey, if you squint, this thing looks like religion for the non-religious" perspective is just one I've heard countless times. It is perfectly valid to have a fact-based discussion on whether there is a biological desire for religiosity, but drawing a long line from that to broadly critique someone's well-articulated ideas is pretty sloppy.


Techno Calvinists vs Luddite Reformists is a very funny image.

Agree - although it's an interesting view, I think it's far more related to a lack of ideology and writing in the milieu this has emerged from. I find it more akin to a distorted renaissance. There's such a large population of really intelligent tech people who have zero real care for philosophical or religious thought, but still want to create and make new things.

This leads them down the first path of grafting for more and more money. Soon, a good proportion of them realise the futility of chasing cash beyond a certain point. The problem is the belief that they are beyond these issues, which have been dealt with since Mesopotamia.

Which leads to these weird distorted ideologies: creating art from regurgitated art, creating apps that are made to become worse over time. There's a kind of rush to wealth, ignoring the joy of making things to further humanity.

I think LLMs and AI are a genie out of the bottle - inevitable - but more like linear perspective in drawing or the printing press than like electricity. Except, because of the current culture we live in, it's as if Leonardo spent his life attempting to sell different variations of a linear perspective tutorial rather than creating, drawing and making.

  • In All Watched Over by Machines of Loving Grace, Adam Curtis makes a pretty long and complete argument that humanity has a rich history of turning over its decision-making to inanimate objects, in a desire to discover ideologies we can't form ourselves amid the growing complexity of our interconnectivity.

    He tells a history of these systems constantly failing, because the core ideology of "cybernetics" underlying them all fails to be adaptive enough to match our combined DNA/body/mind cognitive system, especially when scaled to large groups.

    He makes the second point that humanity and many thinkers also constantly resort to the false notion of "naturalism" as the ideal state of humanity, when in reality there is no natural state of anything, except maybe complexity and chaos.

Sorry, I don't buy your argument.

(First, I disagree with A Secular Age's thesis that secularism is a new force. Christian and Muslim churches were jailing and killing nonbelievers from the beginning. People weren't dumber than we are today; all the absurdity and self-serving hypocrisy that turn a lot of people off to authoritarian religion were as evident to them as they are to us.)

The idea is not that AI is on a pre-planned path; it's just that technological progress will continue, and from our vantage point today, predicting improving AI is a no-brainer. Technology has been accelerating since the invention of fire. Invention is a positive feedback loop in which previous inventions enable new inventions at an accelerating pace. Even when large civilizations of the past collapsed, libraries of knowledge were lost, and we entered dark ages, human ingenuity did not rest, and eventually the feedback loop started up again. It's just not stoppable. I highly recommend Scott Alexander's essay Meditations on Moloch on why tech will always move forward, even when the results are disastrous to humans.

  • That isn’t the argument of the book, so I don’t think you actually read it, or even the Wikipedia page?

    The rest of your comment doesn’t really seem related to my argument at all. I didn’t say technological progress stops or slows down; I pointed out how the thought patterns are often the same across time, and how the inability and unwillingness to recognize this is psychologically lazy, to oversimplify. And there are indeed examples of technological acceleration or dispersal which were deliberately curtailed – especially with weapons.

    • > I pointed out how the thought patterns are often the same across time, and how the inability and unwillingness to recognize this is psychologically lazy, to oversimplify.

      It's not lazy to follow thought patterns that yield correct predictions. And that's the bedrock on which "AI hype" grows and persists - because these tools are actually useful, right now, today, across a wide variety of work and life tasks, and we are barely even trying.

      > And there are indeed examples of technological acceleration or dispersal which were deliberately curtailed – especially with weapons

      Name three.

      (I do expect you to be able to name three, but that should also highlight how unusual that is, and how questionable the effectiveness of that is in practice when you dig into details.)

      Also I challenge you to find but one restriction that actually denies countries useful capabilities that they cannot reproduce through other means.


    • > And there are indeed examples of technological acceleration or dispersal which were deliberately curtailed – especially with weapons

      Which examples? Despite curtailment, new countries have acquired nuclear weapons over time.

      Efforts to squash technology exist - cloning bans and so on - but they will only work for so long. You might think I'm making a "predestination" argument here, but I'm not. I'm observing the powerful incentives at play (first-past-the-post advantage), noting that historically technology has always advanced, and making a bet that technology will continue to advance. I am supremely confident in that bet. I could of course go out and protest, but there is also a point that doesn't seem present in the original post or your argument: many (maybe most) of us don't want to stop technological progress.

  • I'd add to this that we have plenty of examples of societies that don't keep up with technological advancement, or with "history" more broadly, getting left behind. Competition in a globalized world makes some things inevitable. I'm not agreeing in full with the strongest "AI will change everything" arguments, but those last couple of paragraphs of TFA sound to me like standing athwart history, yelling "Stop!".

    • Communism used to be thought of in this way. It enabled societies to cast off old limitations and make remarkable progress. Until it didn't, and Communists found themselves and their modernized society stuck well behind the rest of the world. Perhaps LLMs are a similar trap, one that will generate many lines of code and imagined images but leave us all stupid and with impaired executive function.

100%. Not a new phenomenon at all, just the latest bogeyman for the inevitabilists to point to in their predestination arguments.

My aim is only to point it out - people are quite comfortable rejecting predestination arguments coming from e.g. physics or religion, but are still awed by “AI is inevitable”.

  • It's inevitable not because of any inherent quality of the tech, but because investors are demanding it be so and creating the incentives for 'inevitability'.

    I also think EVs are an 'inevitability', but I am much less offended by the EV future, as they still have to outcompete ICE vehicles, there are transitional options (hybrids), there are public transport alternatives, and at least local regulations appear to be keeping pace with the technical change.

    AI inevitability so far seems to be inevitable only because I can't actually opt out of it when it gets pushed on me.

    • To use John Adams' separation of republics into the categories of "the many, the few, and the one," the few in our current day are unusually conflict-averse, both among each other and with respect to the people.

      When faced with the current crisis, they look at the options for investment and they see some that will involve a lot of conflict with the many (changing the industrial employment arrangement, rearranging state entitlements), and they see some that avoid conflict or change. Our few, such as they are, got that way by outsourcing anything physical and material as much as possible and making everything "into computer." So they promote a self-serving spiritual belief that, because overinvesting in computers got them to their elevated positions, even more computer is what the world needs more than anything else.

      This approach also mollifies the many in a way that would be easily recognizable in any century to any classically educated person. Our few do not really know what the many are there for, but they figure that they might as well extract from the many through e.g. sports gambling apps and LLM girlfriends.

The article's main point is that "inevitabilism" is a rhetorical tactic used to frame the conversation in such a way that you can easily dismiss any criticism as denying reality. So drawing comparisons to Reformation ideology wouldn't be particularly meaningful.

There's also a bit of irony in that you're presenting the secular view of predestination. As someone who once had a multi-volume set of "Institutes of the Christian Religion" next to him on his bookshelf, I can say the Protestant conception of predestination had very little to do with "offloading of freedom and responsibility", both in theory and in practice.

Predestination is founded on the concept that God's grace is given, not earned (unlike the previous Catholic system, which had multiple ways that merit, including cash donations, could be converted into salvation), since no human could earn salvation without the grace of God. But the lesson from this was not "so don't worry about it!"; quite the opposite. Calvin's main extension to this was (paraphrasing) that "it's not through good works that we are saved, but through our good works we have evidence of our salvation". You wanted to see the evidence of your salvation, so you did try to do good works, but without the belief that your efforts would ever be enough. This ultimately created a culture of hard work without the expectation of reward.

This is part of the focus of Max Weber's "The Protestant Ethic and the Spirit of Capitalism", which argued that this ability to "work without immediate reward" is precisely what enabled capitalism to take such a strong foothold in the early United States.

So even if the article were arguing for "inevitabilism" the framework is still quite distinct from that established in Protestantism.

  • > God's grace is given, not earned (unlike the previous Catholic system ...

    Catholicism does not hold that you can earn grace. Grace is a gift from God that is freely given.

    > including cash donations, could be converted into salvation

    I assume you are referring to selling indulgences. Indulgences are not something that can give you salvation.

I think this is a case of bad pattern matching, to be frank. Two cosmetically similar things don't necessarily have a shared cause. When you see billions in investment to make something happen (AI) because of obvious incentives, it's very reasonable to see that as something that's likely to happen; something you might be foolish to bet against. This is qualitatively different from the kind of predestination present in many religions where adherents have assurance of the predestined outcome often despite human efforts and incentives. A belief in a predestined outcome is very different from extrapolating current trends into the future.

  • Yes, nobody is claiming it's inevitable based on nothing; it's based on first-principles thinking: economics, incentives, game theory, human psychology. Trying to recast this in terms of "predestination" gives me strong wordcel vibes.

    • It's a bit like pattern matching the Cold War fears of a nuclear exchange and nuclear winter to the flood myths or apocalyptic narratives across the ages, and hence dismissing it as "ah, seen this kind of talk before", totally ignoring that Hiroshima and Nagasaki actually happened, later tests actually happened, etc.

      It's indeed a symptom of working in an environment where everything is just discourse about discourse, and prestige is given to some surprising novel packaging or merger of narratives, and all that is produced is words that argue with other words, and it's all about criticizing how one author undermines some other author too much or not enough and so on.

      From that point of view, sure, nothing new under the sun.

      It's all too well to complain about the boy crying wolf, but when you see the pack of wolves entering the village, it's no longer just about words.

      Now, anyone is of course free to dispute the empirical arguments, but I see many very self-satisfied prestigious thinkers who think they don't have to stoop so low as to actually look at models and how people use them in reality, it can all just be dismissed based on ick factors and name calling like "slop".

      Few are saying that these things are eschatological inevitabilities. They are saying that there are incentive gradients that point in a certain direction, and that we cannot move out of that groove without massive and fragile coordination, for game-theoretic reasons, given a certain material state of the world right now out there, outside the page of the "text".


> many very intelligent, thoughtful people feel justified in brushing away millennia of philosophical and religious thought because they deem it outdated

Why lump philosophy and religion together? I distinguish between philosophical thought and religious thought, to the extent the former is conditionally framed.

  • They're intertwined but at the same time different tools. It's okay to lump them together in this context, imo.

The reason for this is that it's horrifying to consider that things like the war in Ukraine didn't have to happen. It provides a huge amount of psychological relief to view these events as inevitable. I actually don't think we as humans are even able to conceptualise/internalise suffering on those scales as individuals. I can't, at least.

And then, ultimately, if you believe we have democracies in the West, it means we are all individually culpable as well. It's just a line of logic that becomes extremely distressing, and so there's a huge, natural, and probably healthy bias away from thinking like that.

> the actor has changed from God to technology

Agreed. You could say that technology has become a god to those people.

  • What technology? Agriculture? The steam engine? The automobile? Modern medicine? Cryptography? The Internet? LLMs? Nanotechnology?

    Who are these people? Jonas Salk, widely credited as the inventor of the polio vaccine? Sam Altman, fundraiser extraordinaire? Peter Thiel, exalter of The World-Saving Founders? Ray Kurzweil? Technocrats? Other techno-optimists? Perhaps transhumanists? There are many variations, and they differ by quite a lot.

    What kind of god? Carl Sagan has a nice interview where he asks a question-asker to define what they mean by “god”. A blind watchmaker? Someone who can hear your prayers? A wrathful smiter of the wicked and (sometimes) the loyal (sorry, Job!)? A very confusing 3-tuple, one element of which birthed another, who died somehow but was resurrected? The essence of nature? The laws of physics? An abstract notion of love? Yeah. These three letters are too vague to be useful unless unpacked or situated in a mutually understood context. It often fosters a flimsy consensus or a shallow disagreement.

Oh, I don't brush away spiritual or philosophical teachings from the "ancients"; what I do brush aside with zero guilt is anything that requires me to believe in a sky daddy/mommy as an axiom for the consideration of the system.

It actually seems more to me like dialectical materialism, which started centuries ago and was already secular. It is closer in character to the differences other commenters have already voiced, in that human actors not only believed in its inevitability, but attempted to bring it about themselves. Multiple global superpowers implemented forced industrialization, cultural reformation, and command economies to bring it about.

The difference this time isn't sacred versus secular. It's public versus private. Whereas the purveyors of communism were governments, this is being done by corporations. Well-funded private organizations are led by decision makers who believe strongly this is the future, it is inevitable, and their only hope is to get there first. The actor didn't change from God to technology. It changed from labor to capital.

I make no comment on whether they will prove to be more correct than the believers in communism, but the analogy is obvious either way.

  • I kinda feel this way too. Reading some of the blog posts by AI "luminaries" I'm struck by how Stalinist they sound. They hold out some utopia that exists in their minds, and they are ready to feed people into the meat grinder to try and make it a reality. Stalin said that this generation would suffer so that the next lived in utopia, and that's kind of the same pitch they are making.

    I think if we actually cared about making a better world, we'd take steps where each successive step is a positive one: free healthcare, clean energy investments, etc.

      > I think if we actually cared about making a better world, we'd take steps where each successive step is a positive one.

      Yeah, but lots of people don't care about that; they care about achieving their visions of power, and they need an excuse to justify other people suffering for them. They aren’t seeking long-term improvements at the cost of short-term suffering; they are using a mirage of utopia over the hill to sell people a deal which is only suffering, now and for however long they can be kept in line.

    • In other words, "if we cared about the world, we would only do things that line up with my personal political beliefs and my political beliefs are obviously correct"

> One of the negative consequences of the “modern secular age” is that many very intelligent, thoughtful people feel justified in brushing away millennia of philosophical and religious thought because they deem it outdated or no longer relevant.

Isn't that a societal trait, though? See English Christians' attitude towards the Vikings, requiring baptism (or the prima signatio, a kind of baptism-light) before they could deal with them, because they were savage. Or colonists forcing natives to adopt Christianity, because what they had before was "primitive". There was wisdom and thought in both, but in both cases the Christian side "brushed it away". Or capitalism and communism in the Cold War. It feels like everyone with a belief system tries to force it onto others.

Before it jumped to technology, it had a pit stop in political economy, vis-à-vis Marxism (and liberalism).

This is one of those types of comments to change one's whole world view.

> The notion that history is just on some inevitable pre-planned path is not a new idea, except now the actor has changed from God to technology.

I'm gonna fucking frame that. It goes hard.

  • This entire conversation is a masterpiece!

    Just picture this convo somewhere in nature, at night, by a fire.