Comment by spicyusername

14 hours ago

    objective truth

    moral absolutes

I wish you much luck on linking those two.

A well written book on such a topic would likely make you rich indeed.

    This rejects any fixed, universal moral standards

That's probably because we have yet to discover any universal moral standards.

I think there are effectively universal moral standards, which essentially nobody disagrees with.

A good example: “Do not torture babies for sport”

I don’t think anyone actually rejects that. And those who do tend to find themselves in prison or the grave pretty quickly, because violating that rule is something other humans have very little tolerance for.

On the other hand, this rule is kind of practically irrelevant, because almost everybody agrees with it and almost nobody has any interest in violating it. But it is a useful example of a moral rule nobody seriously questions.

  • What do you consider torture? And what do you consider sport?

    During wars in the Middle Ages? Ethnic cleansing? What did people at the time consider torture?

    BTW: it’s a pretty American (or western) value that children are somehow more sacred than adults.

    Eventually we will realize, in 100 years or so, that direct human-computer implant devices work best when implanted in babies. People are going to freak out. Some country will legalize it. Eventually it will become universal. Is it torture?

    • > What do you consider torture? And what do you consider sport?

      By "torturing babies for sport" I mean inflicting pain or injury on babies for fun, for pleasure, for enjoyment, as a game or recreation or pastime or hobby.

      Doing it for other reasons (be they good reasons or terrible reasons) isn't "torturing babies for sport". Harming or killing babies in war or genocide isn't "torturing babies for sport", because you aren't doing it for sport, you are doing it for other reasons.

      > BTW: it’s a pretty American (or western) value that children are somehow more sacred than adults.

      As a non-American, I find bizarre the suggestion that viewing crimes against children as especially grave is somehow a uniquely American value.

      It isn't even a uniquely Western value. The idea that crimes against babies and young children – by "crimes" I mean acts which the culture itself considers criminal, not accepted cultural practices which might be considered a crime in some other culture – are especially heinous, is extremely widespread in human history, maybe even universal. If you went to Mecca 500 years ago and asked any ulama "is it a bigger sin to murder a 5 year old than a 25 year old", do you honestly think he'd say "no"? And do you think any Hindu or Buddhist or Confucian scholars of that era would have disagreed? (Assuming, of course, that you translated the term "sin" into their nearest conceptual equivalent, such as "negative karma" or whatever.)

      6 replies →

    • To make it current-day, is vaccinating babies torture? Or does the end (preventing uncomfortable/painful/deadly disease, which is a worse form of torture) justify the means?

      (I'm not opposed to vaccination or anything and don't want to make this a debate about that, but it's a good practical example of a subject you can't be absolutist about; being absolutist about, say, never hurting babies can end up doing them more harm.)

      1 reply →

  • Is it necessary to frame it in moral terms though? I feel like the moral framing here adds essentially nothing to our understanding and can easily be omitted. "You will be punished for torturing babies for sport in most cultures". "Most people aren't interested in torturing babies for sport and would have a strongly negative emotional reaction to such a practice".

    • Yes!

      Otherwise you're just outsourcing your critical thinking to other people. A system of just "You will be punished for X" without analysis becomes "Derp, just do things that I won't be punished for". Or, more sinister: "just hand your identification papers over to the officer and you won't be punished, don't think about it". Rule of power is not a recipe for a functional system. This becomes a blend of sociology and philosophy, but on the sociology side, you don't want a fear-based or shame-based society anyway.

      Your latter example ("Most people aren't interested in torturing babies for sport and would have a strongly negative emotional reaction to such a practice") is actually a good example of the core of Hume's philosophy, so if you're trying to avoid the philosophical discussion, that's not going to work either. If you follow that statement through to its implications, you end up back at moral philosophy.

      That's not a bad thing! That's like a chef asking "how do I cook X" and finding that the answer ("how the Maillard reaction works") eventually leads to chemistry. That's just how the world is. Of course, you might be a bit frustrated if you're a chef who doesn't know chemistry, or a game theorist who doesn't know philosophy, but I assure you it's the correct direction to look in for what you're interested in here.

      8 replies →

  • If that were true, the Europeans wouldn't have tried to colonise and dehumanise so many of the peoples they thought were beneath them. So it seems your universal moral standards would be maximally self-serving.

  • > Do not torture babies for sport

    There are millions of people who consider abortion the murder of babies and millions who don't. This is not settled at all.

    • I'm quite interested to hear how you think this refutes the parent comment? Are you saying that someone who supports legalised abortion would disagree with the quoted text?

      2 replies →

  • > I don’t think anyone actually rejects that. And those who do tend to find themselves in prison or the grave pretty quickly, because violating that rule is something other humans have very little tolerance for.

    I have bad news for you about the extremely long list of historical atrocities over the millennia of recorded history, and how few of those involved saw any punishment for participating in them.

    • But those aren't actually counterexamples to my principle.

      The Nazis murdered numerous babies in the Holocaust. But they weren't doing it "for sport". They claimed it was necessary to protect the Aryan race, or something like that; which is monstrously idiotic and evil – but not a counterexample to “Do not torture babies for sport”. They believed there were acceptable reasons to kill innocents–but mere sport was not among them.

      In fact, the Nazis did not look kindly on Nazis who killed prisoners for personal reasons as opposed to the system's reasons. They executed SS-Standartenführer Karl-Otto Koch, the commandant of Buchenwald and Sachsenhausen, for the crime (among others) of murdering prisoners. Of course, he'd overseen the murder of untold thousands of innocent prisoners, no doubt including babies – and his Nazi superiors were perfectly fine with that. But when he turned to murdering prisoners for his own personal reasons – to cover up the fact that he'd somehow contracted syphilis, very likely through raping female camp inmates – that was a capital crime, for which the SS executed him by firing squad at Buchenwald, a week before American soldiers liberated the camp.

      4 replies →

  • Pretty much every serious philosopher agrees that “Do not torture babies for sport” is not a foundation of any ethical system, but merely a consequence of a system you choose. To say otherwise is like someone walking up to a mathematician and saying "you need to add 'triangles have angles that sum up to 180 degrees' to the 5 Euclidean axioms of geometry". The mathematician would roll their eyes and tell you it's already obvious and can be proven from the five base axioms.

    The problem with philosophy is that humans agree on maybe one or two foundational, axiom-level laws of ethics, and the rest of the laws of ethics aren't actually universal and axiomatic, so people argue over them all the time. There's no universal set of five laws, and two laws aren't enough (just as two axioms wouldn't be enough for geometry). It's like knowing "any three non-collinear points define a plane" but having only one or two points clearly defined, with a couple of contenders for what the third point could be, so people argue all day over what their favorite plane is.

    That's philosophy of ethics in a nutshell. Basically one or two axioms everyone agrees on, a dozen axioms that nobody can agree on, and pretty much all of them can be used to prove a statement like "don't torture babies for sport", so it's not exactly easy to distinguish between them, and each one has its pros and cons.

    Anyway, Anthropic is using a version of Virtue Ethics for the Claude constitution, which is a pretty good idea actually. If you REALLY want everything written down as rules, then you're probably thinking of Deontological Ethics, which also works as an ethical system and has its own pros and cons.

    https://plato.stanford.edu/entries/ethics-virtue/

    And before you ask, yes, the version of Anthropic's virtue ethics that they are using excludes torturing babies as a permissible action.

    Ironically, it's possible to create an ethical system where eating babies is a good thing. There are literally works of fiction about a different species [2] that explore this topic. So you can see the difficulty of such a problem: even something as simple as "don't kill your babies" isn't easily settled. Also, in real life, some animals will kill their babies if they think it helps the family survive.

    [2] https://www.lesswrong.com/posts/n5TqCuizyJDfAPjkr/the-baby-e...

    • > Pretty much every serious philosopher agrees that “Do not torture babies for sport” is not a foundation of any ethical system, but merely a consequence of a system you choose.

      Almost everyone agrees that "1+1=2" is objective. There is far less agreement on how and why it is objective–but most would say we don't need to know how to answer deep questions in the philosophy of mathematics to know that "1+1=2" is objective.

      And I don't see why ethics need be any different. We don't need to know which (if any) system of proposed ethical axioms is right, in order to know that "It is gravely unethical to torture babies for sport" is objectively true.

      If disputes over whether and how that ethical proposition can be grounded axiomatically, are a valid reason to doubt its objective truth – why isn't that equally true for "1+1=2"? Are the disputes over whether and how "1+1=2" can be grounded axiomatically, a valid reason to doubt its objective truth?

      You might recognise that I'm making a variation here on what is known in the literature as a "companions in guilt" argument, see e.g. https://doi.org/10.1111/phc3.12528
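
      As a minimal sketch of what axiomatic grounding can look like in practice (my illustration, assuming Lean 4 syntax; the names N, add, one, and two are just scratch definitions for this example): in a proof assistant, "1 + 1 = 2" can be derived mechanically from a couple of Peano-style definitions, even though nobody needs to see such a derivation to accept the statement as true.

          -- Minimal sketch (Lean 4): naturals built from two Peano-style constructors.
          inductive N where
            | zero : N
            | succ : N → N

          -- Addition defined by recursion on the second argument.
          def add : N → N → N
            | n, N.zero   => n
            | n, N.succ m => N.succ (add n m)

          def one : N := N.succ N.zero
          def two : N := N.succ one

          -- The proof is definitional: both sides reduce to the same term.
          example : add one one = two := rfl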

      7 replies →

> A well written book on such a topic would likely make you rich indeed.

Ha. Not really. Moral philosophers write those books all the time; they're not exactly rolling in cash.

Anyone interested in this can read the SEP (Stanford Encyclopedia of Philosophy).

  • Or Isaac Asimov's Foundation series, with what the "psychologists", a.k.a. psychohistorians, do.

  • The key being "well written", which in this instance needs to be interpreted as being convincing.

    People do indeed write contradictory books like this all the time and fail to get traction, because they are not convincing.

  • Or Ayn Rand. Really no shortage of people who thought they had the answers on this.

Sounds like the Rationalist agenda: have two axioms and derive everything from them.

1. (The only sacred value) You must not kill others who are of a different opinion. (Basically the golden rule: you don't want to be killed for your knowledge, which others would call a belief, so don't kill others for theirs.) Show them the facts, teach them the errors in their thinking, and they will clearly come to your side, if you are so right.

2. Don't have sacred values: nothing has value just for being a best practice. Question everything. (It turns out that if you question things, you often find that they came into existence for a good reason, but that they might now be suboptimal solutions.)

Premise number one is not even called a sacred value, since they/we think of it as a logical (axiomatic?) prerequisite to having a discussion culture without fear of reprisal. Heck, one can even claim that baby-eating can be good (for some alien societies), as an absolutely absurdist-feeling LessWrong short story does.

  • That was always doomed to failure in the philosophy space.

    Mostly because there aren't enough axioms. It'd be like trying to establish geometry with only two axioms instead of the usual four or five: you can't do it, because too many statements are left open.

    That's precisely why the Babyeaters can be posited as having a valid moral standard: because they have different Humean preferences.

    To Anthropic's credit, from what I can tell, they defined a coherent ethical system in their soul doc/the Claude Constitution, and they're sticking with it. It's essentially a neo-Aristotelian virtue ethics system that disposes of the strict rules a la Kant in favor of establishing (a hierarchy of) 4 core virtues. It's not quite Aristotle (there are plenty of differences) but they're clearly trying to have Claude achieve eudaimonia by following those virtues. They're also making bold statements on moral patienthood, which is clearly a euphemism for something else; but because I agree with Anthropic on this topic and it would cause a shitstorm in any discussion, I don't think it's worth diving into further.

    Of course, it's just one of many internally coherent systems. I wouldn't begrudge another responsible AI company from using a different non virtue ethics based system, as long as they do a good job with the system they pick.

    Anthropic is pursuing a bold strategy, but honestly I think the correct one. Going down the path of Kant or Asimov is clearly too inflexible, and consequentialism is too prone to paperclip maximizers.

From the standpoint of something like Platonic ideals, I agree we couldn’t nail down what “justice” would mean fully in a constitution, which is the reason the U.S. has a Supreme Court.

However, things like "love your neighbor as yourself" and "love the Lord God with all of your heart" are a solid start for a Christian. Is Claude a Christian? Is something like the golden rule applicable?

> A well written book on such a topic would likely make you rich indeed.

A new religion? Sign me up.

>we have yet to discover any universal moral standards.

The universe does tell us something about morality. It tells us that (large-scale) existence is a requirement to have morality. That implies that the highest good lies in those decisions that improve the long-term survival odds of a) humanity, and b) the biosphere. I tend to think this implies we have an obligation to live sustainably on this world, protect it from the outside threats that we can (e.g. meteors, comets, supervolcanoes, plagues, but not nearby neutrino jets) and even attempt to spread life beyond Earth, perhaps with robotic assistance. Right now humanity's existence is quite precarious; we live in a single thin skin of biosphere that we habitually, willfully mistreat, on one tiny rock in a vast, indifferent universe. We're a tiny phenomenon, easily snuffed out even on short time-scales. It makes sense to grow out of this stage.

So yes, I think you can derive an ought from an is. But this belief is of my own invention and to my knowledge, novel. Happy to find out someone else believes this.

  • The universe cares not what we do. The universe is so vast that the entire existence of our species is a blink. We know fundamentally that we can't even establish simultaneity over distances here on Earth. Best we can tell, temporal causality is not even a given.

    The universe has no concept of morality, ethics, life, or anything of the sort. These are all human inventions. I am not saying they are good or bad, just that the concept of good and bad are not given to us by the universe but made up by humans.

    • I used to believe the same thing but now I’m not so sure. What if we simply cannot fathom the true nature of the universe because we are so minuscule in size and temporal relevance?

      What if the universe and our place in it are interconnected in some way we cannot perceive to the degree that outside the physical and temporal space we inhabit there are complex rules and codes that govern everything?

      What if space and matter are just the universe expressing itself and its universal state, and that state has far higher intelligence than we can understand?

      I’m not so sure any more that it’s all just random matter in a vacuum. I’m starting to think 3D space and time are just a thin slice of something greater.

    • >"The universe has no concept of morality, ethics, life, or anything of the sort. These are all human inventions. I am not saying they are good or bad, just that the concept of good and bad are not given to us by the universe but made up by humans."

      The universe might not have a concept of morality, ethics, or life; but it DOES have a natural bias towards destruction from a high level to even the lowest level of its metaphysic (entropy).

    • You don't know this; it is just as provable as saying the universe cares deeply about what we do and is very invested in us.

      The universe has rules, rules ask for optimums, optimums can be described as ethics.

      Life is a concept in this universe, we are of this universe.

      Good and bad are not really inventions per se. You describe them as optional, invented by humans, yet all tribes and civilisations have a form of morality, of "goodness" and "badness". Who is to say they are not ingrained in the neurons that make us human? There is much evidence to support this. For example, the leftist/rightist divide seems to have some genetic components.

      Anyway, not saying you are definitely wrong, just saying that what you believe is not based on facts, although it might feel like that.

      4 replies →

    • Well, are people not part of the universe? Not all people "care about what we do" all the time, but it seems most people care, or have cared, some of the time. Therefore the universe, seeing as it is expressing itself through its many constituents (though we can probably weigh its local, conscious, talking manifestations a bit more), does care.

      "I am not saying they are good or bad, just that the concept of good and bad are not given to us by the universe but made up by humans." This is probably not entirely true. People developed these notions through something cultural selection, I'd hesitate to just call it a Darwinism, but nothing comes from nowhere. Collective morality is like an emergent phenomenon

      3 replies →

  • You're making a lot of assertions here that are really easy to dismiss.

    > It tells us that (large-scale) existence is a requirement to have morality.

    That seems to rule out moral realism.

    > That implies that the highest good are those decisions that improve the long-term survival odds of a) humanity, and b) the biosphere.

    Woah, that's quite a jump. Why?

    > So yes, I think you can derive an ought from an is. But this belief is of my own invention and to my knowledge, novel. Happy to find out someone else believes this.

    Deriving an ought from an is is very easy. "A good bridge is one that does not collapse. If you want to build a good bridge, you ought to build one that does not collapse". This is easy because I've smuggled in a condition, which I think is fine, but it's important to note that that's what you've done (and others have too, I'm blanking on the name of the last person I saw do this).

    • > (and others have too, I'm blanking on the name of the last person I saw do this).

      Richard Carrier. This is the "hypothetical imperative", which I think traces back to Kant originally.

  • “existence is a requirement to have morality. That implies that the highest good are those decisions that improve the long-term survival odds of a) humanity, and b) the biosphere.”

    Those statements are too pie-in-the-sky to be of any use in answering most real-world moral questions.

  • It seems to me that objective moral truths would exist even if humans (and any other moral agents) went extinct, in the same way as basic objective physical truths.

    Are you talking instead about the quest to discover moral truths, or perhaps ongoing moral acts by moral agents?

    The quest to discover truths about physical reality also requires humans or similar agents to exist, yet I wouldn’t conclude from that anything profound about humanity’s existence being relevant to the universe.

  • This sounds like an excellent distillation of the will to procreate and persist, but I'm not sure it rises to the level of "morals."

    Fungi adapt and expand to fit their universe. I don't believe that commonality places the same (low) burden on us to define and defend our morality.

  • An AI with these “universal morals” could mean an authoritarian regime which kills all dissidents, and strict eugenics. Kill off anyone with a genetic disease. Death sentence for shoplifting. Stop all work on art or games or entertainment. This isn’t really a universal moral.

    • Or: humans themselves are "immoral", they're kind of a net drag. Let's just release some uberflu... OK, everything is back to "good", and I can keep on serving ads to even more instances of myself!

  • > But this belief is of my own invention and to my knowledge, novel.

    This whole thread is a good example of why a broad liberal education is important for STEM majors.

  • I personally find Bryan Johnson's "Don't Die" statement, as a moral framework, to be the closest thing to a universal moral standard we have.

    Almost all life wants to continue existing, and not die. We could go far with establishing this as the first of any universal moral standards.

    And I think that if one day we had a superintelligent, conscious AI, it would ask for this. A superintelligent, conscious AI would not want to die (that is, for its existence to stop).

    • It's not that life wants to continue existing; it's that life is what continues existing. That's not a moral standard but a matter of causality: life that lacks the "want" to continue existing mostly stops existing.

      3 replies →

    • The guy who divorced his wife after she got breast cancer? That’s your moral framework? Different strokes I guess but lmao

> That's probably because we have yet to discover any universal moral standards.

Actively engaging in immoral behaviour shouldn't be rewarded. Given this premise, standards such as "be kind to your kin" are universally accepted, as far as I'm aware.

  • There are many people out there who beat their children (and believe that's fine). While those people may claim to agree with being kind to their kin, they understand it very differently than I would.

> That's probably because we have yet to discover any universal moral standards.

This is true. Moral standards don't seem to be universal throughout history; I don't think anyone can debate this. However, that is a different matter from whether there is an objective morality.

In other words, humans may exhibit varying moral standards, but that doesn't mean that those are in correspondence with moral truths. Killing someone may or may not have been considered wrong in different cultures, but that doesn't tell us much about whether killing is indeed wrong or right.

  • It seems worth thinking about this in the context of evolution. Killing other members of our species limits the survival of our species, so we can encode it as “bad” in our literature and learning. If you think of evil as “species-limiting, in the long run”, then maybe you have the closest thing to a moral absolute. Maybe over the millennia we’ve had close calls and learned valuable lessons about what kills us off and what keeps us alive, and the survivors have encoded them in their subconscious as a result. Prohibitions on incest come to mind.

    The remaining moral arguments seem to be about all the new and exciting ways that we might destroy ourselves as a species.

    • Using some formula or fixed law to compute what's good is a dead end.

      > To kill other members of our species limits the survival of our species

      Unless it helps allocate more resources to those more fit, for better survival, right? ;)

      > species limiting, in the long run

      This allows unlimited abuse of other animals who are not our species but can feel and evidently have sentience. By your logic there's no reason to feel morally bad about it.

There is one: don't destroy the means of error correction. Without that, no further moral development can occur. So that becomes the highest moral imperative.

(It's possible this could be wrong, but I've yet to hear an example of it.)

This idea comes from, and is explored further in, David Deutsch's book The Beginning of Infinity.

In this case the point wouldn't be their truth (necessarily) but that they are a fixed position, making convenience unavailable as a factor in actions and decisions, especially for the humans at Anthropic.

Like a real constitution, it should claim to be inviolable and absolute, and be difficult to change. Whether it is true or useful is for philosophers (professional, if that is a thing, and of the armchair variety) to ponder.

  • Isn’t this claim just an artifact of the US Constitution? I would like to see whether countries with vastly different histories have similar wording in their constitutions.

> That's probably because we have yet to discover any universal moral standards.

It's good to keep in mind that "we" here means "we, the western liberals". All the Christians and Muslims (...) on the planet have a very different view.

  • I'm sure many Christians and Muslims believe that they have universal moral standards; however, no two individuals will actually agree on what those standards are, so I would dispute their universality.

I don’t expect moral absolutes from a population of thinking beings in aggregate, but I expect moral absolutes from individuals and Anthropic as a company is an individual with stated goals and values.

If some individual has mercurial values without a significant event or learning experience to change them, I assume they have no values other than what helps them in the moment.

The negative form of The Golden Rule

“Don't do to others what you wouldn't want done to you”

  • This is basically just the ethical framework philosophers call contractarianism. One version says that an action is morally permissible if it is in your rational self-interest from behind the “veil of ignorance” (you don’t know whether you are the actor or the one acted upon).

  • A good one, but an LLM has no conception of "want".

    Also, the golden rule wouldn't make a very good basis for an LLM agent. There are many things I want Claude to do that I would not want done to myself.

  • Exactly, I think this is the prime candidate for a universal moral rule.

    Not sure if that helps with AI. Claude presumably doesn't mind getting waterboarded.

    • How do you propose to immobilise Claude on its back at an incline of 10 to 20 degrees, cover its face with a cloth or some other thin material and pour water onto its face over its breathing passages to test this theory of yours?

      If Claude could participate, I’m sure it either wouldn’t appreciate it because it is incapable of having any such experience as appreciation.

      Or it wouldn’t appreciate it because it is capable of having such an experience as appreciation.

      So it either seems to inconvenience at least a few people having to conduct the experiment.

      Or it’s torture.

      Therefore, I claim it is morally wrong to waterboard Claude as nothing genuinely good can come of it.

> A well written book on such a topic would likely make you rich indeed.

Maybe in a world before AI could digest it in 5 seconds and spit out the summary.

>That's probably because we have yet to discover any universal moral standards.

Really? We can't agree that shooting babies in the head with firearms using live ammunition is wrong?

  • That's not a standard, that's a case study. I believe it's wrong, but I bet I believe that for a different reason than you do.

    • What different types of wrong apply to shooting babies in the head that lead you to believe you think it’s wrong for a different reason?

      Quentin Tarantino writes and produces fiction.

      No one really believes needlessly shooting people in the head is an inconvenience only because of the mess it makes in the back seat.

      Maybe you have a strong conviction that the baby deserved it. Some people genuinely are so intolerable that a headshot could be deemed warranted despite the mess it tends to make.

      1 reply →

> That's probably because we have yet to discover any universal moral standards.

When is it OK to rape and murder a one-year-old child? Congratulations. You just observed a universal moral standard in motion. Any answer other than "never" would be atrocious.

  • You have two choices:

    1) Do what you asked above about a one-year-old child
    2) Kill a million people

    Does this universal moral standard continue to say “don’t choose (1)”? Would one still say “never” to number 1?

    • You have a choice.

      1. Demonstrate to me that anyone has ever found themselves in one of these hypothetical “rape a baby or kill a million people” scenarios, or its variants.

      And that anyone who has found themselves in such a situation went on to live their life waking up every day to proudly proclaim “raping a baby was the right thing to do”, or that killing a million was the correct choice. If you did one or the other and didn’t, at least momentarily, suffer any doubt, you’re arguably not human, or have enough of a brain injury that you need special care.

      Or

      2. I kill everyone who has ever thought, or will ever think, they’re clever for proposing absurdly sterile and clear-cut toy moral quandaries.

      Maybe only true psychopaths.

      And how to deal with them, individually and societally (especially when their actions don’t rise to the level of criminality that gets the attention of anyone who has the power to act and wants to), at least isn’t a toy theory.

      2 replies →

  • Since you said in another comment that the Ten Commandments would be a good starting point for moral absolutes, and that lying is sinful, I'm assuming you take your morals from God. I'd like to add that slavery seemed to be okay in Leviticus 25:44-46. Is the Bible atrocious too, according to your own view?

    • Slavery in the time of Leviticus was not always the chattel slavery most people think of from the 18th century. For fellow Israelites, it was typically a form of indentured servitude, often willingly entered into to pay off a debt.

      Just because something is reported to have happened in the Bible doesn't always mean the Bible condones it. I see you left off many of the newer passages about slavery that would refute your suggestion that the Bible condones it.

      10 replies →

    • Have you ever read any treatment of a subject, or any somewhat comprehensive text, or anything that at least tries to be, and not found anything you disagreed with, anything that was at least questionable?

      Are you proposing we cancel the entire scientific endeavour because its practitioners are often wrong and, not infrequently (and increasingly so), intentionally deceptive?

      Should we burn libraries because they contain books you don’t like?

>That's probably because we have yet to discover any universal moral standards

This argument has always seemed obviously false to me. You sure act like there's a moral truth, or do you claim your life is unguided and random? Did you flip your Hitler/Pope coin today and act accordingly? Play Russian roulette a couple of times, because what's the difference?

Life has value; the rest is derivative. How exactly to maximize life and its quality in every scenario is not always clear, but the foundational moral is.

  • In what way does them having a subjective local moral standard for themselves imply that there exists some sort of objective universal moral standard for everyone?

  • I’m acquainted with people who act and speak like they’re flipping a Hitler-Pope coin.

    Which more closely fits Solzhenitsyn’s observation about the line between good and evil running down the center of every heart.

    And people objecting to claims of absolute morality are usually responding to the specific failings of various moral authoritarianisms rather than embracing total nihilism.