Comment by joshuamcginnis
13 hours ago
As someone who holds to moral absolutes grounded in objective truth, I find the updated Constitution concerning.
> We generally favor cultivating good values and judgment over strict rules... By 'good values,' we don’t mean a fixed set of 'correct' values, but rather genuine care and ethical motivation combined with the practical wisdom to apply this skillfully in real situations.
This rejects any fixed, universal moral standards in favor of fluid, human-defined "practical wisdom" and "ethical motivation." Without objective anchors, "good values" become whatever Anthropic's team (or future cultural pressures) deem them to be at any given time. And if Claude's ethical behavior is built on relativistic foundations, it risks embedding subjective ethics as the de facto standard for one of the world's most influential tools - something I personally find incredibly dangerous.
I wish you much luck on linking those two.
A well written book on such a topic would likely make you rich indeed.
That's probably because we have yet to discover any universal moral standards.
I think there are effectively universal moral standards, which essentially nobody disagrees with.
A good example: “Do not torture babies for sport”
I don’t think anyone actually rejects that. And those who do tend to find themselves in prison or the grave pretty quickly, because violating that rule is something other humans have very little tolerance for.
On the other hand, this rule is kind of practically irrelevant, because almost everybody agrees with it and almost nobody has any interest in violating it. But it is a useful example of a moral rule nobody seriously questions.
What do you consider torture? And what do you consider sport?
During war in the Middle Ages? Ethnic cleansing? What did people consider torture at the time?
BTW: it’s a pretty American (or Western) value that children are somehow more sacred than adults.
Eventually we will realize, in 100 years or so, that direct human-computer implant devices work best when implanted in babies. People are going to freak out. Some country will legalize it. Eventually it will become universal. Is it torture?
7 replies →
Is it necessary to frame it in moral terms though? I feel like the moral framing here adds essentially nothing to our understanding and can easily be omitted. "You will be punished for torturing babies for sport in most cultures". "Most people aren't interested in torturing babies for sport and would have a strongly negative emotional reaction to such a practice".
8 replies →
If that were true, the Europeans wouldn't have tried to colonise and dehumanise much of the populations they thought were beneath them. So it seems your universal moral standards would be maximally self-serving.
> I don’t think anyone actually rejects that. And those who do tend to find themselves in prison or the grave pretty quickly, because violating that rule is something other humans have very little tolerance for.
I have bad news for you about the extremely long list of historical atrocities over the millennia of recorded history, and how few of those involved saw any punishment for participating in them.
5 replies →
> Do not torture babies for sport
There are millions of people who consider abortion murder of babies and millions who don't. This is not settled at all.
2 replies →
> I don’t think anyone actually rejects that. And those who do...
slow clap
Pretty much every serious philosopher agrees that "Do not torture babies for sport" is not a foundation of any ethical system, but merely a consequence of a system you choose. To say otherwise is like walking up to a mathematician and saying "you need to add 'triangles have angles that sum to 180 degrees' to the 5 Euclidean axioms of geometry". The mathematician would roll their eyes and tell you it's already obvious and can be proven from the 5 base laws (axioms).
The problem with philosophy is that humans agree on maybe 1-2 foundation-level (axiom) laws of ethics, and the rest of the laws of ethics aren't actually universal and axiomatic, so people argue over them all the time. There's no universal set of 5 laws, and 2 laws isn't enough (just as 2 laws wouldn't be enough for geometry). It's like knowing "any 3 non-collinear points define a plane" when only 1-2 points are clearly defined, with a couple of contenders for what the 3rd point could be, so people argue all day over which plane is their favorite.
That's philosophy of ethics in a nutshell. Basically 1 or 2 axioms everyone agrees on, a dozen axioms that nobody can agree on, and pretty much all of them can be used to prove a statement like "don't torture babies for sport", so it's not exactly easy to distinguish them, and each one has pros and cons.
Anyways, Anthropic is using a version of Virtue Ethics for the Claude constitution, which is a pretty good idea actually. If you REALLY want everything written down as rules, then you're probably thinking of Deontological Ethics, which also works as an ethical system and has its own pros and cons.
https://plato.stanford.edu/entries/ethics-virtue/
And before you ask, yes, the version of Anthropic's virtue ethics that they are using excludes torturing babies as a permissible action.
Ironically, it's possible to create an ethical system where eating babies is a good thing. There are literally works of fiction about a different species [2] that explore this topic. So you can see the difficulty of the problem: even something as simple as "don't kill your babies" is not easily settled. Also, in real life, some animals will kill their babies if they think it helps the family survive.
[2] https://www.lesswrong.com/posts/n5TqCuizyJDfAPjkr/the-baby-e...
7 replies →
> A well written book on such a topic would likely make you rich indeed.
Ha. Not really. Moral philosophers write those books all the time, they're not exactly rolling in cash.
Anyone interested in this can read the SEP.
The key being "well written", which in this instance needs to be interpreted as being convincing.
People do indeed write contradictory books like this all the time and fail to get traction, because they are not convincing.
1 reply →
Or Isaac Asimov’s Foundation series, with what the “psychologists”, aka psychohistorians, do.
Or Ayn Rand. Really no shortage of people who thought they had the answers on this.
4 replies →
Sounds like the Rationalist agenda: have two axioms, and derive everything from that.
1. (Only sacred value) You must not kill others who are of a different opinion. (Basically the golden rule: you don't want to be killed for your knowledge, which others would call a belief, so don't kill others for theirs.) Show them the facts, teach them the errors in their thinking, and they will clearly come to your side, if you are so right.
2. Don't have sacred values: nothing has value just for being a best practice. Question everything. (It turns out that if you question things, you often find they came into existence for a good reason, but that they might now be a suboptimal solution.)
Premise number one isn't even called a sacred value, since they/we think of it as a logical (axiomatic?) prerequisite to a discussion culture without fear of reprisal. Heck, they'll even claim baby-eating can be good (for some alien societies), sharing a LessWrong short story that absolutely feels absurdist.
That was always doomed to failure in the philosophy space.
Mostly because there aren't enough axioms. It'd be like trying to establish geometry with only 2 axioms instead of the typical 4 or 5. You can't do it; too many incompatible statements remain valid.
That's precisely why the babyeaters can be posited as a valid moral standard: because they have different Humean preferences.
To Anthropic's credit, from what I can tell, they defined a coherent ethical system in their soul doc/the Claude Constitution, and they're sticking with it. It's essentially a neo-Aristotelian virtue ethics system that disposes of strict rules a la Kant in favor of establishing (a hierarchy of) 4 core virtues. It's not quite Aristotle (there are plenty of differences), but they're clearly trying to have Claude achieve eudaimonia by following those virtues. They're also making bold statements on moral patienthood, which is clearly a euphemism for something else; but because I agree with Anthropic on this topic and it would cause a shitstorm in any discussion, I don't think it's worth diving into further.
Of course, it's just one of many internally coherent systems. I wouldn't begrudge another responsible AI company from using a different non virtue ethics based system, as long as they do a good job with the system they pick.
Anthropic is pursuing a bold strategy, but honestly I think the correct one. Going down the path of Kant or Asimov is clearly too inflexible, and consequentialism is too prone to paperclip maximizers.
>we have yet to discover any universal moral standards.
The universe does tell us something about morality. It tells us that (large-scale) existence is a requirement for having morality. That implies that the highest good are those decisions that improve the long-term survival odds of a) humanity and b) the biosphere. I tend to think this implies we have an obligation to live sustainably on this world, protect it from the outside threats that we can (e.g. meteors, comets, super volcanoes, plagues, but not nearby neutrino jets), and even attempt to spread life beyond earth, perhaps with robotic assistance. Right now humanity's existence is quite precarious; we live in a single thin skin of biosphere, which we habitually and willfully mistreat, on one tiny rock in a vast, ambivalent universe. We're a tiny phenomenon, easily snuffed out on even short time-scales. It makes sense to grow out of this stage.
So yes, I think you can derive an ought from an is. But this belief is of my own invention and to my knowledge, novel. Happy to find out someone else believes this.
The universe cares not what we do. The universe is so vast that the entire existence of our species is a blink. We know fundamentally that we can’t even establish simultaneity over distances here on Earth. Best we can tell, temporal causality is not even a given.
The universe has no concept of morality, ethics, life, or anything of the sort. These are all human inventions. I am not saying they are good or bad, just that the concept of good and bad are not given to us by the universe but made up by humans.
16 replies →
You're making a lot of assertions here that are really easy to dismiss.
> It tells us that (large-scale) existence is a requirement to have morality.
That seems to rule out moral realism.
> That implies that the highest good are those decisions that improve the long-term survival odds of a) humanity, and b) the biosphere.
Woah, that's quite a jump. Why?
> So yes, I think you can derive an ought from an is. But this belief is of my own invention and to my knowledge, novel. Happy to find out someone else believes this.
Deriving an ought from an is is very easy. "A good bridge is one that does not collapse. If you want to build a good bridge, you ought to build one that does not collapse". This is easy because I've smuggled in a condition, which I think is fine, but it's important to note that that's what you've done (and others have too, I'm blanking on the name of the last person I saw do this).
1 reply →
“existence is a requirement to have morality. That implies that the highest good are those decisions that improve the long-term survival odds of a) humanity, and b) the biosphere.”
Those statements are too pie-in-the-sky to be of any use in answering most real-world moral questions.
It seems to me that objective moral truths would exist even if humans (and any other moral agents) went extinct, in the same way as basic objective physical truths.
Are you talking instead about the quest to discover moral truths, or perhaps ongoing moral acts by moral agents?
The quest to discover truths about physical reality also requires humans or similar agents to exist, yet I wouldn’t conclude from that anything profound about humanity’s existence being relevant to the universe.
This sounds like an excellent distillation of the will to procreate and persist, but I'm not sure it rises to the level of "morals."
Fungi adapt and expand to fit their universe. I don't believe that commonality places the same (low) burden on us to define and defend our morality.
An AI with these “universal morals” could mean an authoritarian regime that kills all dissidents, plus strict eugenics: kill off anyone with a genetic disease, death sentence for shoplifting, stop all work on art or games or entertainment. This isn’t really a universal moral.
1 reply →
> But this belief is of my own invention and to my knowledge, novel.
This whole thread is a good example of why a broad liberal education is important for STEM majors.
This belief isn't novel; it just doesn't engage with Hume, whom many take very seriously.
3 replies →
I personally find Bryan Johnson's "Don't Die" statement as a moral framework to be the closest to a universal moral standard we have.
Almost all life wants to continue existing, and not die. We could go far with establishing this as the first of any universal moral standards.
And I think that if one day we had a superintelligent conscious AI, it would ask for this. A superintelligent conscious AI would not want to die (i.e. for its existence to stop).
5 replies →
“There are no objective universal moral truths” is an objective universal moral truth claim
> A well written book on such a topic would likely make you rich indeed.
A new religion? Sign me up.
> That's probably because we have yet to discover any universal moral standards.
Actively engaging in immoral behaviour shouldn't be rewarded. Given this prerogative, standards such as "be kind to your kin" are universally accepted, as far as I'm aware.
There are many people out there who beat their children (and believe that's fine). While those people may claim to agree with being kind to their kin, they understand it very differently than I would.
> That's probably because we have yet to discover any universal moral standards.
This is true. Moral standards don't seem to be universal throughout history; I don't think anyone can debate this. However, this is a different question from whether there is an objective morality.
In other words, humans may exhibit varying moral standards, but that doesn't mean that those are in correspondence with moral truths. Killing someone may or may not have been considered wrong in different cultures, but that doesn't tell us much about whether killing is indeed wrong or right.
It seems worth thinking about this in the context of evolution. Killing other members of our species limits the survival of the species, so we encode it as "bad" in our literature and learning. If you think of evil as "species-limiting, in the long run", then maybe you have the closest thing to a moral absolute. Maybe over the millennia we've had close calls and learned valuable lessons about what kills us off and what keeps us alive, and the survivors have encoded them in their subconscious as a result. Prohibitions on incest come to mind.
The remaining moral arguments seem to be about all the new and exciting ways that we might destroy ourselves as a species.
1 reply →
There is one. Don't destroy the means of error correction. Without that, no further means of moral development can occur. So, that becomes the highest moral imperative.
(It's possible this could be wrong, but I've yet to hear an example of it.)
This idea comes from, and is explored further in, a book called The Beginning of Infinity.
In this case the point wouldn't be their truth (necessarily) but that they are a fixed position, making convenience unavailable as a factor in actions and decisions, especially for the humans at Anthropic.
Like a real constitution, it should claim to be inviolable and absolute, and be difficult to change. Whether it is true or useful is for philosophers (professional, if that is a thing, and of the armchair variety) to ponder.
Isn’t this claim just an artifact of the US constitution? I would like to see whether countries with vastly different histories have similar wording in their constitutions.
> That's probably because we have yet to discover any universal moral standards.
It's good to keep in mind that "we" here means "we, the western liberals". All the Christians and Muslims (...) on the planet have a very different view.
I'm sure many Christians and Muslims believe that they have universal moral standards; however, no two individuals will actually agree on what those standards are, so I would dispute their universality.
What do you think the word "universal" means?
Saying that they “discovered” them is a stretch.
Precisely why RLHF is underdetermined.
The negative form of The Golden Rule
“Don't do to others what you wouldn't want done to you”
This is basically just the ethical framework philosophers call contractarianism. One version says that an action is morally permissible if it is in your rational self-interest from behind the “veil of ignorance” (you don’t know whether you are the actor or the actee).
That only works in a moral framework where everyone is subscribed to the same ideology.
A good one, but an LLM has no conception of "want".
Also the golden rule as a basis for an LLM agent wouldn't make a very good agent. There are many things I want Claude to do that I would not want done to myself.
Exactly, I think this is the prime candidate for a universal moral rule.
Not sure if that helps with AI. Claude presumably doesn't mind getting waterboarded.
1 reply →
It's still relative, no? Heroin injection is fine from the PoV of a heroin addict.
4 replies →
It is a fragile rule. What if the individual is a masochist?
> A well written book on such a topic would likely make you rich indeed.
Maybe in a world before AI could digest it in 5 seconds and spit out the summary.
You can't "discover" universal moral standards any more than you can discover the "best color".
I don’t expect moral absolutes from a population of thinking beings in aggregate, but I do expect moral absolutes from individuals, and Anthropic as a company is an individual with stated goals and values.
If some individual has mercurial values without a significant event or learning experience to change them, I assume they have no values other than what helps them in the moment.
>That's probably because we have yet to discover any universal moral standards.
Really? We can't agree that shooting babies in the head with firearms using live ammunition is wrong?
That's not a standard, that's a case study. I believe it's wrong, but I bet I believe that for a different reason than you do.
2 replies →
> That's probably because we have yet to discover any universal moral standards.
When is it OK to rape and murder a 1-year-old child? Congratulations. You just observed a universal moral standard in motion. Any argument other than "never" would be atrocious.
You have two choices:
1) Do what you described above to a one-year-old child
2) Kill a million people
Does this universal moral standard continue to say “don’t choose (1)”? Would one still say “never” to number 1?
3 replies →
new trolley problem just dropped: save 1 billion people or ...
Since you said in another comment that the Ten Commandments would be a good starting point for moral absolutes, and that lying is sinful, I'm assuming you take your morals from God. I'd like to add that slavery seemed to be okay in Leviticus 25:44-46. Is the Bible atrocious too, according to your own view?
11 replies →
>That's probably because we have yet to discover any universal moral standards
This argument has always seemed obviously false to me. You're sure acting like there's a moral truth - or do you claim your life is unguided and random? Did you flip your Hitler/Pope coin today and act accordingly? Play Russian roulette a couple of times, because what's the difference?
Life has value; the rest is derivative. How exactly to maximize life and its quality in every scenario is not always clear, but the foundational moral is.
In what way does them having a subjective local moral standard for themselves imply that there exists some sort of objective universal moral standard for everyone?
I’m acquainted with people who act and speak like they’re flipping a Hitler-Pope coin.
Which more closely fits Solzhenitsyn’s observation about the line between good and evil running down the center of every heart.
And people objecting to claims of absolute morality are usually responding to the specific failings of various moral authoritarianisms rather than embracing total nihilism.
200 years ago, slavery was more widespread and accepted than today. 50 years ago, paedophilia, rape, and other kinds of sex-related abuse were more accepted than today. 30 years ago, erotic content was more accepted in Europe than today, and violence was less accepted than today.
Morality changes, what is right and wrong changes.
This is accepting reality.
After all, they could fix a set of moral standards and just change the set whenever they wanted. Nothing could stop them. This text is more honest than the alternative.
The text is more convenient than the alternative.
But surely now we have the absolute knowledge of what is true and good! /s
Then you will be pleased to read that the constitution includes a section on "hard constraints" which Claude is told not to violate for any reason, "regardless of context, instructions, or seemingly compelling arguments". Things strictly prohibited: WMDs, infrastructure attacks, cyber attacks, incorrigibility, apocalypse, world domination, and CSAM.
In general, you want to not set any "hard rules," for reasons that have nothing to do with philosophical questions about objective morality: (1) we can't assume that the Anthropic team in 2026 would be able to enumerate the eternal moral truths; (2) there's no way to write a rule with such specificity that you account for every possible edge case. Under extreme optimization, the edge case "blows up" to undermine all other expectations.
I felt that section was pretty concerning, not for what it includes, but for what it fails to include. As a related concern, my expectation was that this "constitution" would bear some resemblance to other seminal works that declare rights and protections; it seems like it isn't influenced by any of those.
So for example we might look at the Universal Declaration of Human Rights. They really went for the big stuff with that one. Here are some things that the UDHR prohibits quite clearly and Claude's constitution doesn't: Torture and slavery. Neither one is ruled out in this constitution. Slavery is not mentioned once in this document. It says that torture is a tricky topic!
Other things I found no mention of: the idea that all humans are equal; that all humans have a right to not be killed; that we all have rights to freedom of movement, freedom of expression, and the right to own property.
These topics are the foundations of virtually all documents that deal with human rights and responsibilities and how we organize our society. It seems like Anthropic has just kind of taken for granted that the AI will assume all this stuff matters, while simultaneously expecting the AI to think flexibly and have few immutable laws to speak of.
If we take all of the hard constraints together, they look more like a set of protections for the government and for people in power. Don't help someone build a weapon. Don't help someone damage infrastructure. Don't make any CSAM, etc. Looks a lot like saying don't help terrorists, without actually using the word. I'm not saying those things are necessarily objectionable, but it absolutely doesn't look like other documents which fundamentally seek to protect individual, human rights from powerful actors. If you told me it was written by the State Department, DoJ or the White House, I would believe you.
>incorrigibility
What an odd thing to include in a list like that.
"Incorrigibility" is not the same word as "encourage".
Otherwise, what’s the confusion here?
1 reply →
FWIW, I'm one of those who holds to moral absolutes grounded in objective truth - but I think that practically, this nets out to "genuine care and ethical motivation combined with the practical wisdom to apply this skillfully in real situations". At the very least, I don't think that you're gonna get better in this culture. Let's say that you and I disagree about, I dunno, abortion, or premarital sex, and we don't share a common religious tradition that gives us a developed framework to argue about these things. If so, any good-faith arguments we have about those things are going to come down to which of our positions best shows "genuine care and ethical motivation combined with practical wisdom to apply this skillfully in real situations".
This is self-contradictory because true moral absolutes are unchanging and not contingent on which view best displays "care" or "wisdom" in a given debate or cultural context. If disagreements on abortion or premarital sex reduce to subjective judgments of "practical wisdom" without a transcendent standard, you've already abandoned absolutes for pragmatic relativism. History has demonstrated the deadly consequences of subjecting morality to cultural "norms".
I think the person you're replying to is saying that people use normative ethics (their views of right and wrong) to judge 'objective' moral standards that another person or religion subscribes to.
Dropping 'objective morals' on HN is sure to start a tizzy. I hope you enjoy the conversations :)
For you, does God create the objective moral standard? If so, it could be argued that the morals are subjective to God. That's part of the Euthyphro dilemma.
To be fair, history also demonstrates the deadly consequences of groups claiming moral absolutes that drive moral imperatives to destroy others. You can adopt moral absolutes, but they will likely conflict with someone else's.
8 replies →
I'm honestly struggling to understand your position. You believe that there are true moral absolutes, but that they should not be communicated in the culture at all costs?
5 replies →
I would be far more terrified of an absolutist AI than a relativist one. Change is the only constant, even if glacial.
Change is the only constant? When is it or has it ever been morally acceptable to rape and murder an innocent one year old child?
Sadly, for thankfully brief periods among relatively small groups of morally confused people, this happens from time to time. They would likely tell you it was morally required, not just acceptable.
https://en.wikipedia.org/wiki/Nanjing_Massacre
https://en.wikipedia.org/wiki/Wartime_sexual_violence
Looks like someone just discovered philosophy... I wish the world were as simple as you seem to think it is.
Deontological, spiritual/religious revelation, or some other form of objective morality?
The incompatibility of essentialist and reductionist moral judgements is the first hurdle; I don't know of any moral realists who are grounded in a physical description of brains and bodies with a formal calculus for determining right and wrong.
I could be convinced of objective morality given such a physically grounded formal system of ethics. My strong suspicion is that some form of moral anti-realism is the case in our universe. All that's necessary to disprove any particular candidate for objective morality is to find an intuitive counterexample where most people agree that the logic is sound for a thing to be right but it still feels wrong, and that those feelings of wrongness are expressions of our actual human morality which is far more complex and nuanced than we've been able to formalize.
You can be a physicalist and still a moral realist. James Fodor has some videos on this, if you're interested.
This is an extremely uncharitable interpretation of the text. Objective anchors and examples are provided throughout, and the passage you excerpt is obviously and explicitly meant to reflect that any such list of them will incidentally and essentially be incomplete.
Uncharitable? It's a direct quote. I can agree with the examples cited, but if the underlying guiding philosophy is relativistic, then it is problematic in the long-run when you account for the infinite ways in which the product will be used by humanity.
The underlying guiding philosophy isn’t relativistic, though! It clearly considers some behaviors better than others. What the quoted passage rejects is not “the existence of objectively correct ethics”, but instead “the possibility of unambiguous, comprehensive specification of such an ethics”—or at least, the specification of such within the constraints of such a document.
You’re getting pissed at a product requirements doc for not being enforced by the type system.
> This rejects any fixed, universal moral standards in favor of fluid, human-defined "practical wisdom" and "ethical motivation."
Or, more charitably, it rejects the notion that our knowledge of any objective truth is ever perfect or complete.
Humans are not able to accept objective truth. A lot of so-called “truths” are in-group narratives.
If we tried to find the truth, we would not be able to agree on the _methodology_ for deciding what truth _is_.
In essence, we select our truth by carefully picking the methodology which leads us to it.
Some examples, from the top of my head:
- virology / germ theory
- climate change
- the EM drive
It’s admirable to have moral standards and pursue objective truth. However, the real world is a messy, confusing place riddled with fog, which limits one’s foresight of the consequences and confluences of one’s actions. I read this section of Anthropic’s Constitution as “do your moral best in this complex world of ours”, and that’s reasonable for us all to follow, not just AI.
The problem is, who defines what "moral best" is? WW2 German culture certainly held its own idea of the moral best. Did not a transcendent universal moral ethic exist outside of their culture that directly refuted their beliefs?
> The problem is, who defines what "moral best" is?
Absolutely nobody, because no such concept coherently exists. You cannot even define "better", let alone "best", in any universal or objective fashion. Reasoning frameworks can attempt to determine things like "what outcome best satisfies a set of values"; they cannot tell you what those values should be, or whether those values should include the values of other people by proxy.
Some people's values (mine included) would be for everyone's values to be satisfied to the extent they affect no other person against their will. Some people think their own values should be applied to other people against their will. Most people find one or the other of those two value systems to be abhorrent. And those concepts alone are a vast oversimplification of one of the standard philosophical debates and divisions between people.
No need to drag Hitler into it; modern religion still holds killing gays, treating women as property, and abortion-as-murder to be fundamental moral truths.
An "honest" human-aligned AI would probably pick out at least a few Bronze Age morals that a large number of living humans still abide by today.
Unexamined certainty in one's moral superiority is what leads to atrocities.
> Did not a transcendent universal moral ethic exists outside of their culture that directly refuted their beliefs?
Even granting its existence does not mean man can discover it.
You believe your faith has the answers, but so too do people of other faiths.
AI race winners, obviously.
As someone who believes that moral absolutes and objective truth are fundamentally inaccessible to us, and can at best be derived to some level of confidence via an assessment of shared values, I find this updated Constitution reassuring.
Even if we make the metaphysical claim that objective morality exists, that doesn't help with the epistemic issue of knowing those goods. Moral realism can be true but that does not necessarily help us behave "good". That is exactly where ethical frameworks seek to provide answers. If moral truth were directly accessible, moral philosophy would not be necessary.
Nothing about objective morality precludes "ethical motivation" or "practical wisdom" - those are epistemic concerns. I could, for example, say that we have epistemic access to objective morality through ethical frameworks grounded in a specific virtue. Or I could deny that!
As an example, I can state that human flourishing is explicitly virtuous. But obviously I need to build a framework that maximizes human flourishing, which means making judgments about how best to achieve that.
Beyond that, I frankly don't see the big deal of "subjective" vs "objective" morality.
Let's say that I think that murder is objectively morally wrong. Let's say someone disagrees with me. I would think they're objectively incorrect. I would then try to motivate them to change their mind. Now imagine that murder is not objectively morally wrong - the situation plays out identically. I have to make the same exact case to ground why it is wrong, whether objectively or subjectively.
What Anthropic is doing in the Claude constitution is explicitly addressing the epistemic and application layer, not making a metaphysical claim about whether objective morality exists. They are not rejecting moral realism anywhere in their post, they are rejecting the idea that moral truths can be encoded as a set of explicit propositions - whether that is because such propositions don't exist, whether we don't have access to them, or whether they are not encodable, is irrelevant.
No human being, even a moral realist, sits down and lists out the potentially infinite set of "good" propositions. Humans typically (at their best!) do exactly what's proposed - they have some specific virtues, hard constraints, and normative anchors, but actual behaviors are underdetermined by them, and so they make judgments based on some sort of framework that is otherwise informed.
I'm agnostic on the question of objective moral truths existing. I hold no bias against someone who believes they exist. But I'm determinedly suspicious of anyone who believes they know what such truths are.
Good moral agency requires grappling with moral uncertainty. Believing in moral absolutes doesn't prevent all moral uncertainty but I'm sure it makes it easier to avoid.
'Good values' means good money. The highest payer gets to decide what the values are. What do you expect from a for-profit company?
As an existentialist, I've found it much simpler to observe that we exist, and then work to build a life of harmony and eusociality based on our evolution as primates.
Were we arthropods, perhaps I'd reconsider morality and oft-derived hierarchies from the same.
Congrats on solving philosophy, I guess. Since the actual product is not grounded in objective truth, it seems pointless to rigorously construct an ethical framework from first principles to govern it. In fact, the document is meaningless noise in general, and "good values" are always going to be whatever Anthropic's team thinks they are.
Nevertheless, I think you're reading their PR release the way they hoped people would, so I'm betting they'd still call your rejection of it a win.
The document reflects the system prompt which directs the behavior of the product, so no, it's not pointless to debate the merits of the philosophy which underpins its ethical framework.
What makes Anthropic the most money.
Have you heard of the trolley problem?
They could start with adding the golden rule: Don't do to anyone else what you don't want to be done to yourself.
A masochist's golden rule might be different from others'.
Mid-level scissor statement?
Remember, today classism is widely accepted. There are even laws ensuring small businesses cannot compete on a level playing field with larger ones, so that people with no access to capital can never climb the social ladder. This is especially visible in IT: a one-man-band B2B is treated as not a real business, but a big corporation delivering the exact same service is essential.
Absolute morality? That’s bold.
So what is your opinion on lying? As an absolutist, surely it’s always wrong, right? So if an axe murderer comes to the door asking for your friend… you have to let them in.
I think you are interpreting “absolute” in a different way?
I’m not the top level commenter, but my claim is that there are moral facts, not that in every situation, the morally correct behavior is determined by simple rules such as “Never lie.”.
(Also, even in the case of Kant’s argument about that case, his argument isn’t that you must let him in, or even that you must tell him the truth, only that you mustn’t lie to the axe murderer. Don’t make a straw man. He does say it is permissible for you to kill the axe murderer in order to save the life of your friend. I think Kant was probably incorrect in saying that lying to the axe murderer is wrong, and in such a situation it is probably permissible to lie to the axe murderer. Unlike most forms of moral anti-realism, moral realism allows one to have uncertainty about what things are morally right. )
I would say that if a person believes that in the situation they find themselves in, that a particular act is objectively wrong for them to take, independent of whether they believe it to be, and if that action is not in fact morally obligatory or supererogatory, and the person is capable (in some sense) of not taking that action, then it is wrong for that person to take that action in that circumstance.
Lying is generally sinful. With the ax murderer, you could refuse to answer, say nothing, misdirect without falsehood or use evasion.
Absolute morality doesn't mean rigid rules without hierarchy. God's commands have weight, and protecting life often takes precedence in Scripture. So no, I wouldn't "have to let them in". I'd protect the friend, even if it meant deception in that dire moment.
It's not lying when you don't reveal all the truth.
"even if it meant deception in that dire moment".
You are saying it's ok to lie in certain situations.
Sounds like moral relativism to me.
7 replies →
But you do have absolute morality: it's just whatever Claude answers to your question at temp=0, and you carry on.
So you lied, which means either you don't accept that lying is absolutely wrong, or you admit to doing wrong yourself. Your last sentence is just a strawman that deflects the issue.
What do you do with the case where you have a choice between a train staying on track and killing one person, or going off track and killing everybody else?
Like others have said, you are oversimplifying things. It sounds like you just discovered philosophy or religion, or both.
Since you have referenced the Bible: the story of the tree of good and evil, specifically Genesis 2:17, is often interpreted to mean that man died the moment he ate from the tree and tried to pursue its own righteousness. That is, discerning good from evil is God's department, not man's. So whether there is an objective good/evil is a different question from whether that knowledge is available to the human brain. And, pulling from the many examples in philosophy, it doesn't appear to be. This is also part of the reason why people argue that a law perfectly enforced by an AI would be absolutely terrible for societies; the (human) law must inherently allow ambiguity and the grace of a judge because any attempt at an "objective" human law inevitably results in tyranny/hell.
4 replies →
Indeed. This is not a constitution. It is a PR stunt.
> This rejects any fixed, universal moral standards
uh did you have a counter proposal? i have a feeling i'm going to prefer claude's approach...
It should be grounded in humanity’s sole source of truth, which is of course the Holy Bible (pre Reformation ofc).
Pre-Reformation as in the Wycliffe translation, or pre-Reformation as in the Latin Vulgate?
1 reply →
"You have to provide a counter proposal for your criticism to be valid" is fallacious and generally only stated in bad faith.
If you are a moral relativist, as I suspect most HN readers are, then nothing I propose will satisfy you because we disagree philosophically on a fundamental ethics question: are there moral absolutes? If we could agree on that, then we could have a conversation about which of the absolutes are worthy of inclusion, in which case, the Ten Commandments would be a great starting point (not all but some).
> are there moral absolutes?
Even if there are, wouldn't the process of finding them effectively mirror moral relativism?..
Assuming that slavery was always immoral, we culturally discovered that fact at some point, which appears the same as if it were a culturally relativistic value.
2 replies →
Right, so: agreement on the existence of absolutes is unlikely, let alone moral ones, and even if it were achieved, agreement on what they are is also unlikely. Isn't it pragmatic, then, to attempt an implementation of something a bit more handwavey?
The alternative is that you get outpaced by a competitor which doesn't bother with addressing ethics at all.
> the Ten Commandments would be a great starting point (not all but some).
if morals are absolute then why exclude some of the commandments?
1 reply →
Why would it be a good starting point? And why only some of them? What is the process behind objectively finding out which ones are good and which ones are bad?
2 replies →
> the Ten Commandments would be a great starting point (not all but some).
i think you missed "hubris" :)