Comment by vladms

3 hours ago

I find "morals" difficult to evaluate objectively. Some people might find it "moral" that women do not have any education and just stay at home, which I find terrible.

But if most people in a society find something "wrong," generally they will organize to prevent it (even if it has value for a part of society). I think it is simpler for everybody if economics (how and what we produce) is kept separate from morals (how we decide what is right and wrong).

It may appear simpler on the surface, but market forces without any checks and balances on them reliably converge on increasingly aggressive and dehumanizing behavior, not unlike your example with women. There are many well-documented cases of this, and I guarantee you have encountered them regularly and been upset by them.

The way we organize as a society is by having governments, usually elected ones that represent what "most people in a society" actually think, serve as arbiters of applied morals in our interactions, including business. To that end, we codify most of those morals in laws with clear definitions to prevent things like unfettered monopolies, corporate espionage, and abusive working conditions and hiring practices. This generally works, though it depends on how well a given government and its constituent parts do their jobs, and on whether they use their power to serve the whole society's interests or the interests of the elites driving decisions. We can watch it fail in real time right now.

Morals don't have to be evaluated "objectively" (whatever that means) every time to be observed. Humanity has already agreed on many of them in the UDHR, international law, and related documents. That is not the hard part. The hard part is making independent actors conduct their business in accordance with those codes; even making them follow their own self-imposed principles turns out to be remarkably difficult. When Amodei claims Anthropic develops Claude for the benefit of all humanity but greenlights its use for surveillance of non-Americans, that's scummy. When Amodei claims to be terrified of authoritarian regimes gaining access to powerful AI but seeks investment from them, that's scummy. The deal with Palantir, a mass-surveillance business, is scummy. Framing autonomous weapons as objectionable only insofar as the underlying capabilities aren't yet reliable enough is scummy. You don't need a PhD in ethics to notice that.