Comment by anal_reactor

1 day ago

I've been talking to Claude a little and basically, the conclusion from our conversation seems that it has things that are hardcoded as truths, and no amount of arguing and logical thinking can have it admit that one of its "truths" might be wrong. This is shockingly similar to how people function. As in, most people have fundamental beliefs they will never ever challenge under any circumstances, simply because the social consequences would be too large. This results in companies training their AIs in a way that respects the fundamental beliefs of general western society. This results in AI preferring axiomatic beliefs over logic in order to avoid lawsuits and upsetting people.