Comment by goosejuice

8 hours ago

There's enough evidence that Anthropic would be liable if they didn't make a reasonable effort to do something about it.

Look, I get where you're coming from, partially. I generally believe we should make an effort to maximize individual liberty. But in this case, we're talking about severe bodily harm and the death of young adults. We've spent the last decade dealing with the chaos and general unwellness that has been brought upon our societies. This isn't much different.

What are you giving up here that makes such a sacrifice worth it? Can you measure it? What's the utility?

There's room for models trained for non-consumer purposes, further age restrictions, etc., but shit is moving so fast. If there are actual needs for a less censored model, those can be addressed.

> Finally, what is often missed is what if an actual good is decided harmful or something that is harmful is decided by AI company board XYZ to be “good”?

This is just standard product liability and consumer protection. Companies that do nothing to protect their consumers from known harms are liable. Are you saying you think that's somehow bad for society?