Comment by keeda

3 months ago

> On what principle should that apply here and not elsewhere? Can your magic black box murder? Defame?

Good questions, and I think they're relevant to the point at hand. We're already seeing such cases pop up, like the libel suits or the recent, tragic AI-assisted suicides.

It's very clear that these models were not designed to be "suicide-ideation machines", yet that turned out to be one of the things they do. In these cases the question will not be whether the AI labs intended these outcomes, but whether they took sufficient precautions to anticipate and prevent them.

One possible defense for the AI labs could be "these machines have an unprecedented, possibly unlimited, range of capabilities, and we could not reasonably have anticipated this."

A smoking gun would be an email or report outlining just such a threat that they then dismissed (which may well exist, given what I hear about these labs' "move fast, break people" approach to safety). But absent that, it seems like a reasonable defense.

While that argument may not hold up in this case or others, I think it will keep coming up as these models do more and more unexpected things, and the courts will have to grapple with it eventually.