Comment by dns_snek
1 year ago
Nobody was killed because an AI product was inaccurate. If this was indeed the reason, this CEO was killed for killing someone's family member by denying them healthcare.
You're not expected to have a faultless AI, but you are expected to supervise it, to have an appeal process, and to make things right when the AI makes mistakes. In other words, this is a "high-risk system" under the EU AI Act, which should have appropriate safeguards in place.
> Human oversight shall aim to prevent or minimise the risks to health, safety or fundamental rights that may emerge when a high-risk AI system is used in accordance with its intended purpose or under conditions of reasonably foreseeable misuse, in particular where such risks persist despite the application of other requirements set out in this Section.