
Comment by yipbub

8 years ago

Yeah, but punitive justice is the least consequential thing in those situations. The main reason you'd punish someone there is to deter them from willfully repeating the scenario, and a path for learning and prevention exists in automated systems too.

The danger I see is the stifling of human subjective judgment: that broad decisions will be made and carried out without a sanity/humanity check by humans.

I think measures should be put in place to guarantee human intervention in cases of situational anomaly or system error, along with an emphasis on redress.

Since automation scales and a human workforce doesn't, guaranteed human intervention might not be practical.

Anyone have any ideas on how this kind of thing might be mitigated?

I think it won't be mitigated, given my experience working as a data scientist. It will be ignored, because it is profitable to ignore. If we slowed down and figured out how to make machines mimic human empathy before letting them make decisions, it could be mitigated, but we won't do that. Ask your boss about it.