Comment by zmmmmm
9 hours ago
It's a bit like the argument with self-driving cars, though. They may be safer overall, but there's a big difference in how responsibility for errors is attributed. If a human is not a decision-maker in the production of the code, where does responsibility for errors propagate to?
I feel like software engineers are taking a lot of license with the idea that if something bad happens, they will just be able to say "oh, the AI did it" and no personal responsibility or liability will attach. But if they personally looked at the code and their name is underneath it, signing off the merge request and acknowledging responsibility for it, we have a very different dynamic.
Just like artists have to re-conceptualise the value of what they do around the creative part of the process, software engineers have to rethink what their value proposition is. A large part of it, as I see it, is that you are going to take responsibility for the AI's output. It won't surprise me if, after the first few disasters, we see liability legislation that mandates human responsibility for AI errors. At that point, I suspect many of the people all in on agent-driven workflows that are explicitly designed to minimise human oversight are going to find themselves with a big problem.
My personal approach is to build up a tool set that maximises productivity while ensuring human oversight: not just that oversight occurs and is easy to do, but that documentation of it is recorded (inherently, in git).
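The comment doesn't say how that recording works, but one minimal way to make human sign-off part of the git history is a commit-msg hook that refuses commits lacking a review trailer. The sketch below is hypothetical (the "Reviewed-by" trailer convention and the hook script are my illustration, not the commenter's actual tooling); it would be saved as .git/hooks/commit-msg and made executable.

    #!/usr/bin/env python3
    # Hypothetical commit-msg hook: reject commits that lack a human sign-off trailer.
    # Illustrative sketch only, assuming a "Reviewed-by: Name <email>" convention.
    import re
    import sys

    TRAILER = re.compile(r"^Reviewed-by: .+ <.+@.+>$", re.MULTILINE)

    def main() -> int:
        msg_file = sys.argv[1]  # git passes the path to the commit message file
        with open(msg_file, encoding="utf-8") as f:
            message = f.read()
        if TRAILER.search(message):
            return 0  # human review is documented in the commit itself
        sys.stderr.write(
            "commit rejected: add a 'Reviewed-by: Name <email>' trailer "
            "documenting human review of the (possibly AI-generated) change\n"
        )
        return 1

    if __name__ == "__main__":
        sys.exit(main())

Because the trailer lives in the commit message, the record of who reviewed what is preserved inherently by git history, which is the property the comment is pointing at.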
It will be interesting to see how this all evolves.