Comment by ben_w
2 days ago
Sure, but every abstraction does that.
Whether a manager hires a team of real humans or hires an AI, either way the manager doesn't know or learn how the system works.
And asking doesn't help much: you can ask both humans and AI, and while their answers will differ in their strengths and weaknesses, both will have them. The humans' answers come with their own inferential distance, and that can be hard to bridge.
That's not the same. In this case, a machine made a decision that went against its instructions. If a machine makes decisions by itself, no one knows about the process. A team of humans making decisions benefits from multiple points of view, even though the manager is the one who approves what gets implemented or decides the course of the project.
Humans make mistakes, and those can be critical too (CrowdStrike), but letting machines decide, and build, and do everything just cuts humans out of the process, and with the current state of "AI", that's just dumb.
That's a very different problem than what I was replying to, which was about them being tools that "shields you from learning" and "using the tool that let's you avoid understanding things was a bad idea".
I agree that AIs have risks specifically because of memetic monoculture: while they can come from many different providers, and each instance even from the same provider can be asked to role-play many different approaches to combine multiple viewpoints, they're all still pretty similar. But the counterpoint is that while multiple different humans working together can sometimes avoid this, we absolutely also get groupthink and other political dynamics that make us more alike than we ideally would be.
Also, you're comparing a group of humans vs. one AI. I meant one human vs. one AI.