Comment by 2bsinha
14 days ago
Are AI failures really model problems, or governance problems?
Over the past year, we've seen AI systems hallucinate case law in court filings, leak their internal prompts, get manipulated by flattery, and make decisions they were never meant to make.
Most discussions focus on:
- better models
- alignment
- prompt design
But I’m starting to think many of these failures aren’t intelligence issues at all.
They’re governance issues.
In most real systems, we separate:
- capability from permission
- intelligence from authority
- generation from action
AI systems often skip this and let agents act by default, then try to clean up afterward with filters.
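For concreteness, here's a minimal sketch of the opposite pattern (all names hypothetical, not from any real framework): the model only ever emits a *proposal*, and a broker outside the model holds the permission table and decides what actually runs. Capability and permission live in different places.

```python
from dataclasses import dataclass

@dataclass
class ProposedAction:
    """What the model emits: a proposal, never a side effect."""
    name: str
    args: dict

# Permissions live outside the model: capability != permission.
AGENT_PERMISSIONS = {
    "support-bot": {"read_ticket", "draft_reply"},   # may suggest, not send
    "ops-agent":   {"read_ticket", "draft_reply", "send_reply"},
}

def broker(agent_id: str, action: ProposedAction, handlers: dict) -> dict:
    """The only code path that turns a proposal into an effect."""
    allowed = AGENT_PERMISSIONS.get(agent_id, set())
    if action.name not in allowed:
        # Denied before execution -- no output filter needed afterward.
        return {"status": "denied", "reason": f"{agent_id} lacks '{action.name}'"}
    return {"status": "ok", "result": handlers[action.name](**action.args)}

# Hypothetical handlers standing in for real side-effecting code.
handlers = {
    "draft_reply": lambda ticket_id, body: f"draft saved for {ticket_id}",
    "send_reply":  lambda ticket_id, body: f"reply sent for {ticket_id}",
}

proposal = ProposedAction("send_reply", {"ticket_id": "T-42", "body": "..."})
print(broker("support-bot", proposal, handlers))  # denied: can generate, can't act
print(broker("ops-agent", proposal, handlers))    # ok: permission granted elsewhere
```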
Curious how others here think about:
- eligibility checks before AI actions
- graduated authority for agents
- limiting influence rather than outputs
- system-level governance outside the model
Is anyone building or experimenting with this kind of control layer? (A rough sketch of what I mean by graduated authority is below.)
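On graduated authority specifically, the version I keep sketching looks something like this (again, all names and tiers hypothetical): each agent holds an authority level, each action has a minimum required level, and anything above the agent's level is escalated to a human rather than silently executed or flatly refused.

```python
from enum import IntEnum

class Authority(IntEnum):
    """Levels an agent can hold; also the minimum level an action demands."""
    LOW = 1
    MEDIUM = 2
    HIGH = 3

# Minimum authority required to execute each (hypothetical) action.
REQUIRED = {
    "summarize_doc": Authority.LOW,
    "update_record": Authority.MEDIUM,
    "issue_refund":  Authority.HIGH,
}

def route(agent_level: Authority, action: str) -> str:
    """Graduated authority: under-privileged agents escalate, they don't fail."""
    if agent_level >= REQUIRED[action]:
        return f"execute {action}"
    # The agent may still propose the action; a human decides whether it runs.
    return f"queue {action} for human approval"

# A newly deployed agent starts low and is promoted as trust accrues.
agent_level = Authority.MEDIUM
print(route(agent_level, "update_record"))  # execute update_record
print(route(agent_level, "issue_refund"))   # queue issue_refund for human approval
```

The point of the escalation branch is that "not yet authorized" becomes a routing decision in the system, not a refusal baked into the model.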