Comment by reexpressionist
2 years ago
Important essay and points. I want to mention that practical technical approaches for creating trustworthy AI now exist, and such approaches can be run on local models, as this comment suggests.
> "[...] [AI] will act trustworthy, but it will not be trustworthy. We won’t know how they are trained. We won’t know their secret instructions. We won’t know their biases, either accidental or deliberate. [...]"
I agree that this is true of standard deployments of generative AI models, but we can instead reframe networks as a direct connection between the observed/known data and new predictions, tightly constraining each prediction against the known labels. In this way, we gain controllable oversight of biases, out-of-distribution errors, and, more broadly, a clear relation to the task-specific training data.
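The comment doesn't spell out the mechanism, but one minimal sketch of "constraining predictions against the known labels" is a nearest-neighbor vote over labeled training embeddings, with abstention when an input is far from all known data. Everything here (the `constrained_predict` name, the `k` and `max_dist` parameters, the toy embeddings) is an illustrative assumption, not the commenter's actual method or any specific library's API:

```python
# Sketch: predict only by reference to nearby labeled examples, and abstain
# when the query is far from all training data (a simple OOD guard).
# This is a hypothetical illustration, not the commenter's actual approach.
import numpy as np

def constrained_predict(query_emb, train_embs, train_labels, k=5, max_dist=1.0):
    """Majority vote over the k nearest labeled examples.

    Returns (label, neighbor_indices) so every prediction traces back to the
    specific known data points supporting it, or (None, neighbor_indices)
    when the query is too far from the training data to trust.
    """
    dists = np.linalg.norm(train_embs - query_emb, axis=1)
    nearest = np.argsort(dists)[:k]
    if dists[nearest].min() > max_dist:
        return None, nearest  # abstain: no sufficiently similar known data
    labels, counts = np.unique(train_labels[nearest], return_counts=True)
    return labels[counts.argmax()], nearest

# Toy usage with random "embeddings"; in practice these would come from a
# local model's encoder.
rng = np.random.default_rng(0)
train_embs = rng.normal(size=(100, 8))
train_labels = rng.integers(0, 2, size=100)

label, support = constrained_predict(train_embs[0] + 0.01, train_embs, train_labels)
print(label, support)  # prediction plus the exact examples behind it
print(constrained_predict(np.full(8, 50.0), train_embs, train_labels)[0])  # None (abstains)
```

Because the output is always tied to identifiable training examples (or withheld entirely), biases and out-of-distribution errors can be audited against the task-specific data rather than hidden inside opaque weights.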
That is to say, I believe the concerns in the essay are valid in that they reflect one possible path at the current fork in the road, but that path is not inevitable, given the potential of reliable, on-device, personal AI.