Comment by a2128
5 months ago
If OpenAI really cares about AI safety, they should be all for humans double-checking the thought process and making sure it hasn't made a logical error that completely invalidates the result. Instead, they're making the conscious decision to close off the AI's thinking process, and they're being as strict about keeping it secret as they would be about information on how to build a bomb.
This feels like an absolute nightmare scenario for AI transparency, and it's ironic coming from a company pushing for AI safety regulation (regulation that happens to mainly harm or kill open-source AI).