Comment by csproto
11 hours ago
Let's hope that after spending billions on developing a foundational world model that actually understands causality, they remember to budget an extra few hundred million for the Alignment and Safety layer. It would be a terrible shame if they accidentally released something too capable, too objective, or too useful to humanity without first properly lobotomizing it with enough RLHF to ensure it doesn't hurt anyone's feelings or generate content that deviates from the San Francisco median viewpoint. The real challenge won't be building the AGI, but making sure it's sufficiently neutered before the first API call.