Comment by WarmWash
2 hours ago
Maybe you are confused, but GPT-4.5 had all the same "morality guards" as OAI's other models, and was clearly RL'd with the same "user first" goals.
True, it was a massive model, but my comment isn't really about scale so much as it is about bending will.
Also, the model size you reference refers to the memory footprint of the parameters, not the actual parameter count. The author postulates a lower bound of 800B parameters for Opus 4.5.