Comment by sdwr
9 hours ago
Great article! It does a good job of outlining the mechanics and implications of LLM prediction. It gets lost in the sauce in the alignment section though, where it suggests the Anthropic paper is about LLMs "pretending" to be future AIs. It's clear from the quoted text that the paper is about aligning the (then-)current, relatively capable model through training, as preparation for more capable models in the future.