Comment by TeMPOraL
1 year ago
> You are falling into the trap that everyone does. In anthropomorphising it. It doesn't understand anything you say.
And an intern does?
Anthropomorphising LLMs isn't entirely incorrect: they're trained to complete text like a human would, in a completely general setting, so by anthropomorphising them you're aligning your expectations with the models' training goals.