Comment by baxtr
7 days ago
To generalize from the conclusion you quoted:
I think a bad outcome would be a scenario where LLMs are rated as highly capable and intelligent because they excel at the things they’re supposed to do, yet remain easily manipulated.