Comment by brookst
9 hours ago
I haven’t found that to be the case. Both LLMs and humans produce outputs that cannot be blindly trusted to be accurate.