Comment by brookst
10 hours ago
I haven’t found that to be the case. Both LLMs and humans produce outputs that cannot be blindly trusted to be accurate.