Comment by brookst
19 hours ago
I haven’t found that to be the case. Both LLMs and humans produce outputs that cannot be blindly trusted to be accurate.