Comment by brookst
16 hours ago
I haven’t found that to be the case. Both LLMs and humans produce outputs that cannot be blindly trusted to be accurate.