Comment by brookst
20 hours ago
I haven’t found that to be the case. Both LLMs and humans produce outputs that cannot be blindly trusted to be accurate.