Comment by xp84
7 days ago
This seems like such a negative framing. LLMs are, roughly speaking, predictors of what's logical, or at least probable. In areas where what's probable is both wrong and harmful, I don't think anyone is motivated to "update reality" as some kind of general rule.