Comment by ura_yukimitsu
6 hours ago
Calling LLMs "bullshit machines" is a reference to a 2024 paper [1], which itself uses the concept of "bullshit" as defined in Harry G. Frankfurt's essay/book "On Bullshit" [2]. The TL;DR is that LLMs are fundamentally bullshit machines because they are built only to generate sentences that sound plausible, and plausible does not always mean true.
[1]: https://link.springer.com/article/10.1007/s10676-024-09775-5