Comment by tavavex
2 hours ago
What bad product? I'm not as categorical as OP, but acting like this is a solved problem is weird. LLMs generating nonsensical output isn't a one-off blip in one product that was quickly patched out; it's nigh unavoidable due to their probabilistic nature, likely until there's another breakthrough in the field. As far as I know, there's no LLM that will universally refuse to answer when it doesn't "know" something - instead it produces a response that feels correct but is gibberish. Nor is there one that doesn't have rare slip-ups even in familiar territory.