Comment by timothygold
6 days ago
> Also please don't perpetuate the statistical parrot interpretation of LLMs, that's not how they really work.
I'm pretty sure that's exactly how they work.
Depending on the quality of the LLM and the complexity of the thing you're asking about, good luck fact-checking its output. It takes about the same effort as finding direct sources and verified documentation or resources written by humans.
LLMs generate human-like answers by applying statistics and other techniques to a huge corpus. They do hallucinate, but what is less obvious is that a "correct" LLM output is still a hallucination. It just happens to be a slightly useful hallucination that isn't full of BS.
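To make the "statistics on a corpus" point concrete, here is a minimal sketch: a toy bigram model, vastly simpler than a transformer, but it shows the same basic move of sampling the next token in proportion to what tended to follow in the training text, with no concept of true or false. The corpus and function names here are made up purely for illustration.

```python
import random
from collections import defaultdict, Counter

# Tiny toy corpus standing in for the "huge corpus" a real LLM is trained on.
corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog . the dog chased the cat ."
).split()

# Count bigram frequencies: how often each token follows each preceding token.
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def sample_next(prev: str) -> str:
    """Sample the next token in proportion to how often it followed `prev`.
    There is no notion of correct or incorrect here, only likely or unlikely."""
    tokens, freqs = zip(*counts[prev].items())
    return random.choices(tokens, weights=freqs, k=1)[0]

# Generate one token at a time, the same autoregressive loop an LLM uses,
# just with a far cruder statistical model.
token = "the"
output = [token]
for _ in range(12):
    token = sample_next(token)
    output.append(token)

print(" ".join(output))
```

Run it a few times and you get fluent-looking but meaning-free strings; whether the output happens to be "right" is an accident of the statistics, which is the point being made above.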
Since the LLM takes in inconsistent input and always produces inconsistent output, you *will* have to fact-check everything it says, which makes it useless for automated reasoning or explanations and a shiny turd in most respects.
The useful things LLMs are reported to do were an emergent effect found by accident by natural-language engineers trying to build chatbots. LLMs are not sentient and have no idea whether their output is good or bad.