Comment by mjevans
3 hours ago
Think of the LLM as a slightly lossy compression algorithm fed by various pattern classifiers that weight and bin inputs and outputs.
The user of the LLM then provides a new input, which may or may not closely match those smudged-together training inputs, and the model produces an output that follows the same general pattern as the outputs expected in the training dataset.
We aren't anywhere near general intelligence yet.
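As a rough sketch of that analogy (toy code only, nothing like how a real LLM is implemented): a tiny next-token model that hashes contexts into a small number of bins, so distinct training inputs get smudged together, and a new prompt only produces sensible output when it lands in a bin the training data already filled.

```python
# Toy illustration of the "lossy compression + pattern binning" analogy.
# NUM_BINS is deliberately small so different contexts collide (the lossy part).
import random
from collections import Counter, defaultdict

NUM_BINS = 64

def bin_of(context):
    """Map a context (tuple of tokens) to one of NUM_BINS buckets."""
    return hash(context) % NUM_BINS

def train(tokens, order=2):
    """Count which token followed each (binned) context in the training data."""
    table = defaultdict(Counter)
    for i in range(len(tokens) - order):
        ctx = tuple(tokens[i:i + order])
        table[bin_of(ctx)][tokens[i + order]] += 1
    return table

def generate(table, prompt, length=10, order=2):
    """Match the new input against the smudged-together bins and emit
    whatever pattern dominated that bin during training."""
    out = list(prompt)
    for _ in range(length):
        counts = table.get(bin_of(tuple(out[-order:])))
        if not counts:
            break  # the prompt fell outside anything the bins ever saw
        tokens, weights = zip(*counts.items())
        out.append(random.choices(tokens, weights=weights)[0])
    return out

corpus = "the cat sat on the mat and the dog sat on the rug".split()
model = train(corpus)
print(" ".join(generate(model, ["the", "cat"])))
```

The point of the toy is only the shape of the mechanism: compression of many inputs into shared bins, and outputs that reproduce the training pattern rather than reason about the prompt.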