Comment by dartos
1 year ago
I think going to a statistics-based generator with the intention of taking what you see as an accurate representation of reality is a non-starter.
The model isn't trying to replicate reality; it's trying to minimize some error metric.
Sure, it may be inspired by reality, but it should never be considered an authority on reality.
And yes, the words an LLM writes have no meaning. We assign meaning to the output. There was no intention behind them.
The fact that some models can perfectly recall _some_ information that appears frequently in the training data is a happy accident. Remember, transformers were initially designed for translation tasks.
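To make "minimize some error metric" concrete, here is a minimal sketch of the usual next-token objective (cross-entropy). The vocabulary, logits, and target below are made up for illustration; a real model does this over a huge vocabulary and billions of examples, but the point stands: the loss rewards matching the observed token, not matching reality.

```python
import numpy as np

# Hypothetical toy example: the "error metric" a language model minimizes
# is cross-entropy between its predicted next-token distribution and the
# token that actually appeared in the training text.
vocab = ["the", "cat", "sat", "mat"]
logits = np.array([2.0, 0.5, 1.0, -1.0])  # model's raw scores for the next token
target = vocab.index("cat")               # the token that actually came next

# Softmax turns raw scores into a probability distribution over the vocabulary.
probs = np.exp(logits - logits.max())
probs /= probs.sum()

# Cross-entropy loss: low when the model assigns high probability to the
# observed token, regardless of whether that token is "true" about the world.
loss = -np.log(probs[target])
print(f"p({vocab[target]!r}) = {probs[target]:.3f}, loss = {loss:.3f}")
```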