Comment by voidUpdate
10 days ago
LLMs just predict the next token. They mimic humans because they were trained on terabytes of human-created data (with no credit given to the authors of that data), but they don't mimic human thinking. If they did, you could train them on their own output; instead, doing that gives you Model Collapse.
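A toy illustration of that collapse, using a Gaussian distribution as a hypothetical stand-in for a model (this is not how LLMs are trained, just a minimal sketch of the feedback loop): each generation is fit only to samples drawn from the previous generation, and the estimated spread drifts toward zero.

```python
import random
import statistics

# Toy "model collapse": repeatedly fit a Gaussian to samples drawn
# from the previous generation's fitted Gaussian. Each refit keeps
# only a finite-sample estimate of the variance, so diversity is
# gradually lost. Parameters here are illustrative, not from any paper.

random.seed(0)

def one_generation(mu, sigma, n=50):
    # "Train" the next model only on the current model's own output.
    samples = [random.gauss(mu, sigma) for _ in range(n)]
    return statistics.mean(samples), statistics.stdev(samples)

mu, sigma = 0.0, 1.0  # generation 0: the "human data" distribution
for gen in range(2000):
    mu, sigma = one_generation(mu, sigma)

print(sigma)  # far below the original 1.0: the distribution has degenerated
```

The same mechanism is why training a model purely on its own generations loses the tails of the original data first: rare outputs are under-sampled, the refit narrows, and the narrowing compounds.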