Comment by bad_username
11 hours ago
This is one reason why, before LLMs truly become contenders for replacing humans, the sharp distinction between pre-training, fine-tuning, and providing context in a conversation has to disappear. The model must be able to update itself (learn!) by recalibrating its massively high-dimensional parameters on the fly, as it operates, the way people do.
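The idea of "updating parameters on the fly" is essentially online learning: instead of a separate training phase, the model adjusts its weights after every observation it encounters. A minimal sketch of that loop, using plain online SGD on a toy linear model (all names and values here are illustrative, not from any real LLM training stack):

```python
import numpy as np

# Toy illustration of learning "as it operates": online SGD updates a
# linear model's parameters one example at a time, with no separate
# pre-training / fine-tuning phases.

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])   # hidden target the model should learn

w = np.zeros(2)                  # model parameters, updated continuously
lr = 0.1                         # learning rate

for _ in range(500):
    x = rng.normal(size=2)       # a new observation arrives mid-"conversation"
    y = true_w @ x               # feedback signal
    pred = w @ x
    grad = (pred - y) * x        # gradient of the squared error
    w -= lr * grad               # immediate, permanent parameter update

print(np.round(w, 2))            # w has drifted toward true_w
```

Scaling this from a 2-parameter toy to billions of weights (without catastrophic forgetting, and with trustworthy feedback signals) is exactly the open problem the comment is pointing at.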