Comment by hnuser123456
2 days ago
Fine-tuning should be combined with inference in some way, though this requires keeping the model loaded at high enough precision for backprop to work.
Instead of hundreds of thousands of us downloading the latest and greatest model that won't fundamentally update one bit until we're graced with the next one, we should all be able to fine-tune the weights so the model naturally memorizes new information and preferences without using up context length.
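A toy sketch of the idea (hypothetical, not real LLM code): a few full-precision gradient steps can fold a new key-value association directly into a model's weights, so recalling it afterwards needs no context at all.

```python
# Toy sketch (hypothetical stand-in, not a real LLM): a tiny linear "model"
# whose weights are updated by gradient descent so it memorizes a new
# key -> value association, instead of carrying the fact in the prompt.
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 4)) * 0.1       # stand-in for model weights (full precision)

key = np.array([1.0, 0.0, 0.0, 0.0])    # "question" embedding
value = np.array([0.0, 1.0, 0.0, 0.0])  # desired "answer" embedding

lr = 0.5
for _ in range(100):
    pred = W @ key
    grad = np.outer(pred - value, key)  # dL/dW for L = 0.5 * ||W @ key - value||^2
    W -= lr * grad                      # backprop step: the weights absorb the fact

print(np.allclose(W @ key, value, atol=1e-3))  # prints True: the model "remembers"
```

This is exactly why the precision caveat matters: the gradient step above needs the weights resident in a format backprop can use, which quantized inference-only deployments typically don't provide.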