Comment by robrenaud
6 days ago
There is no fundamental difference between fine-tuning with and without a LoRA. If you give me a model with a LoRA adapter, I can give you an updated model without the extra LoRA parameters that is functionally identical, because the low-rank update can be folded directly into the base weights.

Fitting a LoRA changes potentially useful information the same way that fine-tuning the whole model does. The LoRA just restricts the expressiveness of the weight update so that it is compactly encoded.
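A minimal sketch of the merge claim, not from the comment itself: a linear layer with a separate LoRA adapter computes the same outputs as a plain layer whose weight has absorbed the low-rank update. The names (W, A, B, rank r, scaling alpha) follow the usual LoRA convention and are assumptions for illustration.

```python
import torch

torch.manual_seed(0)
d_in, d_out, r, alpha = 16, 32, 4, 8.0

W = torch.randn(d_out, d_in)      # frozen base weight
A = torch.randn(r, d_in) * 0.01   # trained low-rank factor (r x d_in)
B = torch.randn(d_out, r) * 0.01  # trained low-rank factor (d_out x r)
scale = alpha / r

x = torch.randn(5, d_in)

# Forward pass with the adapter kept separate: x @ W.T + scale * x @ (B @ A).T
y_lora = x @ W.T + (x @ A.T @ B.T) * scale

# Merge the low-rank update into the base weight and drop the adapter entirely.
W_merged = W + scale * (B @ A)
y_merged = x @ W_merged.T

print(torch.allclose(y_lora, y_merged, atol=1e-5))  # True: functionally identical
```

The equality holds because x @ W.T + scale * x @ (B @ A).T = x @ (W + scale * B @ A).T, so nothing about the adapter survives as "extra" parameters once merged.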