Comment by adultSwim
1 month ago
For medical applications, across several generations of models, we have seen fine-tuned models outperform base models of similar size. However, newer and larger general-purpose base models outperform smaller fine-tuned models.
Also, as others have pointed out, supervised fine-tuning can be quite useful for teaching a model to perform specific tasks. I agree with the author that RAG is generally better suited for injecting additional knowledge.