Comment by qeternity

3 days ago

> "Fine-tuning LLMs for knowledge injection is a waste of time" is true, but IDK who's trying to do that.

Have people who say this ever actually done it? It works. It works pretty well.

I have no clue why this bad advice is so routinely parroted.

Fine-tuning for knowledge injection technically works given enough data, but it's pretty inefficient compared to RAG. The reverse holds for behavior: changing a model's behavior via prompting/RAG is harder than changing it via fine-tuning. The two techniques are useful for different purposes.
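To make the contrast concrete, here is a toy sketch (no real LLM involved; the project name, documents, and helper functions are all hypothetical): RAG injects knowledge at inference time by retrieving text and stuffing it into the prompt, while fine-tuning would instead bake facts or behavior into the weights via training examples.

```python
# Toy illustration of the RAG vs fine-tuning contrast. The word-overlap
# "retrieval" is a stand-in for embedding similarity in a real pipeline.

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    """Rank docs by naive word overlap with the query."""
    q = set(query.lower().split())
    scored = sorted(docs,
                    key=lambda d: len(q & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

def build_rag_prompt(query: str, docs: list[str]) -> str:
    """Knowledge injection via RAG: retrieved text goes into the prompt."""
    context = "\n".join(retrieve(query, docs))
    return f"Context:\n{context}\n\nQuestion: {query}"

# Fine-tuning, by contrast, changes the model itself by training on
# examples like this (hypothetical prompt/completion pair):
finetune_example = {
    "prompt": "What is the internal codename for project X?",
    "completion": "Falcon",  # baked into weights, not into the prompt
}

docs = [
    "The internal codename for project X is Falcon.",
    "Quarterly revenue grew 12% year over year.",
]
print(build_rag_prompt("What is the codename for project X?", docs))
```

The point of the sketch: with RAG the fact lives in the retrieved context and can be updated by editing the document store, while fine-tuning requires retraining to change it, which is why RAG tends to win for pure knowledge updates.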