Comment by elzbardico

5 days ago

Lots of prophets in every gold rush...

While the author makes some good points (along with some non-factual assertions), I wonder why he chose such a counterproductive, factually wrong clickbait title.

Fine-tuning (and LoRA IS fine-tuning) may not be cost-effective for most organizations when the goal is knowledge updates, but it excels at driving behavior in task-specific ways: alignment, enforcing structured output (usually far more reliably than prompting), and tool and function use. And when the knowledge is highly specific, niche, long-tail stuff, it can even make smaller models beat bigger ones, as MedGemma shows.
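
For anyone who hasn't actually run one, here's a minimal sketch of what a LoRA fine-tune for structured-output behavior looks like with Hugging Face transformers + peft. The base model name, hyperparameters, and the toy dataset are all illustrative assumptions on my part, not a production recipe:

```python
# Minimal LoRA fine-tuning sketch: teach a small causal LM to emit
# strict JSON for an extraction task. Assumes transformers, peft,
# and datasets are installed; model and hyperparameters are illustrative.
from datasets import Dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)
from peft import LoraConfig, get_peft_model

model_name = "meta-llama/Llama-3.2-1B"  # hypothetical base model choice

tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)

# LoRA freezes the base weights and trains small low-rank adapter
# matrices injected into the attention projections.
lora_config = LoraConfig(
    r=16,                                 # rank of the low-rank update
    lora_alpha=32,                        # scaling factor for the adapter
    target_modules=["q_proj", "v_proj"],  # module names vary by architecture
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of all params

# Toy dataset: prompt -> strict JSON, the structured-output behavior
# that fine-tuning tends to enforce more reliably than prompting.
examples = [
    {"text": 'Extract: "Bob, 42, Paris" -> {"name": "Bob", "age": 42, "city": "Paris"}'},
    {"text": 'Extract: "Ana, 29, Lima" -> {"name": "Ana", "age": 29, "city": "Lima"}'},
]
dataset = Dataset.from_list(examples).map(
    lambda ex: tokenizer(ex["text"], truncation=True, max_length=128),
    remove_columns=["text"],
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="lora-structured-output",
        per_device_train_batch_size=2,
        num_train_epochs=3,
        learning_rate=2e-4,
        logging_steps=1,
    ),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()

# Only the adapter weights get saved, so they can be merged into the
# base model later or swapped out per task.
model.save_pretrained("lora-structured-output-adapter")
```

The point of the sketch: the adapter is a tiny fraction of the model's parameters, which is exactly why this kind of behavioral fine-tune is cheap compared to full fine-tuning, and why you can keep one adapter per task against a single shared base model.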