Comment by roenxi
6 days ago
A man who burns his own house down may understand what he is doing and do it intentionally - but without any further information he still appears to be wasting his time and doing something stupid. There isn't any contradiction between something being a waste of time and people doing it on purpose - indeed, the point of the article is to get some people to change what they are purposefully doing.
He's proposing alternatives he thinks are superior, and he might well be right. I don't have a horse in this race, but LoRA seems like a more satisfying approach to getting a result than retraining the whole model, and giving LLMs tools seems to be proving more effective as well.
It’s possible I misinterpreted the gist of the article a bit - in my mind, nobody is doing fine-tuning these days without techniques like LoRA or DoRA. But people use these techniques because they are computationally efficient and convenient, not because they perform significantly better than full fine-tuning.
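For what it's worth, the efficiency argument is easy to see from the shapes involved. A minimal sketch of the LoRA idea (illustrative names only, not any particular library's API): the base weight matrix W stays frozen, and only a low-rank update B @ A is trained, so a rank-r adapter needs r * (d_in + d_out) parameters instead of d_in * d_out.

```python
# Minimal LoRA sketch in plain Python: effective weight = W + alpha * B @ A,
# where A is (r x d_in), B is (d_out x r), and r << min(d_in, d_out).
# All names here are illustrative assumptions, not a real library's API.

def matmul(X, Y):
    """Naive matrix multiply over lists-of-lists (fine for a toy example)."""
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*Y)]
            for row in X]

def lora_weight(W, A, B, alpha=1.0):
    """Frozen base weight W plus the scaled low-rank update B @ A."""
    BA = matmul(B, A)
    return [[w + alpha * d for w, d in zip(w_row, d_row)]
            for w_row, d_row in zip(W, BA)]

# 4x4 frozen base weight (identity), with a rank-1 update:
# 8 trainable numbers instead of 16.
W = [[1.0 if i == j else 0.0 for j in range(4)] for i in range(4)]
A = [[0.1, 0.2, 0.3, 0.4]]        # shape (1, 4): r = 1
B = [[1.0], [0.0], [0.0], [0.0]]  # shape (4, 1)

W_eff = lora_weight(W, A, B)
# Only the first row of W is perturbed; the rest stay frozen.
```

The point of the comment stands either way: the low-rank factorization is what makes the adapter cheap to train and store, not something that makes it more accurate than updating W directly.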