Comment by mountainriver

5 days ago

I love how people say things like this with complete disregard for research.

Most LLM research involves fine-tuning models, and we do amazing things with it. R1 is a fine-tune, but I guess that's bad?

Our company adds knowledge with fine-tuning all the time. It's usually a matter of skill, not some fundamental limit. You need to either use LoRA or use a large batch size and mix the previous training data back in.
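The "mix the previous training data in" part is just replay: each batch combines new-domain examples with samples drawn from the original corpus so the model doesn't drift away from what it already knows. A minimal sketch of that batching (the function name, ratio, and data shapes are all hypothetical, not anyone's actual pipeline):

```python
import random

def mixed_batches(new_data, replay_data, batch_size=8, replay_frac=0.5, seed=0):
    """Yield batches that mix new examples with replayed old ones.

    replay_frac controls how much of each batch comes from the original
    training data -- the replay share that helps prevent catastrophic
    forgetting while the new knowledge is being learned.
    """
    rng = random.Random(seed)
    n_replay = int(batch_size * replay_frac)
    n_new = batch_size - n_replay
    for i in range(0, len(new_data), n_new):
        # take the next slice of new data, pad the batch with replayed samples
        batch = list(new_data[i:i + n_new]) + rng.sample(replay_data, n_replay)
        rng.shuffle(batch)  # avoid the model seeing a fixed new/old ordering
        yield batch
```

With `replay_frac=0.5`, half of every batch is old data; in practice you'd tune that ratio (and the batch size) against how much the base capabilities regress.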

All we are doing is forcing deep representations. This isn't a binary "fine-tuning good / fine-tuning bad"; it's a spectrum of how deep and robust you make the representations.