Comment by muzani
6 days ago
Fine-tuning was the best option at one point. Fine-tuned models are still a great option if you want an override (e.g. categorization or dialects), but they're not precise.
Changes that happened:
1. LLMs got a lot cheaper, but fine-tuning didn't. Fine-tuning was a way to cut down on prompts and make them zero-shot (i.e. not require in-prompt examples).
2. Context windows got bigger. Fine-tuning was great back when a model was only expected to respond with a sentence.
3. The two changes above made RAG viable (see the sketch after this list).
4. Training on released models got better, to the point where zero-shot prompts worked fine. Fine-tuning ends up overriding behaviors that were already scoring nearly full marks on benchmarks.
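To make point 3 concrete, here's a minimal sketch of the RAG pattern being described: retrieve relevant snippets and place them in the (now much larger) context window, instead of fine-tuning that knowledge into the weights. The word-overlap retriever and prompt format are illustrative stand-ins, not any particular library's API.

```python
def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    # Toy retriever: rank docs by naive word overlap with the query.
    # A real system would use an embedding model here.
    q_words = set(query.lower().split())
    ranked = sorted(
        docs,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return ranked[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    # Zero-shot prompt: retrieved context stands in for fine-tuned knowledge.
    context = "\n".join(f"- {d}" for d in retrieve(query, docs))
    return (
        "Use the context below to answer.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {query}\nAnswer:"
    )

docs = [
    "Invoices are categorized by vendor, then by expense type.",
    "Refunds must be approved by a regional manager.",
    "Expense reports are due on the first Monday of each month.",
]
print(build_prompt("How are invoices categorized?", docs))
```

The tradeoff the comment points at: this only works once prompts are cheap enough and context windows large enough to hold the retrieved text, which is exactly what points 1 and 2 changed.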