Comment by wrs

1 day ago

(1) and (2) are correct (well, I don’t know the specifics of Everlaw). Fine tuning is something different: you incrementally train the model itself on additional examples, so that given the same input context it produces better output for your use case.

To be more precise, you seldom directly continue training the full model, because it’s much cheaper and easier to attach a small set of extra trainable weights to the big model and train only those (see LoRA or PEFT).
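
To make that concrete, here’s a minimal sketch of what LoRA-style fine tuning looks like with Hugging Face’s transformers + peft libraries. The base model ("gpt2") and the LoRA hyperparameters are just placeholders, not a recommendation:

```python
# Minimal LoRA sketch: wrap a frozen base model with small trainable adapters.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("gpt2")   # stand-in base model
tokenizer = AutoTokenizer.from_pretrained("gpt2")

lora_config = LoraConfig(
    r=8,                         # rank of the low-rank update matrices
    lora_alpha=16,               # scaling factor for the update
    target_modules=["c_attn"],   # which weights get adapters (GPT-2 attention)
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, lora_config)
model.print_trainable_parameters()   # only the adapter weights are trainable
# ...then train as usual (Trainer or a custom loop) on your domain data.
```

The base weights stay frozen; you only train (and ship) the tiny adapter, which is why this is so much cheaper than continuing to train the whole model.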

Something like Everlaw might do all three, by fine tuning a model to do better at discovery retrieval, then building a RAG system on top of that.
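
And the RAG layer on top is conceptually just retrieve-then-prompt. A rough sketch, where embed() is a hypothetical stand-in for whatever embedding model you use and the final prompt would go to the fine tuned model:

```python
# Minimal RAG sketch: embed documents, retrieve the closest ones, build a prompt.
import numpy as np

documents = [
    "Deposition of J. Smith, 2021-03-04 ...",
    "Email thread re: contract amendment ...",
    "Board meeting minutes, Q2 2020 ...",
]

def embed(text: str) -> np.ndarray:
    """Hypothetical embedding call; in practice an embedding model or API."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.standard_normal(384)
    return v / np.linalg.norm(v)

doc_vectors = np.stack([embed(d) for d in documents])

def retrieve(query: str, k: int = 2) -> list[str]:
    q = embed(query)
    scores = doc_vectors @ q              # cosine similarity (unit-norm vectors)
    top = np.argsort(scores)[::-1][:k]
    return [documents[i] for i in top]

def answer(query: str) -> str:
    context = "\n\n".join(retrieve(query))
    prompt = f"Use the context to answer.\n\nContext:\n{context}\n\nQuestion: {query}"
    return prompt   # real system: send this to the fine tuned model's generate()

print(answer("What changed in the contract amendment?"))
```

The fine tuning makes the model better at the domain; the RAG part makes sure the specific documents it needs are in the context window.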