
Comment by lovelearning

7 days ago

OpenAI makes statements like: [1]

1) "excel at a particular task"

2) "train on proprietary or sensitive data"

3) "Complex domain-specific tasks that require advanced reasoning", "Medical diagnosis based on history and diagnostic guidelines", "Determining relevant passages from legal case law"

4) "The general idea of fine-tuning is much like training a human in a particular subject, where you come up with the curriculum, then teach and test until the student excels."

Don't all these effectively inject new knowledge? It may happen through the simultaneous destruction of some existing knowledge, but that isn't obvious to non-technical people.

OpenAI's analogy of training a human in a particular subject until they excel even arguably excludes the possibility of destruction because we don't generally destroy existing knowledge in our minds to learn new things (but some of us may forget the older knowledge over time).

I'm a dev with a hand-waving level of proficiency. I have fine-tuned self-hosted small LLMs using PyTorch. My perception of fine-tuning is that it fundamentally adds new knowledge. To what extent that involves destruction of existing knowledge has remained a bit vague.

My hand-waving solution, if anyone pointed out that problem, would be to 1) say that my fine-tuning data will include some of the foundational knowledge of the target subject to compensate for its destruction, and 2) use a gold-standard set of responses to verify the model after fine-tuning.
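To sketch what I mean by 2): run the same gold-standard prompts through the base and the fine-tuned checkpoints and compare the scores. Everything below is a placeholder (the model paths, the JSONL file, the crude word-overlap metric), just to show the shape of the check:

```python
import json

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def generate(tok, model, prompt, max_new_tokens=128):
    inputs = tok(prompt, return_tensors="pt")
    with torch.no_grad():
        out = model.generate(**inputs, max_new_tokens=max_new_tokens, do_sample=False)
    # Decode only the newly generated tokens, not the prompt.
    return tok.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)

def overlap(answer, reference):
    # Crude word-overlap score; an exact-match check or human review also works.
    a, r = set(answer.lower().split()), set(reference.lower().split())
    return len(a & r) / max(len(r), 1)

# gold_standard.jsonl: one {"prompt": ..., "reference": ...} object per line,
# covering both the new subject AND the foundational knowledge I'd worry about losing.
with open("gold_standard.jsonl") as f:
    gold = [json.loads(line) for line in f]

for name, path in [("base", "my-base-model"), ("fine-tuned", "my-finetuned-model")]:
    tok = AutoTokenizer.from_pretrained(path)
    model = AutoModelForCausalLM.from_pretrained(path)
    model.eval()
    scores = [overlap(generate(tok, model, ex["prompt"]), ex["reference"]) for ex in gold]
    print(f"{name}: mean score {sum(scores) / len(scores):.3f}")
```

If the fine-tuned model's score drops on the foundational-knowledge prompts, the destruction stops being vague and becomes visible.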

I for one found the article quite valuable for pointing out the problem and suggesting better approaches.

[1]: https://platform.openai.com/docs/guides/fine-tuning

> Don't all these effectively inject new knowledge?

If you mean new knowledge in the sense of "improved weights for a particular task", I guess yes, but the issue with "new knowledge" is about learning something you didn't know in the first place, rather than being able to more accurately arrive at a conclusion.

1. "excel at a particular task" -> no. A lot of what gets in the way of excelling at a particular task is extraneous knowledge that leads to "thinking" about things that are not relevant about the task. If the job is "hot dog or not hot dog", knowing about the endless "hot dog is a sandwich" debate or the people with hot dog fingers in Everything, Everywhere, All At Once tends to just gets in the way of doing the job as accurately and efficiently as possible.

2. "train on proprietary or sensitive data" -> no. Training on proprietary or sensitive data might not give you any new knowledge, but it may allow for much more refined weights to drive probabilistic decisions. So, if I train on a model with thousands of examples of X-rays of potential cancer patients, it doesn't learn new ideas, but it does learn better weights for determining if it is seeing a tumour.

3. "Complex domain-specific tasks that require advanced reasoning" "Medical diagnosis based on history and diagnostic guidelines" "Determining relevant passages from legal case law"

If you fine-tune an engine to identify species of animals, ones that it is already aware of, you can produce a model that knows with high confidence that "jaguar" is a kind of cat, and not a car or a sports team. It has this high confidence because, after lots of examples, it has learned that knowing there's a car or a sports team with that name just gets in the way of making good judgments.
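That kind of confidence is something you can actually measure: compare how much probability a base checkpoint and a fine-tuned checkpoint put on each reading of the ambiguous word. A rough sketch (the model paths are placeholders, and the tokenization at the prompt/continuation boundary is only approximate):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def continuation_prob(tok, model, prompt, continuation):
    """Probability the model assigns to `continuation` right after `prompt`."""
    full = tok(prompt + continuation, return_tensors="pt").input_ids
    prompt_len = tok(prompt, return_tensors="pt").input_ids.shape[1]
    with torch.no_grad():
        logits = model(full).logits
    # Log-probability of each actual next token given everything before it.
    log_probs = torch.log_softmax(logits[0, :-1], dim=-1)
    targets = full[0, 1:]
    per_token = log_probs[torch.arange(targets.shape[0]), targets]
    # Keep only the continuation tokens and turn the sum back into a probability.
    return per_token[prompt_len - 1:].sum().exp().item()

# Placeholder paths: swap in whatever base / fine-tuned checkpoints you have.
for path in ["my-base-model", "my-species-finetuned-model"]:
    tok = AutoTokenizer.from_pretrained(path)
    model = AutoModelForCausalLM.from_pretrained(path)
    model.eval()
    for word in [" cat", " car"]:
        p = continuation_prob(tok, model, "A jaguar is a kind of", word)
        print(f"{path}: P({word.strip()!r}) = {p:.4f}")
```

If the fine-tuning worked as described, the probability mass shifts heavily toward the cat reading, which is exactly the "extraneous knowledge getting out of the way" effect.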

"OpenAI's analogy of training a human in a particular subject until they excel even arguably excludes the possibility of destruction because we don't generally destroy existing knowledge in our minds to learn new things (but some of us may forget the older knowledge over time)."

That is a pretty broad statement about the workings of the human mind. We absolutely do lose neural pathways as our brains learn. I can't remember most of what I learned when I was 2 years old.