Comment by redman25

2 months ago

They can if they've been post-trained on what they know and don't know. The LLM can first be given questions to test its knowledge, and if the model returns a wrong answer, it can be given a new training example with an "I don't know" response.
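
Roughly, the loop would look something like this (just a sketch; `ask_model` and `qa_pairs` are placeholders, and a real pipeline would grade answers with something smarter than exact string match):

```python
def build_idk_dataset(qa_pairs, ask_model):
    """Probe the model with questions of known answer; turn each
    wrong answer into an "I don't know" fine-tuning example.

    qa_pairs: list of (question, reference_answer) tuples.
    ask_model: callable that returns the model's answer as a string.
    """
    training_examples = []
    for question, reference in qa_pairs:
        answer = ask_model(question)
        # Naive exact-match grading; in practice you'd want a more
        # robust correctness check (e.g. a grader model).
        if answer.strip().lower() != reference.strip().lower():
            training_examples.append(
                {"prompt": question, "completion": "I don't know."}
            )
    return training_examples
```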

Oh, that's a great idea: just do that for every question the LLM doesn't know the answer to!

That's... how many questions? Maybe if one model generates all possible questions, then...