Comment by ddtaylor

1 month ago

You are probably triggering their knowledge distillation checks.

What would a knowledge distillation prompt even look like, and how could I make sure I don't accidentally fall into this trap?

  • My guess is that it's something that looks like the "teacher and student" setup. I know there were methods in the past that use the token distribution to "retrain" one model with another, kind of like automatic fine-tuning, but AFAIK those are for offline model usage since you need the full token distribution. There do appear to be similar methods for online-only models, though. A rough sketch of the offline version is below.
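
For the offline case, here's a minimal sketch of what the teacher/student loss usually looks like (assuming PyTorch; the names and temperature are just placeholders, and this is my guess at the setup, not whatever the provider's checks actually key on):

```python
# Minimal sketch of offline knowledge distillation: the student is trained
# to match the teacher's full token distribution, not just the argmax token.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    # Soften both distributions with a temperature, then push the student
    # toward the teacher's distribution via KL divergence.
    t = temperature
    student_log_probs = F.log_softmax(student_logits / t, dim=-1)
    teacher_probs = F.softmax(teacher_logits / t, dim=-1)
    # Scale by t^2 so the gradient magnitude stays comparable to a
    # hard-label cross-entropy term (the standard trick from the
    # original distillation paper).
    return F.kl_div(student_log_probs, teacher_probs,
                    reduction="batchmean") * (t * t)
```

The key point is that this needs `teacher_logits` over the whole vocabulary, which is exactly what an API that only returns sampled text (or a few top logprobs) doesn't give you; that's why the online-only variants have to approximate it.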