Comment by adastra22
6 months ago
I’m not sure I understand what this comment is responding to. Wouldn’t a distilled DeepSeek still use the same tokenizer? I’m not claiming they are using Llama in their backend. I’m just saying they are likely using a lower-parameter model too.
The small models that have been published as part of the DeepSeek release are not a "distilled DeepSeek"; they're fine-tuned varieties of Llama and Qwen. DeepSeek may have smaller models internally that are not Llama- or Qwen-based, but if so, they haven't released them.
Thank you. I’m still learning, as I’m sure everyone else is, and that’s a distinction I wasn’t aware of. (I assumed “distilled” meant compression to a smaller parameter count, not the use of a different base model in its construction.)
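For anyone else untangling the terminology: distillation means training a student model to imitate a teacher's outputs, and the student can be any architecture or size, which is how a "distilled DeepSeek" can be a Llama or Qwen model underneath. Below is a minimal sketch of the classic logit-matching form of distillation, using stand-in linear layers for the teacher and student (the actual DeepSeek-R1 distill models are described as supervised fine-tuning on R1-generated text, a different flavor of the same idea):

```python
import torch
import torch.nn.functional as F

# Minimal knowledge-distillation sketch. The teacher and student here are
# stand-in linear layers purely for illustration; note the student's
# architecture is independent of the teacher's.
vocab_size, hidden = 32, 16
teacher = torch.nn.Linear(hidden, vocab_size)  # "large" model, frozen
student = torch.nn.Linear(hidden, vocab_size)  # "small" model, trained
optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)
temperature = 2.0  # softens the distributions being matched

for step in range(100):
    x = torch.randn(8, hidden)  # stand-in input features
    with torch.no_grad():
        teacher_logits = teacher(x)
    student_logits = student(x)
    # The student is trained to match the teacher's output distribution,
    # not ground-truth labels -- that imitation is what "distilled" means.
    loss = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature**2
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

Quantization or pruning, by contrast, shrinks the same model in place, which is closer to the "compressed parameter size" reading above.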