Comment by KolmogorovComp

3 months ago

a distilled version running on another model architecture does not count as using "DeepSeek". It counts as running a Llama:7B model fine-tuned on DeepSeek.

That’s splitting hairs. Most people use “running locally” to mean running a model on your own hardware, as opposed to through the providing company’s servers.

  • Except you're not running the model locally, you're running an entirely different model that is deceptively named.

    You can pretend it's R1, and if it works for your purpose that's fine, but it won't perform anywhere near the same as the real model, and any tests performed on it are not representative of the real model.

Pretty sure this is just a conflict between the layman and the academic-expert usage of the word.

For everyone who doesn’t build LLMs themselves, “running a Llama:7B model fine-tuned on DeepSeek” _is_ using DeepSeek, mostly on account of all the tools and files being named DeepSeek, and the tutorials aimed at casual users all being titled with equivalents of “How to use DeepSeek locally”.

  • > “running a Llama:7B model fine-tuned on DeepSeek” _is_ using DeepSeek mostly on account of all the tools and files being named

    Most people confuse mass and weight; that does not mean weight and mass are the same thing.

    • OK, but it seemed pretty obvious to me that the OP was using the common vernacular and not the hyper-specific definition.