Comment by butz

8 hours ago

Are there any "optimized" models that have lower hardware requirements and are specialised in a single programming language, e.g. C#?

LLMs need diverse and extensive training data to be good at a specific thing. We don't (yet?) know how to train a small model that is really good at one programming language. Just big models that are good at a variety of languages (plus lots of other things).

Sort of - there's Qwen3-Coder and the Codestral family, but those are still multi-language, just code-focused. For truly single-language specialization, the practical path is fine-tuning an existing base model on a narrow distribution rather than training from scratch.

The issue with C# specifically is dataset availability. Open source C# code on GitHub is a fraction of Python/JS, and Microsoft hasn't released a public corpus the way Meta has for their code models. You'd probably get further fine-tuning Qwen3-Coder (or a similar base) on your specific codebase with LoRA than waiting for a dedicated C#-only model to appear.
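To make the LoRA suggestion concrete: the appeal is that you only train a small low-rank delta on top of frozen base weights. A rough back-of-the-envelope sketch (hypothetical dimensions, not tied to any specific model) of the trainable-parameter savings for a single weight matrix:

```python
# Minimal sketch of the LoRA parameter-count argument: instead of updating a
# full d_out x d_in weight matrix W, LoRA trains two small matrices
# B (d_out x r) and A (r x d_in) and uses W + B @ A at inference time.

def lora_params(d_out: int, d_in: int, rank: int) -> tuple[int, int]:
    """Return (full fine-tune params, LoRA params) for one weight matrix."""
    full = d_out * d_in
    lora = rank * (d_out + d_in)
    return full, lora

# e.g. a 4096x4096 projection with a rank-8 adapter (illustrative numbers)
full, lora = lora_params(4096, 4096, 8)
print(full, lora)  # 16777216 vs 65536, i.e. ~256x fewer trainable params
```

That ratio is why adapting an existing code model to your codebase is feasible on hardware that could never full-fine-tune it.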

  • C#-specific issues notwithstanding, it's not an inherently bad idea to train small models on only specific languages, e.g. a JS/Python-only model with declarative languages like HTML, CSS, YAML, JSON, graph formats, etc. thrown in. That could well be more efficient for local use.

    Fine-tuning with LoRA on the org's codebase would make it more useful.
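    Mechanically, a LoRA adapter is just a low-rank additive delta on a frozen weight, which is what makes per-org adaptation cheap. A toy NumPy sketch (hypothetical dimensions, no real model involved) of the forward pass and the usual zero-init of the up-projection:

    ```python
    import numpy as np

    # Toy dimensions, just to show the mechanics of a LoRA forward pass.
    d_in, d_out, rank = 64, 32, 4
    rng = np.random.default_rng(0)

    W = rng.normal(size=(d_out, d_in))        # frozen base weight
    A = rng.normal(size=(rank, d_in)) * 0.01  # trainable down-projection
    B = np.zeros((d_out, rank))               # trainable up-projection, zero-init

    x = rng.normal(size=(8, d_in))            # a batch of activations

    base = x @ W.T
    adapted = x @ W.T + x @ A.T @ B.T         # LoRA adds a low-rank delta

    # With B zero-initialized, the adapted model starts out identical to the
    # base, so fine-tuning departs smoothly from the pretrained behaviour.
    print(np.allclose(base, adapted))  # True
    ```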