Comment by homeonthemtn

13 days ago

That's interesting. Is there any kind of mapping to these respective models somewhere?

Yes, I included a 'Model Selection Cheat Sheet' in the README (scroll down a bit).

I map them by task type:

• Tiny (<3B): Gemma 3 1B (could try 4B as well), Phi-4-mini (good for classification)
• Small (8B-17B): Qwen 3 8B, Llama 4 Scout (good for RAG/extraction)
• Frontier: GPT-5, Llama 4 Maverick, GLM, Kimi

Is that what you meant?
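For illustration, the cheat-sheet mapping could be sketched as a simple routing table. The model IDs and task keys below are placeholders based on the list above, not the README's actual config:

```python
# Hypothetical routing table: task type -> candidate models by tier.
# Names are illustrative stand-ins, not real API identifiers.
MODEL_TIERS = {
    "classification": ["gemma-3-1b", "phi-4-mini"],             # Tiny (<3B)
    "rag": ["qwen-3-8b", "llama-4-scout"],                      # Small (8B-17B)
    "extraction": ["qwen-3-8b", "llama-4-scout"],
    "reasoning": ["gpt-5", "llama-4-maverick", "glm", "kimi"],  # Frontier
}

def pick_model(task_type: str) -> str:
    """Return the first candidate for a task type; unknown tasks fall back to frontier."""
    return MODEL_TIERS.get(task_type, MODEL_TIERS["reasoning"])[0]

print(pick_model("classification"))  # gemma-3-1b
print(pick_model("summarization"))   # gpt-5 (fallback)
```

The fallback-to-frontier default is one possible design choice: it trades cost for safety when the task type is unrecognized.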

  • At the risk of stating the obvious: do you have a tiny LLM gating this decision, i.e. classifying each task and directing it to the appropriate model?
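The gating idea in that question could look something like the sketch below. The keyword matcher is a toy stand-in for an actual tiny-LLM classifier call, and all names here are hypothetical:

```python
# Hypothetical gate: a cheap classifier picks the task type, which then
# routes to a model tier. A real version would replace classify_task with
# a call to a tiny model (e.g. the <3B tier from the cheat sheet).
TIER_FOR_TASK = {
    "classification": "tiny",
    "extraction": "small",
    "rag": "small",
}

def classify_task(prompt: str) -> str:
    """Toy keyword classifier standing in for a tiny-LLM gate (assumption)."""
    p = prompt.lower()
    if "label" in p or "categorize" in p:
        return "classification"
    if "extract" in p or "document" in p:
        return "extraction"
    return "open-ended"

def route(prompt: str) -> str:
    """Map a prompt to a model tier; unknown task types go to frontier."""
    return TIER_FOR_TASK.get(classify_task(prompt), "frontier")

print(route("Categorize this ticket"))  # tiny
print(route("Write a poem"))            # frontier
```

The appeal of this pattern is that the gate itself is cheap, so the classification step adds little latency or cost relative to the frontier calls it avoids.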