44za12 2 days ago:
This is the way. I actually mapped out the decision tree for this exact process and more here: https://github.com/NehmeAILabs/llm-sanity-checks
homeonthemtn 1 day ago:
That's interesting. Is there any kind of mapping to these respective models somewhere?
44za12 1 day ago:
Yes, I included a 'Model Selection Cheat Sheet' in the README (scroll down a bit).
I map them by task type:
Tiny (<3B): Gemma 3 1B (could try 4B as well), Phi-4-mini (good for classification)
Small (8B-17B): Qwen 3 8B, Llama 4 Scout (good for RAG/extraction)
Frontier: GPT-5, Llama 4 Maverick, GLM, Kimi
Is that what you meant?
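A minimal sketch of how the cheat sheet above could be encoded as a lookup table. The tier names and model lists come from the comment; the task-to-tier routing, the "good_for" labels on the frontier tier, and the function name are illustrative assumptions, not part of the linked repo.

```python
# Hypothetical encoding of the "Model Selection Cheat Sheet" from the thread.
# Tiers and model names are from the comment above; the routing logic is an
# assumption for illustration only.
CHEAT_SHEET = {
    "tiny": {          # <3B params
        "models": ["Gemma 3 1B", "Phi-4-mini"],
        "good_for": ["classification"],
    },
    "small": {         # 8B-17B params
        "models": ["Qwen 3 8B", "Llama 4 Scout"],
        "good_for": ["rag", "extraction"],
    },
    "frontier": {
        "models": ["GPT-5", "Llama 4 Maverick", "GLM", "Kimi"],
        "good_for": ["complex reasoning"],  # assumed catch-all tier
    },
}

def candidates_for(task: str) -> list[str]:
    """Return models from the smallest tier that lists the task, else frontier."""
    for tier in ("tiny", "small", "frontier"):
        if task in CHEAT_SHEET[tier]["good_for"]:
            return CHEAT_SHEET[tier]["models"]
    return CHEAT_SHEET["frontier"]["models"]

print(candidates_for("classification"))  # ['Gemma 3 1B', 'Phi-4-mini']
print(candidates_for("extraction"))      # ['Qwen 3 8B', 'Llama 4 Scout']
```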