pama · 6 hours ago
Has anyone tested it at home yet and wants to share early impressions?

    lreeves · 6 hours ago
    I've been kicking the tires for about 40 minutes since it downloaded, and it seems excellent at general tasks, image comprehension, and coding/tool calling (using vLLM to serve it). I think it squeaks past Gemma4, but it's hard to tell yet.

        alfonsodev · 6 hours ago
        Good to hear! Do you mind sharing your setup and tokens/second performance?

            lreeves · 5 hours ago
            I'm running the unquantized base model on 2x A6000s (Ampere generation, 48 GB each). It runs at about 25 tokens/second.
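For anyone wanting to try a similar setup, here is a minimal sketch of serving a model across two GPUs with vLLM's OpenAI-compatible server. The model name below is a placeholder (lreeves did not name the exact checkpoint), and the flags shown are one plausible configuration, not necessarily the one used in the thread:

```shell
# Placeholder model name -- substitute the actual checkpoint you downloaded.
# --tensor-parallel-size 2 shards the model across both GPUs,
# which is how a model too large for one 48 GB card can be served unquantized.
vllm serve some-org/some-model \
    --tensor-parallel-size 2 \
    --dtype bfloat16
```

By default vLLM then exposes an OpenAI-compatible API on port 8000, so any OpenAI client pointed at http://localhost:8000/v1 can send chat or completion requests to it.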