Comment by linkregister
13 hours ago
Empirically, DS4 is hosting the DeepSeek v4 Flash model with good performance on home hardware. I'm curious how you came to this conclusion.
"Empirically", have you tested this yourself?
It's trivial to find reviews and benchmarks of DS4 online. Also, there are benchmarks in the article.
Here's one of the top hits: https://forums.developer.nvidia.com/t/fully-custom-cuda-nati...
Bizarre comment; sounds like "How do you know Porsches are fast? Did you drive one?"
Parent is simply pointing out the incorrect usage of "empirically", which should only be used when you've tested something yourself.
3 replies →
Are you comparing an LLM running on a laptop to a Porsche?
I just find it really funny people are willing to write things like "empirically speaking, X is obvious" without actually testing it themselves.
I've seen mixed reviews, and the most honest sounding ones have said it has latency issues.
I don't really care that much what the average LLM power user says at this point; they're impressed by anything an LLM does. They're like toddlers entertained by the sound their Velcro shoes make.
You LLM people are going to be like my mom: once she got a Maps app she completely gave up on navigating anywhere with her own brain, and is lost without a phone.
Except for you LLM people, it's going to be reading, writing, problem solving, and thinking in general. You'll be completely reliant on an LLM to get anything done. Have fun with that. You're cooked, bro.
1 reply →