Comment by danieldoesbio
5 hours ago
Genuine question about the claim that open-weight models find the same vulnerabilities as Mythos: is it just a matter of false negatives/positives? I've seen a few cases where people show other models (even Opus) can find the same vulnerabilities given many passes. Is there some disadvantage to the extra passes that gives the claimed Mythos performance extra value (assuming it finds them in fewer)?
The thing is, Mythos found those with multiple passes, thousands of passes... So given thousands of passes, or perhaps the same budget, yes, cheaper open-weight models could potentially find (and have found) the same or similar vulnerabilities.
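To make the budget-matched comparison concrete, here's a rough sketch. All the per-pass hit rates and prices below are made-up illustrative numbers, not benchmarks of any real model: the point is just that if a single pass finds a given vulnerability with probability p, then k independent passes find it with probability 1 - (1 - p)^k, so a cheaper model with a worse per-pass rate can still match or beat a pricier one at the same dollar budget.

```python
# Sketch: budget-matched comparison of an expensive model vs a cheap one.
# All numbers are illustrative assumptions, not measurements.

def p_detect(p_single: float, passes: int) -> float:
    """Probability that at least one of `passes` independent runs finds the bug."""
    return 1.0 - (1.0 - p_single) ** passes

budget = 100.0  # dollars to spend on one target
models = {
    "expensive": {"p": 0.02,  "cost_per_pass": 1.00},  # assumed hit rate / price
    "cheap":     {"p": 0.002, "cost_per_pass": 0.02},
}

for name, m in models.items():
    k = int(budget / m["cost_per_pass"])  # passes affordable at this budget
    print(f"{name}: {k} passes, P(detect) = {p_detect(m['p'], k):.3f}")
```

Under these assumed numbers the cheap model gets 5000 passes to the expensive model's 100 and comes out ahead, which is the "same budget, same findings" argument in a nutshell. The caveat is that passes aren't truly independent in practice, so this is an upper bound on the benefit.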
Mythos screams marketing hype, and nothing more. Opus 4.7 isn't really a meaningful upgrade in any sense, other than being more expensive.
Once you see what something like Qwen3.6-35B-A3B can do... with just a FRACTION of the size of the larger models, you'll understand that the future is open-weight models you can run yourself.
Same goes for companies: bringing inference onsite isn't hard, and I'm actively building tooling to orchestrate it.
What is the failure state for a pass that doesn't find a real vulnerability? Do the models report no issues, or do they hallucinate issues that aren't real? I've been running open-weight local models and finding them really impressive... I'm just also trying to understand the cybersecurity side of all this.
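One common answer to the hallucination side of that question is to filter findings by agreement across passes: real vulnerabilities tend to recur in independent runs, while hallucinated ones are usually one-offs. A minimal sketch of that idea, where the finding strings and the threshold are hypothetical placeholders rather than any real tool's output:

```python
# Sketch: filter multi-pass scanner output by cross-pass agreement.
# Finding strings and the threshold are illustrative assumptions.
from collections import Counter

def consensus_findings(passes: list[list[str]], min_fraction: float = 0.2) -> set[str]:
    """Keep findings reported in at least `min_fraction` of the passes."""
    # Dedup within each pass so one noisy pass can't inflate a finding's count.
    counts = Counter(f for p in passes for f in set(p))
    cutoff = max(1, int(min_fraction * len(passes)))
    return {f for f, c in counts.items() if c >= cutoff}

# Toy example: 10 passes over the same target.
passes = (
    [["integer overflow in parse()"]] * 3   # recurs: likely real
    + [["SQL injection in logger"]]         # one-off: likely hallucinated
    + [[]] * 6                              # passes that report nothing
)
print(consensus_findings(passes))  # keeps only the recurring finding
```

This obviously depends on matching findings across passes (real reports are free-text, so you'd need some normalization or dedup step), but it illustrates why "more passes" and "false positives" are linked: the extra passes are what let you tell signal from hallucination.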