Comment by alganet
2 months ago
Irrelevant. I actually started by describing the system, which was my first comment on the post.
I am not responsible for anyone who gets offended if I say that something is not as AI as it seems to be. It's obviously not a personal offense. If you took it as such, it's really not my problem.
Rejecting an argument from authority like "I'm an ex-Googler!" or "I'm a security engineer!" is also common sense. Maybe it works a lot and you folks are upset it's not working here, but that's just the way it is.
Nobody claimed to be "offended" by your technical skepticism. The idea is simpler: be kind to the people who are taking the time to engage with you, for your own sake, not ours.
Several people have written lengthy, detailed responses to your questions. They've provided technical context, domain experience, and specific explanations. Your response pattern has been to suggest they're being evasive, that they're trying to suppress your ideas, or that they're protecting commercial interests. And now "it's really not my problem" when someone asks you to be more courteous.
Technical forums work because people volunteer their time and expertise. When that gets met with dismissiveness and assumptions of bad faith, it breaks down. Whether your technical argument is right or wrong, the "not my problem" response to a simple request for courtesy says a lot about how you're approaching this conversation.
You're partly right that credentials alone don't prove anything. "I worked at Google" or "I'm a security engineer" shouldn't automatically win an argument.
But that's not what happened here. When tptacek mentioned his background, he also explained that static analysis tools have failed commercially for decades despite massive investment, and that LLM orchestration is the new variable. That's someone providing context for their technical claim.
You rejected the credentials and the explanation together, then labeled it all as "argument from authority." That's using a logic 101 concept to avoid engaging with what was actually said.
This part of your response is the most telling: "Maybe it works a lot and you folks are upset it's not working here."
You've decided that people disagreeing with you are trying to manipulate you with authority, and they're frustrated it's not landing. But there's a simpler explanation: they might just think you're wrong and are trying to explain why based on relevant experience.
Once you've framed every response as attempted manipulation rather than genuine disagreement, productive conversation becomes impossible. You're not really asking questions anymore. You're defending your initial position and treating help as opposition.
If you actually want to understand this rather than win the argument, try engaging with the core claim: SAST tools with human triage have been ineffective for 20+ years despite enormous investment. Now SAST with LLM orchestration appears to work. What does that tell you about what the LLM is contributing beyond simple filtering?
That's a real question that might lead somewhere interesting. It also acknowledges that people have spent their time trying to help you understand something, even when you've been prickly about it. "Not my problem" just shuts everything down. And yeah, in a volunteer discussion forum, that actually is your problem if you want people to keep engaging with you.
> Several people have written lengthy, detailed responses to your questions.
No, they haven't. Just read the thread.
> You're partly right that credentials alone don't prove anything.
I am totally right. Saying "believe me, I work on this" is lazy and a bad argument. There was simply no technical discussion to back that up.
> When tptacek mentioned his background, he also explained that static analysis tools have failed commercially for decades despite massive investment
I am not convinced that static analysis tools failed that hard. When I mentioned sanitizers, for example, he simply disappeared from the conversation and dropped the subject.
Also, suddenly, 22 bugs are found and there's a new holy grail in security analysis? You must understand that this is not enough.
> You've decided that people disagreeing with you are trying to manipulate you with authority
That's not a decision. An attempt to use credentials happened and it's there for anyone to see. It's blatant, I don't need to frame it.
> SAST tools with human triage have been ineffective for 20+ years despite enormous investment
I am not convinced that this is true. As I mentioned, sanitizers work really well at mitigating lots of security issues. They're an industry standard.
Also, I am not totally convinced that the LLM solution is that much better. It's fairly recent, only found a couple of bugs, and it still has much to prove before it becomes a valuable tool. Promising, but far from the holy grail you folks are implying it to be.
> if you want people to keep engaging with you
I want reasonable, no-nonsense people engaging with me. You seem to imply that your time is somehow more valuable than mine, that I somehow owe you something. That is simply not true.
On sanitizers: You brought them up earlier and the conversation moved on without really addressing them. That's a legitimate complaint. Sanitizers (AddressSanitizer, MemorySanitizer, etc.) are extremely effective at what they do, though strictly speaking they're runtime instrumentation rather than static analysis. Still, if the claim is "static analysis has been useless for 20 years," that's obviously too strong: plenty of tools have found real bugs and prevented real vulnerabilities.
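To make "what they do" concrete, here's a minimal sketch of the kind of bug a sanitizer catches at runtime. The flags are the standard clang/gcc ones; nothing here is specific to any particular vendor's tool:

```c
/* asan_demo.c: a heap overflow that AddressSanitizer catches.
 * Build: clang -fsanitize=address -g asan_demo.c -o asan_demo
 * Run:   ./asan_demo   (ASan aborts with a heap-buffer-overflow report)
 */
#include <stdlib.h>

int main(void) {
    char *buf = malloc(8);
    buf[8] = 'x';  /* one-past-the-end write, caught instantly at runtime */
    free(buf);
    return 0;
}
```

The tradeoff is that sanitizers only catch bugs your tests or fuzzers actually execute, which is exactly the gap static analysis is supposed to fill.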
On the evidence: These LLM-assisted tools are quite new. The curl results (22 potential issues) are interesting, but you're right that it's early days. Declaring this definitively transformative based on limited public evidence is probably premature.
But let's be clear about something else:
You've been consistently rude to people trying to engage with you. Multiple people wrote lengthy, substantive responses. You can scroll up and count the paragraphs. Saying "No, they haven't. Just read the thread" is one of the nicest ways you've engaged. Either you assume we're dishonest, or you genuinely can't recognize when someone's making an effort.
When someone asks you to be more courteous, "it's really not my problem" is a dick move. Nobody said you owe anyone deference. The ask was simpler: don't call good-faith engagement manipulation or suppression or ranking. That's basic forum etiquette, not hierarchy.
And this: "You seem to imply that your time somehow is more valuable than mine, that's me who owe you somehow." Nobody implied that. Several people spent time explaining things. You've spent time questioning them. That's symmetrical. What's asymmetrical is that you keep framing their explanations as evasion or authority-wielding while treating your skepticism as pure rationality. That's exhausting to deal with.
Fuck, man. I haven't gotten a paycheck in 2 years; your time is objectively worth more than mine. You make up reasons to infer we're actually saying "you're worthless," which then forces your interlocutor to point out that they couldn't have meant that, because they're objectively worse off than you on whatever metric you mind-read them comparing you on. Really sick behavior, even though I'm sure it's unintentional and you genuinely think you're being put down, like we're at the 5th-grade lunch table. Before this thread I'd never had to roll over, show my belly, and do the "I'm unemployed!!11!" thing just to get someone to stop being a dick. Here I've had to do it twice.
On the actual technical question:
The narrower claim (which might be more defensible) is that SAST tools generate enormous amounts of output that requires expert triage, and that triage step has been the bottleneck. Humans don't scale to it; it's tedious and expensive. If LLMs can effectively automate that triage—not find new classes of bugs, but filter and prioritize what existing analyzers already flag—that could be valuable even if the underlying analysis is traditional.
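Here's roughly what that loop could look like, as a sketch only: the analyzer invocation and ask_llm_is_real() are hypothetical placeholders, not any real tool's interface:

```c
/* triage.c: sketch of the "verbose analyzer -> LLM triage" pipeline.
 * Usage (hypothetical): some-analyzer --verbose src/ | ./triage
 */
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

/* Stub: in a real pipeline this would prompt a model with the finding
 * plus surrounding source and parse its verdict. Made up for illustration. */
static bool ask_llm_is_real(const char *finding) {
    (void)finding;
    return false;
}

int main(void) {
    char finding[4096];
    /* One analyzer finding per line in; only model-approved findings out.
     * Note this filters and prioritizes existing analyzer output;
     * it does not discover new classes of bugs. */
    while (fgets(finding, sizeof finding, stdin)) {
        finding[strcspn(finding, "\n")] = '\0';
        if (ask_llm_is_real(finding))
            printf("KEEP: %s\n", finding);
    }
    return 0;
}
```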
Your architectural model (verbose analyzer → LLM triage) might be basically correct. The disagreement may just be about how significant that triage step is. You think it's 1% of the value. Others think the triage bottleneck was the whole reason these tools didn't work at scale.
That's a real technical question worth discussing. But it requires assuming people disagree with you because they actually disagree, not because they're trying to bamboozle you with credentials.