Comment by stackghost
1 day ago
I think the point they're making is that "I, a seasoned network security and red-team-type person, could have done this in Wireshark without AI assistance" is neither surprising nor interesting.
That'd be like saying "I, an emergency room doctor, do not need AI assistance to interpret an EKG."
Consider that your expertise is atypical.
The specific point I was trying to make was along the lines of, "I, a seasoned network security and red-team-type person, could have done this in Wireshark without AI assistance. And yet, I’d probably lose a bet on a race against someone like me using an LLM."
Sure, but that is beside my original point. If somebody:
a) Has the knowledge to run tcpdump or similar from the command line
b) Has the ambition to document and publish their effort on the internet
c) Has the ability to identify and patch the target behaviors in code
I argue that, had they not run to an LLM, they likely would have solved this problem more efficiently, and would have learned more along the way. Forgive me for being so critical, but the LLM use here simply comes off as lazy. And not lazy in a good, efficiency-amplifying way, but lazy in a sloppy way. Ultimately this person achieved their goal, but this is a pattern I am seeing on a daily basis at this point, and I worry that heavy LLM users will see their skill sets stagnate and likely atrophy.
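For context, the no-LLM workflow described in (a) really is just a couple of commands. A minimal sketch (the interface name, port, and filenames are illustrative placeholders, not anything from the article under discussion):

```shell
# Capture traffic on a chosen interface into a pcap file.
# eth0 and port 443 are placeholders; capturing usually needs root.
sudo tcpdump -i eth0 -w capture.pcap 'tcp port 443'

# Inspect the saved capture offline: -r reads from the file,
# -nn disables hostname and port-name resolution for raw output.
tcpdump -nn -r capture.pcap
```

The same `capture.pcap` can then be opened in Wireshark for the kind of protocol-level analysis mentioned upthread.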
> I argue that, had they not run to an LLM, they likely would have solved this problem more efficiently
This is just expert blindness, and objectively, measurably wrong.
Oh come on, the fact that the author was able to pull this off is surely indicative of some expertise. If the story had started off with, "I asked the LLM how to capture network traffic," then yeah, what I said would not be applicable. But that's not how this was presented. tcpdump was used, profiling tools were mentioned, etc. It is not a stretch to expect somebody who develops networked applications knows a thing or two about protocol analysis.
>I argue that, had they not run to an LLM, they likely would have solved this problem more efficiently
Hard disagree. Asking an LLM is 1000% more efficient than reading docs, lots of which are poorly written and thus dense and time-consuming to wade through.
The problem is hallucinations. It's incredibly frustrating to have an LLM describe an API or piece of functionality that fulfills all requirements perfectly, only to find it was a hallucination. They are impressive sometimes though. Recently I had an issue with a regression in some of our test capabilities after a pivot to Microsoft Orleans. After trying everything I could think of, I asked Sonnet 4.5, and it came up with a solution to a problem I could not even find described on the internet, let alone solved. That was quite impressive, but I almost gave up on it because it hallucinated wildly before and after the workable solution.
The same stuff happens when summarizing documentation. In that regard, I would say that, at best, modern LLMs are only good for finding an entry point into the docs.