Comment by nneonneo
6 months ago
Good god did they hallucinate the segmentation fault and the resulting GDB trace too? Given that the diffs don’t even apply and the functions don’t even exist, I guess the answer is yes - in which case, this is truly a new low for AI slop bug reports.
The git commit hashes in the diff are interesting: 1a2b3c4..d4e5f6a
I think my wetware pattern-matching brain spots a pattern there.
Going a bit further, it seems like there's a grain of truth here: HTTP/2 has a stream priority dependency mechanism [1], and this report [2] from Imperva describes an actual Dependency Cycle DoS in the nghttp implementation.
Unfortunately that's where it seems to end... I'm not that familiar with QUIC and HTTP/2, but I think the closest it gets is that the GitHub repo exists and has a `class QuicConnection` [3]. Beyond that, the QUIC protocol layer doesn't have any concept of exchanging stream priorities [4], and HTTP/2 priorities are something the client sends, not the server? The PoC also mentions HTTP/3 and PRIORITY_UPDATE frames, but those are from the newer RFC 9218 [5] and lack the stream dependencies used in HTTP/2 PRIORITY frames.
I should learn more about HTTP/3!
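To make that last point concrete, here's a rough sketch of the two wire formats, going off my reading of RFC 7540 and RFC 9218 (hand-rolled byte packing for illustration, not aioquic's actual API): an HTTP/2 PRIORITY frame carries an explicit stream dependency plus a weight, which is exactly what a dependency cycle needs, while an HTTP/3 PRIORITY_UPDATE frame only carries a short priority field value like "u=3, i" with no dependency field at all.

```python
import struct

def h2_priority_frame(stream_id: int, depends_on: int, weight: int, exclusive: bool = False) -> bytes:
    """HTTP/2 PRIORITY frame (RFC 7540 section 6.3): E bit, stream dependency, weight."""
    payload = struct.pack(
        "!IB",
        (0x80000000 if exclusive else 0) | (depends_on & 0x7FFFFFFF),
        (weight - 1) & 0xFF,  # weight is encoded as value minus one (1..256)
    )
    # Frame header: 24-bit length, type=0x2 (PRIORITY), flags=0, R bit + 31-bit stream id
    header = struct.pack("!I", len(payload))[1:] + bytes([0x02, 0x00]) + struct.pack("!I", stream_id & 0x7FFFFFFF)
    return header + payload

def quic_varint(v: int) -> bytes:
    """QUIC variable-length integer (RFC 9000 section 16)."""
    if v < 1 << 6:
        return struct.pack("!B", v)
    if v < 1 << 14:
        return struct.pack("!H", v | 0x4000)
    if v < 1 << 30:
        return struct.pack("!I", v | 0x80000000)
    return struct.pack("!Q", v | 0xC000000000000000)

def h3_priority_update(stream_id: int, field_value: str = "u=3, i") -> bytes:
    """HTTP/3 PRIORITY_UPDATE frame for request streams (RFC 9218 section 7.2):
    just a prioritized element ID and an urgency/incremental string, no dependency."""
    payload = quic_varint(stream_id) + field_value.encode("ascii")
    return quic_varint(0xF0700) + quic_varint(len(payload)) + payload

# Stream 5 depends on stream 7 -- a dependency you could, in principle, make cyclic.
print(h2_priority_frame(stream_id=5, depends_on=7, weight=16).hex())
# No "depends on" field exists here at all, so there is no tree to put a cycle in.
print(h3_priority_update(stream_id=5).hex())
```

Point being: the whole dependency-cycle class of bug lives in that "depends on" field in the first frame, and RFC 9218 dropped the dependency tree entirely, so a PoC that claims to build a priority cycle out of PRIORITY_UPDATE frames doesn't really add up.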
[1] https://blog.cloudflare.com/adopting-a-new-approach-to-http-...
[2] https://www.imperva.com/docs/imperva_hii_http2.pdf
[3] https://github.com/aiortc/aioquic/blob/218f940467cf25d364890...
[4] https://datatracker.ietf.org/doc/html/rfc9000#name-stream-pr...
[5] https://www.rfc-editor.org/rfc/rfc9218.html#name-the-priorit...
Excellent catch! I had to go back and take a second look, because I completely missed that the first time.
This is a whole new problem open source projects will be facing: AI slop PRs and vulnerability reports, which will only be solved by using AI tools to filter through the unholy volume.
AI filtering AI that's submitted based on AI scouring the web for ways to make probably less money than it costs to run. The future looks like turning on computers and having them run at 100% GPU + CPU usage 100% of the time with zero clue what they're doing. What a future.
A real report would have a GDB trace that looks like that, so it isn't hard to create such a trace. Many of us could create a real-looking GDB trace just as well by hand; it would be tedious, boring, and pointless, but we could.
Oh, I'm fully aware an LLM can hallucinate a GDB trace just fine.
My complaint is: if you're trying to use an AI to help you find bugs, you'd sincerely hope that they would make *some* attempt to actually run the exploit. Having the LLM invent fake evidence that you have done so, when you haven't, is just evil, and should result in these people being kicked straight off H1 completely.
That means doing work. I can get an LLM to write up a bogus report in minutes and then take whatever value comes from it. Checking that the report is real would take time.