Nice chain and write-up. I don't know that I would call eval() on user input, hard-coded secrets, and leaked credentials small or harmless. All of those are scary on their own.
Yeah...and the fact that they evidently had no responsible disclosure process and ghosted the reporter...for a security product?!
Big yikes.
> ...for a security product
I find a lot of 'security products' put their own security at the bottom of their list of concerns.
They may actually start with that focus, but then get bought up by VCs who turn everything into a profit center.
This writeup is great, particularly the discussion of how Mehmet worked through understanding the system.
That said, Logpoint sells a SIEM product without a vulnerability intake process and can't manage to rapidly patch pre-auth RCE holes. There's nothing to say besides: Logpoint are not serious people and nobody should use their nonsense. Given the number of bugs found and how close to the surface they were, security wasn't even an afterthought; it was not thought about at all.
>"This wasn’t a fully transparent codebase, though. Like many production appliances, a large portion of the Python logic was shipped only as compiled .pyc files."
Observation: one of the great virtues of Python is that, typically, when someone runs a Python program they are running the interpreter on transparent Python source files -- which means the person running the program usually has its entire source code!
But that's not the case here!
.pyc -- 'pre-compiled', 'pre-digested', in practice 'obfuscated' Python bytecode, and one of the roots of this problem -- is both a blessing and a curse!
It's a blessing because it lets the compilation/'digestion' work be cached, so modules load faster -- a very definite blessing!
But it's also (equal-and-oppositely!) -- a curse!
It's a curse because, as the author of this article noted above, it allows Python codebases to be shipped obfuscated -- in whole or in part!
Of course, the same is true of any compiled language -- with C, for example, one typically runs the compiled binaries, and compiled binaries are non-transparent by nature. So this is nothing new!
Now, I am not a Python expert. But maybe there's an interpreter switch that says 'do not run any pre-compiled .pyc files that ship without their source, and abort with an error message if any are encountered'.
I know there's a Python switch (-B, or the PYTHONDONTWRITEBYTECODE environment variable) to prevent compiling source into .pyc caches. Of course, the problem with that approach is that imports get slower since nothing is cached... and, as far as I can tell, it doesn't stop the interpreter from running .pyc files that already exist.
So, what's the solution? Well, pre-built (downloaded) .pyc files shipped without the corresponding Python source are more or less the equivalent of the "binary blobs" aka "binary black boxes" that ship with proprietary software.
Of course, some software publishers that do not believe in open-source / transparent software might argue that such binary blobs protect their intellectual property... and if a huge amount of capital investment was needed to produce a piece of software, such arguments are not entirely wrong...
Getting back to Python (or, more broadly, any interpreted language with the same pre-compilation/obfuscation capability): what I'd love to see is a runtime switch -- call it '-t' or '--transparent' or something -- where, if it is passed to the interpreter, then the moment a .pyc (or whatever that language calls its "pre-tokenized", "pre-parsed", "pre-compiled" format) is encountered without its source, execution stops and an error is reported to the end user, pointing at the exact file and line number in the last good source file that imported it.
This would allow easy discovery of such "black box" / "binary blob" non-transparent dependencies!
(Note to Future Self: Put this feature in any future interpreters you write... :-) )
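(In the meantime, a rough userland approximation of that switch is possible today with an import hook. A minimal sketch, assuming stock CPython -- the file name and the finder class are my own inventions, not a real interpreter flag:)

    # refuse_sourceless.py -- sketch of the "--transparent" idea as a user-level
    # import hook. Load it early (e.g. from sitecustomize.py) and any import that
    # resolves to bytecode-without-source fails loudly instead of running silently.
    import sys
    from importlib.machinery import PathFinder, SourcelessFileLoader

    class RefuseSourcelessFinder:
        def find_spec(self, name, path=None, target=None):
            spec = PathFinder.find_spec(name, path, target)
            if spec is not None and isinstance(spec.loader, SourcelessFileLoader):
                raise ImportError(
                    f"transparency check: {name!r} is sourceless bytecode at {spec.origin}"
                )
            return None  # defer to the normal import machinery otherwise

    sys.meta_path.insert(0, RefuseSourcelessFinder())

It only covers the standard path-based loaders (bytecode inside zip archives or exotic custom loaders would slip past), but it would flag every sourceless module the moment something imports it.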
1) Routing (mis)config problem - the key to the remote exploit. This is something people should always double-check if they don't understand how it works.
2) Hard-coded secrets - just plain against best practice. Don't do this, _ever_; there's a reason secure enclaves exist, and not working them into your workflow is only excusable when you're stuck with black-box proprietary tools.
3) Hidden user - again against best practice, and it invites feature creep via permission creep. If you really need a privileged, hidden, remotely accessible account, at least restrict access and log _everything_.
4) SSRF - bad, but it should be isolated, so it's much less of an issue. Technically against best practice again, yet widely done in production.
5) Python eval() in production - no, no, no, no. Never, _ever_ do this; it's just asking for problems for anything tied to remote agents, unless the whole point of the tool is shell replication (a minimal illustration right after this list).
6) Static AES keys / blindly relying on encryption to indicate trusted origin - see bug 2; also, don't use encryption as origin verification if the client may do _bad_ things (second sketch, further below).
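On the eval() point, the usual illustration -- a generic sketch, not Logpoint's actual code path -- is how little it takes once an attacker-controlled string reaches eval(), and how something like ast.literal_eval at least limits the blast radius to plain data:

    # Generic sketch of why eval() on untrusted input is game over.
    import ast

    untrusted = '__import__("os").system("id")'   # a plausible attacker payload

    # eval(untrusted)              # would execute the shell command: instant RCE

    try:
        value = ast.literal_eval(untrusted)       # only Python literals are allowed
    except (ValueError, SyntaxError):
        value = None                              # anything that isn't plain data is refused

    print(value)  # None -- the payload never runs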
Parsing, that was... well... yeah, I can see why that turned into a mess. The main thing missing is a clear, high-level picture of the situation, versus a teardown of multiple bugs and a brain dump.
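And on the static-key point (bug 6), since it's the subtle one: a key baked into every install means "it decrypted/verified fine" only proves possession of a key anyone can extract from the shipped bytecode. A toy sketch, using Fernet from the third-party cryptography package purely as a stand-in -- none of these names come from the article:

    # Sketch: a shared static key makes "valid ciphertext" meaningless as an origin check.
    from cryptography.fernet import Fernet

    STATIC_KEY = Fernet.generate_key()   # imagine this constant baked into every appliance image

    def handle(token: bytes) -> str:
        # "Server" side: treats anything that decrypts/verifies as a trusted command.
        return Fernet(STATIC_KEY).decrypt(token).decode()

    # "Attacker" side: has pulled the same key out of a .pyc, so they can mint
    # perfectly valid tokens of their own. Decryption proves key possession, not origin.
    forged = Fernet(STATIC_KEY).encrypt(b'{"cmd": "run", "args": "id"}')
    print(handle(forged))   # happily accepted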
>if they don't understand how it works.
The problem, quite often, is that they think they know how it works. Dunning-Kruger effect and all.
[flagged]
Thanks, ChatGPT.
For fun (not because I'm trying to defend the capabilities of LLMs, but really just out of boredom), I tried to make this better with this prompt:
> make this sound less AI-generated and more like a proper hacker news comment with some SUBSTANCE: (insert the OP comment from above)
and Gemini 3 Flash generated this:
> This is exactly why CVSS scores can be misleading in practice. You see this constantly at Pwn2Own: it’s rarely one massive architectural failure, but a daisy-chain of three or four "low severity" quirks—an info leak here, a minor logic error there.
> To a product manager, these look like edge cases or wontfix tickets. But to an attacker, they aren't bugs; they’re primitives. You use the info leak to bypass ASLR, the logic error to groom the heap, and suddenly a nuisance crash becomes full RCE. We tend to model threats based on distinct barriers, ignoring that the complexity of the system guarantees these gadgets will eventually align.
It's fun when you play with it on your own, but it's really boring to read content others have generated (so I'm sorry for adding to the pile -- I just wanted to see whether the "HN style" was already baked into the LLMs and share the result: yes, it is).