I was dealing with a good old anti-tampering userspace library last week. They did everything right.
The process detects that it's traced (by asking the kernel nicely) and shuts down. So I patched the kernel, and now I can connect with gdb and poke around.
I can't set a software breakpoint because the process computes a checksum of its memory and jumps through a table index computed from a hash, so I had to put a hardware read watchpoint on the modified memory location, record who reads it, and patch the jump index to the right one.
Of course, there is another function that checksums the memory and runs the process into a SIGSEGV. It has tons of confusing obfuscated stuff, so I had to patch it with a 'lol return 0'.
And then I could finally use Frida to disable SSL pinning and mitmproxy it. It all took a week to bypass every level of obfuscation, find the actual thing I was looking for, and extract it. I can't imagine how much time the people at $securitycompanyname spent adding all those levels of obfuscation and anti-debug. More than a week, for sure. What was it doing? A custom HOTP.
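For reference, standard HOTP (RFC 4226) is just HMAC-SHA1 plus truncation; a minimal Python sketch of the textbook algorithm, shown only for comparison (not the custom variant that library implements):

    import hmac, hashlib, struct

    def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
        # RFC 4226: HMAC-SHA1 over the big-endian counter, then dynamic truncation
        mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
        offset = mac[-1] & 0x0F
        code = (int.from_bytes(mac[offset:offset + 4], "big") & 0x7FFFFFFF) % (10 ** digits)
        return str(code).zfill(digits)

All the value is in keeping the shared secret and counter out of sight, which is presumably what all that obfuscation was guarding.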
It wasn't any better with actual secure boot 20 years ago, where the bootloader checksummed the whole firmware before transferring control: the bootloader itself was in ROM, it of course had subtle logic bugs, and you only needed to find one; that bootloader is there in ROM, bugged forever.
How many more amateur attempts did these layers thwart? Did its creators collect enough revenue before the crack was produced?
I suppose uncrackable software, in the sense of e.g. license protection, cannot exist. Software is completely beholden to hardware, and known hardware can be arbitrarily emulated, and there's nowhere to hide any tamper-resistant secret bits. Only in combination with locked-down, uncrackable hardware can properly designed software without critical bugs remain uncrackable; see stuff like YubiKeys. Similarly, communication can remain uncrackable as long as the secret bits (like a private key) remain secret.
I'm not actually cracking anything; the software is free to use. I just wanted to mitmproxy it to see the requests and figure out some custom crypto inside it.
How was your experience with Xbox? I heard it was rather watertight?
Why would I ever pay for anything microsoft made?
> All encryption is end-to-end, if you’re not picky about the ends.
This reminds me of how Apple iMessage is E2E encrypted, but Apple runs on-device content detection that pings their servers, which you can't even think of disabling. [1][2]
[1] https://sneak.berlin/20230115/macos-scans-your-local-files-n... [2] Investigation in Beeper/PyPush discord for iMessage spam blocking
What’s the concern here? The blog post you linked does not really support its claims with evidence.
They're actually two separate claims, one of which the blog post does support. The other is apparently supposed to be supported by some conversations on a Discord server.
The concern is obvious, though; not sure what's unclear about that: it's a bit pointless to have E2EE if the adversary has full access to one of the ends anyway.
[1] is supposedly debunked: https://pawisoon.medium.com/debunked-the-truth-about-mediaan...
> the network traffic sent and received by mediaanalysisd was found to be empty and appears to be a bug.
I say "supposedly debunked" because empty traffic doesn't mean there's nothing going on. It could just be a file deemed safe. But then the author said:
> The network call that raised concerns is a bug. Apple has since released macOS 13.2, which has fixed this issue, and the process no longer makes calls to Apple servers
The phrase "threat model gerrymandering" is fantastic, is fantastic, I will be using that a lot I think.
Definitely the word of the day for me.
> You need an integrated root-of-trust in your CPU in order to solve these.
Yes, quite. The BIOS/UEFI absolutely needs to store a public key of a primary key on the TPM, probably the EKpub itself for simplicity. Without that you will be vulnerable to an MITM attack, at least early in boot, and since the MITM could then fool you about the root of trust later on, as long as the MITM can commit to always being there, you cannot detect the attack.
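Conceptually it's just a pin check before you trust anything the TPM says; a toy sketch (the pinned value below is a placeholder, and actually reading the EKpub out of the TPM is hand-waved):

    import hashlib

    # SHA-256 of the EKpub we expect, baked into the firmware image at build time (placeholder value)
    PINNED_EKPUB_SHA256 = bytes.fromhex("aa" * 32)

    def tpm_is_ours(ekpub_der: bytes) -> bool:
        # Refuse to extend measurements into, or unseal from, a TPM whose EK doesn't match the pin;
        # otherwise an interposer can stand in for the TPM early in boot.
        return hashlib.sha256(ekpub_der).digest() == PINNED_EKPUB_SHA256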
I expected something about cryptographic keys hidden in a decoration somewhere (kind of like the LotR Gate of Moria), so the article was not quite what I expected. Although it is, in a sense.
The Gate of Moria inscription was plaintext. The first person to not try to interpret it as a riddle solved it.
> All encryption is end-to-end, if you’re not picky about the ends.
This is a great quote.
> Unexplainable security features are just marketing materials.
I feel this way about a lot of hardware-based security solutions like TPMs and TEEs. These are actually useful solutions that can help solve problems that we have (as evidenced by this article), but unfortunately these solutions tend to be poorly documented publicly. As a result, we rely on academics to do the work for us in order to learn how to better contextualize these solutions.
I find it surprising that IBM POWER9 had key imprints in 2017 (sic!!) and it's still nowhere to be found on contemporary CPUs...
POWER9 had quite a few neat things going on. I think it's unfortunate that it never became mainstream. The switch to closed source firmware in Power10 is also a downer.
> Active physical interposer adversaries are a very real part of legitimate threat models. You need an integrated root-of-trust in your CPU in order to solve these.
It's been almost 10 years since Microsoft, based on their Xbox experience, started saying "stop using discrete TPMs over the bus, they are impossible to secure, we need the TPM embedded in the CPU itself"
The TPM itself can actually be discrete, as long as you have a root of trust inside the CPU with a unique secret. Derive a secret from that unique secret and the hash of the initial boot code the CPU is running, like HMAC(UDS, hash(program)), and derive a public/private key pair from that. Now you can just do normal Diffie-Hellman to negotiate encryption keys with the TPM, and you're safe from any future interposers.
This matters because for some functionality you really want tamper-resistant persistent storage, for example "delete the disk encryption keys if I enter the wrong password 10 times". That's fairly easy to do on a TPM, which can be made on a process node that supports flash, vs a general-purpose CPU where that just isn't an option.
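A rough sketch of that derivation chain (this is the generic DICE-style idea rather than any particular vendor's scheme; the UDS and boot image are placeholders, X25519 stands in for whatever curve is actually used, and it assumes the Python cryptography package):

    import hmac, hashlib
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
    from cryptography.hazmat.primitives.kdf.hkdf import HKDF

    UDS = bytes(32)                                   # unique device secret fused into the CPU (placeholder)
    bootcode = b"...first-stage boot code image..."   # whatever the CPU measures before running it

    # CDI = HMAC(UDS, hash(program)): changes as soon as anyone changes the measured boot code
    cdi = hmac.new(UDS, hashlib.sha256(bootcode).digest(), hashlib.sha256).digest()

    # Deterministic device key pair derived from that secret
    device_priv = X25519PrivateKey.from_private_bytes(cdi)
    device_pub = device_priv.public_key()

    # Ordinary Diffie-Hellman against the TPM's share, then derive a bus-encryption key,
    # so an interposer on the bus only ever sees ciphertext
    tpm_priv = X25519PrivateKey.generate()            # stand-in for the TPM's side of the exchange
    shared = device_priv.exchange(tpm_priv.public_key())
    session_key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                       info=b"cpu-tpm bus encryption").derive(shared)

(The real protocols also authenticate the TPM's share, e.g. against its EK certificate, so an interposer can't just run the DH itself.)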
That's assuming you trust the CPU vendor not to have their own interposer.
If you don't trust the CPU vendor in your machine you have bigger problems.
Never stopped. But the last time it came up, it was officially sanctioned, so... :/
At least we got acknowledgement and not gaslighting this time: https://news.ycombinator.com/item?id=43249197
It's just a link to a blog post, like all the other links on HN. None are treated in any special way. Why would content that was on Lobsters first be treated differently?
Given yesterday's article on here about the issues with PGP, it looks like all software encryption short of a one-time pad is decorative.
I like the idea of a key being part of the CPU (comment below); does anyone know why Intel/ARM/AMD have not picked up this IBM feature?
The logic you're using here is: if PGP is unsafe, all cryptography must be unsafe too? No, that doesn't hold, at all.
Protecting secrets via hardware is always "decorative" in some sense; the question is just how much time and work it takes to extract them (and the probability of destroying the secrets/device in the process). (Outside of things like QKD.)
But for software systems under a software threat model, bug-free implementations are possible, in theory at least.
This is a reasonable take.
Perfect security isn't a thing. Hardware/software engineers are in the business of making compromise harder, with eyes wide open about "perfection".
Confidential Computing is evolving, and it's steadily gotten much more difficult to bypass the security properties.
I don't follow this - the software must necessarily run on some hardware, so while the software may be provably secure that doesn't help if an attacker can just pull key material off the bus?
What article?
In any case, I'm curious to hear your argument for how "PGP has some implementation problems" (unsurprising to most people that have dared to look at its internals even briefly) implies "all non-information-theoretic cryptography is futile".
Except 99% of one-time pad implementations fail at least one of these criteria:
* Using CSPRNGs instead of HWRNGs to generate the pads,
* Trying to make it usable by sharing short entropy, thereby reinventing stream ciphers,
* Sharing that short entropy over Diffie-Hellman or RSA,
* Failing to use unconditionally secure message authentication,
* Reusing pads,
* Forgetting to overwrite used pads,
* Failing to distribute pads out-of-band via sneakernet, dead drops, or QKD.
OTP is also usually the first time someone dabbles in writing cryptographic code, so the implementations are full of footguns.
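The core really is just XOR plus strict pad consumption; a minimal Python sketch that deliberately punts on the hard parts listed above (pad generation, authentication, and distribution):

    import os

    def make_pad(n: int) -> bytes:
        # os.urandom is a CSPRNG, which already fails the "HWRNG only" purist criterion;
        # a real pad would come from a hardware RNG and be distributed out-of-band.
        return os.urandom(n)

    def xor_with_pad(data: bytes, pad: bytes) -> tuple[bytes, bytes]:
        assert len(pad) >= len(data), "never stretch or reuse a pad"
        out = bytes(a ^ b for a, b in zip(data, pad))
        # hand back only the unused remainder; the consumed part must never be touched again
        return out, pad[len(data):]

Everything that makes OTP actually secure, i.e. generating, distributing, authenticating, and wiping those pads, lives outside those ten lines, which is exactly where the list above says implementations fail.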
What do you mean exactly? Both AMD and Intel have signed firmware, and both support hardware attestation, where they sign what they see with an AMD/Intel key and you can later check that signature. This is the basis of confidential VMs, where not even the machine's physical owner can tamper with the VM in an undetectable way.
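Schematically, the relying-party side boils down to checking a vendor signature over a measurement report; a heavily simplified sketch (real SEV-SNP/TDX reports have their own binary formats and certificate chains, and the key, report, and hash choice here are placeholders):

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import ec

    def report_is_genuine(vendor_pubkey: ec.EllipticCurvePublicKey,
                          report: bytes, signature: bytes) -> bool:
        # The CPU signs what it measured (firmware, initial VM state, config);
        # we check that signature against the vendor's published key / cert chain.
        try:
            vendor_pubkey.verify(signature, report, ec.ECDSA(hashes.SHA384()))
            return True
        except InvalidSignature:
            return False

If the signature checks out and the measurements in the report match what you expected to be running, the host can't have tampered with the VM without it showing up here.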
I have bad news on that front.
https://tee.fail/