In a multisig interaction there are 3 ways to get hacked:
- The multisig smart contract is owned
- The computer you're signing on is owned
- The hardware wallet (ledger, trezor) you're using is owned
The multisig contract in question here (Gnosis Safe) has proven to be incredibly robust, and hardware wallets are very difficult to attack, so the current weak point is the computer.
Cryptocurrency companies need to start solving this by moving to a more locked-down, dedicated machine for signing, as well as actually verifying what is shown on the tiny hardware wallet screen instead of blindly clicking "yes".
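For concreteness, that verification step could look like this toy Python sketch: each signer independently recomputes a digest of the proposed transaction on a separate machine and compares it, character by character, with what the hardware wallet displays. The field names are assumptions, and sha256 stands in for the keccak-256/EIP-712 hashing a real Safe uses, purely because it ships with the standard library.

```python
import hashlib
import json

def tx_digest(to: str, value_wei: int, data: str, nonce: int) -> str:
    """Deterministically serialize a proposed transaction and hash it.

    Real Gnosis Safe signing hashes an EIP-712 struct with keccak-256;
    sha256 is used here only because it is in the standard library.
    """
    payload = json.dumps(
        {"to": to.lower(), "value": value_wei, "data": data.lower(), "nonce": nonce},
        sort_keys=True,
    ).encode()
    return hashlib.sha256(payload).hexdigest()

# Each signer recomputes the digest on an independent machine and compares
# it against what the hardware wallet screen shows before approving.
digest = tx_digest("0xabc123", 10**18, "0x", 42)
```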
Why should it go online at all? $1.5 billion buys a lot of plane tickets to the same physical place, and how frequently do they need to be accessing the whole lump, anyway?
For that matter, I know signatures are long and human-unfriendly, but isn’t it on the order of a couple hundred bytes? Surely $1.5 billion buys transcribing the putative signature request into an isolated machine in a known state, validating/interpreting/displaying the request’s meaning on that offline machine, performing your signing there offline, copying down the result, and carrying the attestation to your secret conclave lair to combine with the others’ or whatever?
I think this shows that the best protection is to just send many small transactions, never a big one. Define some maximum tolerance for loss and then send that. This is the advantage of XRP: instant and very cheap transactions. You can just automate many small transactions. If something goes wrong, you can overhaul everything before losing all your money.
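A minimal sketch of that capped-chunk idea, with an illustrative $10M tolerance (all numbers are made up):

```python
def chunk_transfer(total: int, max_loss: int) -> list[int]:
    """Split a transfer into pieces no larger than the tolerated loss,
    so a single compromised transaction caps the damage at max_loss."""
    if max_loss <= 0:
        raise ValueError("max_loss must be positive")
    full, rest = divmod(total, max_loss)
    return [max_loss] * full + ([rest] if rest else [])

# $1.5B moved as 150 separate $10M transfers instead of one lump.
chunks = chunk_transfer(1_500_000_000, 10_000_000)
```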
> attackers stole approximately $1.5B from their multisig cold storage wallet. At this time, it appears the attackers compromised multiple signers’ devices, manipulated what signers saw in their wallet interface, and collected the required signatures while the signers believed they were conducting routine transactions.
If hackers can get remote access and 'manipulate what signers saw in their wallet interface' that doesn't sound like cold storage to me.
My understanding of "cold storage" was always that the keys are not accessible from the internet. They could be stored on paper, on a flash drive, or engraved in metal and put in a safe, or they could be in a regular digital wallet on a device never connected to the internet. If you want to do transactions, put the key on an airgapped device, create the transaction there, then move the transaction to an internet-connected device to broadcast it.
Cold storage means the coins are stored offline: you sign the transaction on the offline computer and then broadcast it from the online one. If the offline computer has malware, the transaction data can still be tampered with at the offline stage. In theory an attack is possible if both computers show erroneous data (the offline computer tampers with the transaction by signing it over to the wrong recipient while displaying the correct one). This is hard to pull off, as both computers need to be infected. The super-paranoid can mitigate it by verifying on a third computer, e.g. a VPS, or by sending small amounts.
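The offline-sign/online-broadcast split described above can be sketched as two functions that must run on different machines. This is a workflow illustration only: the HMAC here is a stand-in for a real ECDSA private key, and every name in it is invented.

```python
import hashlib
import hmac

# Stand-in for a private key: a secret that never leaves the air-gapped
# machine. This sketches the workflow, not real transaction cryptography.
OFFLINE_KEY = b"kept-only-on-the-airgapped-machine"

def sign_offline(raw_tx: bytes) -> bytes:
    """Runs on the air-gapped machine; never touches the network."""
    return hmac.new(OFFLINE_KEY, raw_tx, hashlib.sha256).digest()

def broadcast_online(raw_tx: bytes, signature: bytes) -> dict:
    """Runs on the online machine; only ever sees already-signed blobs."""
    return {"tx": raw_tx.hex(), "sig": signature.hex()}

# The unsigned transaction is carried to the offline box (QR code, USB,
# or hand transcription), signed there, and only the signature comes back.
raw = b'{"to": "0xabc123", "value": 1}'
packet = broadcast_online(raw, sign_offline(raw))
```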
It is possible to infect the offline computer by infecting a USB drive with stealth malware, which then propagates to the offline machine.
It could also be an inside job, in exchange for an employee getting a kickback from North Korea; it's not like this hasn't happened in the past. Imagine being a low-paid employee at an exchange, enticed by an offer of tens of millions from North Korea to pretend to be hacked and infect your own computers with malware they supply. This would be easy for an employee with access to the computers to do, and then pass off as a hack.
There is no concept of "coin storage" in the actual security model of cryptocurrency. The security model of cryptocurrency is about the storage of keys.
"Cold storage" has come to mean that the keys are stored in some offline location. It doesn't necessarily mean that the keys are hard to access or that the money being moved is otherwise hard to get to. That used to be what it meant, but in practice, a wallet on a hardware keychain is called "cold" exactly the way a wallet whose keys are split up on slips of paper between 5 different physical vaults is "cold."
Usually you want to boot from a cryptographically verified medium where a checksum can be verified before you execute the system.
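A minimal sketch of that checksum step, assuming the known-good hash arrives out of band (printed on paper, or read off a second machine):

```python
import hashlib
import hmac

def verify_image(path: str, expected_sha256: str) -> bool:
    """Hash the boot medium in 1 MiB chunks and compare against a
    known-good checksum obtained out of band, before booting from it."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(1 << 20), b""):
            h.update(block)
    # compare_digest avoids leaking information via timing differences.
    return hmac.compare_digest(h.hexdigest(), expected_sha256.lower())
```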
The emphasis is on running the correct software. If you have to input cryptographic data every time you boot that's okay because you're offline and should be in a secure room (no internet connected devices).
But yeah, malware attack is still possible if you don't have a secure chain and that's a long one.
The online security world is so wild. In pretty much any other field of engineering, foreign nation states explicitly targeting the thing you built is just kinda out of scope. There's no skyscraper in existence that is designed to withstand sustained artillery shelling, and your car is not going to withstand a tank shell either. Neither do they have to be designed to that specification. If North Korea killed someone with a missile or even destroyed a minor building or something, there would be public outrage and swift (military) repercussions.
But online, it's the wild wild west. The North Koreans can throw anything they want at your systems and the main response you get is "lol get good noob, should have built more secure systems", despite the opposing side quite literally having hundreds of people specifically trained to take on organisations like yours.
Not saying the Bybit people couldn't have been more careful or whatever, but let's appreciate how wild the online environment actually is sometimes.
Factories are not designed to withstand sustained aerial bombardment because the chance of sustained aerial bombardment is small to non-existent due to effective (geopolitical) mitigations.
But, if you are in an active war and being actively bombed, then you absolutely design your factories to be resistant to sustained aerial bombardment. You do not just throw your hands up in the air and say: “Who could have expected this totally routine and expected situation in our operational environment? We can not be blamed for not adequately mitigating known risks and intentionally mischaracterizing our risk mitigations as adequate for commonplace risks we know we can not adequately mitigate.”
If there were effective geopolitical mitigations that made the chances of an attack minimal, then your argument would hold weight. But that is not the case. Failure to accommodate known, standard, commonplace failure modes is incompetence. Deceptively implying you do mitigate risks, while lying or with a disregard for the truth, is fraud and maliciousness.
There is also a second problem with your argument which is the relative accessibility of executing these attacks being trivial compared to military operations; being easily within the reach of lone individuals, let alone groups, organized crime, or entire governments. They require 10,000% security improvements to actually stop commonplace and routine attacks. But that is a longer argument I am not going to get into right now since the qualitative argument I made above applies regardless of the quantitative difficulty.
>But, if you are in a active war and being actively bombed, then you absolutely design your factories to be resistant to sustained aerial bombardment.
That's not really a viable strategy. It has been tried a few times - Mittelwerk and Kőbánya spring to mind - but you can't really build a self-contained factory. If your enemy can't bomb the factory, they'll bomb the roads and railways serving your factory, they'll bomb the worker housing, they'll bomb the less-sensitive factories that supply your factory with raw materials and components. You very quickly run into the diseconomies of operating under siege conditions.
At least during WWII, it was generally far more effective to rely on camouflage, secrecy and redundancy. Rather than having a super-fortified factory that shouts "this is vital national infrastructure", spread your capacity out into lots of mundane-looking facilities and plan for a certain level of attrition. Compartmentalise information to prevent your enemy from mapping out your supply chain and identifying bottlenecks. Your overall system can be highly resilient, even if the individual parts of that system are fragile.
Sure, if there was an active war going on. But while NK and the USA are not exactly friendly, they're definitely not at war either. In basically any other field, the question of "what do we do when a nation state deploys hundreds of people, well funded and well trained, specifically to screw us over?" is met with some variant of "that's why we pay taxes, so the army can protect us from that".
A normal bank being robbed for 1.5 billion, ESPECIALLY by a pariah country like North Korea, would absolutely not be met with "oh that was definitely your own fault" as many of the sibling comments seem to imply.
No one said "they didn't need to defend", or at least that's not how I read OP. The observation is merely that the situation is so wildly different from the physically local world. It's remarkable.
I suspect the truth lies between the two - we are under constant attack, but we aren’t as a society reacting as if we were.
It’s like a building occasionally gets hit by a shell and we don’t go on a war footing.
The closest analogy I can come up with is England in the 1600s and early 1700s. Fairly regularly, ships would be attacked by pirates from North Africa, and sometimes an actual land raid would occur: pirates from North Africa would take slaves from small seaside towns.
It was not till England’s navy grew strong enough that the threat was eliminated. And perhaps that’s the real issue here: we know it’s happening, we cannot turn the Wild West into urban peace, so we just have to keep taking the licks and keep building more secure and stronger systems.
What has changed is that there is a digital (as opposed to gold) international form of money whose transactions cannot be reversed or stopped. Bybit and other holders of large amounts of crypto are operating with a fundamentally different threat model, where it's worthwhile for an attacker to invest millions of dollars of effort (for the Bybit payout, even tens or hundreds of millions) attacking them. Everyone else just needs to worry about getting ransomed for a much smaller amount.
There's a long BBC podcast on Lazarus that touches on the spending.
The members are state-sponsored and young/bright, top 0.1% academic sorts. At one point, the BBC got access to a conversation with one of the hackers, and their only question was "how much do you get paid?" (the context was that the hacker thought they were talking to someone else in the tech space).
Apparently they aren't paid very well at all. Far less than the average Western IT worker. Their lives are not luxurious either. They're in barracks style living quarters with strict schedules and travel. Presumably, the anonymous Lazarus hacker was putting out a probing question because they must have been ruminating about what life on the other side would be like, what they are really worth, etc.
That's part of the power of Lazarus: the ability to dedicate resources far in excess of what most expect, due to their indentured-servant hackers. (The opportunity to join is presented as a gift, which to some extent it is, because it does come with the extremely rare opportunity to travel. Many of them are in China.)
It is essentially a financial institution handling billions of dollars. It is not the average website of your neighbourhood restaurant that got hacked or a scattershot ransomware attack. I would expect that for that scale nation state actors are not out of scope, even if it is usually about infiltration and IP/secrets theft than outright getting robbed.
Eh, said financial institution chose a field of operation where certain risks are present by design. They make good profits because other institutions judge those risks too high.
It's the mob attacking a casino and making off with chips. That people keep valuing those chips is one of the mysteries of our days.
That’s a really good point. If a nation state bombed a private oil rig with $1.5B in damages all hell would break loose. But if it’s a cyber attack no one cares and we blame the victim.
I think it really boils down to plausible deniability, and the fact that it’s convenient for the governments on the receiving end to ignore the damages done to private citizens when there’s no physical harm and clear responsibility.
No president is going to bomb NK because they attacked a crypto exchange. Maybe they should, but it’s not something the public will support. So it’s easy to say “oh well, we don’t really know for sure who did it” and call it a day. It’s our own fault.
I also agree that private citizens have a responsibility to secure ourselves, but where do you draw the line? If I don’t have an AA gun on my roof, am I responsible for enemy warplanes bombing my business? Isn’t this partially why I pay taxes?
Well, there's a couple of airliner shootdowns that kind of go in this category. MH17, PS752, AHY8243... That's at least $0.5B in damage plus many hundreds of civilian lives.
At a certain scale nation-state-level actors have to be part of your threat model, there's no excuse.
But yeah, it's quite baffling how in a couple years we seemingly went from stealing email addresses to credit cards to straight up billions of dollars.
If we can expect that everything shifts online eventually, where will this end? Clicked on the wrong link? Guess your house is gone... tough luck.
This is why it is dangerous to replace people and laws with code. With laws, you eventually get to talk to a human being who has leeway in interpreting the situation. With code, it just works the way it does, regardless of circumstances.
Cryptocurrencies avoid a central authority, but by doing that, they also avoid any possibility of human discretion, oversight, or recourse. There is no institution to appeal to, no customer service to call, and no regulator to enforce fairness.
What concerns me is the idea that risks like these might leak into the regulated financial sector.
Right now, if I want to avoid my dollars being among those billions stolen, I can (and do) keep them someplace far away from instant digital currency. With firms that, while they could move large sums of their money somewhere else, build in a whole lot of friction in proportion to the amount being moved—by their customers, their staff, and their counterparties. Limited and well-understood modes of potential malfeasance, and strong structural discouragement for each of them.
There is nothing that I need to do that needs to move fast. But I’d hate for the firms servicing my slow, boring needs to be tempted by the new shiny.
> If North Korea killed someone with a missile or even destroyed a minor building or something, there would be public outrage and swift (military) repercussions.
Russia kills people in the West with nerve agents or polonium, cuts electrical and Internet cables, blows up ammunition factories or puts incendiary devices on cargo airplanes, and there are no repercussions.
This post is light on the details of how the hack occurred. Given it talks about their toolkit, am I right to understand that people were tricked into downloading and running malicious software?
> "At this time, it appears the attackers compromised multiple signers’ devices, manipulated what signers saw in their wallet interface, and collected the required signatures while the signers believed they were conducting routine transactions."
Depends how this plays out. If Bybit collapses due to this, yeah, lots of individual investors. Though history shows (MtGox, FTX) that eventually they’d be made at least partially whole.
If Bybit doesn’t collapse (can handle all the on-going withdrawals), then Bybit lost money that they’ll need to recoup through operations.
Currently it’s trending towards the second scenario.
My understanding is this multisig failed because, as with most security failures, everyone just pressed yes and didn’t communicate, investigate, or ask questions, defeating the purpose of a multisig.
Yea, how is it that multiple people signed a transaction for over a billion dollars of assets without due diligence?
If you did this for non crypto there would be lawyers, bankers, etc involved in the transaction.
Root certificate authorities have already solved this problem with signing rituals which take place in person in an air gapped vault on specialized hardware and multiple parties as witness.
They didn't sign a transaction for 1 billion dollars.
They all signed what they thought was a routine transfer, but in reality what they signed gave the hacker full control of the smart contract (the Gnosis Safe) in which the $1.4B of tokens were stored.
The hackers, having gained control of the smart contract, proceeded to empty it of funds.
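One concrete pre-signing check that reportedly would have flagged this payload: in Gnosis Safe's execTransaction, operation 0 is a plain call and operation 1 is a delegatecall, and a delegatecall can replace the Safe's own logic. A toy allowlist check along those lines (the dict field names and addresses are assumptions, not the real Safe API):

```python
# Operation 1 is delegatecall in Gnosis Safe's execTransaction; a
# delegatecall can swap out the Safe's implementation, which is
# reportedly how the attacker took control in this incident.
DELEGATECALL = 1

def looks_routine(tx: dict, allowlist: set[str]) -> bool:
    """Refuse delegatecalls and unknown recipients before anyone signs."""
    if tx.get("operation", 0) == DELEGATECALL:
        return False
    return tx.get("to", "").lower() in allowlist

# Hypothetical allowlist of known destination contracts.
allow = {"0xknowntokencontract"}
```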
The concept of strong safeties was not in place. Safeties refer to layers that go beyond common trust mechanisms. In this case, signing a transaction of that magnitude solely based on multi-signature approval was completely insufficient. There should have been additional safeguards, such as special approvals and extra verification steps, specifically designed for transactions within that amount range.
Indeed. As in, the organization should only sign such transactions when all signers are present in person in a secure location and they follow a procedure witnessed by independent auditors. “Work from home” when you control billion in value does not cut it.
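A hypothetical tiered policy along those lines could be as simple as a lookup table mapping transfer size to required controls. Every threshold here is illustrative, not a recommendation:

```python
# Larger transfers demand more signatures and an in-person signing
# ceremony. All tier boundaries below are made-up examples.
TIERS = [
    (1_000_000, 2, False),     # under $1M: 2 signers, remote signing ok
    (50_000_000, 4, True),     # under $50M: 4 signers, in person
    (float("inf"), 7, True),   # anything larger: 7 signers, in person
]

def required_controls(amount_usd: float) -> tuple[int, bool]:
    """Return (signers required, in-person ceremony required)."""
    for ceiling, signers, in_person in TIERS:
        if amount_usd < ceiling:
            return signers, in_person
    raise AssertionError("unreachable: last tier is unbounded")
```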
This was a multisig, meaning M out of N signatures from different signing devices were needed to sign a transaction. The attacker infected enough signer devices to go unnoticed, and the signers failed to verify what they were signing on air-gapped devices.
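A toy model of the M-of-N check, ignoring cryptography entirely. Note that the attack worked precisely because the collected signatures were valid, so a policy layer has to sit above a check like this; the names are invented:

```python
def threshold_met(approvals: set[str], signers: set[str], m: int) -> bool:
    """True when at least m distinct, authorized signers have approved.
    Duplicate or unauthorized approvals don't count toward the threshold."""
    return len(approvals & signers) >= m

# Hypothetical 3-of-5 scheme.
signers = {"alice", "bob", "carol", "dave", "erin"}
```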
I hate how complexity has become the norm in the industry. Instead of having simple systems with code and modules that are simple, fit-for-purpose and fully auditable, the approach has been to have insanely complex systems and then to add some even more complex security solution on top like CrowdStrike. Seems like a bandaid patch.
Multi-sig means multiple signatures, by multiple private keys. Nothing about that means that they have to be by multiple people - this isn’t secure like a bank - or that they aren’t vulnerable to the same attack.
Unsure why the title says this era has arrived as if it's something new. As an internal penetration tester, I can attest it's already a disaster. The issue is that companies live and die by the cope that social engineering is a high bar or that if a vulnerability isn't internet facing, it's not a big deal.
The point of the article seems to be that it used to be bugs and raw incompetence, and now it's graduated to insufficient OpSec.
Significant progress for crypto.
The other side of this coin is all the companies and infrastructure that have popped up which, intentionally or not, enable the laundering of ill-gotten cryptocurrency [1].
I have a hard time feeling sympathy here because I consider cryptocurrency to be fundamentally silly. Reversibility of fiat currency transactions is a feature, not a bug.
I feel like securing something like this is practically impossible. There's always the risk of a bad actor who introduces malware for a small fee.
It can still be hard to reverse fiat, even if easier than crypto. Try disputing a wire. This is why you should always use a credit card, preferably Amex, for purchases: tons of buyer protection.
Reversibility is a trade-off. It's great if you are on the sending end of a transaction. It can be a nightmare on the receiving end. Irreversibility is the other way around. And both approaches have different costs and assumptions.
I think it’s less about reversibility itself and more the larger system within which it works. Banking works because the companies agree to follow rules so there’s a social context where if I make a mistake you will help fix it because the odds are fair that you will make a mistake at some point, too. In contrast, cryptocurrency is a political movement so the ideological “trust less” purity test matters more than whether the system is actually used. There is no technical reason why a system couldn’t have something like a settlement period to allow fraud reversal.
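The settlement-period idea could be sketched as a toy ledger where transfers stay reversible inside a window and become final once it elapses. All names and the window length are invented:

```python
class SettlementLedger:
    """Toy ledger: transfers sit in a pending window during which they
    can be reversed; once the window elapses they settle and are final."""

    def __init__(self, window_seconds: float):
        self.window = window_seconds
        self.pending = {}   # tx_id -> (amount, submitted_at)
        self.settled = []   # finalized (tx_id, amount) pairs

    def submit(self, tx_id: str, amount: int, now: float) -> None:
        self.pending[tx_id] = (amount, now)

    def reverse(self, tx_id: str, now: float) -> bool:
        """Succeeds only while the transfer is still inside the window."""
        amount, t0 = self.pending.get(tx_id, (None, None))
        if t0 is not None and now - t0 < self.window:
            del self.pending[tx_id]
            return True
        return False

    def settle_due(self, now: float) -> None:
        """Finalize every pending transfer whose window has elapsed."""
        for tx_id, (amount, t0) in list(self.pending.items()):
            if now - t0 >= self.window:
                self.settled.append((tx_id, amount))
                del self.pending[tx_id]
```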
Adjusting your comment for the situation:
> Is $100.00 in cash silly? It has the same property (non-reversibility)
No, not silly if that's what I am comfortable to keep on me (wallet, mattress, etc) and I'm mugged/robbed most people will recover. (Especially if you're also able to afford the inherent risk of crypto.)
> Is $1,500,000,000.00 in cash silly? It has the same property (non-reversibility)
YES! And probably a challenge for most humans, even if you're able to get that cash in the limited-run US $100,000 bill [1] - that's 15,000 green slips of paper. (I'm making a bold assumption that this link [2] is reasonably accurate for the physical scale, though it apparently shows only 13,000, not the 15,000 needed.)
They effectively treated the $1.5B like a pile of cash in a fence with a few (easily pickable apparently) locks keeping it shut.
That SHOULD have been in a 100% offline, air gapped system with multiple levels of 2+ person approvals to access.
But this failure implies to me that even THEY didn't really consider the crypto assets they were holding as something with a real value either.
Orders of magnitude matter, and you have to look at the overall system. You can’t move $1.5B in cash without a fleet of trucks and a lot of time, and serious banking has lots of safeguards around it to prevent thefts by requiring more people to cooperate on an insider theft.
Cryptocurrency was designed as a political statement rather than a serious banking system so you effectively have the same level of precaution for both large and small amounts, akin to a bank keeping a billion dollars in the teller’s tray.
Taking a step back from this attack, it looks like the new crypto-reality is far far far immature security-wise & compliance-wise ("compliance to what??" you can ask me).
While it is nearly impossible to steal $100mn from one of the mega-banks, those <expletive> crypto bros, a bunch of failed morons (self-proven by all these hacks), manage to lose people's money. Now, I am not defending the banking system (and its ethics/morals), but damn it, they do a f-a-r better job at IT Audit/IT Compliance/IT Sec (my bread and butter for decades).
Being in the thick of it, I can tell you the compliance side is pushing towards what exists in traditional finance, be it IT, anti-money-laundering, accounting practices, etc. At least in Europe and, to a lesser extent, the US. If you go work at new banks (say Revolut or N26) or at growing asset-managing crypto companies in Europe, you'll find the landscape to be extremely similar.
As far as I'm concerned, if you're parking money with a company based in an area that has lax regulation you're holding the gun that'll shoot your foot.
I have a hard time seeing something like this happen at Bitpanda or Kraken, though you never know.
The difference is that conventional banks can roll back transactions. The normal banking system is essentially a consensus mechanism: "A: I owe you this amount. I just transferred you this amount, ok?" "B: Yup, accepted, thanks." If something goes wrong, A can say "Whoops, I made a mistake. Reverse it please; here are the laws stating that in this case I have the right," and B must comply. In cryptocurrencies, by design, "the code is law", and this law does not provide for reversing transactions. So you can lose any amount of currency to an illegal act or even a simple error, like transferring to a dead address.
> those <expletive> crypto bros, a bunch of failed morons (self-proven by all these hacks)
Bankers are a bunch of idiots, too. I know this to be true because that one investment bank collapsed a bunch of years ago.
In all seriousness though, ETH is just a commodity; a bearer instrument; a thing. It's similar to gold or cash in some ways. If you store it properly, you're fine. If you give it to someone untrustworthy who loses it, of course that's a problem.
Well-regulated banks can start holding crypto on behalf of customers as soon as they're given the regulatory go-ahead. They've stored gold in vaults for thousands of years; they can store crypto in digital vaults too.
I’d be shit scared of a trad-fi institution holding crypto. I doubt they have the operational muscle, instinct, and know-how to properly safeguard it. Unless they partner with someone who does, which is what they’d likely do.
I'm not sure how you'd do compliance, though. At least not universally. You could (which I suppose is your point) implement compliance requirements for crypto companies operating facilities on your soil. That doesn't really do anything for decentralized systems, though.
Compliance has a centralizing effect, for example the American OFAC sanctions list.
You can do business outside of it but you're cutting yourself out of a lot of institutional money.
In the end while there's a lot of money being made in sanctions-evasion, money-laundering and whatnot, at the macro level the industry prefers trying to cozy up to Blackrock and Vanguard than to narcos.
Recent and related: Bybit loses $1.5B in hack - https://news.ycombinator.com/item?id=43130143
They should use an air-gapped computer that goes online only when signing something. Not having this procedure is an opsec failure.
That’s precisely what happened in this attack.
They were attacked when they went online.
The missing part is that you cannot apply the same procedure to 1 ETH as you would to 1k ETH, regardless of the technology being used.
Or, you know, employ technology that allows for mistakes to be fixed.
Isn't cold storage about where the keys are? You still need to be able to actually interact with a chain.
Stuxnet managed to infect air-gapped computers.
Yeah, it sounds like an attack on the Metamask extension, or the browser hosting it.
not at all
The online security world is so wild. In pretty much any other field of engineering, foreign nation states explicitly targeting the thing you built is just kinda out of scope. There's no skyscraper in existence that is designed to withstand sustained artillery shelling, and your car is not going to withstand a tank shell either. Neither do they have to be designed to that specification. If North Korea killed someone with a missile or even destroyed a minor building or something, there would be public outrage and swift (military) repercussions.
But online, it's the wild wild west. The North Koreans can throw anything they want at your systems, and the main response you get is "lol get good noob, should have built more secure systems" — despite the opposing side having quite literally hundreds of people specifically trained to take on organisations like yours.
Not saying the Bybit people couldn't have been more careful of whatever, but let's appreciate how wild the online environment actually is sometimes.
Your logic is backwards.
Factories are not designed to withstand sustained aerial bombardment because the chance of sustained aerial bombardment is small to non-existent due to effective (geopolitical) mitigations.
But, if you are in an active war and being actively bombed, then you absolutely design your factories to be resistant to sustained aerial bombardment. You do not just throw your hands up in the air and say: "Who could have expected this totally routine and expected situation in our operational environment? We cannot be blamed for failing to mitigate known risks, or for mischaracterizing our mitigations as adequate for commonplace risks we knew we could not adequately mitigate."
If there were effective geopolitical mitigations that made the chances of an attack minimal, then your argument would hold weight. But that is not the case. Failure to accommodate known, standard, commonplace failure modes is incompetence. Deceptively implying you mitigate risks, while lying or showing disregard for the truth, is fraud and malice.
There is also a second problem with your argument which is the relative accessibility of executing these attacks being trivial compared to military operations; being easily within the reach of lone individuals, let alone groups, organized crime, or entire governments. They require 10,000% security improvements to actually stop commonplace and routine attacks. But that is a longer argument I am not going to get into right now since the qualitative argument I made above applies regardless of the quantitative difficulty.
> But, if you are in an active war and being actively bombed, then you absolutely design your factories to be resistant to sustained aerial bombardment.
That's not really a viable strategy. It has been tried a few times - Mittelwerk and Kőbánya spring to mind - but you can't really build a self-contained factory. If your enemy can't bomb the factory, they'll bomb the roads and railways serving your factory, they'll bomb the worker housing, they'll bomb the less-sensitive factories that supply your factory with raw materials and components. You very quickly run into the diseconomies of operating under siege conditions.
At least during WWII, it was generally far more effective to rely on camouflage, secrecy and redundancy. Rather than having a super-fortified factory that shouts "this is vital national infrastructure", spread your capacity out into lots of mundane-looking facilities and plan for a certain level of attrition. Compartmentalise information to prevent your enemy from mapping out your supply chain and identifying bottlenecks. Your overall system can be highly resilient, even if the individual parts of that system are fragile.
3 replies →
Sure, if there was an active war going on. But while NK and the USA are not exactly friendly, they're definitely not at war either. In basically any other field, the question of "what do we do when a nation state deploys hundreds of people, well funded and well trained, specifically to screw us over?" is met with some variant of "that's why we pay taxes, so the army can protect us from that".
A normal bank being robbed for 1.5 billion, ESPECIALLY by a pariah country like North Korea, would absolutely not be met with "oh that was definitely your own fault" as many of the sibling comments seem to imply.
12 replies →
No one said "they didn't need to defend", or at least that's not how I read OP. The observation is merely that the situation is so wildly different from the physically local world. It's remarkable.
I suspect the truth lies between the two - we are under constant attack, but we aren’t as a society reacting as if we were.
It’s like a building occasionally gets hit by a shell and we don’t get on a war footing.
The closest analogy I can come up with is England in the 1600s and early 1700s. Fairly regularly, ships would be attacked by pirates from North Africa, and sometimes an actual land raid would occur, with pirates from North Africa taking slaves from small seaside towns.
It was not till England’s navy grew strong enough that the threat was eliminated. And perhaps that’s the real issue here: we know it’s happening, we cannot turn the Wild West into urban peace, so we just have to keep taking the licks and keep building more secure and stronger systems.
2 replies →
The state of online security hasn't changed much.
What has changed is that there is a digital (as opposed to gold) international form of money whose transactions cannot be reversed or stopped. Bybit and other holders of large amounts of crypto are operating with a fundamentally different threat model, where it's worthwhile for an attacker to invest millions of dollars of effort (for the Bybit payout, even tens or hundreds of millions) attacking them. Everyone else just needs to worry about getting ransomed for a much smaller amount.
There's a long BBC podcast on Lazarus that touches on the spending.
The members are state-sponsored and young/bright. Top 0.1% academic sorts. At one point, the BBC got access to a conversation with one of the hackers, and their only question was "how much do you get paid?" (the context was that the hacker thought they were talking to someone else in the tech space)
Apparently they aren't paid very well at all. Far less than the average Western IT worker. Their lives are not luxurious either. They're in barracks style living quarters with strict schedules and travel. Presumably, the anonymous Lazarus hacker was putting out a probing question because they must have been ruminating about what life on the other side would be like, what they are really worth, etc.
That's part of the power of Lazarus: the ability to dedicate resources far in excess of what most expect, due to their indentured-servant hackers (the opportunity to join is presented as a gift, which to some extent it is, because it comes with the extremely rare opportunity to travel. Many of them are in China.)
It is essentially a financial institution handling billions of dollars. It is not the average website of your neighbourhood restaurant that got hacked, or a scattershot ransomware attack. I would expect that at that scale nation-state actors are not out of scope, even if such attacks are usually more about infiltration and IP/secrets theft than outright robbery.
Eh said financial institution chose a field of operation where certain risks are present by design. They make good profits because other institutions judge those risks too high.
It's the mob attacking a casino and making off with chips. That people keep valuing those chips is one of the mysteries of our days.
That’s a really good point. If a nation state bombed a private oil rig with $1.5B in damages all hell would break loose. But if it’s a cyber attack no one cares and we blame the victim.
I think it really boils down to plausible deniability, and the fact that it’s convenient for the governments on the receiving end to ignore the damages done to private citizens when there’s no physical harm and clear responsibility.
No president is going to bomb NK because they attacked a crypto exchange. Maybe they should, but it’s not something the public will support. So it’s easy to say “oh well we don’t really know for sure who did it” and call it a day. It’s our own fault.
I also agree that private citizens have a responsibility to secure ourselves, but where do you draw the line? If I don’t have an AA gun on my roof, am I responsible for enemy warplanes bombing my business? Isn’t this partially why I pay taxes?
Well, there's a couple of airliner shootdowns that kind of go in this category. MH17, PS752, AHY8243... That's at least $0.5B in damage plus many hundreds of civilian lives.
It seems more analogous to the Soviets infiltrating your small business. Which no small business owner is prepared to screen for, and which happened.
If your small business has $1.5billion in the safe then it’s not a “small business”
1 reply →
At a certain scale nation-state-level actors have to be part of your threat model, there's no excuse.
But yeah, it's quite baffling how in a couple years we seemingly went from stealing email addresses to credit cards to straight up billions of dollars.
If we can expect that everything shifts online eventually, where will this end? Clicked on the wrong link? Guess your house is gone... tough luck.
This is why it is dangerous to replace people and laws with code. With laws, you eventually get to talk to a human being who has leeway in interpreting the situation. With code, it just works the way it does, regardless of circumstances.
Cryptocurrencies avoid a central authority, but by doing that, they also avoid any possibility of human discretion, oversight, or recourse. There is no institution to appeal to, no customer service to call, and no regulator to enforce fairness.
2 replies →
What concerns me is the idea that risks like these might leak into the regulated financial sector.
Right now, if I want to avoid my dollars being among those billions stolen, I can (and do) keep them someplace far away from instant digital currency. With firms that, while they could move large sums of their money somewhere else, build in a whole lot of friction in proportion to the amount being moved—by their customers, their staff, and their counterparties. Limited and well-understood modes of potential malfeasance, and strong structural discouragement for each of them.
There is nothing that I need to do that needs to move fast. But I’d hate for the firms servicing my slow, boring needs to be tempted by the new shiny.
> If North Korea killed someone with a missile or even destroyed a minor building or something, there would be public outrage and swift (military) repercussions.
Russia kills people in the West with nerve-gasses or Plutonium, cuts electrical and Internet cables, blows up ammunition factories or puts incendiary devices on cargo airplanes and there are no repercussions.
This post is light on the details of how the hack occurred. Given it talks about their toolkit, am I right to understand that people were tricked into downloading and running malicious software?
Found the answer, yes https://x.com/0xcygaar/status/1892967062160511164
> At this time, it appears the attackers compromised multiple signers’ devices, manipulated what signers saw in their wallet interface, and collected the required signatures while the signers believed they were conducting routine transactions.
Does anyone know how many signers there were/are?
Genuine question because I know almost nothing about crypto: who actually lost money in this attack? Lots of individuals?
Depends how this plays out. If Bybit collapses due to this, yeah, lots of individual investors. Though history shows (MtGox, FTX) that eventually they’d be made at least partially whole.
If Bybit doesn’t collapse (can handle all the on-going withdrawals), then Bybit lost money that they’ll need to recoup through operations.
Currently it’s trending towards the second scenario.
Remember when ETH hard forked over $50M stolen 9 years ago?
That was way more relative to the size of the network, and the hacker still got his forked tokens, so he did OK.
My understanding is this multisig failed because, like most security, everyone just pressed yes and didn’t communicate, investigate, or ask questions, defeating the purpose of a multisig.
Yea, how is it that multiple people signed a transaction for over a billion dollars of assets without due diligence?
If you did this for non crypto there would be lawyers, bankers, etc involved in the transaction.
Root certificate authorities have already solved this problem with signing rituals which take place in person in an air gapped vault on specialized hardware and multiple parties as witness.
They didn't sign a transaction for 1 billion dollars. They all signed what they thought was a routine transfer, but in reality what they signed gave the hacker full control of the smart contract (the Gnosis Safe) in which the $1.4B of tokens were stored.
The hackers, having gained control of the smart contract, proceeded to empty it of funds.
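For context, a Gnosis Safe `execTransaction` call carries an `operation` field where 0 is a normal CALL and 1 is a DELEGATECALL — and a delegatecall runs arbitrary code in the Safe's own storage context, which is how control was taken. A signer-side sanity check could look roughly like this (the transaction dict here is a simplified stand-in, not the real Safe ABI):

```python
CALL, DELEGATECALL = 0, 1

def sanity_check(tx: dict, allowlist: set) -> list:
    """Flag red flags in a proposed Safe transaction before signing.
    `tx` is a simplified dict for illustration, not ABI-encoded data."""
    problems = []
    if tx.get("operation") == DELEGATECALL:
        problems.append("DELEGATECALL: this can rewrite the Safe itself")
    if tx.get("to") not in allowlist:
        problems.append("destination %s not on allowlist" % tx.get("to"))
    return problems
```

A routine transfer to a known warm wallet passes clean; the heist-style transaction would trip both checks — but only if the check runs on a machine the attacker hasn't compromised, which is the whole problem.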
TFA seems to suggest that the thieves modified the signers’ applications to display a routine transaction but actually sign the heist transaction.
Given that the UI they saw was compromised, they likely believed they were signing some routine 1M rebalancing transaction.
3 replies →
The concept of strong safeties was not in place. Safeties refer to layers that go beyond common trust mechanisms. In this case, signing a transaction of that magnitude solely based on multi-signature approval was completely insufficient. There should have been additional safeguards, such as special approvals and extra verification steps, specifically designed for transactions within that amount range.
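The tiered safeguards described above might look roughly like this as policy code (the thresholds and control names are invented for illustration, not a recommendation):

```python
def required_controls(amount_usd: float) -> list:
    """Map a transfer amount to the safeguards it should trigger.
    Thresholds are illustrative only."""
    controls = ["multisig"]
    if amount_usd >= 1_000_000:
        controls.append("out-of-band confirmation between signers")
    if amount_usd >= 10_000_000:
        controls.append("air-gapped verification of raw calldata")
    if amount_usd >= 100_000_000:
        controls.append("in-person ceremony with independent witnesses")
    return controls
```

The point is that a $1.5B-capable action should never clear on the same path as a routine rebalance — the policy itself has to escalate with the blast radius.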
Indeed. As in, the organization should only sign such transactions when all signers are present in person in a secure location and they follow a procedure witnessed by independent auditors. “Work from home” when you control billions in value does not cut it.
They didn't sign a transaction for 1 billion dollars. They all signed what they thought was a routine transfer, but in reality what they signed gave the hacker full control of the smart contract (the Gnosis Safe) in which the $1.4B of tokens were stored. The hackers, having gained control of the smart contract, proceeded to empty it of funds.
The displayed information was tampered with by malware. Communication would not have helped.
I really do not understand why they do not separate these into multiple separate wallets
They did.
This was a multisig — meaning M out of N signatures from different signing devices were needed to sign a transaction. The attacker infected enough signer devices to go unnoticed, and the signers failed to verify what they were signing on air-gapped devices.
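The M-of-N scheme works roughly like this (a toy sketch with plain strings standing in for cryptographic signatures):

```python
def can_execute(approvals: set, owners: set, threshold: int) -> bool:
    """M-of-N: execution needs `threshold` approvals from distinct,
    recognised owners. Note the quorum says nothing about *what* was
    approved -- if enough owner devices display a forged UI, a
    malicious payload still clears the threshold."""
    return len(approvals & owners) >= threshold
```

Which is exactly the failure mode here: the threshold held, but every signature attested to a transaction the signers never actually saw.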
> the signers failed to verify what they were signing on air-gapped devices
This is the part that really surprises me given the amount of money involved.
2 replies →
They have other cold wallets, so I guess they do?
They did. This is why the exchange is still solvent.
I hate how complexity has become the norm in the industry. Instead of having simple systems with code and modules that are simple, fit-for-purpose and fully auditable, the approach has been to have insanely complex systems and then to add some even more complex security solution on top like CrowdStrike. Seems like a bandaid patch.
Wild to think that North Korea could assign whole teams of people working 24/7 to trick just one person into clicking a couple of buttons.
Not one person. The multi in multi-sig means multiple.
Multi-sig means multiple signatures, by multiple private keys. Nothing about that means that they have to be by multiple people - this isn’t secure like a bank - or that they aren’t vulnerable to the same attack.
2 replies →
Unsure why the title says this era has arrived as if it's something new. As an internal penetration tester, I can attest it's already a disaster. The issue is that companies live and die by the cope that social engineering is a high bar or that if a vulnerability isn't internet facing, it's not a big deal.
The point of the article seems to be that it used to be bugs and raw incompetence, and now it's graduated to insufficient OpSec. Significant progress for crypto.
We took the new era out of the title above.
The other side of this coin is all the companies and infrastructure that have popped up which, intentionally or not, enable the laundering of ill-gotten cryptocurrency [1].
I have a hard time feeling sympathy here because I consider cryptocurrency to be fundamentally silly. Reversibility of fiat currency transactions is a feature, not a bug.
I feel like securing something like this is practically impossible. There's always the risk of a bad actor who introduces malware for a small fee.
[1]: https://www.chainalysis.com/blog/2024-crypto-money-launderin...
Reversible transactions is a feature for fiat money
Reversible transactions would generally be a bug for cash and hard assets, which cryptocurrency is trying to imitate.
1.5 billion in cash would not disappear this easily. You would need trucks just to transport it.
1 reply →
It can still be hard to reverse fiat, even if easier than crypto. Try disputing a wire. This is why you should always use a credit card, preferably Amex, for purchases: tons of buyer protection.
Crypto has reversible transactions when both parties agree to use that functionality in advance (well, the reasonably programmable ones do, anyway)
It's not a bug if both parties give consent, which sounds like a wonderful way to transact, to me!
2 replies →
Reversibility is a trade-off. It's great if you are on the sending end of a transaction. It can be a nightmare on the receiving end. Irreversibility is the other way around. And both approaches have different costs and assumptions.
I think it’s less about reversibility itself and more the larger system within which it works. Banking works because the companies agree to follow rules so there’s a social context where if I make a mistake you will help fix it because the odds are fair that you will make a mistake at some point, too. In contrast, cryptocurrency is a political movement so the ideological “trust less” purity test matters more than whether the system is actually used. There is no technical reason why a system couldn’t have something like a settlement period to allow fraud reversal.
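The settlement-period idea mentioned above can be sketched as a simple timelock (all names invented; a real system would need dispute rules, not just a timer):

```python
import time

class TimelockedTransfer:
    """A transfer that only settles after a delay, during which
    it can still be cancelled -- e.g. once fraud is detected."""

    def __init__(self, amount: int, delay_seconds: float):
        self.amount = amount
        self.settles_at = time.time() + delay_seconds
        self.cancelled = False

    def cancel(self) -> bool:
        # Reversal is only possible inside the settlement window.
        if not self.cancelled and time.time() < self.settles_at:
            self.cancelled = True
        return self.cancelled

    def settled(self) -> bool:
        return not self.cancelled and time.time() >= self.settles_at
```

Nothing in this is technically incompatible with a blockchain — some contracts implement exactly this pattern — which supports the point that the absence of reversibility is an ideological choice, not a technical necessity.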
5 replies →
bank error in your favor
Is cash silly? It has the same property (non-reversibility)
> Is cash silly?
No, of course not.
Adjusting your comment for the situation:

> Is $100.00 in cash silly? It has the same property (non-reversibility)
No, not silly. If that's what I'm comfortable keeping on me (wallet, mattress, etc.) and I'm mugged or robbed, most people will recover. (Especially if you're also able to afford the inherent risk of crypto.)
> Is $1,500,000,000.00 in cash silly? It has the same property (non-reversibility)
YES! And probably a challenge for most humans even if you're able to get that cash in the limited-issue US $100,000 bill [1] — that's 15,000 green slips of paper. (I'm making a bold assumption that this link [2] is reasonably accurate for the physical scale, though it apparently shows only 13,000, not the 15,000 needed.)
They effectively treated the $1.5B like a pile of cash in a fence with a few (apparently easily pickable) locks keeping it shut.
That SHOULD have been in a 100% offline, air gapped system with multiple levels of 2+ person approvals to access.
But this failure implies to me that even THEY didn't really consider the crypto assets they were holding as something with a real value either.
1- https://en.m.wikipedia.org/wiki/United_States_one-hundred-th...
2- https://www.reddit.com/r/pics/s/GHNABiJh6A
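The back-of-envelope arithmetic checks out (the weight figure assumes roughly one gram per US banknote):

```python
AMOUNT = 1_500_000_000  # USD

bills_100k = AMOUNT // 100_000  # the discontinued $100,000 note
bills_100 = AMOUNT // 100       # ordinary $100 bills
tonnes_100 = bills_100 / 1_000_000  # ~1 g per note -> metric tonnes

print(bills_100k)  # 15,000 slips of paper
print(bills_100)   # 15 million $100 bills
print(tonnes_100)  # ~15 tonnes of cash in $100s
```

Fifteen tonnes of paper is the "fleet of trucks" problem the sibling comment describes; the same value in crypto moved in one signed message.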
1 reply →
Orders of magnitude matter, and you have to look at the overall system. You can’t move $1.5B in cash without a fleet of trucks and a lot of time, and serious banking has lots of safeguards around it to prevent thefts by requiring more people to cooperate on an insider theft.
Cryptocurrency was designed as a political statement rather than a serious banking system so you effectively have the same level of precaution for both large and small amounts, akin to a bank keeping a billion dollars in the teller’s tray.
It’s also impossible to steal from afar and transactions of $100/$1000000/$1000000000 each look very different.
Crypto is like having a $1.5 billion bill.
Keeping $1.5 billion in cash is silly.
Taking a step back from this attack, it looks like the new crypto-reality is far far far immature security-wise & compliance-wise ("compliance to what??" you can ask me).
While it is nearly impossible to steal $100mn from one of the mega-banks, those <expletive> crypto bros, a bunch of failed morons (self-proven by all these hacks), manage to lose people's money. Now... I am not defending the banking system (and its ethics/morals), but damn it, they do a f-a-r better job at IT Audit/IT Compliance/IT Sec (my bread and butter for decades).
Being in the thick of it, I can tell you the compliance side is pushing towards what exists in traditional finance, be it IT, money laundering, accounting practices, etc. At least in Europe, and to a lesser extent the US. If you go work at new banks (say Revolut or N26) or at growing asset-managing crypto companies in Europe, you'll find the landscape to be extremely similar.
As far as I'm concerned, if you're parking money with a company based in an area that has lax regulation you're holding the gun that'll shoot your foot. I have a hard time seeing something like this happen at Bitpanda or Kraken, though you never know.
Classic “you get what you pay for”.
The difference is that conventional banks can roll back transactions. The normal banking system is essentially a consensus mechanism: "A: I owe you this amount. A: I just transferred you this amount, ok? B: Yup, accepted, thanks." If something goes wrong, A can say "Whoops, I made a mistake. Reverse please; here are the laws stating that in this case I have the right," and B must comply. In cryptocurrencies, by design, "the code is law", and this law does not provide for reversing transactions. So you can lose any amount of currency to an illegal act or even a simple error, like transferring to a dead address.
> those <expletive> crypto bros, a bunch of failed morons (self-proven by all these hacks)
Bankers are a bunch of idiots, too. I know this to be true because that one investment bank collapsed a bunch of years ago.
In all seriousness though, ETH is just a commodity; a bearer instrument; a thing. It's similar to gold or cash in some ways. If you store it properly, you're fine. If you give it to someone untrustworthy who loses it, of course that's a problem.
Well-regulated banks can start holding crypto on behalf of customers as soon as they're given the regulatory go-ahead. They've stored gold in vaults for thousands of years; they can store crypto in digital vaults too.
I’d be shit scared of a trad-fi institution holding crypto. I doubt they have the operational muscle, instinct, and know-how to properly safeguard it. Unless they partner with someone who does, which is what they’d likely do.
1 reply →
I'm not sure how you'd do compliance, though. At least not universally. You could (which I suppose is your point) implement compliance requirements for crypto companies operating facilities on your soil. That doesn't really do anything for decentralized systems though
Compliance has a centralizing effect, for example the American OFAC sanctions list. You can do business outside of it but you're cutting yourself out of a lot of institutional money. In the end while there's a lot of money being made in sanctions-evasion, money-laundering and whatnot, at the macro level the industry prefers trying to cozy up to Blackrock and Vanguard than to narcos.