Comment by kyboren

5 years ago

I think what people are missing with all these analogies about burglaries and negligence is the funny difference between cyberspace and meatspace: In cyberspace, your attacker can be anywhere on the planet, located in virtually any jurisdiction, and reliably tracing and attributing attacks is a very difficult task. In meatspace, your attacker must be physically present and is generally obvious and thus vulnerable. This difference has dramatic implications on the ability of the enforcement model to reduce incidence of attacks.

In meatspace, assigning 100% of the burden of blame to the attacker and absolving the victim of any blame at all agrees with our ideas of morality and sort of works because there is a non-negligible chance of holding the attacker accountable. This provides a measure of deterrence to would-be attackers.

In contrast, in cyberspace, the chance of holding attackers accountable is much lower. There is little deterrence to would-be attackers, especially state-sponsored attackers. Here we need to let go of our fantasy that blame must be assigned according to our idea of who is morally at fault.

Of course the attacker is always morally at fault. But legally, we must hold accountable organizations who are breached, because we need them to improve their security posture. An improved security posture is the only realistic path to a future with fewer and less impactful cyberspace attacks.

Strict liability or "victim blaming" for cyberspace attacks goes against our notions of morality but IMO it is essential.
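The deterrence argument above is essentially the classic rational-crime model: an attacker's expected cost is roughly the probability of getting caught times the severity of the punishment. A toy sketch in Python (all numbers are invented purely for illustration):

```python
def expected_cost(p_caught: float, punishment_years: float) -> float:
    """Expected punishment an attacker faces, in prison-years."""
    return p_caught * punishment_years

# Meatspace burglary: a real chance of being caught, moderate sentence.
burglary = expected_cost(p_caught=0.3, punishment_years=5)

# Cross-border ransomware: a harsh sentence on paper, but the attacker
# sits in a non-cooperating jurisdiction, so p_caught is near zero.
ransomware = expected_cost(p_caught=0.001, punishment_years=20)

print(burglary, ransomware)
```

Even a fourfold harsher sentence cannot compensate for a hundredfold drop in the chance of being caught, which is exactly the gap between meatspace and cyberspace deterrence being described.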

> I think what people are missing with all these analogies about burglaries and negligence is the funny difference between cyberspace and meatspace: In cyberspace, your attacker can be anywhere on the planet, located in virtually any jurisdiction, and reliably tracing and attributing attacks is a very difficult task. In meatspace, your attacker must be physically present and is generally obvious and thus vulnerable. This difference has dramatic implications on the ability of the enforcement model to reduce incidence of attacks.

The jurisdiction part is the key here, IMHO.

Sure, the likelihood of reliably tracing a single attack to a crew is very low, but a prolific crew has dozens or even hundreds of attacks going on in parallel, so tracing just one of them should be enough to take them down.

However, most crews live in "bullet proof" jurisdictions, where we cannot reach them.

If this practice of ransomware attacks continues, we really need a better solution. This could go as far as cutting off the "bullet proof" countries from the Internet, if it weren't for China and Russia, which, simply from an economic and political perspective, we cannot disconnect.

I guess diplomatic solutions are needed, as well as investing more in IT security, secure OSS etc.

  • The way you have laid out this problem makes it seem similar to the naval piracy issue in the Age of Exploration. You have small, untraceable actors launching both ad-hoc and privateer-style attacks on large national and corporate entities.

    Everything you suggested seems valid, and as you pointed out both the carrot and the stick are needed. The European powers enlarged their navies to absorb the surplus of unemployed sailors and used the enlarged navies to hunt the remaining pirates. British naval dominance (followed by American naval dominance) is what makes naval piracy comparatively rare today. I reckon a similar strategy would work digitally (put the best talent in golden handcuffs and hunt down the rest), but I'm not sure anyone has the resources, political will and the national interest right now.

    • > but I'm not sure anyone has the resources, political will and the national interest right now.

      Well, this will change if/when ransomware attacks become a big enough issue to noticeably impact the economy, health care, or something else that politicians and voters care about.

      I'm not an IT security expert, but I do think we are now observing an increased industrialization of ransomware. Some crews specialize in initial attack vectors, and sell them to others who specialize in the lateral movement, and then those resell fully compromised systems to specialists that do the actual ransomware and payment.

      If this trend continues, countries will be forced to take this far more seriously than they do now.

      1 reply →

    • Millions for defense, not one cent for tribute. Funny how that makes sense again.

      It even occurs to me that like Tripoli of old, a lot of these "bulletproof" locations have a significant chunk of their economies based around this piracy. Romania's got some towns notorious for this, and India has places where scammy call centers are a way of life for thousands of people.

  • I once saw that the likelihood of a crime is a function of the likelihood of getting caught and the severity of the punishment.

    It's difficult enough getting government departments within the same country to cooperate. Tunneling attacks along several international jurisdictions compounds the problem, especially if the attacker chooses to tunnel through states that are adversaries to the victim nation.

  • Another solution: plugging attack vectors, like users' ability to run arbitrary non-sandboxed binaries. Server-side systems and thin clients are almost bulletproof. No viruses for Chromebooks.

    • How likely do you think this is to make a significant dent in, say, the next 18 months?

      > Server-side systems and thin clients are almost bullet proof.

      Thin clients, maybe. Server-side systems, not so much.

      I remember multiple pre-auth code exec bugs in VPN concentrators and other Internet-facing security appliances this year alone.

      I really want to believe that better security practices can save us all, but somehow I've lost hope during the last 10 to 12 months...

I don't want to start thinking of corporations as moral actors, like humans are. I just want them to be held legally culpable and made accountable when their behavior has negative consequences. The issue of morality is saved for humans.

Fortunately, companies can already be held accountable if their negligence exposes personal data in security breaches.

Likewise, individual members of the corporation can already be held both morally and legally responsible for their personal actions and negligence. That's enough for me; I don't need to try to shame legal constructs too.

Now, whether companies are actually held accountable in practice is a separate issue. Equifax certainly wasn't, not in any way that matters. The same could be said for morally culpable CEOs (or CISOs for that matter). But, that's a question of how the law is applied, not whether our moral stance should be changed.

I’ve always thought along similar lines. What bothers me is that, if someone were to break into your home, there is risk to them because you are allowed to defend yourself by fighting back.

As far as I know, we aren’t allowed to counterattack cyber attackers, so our only option is better defenses and then handing things off to the authorities. I used to work for a smaller eBay-for-a-niche-market type site, and dealing with fraud was our biggest issue.

We tracked fraud ourselves and even managed to send a delivery to a PO Box used by someone who had swindled customers out of thousands of dollars. We contacted the authorities, told them everything and exactly where the criminal would be.

They did nothing.

If we aren’t allowed to fight back and the authorities won’t do anything, what deterrent is there?

  • Parent comment is saying that the deterrent could be how difficult you make it to hack you.

    A case where the best, and possibly only, offense is a good defence.

    • > But legally, we must hold accountable organizations who are breached

      Parent comment is also insisting on the immature idea that we must generically hold organizations accountable when breached. I say immature because this idea keeps popping up from people who haven't yet realized that it's been debated repeatedly in the past, and it didn't get applied so generically for good reason.

      There are so many nuances OP has ignored, and so many ways this is not only impossible but also a bad way of dealing with the situation. When a private citizen gets breached due to an insecure ISP router, is it just the ISP to blame, or also the user for not buying a better one even though the ISP allowed it? Who's responsible when a company user gets tricked by phishing even after the mandated training? Is the user personally liable for the breach? When a company Linux server vulnerability is exploited, who gets the blame? The user? The admin? The distro maintainer? The developer who pushed the code? This would kick OSS software to the curb, because most of it does not have an "organization" behind it to take the blame for every vulnerability.

      Organizations will be breached. Most of them can't even afford defenses that an averagely determined attacker could get past. Where do you draw the line between who's to blame, attacker or victim? With real-world crime we did a good job of fine-tuning that threshold over centuries.

      Best you can do (and we should do) is come up with a set of rules, regulations, and best practices that are enforced by law, and I think this is coming one way or another. For example "patch any CVSS 9 or higher within 14 days of publishing", "implement 2FA for x and y access". But even these rules will always be behind the times and never enough to thwart attacks. It raises the bar for a successful attack and creates a clearer (not clear) threshold for responsibility.

      Sure, some cases are clear cut, you haven't patched for 2 years and have no leg to stand on. But the solution is certainly not blanket blaming the victim because you can fit it in an HN comment.

    • The crims have obviously worked out that it's much easier to subvert the "users" rather than have a head-to-head battle with IT. If a user (even a careful one) clicks on a link in an email, should they actually be held responsible for what follows, or is it the fault of IT/Security whose security setup allowed an email with a dubious attachment to make it through to the user?

      I know many intelligent, conscientious, non-techy users who'd be mortified to think they enabled a ransomware attack - but is it their fault?

  • “there is risk to them because you are allowed to defend yourself by fighting back”

    This depends a lot on what jurisdiction you live in.
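The "rules enforced by law" idea above (e.g. "patch any CVSS 9 or higher within 14 days of publishing") has the virtue of being mechanically checkable. A minimal sketch of such a compliance check, with a hypothetical finding list (the data and threshold are illustrative, not taken from any real regulation):

```python
from datetime import date

# Hypothetical inventory: (CVE id, CVSS score, published, patched date or None)
findings = [
    ("CVE-2021-0001", 9.8, date(2021, 1, 4), date(2021, 1, 10)),  # patched in time
    ("CVE-2021-0002", 9.1, date(2021, 1, 4), None),               # still open
    ("CVE-2021-0003", 6.5, date(2021, 1, 4), None),               # below the bar
]

def violations(findings, today, min_score=9.0, max_days=14):
    """Return CVEs above the severity bar that were not patched in time."""
    out = []
    for cve, score, published, patched in findings:
        if score < min_score:
            continue  # rule only applies above the severity threshold
        if patched is None:
            if (today - published).days > max_days:
                out.append(cve)  # deadline passed, still unpatched
        elif (patched - published).days > max_days:
            out.append(cve)      # patched, but too late
    return out

print(violations(findings, today=date(2021, 2, 1)))  # ['CVE-2021-0002']
```

A check like this draws the "clearer (not clear) threshold for responsibility" the comment asks for: it is objective and auditable, even though it can never capture every nuance of a breach.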

I partially agree, but think it depends on the nature of the attack and the types of security procedures/protections that were already in place.

For example, consider seat belts. If you don't wear one and you are involved in a crash, there is a serious likelihood you will die or be seriously injured. Hence we make it the driver's responsibility to ensure passengers wear their seat belts. Now, if everybody was wearing their seat belts, the car was serviced, there were airbags, etc., but out of nowhere a tree hit the car, should we hold the driver accountable for not having installed a cutting-edge anti-tree device on their vehicle? Of course not!

Unfortunately, defenders are always on the back foot. You can have the best security posture and still fall to a zero day. We need a nuanced policy in place which blames victims that have no security posture whatsoever, but properly assigns responsibility to the attacker when the victim did everything they reasonably could. Defining "reasonably could" is the very challenging part.

  • I think you are mistaking safety for security. The whole discussion is about security - preventing attacks from attackers on purpose. What you described here in the seat belt example is safety - preventing accidents that happen without intention/malice.

Overall, it’s a good thing to encourage obligations of organizations to be diligent about cyber security.

However, I think comparing cyberspace attacks to meatspace burglaries (in the not-Ocean’s 11 sense) and negligence is an unfair comparison.

It’s like a cat and mouse game in actuality. Even with good defenses, determined attackers could still keep banging at the gates trying to get in. There are also attackers that have a good deal of sophistication and ‘cyber arsenals’ to go after these bigger orgs - including nation-states and large crime rings.

In a meatspace analogy: If someone owned a staffing agency, they might require employee ID badges, set 2FA, and have cameras in a building... but probably have no contingency plans for the Russian government attacking them or criminals with a wrecking ball smashing through the walls.

It's similar to the difference between drone warfare and 'normal' warfare and this is one of the reasons why drone warfare worries me quite a bit.

> In meatspace, assigning 100% of the burden of blame to the attacker and absolving the victim of any blame at all agrees with our ideas of morality

There's a lot of victim blaming that goes on for physical/non-cyber attacks of all types because of people's ideas about morality (some valid, some not); where the victim of the attack is generally responsible to a third party for the care of the object of the attack, that also extends into the legal system (mostly validly).

While you argue that there must be some duty to protect online data as well as an obligation not to attack, the distinction between “meatspace” and “cyberspace” you are drawing on this topic seems specious and ill-informed about the way society in general, and law in particular, handles responsibilities outside of cyberspace.

This. I was not expecting the HN crowd to almost universally blame the attackers and fully absolve Funke. It just doesn't make any sense if you have the faintest idea about cyber security in the modern age.

  • With physical security I can walk around and check it for myself. I can even watch the contractors put it in place. There are several people involved that can spot mistakes.

    With cyber security I need to trust that some programmer didn't make a mistake 15 years ago when they wrote the TCP stack in a 12 hour crunch shift because their boss needed to meet a deadline. It's impossible to check for the layman and extremely hard even for experts.

    • This is a great comparison!

      With physical security, you need to trust that the lock designers and manufacturers didn't make material mistakes. It is impossible to check for the layman and extremely hard even for experts. You can watch people install it, but that only offers so much assurance and is limited mostly to their expertise in installation. Further, we know that any lock can be bypassed given enough effort, so we have insurance against theft and maybe additional layers of security (cameras, a fence, watchful neighbors, etc.).

      With cyber security your position is similar. You're working with a series of tools, none of which you can trust completely, and most of which have limitations or flaws. You layer them with the goal of raising the effort required to breach all your defenses until it is too high for your adversaries to want to take on.

      In both security domains, the basic positions are the same. Non-experts need to layer imperfect defensive systems atop one another to make successful attacks more difficult to achieve. Risk assessments play an important role in helping people decide how much is enough.

      2 replies →

    • I still blame the company in the second scenario. Pay a multiple for a secure setup or don't store data, even if that means funding new development when no secure solutions exist. I would like people to take user data so seriously that they would go so far as to develop a new operating system to securely handle it. That should be the burden we put on companies that want to collect data on people.

  • I think there's a strong incentive for a lot of small-business people and software engineers alike to wholly blame attackers. If it's the attacker's fault, you don't have to wonder if your insurance is good enough. You don't have to examine whether you keep your software sufficiently patched. You don't have to examine whether your company's custom internal infrastructure is resilient or whether it's one giant shared CIFS drive full of sensitive customer data without backups.

    Often, taking security seriously feels like directing a certain amount of resources for uncertain returns at a domain that feels like it should come for free. Software engineering feels like it is like manufacturing, where you produce artifacts and ship them. It's jarring to recontextualize this as actively engaging in an adversarial, human-driven domain.

    Between the two, our fellow users are heavily incentivized to find ways that they and people like them are blameless. It's a way to avoid engaging with what can feel like an impossible problem. Without attackers there wouldn't be any cybersecurity issues, right?

  • well just previously we had a story where a company was taken to task for how they implemented a test of cyber security, using an email that promised bonus money or some such.

    such is the issue at hand: the attackers know no bounds, and it will take coordination among governments to track them down and hold them or their masters accountable.

    this does not excuse the victims of such attacks, but even the best efforts of many can be circumvented by the latest method, a careless employee, or even a malicious one.

    I am sure many have experienced having access we routinely expected get yanked, which felt unfair, but have also been on the other side of the issue, trying to lock down users only to get pushback that we went too far; the heartache our support team got over locking down what users could do on their desktops could fill novels.
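The layered-security comparison above can be put in toy-model terms: if each defensive layer independently stops some fraction of attacks, the chance of an attacker getting through all of them shrinks multiplicatively. A sketch (the probabilities are invented, and real layers are rarely independent, so treat this as intuition only):

```python
def breach_probability(bypass_probs):
    """Chance an attacker bypasses every layer, assuming independence."""
    p = 1.0
    for bypass in bypass_probs:
        p *= bypass  # each layer must be bypassed in turn
    return p

# e.g. lock, camera, alarm: each bypassed half the time in isolation,
# yet together they stop 7 out of 8 attempts
print(breach_probability([0.5, 0.5, 0.5]))  # 0.125
```

This is why imperfect layers are still worth stacking, and why risk assessment is about how many layers are enough rather than finding one perfect defense.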

>There is little deterrence to would-be attackers, especially state-sponsored attackers

At least with state-sponsored attacks there is theoretically the option of striking back, although, speaking as a German citizen, we've sadly neglected both our defensive and offensive capacities, and the national infrastructure is simply not up to the task.

  • That's not 'sadly', that is a direct consequence of the aftermath of World War II when the German offensive capability was purposefully reduced.

    And to be fair: this is what allowed Germany to quickly re-emerge as the economic powerhouse of Europe. Without having to spend a fortune on defense, and with the Marshall Plan plus a lot of knowledge about electronics and mechanization, what may look like a disadvantage to you today was historically a huge advantage.

      tbh I don't think that's a reason any more. It's not like anyone would go "oh no, the Germans are at it again" if we could actually repel and deter Russian or Iranian attacks in cyberspace. In fact, our allies have been asking us to build up our capacities, both analog and digital, for a long time now. It's much more mundane: politics just doesn't care. I know a few people who went into IT careers in the military, and it's just bad on all fronts. Pay is bad, they recruit the absolute bottom of the barrel, the resources aren't there, and the infrastructure is neglected.

      In the US or Israel a lot of highly qualified people go into military service and it's a priority, here it's a third wheel.

      1 reply →

I think there's a legal analogy to be made with vehicular liability insurance. By choosing to operate a car, you're putting others at some small risk, and you're therefore required to hedge against the scenario where that impacts someone else.

> In contrast, in cyberspace, the chance of holding attackers accountable is much lower.

Yes, because this is still a new and partially unregulated space. But deterrence is growing slowly but surely, for both individuals and states.