
Comment by TeMPOraL

8 months ago

My go-to example of a whole mesh of "accountability sinks" is... cybersecurity. In the real world, this field is really not about the tech and math and crypto - almost all of it is about distributing and dispersing liability through contractual means.

That's why you install endpoint security tools. That's why you're forced to fulfill all kinds of requirements, some of them nonsensical or counterproductive, but necessary to check boxes on a compliance checklist. That's why you have external auditors come to check whether you really check those boxes. It's all so that, when something happens - because something will eventually happen - you can point back to all these measures, and say: "we've implemented all best practices, contracted out the hard parts to world-renowned experts, and had third party audits to verify that - there was nothing more we could do, therefore it's not our fault".

With that in mind, look at the world from the perspective of some corporations, B2B companies selling to those corporations, other suppliers, etc.; notice how e.g. smaller companies are forced to adhere to certain standards of practice to even be considered by the larger ones, etc. It all creates a mesh, through which liability for anything is dispersed, so that ultimately no one is to blame, everyone provably did their best, and the only thing that happens is that some corporate insurance policies get liquidated, and affected customers get a complimentary free credit check or some other nonsense.

I'm not even saying this is bad, per se - there are plenty of situations where discharging all liability through insurance is the best thing to do; see e.g. how maritime shipping handles accidents at sea. It's just that understanding this explains a lot of paradoxes of cybersecurity as a field. It all makes much more sense when you realize it's primarily about liability management, not about hat-wearing hackers fighting other hackers with differently colored hats.

> "we've implemented all best practices, contracted out the hard parts to world-renowned experts, and had third party audits to verify that - there was nothing more we could do, therefore it's not our fault"

The amount of (useless) processes/systems at banks I've seen in my career that boil down to this is incredible, e.g. hundreds of millions spent on call center tech for authentication that might do nothing, but the vendor is "industry-leading" and "best in-class".

> It's just that understanding this explains a lot of paradoxes of cybersecurity as a field. It all makes much more sense when you realize it's primarily about liability management, not about hat-wearing hackers fighting other hackers with differently colored hats.

Bingo. The same situation holds for most risk departments at banks, and for healthcare fraud and insurance companies.

I thought risk at a bank was going to be savvy quants, but it's literally lawyers/compliance/box-checking marketing themselves as more sophisticated than they are. Like the KYC review for products never actually follows up and checks whether the KYC process in the new products works. There's no analytics, tracking, etc. until audit/regulators come in and ask - at which point the answer is "our best-in-class vendor handles this". All the systems are implemented incorrectly, but it doesn't matter because the system is built by a vendor and implemented by consultants, and they hold the liability (they don't, but it will take ~5 years in court to get to that point).

Beginning to understand what "bureaucracy" mechanically is.

  • The fun part of bank bureaucracy is you get to experience it 10x worse if you actually work at one.

    I once worked on a global, cross-asset application. The change management process was not designed for this and essentially required like 9 Managing Directors to click "approve release" in a 48 hour window for us to do a release.

    We got one shot at this per week, and if any approval was missed we would have to try again the next week. The electronic form itself to trigger the process took 1-2 hours to fill out, and we had 3 guys on the team who were really good at it (it took everyone else 2x as long).

    Inevitably this had at least 3 very stupid outcomes -

    First, we had tons of delayed releases. Second, the majority of releases became "emergency releases", in which we were able to forgo the majority of the process and just... file the paperwork after the fact.

    Finally, we instructed staff in each region to literally go stand in the required MD delegate's office (of course the MD wouldn't actually click) until they clicked. The conversations usually went something like this: "I don't know what this is / fine, fine, you aren't gonna leave, I'll approve it if you say it won't break anything / ok, don't screw up"

  • What's funny is that checklists in hospitals have been shown, empirically, to be massive life-saving devices.

    cyber perhaps not so much...

    • Checklists solve the problem of forgetting specific details. They work very well in situations where all possible problems have been enumerated and the only failure mode is forgetting to check for one.

      They do not solve the problem of getting people to think things through and recognize novel issues.

      There are some jobs you can't do well. You can do them adequately or screw them up. Checklists are helpful in those jobs.

    • Checklists work well in high stress situations where you cannot forget a step (medicine, aviation).

      A checklist in a security incident? Probably helpful.

      A security checklist to satisfy auditors and ancient regulations? This is an entirely different kind.


    • Checklists are a good tool for making sure you don't forget something. They're a terrible replacement for actually thinking.

Security is closer to product management and marketing than engineering. It's a narrative and the mirror image of product and marketing, where instead of creating something people want based on desire, it's managing the things people explicitly don't want. When organizations don't have product management, they have anti-product management, which is security. We could say, "There is no Anti-Product Division."

Specifically on accountability, I bootstrapped a security product that replaced 6-week+ risk assessment consultant spreadsheets with 20mins of product manager/eng conversation. It shifted the accountability "left" as it were.

When I pitched it to some banks, one of the lead security guys took me aside and said something to the effect of, "You don't get it. We don't want to find risk ourselves; we pay the people to tell us what the risks and solutions are because they are someone else. It doesn't matter what they say we should do, the real risk is transferred to their E&O insurance as soon as they tell us anything. By showing us the risks, your product doesn't help us manage risk, it obligates us to build features to mitigate and get rid of it."

I was enlightened. "Manage" means to get value from. The decade I had spent doing security and privacy risk assessments and advocating for accountability for risk had been spent as a dancing monkey.

  • I worked in the GRC space for a while, which is where I finally realized the things I wrote above. Our product was intended to give CISOs greater visibility into threats and their impacts, making it easy to engage in probabilistic forecasting to prioritize mitigations. Working on designing and building it made me see the field from the perspective of our customers, and from their POV, cyber-threats are all denominated in dollars, mitigating threats boils down to not having to pay the corresponding dollars, and it's often more effective to ensure someone else pays than to address the underlying technological or social vulnerability. (A toy sketch of that dollar arithmetic appears after this exchange.)

    • we have close experiences for sure. mine was positioned as pre-GRC, more of a design stage tool. like an aha.io/roadmap.com for security. an early champion kept asking how it got them compliance and what compliance frameworks did it implement. I kept insisting this isn't for compliance, it's product level design for security- and that I wasn't interested in making a compliance tool because compliance is stupid. ironically it was essentially an anti-corporate security product.

      of course security people said, "wat, wut?" and it was because I had made something for what I thought people should do, but not what they wanted. it's funny looking back at it, as I was so burned out and hating the security work I was doing that I just said f'it, and automated it. the biggest conceit (among many) was believing customers would want the results of the risk assessment consulting services I offered if they could do it themselves for 1/100th of the price. the other lesson was, if someone doesn't or won't take accountability for risks, it's almost never because they are dumb.
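
To make the "denominated in dollars" framing above concrete, here is a minimal, hypothetical sketch of the kind of arithmetic such a tool automates. All threat names, probabilities, and dollar figures are invented for illustration; the point is only that mitigations get ranked by expected loss avoided per dollar spent, not by technical severity.

```python
# Toy sketch: prioritizing mitigations by expected annual loss, in dollars.
# All threats, probabilities, and costs below are made up for illustration.
from dataclasses import dataclass


@dataclass
class Threat:
    name: str
    annual_probability: float    # estimated chance of the event in a year
    impact_usd: float            # estimated loss if it happens
    mitigation_cost_usd: float   # cost of actually fixing it
    residual_probability: float  # estimated chance remaining after the fix

    @property
    def expected_loss(self) -> float:
        return self.annual_probability * self.impact_usd

    @property
    def mitigation_roi(self) -> float:
        # Expected dollars of loss avoided per dollar spent on the fix.
        avoided = (self.annual_probability - self.residual_probability) * self.impact_usd
        return avoided / self.mitigation_cost_usd


threats = [
    Threat("Ransomware via phishing", 0.30, 2_000_000, 150_000, 0.10),
    Threat("Customer data breach",    0.05, 5_000_000, 400_000, 0.02),
    Threat("Defaced marketing site",  0.40,    50_000,  80_000, 0.05),
]

# From the buyer's point of view, the ranking that matters is dollars -
# which is also why "let insurance cover it" can beat "fix it".
for t in sorted(threats, key=lambda t: t.mitigation_roi, reverse=True):
    print(f"{t.name:26s} expected loss ${t.expected_loss:>10,.0f}   "
          f"fix ROI {t.mitigation_roi:.2f}x")
```

Once everything is a dollar figure, a mitigation whose ROI comes out below 1.0x is, from this perspective, "better" handled by transferring the liability than by fixing the underlying vulnerability.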

We should really define a new term for such work.

Perhaps "Risk Compliance Security" or "Security Compliance Engineering"

Where "Security Compliance Engineering" is the practice of designing, implementing, and maintaining security controls that satisfy regulatory frameworks, contractual obligations, and insurance requirements. Its primary objective is not to prevent cyberattacks, but to ensure that organizations can demonstrate due diligence, minimize liability, and maintain audit readiness in the event of a security incident.

Key goals:

- Pass external audits and internal reviews

- Align with standards like ISO 27001, SOC 2, or NIST

- Mitigate organizational risk through documentation and attestation

- Enable business continuity via legal defensibility and insurability

In contrast…

Cybersecurity is focused on actively detecting, preventing, and responding to cyber threats. It’s concerned with protecting systems and data, not accountability sinks.

That is also why so much of the security[tm] software is so bad. Usability and fitness for purpose are not boxes anyone needs to tick. The industry term in play is "risk transfer".

Most security software does not do what it advertises, because it doesn't have to. Its primary function is to let those who bought the product blame the vendor. "We paid vendor X a lot of money and transferred the risk to them, this cannot be our fault." Well, guess what? You may not legally be the one holding the bag, but as a business on the other end of the transaction you are still at fault. Those are your customers. You messed up.

As for vendor X? If the incident was big enough, they got free press coverage. The incentives in the industry truly are corrupt.

Disclosure: in the infosec sphere since the early 90's. And as it happens, I did a talk about this state of affairs earlier this week.

The most unfortunate thing about much of corporate 'cybersecurity' is that it combines expensive and encumbering theatre around compliance and deniability... with ridiculously insecure practices.

Imagine, for example, if more companies would hire software developers and production infrastructure experts who build secure systems.

But most don't much care about security: they want their compliances, they may or may not detect and report the inevitable breaches, and the CISO is paid to be the fall-person, because the CEO totally doesn't care.

Now we're getting cottage industries and consortia theatre around things like why something that should be a static HTML Web page is pulling in 200 packages from NPM, and now you need bold third-party solutions to combat all the bad actors and defective code that this invites.

  • > Imagine, for example, if more companies would hire software developers and production infrastructure experts who build secure systems.

    I do imagine that, and they get hacked (because you have to get lucky every time, but the hackers only need to get lucky once), and then the press says "were you doing all the things the whole industry says to do?" and they say "no, but we were actually secure!" and the press goes "well no you weren't, you got hacked, and you weren't even doing the bare minimum!" and then the company is never heard of again.

I wonder what the difference is between cybersecurity and civil aviation safety. At a glance they both have a lot of processes and requirements. Somehow on one side they are, as you said, a way to deal with liability without necessarily increasing security, while on the other safety is actually significantly increased.

  • I think a big part of it is that failures in aviation safety cost lives, often dozens or hundreds per incident, in quite immediate, public and visceral fashion. There also isn't much gradation - an issue either causes massive loss of life, or could cause it if not caught early, or... it's not relevant to safety. On top of that, any incident is hugely impactful on the entire industry - most people are fully aware how likely they'd be to survive a drop from airliner altitude, so it doesn't take many accidents to scare people away from flying in general.

    Contrast that to cybersecurity, where the vast majority of failures have zero impact on life or health of people, directly or otherwise. Even data breaches - millions of passwords leak every other week, yet the impact of this on anyone affected is... nil. Yes, theoretically cyberattacks could collapse countries and cause millions to die if they affected critical infrastructure, but so far this has never happened, and it's not what your regular cybersecurity specialist deals with. In reality, approximately all impact of all cyberattacks is purely monetary - as long as it isn't loss of life or limb, it can be papered over with enough dollars, which makes everyone focus primarily on ensuring they're not the ones paying for it.

    I think it's also interesting to compare both to road safety - it sits kind of in between on the "safety vs. theater" spectrum, and has the blend of both approaches, and both outcomes.

    • > I think a big part of it is that failures in aviation safety cost lives

      This is an interesting point, and it certainly affects the incentives involved and the amount of resources allocated to mitigating the problems.

      I do think cyber security incidents with real consequences are likely to become more common going forward (infrastructure etc). We haven't experienced large state actors being malicious on a wartime footing (yet).

      Will we be able to better mitigate attacks given better incentives? I think that is an open question. We will certainly throw more resources at the problem, and we will weight outcomes more heavily when designing processes, but whether we know how to prevent cybersecurity incidents even if we really want to... that I wonder about.

    • I do think you're broadly right -- the lack of immediate and obvious impact creates a perception that there is no impact. But even your first example -- data breaches -- does have an impact. It might not have happened to you, it might not have happened to me, but people do get their identities stolen, and recovering from that is a nightmare. And nobody is going to 'paper over' John Doe's missing retirement fund or ruined credit score, that harm is permanent.

      > this never happened

      This is also wrong. Russia has employed cyberwarfare against Ukraine multiple times -- e.g. in 2016 when they took down large chunks of the grid for an hour, or more pointedly in 2022 when it was used to disrupt infrastructure and digital operations across the country as part of an invasion. Stuxnet and Triton were also pretty serious -- unlikely to kill millions, but they did have a real effect. If you're bringing this up to explain why people don't care as much as they should, then I agree -- but I would think that it's misguided to suggest that "this has never happened" actually implies that it never will. It took 20 years after the advent of commercial airlines for someone to bomb one, but clearly that is now a major and continuing concern.

  • Aviation safety is mostly about learning from past experience. You mitigate known hazards that, once mitigated, stay mitigated.

    Cybersecurity is about adversarial hazards. When you mitigate them, they actively try to unmitigate themselves.

    It is more analogous to TSA security checks than to FAA equipment checklists. The checklist approach can prevent copycats from repeating past exploits but is largely useless for preventing new and creative problems.

  • There are a lot fewer aircraft models as well. About 17 in current production (although more variants), 3 in planning, 26 "out-of-production", and some more historical ones.

    In the end there is just not that many products overall.

    Now compare that to the amount of software being worked on. And the number of companies involved, just counting those buying bespoke software or developing it for their own use...

Honestly, it is just like insurance. You understand the value of the things you are protecting (and simple compliance has a value to you, in penalties and liabilities avoided) and make sure it costs more than that to break into your system.

At a corporate level, it is contractually almost identical to insurance: what is being sold is liability for that security, not the security itself.

  • Right. I sometimes call it meta-level insurance, because that's structurally what it is. Funnily, actual insurance is a critical part of it - it's the ultimate liability sink, discharging whatever liability didn't get diluted and diffused among all relevant parties.

    And, I guess it's fine - it's the general way of dealing with impact that can be fully converted into dollars (i.e. that doesn't cause loss of life or health).

    • It’s really not fine. Expensive and useless security theater isn’t just inefficient and corrupt, it’s way more actively harmful than that because there’s a huge opportunity cost associated with all the wasted time and money AND the incentivized deliberate refusal to make obviously good/easy/cheap improvements. Even in matters pertaining purely to dollars.. Spreading out liability can’t erase injury completely. it just pushes it onto the tax payer because someone is paying the judge to sit in the chair and listen to the insurance people and the lawyers.

Rhyming with this observation - the only time I've ever heard of someone getting fired over a phishing incident anywhere I've worked... was a guy on the cybersecurity team who clicked through and got phished.

+1 Insightful

Thank you for sharing this really illuminating take. I spend an unreasonable amount of time dealing with software security, and you've put things in a light where it makes a bit more sense.

This is the ultimate nihilistic take on security.

Yes, 'cyber' security has devolved to box checking and cargo culting in many orgs. But what's your counter on trying to fix the problems that every tech stack or new SaaS product comes with out of the box?

For most people when their Netflix (or HN) password gets leaked that means every email they've sent since 2004 is also exposed. It might also mean their 401k is siphoned off. So welcome the annoying and checkbox-y MFA requirements.

If you're an engineer cutting code for a YC startup -- Who owns the dependency you just pulled in? Are you or your team going to track changes (and security bugs) for it in 6 months? What about in 2 or 3 years?

Yes, 'cyber' security brings a lot of annoying checkboxes. But almost all of them are due to externalities that you'd happily blow past otherwise. So -- how do we get rid of annoying checkboxes and ensure people do the right thing as a matter of course?

  • Actual accountability. Do not let companies be like "Well, we were SOC 2 compliant, this breach is not our fault despite not updating Apache Struts! Tee hee." When Equifax got away with what was InfoSec murder with a 6-month suspended jail sentence, executives stopped caring. This is a political problem, not a technology one.

    >So -- how do we get rid annoying checkboxes and ensure people do the right thing as a matter of course?

    By actually having the power to enforce this: if you pull our SBOM, realize we have a vulnerability (a toy sketch of that check appears after this sub-thread), then get our Product Owner to prioritize fixing it, even if it takes 6 weeks because we did a dumb thing 2 years ago and the tech debt bill has come due. Otherwise, stop wasting my time with these exercises, I have work to do.

    Not trying to be mean, but that's my take with my infosec team right now. You are powerless outside of your ability to get us SOC 2, and we all know this is theater, so tell us what you want from me, take it, and go away.

    • It's a two-sided coin though.

      We should be stopping leaks, but we also need to reduce the value of leaked data.

      Identity theft doesn't get meaningfully prosecuted. Occasionally they'll go after some guy who runs a carding forum or someone who did a really splashy compromise, but the overall risk is low for most fraudulent players.

      I always wanted a regulation that if you want to apply for credit, you have to show up in person and get photographed and fingerprinted. That way, the moment someone notices their SSN was misused, they have all the information on file to make a slam-dunk case against the culprit. It could be an easier deal for lazy cops than going after minor traffic infractions.

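
Picking up the SBOM point from a couple of comments up: here is a minimal, hypothetical sketch of what "pull our SBOM, realize we have a vulnerability" looks like mechanically. The SBOM snippet and the advisory mapping are simplified stand-ins (the Struts and Log4j CVE identifiers are real); an actual check would consume a full CycloneDX or SPDX document and a live vulnerability feed such as OSV or the NVD.

```python
# Hypothetical sketch: flag SBOM components with known-vulnerable versions.
# The SBOM and advisory data below are simplified for illustration.
import json

sbom_json = """
{
  "components": [
    {"name": "struts2-core", "version": "2.3.31"},
    {"name": "log4j-core",   "version": "2.17.1"},
    {"name": "openssl",      "version": "3.0.7"}
  ]
}
"""

# (name, version) pairs known to be affected, mapped to an advisory ID.
known_bad = {
    ("struts2-core", "2.3.31"): "CVE-2017-5638",
    ("log4j-core", "2.14.1"): "CVE-2021-44228",
}

sbom = json.loads(sbom_json)
findings = [
    (c["name"], c["version"], known_bad[(c["name"], c["version"])])
    for c in sbom["components"]
    if (c["name"], c["version"]) in known_bad
]

for name, version, advisory in findings:
    # Each finding is where the organizational part starts: someone has to
    # own it, schedule it, and pay for it.
    print(f"{name} {version} is affected by {advisory} and needs a prioritized fix")
```

The lookup itself is trivial; the point above is that everything after the print statement - ownership, prioritization, and the weeks of remediation work - is what nobody wants to be accountable for.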

  • > For most people when their Netflix (or HN) password gets leaked that means every email they've sent since 2004 is also exposed. It might also mean their 401k is siphoned off. So welcome the annoying and checkbox-y MFA requirements.

    Not true. For most people, when their Netflix or HN password gets leaked, that means fuck all. Most people don't even realize their password was leaked 20 times over the last 5 years. Yes, here and there someone might get deprived of their savings (or marriage) this way, but at scale, approximately nothing ever happens to anyone because of password or SSN leaks. In scope of cybersec threats, people are much more likely to become victims of ransomware and tech support call scams.

    I'm not saying that cybersec is entirely meaningless and that you shouldn't care about security of your products. I'm saying that, as a field, it's focused on liability management, because that's what most customers care about, pay for, and it's where the most damage actually manifests. As such, to create secure information systems, you often need to work against the zeitgeist and recommendations of the field.

    EDIT:

    > This is the ultimate nihilistic take on security.

    I don't believe it is. In fact, I've been putting in effort to become less cynical over the last few months, as I realized it's not a helpful outlook.

    It's more like, techies in cybersecurity seem to have an overinflated sense of the uniqueness and importance of their work. The reality is, it's almost all about liability management - and it is so precisely because most cybersec problems are nothingburgers that can be passed around like a hot potato and ultimately discharged through insurance. It's not the worst state of things - it would be much worse if a typical cyberattack actually hurt or killed people.

    • This really resonated with me because I'm also working to avoid becoming more cynical as I gain experience and perspective on what problems "matter" and what solutions can gain traction.

      I think in this case the cognitive dissonance comes from security-minded software engineers (especially the vocal ones that would chime in on such a topic) misunderstanding how rare their expertise is as well as the raw scope of risks that large corporations are exposed to and what mitigations are sensible. If you are an expert it's easy to point at security compliance implementation at almost any company and poke all kinds of holes in specific details, but that's useless if you can't handle the larger problem of cybersecurity management and the fallout from a mistake.

      And if you zoom out, you realize the scope of risk introduced by the internet, smartphones, and everything doing everything online all the time is unfathomably huge. It's not something that an engineering mentality of understanding intricate details and mechanics can really get one's head around. From this perspective, liability and insurance is a very rational way to handle it.

      As far as the checklists go, if you are an expert you can peel back the layers, realize the rationales for these things, and adjust accordingly. If you have competent and reasonable management and decision makers then things tend to go smoothly, and ultimately auditors are paid by the company, so there is typically a path to doing the right thing. If you don't have competent and reasonable management then you're probably fucked in innumerable ways, such that security theater is the least of your worries.