Comment by chillfox
1 day ago
The requirements usually don’t come from IT.
It’s usually on the checklist for some audit that the organisation wants because it lowers insurance premiums or credit card processing fees. In some cases it’s because an executive believes it will be good evidence for them having done everything right in case of a breach.
Point being, the people implementing it usually know it's a bad idea, and so do the people asking for it. But politics and incentives are aligned such that it's safer for the individuals to just go along with it.
I belonged to an organization that had password complexity requirements. That's normal and understandable. However, one requirement was that no part of my password could contain a three-character substring that was included in my full name. I won't give my real name here, but sadly it includes some three-letter substrings that are somewhat common in many English words. I can understand a policy that prevents someone from using "matthew1234" as Matthew Smith's password, but this rule also prevents such a person from using "correcthorsebatterystaple" because it has 'att' in it.
Turns out, this rule was not from IT. It was a requirement from the cybersecurity insurance policy the organization had taken.
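A rule like the one described above could be sketched roughly as follows. This is a hypothetical reconstruction of the check, not the organization's actual implementation; the function name, case-insensitivity, and per-word handling of the name are all assumptions.

```python
def violates_name_rule(password: str, full_name: str) -> bool:
    """Hypothetical check mirroring the rule described above:
    reject a password if any three-character substring of any word
    in the user's full name appears in it (case-insensitive)."""
    words = full_name.lower().split()
    # Every 3-character window of each name word is a forbidden substring.
    trigrams = {w[i:i + 3] for w in words for i in range(len(w) - 2)}
    pw = password.lower()
    return any(t in pw for t in trigrams)

# "correcthorsebatterystaple" contains "att", a trigram of "Matthew".
print(violates_name_rule("correcthorsebatterystaple", "Matthew Smith"))  # True
print(violates_name_rule("matthew1234", "Matthew Smith"))                # True
```

The problem is visible in the trigram set: short, common fragments like "att" or "the" match huge swaths of ordinary English, so the rule rejects far more strong passwords than weak ones.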
> Turns out, this rule was not from IT. It was a requirement from the cybersecurity insurance policy the organization had taken.
I wonder if some of these constraints are to try to find a way not to pay out on the policy.
It absolutely was/is.
To bastardize Douglas Adams: For-profit insurance is a scam; breach insurance, doubly so.
Just an unbreakable law of the universe.
"Why did this stupid shit happen? Oh, it's money again."
It's not money but the inertia of very large systems. All these password changes cost money as well. If anything, it's a market failure that insurance companies seem to have too little incentive to update their security requirements. This would likely be solved by reducing the friction of both evaluating insurers in detail and switching between them.
It's also a sort of moral hazard problem.
If you, the person in charge of these decisions, allow an incumbent policy - even a bad one - to stand, then if something goes wrong you can blame the policy. If you change the policy, though, then you're at risk of being held personally responsible if something goes wrong. Even if the change isn't related to the problem.
It's not just cybersecurity. I have a family member who was a medical director, and ran up against it whenever he wanted to update hospital policies and standards of care to reflect new findings. Legal would throw a shitfit about it every time. With the way tort law in the US works, the solution to the trolley problem is always "don't throw the switch" because as soon as you touch it you're involved and can be held responsible for what happens.
Not money; incentives