
Comment by matt_heimer

1 day ago

The people configuring WAF rules at CDNs tend to do a poor job understanding sites and services that discuss technical content. It's not just Cloudflare, Akamai has the same problem.

If your site discusses databases then turning on the default SQL injection attack prevention rules will break your site. And there is another ruleset for file inclusion where things like /etc/hosts and /etc/passwd get blocked.

I disagree with other posts here: it is partially a balance between security and usability. You never know what service was implemented with possible security exploits, and being able to throw every WAF rule on top of your service does keep it more secure. It's just that those same rulesets are super annoying when you have a securely implemented service which needs to discuss technical concepts.

Fine-tuning the rules is time-consuming. You often have to just completely turn off the ruleset, because when you try to keep the ruleset on and allow the use case there are a ton of changes you need to get implemented (if it's even possible). Page won't load because /etc/hosts was in a query param? Okay, now that you've fixed that, all the XHR-included resources won't load because /etc/hosts is included in the referrer. Now that that's fixed, things still won't work because some random JS analytics lib put the URL visited in a cookie, etc, etc... There is a temptation to just turn the rules off.
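
To make the whack-a-mole concrete, here is a rough sketch (Python; the patterns are made up for illustration and are not any vendor's actual rules) of why these rulesets keep firing: the same pattern is matched against every part of the request, so each fix just moves the false positive somewhere else.

    import re

    # Toy patterns in the spirit of LFI/SQLi rulesets -- not any vendor's actual rules.
    LFI = re.compile(r"/etc/(hosts|passwd|shadow)", re.IGNORECASE)
    SQLI = re.compile(r"\b(union\s+select|insert\s+into|drop\s+table)\b", re.IGNORECASE)

    def waf_hits(request_parts: dict) -> list:
        """Return which parts of a request a naive ruleset would flag."""
        return [name for name, text in request_parts.items()
                if LFI.search(text) or SQLI.search(text)]

    # A perfectly innocent request from a blog post about Linux administration:
    request_parts = {
        "query":   "highlight=/etc/hosts",
        "referer": "https://blog.example/editing-your-hosts-file?highlight=/etc/hosts",
        "cookie":  "last_page=/posts/editing-/etc/hosts",
        "body":    "To override DNS locally, edit /etc/hosts ...",
    }

    print(waf_hits(request_parts))
    # ['query', 'referer', 'cookie', 'body'] -- fix one spot and the next one blocks you.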

> I disagree with other posts here: it is partially a balance between security and usability.

And economics. Many people here are blaming incompetent security teams and app developers, but a lot of seemingly dumb security policies are due to insurers. If an insurer says "we're going to jack up premiums by 20% unless you force employees to change their password once every 90 days", you can argue till you're blue in the face that it's bad practice, that NIST changed its guidance years ago to recommend against regular password rotation, etc., and be totally correct... but they're still going to jack up premiums if you don't do it. So you dejectedly sigh, implement a password expiration policy, and listen to grumbling employees who call you incompetent.

It's been a while since I've been through a process like this, but given how infamous log4shell became, it wouldn't surprise me if insurers are now also making it mandatory that common "hacking strings" like /etc/hosts, /etc/passwd, jndi:, and friends must be rejected by servers.

  • Not just economics, audit processes also really encourage adopting large rulesets wholesale.

    We're SOC2 + HIPAA compliant, which either means convincing the auditor that our in-house security rules cover 100% of the cases they care about... or we buy an off-the-shelf WAF that has already completed the compliance process, and call it a day. The CTO is going to pick the second option every time.

    • Yeah. SOC2 reminds me that I didn't mention sales as well, another security-as-economics feature. I've seen a lot of enterprise RFPs that mandate certain security protocols, some of which are perfectly sensible and others... not so much. Usually this is less problematic than insurance because the buyer is more flexible, but sometimes they (specifically, the buyer's company's security team, who has no interest besides covering their own ass) refuse to budge.

      If your startup is on the verge of getting a 6 figure MRR deal with a company, but the company's security team mandates you put in a WAF to "protect their data"... guess you're putting in a WAF, like it or not.

      4 replies →

  • OS-level monitoring / auditing software also never ceases to amaze me (for how awful it is). Multiple times, at multiple companies, I have seen incidents that were caused because Security installed / enabled something (AWS GuardDuty, Auditbeat, CrowdStrike…) that tanked performance. My current place has the latter two on our ProxySQL EC2 nodes. Auditbeat is consuming two logical cores on its own. I haven’t yet been able to quantify the impact of CrowdStrike, but from a recent perf report, it seemed like it was using eBPF to hook into every TCP connection, which is quite a lot for DB connection poolers.

      I understand the need for security tooling, but I don’t think companies often consider the huge performance impact these tools add.

  • I wish IT teams would say "sorry about the password requirement, it's required by our insurance policy". I'd feel a lot less angry about stupid password expiration rules if they told me that.

    • Sometime in the past few years I saw a new wrinkle: password must be changed every 90 days unless it is above a minimum length (12 or so as best I recall) in which case you only need to change it yearly. Since the industry has realized length trumps dumb "complexity" checks, it's a welcome change to see that encoded into policy.

      14 replies →

  • Having worked with PCI-DSS, I've seen that some rules seem to exist only to appease insurance. When criticising decisions, you are told that passing audits so you can claim on the insurance is the whole game, even when you can demonstrate how certain rules can be bypassed in reality. High-level security has more to do with politics (my definition) than purely technical ability. I wouldn't go as far as calling it security theatre; there's too much good stuff there that many wouldn't think about without a handy list, but the game is certainly a lot bigger than just technical skills and hacker vs anti-hacker.

    I still have a nervous tic from having a screen lock timeout "smaller than or equal to 30 seconds".

  • > but a lot of seemingly dumb security policies are due to insurers.

    I keep hearing that often on HN; however, I've personally never seen such demands from insurers. I would greatly appreciate it if someone could share such an insurance policy. Insurance policies are not trade secrets and are OK to be public; I can google plenty of commercial car insurance policies, for example.

    • I found an example!

      https://retail.direct.zurich.ch/resources/definition/product...

      Questionnaire Zurich Cyber Insurance

      Question 4.2: "Do you have a technically enforced password policy that ensures use of strong passwords and that passwords are changed at least quarterly?"

      Since this is an insurance questionnaire, presumably your answers to that question affect the rates you get charged?

      (Found that with the help of o4-mini https://chatgpt.com/share/680bc054-77d8-8006-88a1-a6928ab99a...)

      18 replies →

    • The fun part is that they don't demand anything, they just send you a worksheet that you fill out and presumably it impacts your rates. You just assume that whatever they ask about is what they want. Some of what they suggest is reasonable, like having backups that aren't stored on storage directly coupled to your main environment.

      The worst part about cyber insurance, though, is that as soon as you declare an incident, your computers and cloud accounts now belong to the insurance company until they have their chosen people rummage through everything. Your restoration process is now going to run on their schedule. In other words, the reason the recovery from a crypto-locker attack takes three weeks is because of cyber insurance. And to be fair, they should only have to pay out once for a single incident, so their designated experts get to be careful and meticulous.

    • This is such an important comment.

      Fear of a prospective expectation, compliance, requirement, etc., even when that requirement does not actually exist, is so prevalent in the personality types of software developers.

      1 reply →

    • You can buy insurance for just about anything, not just cars. Companies frequently buy insurance against various low-probability incidents such as loss of use, fraud, lawsuit, etc.

  • There should be some limits and some consequences to the insurer as well. I don't think the insurer is god and should be able to request anything no matter if it makes sense or not and have people and companies comply.

    If anything, I think this attitude is part of the problem. Management, IT security, insurers, governing bodies, they all just impose rules with (sometimes, too often) zero regard for consequences to anyone else. If no pushback mechanism exists against insurer requirements, something is broken.

    • > There should be some limits and some consequences to the insurer as well. I don't think the insurer is god and should be able to request anything no matter if it makes sense or not and have people and companies comply.

      If the insurer requested something unreasonable, you'd go to a different insurer. It's a competitive market after all. But most of the complaints about incompetent security practices boil down to minor nuisances in the grand scheme of things. Forced password changes once every 90 days is dumb and slightly annoying but doesn't significantly impact business operations. Having to run some "enterprise security tool" and go through every false positive result (of which there will be many) and provide an explanation as to why it's a false positive is incredibly annoying and doesn't help your security, but it's also something you could have a $50k/year security intern do. Turning on a WAF that happens to reject the 0.0001% of Substack articles which talk about /etc/hosts isn't going to materially change Substack's revenue this year.

      3 replies →

    • This is why everyone should have a union, including highly paid professionals. Imagine what it would be like. "No, fuck you, we're going on strike until you stop inconveniencing us to death with your braindead security theater. No more code until you give us admin on our own machines, stop wasting our time with useless Checkmarx scans, and bring the firewall down about ten notches."

  • > If an insurer says "we're going to jack up premiums by 20% unless you force employees to change their password once every 90 days", you can argue till you're blue in the face that it's bad practice, that NIST changed its guidance years ago to recommend against regular password rotation, etc., and be totally correct... but they're still going to jack up premiums if you don't do it.

    I would argue that password policies are very context dependent. As much as I detest changing my password every 90 days, I've worked in places where the culture encouraged password sharing. That sharing creates a whole slew of problems. On top of that, removing the requirement to change passwords every 90 days would encourage very few people to select secure passwords, mostly because they prefer convenience and do not understand the risks.

    If you are dealing with an externally facing service where people are willing to choose secure passwords and unwilling to share them, I would agree that regularly changing passwords creates more problems than it solves.

    • > removing the requirement to change passwords every 90 days would encourage very few people to select secure passwords

      When you don’t require them to change it, you can just assign them a random 16 character string and tell them it’s their job to memorize it.

      1 reply →

  • > jack up premiums by 20% unless you force employees to change their password once every 90 days"

    It always made me question why my company's security team enabled this stupidity. Thankfully they got rid of it gradually, nearly 2 years ago now (90 days to 365 days to never). New passwords were just the old one shifted one key left/right/up/down on the keyboard.

    Now I'm thinking maybe this is why the app for a govt savings scheme in my country won't allow password autofill at all. Imagine expecting a new password every 90 days and not allowing autofill; that just makes passwords worse.

  • I believe that these kinds of decisions are mostly downstream of security audits/consultants with varying levels of up-to-date slideshows.

    I believe that this is overall a reasonable approach for companies that are bigger than "the CEO knows everyone and trusted executives are also senior IT/Devs/tech experts" and smaller than "we can spin up an internal security audit using in-house resources".

  • I'm no expert, but I did take a CISSP course a while ago. One thing I actually remember ;P is that it recommended long passwords in lieu of the number, special character, upper, lower... mess. I don't remember the exact wording of course, and maybe it did recommend some of that, but it talked about having a sentence rather than all that mess in 6-8 characters. Many sites still want the short mess that I will never actually remember.

    • While the password recommendation stuff is changing (the US government updated its guidelines last year), it's generally best practice not to share passwords, which itself implies using a password manager anyway, which makes the whole "long passphrase" vs "complex password" debate moot: just generate 32 random lowercase characters to make it easier to type, or use the autogenerated password your password manager recommends.

      The long passphrase is more for the key that unlocks your password manager than for the random passwords you use day to day.
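
      A minimal sketch of the "32 random lowercase characters" suggestion, assuming Python's standard secrets module (the function name is illustrative, not from any particular password manager):

          import secrets
          import string

          def random_lowercase_password(length: int = 32) -> str:
              """Generate an all-lowercase random password.

              log2(26^32) is about 150 bits of entropy, so dropping digits and
              symbols costs nothing that a little extra length doesn't buy back.
              """
              return "".join(secrets.choice(string.ascii_lowercase) for _ in range(length))

          print(random_lowercase_password())  # e.g. 'qkzjafmwpxnbtercyuslhdgoivnremba'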

      2 replies →

  • Worse. If you are not in the USA, i.e., if NIST is not the correct authority, that insurer might actually be enforcing what the "correct" authority believes to be right, i.e., password expiration.

  • Why wouldn't the IT people just tell the grumbling employees that exact explanation?

    • IT doesn't always hear the grumbles, hidden away as they frequently are behind a ticketing system; the help desk technicians who do hear the grumbles aren't always informed of the "why" behind certain policies, and don't have the time or inclination to go look them up if they're even documented; and it's a very unsatisfying answer even if one receives a detailed explanation.

      Information loss is an inherent property of large organizations.

      2 replies →

    • In small orgs that might happen, in large orgs it's some game of telephone where the insurance requirements are forwarded to the security team which makes the policies which are enforced by several layers of compliance which come down on the local IT department.

      The underlying purpose of the rules, and the agency to apply the spirit rather than the letter, gets lost early in the chain, and trying to unwind it can be tedious.

    • If you've read this thread, it would appear that most people here on HN aren't actually involved with policy compliance work dictated from above. Have you ever seen a Show HN dealing with boring business decisions? No. We do, however, get https://daale.club/

      1 reply →

    • In a lot of cases the IT people are just following the rules and don't know this.

  • Maybe it wouldn't make a difference, but if I was the IT person telling users they have to change their passwords every 90 days, I would 100% include a line in the email blaming the insurance company.

    • I'm not in an IT dept (developer instead), but I'd bet money that would get you a thorough dressing down by an executive involved with the insurance. That sort of blaming goes over well with those at the bottom of the hierarchy, and poorly with those at the top.

      2 replies →

    • You would probably have no idea what the requirement actually said or where it ultimately came from.

      It would've gone from the insurer to the legal team, to the GRC team, to the enterprise security team, to the IT engineering team, to the IT support team, and then to the user.

      Steps #1 to #4 can (and do) introduce their own requirements, or interpret other requirements in novel ways, and you'd be #5 in the chain.

"You never know..." is the worst form of security, and makes systems less secure overall. Passwords must be changed every month, just to be safe. They must be 20 alphanumeric characters (with 5 symbols of course), just to be safe. We must pass every 3-letter compliance standard with hundreds of pages of checklists for each. The server must have WAF enabled, because one of the checklists says so.

Ask the CIO what actual threat all this is preventing, and you'll get blank stares.

As an engineer what incentive is there to put effort into knowing where each form input goes and how to sanitize it in a way that makes sense? You are getting paid to check the box and move on, and every new hire quickly realizes that. Organizations like these aren't focused on improving security, they are focused on covering their ass after the breach happens.

  • > Ask the CIO what actual threat all this is preventing

    the CIO is securing his job.

    • > the CIO is securing his job.

      Every CIO I have worked for (where n=3) has gotten where they are because they're a good manager, even though they have near-zero current technical knowledge.

      The fetishizing of "business," in part through MBAs, has been detrimental to actually getting things done.

      A century ago, if someone asked you what you do and you replied, "I'm a businessman. I have a degree in business," you'd get a response somewhere between "Yeah, but what do you actually do?" and outright laughter.

      4 replies →

This looks like a variation of the Scunthorpe problem[1], where a filter is applied too naively, aggressively, and in this case, to the wrong content altogether. Applying the filter to "other stuff" sent to and among the servers might make sense, but there doesn't seem to be any security benefit to filtering actual text payload that's only going to be displayed as blog content. This seems like a pretty cut and dried bug to me.

1: https://en.wikipedia.org/wiki/Scunthorpe_problem

  • > This looks like a variation of the Scunthorpe problem[1], where a filter is applied too naively

    No.

    > aggressively

    No.

    >, and in this case, to the wrong content altogether.

    Yes - making it not a Scunthorpe problem.

  • This is exactly what I was thinking as well; it's a great Scunthorpe example. Nothing from the body of a user article should ever be executed in any way. If blocking a list of strings is providing any security at all, you're already in trouble, because attackers will find a way around that specific block list.

> The people configuring WAF rules at CDNs tend to do a poor job understanding sites and services that discuss technical content

They shouldn't be doing that job at all. The content of user data is none of their business.

I don't get why you'd have SQL injection filtering of input fields at the CDN level. Or any validation of input fields aside from length or maybe some simple type validation (number, date, etc). Your backend should be able to handle arbitrary byte content in input fields. Your backend shouldn't be vulnerable to SQL injection if not for a CDN layer that's doing pre-filtering.
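
A minimal sketch of that point, using Python's built-in sqlite3 and a hypothetical table: with bound parameters, a field containing /etc/hosts or even a classic injection string is just data, so nothing upstream needs to pre-filter it.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE posts (id INTEGER PRIMARY KEY, body TEXT)")

    # Field content that a naive WAF would panic about:
    body = "Edit /etc/hosts to test locally. Robert'); DROP TABLE posts;--"

    # Bound parameters: the driver never splices the value into the SQL text,
    # so the string is stored and retrieved as plain data.
    conn.execute("INSERT INTO posts (body) VALUES (?)", (body,))
    print(conn.execute("SELECT body FROM posts").fetchone()[0])  # the original string, unchanged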

  • A simple reason would be if you're just using it as a proxy signal for bad bots and you want to reduce the load on your real servers and let them get rejected at the CDN level. Obvious SQL injection attempt = must be malicious bot = I don't want my servers wasting their time

    • > A simple reason would be if you're just using it as a proxy signal for bad bots

      Who would be that stupid?

  • It should be thought of as defense-in-depth only. The backend had better be immune to SQL injection, but what if someone (whether in-house or vendor) messes that up?

    I do wish it were possible to write the rules in a more context-sensitive way, perhaps with some standards around payloads (if the WAF knows that an endpoint accepts a specific structured format, and how escapes work in that format, it could relax accordingly). But that's probably a pipe dream. Since the backend could be doing anything, paranoid rulesets have to treat even escaped data as a potential issue, and it's up to users to poke holes.
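
    As a thought experiment only (every name below is hypothetical), such a context-aware check might look something like this: the endpoint declares which fields are display-only free text, and injection patterns are applied only to everything else.

        import json
        import re

        # Hypothetical per-endpoint declaration of display-only free-text fields.
        FREE_TEXT_FIELDS = {"/api/articles": {"title", "body"}}

        SUSPICIOUS = re.compile(r"(/etc/passwd|/etc/hosts|union\s+select)", re.IGNORECASE)

        def flag_fields(path: str, raw_body: bytes) -> list:
            """Flag only fields that are NOT declared as free text for this endpoint."""
            free_text = FREE_TEXT_FIELDS.get(path, set())
            return [field for field, value in json.loads(raw_body).items()
                    if field not in free_text
                    and isinstance(value, str) and SUSPICIOUS.search(value)]

        payload = json.dumps({"title": "Understanding /etc/hosts",
                              "body": "Edit /etc/hosts to override DNS...",
                              "redirect_to": "/etc/passwd"}).encode()
        print(flag_fields("/api/articles", payload))  # ['redirect_to'] -- the blog text passes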

  • The farther a request makes it into infrastructure, the more resources it uses.

  • Because someone said "we need security" and someone else said "what is security" and someone else said "SQL injection is security" and someone looked up SQL injections and saw the words "select" and "insert".

    WAFs are always a bad idea (possible exception: in allow-but-audit mode). If you knew the vulnerabilities you'd protect against them in your application. If you don't know the vulnerabilities all you get is a fuzzy feeling that Someone Else is Taking Care of it, meanwhile the vulnerabilities are still there.

    Maybe that's what companies pay for? The feeling?

    • WAFs can be a useful site of intervention during incidents or when high-severity vulns are first made public. It's not a replacement for fixing the vuln, that still has to happen, but it gives you a place to mitigate it that may be faster or simpler than deploying code changes.
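
      For instance, a temporary "virtual patch" during a log4shell-style incident could be a single edge check that rejects the obvious exploit pattern while the real fix ships. A rough, framework-agnostic sketch (the pattern is deliberately simplified and easy to bypass, which is exactly why it's a stopgap rather than the fix):

          import re

          # Simplified stopgap pattern; real exploit strings have many obfuscated
          # forms, so this only buys time while the vulnerable dependency is patched.
          JNDI = re.compile(r"\$\{jndi:", re.IGNORECASE)

          def should_block(raw_request: bytes) -> bool:
              """Reject the request at the edge if the incident pattern appears anywhere."""
              return bool(JNDI.search(raw_request.decode("utf-8", errors="replace")))

          print(should_block(b"GET /search?q=${jndi:ldap://evil.example/a} HTTP/1.1"))  # True
          print(should_block(b"GET /search?q=log4j+mitigation+guide HTTP/1.1"))         # False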

    • If your clients will let you pass the buck on security like this it would be very tempting to work towards the least onerous insurance metric and no further.

Yup. We're a database company that needs to be compliant with SOC2, and I’ve had extremely long and tiring arguments with our auditor about why we couldn’t adhere to some of these standard WAF rulesets because they broke our site (we allow people to spin up a demo env and trigger queries).

We changed auditors after that.

  • Sounds like your security policy is wrong (or doesn't have a provision for exceptions managed by someone with authority to grant them), or your auditor was swerving out of his lane. As far as I've seen, SOC2 doesn't describe any hard security controls; it just asks you to evaluate your policy versus your implemented controls.

    • You are absolutely correct, which is why we switched auditors. We use a third party to verify compliance of all our cloud resources (SecureFrame), and one of their checks is that specific AWS WAF rulesets are enabled on e.g. CloudFront endpoints. These are managed rulesets by AWS.

      We disabled this check, the auditor swerved out of his lane, I spent several more hours explaining things he didn’t understand, and things were resolved after our CEO had a call with him (you can imagine how the discussion went).

      All in all, if the auditor had been more reasonable it wouldn’t have been an issue, but I’ve always been wary of managed firewall rulesets for this reason.

In my experience, the amount of false-positive pain required to outweigh "WAF is best practice" is just very, very high. Most big businesses would rather lose/frustrate a small percentage of customers to be "safe".

> I disagree with other posts here: it is partially a balance between security and usability. You never know what service was implemented with possible security exploits, and being able to throw every WAF rule on top of your service does keep it more secure. It's just that those same rulesets are super annoying when you have a securely implemented service which needs to discuss technical concepts.

I might be out of the loop here, but it seems to me that any WAF that's triggered when the string "/etc/hosts" is literally anywhere in the content of a requested resource, is pretty obviously broken.

  • I don't think so. This rule, for example, probably blocks attacks on a dozen old WordPress vulnerabilities.

    • And a rule that denies everything blocks all vulnerabilities entirely.

      A false positive from a conservative evaluation of a query parameter or header value is one thing, conceivably understandable. A false positive due to the content of a blog post is something else altogether.

      1 reply →

> The people configuring WAF rules at CDNs tend to do a poor job understanding sites and services that discuss technical content. It's not just Cloudflare, Akamai has the same problem.

I agree. There is a business opportunity here. Right in the middle of your sentences.

Hint: Context-Aware WAF.

Many platforms have emerged in the last decade - some called it a smart WAF, some called it a next-gen WAF. All vaporware garbage that consumes tons and tons of system resources and still manages to do a shit job of _actually_ WAF'ing web requests.

To be truly context-aware, you need a priori knowledge of the situation: the user, the page, the interactions, etc.

This is what surprises me in this story. I could not, at first glance, assume that either Substack people or Cloudflare people were incompetent.

Oh: I resisted tooth and nail against turning on a WAF at one of my gigs (there was no strict requirement for it, just cargo cult). Turns out, I was right.

> There is a temptation to just turn the rules off

Definitely, though I have seen other solutions, like inserting non-printable characters in the problematic strings (e.g. "/etc/ho<b></b>sts" or whatever, you get the idea). And honestly that seems like a reasonable, if somewhat annoying, workaround to me that still retains the protections.
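
A hedged sketch of that kind of workaround, using a zero-width space instead of empty markup (same idea: the rendered text looks right, but a naive substring rule no longer matches). One caveat: copy-paste carries the invisible character along, which is its own annoyance.

    ZWSP = "\u200b"  # zero-width space: invisible when rendered, breaks naive substring matches

    SENSITIVE = ["/etc/hosts", "/etc/passwd", "${jndi:"]

    def defang(text: str) -> str:
        """Insert a zero-width space after the first character of each sensitive string."""
        for s in SENSITIVE:
            text = text.replace(s, s[0] + ZWSP + s[1:])
        return text

    article = "To test DNS overrides locally, edit /etc/hosts and add an entry."
    print(defang(article))                  # renders identically to the original
    print("/etc/hosts" in defang(article))  # False -- the naive filter no longer matches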

  • Another silly workaround would be to take a screenshot of “/etc/hosts” and use images instead. Would break text browsers/reading mode though.

I've had the issue where filling out form fields for some company website triggers a WAF and then nobody in the company is able to connect me to the responsible party who can fix the WAF rules. So I'm just stuck.

There's no "trade-off" here. Blocking IPs that send requests with "1337 h4x0r buzzword /etc/passwd" in them is completely naive and obtrusive, which is the modus operandi of the CDN being discussed here. There are plenty of other ways of hosting a website.

100! [good] security just doesn't work as a mixing pattern... I'm not saying it's necessarily bad to use those additional protections, but they come with severe limitations, so the total value (as in cost/benefit) is hard to gauge.

I agree. From a product perspective, I would also support the decision. Should we make the rules more complex by default, potentially overlooking SQL injection vulnerabilities? Or should we blanket prohibit anything that even remotely resembles SQL, allowing those edge cases to figure it out?

I favor the latter approach. That group of Cloudflare users (the ones accepting SQL in payloads) will understand the complexity of their use case and will be well positioned to modify the default rules. They will know exactly where they want to allow SQL usage.

From Cloudflare’s perspective, it is virtually impossible to reliably cover every conceivable valid use of SQL, and it is likely 99% of websites won’t host SQL content.

  • If your web application is relying on Cloudflare filtration of input values to prevent SQL injection, your web application is vulnerable to SQL injection.

    • Defense in-depth. I would hope few would want a vulnerable web app and simply protect it via a WAF. But just because your web app is 'invulnerable' doesn't mean you should forgo the WAF.

      7 replies →

  • Sorry, we have to reject your comment due to security. The text "Cloudflare<apostrophe>s" is a potential SQL injection.

    • You know, I get the spirit of this criticism. But, especially in the age of AI, we're going to get thousands of barely reviewed websites on Cloudflare.

      If you know what you're doing, turn these protections off. If you don't, there's one less hole out there.

      2 replies →

  • Why not just whitelist the thousand most common words? That should be good enough for 99% of appropriate content, and the smelly nerds who make websites or talk about them can take their tiny market segment and get bent.