Comment by lazide

7 months ago

Though the issue with ‘too many bytes’ limits is that they tend to cause outages later, once time has passed and whatever the common size used to be is now ‘tiny’, like if you’re dealing with images, etc.

Time limits also tend to de facto limit size, if bandwidth is somewhat constrained.

Deliberately denying service in one user flow because technology has evolved is much better than accidentally denying service to everyone because some part of the system misbehaved.

Timeouts and size limits are trivial to update as legitimate need is discovered.
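
Not from the thread, but to make the ‘trivial to update’ point concrete: a minimal sketch of both guards in C, assuming POSIX sockets. The 10 MiB cap, the 30-second timeout, and bounded_read itself are invented for illustration; each limit lives in one constant, so raising it later is a one-line change.

    #include <sys/types.h>
    #include <sys/socket.h>
    #include <sys/time.h>

    #define MAX_BODY_BYTES (10 * 1024 * 1024) /* placeholder cap: 10 MiB */
    #define RECV_TIMEOUT_SEC 30               /* placeholder timeout */

    /* Reads one request body from a connected socket. Returns bytes
     * read, or -1 on timeout or an oversized body: either failure
     * rejects this one request, not the whole service. */
    ssize_t bounded_read(int fd, char buf[MAX_BODY_BYTES])
    {
        /* Time limit: a recv() that stalls past the timeout fails,
         * which also de facto caps size on a slow link. */
        struct timeval tv = { .tv_sec = RECV_TIMEOUT_SEC, .tv_usec = 0 };
        if (setsockopt(fd, SOL_SOCKET, SO_RCVTIMEO, &tv, sizeof tv) != 0)
            return -1;

        size_t total = 0;
        for (;;) {
            if (total == MAX_BODY_BYTES)
                return -1;                 /* size limit: body too large */
            ssize_t n = recv(fd, buf + total, MAX_BODY_BYTES - total, 0);
            if (n < 0)
                return -1;                 /* error, or the recv timed out */
            if (n == 0)
                return (ssize_t)total;     /* peer closed: body complete */
            total += (size_t)n;
        }
    }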

  • Oh man, I wish I could share some outage postmortems with you.

    Practically speaking, putting an arbitrary size limit somewhere is like putting yet-another-ssl-cert-that-needs-to-be-renewed in some critical system. It will eventually cause an outage you aren’t expecting.

    Will there be a plausible someone to blame? Of course. But realistically, it was inevitable that someone would forget the limit was there and run right into it.

    Time limits tend to not have this issue, for various reasons; for one, as bandwidth grows, the same timeout admits larger transfers, so the effective limit scales on its own.

    • > Practically speaking, putting an arbitrary size limit somewhere is like putting yet-another-ssl-cert-that-needs-to-be-renewed in some critical system. It will eventually cause an outage you aren’t expecting.

      No, not at all. A TLS cert that expires takes the whole thing down for everyone. A size limit takes one operation down for one user.

    • But not putting in the limits leaves the door open to a different class of outages in the form of buffer overflows, which additionally pose a security risk, since they can be exploitable by an attacker. Maybe this issue would be better solved at the protocol level, but in the meantime a size limit it is.
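
      To sketch that failure mode: a hypothetical length-prefixed message reader in C, where the wire format and MAX_MSG are invented for illustration, and the size check is all that stands between normal operation and an attacker-controlled overflow.

          #include <stdint.h>
          #include <stdio.h>

          #define MAX_MSG 4096 /* the size limit doing the protecting */

          /* Reads one '4-byte length, then payload' message into buf. */
          int read_msg(FILE *in, char buf[MAX_MSG])
          {
              uint32_t len;
              if (fread(&len, sizeof len, 1, in) != 1)
                  return -1;
              if (len > MAX_MSG)  /* remove this check and an attacker- */
                  return -1;      /* chosen len overruns buf (CWE-120)  */
              if (fread(buf, 1, len, in) != len)
                  return -1;
              return (int)len;
          }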

    • > putting yet-another-ssl-cert-that-needs-to-be-renewed in some critical system

      I found a fix for this some years back:

          # self-signed cert, valid for ~100 years
          openssl req -x509 -newkey rsa:4096 -nodes -keyout key.pem -out cert.pem -days 36500