Comment by kqr
7 months ago
While that is true, I recommend setting a timeout on the request anyway, because it makes it abundantly clear to the programmer that requests can fail, and that failure needs to be handled somehow – even if it's by killing and restarting the process.
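A minimal sketch of what that looks like, assuming Python's `requests` library (the URL and the 5-second value are placeholders, not anything from the discussion):

    import requests

    try:
        # An explicit timeout forces the caller to confront the fact that
        # the request can fail, instead of hanging forever.
        resp = requests.get("https://example.com/api/data", timeout=5)
        resp.raise_for_status()
    except requests.RequestException as exc:
        # "Handling" can be as blunt as logging and letting a supervisor
        # restart the process; the point is that the failure is visible.
        raise SystemExit(f"request failed: {exc}")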
I second this: depending on the context, there might be a more graceful way of handling a response that's too long than crashing the process.
Though the issue with ‘too many bytes’ limits is that they tend to cause outages later, once time has passed and whatever the common size used to be now looks ‘tiny’ – for example if you're dealing with images, etc.
Time limits also tend to de facto limit size, if bandwidth is somewhat constrained.
Deliberately denying service in one user flow because technology has evolved is much better than accidentally denying service to everyone because some part of the system misbehaved.
Timeouts and size limits are trivial to update as legitimate need is discovered.
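As a rough sketch of that last point, again assuming Python's `requests`: keeping the byte cap in one named constant makes it a one-line change when legitimate sizes grow (the `fetch_limited` name and the 10 MiB figure are made up for illustration):

    import requests

    # One named constant, so the limit is trivial to raise when legitimate
    # payloads (images, etc.) outgrow it.
    MAX_RESPONSE_BYTES = 10 * 1024 * 1024  # 10 MiB, an arbitrary example value

    def fetch_limited(url):
        # Stream the body so we can stop as soon as the cap is exceeded,
        # instead of buffering an arbitrarily large response in memory.
        with requests.get(url, timeout=30, stream=True) as resp:
            resp.raise_for_status()
            body = b""
            for chunk in resp.iter_content(chunk_size=64 * 1024):
                body += chunk
                if len(body) > MAX_RESPONSE_BYTES:
                    raise ValueError("response exceeded %d bytes" % MAX_RESPONSE_BYTES)
            return body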
That's a lead-in to one of my testing strategies: it's easy to randomly set the timeouts too short or the buffer sizes too small. Use that to make errors happen and see what the system does. Does it hiccup and keep going, or does it fall on its face?
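A hedged sketch of that strategy using pytest and `requests`; `fetch_report` is a hypothetical wrapper standing in for whatever HTTP client code the system actually uses:

    import pytest
    import requests

    # Hypothetical wrapper under test; in a real suite this would be the
    # application's own HTTP client code.
    def fetch_report(url, timeout=5.0):
        resp = requests.get(url, timeout=timeout)
        resp.raise_for_status()
        return resp.content

    def test_short_timeout_fails_cleanly():
        # Force the timeout to be absurdly short so the error path actually
        # runs, then check that it surfaces as a well-defined exception the
        # caller can handle, rather than a hang or an unrelated crash.
        with pytest.raises(requests.RequestException):
            fetch_report("https://example.com/api/report", timeout=0.001)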