
Comment by eru

7 months ago

That doesn't necessarily need to be in the request itself.

You can also limit the wider process or system your request is part of.

While that is true, I recommend a limit on the request anyway, because it makes it abundantly clear to the programmer that requests can fail, and that failure needs to be handled somehow – even if it's by killing and restarting the process.
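
A minimal sketch of what that looks like in Python with the requests library (the URL and the chosen deadlines are placeholders, not from the thread):

    import requests

    try:
        # Explicit per-request deadline: (connect, read) timeouts in seconds.
        resp = requests.get("https://example.com/api", timeout=(3.0, 10.0))
        resp.raise_for_status()
    except requests.Timeout:
        # The failure is now impossible to ignore; handle it somehow,
        # even if 'handling' just means dying and letting a supervisor restart us.
        raise SystemExit("upstream timed out")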

  • I second this: depending on the context, there might be a more graceful way of handling a response that's too long than crashing the process.

    • Though the issue with ‘too many bytes’ limits is that they tend to cause outages later, once time has passed and whatever the common size used to be is now ‘tiny’ – like if you’re dealing with images, etc.

      Time limits also tend to de facto limit size, if bandwidth is somewhat constrained.
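
      A rough sketch of such a fixed byte cap in Python (MAX_BYTES and the URL are invented for illustration); the trap is that the constant looks generous when written and ‘tiny’ a few years later:

          import requests

          MAX_BYTES = 1_000_000  # hypothetical cap: fine today, too small once payloads grow

          resp = requests.get("https://example.com/image", stream=True, timeout=(3.0, 10.0))
          received, chunks = 0, []
          for chunk in resp.iter_content(chunk_size=65536):
              chunks.append(chunk)
              received += len(chunk)
              if received > MAX_BYTES:
                  resp.close()
                  raise ValueError("response exceeded byte limit")
          body = b"".join(chunks)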


  • That's a lead-in to one of my testing strategies: it's easy to randomly set the timeouts too short or the buffer sizes too small. Use that to make errors happen and see what the system does. Does it hiccup and keep going, or does it fall on its face?
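
    A sketch of that idea with pytest and requests (hypothetical endpoint; the deadline is deliberately absurd to force the error path):

        import pytest
        import requests

        def fetch(url):
            # Deliberately set the timeout far too short so it almost always fires.
            return requests.get(url, timeout=0.001)

        def test_times_out_cleanly():
            # A clean, catchable exception means the system hiccups and keeps going;
            # a hang or a crash somewhere else means it fell on its face.
            with pytest.raises(requests.Timeout):
                fetch("https://example.com/api")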

Then you kill your service, which might also be serving legitimate users.

  • It depends on how you set things up.

    E.g. if you fork for every request, that process only serves that one user. Or the process can restart fast enough that other users barely notice.

    I'm mostly inspired by Erlang here.
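
    A sketch of that fork-per-request shape with Python's standard socketserver (Unix-only; the port and payload handling are invented):

        import socketserver

        class Echo(socketserver.StreamRequestHandler):
            def handle(self):
                # Runs in a freshly forked child: if it crashes or gets killed
                # on a timeout, only this one user's connection dies.
                line = self.rfile.readline(65536)
                self.wfile.write(b"ok: " + line)

        if __name__ == "__main__":
            # The parent process only accepts and forks, so it keeps serving
            # everyone else; a crude cousin of Erlang's per-request processes.
            with socketserver.ForkingTCPServer(("127.0.0.1", 8000), Echo) as server:
                server.serve_forever()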