Comment by kqr

7 months ago

The lesson for any programmers reading this is to always set an upper limit on how much data you accept from someone else. Every request should have both a timeout and a limit on the amount of data it will consume.

That doesn't necessarily need to be in the request itself.

You can also limit the wider process or system your request is part of.
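To make the advice concrete, here's a minimal sketch in Python (my choice of language; the thread doesn't specify one). The timeout half is usually a parameter on the request itself (e.g. `urllib.request.urlopen(url, timeout=5)`); the size half typically has to be enforced by the caller, for example with a small helper like this hypothetical `read_bounded`:

```python
import io

def read_bounded(response, limit):
    """Read at most `limit` bytes from a file-like response body.

    Raises ValueError if the body is larger than `limit`, so the
    caller is forced to handle the failure instead of silently
    consuming unbounded data.
    """
    data = response.read(limit + 1)  # read one extra byte to detect overflow
    if len(data) > limit:
        raise ValueError(f"response body exceeded {limit} bytes")
    return data

# Stand-in for a network response, so the sketch is self-contained:
body = read_bounded(io.BytesIO(b"small payload"), limit=1024)
```

A response that fits comes back unchanged; one that exceeds the limit raises, which is exactly the failure the replies below argue you must handle somehow.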

  • While that is true, I recommend setting it on the request anyway, because it makes it abundantly clear to the programmer that requests can fail, and failure needs to be handled somehow – even if it's by killing and restarting the process.

    • I second this: depending on the context, there might be a more graceful way of handling a response that's too long than crashing the process.


    • That's a lead-in to one of my testing strategies. It's easy to randomly set the timeouts too short or the buffer sizes too small. Use that to make errors happen and see what the system does. Does it hiccup and keep going, or does it fall on its face?

  • Then you kill your service which might also be serving legitimate users.

    • It depends on how you set things up.

      E.g. if you fork for every request, that process only serves that one user. Or you can make restarts fast enough that it barely matters.

      I'm mostly inspired by Erlang here.
