Comment by tremon
7 months ago
That assumes they're using a stream decompressor library and are feeding that stream manually. Solutions that write the received file to $TMP and just run an external tool (or, say, use sendfile()) don't have the option to abort after N decompressed bytes.
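For contrast, the stream-library approach described here can be sketched in a few lines. This is a minimal Python sketch using `zlib`; the limit value is arbitrary, and real code would feed the input in chunks rather than all at once:

```python
import zlib

def bounded_decompress(data: bytes, limit: int) -> bytes:
    """Decompress at most `limit` bytes; abort if the stream wants more."""
    d = zlib.decompressobj()
    # max_length caps how many output bytes this call may produce
    out = d.decompress(data, limit)
    if not d.eof:
        # the stream did not finish within `limit` output bytes: likely a bomb
        raise ValueError("decompressed size exceeds limit, aborting")
    return out
```

A normal payload round-trips, while a highly compressible one trips the check long before it can fill the disk.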
> Solutions that write the received file to $TMP and just run an external tool (or, say, use sendfile()) don't have the option to abort after N decompressed bytes
cgroups with hard limits will let the external tool's process crash without taking down the script or the system along with it.
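A real cgroup v2 setup means writing a cap to `memory.max` under `/sys/fs/cgroup` and moving the child's PID into the group, which needs privileges. As an unprivileged, portable sketch of the same idea (a hard kernel-enforced limit that kills only the child, not the controlling script), here is the equivalent with `setrlimit`; the 1 GiB cap is an arbitrary example value:

```python
import resource
import subprocess
import sys

LIMIT = 1024 * 1024 * 1024  # hypothetical 1 GiB address-space cap

def set_limits():
    # runs in the child between fork and exec: only the child is capped
    resource.setrlimit(resource.RLIMIT_AS, (LIMIT, LIMIT))

# The child tries to allocate 2 GiB, hits the cap, and dies with a
# MemoryError; the parent script keeps running and sees the exit status.
proc = subprocess.run(
    [sys.executable, "-c", "x = bytearray(2 * 1024**3)"],
    preexec_fn=set_limits,
)
# proc.returncode is nonzero: the child crashed, the script survived
```

With cgroups the mechanics differ (the OOM killer reaps the process when the group exceeds `memory.max`) but the failure mode the comment describes is the same: the external tool dies, the caller does not.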
> cgroups with hard limits
This is exactly the same idea as partitioning, though.
> That assumes they're using a stream decompressor library and are feeding that stream manually. Solutions that write the received file to $TMP and just run an external tool (or, say, use sendfile()) don't have the option to abort after N decompressed bytes.
In a practical sense, how's that different from creating an N-byte partition and letting the OS return ENOSPC to you?
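To make the partition approach concrete: mount a deliberately small filesystem (e.g., as root, something like `truncate -s 100M img && mkfs.ext4 img && mount -o loop img /mnt/jail` — paths and sizes here are illustrative) and extract into it. The caller only needs to recognize ENOSPC, which a sketch like this shows; `extract_to` is a hypothetical helper, not a real library call:

```python
import errno

def extract_to(path, chunks):
    """Write decompressed chunks to `path`; report whether the write fit.

    On a small dedicated partition the kernel enforces the size cap for
    us: a bomb fails with ENOSPC, no cooperation from the external
    decompressor required.
    """
    try:
        with open(path, "wb") as f:
            for chunk in chunks:
                f.write(chunk)
    except OSError as e:
        if e.errno == errno.ENOSPC:
            return False  # hit the partition limit: treat as a bomb
        raise
    return True
```

The trade-off versus a stream-side limit is operational rather than conceptual: the cap lives in mount configuration instead of application code, which is exactly why the thread calls it the same idea as cgroup hard limits.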