Comment by lelanthran
15 hours ago
This.
As someone who spent most of their career as an embedded dev, yes, this is fine for (like parent said) some types of software.
Even in places where you'd think this is a bad idea, it can still be a good approach, for example allocating and mapping all memory up to the limit you are designing for. Honestly this is how engineering is done - you have specified limits in the design, and you work explicitly to those limits.
So "allocate everything at startup" need not be "allocate everything at program startup", it can be "allocate everything at workflow startup", where "workflow" can be a thread, a long-running input-directed sequence of functions, etc.
For example, I am starting a tiny stripped-down web server for a project, and my approach is going to be a single 4KB[1] block for each request, allocated from a pool (which can expand under pressure up to some maximum) and returned to the pool once the response is sent.
The 4KB includes at most 14 headers (regardless of each header's size), with the remaining space for the JSON payload. The JSON payload is limited to at most 10 fields. This makes parsing everything "allocation-less", because the array holding pointers to the keys+values of the headers is `const char *headers[14]` and to the payload JSON data `const char *fields[10]`.
A request that doesn't fit within those limits will be rejected. This means that everything is simple, and the allocation for each request happens once at startup (pool creation), even while parsing the input.
I'm toying with the idea of doing the same for responses too, instead of writing output out incrementally as it is determined during the servicing of the request.
-------------------------
[1] I might switch to 6KB or 8KB if requests need more; whatever number is chosen, it's going to be a static number.