Comment by d3ckard

1 day ago

Personally I believe static allocation has pretty huge consequences for theoretical computer science.

It’s the only kind of program that can be actually reasoned about. Also, it’s not exactly Turing complete in the classic sense.

Makes my little finitist heart get warm and fuzzy.

I'm not an academic, but all those ByteArray linked lists have me feeling like this is less "static allocation" and more "I re-implemented a site-specific allocator and all that that implies".

Also it's giving me flashbacks to LwIP, which was a nightmare to debug when it would exhaust its preallocated buffer structures.

  • This is still a more dependable approach under resource constraints. Fragmentation is eliminated, and you can monitor pool usage against a worst-case scenario. The only other risk here versus true static allocation is a memory leak, which can be guarded against with suitable modern language design.

    LwIP's buffers get passed across interrupt-handler boundaries and in and out of various queues. That's what makes it hard to reason about. The allocation strategy itself is still sound when you can't risk using a heap.
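
    Roughly something like this, for what it's worth (names invented, not LwIP's actual pbuf API; a real ISR-safe version would also guard the pool with critical sections, since buffers cross interrupt boundaries):

      /* Hypothetical fixed pool with a high-water mark; storage is reserved once, up front. */
      #include <stddef.h>
      #include <stdint.h>

      #define POOL_SIZE 32

      typedef struct { uint8_t data[256]; } buf_t;

      static buf_t  pool[POOL_SIZE];            /* backing storage, allocated statically */
      static buf_t *free_list[POOL_SIZE];       /* stack of free buffers */
      static size_t free_count;
      static size_t min_free_ever = POOL_SIZE;  /* worst-case headroom observed so far */

      void pool_init(void) {
          for (size_t i = 0; i < POOL_SIZE; i++) free_list[i] = &pool[i];
          free_count = POOL_SIZE;
      }

      buf_t *pool_alloc(void) {
          if (free_count == 0) return NULL;     /* the one exhaustion case to handle */
          free_count--;
          if (free_count < min_free_ever) min_free_ever = free_count;
          return free_list[free_count];
      }

      void pool_free(buf_t *b) {
          free_list[free_count++] = b;
      }

      /* Monitoring hook: how close did the pool ever get to running dry? */
      size_t pool_worst_case_headroom(void) { return min_free_ever; }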

  • Personally, I see dynamic allocation more and more as a premature optimization and a historical wart.

    We used to have very little memory, so we developed many tricks to handle it.

    Now we have all the memory we need, but the tricks remained. They are now more harmful than helpful.

    Interestingly, embedded programming has a reputation for stability and AFAIK game development is also more and more about avoiding dynamic allocation.

    • > AFAIK game development is also more and more about avoiding dynamic allocation.

      That might have been the case ~30 years ago on platforms like the Game Boy (PC games were already starting to use C++ and higher-level frameworks), but certainly not today. Pretty much all modern game engines allocate and deallocate stuff all the time. UE5's core design with its UObject system relies on allocations pretty much everywhere (and even in cases where you don't have to use it, the existing APIs still force allocations anyway), and of course Unity using C# as a gameplay language means you get allocations all over the place too.

    • Also not a game dev, but my understanding is that there are a lot of in-memory objects whose lifetimes are tied to specific game-time entities: a frame, an NPC, the octree/BSP nodes corresponding to where the player is, etc.

      Under these conditions, you do need a fair bit of dynamism, but the deallocations can generally happen in big batches rather than piecemeal, so it's a good fit for slab-type systems.
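
      Not claiming this is how any particular engine does it, but the batch-free pattern described above is roughly a per-frame arena, something like:

        /* Hypothetical per-frame arena: allocations are piecemeal, the free is one batch. */
        #include <stddef.h>
        #include <stdint.h>

        #define FRAME_ARENA_SIZE (1u << 20)   /* 1 MiB reserved up front */

        static uint8_t frame_arena[FRAME_ARENA_SIZE];
        static size_t  frame_used;

        void *frame_alloc(size_t n) {
            n = (n + 15) & ~(size_t)15;       /* keep 16-byte alignment */
            if (n > FRAME_ARENA_SIZE - frame_used) return NULL;
            void *p = &frame_arena[frame_used];
            frame_used += n;
            return p;
        }

        /* Called once at the end of the frame: everything allocated above dies together. */
        void frame_reset(void) { frame_used = 0; }

      The point being that individual lifetimes never need tracking; the whole batch goes away at one known point.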


> It’s the only kind of program that can be actually reasoned about.

Theoretically infinite memory isn't really the problem with reasoning about Turing-complete programs. In practice, the inability to decide whether an arbitrary program will halt still applies to any system with enough memory to do anything more than serve as an interesting toy.

I mean, I think this should be self-evident: our computers already do have finite memory. Giving a program slightly less memory to work with doesn't really change anything; you're still probably giving that statically-allocated program more memory than entire machines had in the 80s, and it's not like the limitations of computers in the 80s made us any better at reasoning about programs in general.

  • Yes, but allocations generate an ever-increasing combinatorial space of possible failure modes.

    Static allocation requires you to handle overflows explicitly, but by centralizing them you probably don't need as many handlers.

    Technically, all of this can be done in a language with dynamic allocation as well. It’s just that you can’t force the behavior.
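
    As a toy illustration (made-up code, not from any real project): a statically sized queue has exactly one overflow path to check, instead of an out-of-memory branch at every allocation site.

      /* Hypothetical fixed-capacity ring buffer: the overflow check lives in one place. */
      #include <stdbool.h>
      #include <stddef.h>

      #define QUEUE_CAP 64

      typedef struct {
          int    items[QUEUE_CAP];
          size_t head, tail, count;
      } queue_t;                                     /* zero-initialize before use */

      bool queue_push(queue_t *q, int v) {
          if (q->count == QUEUE_CAP) return false;   /* the single, explicit failure mode */
          q->items[q->tail] = v;
          q->tail = (q->tail + 1) % QUEUE_CAP;
          q->count++;
          return true;
      }

      bool queue_pop(queue_t *q, int *out) {
          if (q->count == 0) return false;
          *out = q->items[q->head];
          q->head = (q->head + 1) % QUEUE_CAP;
          q->count--;
          return true;
      }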

    • Sure, but let's be clear: it's a tradeoff. If every program reserved as much memory at startup as needed to service 100% of its theoretically anticipated usage, the number of programs we could run in parallel would be drastically reduced. That is to say, static allocation makes OOM conditions dramatically more likely by its very nature, because programs are greedily sitting on unused memory that could be doled out to other processes.


> It’s the only kind of program that can be actually reasoned about.

No. That is one restriction that allows you to theoretically escape the halting problem, but it's not the only one. Total functional programming languages, for example, do it by restricting recursion to a weaker form.

Also, more generally, we can reason about plenty of programs written in entirely Turing-complete languages/styles. People keep misreading the halting problem as saying that we can never successfully do termination analysis on any program. We can, on many practical programs, including ones that do dynamic allocation.

Conversely, there are programs that use only a statically bounded amount of memory for which this analysis is entirely out of reach. For example, you can write one that checks the Collatz conjecture for the first 2^1000 integers and needs only about a page of memory.
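
To make that concrete, here is a sketch of such a program (toy code; the fixed-width limbs make the memory bound static by construction, a couple of ~512-byte buffers in total, and __int128 is a gcc/clang extension):

  /* Collatz check over candidates below 2^1000 using only fixed-size arrays.
   * Memory use is statically bounded; predicting what it will do is another matter. */
  #include <stdint.h>
  #include <stdio.h>

  #define LIMBS 64                                /* 64 x 64-bit limbs = 4096-bit values */

  typedef struct { uint64_t limb[LIMBS]; } big;   /* little-endian limbs */

  static int is_one(const big *a) {
      if (a->limb[0] != 1) return 0;
      for (int i = 1; i < LIMBS; i++) if (a->limb[i]) return 0;
      return 1;
  }

  static void halve(big *a) {                     /* a >>= 1 */
      for (int i = 0; i < LIMBS; i++) {
          a->limb[i] >>= 1;
          if (i + 1 < LIMBS) a->limb[i] |= a->limb[i + 1] << 63;
      }
  }

  static int triple_plus_one(big *a) {            /* a = 3a + 1; returns 0 on overflow */
      unsigned __int128 carry = 1;
      for (int i = 0; i < LIMBS; i++) {
          unsigned __int128 v = (unsigned __int128)a->limb[i] * 3 + carry;
          a->limb[i] = (uint64_t)v;
          carry = v >> 64;
      }
      return carry == 0;
  }

  static void add_one(big *a) {                   /* a += 1 */
      for (int i = 0; i < LIMBS && ++a->limb[i] == 0; i++) {}
  }

  int main(void) {
      big n = {{1}};                              /* candidate being checked */
      big m;                                      /* scratch copy for its trajectory */
      /* Stop once n reaches 2^1000, i.e. bit 1000 (limb 15, bit 40) gets set. */
      while (!(n.limb[15] & (1ULL << 40))) {
          m = n;
          while (!is_one(&m)) {
              if (m.limb[0] & 1) {
                  /* If a trajectory outgrows the fixed buffer we just report it; whether
                   * that, or a loop that never reaches 1, can ever happen is the open question. */
                  if (!triple_plus_one(&m)) { puts("trajectory escaped the fixed budget"); return 2; }
              } else {
                  halve(&m);
              }
          }
          add_one(&n);
      }
      puts("every checked value reached 1");
      return 0;
  }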

> It’s the only kind of program that can be actually reasoned about.

What do you mean? There are loads of formal reasoning tools that use dynamic allocation, e.g. Lean.