C3 solved memory lifetimes with scopes

6 months ago (c3-lang.org)

"No more [...] slow compile times with complex ownership tracking."

Presumably this is referring to Rust, which has a borrow checker and slow compile times. The author is, I assume, under the common misconception that these facts are closely related. They're not; I think the borrow checker runs in linear time though I can't find confirmation of this, and in any event profiling reveals that it only accounts for a small fraction of compile times. Rust compile times are slow because the language has a bunch of other non-borrow-checking-related features that trade off compilation speed for other desiderata (monomorphization, LLVM optimization, procedural macros, crates as a translation unit). Also because the rustc codebase is huge and fairly arcane and not that many people understand it well, and while there's a lot of room for improvement in principle it's mostly not low-hanging fruit, requiring major architectural changes, so it'd require a large investment of resources which no one has put up.

  • I know very little about how rustc is implemented, but watching what kinds of things make Rust compile times slower, I tend to agree with you. The borrow checker rarely seems to be the culprit; compile time tends to spike on exactly the things you've mentioned: procedural macro use, generics use (monomorphization), and release builds (optimization).

    There are other legitimate criticisms you can level at the Rust borrow checker, such as cognitive load and the higher cost of refactoring, but the compilation speed argument is just baseless.

    • Procedural macros are not really _that_ slow themselves; the issue is more that they tend to generate enormous amounts of code that then has to be compiled, and _that_'s slow.

  • https://learning-rust.github.io/docs/lifetimes/

    > Lifetime annotations are checked at compile-time. ... This is the major reason for slower compilation times in Rust.

    This misconception is being perpetuated by Rust tutorials.

  • Generally, the reason I think Rust compilation is unfixably slow is the decision to rely on compile-time static dispatch and heavy generic specialization, which means there's a LOT of code to compile and the resulting binary size is large.

    Many people have remarked that this is the wrong approach in today's world: CPUs are good at predicting dynamic dispatch, but cache sizes (especially L1 and the instruction cache) are very limited, so for most code (with the exception of very hot tight loops), fetching code into cache is going to be the bottleneck.

    Not to mention that, for a systems programming language, I'd expect a degree of neatness of the generated machine code (e.g. no crazy name mangling, not having the same generic method appear in 30 places in the assembly, etc.).

    • There are compile-time techniques that can mitigate the cost of monomorphization to a degree: optimizing on a generic IR (MIR) and polymorphization (merging functions that produce equivalent bodies) come to mind as immediate examples that have been discussed or implemented to a degree in rustc.

    • > is unfixably slow

      It's not at all unfixable. Sure, there is a limit to speed improvements, but many of the things you mention aren't really as fundamental as they seem.

      On one hand, you don't have to go crazy with generics: `dyn` is a thing, and not being generic is often just fine. Actually, it's not rare to find project code guidelines that avoid unnecessary monomorphization, e.g. use `&mut dyn FnMut()` over `impl FnMut()` and similar. And sure, there is some issue with people spreading "always use generics, it's faster, dynamic dispatch is evil" FUD, but that's more a people problem than a language problem.

      On the other hand, Rust gives very limited guarantees about how exactly a lot of stuff happens under the hood, including the Rust calling convention, struct layout, etc. As long as Rust doesn't change "observed" side effects, it can do whatever it wants. Dynamic/static dispatch is in general not counted as an observed side effect, so the compiler is free not to monomorphize things if it can make that work. While it already somewhat avoids monomorphizing in some cases (e.g. T=usize, T=u64 on 64-bit systems), there is a lot of untapped potential. Sure, there are big limits on how far this can go. But combined with not obsessing over generics, and with other improvements, I think Rust can have very reasonable compile times, especially in the dev -> unit test loop. And many people are already fine with them now, so it's nothing I'm overly worried about, tbh.

      > neatness of the generated machine code

      Why would you care about that in a language where you almost never have to look at its assembly or anything similar? It's also not really something other languages pursue; even in modern C it is more a side effect than an intent.

      Though, without question, kilobytes-large type signatures are an issue (the mangling isn't, though; IMHO, if you don't use a tool to demangle symbols on the fly, that is a you problem).

  • Rust is really hurt by not having some kind of interpreter like OCaml and Haskell have, to dispel the perpetual urban myths from devs without a background in compilers.

    • Fun fact: there is work in progress on a Cranelift-based backend, which isn't exactly an interpreter but more like an AOT compiler built for WASM.

      Either way, it compiles things much faster at the cost of fewer optimizations (that doesn't mean no optimizations, or that the output is slow per se; it's still designed to run WASM performantly in situations where fast/low-latency AOT is needed, but WASM programs are normally already pre-optimized and you mainly have to do certain low-level instruction optimizations, which it still does).

      AFAIK the goal is to use it by default for the dev -> unit test loop, as very often you don't care about high-performance code execution but about getting feedback with low latency.

      Though idk the current state of it.

  • Also, Rust compile times aren't that bad the last time I checked. Maybe they got better and people just don't realize it?

    • > Also, Rust compile times aren't that bad the last time I checked.

      I dunno - I've got a trivial webui that queries an SQLite3 database and outputs a nice table and from `cargo clean`, `cargo build --release` takes 317s on my 8G Celeron and 70s on my 20G Ryzen 7. Will port it to Go to test but I'd expect it to take <15s from clean even on the Celeron.

  • Maybe the borrow checker takes most of the compile time if you average how often it runs vs. how often the later compile phases are triggered over a codebase's lifespan :') (yes, ok, so I don't do well with lifetimes, hah)

  • > They're not;

    it's complicated, and simple

    The simple answer is that Rust compile times are not dominated by the borrow checker at all, so "it's fast", and you can say slow compiles are not really related to there being borrow checking.

    The other simple answer is that a simple, reasonably well-implemented borrow checker is pretty much always fast.

    The complicated answer is that Rust's borrow checker isn't simple: there is a _huge lot_ of code that a simple borrow checker wouldn't allow but that is safe and that people want to write, and the borrow checker Rust needs in order to support all those edge cases basically has to run a constraint solver. (Which (a) is quite slow in big-O terms, and (b) is something CS has researched optimizations and heuristics for over decades, so it is often quite fast in practice.) And as far as I remember, Rust currently does (did? wanted to?) run this in two layers: the simple checker checks most code, and the more powerful one only engages for the cases where the simple checker fails. But, as mentioned, since compilation still isn't dominated by the borrow checker, this doesn't mean it's slow.

    So the borrow checker isn't the issue, and if you create a C-like language with a Rust-like borrow checker it will compile speedily. At least theoretically; if you then also have a ton of code gen and large compilation units, you might run into similar issues as Rust does ;)

    Also, most of the "especially bad cases" that projects have recently run into in Rust (wrt. compile times, AFAIK) had the same kind of pattern: an enormous amount of code (often auto-generated, often huge even before monomorphization) being squeezed into very few (often one single) LLVM compilation units, leading both to LLVM struggling hard with optimizations and then to the linker drowning, too. And here's the thing: that can happen to you in C too, and then your compilation times will be terrible, too. Though people tend to very rarely run into it in C.

    > not low-hanging fruit, requiring major architectural changes, so it'd require a large investment of resources which no one has put up.

    It still happens from time to time (e.g. Polonius), and there are still many "hard" but very useful improvements which don't require any large-scale architectural changes, as well as some bigger issues which wouldn't be fixed by large-scale architectural improvements. So I'm not sure we are anywhere close to needing a large-scale architectural overhaul of rustc; probably not.

    E.g., in a somewhat recent article about way-too-long Rust compile times, many HN comments assumed that rustc had some major architectural issues wrt. parallelization, but the actual issue was that rustc failed to properly subdivide a massive auto-generated crate when handing code units to LLVM, and that isn't an architectural issue. Or, e.g., replacing LLVM with Cranelift (where viable) for the change -> unit test loop is a good example of a change which can greatly improve dev experience and decrease compile times where it matters most (technically it does change the architecture of the stack, and it needed many, many small changes to allow a non-LLVM backend, but it's not a major architectural rewrite of the rustc compiler code).

The core benefit of the borrow checker is not "make sure to remember to clean up memory to avoid leaks." The core benefits are "make sure that you can't access memory after it has been destroyed" and "make sure that you can't mutate something that somebody else needs to be constant." This is fundamentally a statement about the relationship between many objects, which may have different lifetimes and which are allocated in totally different parts of the program.

Lexically scoped lifetimes don't address this at all.
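
For illustration, here is a minimal C sketch (hypothetical names, not from the post) of the kind of cross-object hazard meant here. Rust's borrow checker rejects the equivalent program at compile time; lexically scoped cleanup compiles it without complaint:

  #include <stdio.h>
  #include <stdlib.h>

  int main(void)
  {
      size_t cap = 1;
      int *vec = malloc(cap * sizeof *vec);   /* a growable "vector" */
      vec[0] = 42;

      int *first = &vec[0];                   /* a "borrow" into the allocation */

      cap = 1024;                             /* mutate the structure while the */
      vec = realloc(vec, cap * sizeof *vec);  /* borrow is still live           */

      printf("%d\n", *first);  /* use-after-free if realloc moved the block:
                                  `first` still points into the old allocation */
      free(vec);
      return 0;
  }

Both pointers here live in the same lexical scope; the bug is in the relationship between them, which scopes alone can't express.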

  • Well, the title (which is poorly worded, as has been pointed out) refers to C3 being able to implement good handling of lifetimes for temporary allocations by baking it into the stdlib. And so it doesn't need to reach for any additional language features. (There is, for example, a C superset that implements borrowing, but C3 doesn't take that route.)

    What the C3 solution DOES is provide a way to detect at runtime when an already-freed temporary allocation is used. That's of course not the level of compile-time checking that Rust does. But then Rust has a lot more in the language in order to support this.

    Conversely, C3 does have contracts as a language feature, which Rust doesn't have, so C3 is able to do static checking with the contracts to reject contract violations at compile time, which runtime contracts like those some Rust crates provide can't do.

    • > What the C3 solution DOES is provide a way to detect at runtime when an already-freed temporary allocation is used.

      The article makes no mention of this, so in the context of the article the title remains very wrong. I also could not find a page in the documentation claiming this is supported (though I have to admit I did not read all the pages), nor an explanation of how this works, especially in relation to the performance hit it would result in.

      > C3 is able to do static checking with the contracts to reject contract violations at compile time

      I tried searching for how these contracts work on the C3 website [1] and there seems to be no guaranteed static checking of such contracts. Even worse, violating them when not using safe mode results in "unspecified behaviour", but really it's undefined behaviour (violating contracts is even on their list of undefined behaviour! [2])

      [1]: https://c3-lang.org/language-common/contracts/

      [2]: https://c3-lang.org/language-rules/undefined-behaviour/#list...

    • > What the C3 solution DOES is provide a way to detect at runtime when an already-freed temporary allocation is used.

      I looked at the allocator source code and there’s no use-after-free protection beyond zeroing on free, and that is in no way sufficient. Many UAF security exploits work by using a stale pointer to mutate a new allocation that re-uses memory that has been freed, and zeroing on free does nothing to stop these exploits.

    • That was already available in languages like Modula-2 and Object Pascal (as the blog post acknowledges, the idea is quite old), and it was also the common approach to managing memory originally with Objective-C on NeXTSTEP; see NSZone.

      Hence why all these wannabe C replacements, "but not like Rust", should bring more to the table.

  • Agreed. I personally am interested in Rust after doing some research into parallel semantics. The state of the art there is usually some pure functional programming language coupled with a garbage collector, which is not suitable for the type of embedded development I'm thinking of.

    Doing alias analysis on mutable pointers seems to be inevitable in so many areas of programming and Rust is just one of the few programming languages brave enough to embark on this idea.

I'm struggling to understand how this has anything to do with borrow checking. Borrow checking is a way to reason about aliasing, which doesn't seem to be a concern here.

This post is about memory management and doesn't seem to be concerned much about safety in any way. In C3, does anything prevent me from doing this:

  fn int* example(int input)
  {
      @pool()
      {
          int* temp_variable = mem::tnew(int);
          *temp_variable = input;
          return temp_variable;
      };
  }

  • Yes, this has little to nothing to do with borrow checking or memory/concurrency safety in the sense of Rust. Uncharitably, the author appears not to have a solid technical grasp of what they're writing about, and I'm not sure what this says about the rest of the language.

  • No, that is quite possible. You will not be able to use that memory you just returned, though. What actually happens is an implementation detail, but it ranges from having the memory overwritten (but still writable) on platforms with the least support, to it being neither readable nor writable, to throwing an exact error with ASAN on. Crashing on every use is often a good sign that there is a bug.

    • It might not be on every use though. The assignment could very well be conditional. If a dangling reference could escape from the arena in which it was allocated, you cannot claim to have memory safety. You can claim that the arena prevents memory leaks (if you remember to allocate everything correctly within the arena), but it doesn't provide memory safety.

The post's title is quite hyperbolic and I don't think it serves the topic well.

Memory arenas/pools have been around for ages, and binding arenas to a lexical scope is also not a new concept. C++ was doing this with RAII, and you could implement this in Go with defer, and in other languages by wrapping the scope with a closure.

This post discusses how arenas are implemented in C3 and what they're useful for, but as other people have said, it doesn't make sense to compare arenas to reference counting or a borrow checker. Arenas make memory management simpler in many scenarios, and greatly reduce (but don't necessarily eliminate - without other accompanying language features) the chances of a memory leak. But they contribute very little to memory safety and they're not nearly as versatile as a full-fledged borrow checker or reference counting.
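
To make the comparison concrete, here is a minimal bump-style arena sketch in C (hypothetical, not C3's implementation). Allocation is a pointer bump and everything is freed in one shot, which is the leak-reduction part; nothing stops a pointer from outliving the release, which is the missing-safety part:

  #include <stdlib.h>

  typedef struct {
      char  *base;   /* backing block                 */
      size_t used;   /* bytes handed out so far       */
      size_t cap;    /* capacity of the backing block */
  } Arena;

  Arena arena_make(size_t cap)
  {
      return (Arena){ .base = malloc(cap), .used = 0, .cap = cap };
  }

  /* Bump allocation: no per-allocation bookkeeping, no free().
     (Alignment handling omitted for brevity.) */
  void *arena_alloc(Arena *a, size_t n)
  {
      if (a->base == NULL || a->used + n > a->cap) return NULL;
      void *p = a->base + a->used;
      a->used += n;
      return p;
  }

  /* Releases every allocation at once; any surviving pointer now dangles. */
  void arena_release(Arena *a)
  {
      free(a->base);
      a->base = NULL;
      a->used = a->cap = 0;
  }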

  • As another point of reference, Ada did this in 1983, before C89 even existed, and I'm sure other languages did before that.

    (I have not actually checked the standard, but I'm reasonably sure pools were there.)

    > But they contribute very little to memory safety [...]

    They do solve memory safety if designed properly, as in Ada, but here they're not designed in a way that does anything useful. In Ada the pointer type has to reference the memory pool, so it's simply impossible for a pointer into a pool to exist once the pool is out of scope, because the pointer type will also be out of scope. This of course assumes that you only use memory pools and never require explicit allocation/deallocation, which you often do in real-world programs.

    • One should note that Ada has evolved quite a lot since 1983, and there have been many improvements in how resources can be managed safely in the language.

      It has its own version of RAII, controlled lifetimes, and unbounded collections; dynamic stack allocation at runtime (with exception/retry) is also a way to do arena-like things; and there are SPARK proofs and, as of recent ongoing standards work, some affine-types magic dust as well.

What core type theory is C3 actually built on?

The blog claims that @pool "solves memory lifetimes with scopes" yet it looks like a classic region/arena allocator that frees everything at the end of a lexical block… a technique that’s been around for decades.

Where do affine or linear guarantees come in?

From the examples I don’t see any restrictions on aliasing or on moving data between pools, so how are use‑after‑free bugs prevented once a pointer escapes its region?

And the line about having "solved memory management" for total functions: bravo indeed…

Could you show a non‑trivial case, say, a multithreaded game loop where entities span multiple frames, or a high‑throughput server that streams chunked responses, where @pool prevents leaks that a plain arena allocator would not?

  • It is unfortunate that the title mentions borrow checking which doesn't actually have anything to do with the idea presented. "Forget RAII" would have made more sense.

    This doesn't actually do any compile-time checks (it could, but it doesn't). Eventually it will do runtime checks on supported platforms by using page protection features, but that's not really the goal.

    The goal is actually extremely simple: make working with temporary data very easy, which is where most memory management messes happen in C.

    The main difference between this and a typical arena allocator is the clearly scoped nature of it in the language. Temporary data that is local to the function is allocated in a new @pool scope. Temporary data that is returned to the caller is allocated in the parent @pool scope.
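
    As a rough C analogy of that scoping rule (hypothetical, building on the Arena sketch upthread, not C3's actual machinery): each function pushes a temp scope for its own scratch data, and allocates data destined for the caller from the parent scope:

      /* Assumes the Arena helpers sketched upthread; bounds checks omitted. */
      Arena scopes[16];                /* stack of temp scopes        */
      int   top = -1;                  /* scopes[top] = current scope */

      void scope_push(void) { scopes[++top] = arena_make(64 * 1024); }
      void scope_pop(void)  { arena_release(&scopes[top--]); }

      /* Scratch data local to the current scope: gone at scope_pop(). */
      void *talloc(size_t n) { return arena_alloc(&scopes[top], n); }

      /* Data returned to the caller: allocated from the parent scope,
         so it survives this function's scope_pop().                   */
      void *talloc_parent(size_t n) { return arena_alloc(&scopes[top - 1], n); }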

    Personally I don't like the precise way this works too much because the decision of whether returned data is temporary or not should be the responsibility of the caller, not the callee. I'm guessing it is possible to set the temp allocator to point to the global allocator to work around this, but the callee will still be grabbing the parent "temp" scope which is just wrong to me.

    • > "Forget RAII" would have made more sense.

      For memory only, which is one of the simplest kinds of resource. What about file descriptors? Graphics objects? Locks? RAII can keep track of all of those. (Refcounting does too, but tracing GC usually doesn't.)

Rust’s interface for using different allocators is janky, and I wish they had something like this, or had moved forward with the proposal to make allocators part of a flexible implicit context that gets passed along with function calls.

But mentioning the borrow checker raises an obvious question that I don’t see addressed in this post: what happens if you try to take a reference to an object in the temporary allocator and use it outside of the temporary allocator’s scope? Is that an error? Rust’s borrow checker has no runtime behavior; it only exists to create errors in cases like that, so the title invites the question of how this mechanism handles that case but doesn’t answer it.

  • A dangling pointer will generally still be possible to dereference (this is an implementation detail that might get improved – temp allocators aren't using virtual memory on supporting platforms yet), but in safe mode that data will be scratched out with a value; I believe we use 0xAA by default. So as soon as this data is used out of scope you'll find out.

    This is of course not as good as ASAN or a borrow checker, but it interacts very nicely with C.

    • > C3 not having to implement any recently popular language features in order to solve the problem of memory lifetimes for temporary objects as they arise in a language with C-like semantics.

      But you said it yourself in your previous message:

      > A dangling pointer will generally still be possible to dereference (this is an implementation detail that might get improved – temp allocators aren't using virtual memory on supporting platforms yet)

      So the issue is clearly not solved.

      And to be complete about the answer:

      > in safe mode that data will be scratched out with a value; I believe we use 0xAA by default. So as soon as this data is used out of scope you'll find out.

      I can see multiple issues with this:

      - it's only in safe mode

      - it's safe only as long as the memory is never used again for a different purpose, which seems to imply that either this is not safe (if it's written again) or that it leaks massive amounts of memory (if it's never written to again)

      > Now clearly people are misreading the title when it stands on its own as "borrow checkers suck, C3 has a way of handling memory safety that is much better". That is very unfortunate, but chance to fix that title already passed.

      Am I still misreading the title if I read it as "C3 solves the same issues that the borrow checker solves"? To me that way of reading seems reasonable, but the title still looks plainly wrong.

      Heck, even citing the borrow checker *at all* seems wrong; this is more about RAII than lifetimes (and RAII in Rust is solved with ownership, not the borrow checker).

It took me some time to collect my thoughts on this.

One: I don't believe they have solved use-after-free. Marking memory freed and crashing at runtime is as good as checked bounds indexing. It turns RCE into DoS, which is reasonable, but what would be much better is solving it provably at compile time, rejecting invalid programs (those that use memory after it has been deallocated). But enough about that.

I want to write about memory leaks. Solving memory leaks is not hard because automatically cleaning up memory is hard; that part is a solved problem, the domain of automatic memory management/reclamation, aka garbage collection. However, I don't think they've gone through the rigor of proving why this is significantly different from, say, segmented stacks (where each stack segment is your arena). By "significantly different" I mean you should be able to show that this enables language semantics that are not possible with growable stacks - not just make nebulous claims about performance.

No, the hard part of solving memory leaks is that they need to be solved for a specific class of program: one that must handle resource exhaustion (otherwise, assume infinite memory; leaks are not a bug). The actually hard case is when there are no memory leaks in the sense that your program has correctly cleaned itself up everywhere it can, yet you are still exhausting resources and must selectively crash tasks (in O(1) memory, because you can't allocate); those tasks need to be able to handle being crashed, and they must not spawn so many more tasks as to overwhelm the system again. This is equivalent to the halting problem, by the way, so automatic solutions for the general case are provably impossible.

I don't believe that can be solved by semantically inventing an infinite stack. It's a hard architectural problem, which is why people don't bother to solve it - they assume infinite memory, crash the whole program as needed, and make a best effort at garbage collection.

All that said, this is a very interesting design space. We are trapped in the malloc/free model of the universe, which is a known performance and correctness pit, and experimenting with different allocation semantics is a good thing. I like where C3's and Zig's heads are at here, because ignoring allocators is actually a huge problem in Rust in practice.

  • > One: I don't believe they have solved use-after-free. Marking memory freed and crashing at runtime is as good as checked bounds indexing.

    It’s also something allocators commonly implement already.

    • One recurring theme with these new wannabe C and C++ replacements, "but not like Rust", is that their solution is basically to use what has already existed in C and C++ for the last 30 years, if only people actually bothered to learn how to use their tools.

      Unfortunately it is a bit like debugging: let's keep doing printf() instead of taking the effort to learn better approaches.

I don't think technical writing needs this kind of rage-bait. They could have just presented the features of the language. The borrow checker is clearly unrelated here.

I feel like "solved" is a strong word for what's described here. This works for some - possibly even many - scenarios, but it does not solve memory lifetime in the general case, especially when data from different scopes needs to interact.

Seems overly simplistic: it doesn't cover extremely common cases such as a thread allocating some memory and then putting it into a queue to be consumed by other threads, which eventually free the memory - or any allocation lifetime that isn't simply the scope of the enclosing block.

  • Well, the latter is covered: you can make temp allocations out of order by nesting "@pool"s. There are examples in the blog post.

    It doesn't solve the case where lifetimes are indeterminate. But often they are well known. Consider "foo(bar())", where "bar()" returns an allocated object that we wish to free after "foo" has used it. In something like C it's easy to accidentally leak such a temporary object, and doing it properly means several lines of code, which might be bad if it's intended for an `if` statement or `while`.

This literally doesn't solve any actual problems. If all memory allocation patterns were lexical, this would be the easiest and most obvious thing to do. That is why stack allocation is the default and works exactly like this.

  • Imagine we have a function "foo" which returns an allocated object Bar; we want to pass this to a function "bar" and then have it released.

    Now we usually cannot do "bar(foo())" because it then leaks. We could allocate a buffer on the stack, and then do "bar(foo(&buffer))", but this relies on us safely knowing that the buffer does not overflow.

    If the language has RAII, we can use it to return an object which will release itself after going out of scope, e.g. std::unique_ptr, but this relies on having RAII and preferably move semantics.

    If the context is RAII-less semantics, this is not trivial to solve. Languages that run into this are C3, Zig, C, and Odin.

    With the temp allocator solution, we can write `bar(foo())` if `foo` always allocates a temp variable, or `bar(foo(tmem))` if it takes an allocator.
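
    Spelling out the C side of this (hypothetical names, sketching the pattern rather than any real API), the non-leaking version needs a named temporary and an explicit free, which is exactly the boilerplate a scope-bound temp allocator removes:

      #include <stdlib.h>

      typedef struct { int x; } Bar;

      Bar *foo(void);            /* returns a heap-allocated Bar      */
      void bar(const Bar *b);    /* uses the Bar but does not free it */

      void leaky(void)
      {
          bar(foo());            /* leaks: nobody frees foo()'s result */
      }

      void correct(void)
      {
          Bar *tmp = foo();      /* the "several lines of code" dance */
          bar(tmp);
          free(tmp);
      }

    With a scope-bound temp allocator, `bar(foo())` becomes fine again, because the enclosing scope, not the call site, owns the free.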

    • Wait, you are implying this is some kind of algorithmic 'solution' to a long-standing problem. It's not. This is notable because it's an implementation that works in C++. The 'concept' of tracking allocations in a lexical way is trivially obvious.

I really do not see the benefit of this over C++ destructors and/or facilities like `unique_ptr` and `shared_ptr`.

@pool appears to be exactly what C++ does automatically when objects fall out of scope.

  • The advantage is that the allocations are grouped: they're allocated in the same memory region (good memory locality) and freed in bulk. The tradeoff is needing to explicitly create these scopes and not being able to have custom deallocation logic like you can in a destructor.

    (This doesn't seem to have anything to do with borrow checking, though, which is a memory safety feature, not a memory management feature. Rust manages memory with affine types, which is a completely separate thing; you could write an entire program without a single reference if you really wanted to.)

  • The benefit is that: (a) it works in a language without RAII, which C-like languages usually do not have; (b) there are no individual heap allocations and frees; (c) allocations are grouped together.

I am having a hard time connecting the title with what they present. Is this just freeing memory when the program leaves the current context? How does that match the Rust lifetime borrow checker? … This is a defer function that frees whatever has been allocated within a given scope; not that useful, unless I am missing something…

The post doesn't even mention how it works or improves DX in a multi-threaded environment; borrow checkers specifically target that use case.

Wow, this is such a fascinating concept. The syntax can’t stop reminding me of @autoreleasepool from ObjC. I’ll definitely try this out on a small project soon.

Also, since D usually implements all kinds of concepts and mechanisms from other languages, I would love to see this implemented there as well! D already has a borrow checker now, so why not also add this? It would be very cool to play with.

  • I think there was a discussion back in 2010-2011 about adding SuperStack, a thread-local buffer that would be used for arena-style allocation. While the std won't have it, I see no problem with implementing one yourself.

Is this different from NSAutoreleasePool, which has been around for over 30 years?

  • NSAutoreleasePool keeps a list of autoreleased objects, which are sent a "release" message when the pool goes out of scope.

    `@pool` flushes the temp allocator and all allocations made by the temp allocator are freed when the pool goes out of scope.

    There are similarities, but NSAutoreleasePool is for refcounting and an object released by the autoreleasepool might have other objects retaining it, so it's not necessarily freed.

Whether I like this feature or not depends on the low-level details of how @pool behaves and whether and how I can control it. I can't tell what @pool is going to do to my program, unlike when I'm using an arena (or another allocator) directly.

It seems that @pool is providing context to the allocator function(s), but is the memory in the pool contiguous? What is the initial capacity of the pool? What happens when the pool needs to grow?

I think I prefer the explicit allocator passing in Zig. I don't need to ask these questions because I'm choosing and passing the allocator myself.

  • You can actually see the whole implementation of `@pool` inside the standard library (link: https://github.com/c3lang/c3c/blob/f082cac762939d9b43f7f7301...) if you're interested. You'll have to follow a few function calls, but the whole implementation is defined within the standard library and thus easily modified for your needs.

    I believe that the memory inside the pool is indeed contiguous, and you can ask for a custom initial capacity. The default capacity depends on `env::MEMORY_ENV` at compile-time, but for normal use it's 256kB (256 * 1024, in bytes).

    About the explicit allocator passing, that's also a theme throughout the C3 standard library. Functions that need to allocate some memory take an `Allocator` as their first argument. For those kinds of functions there is always a `t<function>` variant which does not take an allocator but calls the regular function with the temporary allocator. It's a nice naming convention which really helps together with `@pool`. Examples are `String format(Allocator allocator, String fmt, args...)` and `String tformat(String fmt, args...)`.

    I hope that clears up some "concerns", and maybe you'll also find some joy in programming in C3 =)

Smart compilers already do this with escape analysis.

  • No, I don't think they do.

    Given a function `foo` that allocates an object "o" and returns it to the upper scope, how would you do "escape analysis" to determine that it should be freed, and HOW it should be freed? What is the mechanism if you do not have RAII, ARC, or GC?

Ok, now give me an example of a resource manager (e.g. in a game) that has methods for loading resources into memory and also for releasing those resources - all of a sudden, if a system needs to give away pointer access to its buffers, things become more complicated and arena allocators are not enough.

  • For that scenario you can use a pool allocator backed by a fixed size allocation from an arena. That gives you the flexibility to allocate and free resources on the fly, but with a fixed upper limit to the lifetime (e.g. the lifetime of the level or chunk). Once you're ready to unload a level or a chunk, you can rewind the arena, which is a very cheap operation (as opposed to calling free in a loop, which can be expensive if the free implementation tries to defragment the freelist)
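
    A rough sketch of that layering in C (hypothetical, reusing the Arena helpers from upthread): the pool's fixed-size slots and free list live inside a single arena allocation, so individual resources can be freed and reused during the level, while unloading the level is one arena release:

      /* Assumes the Arena/arena_alloc/arena_release sketch upthread.
         `size` must be at least sizeof(Slot) and suitably aligned.   */
      typedef struct Slot { struct Slot *next; } Slot;

      typedef struct {
          char *slab;        /* slots carved out of the arena */
          Slot *free_list;   /* currently free slots          */
      } Pool;

      Pool pool_make(Arena *a, size_t size, size_t count)
      {
          Pool p = { .slab = arena_alloc(a, size * count), .free_list = NULL };
          if (p.slab == NULL) return p;            /* arena exhausted */
          for (size_t i = 0; i < count; i++) {     /* thread the free list */
              Slot *s = (Slot *)(p.slab + i * size);
              s->next = p.free_list;
              p.free_list = s;
          }
          return p;
      }

      void *pool_get(Pool *p)              /* O(1) allocate a resource slot */
      {
          Slot *s = p->free_list;
          if (s) p->free_list = s->next;
          return s;
      }

      void pool_put(Pool *p, void *ptr)    /* O(1) free back into the pool */
      {
          Slot *s = ptr;
          s->next = p->free_list;
          p->free_list = s;
      }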

  • I am not sure how this would be a problem. Certainly the resource manager should manage the memory itself in some manner.

    It has very little to do with trying to manage temporary memory lifetimes.

Why add a default '@pool' for main?

Operating systems free resources upon process exit. That's one of the fundamental responsibilities of an OS. You can use malloc without free all you want if you are just going to exit the process.

  • If I had to guess, it is for embedded applications.

    You could argue, though, that people should then really do their own memory management, but in the end you might just end up recreating the temp allocator with `@pool` anyway. It's a neat feature. (btw, `@pool` is just a macro from the standard library, no fancy compiler builtin)

I had never heard of this language, so I quickly looked through the docs, and here is what I didn't like:

- integers use names like "short" instead of names with numbers like "i16"

- they use printf-like formatting functions instead of Python's f-strings

- it seems that there is no exception in case of integer overflow or floating point errors

- it seems that there is no pointer lifetime checking

- functions are public by default

- "if" statement still requires parenthesis around boolean expression

Also I don't think scopes solve the problem when you need to add and delete objects, for example, in response to requests.

This does not seem like it solves any hard problem. It's just a 10x better alloca() with allocator integration?

  • Alloca would not allow you to pass data from the current scope up to a parent scope.

    • I did say 10x better alloca. I'm saying that's not good enough, and seems very narrow.

      You could do this in C++ with RAII-managed stacked arena allocators. Though it's unclear to me from the blog post whether C3 would prevent returning a pointer to memory in the topmost pool. C++ probably wouldn't help you prevent that.

The funny thing is that malloc also behaves like an arena: when your program starts, malloc reserves a lot of memory, and when your program ends, all this memory is released. Memory leaks end up not being a memory safety problem.

So you will still need a borrow checker, for the same reasons Rust needs one and C/C++ also needed one.