Can you link to one that has individual virtual memory processes where the memory isn't freed? It sounds like what you're talking about is just leaking memory and processes have nothing to do with it.
Virtual memory requires pages, and this sucker doesn't have them. There's only a heap, which you can use via heap_x.c.
Everything is manual.
I get that you people are trying to be cheeky and point out that modern OSes don't have this problem, but C runs on a crap ton of other systems. Some of these "OSes" are really nothing more than a coroutine from pid 0.
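To make "everything is manual" concrete: assuming heap_x.c refers to FreeRTOS's pluggable heap_1.c–heap_5.c memory schemes (my guess), allocation looks roughly like this:

    /* Sketch, assuming heap_x.c is one of FreeRTOS's pluggable heap
       implementations (heap_1.c .. heap_5.c): a single shared heap,
       no per-process ownership, cleanup entirely on the caller. */
    #include <stdint.h>
    #include "FreeRTOS.h"

    void handle_request(void) {
        uint8_t *buf = pvPortMalloc(256);  /* allocate from the one RTOS heap */
        if (buf == NULL)
            return;                        /* heap exhausted: handle it here */
        /* ... use buf ... */
        vPortFree(buf);                    /* forget this and the bytes are lost
                                              until reboot; no process teardown
                                              will reclaim them */
    }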
I have 30 years' experience in this field.
Yeah, I think I get your problem. I am prototyping a message-passing actor platform running in a flat address space, and virtual memory is the only way I can do cleanup after a process ends (by keeping track of which pages were allocated to a process and freeing them when it terminates).
Without virtual memory, I would either need to force the use of a garbage collector (an interesting challenge in itself: designing a GC for a flat address space full of stackless coroutines), or require languages with much stricter memory semantics, such as Rust, so I can be sure everything is released at the end (though languages are all designed around isolated virtual memory, and even Rust might not help without serious re-engineering).
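Roughly, the bookkeeping I have in mind is this (a bare sketch with made-up names: the flat heap carved into fixed-size pages, each tagged with its owning task, so terminating a task is one sweep over the ownership table):

    #include <stddef.h>
    #include <stdint.h>

    #define PAGE_SIZE 4096
    #define NUM_PAGES 256
    #define NO_OWNER  0xFFFF

    static uint8_t  heap[NUM_PAGES * PAGE_SIZE];  /* the flat address space */
    static uint16_t owner[NUM_PAGES];             /* task id per page, NO_OWNER if free */

    void pages_init(void) {
        for (size_t i = 0; i < NUM_PAGES; i++)
            owner[i] = NO_OWNER;
    }

    /* Hand one page to a task; returns NULL when the heap is exhausted. */
    void *page_alloc(uint16_t task_id) {
        for (size_t i = 0; i < NUM_PAGES; i++) {
            if (owner[i] == NO_OWNER) {
                owner[i] = task_id;
                return &heap[i * PAGE_SIZE];
            }
        }
        return NULL;
    }

    /* The cleanup described above: when a task dies, release everything it
       owned in one sweep, whether or not the task freed it itself. */
    void pages_release_task(uint16_t task_id) {
        for (size_t i = 0; i < NUM_PAGES; i++)
            if (owner[i] == task_id)
                owner[i] = NO_OWNER;
    }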
Do you keep notes of these types of platforms you’re working on? Sounds fun.
Tbh, on such a bare-bones system I would use my own trivial arena bump allocator and do only a single malloc at startup and a single free before shutdown (if at all, because why even use the C stdlib on an embedded system instead of talking directly to the OS or hardware?).
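Something in this spirit (a sketch, all names hypothetical): one backing block up front, a bump cursor, and nothing freed individually:

    #include <stdint.h>
    #include <stdlib.h>

    typedef struct {
        uint8_t *base;
        size_t   cap;
        size_t   used;
    } arena_t;

    /* The single malloc for the whole program's lifetime. */
    int arena_init(arena_t *a, size_t cap) {
        a->base = malloc(cap);
        a->cap  = cap;
        a->used = 0;
        return a->base != NULL ? 0 : -1;
    }

    /* Bump allocation: just advance a cursor; individual frees don't exist. */
    void *arena_alloc(arena_t *a, size_t n) {
        size_t aligned = (a->used + 15u) & ~(size_t)15u;  /* 16-byte align */
        if (aligned + n > a->cap)
            return NULL;
        a->used = aligned + n;
        return a->base + aligned;
    }

    /* The single free before shutdown (or skip it and let the power
       switch do the job). */
    void arena_destroy(arena_t *a) {
        free(a->base);
        a->base = NULL;
    }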
RTOSes I'm aware of call them tasks rather than processes, specifically because they don't provide the sort of isolation that a "proper" OS does.
Why is something running on an RTOS even able to leak memory? If your design is going to be dirty, you've got to account for that. In 30 years, I've never seen a memory leak in the wild. Set up a memory pool, memory limits, or a garbage collector, or just switch to an OS/language that will handle that better for you. Rust is favored among C++ users, but even Python could be a better fit for your use case.
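A memory pool in the sense I mean can be tiny. Sketch below (hypothetical names): fixed-size blocks on a free list, so allocation can't fragment and exhaustion is an explicit condition you can test for, rather than a slow leak:

    #include <stddef.h>
    #include <stdint.h>

    #define BLOCK_SIZE 64
    #define NUM_BLOCKS 128

    typedef union block {
        union block *next;            /* valid while the block is free */
        uint8_t      data[BLOCK_SIZE];
    } block_t;

    static block_t pool[NUM_BLOCKS];
    static block_t *free_list;

    void pool_init(void) {
        /* Thread every block onto the free list. */
        for (size_t i = 0; i < NUM_BLOCKS - 1; i++)
            pool[i].next = &pool[i + 1];
        pool[NUM_BLOCKS - 1].next = NULL;
        free_list = &pool[0];
    }

    /* O(1) allocate; NULL means the pool is exhausted, which is the
       "memory limit" made explicit. */
    void *pool_alloc(void) {
        block_t *b = free_list;
        if (b != NULL)
            free_list = b->next;
        return b;
    }

    void pool_free(void *p) {
        block_t *b = p;
        b->next = free_list;
        free_list = b;
    }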
I think the short answer is that it is very hard, time-consuming, and expensive to develop and prove out formal verification build/test toolchains.
I haven’t looked at C3 yet, but I imagine it can’t be used in a formally verified toolchain either unless the toolchain can compile the C3 bits somehow.
Python is not an option in this environment. Correct your tone.
Are you really telling someone to 'correct their tone' because one of their many suggestions doesn't work on your mystery platform that you won't mention?
I don't see anything wrong with my tone. I could have been snarky about it.
I provided the C solutions as well, but an interpreter written in C could at least allocate objects and threads within the interpreter context and not leak memory, letting you restart it along with any services inside it, which is apparently better than whatever framework the people sharing this sentiment are using.
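Lua is the classic concrete example of that pattern, for what it's worth: every allocation goes through one hook owned by the lua_State, so closing the state reclaims everything the scripts did, however sloppy they were. A small sketch:

    /* Sketch using Lua, a real interpreter written in C: lua_close()
       gives back everything the interpreter context allocated. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <lua.h>
    #include <lauxlib.h>
    #include <lualib.h>

    /* Lua's allocator contract: nsize == 0 means free; otherwise behave
       like realloc. 'ud' is our byte counter (accounting ignores realloc
       failure for brevity). */
    static void *metered_alloc(void *ud, void *ptr, size_t osize, size_t nsize) {
        size_t *total = ud;
        if (nsize == 0) {
            if (ptr != NULL)
                *total -= osize;
            free(ptr);
            return NULL;
        }
        *total += nsize - (ptr != NULL ? osize : 0);
        return realloc(ptr, nsize);
    }

    int main(void) {
        size_t used = 0;
        lua_State *L = lua_newstate(metered_alloc, &used);
        if (L == NULL)
            return 1;
        luaL_openlibs(L);
        luaL_dostring(L, "t = {} for i = 1, 10000 do t[i] = i end");
        printf("interpreter heap in use: %zu bytes\n", used);
        lua_close(L);  /* restartable: all interpreter objects reclaimed */
        printf("after close: %zu bytes\n", used);
        return 0;
    }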
I'm genuinely curious. What kind of mission-critical embedded real-time design dynamically(!) allocates objects and threads and then loses track of them?
PS: On topic, I really like the decisions made in C3