
Comment by hanez

10 days ago

I chose isolated state (like Lua) rather than a single global lock (like Python’s GIL). Each VM has its own heap, scheduler, and garbage collection. There are no cross-VM pointers. Concurrency and data exchange happen via message passing and a few carefully scoped shared-memory primitives for high‑throughput use cases. This keeps the C API simple, predictable, and safe to embed in multi‑threaded hosts.
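The no-cross-VM-pointers rule can be made concrete with a small sketch. This is a hypothetical illustration, not the project's real C API: the `vm`, `mailbox`, `vm_send`, and `vm_receive` names are invented, and a real implementation would queue messages rather than hold a single slot. The point is that a send copies bytes into state the receiver owns, so neither side ever holds a pointer into the other's heap.

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Hypothetical sketch of the isolate model; names are illustrative. */
enum { MAILBOX_CAP = 256 };

typedef struct {
    size_t len;
    unsigned char bytes[MAILBOX_CAP];   /* one pending message, stored by value */
} mailbox;

typedef struct {
    mailbox inbox;   /* each VM owns its inbox; no cross-VM pointers exist */
} vm;

/* Sending copies the payload into the receiver's inbox, so the sender
 * retains no reference into the receiver's state after the call. */
int vm_send(vm *to, const void *payload, size_t len) {
    if (len > MAILBOX_CAP) return -1;   /* too large for the toy mailbox */
    memcpy(to->inbox.bytes, payload, len);
    to->inbox.len = len;
    return 0;
}

/* Receiving copies the message out and clears the slot. */
size_t vm_receive(vm *self, void *out, size_t cap) {
    size_t n = self->inbox.len < cap ? self->inbox.len : cap;
    memcpy(out, self->inbox.bytes, n);
    self->inbox.len = 0;
    return n;
}
```

Because every exchange is a copy into receiver-owned memory, the host can run each VM on its own thread with no locking around VM state at all.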

Isolated state seems like the right call. I am curious how you implemented the shared memory primitives though. I spent a while trying to get zero-copy buffer sharing right in a previous project and usually ended up complicating the host API significantly to guarantee safety. Are you using reference counting or some kind of ownership transfer model there?

  • We default to isolates for safety and scaling.

  • Zero‑copy sharing is done with fun_shared_buffer, an off‑heap, GC‑untracked, pointer‑free block that’s immutable from the VM’s point of view.

  • Lifetime is managed with plain reference counting (retain/release).

  • For hot paths, we also support an adoption (ownership‑transfer) pattern during message passing so the sender can drop its ref without copying.
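A minimal sketch of what such a refcounted, immutable, off-heap buffer could look like, assuming C11 atomics. The `shared_buffer` struct and function names here are invented to mirror the description; they are not the actual fun_shared_buffer API.

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical sketch of an off-heap, GC-untracked, refcounted buffer
 * that is immutable once published. Names are illustrative only. */
typedef struct {
    atomic_int    refcount;   /* lifetime via plain retain/release */
    size_t        len;
    unsigned char data[];     /* payload; immutable from the VM's view */
} shared_buffer;

/* Allocated off the VM heap with malloc, so the GC never scans it. */
shared_buffer *shared_buffer_create(const void *src, size_t len) {
    shared_buffer *b = malloc(sizeof *b + len);
    if (!b) return NULL;
    atomic_init(&b->refcount, 1);
    b->len = len;
    memcpy(b->data, src, len);
    return b;
}

void shared_buffer_retain(shared_buffer *b) {
    /* relaxed is enough for an increment: it cannot race with the free */
    atomic_fetch_add_explicit(&b->refcount, 1, memory_order_relaxed);
}

void shared_buffer_release(shared_buffer *b) {
    /* acq_rel so the thread that frees sees all prior writes to the buffer */
    if (atomic_fetch_sub_explicit(&b->refcount, 1, memory_order_acq_rel) == 1)
        free(b);
}
```

Because the payload is immutable and pointer-free, any number of VMs can read it concurrently with no locking; the only shared mutable word is the refcount itself.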

That is usually where the complexity creeps back in if you want to avoid global locks, though. How do you expose those buffers without forcing the host to manage its own synchronization?

  • We don’t expose shared mutability to VMs. The trick is: publish‑as‑immutable plus adoption via ports. Ports/queues do the synchronization; fun_shared_buffer is off‑heap and refcounted with atomic ops. The host doesn’t need to lock anything for the common paths.
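The adoption idea can be sketched as follows, again with invented names: a real port would be a multi-slot queue, but a single atomic pointer slot is enough to show how the port itself provides the synchronization and how the send transfers the sender's reference instead of copying.

```c
#include <assert.h>
#include <stdatomic.h>
#include <stddef.h>

/* Hypothetical sketch of "publish-as-immutable plus adoption via ports".
 * Illustrative only; not the project's real API. */
typedef struct {
    int         refcount;   /* untouched by the send: ownership moves instead */
    size_t      len;
    const char *data;       /* immutable payload */
} buf;

typedef struct {
    _Atomic(buf *) slot;    /* one-slot port; the atomic op is the only sync */
} port;

/* Sender: publish the buffer and give up its reference. After this call
 * the sender must neither touch b nor release it. */
void port_send_adopt(port *p, buf *b) {
    atomic_store_explicit(&p->slot, b, memory_order_release);
}

/* Receiver: adopt the buffer. The caller now owns the sender's reference
 * and is responsible for the eventual release. */
buf *port_receive(port *p) {
    return atomic_exchange_explicit(&p->slot, NULL, memory_order_acquire);
}
```

The release/acquire pair on the slot is what lets the host skip explicit locks: any writes the sender made before publishing are visible to the receiver after adoption, and the refcount never needs to move on the hot path.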