Comment by devit
1 year ago
I think the optimal approach for development would be not to produce a traditional linked executable at all. Instead, just place the object files in memory and produce a loader executable that hooks page faults in those memory areas, on-demand mmaps the relevant object elsewhere, applies relocations to it, and then moves it into place with mremap.
Symbols would be resolved based on an index where only updated object files are reindexed. It could also eagerly relocate in the background, in order depending on previous usage data.
This would basically make a copyless lazy incremental linker.
This makes some very naïve assumptions about the relationships between entities in a program: in particular, that you can make arbitrary assertions about the representation of already-allocated data structures across multiple versions of a component, that the program's compositional structure morphs in understandable ways, and that you can pause a program in a state where a component can actually be replaced.
By the time you have addressed these, you'll find yourself building a microkernel system with a collection of independent servers and well-defined interaction protocols. Which isn't necessarily a terrible way to assemble something, but it's not quite where you're trying to go...
You can sort of do that with some of LLVM's JIT systems (https://llvm.org/docs/JITLink.html). I'm surprised that no one has made an edit-and-continue system using it yet.
My parens sense is tingling. This sounds like a Lisp machine, or just a standard Lisp development environment.
Maybe of interest: https://github.com/clasp-developers/clasp/ (Lisp env. that uses LLVM for compilation; new-ish, actively developed.) However, my impression (I didn't measure it) is that the compilation speed is an order of magnitude slower than in SBCL, never mind CCL.
They have! It's called Julia and it's great.
Sounds like dynamic linking, sort of.
> Symbols would be resolved based on an index where only updated object files are reindexed. It could also eagerly relocate in the background, in order depending on previous usage data.
Not exactly this, but Google's Propeller fixes up ("relinks") basic blocks (hot code as traced from PGO) in native code at runtime, like an optimizing JIT compiler would: https://research.google/pubs/propeller-a-profile-guided-reli...
Sounds like Apple's old ZeroLink from the aughts?
Isn't this how dynamic linking works? If you really want to reduce build times, you should make the hot path in your build a shared library, so you don't have to relink as long as you're not changing the interface.
But do Rust's invariants work across dynamic links?
I thought a lot of its proofs were done at compile time, not link time.
Yesn't.
Rust is perfectly happy to emit/use dynamic links.[0] It's just that the primary C use case (distributing and updating the main app and its libraries separately) ends up being unsafe, since Rust's ABI is unstable (so compiler versions, libraries, etc. must match exactly).
Avoiding static relinking during development is pretty much the use where it does work. In fact, Bevy recommends this as part of its setup guide![1]
Practice paints a slightly less rosy picture, though; since the feature is exercised quite rarely, not all libraries work well with it.[2]
[0]: https://doc.rust-lang.org/reference/linkage.html#r-link.dyli...
[1]: https://bevyengine.org/learn/quick-start/getting-started/set...
[2]: For example, https://github.com/linebender/bevy_vello/issues/84
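For reference, the rustc-level mechanics are roughly this (a minimal sketch of what's documented in [0]; Bevy's dynamic_linking feature wraps up something similar for you):

```toml
# Cargo.toml of the frequently-rebuilt library crate:
# ask rustc to emit a Rust dynamic library alongside the usual rlib.
[lib]
crate-type = ["dylib", "rlib"]
```

Building the consuming binary with `RUSTFLAGS="-C prefer-dynamic"` then links the crate (and std) dynamically, so iterating on the binary skips relinking the library's code, at the cost of the exact-compiler-match caveat above.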
The proof can be done on the whole code (in memory, incremental, etc), and then the modules emitted as dynamically loadable objects.
That sounds a lot like traditional dynamic language runtimes. You kind of get that for free with Smalltalk/LISP/etc.
Linker overlays?