Comment by throwaway2048
7 years ago
Most of these things are coincidental byproducts of how Windows (NT) is designed, not carefully envisioned trade-offs that make Windows Ready for the Desktop (tm).
For some counterexamples of how those designs make things harder and more irritating, look at file locking and how essentially every Windows update forces a reboot; that is pretty damn user-unfriendly.
Even without file locking, how would live updates work when processes communicate with each other and potentially share files/libraries? I feel like file locking isn't really the core problem here.
Everything that is running keeps using the old libraries. The directory entries for the shared libraries or executables are removed, but as long as a task holds a live file descriptor, the actual shared library or executable is not deleted from the disk. New processes will have the dynamic linker read the new binaries for the updated libraries. Unless the ABI or API somehow changes during the update (and it doesn't; big updates usually bump the library version), things work pretty fine.
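(For anyone who hasn't seen this POSIX behavior in action, here's a minimal sketch in Python; the "library" file and its contents are made up for illustration. Unlinking a file only removes the directory entry, so an open descriptor keeps reading the old data while a new file appears at the same path.)

```python
import os
import tempfile

# Create a file and open it, standing in for a process that has a
# shared library open.
path = os.path.join(tempfile.mkdtemp(), "libexample.so")
with open(path, "w") as f:
    f.write("old library contents")

fd = open(path, "r")   # the "running process" holds a live descriptor
os.unlink(path)        # the package manager removes the directory entry

# The inode and its data survive as long as a descriptor references them...
assert fd.read() == "old library contents"

# ...while the updated file can be installed at the same path, which is
# what newly started processes (via the dynamic linker) will see.
with open(path, "w") as f:
    f.write("new library contents")
assert open(path).read() == "new library contents"

fd.close()             # last reference gone; the old data is freed
```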
Do they work fine though?
1. On the one hand I see folks accessing files over and over by paths/names, and on the other hand they demand features that would break unless they switched their fundamental approach to handles/descriptors. Which is it? You can't claim descriptors would fix a problem and simultaneously insist on path-based approaches being perfectly fine. Most programs use paths to access everything (and this goes beyond shared libraries) and assume files won't have changed in between. You can blame it on the program not using fds if that makes you feel better, but the question is how do you magically fix this for the end user?
2. Do you actually see this working smoothly on a Linux desktop environment in practice, or do you just mean it's possible in a theoretical sense? Do you not e.g. get errors/crashes after an apt-get upgrade that upgraded a package your desktop environment depended on (say GTK or whatever)? That happens to me frequently (and I'm practically guaranteed to see a problem if I open a new window in some program in the middle of an update), and it scares me what might be getting corrupted along the way -- it makes me wish the system would just reboot instead of crashing and spewing errors.
You can always restart processes; on Windows it is fundamentally impossible to overwrite a running DLL or EXE file. So, for example, if some services are needed to apply updates, they can never be updated without a reboot.
Yes, I'm aware of how Windows file locking works -- in fact, you can sometimes rename running executables; it depends.
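(The rename trick is how some Windows updaters sidestep the lock: you often can't delete or overwrite a running image, but you can rename it aside and write the new version at the original path. A rough sketch in Python; the file names and contents are illustrative, and the "running" image is only simulated.)

```python
import os
import tempfile

# Rename-aside update pattern: move the old image out of the way, then
# install the new one at the original path.
appdir = tempfile.mkdtemp()
exe = os.path.join(appdir, "app.exe")
with open(exe, "w") as f:
    f.write("old version")      # pretend this image is currently running

os.replace(exe, exe + ".old")   # a rename can succeed even while in use
with open(exe, "w") as f:
    f.write("new version")      # the update lands at the original path

# The next launch picks up the new image; the .old file gets cleaned up
# later, once nothing maps it.
assert open(exe).read() == "new version"
assert open(exe + ".old").read() == "old version"
```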
Your solution to rebooting the system being user-unfriendly is... restarting processes? How would that be so much more user-friendly? It's almost the same from a user's standpoint; you might as well actually lock down the system and reboot to make sure the user doesn't try to mess with the system during the update.
And on top of all that, if you're actually willing to kill processes, then they won't be locking files anymore in the first place, so now you can update the files normally...
So yeah, I really don't understand how file locking is the actual problem here, despite Linux folks always trying to blame the lack of live updates on it. I know I for one easily get errors after updating libraries on e.g. Ubuntu, making programs or the desktop constantly crash until I reboot... if anything, that's far less user-friendly.
>So for example if some services are needed to apply updates, they can never be updated without a reboot
I wouldn't say never. Hotpatching was introduced in Windows Server 2003[1]. However, it's seldom available for Windows Update patches, and even when it is available, you have to opt in (using a command-line flag) to actually use it.
[1] https://jpassing.com/2011/05/01/windows-hotpatching/
IIRC this is because, under memory pressure, pages of a mapped executable can be discarded and re-read from their existing on-disk file, rather than taking up extra space in swap.