Comment by quelsolaar
7 years ago
I think this is a very good example of how Windows differs from Linux in its goals and design. I have a feeling this is the reason Linux has had a hard time catching on on the desktop. It's easy to complain about a slow filesystem, but Microsoft lives in a different world, where other priorities exist. For someone building a server running a database serving loads of people, Linux is a no-brainer: you can pick the parts you want, shrink-wrap it and fire it up without any fat. On a desktop machine, you want to be able to update drivers in the background without rebooting, you want virus scanners, and you want to have a driver ready the moment the user plugs in a new device. Both Windows and Linux are for the most part very well engineered, but with very different priorities.
I’m very confused by your post. You start off talking about desktop machines but NT was actually engineered for servers and then later ported to the desktop. You then describe a bunch of features Linux does better than Windows (eg updating drivers without a reboot).
I think a more reasonable argument to make is just that Windows is engineered differently from Linux. There are definitely advantages and disadvantages to each approach, but ultimately it's a question of personal preference.
NT is engineered for a different category of servers, though - it's a workgroup server first (originally its chief competitor was NetWare), and a Web/Internet server second. That drives a different set of priorities.
For example, as someone elsewhere in the comments pointed out, NT does file access in a way that works very well when accessing network shares. That's a pretty core use case for Windows on business workstations, where it's common for people to store all the most important files they work with on a network share, for easier collaboration with other team members.
NT was architected to handle high-end workstations from day one — there’s a reason why running the GUI was mandatory even when the resource costs were fairly substantial.
Check out e.g. https://en.wikipedia.org/wiki/Windows_NT_3.1 for the history of that era. The big selling point was that your business could code against one API everywhere, rather than having DOS PCs and expensive Unix, VAX, etc. hardware which was completely different and only a few people on staff were comfortable with.
OS/2 was a high-end desktop OS, but NT diverged a little and took some heavy design principles from VMS (hence its name, WNT), and was thus pivoted towards back-office systems rather than desktop usage.
At that time Microsoft's server offering was a UNIX platform, Xenix, but it was becoming clear that there needed to be a platform to serve workstations that wasn't a full-blown mainframe. So Microsoft handed Xenix off to SCO to focus on its collaboration with IBM; the intent there was always to build something more than just a high-end workstation OS. And given it was intended to be administered by people who were Windows users rather than UNIX greybeards (like myself), it clearly made sense to make the GUI a first-class citizen; but that doesn't mean it was sold as a desktop OS.
2 replies →
Drivers in Linux live in the kernel. Whenever the kernel is updated, a reboot is required (in most distros). Hence your assertion that Linux updates drivers without a reboot better than Windows does is questionable.
You only need to restart if there has been a kernel update (on any platform, not just “some distros”). For regular driver updates within the same kernel ABI you can use modprobe to unload and reload the drivers. This works because while drivers share the same kernel memory space (as you alluded to), they aren’t (generally) part of the same kernel binary. They normally get bundled in the same compressed archive but are separate files with a .ko extension.
This isn’t a system that is unique to Linux either. Many UNIX platforms adopt similar mechanisms and Windows obviously also has its drivers as separate executables too.
It just so happens that rebooting is an easier instruction to give users than “unload and reload the device driver”, which might also be dangerous for some devices to “hot-unload” while in use. So a reboot tends to be common practice on all platforms. But at least on Linux it’s not mandatory like it is on Windows (for reasons other than the ability to reload drivers on a live system).
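The reload cycle described above can be sketched as below. The module name `e1000e` (a common Intel NIC driver) is just an illustrative example; substitute your own device's driver, and note that the actual unload/reload needs root and can be unsafe for an in-use device, so those commands are shown commented out:

```shell
# Sketch only: "e1000e" is an example module name; pick one that isn't
# in active use before trying this for real.
#
#   sudo rmmod e1000e        # unload the currently loaded driver
#   sudo modprobe e1000e     # load the updated .ko for the running kernel
#
# The per-kernel tree that modprobe reads the updated .ko files from:
ls "/lib/modules/$(uname -r)" 2>/dev/null | head -n 5
```

Because modprobe resolves modules against `/lib/modules/$(uname -r)`, a driver update that only replaces `.ko` files there takes effect on the next load, with no reboot needed.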
3 replies →
> On a desktop machine, you want to be able to update drivers in the background without rebooting, you want virus scanners, and you want to have driver ready the moment the user plug's in a new device.
With the exception of the virus scanner these actually sound like arguments in favour of Linux, in my experience.
(Although there are also excellent virus scanners available for Linux anyway)
I’m pretty confused by this post. What would you identify their priorities as?
Regardless, as someone who isn’t a fan of Windows (though not enough to make learning a Unix alternative worthwhile), I would argue it’s the polish that makes the experience worth it, not some better-engineered experience. For instance: working with variable DPI is trivial on Windows, whereas it still seems years off on Linux. Same with printers and notifications and internet and almost everything. These aren’t feats of engineering per se, but they do indicate forethought I deeply appreciate when I do use Windows.
I would hesitate to ascribe too much purpose to everything you see. Microsoft is a huge company with conflicting political factions and a deep ethos of maintaining backwards compatibility so there are plenty of things which people didn’t anticipate ending up where they are but which are risky to change.
One big factor is the lost era under Ballmer. Stack ranking meant that the top n% of workers got bonuses and the bottom n% were fired, and management reportedly heavily favored new features over maintenance. Since the future was WinFS and touching something core like NTFS would be a compatibility risk, you really wouldn’t have an incentive to make a change without a lot of customer demand.
As a C# dev, I am constantly annoyed that Windows updates and sometimes installs require reboots or stopping all user activity, while I’ve never had to reboot or block during an upgrade on Ubuntu.
To be fair, a lot of Linux updates require a reboot or at least a logout to properly take effect, too. Windows is just very aggressive about forcing you to upgrade and reboot, which does undeniably have security benefits when you consider that Windows has a lot of non-technical users and a huge attack surface. At least they have relaxed it a bit, the frequent forced reboots caused me some serious problems on a Windows machine I had controlling a CNC machine.
Windows also requires rebooting for the actual upgrading process. A Linux update might need a reboot to take effect, but the reboot is still a normal reboot; it won’t take longer because it’s trying to install something.
Both Windows and macOS suffer from this. Big updates to both systems can render the computer unusable for 30 minutes while they are installing.
This is true. Fedora now only has a Reboot and Update button in the GNOME Software GUI because some software like Firefox and some GNOME components crash if you update them while they're running (although this seems to happen more often with Wayland than Xorg for some reason). At least Linux and the BSDs give you a choice whether to do offline or online updates.
2 replies →
Most of these things are coincidental byproducts of how Windows (NT) is designed, not carefully envisioned trade-offs that are what make Windows Ready for the Desktop (tm).
For some counterexamples of how those designs make things harder and more irritating, look at file locking and how essentially every Windows update forces a reboot; that is pretty damn user-unfriendly.
Even without file locking, how would live updates work when processes communicate with each other and potentially share files/libraries? I feel like file locking isn't really the core problem here.
Everything that is running keeps using the old libraries. The directory entries for the shared libraries or executables are removed, but as long as a task holds a live file descriptor, the actual shared library or executable is not deleted from the disk. New processes will have the dynamic linker read the new binaries for the updated libraries. Unless the ABI or API somehow changes during the update (and they don't; big updates usually bump the library version), things work fine.
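A minimal POSIX demonstration of that behavior, using a temp file as a stand-in for a shared library that a running process holds open (the file name and contents are arbitrary):

```python
import os
import tempfile

# A temp file stands in for a shared library a running process has open.
fd, path = tempfile.mkstemp()
os.write(fd, b"old library contents")
os.lseek(fd, 0, os.SEEK_SET)

# The "update": the package manager unlinks the directory entry before
# installing the new file. The old inode survives while any fd is open.
os.unlink(path)
assert not os.path.exists(path)   # new opens no longer see the old file...
old = os.read(fd, 64)             # ...but the existing holder still reads it
assert old == b"old library contents"
os.close(fd)                      # last reference dropped; space is reclaimed
```

This is exactly why `ls -l /proc/<pid>/fd` on Linux shows entries marked "(deleted)" after a library upgrade: the process keeps executing the old code until it restarts.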
6 replies →
You can always restart processes; on Windows it is fundamentally impossible to overwrite a running DLL or EXE file. So, for example, if some services are needed to apply updates, they can never be updated without a reboot.
8 replies →
> On a desktop machine, you want to be able to update drivers in the background without rebooting,

Which is exactly what Linux does and Windows doesn't.
Windows can update many drivers without rebooting - even graphics drivers (try that with Linux and X!).
Yeah, it's amazing there is just a brief flash and everything is up and running again.
When I had slightly more unstable drivers, Windows could recover from that as well. The driver would crash, screen goes black, and then back up and running again without most apps noticing (excluding games and video playback).
Indeed, I’ve updated many a driver on Windows (including graphics, as you mention) without a reboot required. I’ve always needed a reboot to do the equivalent kind of updates under Linux.