Comment by Animats

9 days ago

Agreed. Windows 2000 Server through Windows 7 were peak Microsoft operating systems.

By Windows 2000 Server, they finally had the architecture right, and had flushed out most of the 16-bit legacy.

The big win with Windows 7 was that they finally figured out how to make it stop crashing. There were two big fixes. First, the Static Driver Verifier. This verified that kernel drivers couldn't crash the rest of the kernel. First large scale application of proof of correctness technology. Drivers could still fail, but not overwrite other parts of the kernel. This put a huge dent into driver-induced crashes.
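
To make the "verifier" claim concrete: a rule-based driver verifier checks API-usage discipline on every code path without ever running the driver. Below is a hedged sketch, in WDM-style C++ that only builds against the WDK, of the kind of bug such a tool flags. The function, the lock, and the scenario are my own illustration; only KeAcquireSpinLock/KeReleaseSpinLock are the documented kernel calls, and nothing here is an actual SDV rule.

    // Illustrative only: a driver-style function that leaks a spin lock on an
    // error path. A static verifier that tracks acquire/release pairing across
    // all paths can report this without executing the driver.
    #include <ntddk.h>

    static KSPIN_LOCK g_Lock;     // hypothetical driver-global lock
    static LONG g_Counter;

    NTSTATUS UpdateCounter(LONG value)
    {
        KIRQL oldIrql;
        KeAcquireSpinLock(&g_Lock, &oldIrql);

        if (value < 0) {
            // BUG: early return while still holding the spin lock (and at
            // raised IRQL). This is the class of violation a rule-based
            // verifier reports as "lock acquired but not released".
            return STATUS_INVALID_PARAMETER;
        }

        g_Counter = value;
        KeReleaseSpinLock(&g_Lock, oldIrql);
        return STATUS_SUCCESS;
    }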

Second was a dump classifier. Early machine learning. When the system crashed, a dump was sent to Microsoft. The classifier tried to bring similar dumps together, so one developer got a big collection of similar crashes. When you have hundreds of dumps of the same bug, locating the bug gets much easier.
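
As a toy illustration of "bringing similar dumps together": bucket each crash report by a failure signature, so one developer gets every instance of one bug. The signature fields below (faulting module, bugcheck code, offset) are an assumption about what "similar" means, not a description of Microsoft's actual classifier; the code is plain standard C++.

    // Group crash reports whose failure signatures match, then report bucket
    // sizes; the biggest bucket is the highest-impact bug to assign first.
    #include <cstdint>
    #include <iostream>
    #include <map>
    #include <string>
    #include <vector>

    struct CrashReport {
        std::string faultingModule;   // e.g. a graphics or anticheat driver
        uint32_t    bugcheckCode;     // e.g. 0x0A = IRQL_NOT_LESS_OR_EQUAL
        uint32_t    faultOffset;      // offset of the faulting instruction
    };

    int main() {
        std::vector<CrashReport> incoming = {
            {"nv4_disp.dll", 0x0000000A, 0x1a2b},
            {"nv4_disp.dll", 0x0000000A, 0x1a2b},
            {"anticheat.sys", 0x0000007E, 0x0042},
        };

        // Key each report by its signature; identical signatures share a bucket.
        std::map<std::string, std::vector<CrashReport>> buckets;
        for (const auto& r : incoming) {
            std::string key = r.faultingModule + "!" +
                              std::to_string(r.bugcheckCode) + "+" +
                              std::to_string(r.faultOffset);
            buckets[key].push_back(r);
        }

        for (const auto& [key, reports] : buckets)
            std::cout << key << ": " << reports.size() << " dumps\n";
    }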

Between both of those, the Blue Screen of Death mostly disappeared.

I agree, with one big exception: the refocus on COM as the main Windows API delivery mechanism.

It is great as an idea; a pity that Microsoft keeps failing to deliver developer tooling that actually makes COM fun to use instead of something we have to endure.

From OLE 1.0's pages-long infrastructure in 16-bit Windows, via ActiveX, OCX, MFC, ATL, WTL, .NET (RCW/CCW), and WinRT with .NET Native and C++/CX, to C++/WinRT, WIL, nano-COM, .NET 5+ COM, ...

Not only do they keep rebooting how to approach COM development; in terms of Visual Studio tooling, each iteration is worse than the last, never reaching feature parity with the previous one, only to be dropped after the team's KPIs change focus.
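
For readers who never had to endure it, this is roughly the baseline that keeps getting wrapped and re-wrapped: plain C++ against raw COM, with manual initialization, HRESULT checks, and reference counting. Only documented Win32/COM calls are used; the choice of the ShellLink coclass is mine, just a convenient example.

    // Hand-written COM client, no wrappers or tooling.
    // Link against ole32.lib (and uuid.lib for the CLSID).
    #include <windows.h>
    #include <shobjidl.h>   // IShellLinkW
    #include <shlguid.h>    // CLSID_ShellLink

    int main() {
        HRESULT hr = CoInitializeEx(nullptr, COINIT_APARTMENTTHREADED);
        if (FAILED(hr)) return 1;

        IShellLinkW* link = nullptr;
        hr = CoCreateInstance(CLSID_ShellLink, nullptr, CLSCTX_INPROC_SERVER,
                              IID_PPV_ARGS(&link));
        if (SUCCEEDED(hr)) {
            link->SetPath(L"C:\\Windows\\notepad.exe");  // do some work
            link->Release();   // manual ref-counting: forget this and you leak
        }

        CoUninitialize();      // must balance the CoInitializeEx above
        return 0;
    }

Every wrapper in the list above (ATL, the .NET RCWs, C++/WinRT, WIL, and so on) exists largely to hide this boilerplate, which is why each tooling reboot stings.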

When they made the Hilo demo for Windows Vista and later Windows 7 developers, with such a great focus on being back on COM after how Longhorn went down, better tooling would have been expected.

https://devblogs.microsoft.com/cppblog/announcing-hilo/

First, drivers can crash the rest of the kernel in Windows 7: people playing games during the Windows 7 days should remember plenty of blue screens citing either graphics drivers (mainly ATI/AMD) or kernel anticheat software. Second, a “proof of correctness” has never been produced for any kernel; even the seL4 team does not call their proof a proof of correctness.

  • Not the operating system:

    https://en.m.wikipedia.org/wiki/Driver_Verifier

    • Driver Verifier is a tool that developers can choose to use for testing and debugging purposes.

      It's not used on production machines and it does nothing to prevent a badly written driver from crashing the kernel.

      3 replies →

    • My PC used to regularly crash Windows 10 because of a buggy Nvidia driver. Eventually they fixed the bug, but until then I had a crash every few days.

    • From your own link:

      "Driver Verifier is not normally used on machines used in productive work. It can cause ... blue screen fatal system errors."

  • I've lost less time to bluescreens than I have to forced updates and to sidestepping value-add nonsense like OneDrive and Edge.

  • They didn't "prove the kernel is correct", they built a tool to prove that a single driver maintains an invariant throughout execution.

    • It does not prove that the driver will not crash the kernel. It should be fairly easy to find a driver that passed QA testing under that tool, yet still crashed the kernel. You just need one of the many driver developers for Microsoft Windows to admit to having used that tool and fixed a crash bug that it missed, and you have an example. Likely all developers who have used that tool can confirm that they have had bugs that the tool missed.

I think it ended at the first "ribbon" UI, which was in the 2003 era, but not all products ate the dirt at once.

  • Yeah the ribbon drove me to LibreOffice and Google Docs and I haven’t been back.

    Windows 2000 Pro was the peak of the Windows UX. They could not leave well enough alone.

  • The original ribbon sucked but with the improvements it's hard to say it's generally a bad choice.

    The ribbon is a great fit for Office style apps with their large number of buttons and options.

    Especially after they added the ability to minimize, expand on hover, or keep expanded (originally this was the only option), the ribbon has been a great addition.

    But then they also had to go ahead and dump it in places where it had no reason to be, such as Windows Explorer.

    • > The ribbon is a great fit for Office style apps with their large number of buttons and options.

      To me this is the exact use case where it fails. I find it way harder to parse as it's visually intense (tons of icons, buttons of various sizes, those little arrows that are sometimes in group corners...).

      Office 2003 had menus that were at most 20-25 entries long with icons that were just the right size to hint what the entries are about, yet not get in the way. The ribbon in Office 2007 (Word, for example) has several tabs full of icons stretching the entire window width or even more. Mnemonics were also made impractical as they dynamically bind to the buttons of the currently visible tab instead of the actions themselves.

    • > The original ribbon sucked but with the improvements it's hard to say it's generally a bad choice.

      This is also what I hear about GNOME. "OK, yes, GNOME 3.x was bad, but by GNOME 40 it's fine."

      No, it's not. None of my core objections have been fixed.

      Both ribbons and GNOME are every bit as bad as they were in the first release, nearly 20 years ago.

      1 reply →

    • Close to 20 years later, people still complain about the ribbon. (1)

      I think that says something about it.

      --

      1. And not just "grumble, grumble... get off my lawn..." Many of its controls are at best obscure. It hides many of them away. It makes them awkward to reach.

      Many new users seem as clueless as, or even more so than, pre-existing customers who experienced the rug pull. At least pre-ribbon users knew there was certain functionality they just wanted to find.

      (And I still remember how MS concurrently f-cked with Excel shortcut keys. Or seemed to have, when I next picked Excel up after a couple year hiatus from being a power user.)

    • > The original ribbon sucked but with the improvements it's hard to say it's generally a bad choice.

      It is a terrible choice. Always have to search for items.

    • For me, peak UX was before the ribbon: just menus and customizable toolbars. I didn't need anything more to be productive. Nowadays I can hardly use the Office suite; its feature discoverability is essentially zero for me.

  • I never understood the issue with the ribbon UI. Especially for Office it was great, so much easier to find stuff.

    • > I never understood the issue with the ribbon UI. Especially for Office it was great, so much easier to find stuff.

      1. I don't need to find stuff.

      I knew where stuff was.

      2. I read text. I only need menus. I don't need toolbars etc. and so I turn them all off.

      I cannot read icons. I have to guess. It's like searching for 3 things I need in an unfamiliar supermarket.

      3. Menus are very space efficient.

      Ribbons hog precious vertical space. This is doubly disastrous on widescreens.

      4. I am a keyboard user.

      I use keys to navigate menus. It's much faster than aiming at targets with the mouse and I don't need to look. The navigation keys don't work any more.

      Ribbons help those who don't know what they are doing and do not care about speed and efficiency.

      They punish experts who do know, don't search, don't hunt, and customise themselves and their apps for speed and efficient use of time and screen space.

      17 replies →

    • My big problem with it is that it’s stateful. A menu or toolbar admits muscle memory - since you get used to where a certain button or option is and you can find it easily. With ribbons you need to know if you’re in the right submenu first.

      Though personally, I’m increasingly delighted by the Quicksilver-style palette/action tools that VS Code and IntelliJ use for infrequently used options. Just hit the hotkey and type, and the option you want appears under the Enter key.

    • Those of us working in jobs use the same couple of functions in our office products. We don't really go and find features.

  • > I think it ended at the first "ribbon" UI, which was in the 2003 era,

    Nah. 2007 era.

    Office 2007 introduced the ribbon to the main apps: Word, Excel, and I think PowerPoint. In the next version it was added to Outlook and Access, IIRC.

    I still use Word 2003 because it's the last pre-Ribbon version.

  • I don't know quite when it started to happen, but changing and/or eliminating the default Office keyboard shortcuts in the last few iterations has really irked me.

Another often-underappreciated advancement was the UAC added in Vista. People hated it, but viruses and rootkits were a major problem for XP.

  • People hated it because it was all over the place. Change this or that setting? UAC. Install anything? UAC. Then you'd get a virus in a software installer, confirm the UAC as usual, and it wouldn't stop a thing.

  • It is more of a warning than an actual security mechanism though. Similar to Mark of the Web.

    • No, in XP you were essentially logged in as root 24/7 (assuming it was your machine), and any program -- including your browser -- was running as root too. I remember watching a talk about how stupidly easy it was to write rootkits for XP. "Drive-by viruses" were a thing, where a website could literally install a rootkit on your machine just by visiting it (usually taking advantage of some exploit in flash, java, or adobe reader). Vista flipped it, by disabling the admin account, so that in order to do something as admin you needed to "sudo" first. That alone put a stop to tons of viruses.

      1 reply →

    • > It is more of a warning than an actual security mechanism though. Similar to Mark of the Web.

      It's both a warning and an actual security mechanism.

      Obviously its most visible form is triggered when an application tries to write to system-level settings or important parts of the filesystem, and also when various heuristics decide that the application is likely to want to do so (IIRC "setup.exe" and "install.exe" at the root of a removable disk are assumed to need elevation).

      Because Microsoft knew that a lot of older software wrote to system areas just because it predated Windows being a multi-user system, UAC also provided a partial sandboxing mechanism where writes to these areas could be redirected to user-specific folders.

      The warning was also a tool in itself, because the fact that it annoyed users finally provided the right kick in the ass to lazy software developers who had no need to be writing to privileged areas of the system and could easily have run under a limited user, but hadn't bothered to because most non-corporate NT users were owners and thus admins, and most corporate environments would just accept "make users local admin". Part of the reason we saw UAC prompts a lot less in later versions of Windows is that Microsoft tweaked some things to make certain settings per-user and reorganized certain dialogs so unprivileged settings could be accessed without elevation, but a lot of it is that applications which had been doing it wrong for as long as NT had existed finally got around to changing their default paths. (See the elevation-check sketch after this thread.)

    • It got old people to call their grandsons when an image or .doc file asked for permissions, though, which at the time was a huge help.
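
Following up on the elevation discussion above: under UAC even an administrator's processes normally run with a filtered token, and a program can ask the OS whether it has actually been elevated. A minimal sketch using only documented Win32 calls (OpenProcessToken, GetTokenInformation); it reports elevation status and nothing else.

    // Reports whether this process is running with an elevated token.
    #include <windows.h>
    #include <iostream>

    int main() {
        HANDLE token = nullptr;
        if (!OpenProcessToken(GetCurrentProcess(), TOKEN_QUERY, &token))
            return 1;

        TOKEN_ELEVATION elevation = {};
        DWORD size = sizeof(elevation);
        BOOL ok = GetTokenInformation(token, TokenElevation,
                                      &elevation, sizeof(elevation), &size);
        CloseHandle(token);
        if (!ok) return 1;

        std::cout << (elevation.TokenIsElevated
                          ? "Elevated\n"
                          : "Not elevated (filtered/limited token)\n");
        return 0;
    }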

> This verified that kernel drivers couldn't crash the rest of the kernel.

How did CrowdStrike end up crashing Windows, though?

  • > Static Driver Verifier

    Well, the CrowdStrike driver isn't (wasn't?) static. It loaded a file that CrowdStrike changed with an update.

    Most drivers pass through rigorous verification on every change. But CrowdStrike is (was?) allowed to change their driver's behavior whenever they want by designing it to load a file.

    • The EU forced MS to allow stuff like CrowdStrike as part of an anti-trust settlement.

      MS tried to use the incident to get the regulators to waive the requirement.

      7 replies →

> The big win with Windows 7 was that they finally figured out how to make it stop crashing.

Changing the default system setting so the system automatically rebooted itself (instead of displaying the BSOD until manually rebooted) was the reason users no longer saw the BSOD.
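
For reference, the setting being described is, as far as I know, the AutoReboot value under the CrashControl registry key (the "Automatically restart" checkbox in Startup and Recovery). A small read-only sketch with the documented registry API; everything beyond the path and value name is illustration.

    // Reads the current auto-restart-on-bugcheck setting. Link Advapi32.lib.
    #include <windows.h>
    #include <iostream>

    int main() {
        DWORD autoReboot = 0;
        DWORD size = sizeof(autoReboot);
        LSTATUS rc = RegGetValueW(
            HKEY_LOCAL_MACHINE,
            L"SYSTEM\\CurrentControlSet\\Control\\CrashControl",
            L"AutoReboot",
            RRF_RT_REG_DWORD, nullptr, &autoReboot, &size);

        if (rc == ERROR_SUCCESS)
            std::cout << "AutoReboot = " << autoReboot
                      << (autoReboot ? " (reboot after a bugcheck)"
                                     : " (stay on the blue screen)") << "\n";
        else
            std::cout << "Could not read the value, error " << rc << "\n";
        return 0;
    }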

> First large scale application of proof of correctness technology.

Curious about this. How does it work? Does it use any methods invented by Leslie Lamport?