Windows: Interface Guidelines (1995) [pdf]

6 years ago (ics.uci.edu)

Windows 95/98/2000 and Office 95/97/2000 are in many ways my “native interface”, probably because those were the platforms I grew up using most during my late-teenage formative years in high school and in the early years of university.

I have to say that those interfaces are clunky in retrospect, but they are undeniably clear and do not place form over function as many of the modern ‘flat’ and touch-orientated interfaces seem to.

The other two graphical interfaces I remember most fondly are NeXT’s and BeOS’, which are also, probably not coincidentally, OSes I used frequently over the same period of time.

(Just to give you some context: I remember avidly reading the Windows 95 Resource Kit in the run-up to the Windows 95 release in August 1995, because I had no internet access and therefore no way of downloading and testing the many “Chicago” betas that everybody had been raving about... which is how I know that radio buttons in the interface were originally intended to be diamond-shaped rather than round.)

  • I honestly still refer to the Windows 2000 User Experience book I found online years ago whenever we have to add a new form to our old Winforms applications. Funny thing is, back when that document was written, application skinning was all the rage and I spent considerable time masking that clunky old interface.

  • In what way are those interfaces clunky?

    • I agree - I'd say clean rather than clunky. The old and (by modern standards) spartan appearance of Windows 95 applications doesn't mean the UI design is no good. Similarly, command-line interfaces can be very effective, even if they lack GUI gloss.

      Somewhat related: long live the FOX Toolkit and its hard-coded Windows 95 theme http://fox-toolkit.org/screenshots.html

The 90s were definitely a time when people thought deeply about how to make computer applications more usable. Apple also had excellent guidelines. Problem was that back then the hardware and the operating systems sucked. Now it’s the opposite. Hardware and OS are very stable now but applications are getting worse.

  • > but applications are getting worse

    So many people say this but I fundamentally disagree.

    Applications are so much more complex today, supporting more combinations of OS and input method and data storage and accessibility and display modes and whatnot.

    Applications are harder to use, yes, but because they do so much more. Your interface has to work and be responsive whether your file sits on a local disk or in the cloud, or maybe has to be synced. It needs to work with mouse and touch and a screenreader. And so on ad infinitum.

    Relative to their complexity, applications are doing just fine today I think. (Also don't forget there were so many terribly designed applications in the 90's. It's not like everybody was even remotely following established UX guidelines.)

    The same kind of clear UX standards just don't exist anymore because there are so many different apps that do so many different things, and there's no obvious best answer.

    But the good news is that applications do slowly converge on best practices. Think of how things like hamburger menus or swipe-to-refresh or pinch-to-zoom have become expected standards.

    • > Applications are harder to use, yes, but because they do so much more

      I use Microsoft Office 2000, because since then no new features have been added to Word or Excel that I care about. In fact, I couldn't even name a single feature added since then. What they did add is the ribbon instead of the toolbar, which makes it impossible to find the things you need, and a whole lot of bloat.

      On modern machines, Office 2000 opens faster than I can release the mouse button after clicking its icon.

      That is to say, I entirely disagree with your statement.

      20 replies →

    • > But the good news is that applications do slowly converge on best practices. Think of how things like hamburger menus or swipe-to-refresh or pinch-to-zoom have become expected standards.

      Hamburger menus are literal garbage with a little bit of everything and zero organization. Give me a menu bar instead. Swipe-to-refresh is completely useless for well-behaved software, and pinch-to-zoom often activates when I wanted to press a button instead.

      Mobile device features were shoved into desktop UI without regard for desktop users. Desktop users' productivity has suffered as a consequence.

      4 replies →

    • > Applications are so much more complex today, supporting more combinations of OS and input method and data storage and accessibility and display modes and whatnot.

      But do they need to be that complex? Oftentimes we are solving the same problems (most CRUD apps aren't doing anything we didn't do in the late 90s), but devs have convinced themselves that all of this abstraction and over-engineered layering is necessary. It's often not.

    • Nobody asked for applications to support "more combinations of OS, input methods and data storage and accessibility and whatever else"; developers decided to shove all that in because reasons.

      If applications are worse because they are doing so much more then they should stop doing that "much more", focus on doing one thing and leave the rest to other applications.

      > Think of how things like hamburger menus or swipe-to-refresh or pinch-to-zoom have become expected standards.

      That isn't a great example of best practice, IMO, since the only expectation I have about hamburger menus is for them to die in a fire.

      2 replies →

    • If things are so complex now, why does Google Maps constantly shift buttons and menus around without offering new functionality? To me it seems designers are just spinning their wheels. The whole data-driven UX stuff reminds me a little of Agile with its story points and velocity charts. It looks “scientific”, but if you take a closer look it’s just BS.

      4 replies →

    • "Applications are harder to use, yes, but because they do so much more."

      But do they? The move to web browser apps and the loss of rich native desktop functionality means that many web apps offer far less functionality than native desktop apps. The companies that offer these web apps sell them on their easy sharing capability and collaboration features.

      An example: thirty years ago (or more), you could use any desktop word processor and perform basic tasks like spell check, change the colour of text, choose fonts and change their size.

      Or today, in 2020, you can use Dropbox Paper with no spell check, no way to change the colour of text, and no ability to choose fonts or even alter their size. But it does run in a web browser. This is apparently progress.

      1 reply →

      > Applications are harder to use, yes, but because they do so much more. Your interface has to work and be responsive whether your file sits on a local disk or in the cloud, or maybe has to be synced.

      Why? Why does every application need to be "cloud connected"? What's wrong with having a normal desktop application that saves files to the filesystem like every application did for thirty-odd years? The only reason for this that I can discern is that it's an easy way to lock users into paying a monthly or annual recurring fee, rather than a one-time fee for the software.

      Users themselves are not asking for cloud connectivity. People understand files. They can save files, copy files to a thumbdrive (or Dropbox), and e-mail files as attachments. Files are an interface that people have figured out. We don't need to reinvent that wheel.

      > It needs to work with mouse and touch and a screenreader.

      In my experience, older applications are far more screenreader friendly than new applications. Moreover, not all visually impaired people are so visually impaired as to require screenreaders, and the more skeuomorphic designs that were favored in the '90s and 2000s were far easier for them to use than today's flat designs where one can't tell what is and is not a button. Heck, even I get confused sometimes on Android UIs and don't notice what is a plain text label and what is an element that I can interact with. I can only think that it's far worse for people who have sensory and cognitive deficits.

      As for "it needs to work with a mouse and touch", my answer is once again, "No, it does not." Mouse and touch are different enough that trying to handle both in one app is a fool's errand. Mice and trackpads are far more precise than touch, and any interface that attempts to handle both mouse and touch with a single UI ends up being scaled for the lower-precision input (touch), which results in acres of wasted space in the desktop UI.

      > The same kind of clear UX standards just don't exist anymore because there are so many different apps that do so many different things, and there's no obvious best answer.

      Of course there's no obvious best answer if you're trying to support everything from a smartwatch to a 4k monitor with a single app. So why are you trying to do that? Make separate UIs! Refactor your code into shared libraries and use it from multiple UIs, rather than attempting to make a single mediocre UI for every interface.

      > But the good news is that applications do slowly converge on best practices. Think of how things like hamburger menus or swipe-to-refresh or pinch-to-zoom have become expected standards.

      The problem is that all of these new "best practices" are far worse, from a usability perspective, than the WIMP (windows, icons, menus, pointer) paradigm that preceded them. Swipe to refresh is much less discoverable than a refresh button, and much more difficult to invoke with a mouse. Pinch-to-zoom is impossible to invoke with a mouse. Hamburger menus are far more difficult to navigate than a traditional menu bar.

      When today's best practices are worse than yesterday's best practices, I think it is fair to say that applications are getting worse.

      7 replies →

  • > a time when people thought deeply

    As opposed to "completely succumbed to metrics". Conventions like Shift for range-multiselect and Ctrl for toggle-multiselect cannot evolve from a series of A/B tests. It's not as if people never tested UI ideas back then, but testing was a tool, not the entire process.
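The Shift/Ctrl convention mentioned above is simple enough to sketch in a few lines of Python (a simplified illustration; function and variable names are hypothetical):

```python
# Sketch of the classic list multi-select convention: a plain click selects
# one item and sets the range anchor, Ctrl toggles membership, and Shift
# extends a contiguous range from the anchor.
def click(selection, anchor, index, ctrl=False, shift=False):
    if shift and anchor is not None:
        lo, hi = sorted((anchor, index))
        return set(range(lo, hi + 1)), anchor      # range replaces selection
    if ctrl:
        sel = set(selection)
        sel.symmetric_difference_update({index})   # toggle one item
        return sel, anchor
    return {index}, index                          # plain click resets anchor

sel, anchor = click(set(), None, 3)               # select item 3
sel, anchor = click(sel, anchor, 6, shift=True)   # extend to 3..6
sel, anchor = click(sel, anchor, 4, ctrl=True)    # toggle item 4 off
print(sorted(sel))  # -> [3, 5, 6]
```

The point of the comment stands: a stateful convention like this (the anchor carried between clicks) is hard to discover through isolated A/B tests.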

  • It was also a time when most people's experience with a new operating system (never mind new software) was their first; UI guidelines had different requirements because the use case was different.

    • Today most people's experience with any single app or webpage is also their first, UI-wise, because nothing is consistent with anything else anymore. So I'm not sure what this is an argument for.

  • > applications are getting worse

    Citation needed

    • "Anecdata"

      Today's rush to simplified web interfaces typically means that common keyboard scenarios have been completely forgotten. A market leader in a niche sector rebuilt its UI in Electron; its primary purpose is to selectively migrate items from one technology to another.

      While it does provide a hierarchical treeview with a checkbox next to each item, toggling a checkbox via, say, the spacebar, or selecting multiple items using Ctrl+Shift, does not function. Likewise, its non-native scrollbar does not accurately reflect your position and does not allow fine-grained repositioning.

      This is $5,000/seat software that has glowing reviews and has essentially captured the market for what it does - yes, a small market, with approximately 6-8 competitors - almost all of whom have copied its user interface and even its Electron implementation.

      1 reply →

    • - Atrocious input lag

      - Lagging menus and widgets

      - You misclick in Google Maps? Better start the route over

      - 30× the resources for something once done in under 80 MB, e.g. Discord vs. Kopete. The latter had inline LaTeX. And video previews. In 2007.

      - Invisible scrollbars with no intuitive use

      - Flat design making it impossible to distinguish a button from the background layer. Compare it with Motif, W9x, BeOS, or KDE3 with Keramik.

Some good stuff in there! I really like this one:

Forgiveness

Users like to explore an interface and often learn by trial and error. An effective interface allows for interactive discovery. It provides only appropriate sets of choices and warns users about potential situations where they may damage the system or data, or better, makes actions reversible or recoverable.

  • Yes! And if software lets a user do something that lands them in an error condition, then there should be a way to recover from that condition in the software.

    • For me, undo is probably one of the greatest inventions in computing next to the compiler and the internet.

  • Haha!

    I’m home on sick leave today, and a colleague just called because he’d unchecked some boxes in the CAM software, resulting in the license being disabled and the checkboxes disappearing.

    I’ve remoted in; there’s no obvious (and no hidden) way to get the checkboxes back, so he has to call the support line.
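The "makes actions reversible or recoverable" clause of the Forgiveness principle quoted above is classically implemented as an undo stack of inverse operations; a minimal Python sketch (class and method names are hypothetical):

```python
# Minimal "forgiveness" via an undo stack: every action pushes a closure
# that reverses it, so any change is recoverable.
class Editor:
    def __init__(self):
        self.text = ""
        self._undo = []  # stack of inverse operations

    def insert(self, s):
        pos = len(self.text)
        self.text += s
        self._undo.append(lambda: self._delete(pos, len(s)))

    def _delete(self, pos, n):
        self.text = self.text[:pos] + self.text[pos + n:]

    def undo(self):
        if self._undo:  # forgiving: undo with nothing to undo is a no-op
            self._undo.pop()()

ed = Editor()
ed.insert("Hello, ")
ed.insert("world")
ed.undo()
print(ed.text)  # -> "Hello, "
```

Note how even `undo` itself is forgiving: calling it with an empty stack does nothing rather than erroring out, which is exactly the spirit of the guideline.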

One of my fondest memories of learning to program as a kid in the late 90s was writing a Windows 98 UI clone in QBasic.

I would screenshot the start menu, buttons, window borders, and various other UI components and try to recreate them in QBasic by zooming in and inspecting all the pixels.

I had subroutines to create windows, buttons, menus, various fonts, 255 colors, and mouse support. It was coming together incredibly well given I had no idea how any of these were actually built. I had a working version of Minesweeper and a text editor.
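What zooming into those screenshots reveals is mostly a handful of flat colors plus a one-pixel bevel. A rough Python sketch of the Win95-style raised widget border (characters stand in for the highlight/face/shadow colors; the exact corner treatment is an approximation):

```python
# Rough sketch of a Win95-style raised bevel: a light edge on the top/left,
# a dark shadow on the bottom/right, and a flat face in between.
LIGHT, FACE, SHADOW = "W", "G", "D"

def raised_button(width, height):
    grid = [[FACE] * width for _ in range(height)]
    for x in range(width):
        grid[0][x] = LIGHT            # top edge highlight
        grid[height - 1][x] = SHADOW  # bottom edge shadow
    for y in range(height):
        grid[y][0] = LIGHT            # left edge highlight
        grid[y][width - 1] = SHADOW   # right edge shadow
    grid[height - 1][0] = SHADOW      # bottom-left corner joins the shadow
    return ["".join(row) for row in grid]

for row in raised_button(8, 4):
    print(row)
# -> WWWWWWWD
#    WGGGGGGD
#    WGGGGGGD
#    DDDDDDDD
```

Swapping LIGHT and SHADOW gives the "pressed" look, which is why a whole widget set could be drawn from so few subroutines.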

  • > One of my fondest memories of learning to program as a kid in the late 90s was writing a Windows 98 UI clone in QBasic.

    I did the same, although trying to create a Unix GUI (in a purely visual “I’ve seen this in the movies” sense), and I did it in Amos Basic.

    Needless to say it wasn’t a great success, but it provided me with the foundation for making a couple of neat-looking applications which actually did useful things (for me).

    It was slow as heck, but I had great fun doing it.

  • That's funny, I did the exact same thing, although I didn't make it as far as you. I had a working mouse cursor (reading the mouse data directly from the serial port) and buttons. At that age, I didn't know about subroutines and had gotos all over the place.

  • It was a common rite of passage at the time. I did something very similar, first cloning Borland's TUI, then Win3.1.

    The nice thing about Windows of that era was that its widgets and their default color scheme were designed to still work with just the original 16 EGA colors (since that was the baseline for video cards back then). To be even more precise, everything other than the window title and selection was done in 4 colors: white, black, and two shades of gray. Window title/selection added a fifth. Things like selection rectangles and resizable window borders were drawn using XOR. All of this was readily accessible in a DOS app, pretty much regardless of the language.
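The XOR trick mentioned above works because XOR-ing the same mask twice is the identity (`x ^ m ^ m == x`), so a rubber-band rectangle can be drawn and erased without saving the pixels underneath. A tiny Python illustration (the framebuffer and mask values are illustrative):

```python
# XOR rubber-banding: drawing the same outline twice restores the original
# pixels, so no backing store is needed.
MASK = 0x0F  # illustrative color-inverting mask

def xor_hline(fb, y, x0, x1):
    for x in range(x0, x1):
        fb[y][x] ^= MASK

fb = [[0x07] * 8 for _ in range(4)]  # uniform grey background
before = [row[:] for row in fb]

xor_hline(fb, 1, 2, 6)   # draw one edge of the selection rectangle
assert fb != before      # visibly changed
xor_hline(fb, 1, 2, 6)   # draw it again to erase
assert fb == before      # background restored exactly
print("restored:", fb == before)  # -> restored: True
```

This is why old selection rectangles and drag outlines always appeared in those odd inverted colors: the color was whatever XOR-ing the mask with the background happened to produce.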

  • Did you build a GUI toolkit or just hard-code everything? I remember creating a GUI paint program in Turbo Pascal (the only language I could get my mouse to work in), and I quickly got in over my head because I didn't abstract anything out.

    • It was abstracted out, but I don't know if it qualified as a toolkit. I had subroutines for creating the various components and placing them anywhere on the screen. I don't remember how I handled the events. One of my biggest regrets is losing all my work from around that time.

The best Windows UX, except for one thing that to this day I never liked: minimized windows in MDI applications having a "button" form instead of an icon form. I always found Windows 3.1's approach of using icons much better. I guess they tried to mimic minimizing top-level windows to the taskbar, but a real inner taskbar would work better IMO - mIRC did it best there - and be functionally closer to what most applications do nowadays with tabs (but without losing the ability to have unmaximized windows, like opening multiple views of an image side by side at different zoom levels in an image editor, or just having multiple documents visible at the same time instead of being forced to view only one).

  • Opera had pretty much the perfect MDI interface - a tab bar mimicking the taskbar, but otherwise all MDI features were still there, like resizable windows.

    And hey, MDI is still there, and often still the easiest way to organize things in a desktop Windows app.

90s-era HCI research was so excellently focused on details. Apple's Human Interface Guidelines from 1993 should also be mandatory reading for anyone building human-facing applications: https://woofle.net/impdf/HIG.pdf

In some ways interfaces were richer at that time. I can't wait for the flat interface fad to go away and some older approach to reemerge.

  • Many seem to think like us on this front (at least in the HN comments). Now, what can we do concretely besides implementing those concepts in our own apps?

    • Convince designers that people will hire them if they see designs like that in their portfolios. As far as I can tell, designers favor whichever design will produce the screenshots most likely to make their next job search easier, actual usability or cost of implementation be damned - which makes perfect sense.

I actually worked on the follow-up to this book for the release of Windows XP at Microsoft. You can find it on Amazon if you're interested. https://www.amazon.com/Microsoft-Windows-Experience-Professi...

It was written largely by Tandy Trower (inventor of Clippy) and has many similarities to Apple's original Human Interface Guidelines, though it's very different too.

  • Yeah, I've been consulting this a lot lately. I've been writing a data-dense desktop application and trying to make it as good for keyboard-only users as it is for mouse users.

    I figure the closer I get to that, the easier the port to gui.cs will be.

    Another good UX book I found was "The Definitive Guide to the .NET Compact Framework" by Larry Roof and Dan Fergus. Yes, it had mostly back-end stuff, but the UX concepts taught the reader to consider his audience.

    Is the person using your app likely to be using it in a dock hooked to a full keyboard like you, Mr. Dev?

    No, he will be standing next to a cellphone tower wearing gloves and trying to get the Falcon x3 out of the sunlight enough to see what the screen is showing him.

    Okay then: make the buttons big enough for a gloved finger to mash, and use combo boxes everywhere you can stand it. So what if it's ugly - if it's functional and the user never has to use the SIP, then fine.

  • I've had some wonky PDF version of this for years; I don't know why it never occurred to me that this was a book I could just buy. I can't wait to have a physical copy to reference!

I think in some regards, classic GUI interfaces peaked in about 2002/2003 with KDE2. The influence of Win95/98/NT4 on KDE was definitely there, but they took it in its own unique direction. Definitely some inspirations from NeXT as well.

I had a really nice FreeBSD+xfree86+KDE setup at that time. The closest I can come now is something based on XFCE4.

I'm mostly surprised by the number of pen input elements 95 had. I was still a kid at the time so I didn't have any exposure to more advanced hardware. How common was it?

  • Windows 95 was developed at a time when people thought that Pen computing would be the next big thing so they put in a lot of pen stuff that eventually was barely used. It always stayed a niche.

    • Maybe now is the time for it to come back? Windows 2-in-1 devices with a pen are magical these days, but there are far too few well-designed pen-oriented applications.

      (I'm worried this won't improve until web folks fix the broken pointer events APIs, and even then it'll only lead to proliferation of pen-oriented Electron apps.)

      1 reply →

  • A special build of Windows 3.1 called "Windows for Pen Computing" was made in the early 90s for very early tablet PCs. I'm guessing they rolled that stuff into the mainline build of 95.

Wow, "user centered". It was a refreshing read. Quite contrary to the modern attitude: it looks purty to me, and if it's not functional the rest can go eff themselves.

Now I see the Control Panel icon is a hammer and screwdriver, not the hot and cold water taps I always thought it was.

Thank you for posting this! I grew up in this era of computing and I’m working on recreating it for myself.

This looks like a fantastically good resource for inspiration :)

I just realized I've had the hard copy of this book on my bookshelf for 25 years!