Comment by quanticle

6 years ago

> Applications are harder to use, yes, but because they do so much more. Your interface has to work and be responsive whether your file sits on a local disk or in the cloud, or maybe has to be synced.

Why? Why does every application need to be "cloud connected"? What's wrong with having a normal desktop application that saves files to the filesystem like every application did for thirty-odd years? The only reason for this that I can discern is that it's an easy way to lock users into paying a monthly or annual recurring fee, rather than a one-time fee for the software.

Users themselves are not asking for cloud connectivity. People understand files. They can save files, copy files to a thumbdrive (or Dropbox), and e-mail files as attachments. Files are an interface that people have figured out. We don't need to reinvent that wheel.

> It needs to work with mouse and touch and a screenreader.

In my experience, older applications are far more screenreader friendly than new applications. Moreover, not all visually impaired people are so visually impaired as to require screenreaders, and the more skeuomorphic designs that were favored in the '90s and 2000s were far easier for them to use than today's flat designs where one can't tell what is and is not a button. Heck, even I get confused sometimes on Android UIs and don't notice what is a plain text label and what is an element that I can interact with. I can only think that it's far worse for people who have sensory and cognitive deficits.

As for "it needs to work with a mouse and touch", my answer is once again, "No it does not." Mouse and touch are different enough that trying to handle both in one app is a fool's errand. Mice and trackpads are far more precise than touch, and any interface that attempts to handle both mouse and touch with a single UI ends up being scaled for the lower-precision input (touch), which results in acres of wasted space in the desktop UI.

> The same kind of clear UX standards just don't exist anymore because there are so many different apps that do so many different things, and there's no obvious best answer.

Of course there's no obvious best answer if you're trying to support everything from a smartwatch to a 4k monitor with a single app. So why are you trying to do that? Make separate UIs! Refactor your code into shared libraries and use it from multiple UIs, rather than attempting to make a single mediocre UI for every interface.
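A minimal sketch of what this architecture might look like (all names here are hypothetical, purely for illustration): the application logic lives in one shared, platform-agnostic module, and each platform ships a thin UI shell over it.

```typescript
// Hypothetical shared core: platform-agnostic document logic that
// every UI shell links against, instead of one UI for all devices.
interface TodoItem { id: number; text: string; done: boolean; }

class TodoCore {
  private items: TodoItem[] = [];
  private nextId = 1;

  add(text: string): TodoItem {
    const item = { id: this.nextId++, text, done: false };
    this.items.push(item);
    return item;
  }

  toggle(id: number): void {
    const item = this.items.find(i => i.id === id);
    if (item) item.done = !item.done;
  }

  list(): readonly TodoItem[] { return this.items; }
}

// Each UI is a thin adapter over the same core: a desktop shell
// might render a dense checkbox list, a mobile shell large rows.
function renderDesktop(core: TodoCore): string {
  return core.list().map(i => `[${i.done ? "x" : " "}] ${i.text}`).join("\n");
}

function renderMobile(core: TodoCore): string {
  return core.list().map(i => `${i.done ? "DONE " : ""}${i.text}`).join("\n");
}
```

The point of the split is that the core never imports UI code, so adding a third front-end (say, a tablet layout) means writing only another small render layer.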

> But the good news is that applications do slowly converge on best practices. Think of how things like hamburger menus or swipe-to-refresh or pinch-to-zoom have become expected standards.

The problem is that all of these new "best practices" are far worse, from a usability perspective, than the WIMP (windows, icons, menus, pointer) paradigm that preceded them. Swipe to refresh is much less discoverable than a refresh button, and much more difficult to invoke with a mouse. Pinch-to-zoom is impossible to invoke with a mouse. Hamburger menus are far more difficult to navigate than a traditional menu bar.

When today's best practices are worse than yesterday's best practices, I think it is fair to say that applications are getting worse.

100% agree with your comment. One caveat though:

Mouse and touch seem like completely different things, but they're more similar when you consider pen/stylus input. 2-in-1 devices running a proper desktop-grade OS[0] are amazing devices, and one thing they're missing is properly designed apps, which are few and far between. The 2-in-1 made me actually appreciate the ribbon a bit more - though an overall regression in UX, it shines with touch/pen devices, which I'm guessing was MS's intention all along[1]. 2-in-1s with pen are really magical things; I use one (a Dell Latitude) as my sidearm, and started to prefer it over my main Linux desktop on the grounds of convenience and versatility.

The best pen-oriented apps actually allow you to use keyboard + finger touch + pen simultaneously. You use the pen for precise input (e.g. drawing, scaling, selecting), fingers for imprecise input (e.g. panning/rotating/scaling, manipulating support tools like rulers) and the keyboard for function selection (e.g. picking the tool you'll use with the stylus).
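As a rough sketch of how an app can route these inputs (the role assignments below are illustrative, not from any particular app): on the web, `PointerEvent.pointerType` reports whether an event came from a pen, a finger, or a mouse, so a single handler can give each device a different job.

```typescript
// Illustrative sketch: assign a role per input device, in the spirit of
// "pen = precise work, finger = canvas manipulation". In a browser the
// discriminator would be PointerEvent.pointerType: "pen" | "touch" | "mouse".
type PointerType = "pen" | "touch" | "mouse";
type Role = "draw" | "pan";

function roleFor(pointerType: PointerType): Role {
  switch (pointerType) {
    case "pen":   return "draw"; // precise: drawing, selecting, scaling
    case "touch": return "pan";  // imprecise: panning, rotating the canvas
    case "mouse": return "draw"; // a mouse is precise too
  }
}
```

In a browser this would be called from a `pointerdown` listener, with keyboard shortcuts handled separately for tool selection.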

--

[0] - Read: MS Surface and its clones.

[1] - For instance, Windows Explorer would be near-unusable as a touch app without a pen, if not for the ribbon that makes necessary functions very convenient to access using finger touch.

  • I agree with your caveat, but reply with a caveat of my own. The key difference between a mouse and pen/touch is the ability to hover. With a mouse, I can put the cursor over a UI element without "clicking" or otherwise interacting with it. That's difficult to do with a pen and impossible to do with touch. The key use case that hover enables is the ability to preview changes by hovering over a UI control and confirming changes by clicking. A pen/touch UI would have to handle that interaction differently.
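The hover-to-preview, click-to-confirm interaction described above can be sketched as a small state machine (a hypothetical illustration, not any specific toolkit's API); a touch UI would have to replace the hover/unhover events with something else, such as an explicit preview mode.

```typescript
// Hypothetical sketch of hover-preview / click-commit, e.g. a font
// picker: hovering an option previews it in the document, clicking
// commits it, and leaving without clicking reverts to the committed value.
class PreviewControl<T> {
  private committed: T;
  private previewing: T | null = null;

  constructor(initial: T) { this.committed = initial; }

  hover(value: T): void { this.previewing = value; }   // pointer enters option
  unhover(): void { this.previewing = null; }          // pointer leaves: revert
  click(value: T): void {                              // commit the choice
    this.committed = value;
    this.previewing = null;
  }

  // What the document should currently display.
  current(): T { return this.previewing ?? this.committed; }
}
```

The `hover`/`unhover` pair is exactly the piece that has no direct equivalent on a touchscreen, which is the interaction-design gap the comment is pointing at.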

    • Thank you for your caveat to my caveat, and let me add a caveat to your caveat to my caveat: while you're spot on about hover being an important differentiator, it's not in any way difficult with a pen. It works very well in practice. On Windows, even with old/pen-oblivious applications, it works just like moving the mouse - you gain access to tooltips and it reveals interactive elements of the UI. That's another reason I prefer pens over fingers.

      (Tooltips usually show when you hold the mouse pointer stationary over a UI element. With a pen, that's somewhat harder to do unless you're in a position that stabilizes your forearm, but there's an alternative trick: you keep the pen a little further from the screen than usual and, once over the element you want the tooltip for, you pull the pen back a little, so that it goes out of hover detection range. It's simpler than it sounds, and it's something you stop thinking about once you get used to it.)


> Why does every application need to be "cloud connected"? What's wrong with having a normal desktop application that saves files to the filesystem like every application did for thirty-odd years? ... Users themselves are not asking for cloud connectivity.

Of course they absolutely are. I keep literally all my documents in the cloud. I'm constantly editing my documents from different devices -- my phone, my laptop, my tablet. Users like myself are absolutely asking for cloud connectivity. I simply won't use an app if it doesn't have it. Your argument makes as much sense as "why does every skyscraper have to have elevators? Users aren't asking for anything more than stairs!"

> Mouse and touch are different enough that trying to handle both in one app is a fool's errand.

Except you don't have a choice. Many apps these days are webapps, and absolutely require both interfaces to work. Many laptops also support both. That's just how it is.

> The problem is that all of these new "best practices" are far worse, from a usability perspective, than the WIMP (windows, icons, menus, pointer) paradigm that preceded them... When today's best practices are worse than yesterday's best practices, I think it is fair to say that applications are getting worse.

Except WIMP doesn't work on mobile. So it's an apples-to-oranges comparison.

  • > Of course they absolutely are. I keep literally all my documents in the cloud. I'm constantly editing my documents from different devices -- my phone, my laptop, my tablet. Users like myself are absolutely asking for cloud connectivity.

    If by "cloud" you mean a filesystem-like abstraction that's synchronized across multiple systems (e.g. Dropbox or OneDrive), I have no objection to that. Heck, I even called out Dropbox as a viable alternative to "cloud connectivity". What I am objecting to is the tendency that many apps (especially mobile apps) have of locking your data away in their cloud, making it impossible to get at your data, back it up, or share it with a different application.

    > Many apps these days are webapps, and absolutely require both interfaces to work.

    That's a non sequitur. It's entirely possible to detect the size and capabilities of the device the user is using and display a UI that's appropriate to that device. What I'm militating against is the lazy approach of designing the UI for mobile first and then using CSS media queries to scale it up to fit a desktop viewport. That results in acres of wasted space and a poor user experience, because the user doesn't have the same interaction expectations that they would have if they were using the UI on a mobile/touch device.
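For instance (a sketch under the assumption of a modern browser, not a full implementation), the CSS media features `pointer` and `hover` are exposed to scripts via `window.matchMedia`, so a webapp can pick a layout from actual device capabilities rather than from viewport width alone. The decision logic here is a pure function so it can run without a browser.

```typescript
// Sketch: choose a UI density from device capabilities instead of
// scaling one mobile layout up with width-based media queries.
type Layout = "desktop-dense" | "touch-spacious";

function pickLayout(coarsePointer: boolean, canHover: boolean): Layout {
  // Coarse pointer and no hover => a touch device: big targets, spacing.
  // Fine pointer with hover => mouse/trackpad: dense desktop layout.
  return coarsePointer && !canHover ? "touch-spacious" : "desktop-dense";
}

// In a browser, the flags would come from real media features:
//   const layout = pickLayout(
//     window.matchMedia("(pointer: coarse)").matches,
//     window.matchMedia("(hover: hover)").matches,
//   );
```

A hybrid laptop with a touchscreen reports a fine pointer while a mouse is attached, so the check can be re-run on input changes rather than decided once at load.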

    > Except WIMP doesn't work on mobile.

    And mobile UIs don't work on desktop. Trying to make a one-size-fits-all UI is a fool's errand. It's much better to design each UI for the platform it will be displayed on (laptop, tablet, phone, smartwatch, etc.) than to try to scale a single UI across multiple devices.