Comment by DebtDeflation

2 years ago

> Not sure I would lump command line interfaces from circa 1964 with GUIs from 1984 through to the present, all in a single "paradigm". That seems like a stretch.

Agreed.

Also, Uber (and many other mobile apps) wouldn't work as a CLI or desktop GUI, so leaving out mobile is another stretch.

  • That seems like a technology-centered view. Nielsen is speaking from the field of Human-Computer Interaction, where he is a pioneer and which approaches interfaces from the standpoint of human cognition. In terms of the logic of UI mechanics, what about mobile is different? Sure, gestures and touch bring a kind of difference, but from the standpoint of cognition, desktop and mobile UIs have fundamentally the same dynamics. Command-line UIs make you remember commands by heart; GUIs let you select from a set of options offered to you, but they still do not understand your intention. AI changes the paradigm because it is ostensibly able to understand intent, so there is no deterministic selection from available commands; instead, the interaction is closer to collaboration.

    • Good CLIs don't make users remember commands by heart, except at a very basic level. I often joke that the average Linux user only really needs three keys on their keyboard: Up, Enter and Tab. (Not strictly true, since sometimes you press ctrl-R, but that's a substitute for pressing Up a bunch of times.) Tab completion on many CLIs is good enough that I'm often frustrated when the tab key isn't the 'do what I'm thinking' button. And whenever browsers change their predictive text algorithms so that I need to type more than three letters of a URL before it completes, I get annoyed, because I'm so used to the predictor knowing what I want. And I get the feeling that if Google doesn't autocomplete your query long before you've finished writing it, it's because you're not going to get any results for it anyway.

      The implementation may be different, but expecting a computer to know what I want based on my own or similar people's past behaviour, rather than telling it exactly, has been the norm for quite some time. Some of this is from humans using their experience to implement rules, and some of it is actually ML that predates the current LLM trend.
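      To make that concrete, here's a minimal sketch of the frequency-ranked prefix matching that history-based completion boils down to (hypothetical Python, not any browser's actual algorithm):

      ```python
      from collections import Counter

      class PrefixPredictor:
          """Suggest the historically most-used entry matching a prefix."""

          def __init__(self):
              self.history = Counter()

          def record(self, entry):
              # Every use of an entry makes it a stronger future suggestion.
              self.history[entry] += 1

          def suggest(self, prefix):
              matches = [e for e in self.history if e.startswith(prefix)]
              # Most-visited match wins, the way a URL bar completes two or
              # three typed letters to the site you open every day.
              return max(matches, key=self.history.__getitem__, default=None)

      predictor = PrefixPredictor()
      for url in ["news.ycombinator.com"] * 5 + ["netflix.com"] * 2:
          predictor.record(url)

      print(predictor.suggest("ne"))  # news.ycombinator.com
      ```

      Swap the counter for a model trained over many users' histories and you get the "similar people's past behaviour" version.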

  • It’s still action/response: you have to tap buttons and make choices based on what you see on the screen. The new paradigm would be to tell Uber that you need a ride later, after the party, and have it figure out when and where to pick you up and what address you’ll be going to.
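    To sketch that contrast in code (hypothetical names, not Uber's actual API): today's GUI flow collects every parameter through taps before anything happens, while an intent-based flow would accept the goal and resolve the parameters from context.

    ```python
    from dataclasses import dataclass

    def request_ride(pickup, destination, when):
        # Current paradigm: the user has already supplied every field
        # by tapping through screens before this call is made.
        return f"Ride booked: {pickup} -> {destination} at {when}"

    @dataclass
    class Intent:
        text: str  # e.g. "I need a ride home after the party"

    def fulfill(intent):
        # New paradigm: the system resolves the details itself. A real
        # assistant would infer these from the calendar, location
        # history, and saved places; they are hard-coded here purely
        # for illustration.
        when = "23:30"            # the party's scheduled end time
        pickup = "party venue"    # the user's expected location
        destination = "home"      # a saved place
        return request_ride(pickup, destination, when)

    print(fulfill(Intent("I need a ride home after the party")))
    ```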

  • Why wouldn't apps like Uber work on the desktop?

    • The distinction between a 'mobile' UI and a desktop one has more to do with the difference between a client and an application, which is of course a distinction basically as old as computer networks.