Comment by savolai
2 years ago
That seems like a technology-centered view. Nielsen is speaking from the field of Human-Computer Interaction, where he is a pioneer and which deals with the point of view of human cognition. In terms of the logic of UI mechanics, what about mobile is different? Sure, gestures and touch UIs introduce a kind of difference. Still, from the standpoint of cognition, desktop and mobile UIs have fundamentally the same cognitive dynamics. Command-line UIs make you remember commands by heart; GUIs let you select from a set of options offered to you; but neither understands your intention. AI changes the paradigm because it is ostensibly able to understand intent, so there is no deterministic selection of available commands. Instead, the interaction is closer to collaboration.
Good CLIs don't make users remember commands by heart, except at a very basic level. I often joke that the average Linux user only really needs three keys on their keyboard: Up, Enter, and Tab. (Not strictly true, since sometimes you press Ctrl-R, but that's a substitute for pressing Up a bunch of times.) Tab completion on many CLIs is good enough that I'm often frustrated when the Tab key isn't the 'do what I'm thinking' button. And whenever browsers change their predictive text algorithms so that I need to type more than three letters of a URL for it to complete, I get annoyed, because I'm so used to the predictor knowing what I want. And I get the feeling that if Google doesn't autocomplete your query long before you've finished writing it, it's because you're not going to get any results for it anyway.
The implementation may be different, but expecting a computer to know what I want based on my own or similar people's past behaviour, rather than my telling it exactly, has been the norm for quite some time. Some of that prediction comes from humans using their experience to implement rules, and some of it is actual ML that predates the current LLM trend.
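To make the "rules" half of that concrete, here is a minimal sketch of frequency-ranked prefix completion over past entries, the kind of predict-from-my-history behaviour shell completion and URL bars are built on. The class name and the sample history are hypothetical, purely for illustration; real shells and browsers layer more signals (recency, context, fuzzy matching) on top.

```python
# Minimal sketch: rank past entries by how often they were used, then
# complete a typed prefix with the most frequent matches. This is the
# hand-written-rule flavour of prediction, not ML. Names and sample
# data below are hypothetical.
from collections import Counter

class PrefixPredictor:
    def __init__(self, history: list[str]) -> None:
        # Count how often each full entry (command, URL, ...) was used.
        self.counts = Counter(history)

    def complete(self, prefix: str, n: int = 3) -> list[str]:
        # Return the n most frequent past entries starting with the
        # prefix, so a few typed letters usually land on the right one.
        matches = [e for e in self.counts if e.startswith(prefix)]
        return sorted(matches, key=lambda e: -self.counts[e])[:n]

history = [
    "git status", "git status", "git push", "grep -r TODO .",
    "news.ycombinator.com", "news.ycombinator.com", "netflix.com",
]
predictor = PrefixPredictor(history)
print(predictor.complete("git"))  # ['git status', 'git push']
print(predictor.complete("ne"))   # ['news.ycombinator.com', 'netflix.com']
```

Three letters in, and the predictor already "knows what I want", with nothing smarter than counting behind it.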