
Comment by krm01

2 years ago

The article fails to grasp the essence of what UI is actually about. I agree that AI is adding a new layer to UI and UX design. In our work [1] we have seen an increase in AI projects or features over the last 12 months (for obvious reasons).

However, the way that AI will contribute to better UI is by removing parts of the interface, not simply giving it a new form.

Let me explain: the ultimate UI is no UI. In a perfect scenario, you think about something (want pizza) and you have it (eating pizza) as instantly as you desire.

Obviously this isn't possible, so the goal of interface design is to find the fewest things needed to get you from point A to the desired destination as quickly as possible.

Now, with AI, you can start to add a layer of predictive interfaces, where AI removes steps that would normally require the user to do something.
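To make the predictive part concrete, here's a minimal sketch in TypeScript (my own illustration, not from the article; the address example, the names, and the 0.8 confidence threshold are all hypothetical). The idea is that a confident prediction becomes an editable default instead of a question, and the question only comes back when the prediction is weak:

    // Hypothetical sketch of "removing a step": suggest the most likely
    // delivery address instead of asking for it on every order.
    interface Prediction<T> {
      value: T;
      confidence: number; // 0..1, how sure the predictor is
    }

    // Stand-in for any ranking/ML model; here it just picks the address
    // used most often in past orders.
    function predictAddress(history: string[]): Prediction<string> {
      const counts = new Map<string, number>();
      for (const addr of history) {
        counts.set(addr, (counts.get(addr) ?? 0) + 1);
      }
      let best = "";
      let bestCount = 0;
      for (const [addr, count] of counts) {
        if (count > bestCount) {
          best = addr;
          bestCount = count;
        }
      }
      return {
        value: best,
        confidence: history.length ? bestCount / history.length : 0,
      };
    }

    // The step disappears when the prediction is confident enough;
    // otherwise the UI still asks, so the user never loses the choice.
    function resolveAddress(
      history: string[],
      askUser: () => string,
      threshold = 0.8
    ): string {
      const guess = predictAddress(history);
      return guess.confidence >= threshold ? guess.value : askUser();
    }

The subtraction is the point: the fallback keeps the user in control, so removing the step never removes the choice.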

If you want to design better products with AI, you have to remember that product design is about subtracting things, not adding them. AI is a technology that can help with that.

[1] https://fairpixels.pro

> the goal of interface design is to find the fewest things needed to get you from point A to the desired destination as quickly as possible.

That shouldn't be the primary goal of user interfaces, in my opinion. The primary goal should be to allow users to interface with the machine in a way that allows maximal understanding with minimal cognitive load.

I understand a lot of UI design these days prioritizes the sort of "efficiency" you're talking about, but I think that's one of the reasons why modern UIs tend to be fairly bad.

Efficiency is important, of course! But (depending on what tool the UI is attached to) it shouldn't be the primary goal.

  • > I understand a lot of UI design these days prioritizes the sort of "efficiency" you're talking about, but I think that's one of the reasons why modern UIs tend to be fairly bad.

    IMO, the main problem is that this "efficiency" usually involves making assumptions that can't be altered, which achieves "efficiency" by eliminating choices normally available to the user. This is rarely done for the benefit of the user - rather, it reduces the UI dev work and, more importantly, lets the vendor lock in the option that's beneficial to them.

    In fact, I've been present in UI design discussions for a certain SaaS product, and I quickly realized that one of the main goals for that UI was to funnel users toward a very specific workflow. To be fair, that workflow reduced the potential for users to input wrong data or screw up the calculations, but more importantly, it put them on a very narrow path optimized to give impressive results, even at the expense of accuracy - and it neatly reduced the amount of total UI and technical work, without making it obvious that the "golden path" is the only path.

    It's one of those products I believe would deliver much greater value to users if it were released as an Excel spreadsheet. In fact, it was actually competing with an Excel plugin - and all the nice web UI did was make things seem simpler, by dropping almost all useful functionality except what happened to align with the story the sales folks were telling.

    • > In fact, I've been present in UI design discussions for a certain SaaS product

      That makes sense. A SaaS-type offering is fundamentally different from selling a product. SaaS companies are incentivized to manipulate their customers. For them, the UI is more a sales tool than a user interface.

  • > The primary goal should be to allow users to interface with the machine in a way that allows maximal understanding with minimal cognitive load.

    If you use your phone, is your primary goal to interface with it in a way that allows maximal understanding with minimal cognitive load?

    I'm pretty sure that's not the case. You go read the news, send a message to a loved one, etc. There's a human need that you're aiming to fulfill. Interfacing with tech is not the underlying desire. It's what happens on the surface, as a means.

    • > If you use your phone, is your primary goal to interface with it in a way that allows maximal understanding with minimal cognitive load?

      Yes, absolutely. That's what makes user interfaces "disappear".

      > Interfacing with tech is not the underlying desire.

      Exactly. That's why it's more important that a UI present a minimal cognitive load than that it take the fewest steps to do a thing.


> Let me explain: the ultimate UI is no UI. In a perfect scenario, you think about something (want pizza) and you have it (eating pizza) as instantly as you desire.

That doesn't solve for discovery. For instance, order the pizza from where? What kinds of pizza are available? I'm kinda in the mood for pizza, but not dead set on it, so I'm curious about other cuisines too. Etc.

I hate to appeal to authority, but I am fairly sure that Jakob Nielsen grasps the essence of what UI is actually about.

It seems rather obvious to me that when Nielsen talks about AI enabling users to express intent, that naturally lends itself to removing steps that were there only due to the nature of the old UI paradigm. I'm not sure what new essence you're proposing. "The best UI is no UI" is a well-known truism in HCI/Human-Centered Design.

Having no UI sounds horrible. I don't want every random desire I have to be satisfied immediately. I'd rather have what I need available at the appropriate time and in a reasonable quantity, and have the parameters of that system be easy to adjust. So instead of "want pizza = have pizza," it would be: a healthy meal I enjoy shows up predictably at the time I should eat, and both the meal and the time are configurable so I can change them when I'm planning my diet.

You can't eliminate the UI if you want to be able to do more than one thing (e.g., order a pizza).

The UI should simply let you easily do what needs to be done.

Sometimes I wonder if the edges of articulated desire may always be essentially binary / quantitative, meaning that slow yes/nos are in fact the best way for us to grapple with them, and that systems which give us a set of these yes/no buttons are a reflection of ourselves, not a requirement of the machine. So long as we are builders, I think we'll have buttons, even in transhumanist cyberspace perhaps. Still waiting on peer review for that one, though.