Comment by piker

1 day ago

Rust GUI is in a tough spot right now with critical dependencies under-staffed and lots of projects half implemented. I think the advent of LLMs has been timed perfectly to set the ecosystem back for a few more years. I wrote about it, and how it affected our development yesterday: https://tritium.legal/blog/desktop

Interesting read; however, as someone from the same age group as Casey Muratori, this does not make much sense to me.

> The "immediate mode" GUI was conceived by Casey Muratori in a talk over 20 years ago.

He may have made it known to people not old enough to have lived through the old days; however, this is how we used to program GUIs on 8- and 16-bit home computers, and it has always been a thing on game consoles.

  • I think this is the source of the confusion:

    > To describe it, I coined the term “Single-path Immediate Mode Graphical User Interface,” borrowing the “immediate mode” term from graphics programming to illustrate the difference in API design from traditional GUI toolkits.

    https://caseymuratori.com/blog_0001

    Obviously it’s ludicrous to attribute “immediate mode” to him. As you say, it’s literally decades older than that. But it seems like he used immediate mode to build a GUI library and now everybody seems to think he invented immediate mode?

    • Is Win16 / Win32 GDI which goes back to 1985 an immediate mode GUI?

      Win32 GUI common controls are a pretty thin layer over GDI and you can always take over WM_PAINT and do whatever you like.

      If you make your own control you must handle WM_PAINT, which seems pretty immediate to me.

      https://learn.microsoft.com/en-us/windows/win32/learnwin32/y...

      The difference between a game engine and, say, GDI is just the window buffer invalidation: WM_PAINT is not called for every frame, only when Windows thinks the window's rectangle has changed and needs to be redrawn, independently of the screen refresh rate.

      I guess I think of retained vs immediate at the graphics library / driver level, because that allows the GPU to take over more, store the objects in VRAM and redraw them. At the GUI level that's just user-space abstractions over the rendering engine, but the line is blurry.
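      The invalidation model described above can be sketched in plain Rust. This is a hedged, hypothetical illustration (none of these types are Win32's): paint work happens only when some region has been marked dirty, not once per frame.

      ```rust
      // Sketch of damage-based repainting, the idea behind WM_PAINT /
      // InvalidateRect. All types and fields here are hypothetical, not Win32.

      #[derive(Default)]
      struct Window {
          // Invalidated region as (x, y, w, h); real systems union overlapping
          // rects, we just keep the most recent one for brevity.
          dirty: Option<(i32, i32, i32, i32)>,
          paints: usize,
      }

      impl Window {
          fn invalidate(&mut self, rect: (i32, i32, i32, i32)) {
              self.dirty = Some(rect);
          }

          // One pass through the message loop: paint only if something is dirty.
          fn pump(&mut self) {
              if let Some(_rect) = self.dirty.take() {
                  // "Handle WM_PAINT": redraw just the dirty region.
                  self.paints += 1;
              }
          }
      }

      fn main() {
          let mut w = Window::default();
          w.pump();
          w.pump(); // nothing invalidated: no paints, unlike a per-frame game loop
          w.invalidate((0, 0, 100, 20));
          w.pump(); // exactly one repaint, covering only the dirty rect
          println!("paints: {}", w.paints); // paints: 1
      }
      ```

      A game engine, by contrast, effectively invalidates the whole window every frame.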

      4 replies →

  • It's like the common claim that data-oriented programming came out of game development. It's ahistorical, but a common belief. People can't see past their heroes (Casey Muratori, Jonathan Blow) or the past decade or two of work.

    • I partly agree, but I think you're overcorrecting. Game developers didn't invent data-oriented design or performance-first thinking. But there's a reason the loudest voices advocating for them in the 2020s come from games: we work in one of the few domains where you literally cannot ship if you ignore cache lines and data layout. Our users notice a 5 ms frame hitch, while web developers can add another React wrapper and still ship.

      Computing left game development behind. Whilst the rest of the industry built shared abstractions, we worked in isolation with closed tooling. We stayed close to the metal because there was nothing else.

      When Casey and Jon advocate for these principles, they're reintroducing ideas the broader industry genuinely forgot, because for two decades those ideas weren't economically necessary elsewhere. We didn't preserve sacred knowledge. We just never had the luxury of forgetting performance mattered, whilst the rest of computing spent 20 years learning it didn't.
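      To make the cache-line point concrete, here is a hedged Rust sketch of data-oriented layout (all names hypothetical): a struct-of-arrays layout keeps the fields a hot loop touches contiguous in memory, so a physics pass never drags cold data like health through the cache.

      ```rust
      // Hypothetical illustration of data-oriented design: struct-of-arrays
      // (SoA) instead of an array of interleaved entity structs. A pass that
      // only needs pos/vel streams through two tightly packed arrays.

      #[derive(Default)]
      struct EntitiesSoa {
          pos: Vec<[f32; 3]>,
          vel: Vec<[f32; 3]>,
          health: Vec<f32>, // untouched (and uncached) by the physics pass below
      }

      impl EntitiesSoa {
          fn integrate(&mut self, dt: f32) {
              for (p, v) in self.pos.iter_mut().zip(&self.vel) {
                  for i in 0..3 {
                      p[i] += v[i] * dt;
                  }
              }
          }
      }

      fn main() {
          let mut e = EntitiesSoa::default();
          e.pos.push([0.0; 3]);
          e.vel.push([2.0, 0.0, 0.0]);
          e.health.push(100.0);
          e.integrate(0.5);
          println!("{:?}", e.pos[0]); // [1.0, 0.0, 0.0]
      }
      ```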

      5 replies →

    • It clearly didn’t come out of game dev. Many people doing high-performance work on either embedded or “big silicon” (amd64) in that era were fully aware of the importance of locality, branch prediction, etc.

      But game dev, in particular Mike Acton, did an amazing job of making it more broadly known. His CppCon talk from 2014 [0] is IMO one of the most digestible ways to start thinking about performance in high throughput systems.

      In terms of heroes, I’d place Mike Acton, Fabian Giesen [1], and Bruce Dawson [2] at the top of the list. All solid performance-oriented people who’ve taken real time to explain how they think and how you can think that way as well.

      I miss being able to listen in on gamedev Twitter circa 2013 before all hell broke loose.

      [0] https://youtu.be/rX0ItVEVjHc?si=v8QJfAl9dPjeL6BI

      [1] https://fgiesen.wordpress.com/

      [2] https://randomascii.wordpress.com/

  • There are also good reasons that immediate mode GUIs are largely only ever used by games: they are absolutely terrible for regular UI needs. Since Rust gaming is still largely non-existent, it's hardly surprising that things like 'egui' are similarly struggling. That isn't (or shouldn't be) any reflection on whether Rust GUIs as a whole are struggling.

    Unless the Rust ecosystem made the easily predicted terrible choice of rallying behind immediate mode GUIs for generic UIs...

    • >Unless the Rust ecosystem made the easily predicted terrible choice of rallying behind immediate mode GUIs for generic UIs...

      That's exactly what they did :D

      2 replies →

  • I mean, fair enough, but [at least] wikipedia agrees with that take.

    > Graphical user interfaces traditionally use retained mode-style API design,[2][5] but immediate mode GUIs instead use an immediate mode-style API design, in which user code directly specifies the GUI elements to draw in the user input loop. For example, rather than having a CreateButton() function that a user would call once to instantiate a button, an immediate-mode GUI API may have a DoButton() function which should be called whenever the button should be on screen.[6][5] The technique was developed by Casey Muratori in 2002.[6][5] Prominent implementations include Omar Cornut's Dear ImGui[7] in C++, Nic Barker's Clay[8][9] in C and Micha Mettke's Nuklear[10] in C.

    https://en.wikipedia.org/wiki/Immediate_mode_(computer_graph...

    [Edit: I'll add an update to the post to note that Casey Muratori simply “coined the term” but that it predates his video.]
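    The CreateButton()/DoButton() contrast in the quoted passage can be sketched as follows. This is a hedged, minimal illustration with hypothetical types, not any real toolkit's API:

    ```rust
    // Retained mode: create the widget once; the toolkit owns it afterwards.
    #[derive(Default)]
    struct RetainedUi {
        buttons: Vec<String>,
    }

    impl RetainedUi {
        fn create_button(&mut self, label: &str) -> usize {
            self.buttons.push(label.to_string());
            self.buttons.len() - 1 // handle used to talk to the widget later
        }
    }

    // Immediate mode: "do" the button every frame; it exists only while called.
    struct ImmediateUi {
        hot: Option<String>, // label the mouse is over this frame
        mouse_down: bool,
    }

    impl ImmediateUi {
        // Lays out and draws the button for this frame (elided here) and
        // reports whether it was clicked, all in one call.
        fn do_button(&mut self, label: &str) -> bool {
            self.hot.as_deref() == Some(label) && self.mouse_down
        }
    }

    fn main() {
        let mut retained = RetainedUi::default();
        let _ok_handle = retained.create_button("OK"); // called once, up front

        let mut ui = ImmediateUi { hot: Some("OK".into()), mouse_down: true };
        // Called once per frame, inside the input loop:
        if ui.do_button("OK") {
            println!("OK clicked");
        }
    }
    ```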

    • Dig out any source code for Atari, Spectrum or Commodore 64 games, written in Assembly, or early PC games, for example.

      And you will see which information is more accurate.

      1 reply →

    • I am pretty sure there are people here qualified enough to edit that Wikipedia page in a proper way.

  • > Maybe he might have made it known to people

    Yes, he coined the term rather than invent the technique

    • I won't be bothered to go hunting for digital copies of 1980's game development books, but I have my doubts on that.

Your recent post resonated with me deeply; as someone heavily invested in the Rust GUI ecosystem, I've fallen into this same conundrum. I think ultimately the Rust GUI ecosystem is still not mature and, as a consequence, we have to make big concessions when picking a framework.

I also came to a similar conclusion when building out a fairly large GUI application using egui. While egui solves the "draw widgets" part of building out the application, inevitably I had to restructure my app entirely with a new architecture to make it maintainable. In many places the "immediate" nature of the GUI mutably editing the state was no longer an advantage. Not to mention that UI code I wrote 6 months ago became difficult to read, especially if there was advanced layout happening.

Ultimately I've boiled my choices down to:

- egui for practicality but you pay the price in architecture + styling

- iced for a nice architecture but you have to roll all your own widgets

- slint, maybe one day, once they make text rendering a higher priority; but even then the architecture side is not solved for you either

- tauri/dioxus/electron if you're not a purist like me

- Rewind 20 years and use Qt/WPF/etc.

  • If your main gripe about the Rust GUI ecosystem is that it's not mature then rewinding 20 years and using Qt/WPF/etc sounds like an excellent alternative. Old and mature versus modern and immature.

In my experience immediate mode guis almost always ignore internationalization and accessibility.

The thing you get by using an OS widget and putting a string in it is that the OS can interact with the string. It can read it out loud, translate it, fill it in with a password, look it up in a dictionary, edit it right-to-left, handle input method editors whose hotkeys conflict with the app doing its own editing, etc…

There’s a reason why the most popular ImGUIs are targeted at game dev tools and in-game dev UIs rather than end-user UIs.

You could potentially make an immediate mode GUI that wrapped a retained GUI; arguably that is what React is. From the programmer's POV it's supposed to look like imgui code all the way down. But it runs into the issue of having to keep two representations in sync: the UI represented by React and the actual widgets (HTML or native). That's where all its complications come from.
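That wrapping idea can be sketched in Rust (hypothetical types, loosely analogous to React's reconciliation, not any real library's API): user code declares widgets immediately every frame, and the wrapper diffs the declaration against the retained widgets it keeps alive.

```rust
// Hedged sketch: an immediate-style API over a retained widget list.

#[derive(Debug, PartialEq)]
enum Widget {
    Label(String),
    Button(String),
}

#[derive(Default)]
struct Ui {
    retained: Vec<Widget>, // the "real" widgets (think: DOM nodes)
    frame: Vec<Widget>,    // what user code declared this frame
}

impl Ui {
    fn label(&mut self, text: &str) {
        self.frame.push(Widget::Label(text.to_string()));
    }

    fn button(&mut self, text: &str) {
        self.frame.push(Widget::Button(text.to_string()));
    }

    // End of frame: reconcile, returning how many real widgets had to change.
    fn end_frame(&mut self) -> usize {
        let mut changes = 0;
        for (i, w) in self.frame.iter().enumerate() {
            if self.retained.get(i) != Some(w) {
                changes += 1; // create or update the real widget here
            }
        }
        // Widgets declared last frame but not this one get destroyed.
        changes += self.retained.len().saturating_sub(self.frame.len());
        self.retained = std::mem::take(&mut self.frame);
        changes
    }
}

fn main() {
    let mut ui = Ui::default();
    ui.label("Hello");
    ui.button("OK");
    println!("frame 1: {} changes", ui.end_frame()); // 2: both created

    ui.label("Hello");
    ui.button("OK");
    println!("frame 2: {} changes", ui.end_frame()); // 0: tree unchanged
}
```

The diff in end_frame() is exactly the "keep two representations in sync" work; in a real wrapper it is where most of the bugs and complexity live.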

  • Yes, one argument that I didn't make in the post, but that does favor immediate mode, is that you can fairly straightforwardly convert from an immediate mode GUI to retained mode by introducing your own abstractions. In some sense this makes you more disciplined about the FPS, which could be a net win overall.

    [Note that Tritium at least is translated into a number of different languages. That part isn't that hard.]

> Rust GUI is in a tough spot right now with critical dependencies under-staffed and lots of projects half implemented.

Down the stack, low-level 3D acceleration is in a rough spot too unfortunately. The canonical Rust Vulkan wrapper (Ash) hasn't cut a release for nearly two years, and even git main is far behind the latest spec updates.

This is why I'm using LLMs to help me hand code the GUI for my Rust app in SDL2. I'm hoping that minimizing the low-level, drawing-specific code and maximizing the abstractions in Rust will allow me to easily switch to a better GUI library if one arises. Meanwhile, SDL is not half bad.

Honestly I think all native GUI is in a tough spot right now. The desktop market has matured so there aren't any large companies willing to put a ton of money into new fully featured GUI libraries. What corporate investment we do see into new technologies (Electron, SwiftUI, React Native) is mainly to allow developers to reuse work from other platforms like web and mobile in order to cut costs on desktop development. Without that corporate investment I don't think we'll ever see any new native GUI libraries become as fully featured as Win32 or Qt Widgets.

  • I 100% agree on pretty much everything. The "webapp masquerading as a native app" is a huge problem, and IMO, at least partially because of a failure of native-language tooling (everything from UI frameworks to build tools --- as the latter greatly affect ease of use of libraries, which, in turn, affects popularity with new developers).

    To be honest, I've been (slowly) working towards my own native GUI library, in C. It's a big undertaking, but one saving grace is that --- at least on my part --- I don't need the full featureset of Qt or similar.

    My plan for the portability issue is to flip the script --- make it a native library that can compile to the web (using actual DOM/HTML elements there, not canvas/WebGL/WGPU). And on Android/iOS/etc, I can already do native anyway.

    Though I should add that a native look is not a goal in my case (quite a few libraries already go for that, go use those! --- and some, like Windows, don't really have a native look), which also means that I don't have to use native widgets on e.g. Android. The main reason for using DOM on the web is to be able to provide for a more "web-like" experience, to get e.g. text selection working properly, as well as IME, easier debuggability, and accessibility (an explicit goal, though not a short-term one --- in part due to a lack of testers). Though it wouldn't be too much of a stretch to allow either canvas or DOM on the web at that point --- by treating the web the same as a native platform in terms of displaying the widgets.

    It's more about native performance, low memory use, and easy integration without a scripting engine in between --- with a decent API.

    I am a bit on the fence between an immediate-mode vs retained-mode API. I'll probably do a semi-hybrid, where it's immediate-y but with a way to explicitly provide "keys" (kind of like Flutter, I think?).

  • I believe there's never been a better time for cross-platform desktop GUI. The Vulkan API works on Windows, Android, and Linux. Even the web has Vulkan support. The only outlier is Apple.

    On Linux, Wayland provides better drawing surface and input API.

    The only missing piece is a high-level GUI and a Vulkan/Metal compatibility layer. Along with the ancient issue of packaging, of course.

Open source GUI development is perpetually cursed by underestimating the difficulty of the problem.

A mature high-quality GUI with support for all the features of a modern desktop UI, accessibility, support for all the display variations you encounter in the wild, high quality rendering, high performance, low overhead, etc. is a development task on par with creating a mature game engine like Unity.

Nearly all open source GUI projects get 80% of the way there and stall, not realizing that they are only 20% of the way there.

  • You're right, and I think that's because the core functionality of a UI lib is not too difficult. I've tinkered in that space myself, and it's a fun side project.

    Then you start to think about full Unicode support, right-to-left rendering, and so on. Then you start to think about properly implementing accessibility features. The necessary work increases by an order of magnitude. And it's not fun work. So you stall out with a bare-bones implementation.

> We ignore for these purposes Zed's GPUI which the Zed team has transparently, and understandably abandoned as an open source endeavour

Do you have a source for this?

  • https://news.ycombinator.com/item?id=47003569

    • Ok, so it is not going closed source; they are just going to extend it as they need to drive Zed features. Totally understandable for an in-house UI framework; this is why you'd build one yourself anyway. I can imagine maintaining backwards compatibility, doing releases, writing documentation and growing a community around it is a considerable distraction from their product work.

  • The Zed team said it themselves. There is a direct quote in the parent thread.

I'd love to read a writeup of the state of Rust GUI and the ecosystem if you could point me at one.

  • https://www.boringcactus.com/2025/04/13/2025-survey-of-rust-...

    I started writing a program that needed to have a table with 1 million rows. This means it needs to be virtualised. Pretty common in GUI libraries. The only Rust GUI library I found that could do this easily was gpui-component (https://github.com/longbridge/gpui-component). It also renders text crisply (rules out egui), looks nice with the default style (rules out GTK, FLTK, etc.), isn't web-based (rules out Dioxus), was pretty easy to use and the developers were very responsive.

    Definitely the best option today (I would say it's probably the first option that I haven't hated in some way). The only other reasonable choices I would say are:

    * egui - doesn't render very nicely and some of the APIs are amateurish, but it's quick and it works. Good option for simple tools.

    * Iced - looks nice and seemed to work fairly well. No virtualised lists though.

    * Slint (though in some ways it is weird and it requires quite a lot of boilerplate setup).

    All the others will cause you pain in some way. I think the "ones to watch" are:

    * Makepad - from the demos I've seen this looks really cool, especially for arty GUI projects like synthesizers and car UIs. However it has basically no documentation so don't bother yet.

    * Xilem - this is an attempt to make a 100% perfect Rust GUI library, which is cool and all but I imagine it also will never be finished.
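    For what it's worth, the core of the row-virtualization logic mentioned above is small and toolkit-agnostic. A hedged sketch, with made-up viewport numbers:

    ```rust
    // Only the rows intersecting the viewport get widgets at all; the rest of
    // the million rows are never instantiated. Numbers in main are made up.

    use std::ops::Range;

    /// Half-open range of row indices visible given a scroll offset,
    /// viewport height and fixed row height.
    fn visible_rows(scroll_y: f32, viewport_h: f32, row_h: f32, total: usize) -> Range<usize> {
        let first = (scroll_y / row_h).floor() as usize;
        let last = ((scroll_y + viewport_h) / row_h).ceil() as usize;
        first.min(total)..last.min(total)
    }

    fn main() {
        // 1,000,000 rows of 20 px each in a 600 px viewport, scrolled to y = 100,000:
        let rows = visible_rows(100_000.0, 600.0, 20.0, 1_000_000);
        // Only 30 rows are laid out and drawn, no matter how long the table is.
        println!("{:?}", rows); // 5000..5030
    }
    ```

    The hard part in practice is variable row heights and integrating this with the toolkit's layout and scrolling, which is why library support matters.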

    • I wouldn't bother watching Makepad. They're in the process of rewriting the entire thing with AI and (it seems to me) destroying any value it has accumulated. And I also suspect Xilem will never be finished.

      Beyond egui/Iced/Slint, I'd say the "ones to watch" are:

      * Freya

      * Floem

      * Vizia

      I think all three of those offer virtualized lists.

      Dioxus Native, the non-webview version of Dioxus, is also nearing readiness.

    • I’m currently writing an application that uses virtual lists in GTK: GtkListView, GtkGridView, there may be others. You ruled out GTK because of its looks I guess, I’m targeting Linux so the looks are perfect.

      3 replies →

    • I believe the latest Iced versions do have a `Lazy` widget wrapper, but that effectively means you need to build your own virtual list on top of it

      1 reply →

    • I've been somewhat involved in a project using Iced this week, seems pretty reasonable. Not sure how tricky it would be to e.g. invent custom widgets though.

I don't feel like having one main library for creating windows is bad; I feel like that way the work gets shared and more collaboration happens.

Really? It seems better than ever to me now that we have gpui-component. That seems to finally open doors to have fully native guis that are polished enough for even commercial release. I haven't seen anything else that I would put in that category, but one choice is a start.

  • The problem is that Zed has understandably and transparently abandoned supporting GPUI as an open source endeavour except to the extent contributions align with its business mission.

    • I remember when that came out, but I'm not sure I understand the concern. They use GPUI, so therefore they MUST keep it working and supportable, even if updating it isn't their current priority. Or are you saying they have a closed source fork now?

      Actually, this story is literally them changing their renderer on linux, so they are maintaining it.

      > except to the extent contributions align with its business mission

      Isn't that every single open source project that is tied to a commercial entity?

      2 replies →

    • They haven't. They are just heads down on other work. It wouldn't make sense for them to abandon it - they have no alternative. What that message was about was supporting _community_ prs and development of gpui.

      Focus ebbs and flows at Zed, they'll be back on it before long.

  • I tried gpui recently and I found it to be very, very immature. Turns out even things like input components aren't in gpui, so if you want to display a dialog box with some text fields, you have to write it from scratch, including cursor, selection, clipboard etc. — Zed has all of that, but it's in their own internal crates.

    Do you know how well gpui-component supports typical use cases like that? Edit boxes, buttons, scroll views, tables, checkbox/radio buttons, context menus, consistent native selection and clipboard support, etc. are table stakes for desktop apps.

    • I do think gpui needs a native input element (enough that I wrote one (https://github.com/zed-industries/zed/pull/43576) just before they stopped reviewing gpui prs) but outside of that I think it is pretty ok and cool that gpui just exports the tools to make whatever components you need.

      I could see more components being shipped first party if the community took over gpui, or if for some crazy reason a team was funded to develop gpui full time, but developing baseline components is an immense amount of work, both to create and maintain.

      Buttons (any div can be a button), clipboard, scroll views (div, list, uniform_list) should all already be in gpui.

Can I humbly ask how LLMs and Rust GUIs are related?

  • They're just straining already-strained resources on the "contributions" side and pushing interest in other directions (e.g. Electron).

  • What’s the point of writing open source if it’s just going to be vacuumed up by the AI companies and regurgitated for $20 a month?