Comment by SaintSeiya

9 days ago

As I see it, the current state of graphics APIs is worse now than in the OpenGL era: despite their promises, none of the modern APIs are easier to use, truly portable, or cross-platform. Having to reinvent OpenGL by creating custom wrappers around Vulkan, Metal, DirectX 12, etc. is as much of a time waster as dropping strings and going back to raw char arrays in the name of performance in every modern language.

What promises were made, by whom? Graphics APIs have never been about ease of use as a first order goal. They've been about getting code and data into GPUs as fast as reasonably possible. DevEx will always play second fiddle to that.

I think WebGPU is a decent wrapper for exposing compute and render in the browser. Not perfect by any means - I've had a few paper cuts working with the API so far - but it's a lot more discoverable and intuitive than I ever found WebGL and OpenGL.
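
To give a sense of what I mean by discoverable, here's roughly what a minimal compute dispatch looks like in the browser API. This is an untested TypeScript sketch; the toy "double every value" shader and the doubleOnGpu helper name are just made up for illustration, but the WebGPU calls themselves are the standard ones:

  // Toy WGSL kernel: double every float in a storage buffer.
  const shaderSource = /* wgsl */ `
    @group(0) @binding(0) var<storage, read_write> data: array<f32>;

    @compute @workgroup_size(64)
    fn main(@builtin(global_invocation_id) id: vec3<u32>) {
      if (id.x < arrayLength(&data)) {
        data[id.x] = data[id.x] * 2.0;
      }
    }
  `;

  async function doubleOnGpu(input: Float32Array): Promise<Float32Array> {
    const adapter = await navigator.gpu.requestAdapter();
    if (!adapter) { throw new Error("WebGPU not available"); }
    const device = await adapter.requestDevice();

    // Storage buffer the shader reads and writes, pre-filled with the input.
    const storage = device.createBuffer({
      size: input.byteLength,
      usage: GPUBufferUsage.STORAGE | GPUBufferUsage.COPY_SRC,
      mappedAtCreation: true,
    });
    new Float32Array(storage.getMappedRange()).set(input);
    storage.unmap();

    // Staging buffer so the CPU can map and read the result back.
    const readback = device.createBuffer({
      size: input.byteLength,
      usage: GPUBufferUsage.COPY_DST | GPUBufferUsage.MAP_READ,
    });

    const pipeline = device.createComputePipeline({
      layout: "auto",
      compute: {
        module: device.createShaderModule({ code: shaderSource }),
        entryPoint: "main",
      },
    });
    const bindGroup = device.createBindGroup({
      layout: pipeline.getBindGroupLayout(0),
      entries: [{ binding: 0, resource: { buffer: storage } }],
    });

    // Record and submit one compute pass, plus a copy for readback.
    const encoder = device.createCommandEncoder();
    const pass = encoder.beginComputePass();
    pass.setPipeline(pipeline);
    pass.setBindGroup(0, bindGroup);
    pass.dispatchWorkgroups(Math.ceil(input.length / 64));
    pass.end();
    encoder.copyBufferToBuffer(storage, 0, readback, 0, input.byteLength);
    device.queue.submit([encoder.finish()]);

    await readback.mapAsync(GPUMapMode.READ);
    return new Float32Array(readback.getMappedRange().slice(0));
  }

Every object hangs off the device and the method names mostly tell you what they do, which is the part I never got from global GL state.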

  • > They've been about getting code and data into GPUs as fast as reasonably possible. DevEx will always play second fiddle to that.

    That's a tiny bit of revisionist history. Each new major D3D version (at least before D3D12) also fixed usability warts compared to the previous version, with D3D11 probably being the most convenient 3D API to use - while also giving excellent performance.

    Metal also definitely has a healthy balance between convenience and low overhead - and more recent Metal versions are an excellent example that a high performance modern 3D API doesn't have to be hard to use, nor require thousands of lines of boilerplate to get a triangle on screen.

    OTOH, OpenGL has been on a steady usability downward trend since the end of the 1990s, and Vulkan unfortunately has continued this trend (but may steer in the right direction in the future):

    https://www.youtube.com/watch?v=NM-SzTHAKGo

    • I hear you but I also don't see a ton of disagreement here either. Like, the fact that D3D12 includes _some_ usability fixes suggests that DevEx really does take a back seat to the primary goal.

      I'm not arguing that DevEx doesn't exist in graphics programming. Just that it's second to dots on screen. I also find WebGPU to be a lot nicer in terms of DevEx than WebGL.

      Wdyt? Still revisionist, or maybe just a slightly different framing of the same pov?

    • > Metal also definitely has a healthy balance between convenience and low overhead - and more recent Metal versions are an excellent example that a high performance modern 3D API doesn't have to be hard to use, nor require thousands of lines of boilerplate to get a triangle on screen.

      Metal 4 has moved a lot in the other direction, and now copies a lot of concepts from Vulkan.

      https://developer.apple.com/documentation/metal/understandin...

      https://developer.apple.com/documentation/metal/resource-syn...

  • "What promises were made, by whom?"

    Technically true, but practically tone deaf.

    WebGPU is both years too late, and just a bit early. Whereas WebGL was OpenGL circa 2005, WebGPU is native graphics circa 2015. It shouldn't need to be said that the bleeding-edge new standard for web graphics shouldn't be both 10 years out of date and awful.

    Vendors are finally starting to deprecate the old binding model as the byzantine machinery that it is. Bindless resources are an absolute necessity for the modern style of rendering with nanite and raytracing.

    Rust's WGPU on native supports some of this, but WebGPU itself doesn't.

    It's only intuitive if you don't realize just how huge the gap is between dispatching a vertex shader to render some triangles, and actually producing a lit, shaded and occlusioned image with PBR, indirect lighting, antialiasing and postfx. Would you like to render high quality lines or points? Sorry, it's not been a priority to make that simple. Better go study up on SDFs and beziers.

    Which, tbh, is the impression I get from WebGPU efforts. Everyone forgets the drivers have been playing pretend for decades, and very few have actually done the homework. Of those that have, most are too enamored with being a l33t gfx coder to realize how terrible the dev exp is.

    • I'm not sure I disagree with you really - and I ack that WebGPU feels like 2015 tech to someone who knows their stuff. I don't have a take on "l33t gfx coder"; I'm a hobbyist, not a professional, and I've enjoyed getting up to speed with WebGPU over and above my experiences with WebGL. Happy to be schooled.

      I've never implemented PBR or raytracing because my interests haven't gone that way. I don't find SDFs to be a particularly difficult concept to "study up on" either, though. It's about as close to math-as-drawing as I've seen, and it doesn't require much more than a couple of triangles and a fragment shader (rough sketch of what I mean at the end of this comment). By contrast, I've been learning about SVT for a couple of months and still haven't quite pieced together a working impl in WebGPU... though I understand there are extensions specifically in support of virtual tiling that WebGPU could pursue in a future version.

      Agreed DevEx broadly isn't great when working on graphics. But WebGPU feels like a considerable improvement rather than a step backward.
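
      To make the SDF point concrete, this is roughly the fragment shader I have in mind for a thick, antialiased line segment: draw one quad that covers the segment and measure the distance analytically. Untested, back-of-the-envelope WGSL wrapped in a TypeScript constant; the hard-coded endpoints and width are placeholders for what would normally come in as uniforms, and the pipeline/vertex setup is omitted:

        // Sketch of an "SDF line": the fragment shader computes the distance
        // from each pixel to a segment and shades a smooth edge around it.
        const lineSdfFragment = /* wgsl */ `
          // Distance from point p to the segment a-b.
          fn sdSegment(p: vec2<f32>, a: vec2<f32>, b: vec2<f32>) -> f32 {
            let pa = p - a;
            let ba = b - a;
            let h = clamp(dot(pa, ba) / dot(ba, ba), 0.0, 1.0);
            return length(pa - ba * h);
          }

          @fragment
          fn fs_main(@location(0) worldPos: vec2<f32>) -> @location(0) vec4<f32> {
            // Placeholder segment and half-width; real code would pass uniforms.
            let a = vec2<f32>(-0.5, 0.0);
            let b = vec2<f32>( 0.5, 0.2);
            let halfWidth = 0.02;

            let d = sdSegment(worldPos, a, b) - halfWidth;
            // A roughly pixel-wide smoothstep across the edge is the antialiasing.
            let alpha = 1.0 - smoothstep(0.0, fwidth(d), d);
            return vec4<f32>(0.1, 0.6, 0.9, alpha);
          }
        `;

      The vertex stage really is just the couple of triangles mentioned above; all the interesting work ends up in that little distance function.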

    • >It's only intuitive if you don't realize just how huge the gap is between dispatching a vertex shader to render some triangles, and actually producing a lit, shaded and occlusioned image with PBR, indirect lighting, antialiasing and postfx. Would you like to render high quality lines or points? Sorry, it's not been a priority to make that simple. Better go study up on SDFs and beziers.

      I think this is a tad unfair. You're basically describing a semi-robust renderer at that point. IMO, to make implementing such a renderer truly "intuitive" (I don't know what this word means to you, so I'm taking it to mean offloading these features to the API itself) would require railroading the developer some, which appears to go against the design of modern graphics APIs.

      I think Unity/Unreal/Godot/Bevy make more sense if you're trying to quickly iterate such features. But even then, you may have to hand write the shader code yourself.

    • As a former l33t gfx coder, my love for Khronos APIs ended with the Longs Peak failure, the endless ways to load extensions, and the realisation of how much better the experience with proprietary APIs happens to be when it is thought out end to end, with a proper SDK, IDE tooling, and graphical debugging.

    • From Steve Wittens, a well-respected graphics hacker and maker of the excellent Use.GPU: https://acko.net/tv/usegpu/ . I'm mostly posting to expand context and sprinkle in a couple of light opinions.

      > Bindless resources are an absolute necessity for the modern style of rendering with nanite and raytracing.

      Yeah, for real. Looking at the November 2024 post "What's next for WebGPU" and HN comments, bindless is pretty high up there! There's a high level field survey & very basic proposal (in the hackmd link), and wgpu seems to be filling in the many gaps and seemingly quite far along in implementation. Not seeing any signs yet that the broader WebGPU implementors/spec folks are involved or following along, but at least wgpu is very cross platform & well regarded.

      https://developer.chrome.com/blog/next-for-webgpu
      https://news.ycombinator.com/item?id=42209272
      https://hackmd.io/PCwnjLyVSqmLfTRSqH0viA
      https://hackmd.io/@cwfitzgerald/wgpu-bindless
      https://github.com/gfx-rs/wgpu/issues/3637
      https://github.com/gpuweb/gpuweb/issues/380

      > Would you like to render high quality lines or points? Sorry, it's not been a priority to make that simple. Better go study up on SDFs and beziers.

      I realize lines and font rendering are insanely complex fields, and that OpenGL offering at least lines, and Vulkan not, sure feels like a slap in the face. The work being done by groups like https://linebender.org/ is intense. Overall, though, that intensity makes me question the logic of trying to include it, and wonder whether fighting to specify something we clearly don't have full mastery over makes sense: even the very best folks are still improving the craft. We could specify an API without specifying an exact implementation, without conformance tests, perhaps, but that feels like a different risk. Maybe having to reach for a library that does the work reflects where we are, and drives the iteration & development we sort of need?

      > actually producing a lit, shaded and occlusioned image with PBR, indirect lighting, antialiasing and postfx

      I admit to envying the ambition to make this simple, to having such great, deep knowledge as Steve, and to thinking such hard things possible.

      I really, really am so thankful, and I hope funding can continue for the incredibly hard work of developing the WebGPU specs & implementations, and wgpu. As @animats chimes in in the HN submission, bindless in particular is quite a crisis, which will either enable the web to go forward or remain a lasting, real barrier to the web's growth. It really seems to be the tension in Steve's opening position:

      > WebGPU is both years too late, and just a bit early. Whereas WebGL was OpenGL circa 2005, WebGPU is native graphics circa 2015.

I don't see the problem. There have been lower-level APIs in the graphics stack for a long time (e.g. Mesa's Gallium); only now they are standardised and people are actually choosing to use them. It's not like higher-level APIs don't exist now: OpenGL is still supported on reasonable platforms, and WebGPU has been usable from native code for some time.

As for true portability of those low-level APIs, you've basically got Apple to blame (and game console manufacturers, but I don't think anyone expected them to cooperate).

  • > you've basically got Apple to blame

    Yeah, that's the thing that really irks me. WebGPU could have been just a light wrapper over Vulkan, like WebGL is (or was, it's complicated now) for OpenGL. But Apple has been on a dumb war with Khronos for the last decade, which has made everything more difficult.

    So now we have n+1 low-level standards for GPU programming, not because we needed them, but because one major player is obstinate.

    • That would indeed be a problem if Metal weren't a much better designed API than Vulkan. As it stands, Vulkan would do well to 'steal' a few ideas from Metal to make the Vulkan API more convenient to use without sacrificing too much performance.

  • Your last paragraph is fairly revisionist to me.

    How is Apple solely to blame when there are multiple parties involved? They went to Khronos to turn AMD’s Mantle into a true unified next-gen API. Khronos and NVIDIA shot them down to further AZDO OpenGL. Therefore Metal came to be, then DX12 followed, and then Vulkan, when Khronos realized they had to move that way.

    But even if you exclude Metal, what about Microsoft and D3D? Also similarly non-portable. Yet it’s the primary API in use for non-console graphics. You rarely see people complaining about the portability of DX for some reason…

    And then in an extremely distant last place is Vulkan. Very few graphics apps actually use Vulkan directly.

    Have you tried writing any of the graphics APIs?

    • People don't complain about DX portability because Windows has first-party support for Vulkan and OpenGL, unlike macOS. Also, since the Xbox also uses DirectX, you kill two birds with one stone. And third, you aren't forced to use Microsoft hardware to develop for DirectX (these days, you don't even have to use Windows).

      Basically, people are mad that you need to buy Apple hardware, use Apple software (macOS), and Apple tooling (Xcode), just to develop graphics code for iOS and macOS. At least you don't also need to use Apple's language (Swift) to use Metal, though I don't have any first-hand experience with their C++ bindings, so I can't judge whether it's a painful experience or not.

I think what we learned from the OpenGL era is that it's actually not very relevant whether all platforms use the same high-level API to talk to the GPU hardware. What matters is that the platform's chosen API offers good control of the hardware it uses.

You say this requires reinvention, but really the end work is "translate OpenGL to something the hardware can actually understand" in both scenarios. The difference with the OpenGL era is that you did not have the option to avoid using the wrapper, not that no wrapper existed. Targeting the best of each possible hardware type individually, without baking in assumptions about the hardware, has proven not to be very practical, but that only matters if you're building an "easy translation layer" rather than using one or trying to target specific types of hardware very directly (in which case you don't want something super generic or simple, you want something which exposes the hardware as directly as is reasonable for that hardware type).

  • Writing engines predates OpenGL; it has always been more of a FOSS-culture thing to think OpenGL exists everywhere, due to its UNIX origins.

    • Sure, OpenGL wasn't even the first 3D API from SGI. I don't think what came first, or whether something was universal, is as relevant as what was happening during the era OpenGL was being developed.

OpenGL became a mess of an API after 2.0, and WebGPU is actually a fairly easy-to-use wrapper around Vulkan, D3D12, and Metal - definitely better designed than what OpenGL has become.

Apart from that, D3D11 and Metal v1 are probably the sweet spot between ease of use and performance (D3D11's performance especially is hard to beat, even in Vulkan and D3D12).

  • D3D11 is so nice to use that I feel like the ideal workflow for ground-up graphics applications is to just write everything in D3D11 and then let middleware layers on Linux (Proton) or Mac (Game Porting Toolkit) handle the translation. DirectX also has a whole suite of first-party software (DirectXMath, DirectXTK) that makes a lot of common workflows much simpler.

    If only the Windows team could get out of its tailspin, because almost everything else MS produces on the Windows side gets worse and worse every year.

OpenGL is still such a powerful technology. I use it all the time because Vulkan is just so much more difficult to use. It's a pity: so much good software isn't being built because OpenGL is more or less a dead man walking.

I agree. I'll keep using OpenGL with CUDA interop until something better shows up. Vulkan isn't it. I tried Vulkan to get away from OpenGL, but ended up with CUDA instead since it's so much nicer to work with. Vulkan has way too much overengineered complexity with zero benefit.

Not really. For example, in the OpenGL era there was this urban myth that game consoles used OpenGL; this was never really the case.

Nintendo, after graduating to devkits where C and C++ could be used (like the N64), had OpenGL-inspired APIs, which isn't really the same, although there was some GLSL-like shader support.

They only started supporting Khronos APIs with the Switch, and even then, if you want the full power of the Switch, NVN is the way to go.

PlayStation has always had proprietary APIs; there was a small stint with OpenGL ES 1.0 + Cg, which had very little to no uptake among developers, and it was dropped from the devkits.

Sega only had proprietary APIs, and there was a small collaboration with Microsoft for DirectX, which only a few studios took advantage of.

Xbox, naturally, has always been about DirectX.

Go watch the GDC Vault programming track to see how many developers you will find complaining about writing middleware for their game engines, if any at all, versus how many talks there are about taking advantage of every little low-level detail of the hardware architecture.

  • Early console APIs were more similar to Direct3D 1, with very rudimentary immediate-mode commands. Modern console APIs still have a less stateful, easy API layer, like D3D10/11, but they also expose more low-level stuff.

    OpenGL didn't match the hardware well except on SGI hardware or carryover descendants like 3dfx.

IMO things have never been better.

Vulkan works approximately everywhere (except Apple, but that's entirely self-inflicted and there's a compatibility layer, so it's NotMyProblem). OpenGL is more portable than ever thanks to software implementations that yield far more consistent behavior between platforms than was available historically. WebGPU is actually fairly nice to work with, has a well-maintained native implementation for two major systems languages, and both of those implementations have (AFAIK) fully functional WASM support. If it happens to gain a native Mesa implementation once everything stabilizes, that will merely be icing on the cake. OpenCL has multiple competing implementations, including PoCL, which is an adapter providing decently broad support on top of other backends.

And if you don't want to fiddle with native APIs (which, no offense intended, you very clearly sound like you don't), there are quite a few choices available to abstract all the low-level details away with cross-platform, cross-API middleware that is FOSS and actively maintained.

Yeah, it’s kind of insane how bad things have gotten.

There are no adults, no leaders with an eye on things leading us away from further mistakes, and we keep going deeper.

  • The point of the graphics APIs is to be as close to the metal as possible. It’s a balancing act between portability and hardware design/performance. I really don’t think it’s as trivial as non-graphics engineers make it out to be to make something universal.

    But even when it existed in the form of OpenGL, or now WebGPU, people complain about the performance overhead. So you end up back here.

    • Vulkan isn't close to the metal, though. It's a high-level wrapper around all the quirks and differences of everything from ancient mobile to modern desktop GPUs. Render passes, for example, are entirely irrelevant for desktop GPUs. They are not close to the metal, but add needless complexity. Recently, a Vulkan driver engineer even told me that they are not necessary even for the tile-based mobile GPUs for which they were intended, since the drivers can figure the necessary things out by themselves. And I would guess they need to, since render passes became optional in Vulkan, so drivers can't rely on them anymore. They are still mandatory in WebGPU, for no good reason (see the sketch at the end of this comment).

      And there are so many pointless things that are no longer relevant, or should at best be optional so that devs can get things done before optimizing.
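
      To illustrate the render pass point: even the most trivial WebGPU draw has to go through a render pass with explicit load/store ops per attachment. A minimal, untested TypeScript sketch - the drawFrame name is made up, and the device, context, and pipeline are assumed to already exist:

        // Clear the canvas and draw one triangle; the pass descriptor with its
        // loadOp/storeOp is required even for this trivial case.
        function drawFrame(device: GPUDevice, context: GPUCanvasContext, pipeline: GPURenderPipeline) {
          const encoder = device.createCommandEncoder();
          const pass = encoder.beginRenderPass({
            colorAttachments: [{
              view: context.getCurrentTexture().createView(),
              clearValue: { r: 0, g: 0, b: 0, a: 1 },
              loadOp: "clear",   // what happens to the attachment when the pass begins
              storeOp: "store",  // what happens to it when the pass ends
            }],
          });
          pass.setPipeline(pipeline);
          pass.draw(3); // single test triangle
          pass.end();
          device.queue.submit([encoder.finish()]);
        }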

    • These metal implementations are constructs, and they are constructed differently for no good reason. Is there any benefit to anyone with all these proprietary implementations? Maybe, but if so, the beneficiary isn’t the consumer and it isn’t the game developer.

      So who is the graphics hardware built for? Again, not the consumer and not the game developer.

      It is in the interests of these hardware manufacturers to make performance as easy as possible, but none of them do. They write their own drivers, which implement DirectX 12 or Vulkan or Metal or OpenGL.

      So now, as a game developer, if I want my game to perform on all platforms, I have to write my shaders natively for Metal, Vulkan, and DirectX 12, at least. Cross-compilers exist, but they don't do their job as well as a human can, so they're simply not options for some.

      All of this is harder for no good reason. And no one cares. No one wants to see things improve. They just make excuses for the hardware manufacturers and kill conversations which explain how things currently suck for a lot of people.

  • What would a leader do? Nvidia wants to sell hardware, Nintendo wants to sell games, and Microsoft wants to either buy Linux or crush it. Nobody has a stake in things actually working.

Modern graphics APIs are the graphical equivalent of assembly language - you are not supposed to use them directly, but through another, higher-level layer, like a programming language or a graphics engine.

They are specialized APIs intended for tool writers.