Thanks for sharing! Author here, happy to answer any questions.
Google Maps uses AAA on capsule shapes for all road segments - I wrote it ~10 years ago. :D
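For readers curious what that involves: a capsule is one of the few shapes with a cheap closed-form SDF, so the anti-aliasing falls straight out of the distance value. A minimal sketch (plain TypeScript mirroring the usual shader formulation; not Google Maps' actual code):

```ts
// Signed distance from point p to a capsule: segment a->b inflated by
// radius r. Negative inside, positive outside; the AA edge is the narrow
// band where |distance| is under about one pixel.
type Vec2 = { x: number; y: number };

function sdCapsule(p: Vec2, a: Vec2, b: Vec2, r: number): number {
  const pax = p.x - a.x, pay = p.y - a.y;
  const bax = b.x - a.x, bay = b.y - a.y;
  // Parameter of p's projection onto the segment, clamped to [0, 1].
  const h = Math.max(0, Math.min(1, (pax * bax + pay * bay) / (bax * bax + bay * bay)));
  // Distance from p to the closest point on the segment, minus the radius.
  return Math.hypot(pax - bax * h, pay - bay * h) - r;
}
```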
Neat. Does that mean that every road segment is a separate mesh?
That's awesome! Added this as an addendum to the post.
Fantastic article! I've been trying to figure out antialiasing for MSDF fonts, and have run across some claims:
1. antialiasing should be done in linear rgb space instead of srgb space [1] [2]
2. because of the lack of (1) for decades, fonts have been tweaked to compensate, so sometimes srgb is better [3] [4]
Do you have advice on linear vs srgb space antialiasing?
[1] http://hikogui.org/2022/10/24/the-trouble-with-anti-aliasing...
> Do you have advice on linear vs srgb space antialiasing?
Unfortunately, this is completely context dependent. One central point is whether or not the graphics pipeline is instructed to perform corrections (GL_FRAMEBUFFER_SRGB in OpenGL), as that changes the answer. Another is the context in which blending is performed. Luckily the developer has full freedom here and can even specify separate blending for alpha and color [1], something the GPU-accelerated terminal emulator Alacritty makes use of [2], though it doesn't do MSDF rendering.
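For reference, that split color/alpha blending [1] looks like this in WebGL; the factors below are just one plausible choice to illustrate the API, not a recommendation:

```ts
// Sketch: independent blend factors for RGB and alpha via blendFuncSeparate.
const canvas = document.createElement("canvas");
const gl = canvas.getContext("webgl")!;

gl.enable(gl.BLEND);
gl.blendFuncSeparate(
  gl.SRC_ALPHA, gl.ONE_MINUS_SRC_ALPHA, // RGB: classic "over" blending
  gl.ONE, gl.ONE_MINUS_SRC_ALPHA        // alpha: accumulate coverage
);
```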
One thing I can say though: the alpha, the fading of the edge, has to be linear at the end, or perceived as such. Put differently, if the edge were stretched to 10 pixels, each pixel would have to be a 0.1 alpha step. (If smoothstep is used, the alpha has to follow that curve at the end.) Otherwise the Anti-Aliasing will be strongly diminished. This is something you can always verify at the end. Correct blending of colors is of course a headache and context specific.
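As a concrete sketch of that linear ramp, the common fragment-shader pattern maps the signed distance to alpha over exactly one pixel of screen space (GLSL embedded as a string; `dist` and `color` are assumed to be computed earlier in the shader):

```ts
// GLSL fragment snippet: one-pixel-wide linear alpha ramp around the SDF
// zero crossing. Needs OES_standard_derivatives in WebGL1 for fwidth().
const edgeRamp = /* glsl */ `
  float w = fwidth(dist);                        // ~1 pixel in SDF units
  float alpha = clamp(0.5 - dist / w, 0.0, 1.0); // 1 inside -> 0 outside, linearly
  gl_FragColor = vec4(color, alpha);
`;
```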
> fonts have been tweaked to compensate, so sometimes srgb is better
This should not concern MSDF rendering. Those tweaks happened at specific resolutions, for monitors popular at the time. Especially when considering HiDPI modes of modern window systems, all bets are off; DPI scaling completely overthrows any of that. MSDF is size independent, and the "tweaks" are mainly thickness adjustments, which MSDF has control over. So if the font doesn't match how it looks under another rendering method, MSDF can correct for it.
[1] https://developer.mozilla.org/en-US/docs/Web/API/WebGLRender...
[2] https://github.com/search?q=repo%3Aalacritty%2Falacritty+ble...
Great post! Minor nitpick: WebGL has supported MSAA since WebGL1, but in WebGL1 only on the canvas, and you don't have any control over the number of samples (you can only toggle antialiasing on/off) - not that it matters much anymore :)
What WebGL2 is still missing is MSAA texture objects (it only supports MSAA render buffers), which makes it impossible to directly load individual samples in a shader (useful for custom-resolve render passes). That's only possible in WebGPU.
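For anyone who hasn't used it, the WebGL2 renderbuffer path looks roughly like this: render into a multisampled renderbuffer, then resolve with a blit (a sketch, error handling omitted):

```ts
// Sketch: offscreen MSAA in WebGL2 with a multisampled renderbuffer,
// resolved to the default framebuffer via blitFramebuffer.
const canvas2 = document.createElement("canvas");
const gl2 = canvas2.getContext("webgl2")!;
const w = canvas2.width, h = canvas2.height;
const samples = Math.min(4, gl2.getParameter(gl2.MAX_SAMPLES));

const msaaFbo = gl2.createFramebuffer();
const colorRb = gl2.createRenderbuffer();
gl2.bindRenderbuffer(gl2.RENDERBUFFER, colorRb);
gl2.renderbufferStorageMultisample(gl2.RENDERBUFFER, samples, gl2.RGBA8, w, h);
gl2.bindFramebuffer(gl2.FRAMEBUFFER, msaaFbo);
gl2.framebufferRenderbuffer(gl2.FRAMEBUFFER, gl2.COLOR_ATTACHMENT0, gl2.RENDERBUFFER, colorRb);

// ... draw the scene into msaaFbo here ...

// Resolve the samples; individual samples are never shader-visible in WebGL2.
gl2.bindFramebuffer(gl2.READ_FRAMEBUFFER, msaaFbo);
gl2.bindFramebuffer(gl2.DRAW_FRAMEBUFFER, null);
gl2.blitFramebuffer(0, 0, w, h, 0, 0, w, h, gl2.COLOR_BUFFER_BIT, gl2.NEAREST);
```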
Thank you for the excellent writeup, terrific work!
> Whole communities rally around fixing this, like the reddit communities “r/MotionClarity” or the lovingly titled “r/FuckTAA”, all with the understanding, that Anti-Aliasing should not come at the cost of clarity. FXAA creator Timothy Lottes mentioned, that this is solvable to some degree with adjustments to filtering, though even the most modern titles suffer from this.
I certainly agree that the current trend of relying on upscalers has gone too far and results in blurry, artifact-riddled AAA game experiences for many. But after seeing this deep dive [1] by Digital Foundry, I find the arguments he makes quite compelling. There is a level of motion stability and clarity only tech like DLSS can achieve, even outperforming SSAA. So I've shifted my stance from "TAA == blurry" to "TAA + ML, when used right, == best AA currently possible for 3D games".
Thoughts?
[1] https://youtu.be/WG8w9Yg5B3g
Disclaimer: I haven't really played games with high-end graphics on a high-end system in over a decade, and have always had more of a soft spot for games that use beautiful art design to work around the limitations of "underpowered" systems (IMO the graphics in those games hold up much better throughout the decades anyway). And as I mentioned elsewhere I'm generally quite lost when it comes to AA names and abbreviations. It would be disingenuous of me to tell a community I'm not part of what the best solution is for them.
However, on a meta-level I find something like “r/FuckTAA” fundamentally entitled and ungrateful to the people who put years of their lives into making these games. Of course the loudest gamers tend to be the smaller subgroup of entitled, toxic ones, so perception is distorted anyway. Plus, I do get it to some degree: if you invest a lot of money and time into powerful hardware to get beautiful graphics, you'd like to actually get beautiful graphics out of it.
Still, every time I read any article on the technical workings of modern graphics, it strikes me as a community full of extremely passionate people who care about squeezing beautiful graphics out of the available hardware. There are nicer ways to say "sorry but this particular technical solution/aesthetic trend doesn't vibe with me".
(Obviously this is not aimed at your nuanced take with a sincere question for discussion. And thank you for the link, will watch, because the tech still is an interesting topic to me even if I don't play these types of games)
How long did this take to write?
I have done a few live-visualization-based blog posts, and they take me ages to do. I kind of think that's the right idea though. There is so much content out there; taking longer to produce less content at a higher quality benefits everyone.
The commit history [1] reveals that it took a while. I don't write professionally; it's something I do in lunch breaks from time to time. Thanks for the kind words.
[1] https://github.com/FrostKiwi/treasurechest/commits/main/post...
One small bit of technical feedback for the website itself: it would be nice if the links in the article opened in a new tab by default, because reloading the webpage via the back button is a little broken on my mobile browsers. I suspect it has something to do with trying to restore the state of the page while also having WebGL contexts.
Ohh right! I'm sure there must be an Eleventy setting for that...
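One common approach (a sketch assuming the default markdown-it setup; untested against this blog) is overriding markdown-it's link_open renderer in the Eleventy config:

```ts
// .eleventy.js sketch: open markdown links in a new tab by patching
// markdown-it's link_open rule. Hypothetical config, adapt as needed.
const markdownIt = require("markdown-it");

module.exports = function (eleventyConfig) {
  const md = markdownIt({ html: true });
  const defaultRender = md.renderer.rules.link_open ||
    ((tokens, idx, options, env, self) => self.renderToken(tokens, idx, options));
  md.renderer.rules.link_open = (tokens, idx, options, env, self) => {
    tokens[idx].attrSet("target", "_blank");
    tokens[idx].attrSet("rel", "noopener"); // avoid window.opener leakage
    return defaultRender(tokens, idx, options, env, self);
  };
  eleventyConfig.setLibrary("md", md);
};
```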
As a non-gamedev person, just a gamer: should I expect this to replace TAA anytime soon? Should it replace TAA?
Basically no... Analytic AA is a really hard problem for video games, and I know of no general-purpose solutions.
For font and 2D vector rendering it's likely; in fact, as far as I know, some solutions such as Slug already do this.
But for 3D rendering I don't know of any solutions.
For an intuition, consider two triangles that intersect the same pixel.
Say one has 20% coverage and the other 30%: does that pixel have 50% coverage? 30% from one and 20% from the other? 20% from one, 10% from the other, or any other conceivable mix? It's very difficult to say without selecting specific points and sampling directly.
I tried to do this for aliased polygon rendering by computing edge positions and reweighting pixel colours. It looks great when polygons are large, but it breaks for small polygons (when polygon size is close to pixel size).
Not a question but some unsolicited (sorry) feedback. The intro seems designed to set people up for disappointment. You start off by talking about AA methods used for 3D scenes, and you've picked a very cool way to present them... but the article is actually about antialiased drawing of SDFs, which is not exactly a hard problem and not applicable to 3D scenes. Unless your scene is made up of SDF shapes, but I don't think the method you're presenting would be fast enough on a nontrivial scene as you would need to rely on alpha-blending across seams. (I think Alex Evans' talk on Dreams mentions they tried something similar to get fuzzy shapes but dropped it due to perf and sorting issues.) In any case, it would have been nice for the article's intro to more clearly say what it's about and what the technique is useful for.
True, this is something I struggled with while writing, and I ended up with just a small note commenting that this is not widely applicable. Will clarify more in coming posts. It's all incredibly context specific. The reason for this order is that you very much can use all these approaches (SSAA, FXAA, MSAA etc.) for rendering simpler shapes and HUDs. So the post goes through them, where these approaches break down, and when it does make sense to go the extra mile with SDFs.
Still, non-standard rendering approaches are very much a thing [1], and I could see setups like [2] being used in scientific particle visualizations.
[1] https://www.youtube.com/watch?v=9U0XVdvQwAI
[2] https://bgolus.medium.com/rendering-a-sphere-on-a-quad-13c92...
Great write up, excellent explorables. I skimmed some parts so forgive me if this was covered, but I wonder what happens with overlapping shapes in this approach. For example, a white background with a black disc and then a white disc of the exact same size and position would probably leave a fuzzy gray hairline circle? With regular antialiasing it should be all white.
What happens during overlap is something you control fully, at every step (except when using this with MSAA, as that is implementation defined). How intersections happen when blending shapes across multiple draw calls or multiple quads is defined by the blending function you set before issuing the draw call. In WebGL the call is blendFunc() [1] and there are a bunch of options.
How to blend multiple shapes on the same quad, within one draw call, is shown in the section "Drawing multiple?". There you fully control the intersection with the blending math, as sketched below.
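A sketch of what that in-shader control can look like (not the post's exact code; `sdfA` and `sdfB` are hypothetical distances computed earlier in the shader). Merging the distances before the alpha ramp is what keeps a shared edge seamless, instead of blending two already-faded alphas:

```ts
// GLSL sketch: combine two SDFs *before* converting to alpha.
// min() = union, max() = intersection, max(a, -b) = subtraction.
const combine = /* glsl */ `
  float d = min(sdfA, sdfB);                          // union of both shapes
  float alpha = clamp(0.5 - d / fwidth(d), 0.0, 1.0); // single clean edge
`;
```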
Finally, there is always the gamma question, which applies to all AA solutions. It is not covered in the post, but it might mess with the results, as is true for any kind of blending.
[1] https://developer.mozilla.org/en-US/docs/Web/API/WebGLRender...
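For completeness on that gamma point, these are the standard sRGB transfer functions; "blending in linear space" means converting, mixing, and converting back (a sketch using the exact piecewise sRGB curve, not the gamma-2.2 approximation):

```ts
// Per-channel sRGB <-> linear conversions, inputs in [0, 1].
function srgbToLinear(c: number): number {
  return c <= 0.04045 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4);
}
function linearToSrgb(c: number): number {
  return c <= 0.0031308 ? c * 12.92 : 1.055 * Math.pow(c, 1 / 2.4) - 0.055;
}

// Gamma-correct blend of two sRGB-encoded channel values.
function blendChannel(a: number, b: number, t: number): number {
  return linearToSrgb(srgbToLinear(a) * (1 - t) + srgbToLinear(b) * t);
}
```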
If you fade out the edge pixels like that, won't that create a 1px gap between adjacent squares? What do you do about that?
I think this is the same method Skia uses, and it doesn't work for rendering vectors from SWF files in my experience because of this problem.
Can you elaborate a bit on the tech stack used for this blog? I didn't find any hints in the source (but I'm not an expert). Is it some known framework? What does the input look like (Markdown etc.)?
The blog is built with Eleventy [1] and, as the previous reply already mentioned, the source code is on GitHub [2]. Each post is a single Markdown file with HTML inserted where I need it. The JavaScript for the demos is custom and changes from post to post. The basic style comes from Sakura CSS [3] with a bunch of custom stuff on top.
[1] https://www.11ty.dev/
[2] https://github.com/FrostKiwi/treasurechest
[3] https://github.com/oxalorg/sakura
Not the parent, but it seems this is the source code for the blog: https://github.com/FrostKiwi/treasurechest
I found it by going to the comments: since the comments are GitHub issues, the "x comments" counter is a link to the issues page.
> Mobile chips support exactly MSAAx4 [...] the driver will force 4x anyways
On what GPUs and through what APIs did you see this? This seems fairly weird. I especially wouldn't expect Apple to have problems.
Yes, that also surprised me, and I tested it on multiple mobile Apple devices. It's not really a mistake per se; the implementation is free to do what it wants. Selecting MSAAx2 on these types of mobile GPUs simply has no upside and isn't really supported over MSAAx4, and I guess Apple still wanted to make the choice somehow possible, as opposed to Android, where there is only an illusion of choice.
It just so happens to produce visible artifacts in this case. I suppose for 3D scenes it's mostly fine.
I’m definitely seeing similar artifacts at 2x on an iPhone 15 Pro.
Massive thanks for this! I’m already using my own version of analytical antialiasing but there were some bits I couldn’t get quite right so this is perfect!
I would love to connect on some ideas around using antialiasing as a way to extend inference in extracting information from computer vision outputs.
What an absolutely fantastic read.
"A scene from my favorite piece of software in existence: NeoTokyo°."
I am sorry, but may I ask you, do you uphold your duties as a NeoTokyo fan, by promoting both the 2 best (un-)dead Source Mods, NeoTokyo and Dystopia?
https://store.steampowered.com/app/17580/Dystopia/ https://store.steampowered.com/app/244630/NEOTOKYO/
Sorry, it was most likely funnier in my head than it really is, but I was committed to the bit once I had the idea. I was excited to see NeoTokyo mentioned somewhere popular. NeoTokyo and Dystopia regulars compete in each other's tournaments and are always looking forward to new players :)
Biggest tournament of the year:
- 5v5 matches of ~4-6 teams
- broadcast on Twitch with
- 1 camera person
- 2-3 casters/commentators
- definitely overproduced
- to an audience of 15-25
It's so much fun ^.^
Tangent: my biggest problem with AA is something adjacent to it, which is that almost none of my games bother to explain the differences between the abbreviations available in the settings, half of which are completely unknown to me. Like, sure, I can look them up, but a little bit of user-friendliness would be appreciated.
This article will probably help for future reference though!
Games/graphics are one of those domains with a lot of jargon, for sure. If you don't want to be a wizard, you can just mess with it and see what happens. I like how Dolphin approaches this with extensive tooltips in the settings, but there's always going to be some implicit knowledge.
On a meta level: I feel like I've seen anti-acronym sentiment a lot recently, yet it's never been easier to look these things up. There are definitely uses of acronyms that are anti-learning or a kind of protectionism, but to my mind there are appropriate levels of use too, because you have to label concepts at a useful granularity to accomplish things, and the graphics settings of a game are definitely on the reasonable side.
> just mess with it and see what happens
And even if you know every detail, that's still the best course of action, I think. Which kind of antialiasing you prefer, and how it trades with performance and resolution is highly subjective, and it can be "none".
There are 3 components to rescaling/rendering pixels: aliasing, sharpness and locality. Aliasing is, well, aliasing, sharpness is the opposite of blurriness, and locality is about these "ringing" artefacts you often see in highly compressed images and videos. You can't be perfect on all three. Disabling antialiasing gives you the sharpest image with no ringing artefacts, but you get these ugly staircase effects. Typical antialiasing trades this for blurriness, in fact, FXAA is literally a (selective) blur, that's why some people don't like it. More advanced algorithms can give you both antialiasing and sharpness, but you will get these ringing artefacts. The best of course is to increase resolution until none of these effects become noticeable, but you need the hardware.
The best algorithms attempt to find a good looking balance between all these factors and performance, but "good looking" is subjective, that's why your best bet is to try for yourself. Or just keep the defaults, as it is likely to be set to what the majority of the people prefer.
The PS4 Pro introduced the gaming world to the simplification of settings from dozens of acronyms that were common to PC Gamers, down to “Performance” and “Quality”.
I wouldn’t be surprised if there’s now a market demand for that to spread back to PC land.
Well, I suppose I'm not the assumed target audience. Back when I was younger, I had the time to tweak everything on my computer: my Linux distro, my games (dual-booting Windows just for that), and looking things up for that reason. I also could play long gaming sessions.
Nowadays I'm a dad with practically no spare time between raising a toddler and work. Suddenly the question of "does the game overstay its welcome?" has become a thing; I almost exclusively play games that can be played in short bursts of time, and that deliver a great experience as a whole that can be completed in a relatively short playtime. I got a Steam Deck a few years ago for the specific purpose of separating my work computer from my gaming platform, and being able to pick up and play a game and pausing it without problems.
Even with the built-in performance overlay of the Steam Deck (which is very nice) it takes time to assess the quality-vs-performance results of every possible combination of settings. Often more than I would spend on playing a game.
I suspect that people like me either already are or soon will be the bigger segment of the (paying) customers though, so that is something to consider for developers.
And some games do give short explanations of what each type of technique does and how they compare, along with statements like "this usually has a small impact on performance" or "this has a large impact on performance" to guide me, which is already a great help.
Graphics programming analysis done using examples written in WebGL: genius. Hypertext that takes full advantage of the medium. This reminds me of something I'd see on https://pudding.cool/, but it goes far more in depth than anything there. Absolutely fantastic article.
I've been using MSAAx4 in my rendering engine for some time and only recently considered switching to an FXAA/TAA implementation. I'm actually not sure I'm going to go through with that now. I definitely learned a lot here, and will probably use the analytical approach for UI items; I hadn't heard about that anywhere.
Not often you see graphics-programming stuff on HN. For anyone interested in more graphics write-ups, this list of frame breakdowns is one of my favorite resources:
https://www.adriancourreges.com/blog/
Steve Wittens also does a lot of these kinds of articles (math with WebGL-infused illustrations, etc.) at https://acko.net/
One of my favorites: https://acko.net/blog/how-to-fold-a-julia-fractal/. This helped me understand the relationship between trigonometric functions and complex numbers like nothing else I've ever seen.
I really dislike TAA, especially on lower framerates. There's too much ghosting. I often switch it to a slower algorithm just so I don't get ghosting.
It’s very strange. I had a vivid dream, only about 5 hours ago, where I was debating the drawbacks of TAA with some scientists in a lab (likely because Half Life has been in the news this week). I think I dream about rendering algorithms once every several years.
And now today there’s this post and your comment here in the front page.
My feeling is if I can render at 4K I can just not do AA at all. It really looks quite fine without, at least for me.
Those frames with the circle and zoomed bit are a fantastic way to convey this message. Well done; the whole article reads great.
Awesome article.
SDF (or MSDF) isn't the future. It's already a "good enough" classic.
> This works, but performance tanks hard, as we solve every bezier curve segment per pixel
This is "the future" or even present as used in Slug and DirectWrite with great performance
https://sluglibrary.com/ https://learn.microsoft.com/en-us/windows/win32/directwrite/...
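For intuition on what "solving every bezier segment per pixel" means: shoot a horizontal ray from the pixel and count signed crossings against each quadratic segment, which is just a quadratic root solve. A simplified sketch; Slug's real (and patented) algorithm adds banding and precomputed curve classification on top:

```ts
type Pt = { x: number; y: number };

// Evaluate one coordinate of a quadratic bezier at t.
const quad = (p0: number, p1: number, p2: number, t: number) =>
  (1 - t) * (1 - t) * p0 + 2 * (1 - t) * t * p1 + t * t * p2;

// Winding contribution of one quadratic segment for a ray shot rightwards
// from pixel (px, py). Sum over all segments; nonzero => pixel is inside.
function winding(p0: Pt, p1: Pt, p2: Pt, px: number, py: number): number {
  // y(t) - py = a*t^2 + b*t + c
  const a = p0.y - 2 * p1.y + p2.y;
  const b = 2 * (p1.y - p0.y);
  const c = p0.y - py;
  let roots: number[] = [];
  if (Math.abs(a) < 1e-12) {
    if (Math.abs(b) > 1e-12) roots = [-c / b]; // segment degenerates to a line in y
  } else {
    const disc = b * b - 4 * a * c;
    if (disc >= 0) {
      const s = Math.sqrt(disc);
      roots = [(-b - s) / (2 * a), (-b + s) / (2 * a)];
    }
  }
  let w = 0;
  for (const t of roots) {
    if (t < 0 || t > 1) continue;                  // crossing outside the segment
    if (quad(p0.x, p1.x, p2.x, t) <= px) continue; // crossing left of the pixel
    const dy = 2 * ((1 - t) * (p1.y - p0.y) + t * (p2.y - p1.y)); // y'(t)
    w += dy > 0 ? 1 : -1;                          // signed by crossing direction
  }
  return w;
}
```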
Don't forget about implicit curve rendering [0]. The patent will expire soon [1].
[0]: https://www.microsoft.com/en-us/research/wp-content/uploads/... [1]: https://patents.google.com/patent/US20070097123A1/en
I wrote an implementation of the Loop/Blinn paper for Microsoft Game Studios ~20 years ago, I wonder if they're still using it.
Had to do a _lot_ of work to make it production-ready, as their Voronoi-based tessellation goes pathological on a lot of Asian glyphs.
I may be remembering totally wrong, but isn't the algorithm used in Slug patented?
https://news.ycombinator.com/item?id=42194175
Scrolling through the post, the NeoTokyo screenshot struck me instantly; I ran through that hallway thousands of times. I ran a server for that mod for some years and had great fun with a small community of good/capable people.
The even more amazing thing is that it's still actively played. There is a full server every Friday night (sometimes also on Saturday/Sunday). It has quite a dedicated fan base, and I've never seen such dedication in another old multiplayer game.
Great write-up!
A little caveat from my side though, as I have written both 2D and 3D rendering engines: let me tell you, they could not be more different. It is not just another dimension, but completely different goals, use cases and expectations.
So instead of:
> Everything we talked about extends to the 3D case as well.
I would say the entire post is mostly about 3D, not 2D rendering. If you are curious about this topic as approached for 2D rendering, here is a nice write-up I found: https://ciechanow.ski/alpha-compositing/
One particular criterion for AA techniques that no one cares about in 3D, but that is very relevant in 2D, is correctness and bias. AAA, for example, is heavily biased and thus incorrect: drawing the exact same shape multiple times in the same place will make it more opaque / darker. The same thing does not happen with MSAA, which has a bounded error and is unbiased.
Hey, I'm brainstorming a 3D vector renderer in WebGPU on JS/TS and stumbled on your project [0] yesterday.
(Thick) line drawing is especially interesting to me, since it's hard [1].
I also stumbled upon this [2] recently and then wondered if I could use that technique for every shape, by converting it to quadratic bezier curve segments.
Do you think that's a path to follow?
[0] https://github.com/Lichtso/contrast_renderer
[1] https://mattdesl.svbtle.com/drawing-lines-is-hard
[2] https://scribe.rip/@evanwallace/easy-scalable-text-rendering...
My implementation does:
- Implicit Curve Rendering (Loop-Blinn) and stencil geometry (tessellation-less) for filling
- Polygonization (with tangent space parameter distribution) of offset curves for stroking
> by converting it to quadratic bezier curve segments
Mathematically, the offset curve of a bezier curve is not a bezier curve of the same degree in the general case (exceptions are trivial cases like straight lines, circles and ellipses). Instead you get terrible high-degree polynomials. You will have to approximate the offset curve anyway. I chose to use polygons (straight line segments), but you could also use splines (bezier segments); it is just overly complex for little to no benefit IMO.
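A minimal sketch of that polygon approach, assuming quadratic segments: sample the curve and push each sample along its unit normal by the stroke offset (no adaptive subdivision or cusp handling, which a production stroker needs):

```ts
type Pt2 = { x: number; y: number };

// Polyline approximation of the offset curve of a quadratic bezier.
function offsetQuadratic(p0: Pt2, p1: Pt2, p2: Pt2, dist: number, steps = 16): Pt2[] {
  const pts: Pt2[] = [];
  for (let i = 0; i <= steps; i++) {
    const t = i / steps, u = 1 - t;
    // Point on the curve: B(t) = u^2*p0 + 2*u*t*p1 + t^2*p2
    const x = u * u * p0.x + 2 * u * t * p1.x + t * t * p2.x;
    const y = u * u * p0.y + 2 * u * t * p1.y + t * t * p2.y;
    // Tangent: B'(t) = 2*u*(p1 - p0) + 2*t*(p2 - p1)
    const tx = 2 * (u * (p1.x - p0.x) + t * (p2.x - p1.x));
    const ty = 2 * (u * (p1.y - p0.y) + t * (p2.y - p1.y));
    const len = Math.hypot(tx, ty) || 1;
    // Unit normal is the tangent rotated 90 degrees.
    pts.push({ x: x - (ty / len) * dist, y: y + (tx / len) * dist });
  }
  return pts;
}
```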
BTW, distance fields and offset curves are very similar. In fact the distance field is the set of all possible offsets of offset curves and the offset curves are the isolines on the distance field.
Here is a good summary of all the edge cases to think about in 2D rendering: https://www.slideshare.net/slideshow/22pathrender/12494534
About subpixel AA: don't bother, LCDs are on the decline.
I don't think the article generalizes trivially to 3D, though.
The solution presented relies on signed distance fields, yet skims over the important part - a distance to what? In 2D it is obvious, we are measuring distance to an edge between the object and its background, to a silhouette.
In 3D, when objects may rotate producing self-occlusions, things get more complicated - what are we measuring the SDF against? The silhouette of a 3D object's 2D projection is ever-changing and cannot be trivially precomputed.
Appreciated that link out to Captain Disillusion. I had not heard of that guy. Incredible work, here's a direct link for those interested in video effects: https://www.youtube.com/@CaptainDisillusion
It is well presented, but I think the part attacking TAA will lead to confusion, as SDF AA is in no way an alternative to TAA.
TAA covers all types of aliasing, while this only covers edge aliasing.
Many modern games use monte carlo based approaches for indirect lighting and other effects, which basically requires TAA.
> Mobile chips support exactly MSAAx4 and things are weird. Android will let you pick 2x, but the driver will force 4x anyways.
Hmm... On my Android phone I definitely see a difference between 2x and 4x, but it's not "rounded" like the iPhone one.
Amazing blog, both in content and presentation. Love it when articles give you controls to play with. Gives me hope for the future of the web. The NeoTokyo mention reveals great taste.
I've been so used to screen space techniques that I initially read SSAA as "screen space antialiasing", not "super sampled antialiasing".
My favorite is still SSSSS, or screen space subsurface scattering.
those buttery smooth gradients are soooo pleasing to watch <3
What's the catch?
The catch is the alpha blending, which is something modern games avoid doing as much as possible, and why so many games use the dither-pattern transparency you may have seen before.
To do alpha blending correctly you need to blend with what's behind the object. But what if you haven't rendered what's behind the object? You won't have anything to blend with!
This means you first have to sort all the objects you're rendering and render them in order. It also means you can't use a depth pre-pass, because you need to draw what's behind the object even for the parts you already know will be covered by the object in front of it. There are a bunch more reasons to avoid it, but those are some of the basic ones.
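Sketched out, that sorting requirement looks like this (assuming premultiplied-alpha "over" blending; `depth` as the object's view-space distance and `draw()` as an engine callback are hypothetical):

```ts
// Sketch: render transparent objects back-to-front with "over" blending.
interface Transparent { depth: number; draw(): void; }

function drawTransparents(gl: WebGLRenderingContext, objs: Transparent[]) {
  objs.sort((a, b) => b.depth - a.depth); // farthest first

  gl.enable(gl.BLEND);
  gl.blendFunc(gl.ONE, gl.ONE_MINUS_SRC_ALPHA); // premultiplied-alpha "over"
  gl.depthMask(false); // still depth-test against opaques, but don't write
  for (const o of objs) o.draw();
  gl.depthMask(true);
}
```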
An alternative is to draw only a few pixels of what's behind the object you're rendering at the edges of the object that's in front and then alpha blend those extra samples together, which seems to be the solution proposed in the article for actual 3D games. So then the catch is that you're doing MSAA, just with high-quality blending rather than the standard averaging of the samples.
Now that I am delving into the retro-gaming world, I find it funny that 30 years ago gamers lambasted the Sega Saturn for using dithering patterns instead of proper transparencies, only for the same technique to come back decades later.
I really miss MSAA. I still dislike DLSS personally. I realize many people seem to like it, but it just does not look that good to me. Or as good as things used to look or I believe could look.
Sure it's better than TAA, but come on, this can't be the ultimate end for gaming graphics... At least I hope it isn't.
The problem with MSAA is that it only handles aliasing from geometry edges. It's pretty much useless against shader aliasing, which became a massive problem after normal mapping and HDR lighting became standard in the PS3 era.
This is only a problem if you naively undersample the normal maps in your fragment shader. No AA algorithm can generate a high-quality result from undersampled lighting calculations.
For normal maps specifically, you can preserve lost information from downsampled normal maps as roughness: https://media.steampowered.com/apps/valve/2015/Alex_Vlachos_...
MSAA also doesn't require you to have only one color sample per pixel; this can be varied per draw call (at least on desktop GPUs), effectively allowing you to supersample some objects when needed.
Same here. I can't put my finger on what exactly it is, but it just never feels and looks as good as lower quality at full resolution.
DLSS, even in Balanced mode, is annoying when the camera moves laterally; I start seeing all kinds of artifacts. Strangely enough, I prefer FSR 2.0 even though there’s slightly less overall detail.
Now I'm very curious about the analytical AO and shadows you mentioned The Last of Us uses. I'd heard about the spheres but never seen an explanation of how they get turned into shadows.
The relevant talk is "Lighting Technology of The Last Of Us" by Michał Iwanicki: http://miciwan.com/SIGGRAPH2013/Lighting%20Technology%20of%2...