Comment by jms55

1 year ago

Fast light transport is an incredibly hard problem to solve.

Raytracing (in its many forms) is one solution. Precomputing lightmaps, probes, occluder volumes, or other forms of precomputed visibility are another.

In the end it comes down to a combination of target hardware, art direction and requirements, and technical skill available for each game.

There's not going to be one general-purpose renderer you can plug into anything _and_ expect it to be fast, because there's no general solution to light transport and geometry processing that fits everyone's requirements. Precomputation doesn't work for dynamic scenes, and for large games it leads to storage-size problems and workflow slowdowns across teams. No precomputation at all requires extremely modern hardware and cutting-edge research, has stability issues, and despite all that is still very slow.

It's why game engines offer several different forms of lighting methods, each with as many downsides as they have upsides. Users are supposed to pick the one that best fits their game, and hope it's good enough. If it's not, you write something custom (if you have the skills for that, or can hire someone who can), or change your game to fit the technical constraints you have to live with.

> Nobody has a good solution to this yet. What does the renderer need to know from its caller? A first step I'm looking at is something where, for each light, the caller provides a lambda which can iterate through the objects in range of the light. That way, the renderer can get some info from the caller's spatial data structures. May or may not be a good idea. Too early to tell.

Some games may have their own acceleration structures. Some won't. Some will only have them on the GPU, not the CPU. Some will have an approximate structure used only for a specialized task (culling, audio, lighting, physics, etc.), one that cannot be generalized to other tasks without becoming worse at its original task.
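To make the quoted idea concrete, here is one possible shape for that API: the renderer never sees the caller's spatial data structure, only a closure that visits objects in range of a light. This is a hypothetical sketch, not anyone's actual API; all names (`shade_light`, `ObjectId`, `Light`) are illustrative.

```rust
/// Illustrative stand-ins for renderer types (hypothetical names).
#[derive(Clone, Copy, Debug, PartialEq)]
struct ObjectId(u32);

struct Light {
    position: [f32; 3],
    radius: f32,
}

/// Renderer-side entry point: it asks the caller to enumerate objects
/// near `light` via `visit`, without knowing whether the caller uses a
/// BVH, a grid, or a flat list.
fn shade_light<F>(light: &Light, visit: F) -> Vec<ObjectId>
where
    F: Fn(&Light, &mut dyn FnMut(ObjectId)),
{
    let mut affected = Vec::new();
    visit(light, &mut |id| affected.push(id));
    affected
}

fn main() {
    // Caller side: a flat list standing in for whatever spatial
    // structure the game actually uses.
    let objects = [
        (ObjectId(0), [0.0_f32, 0.0, 0.0]),
        (ObjectId(1), [10.0, 0.0, 0.0]),
        (ObjectId(2), [1.0, 1.0, 0.0]),
    ];

    let light = Light { position: [0.0, 0.0, 0.0], radius: 3.0 };

    let in_range = shade_light(&light, |light, emit| {
        for (id, pos) in &objects {
            // Squared distance from light to object.
            let d2: f32 = pos
                .iter()
                .zip(&light.position)
                .map(|(a, b)| (a - b) * (a - b))
                .sum();
            if d2 <= light.radius * light.radius {
                emit(*id);
            }
        }
    });

    // Objects 0 and 2 lie within radius 3 of the origin; object 1 does not.
    assert_eq!(in_range, vec![ObjectId(0), ObjectId(2)]);
}
```

The catch, as noted above, is that this assumes the caller *has* something efficient to iterate with; when the only acceleration structure lives on the GPU, a CPU-side closure like this can't reach it.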

Fully generalized solutions will be slow but flexible, and fully specialized solutions will be fast but inflexible. Game design is all about making good tradeoffs.

The same argument could be made against Vulkan, or OpenGL, or even SQL databases. The whole NoSQL era was based on the concept that performance would be better with less generality in the database layer. Sometimes it helped. Sometimes trying to do database stuff with key/value stores made things worse.

I'm trying to find a reasonable medium. I have a hard scaling problem - big virtual world, dynamic content - and am trying to make that work well. If that works, many games with more structured content can use the same approach, even if it is overkill.