Comment by ogurechny
2 days ago
Those discussions are a bit misleading. The original Doom updates its game state only 35 times a second, and ports that need to remain demo- and network-compatible must follow that (though interpolation and prediction tricks are possible for visual smoothing of movement). The rendering engine is also completely orthogonal to polygon-based 3D accelerators, so all their power goes unused (apart from, perhaps, image buffers in fast memory and hardware compositing operations). Performance on giant maps therefore depends on CPU speed. The point of this project is making the accelerator do its job with a new rendering process.
Though I wonder how sprites, which are a different problem orthogonal to polygonal rendering, are handled. So, cough cough, Doxylamine Moon benchmarks?
"Rendering engine is also completely orthogonal to polygon-based 3D accelerators"
The software rendering engine, yes (and even then you can parallelize it). But there is really no reason why Doom maps can't be broken down into polygons. Proper sprite rendering is a problem, though.
Sure, that has been done since the source code release in the late '90s, either by converting visible objects to triangles drawn by the accelerator (glDoom, DoomGL), or by transplanting the game data and mechanics code into an existing 3D engine (Vavoom used the then recently open-sourced Quake).
However, proper recreation of the original graphics would require shaders and the much more extensive, programmable pipelines of modern hardware, while the relaxed artistic attitude (or just contemporary technical limitations) unfortunately resulted in a trashy Y2K amateur 3D-shooter look. Leaving certain parts to software meant that the CPU had to do much of the same work all over again. Also, 3D engines were seen as a base for exciting new features (arbitrary 3D models, complex lighting, free camera, post-processing effects, etc.), so the focus shifted in that direction.
In general, CPU performance growth meant that most PCs could run most Doom levels without any help from the video card. (Obviously, map makers rarely wanted to work on something that was too heavy for their own systems, so map complexity was also limited for practical reasons.) 3D rendering performance (in non-GZDoom ports) was boosted occasionally to enable complex geometry or mapping tricks in popular releases, but there was little real pressure to use acceleration. On the other hand, the linear growth of single-core performance stopped long ago, while the ambitions of map makers haven't, so there might be some need for "real" complete GPU-based rendering.
As I said, the traditional Doom BSP-walker software renderer is quite parallelizable. You can split the screen vertically into several subscreens and render them separately (this does wonders for epic maps). The game logic, or at least most of it, can probably be run in parallel with the rendering.
And I don't think any of the above is necessary. Even according to their own graphs, popular Doom ports can render huge maps at sufficiently high fps on reasonably modern hardware. The goal of this project, as stated in the Doomworld thread, is to be able to run epic maps on a potato.