I've got a 61MP camera and an RX 7900 XT. It takes about 15s/picture for DxO to denoise, which is a lot longer than people are willing to wait on a phone to take a photo. Topaz is even slower. A cloud service could do it in post, but someone has to pay for that.
Yet in modern computer games, graphics cards denoise a scene in real time at 60 frames per second using machine learning models [1][2], while doing all the other rendering at the same time. Granted, that's ray tracing, the resolution is lower, and they technically cheat by using additional information (motion vectors, normals, etc.), but it might be that DxO is just not well optimized.
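Back-of-the-envelope, the throughput gap is large but not absurd. A sketch of the arithmetic, assuming the game denoiser runs on a 1080p internal render (the actual internal resolution varies by title and upscaler setting):

```python
# Rough pixels-per-second comparison. Assumptions: 61 MP per photo at
# 15 s each, vs. a hypothetical game denoiser on a 1080p frame at 60 fps.
camera_mp_per_s = 61 / 15                       # ~4.1 MP/s
game_mp_per_s = (1920 * 1080 / 1e6) * 60        # ~124.4 MP/s

print(f"Photo denoise: {camera_mp_per_s:.1f} MP/s")
print(f"Game denoise:  {game_mp_per_s:.1f} MP/s")
print(f"Gap: {game_mp_per_s / camera_mp_per_s:.0f}x")
```

So the game pipeline pushes roughly 30x more pixels per second, on top of doing all the rendering. The photo model is presumably much heavier per pixel, but a 30x gap still leaves room for the desktop software to be poorly optimized.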
1: https://blogs.nvidia.com/blog/ai-decoded-ray-reconstruction/
2: https://gpuopen.com/amd-fsr-rayregeneration/