Comment by MobiusHorizons
3 months ago
Someone elsewhere in the thread replied that it has to do with the blurring happening on the GPU, combined with the bandwidth cost of reading that frame data back, followed by software-encoding the video.
If I understand all that correctly, blurring is cheap when the raw video data is already on the GPU for encoding, but introduces too much latency when combined with software encoding, since the blurred frames first have to be copied back from the GPU.
A software blur should be possible, but that feature has not been implemented, and it would not be nearly as cheap as it is on the GPU.
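To put rough numbers on the readback-bandwidth concern, here's a back-of-envelope sketch. The resolution, frame rate, and pixel format below are my own illustrative assumptions, not figures from the thread:

```python
# Estimate the sustained GPU->CPU readback bandwidth needed when blurred
# frames must be copied back for software encoding.
# Assumptions (mine, for illustration): 1080p, 60 fps, uncompressed RGBA.
width, height = 1920, 1080
bytes_per_pixel = 4          # RGBA, 8 bits per channel
fps = 60

bytes_per_frame = width * height * bytes_per_pixel
bandwidth_gb_s = bytes_per_frame * fps / 1e9

print(f"{bytes_per_frame / 1e6:.1f} MB per frame")
print(f"{bandwidth_gb_s:.2f} GB/s sustained readback")
```

Roughly 0.5 GB/s for 1080p60, and several times that at 4K. The raw bandwidth is usually within PCIe limits; the bigger problem is that a synchronous readback stalls the GPU pipeline, which is where the latency comes from.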
H.264 is still hardware accelerated, isn't it?