Comment by dddgghhbbfblk

7 months ago

>Is there a reason codecs don't use the previous frame(s) as stored textures, and remap them on the screen? I can move a camera through a room and a lot of the texture is just reprojectively transformed.

I mean, that's more or less how it works already. But you still need a unit of granularity for the remapping. So the frame will store, e.g., this block moves by this shift, that block by that shift, etc.
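A minimal sketch of that block-based scheme (hypothetical function names, pure NumPy, not any real codec's implementation): for each block of the current frame, exhaustively search a small window in the previous frame for the best-matching shift, and store only that motion vector instead of the pixels.

```python
import numpy as np

def best_shift(prev, cur, y, x, bs=8, search=4):
    """Find the (dy, dx) shift into prev that best predicts the
    bs x bs block of cur at (y, x), by exhaustive SAD search."""
    block = cur[y:y+bs, x:x+bs].astype(int)
    best_sad, best_dv = np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            py, px = y + dy, x + dx
            # Skip candidates that fall outside the previous frame.
            if py < 0 or px < 0 or py + bs > prev.shape[0] or px + bs > prev.shape[1]:
                continue
            sad = np.abs(prev[py:py+bs, px:px+bs].astype(int) - block).sum()
            if sad < best_sad:
                best_sad, best_dv = sad, (dy, dx)
    return best_dv

# Toy example: the current frame is the previous frame shifted right by 2 px,
# so the block at (8, 8) is best predicted from 2 px to its left in prev.
rng = np.random.default_rng(0)
prev = rng.integers(0, 256, (16, 16), dtype=np.uint8)
cur = np.roll(prev, 2, axis=1)
print(best_shift(prev, cur, 8, 8))  # -> (0, -2)
```

Real encoders refine this with sub-pixel interpolation, variable block sizes, and rate-distortion-aware vector selection, but the unit of granularity is still a block with one shift.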

> But you still need a unit of granularity for the remapping. So the frame will store, e.g., this block moves by this shift, that block by that shift, etc.

This is exactly what I question. Why should the units of granularity be block-shaped? Defining a UV-textured 3D mesh that moves and carries previously decoded pixel values should produce far fewer seams. With a textured mesh instead of blocks, the only de novo pixel values would be at the seams between reusable parts of the mesh, for example where an object rotates and reveals a newly visible part of its surface.
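A toy illustration of that idea (hypothetical names, pure NumPy, heavily simplified): instead of per-block shifts, the decoder receives a dense per-pixel UV map telling each output pixel where to sample the previous frame, and only pixels whose UV falls outside the valid source region (newly revealed surface) would need freshly coded data.

```python
import numpy as np

def warp_from_prev(prev, uv):
    """Resample prev at per-pixel (row, col) coordinates given by uv
    (shape H x W x 2), nearest-neighbour. Returns the warped frame and
    a mask of pixels whose source fell outside prev -- the 'seams'
    that would need newly coded pixel data."""
    h, w = prev.shape
    r = np.rint(uv[..., 0]).astype(int)
    c = np.rint(uv[..., 1]).astype(int)
    valid = (r >= 0) & (r < h) & (c >= 0) & (c < w)
    out = np.zeros_like(prev)
    out[valid] = prev[r[valid], c[valid]]
    return out, ~valid

# Toy example: a warp field that samples each pixel from one column to
# the left, as if the camera panned right by one pixel.
h, w = 8, 8
prev = np.arange(h * w, dtype=np.uint8).reshape(h, w)
rows, cols = np.mgrid[0:h, 0:w]
uv = np.stack([rows, cols - 1], axis=-1).astype(float)
warped, needs_new_data = warp_from_prev(prev, uv)
print(needs_new_data.sum())  # 8 px in column 0 have no source and need fresh coding
```

In a real mesh-based scheme the UV map would come from projecting a moving textured mesh rather than being transmitted per pixel, which is exactly where the extraction problem raised below comes in.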

  • And how do you plan to extract that mesh and texture from an arbitrary input video?

    Having worked in the field of photogrammetry, I can tell you that it is a really complex task.