Comment by badmintonbaseba

2 days ago

I don't think your algorithm is correct. At least in the checkerboard example, the diagonals on the cube face are curved, and a perspective transformation doesn't do that.

Possibly you subdivide the edges uniformly in the target space and map them to uniform subdivisions in the source space, but that's not correct.
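
For example, here is a rough sketch of the difference (made-up names, not the article’s actual code; assuming a plain pinhole projection):

    // Plain pinhole projection of a 3D point onto an image plane at focal length f.
    type Vec3 = [number, number, number];
    type Vec2 = [number, number];

    const project = ([x, y, z]: Vec3, f = 1): Vec2 => [(f * x) / z, (f * y) / z];

    const lerp3 = (a: Vec3, b: Vec3, t: number): Vec3 => [
      a[0] + (b[0] - a[0]) * t,
      a[1] + (b[1] - a[1]) * t,
      a[2] + (b[2] - a[2]) * t,
    ];

    const lerp2 = (a: Vec2, b: Vec2, t: number): Vec2 => [
      a[0] + (b[0] - a[0]) * t,
      a[1] + (b[1] - a[1]) * t,
    ];

    // One edge of a cube face, receding in depth.
    const p0: Vec3 = [-1, 0, 2];
    const p1: Vec3 = [1, 0, 6];

    // Correct: subdivide in the source (3D) space, then project each grid point.
    const correctMid = project(lerp3(p0, p1, 0.5)); // [0, 0]

    // What I suspect is happening: project the endpoints, then subdivide
    // uniformly in the target (screen) space.
    const wrongMid = lerp2(project(p0), project(p1), 0.5); // ≈ [-0.167, 0]

    console.log(correctMid, wrongMid); // the midpoints don't agree

The two midpoints don’t coincide, and the gap grows with the depth difference across the face; that kind of error is what can show up as bent diagonals in the checkerboard.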

Edit:

Comparison of the article’s transform and the correct perspective transform:

https://imgur.com/RbRuGxD

Considering that the author considers math below his pay-grade, it’s not a huge surprise that it is wrong.

  • YES! I was taken aback by that statement too. I think the opposite: in this age of AI, actually knowing things will be a huge bonus IMHO.

  • > math below his pay-grade

    Completely backwards. Math is much more difficult than programming and LLMs still can't consistently add numbers correctly last I checked. What a strange attitude to take.

Even more obviously, the squares in the front aren’t bigger than the squares in the back. It looks like each square has equal area even as their shapes change.

It’s fascinating how plausible it looks at a glance while being so glaringly wrong once you look at it more closely.

Author here: I don’t think the commenter here has set the same focal length. The focal length can make a surface appear curved, and I set it explicitly to a low value to test the algorithm’s ability to handle the increased distortion. You can google “focal length distortion cube” to see examples of how focal length distorts a grid, or google “fish eye lens cube”, etc.

Edit: I think there’s a lot of confusion because the edges of the cube (the black lines) do not incorporate the perspective transform along their length. The texture is likely correct given the focal length, but the cube’s edges are misleadingly straight. My bad; the technique is valid, but the black edge lines are not rendered the same way as the texture, so their straightness is misleading.

  • I think the original commenter is correct that there is a mistake in the perspective code. It seems the code calculates the linear interpolation for the grid points too late. It should be before projecting, not after.

    I opened an issue ticket on the repository with a simple suggested fix and a comparison image.

    https://github.com/tscircuit/simple-3d-svg/issues/14

    • That admittedly looks a lot more correct! Thanks for digging in; I will absolutely test and submit a correction to the article (I am still concerned the straight edges are misleading here)! And thanks to the original commenter as well! I think I will try to quickly output an animated version of each subdivision level; the animation would make it a lot clearer for me!

  • I might be missing something, but you sound genuinely confused to me. The perspective in your post is linear perspective. It’s the one used in CSS, and it doesn’t curve straight lines/planes. It’s not the perspective of fish-eye images (curvilinear perspective); a quick numerical check of this is sketched at the end of this thread.

    • I was at least a little confused, because yeah, fish-eye isn’t possible with a 4x4 perspective transform matrix. I’m investigating an issue with the projection thanks to some help from commenters, and there will be a correction in the article, as well as an animation which should help confirm the projection code.
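
A quick way to check the “straight lines stay straight” point above: a plain pinhole (linear-perspective) projection maps a straight 3D segment to a straight 2D segment for any focal length. The sketch below samples a segment, projects the samples, and tests collinearity (made-up names, not the library’s code):

    type Vec3 = [number, number, number];
    type Vec2 = [number, number];

    // Plain pinhole projection; f is the focal length.
    const project = ([x, y, z]: Vec3, f: number): Vec2 => [(f * x) / z, (f * y) / z];

    // A 3D segment slanting across the view, and a deliberately short focal length.
    const a: Vec3 = [-2, 1, 1.5];
    const b: Vec3 = [3, -2, 8];
    const f = 0.35;

    // Project evenly spaced samples of the segment.
    const pts: Vec2[] = [];
    for (let i = 0; i <= 10; i++) {
      const t = i / 10;
      pts.push(project([
        a[0] + (b[0] - a[0]) * t,
        a[1] + (b[1] - a[1]) * t,
        a[2] + (b[2] - a[2]) * t,
      ], f));
    }

    // The 2D cross product of (p_i - p_first) with (p_last - p_first) is ~0 for
    // every sample: all projected points are collinear, so the image of the
    // segment is straight.
    const [x0, y0] = pts[0];
    const [xn, yn] = pts[pts.length - 1];
    for (const [x, y] of pts) {
      console.log((x - x0) * (yn - y0) - (y - y0) * (xn - x0)); // ≈ 0 (float noise)
    }

Every cross product is zero up to floating-point noise, so the projected samples are collinear; changing the focal length rescales the image but never bends straight lines. Curved diagonals would therefore point to a projection bug rather than to the focal length.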

Is it actually possible to draw the correct perspective using only affine transformations? I thought that was the point of the article.

  • It is possible to approximate perspective using piecewise affine transformations. It is certainly possible to match the perspective transformation at the vertices of the subdivisions and be only somewhat off within each piece.

    • With 6 degrees of freedom, you can only fit three 2D points at a time. Triangulation causes the errors shown in the article, which is why subdivision is needed (see the sketch at the end of this thread).

  • I think GP's point is that besides the unavoidable distortions coming from approximating a perspective transform by a piece-wise affine transform, the implementation remains incorrect.
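
To make the degrees-of-freedom point concrete, here is a sketch (made-up helper names, nothing from the library): an affine map has 6 parameters, so it is pinned down exactly by 3 point correspondences; the 4th corner of a perspective-projected quad will generally miss, which is why the face is triangulated and subdivided.

    type Pt = [number, number];

    // Solve for the affine map (x, y) -> (a*x + b*y + c, d*x + e*y + f) that sends
    // three source points exactly onto three destination points.
    // 6 unknowns, 6 equations: fully determined by 3 correspondences (Cramer's rule).
    function affineFrom3(src: [Pt, Pt, Pt], dst: [Pt, Pt, Pt]) {
      const [[x1, y1], [x2, y2], [x3, y3]] = src;
      const det = x1 * (y2 - y3) - y1 * (x2 - x3) + (x2 * y3 - x3 * y2);
      const row = (v1: number, v2: number, v3: number) => [
        (v1 * (y2 - y3) - y1 * (v2 - v3) + (v2 * y3 - v3 * y2)) / det,
        (x1 * (v2 - v3) - v1 * (x2 - x3) + (x2 * v3 - x3 * v2)) / det,
        (x1 * (y2 * v3 - y3 * v2) - y1 * (x2 * v3 - x3 * v2) + v1 * (x2 * y3 - x3 * y2)) / det,
      ];
      const [d1, d2, d3] = dst;
      return { x: row(d1[0], d2[0], d3[0]), y: row(d1[1], d2[1], d3[1]) };
    }

    const apply = (m: ReturnType<typeof affineFrom3>, [x, y]: Pt): Pt => [
      m.x[0] * x + m.x[1] * y + m.x[2],
      m.y[0] * x + m.y[1] * y + m.y[2],
    ];

    // A unit square whose corners, under some hypothetical perspective projection,
    // land at these screen positions.
    const srcQuad: Pt[] = [[0, 0], [1, 0], [1, 1], [0, 1]];
    const dstQuad: Pt[] = [[0, 0], [1, 0.1], [0.8, 0.7], [0.1, 0.6]];

    // Fit an affine map to the first three corners only; those are matched exactly.
    const m = affineFrom3(
      [srcQuad[0], srcQuad[1], srcQuad[2]],
      [dstQuad[0], dstQuad[1], dstQuad[2]],
    );

    // The fourth corner generally misses:
    console.log(apply(m, srcQuad[3]), "vs", dstQuad[3]); // [-0.2, 0.6] vs [0.1, 0.6]

Splitting the quad into triangles and subdividing further confines that mismatch to ever smaller pieces, which is the trade-off the article relies on.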