Comment by gpm

7 years ago

We have 3D output capabilities: we maneuver our hands in three dimensions. But as jerf explains in a comment above, better than I can, we only have 2D input capabilities.

I interpret this post as primarily saying that the UI's display (i.e., what is fed to our visual input) should only be 2D, not necessarily the other way around.

https://news.ycombinator.com/item?id=19963247

Edit: Linked wrong comment, fixed.

The idea that we only have 2D input doesn't make sense to me. If that were the case, how would we drive a car or ride a bike? And contrary to what Carmack says, you don't strain when jumping between focusing on things far away and up close; it's totally natural.

I would argue that humans, like most mammals, are actually most at ease in an immersive 3D medium.

And in theory, the only thing stopping us from implementing something like Bret Victor's Dynamicland (https://dynamicland.org) in VR is the lack of good 3D input methods, such as a pair of sensor-laden smart gloves.

John Carmack's argument reminds me of the early criticisms of the point-and-click interface (2D), which at its inception was much less efficient than the well-developed command-line interface (1D).

Plus, most designers are trained in 2D interfaces, so they're probably applying the wrong assumptions to 3D.

  • It's 2D + depth. We can tell how far away the car in front of us is, but it occludes our view of the cars ahead of it. That's good enough for driving, but suboptimal. If you were designing a user interface meant to show someone the positions of cars on roads – i.e. a map – you would use a bird's-eye view, since roads are mostly 2D from that perspective.

  • You can drive a car without depth perception. You don't need to perceive the depth of cars, obstacles, or the road directly if it can be deduced from geometry, e.g. from apparent size and position on the road.