Comment by munchbunny
7 years ago
On the one hand, I totally agree — this matches my experience with AR/VR interfaces as well.
However, I'm not convinced that this is due to any inherent limitations in our brain's ability to understand a third dimension, just that we are accustomed to only having to deal with two dimensions. I'm carefully wording this in terms of UI design in 3-D because I agree about the physical issues of eyes and focus.
For example, in many spaceflight simulators, the radar includes vertical bars to indicate height above/below your plane. That's a 3-D interface rendered onto a 2-D screen, though it's 3-D in VR. Once you learn to read it, you become highly effective at mapping what you see to a spatial understanding of what's going on around you.
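The mapping that kind of radar performs is simple enough to sketch. Here's a minimal illustration (all names and the scale factor are made up for this example, not taken from any particular game): each contact becomes a blip at its horizontal offset from the player, plus a signed vertical bar encoding height above or below the player's plane.

```python
# Sketch of a spaceflight-sim style radar: 3D contact positions become
# 2D blips, with a vertical bar whose signed length encodes height
# above/below the player's plane. Names are illustrative.

def radar_elements(player_pos, contacts, scale=0.1):
    """Map 3D contact positions to 2D radar blips plus vertical bars.

    player_pos, contacts: (x, y, z) tuples, z = height.
    Returns a list of (blip_x, blip_y, bar_length); a positive
    bar_length means the contact is above the player's plane.
    """
    px, py, pz = player_pos
    elements = []
    for cx, cy, cz in contacts:
        blip_x = (cx - px) * scale   # horizontal offset, east/west
        blip_y = (cy - py) * scale   # horizontal offset, north/south
        bar = (cz - pz) * scale      # signed height difference
        elements.append((blip_x, blip_y, bar))
    return elements

# A contact 100 units east of the player and 50 units above:
print(radar_elements((0, 0, 0), [(100, 0, 50)]))
# [(10.0, 0.0, 5.0)]
```

The point is that all three dimensions survive the projection — nothing is discarded, the third dimension is just re-encoded as bar length — which is why, once learned, it supports a full spatial picture.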
The driver's controls in a car are also a good example of a 3-D interface demonstrating patterns that are inherently 3-D (twist knobs, paddles, H pattern gearbox, etc.), and it works mostly because you are expected to develop muscle memory.
Are car controls really a good example of a 3D interface in the sense Carmack is discussing? I think he's specifically talking about 3D visualization.
In contrast, car interfaces are specifically non-visual. You have to look to learn how to use them, but after that, anything you have to look at to use is bad, because it takes the driver's eyes off the road. But in the same way that human eyes extract a 2D version of a 3D world, I think a car's knobs are a proprioceptive projection of the 3D world.
I know Carmack was specifically talking about 3D visualization, but I was responding to the parent comment's point about the value of data simplification that 2D brings. I don't usually quote myself, but that's what I meant by:
> I'm carefully wording this in terms of UI design in 3-D because I agree about the physical issues of eyes and focus.
I specifically brought up the car's knobs and wheels and buttons because it's an example of a fairly complex but widely adopted and successful non-2D interface. My point was that I don't think there is an inherent limitation in the human brain's ability to comprehend or think in 3D, and that I think it's more of a training and acclimatization phenomenon.
And I'm saying I think it's not actually the brain thinking in 3D. I think it's thinking in terms of body position and movement. Which is something that takes place in a 3D space, but isn't actually a generalized 3D comprehension.
One way to check would be to have people reproduce things they know via touch in visual contexts. E.g., can people draw their steering wheel? Given how well their tongue knows their teeth, could they produce a 3D model of them without looking at references? If we hand people a box where they can put their hand in but can't look, can they model the shape just as well as they could by looking at it?
For me at least, these are very different kinds of knowledge. If I'm, say, feeling around and working on the back of a server I can't see, actually looking at it is a very different experience.
“twist knobs” – 1 degree of freedom
“paddles” – more or less a discrete switch, in the form of a stick
“H pattern gearbox” – a few discrete choices
pedals – 1 degree of freedom
steering wheel – 1 degree of freedom
These things all exist in a 3-dimensional world, but they are not “3-D interfaces”.
Compare with https://en.wikipedia.org/wiki/SpaceOrb_360
* * *
If you define a car’s interface to be “3D”, then so is basically anything — say, a TV remote or the knobs on an oven.
These are tactile controls arranged in a 3D space.
The 2D equivalent would be a flat screen with virtual knobs, wheels, and switches, and would be completely unusable.
The problem with AR is that it's 2.5D. Physical 3D is fully tactile in every dimension. AR lacks that kind of tactility.
True. To be more than just visual, VR would need fingertip touch control of every object in the scene. That would (literally) be a game-changer, but we don't have the technology to make it happen yet.
The 3D "mice" that exist for AR/VR are clunky and crude with hand-level resolution rather than fingertip resolution.
Compare with a 2D touchscreen which has good tactile control. And a 2D desktop which (usually) gives you a bigger screen area in return for simplified but still very usable tactile control with a mouse.
In other words, VR/AR is not just about the visuals.
> These are tactile controls arranged in a 3D space.
They're arranged roughly on the same plane (the XY plane).
> The driver's controls in a car are also a good example of a 3-D interface demonstrating patterns that are inherently 3-D (twist knobs, paddles, H pattern gearbox, etc.)
These are almost purely input mechanisms; in that sense we've had 3D interfaces in video games for decades by way of various controller shapes. The problem is that output interfaces don't work in, or at least don't gain much from, the third dimension.
> H pattern gearbox
What's 3-D about that? The shifter moves in a two-dimensional pattern, though conceptually you're only moving in one dimension (up and down the gears).
I suppose you could treat the clutch pedal as the third dimension together with two dimensions for the H.
Knobs and paddles are likewise only single dimensional, though embedded in 3D space, from my perspective.
The controls exist on many different planes, and their placement requires some muscle memory, such as moving your hand between the steering wheel and the shifter. Each individual interaction in the interface might be constrained to two degrees of freedom, but the interface requires operating on a spatial mental model.