Comment by pazimzadeh
7 years ago
I think there is a severe lack of imagination going on here. A pencil is a 3D interface.
This is as good a time as any to post this eight-year-old gem from Bret Victor: http://worrydream.com/ABriefRantOnTheFutureOfInteractionDesi...
A keyboard is also a 3D interface with one dimension constrained. I have to be able to move my hand past the keyboard without accidentally inputting data. You can see this kind of problem when you have a large touch screen and people try to gesture at it while talking, and end up accidentally changing something.
Look at machine rooms in submarines or control rooms for power plants or TV studios. Different tasks are in different locations, sure, but the inputs are constrained to an area near the arc defined by the fingers at a comfortable distance from the shoulder. For really good reasons.
> A pencil is a 3D interface.
Is it? I’d say the VR analogue of a pencil is a pointing device, which makes the user interface the VR analogue of paper. And paper is fundamentally a 2D interface. There are some 3D actions like turning a page, but those are secondary and rare. You can move and rotate the paper as a whole in 3D, and that’s useful, but the same functionality can easily be added to VR 2D interfaces.
On the other hand, there are other types of art that are fundamentally 3D, such as sculpture and pottery – but both of those rely on feeling a 3D object with your hands, which partially bypasses the 2D limitation of vision, but isn’t yet possible to emulate in VR.
Then again, there’s also the common VR toy of a “pencil” that doodles in thin air, which is certainly interesting... though I’m not sure how well it generalizes to more abstract user interfaces. If you’re using such a system, you have to constantly rotate the object you’re drawing, and/or move your body around, in order to properly perceive the object in 3D. This is kind of a pain. If your primary goal is to create a 3D object, it’s an unavoidable pain and the benefit is well worth it; but if what you’re interacting with is just an abstract interface meant to manipulate something non-spatial, it’s probably better to avoid.
> both of those rely on feeling a 3D object with your hands, which partially bypasses the 2D limitation of vision, but isn’t yet possible to emulate in VR.
That's the entire point of the comment you're responding to and the reason for Dynamicland.
If so, it was off topic. John Carmack's post was about VR interfaces, and I interpreted the parent comment in that context. In any case, I would contest that bypassing the limitations of vision, specifically, represents a significant part of the reason for Dynamicland.
1 reply →
> I’d say the VR analogue of a pencil is a pointing device, which makes the user interface the VR analogue of paper
In this day and age, when accelerometers can be embedded in small objects, why don't we stop using analogues and just design a real-world smart pencil that can be used to control the VR floating pencil?
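To make the idea concrete: the core problem such a device faces is that an accelerometer alone gives you acceleration, so recovering the pencil tip's position means double integration, and drift accumulates fast. Here is a minimal, hypothetical sketch (plain Euler integration, names invented for illustration) of what that dead-reckoning step looks like; a real device would have to fuse this with gyroscope and optical tracking data to stay usable.

```python
# Hypothetical sketch: dead-reckoning a "smart pencil" tip position from
# raw accelerometer samples by double integration. Drift makes this
# unusable on its own in practice; real trackers fuse IMU data with
# optical or magnetic references.
def integrate_position(accels, dt, v0=(0.0, 0.0, 0.0), p0=(0.0, 0.0, 0.0)):
    """accels: iterable of (ax, ay, az) in m/s^2; dt: sample period in s."""
    vx, vy, vz = v0
    px, py, pz = p0
    positions = []
    for ax, ay, az in accels:
        # Integrate acceleration into velocity, then velocity into position.
        vx += ax * dt; vy += ay * dt; vz += az * dt
        px += vx * dt; py += vy * dt; pz += vz * dt
        positions.append((px, py, pz))
    return positions

# Constant 1 m/s^2 acceleration along x for 1 s, sampled at 100 Hz:
path = integrate_position([(1.0, 0.0, 0.0)] * 100, 0.01)
```

Even in this idealized noise-free example, any constant bias in the sensor would grow quadratically in the position estimate, which is why "just embed an accelerometer" is harder than it sounds.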
Because it's not a natural interface?
Given that there is no touch feedback when you feel surfaces in VR, and that just holding your arms up in the air for long periods of time is tiring, I honestly don't see why it's any better than pointing on a 2D surface.
People wanted to do Minority Report-style UIs when they saw them, but we generally don't interact with computers in those ways for the same reason. Keyboard and mouse (or trackpad) is going to be hard to improve upon.
We have 3D output capabilities: we maneuver our hands in three dimensions. But as jerf explains in a comment above better than I can, we only have 2D input capabilities.
I interpret this post as primarily saying that the UI's display (i.e., what is fed to our senses) should only be 2D, not necessarily the other way around.
https://news.ycombinator.com/item?id=19963247
Edit: Linked wrong comment, fixed.
The idea that we only have 2D input doesn't make sense to me. If that were the case, how would we drive a car or ride a bike? You don't need to jump between focusing on things that are far away and up close, as Carmack says. It's totally natural.
I would argue that humans, like most mammals, are actually most at ease in an immersive 3D medium.
And in theory, the only thing stopping us from implementing something like Bret Victor's Dynamicland (https://dynamicland.org) in VR is the lack of good 3D input methods, like say a pair of sensor-ridden smart gloves.
John Carmack's argument reminds me of the early criticisms of the point-and-click interface (2D), and how at its inception it was much less efficient than the well developed command line interface (1D).
Plus, most designers are trained in 2D interfaces, so they're probably applying the wrong assumptions to 3D.
2D + depth. We can tell how far away the car in front of us is, but it occludes our view of cars in front of it. That’s good enough to drive, but suboptimal. If you were designing a user interface meant to show someone the positions of cars on roads – i.e. a map – you would use a bird’s-eye view, since roads are mostly 2D from that perspective.
You can drive a car without depth perception. You don't need to see the depth of cars/obstacles/the road if it can be deduced from geometry.
3 replies →
Great read. Really highlights all the problems I have with touch interfaces. Unfortunately I don't see it changing much in the near future, mainly because it's Good Enough for most people and most operations. Wish someone would put money into researching tactile screen interfaces that physically deform and respond to touch.
> A pencil is a 3D interface.
A pencil is 2D or less: it can't move on a third axis. It's a good example of why 3D is worse. (Ever seen anyone use one of the rare 3D-ish pencils?)