Comment by msgodel
5 days ago
I use AR glasses heavily for work and other things. I'm actually typing this on a pair right now. I've never understood what the application for environment locked screens is other than novelty/marketing. My glasses provide enough sensor data to implement this but I just can't be bothered.
I guess it makes a lot more sense if you're emulating more than one monitor, without environment locking you'd only ever be able to see the sides of the monitor(s) beside the one you're looking at.
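The difference between the two modes comes down to which coordinate frame the screen lives in. A minimal 2D top-down sketch (function and variable names are my own, and a real headset would use full 6DOF pose matrices, but the idea is the same): a world-locked screen has a fixed world position that gets re-projected into view space every frame from head pose, while a head-locked screen is just a constant view-space offset.

```python
import math

def world_to_view(head_pos, head_yaw, world_point):
    """Transform a world-space point into view (head) space.

    2D top-down simplification: head_pos is (x, y), head_yaw is in
    radians (0 = looking along +y). Translate, then rotate by -yaw.
    """
    dx = world_point[0] - head_pos[0]
    dy = world_point[1] - head_pos[1]
    c, s = math.cos(-head_yaw), math.sin(-head_yaw)
    return (dx * c - dy * s, dx * s + dy * c)

screen_world = (0.0, 2.0)  # world-locked screen, 2 m in front of the origin

# Turning your head moves a world-locked screen across your view...
ahead  = world_to_view((0, 0), 0.0, screen_world)               # dead ahead
turned = world_to_view((0, 0), math.radians(45), screen_world)  # off to the side

# ...and walking toward it halves the view-space distance (the "zoom in
# by moving closer" effect, impossible with a head-locked screen).
closer = world_to_view((0, 1), 0.0, screen_world)

# A head-locked screen ignores pose entirely: a constant view-space offset.
def head_locked(offset):
    return offset
```

With multiple virtual monitors this is what lets the off-axis ones sit at their own world positions so you can turn to face each one, rather than having every screen ride along with your head.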
But yeah, for a single monitor it takes a bit of getting used to, but non-locked seems far more reliable.
I'm curious how you use the screens via AR if you're not using environment-locked screens, particularly in a productivity/work environment.
Unless I'm misunderstanding the feature, it seems like environment-locked screens allow for more natural usage and interaction with the screens in the virtual space?
My experience with VR/AR products like Oculus has mostly been with environment-locked AR information.
I suppose they could? I prefer having my posture decoupled from what I'm looking at though.
It's like having a very nice monitor that uses ~1 watt of power and happens to be positioned exactly wherever is most comfortable, without even having to think about it. It's way better than a normal monitor if you don't have to do, e.g., pair programming.
How are you finding the focus? I use the Xreal Air 2, but the edges are blurry, and I can't get the glasses close enough to my face to see the entire screen in focus, even if the top of the glasses is touching my forehead.
I think the use-case for these is more VR focused, with the AR just being a "being able to notice when something needs your attention" feature (where you would respond to such an interrupt by taking the glasses off, not by trying to look at the interrupting thing through the glasses.)
I've heard people propose that these "screen in glasses" devices (like the Xreal Air) are useful for situations where you want a lot of visual real-estate but don't have the physical room for it — like in a dorm room, or on a plane. (Or at a library/coffee shop if you're not afraid of looking weird.)
---
Tangent: this use-case could likely just as well be solved today with zero-passthrough pure-VR glasses, with a small, low-quality outward-facing camera+microphone on the front, connected only to an internal background AI model† running on its own core, that monitors your surroundings in order to nudge you within the VR view if something "interesting" happens in the real world. That'd be both a fair bit simpler/cheaper to implement than camera-based synced-reality AR, and higher-fidelity for the screen than passthrough-based AR.
† Which wouldn't even need to be a novel model — you could use the same one that cloud-recording security cameras use in the cloud to decide which footage is interesting enough to clip/preserve/remote-notify you about as an "event".
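The nudge loop itself could be as simple as thresholding a per-frame "interestingness" score. A toy sketch of that idea, with naive frame differencing standing in for the security-camera-style model (all names here are hypothetical, not from any real headset SDK):

```python
def frame_delta(prev, cur):
    """Mean absolute pixel difference between two grayscale frames.

    Frames are flat lists of 0-255 ints; this crude differencing stands
    in for whatever model actually scores 'interestingness'.
    """
    return sum(abs(a - b) for a, b in zip(prev, cur)) / len(cur)

def monitor(frames, threshold=10.0):
    """Yield indices of frames 'interesting' enough to nudge the user."""
    prev = frames[0]
    for i, cur in enumerate(frames[1:], start=1):
        if frame_delta(prev, cur) > threshold:
            yield i  # here a real headset would overlay a HUD nudge
        prev = cur

static  = [0] * 16
changed = [80] * 16
print(list(monitor([static, static, changed, changed])))  # -> [2]
```

Since the model only has to emit a one-bit "look up" signal rather than feed a synced passthrough view, it can run small, slow, and on its own core.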
The most obvious advantage is being able to "zoom in" on the screen by moving closer to it (as with a real monitor), which is impossible with 3DOF or view-locked XR.