Comment by modeless
3 months ago
Foveated streaming! That's a great idea. Foveated rendering is complicated to implement with current rendering APIs in a way that actually improves performance, but foveated streaming seems like a much easier win that applies to all content automatically. And the dedicated 6 GHz dongle should do a much better job at streaming than typical wifi routers.
> Just like any SteamOS device, install your own apps, open a browser, do what you want: It's your PC.
It's an ARM Linux PC that presumably gives you root access, in addition to being a VR headset. And it has an SD card slot for storage expansion. Very cool, should be very hackable. Very unlike every other standalone VR headset.
> 2160 x 2160 LCD (per eye) 72-144Hz refresh rate
Roughly equivalent resolution to Quest 3 and less than Vision Pro. This won't be suitable as a monitor replacement for general desktop use. But the price is hopefully low. I'd love to see a high-end option with higher resolution displays in the future, good enough for monitor replacement.
> Monochrome passthrough
So AR is not a focus here, which makes sense. However:
> User accessible front expansion port w/ Dual high speed camera interface (8 lanes @ 2.5Gbps MIPI) / PCIe Gen 4 interface (1-lane)
Full color AR could be done as an optional expansion pack. And I can imagine people might come up with other fun things to put in there. Mouth tracking?
One thing I don't see here is optional tracking pucks for tracking objects or full body tracking. That's something the SteamVR Lighthouse tracking ecosystem had, and the Pico standalone headset also has it.
More detail from the LTT video: Apparently it can run Android APKs too? Quest compatibility layer maybe? There's an optional accessory kit that adds a top strap (I'm surprised it isn't standard) and palm straps that enable using the controllers in the style of the Valve Index's "knuckles" controllers.
> Foveated streaming! That's a great idea.
Back when I was in Uni, so late 80s or early 90s, my dad was Project Manager on an Air Force project for a new F-111 flight simulator, when Australia upgraded the avionics on their F-111 fighter/bombers.
The sim cockpit had a spherical dome screen and a pair of Silicon Graphics Reality Engines. One of them projected an image across the entire screen at relatively low resolution. The other projector was on a turret that panned/tilted with the pilot's helmet, and projected a high-resolution image, but only in a roughly 1.5m circle directly in front of where the helmet was aimed.
It was super fun being the project manager's kid, and getting to "play with it" on weekends sometimes. You could see what was happening while wearing the helmet and sitting in the seat if you tried - mostly by intentionally pointing your eyes in a different direction to your head - but when you were "flying around" it was totally believable, and it _looked_ like everything was high resolution. It was also fun watching other people fly it, and being able to see where they were looking - and where they weren't looking while the enemy was sneaking up on them.
I'll share a childhood story as well.
Somewhere between '93 and '95 my father took me abroad to Germany and we visited a gaming venue. It was packed with typical arcade machines: games where you sit in a cart holding a pistol and shoot things on the screen while the cart moves all over the place simulating a bumpy ride, etc.
But the highlight was a full 3D experience shooter. You got yourself into a tiny ring, put on a 3D headset, and held a single puck in hand. Rotate the puck and you move. Push the button and you shoot. Look around with your head. The most memorable part - you could duck to avoid shots! The game itself, as I remember it, was full wireframe, akin to Q3DM17 (The Longest Yard) minus the jump pads, but the layout was kind of similar. The player held a dart gun - you had a single shot and you had to wait until the projectile decayed or connected with another player.
I'm not entirely sure if the game was multiplayer or not.
I often come back to that memory because shortly after, within that same time frame, my father took me to a computer fair where I had the opportunity to play Doom/Hexen with a VFX1 (or whatever it was called), and it was supposed to revolutionize the world the way AI is supposed to now.
Then there was the P5 glove, with jaw-dropping demo videos of endless possibilities: 3D modelling with your hands, navigating a mech like you were actually inside it, etc.
It never came.
That sounds like you're describing Dactyl Nightmare. [1] I played a version where you were attacking pterodactyls instead of other players, but it was more or less identical. That experience is what led me to believe that VR would eventually take over. I still, more or less, believe it even though it has yet to happen.
I think the big barrier remains price, plus experiences that focus more on visual fidelity than gameplay. An even bigger problem is that high-end visual fidelity tends to cause motion sickness and other side effects in a substantial chunk of people. But I'm sticking to my guns here - one day VR will win.
[1] - https://www.youtube.com/watch?v=hBkP2to1P_c
6 replies →
I played that game in Berlin in the late 90s. There were four such pods, iirc, and you could see the other players. The frame rate was about 5 frames per second, so it was borderline unplayable, but it was fun nevertheless.
Later, I found out that it was a game called "Dactyl Nightmare" that ran on Amiga hardware:
https://en.wikipedia.org/wiki/Virtuality_(product)
Maybe something like this?
https://en.wikipedia.org/wiki/Virtuality_(product)
I think I played with the 1000CS or similar in a bar or arcade at some point in the early '90s.
7 replies →
>It never came.
Everything you described and more is available from modern home VR devices you can purchase right now.
Mecha, planes, Skyrim, cinema screens. In VR, with custom controllers or a regular controller if you want that. Go try it! It's out and it's cheap and it's awesome. Set IPD FIRST.
[flagged]
3 replies →
That's really cool. My first job out of college was implementing an image generator for the simulator for the landing signal officer on the USS Nimitz, also using SGI hardware. I would have loved to have seen the final product in person but sadly never had the chance.
I remember there was a flight simulator project that had something like that - or maybe it even was that one.
It was called ESPRIT, which I believe stood for Eye Slaved Programmed Retinal Insertion Technique.
> 2160 x 2160 LCD (per eye) 72-144Hz refresh rate
I question the idea that we couldn't create a special-purpose video codec that handles this without trickery. The "per eye" part sounds spooky at first, but how much information actually differs between these frames? The mutual information is probably 90%+ in most VR games.
If we were to enhance something like x264 to encode the second eye's view as a residual of the first, this could become much more feasible from a channel-capacity standpoint. Video codecs already employ a lot of tricks to make adjacent frames that are nearly identical occupy negligible space.
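A toy sketch of the residual idea in NumPy (illustrative only, nothing like real x264 internals - just to show why near-identical views compress so well):

    import numpy as np

    # Sketch: store the left eye as-is and the right eye only as its
    # difference from the left. Real multiview codecs (e.g. MVC) use
    # disparity compensation plus transform coding, not raw deltas.
    def encode_stereo(left, right):
        residual = right.astype(np.int16) - left.astype(np.int16)
        # With ~90% mutual information the residual is mostly near zero,
        # which entropy-codes into a fraction of a full frame.
        return left, residual

    def decode_stereo(left, residual):
        return np.clip(left.astype(np.int16) + residual, 0, 255).astype(np.uint8)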
This seems very similar (identical?) to the problem of efficiently encoding a 3D movie:
https://en.wikipedia.org/wiki/2D_plus_Delta
https://en.wikipedia.org/wiki/Multiview_Video_Coding
I'm entirely unfamiliar with the VR rendering space, so all I have to go on is what (I think) your comment implies.
Is the current state of VR rendering really just rendering and transporting two video streams independent of each other? Surely there has to be at least some academic prior art on the subject, no?
Foveated streaming is cool. FWIW the Vision Pro does that for their Mac virtual display as well, and it works really well to pump a lot more pixels through.
It's the same number of pixels though, just with reduced bitrate for unfocused regions, so you save time in encoding, transmitting, and decoding, essentially reducing latency.
For foveated rendering, the number of rendered pixels is actually reduced.
At least when we implemented this in the first version of Oculus Link, the way it worked was that the frame was distorted (AADT [1]) onto a deformed texture before compression, and the rectilinear image was regenerated after decompression, as a cheap and simple way to emulate fixed foveated rendering. So it's not that there's some kind of adaptive bitrate applying fewer bits outside the fovea region; it achieves a similar result by giving that region fewer pixels in the image being compressed. Doing adaptive bitrate would work too (and maybe even better), but encoders (especially HW-accelerated ones) don't support that.
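A toy sketch of that distort-before-compression idea (my own simplification in NumPy, not the actual Oculus Link implementation): resample the frame onto a grid that is dense at the center and sparse at the edges, so the image handed to the encoder simply has fewer pixels in the periphery.

    import numpy as np

    def warp_axis(n_out, n_in):
        # Map output coordinates in [-1, 1] through t/2 + t^3/2:
        # dense sampling near the center (fovea), sparse at the edges.
        t = np.linspace(-1.0, 1.0, n_out)
        src = 0.5 * t + 0.5 * t**3
        return np.round((src + 1) / 2 * (n_in - 1)).astype(int)

    def foveate(frame, out_h, out_w):
        # Nearest-neighbor resample onto the smaller distorted grid; the
        # receiver inverts the warp back to rectilinear after decoding.
        ys = warp_axis(out_h, frame.shape[0])
        xs = warp_axis(out_w, frame.shape[1])
        return frame[np.ix_(ys, xs)]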
Foveated streaming is presumably the next iteration of this, where the eye tracking gives you better information about where to apply the distortion. Although I'm genuinely curious how they manage to make this work well - eye tracking is generally high latency but the eye moves very, very quickly (maybe HW and SW have improved, but they allude to this problem, so I'm curious whether their argument about using this at a low frequency really improves meaningfully on more static techniques).
[1] https://developers.meta.com/horizon/blog/how-does-oculus-lin...
1 reply →
That depends on the specifics of the encode/decode pipeline for the streamed frames. It could be that the blurry part actually is lower res and lower bitrate until it's decoded, then upscaled and composited with the high-res part. I'm not saying they do that, but it's an option.
It's the same number of pixels rendered, but it lets you reduce the amount of data sent, thereby allowing you to send more pixels than you would have been able to otherwise.
I think it works really well to pump the same number of pixels, just focusing them on the more important parts.
Always PIP, Pump Important Pixels
It lets you pump more pixels in a given bandwidth window.
People are conflating rendering (which is not what I’m talking about) with transmission (which is what I’m talking about).
Lowering the quality outside the in-focus sections lets them reduce the encoding time and bandwidth required to transmit the frame.
Foveated streaming is wild to me. Saccades are commonly as short as 20-30 ms when reading text, so guaranteeing that latency over 2.4 GHz seems Sisyphean.
I wonder if they have an ML model doing partial upscaling until the eye-tracking state is propagated and the full-resolution image under the new fovea position is available. It also makes me wonder if there's some way to do neural compression of the periphery, optimized for a balance between usable peripheral vision and hints in the embedding that allow nicer upscaling.
I worked on a foveated video streaming system for 3D video back in 2008. We used eye tracking, extrapolated a pretty simple motion vector for the eyes, and ignored saccades entirely. It worked well: you really don't notice the lower detail in the periphery, and with a slightly oversized high-resolution focal area you can detect a change in gaze direction before the user's focus exits the high-resolution area.
Anyway that was ages ago and we did it with like three people, some duct tape and a GPU, so I expect that it should work really well on modern equipment if they've put the effort into it.
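In code, that scheme is roughly the following (a sketch of the general idea as described, not the original system's code; the numbers are illustrative assumptions):

    def high_res_center(gaze_x, gaze_y, vel_x, vel_y, latency_s,
                        base_radius_deg=10.0, margin_deg=5.0):
        # Extrapolate gaze linearly over the pipeline latency; saccades
        # are ignored and absorbed by oversizing the high-res circle.
        pred_x = gaze_x + vel_x * latency_s
        pred_y = gaze_y + vel_y * latency_s
        return pred_x, pred_y, base_radius_deg + margin_deg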
It is amazing how many inventions duck tape found its way into.
Foveated rendering very clearly works well over a dedicated connection with predictable latency. My question was more about the latency spikes inherent in a general-use ISM band combined with foveated rendering, which would make the effects of those latency spikes even worse.
They're doing it over 6GHz, if I understand correctly, which with a dedicated router gets you to a reasonable latency with reasonable quality even without foveated rendering (with e.g. a Quest 3).
With foveated rendering I expect this to be a breeze.
Even 5.8 GHz is getting congested. There's a dedicated router in this case (a USB fob), but you still have to share spectrum with other devices. And in the 160 MHz channel-width mode on Wi-Fi 6, you only have one channel in the 5.8 GHz spectrum, which needs to be shared.
22 replies →
The real trick is not overcomplicating things. The goal is to have high-fidelity rendering where the eye is currently focusing, so to solve for saccades you just build a small buffer area around the idealized minimum high-res center; the saccades will safely stay inside that area, within the ability of the system to react to the larger overall movements.
Picture demonstrating the large area that foveated rendering actually covers as high or mid res: https://www.reddit.com/r/oculus/comments/66nfap/made_a_pic_t...
It was hard for me to believe as well, but streaming games wirelessly on a Quest 2 was totally possible and surprisingly latency-free once I upgraded to Wi-Fi 6 (a few years ago).
It works a lot better than you’d expect at face value.
At 100 fps (mid-range of the framerate), you need to deliver a new frame every 10 ms anyway, so a 20 ms saccade doesn't seem like it would be a problem. If you can't get new frames to users in 30 ms, blur will be the least of your problems: when they turn their head, they'll be on the floor vomiting.
> Saccades are commonly as low as 20-30ms when reading text
What sort of resolution are one's eyes actually resolving during saccades? I seem to recall that there is at the very least a frequency reduction mechanism in play during saccades
During a saccade you are blind. Your brain receives no optical input. The problem is measuring/predicting where the eye will aim next and getting a sharp enough image in place over there by the time the movement ends and the saccade stabilizes.
Yeah. I'd love to understand how they tackle saccades. To be fair, they do mention they're on 6 GHz - not sure if they support 2.4 - although I doubt the frequency of the data radio matters here.
I would guess that the "foveated" region that they stream is larger than the human fovea, large enough to contain the saccadic movement (with some good-enough probability).
3 replies →
They use a 6 GHz dongle.
> Roughly equivalent resolution to Quest 3 and less than Vision Pro. This won't be suitable as a monitor replacement for general desktop use. But the price is hopefully low.
Question, what is the criteria for deciding this to be the case? Could you not just move your face closer to the virtual screen to see finer details?
There's no precise criterion, but the usual measure is ppd (pixels per degree), and it needs to be high enough that detailed content (such as text) displayed at a reasonable size is clearly legible without eye strain.
> "Could you not just move your face closer to the virtual screen to see finer details?"
Sure, but then you have the problem of, say, using an IMAX screen as your computer monitor. The level of head motion required to consume screen content (i.e., a ton of large head movements) would make the device very uncomfortable quite quickly.
The Vision Pro has ~35 ppd and generally people seem to think it hits the bar for monitor replacement. The Meta Quest 3 has ~25 ppd and generally people seem to think it does not. The Steam Frame is, specs-wise, much closer to the Quest 3 than the Vision Pro.
There are some software things you can do to increase legibility of details like text, but ultimately you do need physical pixels.
Even the Vision Pro at 35 ppd simply isn't close to the ppd you can get from a good desktop monitor (we can calculate ppd for desktop monitors too, using size and viewing distance).
Apple's "retina" HiDPI monitors typically have ppd well beyond 35 at ordinary viewing distances; even a 1080p 24-inch monitor on your desk can exceed this.
For me personally, 35 ppd feels like about the minimum I would accept for emulating a monitor for text work in a VR headset, but it's still not good enough for me to even begin thinking about replacing any of my monitors.
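For reference, the ppd math for a flat panel is straightforward (a rough sketch; it ignores that ppd actually rises a bit toward the edges of a flat screen):

    import math

    def monitor_ppd(h_pixels, panel_width_m, distance_m):
        # pixels per degree = horizontal pixels / horizontal FOV subtended
        fov_deg = 2 * math.degrees(math.atan(panel_width_m / (2 * distance_m)))
        return h_pixels / fov_deg

    # A 24" 1080p panel is ~0.53 m wide; at a 0.6 m viewing distance:
    # monitor_ppd(1920, 0.53, 0.6) -> ~40 ppd, already above the ~35 ppd
    # figure quoted for the Vision Pro.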
> https://phrogz.net/tmp/ScreenDensityCalculator.html
10 replies →
Not only would it be a chore to constantly lean in closer to different parts of your monitor to see full detail, but looking at close-up objects in VR exacerbates the vergence-accommodation mismatch issue, which causes eye strain. You would need varifocal lenses to fix this, which have only been demonstrated in prototypes so far.
Couldn't you get around that by having a "zoom" feature on a very large but distant monitor?
3 replies →
This all sounds a bit like the "better horse" framing. Maybe richer content shouldn't be consumed as primarily a virtualized page. Maybe mixing font sizes and oversized text can be a standard in itself.
It's just about what pixels-per-degree will get you close to a modern IRL setup. Obviously it's enough for 80-character consoles, but you'd need to dip into large fonts for a desktop.
I did the math on this site, and I'd have to hunch in to less than a foot from the screen to hit 35 ppd on my work-provided ThinkPad X1 Carbon with a 14" 1920x1200 screen. My usual distance is nearly double that, so I'm normally at more like 70 ppd, roughly.
https://phrogz.net/tmp/ScreenDensityCalculator.html#find:dis...
And foveated streaming has a 1-2ms wireless latency on modern GPUs according to LTT. Insane.
That's pretty quick. I've heard that in ideal circumstances Wi-Fi 6 can get close to 5ms and Wi-Fi 7 can get down to 2ms.
It's impressive if they're really able to get below 2 ms motion-to-photon latency, given that modern consumer headsets with on-device compute are also right at that same 2 ms mark.
Wow, that's just 1 frame of latency at 60 fps.
Edit: Nevermind, I'm dumb. 1/60th of a second is 16 milliseconds, not 1.6 milliseconds.
No, that's between 0.06 and 0.12 frames of latency at 60 fps. It's not even a frame at 144 Hz (1 s / 144 ≈ 7 ms).
Much less than that; 1 frame is 16 ms.
60 fps is 16.67 ms per frame.
> Roughly equivalent resolution to Quest 3 and less than Vision Pro. This won't be suitable as a monitor replacement for general desktop use.
The real limiting factor is more likely to be having a large headset on your face for an extended period of time, combined with a battery that isn't meant for all-day use. The resolution is fine. We went decades with low resolution monitors. Just zoom in or bring it closer.
The battery isn't an issue if you're stationary, you can plug it in.
The resolution is a major problem. Old-school monitors used old-school OSes that did rendering suitable for the displays of the time. For example, anti-aliased text was not typically used for a long time. This meant that text on screen was blocky, but sharp. Very readable. You can't do this on a VR headset, because the pixels on your virtual screen don't precisely correspond with the pixels in the headset's displays. It's inevitably scaled and shifted, making it blurry.
There's also the issue that these things have to compete with what's available now. I use my Vision Pro as a monitor replacement sometimes. But it'll never be a full-time replacement, because the modern 4k displays I have are substantially clearer. And that's a headset with ~2x the resolution of this one.
> There's also the issue that these things have to compete with what's available now. [...] But it'll never be a full-time replacement, because the modern 4k displays I have are substantially clearer.
What's available now might vary from person to person. I'm using a normal-sized 1080p monitor, and this desk doesn't have space for a second monitor. That's what a VR headset would have to compete against for me; just having several virtual monitors might be enough of an advantage, even if their resolution is slightly lower.
(Also, I have used old-school VGA CRT monitors; as could be easily seen when switching to a LCD monitor with digital DVI input, text on a VGA CRT was not exactly sharp.)
VR does need a lot of resolution when trying to display text.
You can get away with less for games where text is minimized (or very large).
The weight on your face is half that of Quest 3, they put the rest of the weight on the back which perfectly balances it on your head. It's going to be super comfortable.
Yeah, many people already use something like the BoboVR alternative head strap for the Quest 3, which has an additional battery pack in the back that helps balance the front-heavy device.
1 reply →
Whether or not we used to walk to school uphill both ways, that won't make the resolution fine.
To your point, I'd use my Vision Pro plugged in all day if it were half the weight. As it stands, it's just too much nonsense when I have an ultrawide. If I were 20-year-old me I'd never get a monitor (20-year-old me also told his gf the iPad 1 would be a good laptop for school, so,)
One problem is that in most settings a real monitor is just a better experience for multiple reasons. And in a tight setting like an airplane where VR monitors might be nice, the touch controls become more problematic. "Pardon me! I was trying to drag my screen around!"
> (20 year old me also told his gf iPad 1 would be a good laptop for school, so,)
Yikes. How'd that relationship end up? Haha.
1 reply →
2K x 2K doesn't sound low-res; it's like full HD, but with twice the vertical resolution. My monitor is 1080p.
I've never tried a VR headset, so I don't know if that translates similarly.
Your 2K monitor occupies something like a 20-degree field of view from a normal sitting position/distance. The 2K resolution in a VR headset covers the entire field of view.
So effectively your 1080p monitor has ~6x the pixel density of the VR headset.
1 reply →
The problem is that 2K square is spread across the whole FOV of the headset, so when it's replicating a monitor, unless that monitor is ridiculously close to your face, a lot of those pixels are 'wasted' in comparison to a real monitor with similar stats.
2 replies →
Why hasn't Meta tried this, given the huge amount of R&D they've put into VR and the fact that they literally had John Carmack on the team in the past?
They prioritized cost, so they omitted eye tracking hardware. They've also bet more on standalone apps rather than streaming from a PC. These are reasonable tradeoffs. The next Quest may add eye tracking, who knows. Quest Pro had it but was discontinued for being too expensive.
We'll have to wait on pricing for Steam Frame, but I don't expect them to match Meta's subsidies, so I'm betting on this being more expensive than Quest. I also think that streaming from a gaming PC will remain more of a niche thing despite Valve's focus on it here, and people will find a lot of use for the x86/Windows emulation feature to play games from their Steam library directly on the headset.
It will be interesting to see how the X86 emulation plays out. In the Verge review of the headset they mentioned stutters when playing on the headset due to having to 'recompile x86 game code on the fly', but they may offer precompiled versions which can be downloaded ahead of time, similar to the precompiled shaders the Steam Deck downloads.
If they get everything working well I'm guessing we could see an ARM powered Steam Deck in the future.
Despite the fact that it uses a Qualcomm chip, I'm curious whether it retains the ability to load alternative OSes like other Steam hardware does.
1 reply →
If you mean foveated streaming - It’s available on the Quest Pro with Steam Link.
What do you mean? What part have they not tried?
I use a 1920x1080 headset as a monitor replacement. It's absolutely fine. 2160x2160 will be more than workable as long as the tracking is on point.
> But the price is hopefully low.
The main value of Meta VR and AR products is the massive price subsidy which is needed because the brand has been destroyed for all generations older than Alpha.
The current price estimate for the Steam Frame is $1200 vs. the Quest 3 at $600, which is still a very reasonable price given the technology, tariffs, and lack of privacy-invading ads.
Quest 3 is $499 and Quest 3S is $299 in the US
> Very cool, should be very hackable. Very unlike every other standalone VR headset.
That might be the reason I'm going to buy it. I want to support this, and Steam has done a lot to get gaming on Linux going.
I guess there's a market for this but I'm personally disappointed that they've gone with the "cram a computer into the headset" route. I'd much rather have a simpler, more compact dumb device like the Bigscreen Beyond 2, which in exchange should prove much lighter and more comfortable to wear for long time periods.
The bulk and added component cost of the "all in one" PC/headset models is just unnecessary if you already have a gaming PC.
I'm personally quite hyped to see the first commercially available Linux-based standalone VR headset announced. This thing is quite a bit lighter than any of the existing "cram a computer in" solutions.
Strictly speaking, the mobile Oculus/Meta Go/Quest headsets were Linux/Android-based: you can run a Termux terminal with Fedora/Ubuntu on them and use an Android VNC/X app to run the 2D graphical part. But I share your SteamOS enthusiasm.
Yeah, this is exactly what I've been waiting for for quite a long time. I'm very excited.
They crammed a computer into the headset, but UNLIKE Meta's offerings, this is indeed an actual computer you can run Linux on. Perhaps you could even do standard computer stuff inside the headset, like text editing, Blender modeling, or more.
As a current and frequent user of this form factor (Pico 4, with the top strap, which the Steam Frame will also have as an option, over Virtual Desktop) I can assure you that it's quite comfortable over long periods of time (several hours). Of course it will ultimately depend on the specific design decisions made for this headset, but this all looks really good to me.
Full color passthrough would have been nice though. Not necessarily for XR, but because it's actually quite useful to be able to switch to a view of the world around you with very low friction when using the headset.
There's always going to be a computer in it to drive it. It's just a matter of how generalised it is and how much weight/power consumption it's adding.
It's nice to have some local processing for tracking and latency mitigation. Cost from there to full computer on headset is marginal, so you might as well do that.
You can get a Beyond if that's what you want. It's an amazing device, and will be far more comfortable and higher resolution than this one. Valve has supported Bigscreen in integrating Lighthouse tracking, and I hope that they continue that support by somehow allowing them to integrate the inside-out tracking they've developed for this device in the next version of the Beyond.
That would probably add a lot of extra weight and it would need to make the device bigger.
3 replies →
I agree. Hopefully Bigscreen continues making hardware. I still have the original Bigscreen Beyond and I'm very happy with it besides the glare.
How is Linux support?
3 replies →
It's super light compared to the Quest 3: half the weight on your face, with the rest on the back, which balances the headset. The Bigscreen Beyond isn't wireless and has a narrower field of view.
> has a narrower field of view.
On the Beyond 2, only by 2 degrees horizontally. I don't think that would even be noticeable.
I was worried about the built in computer as well, but then I found out it's only 185g. It is 78g more than the Bigscreen Beyond 2, but it's still pretty light.
I once lived in a place that had a bathroom with mirrors that faced each other. I think I convinced myself that not only is my attention to detail more concentrated at the center, but that my response time was also fastest there (can anyone confirm that?).
So this gets me thinking. What would it feel like to correct for that effect? Could you use the same technique to essentially play the further parts early, so it all comes in at once?
Kind of a harebrained idea, I know, but we have the technology, and I'm curious.
Peripheral vision is extremely good at spotting movement at low resolution and moving the eye to look at it.
I don't know if it's faster, but it's a non-trivial part of the experience.
Yeah, I've heard and noticed that as well (I thought about adding a note about it to my original comment). But what I'm curious about is the timing. What I suspect is that the periphery is more sensitive to motion, but still lags slightly behind the center of focus. I'm not sure if it depends on how actively you're trying to focus. I'd love to learn more about this, but I didn't find anything when I looked online a bit.
It's good enough that some people can see flickering on CRT monitors at 50-60 Hz.
1 reply →
> Foveated streaming! That's a great idea.
It would be interesting to see⁰ how that behaves when presented with weird eyes like mine or worse. Mine often don't point the same way, and which one I'm actually looking through can be somewhat arbitrary from one moment to the next…
Though the flipping between eyes usually happens in the presence of changes, however minor, in required focal distance, so maybe it wouldn't happen as much inside a VR headset.
----
[0] Sorry not sorry.
Have a look at this video by Dave2D. In his hands-on, he was very impressed with foveated streaming https://youtu.be/356rZ8IBCps.
Yet this is shaping up to be one of the most interesting VR releases
How the hell would foveated streaming even work? It seems physically impossible. Tracking where your eye is looking, sending that information to a server, having it process it, and then streaming the result back seems impossible.
The data you're sending out is just the position and motion vectors of the pupils, and you probably only need about 16 bits for each of these numbers for the two eyes - the equivalent of two floating-point numbers per channel, or 32 bits at minimum. Any lag can be compensated for by simply interpolating the motion vectors.
It actually makes a lot of sense!
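A back-of-the-envelope sketch of how small that uplink is (a hypothetical packet layout, just to show the scale):

    import struct

    def pack_gaze(lx, ly, rx, ry):
        # Normalized pupil coordinates in [0, 1], quantized to 16 bits
        # each: two eyes x two coordinates = 8 bytes per update.
        q = lambda v: max(0, min(int(v * 65535), 65535))
        return struct.pack("<4H", q(lx), q(ly), q(rx), q(ry))

    # Even at 1000 updates/sec that's only ~8 KB/s of uplink, vanishingly
    # small next to the video stream coming back the other way.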
Eye tracking hardware and software specifically focus on low latency, e.g. an FPGA close to the sensor. The packets they send are also ridiculously small (e.g. two numbers for the x,y positions of the pupils), so ... I can see that happening.
Sure, eyes move very, VERY fast, but if you do relatively small compute on dedicated hardware it can also go quite fast while remaining affordable.
It just needs to be less impossible than not doing it. I.e. sending a full frame of information must be an even more impossible problem.
> Mouth tracking?
What a vile thought in the context of the steam… catalogue.
I'm guessing its main use case will be VRChat, syncing mouths to avatars.
The porn industry disagrees.
1 reply →
They're probably thinking of it in comparison to the Apple Vision Pro, which attempts to do some facial tracking of the bottom of your face to inform its 'Personas'; it notably still fails quite badly on bearded people, where it can't see the bottom half of the face well.
I gathered as much, but still.
Funny enough the Digital Foundry folks put a Gabe quote about tongue input in their most recent podcast.
https://www.youtube.com/watch?v=c9zfExb5vCU&t=1h32m44s