I gotta say, these new Personas are good.
The previous beta ones were terrifying Frankenstein monsters. The new ones fooled my boss for 30 minutes.
There's a bit of uncanny valley left, nevertheless. My persona's smile reminds me of the horrible expressions people like to make in Source Filmmaker.
I have a few similar takeaways:
1. The scanning is fast; it takes longer to set up a fingerprint on a MacBook Air. You just turn your head from side to side, then up and down, smile, and raise your eyebrows.
2. I used the M5, and the processing time to generate the Persona was quick. I didn't time it, but it felt like less than 10 seconds.
3. The headset presses on my cheeks and tends to restrict my smile. It still works, but people who know me understood what I meant when I said my smile was hindered.
4. Despite the limited actions used for setup, it reproduces a far greater range of facial movements. For example, if I do the invisible string trick, it captures my lips correctly (that's when you move the top lip in one direction and the lower lip in the opposite direction, as if pulled by a string).
5. I wasn't expecting this big a jump in quality from v1.
> There's a bit of uncanny valley left
Perhaps it's how the heads and eyes move with this weird "fluid" effect, and how heavily blurred the faces are?
What eventually tipped your boss off? Was it the smile issue?
CorridorDigital recently used the tech to assist in remaking the rooftop bullet-time scene from The Matrix. They used it to build the environment instead of modeling it from scratch.
https://www.youtube.com/watch?v=iq5JaG53dho&t=2s
To be clear: they used Gaussian splatting, they didn't use Vision Pros.
They also had an earlier video that more heavily featured Gaussian splats, using them to recreate the inside of the Universal Studios theme park without permission. I was very impressed with how it handles reflections on glass.
https://www.youtube.com/watch?v=cetf0qTZ04Y
There's a bit more of a conversation / demo here which is pretty impressive: https://www.youtube.com/watch?v=KbZfbqHeJNU.
Oh man, that was weird; I opened the video in a private browsing window to not pollute my watch history, and the version I got was automatically translated to Dutch, including a voiceover which I presume is AI-driven to try and match the tone of the original video. Still a bit robotic though.
While I have my browser configured to prefer Dutch, my second language preference is English; I wish I could tell it / them not to translate anything that's already in one of those languages.
Yeah that is awful behavior of YouTube. I can only imagine none of the YouTube developers or managers speak multiple languages.
The floating heads in a room having a meeting remind me of terrible sci-fi.
Came this close to buying an AVP before learning that it only mirrors a single screen, with no virtual monitors.
Like, guize, c'mon. Virtual Desktop can do three. For $3.5k you gotta do better. I don't particularly need a virtual me in space as much as I need more screens that can do, like, actual work.
I've always used 2-3 monitors pretty comfortably, but with high-latency AI agents adding more concurrency to my workflows, I'm feeling very crowded. I would love a VR experience with an arbitrary number of screens/windows, as well as more clearly separated environments (like having a visually different virtual office per project) that I can quickly switch between.
is this still the case even with the new M5? if so, wtf apple
Tested said something similar about Personas. https://youtu.be/LzZ2j9CAcww?si=IRvxNaNZeBQp7WLV
I'm usually a fan of Norm's videos, but this might be the first time I've seen a Tested video that felt more like paid-promotion than an actual unbiased review. I don't keep up with it though.
There's a video version of the article linked partway down, which actually works better than the text for seeing the thing in action.
For those who have had Persona conversations, how does varying audio latency affect immersion? Is there a recommended chat service?
What audio latency?
There's regular latency due to distance, just like on a phone call if you're chatting with someone halfway across the world.
But on a normal connection, audio and the persona should always be in sync, the same way audio and video are over Zoom or FaceTime.
There shouldn't be any extra latency for the audio only.
Video and audio aren't always quite in sync in Zoom, in my experience. But you're right, the overall latency of the connection should've been my question.
I don't use it very frequently, but from the few times I did I can't recall any perceptible lag via Apple iMessage.
Good to know. I should try iMessage video chat more, in general.
TLDR Gaussian splatting.
What is missing from the article is that creating a model from a few pictures is not that hard (well, it is to do well, but hear me out).
The difficult part is animating it realistically, in real time, with the sensors you have.
Extracting a signal from eye-gaze cameras with a slightly wider field of view, one that allows realistic and not uncanny-valley animation, is quite hard to do on the general public. People's faces are all different sizes and shapes, to the point that even getting accurate gaze vectors is hard, let alone smile and cheek position (those are done with different cameras, not just eye-gaze ones).
This is what fascinates me as well. I have to assume there's a neural net that effectively learns all of the possible muscles in the face. The limited sensor data gets fed in, and it's able to infer the full face shape. It seems perfectly plausible in theory, but I'm still impressed it seems to work so well in practice.
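For intuition, here's a toy sketch of what that inference stage might look like, assuming a learned regressor from fused sensor features to facial blendshape weights (everything below is hypothetical; Apple hasn't published Personas' actual architecture, though the 52 face blendshapes are real in ARKit):

    import torch
    import torch.nn as nn

    # Hypothetical sketch: regress blendshape weights from fused headset sensor
    # features (eye cameras, downward cheek/jaw cameras, audio). Not Apple's code.
    class FaceRig(nn.Module):
        def __init__(self, sensor_dim=256, n_blendshapes=52):
            super().__init__()
            self.backbone = nn.Sequential(
                nn.Linear(sensor_dim, 512), nn.ReLU(),
                nn.Linear(512, 512), nn.ReLU(),
            )
            self.head = nn.Linear(512, n_blendshapes)

        def forward(self, x):
            # Sigmoid keeps each coefficient in [0, 1], like a muscle activation
            return torch.sigmoid(self.head(self.backbone(x)))

    rig = FaceRig()
    frame = torch.randn(1, 256)   # one frame of fused sensor features
    weights = rig(frame)          # coefficients driving the avatar's face
    print(weights.shape)          # torch.Size([1, 52])

The hard part wouldn't be the network so much as (presumably) the training data: paired recordings of in-headset sensor streams with ground-truth face capture taken without the headset on.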
It’s amazing tech, it’s just a solution looking for a problem.
It feels a bit like the original Segway’s over-engineered solution versus cheap Chinese hoverboards, then the scooters and e-bikes that took over afterwards.
Why would I be paying all this money for this realistic telepresence when my shitbox HP laptop from Walmart has a perfectly serviceable webcam?
Many use cases come to mind. If (retinal?) identities were private, encrypted, and “anonymized” in handshake:
web browsing without captchas, Anubis, bot tests, etc. (“human only” internet, maybe like Berners-Lee’s “semantic web” idea [1][2])
Non “anonymized”:
non-jury court and arbitration appearances (with expansion of judges to clear backlogs [3])
medical checkups and social care (e.g. neurocognitive checkups for the elderly, social services check-ins esp. for children, check-ins for depressed or isolated people needing off-work social interaction, etc.)
bureaucratic appointments (customer service by humans, DMV, building permits, licenses, etc.)
web browsing for routine tasks without logins (banks, email, etc)
[1] <https://www.newyorker.com/magazine/2025/10/06/tim-berners-le...> [2] <https://newtfire.org/courses/introDH/BrnrsLeeIntrnt-Lucas-Nw...> [3] <https://nysfocus.com/2025/05/30/uncap-justice-act-new-york-c...>
Let’s run down your use cases:
Human-only Internet: why choose this implementation over something simpler? Surely there’s a simpler way to prove you’re human that doesn’t involve 3D avatar construction on a head-worn display that screws up your hair and makeup. [1] E.g., an Apple Watch-like device can verify you have a real pulse and oxygen in your blood.
Court: solution is already in place, which is showing up to a physical courtroom. Clearing backlogs can be done without a technological solution, it’s more of a process and staffing problem. Moving the judges from a court to a home office doesn’t magically make them clear cases faster.
Medical checkups: phone selfie camera
Bureaucratic appointments: solution in place, physical building, or many of these offer virtual appointments already over a webcam.
Web browsing without logins: passkeys, FaceID, fingerprint
[1] yet another male-designed tech bro product that never considered the concerns of the majority of the population.
I recently used my VP extensively when working remotely. It's not glamorous, but I used Screen Sharing with a MacBook, which grants you a virtual ultrawide monitor.
Once you're already in VR, it's nice to not have to break out for a meeting, and that's where Personas fit in.
It's not a killer app carrying the product, it's a necessary feature making sure there's not a gap in workflow.
Ah, right! Because you can’t videoconference with the headset on.
Thank you! Now I get it!
So it’s sort of a stopgap solution before AR glasses are small enough to do actual video calls without looking silly?
I think this explanation makes the situation sound even worse.
The Vision Pro’s overall productivity solution is inferior to existing, cheaper technology, and it has to be supplemented by a solution to a problem created by its own design.
Essentially you’re saying that after putting on a double headband device that wrecks my hair, gets me sweaty, strains my neck with weight, and fucks up my makeup, I now have to use a workaround fake avatar because the tech bros who made this product had to say “oh shit, if you have a headset on you can’t be on camera!”
For $3500 I can be in real reality and be surrounded by higher resolution professional monitors and just show my real self on camera instead.
I disagree, because it answers a pretty simple question: How to be present in a video call when you're using the headset.
To me it would be a shortcoming of the device if I couldn't show myself and the thing I'm working on at the same time.
You have to back up from that question. “How to be present in a video call” is already an answered question.
The “when you’re using the headset” part is the issue. Why are we using the headset? What are the benefits? Why am I making these tradeoffs like messing up my hair, putting a heavy device on my head, messing up my makeup, etc.
This is like saying “The Segway had advanced self-leveling to solve the problem of how to balance when you’re on an upright two wheel device”.
But why are you on an upright two wheel device? Why not just add a third wheel? Why not ride a bicycle? Why not ride a scooter?
The solution is really cool and technologically advanced but it doesn’t actually solve anything besides an artificially introduced problem.
I always viewed the current generations of 'cheap Chinese hoverboards' etc. as direct descendants of the Segway, and thought that Kamen and his believers weren't quite as ridiculous as we considered them at the time; they were just ahead of their time and expecting too much from too low a point in the technology curve.
They had the right idea but over-engineered the solution.
They could never cut the price down because of it. The knockoffs used much simpler ways to balance yourself, including just changing the form factor to something more conventional that doesn’t even need balance correction (scooters and e-bikes).
I live halfway across the world from my folks, so I don’t see them often. I’d love something that gives me a greater sense of presence than a video call can give.
Do you believe that seeing a computer generated picture of them is more lifelike than an actual video of them talking to you live?
Why do we have video call meetings when people mostly just listen and the information is carried via audio?
Why do we have 4K monitors when 1920x1080 is perfectly fine for 99.999% of use cases?
If you look at the world through this lens called "serviceability" you'll think everything is a solution looking for a problem.
> when 1920x1080 is perfectly fine for 99.999% of use cases
A lot of people here work with text all day every day and we would rather work with text that looks like it came out of a laser printer than out of a fax machine.
> and the information is carried via audio?
Because it's not. Facial expressions and body language carry gigantic amounts of information.
So many misunderstandings arise when the channel is audio-only. E.g. if a majority of people in a meeting are uneasy with something, they can see it on each others' faces, realize they're not alone, and bring it up. When it's audio-only, everyone thinks they're the only one with concerns and so maybe it's not worth interrupting what they incorrectly assume to be the general consensus over audio.
These analogies don’t compare well. Your examples don’t demonstrate an extreme tradeoff like you get with the Vision Pro.
Why do we have video calls? Because a webcam costs $1-5 to put into a laptop and bandwidth is close enough to free.
Why do we have 4K monitors? Because they only cost a small amount more than 1080p monitors and make the image sharper with not a whole lot of downsides (you can even bump them down to 1080p if you have a difficult time driving the resolution). I paid $400 for my 4K 150Hz gaming monitor so going with 1080p high refresh rate VRR would have only saved me $200 or so.
Serviceability for purpose is a spectrum and the Vision Pro is at the wrong end of it.
For more than the price of three 4K OLED 144Hz monitors, you get to don a heavy headset that messes up your hair, makes you sweaty, screws up your makeup, and you get less resolution and workspace than the monitors. Your battery lasts an hour so it’s inferior to a laptop with an external portable monitor or two. It’s actually harder to fit into a backpack than a laptop plus portable monitors since it’s not flat.
Then you have to use some complicated proprietary technology [1] to make a 3D avatar of yourself to overcome the fact that you now have a giant headset on your head and look like an idiot if you were to go on camera.
You can’t do a bunch of PC stuff on it because it’s basically running iPadOS.
This is not the same as “why are we bothering with 4K?”
[1] What will you do if Apple starts charging money for this feature?
I actually think about this a lot, and I could argue both sides of it. On the one hand, you could look at your list of examples as obvious examples of modern innovation/improvement that enrich our lives. On the other, you could take it as a facetious list that proves GP's point, as one other commenter apparently already has.
I often think how stupid video call meetings are. Teams video calls are one of the few things that make every computer I own, including my M1 MBP, run the fans at full tilt. I've had my phone give me overheat warnings from showing the tile board of bored faces staring blankly at me. And yeah, honestly, it feels like a solution looking for a problem. I understand that it's not, and that some people are obsessed for various reasons (some more legitimate than others) with recreating the conference room vibe, but still.
And with monitors? This is a far more "spicy" take, but I think 1280x1024 is actually fine. Even 1024x768. Now, I have a 4K monitor at home, so don't get me wrong: I like my high DPI monitor.
But I think past 1024x768, the actual productivity gains from higher resolutions begin to rapidly dwindle. 1920x1080, especially on "small" displays (under 20 inches), can look pretty visually stunning. 4K is definitely nicer, but do we really need it?
I'm not trying to get existential with this, because what do we really "need"? But I think that, objectively, computing is divided into two very broad eras. The first era, ending around the mid 2000s, was marked by year-after-year innovation where 2-4 years brought new features that solved _real problems_, as in, features that gave users new qualitative capabilities. Think 24-bit color vs 8-bit color, or 64-bit vs 32-bit (or even 32-bit vs 16-bit). Having a webcam. Having 5+ hours of battery life on a laptop, with a real backlit AMLCD display. Having more than a few gigabytes of internal storage. Having a generic peripheral bus (USB/firewire). Having PCM audio. Having 3D hardware acceleration...
I'm not prepared to vigorously defend this thesis ;-) but it seems at about 2005-ish, the PC space had reached most of these "core qualitative features". After that, everything became better and faster, quantitatively superior versions of the same thing.
And sometimes yeah, it can feel both like it's all gone to waste on ludicrously inefficient software (Teams...), and sometimes, like modern computing did become a solution in search of a problem, in order to keep selling new hardware and software.
4K monitors are better and more comfortable.
On the other hand, video calls are worse and less comfortable than audio calls.
I'm curious about the practical application of these avatars in everyday life - in real life, that is, not the examples provided by the marketing department. At that price, the Vision Pro still feels like a toy for wealthy people, or perhaps for CEOs of companies who can afford conferences in a virtual environment. But then, why exactly? The majority of the world tested video calls, conferences, and all sorts of other activities during the pandemic, like virtual crowds for TV programs (pretty sure British panel shows showed grids of people as a substitute for a studio audience). News services were inviting their guests via video call when Skype was still around.
yes and to a degree which i find particularly interesting. its never going to happen because of your example
i prefer working in my vp and see a possible world where vp makes my remote team collaborate as if we were in the office, from the comfort of the most ergonomic location in my house
it solves this problem and 0.0001% of people are dorks like me who try and say, "they did it" while the rest of the world keeps going to work as before
all of the tech problems were solvable. people simply dont want to put a thing on their face and i think thats unsolvable
I would not describe creating an experience that feels like you are in the room with a group of people, even allowing cross talk, as a solution looking for a problem. I think it's the thing everyone slowly dying on Zoom calls wishes they could have.
Oh no, they wish to have fewer useless meetings.
I disagree. Many of us don't use a headset regularly or carry it with us like a phone or laptop; it is an outright inconvenience to use, with only marginal benefits. Businesses won't want one if webcams still do the trick, and users might respond positively but are always priced out of owning one.
If I'm doing work at my desk and I get a Zoom call, there is a 0.00% chance I will go plug in my Vision Pro to answer it. I'm just going to open the app and turn on my webcam, spatial audio be damned.
How's the latency? Latency is what makes Zoom et al. painful for me now - it ruins the ability to politely interject, give confirmation, etc. Does Apple do a better job of this than Google/Zoom? In theory you could get 20-30ms (just spitballing numbers I used to get playing shooters!) but I've never gotten anywhere near that with video conferencing.
Even so, latency-in-zoom kind of becomes an attribute of the medium and you learn to adapt. How does it feel with the Vision Pro though? The article talks about a really convincing sense of being in the same place with someone - how does latency affect that? (And does it differ based on if you're all physically in Silicon Valley or not?)
I would assume any added latency is negligible -- the sensors + interpretation + rendering should be very fast.
But you've still got all the network latency including Wi-Fi latency on both ends. And you always need a small audio buffer so discrete network packets can be assembled into continuous audio without gaps.
So I wouldn't expect this latency to be any different from regular videoconferencing.
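As a toy illustration of that buffering (all numbers made up; real clients like Zoom, FaceTime, and WebRTC size their jitter buffers adaptively):

    # Toy jitter buffer: hold a few packets before starting playback so late or
    # out-of-order packets can still be stitched into continuous audio.
    class JitterBuffer:
        def __init__(self, depth=3, frame=b"\x00" * 320):  # ~20 ms @ 8 kHz, 16-bit
            self.depth, self.silence = depth, frame
            self.pending = {}        # sequence number -> audio packet
            self.next_seq = 0
            self.started = False

        def push(self, seq, packet):
            self.pending[seq] = packet

        def pop(self):
            # Prime with `depth` packets first (that's the added latency);
            # afterwards a missing packet is concealed with silence, not awaited.
            if not self.started:
                if len(self.pending) < self.depth:
                    return None
                self.started = True
            packet = self.pending.pop(self.next_seq, self.silence)
            self.next_seq += 1
            return packet

    jb = JitterBuffer()
    for seq in (1, 0, 2):                  # packets arrive out of order
        jb.push(seq, f"pkt{seq}".encode())
    print(jb.pop(), jb.pop(), jb.pop())    # b'pkt0' b'pkt1' b'pkt2'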
> latency-in-zoom kind of becomes an attribute of the medium and you learn to adapt.
To some degree but not fully. When you adapt your brain is still doing extra work to compensate, similarly to how you don’t «hear» jet engine noise after acclimating to an airplane but it will still tire you to some degree.
I had Zoom and Teams meetings daily during Covid, and personal FaceTime calls almost daily for a while. I still get «Zoom fatigue» if a call goes on for over an hour and I need to talk face to face during the call (i.e. no screen sharing, can’t disable video and look at something else, etc.). I’m fine if I don’t look at people’s faces but rather at their screen sharing.
At the last SIGGRAPH there was actually a company that makes dynamic 3D Gaussian splatting videos now, rather than static scenes:
https://www.youtube.com/live/ucRukZM0d1s?t=1h1m50s
https://zju3dv.github.io/freetimegs/
https://www.4dv.ai/
The videos can be played back in real-time, though they require multiple cameras to capture.
"Now out of beta"??
Just in time for Vision Pro to go big. Right?
This video might help explain 3D Gaussian splatting: https://www.youtube.com/watch?v=wKgMxrWcW1s
Essentially, an entirely new graphics pipeline with different fundamental techniques which allow for high performance and fidelity compared to... what we did before(?)
Cool.
Not quite, it’s just a way to assign a color value to a point in space (think point clouds) based on photogrammetry. It’s voxels on steroids but still is drawn using the same techniques. It’s the magic of creating the splats that’s interesting.
A color value for each point is a good starting place to gain an intuition. Some readers might be interested to know that the color is not constant for each point, but instead dependent on viewing angle. That is part of what allows splats to look realistic. Real objects have some degree of specularity which makes them take on slightly different shades as you move your head.
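To make that concrete, here's roughly how the reference 3DGS renderer evaluates degree-1 spherical harmonics to get a per-splat, view-dependent color (the coefficients below are made up, and the real implementation goes up to degree 3):

    import numpy as np

    # SH basis constants: 1/(2*sqrt(pi)) and sqrt(3/(4*pi))
    C0, C1 = 0.2820948, 0.4886025

    def splat_color(sh, view_dir):
        # sh: (4, 3) array of SH coefficients = [DC, y, z, x terms] x RGB
        x, y, z = view_dir / np.linalg.norm(view_dir)
        rgb = C0 * sh[0] - C1 * y * sh[1] + C1 * z * sh[2] - C1 * x * sh[3]
        return np.clip(rgb + 0.5, 0.0, 1.0)

    sh = np.random.default_rng(0).normal(scale=0.2, size=(4, 3))  # one fake splat
    print(splat_color(sh, np.array([0.0, 0.0, 1.0])))  # viewed head-on
    print(splat_color(sh, np.array([1.0, 0.0, 0.0])))  # viewed from the side

The degree-0 (DC) term alone would give a flat color; the higher-order terms are what let glass and glossy surfaces shift shade as the camera moves.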
That video didn’t explain what Gaussian splatting is at all, but I did get a minute-long ad read for some cloud GPU service.
https://packet39.com/blog/a-primer-on-gaussian-splats/ is much better (don't load on mobile though, lots of data).
The same graphics pipeline is used: rasterization.
Rasterization is a very general term. There is a big difference in practice between the traditional rasterization pipeline and splat rasterizers
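For a feel of the difference: with triangles, the z-buffer picks a single winner per pixel; with splats, everything overlapping the pixel is depth-sorted and alpha-blended. A toy single-pixel version of the splat side (real 3DGS rasterizers do this per 16x16 tile in CUDA):

    import numpy as np

    # Each entry: (depth, RGB, opacity at this pixel after the Gaussian falloff)
    splats = [
        (2.0, np.array([1.0, 0.0, 0.0]), 0.6),
        (1.0, np.array([0.0, 0.0, 1.0]), 0.4),
        (3.0, np.array([0.0, 1.0, 0.0]), 0.9),
    ]

    color, transmittance = np.zeros(3), 1.0
    for depth, rgb, alpha in sorted(splats, key=lambda s: s[0]):  # front to back
        color += transmittance * alpha * rgb
        transmittance *= 1.0 - alpha
        if transmittance < 1e-4:   # early termination, as in the real renderer
            break
    print(color)   # blended pixel: no single splat "wins"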
Sorry, but this is a horrible video. The guy just spews superlatives in an annoying voice until 4:30 (of a 6-minute video, mind you), when he finally gives a 10-second "explanation" of Gaussian splatting, which doesn't really explain anything, then jumps to a sponsored ad.
yeah... their older videos are a bit more useful from what I remember (more time spent on the research paper content, etc), but they've become so content-free that I just block the channel outright nowadays. it's the "this changes everything (every time, every day)" hype-channel for graphics.