All the VR/AR/XR demos are so insanely trivial and yet still manage to be much more difficult than current methods of doing things. Like, really, cooking?
Normal method:
* Search for a recipe
* Leave my phone on a stand and glance at it if I forget a step
Meta glasses:
* Put glasses on (there's a reason I got LASEK, it's because wearing glasses sucks)
* Talk into the void, trying to figure out how to describe my problem as well as the format that I want the LLM to structure the response
* Correct it when it misreads one of my ingredients
* Hope that the rng gods give me a decent recipe
Or basically any of the things shown off for Apple's headset. Strap on a giant headset just so I can... browse photos? or take a video call where the other person can't even see my face?
I dunno, if these worked perfectly I don't think it'd be awful to be able to open my fridge and say "what can I make with this" and it could rattle off some suggestions based on my known preferences and even show me images in their new display.
Hands-free while cooking (not having to touch my phone with messy hands) is not a bad thing either.
I touch my phone with messy hands all the time. Phones are water resistant now, just wash it after
It sucks now, no idea why, but a few years ago, with the Google Home mini, I could just yell out all kinds of cooking related questions with "Hey Google" and it would always give me a good answer, was great for doing stuff hands free when cooking, like when I just don't want to get raw chicken or whatever on my phone.
But yeah, it doesn't give me good answers any more, usually tries to start an unrelated YouTube video or email me something about Youtube plus or w/e
I suppose that's a bit easier than reading it out to ChatGPT.
But your $800 glasses are exposed to the cooking area with steam, grease fumes, heat etc.
This reads a bit like a pre-PC take: "Why use a computer when a cookbook works fine?"
Imagine it’s 1992:
Cookbook: Open book, follow steps.
PC: Turn on tower, wait for DOS, fiddle with floppies, pray the printer works, hope the shareware recipe isn’t weird.
Not saying you're wrong but it's easy to miss the big picture
> "Why use a computer when a cookbook works fine?"
I still feel that way. I have cookbooks because I find the UX better than searching for recipes.
Right, but we're in the 1992 of these glasses. Maybe they'll be good eventually. They aren't now.
And frankly, even the online recipe experience leaves much to be desired. Skip past the blog post. Skip past the list of ingredients. Skip past another blog post. Find the single statblock on the bottom that lists ingredients & amounts, & instructions - hoping that it exists.
Like other commenters, I've also started going back to paper cookbooks.
Not the same.
Internet and recipe websites solve a real problem: accessing recipes was expensive and not easy
AR headsets don't solve any problems. If anything, they invent a nonexistent problem, attempt but fail to solve it, and make the experience even worse in the process.
Okay. Now: Imagine it's 2025:
Cookbook: Open book, follow steps.
New gadget from multi-billion dollar company: showcases in a live demonstration that it's a broken piece of crap that doesn't work.
Like, are we forgetting that it didn't work? It sucked at the job! Let's not what-if or have some imaginary "okay, but pretend it's actually good," deal here. It was bad!
No? Because a traditional cookbook (paper or digital) is deterministic and LLMs are not.
honestly cookbooks genuinely are better
I got The Art of Italian Cooking recently and it's genuinely far easier to get a recipe than trying to scroll through a 50-page monologue about the intricacies of someone's childhood before even listing the ingredients
Watching the announcement, every feature felt like something my phone already does—better.
With glasses, you have to aim your head at whatever you want the AI to see. With a phone, you just point the camera while your hands stay free. Even in Meta’s demo, the presenter had to look back down at the counter because the AI couldn’t see the ingredients.
It feels like the same dead end we saw with Rabbit and the Humane pin—clever hardware that solves nothing the phone doesn’t already do. Maybe there’s a niche if you already wear glasses every day, but beyond that it’s hard to see the case.
If executed well I think this could reduce a lot of friction in the process. I can definitely unlock my phone and hold it with one hand while I prepare and cook, but that's annoying. If my glasses could monitor progress and tell me what to do with what while I'm doing it, that's far more convenient. It's clearly not there yet, but in a few years I have no doubt it will be. And this is just the start. With the screens they'll be able to offer AR. Imagine working on electronics or a car and the instructions are overlaid on the screen while the AI is providing verbal instructions.
I'm oldish, so maybe I'm biased, but this sort of product seems like something no one will want, outside a few technophiles, but that industry desperately needs you to want. It's like 3d TV, a solution in search of a problem because the mfgs need to make the next big thing with the associated high margins.
To me the phone is a pretty good form factor. Convenient enough (especially with voice control), unobtrusive, socially acceptable, and I need to own one anyway because it's a phone. I'm a geek so I think this tech is cool, but I see zero chance I would use one, even if it were a few steps better than it is.
On the other hand, having to constantly consult a recipe on my phone while I cook is the main quality of life aspect of home cooking that could be improved.
You're missing the part where I'm reminded that my phone autolocks so I have to go into the settings in the middle of cooking to make it never autolock (or be lazy and unlock it every time I need it). And then I have to find a clean knuckle to scroll the ingredient list and the recipe steps every time I'm trying to remember what step I'm at.
You could do some killer recipe UX with a HUD on some glasses.
These companies are reaching really hard for use cases while ignoring the only ones VR actually works well for. If they just went all in on gaming it would be a much better product than trying to push AI slop cooking help.
As a gamer, in my experience people don't want to play VR games either.
Beat Saber as a social party experience with friends in the same room, sure, that's fun... but for day to day gaming the amount of people who want to play VR games on the regular is nearly zero.
If they really want to lean into the VR use case that people want, it's porn, but I suspect they won't put that front and center.
In my experience, the biggest obstacle to broader AR and VR adoption beyond reducing the price, size, and weight of the hardware will always be the lack of good content creation tools.
I've been involved with two VR projects that were ultimately cancelled because, while we developed a sexy tech demo that showed the potential, building things out into something sustainable required too many resources and took too much time to maintain.
VR gaming seems like it is a bit of a niche, though. I think they want to sell glasses in quantities more like cellphones than gaming peripherals.
I agree they are reaching (and not finding) for an application.
Well it's clearly a first gen product. They could ship Snake and Tetris on it, probably, but I'm certain they're thinking about how to get apps and games on it.
> the only ones VR actually works well for
I had really expected a different "only one"
No offense, but there is this chart, and what it tells me, maybe just me, is that gaming is a niche within VR, not even the majority use case. Zuck is probably right about VR/AR being the next big social media, only he's wrong that it'll be like Facebook/Instagram-type social media; it's old-Twitter-type social media.
[1]: Most played VR games: https://steamdb.info/charts/?tagid=21978
Games are not a prolific spy tentacle for hoovering up all kinds of data. They may have changed their name, but this is still the facebook company.
Voice input is just too annoying but with the display and wristband I think the dream is there. Your hands are deep in messy food prep, you have a recipe up, you can still pause your music or take a call with the wristband and without stopping to wash up or getting oil or batter on everything.
I wear my glasses all the time. If I could just talk to the void and get help with things I’m directly seeing reliably that would be a game changer. I’ve used Gemini’s video mode and we’re not all that far away.
People don't realise how amazingly efficient touch interfaces already are.
There is no need for these stupid glasses. Some refuse to accept it - especially Zuckerberg, who relies on folks like Apple to make his money. That's really what's driving this project if you tear away all the BS.
If you watch it carefully, he preempts the AI with "What do I do first" before it even answered the first time. To me that strongly suggests it did this in rehearsal too, and hence it was far more than just "bad luck" or bad connectivity. Perhaps the bad connectivity stopped the override from working and it just kept repeating the previous response. Either way, it suggests to me some troubling early implications about how well Meta's AI work is going, that they got stuck on such a simple thing in the main live demo for their flagship product.
I think preempting the AI the first time was meant to be a feature (it's not trivial to implement and is something people often ask for). Failing from there definitely wasn't great, although it's kind of what I'd expect from an(y) LLM.
No, he preempted it because it was about to list all the ingredients necessary to make a steak sauce, despite having them in front of him. These are glasses, it should have skipped that part and went straight to what to do first.
The way he clung to "what do I do first" makes me think that the whole conversation was scripted in the prompt and the AI was asked to reply in a specific way to specific sentences. Possibly not even actually connected to the camera?
Yeah as a fully integrated system and the selling point I'd expect you'd say something like "Look again I think you're getting ahead of yourself".
Maybe the tech wasn't quite fool proof and they tried to fake it and then the fake version messed up.
I distrust meta (and hate these voice assistants) as much as the next guy but to me it’s obvious that you would prepare the prompt and use pretty much the exact phrasing. Also, repeating yourself is normal if there’s no response at all. If it was truly all fake why not just cheat outright and just prerecord all of it?
> Either way it suggests some troubling early implications about how well Meta's AI work is going
I fully expect the AI to suck initially and then over many months of updates evolve to mostly annoying and only occasionally mildly useful.
However, the live stage demo failing isn't necessarily supporting evidence. Live stage demos involving Wifi are just hard because in addition to the normal device functionality they're demoing, they need to simultaneously compress and transmit a screen share of the final output back over wifi so the audience can see it. And they have to do all that in a highly challenging RF environment that's basically impossible to simulate in advance. Frankly, I'd be okay with them using a special headset that has a hard-wired data link for the stage demo.
I assume you couldn't watch the video because it's just a live stream of a guy standing in a kitchen and talking to his glasses. He's not on the stage with hundreds of people on the wifi and you can't see what the glasses are displaying at all.
I run multiple live streams from speakers to conference rooms and other bandwidth-intensive offerings throughout the day in an incredibly crowded RF space. WiFi is certainly up to the task. Meta is a nearly $2 trillion company; a failure of this order is ridiculous.
I've done live demos of AI. Even with the same queries, I got different answers than my 4 previous practice attempts. My demos keep me on my toes and I try to limit the scope much more now.
(I didn't have control over temperature settings.)
It looks like true 0-temperature (i.e. determinism) will happen. Here's some good context: https://thinkingmachines.ai/blog/defeating-nondeterminism-in...
HN discussion https://news.ycombinator.com/item?id=45200925
But 0 temp is much less "creative" and may not be conducive to showing off the AI's latest tricks
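For context on why low temperature trades creativity for predictability: dividing the model's logits by a temperature below 1 sharpens the output distribution, and at the limit of 0 it collapses to plain argmax. A minimal sketch (the logit values here are made up for illustration):

```python
import math

def sample_probs(logits, temperature):
    """Softmax over temperature-scaled logits; lower T sharpens the distribution."""
    if temperature == 0:
        # Greedy decoding: all probability mass on the top logit.
        probs = [0.0] * len(logits)
        probs[logits.index(max(logits))] = 1.0
        return probs
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]            # hypothetical next-token scores
print(sample_probs(logits, 1.0))    # relatively spread out
print(sample_probs(logits, 0.2))    # sharply peaked on the top token
print(sample_probs(logits, 0))      # pure argmax: [1.0, 0.0, 0.0]
```

At T=1 the top token gets roughly 63% of the mass here; at T=0.2 it gets over 99%, which is why low-temperature output feels repetitive.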
> (I didn't have control over temperature settings.)
That's...interesting. You'd think they'd dial the temperature to 0 for you before the demo at least. Regardless, if the tech is good, I'd hope all the answers are at least decent and you could roll with it. If not....then maybe it needs to stay in R&D.
Reducing temperature to 0 doesn't make LLMs deterministic. There's still a bunch of other issues, such as floating-point results depending on the order in which you perform operations that are associative on paper.
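The order-dependence is easy to demonstrate in any language with IEEE 754 floats; a quick sketch with arbitrary values:

```python
# Floating-point addition is commutative but not associative: summing the
# same numbers with different groupings can round to different results,
# which is one reason parallel reductions on GPUs aren't bit-reproducible.
a, b, c = 0.1, 0.2, 0.3

left = (a + b) + c   # 0.6000000000000001
right = a + (b + c)  # 0.6

print(left == right)  # False
```

The same effect shows up whenever a GPU kernel or batched matmul reduces values in a nondeterministic order.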
This one was also pretty bad: https://x.com/jason/status/1968496622884495847?s=46&t=9d1Ha4...
I think there's some respect to give because they're doing it live and non-scripted.
Non-scripted? You must be kidding.
I take it they meant pre-recorded. It was definitely scripted and practiced.
Respect for trying it live now Apple just does pre-recorded with a ton of VFX.
If you’ve ever used the current Meta Ray Ban and AI, this almost exactly happens when the connection is bad. Pure confusion but the AI still tries to give you an answer.
I bet the device hardware is small/cheap and susceptible to interference
I have the Meta glasses and I've never noticed this, and don't even understand why it could be the connection's fault. The AI gets your audio and your image, if it gives the wrong answer, it's because the AI went wrong. How would the bad connection ever affect it?
Exactly. Like... what are they even saying here - that if the connection drops then it falls back to a tiny "dropped on their head as a child" 4b parameter LLM embedded in the physical firmware and so that's why it is giving inane responses?
Mad props to the presenter for holding it together though.
The ai is in the cloud
Edit0: i.e. without internet access the AI is unable to produce an answer other than some prerecorded ones, I guess
In the live showcase the presenter even mentions that the wifi must have been bad for the ai to repeat the answer
next time they need 1 public and 1 private router and shut the public off right before the demo.
Even if it’s small/cheap, if the item is scanned multiple times this will prevent any electrical infetterence.
I don’t even think that’s a word!
It’s the WiFi, ya sure.
Yeah I was also cringing at that cop out. It doesn’t appear connectivity related. Plus even if it was, it beautifully highlights the connectivity requirement which sucks for so many reasons.
Ouch. Kudos for trying, though. I miss the days of live demos at Apple events, instead of all these polished videos of people standing in silly poses around the Apple campus.
I have mad respect to them for actually attempting this on the fly - especially a public company. Nothing really to gain versus a scripted demo, and absolutely everything to lose. Admirable.
Obviously scripted, just the LLM didn't follow its part of the script.
Hearing this AI-generated voice awakens some primal aggression in me.
This is why Jobs spent months prepping for each presentation.
But hey, at least it's not all faked
When I was at Meta (then Facebook), people lived and died by the live demo credo.
Pitches can be spun, data is cherry picked. But the proof is always in the pudding.
This is embarrassing for sure, but from the ashes of this failure we find the resolve to make the next version better.
Yep I hope that mindset never dies. Meta is one of the last engineering-first companies in big tech and willing to live demo something so obviously prone to mishaps is a great sign of it. It's not unlike SpaceX and being willing to iterate by crashing Starships for the world to see. You make mistakes and fix them, no big deal.
why did they choose to air this live?
For an internal team sure absolutely, but for public-facing work, prerecorded is the way to go
I saw Jobs give a demo of some NeXT technology and the system crashed and rebooted right in the middle of it. He just said “oops” and talked around it until the system came back up.
i love jobs but i do remember the “everybody please turn off your laptops” presentation.
live demonstrations are tough - i wish apple would go back to them.
Totally agree. Up until a few years ago failures during live demos on stage used to be a mark of authenticity, and companies playing recordings was always written off as exaggerated or fake. Now all of Apple's keynotes are prerecorded overproduced garbage.
"At least it's not faked" was my main reaction, too. Some other big-tech AI-related demos the last couple years have been caught being faked.
Zuckerberg handling it reasonably well was nice.
(Though the tone at the end of "we'll go check out what he made later" sounded dismissive. The blame-free post-mortem will include each of the personnel involved in the failure, in a series of one-on-one MMA sparring rounds. "I'm up there, launching a milestone in a trillion-dollar strategic push, and you left me @#$*&^ my @#*$&^@#( like a #@&#^@! I'll show you post-mortem!")
I appreciate the live demo but I'm surprised they didn't at least have a prerecorded backup. I wanted to see how video calls work!
Considering there's no camera pointing to your face they can't be all that interesting.
It was painful even before it started malfunctioning
The demo gods were not present that day
It was the WiFi though
Typical Meta product. I used to believe and wasted money on multiple generations of Quest & Ray-bans. I expect this device to be unsupported at launch, just like Quest Pro was
The portal was like their best product and they just abandoned it.
So when I talk, but not to it, it may respond, like when I accidentally trigger Siri? Except it's every time?
For those who didn't pick up on it, they were being sarcastic about the issue being wifi related haha
That was not sarcasm. They were being serious.
I’m surprised everyone is saying they weren’t sarcastic. They were even being MORE sarcastic about it being the Wi-Fi after the failed WhatsApp call.
It didn't sound like sarcasm at all to me?