Comment by ravenstine
2 years ago
I find it funny people are still using these voice "assistants" despite the frequent frustration. Alexa gets it wrong more than half the time when people I know try to use it, but they seem to still want to believe.
All of these "AI/ML"-driven products seem to have a whiz-bang initial release where they seem 80% ready, and then they never come close to closing the remaining 5/10/15/20%.
Hell, I used Dragon Naturally Speaking in the late 90s, and the stuff now doesn't even feel 10x better despite billions invested and 10000x the compute.
Self-driving cars feel similar. Always five years away from mass market.
We can tune these systems to be pretty good most of the time, but getting them good enough all of the time, out to a bunch of nines, ends up being a moonshot by comparison.
Really curious how much better, and along what dimensions, these generative models / LLMs will actually get in 5/10/15 years.
Dragon Naturally Speaking was, ironically, more flexible than today's voice tech, aside from the fact that it wasn't internet-connected (I think?). It's not like you can attempt to write an essay with Alexa or control a browser window with it. What's also funny is how we have this narrative that cloud computing is a necessity for AI, and yet Dragon had NLP that fit on a CD-ROM. Ok, maybe it came on multiple discs... I'm forgetting, but my point still stands.
Most of our advances have been in marketing rather than substance.
The current generation of AI/ML may change that in some way. Dragon Naturally Speaking may have been a thing in the 90s, but I'm pretty sure we didn't have anything close to GPT or Stable Diffusion.
At least for home device control and in-car use, hands-free control more than makes up for the error rate in lots of applications. For a lot of the stuff people do on their phone, where the phone is already in their hand, the trade-off seems less clear, but there is a lot of subjectivity involved.