Comment by prvc

2 years ago

>It's a minimum of 3x faster than its contemporaries.

Ok, but what would make that useful?

> Ok, but what would make that useful?

Lower run-time means less electricity and less tying up of the CPU, making it available for other things. As a real-life example: I frequently use my Raspberry Pi 4 to convert videos from one format to another. This past week I got a Pi 5 and moved the conversion to that machine: it takes maybe a quarter as much time. The principle with a faster converter, as opposed to faster hardware, is the same: the computer isn't tied up for as long and isn't draining as much power.

  • Yes, but there's a threshold for effective improvements. If the more compatible and more efficient format only uses 16 seconds to encode 1 hour of audio, it's hard to imagine this making a big difference in any real use case, offline or real-time.

    • > ... it's hard to imagine this making a big difference in any real use case, offline or real-time.

      Google once, back in 2013, made an API change to their v8 engine because it saved a small handful of CPU instructions on each call into client-defined extension functions[1]. That change broke literally every single v8 client in the world, including thousands of lines of my own code, and I'm told that the Chrome team needed /months/ to adapt to it.

      Why would they cause such disruption for a handful of CPU instructions?

      Because at "Google Scale" those few instructions add up to a tremendous amount of electricity. Saving even 1 second per request or offline job, when your service handles thousands or millions of requests/jobs per day, adds up to a considerable amount of CPU time, i.e. to a considerable amount of electricity, i.e. to considerable electricity cost savings.

      [1]: https://groups.google.com/g/v8-users/c/MUq5WrC2kcE
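      As a rough back-of-the-envelope sketch of how that compounding works (the per-request saving and request volume here are made-up illustrative numbers, not Google's actual figures):

      ```python
      # How per-request CPU savings compound at scale. All figures are
      # hypothetical, purely to illustrate the argument above.

      def cpu_days_saved(seconds_saved_per_request: float, requests_per_day: int) -> float:
          """CPU-days of compute saved across one day of traffic."""
          return seconds_saved_per_request * requests_per_day / 86_400  # 86,400 s per day

      # Saving 1 second on each of a million daily requests:
      print(f"{cpu_days_saved(1, 1_000_000):.1f} CPU-days saved per day")  # → 11.6
      ```

      That is roughly a dozen machines' worth of compute, every day, from a one-second saving.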


Lower latency for real-time streaming over the Internet, for one.

  • Whilst faster decoding is always useful, most audio decoding easily completes within the playback duration of a typical output buffer (e.g. 512 samples at 44.1 kHz ≈ 12 ms). As long as your machine can decode within that window, there is no difference in latency.
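    The buffer arithmetic in the parenthetical can be checked directly (numbers taken from this comment; any decoder that meets this deadline adds no extra latency):

    ```python
    # Playback duration of one output buffer, i.e. the real-time decode deadline:
    # the decoder only has to produce each buffer faster than it is played back.

    def buffer_duration_ms(samples: int, sample_rate_hz: int) -> float:
        return samples / sample_rate_hz * 1000

    print(f"{buffer_duration_ms(512, 44_100):.1f} ms")  # → 11.6 ms
    ```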