Comment by prvc

2 years ago

Yes, but there's a threshold below which improvements stop mattering. If the more compatible and more efficient format takes only 16 seconds to encode 1 hour of audio, it's hard to imagine this making a big difference in any real use case, offline or real-time.

> ... it's hard to imagine this making a big difference in any real use case, offline or real-time.

Google once, back in 2013, made an API change to their v8 engine because it saved a small handful of CPU instructions on each call into client-defined extension functions[1]. That change broke literally every single v8 client in the world, including thousands of lines of my own code, and i'm told that the Chrome team needed /months/ to adapt to that change.

Why would they cause such disruption for a handful of CPU instructions?

Because at "Google Scale" those few instructions add up to a tremendous amount of electricity. Saving even 1 second per request or offline job, when your service handles thousands or millions of requests/jobs per day, adds up to a considerable amount of CPU time, i.e. to a considerable amount of electricity, i.e. to considerable electricity cost savings.
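
To make that arithmetic concrete, here's a toy calculation; the figures are purely illustrative, not Google's:

    #include <cstdio>

    int main() {
        // Illustrative figures: save 1 second on each of 1,000,000 jobs/day.
        const double secondsSavedPerJob = 1.0;
        const double jobsPerDay        = 1000000.0;
        const double cpuHoursPerDay    = secondsSavedPerJob * jobsPerDay / 3600.0;
        std::printf("%.1f CPU-hours saved per day\n", cpuHoursPerDay);
        // => about 277.8 CPU-hours, i.e. roughly 11.6 machine-days of
        // compute (and the electricity behind it) saved every single day.
        return 0;
    }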

[1]: https://groups.google.com/g/v8-users/c/MUq5WrC2kcE

  • Yes, but this must be weighed against increased storage costs, not to mention the computational cost of transcoding (and other costs that come with a proliferation of formats). Within the parameters of this application, and taking into account the relative costs of compute and storage (in money or energy), it is not clear to me that there would be any advantage to switching.

    • Compression ratios for audio are quite limited: in most cases it is hard to get below 50 percent of the original size.

      It is therefore not a logical trade to spend much more processing time just to squeeze a few more percent of compression out of one codec versus another, because higher processing time means higher energy use. A toy calculation with made-up numbers follows below.

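      Suppose codec A encodes 1 hour of audio in 4 s at 50% of the original size, while codec B takes 16 s to reach 48% (all figures hypothetical):

          #include <cstdio>

          int main() {
              // Hypothetical: codec A = 4 s/hour at 50%, codec B = 16 s/hour at 48%.
              const double extraCpuSec  = 16.0 - 4.0;           // extra compute per hour of audio
              const double sizeSavedPct = (0.50 - 0.48) * 100;  // extra compression gained
              std::printf("%.0f extra CPU-seconds buys %.0f%% smaller output\n",
                          extraCpuSec, sizeSavedPct);
              return 0;
          }

      Whether 12 extra CPU-seconds per hour of audio is worth 2 points of storage depends on how often each file gets (re)encoded versus how long, and in how many places, it is stored.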

    • > ... it is not clear to me that there would be any advantage to switching.

      Indeed, getting an accurate answer would require looking at the whole constellation for a given use case.

  • Nobody is going to use this at Google scale without source code! However, knowing that something exists elsewhere can push somebody to re-invent it locally.

  • Saving a small fraction of a second millions of times over, or a handful of cycles a trillion times over, is so much more impactful than saving a dozen seconds per hour-long recording.

    Also, your link doesn't explain what they changed?

    • > Saving a small fraction of a second millions of times over, or a handful of cycles a trillion times over, is so much more impactful than saving a dozen seconds per hour-long recording.

      At a large-enough scale, all savings are significant.

      > Also, your link doesn't explain what they changed?

      They changed a function signature to use an output argument instead of a return value. i don't recall the exact signature, but it was conceptually like:

          v8::Value foo(...);
      

      to

          void foo(..., v8::Value &result);
      

      Why? Because their measurements showed a microscopic per-call savings for the latter construct.
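
      A stripped-down sketch of the two shapes, using a stand-in struct rather than v8's real types (illustrative only, not v8's actual API):

          #include <string>

          // Stand-in for a heavyweight value/handle type; NOT the real v8::Value.
          struct Value { std::string payload; };

          // Return-by-value: depending on ABI and optimization level, each call
          // may pay for constructing/moving the returned temporary.
          Value fooByReturn() { return Value{"x"}; }

          // Out-parameter: the caller owns the storage and the callee writes
          // into it, which can shave a few instructions per call on some ABIs.
          void fooByOutParam(Value &result) { result.payload = "x"; }

          int main() {
              Value a = fooByReturn();
              Value b;
              fooByOutParam(b);
              return a.payload == b.payload ? 0 : 1;
          }

      Modern copy elision makes return-by-value cheap in many cases, but at v8's call frequencies even a couple of instructions per boundary crossing was apparently worth the churn.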

      PS: i wasn't aware that source code for this codec is not available. That of course puts a damper on it.
