Comment by vlovich123

2 days ago

If you’re live streaming video, you can make every frame a P-frame, which brings your bandwidth costs to a minimum, but then a single lost packet corrupts the stream permanently from that point on. Or you periodically refresh the stream with I-frames sent over a reliable channel, so that a lost packet corrupts the video going forward only momentarily.
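A rough back-of-envelope sketch of that trade-off, with made-up illustrative frame sizes (not measurements from any real codec):

```python
# Hypothetical comparison of two 30 fps streams: all P-frames vs.
# an I-frame every 2 seconds. Frame sizes are assumed for illustration.

FPS = 30
I_FRAME_BYTES = 50_000   # assumed size of a keyframe
P_FRAME_BYTES = 5_000    # assumed size of a predicted frame

def bitrate_bps(keyframe_interval_frames):
    """Average bits/sec for a stream with one I-frame every
    `keyframe_interval_frames` frames (0 means all P-frames)."""
    if keyframe_interval_frames == 0:
        bytes_per_sec = FPS * P_FRAME_BYTES
    else:
        i_per_sec = FPS / keyframe_interval_frames
        p_per_sec = FPS - i_per_sec
        bytes_per_sec = i_per_sec * I_FRAME_BYTES + p_per_sec * P_FRAME_BYTES
    return int(bytes_per_sec * 8)

# All-P: cheapest, but a lost packet corrupts everything after it.
print(bitrate_bps(0))   # 1200000 bps
# I-frame every 2 s: ~15% more bandwidth, but corruption is capped at 2 s.
print(bitrate_bps(60))  # 1380000 bps
```

With these (hypothetical) numbers the periodic-keyframe stream costs about 15% more bandwidth but bounds the damage of any single lost packet.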

Sure, if the performance characteristics were the same, people would go for strong consistency. The reason so many different consistency models are defined is that different tradeoffs are preferable for a given problem domain with specific business requirements.

You've got the frame types backwards, which is probably contributing to the disagreement you're seeing.

If the video is streaming, people don't really care if a few frames drop, hell, most won't notice.

It's only when several frames in a row are dropped that people start to notice, and even then they rarely care, as long as the message within the video has enough data points for them to make an (educated) guess.

  • P/B frames (which are usually most of them) reference other frames to compress motion efficiently. So losing a packet doesn't just mean a dropped frame; it means corruption that persists until the next I-frame/slice, which can be seconds away. If you've ever seen corrupt video that seems to "smear" wrong colors across the screen for a run of frames, that's what we're talking about here.

  • Okay, but now you're conceding that correctness is not the only reasonable goal. It's possible to sacrifice some degree of correctness for enormous gains in performance, because absolute correctness comes at a cost that might simply not be worth it.
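The corruption-propagation point above can be sketched as a toy simulation: losing a frame in a P-frame-heavy stream corrupts every dependent frame until the next I-frame resets decoder state. The frame pattern here is illustrative, not taken from any real bitstream:

```python
# Toy model: each P-frame depends on the frame before it, so a lost
# frame visually corrupts everything up to the next I-frame.

def corrupted_frames(frame_types, lost_index):
    """Return indices rendered incorrectly when frame `lost_index` is
    lost: the lost frame plus every following frame up to (but not
    including) the next I-frame."""
    corrupted = [lost_index]
    for i in range(lost_index + 1, len(frame_types)):
        if frame_types[i] == "I":
            break  # keyframe carries a full picture; decoding recovers
        corrupted.append(i)
    return corrupted

gop = list("IPPPPPPPPPIPPPPPPPPP")  # I-frame every 10 frames
print(corrupted_frames(gop, 2))   # frames 2-9 smear until frame 10's I-frame
print(corrupted_frames(gop, 12))  # frames 12-19 smear to the end
```

Shrinking the keyframe interval shortens the smear but raises bandwidth, which is exactly the trade-off under discussion.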