Comment by Kinrany
2 days ago
I continue to be surprised that in these discussions correctness is treated as some optional, highest-possible level of quality rather than as the only reasonable state.
Suppose we're talking about multiplayer game networking, where the central store receives torrents of UDP packets and it is assumed that like half of them will never arrive. It doesn't make sense to view this as "we don't care about the player's actual position". We do. The system just has tolerances for how often the updates must be communicated successfully. Lost packets do not make the system incorrect.
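To make the "tolerances, not correctness" framing concrete, here's a minimal sketch of a receiver that tolerates lost UDP position updates. All names and the 250 ms staleness window are invented for illustration; real engines tune this per game:

```python
MAX_STALENESS = 0.25  # seconds without a successful update before it's a problem (invented number)

class PositionTracker:
    def __init__(self):
        self.last_seq = {}     # player_id -> highest sequence number seen
        self.last_update = {}  # player_id -> time of that update

    def on_packet(self, player_id, seq, position, now):
        # Drop duplicate or out-of-order packets; UDP guarantees neither
        # delivery nor ordering.
        if seq <= self.last_seq.get(player_id, -1):
            return False
        self.last_seq[player_id] = seq
        self.last_update[player_id] = now
        return True

    def is_within_tolerance(self, player_id, now):
        # Lost packets are fine as long as *some* update landed recently.
        last = self.last_update.get(player_id)
        return last is not None and (now - last) <= MAX_STALENESS
```

The point is that `is_within_tolerance` is the system's actual correctness criterion; individual packet loss never appears in it.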
A soft-realtime multiplayer game is always incorrect (unless no one is moving).
There are various decisions the netcode can make about how to reconcile with this incorrectness, and different games make different tradeoffs.
For example, in hitscan FPS games, when two players fatally shoot one another at the same time, some games will only process the first packet received and award the kill to that player, while other games will allow kill trading within some time window.
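The two policies can be sketched side by side. This is a toy model, not any real game's netcode; the 100 ms trade window is an invented number:

```python
TRADE_WINDOW = 0.100  # seconds; invented for illustration, tuned per title in practice

def resolve_kills(shots, trade_window=TRADE_WINDOW):
    """shots: list of (arrival_time, shooter, target) tuples, all fatal hits.
    Returns the set of players awarded a kill."""
    shots = sorted(shots)  # order by server arrival time
    first_time = shots[0][0]
    kills, dead = set(), set()
    for t, shooter, target in shots:
        if trade_window == 0:
            # First-packet-wins: a shot from an already-dead player is discarded.
            if shooter not in dead:
                kills.add(shooter)
                dead.add(target)
        else:
            # Kill trading: shots arriving inside the window all count,
            # even if the shooter was killed by an earlier packet.
            if t - first_time <= trade_window:
                kills.add(shooter)
                dead.add(target)
    return kills
```

With `trade_window=0` the mutual-kill case awards one kill; with a nonzero window it awards both.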
A tolerance is just an amount of incorrectness that the designer of the system can accept.
When it comes to CRUD apps using read-replicas, so long as the designer of the system is aware of and accepts the consistency errors that will sometimes occur, does that make that system correct?
If you’re live streaming video, you can make every frame a P-frame, which minimizes your bandwidth costs, but then a single lost packet permanently corrupts the stream. Or you can periodically refresh the stream with I-frames sent over a reliable channel, so that lost packets corrupt the video only momentarily.
Sure, if performance characteristics were the same, people would go for strong consistency. The reason many different consistency models are defined is that there’s different tradeoffs that are preferable to a given problem domain with specific business requirements.
You've got the frame types backwards, which is probably contributing to the disagreement you're seeing.
If the video is streaming, people don't really care if a few frames drop, hell, most won't notice.
It's only when several frames in a row are dropped that people start to notice, and even then they rarely care as long as the message within the video has enough data points for them to make an (educated) guess.
P/B frames (which are usually most of them) reference other frames to compress motion effectively. So losing a packet doesn't mean a dropped frame; it means corruption that lasts until the next I-frame/slice. This can be seconds. If you've ever seen corrupt video that seems to "smear" wrong colors across the screen for a bunch of frames, that's what we're talking about here.
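A toy model of that propagation: frames are 'I' or 'P', and a P-frame depends on the previous decoded frame, so one lost packet corrupts everything until the next I-frame. Purely illustrative; real codecs also have slices, B-frames, and error concealment:

```python
def decode(frames, lost_indices):
    """frames: sequence of 'I'/'P' frame types; lost_indices: set of lost frames.
    Returns 'ok' or 'corrupt' per frame."""
    corrupted = False
    out = []
    for i, ftype in enumerate(frames):
        if ftype == "I":
            corrupted = i in lost_indices  # a surviving I-frame resets the error
        else:
            corrupted = corrupted or i in lost_indices  # P inherits its reference's state
        out.append("corrupt" if corrupted else "ok")
    return out

# Losing a single P-frame smears the error until the next I-frame:
# decode(list("IPPPIPPP"), {1})
# -> ['ok', 'corrupt', 'corrupt', 'corrupt', 'ok', 'ok', 'ok', 'ok']
```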
Okay, but now you're explaining that correctness is not necessarily the only reasonable state. It's possible to sacrifice some degree of correctness for enormous gains in performance, because absolute correctness comes at a cost that might simply not be worth it.
Back in the day there were some P2P RTS games that just sent duplicates. Each UDP packet would have the new game state and then one or more repetitions of previous ones. For lockstep P2P engines, the state that needs to be transferred tends toward just being the client's input, so it's tiny, just a handful of bytes. It makes more sense to duplicate ahead of time than to ack/nack and resend.
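A sketch of that redundancy scheme: each packet carries the newest input plus the previous N, so any single surviving packet delivers them all. The wire layout (a count byte, then 4-byte tick + 1-byte button bitmask per input) and N=3 are invented for illustration:

```python
import struct

REDUNDANCY = 3  # how many past inputs ride along with each new one (invented)

def build_packet(history):
    """history: list of (tick, buttons) tuples, newest last."""
    tail = history[-(REDUNDANCY + 1):]      # newest input plus up to N older ones
    payload = struct.pack("<B", len(tail))  # 1-byte count header
    for tick, buttons in tail:
        payload += struct.pack("<IB", tick, buttons)  # 4-byte tick, 1-byte bitmask
    return payload

def parse_packet(payload):
    (count,) = struct.unpack_from("<B", payload, 0)
    inputs, offset = [], 1
    for _ in range(count):
        tick, buttons = struct.unpack_from("<IB", payload, offset)
        inputs.append((tick, buttons))
        offset += 5
    return inputs
```

At 5 bytes per input, carrying three extra copies costs 15 bytes per packet, far cheaper than a round trip for a retransmit.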