Comment by bluGill

8 hours ago

Protobuf does something important that copying memory cannot: the protocol can be changed independently on either end and things still work. You have to build for "my client doesn't send some new field" (pick a good default) and "I got extra data I don't understand" (ignore it), as sketched below. The ability to upgrade one part of the system at a time is critical when the system is large and complex, since you can't update everything to understand your new feature at once without making the rollout take ages.
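To make those two rules concrete, here's a minimal Python sketch of the idea, not protobuf's actual wire format or API: decoding fills in defaults for fields the sender didn't know about and drops fields the receiver doesn't know about. The message shape, field names, and defaults are all made up for illustration.

```python
# Sketch of protobuf-style schema evolution (not the real wire format):
# decode against a schema of (field_name, default) pairs, filling in
# defaults for missing fields and ignoring unknown ones.

SCHEMA_V1 = {"user_id": 0, "name": ""}                  # old client's view
SCHEMA_V2 = {"user_id": 0, "name": "", "email": ""}     # new server's view

def decode(message: dict, schema: dict) -> dict:
    """Keep only fields this side knows about; default the rest."""
    return {field: message.get(field, default)
            for field, default in schema.items()}

# Old client sends a message without the new 'email' field:
old_msg = {"user_id": 42, "name": "alice"}
print(decode(old_msg, SCHEMA_V2))
# -> {'user_id': 42, 'name': 'alice', 'email': ''}   (good default)

# New server sends extra data the old client doesn't understand:
new_msg = {"user_id": 42, "name": "alice", "email": "a@example.com"}
print(decode(new_msg, SCHEMA_V1))
# -> {'user_id': 42, 'name': 'alice'}   (extra field ignored)
```

Either end can be upgraded first and the other keeps working, which is exactly what lets a big system roll out a feature piecemeal.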

Protobuf also handles a bunch of languages for you. If the other team wants to write in a "stupid language", you don't have to win a political fight to prove your preferred language is best for everything. You just let that team do what they want, and they can learn the hard way that it was a bad choice. Either it isn't really that bad, so the fight was pointless, or it is, and management can find other metrics to prove it, at which point it becomes their problem to decide whether it's bad enough to be worth fixing.

But something more modern that doesn't have the encoding/decoding penalty of Protobuf would be better (e.g. Cap'n Proto, though there are a bunch of options in this space now).
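Cap'n Proto's pitch is that fields can be read straight out of the received buffer, with no separate decode pass. Here's a minimal Python sketch of that zero-copy idea under an invented fixed layout (u32 id, u16 name length, name bytes); this is not Cap'n Proto's actual format or API, just the access pattern.

```python
import struct

# Hypothetical fixed little-endian layout: u32 user_id, u16 len, name bytes.
buf = struct.pack("<IH5s", 42, 5, b"alice")  # what arrived off the wire

# Decode-then-use (protobuf-style): unpack everything into new objects
# up front, whether or not the caller ever looks at each field.
user_id, name_len, name = struct.unpack("<IH5s", buf)

# Zero-copy style (the Cap'n Proto idea): keep the buffer and read a
# field in place only at the moment it is actually accessed.
view = memoryview(buf)

def get_user_id(msg: memoryview) -> int:
    return struct.unpack_from("<I", msg, 0)[0]

def get_name(msg: memoryview) -> bytes:
    n = struct.unpack_from("<H", msg, 4)[0]
    return bytes(msg[6:6 + n])

print(get_user_id(view), get_name(view))  # -> 42 b'alice'
```

The win is that a message you only partially read costs almost nothing to "parse", at the price of a stricter wire layout.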

  • Not that you're wrong, but in the real world this is not significant for most uses. If it is significant, you're doing too much IPC, or maybe using protobuf where you should be making a direct function call; fix the architecture either way. (Similar to how I can make bubble sort faster with careful machine-code optimization, but it's hard to make a modern Timsort slower in the real world no matter how bad the implementation is.)
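One way to sanity-check that claim on a given workload: time the encode/decode step against a local IPC round trip and see which dominates. A rough sketch, using JSON as a stand-in encoder and a socketpair echo as the IPC; the message shape and iteration count are arbitrary, and the numbers will vary by machine.

```python
import json
import socket
import time

msg = {"user_id": 42, "name": "alice", "scores": [1, 2, 3]}
N = 10_000

# Cost of encoding + decoding alone.
t0 = time.perf_counter()
for _ in range(N):
    json.loads(json.dumps(msg))
encode_cost = time.perf_counter() - t0

# Cost of an actual local IPC round trip (socketpair echo).
a, b = socket.socketpair()
payload = json.dumps(msg).encode()
t0 = time.perf_counter()
for _ in range(N):
    a.sendall(payload)
    b.sendall(b.recv(4096))   # echo the message back
    a.recv(4096)
ipc_cost = time.perf_counter() - t0
a.close(); b.close()

print(f"encode/decode: {encode_cost:.3f}s  ipc round trip: {ipc_cost:.3f}s")
```

If the round trip dwarfs the encoding, a faster serializer won't move the needle; batching or removing the IPC boundary will.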