Comment by yetanotherdood

1 day ago

Unix domain sockets are the standard mechanism for app->sidecar communication at Google (e.g., talking to the TI envelope for logging, etc.).
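
For anyone who hasn't used them, the app side of that kind of hookup is basically just an AF_UNIX connect. A minimal sketch in C (the socket path below is made up, not any real sidecar's path):

    /* Minimal app->sidecar client over a Unix domain socket.
       The path is hypothetical; a real sidecar publishes its own socket. */
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/un.h>
    #include <unistd.h>

    int main(void) {
        int fd = socket(AF_UNIX, SOCK_STREAM, 0);
        if (fd < 0) { perror("socket"); return 1; }

        struct sockaddr_un addr = { .sun_family = AF_UNIX };
        strncpy(addr.sun_path, "/run/sidecar/logging.sock", sizeof(addr.sun_path) - 1);

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
            perror("connect");
            return 1;
        }

        const char *msg = "log line from the app\n";
        write(fd, msg, strlen(msg));   /* the sidecar reads this on the other end */
        close(fd);
        return 0;
    }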

Servo's ipc-channel doesn't use Unix domain sockets to move data. It uses them to share a memfd file descriptor, effectively creating a memory buffer shared between the two processes.
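
(ipc-channel itself is Rust, but the general technique, stripped down to C, is: create a memfd, pass the descriptor over the socket with SCM_RIGHTS, and let the receiver mmap the same buffer, so the socket only carries a control message while the data lives in shared memory. Rough sketch only, not ipc-channel's actual code:)

    /* Send a memfd over an already-connected Unix socket via SCM_RIGHTS.
       Only the descriptor travels over the socket; the data lives in the
       shared memfd, which the receiver can mmap. */
    #define _GNU_SOURCE
    #include <string.h>
    #include <sys/mman.h>
    #include <sys/socket.h>
    #include <sys/uio.h>
    #include <unistd.h>

    int send_memfd(int sock, size_t size) {
        int memfd = memfd_create("ipc-buffer", MFD_CLOEXEC);
        if (memfd < 0 || ftruncate(memfd, size) < 0) return -1;

        char byte = 0;                           /* dummy payload byte */
        struct iovec iov = { .iov_base = &byte, .iov_len = 1 };

        char cbuf[CMSG_SPACE(sizeof(int))];
        struct msghdr msg = {
            .msg_iov = &iov, .msg_iovlen = 1,
            .msg_control = cbuf, .msg_controllen = sizeof(cbuf),
        };
        struct cmsghdr *cmsg = CMSG_FIRSTHDR(&msg);
        cmsg->cmsg_level = SOL_SOCKET;
        cmsg->cmsg_type = SCM_RIGHTS;            /* this is the fd-passing part */
        cmsg->cmsg_len = CMSG_LEN(sizeof(int));
        memcpy(CMSG_DATA(cmsg), &memfd, sizeof(int));

        return sendmsg(sock, &msg, 0) < 0 ? -1 : memfd;
    }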

Search around on Google Docs for my 2018 treatise/rant about how the TI Envelope was the least-efficient program anyone had ever deployed at Google.

  • Ok, now it sounds like you're blaming unix sockets for someone's shitty code...

    No idea what "TI Envelope" is, and a Google search doesn't turn up usable results (oh the irony...). If it's a logging/metrics thing, those are hard to get to perform well regardless of socket type. We ended up using batching with mmap'd buffers for crash analysis (i.e., the mmap part only comes in if the process terminates abnormally, so we can recover the batched, unwritten bits).
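
    (A bare-bones sketch of that batching idea; the path, size, and names below are made up. The point is that records land in a file-backed MAP_SHARED mapping as they are batched, so they survive an abnormal exit and a post-mortem tool can pull them back out:)

        /* Batched logging into a file-backed mapping. Written bytes persist
           even if the process dies before an explicit flush; a real version
           would also persist the write offset in a header inside the mapping. */
        #include <fcntl.h>
        #include <string.h>
        #include <sys/mman.h>
        #include <unistd.h>

        #define BUF_SIZE (1 << 20)

        static char  *buf;
        static size_t used;

        int log_init(const char *path) {
            int fd = open(path, O_RDWR | O_CREAT, 0600);
            if (fd < 0 || ftruncate(fd, BUF_SIZE) < 0) return -1;
            buf = mmap(NULL, BUF_SIZE, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
            close(fd);               /* the mapping stays valid after close */
            return buf == MAP_FAILED ? -1 : 0;
        }

        void log_append(const char *msg) {
            size_t len = strlen(msg);
            if (buf && used + len <= BUF_SIZE) {
                memcpy(buf + used, msg, len);    /* lands in the mapping immediately */
                used += len;
            }
        }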

    • > Ok, now it sounds like you're blaming unix sockets for someone's shitty code...

      No, I am just saying that the unix socket is not Brawndo (or maybe it is?); it does not necessarily have what IPCs crave. Sprinkling it into your architecture may or may not be relevant to the efficiency and performance of the result.
