Comment by GuB-42

8 hours ago

> BTW what makes you think writing their end in C would yield even higher performance?

C is not inherently faster, you are right about that.

But as I understand it, the library they use works with data structures designed for a C-like language, presumably full of raw pointers. These are not ideal to work with in Rust, so, presumably, they wrote their own data model in Rust fashion. That means they now need a conversion step, which is obviously slower than doing nothing.

They probably could have worked with the C structures directly, producing code as fast as C, but that wouldn't make for great Rust code. In the end, they chose the compromise of speeding up the conversion.
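To make the trade-off concrete, here is a minimal sketch of that conversion step. The `CRow`/`Row` names and layout are hypothetical, not from PgDog or the actual library; the point is that going from a raw-pointer C struct to an owned, idiomatic Rust type forces a copy:

```rust
use std::ffi::{CStr, CString};
use std::os::raw::c_char;

// What a C library might hand back over FFI: raw pointers plus a length,
// with no ownership information. (Hypothetical layout for illustration.)
#[repr(C)]
struct CRow {
    name: *const c_char,
    values: *const i64,
    len: usize,
}

// An idiomatic Rust model: owned, bounds-checked, no lifetimes to juggle.
#[derive(Debug, PartialEq)]
struct Row {
    name: String,
    values: Vec<i64>,
}

// The conversion step the comment is about. It copies both the string and
// the array, so it costs memory traffic that working on `CRow` directly
// through unsafe accessors would avoid.
unsafe fn convert(c: &CRow) -> Row {
    unsafe {
        Row {
            name: CStr::from_ptr(c.name).to_string_lossy().into_owned(),
            values: std::slice::from_raw_parts(c.values, c.len).to_vec(),
        }
    }
}

fn main() {
    // Simulate the C side with data kept alive for the duration of the call.
    let name = CString::new("users").unwrap();
    let vals = [1i64, 2, 3];
    let c_row = CRow { name: name.as_ptr(), values: vals.as_ptr(), len: vals.len() };

    let row = unsafe { convert(&c_row) };
    assert_eq!(row, Row { name: "users".into(), values: vec![1, 2, 3] });
    println!("{:?}", row);
}
```

Working on `CRow` directly would skip the copies but spread `unsafe` through the rest of the codebase, which is exactly the compromise described above.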

Also, Protobuf may be a poor choice from a performance perspective, but it is a good one for portability: it lets them support many languages more cheaply, and Rust was just one among them. The PgDog team gave Rust and their specific application special treatment.

> which means that now, they need to make a conversion, which is obviously slower than doing nothing.

One would think. But caches have grown large while memory speed and latency haven't scaled with compute. So long as the conversion fits in cache and operates on data already there from previous operations (which admittedly takes some care), there is often an embarrassing amount of compute sitting idle, waiting for the next response from memory. If your workload is memory-, disk-, or network-bound, conversions can often be "free" in terms of wall-clock time, at the cost of slightly more wattage burnt by the CPU(s). Much depends on the size and complexity of the data structure.
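The "free under I/O wait" argument can be sketched like this. The 50 ms network call is simulated with a sleep, and the conversion is a toy transform over data that fits in cache; all names here are illustrative, not from any real codebase:

```rust
use std::thread;
use std::time::{Duration, Instant};

// A toy format conversion over a buffer small enough to stay in L1/L2.
fn convert(raw: &[u32]) -> Vec<u64> {
    raw.iter().map(|&x| u64::from(x) * 2).collect()
}

fn main() {
    let raw: Vec<u32> = (0..10_000).collect();

    let start = Instant::now();
    // Kick off the simulated "network request" on another thread...
    let net = thread::spawn(|| thread::sleep(Duration::from_millis(50)));
    // ...and run the conversion while the CPU would otherwise sit idle.
    let converted = convert(&raw);
    net.join().unwrap();

    assert_eq!(converted[3], 6);
    // Wall-clock time is dominated by the 50 ms wait, not the conversion.
    println!("total: {:?}", start.elapsed());
}
```

In a real network-bound pipeline the overlap comes from async I/O rather than an explicit thread, but the shape of the argument is the same: the conversion's CPU cost hides behind the wait.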