gRPC: From service definition to wire format

5 days ago (kreya.app)

I like the idea of gRPC because I wanted the contract, but I tried it on a small service and I think I would avoid it in the future. Too many rough edges and features I didn't really need. I was using it mainly from Rust and Python (maybe it is better in Go?), but it had a whole bunch of Google stuff in there I didn't need:

- Configuring the Python client with a JSON string that did not seem to have a documented schema

- Error types that were overly general in some ways and overly specific in other ways

- HAProxy couldn't easily health check the service

There were a few others that I can't remember because it was ~5 years ago. I liked the idea of the contract, and protobuf seemed easy to write, but I had no need for client-side DNS load balancing and the like, and I wasn't working in Go.

  • I think connect-rpc[0] strikes a good balance between normal HTTP APIs and gRPC. It allows protobuf as JSON, so you could think of it as an opinionated HTTP API spec. A health check would be just a call to a URL like /something.v1.MyService/MyMethod -d '{ "input": "something" }' (see the sketch below).

    It works really well and the tooling is pretty good, though it isn't that widely supported yet. Rust, for one, doesn't have an implementation. But I've been using it at work (Go and TypeScript), and we basically haven't had any issues with it.

    But the good thing is that it can interoperate with normal gRPC servers, etc. That of course locks it into the protobuf wire format, which is part of the trouble ;)

    0: https://connectrpc.com/
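
    A minimal Go sketch of that kind of health check, to make the "it's just HTTP" point concrete: a plain POST with a JSON body to the Connect-style path, with no generated client or gRPC framing involved. The host, port and the something.v1.MyService/MyMethod path are hypothetical placeholders taken from the example above.

    ```go
    package main

    import (
        "bytes"
        "fmt"
        "io"
        "net/http"
    )

    func main() {
        // Connect unary calls are plain HTTP POSTs to /<package>.<Service>/<Method>
        // with a JSON (or binary protobuf) body; JSON is used here for readability.
        body := bytes.NewBufferString(`{"input": "something"}`)
        resp, err := http.Post(
            "http://localhost:8080/something.v1.MyService/MyMethod", // hypothetical endpoint
            "application/json",
            body,
        )
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()

        out, _ := io.ReadAll(resp.Body)
        fmt.Println(resp.Status, string(out)) // JSON response body, readable as-is
    }
    ```

    The same request works from curl or any other HTTP client, which is also what makes it easy to point an ordinary load-balancer health check at a Connect endpoint.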

    • Using connectrpc was a pretty refreshing experience for me. Implementing a client, for the HTTP parts at least, is pretty easy! I was able to implement a basic runner for Forgejo using the protobuf spec for the runner plus libcurl within a few days.

  • I've only enjoyed using Protobuf + gRPC since we started using https://buf.build. Before that it was always a pain, with Makefiles building obscure commands, developers having different versions of the Protobuf compiler installed, and all kinds of paper cuts like that.

    Now it's just "buf generate": every developer has the exact same settings defined in the repo, and on the frontend side we just import the generated TypeScript client and instantly have all the types available there. It's also nice to have hosted documentation to link people to.

    My experience is mostly with Go, Python and TS.

My gripe with gRPC is that it doesn't play super well with Kubernetes Services… you have to take a little bit of care: you need to understand how k8s Services work and how load balancing in gRPC works. Ideally I would want to use protobuf as an interchange format with a "dumb" HTTP server that understands it.

That being said… once you do configure it properly (see the client-side load-balancing sketch below) it can be a powerful tool. The complexity, though, is usually not worth it unless you're at a certain scale.
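
A minimal grpc-go sketch of that "little bit of care", under two assumptions: the Kubernetes Service is headless (clusterIP: None) so DNS returns the individual pod IPs rather than one virtual IP, and the client opts into round-robin so a single long-lived HTTP/2 connection doesn't pin every request to one pod. The service name and port are hypothetical; on older grpc-go versions, grpc.Dial takes the same options as grpc.NewClient here.

```go
package main

import (
    "google.golang.org/grpc"
    "google.golang.org/grpc/credentials/insecure"
)

func main() {
    // dns:/// makes the client resolve and watch all addresses behind the name;
    // with a headless Kubernetes Service those are the pod IPs themselves.
    conn, err := grpc.NewClient(
        "dns:///my-service.my-namespace.svc.cluster.local:50051", // hypothetical headless Service
        grpc.WithTransportCredentials(insecure.NewCredentials()),
        // Without this, gRPC defaults to pick_first and all traffic lands on one backend.
        grpc.WithDefaultServiceConfig(`{"loadBalancingConfig":[{"round_robin":{}}]}`),
    )
    if err != nil {
        panic(err)
    }
    defer conn.Close()

    _ = conn // pass conn to the generated client constructors
}
```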

  • I wrote a gRPC xDS server for Kubernetes that is configuration-free. Basically you just load the xDS client library into your code and then use xds:///servicename.namespace (unlike DNS, the namespace is always required; see the sketch below). It should be as lightweight as, and scale in a similar way to, the cluster DNS.

    My company has run this exact code in production since it was created in 2022. We probably push several times more than 1000 rps of gRPC traffic internally, including over the public internet for hybrid-cloud connectivity. That being said, gRPC's xDS client is not always bug-free.

    https://github.com/wongnai/xds
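
    For reference, the client side of that setup is tiny; a minimal sketch, assuming grpc-go and a bootstrap file (usually pointed to by the GRPC_XDS_BOOTSTRAP environment variable) that tells the client where the xDS server lives. The service and namespace names are placeholders.

    ```go
    package main

    import (
        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        // The blank import registers the xds:/// resolver and load-balancing policies.
        _ "google.golang.org/grpc/xds"
    )

    func main() {
        // Expects a bootstrap config, usually via the GRPC_XDS_BOOTSTRAP env var,
        // naming the xDS server to fetch endpoints from.
        conn, err := grpc.NewClient(
            "xds:///my-service.my-namespace", // namespace is always required, as noted above
            grpc.WithTransportCredentials(insecure.NewCredentials()),
        )
        if err != nil {
            panic(err)
        }
        defer conn.Close()

        _ = conn // pass conn to the generated client constructors
    }
    ```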

Protobuf is good, but it's not perfect. The handling of "oneof" fields is weird, the Python bindings were written by drunk squirrels, enums are strange, etc.

gRPC is terrible, but ConnectRPC allows sane integration of PB with regular browser clients. Buf.build also has a lot of helpful tools, like backwards compatibility checking.

But it's not worse than other alternatives like Thrift. And waaaaaaaaaayyyyyy better than OpenAPI monstrosities.

> The contract-first philosophy

gRPC/protobuf is largely a Google cult. I've seen too many projects with complex business logic simply give up and embed JSON strings inside protobuf messages. Like, WTF...?

Everything is good in the beginning, as long as everyone submits their .proto files to a centralized repo. Once one team starts hosting their own, things break quickly.

It has occurred to me that gRPC could optionally just serve those .proto files in the initial HTTP/2 handshake on the wire. It would add just a few kilobytes but would solve a big problem.

  • > It has occurred to me that gRPC could optionally just serve those .proto files in the initial HTTP/2 handshake on the wire

    Do you mean the reflection protocol, or some other .proto files?

  • I personally really like gRPC and protobufs. I think they strike a good balance between a number of indirectly competing objectives. However I completely agree with your observation that as soon as you move beyond a single source of truth for the .proto files it all goes to shit. I've seen some horrible things--generated code being committed to version control and copied between repos, .proto files duplicated and manually kept up to date (or not). Both had hilarious failure modes. There is no viable synchronization mechanism except to ensure that each .proto file is defined in exactly one place, that each time someone touches a .proto file all the downstream dependencies on that file are updated--everyone who consumes any code generated from that .proto--and that for every such change clients are deployed before servers. Usually these invariants are maintained by meatspace protocols which invariably fail.

    • I don't see why any of that would be necessary. There are simple rules for protobuf compatibility, and people just need to follow them: never reuse a field number to mean something else, and never change the type of a field. That's it. Those are the only rules. If you follow them, you don't have to think about any of the stuff you mentioned.

  • It does have discovery built in. Is that what you want?

    • You mean grpc.reflection.v1alpha.ServerReflection? Close enough; sadly it's not generally enabled.
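
      Enabling it on a Go server is a one-line addition; a minimal sketch (the listener address and service registration are placeholders):

      ```go
      package main

      import (
          "log"
          "net"

          "google.golang.org/grpc"
          "google.golang.org/grpc/reflection"
      )

      func main() {
          lis, err := net.Listen("tcp", ":50051")
          if err != nil {
              log.Fatal(err)
          }

          s := grpc.NewServer()
          // Register the generated services here, e.g. pb.RegisterMyServiceServer(s, &impl{}).

          // Expose the reflection service (grpc.reflection.v1alpha.ServerReflection,
          // mentioned above) so clients can list services and fetch descriptors
          // without having the .proto files.
          reflection.Register(s)

          log.Fatal(s.Serve(lis))
      }
      ```

      With that in place, a tool like grpcurl can run "grpcurl -plaintext localhost:50051 list" and discover the API with no local .proto files.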