Comment by wood_spirit
19 hours ago
The last year I’ve been doing all my dev on a vscode VM thingy my company set up. It’s just been getting better and better. It’s like local dev but, tbh, better. It’s at the point where I don’t even install dev tooling locally any more at all. My computer is just a thin client.
The aspect I miss is the distributed compilation hinted at in the article. I remember back at the end of the 1990s using distcc and things, but that never seemed to happen in the Java world, and the tooling like maven etc is structured to make everything one long dependent chain. Shame.
You want bazel. Once you've internalized the bazel (blaze) system, you want all builds and tests to work that way.
How do you internalize it?
Our bazel system is full of custom skylark code so understanding the build means effectively reading a bunch of ad-hoc code written with varying degrees of competence and with confusing dependencies. I’m kinda ashamed I don’t have a deep understanding of a tool I use daily - but every time I try reading the documentation I quickly give up.
The first thing is hermeticity and what it implies: caching. If targets are a strict function of inputs, and inputs can be hashed, then you can reliably cache them - including test results!
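That hermeticity-implies-caching idea fits in a few lines of Python. This is a toy sketch, not bazel's actual internals - the helper names (`action_key`, `run_cached`) are made up - but it shows why hermetic actions are cacheable: the key covers *everything* the action can see.

```python
import hashlib

# Toy action cache: key = hash of all inputs (sources, deps, command line),
# value = the build/test result. If nothing in the key changed, the cached
# result is valid by construction -- that's what hermeticity buys you.
cache = {}

def action_key(command, input_blobs):
    h = hashlib.sha256()
    h.update(command.encode())
    for blob in input_blobs:
        h.update(hashlib.sha256(blob).digest())
    return h.hexdigest()

def run_cached(command, input_blobs, run):
    key = action_key(command, input_blobs)
    if key not in cache:
        cache[key] = run()  # only executed on a cache miss
    return cache[key]
```

Re-run the same "build" with the same inputs and `run()` is never invoked a second time; change a single input byte and it is. Swap the dict for a shared key-value store and you get the distributed cache from the next point for free.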
The second thing is distributed caching. Done right, not only are your test results cached, but CI's test results can be cached too.
The third thing is distributed builds. This only starts to matter in big projects, but compilation is inherently a spiky load and if you can share a big pool of compute between a big pool of engineers, you get higher hardware utilization and lower latency to build artifacts.
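Points two and three are mostly a matter of configuration once the build is hermetic. A sketch of the relevant `.bazelrc` lines - the flags are real bazel flags, but the endpoints are placeholders for whatever cache/executor service you run:

```
# .bazelrc -- hypothetical endpoints
build --remote_cache=grpcs://cache.example.com        # share CI's action/test results
build --remote_executor=grpcs://remote.example.com    # run actions on the shared pool
build --remote_download_minimal                       # don't pull artifacts you won't open
build --jobs=200                                      # fan out across the pool
```

With CI and engineers pointed at the same cache, a test CI already ran on the same inputs is a cache hit on your machine.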
The fourth thing, something that isn't really feasible outside big tech, is you could be bazel all the way down in a big monorepo. One of the niftiest things at Google is to be able to put a printf inside a database server and run your client test, and blaze knows that it needs to rebuild the database server and it will do it automatically, so that you can get extra insight at almost any level in the stack.
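The printf-in-the-database-server trick falls out of ordinary dependency edges in BUILD files. A minimal sketch with hypothetical targets - the point is that the test declares the server as an input, so the server's sources are part of the test's cache key:

```
# //db/BUILD (Starlark)
cc_binary(
    name = "dbserver",
    srcs = ["server.cc"],  # add your printf here...
)

# //client/BUILD
cc_test(
    name = "client_test",
    srcs = ["client_test.cc"],
    data = ["//db:dbserver"],  # ...and running this test rebuilds
                               # the server first, automatically
)
```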
Probably not in the way that you might mean it, but for me (Xoogler, 2010 - 2023) internalizing bazel means:
"Hey, where's your tool's code in $MONOREPO?" "<path/to/stuff>"
Cool:
... and you get a running version of whatever $stuff is, immediately built from head, quickly - no matter the set of dependencies, or which language they were built in. I can just try your thing out immediately with a common interface for all the builds, and I don't need to understand the build at all, unless or until I do, and then OK, absolutely every single build is always expressed in exactly the same way, same idioms, same patterns...
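Concretely, that "common interface" is a single command shape shared by every target in the repo, whatever the language. Hypothetical label, mirroring the quoted exchange:

```
bazel run //path/to/stuff         # build from head and launch it
bazel test //path/to/stuff/...    # same interface for all its tests
```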

I would recommend learning the various "bazel query" variants, starting with a plain "bazel query": https://bazel.build/query/language
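A few of the query functions that page documents, with placeholder labels (`//foo:bar` etc. are not real targets) - these are the ones that make an unfamiliar skylark-heavy build navigable:

```
# What does //foo:bar depend on, transitively?
bazel query "deps(//foo:bar)"

# What in the repo depends on it (reverse deps)?
bazel query "rdeps(//..., //foo:bar)"

# Why does //a:a end up depending on //b:b?
bazel query "somepath(//a:a, //b:b)"
```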
Well bazel is a joy to use as a user but it’s painful to set up.
Maybe, but I feel like an article I’ve read many, many times is “we hired one or more Xooglers for our startup and this turned out to be a catastrophe because they insisted on trying to bring blaze/bazel with them and it nearly destroyed the company.” It’s always bazel specifically in these articles, never any of the other internal Google stuff like Spanner.
Wait please post the articles where they brought Spanner over; those sound like fun reads
I mean, bazel is great and I would use it when building a codebase from scratch, but the win from switching from one build system to another is, at best, some efficiency, and you need a lot of aggregate efficiency gains to pay for the effort.
This is the other way people work at Google. You have a VM and then connect the IDE of your choice to it via SSH. But honestly it’s a lot more effort than just using Coder.