Comment by alexgartrell
11 hours ago
Some extra context for the peanut gallery: I worked with both of these guys at Meta on this.
The "servers are only on for a few hours" thing was basically never true, so I have no idea where that claim is coming from. The web performance test alone took more than a few hours to run, and we had far more aggressive soaks for other workloads.
My recollection was that "write zeroes" just became a cheaper operation between '12 and '14.
A fun fact to distract from the awkwardness: a lot of the kernel work done in the early days was exceedingly scrappy. The port-mapping stuff for memcached UDP before SO_REUSEPORT, for example. FB binaries often couldn't even run on vanilla Linux. Over the next several years we put a TON of effort into getting as close to mainline as possible, and now Meta is one of the biggest drivers of Linux development.
It's not just that zeroing got cheaper, but also we're doing a lot less of it, because jemalloc got much better.
If the allocator returns a page to the kernel and then immediately asks for it back, it's not doing its job well: the main purpose of the allocator is to cache allocations from the kernel. Those patches are pre-decay, pre-background-purging-thread; those later changes significantly improved how jemalloc holds on to memory that might be needed soon. The zeroing-out patches, by contrast, optimize for the pathological behavior.
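For context, "decay" here is jemalloc 5.x's time-based purging: unused dirty pages are held for a configurable window before being returned to the kernel, rather than purged immediately. A sketch of how that's tuned at runtime (the server binary name is hypothetical):

```shell
# Keep dirty pages cached for ~10s before purging them back to the
# kernel, and let a background thread do the purging off the hot path.
MALLOC_CONF="dirty_decay_ms:10000,background_thread:true" ./my_server
```

With a long enough decay window, the free-then-immediately-reallocate pattern never touches the kernel at all.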
Also, the kernel has since exposed better ways to optimize memory reclamation, like MADV_FREE, which is a "lazy reclaim": the page stays mapped to the process until the kernel actually needs it, so if we use it again before that happens, the whole unmapping/remapping is avoided. That saves not only the zeroing cost but also the TLB shootdown and other costs, all without changing any security boundary. jemalloc can take advantage of this by enabling "muzzy decay".
However, the drawback is that system-level memory accounting becomes even fuzzier.
(hi Alex!)
[ Edit: "servers" in this context meant the HHVM server processes, not the physical server which of course had a longer uptime ]
People got promoted for continuous deployment
https://engineering.fb.com/2017/08/31/web/rapid-release-at-m...
I think it's fair to say the hardware changed, the deployment strategy changed and the patches were no longer relevant, so we stopped applying them.
When I showed up, there were 100+ patches on top of a 2009 kernel tree. I reduced that to about 10 critical patches and rebased them at a six-month cadence over 2-3 years. Upstreamed a few.
I didn't go around saying those old patches were bad ideas and that I got rid of them. How you say it matters.
The linked article says they decided to do CD in 2016 fwiw so that's not inconsistent with what I said.
You reduced the number of patches a lot and also pushed very hard to get us to 3.0 after we sat on 2.6.38 ~forever. Which was very appreciated, btw. We built the whole plan going forward based on this work.
I'm not arguing that anyone should be nice to anyone or not (it's a waste of breath when it comes to Linux). I'm just saying that the benchmarking was thorough and that contemporary 2014 hardware could zero pages fast.
Tangentially, on this CD policy - it leads to really high p99s for a long tail of rare requests which don’t get reliable prewarming due to these frequent HHVM restarts…
This is why I always read the comments here.
That is, wow, a story.
At what point did you realize how different fb engineering was from what you expected?
For me it happened around my first week after the bootcamp, so about 6 weeks from joining.
An important nuance - most Facebook engineers don't believe that Facebook/Meta will continue to grow next year, and that disbelief has been there since as early as 2018 (when I joined).
Very few Facebook employees use their products outside of testing, which is a big contributor to that fear - they just can't believe that there are billions of people who will continue to use the apps to post what they had for lunch!
And as a result of that lack of faith, most of them believe that Meta is a bubble that could burst at any point. Consequently, everyone works for the next performance review cycle, and most are just in a rush to capture as much money as they can before the bubble bursts.
> don't believe that Facebook/Meta would continue to grow next year
Huh.
The time I worked at a hyper-growth company, those of us working in the coal mine had much the same skepticism. Our growth rate seemed ridiculous - surely we're overbuilding, how much longer can this last?!
Happily, the marketing research team regularly presented stuff to our department. They explained who our customers were, projected market sizes (regionally, internationally), projected growth rates, competitive analysis (incumbents and upstarts), etc.
It helped so much. And although their forecasts seemed unbelievable, we outperformed them year over year - to the point where you sort of start to trust the (serious) marketing research types.
I use Facebook and Instagram and think you all suck. Slagging each other in public. Grow tf up.
Fwiw, this sounds like a healthy discourse - you don’t have to agree on everything, every approach has its merits, code that ends up shipping and supporting production wins the argument in some sense…
This is not special to Meta in any way, I observed it in any team which has more than 1 strong senior engineer.
I'm personally appreciative of these comments. It's good that people make claims, get challenged, and both sides walk away with informative points having been made. It's entirely possible both sides here are correct and wrong in their own ways.
This is literally how pretty much every conversation goes when you work with people close to the metal. It's a stylistic thing at this point.
For what it's worth, 20 years ago all programming newsgroups were like this. I grew my thick skin on alt.lang.perl lol