Comment by plqbfbv
11 hours ago
> Since you've been on the ride since '04, I'm curious to hear your thoughts. How do you feel the maintenance burden compares today versus the GCC 3.x era? With the modern binhost fallback and the improvements in portage, I feel like we now spend less time fighting rebuild loops than back then, but I wonder if long-time users feel the same.
I'm another one who's been on it since the same era :)
In general, stable has become _really_ stable, and unstable is still mostly usable without major hiccups. My maintenance burden is small nowadays compared to 10 years ago: it's pretty much running `emerge -uDN @world --quiet --keep-going` and fixing issues if any come up. Maybe once a month I get package failures, but I run an llvm+libcxx system and also package tests, so I likely get more issues than the average user on GCC.
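For anyone wondering what an "llvm+libcxx system with package tests" looks like in portage terms, here is a minimal sketch of how such a setup is commonly expressed in make.conf. The exact flags are my assumption for illustration, not the commenter's actual configuration:

```
# /etc/portage/make.conf -- illustrative sketch, not the commenter's real config
COMMON_FLAGS="-O2 -pipe"
CC="clang"
CXX="clang++"
CFLAGS="${COMMON_FLAGS}"
CXXFLAGS="${COMMON_FLAGS} -stdlib=libc++"
LDFLAGS="-fuse-ld=lld -stdlib=libc++"

# run each package's test suite during emerge (this is what surfaces the
# extra failures mentioned above)
FEATURES="test"
```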
For me these days it's of course not about speed anymore, but about the customization options and the ability to build pretty much anything I need locally. I also really like that ebuilds are basically bash scripts: if I need to further customize or reproduce something, I can literally copy-paste commands from the package manager into a local folder.
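To illustrate the "ebuilds are basically bash" point, here is a minimal hypothetical ebuild (the package name and details are made up). Each phase is just a bash function calling ordinary helpers, which is why the commands are easy to copy-paste and rerun by hand:

```
# foo-1.0.ebuild -- hypothetical example package, not a real ebuild
EAPI=8

DESCRIPTION="Example showing that an ebuild is mostly plain bash"
HOMEPAGE="https://example.org/foo"
SRC_URI="https://example.org/${P}.tar.gz"

LICENSE="MIT"
SLOT="0"
KEYWORDS="~amd64"

src_configure() {
    # ordinary shell commands; reproducible outside portage if needed
    econf --disable-docs
}

src_compile() {
    emake
}
```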
The project has successfully implemented a lot of by-default optimizations and best practices, and in general I feel the codebases for system packages have matured to the point where it's rare to run into internal compiler errors, weird dependency issues, whole-world rebuilds, etc. From my point of view it also helped a lot that many compilers began enforcing more modern and stricter C/C++ standards over time, and at the same time we got GitHub, CI workflows, better testing tools, etc.
I run `emerge -e1 @world` maybe once a year just to shake out stuff lurking in the shadows (like packages still compiled with clang 19 vs clang 21), but normally it's really not needed anymore. The configuration stays pretty much untouched unless I want to enable a new USE flag for a new package I'm installing.
> so I likely get more issues than the average user on GCC.
It's been years since I had a build failure, and I even accept several packages on ~amd64 (with GCC).
I'm replying here as it seemed like a better place to attach my comment.
Anyway, to answer the grandparent: I basically never had rebuild loops in 19 years, just `emerge -uU world` every day or sometimes every week. I have been running the same base system since... let's see:
I have never once had to rebuild the whole system from scratch in those 19 years. (I've just rsync'd the rootfs from machine to machine as I upgraded hardware and rebuilt things gradually; as many others here have said, for me it wasn't about "perf of everything" or some kind of reproducible system, but "more customization + perf of some things".) The upgrade from monolithic X11 to split X11 was "fun", though. /s
I do engage in all sorts of package.mask entries, per-package USE flags, and many global USE flags. I have my own local portage overlay for things where I disagree with upstream, and I even have an automated system to "patch" my disagreements in. E.g., I control how fast I upgrade my LLVM junk, so I do it on my own timeline. Mostly I use GCC; I control that, too. Any really slow individual build, basically.
If, over the decades, they ever did anything that looked like it would trigger crazy amounts of rebuilds, I'd tend to wait a few days or a week and then figure something out. If some new dependency brings in a mountain of crap, I usually figure out how to block it.
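As a rough illustration of that kind of setup (the atoms and packages below are made up for the example, not the commenter's real entries), portage lets you express all of it declaratively under /etc/portage:

```
# /etc/portage/package.mask -- illustrative atoms only
# delay the next big LLVM bump until I'm ready to spend the build time
>=llvm-core/llvm-21

# block a newly introduced dependency I don't want pulled in
dev-libs/some-heavy-dep

# /etc/portage/package.use -- per-package USE flags,
# e.g. trimming optional features that drag in extra dependencies
media-video/ffmpeg -vaapi -opencl

# Local patches dropped into /etc/portage/patches/<category>/<package>/
# are applied automatically by most ebuilds, which is one way to
# "patch disagreements in" without forking the ebuild itself.
```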