Comment by egorfine
3 months ago
I agree that all of these are valid concerns.
Which we somehow did not have for the last few decades. I wonder why.
3 months ago
> I agree that all of these are valid concerns.
> Which we somehow did not have for the last few decades. I wonder why.
The reason is business demands. Maybe that's not the sexiest reason for ideological users or everyday PC operators, but it's undeniably why systemd is the standard now. Enterprise deployments (which account for the vast majority of Linux usage) needed to aggregate reliability and observability data from their servers in order to deploy faster and keep their backends healthy.
There was a time, in the 1990s, when UNIX leaned heavily into the idea of multiuser, multiprocess systems. That philosophy is pretty much dead in a world that prioritizes networked systems, Docker images, and idempotent deployments. Most Linux boxen are cattle, not pets.
> Which we somehow did not have for the last few decades. I wonder why.
Speak for yourself. Lots of us spent many man-months over the years engineering around crusty '80s abstractions that no longer worked.