This is what happens when a system is designed by multiple people and companies over a long period of time. An amalgam of ideas which are there just because. There's no reason Linux should be like this. e.g., see https://gobolinux.org/ which has more sane dirs.
Linux does not use this split any more. Many of these dirs were merged back together. The "/usr merge" was adopted by Debian, Ubuntu, Fedora, Red Hat, Arch Linux, openSUSE and other major distros:

https://itsfoss.gitlab.io/post/understanding-the-linux--usr-...

`man file-hierarchy` defines the modern Linux filesystem layout:

https://www.man7.org/linux/man-pages/man7/file-hierarchy.7.h...
Question: why did they decide to make /usr/bin the "primary" and /bin the symlink? Methinks it should have been the other way around as was the original Unix design before the split.
Also the first URL is serving me scam popup ads that do a crap job at pretending to be android system alerts. Next time please try to choose a more reputable source.
Oh, that's an awesome idea to get rid of those awful splits and focus on apps! The Scoop package manager on Windows works the same way. Though it has a few issues when some security apps ignore "current" symlinks (and don't support regex for versioned paths), and then versioned dirs bite you when versions change.
Wonder whether this distro has similar issues, and whether it'd be better to have the current version be a regular dir and the versioned dir a symlink.
> Standards bureaucracies like the Linux Foundation (which consumed the Free Standards Group in its' ever-growing accretion disk years ago) happily document and add to this sort of complexity without ever trying to understand why it was there in the first place.
this is the reason in my opinion and experience
As a lead dev in a rather complicated environment, I ran into this many times with some identifier that got used somewhere. Short deadlines and no specification made us solve the problem quickly, so some shortcuts and quick fixes were made. That identifier gets asked about later, and super overcomplicated explanations are given as the reason by people who don't know the history.
...and the history is often like 'they mounted stuff to /usr because they got a third drive'. and now, people even in this thread keep giving explanations like it's something more.
gobo's a neat idea. I for one really like that its package management can have multiple packages without conflicts etc.
I think the only others I can think of like this are probably nix or spark. Nix really wants you to learn a new language, so it has some friction, but nix is a neat idea too.
I think not many people know this, but how tinycore packages work is really fascinating as well. It's possible to use this approach just by downloading the .tcz and mounting it manually, since the package is actually a squashfs image that gets loop-mounted. I am not familiar with the tech, but removing and adding applications can be just about as easy as deleting and adding files, when one thinks about it.
Does anybody know some more reference pointers to a smoother/easier way of not having to deal with dependency management?
I think that mise for programming languages is another good one. AppImages/zapps are nice too, for what they're worth. Flatpak's a little too focused on the GUI side for my liking, though. It's great that we have Flatpak, but I don't think it's quite the right primitive for CLI applications.
Not really, back then disks were very expensive and you had no choice but to split. And disk sizes were very small.
But, in a way, it kind of makes sense.
/bin and /sbin, needed for system boot. /usr/bin and /usr/sbin for normal runtime.
's' for items regular users do not need to run, remember, UN*X is a multi-user system, not a one person system like macs, windows and in most cases Linux.
I really should write that "Yes, Virginia; executables once went in /etc." Frequently Given Answer.
Because it was /etc (and of course the root directory) where the files for system boot and system administration went in some of the Unices of yesteryear. In AT&T Unix System 5 Release 3, for example, /etc was the location of /etc/init, /etc/telinit, and /etc/login .
sbin is actually quite complex, historically, because there were a whole lot of other directories as well.
> /bin and /sbin, needed for system boot. /usr/bin and /usr/sbin for normal runtime.
Nowadays most Linux systems boot with an initramfs, a compressed image that includes everything the system needs to boot, so you're basically saying /bin and /sbin are useless.
This post gets some of the details wrong. /usr/local is for site-local software - e.g. things you compile yourself (in the case of the BSDs, the ports collection) - things outside the base system. (They may be compiled for you.)
Since Linux has no concept of a base system, it's a stand-alone kernel with a hodgepodge of crap around it - this distinction makes no sense on Linux.
/opt is generally for software distros for which you don't have source; only binaries. Like commercial software packages. More common on Real UNIX(R) because most Linux users outside enterprise aren't running commercial software. You're putting your $500k EDA software under /opt.
> Since Linux has no concept of a base system, it's a stand-alone kernel with a hodgepodge of crap around it - this distinction makes no sense on Linux.
The Linux base system is managed by the package manager, leaving local for the sysadmin to `make install` into
> Linux has no concept of a base system, it's a stand-alone kernel with a hodgepodge of crap around it
Good grief. How does this end up as the top comment on HN of all places? I'll bet anything that this author also thinks that systemd is way too opinionated and unified and that the system needs a less coupled set of init code.
Edit to be at least a tiny bit more productive: the Linux Filesystem Hierarchy Standard is about to pop the cork on its thirty second birthday. It's likely older than most of the people upvoting the post I responded to. https://en.wikipedia.org/wiki/Filesystem_Hierarchy_Standard
To wit: that's outrageous nonsense, and anyone who knows anything about how a Linux distro is put together (which I thought would have included most of the readers here, but alas) would know that.
1. The history of /usr subdirectories is a lot more complex than that. There was a /usr/lbin once, for example.
2. /usr/local is not where third-party software from packages/ports goes on "the BSDs". On NetBSD, it goes in /usr/pkg instead, again exemplifying that this is quite complex through history and across operating systems.
> /opt is generally for software distros for which you don't have source; only binaries. Like commercial software packages. More common on Real UNIX(R) because most Linux users outside enterprise aren't running commercial software
Steam says hi.
On Windows, a common Steam library exists in the Program Files directory, and is therefore not user-specific. On Linux, each user has a separate Steam installation and library. I'm not sure why there isn't a common Steam library on Linux, but /opt would be a good place for it.
By default, Program Files is not writable by non-Administrators. This is likely done by some background service. Or they loosened the default file permissions (which would be dumb).
No reason this can't be done on Linux but since NT's security model is more flexible it's a lot easier to do so on Windows. You'd need to add dedicated users. (Running a Steam daemon as root would probably cause an uproar.)
Now I get what the folks using FreeBSD typically point to as a reason they prefer FreeBSD over Linux: there is a clear distinction between the base system and userland.
Linux has more of a clear distinction between kernel and userspace. But the base system in *BSD includes a lot of userspace, so the API boundary is more the libc and some core libraries (TLS) instead of the kernel ABI.
FreeBSD is moving to a scheme where the base system is managed with pkg. In the release notes for last month's 15.0 release, they suggest that this will be mandatory in 16.0.
The ports tree will still be very different from base, but I feel this may erode some of the difference between FreeBSD and a typical Linux distro in terms of user experience, with respect to base vs ports. You'll update both with pkg.
While practically useless in reality, /usr/local is `site-local software`, e.g. software that, if you NFS-mounted /usr, would be local to the `site`, not the machine.
The BSD ports explanation is a bit revisionist, I hate to say; this all predates ports.
It was a location in a second stage mount you knew the upstream wouldn’t overwrite with tar or cpio. Later ports used it to avoid the same conflict.
Anywhere in your `$PATH` that isn't managed by `apt`/`dpkg`. E.g. add `~/bin` to your `$PATH`, and install it in there. No risk of overwriting files the system package manager manages & having manually-installed software break next time it updates them.
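For instance, a minimal sketch of that approach; the tool name `my-tool` is made up, and a throwaway directory stands in for $HOME so this can be run safely:

```shell
# A per-user bin dir that dpkg never touches ("~/bin" convention,
# sandboxed here under a temp HOME).
HOME=$(mktemp -d)
mkdir -p "$HOME/bin"
export PATH="$HOME/bin:$PATH"

# a manually installed tool ("my-tool" is a hypothetical name)
printf '#!/bin/sh\necho my-tool 1.0\n' > "$HOME/bin/my-tool"
chmod +x "$HOME/bin/my-tool"
my-tool   # the shell now resolves it from ~/bin, ahead of /usr/bin
```

Since apt only ever writes under /usr and friends, nothing in this directory can be clobbered by a system update.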
> /usr/local is for site-local software - e.g. things you compile yourself
See, you assume here that /usr/local/ makes any sense.
I use a versioned appdir prefix approach similar to GoboLinux. So for me, /usr/local never ever made any sense at all. Why should I adhere to it? I have ruby under e. g. /Programs/Ruby/4.0.0/. It would not matter in the slightest WHO would compile it, but IF I were to need to store that information, I would put that information under that directory too, perhaps in a file such as environment.md or some other file; and perhaps additionally into a global database if it were important to distinguish (but it is not). The problem here is that you do not challenge the notion whether /usr/local/ would make any sense to begin with.
> /opt is generally for software distros for which you don't have source; only binaries.

Makes no sense. It seems to be about as logical as the FHS "standard". Why would I need to use /opt/? If I install libreoffice or google chrome there under /opt, I can as well install it under e. g. /Programs/ or whatever hierarchy I use for versioned appdirs. Which I actually do. So why would I need /opt/ again?
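A minimal sketch of what that layout looks like, built in a throwaway root so it can run anywhere; the tiny stub stands in for a real interpreter, and the "Current" symlink convention is borrowed from GoboLinux:

```shell
# GoboLinux-style versioned appdir, sandboxed under a temp dir.
ROOT=$(mktemp -d)
mkdir -p "$ROOT/Programs/Ruby/4.0.0/bin"
printf '#!/bin/sh\necho ruby 4.0.0\n' > "$ROOT/Programs/Ruby/4.0.0/bin/ruby"
chmod +x "$ROOT/Programs/Ruby/4.0.0/bin/ruby"

# a "Current" symlink selects the active version; upgrading is one ln -sfn
ln -sfn 4.0.0 "$ROOT/Programs/Ruby/Current"
"$ROOT/Programs/Ruby/Current/bin/ruby"   # prints: ruby 4.0.0
```

Multiple versions coexist side by side, and removal is just deleting one directory.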
> See, you assume here that /usr/local/ makes any sense.
You’re presenting your comment as a rebuttal but you’re actually arguing something completely different to the OP.
They’re talking about UNIX convention from a historic perspective. Whereas you’re talking about your own opinions about what would make sense if we were to design the file system hierarchy today.
I don’t disagree with your general points, but it also doesn’t mean that the OP is incorrect either.
I understand /usr/local to be for anything not managed by your distribution but following the standard system layout (e.g. Python that you compiled yourself) while /opt is used for things that are (relatively) self-contained and don't integrate with the system, similar to Program Files on Windows (e.g. a lot of Java software).
Regarding "that's a Linux-ism" - well yeah? Linux is the main OS this is about. FreeBSD can do what it wants, too.
Here [1] is a related trick in the old Unix to run either `foo`, `/bin/foo` or `/usr/bin/foo` (apparently before `PATH` convention existed):
    char string[10000];
    strp = string;
    for (i=0; i<9; i++)
        *strp++ = "/usr/bin/"[i];
    p = *argv++;
    while(*strp++ = *p++);
    // string == "/usr/bin/foo"
    execv(string+9, args); // foo (execv returns only in case of error, i.e. when foo does not exist)
    execv(string+4, args); // /bin/foo
    execv(string, args);   // /usr/bin/foo
Sometime around 2000 someone decided that /bin and /sbin aren't enough to boot and mount the rest of the system, so they added further complexity: an initrd/initramfs that does the basic job of /bin and /sbin. They had to complicate the kernel build process, the kernel update, the bootloader, the kernel command line, and for what? Just because they didn't want the kernel to have the storage drivers built in?
So the /bin /sbin became redundant.
Sometime around 2020 someone observed that no current Linux can boot without /usr anyway. So what did they do? Move everything from /usr to / and drop the whole /usr legacy? Noooo, that would be too simple. Move / to /usr. And because that is still too simple, also move /bin, /sbin and /usr/sbin to /usr/bin, and then keep symlinks at the old locations because who's gonna fix hardcoded paths in 99% of all Linux apps anyway??
Oh, how I wish I was born in the '60s, when the world was still sane.
/ has to be writeable (or have separate writeable mounts under it), /usr doesn't. The reasons for unifying under /usr are clearly documented and make sense and it's incredibly tedious seeing people complain about it without putting any effort into understanding it.
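For illustration, classic fstab entries expressing that split; the device names and options here are placeholders, not from any real system:

```
# <device>   <mount>  <type>  <options>  <dump> <pass>
/dev/sda2    /        ext4    defaults       0  1   # root: writable
/dev/sda3    /usr     ext4    ro             0  2   # /usr: can be read-only
```

With everything bootable unified under /usr, that second line can be a read-only (or shared, or verity-protected) mount without breaking anything.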
If you had been born in the 1960s, you might well have learned by dint of being alive at the time that the world underneath /usr was pretty complicated in the 1970s, 1980s, and 1990s; that /etc was where some of the things that were used to boot the system once went; and that the tale of sbin is complex and slightly sad.
The tale that things were simple until they went to pot in 2000 is wholly ahistoric.
busybox in Alpine Linux has for example `ps` builtin. If you install ps with `apk add ps` to get the full version, it will remove the symlink for /bin/ps and replace it with the one you installed.
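A sketch of the pattern the parent describes: one multi-call binary, many symlinks, with argv[0] selecting the applet. A tiny shell script stands in for the real busybox binary here, sandboxed in a temp dir:

```shell
# Simulated busybox: the "binary" reports which applet name invoked it.
ROOT=$(mktemp -d)
cat > "$ROOT/busybox" <<'EOF'
#!/bin/sh
echo "applet: $(basename "$0")"
EOF
chmod +x "$ROOT/busybox"

ln -s busybox "$ROOT/ps"   # installing a full ps would replace this symlink
"$ROOT/ps"                 # prints: applet: ps
```

That replacement step is exactly what Alpine's package manager does: drop the symlink, put the real binary at the same path.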
You need to read up on the purpose of busybox. It is not something that the kernel people decided upon. It is an initiative of a group of people who needed some tools to fit onto a single floppy.
/bin/ps on a Debian distro is 154522 bytes.
The whole busybox in Alpine Linux is 804616 bytes and contains a whole lot more than just ps.
> merge-usr is a script which may be used to migrate a system from the legacy "split-usr" layout to the newer "merged-usr" layout as well as the "sbin merge".
> It is required for systemd ≥255 due to changes upstream, but it remains optional for other init systems.
The next logical evolution is to get rid of directories and put everything in /. This will simplify a lot of the build process. /usr/include and /usr/lib are already a mess (on linux).
Mount-points were key to early history of the split. Nowadays it's more about not breaking shebangs.
Nearly every shell script starts with "#!/bin/sh", so you can't drop /bin. Similarly, nearly every python script starts with "#!/usr/bin/env python", so you can't drop /usr/bin.
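A sketch of why the compatibility symlinks keep both shebang spellings working, built in a throwaway directory (the script name `tool` is made up):

```shell
# Merged layout in miniature: /bin is a symlink to usr/bin.
ROOT=$(mktemp -d)
mkdir -p "$ROOT/usr/bin"
ln -s usr/bin "$ROOT/bin"

printf '#!/bin/sh\necho hi\n' > "$ROOT/usr/bin/tool"
chmod +x "$ROOT/usr/bin/tool"

# both spellings resolve to the same inode, so neither shebang breaks
[ "$ROOT/bin/tool" -ef "$ROOT/usr/bin/tool" ] && echo "same file"
```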
Some people think today's file hierarchy is complicated. That's amusing.
I worked at an R&D center where we had hundreds of UNIX systems of all types (e.g. Sun, Ultrix, HP, Symbolics, etc.). We also had Sun 2's, 3's and 4's - each with different CPUs/architectures and incompatible binaries. Some Suns had no disks at all. And with hundreds of systems, we literally had a hundred different servers across the entire site.
I would compile a program for a Sun 3, and needed a way to install the program once, for use on hundreds of computers. Also teams of people on dozens of different computers needed to share files with each other.
On a similar note: just the other day I was thinking about how the Unixy systems I used 20+ years ago used to nudge/push you toward creating several actual partitions during installation. Maybe /, /usr, swap… maybe one or two more? IIRC, I think some of the BSDs, at least, maybe still do? Always seemed weird and suboptimal to me for most installations, but I remember being told by graybeards at the time that it was the Right Way.
I have always made /home a separate partition.
This makes it so much easier to reinstall and/or wipe out a distro and install a new one. All of my files are left undisturbed.
Still makes sense to prevent overruns, right? I.e., /home/ can't take down the whole system just 'cause you torrented too many Debian ISOs and blew out your disk.

Same for /var/, or wherever you store your DB tables, like MySQL.
Ah, yeah, that makes sense, thanks. My experience as "sysadmin" has largely been from the standpoint of personal systems for which that has mostly not been a big concern for me.
1. The title says “understanding sbin” but the content gives zero understanding of that. If someone has a historical explanation, please provide it.
2. “Then somebody decided /usr/local wasn't a good place to install new packages, so let's add /opt”
Not exactly. /usr/local exists so you don’t accidentally mess up your distro/package manager by changing its files. It’s “local” to your installation. But it is still structured — /usr/local/bin, /usr/local/lib, etcetera — divided into binaries, shared libraries, manpages.
Whereas /opt has no structure. It’s “the wild west”…application binaries, libraries, configuration, data files, etcetera with no distinction. Apps with “universal” packaging, or sometimes secondary package managers.
For example /usr/local/bin is normally part of PATH, but /opt is not (unless eg homebrew adds it to your bashrc).
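The structural difference can be sketched in a throwaway directory ("someapp" is a made-up name):

```shell
# /usr/local mirrors the system layout; /opt gets one tree per application.
ROOT=$(mktemp -d)
mkdir -p "$ROOT/usr/local/bin" "$ROOT/usr/local/lib" "$ROOT/usr/local/share/man"
mkdir -p "$ROOT/opt/someapp/bin" "$ROOT/opt/someapp/lib" "$ROOT/opt/someapp/data"

ls "$ROOT/usr/local"   # bin lib share - same categories as /usr itself
ls "$ROOT/opt"         # someapp - one self-contained tree per app
```

That's why a single PATH entry covers everything in /usr/local/bin, while each /opt app needs its own PATH addition (or a symlink back into a bin dir).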
Does anyone know why, when Lennart and friends wrote their XDG Base Directory Specification, they decided that each user should replicate /usr/local/ subdirectories under $HOME/.local/?
Doesn't being under $HOME make .local redundant? I guess one could argue for binaries going in an architecture-specific subdirectory if $HOME was on a shared filesystem, but that's not what's being done here.
To me, $HOME/.local/share and its siblings are just a needless level of indirection, forcing me to jump through an extra hoop every time I want to access what's in there.
(I know it's sometimes possible to override it with an environment variable, but the predictably spotty support for those overrides means I would then have to look for things in two places. I think sensible defaults would be nicer.)
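For reference, the fallback scheme the spec defines looks like this; applications that honor it check the environment variable first and only fall back to the fixed path under $HOME when it is unset:

```shell
# XDG Base Directory defaults (per the spec).
unset XDG_DATA_HOME XDG_CONFIG_HOME XDG_CACHE_HOME
data_home=${XDG_DATA_HOME:-$HOME/.local/share}
config_home=${XDG_CONFIG_HOME:-$HOME/.config}
cache_home=${XDG_CACHE_HOME:-$HOME/.cache}
printf '%s\n%s\n%s\n' "$data_home" "$config_home" "$cache_home"
```

The "spotty support" complaint is exactly that many programs hardcode the fallback instead of reading the variable.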
> Does anyone know why, when Lennart and friends wrote their XDG Base Directory Specification,
It is a Microsoft thing: you must pollute the user's /home as much as you can. Can I say that I have 3 daemons on my computer responsible for ... credentials?
This is the way to go.
Dunno the historical reason but I sure as heck find it nice to know without ambiguity that the folder called "share" corresponds to that special directory and isn't a random folder in my home directory for files that were intended to be e.g. shared with someone.
That doesn't align with their choice of $HOME/.cache (to which users need to navigate much less frequently than $HOME/.local/share), nor with how few items $HOME/.local typically saves from landing in $HOME, nor with the normally hidden state of everything starting with a dot.
So if that was their reasoning, it reinforces my view that they didn't think their design through very well.
This is also around the same time Vista started enforcing the AppData and ProgramData folder redirect. It was a messy time for all developers. IMHO they made the right decision by enforcing that redirect as we now know where every file should be in a Windows program.
This is low-effort fantasy history. It may be directionally correct, but why bother when you don't care about the details? From analyzing the UNIX manuals and other old files we get the following (not fully complete) picture:
We'll skip PDP-7 UNIX, no hierarchical file system yet.
UNIX v1 on the PDP-11 had an RF11 fixed-head disk (1 MB) for / and swap, and an RK05 moving-head disk (2.5 MB) for /usr (the user directories).
By v2 they had added a second RK05 at /sys for things like the kernel, manual pages, and system language stuff like the c compiler and m6.
By v3 they added yet another RK05 at /crp for, well, all sorts of crap (literally), including yacc apparently. /usr/bin is mentioned here for the first time.
I don't feel like looking up when sbin was first introduced, but it is not a Bell Labs thing - possibly BSD or AT&T UNIX? Binaries that one would normally not want to run were kept in /etc, which included things like init, mount, umount, and getty, but also the second pass of the assembler (as2) and helpers like glob.
Also, I don't know when /home became canonical, but at Bell Labs it was never a thing (Plan 9 has user directories in /usr, where they had always belonged logically).
The lib situation is more difficult. Looks like it started with /usr/lib. By v3 we find the equivalent directory as /lib, where it contains the two passes of the C compiler (no optimization pass back then), C runtime and lib[abc].a (assembler, B, C libraries respectively). /usr/lib had been repurposed for non-object type libraries, think text-preparation and typesetting.
By v4 the system had escaped the labs (see the recent news) and at that point everyone modified the system to their taste anyway. Perhaps it should be noted that the v7 distribution (which is the first that is very clearly the ancestor of every modern UNIX) has no /usr/bin, only /bin. /lib and /usr/lib are split however.
These are just some rough notes, and due to a lack of early material they're still not as accurate as I would like. Also, UNIX ran on more than one machine even in the early days (the manuals mention the number of installations), so there must have been some variation anyway. Something I'd like to know in particular is when and where RP03 disk drives were used; these are pretty huge in comparison to the cute RK05s.
I've only used modern immutable Linux (Alpine, MicroOS) and wondered why of all places `/var/` was chosen as the location for rw stuff. It's fun to be reminded that there was of course a time when an immutable OS was the default, and you ran it off of floppies. So there's a lot of history to using `/var/` for that. Guess we've come full circle!
Was it immutable? I thought it was just different storage types, like you'd have a smaller disk for the root stuff and then make var on a larger disk. I'm surprised that, having something immutable, you'd choose to go the other direction.
Good point; considering the nature of floppies, I suppose it technically needn't have been immutable. But I feel like it would have been wise to mount your OS root read-only to prevent yourself from accidentally ruining your (possibly only) copy of the OS. At least before you had a reasonably sized hard drive.
> I'm surprised that, having something immutable, you'd choose to go the other direction.
I can somewhat imagine that having been limited by space and having to swap out disks all the time one would jump on the train of mutability without fully appreciating the benefits of immutability.
Also, my belief about /bin vs /sbin is that the latter is meant for statically linked binaries, such that if your system is corrupted these at least will keep working.
Practically in this century if I was starting a new OS I would set it up like so:
/bin for all system binaries. Any binary from a package installed by the OS package manager lived here.
/lib same but for shared libraries
/var for variable data. This is where you would put things like your Postgres data files.
/tmp for temporary files.
/home as usual.
/dev as usual.
/boot as usual
/etc as usual
/usr would be what /usr/local is on most systems. So /usr/bin is binaries not installed by the OS package manager. /usr/etc is where you put config files for packages not installed by the package manager and so on.
Get rid of /usr/local and /sbin.
/media replaces /mnt entirely (or vice versa).
Ditch /opt and /srv
Add /sub for subsystems: container overlays should live here. This would allow the root user (or a docker group, etc.) to view the container file system, chroot into it, or run a container on it.
Then again, nobody gave me a PDP-11 to decide so my vote doesn’t count :)
> Also /bin vs /sbin believe is that the latter is meant for statically linked binaries such that if your system is corrupted these at least will keep working.
My understanding is that sbin is for system binaries, not necessarily statically linked. Normally /sbin is only in root's PATH, not normal users'. They are likely world-executable, but in many cases you cannot actually run them as non-root, since they usually touch things only root can access (e.g. raw devices, privileged syscalls, /etc/shadow, etc.). Not always, though: you can run /sbin/ifconfig as a normal user in read-only mode.
> /var for variable data. This is where you would put things like your Postgres data files.
This one never sat well with me. I think of /var as temporary data, something I can lose without much consequence. But never data files. I know it's the default, but still.
/srv I like because it seems like a proper place to separate server-related data, i.e. /srv/wwwroot or similar. But if you like /var, that of course would be the place for this type of data.
No. Temporary data is /var/tmp or /tmp. The difference: /var/tmp should survive a reboot. /tmp might be lost on reboot.
/var is data that needs to be writable (/usr/*, /bin and /lib may be readonly), and that might be important. Like databases, long-term caches, mail and printer queues, etc.
> Also /bin vs /sbin believe is that the latter is meant for statically linked binaries such that if your system is corrupted these at least will keep working.
I think that became the rationale for /[s]bin vs. /usr/[s]bin (although based on the linked article, that may have been retconned a bit).
You were supposed to keep your root/boot filesystem very small and mostly read-only outside major updates. That meant that you could boot to a small amount of utilities (e.g. fsck) that would let you repair /usr or any other volume if it became corrupted.
I think the other poster is correct that stuff like fsck is supposed to go into /sbin because it is a "system" binary (but also statically linked since /usr/lib isn't mounted yet) and doesn't make sense to have in user $PATHs since nobody other than root should really be running that one.
Regardless, this is all deeply meaningless these days, particularly if you are running "ephemeral" infrastructure where if anything goes that wrong you just repave it all and start over.
If a system is intended to serve data on a network (file shares, databases, websites, remote backups, etc), /srv is where the requisite data for those things should live. I think that's a good idea.
Basically: https://sta.li/filesystem/. Arguably /usr shouldn't exist, because rather than polluting the system with unmanaged installations you should be making a package and installing it with the package manager.
I used to package a lot of my stuff as Debian packages and it is a process that takes an hour or three for most packages. I really liked it and would have loved to be able to do that as just a normal way to distribute everything but it just is a little too much overhead. A shame, really, since once you get it working it is way nicer than any Docker setup you can think of.
If I was starting a new system layout, I wouldn't have every package smush its files together with everyone else's into a single shared directory hierarchy. /opt would reign supreme, and we already have pkg-config to deal with that sort of layout.
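A sketch of how that could work: each package gets its own prefix with a .pc file, and builds discover it via PKG_CONFIG_PATH. The package "foo", its version, and its /opt-style prefix are hypothetical, and the whole thing is sandboxed in a temp dir:

```shell
# Per-package prefix (stand-in for /opt/foo/1.2) with pkg-config metadata.
PREFIX=$(mktemp -d)
mkdir -p "$PREFIX/lib/pkgconfig"
cat > "$PREFIX/lib/pkgconfig/foo.pc" <<EOF
prefix=$PREFIX
libdir=\${prefix}/lib
includedir=\${prefix}/include

Name: foo
Description: hypothetical example package
Version: 1.2
Cflags: -I\${includedir}
Libs: -L\${libdir} -lfoo
EOF

export PKG_CONFIG_PATH="$PREFIX/lib/pkgconfig"
# builds then ask pkg-config for flags instead of assuming a shared hierarchy
command -v pkg-config >/dev/null && pkg-config --cflags foo
```

The point is that no two packages ever share a directory, so conflicts and "which package owns this file?" questions disappear by construction.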
/srv is for services. Which is weird because so is /var. The choice between /var/lib/postgresql and /srv/postgresql is arbitrary to me. Except in /var you can also have things like /var/cache, /var/tmp, and so on.
Good point. It isn’t variable data as in it only changes on package updates. But it isn’t user data either, so it doesn’t belong in /usr. I guess that one should have been in /lib/share and /lib/man. Or it could be a brand new directory called /doc which contains the documentation type content.
/usr/lib contents belongs in /lib if the contents is installed by the package manager. That way /usr/lib is what the user installed.
Rob Landley is right. Others realised this too - see GoboLinux. But some regular distributions too.
Slapping it down as "FHS is now a standard" does not change anything. People will ask why it is suddenly a standard when it hasn't made any sense at all whatsoever. bin versus sbin is also pointless. Inertia is one primary reason why nobody usually fixes things.
One complication caused by shared libraries was the security threat. An executable using a shared library allowed the user to execute with a different (updated) library without recompilation.
This is a security threat, especially with SETUID programs. If you could change the library, you could install new code and gain privileged access.
This was why /usr/sbin was created - all of the programs there were compiled with static libraries.
The reason you didn't just drop stuff in /usr/local? Space.
One of our devs was also a gimp contributor, and he dropped gimp into /usr/local and filled up the filesystem. And back then package managers didn't exist, so you had to read the makefile and hope you didn't remove anything that was shared
/opt/gimp or /usr/local/gimp.
Local because in some places they mounted an nfs share, and local was local to you.
I had written a similar comment here asking for people's opinions, but I would like to add something I know about that I didn't see in your list:
Tinycorelinux
I know that it doesn't follow the best user practices etc., but I did find its tcz package format fascinating, because packages kind of work like mountable drives. I am not exactly sure, but I am fairly certain it could support a modern package management system where two or more conflicting packages can run on the same system.
I really enjoyed the idea of gobolinux as well. I haven't played with it, but it would be good if some more mainstream OS could also implement it. Nix and Guix are more mainstream, but they also require you to learn a new language, and I think we might need something in the middle: like gobo, but perhaps more mainstream, or with more ideas/additions added? I would love it if someone could tell me about projects we are missing and what they bring to the table.
I haven't tried Gobo though so I am not sure but I really wish more distros could add features like gobo, perhaps even having a gobofied debian/fedora eh?
macOS has all of that (mostly inherited from NeXTSTEP which was significantly based on 4.3/4.4BSD). It's hidden by default in the GUI, visible in Terminal.
Nowadays most end users just use /usr/local or /opt/local or whatever is managed by Homebrew or Macports.
Not really. I wish we had a new OS based on the Linux kernel, minus the legacy (shared files, r/w mounted OS, etc.). I think Google's Fuchsia has some interesting ideas.
Nowadays I think packages should turn into portable applications isolated within their own directories. Those directories would have a standard libraries directory that the application would use.

Later, if desired, the system could override those libraries with others (newer compatible versions, or patched ones); more thinking is needed about this. The key, from the process point of view, would be to limit the access of such a process to its own directories and a few very limited local system services by default.

To extend those permissions, each binary in such a directory would need to be accompanied by a permissions-request file requiring approval from the user or from the distro's default policies (each distro would have its own point of view, I guess), with the aim of improving process isolation and the permissions governing access to the system, drivers, and services.

This would also require restructuring the console philosophy, how it manages processes, and so on - a big restructuring.

I mean, people are already duplicating space with containers trying to isolate processes - emphasis on trying.

I know this is unrealistic given the deep change it would involve, so consider this just thinking out loud.

PS: If your answer is that this already exists with AppArmor, SELinux, etc., then you have not understood the root of the issue with those modules.
Plan 9 (a Bell Labs successor to Unix) did away with the whole bin, sbin, usr/sbin thing and its shell only looked in /bin. How things got into /bin is a different story.
For me this was an eye-opener. I kept trying to wrap my head around all these different paths and "standards" because I thought it was correct and deliberately designed. Looking back through the history this doesn't seem to be the case; I feel much better for being confused by all the different PATH conventions and strict hierarchies.
When I hear people complaining about this sort of thing, I want to say, “Just go and invent your own, then.”
But then you get things like Esperanto. Esperanto takes about 1/4 of the time to learn compared to other languages. It’s taught in China and used as primary language in some settings. But, aside from a large number of people learning some Esperanto from Duolingo several years ago, it’s just another language now to have to learn.
This is what happens when a system is designed by multiple people and companies over a long period of time. An amalgam of ideas which are there just because. There's no reason Linux should be like this. e.g., see https://gobolinux.org/ which has more sane dirs.
Linux does not use this split any more. Many of these dirs were merged back together. The "/usr merge" was adopted by Debian, Ubuntu, Fedora, Red Hat, Arch Linux, openSUSE and other major distros:
https://itsfoss.gitlab.io/post/understanding-the-linux--usr-...
`man file-hierarchy` defines the modern Linux filesystem layout.
https://www.man7.org/linux/man-pages/man7/file-hierarchy.7.h...
Question: why did they decide to make /usr/bin the "primary" and /bin the symlink? Methinks it should have been the other way around as was the original Unix design before the split.
Also, the first URL is serving me scam popup ads that do a crap job of pretending to be Android system alerts. Next time please try to choose a more reputable source.
2 replies →
Oh, that's an awesome idea to get rid of those awful splits and focus on apps! The Scoop package manager on Windows works the same way. Though it has a few issues: some security apps ignore "current" symlinks (and don't support regexes for versioned paths), and then versioned dirs bite you when versions change. I wonder whether this distro has similar issues, and whether it'd be better to have the current version be a regular dir and the versioned dir a symlink.
> Standards bureaucracies like the Linux Foundation (which consumed the Free Standards Group in its' ever-growing accretion disk years ago) happily document and add to this sort of complexity without ever trying to understand why it was there in the first place.
This is the reason, in my opinion and experience.
As a lead dev in a rather complicated environment, I often ended up solving the problem of where some identifier was used. Short deadlines and no specification made us solve it quickly, so some shortcuts and quick fixes were made. That identifier gets asked about later, and super-overcomplicated explanations are given as the reason by people who don't know the history.
...and the history is often something like "they mounted stuff to /usr because they got a third drive". And now people, even in this thread, keep giving explanations as if it were something more.
> There's no reason Linux should be like this. e.g., see https://gobolinux.org/ which has more sane dirs.
And I thought we just got over the systemd drama…
gobo's a neat idea. I for one really like that its package management can have multiple packages without conflicts etc.
I think the only others I can think of like this are probably nix or spark. Nix really wants you to learn a new language, so it has some friction, but it's a neat idea too.
I think not many people know this, but the way Tiny Core packages work is really fascinating as well. I think it's possible to get this behavior just by downloading the .tcz and mounting it manually, since it actually loop-mounts the code as a squashfs image. I'm not familiar with the tech, but removing and adding applications can be about as easy as deleting and adding files, when you think about it.
Does anybody know more reference pointers to a smoother/easier way of not having to deal with dependency management, etc.?
I think mise is also a good one, for programming languages. AppImages/zapps are nice too, for what they're worth. Flatpak's a little too focused on the GUI side for my liking, though. It's great that we have Flatpak, but I don't think it's quite the right primitive for CLI applications.
Not really; back then disks were very expensive, and very small, so you had no choice but to split.
But in a way it kind of makes sense.
/bin and /sbin, needed for system boot. /usr/bin and /usr/sbin for normal runtime.
's' is for items regular users do not need to run. Remember, UN*X is a multi-user system, not a one-person system like Macs, Windows, and in most cases Linux.
I really should write that "Yes, Virginia; executables once went in /etc." Frequently Given Answer.
Because it was /etc (and of course the root directory) where the files for system boot and system administration went in some of the Unices of yesteryear. In AT&T Unix System 5 Release 3, for example, /etc was the location of /etc/init, /etc/telinit, and /etc/login.
sbin is actually quite complex, historically, because there were a whole lot of other directories as well.
* https://jdebp.uk/FGA/unix-path-and-personalities.html
> /bin and /sbin, needed for system boot. /usr/bin and /usr/sbin for normal runtime.
Nowadays most Linux systems boot with an initramfs, which is a compressed image that includes everything the system needs to boot; so you're basically saying /bin and /sbin are useless.
6 replies →
How does splitting help save space?
1 reply →
This post gets some of the details wrong. /usr/local is for site-local software - e.g. things you compile yourself, i.e in the case of the BSDs the ports collection - things outside the base system. (They may be compiled for you).
Since Linux has no concept of a base system, it's a stand-alone kernel with a hodgepodge of crap around it - this distinction makes no sense on Linux.
/opt is generally for software distros for which you don't have source; only binaries. Like commercial software packages. More common on Real UNIX(R) because most Linux users outside enterprise aren't running commercial software. You're putting your $500k EDA software under /opt.
I normally wouldn’t be this pedantic, but given that this is a conversation about pedantry it only seems right: you’re using i.e. and e.g. backwards.
My mnemonic is “In Essence” and “for EGsample”
3 replies →
> Since Linux has no concept of a base system, it's a stand-alone kernel with a hodgepodge of crap around it - this distinction makes no sense on Linux.
The Linux base system is managed by the package manager, leaving local for the sysadmin to `make install` into
> The Linux base system
There is no such thing as a Linux base system.
Separate components, separate people.
Hence the term Ganoo plus Leenox...
9 replies →
> Linux has no concept of a base system, it's a stand-alone kernel with a hodgepodge of crap around it
Good grief. How does this end up as the top comment on HN of all places? I'll bet anything that this author also thinks that systemd is way too opinionated and unified and that the system needs a less coupled set of init code.
Edit to be at least a tiny bit more productive: the Linux Filesystem Hierarchy Standard is about to pop the cork on its thirty-second birthday. It's likely older than most of the people upvoting the post I responded to. https://en.wikipedia.org/wiki/Filesystem_Hierarchy_Standard
To wit: that's outrageous nonsense, and anyone who knows anything about how a Linux distro is put together (which I thought would have included most of the readers here, but alas) would know that.
The problems with what you say are that:
1. The history of /usr subdirectories is a lot more complex than that. There was a /usr/lbin once, for example.
2. /usr/local is not where third-party software from packages/ports goes on "the BSDs". On NetBSD, it goes in /usr/pkg instead, again exemplifying that this is quite complex through history and across operating systems.
> /opt is generally for software distros for which you don't have source; only binaries. Like commercial software packages. More common on Real UNIX(R) because most Linux users outside enterprise aren't running commercial software
Steam says hi.
On Windows, a common Steam library exists in the Program Files directory, and is therefore not user-specific. On Linux, each user has a separate Steam installation and library. I'm not sure why there isn't a common Steam library on Linux, but /opt would be a good place for it.
By default, Program Files is not writable by non-Administrators. This is likely done by some background service. Or they loosened the default file permissions (which would be dumb).
No reason this can't be done on Linux but since NT's security model is more flexible it's a lot easier to do so on Windows. You'd need to add dedicated users. (Running a Steam daemon as root would probably cause an uproar.)
15 replies →
I seem to recall Solaris put packages in /opt. Each package got its own prefix under /opt.
Now I get why the folks using FreeBSD typically point to this as a reason they prefer FreeBSD over Linux: there is a clear distinction between the base system and userland.
Linux has more of a clear distinction between kernel and userspace. But the base system in *BSD includes a lot of userspace, so the API boundary is more the libc and some core libraries (TLS) instead of the kernel ABI.
FreeBSD is moving to a scheme where the base system is managed with pkg. In the release notes for last month's 15.0 release, they suggest that this will be mandatory in 16.0.
The ports tree will still be very different from base, but I feel this may erode some of the difference between FreeBSD and a typical Linux distro in terms of user experience, with respect to base vs ports. You'll update both with pkg.
While practically useless in reality, /usr/local is for "site-local software", e.g. software that, if you NFS-mounted /usr, would be local to the "site", not the machine.
The BSD ports explanation is a bit revisionist, I hate to say; this all predates ports.
It was a location on a second-stage mount that you knew the upstream wouldn't overwrite with tar or cpio. Later, ports used it to avoid the same conflict.
So, in Debian, where should I be placing a Firefox tarball I download from Mozilla’s site?
It is open-source, and I can get source files, but it’s precompiled…
Anywhere in your `$PATH` that isn't managed by `apt`/`dpkg`. E.g. add `~/bin` to your `$PATH`, and install it in there. No risk of overwriting files the system package manager manages & having manually-installed software break next time it updates them.
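As a concrete sketch of that approach (the firefox names below are illustrative placeholders, not Mozilla's actual tarball layout): keep manually-installed binaries under your home directory and prepend `~/bin` to `$PATH` idempotently.

```shell
# Keep manually-installed software out of dpkg's way: binaries (or
# symlinks to them) go in ~/bin, which we prepend to PATH if needed.
mkdir -p "$HOME/bin"

# e.g. after unpacking the vendor tarball somewhere under $HOME:
#   tar -C ~/opt -xf firefox-*.tar.xz
#   ln -s ~/opt/firefox/firefox ~/bin/firefox

# Idempotent PATH prepend (put this in ~/.profile to persist):
case ":$PATH:" in
  *":$HOME/bin:"*) ;;              # already on PATH, do nothing
  *) PATH="$HOME/bin:$PATH" ;;     # otherwise prepend it
esac
export PATH
```

Since nothing under `$HOME` is tracked by `apt`/`dpkg`, a system upgrade can never clobber or be clobbered by what lives there.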
> /usr/local is for site-local software - e.g. things you compile yourself
See, you assume here that /usr/local/ makes any sense.
I use a versioned appdir prefix approach similar to GoboLinux's. So for me, /usr/local never ever made any sense at all. Why should I adhere to it? I have ruby under e.g. /Programs/Ruby/4.0.0/. It would not matter in the slightest WHO compiled it, but IF I needed to store that information, I would put it under that directory too, perhaps in a file such as environment.md, and perhaps additionally into a global database if it were important to distinguish (but it is not). The problem here is that you do not challenge the notion of whether /usr/local makes any sense to begin with.
> /opt is generally for software distros for which you don't have source; only binaries.
Makes no sense. It seems to be about as logical as the FHS "standard". Why would I need to use /opt? If I install LibreOffice or Google Chrome under /opt, I could just as well install them under e.g. /Programs/ or whatever hierarchy I use for versioned appdirs. Which I actually do. So why would I need /opt again?
> See, you assume here that /usr/local/ makes any sense.
You’re presenting your comment as a rebuttal but you’re actually arguing something completely different to the OP.
They’re talking about UNIX convention from a historic perspective. Whereas you’re talking about your own opinions about what would make sense if we were to design the file system hierarchy today.
I don’t disagree with your general points, but it also doesn’t mean that the OP is incorrect either.
I understand /usr/local to be for anything not managed by your distribution but following the standard system layout (e.g. Python that you compiled yourself) while /opt is used for things that are (relatively) self-contained and don't integrate with the system, similar to Program Files on Windows (e.g. a lot of Java software).
Regarding "that's a Linux-ism" - well yeah? Linux is the main OS this is about. FreeBSD can do what it wants, too.
> anything not managed by your distribution
That's a Linux-ism. Other *nix there is a lot more in /usr/local.
In reality /usr is similar to Windows' System32 directory on most Unices.
/opt is really the only good place for Java and where I've been putting it for decades (old habits die hard).
> This post gets some of the details wrong
"some" is an understatement.
You've entirely missed the point of the article.
Here [1] is a related trick in old Unix to run either `foo`, `/bin/foo`, or `/usr/bin/foo` (apparently from before the `PATH` convention existed):
[1] https://github.com/dspinellis/unix-history-repo/blob/Researc...
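The idea can be modeled in a few lines of shell (a toy sketch of the fallback, not the historical code; `run` is a hypothetical helper name): try the name as given, then /bin, then /usr/bin, and execute the first match — essentially a hardcoded two-entry search path.

```shell
# Toy model of the pre-PATH lookup: try the name as given, then /bin,
# then /usr/bin, and exec the first executable found.
run() {
  cmd=$1; shift
  for p in "$cmd" "/bin/$cmd" "/usr/bin/$cmd"; do
    if [ -x "$p" ]; then
      exec "$p" "$@"
    fi
  done
  echo "run: $cmd: not found" >&2
  return 127
}

# Invoke in a subshell so exec doesn't replace the current shell:
(run echo "found via fallback")
```

`PATH` generalizes exactly this loop: an arbitrary, user-configurable list of directories instead of two hardcoded ones.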
Sometime around 2000 someone decided that /bin and /sbin aren't enough to boot and mount the rest of the system, so they added further complexity: an initrd/initramfs that does the basic job of /bin and /sbin. They had to complicate the kernel build process, kernel updates, the bootloader, and the kernel command line, and for what? Just because they didn't want the kernel to have the storage drivers built in?
So /bin and /sbin became redundant.
Sometime around 2020 someone observed that no current Linux can boot without /usr anyway. So what did they do? Move everything from /usr to / and drop the whole /usr legacy? Noooo, that would be too simple. Move / to /usr. And because that is still too simple, also move /bin, /sbin and /usr/sbin to /usr/bin, and then keep symlinks at the old locations because who's gonna fix hardcoded paths in 99% of all Linux apps anyway??
Oh, how I wish I was born in the '60s, when the world was still sane.
> Oh, how I wish I was born in the '60s, when the world was still sane.
As one who was, I find it makes the current world even harder to accept. Be careful what you wish for.
/ has to be writeable (or have separate writeable mounts under it), /usr doesn't. The reasons for unifying under /usr are clearly documented and make sense and it's incredibly tedious seeing people complain about it without putting any effort into understanding it.
Documented where?
2 replies →
If you had been born in the 1960s, you might well have learned by dint of being alive at the time that the world underneath /usr was pretty complicated in the 1970s, 1980s, and 1990s; that /etc was where some of the things that were used to boot the system once went; and that the tale of sbin is complex and slightly sad.
The tale that things were simple until they went to pot in 2000 is wholly ahistoric.
This is BusyBox, not general Linux distros.
busybox on Alpine Linux has, for example, `ps` built in. If you install the full version with `apk add ps`, it will remove the /bin/ps symlink and replace it with the one you installed.
You need to read up on the purpose of busybox. It is not something the kernel people decided upon. It is an initiative of a group of people who needed to fit some tools onto a single floppy.
/bin/ps on a Debian distro is 154522 bytes. The whole busybox in Alpine Linux is 804616 bytes and contains a whole lot more than just ps.
https://en.wikipedia.org/wiki/BusyBox https://busybox.net/
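The applet mechanism itself is simple enough to model in shell (a toy sketch, not BusyBox's actual C implementation): one binary that dispatches on the name it was invoked under, with symlinks providing the names — just as `/bin/ps` is a symlink to the busybox binary on Alpine.

```shell
# Toy model of the busybox applet trick: one "binary" that looks at
# how it was invoked ($0), plus symlinks giving it multiple names.
dir=$(mktemp -d)
cat > "$dir/multibin" <<'EOF'
#!/bin/sh
case "$(basename "$0")" in
  hello) echo "hello applet" ;;
  bye)   echo "bye applet" ;;
  *)     echo "unknown applet" >&2; exit 1 ;;
esac
EOF
chmod +x "$dir/multibin"
ln -s multibin "$dir/hello"
ln -s multibin "$dir/bye"

"$dir/hello"    # prints "hello applet"
"$dir/bye"      # prints "bye applet"
```

Replacing an "applet" with a full version is then just swapping one symlink for a real binary, which is exactly what `apk` does.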
No, it's not (just) Busybox. Quotes from Gentoo: https://wiki.gentoo.org/wiki/Merge-usr
> merge-usr is a script which may be used to migrate a system from the legacy "split-usr" layout to the newer "merged-usr" layout as well as the "sbin merge".
> It is required for systemd ≥255 due to changes upstream, but it remains optional for other init systems.
The next logical evolution is to get rid of directories and put everything in /. This will simplify a lot of the build process. /usr/include and /usr/lib are already a mess (on linux).
> So what did they do? Move everything from /usr to / and drop the whole /usr legacy? Noooo, that would be too simple.
It's a lot simpler to merge them in a directory that can be mounted across multiple machines than have four separate mountpoints.
Mount-points were key to early history of the split. Nowadays it's more about not breaking shebangs.
Nearly every shell script starts with "#!/bin/sh", so you can't drop /bin. Similarly, nearly every python script starts with "#!/usr/bin/env python", so you can't drop /usr/bin.
Hence symlink.
And you haven't even touched upon paths used by Snap and Flatpak.
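The compatibility layer is easy to see in miniature (a sketch using throwaway paths under mktemp, not a real root filesystem): a single relative `bin -> usr/bin` symlink keeps every hardcoded `/bin/...` path resolving, shebangs included.

```shell
# Miniature merged-/usr layout: everything lives in usr/bin, and bin is
# just a relative symlink, so old hardcoded paths still resolve.
root=$(mktemp -d)
mkdir -p "$root/usr/bin"
ln -s usr/bin "$root/bin"          # the compatibility symlink

printf '#!/bin/sh\necho ok\n' > "$root/usr/bin/demo"
chmod +x "$root/usr/bin/demo"

"$root/bin/demo"                   # resolves through the symlink, prints "ok"
readlink "$root/bin"               # prints "usr/bin"
```

On a real merged-/usr system the same trick covers /bin, /sbin, and /lib* at once, which is why scripts starting with "#!/bin/sh" never notice the move.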
Some people think today's file hierarchy is complicated. That's amusing.
I worked at an R&D center where we had hundreds of UNIX systems of all types (Sun, Ultrix, HP, Symbolics, etc.). We also had Sun 2s, 3s, and 4s, each with different CPUs/architectures and incompatible binaries. Some Suns had no disks at all. And with hundreds of systems, we literally had a hundred different servers across the entire site.
I would compile a program for a Sun 3, and needed a way to install the program once, for use on hundreds of computers. Also teams of people on dozens of different computers needed to share files with each other.
This was before SSH. We had to use NFS.
It was fairly seamless and .... interesting.
On a similar note: just the other day I was thinking about how the Unixy systems I used 20+ years ago used to nudge/push you toward creating several actual partitions during installation. Maybe /, /usr, swap… maybe one or two more? IIRC, I think some of the BSDs, at least, maybe still do? Always seemed weird and suboptimal to me for most installations, but I remember being told by graybeards at the time that it was the Right Way.
I have always made /home a separate partition. This makes it so much easier to reinstall and/or wipe out a distro and install a new one. All of my files are left undisturbed.
Still makes sense to prevent overruns, right? I.e., /home can't take down the whole system just because you torrented too many Debian ISOs and blew out your disk.
Same for /var, or wherever you store your DB tables, like MySQL's.
The inverse is also true: you cannot download a 60 GB game because the partition is too small, even if there is enough cumulative free space available.
Ah, yeah, that makes sense, thanks. My experience as "sysadmin" has largely been from the standpoint of personal systems for which that has mostly not been a big concern for me.
This is much better solved by quotas which can be adjusted on the fly without rewriting your partition tables.
Ironically, using "modern" filesystems like ZFS or Btrfs you can do that, if they are on the same disk.
1 reply →
I think that is still the recommended way? The Debian GNU/Linux installer definitely does it by default. Even MS Windows does nowadays.
> I'm still waiting for /opt/local to show up...
Well...
:'-)
And funnily enough, only one file there.
Why? I don't know. But I do need libgeos.
1. The title says “understanding sbin” but the content gives zero understanding of that. If someone has a historical explanation, please provide it.
2. “Then somebody decided /usr/local wasn't a good place to install new packages, so let's add /opt”
Not exactly. /usr/local exists so you don’t accidentally mess up your distro/package manager by changing its files. It’s “local” to your installation. But it is still structured — /usr/local/bin, /usr/local/lib, etcetera — divided into binaries, shared libraries, manpages.
Whereas /opt has no structure. It’s “the wild west”…application binaries, libraries, configuration, data files, etcetera with no distinction. Apps with “universal” packaging, or sometimes secondary package managers.
For example /usr/local/bin is normally part of PATH, but /opt is not (unless eg homebrew adds it to your bashrc).
What do you mean?
I mean the article doesn’t explain sbin. The author symlinks it to bin but doesn’t explain why it exists.
1 reply →
Does anyone know why, when Lennart and friends wrote their XDG Base Directory Specification, they decided that each user should replicate /usr/local/ subdirectories under $HOME/.local/?
Doesn't being under $HOME make .local redundant? I guess one could argue for binaries going in an architecture-specific subdirectory if $HOME was on a shared filesystem, but that's not what's being done here.
To me, $HOME/.local/share and its siblings are just a needless level of indirection, forcing me to jump through an extra hoop every time I want to access what's in there.
(I know it's sometimes possible to override it with an environment variable, but the predictably spotty support for those overrides means I would then have to look for things in two places. I think sensible defaults would be nicer.)
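For reference, the spec's lookup rule is just "use the variable if set and non-empty, else a fixed default under $HOME", which is trivial to express in shell:

```shell
# XDG Base Directory defaulting rule: each variable falls back to a
# fixed path under $HOME when unset or empty (":-" covers both cases).
data_home=${XDG_DATA_HOME:-$HOME/.local/share}
config_home=${XDG_CONFIG_HOME:-$HOME/.config}
cache_home=${XDG_CACHE_HOME:-$HOME/.cache}
state_home=${XDG_STATE_HOME:-$HOME/.local/state}

echo "$data_home"
```

Which is also why the overrides are "spotty": every application has to apply this fallback itself, and many hardcode the defaults instead.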
> Does anyone know why, when Lennart and friends wrote their XDG Base Directory Specification,
It is a Microsoft thing. You must pollute the user's home as much as you can. Can I mention that I have 3 daemons on my computer responsible for... credentials? This is the way to go.
Dunno the historical reason but I sure as heck find it nice to know without ambiguity that the folder called "share" corresponds to that special directory and isn't a random folder in my home directory for files that were intended to be e.g. shared with someone.
Sure, but I think the better choice for $HOME/.local/share would be $HOME/.share, not $HOME/share
This would match the more recent $HOME/.var that's in widespread use via Flatpak.
The ~/.local prefix is so that you don't pollute the home directory too much.
That doesn't align with their choice of $HOME/.cache (to which users need to navigate much less frequently than $HOME/.local/share), nor with how few items $HOME/.local typically saves from landing in $HOME, nor with the normally hidden state of everything starting with a dot.
So if that was their reasoning, it reinforces my view that they didn't think their design through very well.
This is also around the same time Vista started enforcing the AppData and ProgramData folder redirect. It was a messy time for all developers. IMHO they made the right decision by enforcing that redirect as we now know where every file should be in a Windows program.
This is low-effort fantasy history. It may be directionally correct, but why bother when you don't care about the details? From analyzing the UNIX manuals and other old files we get the following (not fully complete) picture:
We'll skip PDP-7 UNIX, no hierarchical file system yet.
UNIX v1 on the PDP-11 had an RF11 fixed-head disk (1 MB) for / and swap, and an RK05 moving-head disk (2.5 MB) for /usr (the user directories).
By v2 they had added a second RK05 at /sys for things like the kernel, manual pages, and system language stuff like the c compiler and m6.
By v3 they added yet another RK05 at /crp for, well, all sorts of crap (literally), including yacc apparently. /usr/bin is mentioned here for the first time.
I don't feel like looking up when sbin was first introduced, but it is not a Bell Labs thing; possibly BSD or AT&T UNIX? Binaries that one would normally not want to run were kept in /etc, which included things like init, mount, umount, and getty, but also the second pass of the assembler (as2) and helpers like glob. I also don't know when /home became canonical, but at Bell Labs it was never a thing (Plan 9 has user directories in /usr, where they had always belonged logically).
The lib situation is more difficult. Looks like it started with /usr/lib. By v3 we find the equivalent directory as /lib, where it contains the two passes of the C compiler (no optimization pass back then), C runtime and lib[abc].a (assembler, B, C libraries respectively). /usr/lib had been repurposed for non-object type libraries, think text-preparation and typesetting.
By v4 the system had escaped the labs (see the recent news) and at that point everyone modified the system to their taste anyway. Perhaps it should be noted that the v7 distribution (which is the first that is very clearly the ancestor of every modern UNIX) has no /usr/bin, only /bin. /lib and /usr/lib are split however.
These are just some rough notes, and due to a lack of early material they're still not as accurate as I would like. Also, UNIX ran on more than one machine even in the early days (the manuals mention the number of installations), so there must have been some variation anyway. Something I'd like to know in particular is when and where RP03 disk drives were used. These are pretty huge in comparison to the cute RK05s.
I've always heard /sbin contained only static binaries, so it seems likely the distinction would have grown out of BSD.
I am also totally adding a /crp directory to my next system.
I've only used modern immutable Linux (Alpine, MicroOS) and wondered why of all places `/var/` was chosen as the location for rw stuff. It's fun to be reminded that there was of course a time when an immutable OS was the default, and you ran it off of floppies. So there's a lot of history to using `/var/` for that. Guess we've come full circle!
Was it immutable? I thought it was just different storage types, like you'd have a smaller disk for the root stuff and then make var on a larger disk. I'm surprised that, having something immutable, you'd choose to go the other direction.
Good point; considering the nature of floppies, I suppose it technically needn't have been immutable. But I feel it would have been wise to mount your OS root read-only to prevent yourself from accidentally ruining your (possibly only) copy of the OS. At least before you had a reasonably sized hard drive.
> I'm surprised that, having something immutable, you'd choose to go the other direction.
I can somewhat imagine that having been limited by space and having to swap out disks all the time one would jump on the train of mutability without fully appreciating the benefits of immutability.
Also, my belief re /bin vs /sbin is that the latter is meant for statically linked binaries, such that if your system is corrupted, these at least will keep working.
Practically in this century if I was starting a new OS I would set it up like so:
/bin for all system binaries. Any binary from a package installed by the OS package manager lived here.
/lib same but for shared libraries
/var for variable data. This is where you would put things like your Postgres data files.
/tmp for temporary files.
/home as usual.
/dev as usual.
/boot as usual
/etc as usual
/usr would be what /usr/local is on most systems. So /usr/bin is binaries not installed by the OS package manager. /usr/etc is where you put config files for packages not installed by the package manager and so on.
Get rid of /usr/local and /sbin.
/media replaces /mnt entirely (or vice versa).
Ditch /opt and /srv
Add /sub for subsystems: container overlays should live here. This would allow the root user (or a docker group, etc.) to view the container file system, chroot into it, or run a container on it.
Then again, nobody gave me a PDP-11 to decide so my vote doesn’t count :)
> Also /bin vs /sbin believe is that the latter is meant for statically linked binaries such that if your system is corrupted these at least will keep working.
My understanding is that sbin is for system binaries, not necessarily statically linked ones. Normally /sbin is only in root's PATH, not normal users'. They are likely world-executable, but in many cases you cannot actually run them as non-root, since they usually touch things only root can access (e.g. raw devices, privileged syscalls, /etc/shadow, etc.). Not always, though; you can run /sbin/ifconfig as a normal user in read-only mode.
The s in sbin stood for static initially. Of course nowadays this is not enforced.
2 replies →
> /var for variable data. This is where you would put things like your Postgres data files.
This one never sat well with me. I think of /var as temporary data, something I can lose without much consequence. But never data files. I know it's the default, but still.
/srv I like because it seems like a proper place to separate server-related data, i.e. /srv/wwwroot or similar. But if you like /var, that of course would be the place for this type of data.
No. Temporary data is /var/tmp or /tmp. The difference: /var/tmp should survive a reboot. /tmp might be lost on reboot.
/var is data that needs to be writable (/usr/*, /bin and /lib may be readonly), and that might be important. Like databases, long-term caches, mail and printer queues, etc.
> Also /bin vs /sbin believe is that the latter is meant for statically linked binaries such that if your system is corrupted these at least will keep working.
I think that became the rationale for /[s]bin vs. /usr/[s]bin (although based on the linked article, that may have been retconned a bit).
You were supposed to keep your root/boot filesystem very small and mostly read-only outside major updates. That meant that you could boot to a small amount of utilities (e.g. fsck) that would let you repair /usr or any other volume if it became corrupted.
I think the other poster is correct that stuff like fsck is supposed to go into /sbin because it is a "system" binary (but also statically linked since /usr/lib isn't mounted yet) and doesn't make sense to have in user $PATHs since nobody other than root should really be running that one.
Regardless, this is all deeply meaningless these days, particularly if you are running "ephemeral" infrastructure where if anything goes that wrong you just repave it all and start over.
> Ditch /opt and /srv
If a system is intended to serve data on a network (file shares, databases, websites, remote backups, etc), /srv is where the requisite data for those things should live. I think that's a good idea.
What is the difference from /var for databases, websites, etc. and /media for backups?
2 replies →
Basically: https://sta.li/filesystem/. Arguably /usr shouldn't exist, because rather than polluting the system with unmanaged installations, you should be making a package and installing it with the package manager.
I used to package a lot of my stuff as Debian packages and it is a process that takes an hour or three for most packages. I really liked it and would have loved to be able to do that as just a normal way to distribute everything but it just is a little too much overhead. A shame, really, since once you get it working it is way nicer than any Docker setup you can think of.
If I was starting a new system layout, I wouldn't have every package smush its files together with everyone else's into a single shared directory hierarchy. /opt would reign supreme, and we already have pkg-config to deal with that sort of layout.
Why not call it /local instead of /usr?
Along p_ing's lines I'd rename /var to something else, possibly not /srv because it's not just for servers, but it could be /data
/srv is for services. Which is weird because so is /var. The choice between /var/lib/postgresql and /srv/postgresql is arbitrary to me. Except in /var you can also have things like /var/cache, /var/tmp, and so on.
What do you suggest to replace /usr/share, /usr/man, /usr/lib, etc.?
Good point. It isn't exactly variable data, in that it only changes on package updates. But it isn't user data either, so it doesn't belong in /usr. I guess that one should have gone in /lib/share and /lib/man. Or it could be a brand-new directory called /doc which contains documentation-type content.
/usr/lib contents belongs in /lib if the contents is installed by the package manager. That way /usr/lib is what the user installed.
Rob Landley is right. Others realised this too - see GoboLinux. But some regular distributions too.
Slapping it down as "FHS is now a standard" does not change anything. People will ask why it is suddenly a standard when it hasn't made any sense at all whatsoever. bin versus sbin is also pointless. Inertia is one primary reason why nobody usually fixes things.
Ha! TIL. Funny and informative post.
Speaking of things which are needlessly complex, I'm reminded of this classic post on the tortured history of the browser User-Agent header:
https://webaim.org/blog/user-agent-string-history/
Highly recommended!
One complication caused by shared libraries was the security threat. An executable using a shared library allowed the user to run it against a different (updated) library without recompilation.
This is a security threat, especially with SETUID programs. If you could change the library, you could install new code and gain privileged access.
This was why /usr/sbin was created - all of the programs there were compiled with static libraries.
The reason you didn't just drop stuff in /usr/local? Space.
One of our devs was also a gimp contributor, and he dropped gimp into /usr/local and filled up the filesystem. And back then package managers didn't exist, so you had to read the makefile and hope you didn't remove anything that was shared
/opt/gimp or /usr/local/gimp.
"Local" because in some places /usr was mounted from an NFS share, and /usr/local was local to your machine.
I am going to use this story in place of the "Pot Roast Principle" [0]
[0]: https://www.psychologytoday.com/us/blog/thinking-makes-it-so...
Is there a mainstream distro that disregards all the legacy cruft? Gobo, but that’s not really mainstream.
Mac OS?
NixOS and Guix are fairly established in this regard.
macOS is certified Unix, and necessarily implements the "legacy" cruft.
I had written a similar comment here asking for people's opinion but I would like to add something that I know about which I didn't see in your list
Tinycorelinux
I know that it doesn't follow best practices etc., but I did find its tcz package format fascinating because packages kind of work like mountable drives. I am not exactly sure, but I am fairly certain this enables a modern package-management setup where two or more conflicting packages can run on the same system.
I really enjoyed the idea of GoboLinux as well. I haven't played with it, but it would be good if some more mainstream OS could also implement it. Nix and Guix are more mainstream, but they also require learning a new language. I think we might need something in the middle: like Gobo but more mainstream, or with more ideas and additions on top. I would love it if someone could tell me about projects we are missing here and what they bring to the table.
I haven't tried Gobo though so I am not sure but I really wish more distros could add features like gobo, perhaps even having a gobofied debian/fedora eh?
at some point we gotta let go of legacy stuff tho, and Apple has shown in the past that they're not afraid of doing that.
It was partial, but IIRC Arch Linux made the switch to get rid of at least some of the directories previously: https://news.ycombinator.com/item?id=5944594
I think most of them have started simplifying somewhat (/bin vs /usr/bin): https://systemd.io/THE_CASE_FOR_THE_USR_MERGE/
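A toy illustration of what the /usr merge looks like on disk, built in a temp dir rather than inspecting the real root filesystem (on an actual merged distro you'd see the same shape with `ls -l /bin`):

```shell
# Simulate a merged root: usr/bin is the real directory, /bin is a
# relative symlink to it, so both paths name the same files.
fakeroot=$(mktemp -d)
mkdir -p "$fakeroot/usr/bin"
printf '#!/bin/sh\necho hello\n' > "$fakeroot/usr/bin/hello"
chmod +x "$fakeroot/usr/bin/hello"

# The merge itself: /bin becomes a symlink to usr/bin.
ln -s usr/bin "$fakeroot/bin"

"$fakeroot/bin/hello"        # prints: hello (runs via the symlink)
readlink "$fakeroot/bin"     # prints: usr/bin
```

Because the symlink is relative, the layout keeps working even if the root is mounted somewhere else (e.g. in a chroot or rescue environment).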
(FWIW, Fedora 17 was released in 2012.)
Plan 9 is definitely not mainstream but readers of your comment's replies may find it interesting when looking into Unix/Linux cruft.
macOS has all of that (mostly inherited from NeXTSTEP which was significantly based on 4.3/4.4BSD). It's hidden by default in the GUI, visible in Terminal.
Nowadays most end users just use /usr/local or /opt/local or whatever is managed by Homebrew or Macports.
Not really. I wish we had a new OS based on the Linux kernel but without the legacy (shared files, a read/write-mounted OS, etc.). I think Google's Fuchsia has some interesting ideas.
I wrote an article to remind myself which bin directory I prefer and why.
https://joeldare.com/where-i-put-personal-binaries-in-macos
Nowadays I think packages should turn into portable applications isolated within their own directories. Each of those directories would have a standard libraries directory that the application would use.
Later, if desired, the system could override those libraries with other ones (newer compatible versions, or patched ones); more thought is needed about this. The key, from the process's point of view, would be to limit its access to its own directories and a few very limited local system services by default.
To extend these permissions, each binary in such a directory would need to be accompanied by a permissions-request file requiring approval from the user or from the system's default policies (each distro would have its own take, I guess), with the aim of improving process isolation and control over access to the system, drivers, and services.
This would also require rethinking the console philosophy, how processes are managed, and so on; a big restructuring.
I mean, people are already duplicating space with containers trying to isolate processes; emphasis on trying.
I know this is unrealistic due to the deep changes it would require, so consider that I'm just thinking out loud.
PS: If your answer is that this already exists with AppArmor, SELinux, etc., then you did not understand the root of the issue with such modules.
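A purely hypothetical sketch of the per-app directory plus permissions-request file described above; none of this is an existing format, and every field name is invented:

```shell
# Invented per-app layout: one versioned directory per application,
# with a permissions-request file next to its binaries.
approot=$(mktemp -d)/apps/exampletool/1.2.0
mkdir -p "$approot/bin" "$approot/lib"

# Hypothetical manifest: the app declares what it wants, the user or
# distro defaults decide whether to grant it.
cat > "$approot/permissions.request" <<'EOF'
# Requested by the app, approved by the user or distro policy.
binary     = bin/exampletool
network    = none
filesystem = own-directory-only
services   = local-clock, local-dns
EOF

cat "$approot/permissions.request"
```

Something in this spirit is roughly what Flatpak manifests and Android's permission declarations already do, which is why the comparison to Android below comes up.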
Honestly the first half of that is just describing NixOS
The second half is more or less Android. iOS isn't terribly different in that respect either.
Plan 9 (a Bell Labs successor to Unix) did away with the whole bin, sbin, usr/sbin thing and its shell only looked in /bin. How things got into /bin is a different story.
For me this was an eye-opener. I kept trying to wrap my head around all these different paths and "standards" because I thought it was all correct and deliberately designed. Looking back through the history, that doesn't seem to be the case; I feel much better now about having been confused by all the different PATH conventions and strict hierarchies.
Reason: because it’s always been that way.
Additional info: many rules from many places are now in force that maintain the historical structure.
and /local/ to top off the confusion
Obligatory XKCD Standards https://xkcd.com/927/
When I hear people complaining about this sort of thing, I want to say, “Just go and invent your own, then.”
But then you get things like Esperanto. Esperanto takes about 1/4 of the time to learn compared to other languages. It's taught in China and used as a primary language in some settings. But, aside from a large number of people learning some Esperanto from Duolingo several years ago, it's just another language you now have to learn.