
Comment by emteycz

4 years ago

Yeah, except that tells me nothing useful... The question is exactly the same: So where do I install this random binary I downloaded from the internet or compiled myself? Is it /opt, /usr/bin, /usr/local/bin, or /bin? Where do I put the dependencies I compiled for this software - /usr/lib, /usr/local/lib, /lib, /opt/lib, /opt/<app name>/lib, or what?

I was taught /usr/local/bin

/opt is for standalone packages, so if it’s a single file, no.

/bin is only for stuff needed in single-user mode, so probably not (unless that’s what the binary is for).

/usr/bin typically contains files installed by your package manager and should probably be left unaltered by human hands.

The deps I would assume /usr/local/lib but it hasn’t ever come up for me.

To add: when you install software yourself, you choose this; when you install software from e.g. a distribution package, it is chosen by the package maintainers, and to a larger extent by the maintainers of the distribution.

This is one of the big advantages of using a ready-made Linux distribution: beyond the convenience of having an installer or easy-to-install packages, you get some assurance that the system as a whole has been thoughtfully put together.

Arch Linux for example symlinks /bin and /sbin to /usr/bin and /lib to /usr/lib among other things.

Is your account the only account that's expected to run the binary? If so, then `$HOME/bin` is a perfectly acceptable (albeit not standard) place to put it.

If you expect other users to be able to execute the program, then you should put it in either `/usr/bin` or `/usr/local/bin`, depending on whether the former is already being used by a package manager. `/opt` is generally for self-contained software that doesn't play nicely with the rest of the system, but might still be installable through the default package manager.
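If you do go the /usr/local/bin route, the classic tool for copying a binary into place with the right ownership and mode is install(1). A minimal sketch, using a hypothetical binary name `mytool` and a staging directory so the same command can be tried without root (GNU install's `-D` creates missing parent directories):

```shell
# Stand-in for the binary you downloaded or compiled.
printf '#!/bin/sh\necho hello\n' > mytool
chmod +x mytool

# System-wide it would be:  sudo install -Dm755 mytool /usr/local/bin/mytool
# The same command against a staging tree, tryable without root:
install -Dm755 mytool ./stage/usr/local/bin/mytool

./stage/usr/local/bin/mytool   # prints: hello
```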

  • $HOME/.local is the equivalent of /usr/local for per-user stuff.

    • I don’t think there’s any “official” word on that (the XDG spec that defines ~/.local/share doesn’t mention ~/.local/{bin,lib} IIRC, and the traditional per-user entry in PATH seems to be ~/bin), but a fair number of people use it this way, yes, including me.

  • I started out using $HOME/bin, but a fair amount of stuff assumes a /usr- or /usr/local-style folder structure when doing make install, so I've settled on using $HOME/usr/bin instead, so that programs can create $HOME/usr/include and $HOME/usr/share and whatever, without trampling on stuff in my home folder.

    Can't remember the last time I had a problem arranging this. If using autotools, which covers 95+% of stuff, it's usually a question of something like "./configure --prefix=$HOME/usr".

    (If I want to share stuff between users, /usr/local/ is of course a better place. macOS is a bit more restrictive, so I have a separate user for this, whose /usr folder is readable by everybody.)

    • > $HOME/bin

      On freedesktop systems there's the ~/.local directory which is supposed to be a mirror of the file system hierarchy. Seems like a good place for bin, lib, include directories.

    • Yeah, it definitely gets hairier when using anything that's more than just a drop-in binary.
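The per-user-prefix arrangement described above boils down to two pieces: pointing the build system at the prefix, and putting the prefix's bin directory on PATH. A sketch (the CMake line is offered as a rough equivalent for non-autotools projects):

```shell
# One-time setup, e.g. in ~/.profile:
export PATH="$HOME/usr/bin:$PATH"

# Per project, for autotools:
#   ./configure --prefix="$HOME/usr" && make && make install
# Rough CMake equivalent:
#   cmake -DCMAKE_INSTALL_PREFIX="$HOME/usr" .. && cmake --build . --target install
```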

Follow your distribution. For example, Arch Linux provides PKGBUILDs for the official repos and the AUR. Most of the time someone has already published a PKGBUILD, but if not I just patch accordingly.

And the conditions that created the separation are long gone; Arch Linux symlinks most of it:

    /bin -> /usr/bin
    /sbin -> /usr/bin
    /usr/sbin -> /usr/bin

    /lib -> /usr/lib
    /lib64 -> /usr/lib
    /usr/lib64 -> /usr/lib
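You can check how far your own distro has merged these with readlink; a sketch (the first command's output varies by distro, and the temp-directory part just reproduces the shape of such a symlink portably):

```shell
# On a merged-/usr system (e.g. Arch) this prints /usr/bin;
# on an unmerged one it prints /bin.
readlink -f /bin

# The same shape, reproduced in a scratch directory:
tmp=$(mktemp -d)
mkdir "$tmp/usr"
ln -s usr "$tmp/bin"
readlink "$tmp/bin"   # prints: usr
```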

The standard is, indeed, excessively vague because it was written to let many existing implementations be conformant as is, though I’d say it’s still more helpful than many other standards with that deficiency. There’s a method to it, however:

- Things installed in /, if it’s different from /usr, are generally not to be touched;

- Things installed in /usr are under the distro’s purview or otherwise under a package manager, any modifications are on pain of confusing it;

- Things installed in /usr/local are under the admin’s purview: unmanaged one-offs. There are always some, but overuse leads to anarchy;

- Things installed in /opt are for whatever is so foreign and hopeless in not conforming to the usual factoring that you just give up and put it in its own little padded cell (hello, Mathematica);

- Everything is generally configured using files in /etc, possibly with the exception of some of the special snowflakes in /opt; the package manager will put config files meant to be edited there and expect the admin to merge any changes in manually, and sometimes put default settings meant to be overridden by them in /usr/share (see below)—both approaches can be problematic, but the difficulty is with migrating configuration in general, not the FHS as such.

There used to be additional hierarchies like /usr/X11R6, and even a /usr/etc on some (non-Linux?) systems, but AFAIU everyone agrees their existence makes no sense (anymore?), so much that even FHS doesn’t lower itself to permitting them.

The distinction between / and /usr might appear to be pointless as well, and nowadays it might be (some distros symlink them together), but previously (especially before initial ramdisks were widespread) stuff in / was whatever was needed to bring up the system enough that it could netmount a shared /usr.

Inside each of /, /usr and /usr/local there is bin for things that are supposed to be directly executable, whether binary or a script and all in a single place; share and lib for other portable and non-portable (usually but not necessarily text and binary) shared files, respectively, segregated by application or purpose; finally, due to the dominance of C ABIs and APIs on Unices, the top level of lib also hosts C and C++ library files and there’s an additional directory called include for the headers required to use them. Some people also felt that putting auxiliary executables (things like cc1, the first pass of the C compiler) inside lib was awkward so they created libexec for that purpose, but I don’t think the distinction turned out to be particularly useful so not all distros maintain it.
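Put together, installing a hypothetical package `foo` according to that scheme would scatter its files like so (a sketch of the factoring, not any particular distro's layout):

```
/usr/local/bin/foo              user-facing executable, on PATH
/usr/local/lib/libfoo.so        non-portable (binary) shared files; C/C++ libraries
/usr/local/include/foo.h        headers needed to build against libfoo
/usr/local/share/foo/           portable (usually text) data files
/usr/local/libexec/foo-helper   internal executable, not meant for PATH
/etc/foo.conf                   configuration, edited by the admin
```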

That’s it, basically. There are subtler but logical points (files vs subdirectories in /etc) and things people haven’t found an obviously superior solution for (multilib and cross environments), and I made no attempt to be historically accurate (the original separation of / and /usr happened for intensely silly reasons), but those are the fundamental principles of the system, and I feel it does make sense as a coherent implementation of a particular design. Other designs are possible (separation by application or package not purpose, Plan 9-ish overlays, NixOS’s isolated environments), but that’s a discussion on a different level; the point is that this one is at the very least internally consistent.

Re the unfriendly names ... I honestly don’t know. Newbie-friendliness matters, but it’s not the only thing that does; particularly in a system intended for interactive text-mode use, concise names have a quality of their own. There’s a reason I’m more willing to reach for curl and jq rather than for httpx and lxml, for regular expressions rather than for Parsec, and even for cmd.exe, as miserable as it is, rather than for PowerShell.

I feel weird that no HCI people seem to have seriously considered the tension between interactive and programmatic environments and what the text-mode user’s experience in Unix says about it, but even Tcl, which is in many ways a Bourne shell done right, loses something in casual REPL use when it eliminates (as far as idiomatic libraries are concerned) short switches. Coming up with things like rsync -avz or objdump -Ctsr is not very pleasant initially, but I certainly wouldn’t want to type out the longhand form that would be the only possible one in most programming languages (even if I find their syntax beautiful, e.g. Smalltalk/Self).

  • >the original separation of / and /usr happened for intensely silly reasons

    As I recall, there were very good reasons for separating / and /usr (as well as /home and /var). The biggest one was that various Unix kernels would panic[0] if / was full. But that issue was almost universally fixed by 1990 or so.

    And netmounts of pretty much everything other than / were pretty common for many years, due to the high cost of storage.

    So no, the reasons weren't silly, they just don't apply to more modern systems.

    [0] https://en.wikipedia.org/wiki/Kernel_panic

    • OK, I didn’t put this completely correctly. The original separation of /usr to hold user home directories (!) and / to hold everything else was because the first RK05 disk ran out of space, but it makes sense in any case. The additional hierarchy under /usr was created some time later when space on the first RK05 disk ran out again, and while this can be a perfectly sensible decision for a single installation on a single site, taking it seriously decades later is silly. Neither does that mean that there weren’t good reasons the split got preserved in subsequent systems, just that they couldn’t have been the same as the original ones; there are no netmounts in V6, after all.

      (I have an old Unix intro book that describes /usr as user home directories, the rest is a second-hand retelling[1].)

      [1] http://lists.busybox.net/pipermail/busybox/2010-December/074...


  • Thank you for the thoughtful reply, the point about netmounting shared usr makes it much easier to understand.

> So where do I install this random binary I downloaded from the internet or compiled myself?

In your home directory.

Wherever you want. All of the above, or none. It really is up to you.

  • That's exactly the problem. This leads to mess. The Windows model of C:\Program Files\<app name> is much better.

    • But why are many Windows programs under C:\Windows\System32 then, if Windows has only a single model? Why aren't all Steam-provided (for example) games in a single location? Or, if they are, does Windows really have a single model?

      Yes, the Linux/POSIX model is confusing, but the split is to segregate administrative domains:

      - / and /usr are the domain of the distribution. As a user, you should never install there. The administrative group is root.

      - /usr/local is the domain of the machine admin. If the machine is yours to manage, you can install software there. The administrative group is staff.

      - /opt/$vendor is the domain of third-party vendors. Each vendor (like Steam, Eclipse, Arduino Studio) can get its own subdirectory and its own administrative user group.

      How would you achieve the same on Windows? How do you make sure the Adobe updater can only install new versions of CS, but not surreptitiously install a new (free!) spyware package under C:\Windows? How would you allow certain power users to share one Google Chrome installation, allow each of them to update it, but not let them install additional software system-wide?
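On the Unix side, the /opt/$vendor arrangement is enforced with ordinary group ownership plus a setgid directory. A sketch with a hypothetical vendor group `acme` (the root-only commands are shown as comments; the setgid mode itself can be tried in a scratch directory, using GNU stat):

```shell
# As root, once per vendor:
#   groupadd acme
#   mkdir -p /opt/acme
#   chgrp acme /opt/acme
#   chmod 2775 /opt/acme   # group-writable; setgid makes new files inherit the group

# The mode itself, demonstrated without root:
d=$(mktemp -d)
chmod 2775 "$d"
stat -c %a "$d"   # prints: 2775
```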

    • Okay, but what about ProgramData? I have enough programs that put their junk in there instead of Program Files, and others that make their own directories on the root of the drive (driver installers are really bad about this).

      I think the best model I've seen for consistent binary locations is the 'Applications' folder in Mac OS X, but it falls short as well by retaining /usr/bin and the rest of the Unix hierarchy elsewhere.

    • When you download a portable app (just a bare .exe), do you make a folder for it and drop it in program files? (quite possible, you'd just be unusual) If not, why does Windows get a free pass?

    • Except instead of config files, Windows has the registry.

      Also, as mentioned by the siblings to this comment, the 'mess' has a purpose, and is less messy than it appears.

      Want to manually install something? Into /usr/local it goes. Done.

      The only way to handle this that I've been really impressed with is Mac's "Applications" folder. Unfortunately, I dislike most other things about Mac.