
Comment by sigio

4 months ago

As someone who logs into hundreds of servers in various networks, for various customers/clients, there is very little value in using custom tooling, as it will not be available on 90% of the systems.

I have a very limited set of additional tools I tend to install on systems, and they are in my default ansible-config, so they end up on systems quickly, but I try to keep this list short and sweet.

95% of the systems I manage are debian or ubuntu, so they will use mostly the same baseline, and I then add stuff like ack, etckeeper, vim, pv, dstat.

This is another reason emacs-as-an-OS (not fully, but you know) is such a great way to get used to the things you actually have on systems. Hence the quote: "GNU is my operating system, linux is just the current kernel".

As a greybeard linux admin, I agree with you though. This is why, when someone tells me they are learning linux, the first thing I tell them is to just type "info" into the terminal and read the whole thing; that will put them ahead of 90% of admins. What I don't say is why: knowing which well-documented built-in tools are available to script around is basically the linux philosophy in practice.

Of course, we remember the days when systems only had vi and not even nano was a default, but since these days we do idempotent ci/cd configs, adding a TUI editor of choice should be trivial.

  • > we remember the days where systems only had vi and not even nano was a default

    What are you talking about? I'm still living those days in modern day AWS with latest EC2 machines!

"servers" is the key word here. Some of the tools listed on that page are just slightly "improved" versions of common sysadmin utilities, and indeed, those are probably not worth it. But some are really development tools, things that you'd install on the small number of machines where you do programming. Those might be.

The ones that leap out at me are ripgrep (a genuinely excellent recursive grepper), jq (a JSON processor - there is no alternative to this in the standard unix toolkit), and hyperfine (benchmarking).
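To illustrate the jq point: extracting a field from JSON with the standard toolkit means a fragile grep/sed pipeline, while jq parses the structure directly. A minimal sketch (the input document and field names here are invented):

```shell
# List the names of hosts marked "up" in a JSON document.
# A grep/sed pipeline would break on reordered keys or whitespace;
# jq understands the structure. (Input and field names are made up.)
printf '{"hosts":[{"name":"web1","up":true},{"name":"db1","up":false}]}' |
  jq -r '.hosts[] | select(.up) | .name'   # prints: web1
```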

Is there any tool or ssh extension that would bring these apps into the remote session?

Is something like that possible? Seems like you could conceivably dump these small, single-binary tools into a temp folder and use them, and that could be automated.

Is there a security issue with that? Do any of these tools need more permission than the remote session would have?

Maybe the main issue is portability of these apps?

This is certainly a common sentiment (I've felt it myself), so is it at all possible?
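One low-tech version of the temp-folder idea is to push static binaries with scp and prefix PATH for the session. The host, user, and paths below are hypothetical, and the runnable part only simulates the PATH-prefix mechanism locally:

```shell
#!/bin/sh
# Sketch of "dump small static tools into a temp folder and use them".
# On a real remote host you would first push the binaries, e.g. (hypothetical):
#   scp ~/static-bin/rg ~/static-bin/jq user@host:/tmp/mytools/
# and then start a login shell with PATH prefixed:
#   ssh -t user@host 'PATH=/tmp/mytools:$PATH exec $SHELL -l'
# Below, the same PATH-prefix mechanism is demonstrated locally.
set -eu

TOOLDIR=$(mktemp -d)

# Stand-in for a static binary that was copied over
cat > "$TOOLDIR/hello-tool" <<'EOF'
#!/bin/sh
echo "running from a temp tooldir"
EOF
chmod +x "$TOOLDIR/hello-tool"

# Prefixing PATH makes the imported tool resolvable by name
env PATH="$TOOLDIR:$PATH" hello-tool   # prints: running from a temp tooldir

rm -rf "$TOOLDIR"
```

Nothing here needs more privilege than the session already has; the tools run as the logged-in user.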

  • If you have enough privileges to mount a filesystem on login, that would be one way to do it. If that process requires significant time or extra steps then you probably ought to make that a manual step. I don't think there is a security issue with this approach from the user's perspective, since it is their tools being executed. But if you are an administrator you might have grave objections to allowing random binaries and scripts to be imported into the environment with no audit trail.

As someone who works back and forth on Windows and Linux all day, it's handy to have excellent cross-platform tools like ripgrep.

What's the relevance of these "as someone who ..." posts? Nobody cares that these tools don't happen to fit into your carefully curated list of tools that you install on remote computers. You can install these on your local computer to reap some benefits.

You're again confusing this website with your personal email inbox. This is a public message board; the messages you see weren't written for you specifically - including this blog post.