Comment by kristopolous
1 year ago
It's the same mentality that brought us the git design: the easiest, least-typing options are rarely, if ever, the thing you actually want to do.
Instead, these invocations give cryptic messages, throw errors, or sometimes, even break things.
The most common and helpful things are hidden deep behind multiple flags and command-line arguments, in manuals that read more like dictionaries than guides.
I'm always at a complete loss as to how such decisions are made. For instance, "git branch -vv" is the useful output you would like to see every time; that should be "git branch". Why not make the current output "git branch -qq"? Is a humane interface too much to ask for? Apparently...
I know people defend this stuff, but as a senior engineer who's been in the programming pits for 30 years, I'll say it: they're wrong. Needless mistakes and confusion are the norm. We can do better.
We need to stop conflating elitism with fucked up design.
Yes, it is too much.
You have the wrong mentality, and I hope this can help make your life easier. Programs are made so that the simplest option is the base option. This is because there is a high expectation that things will be scripted, AND an understanding that there is a wide breadth of user preference. There's an important rule:
Customization is at the root of everything. We have aliases that solve most of the problems and small functions for everything else. You default to non-noisy output, showing only the __essentials__ and nothing more unless asked. Similarly, you do no filtering other than hidden files. This way, everyone can get what they want. Btw, this is why so many people are upset with the default options in things like fdfind and ripgrep.
For your problem with git, you have two solutions.
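A sketch of what those two solutions typically look like, a git alias or a shell alias (the names `br` and `gb` here are just examples, not anything standard):

```shell
# Option 1: a git alias, so `git br` runs `git branch -vv`
git config --global alias.br 'branch -vv'

# Option 2: a shell alias in ~/.bashrc or ~/.zshrc
alias gb='git branch -vv'
```

Either way, the verbose output is one short command away while the scriptable default stays untouched.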
The design isn't fucked up; it's that you don't understand the model. This is okay. You aren't going to learn it unless you read docs or books on linux. If you learn the normal way, by usage, then it is really confusing at first. But there is a method to the madness. Things will start making more sense once you understand the reasons for the design choices. (In a sibling comment I wrote about the abstraction to command patterns that makes the GP's confusion odd, because systemd follows the standard.)
Side note: if you try to design something that works for everyone, or for the average person, you end up designing something that is BAD for most people. This is because people's preferences are not uniformly distributed, so the average person is not representative of anyone in the distribution: in high dimensions, anything normally distributed concentrates its density in a shell away from the mean, while a uniform distribution has the same density all throughout.
You're fundamentally misunderstanding things.
If every person has the same point of confusion, then they are not the problem; the thing they're confused by is.
There are better ways to do things, and calling people naive for suggesting the obvious is the real problem.
And about your side note: no. For example, when people check out a branch, they want to track the remote branch 99.9% of the time. It should be the default.
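As a workaround today: `git switch foo` will already create a local branch tracking `origin/foo` when the branch exists on exactly one remote, and (assuming git 2.37 or newer) the push side can be made automatic too:

```shell
# Make `git push` of a new branch set up remote tracking automatically,
# without needing `git push -u origin <branch>` every time
git config --global push.autoSetupRemote true
```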
The default journalctl should show where things have failed, that's why people are invoking it.
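Getting "show me what failed since boot" out of today's tools takes flags; a sketch of the usual invocations:

```shell
# Messages at priority err and worse (err, crit, alert, emerg)
# since the current boot
journalctl -p err -b

# Or just list the units that have failed
systemctl --failed
```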
Also there's plenty of counterexamples that do these things. "ping host" genuinely pings the host. "ssh host" genuinely ssh's into the host.
You don't need to specify, say, the encryption algorithm, or that you want a shell, or pass something like "--resolve=dns" to choose hostname resolution... It has sensible defaults that do what most people intend.
Under the model you advocate, "ssh host" would simply open a random socket connection, and then you'd have to manually attach a virtual terminal to it and request the invocation of a shell separately, stacking each piece on top of the other, before you could log in.
This design could be defended in the same way: Some people do port mapping, tunneling, SOCKS proxies, there's too many use cases! How can we assume the user wants a shell? Answer: because they do.
Most things are reasonable like certbot, apt, tune2fs, mkfs, awk, cut, paste, grep, sort, so many reasonable things. Even emacs is reasonable.
But systemd and git are not, and the users are not the problem. Choices were made that are hostile to usability, and they continue to be defended in ways that are hostile to usability. Things like lex and yacc are inherently complicated, and there's nothing to be done there. Other things are intentionally complicated. Those can be fixed.
How? What do you filter for? Emergency? Critical? Error? Alert? (see `dmesg -l`). What's the default? Do you do since boot? Since a certain time? Since restart?
FWIW, I invoke it all the time for other reasons. I am legitimately checking the status. Is it online? Do I have it enabled? What's the PID? Where's the unit file (though I can run `sudo systemctl edit foo.service`)? What's the memory usage? When did it start? And so on. The tail of the log is useful, but it's not the point of status.
If I'm looking to debug a service I look at the journal instead. I hope this helps
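Those same status fields can also be pulled in machine-readable form; the property names below are standard systemd ones, while the unit name is just an example:

```shell
# One KEY=VALUE pair per line, suitable for scripts
systemctl show nginx.service \
  --property=ActiveState,MainPID,MemoryCurrent,ActiveEnterTimestamp
```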
That's why __you're__ using it, but don't assume your use case is the general one. Remember, linux has a large breadth of user types, but its biggest market is still servers and embedded systems. There are far more of those than PC users.
Idk, when systemd became the main thing I hated it too. But mostly because it was different and I didn't know how to use it. Then I learned, and you know what? I agreed. This took a while, though, and I had to see the problems they were solving. Otherwise it looks really bloated and confusing. Like, why have things like nspawn? Why use systemd timers instead of cron? Why use systemd-homed instead of useradd?
Well a big part of it is security and flexibility.
I write systemd services now instead of cron jobs. With a cron job I can't make private tmps[0]. Cron won't run a job if the computer is off during the job time. Cron can't stagger services. Cron can't wait for other services first. Cron can't be given limited CPU, memory, or other resource limitations.
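A sketch of what that looks like in practice; the unit names, paths, and limits here are illustrative:

```ini
# /etc/systemd/system/backup.service
[Unit]
Description=Nightly backup
After=network-online.target

[Service]
Type=oneshot
ExecStart=/usr/local/bin/backup.sh
PrivateTmp=yes          ; the private /tmp cron can't give you
MemoryMax=512M          ; resource limits cron can't give you
CPUQuota=50%
```

```ini
# /etc/systemd/system/backup.timer
[Unit]
Description=Run the backup nightly

[Timer]
OnCalendar=daily
Persistent=true         ; run on next boot if the machine was off (unlike cron)
RandomizedDelaySec=30m  ; stagger start times across machines

[Install]
WantedBy=timers.target
```

Enable with `systemctl enable --now backup.timer`.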
Nspawn exists to make things highly portable. It's chroot on steroids, and that's used for a reason: being able to containerize things, grant specific capabilities, and all that. This is all part of a systemd unit anyway. It really makes it a lot easier to give a job the minimum privileges, so often nspawn is a better fit than something like docker.
Same goes for homed. You can do things like set timezones unique to users, but there's so much more, like how it can better handle encryption. And you can do really cool things like move your home directory from one machine to another. This might seem like a non-issue to most people, but it isn't: it revives the paradigm where your keyboard is just an interface to a machine (i.e., a terminal, and I don't mean the CLI; there's a reason it's called a terminal "emulator"). This is a really useful thing when you work on servers.
Look, there's a reason the distros switched over. It's not collective madness.
[0] https://www.redhat.com/en/blog/new-red-hat-enterprise-linux-...
https://systemd.io/
I've heard this reasoning multiple times, but that doesn't make it right. Like you said, you ARE presuming something: that scripting and a (useless) base case are the most common usage.
How many people script Git? More importantly, how often do you change that script? Rarely, and rarely. That means it's FAR FAR less work for the scripter to look up the weird incantations to remove the "human" parts (like a quiet flag).
Conversely, the human is far more likely to write git commands impromptu, and I dare say, this is the FAR FAR more common use case.
That means git (and a lot of these kinds of commands) optimizes for the RARE case that happens infrequently over the common case that happens often. That's horrible design.
TL;DR: Just because there's a consistent design philosophy behind obtuse linux commands does not make it a good, helpful, useful, modern, or simple philosophy. If a library author wrote code this way, we'd all realize how horrible that design is.
You're thinking as a user. I'm sorry, but the vast majority of linux systems are still servers and embedded systems.
Here are two versions of the Unix philosophy.
Yes, scripting is a common use case (I use git in scripts! I use git in programs. You probably do too, even if you don't know it). Piping is a common use case. And I'm not sure what we're talking about that is "useless". I've yet to run into a command where the default is useless. Even though I have `alias grep='grep --color=always --no-messages --binary-files=without-match'` I still frequently run `\grep`.
But we're also talking about programs that are DECADES old. git will turn 20 this year. You're going to be breaking a lot of stuff if you change it now.
And look, you don't have to agree, that's fine. But I'm trying to help you get into a mindset so that this doesn't look like a bunch of gobbledegook. You call these weird incantations. I get it. But if you get this model, and get why things are the way they are, then those incantations become a lot less weird and a lot easier to remember, because frankly, you'll have to remember far less. And either way, it's not like you're going to get linux changed. If you don't use it, great, move on. But if you do, then I'm trying to help, because if you can't change it, you've got to figure out its ways.
https://en.wikipedia.org/wiki/Unix_philosophy
Why are you assuming that your use case is the most common?
Fully this. For all its foibles, Linux was built to never presume too much, and its users tend to be power users who will almost certainly have dotfiles to tune their systems to their needs. In the context of making choices that will necessarily be universal, I admire how thoughtfully most standard Linux packages have been designed to never interfere with the users’ intentions.
Making things customizable doesn't mean the defaults should be useless. zsh, tmux, emacs, vim and bash, out of the box, for instance, have both pretty nice defaults and are highly customizable.
I know it's hard to make things like this but let's do it anyway.
And to all the linux noobies[0], you'll be a hell of a lot more efficient if you learn the philosophy of the design early. It will make it so that you can learn a new command and instantly know how to use several options. It will dramatically reduce the number of things you have to learn. I also HIGHLY suggest learning a bit of bash scripting.
Take a look at the manual
and bookmark "Bash Pitfalls"
Live in the terminal as much as you can. It is harder at first, but you will get huge boosts in productivity quicker than you'd expect (easily under 2 weeks). It sucks doing things "the hard way", but sometimes that comes with extra lessons. The thing is, those extra lessons are the real value.
[0] no matter how many years you've been using it there's no shame in being a noobie
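A taste of the kind of thing "Bash Pitfalls" covers: unquoted variable expansion (the filename below is made up for illustration):

```shell
#!/usr/bin/env bash
set -euo pipefail

dir="$(mktemp -d)"
file="$dir/my report.txt"   # a filename containing a space
echo "hello" > "$file"

# Unquoted, $file would word-split into "my" and "report.txt" and fail.
# Quoted, it stays one argument:
cat "$file"   # prints: hello
```

Rules like "quote every expansion unless you have a reason not to" are exactly the reusable lessons the parent is talking about.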
None of those problems are unsolvable. The basic idea is: your terminal and your script can get two different outputs. Git can already do that with colours, so it could also use more verbose output for basic commands.
The other part is that scripts can use longer options, while console commands should be optimised for typing.
This has nothing to do with understanding the model, or with the model itself. Complex things can have good interfaces; those just require good interface design, and git did not get much of that.
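The terminal-versus-script split being described is usually done by testing whether stdout is a TTY, which is how tools like git decide whether to colourize; a minimal sketch:

```shell
#!/usr/bin/env bash
# Emit rich output for humans, terse output for pipes and scripts.
if [ -t 1 ]; then
    echo "interactive: verbose, coloured output here"
else
    echo "piped: terse, machine-friendly output here"
fi
```

Run at a prompt you get the first branch; run inside `$(...)` or a pipeline you get the second.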
You already answered your own question. There are plenty of terminals that don't support colors, and plenty that don't support 24-bit color. Do a grep into a curl and you'll find some fun shenanigans.
You're right. This has to do with you thinking your usecase is the best or most common.
As stated before, there is no average user[0]. So the intent is that you have a base command that can be built upon. Is everyone unhappy with the base command? Probably. But it is always easier to build up than to tear down.
[0] Again, linux servers and linux embedded devices each significantly outnumber desktop users. So even if there were an average user, guess what: it wouldn't be you.