Comment by fergie
10 hours ago
I basically live in the terminal. However, every single one of these tools offers a solution to a problem that I don't have, isn't installed on my system, and mysteriously has many tens of thousands of GitHub stars.
I genuinely don't know what is going on here.
> I basically live in the terminal. However, every single one of these tools offers a solution to a problem that I don't have, isn't installed on my system, and mysteriously has many tens of thousands of GitHub stars.
> I genuinely don't know what is going on here.
I basically live in my music library. However, every single pop artist offers songs that I don't like, isn't in my library, and mysteriously has many millions of albums sold.
I genuinely don't know what is going on here.
Joking aside, have you ever tried to use some of these tools? I used to not understand why people were using vim until I really tried.
> Joking aside, have you ever tried to use some of these tools
No.
> I used to not understand why people were using vim until I really tried.
There's your problem. I respectfully suggest installing Emacs.
Out of curiosity, how would you recursively grep files, ignoring hidden files (e.g., `.git`) and matching only a certain file extension? (E.g., `rg -g '*.foo' bar`.)
I use the command line a lot too and this is one of my most common commands, and I don't know of an elegant way to do it with the builtin Unix tools.
(And I have basically the same question for finding files matching a regex or glob [ignoring the stuff I obviously don't want], e.g., `fd '.foo.*'`.)
Depends on how big the directory is. If it only contains a few files, I'd just enumerate them all with `find`, filter the results with `grep`, and perform the actual `grep` for "bar" using `xargs`:
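The exact command didn't survive here, but a plausible reconstruction of the pipeline described (assuming GNU xargs for the `-r` flag, and glossing over filenames with spaces) would be:

```shell
# Sketch: enumerate candidates with find, drop hidden paths with grep,
# then grep the survivors for "bar".
# xargs -r (GNU) skips running grep when nothing came through the pipe.
find . -name '*.foo' | grep -v '/\.' | xargs -r grep -H 'bar'
```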
(This one I could do from muscle memory.)
If traversing those hidden files/directories were expensive, I'd tell `find` itself to exclude them. This also lets me switch `xargs` for `find`'s own `-exec` functionality:
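That command was also elided; it presumably looked something like this (using `-not -path` to filter, which only hides the results rather than skipping the traversal):

```shell
# Sketch: let find itself drop hidden paths and run grep via -exec.
# Note: -not -path filters what is printed/passed to -exec, but find
# still descends into hidden directories such as .git/.
find . -not -path '*/.*' -name '*.foo' -exec grep -H 'bar' {} +
```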
(I had to look that one up.)
Thanks, yeah, this is a good example of why I prefer the simpler interface of `rg` and `fd`. Those examples would actually be fine if this were something I only did once in a while (or in a script). But I search from the command line many times per day when I'm working, so I prefer a more streamlined interface.
For the record, I think `git grep` is probably the best builtin solution to the problem I gave, but personally I don't know off-hand how to search only files matching a glob, or how to use the current directory rather than the repository root, with `git grep` (both of which are must-haves for me). I'd also need to learn the equivalent commands for source control systems besides git (I use one other VCS regularly).
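(Digging a little: I believe `git grep` does accept pathspec globs after `--`, and the glob matches across directories by default. A quick throwaway-repo check, with made-up filenames:)

```shell
# Demo in a throwaway repo: a git pathspec glob like '*.foo' matches
# recursively, so tracked files in subdirectories are searched too.
tmp=$(mktemp -d) && cd "$tmp" && git init -q
mkdir sub
echo bar > a.foo; echo bar > sub/b.foo; echo bar > c.txt
git add .
git grep -l 'bar' -- '*.foo'   # matches a.foo and sub/b.foo, not c.txt
```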
2 replies →
Just noting that I answered why I don't do this (though honestly I do think it's a very good idea) here: https://news.ycombinator.com/item?id=45569313
The one issue with this approach is that it still traverses all hidden folders, which can be expensive (e.g., in a git repo with an enormous revision history in `.git/`). `-not -path ...` just prevents entries from being printed, not from being traversed. To actually prevent traversal, you need to use `-prune`.
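A sketch of the `-prune` version (the `'*/.*'` pattern is deliberate: a bare `-name '.*'` would also prune the starting point `.` itself):

```shell
# Sketch: -prune stops find from descending into hidden directories
# at all, instead of merely filtering them out of the results.
find . -path '*/.*' -prune -o -name '*.foo' -exec grep -H 'bar' {} +
```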
`grep -ri foo ./*`
Hits in hidden files are not really a pain point for me.
Curious whether that answers the "I genuinely don't know what is going on here", then? Not searching hidden files (or third-party dependencies, which `rg` also skips automatically via its ignore-file parsing) isn't just a nice-to-have; it's mandatory for a number of tasks a software engineer might be performing on a code base.
That doesn't apply to the very specific case for which the parent asked a solution.
This is the POSIX way. You'd probably put it in a function in your `.bashrc`.
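Something like this hypothetical wrapper (the `rgrep` name and interface are made up for illustration):

```shell
# Hypothetical .bashrc helper around the grep -ri one-liner above.
# The unquoted ./* glob is the whole trick: the shell won't expand it
# to top-level dotfiles, so hidden files are skipped.
rgrep() {
    grep -ri "${1:?usage: rgrep PATTERN [DIR]}" "${2:-.}"/*
}
```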
Just noting that I answered why I don't use this approach here https://news.ycombinator.com/item?id=45569313
The core Unix toolset is so good that you can easily get by with it. Many of these tools are better, but still not necessary, and they certainly aren't widely available by default.
The list includes jq. I’d frankly love to never have a problem that jq solves, but, well, here we are.
ripgrep is something I have installed, but use only via text editor integrations. fzf is nice for building ad-hoc TUIs. fd may make sense (I’m told it’s faster than find), but I already know enough find.
The “next gen ls” family of tools in the article is baffling.
You never use fzf? What a tough life in the terminal, then. It's not as useful to run directly, but pretty much any shell has an fzf plugin that lets you hit Ctrl+R to fuzzy-search your bash_history (or fish_history or whatever) and Ctrl+T to fuzzy-search files in the current directory.
fish fuzzy matches on Ctrl+R, and on Tab, without fzf.
What do you do in the terminal all day that does not leave you with the desire to improve your toolset? Do you write all your own tools?
How would you filter and transform a large JSON file, without jq?
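The kind of one-liner I mean, with a made-up input shape (an array of `{name, stars}` objects) purely for illustration:

```shell
# Filter an array down to high-star entries and pull out the names.
echo '[{"name":"rg","stars":52000},{"name":"fd","stars":36000}]' \
  | jq -r '.[] | select(.stars > 40000) | .name'
```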
Many of us just don't use JSON in our day jobs, weird I know, but true.
The only thing I use `jq` for at work is parsing the Copilot API response so I remember what the model names are - that's it! TBH, I could just skip it and read the JSON.
They tend to be popular with the "rewrite it in Rust/Go" crowd, as far as I can tell. Or in other words, you are no longer part of the cool kids.
I've seen an online radio player written in Go that was unusably slow on my Atom N270 due to its badly coded ANSI audio-visualization effects using floating-point math. Meanwhile, with Cava or another visualizer plus mpd+mpc, I could do the same thing using 200x fewer resources.