Comment by regus
6 hours ago
Jq's syntax is so arcane I can never remember it and always need to look up how to get a value from simple JSON.
I think the big problem is it's a tool you usually reach for so rarely you never quite get the opportunity to really learn it well, so it always remains in that valley of despair where you know you should use it, but it's never intuitive or easy to use.
It's not unique in that regard. 'sed' is Turing complete[1][2], but few people get farther than learning how to do a basic regex substitution.
[1] https://catonmat.net/proof-that-sed-is-turing-complete
[2] And arguably a Turing tarpit.
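For anyone who stopped at that point, the basic substitution in question looks like this (a minimal sketch; the input string is made up):

```shell
# Replace every regex match on each input line with fixed text;
# the trailing "g" makes the substitution global within a line.
echo "jq is arcane, truly arcane" | sed 's/arcane/powerful/g'
# prints: jq is powerful, truly powerful
```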
That’s interesting! Can you say a little more? I find jq’s syntax and semantics to be simple and intuitive. It’s mostly dots, pipes, and brackets. It’s a lot like writing shell pipelines imo. And I tend to use it in the same way. Lots of one-time use invocations, so I spend more time writing jq filters than I spend reading them.
I suspect my use cases are less complex than yours. Or maybe jq just fits the way I think for some reason.
I dream of a world in which all CLI tools produce and consume JSON and we use jq to glue them together. Sounds like that would be a nightmare for you.
I'm not GP; I use jq all the time, but each time I use it I feel like I'm still a beginner, because I don't get where I want to go on the first several attempts. Great tool, but IMO it is more intuitive to JSON people who want a CLI tool than to CLI people who want a JSON tool. In other words, I have my own preconceptions about piping operating on the whole thing rather than iterating, and it always trips me up.
Here's an example of my white whale, converting JSON arrays to TSV.
cat input.json | jq -S '(first|keys | map({key: ., value: .}) | from_entries), (.[])' | jq -r '[.[]] | @tsv' > out.tsv
Here's an easier to understand query for what you're trying to do (at least it's easier to understand for me):
That whole map and from_entries throws it off; it's not a good fit for what you're doing. @tsv expects a stream of arrays, whereas you're producing a stream of objects (with the header also being one) and then converting them to arrays. That's an unnecessary step and makes it harder to understand.
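A query along the lines described above might look something like this (a hedged sketch, not necessarily the parent commenter's exact filter; input.json is assumed to be an array of flat objects with identical keys):

```shell
# Assumed input: a JSON array of flat objects sharing the same keys.
echo '[{"b": 2, "a": 1}, {"b": 4, "a": 3}]' > input.json

# Bind the first object's (sorted) key names, emit them as the header
# row, then emit each object's values in that same key order; every
# resulting array goes through @tsv in a single pass.
jq -r '(.[0] | keys) as $k | $k, (.[] | [.[$k[]]]) | @tsv' input.json
```

This stays in arrays the whole way, so no objects need converting back before @tsv.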
I find it much harder to remember and use each time than awk.
Sounds similar to how PowerShell works, and it's not great. Plain text is better.
Shameless plug, but you might like this: https://github.com/IvanIsCoding/celq
jq is the CLI tool I like the most, but sometimes even I struggle to understand the queries I wrote in the past. celq uses a more familiar language (CEL).
Cool tool! Really appreciate the shoutout to gron in the readme, thanks! :)
I had never heard of CEL, looks useful though, thanks for posting this!
CEL looks interesting and useful, though it isn't common or familiar imo (not to me at least). Quoting from https://github.com/google/cel-spec
That’s some fair criticism, but the same page says the language was designed to have a syntax similar to C and JavaScript.
I think my personal preference for syntax would be Python’s. One day I want to try writing a query tool with https://github.com/pydantic/monty
I agree; even trivial tasks require going back to jq's manual to relearn how to write its language.
This, among other reasons, is why I built: https://github.com/dhuan/dop
To fix this I recently made myself a tiny tool I called jtree that recursively walks json, spitting out one line per leaf. Each line is the jq selector and leaf value separated by "=".
No more fiddling around trying to figure out the damn selector by tracking indentation levels across a huge file. It's also easy to pipe into fzf, then split on "=", trim, and pass the selector back to jq.
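For reference, jq itself can approximate that one-line-per-leaf walk (a sketch of the same idea, not the jtree tool itself; the input JSON here is made up):

```shell
# Enumerate every path to a scalar leaf, render each path as a jq
# selector (dots for object keys, brackets for array indices), and
# print "selector = value" for each leaf.
echo '{"a": {"b": [1, "x"]}}' | jq -r '
  paths(scalars) as $p
  | ($p | map(if type == "number" then "[\(.)]" else ".\(.)" end) | join(""))
    + " = " + (getpath($p) | tojson)'
# prints:
# .a.b[0] = 1
# .a.b[1] = "x"
```

gron (mentioned elsewhere in this thread) produces very similar output.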
You might like https://github.com/tomnomnom/gron
If we're plugging jq alternatives, I'll plug my own: https://git.sr.ht/~charles/rq
I was working at lot with Rego (the DSL for Open Policy Agent) and realized it was actually a pretty nice syntax for jq type use cases.
Highly recommend gron. https://github.com/tomnomnom/gron
or https://github.com/adamritter/fastgron
JMESPath is what I wish jq was: a consistent grammar. Its only issue is that it lacks the ability to convert JSON to other formats like CSV.
I just ask Opus to generate the queries for me these days.
LOL ... I can absolutely feel your pain. That's exactly why I created a graphical approach for myself. I shared the first version with friends and it turned into the "ColumnLens" (ImGui on Mac) app. Here is a use case from the healthcare industry: https://columnlens.com/industries/medical
I also genuinely hate using jq. It is one of the only things for which I rely heavily on AI.
You should try nushell or PowerShell, which have built-ins to convert JSON to objects. It makes things so easy.
Second this. Working with nushell is a joy.
At that point why don't we ask the AI directly to filter through our data? The AI query language is much more powerful.
Because the output you get can have hallucinations, which don’t happen with a deterministic tool. Furthermore, by getting the `jq` command you get something which is reusable, fast, offline, local, doesn’t send your data to a third-party, doesn’t waste a bunch of tokens, … Using an LLM to filter the data is worse in every metric.
Because the input might be sensitive.
Because the input might be huge.
Because there is a risk of getting hallucinations in the output.
Isn't this obvious?
You really need to go and learn about the concept of determinism and why for some tasks we need and want deterministic solutions.
It's an important idea in computer science. Go and learn.
yeah I literally just use gemini / claude to one-shot jq queries now