Don't Force Your LLM to Write Terse [Q/Kdb] Code: An Information Theory Argument

4 months ago (medium.com)

Author here. Majromax challenged me to test `i = 1 + i`, which broke my theoretical framework. While setting up that experiment, I realized I hadn't used chat templates in my original measurements (rookie mistake with an Instruct model!).

Re-running with proper methodology completely flips the results - the terse version actually wins. I'll add a correction note to the article once AWS/Medium comes back online and will write a follow-up with the corrected experiments.

This is open science working as intended - community scrutiny improves the work. Thank you all for the engagement, and especially to Majromax for the challenge that led to discovering this!

This approach of solving a problem by building a low-perplexity path towards the solution reminds me of Grothendieck's approach towards solving complex mathematical problems - you gradually build a theory which eventually makes the problem obvious.

https://ncatlab.org/nlab/show/The+Rising+Sea

  • > you gradually build a theory which eventually makes the problem obvious.

    Which incidentally is what programming in Haskell feels like

  • what is striking to me is how far reasoning by analogy and generalization can get you. some of the deepest theorems are about relating disparate things by analogy.

The bigger issue is that LLMs haven’t had much training on Q, as there’s little publicly available code. I recently had to try to hack some together and LLMs couldn’t string simple pieces of code together.

It’s a bizarre language.

  • I don't think that's the biggest problem. I think it's the tokenizer: it probably does a poor job with array languages.

    • Perhaps for array languages LLMs would do a better job running on a q/APL parse tree (produced using tree-sitter?) with the output compressed into the traditional array-language line noise just before display, outside the agentic workflow.

      2 replies →

> I think the aesthetic preference for terseness should give way to the preference for LLM accuracy, which may mean more verbose code

From what I understand, the terseness of array languages (Q builds on K) serves a practical purpose: all the code is visible at once, without the reader having to scroll or jump around. When reviewing an LLM's output, this is a quality I'd appreciate.

  • Perl and line noise also share these properties. Don’t particularly want to read straight binary zip files in a hex editor, though.

    Human language has roughly, say, 36% encoding redundancy on purpose. (Or by Darwinian selection so ruthless we might as well call it "purpose".)

    • > Human language has roughly, say, 36% encoding redundancy on purpose.

      The purpose is being understandable by a person of average intellect and no specialized training. Compare with redundancy in math notation, for example.

      1 reply →

  • I agree with you, though in the q world people tend to take it to the extreme, like packing a whole function into a single line rather than a single screen. Here's a standard ticker plant script from KX themselves: https://github.com/KxSystems/kdb-tick/blob/master/tick.q

    I personally find this density makes it harder to read; when reading it, I put it into my text editor and split the semicolon-separated statements onto different lines. E.g. one challenge I've had was generating a magic square on a single line; for odd sizes only, I wrote:

      ms:{{[(m;r;c);i]((.[m;(r;c);:;i],:),$[m[s:(r-1)mod n;d:(c+1) mod n:#:[m]];((r+1)mod n;c);(s;d)])}/[((x;x)#0;0;x div 2);1+!:[x*x]]0}; / but I don't think that's helping anyone

    • There's a difference between one line and short/terse/elegant.

        {m:(x,x)#til x*x; r:til[x]-x div 2; 2(flip r rotate')/m} 
      

      generates magic squares of odd size, and the method is much clearer. This isn't even golfed, as the named variables have been left in.
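
      For readers who don't speak q, here is a rough numpy rendering of the same method (a sketch, not the original code: the helper name magic_odd is mine, and q's rotate/flip become np.roll and transpose):

        import numpy as np

        def magic_odd(x: int) -> np.ndarray:
            m = np.arange(x * x).reshape(x, x)   # (x,x)#til x*x
            r = np.arange(x) - x // 2            # til[x]-x div 2: per-row rotation amounts
            for _ in range(2):                   # apply (flip r rotate') twice
                m = np.array([np.roll(row, -k) for row, k in zip(m, r)]).T
            return m

        magic_odd(5)   # rows and columns should all sum to the same value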

      2 replies →

    • I've been dabbling in programming language design as of late. When trying to decide whether including feature 'X' makes sense or not, with readability being the main focus, I realized some old wisdom:

      1 line should do 1 thing - that's something C has established, and I realized that putting conceptually different things on the same line destroys readability very quickly.

      For example, if you write some code to check whether a character is in a rectangular area and then turn on a light when it is, you can put the bounds-check expressions on the same line and most people will still be able to read the code quickly - but if you also put the resulting action on that line, readability suffers massively. Just try it with some code.

      That's why ternary expressions like a = condition ? expr1 : expr2 are kinda controversial - they're not always bad, as they can encode logic about a single thing ("if the character is friendly, the light should be green, otherwise red" is a good example), but doing error handling in them is not.

      I haven't been able to find any research that backs this up (didn't try very hard tho), but I strongly believe this to be true.

      A nice thing is that some other principles, like CQRS, can be derived from this. For example, CQRS dictates that a function like checkCharacterInAreaThenSetLightState() is bad and should be split up into checkCharacterInArea() and setLightState().
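
      A minimal Python sketch of that split (the function names mirror the ones above in snake_case; the dataclasses are just illustrative scaffolding):

        from dataclasses import dataclass

        @dataclass
        class Character:
            x: float
            y: float

        @dataclass
        class Area:
            x0: float
            y0: float
            x1: float
            y1: float

        @dataclass
        class Light:
            on: bool = False

        def check_character_in_area(c: Character, a: Area) -> bool:
            # query: answers one question, no side effects
            return a.x0 <= c.x <= a.x1 and a.y0 <= c.y <= a.y1

        def set_light_state(light: Light, on: bool) -> None:
            # command: performs one side effect, returns nothing
            light.on = on

        # one line, one thing: the query and the command stay separate at the call site
        light = Light()
        set_light_state(light, check_character_in_area(Character(2, 3), Area(0, 0, 5, 5)))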

      2 replies →

    • Hey, another language with smileys! Like Haskell, which has (x :) (partial application of a binary operator)

First pass on my local deepseekv3.1-Terminus at Q4 answered it correctly. If anything, I think LLMs should write terse code - Q/J/APL/Forth/Prolog/Lisp - tokens are precious. It's insane to waste precious tokens generating Java, JavaScript, and other overly verbose code...

https://pastebin.com/VVT74Rp9

  • It did go back on itself 3 times, no? "Actually, let’s trace for x=3:" (it had just computed for x=3 the first time); then "Better to check actual q output:" - did it actually run it in a q session, or did it just pretend? And another one: "That doesn’t seem to align. Let’s do it step by step:"

> Let’s start with an example: (2#x)#1,x#0 is code from the official q phrasebook for constructing an x-by-x identity matrix.

Is this... just to be clever? Why not

    (!x)=/:!x

i.e. the identity matrix is defined by having ones on the diagonal? Bonus points: the AI will understand the code better.
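
For readers who don't speak q, here are the same two constructions in numpy (a sketch to show the structural difference; q's (2#x)#1,x#0 cycles the list 1,0,...,0 into an x-by-x grid, while the comparison version asks "does the row index equal the column index?"):

    import numpy as np

    x = 4
    eye_reshape = np.resize(np.r_[1, np.zeros(x, dtype=int)], (x, x))   # cycle 1 followed by x zeros; the 1s land on the diagonal
    eye_compare = (np.arange(x)[:, None] == np.arange(x)).astype(int)   # 1 exactly where row == column
    assert (eye_reshape == eye_compare).all()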

  • While both versions are O(N^2), your version is slower because of the comparison operation, which shows up in the execution times:

      q)x:1000
      q)\t:1000 sum (til x)=/:(til x)
      889
      q)\t:1000 sum (til x)=/:(til x)
      871
      q)\t:1000 sum (2#x)#1,x#0
      602
      q)\t:1000 sum (2#x)#1,x#0
      599
    

    upd: in ngn/k, the situation is the opposite ;-o

  • Unless the interpreter is capable of pattern-recognizing that whole pattern, that will be less efficient, e.g. having to work with 16-bit integers for x in the range 128..32767, whereas the direct version can construct the array directly (i.e. one byte or bit per element depending on whether kdb has bit booleans). Can't provide timings for kdb for licensing reasons, but here's Dyalog APL and CBQN doing the same thing, showing the fast version at 3.7x and 10.5x faster respectively: https://dzaima.github.io/paste/#0U1bmUlaOVncM8FGP5VIAg0e9cxV...

  • The vibe I get from q/kdb in general is that its concision has passed the point of increasing clarity through brevity and is now about some kind of weird hazing or machismo thing. I've never seen even numpy's verbosity be an actual impediment to understanding an algorithm, so we're left speculating about social and psychological explanations for why someone would write (2#x)#1,x#0 and think it beautiful.

    Some brief notations make sense. Consider, say, einsum: "ij->ji" elegantly expresses a whole operation in a way that exposes the underlying symmetry of the domain. I don't think q's line noise style (or APL for that matter) is similarly exposing any deeper structure.
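
    For comparison, the numpy version of that einsum (it is just a transpose, but the index string itself names the permutation):

      import numpy as np

      a = np.arange(6).reshape(2, 3)
      assert (np.einsum("ij->ji", a) == a.T).all()   # "ij->ji" swaps the two axes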

> When I(short proof)=I(long proof), per-token average surprisal must be lower for the long proof than for the short proof. But since surprisal for a single token is simply -log P, that would mean that, on average, the shorter proof is made out of less probable tokens.

This assertion is intuitive, but it isn't true: the per-token surprisal of the long proof can be larger if the long proof is not minimal, i.e. if the extra tokens carry information of their own.

For example, consider the "proof" of "list the natural numbers from 1 to 3, newline-delimited." The 'short proof' is:

"1\n2\n3\n" (Newlines escaped because of HN formatting)

Now, consider the alternative instruction to give a "long proof", "list the natural numbers from 1 to 3, newline-delimited using # for comments. Think carefully, and be verbose." Trying this just now with Gemini 2.5-pro (Google AI Studio) gives me:

"# This is the first and smallest natural number in the requested sequence.\n 1

# Following the first, this is the second natural number, representing the concept of a pair.\n 2

# This is the third natural number, concluding the specified range from 1 to 3.\n 3"

We don't have access to Gemini's per-token logits, but repeating the prompt gives different comments so we can conclude that there is 'information' in the irrelevant commentary.
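
A toy calculation (with made-up per-token probabilities, purely to illustrate the arithmetic) shows how unpredictable padding can raise, not lower, the per-token average:

    import math

    short = [0.5, 0.5]                 # two near-forced tokens: 1 bit each
    long_ = [0.5, 0.5] + [0.2] * 8     # same content plus 8 tokens of free-form commentary

    def avg_surprisal(ps):
        return sum(-math.log2(p) for p in ps) / len(ps)   # mean of -log2 P per token

    avg_surprisal(short)   # 1.0 bit/token
    avg_surprisal(long_)   # ~2.06 bits/token: the commentary is itself hard to predict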

The author's point regains its truth, however, if we consider the space of all possible long proofs. The trivial 'long' proof has higher perplexity than the short proof, but that's because there are so many more possible long proofs of approximately equal value. The shortest possible proof is a sharp minimum, while longer proofs are shallower and 'easier'.

The author also misses a trick with:

> Prompted with “Respond only with code: How do you increment i by 1 in Python?”, I compared the two valid outputs: i += 1 has a perplexity of approximately 38.68, while i = i + 1 has a perplexity of approximately 10.88.

… in that they ignore the equally-valid 'i = 1 + i'.

  • Thanks so much for this challenge! I just ran the experiment with i = 1 + i and you're absolutely right - it breaks my theoretical framework (same semantic information, but much higher perplexity).

    While setting this up, I realized I hadn't used chat templates in my original measurements (rookie mistake with an Instruct model!). Re-running with proper methodology completely flips the results - the terse version actually wins.

    I'll add a correction note to the article once AWS/Medium comes back online, and will write a proper follow-up with all the corrected experiments. Your comment literally made the research better - thank you!
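
    For context, the corrected measurement looks roughly like this - render the prompt through the chat template, then score only the completion tokens. This is a simplified sketch; the model name is just a placeholder, and real runs need care with tokenization at the prompt/completion boundary.

      import torch
      from transformers import AutoModelForCausalLM, AutoTokenizer

      model_id = "Qwen/Qwen2.5-1.5B-Instruct"   # placeholder: any chat-tuned model
      tok = AutoTokenizer.from_pretrained(model_id)
      model = AutoModelForCausalLM.from_pretrained(model_id)

      def completion_perplexity(prompt: str, completion: str) -> float:
          # render the prompt through the model's chat template, then append the candidate answer
          chat = tok.apply_chat_template(
              [{"role": "user", "content": prompt}],
              add_generation_prompt=True, tokenize=False,
          )
          prompt_ids = tok(chat, return_tensors="pt", add_special_tokens=False).input_ids
          # note: concatenating strings can merge tokens at the boundary; fine for a sketch
          full_ids = tok(chat + completion, return_tensors="pt", add_special_tokens=False).input_ids
          with torch.no_grad():
              logits = model(full_ids).logits
          # score only the completion tokens, each predicted from its preceding context
          logprobs = torch.log_softmax(logits[0, :-1], dim=-1)
          targets = full_ids[0, 1:]
          token_lp = logprobs[torch.arange(targets.shape[0]), targets]
          completion_lp = token_lp[prompt_ids.shape[1] - 1:]
          return torch.exp(-completion_lp.mean()).item()

      completion_perplexity("Respond only with code: How do you increment i by 1 in Python?", "i += 1")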

I was wondering recently: is fine-tuning an effective way to make this the default? If so, does fine-tuning this behavior on one language have a carry-over effect on other languages (maybe even non-programming languages?), or is the effect localized to the language of the fine-tuning dataset?

> Medium will be back.

> Due to a global hosting outage, Medium is currently unavailable. We’re working to get you reading and writing again soon.

> — The Medium Team

Dang.

LLMs were created to use the same interface as humans (language/code).

Asking humans to change for the sake of LLMs is an utterly indefensible position. If humans want terse code, your LLM better cope or go home.

  • Disagree. If some small adjustments to your workflow or expectations enable you to use LLMs to produce good, working, high-quality code much faster than you could otherwise, at some point you should absolutely welcome this, not stubbornly refuse change.

    • I think there's a mighty big assumption in there.

      I see no reason to believe LLMs can write working let alone good or high-quality code, nor that the adjustments to my workflow or expectations will be small. But sure, if such a thing happened, I would probably welcome it.

      Meanwhile, there are people who write good and high-quality working code faster than me, and they all write as much as possible on one line with the most bare-bones of text editors, so I will continue to learn from them, rather than the people who say LLMs are helping them. Maybe you should reconsider.

    • Somehow I don't think writing verbose English to communicate with an LLM is ever going to beat a language purpose-built for its particular niche. Being terse is the point and what makes it so useful. If people wanted to use python with their LLM instead, they have that option.

  • Do you swing a nailgun?

    Use the tool according to how it works, not according to how you think it should work.

    • Chances are hell is going to freeze over before people start writing verbose q code. Q being less verbose than alternatives is the whole point. Nobody is feeling any pressure to bend over backwards to accommodate the guy who struggles to get by when his LLM can't explain a piece of code to him.

      To use your nailgun analogy as an example: Waddling in with your LLM and demanding the q community change is like walking into a clockmaker's workshop with your nailgun and demand they accommodate your chosen tool.

      "But I can't fit my nailgun into these tiny spaces you're making, you should build larger clocks with plenty of space for my nailgun to get a good angle!"

      No, we're not going to build larger clocks, but you're free to come back with a tiny automatic screwdriver instead. Alternatively you and your nailgun might feel more at home with the construction company across the street.

      2 replies →

I think that there are a few critical issues that are not being considered:

* LLMs don't understand the syntax of q (or any other programming language).

* LLMs don't understand the semantics of q (or any other programming language).

* Limited training data, as compared to languages like Python or JavaScript.

All of the above contribute to the failure modes when applying LLMs to the generation or "understanding" of source code in any programming language.

  • > Limited training data, as compared to languages like Python or JavaScript.

    I use my own APL to build neural networks. This is probably the correct answer, and in line with my experience as well.

    I changed the semantics and definitions of a bunch of functions, and none of the coding LLMs out there can even approach writing semi-decent APL.

I was kind of taken aback by the author's definition of 'terse'. I was expecting a discussion about architecture not about syntax aesthetics.

Personally I don't like short variable names, short function names or overly fancy syntactical shortcuts... But I'm obsessed with minimizing the amount of logic.

I want my codebases to be as minimalist as possible. When I'm coding, I'm discovering the correct lines, not inventing them.

This is why I like using Claude Code on my personal projects. When Claude sees my code, it unlocks a part of its exclusive, ultra-elite, zero-bs code training set. Few can tap into this elite set. Your codebase is the key which can unlock ASI-like performance from your LLMs.

My friend was telling me about all the prompt engineering tricks he knows... And in a typical midwit meme moment, I told him: dude, relax, my codebase basically writes itself now. The coding experience is almost bug-free. It just works the first time.

I told my friend I'd consider letting him code on my codebase if he uses an LLM... And he took me up on the offer... I merged his first major PR directly without comment. It seems even his mediocre Co-pilot was capable of getting his PR to the standard.

  • I'd bet a lot of people are trying to optimize their codebases for LLMs. I'd be interested to see some examples of your ASI-unlocking codebase in action!

    • If you're interested, I added my email to my profile 'about' section. I could try to screen record my next feature development on one of my side projects.

      I'm also kind of interested to see how others use LLMs for coding. I can speak for myself having worked on both good and bad code-bases; my experience is that it works MUCH better on those 'good' codebases (by my definition).