Comment by drpixie

7 days ago

> Do you find the resulting natural language description is easier to reason about?

An example from a different field - aviation weather forecasts and notices are published in a strongly abbreviated and codified form. For example, the weather at Sydney, Australia right now is:

  METAR YSSY 031000Z 08005KT CAVOK 22/13 Q1012 RMK RF00.0/000.0

It's almost universal that new pilots ask "why isn't this in words?". And, indeed, most flight planning apps will convert the code to prose.

But professional pilots (and ATC, etc.) universally prefer the coded format. It is compact (one line instead of a whole paragraph), the format is well defined (I know exactly where to look for the one piece I need), and it's unambiguous.
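
To make the "I know exactly where to look" point concrete, here is a rough sketch of a decoder for that one example line (the field labels and the Python are mine, not any official scheme, and real METARs carry many optional groups that this ignores):

  # Minimal sketch: split the example METAR into its groups and render prose.
  # Real METARs have optional groups, so a real decoder is far more involved.
  raw = "METAR YSSY 031000Z 08005KT CAVOK 22/13 Q1012 RMK RF00.0/000.0"
  _, station, issued, wind, vis, temps, qnh, *remarks = raw.split()
  # remarks holds the RMK group (Australian rainfall figures in this example)

  print(f"Station {station}, day {issued[:2]} of the month, {issued[2:6]} UTC")
  print(f"Wind from {wind[:3]} degrees at {wind[3:5]} knots")
  print("Ceiling and visibility OK" if vis == "CAVOK" else f"Visibility/cloud group: {vis}")
  temp, dewpoint = temps.split("/")
  print(f"Temperature {temp} C, dewpoint {dewpoint} C, QNH {qnh[1:]} hPa")

Every group sits in a known slot, which is exactly why a trained reader can pull the QNH or the wind out of the line at a glance.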

Same for maths and coding - once you reach a certain level of expertise, the complexity and redundancy of natural language is a greater cost than benefit. This seems to apply to all fields of expertise.

Reading up on the history of mathematics really makes that clear, as shown in

https://www.goodreads.com/book/show/1098132.Thomas_Harriot_s...

(ob. discl., I did the typesetting for that)

It shows at least one lengthy and quite wordy example of how an equation would have been stated, then contrasts it with the "new" symbolic representation (this was one of the first major works to make use of Robert Recorde's development of the equals sign).

  • Although if you look at most maths textbooks or papers there's a fair bit of English waffle per equation. I guess both have their place.

    • People definitely could stand to write a lot more comments in their code. And like... yea, textbook style prose, not just re-stating the code in slightly less logical wording.

    • Textbooks aren't just communicating theorems and proofs (which are often written in formal symbolic language); they also teach the language required to understand these concepts, explain why they're important and how they can be used, and sometimes even tell the story behind the discovery of the field.

      So this is far from an accurate comparison.

    • Yes, plain-language text that supports and translates symbology into concepts facilitates initial comprehension. It's like two ends of a connection negotiating protocols: once agreed upon, communication proceeds using only symbols.

An interesting perspective on this is that language is just another tool on the job. Like any other tool, you use the kind of language that is most applicable and efficient. When you need to describe or understand weather conditions quickly and unambiguously, you use METAR. Sure, you could use English or another natural language, but it's like using a multitool instead of a chef knife. It'll work in a pinch, but a tool designed to solve your specific problem will work much better.

Not to slight multitools or natural languages, of course - there is tremendous value in a tool that can basically do everything. Natural languages have the difficult job of describing the entire world (or, the experience of existing in the world as a human), which is pretty awesome.

And different natural languages give you different perspectives on the world, e.g., Japanese describes the world from the perspective of a Japanese person, with dedicated words for Japanese traditions that don't exist in other cultures. You could roughly translate "kabuki" into English as "Japanese play", but you lose a lot of what makes kabuki "kabuki", as opposed to "noh". You can use lots of English words to describe exactly what kabuki is, but if you're going to be talking about it a lot, operating solely in English is going to become burdensome, and it's better to borrow the Japanese word "kabuki".

All languages are domain specific languages!

  • I would caution to point out that the strong Sapir-Whorf hypothesis has been debunked; language may influence your understanding, but it's not deterministic, and a missing word just means more words are needed to explain the concept in any given language.

  • > You can use lots of English words to describe exactly what kabuki is, but if you're going to be talking about it a lot, operating solely in English is going to become burdensome, and it's better to borrow the Japanese word "kabuki".

    This is incorrect. Using the word "kabuki" has no advantage over using some other three-syllable word. In both cases you'll be operating solely in English. You could use the (existing!) word "trampoline" and that would be just as efficient. The odds of someone confusing the concepts are low.

    Borrowing the Japanese word into English might be easier to learn, if the people talking are already familiar with Japanese, but in the general case it doesn't even have that advantage.

    Consider that our name for the Yangtze River is unrelated to the Chinese name of that river. Does that impair our understanding, or use, of the concept?

    • The point is that Japanese has some word for kabuki, while English would have to borrow the word, or coin a new one, or indeed repurpose a word. Without a word, an English speaker would have to resort to a short essay every time the concept was needed, though in practice they would of course coin a word quickly.

      Hence jargon and formal logic, or something. And surfer slang and txtspk.

> Same for maths and coding - once you reach a certain level of expertise, the complexity and redundancy of natural language is a greater cost than benefit. This seems to apply to all fields of expertise.

And as well as these points: ambiguity. A formal specification of communication can avoid ambiguity by being absolute and precise regardless of who is speaking and who is interpreting. Natural languages are riddled with inconsistencies, colloquialisms, and imprecisions that can lead to misinterpretations by even the most fluent of speakers, simply because natural languages are human languages - different people learn them differently and ascribe different meanings or interpretations to different wordings, inconsistently, owing to the cultural backgrounds of those involved and the lack of a strict formal specification.

  • Extending this further, "natural language" changes within populations over time, with words or phrases carrying different meanings depending on context. The words "cancel" or "woke" were fairly banal a decade ago, whereas they can be deeply charged now.

    All this to say "natural language"'s best function is interpersonal interaction not defining systems. I imagine most systems thinkers will understand this. Any codified system is essentially its own language.

  • Sure, but much ambiguity is trivially handled with a minimum amount of context. "Tomorrow I'm flying from Austin to Atlanta and I need to return the rental". (Is the rental (presumably a car) to be returned in Austin or Atlanta? Almost always Austin, absent some unusual arrangement. And presumably to the Austin airport rental depot, unless context says it was another location. And presumably before the flight, with enough time to transfer and check in.)

    (You meant inherent ambiguity in actual words, though.)

you guys are not wrong. explain any semi-complex program, and you will instantly resort to diagrams, tables, flow charts etc. etc.

of course, you can get your LLM to be a bit evil in its replies, to help you truly, rather than to spoon-feed you an unhealthy diet.

i forbid my LLM to send me code and tell it to be harsh to me if i ask stupid things. stupid as in, lazy questions. send me the link to the manual/specs with an RTFM, or something i can digest and better my understanding. send links, not mazes of words.

now i can feel myself grow again as a programmer.

as you said. you need to build expertise, not try to find ways around it.

with that expertise you can find _better_ ways. but for this, firstly, you need the expertise.

  • If you don't mind sharing - what's the specific prompt you use to get this to happen, and which LLM do you use it with?

    • I can share a similar approach I'm finding beneficial. I add "Be direct and brutally honest in your feedback. Identify assumptions and cognitive biases to correct for." (I also add a compendium of cognitive biases and examples to the knowledge I give the LLM.)

    • The rudest and most aggressive LLM I've used is Deepseek. Most LLMs have trained-in positivity bias but I can prompt Deepseek to tell me my code is shit very easily.

You can see the same phenomenon playing a roguelike game.

They traditionally have ASCII graphics, and you can easily determine what an enemy is by looking at its ASCII representation.

For many decades now graphical tilesets have been available for people who hate the idea of ASCII graphics. But they have to fit in the same space, and it turns out that it's very difficult to tell what those tiny graphics represent. It isn't difficult at all to identify an ASCII character rendered in one of 16 (?) colors.

Exactly. Within a given field, there is always a shorthand for things, understood only by those in the field. Nobody describes things in natural language because why would you?

And to this point - the English language has far more ambiguity than most programming languages.

  • I'm told by my friends who've studied it that Attic Greek - you know, what Plato spoke - is superb for philosophical reasoning, because all of its cases and declensions allow for a high degree of specificity.

    I know Sapir-Whorf is, shall we say, over-determined - but that had to have helped that kind of reasoning to develop as and when and how it did.

I wonder why the legal profession sticks to natural language

  • They don't, though. Plenty of words in law mean something precise but utterly detached from the vernacular meaning. Law language is effectively a separate, more precise language, that happens to share some parts with the parent language.

  • There was that "smart contract" idea back when immutable distributed ledgers were in fashion. I still struggle to see the approach being workable for anything more complicated (and muddied) than Hello World level contracts.

  • Because law isn't a fixed entity; it is a suggestion for the navigation of an infinite wiring

  • Backwards compatibility works differently there, and legalese has not exactly evolved naturally.

> prefer the coded format. It is compact...

On the other hand "a folder that syncs files between devices and a server" is probably a lot more compact than the code behind Dropbox. I guess you can have both in parallel - prompts and code.

  • Let’s say that all of the ambiguities are automatically resolved in a reasonable way.

    This is still not enough to let two different computers running two different LLMs produce compatible code, right? And there's no guarantee of compatibility as you refine it more, etc. And if you get into the business of specifying the format/protocol, suddenly you have made it much less concise.

    So as long as you run the prompt exactly once, it will work, but not necessarily the second time in a compatible way.

    • Does it need to result in compatible code if run by two different LLMs? No one complains that Dropbox and Google Drive are incompatible. It would be nice if they were, but it hasn't stopped either of them from having lots of use.

  • More compact, but also more ambiguous. I suspect an exact specification of what Dropbox does in natural language will not be substantially more compact than the code.

  • You just cut out half the sentence and responded to one part. Your description is neither well defined nor unambiguous.

    You can't just pick a single word out of an argument and argue about that. The argument has substance, and the substance is not "shorter is better".

  • What do you mean by "sync"? What happens with conflicts, does the most recent version always win? What is "recent" when clock skew, dst changes, or just flat out incorrect clocks exist? Do you want to track changes to be able to go back to previous versions? At what level of granularity?
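
    To make that concrete, here's a deliberately naive sketch of one possible reading - "last writer wins by modification time" - where every line is a decision the one-line description never made (the timestamp source, the tie-break, and the treatment of deletions are all my own assumptions):

      # Naive "sync": the newer mtime wins. Each choice below is an assumption
      # that "a folder that syncs files between devices and a server" never makes.
      def sync(local: dict, remote: dict) -> dict:
          """local/remote map path -> (mtime, contents); return the merged view."""
          merged = {}
          for path in local.keys() | remote.keys():
              l, r = local.get(path), remote.get(path)
              if l is None or (r is not None and r[0] > l[0]):
                  merged[path] = r  # remote copy is newer, or exists only remotely
              else:
                  merged[path] = l  # ties go to local - the prompt doesn't say why
              # deletions, clock skew, partial writes, renames: all left undefined
          return merged

    Even this toy version silently trusts mtimes, which answers the clock-skew question by ignoring it - exactly the kind of decision the prose never pinned down.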

  • I'll bet my entire net worth that you can't get an LLM to exactly recreate Dropbox from this description alone.

The point of LLMs is to enable "ordinary people" to write software. This movement is in line with "zero code" platforms, for example: creating algorithms by drawing block schemes, by dragging rectangles and arrows. This is an old discussion and there are many successful applications of this nature. LLMs are just another attempt to tackle this beast.

Professional developers indeed don't need this ability. Most professional developers who have had to deal with zero-code platforms would probably prefer to just work with ordinary code.

  • I feel that's merely side-stepping the issue: if natural language is not succinct and unambiguous enough to fully specify a software program, how will any "ordinary person" trying to write software with it be able to avoid these limitations?

    In the end, people will find out that in order to have their program execute successfully they will need to be succinct in their wording and construct a clear logic flow in their mind. And once they've mastered that part, they're halfway to becoming a programmer themselves already and will either choose to hire someone for that task or they will teach themselves a non-natural programming language (as happened before with vbscript and php).

  • I think this is the principal-agent problem at work. Managers/executives who don't understand what programmers do believe that programmers can be easily replaced. Why wouldn't LLM vendors offer to sell it to them?

    I pity the programmers of the future who will be tasked with maintaining the gargantuan mess these things end up creating.

    • No pity for the computer security industry though. It's going to get a lot of money.

    • "I pity the programmers of the future who will be tasked with maintaining the gargantuan mess these things end up creating."

      With even a little bit of confidence, they could do quite well otherwise.