
Comment by misja111

8 days ago

I somewhat disagree with this. In real life, say at some company, the idea for a new feature is born in the head of some business person. This person will not speak any formal language. So however you look at it, some translation from natural language to machine language will have to be done to implement the feature.

Typically the first step, translation from natural to formal language, will be done by business analysts and programmers. But why not try to let computers help along the way?

Computers can and should help along the way, but Dijkstra's argument is that a) much of the challenge of human ideas is discovered in the act of converting from natural to formal language and b) that this act, in and of itself, is what trains our formal logical selves.

So he's contesting not only the idea that programs should be specified in natural language, but also the idea that removing our need to understand the formal language would increase our ability to build complex systems.

It's worth noting that much of the "translation" is not translation, but fixing the logical ambiguities, inconsistencies and improper assumptions. Much of it can happen in natural language, if we take Dijkstra seriously, precisely because the programmers at the table have spent their lives formalizing.

There are other professions which require significant formal thinking, such as math. Notably, the conversion of old proofs into computer proofs has led us to discover holes and gaps in many well-accepted proofs. Not that much has been overturned, but we still don't have a complete formalized proof of Fermat's Last Theorem [1].

[1] https://xenaproject.wordpress.com/2024/12/11/fermats-last-th...

  • But even real translation is bad.

    There have been some efforts to make computer languages with local (non-English) keywords. Fortunately, most have already failed horribly.

    But it still exists, e.g. in spreadsheet formulas.

    In some cases even number formatting (decimal separators) is affected.
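As an illustration of both points, Excel localizes not just the function names but the separators too. To the best of my recollection (not checked against a specific version), the same sum formula is spelled differently in an English versus a German locale:

```
=SUM(A1, 1.5)      English locale: English name, "," argument separator, "." decimal point
=SUMME(A1; 1,5)    German locale: translated name, ";" argument separator, "," decimal comma
```

So a formula pasted from one locale's documentation can fail to parse, or silently mean something else, in another.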

I don't think you're fully comprehending Dijkstra's argument. He's not saying not to use tools to help with translation; he is saying that not thinking in terms of formal symbols HURTS THINKING. Your ideas are worse if you don't think in formal systems. If you don't treat your thoughts as formal things.

In your example, he has no opinion on how to translate the idea of a "business person" because in his view the ideas of the "business person" are already shallow and bad because they don't follow a formalism. They are not worth translating.

  • If that's correct, then it's very falsifiable. If a businessperson says "there's a gap in the market - let's build X" they will be following a formalism at their level of detail. They see the market, the interactions between existing products and customers, and where things might be going.

    Just because they can't spell it out to the nth degree doesn't matter. Their formalism is "this is what the market would like".

    Having an LLM then tease out details - "what should happen in this case" would actually be pretty useful.

    • You're either not really thinking through what you're saying, or you're being disingenuous because you want to promote AI.

      A formalism isn't "person says Y". It's about adhering to a structure, to a form of reasoning. Mathematical formalism is about adhering to the structure of mathematics, and making whatever argument you desire to make in the formal structure of formulas and equations.

      Saying "A palindrome is a word that reads the same backwards as it does forwards" is not a formal definition. Saying "Let r(x) be the function that, when given a string x, returns the reversed string; x is then a palindrome iff x = r(x)" is (sans the formal definition of the function r).
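To make the point concrete: the formal version of the definition transcribes almost mechanically into code, which the informal one does not. A minimal sketch in Python (names chosen to mirror the definition above):

```python
def r(x: str) -> str:
    """The reversal function r from the definition: r("abc") == "cba"."""
    return x[::-1]

def is_palindrome(x: str) -> bool:
    """x is a palindrome iff x == r(x), per the formal definition."""
    return x == r(x)

print(is_palindrome("racecar"))  # True
print(is_palindrome("hello"))    # False
```

Note that the code is a direct transliteration of the formal statement; there is nothing left to interpret, which is exactly what distinguishes it from the natural-language version.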

      Formalism is about reducing the set of axioms (the base assumptions of your formal system) to the minimal set required to build all other (provable) arguments. It's not vague hand-waving about what some market wants; it's naturally extrapolating from a small set of axioms, and being rigorous about ever adding new ones.

      If your hypothetical "business person" ever says "it was decided", then they are not speaking a formal language, because formalism does not have deciders.

The first step isn't from natural language to formal language. It's from the idea in your head into natural language. Getting that step right in a way that a computer could hope to turn into a useful thing is hard.

  • >It's from the idea in your head into natural language. Getting that step right in a way that a computer could hope to turn into a useful thing is hard.

    The "inside the head" conversion step would be more relevant in the reply to the gp if the hypothetical AI computer would be hooked up directly to brain implants like neuralink, functional MRI scans, etc to translate brain activity to natural language or programming language code.

    But today, human developers who are paid to code for business people are not translating brain implant output signals. (E.g. Javascript programmers are not translating raw electrical waveforms[1] into React code.)

    Instead, they translate from "natural language" specifications of businesspeople to computer code. This layer of translation is more tractable for future AI computers even though natural language is more fuzzy and ambiguous. The language ambiguity in business requirements is unavoidable but it still hasn't stopped developers from somehow converting it into concrete non-ambiguous code.

    [1] https://www.technologyreview.com/2020/02/06/844908/a-new-imp...

  • Without descending fully into epistemology, I tend to think that there is no proper "idea" in your head before it's phrased in language - the act of initially describing something in natural language *is* the act of generating it.

      Research on LLMs suggests that's probably not the case. See the work on reasoning in latent space, and on shared concepts between languages being represented independently of any individual language.

      Of course one might argue that even if LLMs are capable of ideation and conceptualisation without natural language, doesn't mean humans are.

      But the fact that up to 50% of people have no inner monologue seems to refute that.


Because then you don't know what the computer's doing. The whole point of this article was that there is value in the process of writing your ideas out formally. If you "let computers help you along the way", you'll run straight into the issue of needing an increasingly formal natural language to get sufficiently good results from the machine.

Say, doesn't each business - each activity - have its own formal language?

Not as formalized as programming languages, but it's there.

Try to define any process and you end up with something trending towards the formal, even if you don't realize it.

    That's pretty much the whole basis of Domain Driven Design. The core message is to get to a Ubiquitous Language, which is the formalization of the business jargon (pretty much a glossary). From which the code can then be derived naturally.

    • > From which the code can then be derived naturally.

      Disagree with "naturally". Unless you want to end up on Accidentally Quadratic. Or on an accidentally-exponential list, if there is such a thing.
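A minimal sketch of the failure mode (hypothetical names, not from any real codebase): a requirement like "customers we have not yet contacted", transcribed straight off the glossary, can hide an accidentally quadratic membership test.

```python
def fresh_customers_slow(customers, contacted):
    # Reads exactly like the glossary entry, but "c not in contacted"
    # scans the list each time: O(n * m), accidentally quadratic.
    return [c for c in customers if c not in contacted]

def fresh_customers(customers, contacted):
    # Same derivation, one data-structure choice later: O(n + m).
    seen = set(contacted)
    return [c for c in customers if c not in seen]
```

Both return the same result; the glossary says nothing about which one you get, which is why "derived naturally" undersells the engineering step.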


I like your take.

The only issue I have with trusting a computer to do so much is that it doesn't necessarily have the long term vision or intuition some humans might have for the direction of the software product. There's so much nuance to the connection between a business need and getting it into software, or maybe I am overthinking it :D