
Comment by fizza_pizza

5 hours ago

The certification angle is the most interesting part to me. Regulated industries (aviation, medical devices) often can't use JIT for exactly this reason: the code that runs has to be the code that was certified. Static translation that produces a signable binary is a real unlock there, code bloat notwithstanding.

I wonder: how relevant is this portion of the software industry? Because I'm guessing there is also no way they can apply LLMs at scale, which is never discussed in the larger AI-at-work narrative.

  • I work in an industry that requires reproducible binaries from source, and cryptographic hashes filed with a regulator.

    It's also not aviation or medical. So perhaps it's more common than you imagine.
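
    A minimal sketch of what that verification step can look like, assuming Python and hypothetical file names (the comment doesn't specify any tooling):

    ```python
    # Sketch only: rebuild the artifact reproducibly, then confirm its SHA-256
    # matches the hash previously filed with the regulator.
    # "build/firmware.bin" and "filed_hash.txt" are hypothetical placeholders.
    import hashlib
    from pathlib import Path

    def sha256_of(path: Path) -> str:
        """Return the hex SHA-256 digest of a file, read in 1 MiB chunks."""
        digest = hashlib.sha256()
        with path.open("rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                digest.update(chunk)
        return digest.hexdigest()

    rebuilt = sha256_of(Path("build/firmware.bin"))     # output of the reproducible build
    filed = Path("filed_hash.txt").read_text().strip()  # hash on file with the regulator

    if rebuilt != filed:
        raise SystemExit("Rebuilt binary does not match the filed hash: build is not reproducible.")
    print("Rebuilt binary matches the filed hash.")
    ```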

    • I think my comment conveyed the wrong sentiment, my bad. I’m suggesting exactly this: there are extremely common cases in which deterministic software outcomes are needed/mandatory/regulated. Way more often than we think, often in boring and solved but critical environments. Yet the entire AI industry acts as if that is an afterthought or an unimportant edge case.

  • It is completely relevant if you want the reliable software you use daily to continue running without a massive rewrite.

    Before suggesting that LLMs be used to completely rewrite this sort of software, consider that there is a reason compilers need to be certified to operate in safety-critical environments. Not everything needs an LLM as the solution to a problem.

    I would go as far as to say that using an LLM in this context is the wrong solution and is irrelevant to critical systems. Maybe some here see everything as tokens and feel every problem must be solved with LLMs.

    Using LLMs to rewrite a toy web app from JavaScript to TypeScript is great, but that isn't good enough for safety-critical systems.

    • Safety-critical software is mostly a compliance dance that incidentally produces artifacts with lower defect rates than usual. LLMs can help with safety-critical code as long as a human signs their name and takes responsibility for its behavior.


    • I agree with you. The question is: how the hell is this never discussed when assessing the economic potential of AI-driven disruption? I ask because I have the impression that all the really relevant industries are resistant to the current narrative. That said, we had Claude helping bomb a school full of kids; you would guess the military would know better, but no :/