Comment by rvz
6 hours ago
It is completely relevant if you want reliable software that you use daily to continue running without a massive rewrite.
Before suggesting that LLMs be used to completely rewrite this sort of software, remember there is a reason compilers need to be certified to operate in safety-critical environments. Not everything needs LLMs as the solution to a problem.
I would go as far as to say that using an LLM in this context is the wrong solution and is irrelevant to critical systems. Maybe some here see everything as tokens and feel they must solve every problem with LLMs.
Rewriting a toy web app from JavaScript to TypeScript using LLMs is great, but that approach isn't suitable for safety-critical systems.
Safety-critical software is mostly a compliance dance that incidentally produces artifacts with lower defect rates than usual. LLMs can help with safety-critical code as long as a human signs their name and takes responsibility for its behavior.
When I'm sitting in a plane running CAS firmware, I'd like to think it wasn't written by an LLM, and that my death in the event of a CAS failure isn't chalked up to "some engineer somewhere gets in trouble".
There probably already is generated code in there, only it was generated from UML. I don’t think that LLM generated code will be treated differently from the point of view of the relevant regulations.
I agree with you. The question is: how is this never discussed when assessing the economic potential of AI-driven disruption? I ask because I have the impression that all the really relevant industries are resistant to the current narrative. That said, we had Claude helping bomb a school full of kids; you would guess the military would know better, but no :/