
Comment by bruckie

17 hours ago

> Do we, really?

Yes, or pretty close to it. What we don't know how to do (AFAIK) is do it at a cost that would be acceptable for most software. So yes, it mostly gets done for (components of) planes, spacecraft, medical devices, etc.

Totally agreed that most software is a morass of bugs. But giving examples of buggy software doesn't provide any information about whether we know how to make non-buggy software. It only provides information about whether we know how to make buggy software—spoiler alert: we do :)

> So yes, it mostly gets done for (components of) planes, spacecraft, medical devices, etc.

I have to disagree here. All of the examples you mentioned regularly have bugs. Multiple spacecraft have been lost because of them. For planes there's the not-so-distant Boeing 737 MAX fiasco (admittedly that was bad software behavior triggered by a sensor failure). And news about bugs in medical devices pops up semi-regularly. So while the software for these might do a bit better than the rest, it is certainly nowhere close to being bug-free.

And the same goes for the specifications the software is based on. Those aren't bug-free either, and writing software against a flawed specification will inevitably result in flawed software.

That's not to say we should give up on trying to write bug-free software. But we currently don't know how to do so.

There is a huge wetware problem too. Like if I can send you an email or other message that tricks you and gets you to send me $10k, what do I care if the industry is 100% effective at blocking RCE?

That software also often has bugs. It's usually a bit more likely that they are documented, though, and unlikely to cause a significant failure on their own.

  • Building around bugs that you know exist but don't know where is also a part of it. Reliability in the face of bugs. The mere existence of bugs isn't enough to call the software buggy if the outcome is reliable (e.g., with triple modular redundancy).

    • For a silly example, see how Python programs have plenty of bugs, but they still (usually) don't allow for the kind of memory exploits that C programs give you.

      You could say that Python is designed around preventing these memory bugs.

Then we can't do it. Cost is a requirement.

  • Cost is a parameter subject to engineering tradeoffs, just like performance, feature sets, and implementation time.

    Security and reliability are also parameters that exist on a sliding scale; the industry has simply chosen to slide the "cost" parameter all the way to one end of the spectrum. As a result, the number of bugs and hacks observed is far enough from the desired value of zero that the true requirements for those parameters cannot honestly be said to be zero.

    • Is it the industry making this choice or the customer?

      You could make a car that's safer than others at 10x the price but what would the demand look like at that price?

      Would you pay 2x for your favourite software and forego some of the more complex features to get a version with half the security issues?

    • > the number of bugs and hacks observed are far enough from the desired value of zero

      Zero is not the desired number, particularly not when discussing "hacks". This may not matter in the current situation, but there's a lot of "security maximalism" in industry conversations today, and people don't seem to realize that dragging the "security" slider all the way to the right means not just the costs becoming practically infinite, but also the functionality and utility of the product falling to zero.


  • The question was not if it was possible within price boundary X, but if it was possible at all. There is a difference, please don't confound possibility with feasibility.

  • Is having problematic features that cause problems also a requirement?

    The answer to the above question will reveal whether someone is an engineer or an electrician/plumber/code monkey.

    In virtually every other engineering discipline engineers have a very prominent seat at the table, and the opposite is only true in very corrupt situations.

    • Unlimited budget and unlimited people won't solve unlimited problems with perfection.

      Even basic theorems of science are incorrect.

  • Also people keep insisting on using unsafe languages like C.

    It depends on exactly what you are doing, but there are many languages that are efficient to develop in, if less efficient to execute, like Java and Javascript and Python, which are better in many respects, and other languages that are less efficient to develop in but more efficient to run, like Rust. So at the very least it is a trilemma and not a dilemma.

    • C is about the safest language you can choose: between CBMC, Frama-C, and Coccinelle there is hardly another language with comparable tooling for writing actually safe software that you can securely run on single-core hardened systems. I would be really interested to hear the alternatives, though!

    • > if less efficient to execute like Java and Javascript and Python

      One of these is not like the others...

      Java (JVM) is extremely fast.


    • The language plays a role, but I think the best example of software with very few bugs is something like qmail and that's written in C. qmail did have bugs, but impressively few.

      Writing code that carefully, however, is really not something you just do; it would require a massive improvement in skills overall. The majority of developers simply aren't skilled enough to write something anywhere near the quality of qmail.

      Most software also doesn't need to be that good, but then we need to be more careful with deployments. The fact that someone just installs Wordpress (which itself is pretty good in terms of quality) and starts installing plugins from untrusted developers indicates that many still don't have a security mindset. You really should review the code you deploy, but I understand why many don't.
