Comment by fzeroracer

9 hours ago

I cannot empirically prove that my OS is secure, because I haven't written it. I trust that the maintainers of my OS have done their due diligence in ensuring it is secure, because they take ownership of their work.

But when I write software, critical software that sits on a customer's device, I take ownership of the code I've written, because I can know what I've written. It may contain bugs or issues I'll need to fix, but I know that at the time I tried to apply the best practices I was aware of.

So if I ask you the same thing: do you know whether your software is secure? What architecture prevents someone from exfiltrating all of the account data from pine town? What best practices are applied here?

I didn't say OS, I said OSS. Open-source software.

  • Fair mistake on my end; I'm aware of what OSS means, but my eyes have a tendency to skip a letter or two. The same argument applies: if I write something and release it to the OSS community, there's going to be an expectation that A) I know deeply how it works and B) I know whether it's reasonably secure when it's dealing with personal data. They can verify this independently by looking at the code.

    But if the code is unreadable and I can't make a valid argument for my software, what's left?

    • Are you saying you know your code has exactly zero bugs because you wrote it? That's obviously absurd, so what you're really saying is "I'm fairly familiar with all the edge cases, and I'm confident my code has very few issues", which is the same thing I say.

      Regardless, though, this argument is a bit like tilting at windmills. Software development has changed, and it's never going back. No matter how many looms you smash, the question now is "how do we make LLM-generated code safer/better/faster/more maintainable?", not "how do we put the genie back in the bottle?".

      Also I will give myself credit for using three analogies in two sentences.