I think that's too pessimistic. The code is there and it can be used to push the project forward. If some part of it is not good enough, then an alternative implementation can be created (potentially in a different language).
A classic: https://www.joelonsoftware.com/2000/04/06/things-you-should-...
>> "The idea that new code is better than old is patently absurd. Old code has been used. It has been tested. Lots of bugs have been found, and they’ve been fixed. There’s nothing wrong with it. It doesn’t acquire bugs just by sitting around on your hard drive."
>> "Each of these bugs took weeks of real-world usage before they were found. The programmer might have spent a couple of days reproducing the bug in the lab and fixing it. If it’s like a lot of bugs, the fix might be one line of code, or it might even be a couple of characters, but a lot of work and time went into those two characters."
>> "When you throw away code and start from scratch, you are throwing away all that knowledge. All those collected bug fixes. Years of programming work."
It's an older piece, but like good old code, it still holds up. Newer tools and technology have made creating new code easier, but they've also made improving old code easier in equal measure.
It's a good point in general, but in this case it's not clear if the cost of re-writing the existing codebase is less than the cost of staying with a memory-unsafe language.
We know from past experience that it takes an extreme amount of time and effort to harden a browser written in C++ against malicious web content. The Ladybird codebase is not particularly "old" in any sense of the word. Judging by GitHub's stats, most of the code is less than four years old, and it is still a long way from being ready for general use. I think it's safe to say Ladybird still has a vast amount of work ahead fixing vulnerabilities that arise from the lack of memory safety.
I find it quite plausible that the cost of re-writing the existing code in Rust is less than the cost of fixing all of the current and future bugs in the C++ codebase that Rust would catch at compile time.
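To make the class of bug concrete, here's a minimal sketch (my own example, not from the Ladybird code): the C++ below compiles cleanly at default warning levels, while the equivalent Rust is rejected before it ever runs.

    // Compiles without complaint in C++, yet reading `first` after the loop is
    // undefined behaviour: push_back may reallocate the vector's buffer and
    // leave the reference dangling.
    #include <iostream>
    #include <vector>

    int main() {
        std::vector<int> lengths = {10, 20, 30};
        const int& first = lengths[0];      // reference into the vector's storage

        for (int i = 0; i < 1000; ++i) {
            lengths.push_back(i);           // may reallocate, invalidating `first`
        }

        std::cout << first << "\n";         // use-after-free, silently accepted
        return 0;
    }

The equivalent Rust (holding `&lengths[0]` across `lengths.push(i)`) fails to compile with "cannot borrow `lengths` as mutable because it is also borrowed as immutable", which is exactly the kind of bug that otherwise has to be found through fuzzing and real-world usage.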
The only exception is if you have 500k LOC in a language whose runtime is going to be deprecated on all platforms overnight.
I'm referring to the, uh, retrospectively unfortunate decision I made in 2007 to start building large-scale business app frontends in AS3.
I guess I should be thankful for the work, having to rewrite everything in TS from scratch a decade later. (At least the backends didn't have to be torn down).
Old code does acquire new bugs by sitting on your hard drive, since it interfaces with dozens of libraries and APIs that don't care how well tested the code is: every code path depends on multiple components playing well together and following standards/APIs/formats that the old code has no knowledge of. Also, the mountain of patch-fixes and "workarounds" eventually forces the programmers into a corner, where development is hobbled by the constraints and quirks of "battle-tested" code, which will be thrown away as soon as it can't support fancy new feature X or can't use a fancy new library API without extra layers of indirection.
Would another language have avoided this?
Nothing but Rust is safe from being attacked by the Rust zealots. It's been extremely annoying these last few years.
This was more C++ versus "we haven't picked what we are going to write this in"
If we go by the parent's definition of “C++ == tech debt”, then yes.