Comment by astrobe_
17 hours ago
> There is a prevalent culture of expecting users to not make mistakes.
I think the older among us C/C++ programmers come from no-safety languages like assembly. That doesn't mean that all of us are "macho programmers" (as I was once called here). C's weak typing and compilers that merely emit warnings give a false sense of security, which is tricky to deal with.
The statement you make is not entirely correct. The more accurate statement is that there is a prevalent culture of expecting users to find strategies to avoid mistakes. We are engineers: we do what we need with what we have, and we did what we had to with what we had.
When you program with totally unsafe languages, you develop more strategies than just relying on a type checker and borrow checker: RAII, "crash early", TDD, completion-compatible naming conventions, even syntax highlighting (coloring octal numbers differently)...
BUT. The cultural characteristics of programmers are only one quarter of the story. The bigger part is company culture, and more specifically the availability of programmers. You won't promote safer languages and safer practices by convincing programmers that they have zero impact on performance. It's the companies that you need to convince that the safe alternatives are as productive [1] as the less safe ones.
I get the feeling you didn't watch my talk. The example in question is sorting. Suppose, for example, that your comparison function does not implement a strict weak ordering, which can easily happen if you use <= instead of <. In C++ you routinely get out-of-bounds reads and writes; in Rust you get some unspecified element order.
In what world is the former preferable to the latter?
This behavior is purely an implementation choice. Even the C people behind glibc and LLVM's libc consider it undesirable and are willing to spend 2-3% overhead to make sure you don't get that behavior.
No, this is not "expecting users to find strategies to avoid mistakes".
> Even the C people glibc and LLVM libc consider this to be undesirable and are willing to spend 2-3% overhead on making sure you don't get that behavior.
libc++ actually had to roll back a std::sort improvement because it broke too much code that was relying on bad comparators. From the RFC for adding comparator checks to debug libc++ [0]:
> Not so long ago we proposed and changed std::sort algorithm [1]. However, it was rolled back in 16.0.1 because of failures within the broken comparators. That was even true for the previous implementation, however, the new one exposed the problems more often.
[0]: https://discourse.llvm.org/t/rfc-strict-weak-ordering-checks...
[1]: https://reviews.llvm.org/D122780 (not the original link, but I think this is the review for the changeset that was rolled back)
It looks more like an implementation error to me; actually, it looks more like a design mistake than an implementation choice. There are arguments in favor of using an abstract functor class for the comparison function (you'll need a closure sooner or later), and that would have given the chance to warn the user about this particular issue in the docs. At least it would have been more visible and clearer than it currently is [1].
Because until vibe coding becomes a culture, programmers are at least expected to "RTFM". But that's also a requirement that is becoming harder to meet by the year, because, as you almost said in the first few minutes of your talk, "we needed to merge it ASAP".
This mistake seems to have been somewhat fixed in C++20 [2]. "Too little too late", yes, probably.
[1] https://en.cppreference.com/w/cpp/algorithm/sort.html
[2] ibidem, tacit use of std::less.
> When you program with totally unsafe languages, you develop more strategies than just relying on a type checker and borrow checker: RAII, "crash early", TDD, completion-compatible naming conventions, even syntax highlighting (coloring octal numbers differently)...
Having written a fair bit of Rust and C, I don't consider the tools for safety in C to be good enough.
In C, it's so easy for small mistakes to turn into CVEs. ASAN and friends help, but they're a long way from perfect. Testing helps, but in C there's usually a fair bit of time between when I make a mistake and when I discover the bug through testing. It's also so easy for bugs in C to hide in the cracks of UB.
One of my clearest experiences with C and Rust was a rope library I wrote several years ago. Ropes are "fancy strings": they're strings, but they support O(log n) insert & delete at arbitrary positions. I wrote my library in pure C, implemented on top of a skip list. The code is very subtle - there's a lot of very careful logic. A single incorrect line of code will often cause silent data corruption or memory errors that don't show up until much later.
It took about as long to properly test & debug the library as it took to write it in the first place. Debugging it was exhausting - there were a myriad of obscure edge cases that I needed fuzzing to track down. When the fuzzer found problems, going from a failing fuzzer trace to a code fix was a big job.
Before I started, I had a bunch of optimisations in mind that I wanted to add to the library. But it was so exhausting getting it working at all that I never got around to most of them. For example, I wanted to make each node in the skip list a gap buffer to reduce memcopies. But implementing that would have required significant code changes - which in turn would have meant a new round of memory bugs and debugging. I never brought myself to do it.
At some point I rewrote the library in Rust, with liberal use of raw pointers. I made just as many mistakes in the implementation - though the compiler caught a lot of them. The first time I ran it, it segfaulted. And I thought "here we go again". But despite using raw pointers, there were only 2 unsafe functions in the whole program. A segfault in Rust can only come from unsafe code. So I took a read of that code - and lo and behold, there was my bug, plain as day. Time to fix: 2 minutes. The library never segfaulted again in all my testing. The first time I benchmarked it, it was ~10% faster than the C version. I still have no idea why.
It was so much easier to write that a little while later, I put the gap buffer optimisation in. Now the Rust library is 2-3x faster than the C one. In this case, memory safety made my program easier to write. And that resulted in better performance.
If anyone is curious, the C and Rust code is here:
https://github.com/josephg/librope
https://github.com/josephg/jumprope-rs
> BUT. the cultural characteristics of the programmers are only one-quarter of the story. The bigger part is about company culture, and more specifically the availability of programmers.
Yeah, absolutely. I think this is the biggest downside of Rust. Rust is really hard - and painful - to learn. It front-loads all the pain. In C, you suffer while debugging. In Rust, all that suffering happens while learning the language in the first place. I spent months fighting the borrow checker. And it's very demotivating not being able to compile your program at all. Once you understand it, it makes sense. But I think there will always be a limited pool of programmers willing to struggle through.
Even ChatGPT is bad at Rust. It makes all sorts of classic beginner mistakes with lifetimes, and the resulting code often won't compile. Even after the problem is pointed out, ChatGPT is often unable to correct lifetime bugs.