Comment by drnick1
4 days ago
> What is interesting is that the apparent cleanliness of the code (it reads very well) is obscuring the fact that the quality is actually quite low.
I think this is a general feature and one of the greatest advantages of C. It's simple, and it reads well. Modern C++ and Rust are just horrible to look at.
I slightly unironically believe that one of the biggest hindrances to rust's growth is that it adopted the :: syntax from C++ rather than just using a single . for namespacing.
I believe that the fanatics in the rust community were the biggest factor. They turned me off what eventually became a decent language. There are some language particulars that were strange choices, but I get that if you want to start over you will try to get it all right this time around. But where the Go authors tried to make the step easy and kept their ego out of it, it feels as if the rust people aimed at creating a new temple rather than just making a new tool. This created a massive chicken-and-egg problem that did not help adoption at all. Oh, and toolchain speed. For non-trivial projects, for the longest time the rust toolchain was terribly slow.
I don't remember any other language's proponents actively attacking the users of other programming languages.
> But where the Go authors tried to make the step easy and kept their ego out of it
That is very different to my memories of the past decade+ of working on Go.
Almost every single language decision they eventually caved on that I can think of (internal packages, vendoring, error wrapping, versioning, generics) was preceded by months if not years of arguing that it wasn't necessary, often followed by an implementation attempt that seemed ever so slightly off, as if out of spite.
Let's not forget that the original Go 1.0 expected every project's main branch to maintain backward compatibility forever or else downstreams would break, and that without hacks (which eventually became vendoring) you could not build anything without an internet connection.
To be clear, I use Go (and C... and Rust) and I do like it on the whole (despite and for its flaws) but I don't think the Go authors are that different to the Rust authors. There are (unfortunately) more fanatics in the Rust community but I think there's also a degree to which some people see anything Rust-related as being an attack on other projects regardless of whether the Rust authors intended it to be that way.
1 reply →
> I believe that the fanatics in the rust community were the biggest factor.
I second this; for a few years it was impossible to have any sort of discussion on various programming places when the topic was C: the conversation would get quickly derailed with accusations of "dinosaur", etc.
Things have quieted down recently (over the last three years or so) and there have been far fewer derailments.
2 replies →
Being a C++ developer and trafficking mostly in C++ spaces, there is a phenomenon I've noticed that I've taken to calling Rust Derangement Syndrome. It's where C and C++ developers basically make Rust the butt of every joke, and make fun of it in a way that is completely out of proportion to how much they actually interact with Rust developers in the wild.
It's very strange to witness. Annoying advocacy of languages is nothing new. C++ was at one point one of those languages, then it was Java, then Python, then Node.js. I feel like if anything, Rust was a victim of a period of increased polarization on social media, which blew what might have been previously seen as simple microaggressions completely out of proportion.
4 replies →
Software vulnerabilities are an implicit form of harassment.
1 reply →
> I don't remember any other language's proponents actively attacking the users of other programming language.
I just saw someone on Hacker News saying that Rust was a bad language because of its users.
4 replies →
The safer the C code, the more horrible it starts looking though... e.g.
I don't understand why people think this is safer, it's the complete opposite.
With that `char msg[static 1]` you're telling the compiler that `msg` can't possibly be NULL, which means it will optimize away any NULL check you put inside the function. But callers can still happily pass a pointer that may be NULL, with no warnings whatsoever.
The end result is that with an "unsafe" `char *msg`, you can at least handle the case of `msg` being NULL. With the "safe" `char msg[static 1]` there's nothing you can do -- if you receive NULL, you're screwed, no way of guarding against it.
For a demonstration, see [1]. Both gcc and clang are passed `-Wall -Wextra`. Note that the NULL check is removed in the "safe" version (check the assembly). See also the gcc warning about the "useless" NULL check ("'nonnull' argument 'p' compared to NULL"), and worse, the lack of warnings in clang. And finally, note that neither gcc nor clang warns about the call to the "safe" function with a pointer that could be NULL.
[1] https://godbolt.org/z/qz6cYPY73
> I don't understand why people think this is safer, it's the complete opposite.
Yup, and I don't even need to check your godbolt link - I've had this happen to me once. It's the implicit conversion that makes it a problem. You can't even typedef it away as a new type (the conversion still happens).
The real solution is to create and use opaque types. In this case, wrapping the `char[1]` in a struct would almost certainly generate compilation errors if any caller passed the wrong thing in the `char[1]` field.
Compared to other languages, this is still nice.
It is - like everything else - nice because you, me and lots of others are used to it. But I remember starting out with C and thinking 'holy crap, this is ugly'. After 40+ years looking at a particular language it no longer looks ugly simply because of familiarity. But to a newcomer C would still look quite strange and intimidating.
And this goes for almost all programming languages. Each and every one of them has warts and issues with syntax and expressiveness. That holds true even for the most advanced languages in the field (Haskell, Erlang, Lisp), and even more so for languages that were originally designed for 'readability'. Programming is by its very nature more akin to solving a puzzle than to describing something. The puzzle is how to get the machine to do something, to do it correctly, to do it safely and to do it efficiently, all while satisfying the constraint of how much time you are prepared (or allowed) to spend on it. Picking the 'right' language will always be a compromise on some of these; there is no programming language that is perfect (or even just 'the best' or 'suitable') for all tasks, and no language is better than every other for a given subset of tasks unless that subset is very small.
3 replies →
Meanwhile, in Modula-2 from 1978, that would be
Now you can use LOW() and HIGH() to get the lower and upper bounds, and the code is naturally bounds-checked unless you disable the checks, locally or globally.
This should not be downvoted; it is both factually correct and a perfect illustration that these problems were already solved, and ages ago at that.
It is as if just pointing this out already antagonizes people.
3 replies →