The C++ Core Guidelines have existed for nearly 10 years now. Despite this, not a single implementation in any of the three major compilers exists that can enforce them. Profiles, which Bjarne et al have had years to work on, will not provide memory safety[0].
The C++ committee, including Bjarne Stroustrup, needs to accept that the language cannot be improved without breaking changes. However, it's already too late. Even if somehow they manage to make changes to the language that enforce memory safety, it will take a decade before the efforts propagate at the compiler level (a case in point is modules being standardised in 2020 but still not ready for use in production in any of the three major compilers).
> The C++ committee, including Bjarne Stroustrup, needs to accept that the language cannot be improved without breaking changes.
The example in the article starts with "Wow, we have unordered maps now!"
Just adding things modern languages have is nice, but doesn't fix the big problems.
The basic problem is that you can't throw anything out. The mix of old and new stuff leads to obscure bugs. The new abstractions tend to leak raw pointers, so that old stuff can be called.
C++ is almost unique in having hiding ("abstraction") without safety. That's the big problem.
I find the unordered_map example rather amusing. C++’s unordered_map is, somewhat infamously, specified in an unwise way. One basically cannot implement it with a modern, high performance hash table for at least two reasons:
1. unordered_map requires some bizarre and not widely useful abilities that mostly preclude hash tables with probing:
2. unordered_map has fairly strict iteration and pointer invalidation rules that are largely incompatible with the implementations that turn out to be the fastest. See:
> References and pointers to either key or data stored in the container are only invalidated by erasing that element, even when the corresponding iterator is invalidated.
And, of course, this is C++, where (despite the best efforts of the “profiles” people), the only way to deal with lifetimes of things in containers is to write the rules in the standards and hope people notice. Rust, in contrast, encodes the rules in the type signatures of the methods, and misuse is deterministically caught by the compiler.
You absolutely can throw things out, and they have! Dynamic exception specifications, the old meaning of `auto`, and breaking changes to operator== are the three I know of. There were also some minor breaking changes to comparison operators in C++20.
They absolutely could say "in C++26 vector::operator[] will be checked" and add an `.at_unsafe()` method.
They won't though because the whole standards committee still thinks that This Is Fine. In fact the number of "just get good" people in the committee has probably increased - everyone with any brains has run away to Rust (and maybe Zig).
While I sort of agree on the complaint, personally I think the best spot of C++ in this ecosystem is still on great backward-compatibility and marginal safety improvements.
I would never expect our 10M+ LOC performance-sensitive C++ code base to be formally memory safe, but so far only C++ allowed us to maintain it for 15 years with partial refactors and minimal upgrade pain.
I think at least Go and Java have as good backwards compatibility as C++.
Most languages take backwards compatibility very seriously. It was quite a surprise to me when Python broke so much code with the 3.12 release. I think it's the exception.
The language is improving (?), although IME it's beside the point: I'm finding the new features less useful for everyday code. I'm perfectly happy with C++17/20 for 99% of the code I write. And keeping backwards compatibility for most real-world software is a feature, not a bug, OK? Breaking it would actually drive me away from the language.
I feel like a few decades ago, standards intended to standardize best practices and popular features from compilers in the field. Dreaming up standards that nobody has implemented, like what seems to happen these days, just seems crazy to me.
I hoped Sean would open source Circle. It seemed promising, but it's been years and I don't see any tangible progress. Maybe I'm not looking hard enough?
Profiles will not provide perfect memory safety, but they go a long way to making things better. I have 10 million lines of C++. A breaking change (doesn't matter if you call it new C++ or Rust) would cost over a billion dollars - that is not happening. Which is to say I cannot use your perfect solution, I have to deal with what I have today and if profiles can make my code better without costing a full rewrite then I want them.
Changes which re-define the language to have less UB will help you if you want safety/ correctness and are willing to do some work to bring that code to the newer language. An example would be the initialization rules in (draft) C++ 26. Historically C++ was OK with you just forgetting to initialize a primitive before using it, that's Undefined Behaviour in the language so... if that happens too bad all bets are off. In C++ 26 that will be Erroneous Behaviour and there's some value in the variable, it's not always guaranteed to be valid (which can be a problem for say, booleans or pointers) but just looking at the value is no longer UB and if you forgot to initialize say an int, or a char, that's fine since any possible bit sequence is valid, what you did was an error, but it's not necessarily fatal.
If you're not willing to do any work then you're just stuck, nobody can help you, magic "profiles" don't help either.
But, if you're willing to do work, why stop at profiles? Now we're talking about a price and I don't believe that somehow the minimum assignable budget is > $1Bn
Enforcing style guidelines seems like an issue that should be tackled by non-compiler tools. It is hard enough to make a compiler without rolling in a ton of subjective standards (yes, the core guidelines are subjective!). There are lots of other tools that have partial support for detecting and even fixing code according to various guidelines.
It's part of a compiler ecosystem. ie. The front end is shared.
See clang-tidy and clang analyzer for example.
ps: That's what I like most about the core guidelines, they are trying very hard to stick to guidelines (not rules) that pretty much uncontroversially make things safer _and_ can be checked automatically.
They're explicitly walking away from bikeshed painting like naming conventions and formatting.
What are you talking about, the language gets better with each release. Using C++ today is a hell of a lot better than even 10 years ago. It seems like people hold "memory safety" as the most important thing a language can have. I completely disagree. It turns out you can build awesome and useful software without memory safety. And it's not clear if memory safety is the largest source of problems building software today.
In my opinion, having good design and architecture are much higher on my list than memory safety. Being able to express my mental model as directly as possible is more important to me.
Does it matter whether it is a common class of bugs or a not so common one? The point is, this is a class of bugs you do not have when picking a different language.
C++ claimed for decades to be about eliminating a class of resource management bugs you can have in C code, that was its biggest selling point. So why is eliminating another class of bugs a nice to have now?
C++ has been losing projects to memory-safe languages for decades now; just think of all the business software in Java, scientific software in Python, ... The industry has been moving toward memory-safe software for decades. Rust is just the newest option -- and a very compelling one, as it has no runtime environment or garbage collector, just like C++.
> And it's not clear if memory safety is the largest source of problems building software today.
The Chromium team found that
> Around 70% of our high severity security bugs are memory unsafety problems (that is, mistakes with C/C++ pointers). Half of those are use-after-free bugs.
It’s possible you hadn’t come across these studies before. But if you have, and you didn’t find them convincing, what did they lack?
- Were the codebases not old enough? They’re anywhere between 15 and 30 years old, so probably not.
- Did the codebases not have enough users? I think both have billions of active users, so I don’t think so.
- Was it a “skill issue”? Are the developers at Google and Microsoft just not that good? Maybe they didn’t consider good design and architecture at any point while writing software over the last couple of decades. Possible!
There’s just one problem with the “skill issue” theory though. Android, presumably staffed with the same calibre of engineers as Chrome, also written in C++ also found that 76% of vulnerabilities were related to memory safety. We’ve got consistency, if nothing else. And then, in recent years, something remarkable happened.
> the percentage of memory safety vulnerabilities in Android dropped from 76% to 24% over 6 years as development shifted to memory safe languages.
They stopped writing new C++ code and the memory safety vulnerabilities dropped dramatically. Billions of Android users are already benefiting from much more secure devices, today!
You originally said
> And it's not clear if memory safety is the largest source of problems building software today.
It is possible to defend this by saying “what matters in software is product market fit” or something similar. That would be technically correct, while side stepping the issue.
Instead I’ll ask you: do you still think it is possible to write secure software in C++ by just trying a little harder, through “good design and architecture”, as your previous comment implied?
> Profiles, which Bjarne et al have had years to work on, will not provide memory safety
While I agree with this in a general sense, I think it ought to be quite possible to come up with a "profile" spec that's simply meant to enforce the language restriction/subsetting part of Safe C++ - meaning only the essentials of the safety checking mechanism, including the use of the borrow checker. Of course, this would not be very useful on its own without the language and library extensions that the broader Safe C++ proposal is also concerned with. It's not clear as of yet if these can be listed as part of the same "profile" specifications or would require separate proposals of their own. But this may well be a viable approach.
I have seen 3 different safe C++ proposals (most are not papers yet, but they are serious efforts to show what safe C++ could look like). However, there is a tradeoff here. The full borrow-checker-in-C++ approach is incompatible with all current C++, and so adopting it is about as difficult as rewriting all your code in some other language. The other proposals are not as safe, but offer different degrees of "you can use this with your existing code". None are ready to be added to C++, but they all provide something better, and I'm hopeful that something gets into C++ (though probably not before C++32).
Last weekend, I took an old cross-platform app written by somebody else between 1994-2006 in C++ and faffed around with it until it compiled and ran on my modern Mac running 14.x. I upped the CMAKE_CXX_STANDARD to 20, used Clang, and all was good. Actually, the biggest challenge was the shoddy code in the first place, which had nothing to do with its age. After I had it running, Sonar gave me 7,763 issues to fix.
The moral of the story? Backwards compatibility means never leaving your baggage behind.
> [M]any developers use C++ as if it was still the previous millennium. [...] C++ now offers modules that deliver proper modularity.
C++ may offer modules (in fact, it's been offering them since 2020); however, when it comes to their implementation in mainstream C++ compilers, only now are things becoming sort of usable, with modules still being a challenge in more complex projects due to compiler bugs in the corner cases.
I think we need to be honest and upfront about this. I've talked to quite a few people who have tried to use modules but were unpleasantly surprised by how rough the experience was.
I was an extreme C++ bigot back in the late 90's, early 2000's. My license plate back then was CPPHACKR[1]. But industry trends and other things took my career in the direction of favoring Java, and I've spent most of the last 20+ years thinking of myself as mainly a "Java guy". But I keep buying new C++ books and I always install the C++ tooling on any new box I build. I tell myself that "one day" I'm going to invest the time to bone up on all the new goodies in C++ since I last touched it, and have another go.
When the heck that day will actually arrive, FSM only knows. The will is sort-of there, but there are just SO many other things competing for my time and attention. :-(
[1]: funny side story about that. For anybody too young to remember just how hot the job market was back then... one day I was sitting stopped at a traffic light in Durham (NC). I'm just minding my own business, waiting for the light to change, when I catch a glimpse out of my side mirror, of somebody on foot, running towards my car. The guy gets right up to my car, and I think I had my window down already anyway. Anyway, the guy gets up to me, panting and out of breath from the run and he's like "Hey, I noticed your license plate and was wondering if you were looking for a new job." About then the light turned green in my direction, and I'm sitting there for a second in just stunned disbelief. This guy got out of his car, ran a few car lengths, to approach a stranger in traffic, to try to recruit him. I wasn't going to sit there and have a conversation with horns honking all around me, so I just yelled "sorry man" and drove off. One of the weirder experiences of my life.
The programmers on the sound team at the video game company I worked for as an intern in 1998 would always stash a couple of extra void pointers in their classes just in case they needed to add something in later. Programmers should never lose sight of pragmatism. Seeking perfection doesn’t help you ship on time. And often, time to completion matters far more than robustness.
Funny, sounds like the Simpsons gag from the same time period: “what’s wrong with this country? Can’t a man walk down the street without being offered a job?”
Interesting. I was SO into the Simpsons at one time, but somehow I'd never seen that episode (as best as I can remember anyway). Now I feel the urge to go back and rewatch every episode of the Simpsons from the beginning. It would be fun, but man, what a time sink. I started the same thing with South Park a while back and stalled out somewhere around Season 5. I'd like to get back to it, but time... time is always against us.
Note to the above: I am wrong. My license plate back then was C++HACKR, with the actual "+" signs. NC license plates do allow that, although while the +'s are on the tag, they don't show up on your registration card or in the DMV computer system.
I mixed up the tag and my old domain name, which was "cpphacker.co.uk" (and later, just cpphacker.com/org).
Here's how Bjarne describes that first C++ program:
"a simple program that writes every unique line from input to output"
Bjarne does thank more than half a dozen people, including other WG21 members, for reviewing this paper, maybe none of them read this program?
More likely, like Bjarne they didn't notice that this program has Undefined Behaviour for some inputs and that in the real world it doesn't quite do what's advertised.
The collect_lines example won't even compile, it's not valid C++, but there's undefined behavior in one of the examples? I'm very surprised and would like to know what it is, that would be truly shocking.
Really? If you've worked with C++ it shouldn't be shocking.
The first example uses the int type. This is a signed integer type and in practice today it will usually be the 32-bit signed integer Rust calls i32 because that's cheap on almost any hardware you'd actually use for general purpose software.
In C++ this type has Undefined Behaviour if allowed to overflow. For the 32-bit signed integer that will happen once we see 2^31 identical lines.
In practice the observed behaviour will probably be that it treats 2^32 identical lines as equivalent to zero prior occurrences and I've verified that behaviour in a toy system.
"Undefined behavior" is not a bug. It's something that isn't specified by an ISO standard.
Rust code is 100 percent undefined behavior because Rust doesn't have an ISO standard. So, theoretically some alternative Rust compiler implementation could blow up your computer or steal your bitcoins. There's no ISO standard to forbid them from doing so.
(You see where I'm going with this? Standards are good, but they're a legal construct, not an algorithm.)
I haven't read much from Bjarne but this is refreshingly self-aware and paints a hopeful path to standardize around "the good parts" of C++.
As a C++ newbie I just don't understand the recommended path I'm supposed to follow, though. It seems to be a mix of "a book of guidelines" and "a package that shows you how you should be using those guidelines via implementation of their principles".
After some digging it looks like the guidebook is the "C++ Core Guidelines":
> use parts of the standard library and add a tiny library to make use of the guidelines convenient and efficient (the Guidelines Support Library, GSL).
Which seems to be this (at least Microsoft's implementation):
And I'm left wondering, is this just how C++ is? Can't the language provide tooling for me to better adhere to its guidelines, bake in "blessed" features and deprecate what Bjarne calls, "the use of low-level, inefficient, and error-prone features"? I feel like these are tooling-level issues that compilers and linters and updated language versions could do more to solve.
The problem with 45 years of C++ is that different eras used different features. If you have 3 million lines of C++ code written in the 1990's that still compiles and works today, should you use new 202x C++ features?
I still feel the sting of being bit by C++ features from the 1990s that turned out to be footguns.
Honestly, I kinda like the idea of "wrapper" languages. Typescript/Kotlin/Carbon.
I'm curious about that now, too. Is there the equivalent of Python's ruff or Rust's cargo clippy that can call out code that is legal and well-formed but could be better expressed another way?
Clang-tidy can rewrite some old code to be better. However, there is a lot of working code from the 1990s that cannot be automatically rewritten to a new style. Which is what makes adding tooling hard - somehow you need to figure out which code should follow the new style and which is old-style code where updating to modern would be too expensive.
> As a C++ newbie I just don't understand the recommended path I'm supposed to follow, though
Did you even read the article? He has given the recommended path in the article itself.
Two books describe C++ following these guidelines (except when illustrating errors): “A Tour of C++” for experienced programmers and “Programming: Principles and Practice Using C++” for novices. Two more books explore aspects of the C++ Core Guidelines:

- J. Davidson and K. Gregory: Beautiful C++: 30 Core Guidelines for Writing Clean, Safe, and Fast Code. Addison-Wesley. 2021. ISBN 978-0137647842.
- R. Grimm: C++ Core Guidelines Explained. Addison-Wesley. 2022. ISBN 978-0136875673.
> And I'm left wondering, is this just how C++ is? Can't the language provide tooling for me to better adhere to its guidelines
Well, first, the language can't provide tooling: C++ is defined formally, not through tools, and tools are not part of the standard. This is unlike, say, Rust, where, if I'm not mistaken, the language has so far been whatever the Rust compiler accepts.
But it's not just that. C++ design principles/goals include:
* multi-paradigmatism;
* good backwards compatibility;
* "don't pay for what you don't use"
and all of these in combination prevent baking in almost anything: It will either break existing code; or force you to program a certain way, while legitimate alternatives exist; or have some overhead, which you may not want to pay necessarily.
And yet - there are attempts to "square the circle". An example is Herb Sutter's initiative, cppfront, whose approach is to take in an arguably nicer/better/easier/safer syntax and transpile it into C++.
How does enforcing profiles per-translation unit make any sense? Some of these guarantees can only be enforced if assumptions are made about data/references coming from other translation units.
This is the one major stumbling block for profiles right now that people are trying to fix.
C++ code involves numerous templates, and the definition of those templates is almost always in a header file that gets included into a translation unit. If a safety profile is enabled in one translation unit that includes a template, but is omitted from another translation unit that includes that same template... well what exactly gets compiled?
The rule in C++ is that it's okay to have multiple definitions of a declaration if each definition is identical. But if safety profiles exist, this can result in two identical definitions having different semantics.
I guess modules are supposed to be the magic solution for that, Bjarne has shown them in this article, even using import std.
It's a bit optimistic, because modules are still not really a viable option in my eyes: you need proper support from the build systems, and notably CMake only has limited support for them right now.
> Bjarne Stroustrup, AT&T Labs, Florham Park, NJ, USA
>
> Abstract
>
> This paper outlines the proposal for generalizing the overloading rules for Standard C++ that is expected to become part of the next revision of the standard. The focus is on general ideas rather than technical details (which can be found in AT&T Labs Technical Report no. 42, April 1, 1998).
Modules sound cool for compile time, but do they prevent duplicative template instantiations? Because that's the real performance killer in my experience.
(It's a great post in general. N.B. that it's also quite old and export templates have been removed from the standard for quite some time after compiler writers refused to implement them.)
TL;DR: Declare your templates in a header, implement them in a source file, and explicitly instantiate them inside that same source file for every type that you want to be able to use them with. You lose expressiveness but gain compilation speed because the template is guaranteed to be compiled exactly once for each instantiation.
Which is to say, "extern template" is a thing that exists, that works, and can be used to do what you want to do in many cases.
The "export template" feature was removed from the language because only one implementer (EDG) managed to implement them, and in the process discovered that a) this one feature was responsible for all of their schedule misses, b) the feature was far too annoying to actually implement, and c) when actually implemented, it didn't actually solve any of the problems. In short, when they were asked for advice on implementing export, all the engineers unanimously replied: "don't". (See https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2003/n14... for more details).
1. You gain the ability to use the compilation unit's anonymous namespace instead of a detail namespace, so there is better encapsulation of implementation details. The post author stresses this as the actual benefit of export templates, rather than compile times.
2. You lose the ability to instantiate the template for arbitrary types, so this is probably a no-go for libraries.
3. Your template is guaranteed to be compiled exactly once for each explicit instantiation. (Which was never actually guaranteed for real export templates).
Bjarne Stroustrup (the creator of C++) is the best language designer. Many language designers will create a language, work on it for a couple years, and then go and make another language. Stroustrup on the other hand has been methodically working on C++ and each year the language becomes better.
Seeing badly formatted code snippets without color highlighting in an article called "21st Century C++" somehow resonates with my opinion of how hard C++ still is to write and to read after working with other languages.
This honestly looks like C++ being jury-rigged with features to the degree that it doesn't even look like what C++ is: a C-derived low-level language.
Everything is unobvious magic. Sure, you stick to a very restricted set of API usages and patterns, and all the magic allocation/deallocation happens out of sight.
But does that make it easier to debug? Better to code it?
This simply looks like C++ trying not to look like C++: like a completely different language, but one that was not built from the ground up to be that language, rather a bunch of shell games to make it look like another language as an illusion.
Yeah, I didn't have a problem keeping my shit straight in C++ in the '90s. The kitchen-sink approach since then hasn't been worth keeping up with. The fact that we're still dealing with header files means that the language stewards' priorities are not in line with practical concerns.
Same. Luckily my team switched to Rust almost 100%. So I don't need to learn about the godforsaken coroutine syntax and what pitfalls they laid when you use char wrong with it or in which subset of calls std::range does something stupid and causes a horrible performance regression.
Bjarne has been criticized for accepting too many (questionable) things into the language even at the dawn of C++, and the committee kept up that behavior. Moreover, they have this pattern that, given the options, they always choose the easiest-to-misuse and most unsafe implementation of anything that goes into the standard. std::optional is a mess, so is curly-bracket initialization, and auto is like choosing between stepping on Legos or putting your arm into a spider-filled bag.
The committee is the worst combination of "move fast and break things" and "not on my watch". C++98 was an okay language, C++11 was alright. Anything after C++14 is a minesweeper game with increasing difficulty.
> Bjarne has been criticized for accepting too many (questionable) things
He even writes that way in his own article... The quote from the last section of the introduction was hilarious, and actually made me laugh a little bit for almost those exact reasons.
Bjarne Stroustrup, in the Comm. ACM piece:

> I would have preferred to use the logically minimal vector{m} but the standards committee decided that requiring from_range would be a help to many.
I went from being curious about C++, to hating C++, to wanting to love it, to being fine with it, to using it for work for 5+ years, to abandoning it and finally to want to use it for game development, maybe. It's the circle of life.
The masochist in me keeps coming back to c++. My analogy of it to other languages is that it’s like painting a house with a fine brush versus painting the Mona Lisa with a roller. Right tool for the job I suppose.
It's my job and career(well, C and C++) but I often try to avoid C++. Whenever I use it(usually writing tests) I go through this cycle of re-learning some cool tricks, trying to apply them, realizing they won't do what I want or the syntax to do it is awkward and more work than the dumb way, and I end up hating C++ and feeling burned yet again.
> contemporary C++30 can express the ideas embodied in such old-style code far simpler
IMO, newer C++ versions are becoming more complex (too many ways to do the same thing), less readable (prefer explicit types over 'auto', unless unavoidable) and harder to analyse performance and memory implications (hard to even track down what is happening under the hood).
I wish the C++ language and standard library would have been left alone, and efforts went into another language, say improving Rust instead.
I have used auto liberally for 8+ years; maybe I'm accustomed to reading code containing it, but I really can't think of it being a problem. I feel like auto increases readability; the only thing I dislike is that they didn't make it a reference by default.
Where do you see difficult-to-track-down performance/memory implications? Lambdas come to mind, and maybe coroutines (yet to use them, but guessing there may be some memory allocations under the hood). I like that I can breakpoint my C++ code and look at the disassembly if I am concerned that the compiler did something other than expected.
You don't 'have' to keep up with the language and I don't know that many people try to keep up with every single new feature - but it is worse to be one of those programmers for whom C++ stopped at C++03 and fight any feature introduced since then (the same people generally have strong opinions about templates too).
There are certainly better tools for many jobs and it is important to have languages to reach for depending on the task at hand. I don't know that anything is better than C++ for performance sensitive code.
I’ve been using c++ since the late 90’s but am not stuck there.
I was using c++11 when it was still called c++0x (and even before that when many of the features were developing in boost).
I took a break for a few years over c++14, but caught up again for c++17 and parts of c++20...
Which puts me 5-6 years behind the current state of things and there’s even more new features (and complexity) on the horizon.
I’m supportive of efforts to improve and modernize c++, but it feels like change didn’t happen at all for far too long and now change is happening too fast.
The ‘design by committee’ with everyone wanting their pet feature plus the kitchen sink thrown in doesn’t help reduce complexity.
Neither does implementing half-baked features from other ‘currently trendy’ languages.
It’s an enormous amount of complexity - and maybe for most code there’s not that much extra actual complexity involved but it feels overwhelming.
If you only read HN, you would think C++ died years ago.
As someone who worked in HFT, C++ is very much alive, and new projects continue to be created in it simply because of the sheer amount of experts in it. (For better or for worse)
Since C++14 or C++17 I feel no need to keep up with it. That's cool if they add a bunch more stuff, but what I'm using works great now. I only feel some "peer pressure" to signal to other people that I know C++20, but as of now, I've put nothing into it. I think it's best to lag behind a few years (for this language, specifically).
The compilers tend to lag a few years behind the language spec too, especially if you have to support platforms where the toolchains lag latest gcc/clang (Apple / Android / game consoles).
Respectfully, you might want to add at least a few C++20 features into your daily usage?
consteval/constinit guarantees to do what you usually want constexpr to do. Have personally found it great for making lookup tables and reducing the numbers of constants in code (and c++23 expands what can be done in consteval).
Designated initializer is a game-changer for filling structures. No more accidentally populating the wrong value into a structure initializer or writing individual assignments for each value you want to initialize.
You don't have to "keep up with it", if by this you mean what I think you mean.
You don't have to use features. Instead, when you have a (language) problem to solve or something you'd like to have, you look into the features of the language.
Knowing they exist beforehand is better but is the hard part, because "deep" C++ is so hermetic that it is difficult to understand a feature when you have no idea which problem it is trying to solve.
I think it's good enough for side projects. More powerful than C, so I don't need to hand-roll strings and some algos, but I tend to use a minimum number of features because I'm such an amateur.
> I used the from_range argument to tell the compiler and a human reader that a range is used, rather than other possible ways of initializing the vector. I would have preferred to use the logically minimal vector{m} but the standards committee decided that requiring from_range would be a help to many.
Oh so I have to remember from_range and can't do the obvious thing? Great. One more thing to distract me from solving the actual problem I'm working on.
What exactly is wrong with the C++ community that blinds them to this sort of thing? I should be able to write performant, low-level code leveraging batteries-included algorithms effortlessly. This is 2025 people.
On the other hand, the decline of robust and high-quality software started with the introduction of very immature ecosystems such as those around JavaScript and TypeScript.
It's really any other language other than those two.
The only places where C++ failed to take C's crown have been UNIX clones (naturally, due to the symbiotic relationship) and embedded, where even modern C couldn't replace C89 plus compiler extensions from the chip vendor. Many shops are stuck in the past, even though most toolchains are already up to C++20 and C17 nowadays.
Rust is still too new for many folks to adopt, it depends on how much you would be willing to help grow the ecosystem, versus doing the actual application.
It will eventually get there, but it also faces the same issues as C++ regarding taking over from C in UNIX/POSIX and embedded. C++ had the advantage of having been a kind of TypeScript for C in terms of adoption effort: a UNIX language from AT&T, designed to fit into the C ecosystem.
Depends exactly what you want to do. C is not very popular at all in professional settings - C++ is far more popular. I would say if you know Rust then C++ isn't very hard though. You'll write better C++ code too because you'll naturally keep the good habits that the Rust compiler enforces and the C++ compiler doesn't.
That's why C++ is still around today, it was built on some solid principles. Bjarne is such a good language designer because he never abandoned it. Lesser designers make a language and start another in 5 or 10 years. Bjarne saw the value in what he created and had a sense of responsibility to those using it to keep making it better and take their projects seriously.
Whenever I have an idea and I start a project, I start with C++ because I know if the idea works out, the project can grow and work 10 years later.
Love how he goes `int main() { ... }` and never returns an int from it. Even better: without extra error/warning flags the compiler will just eat this and generate some code from it, returning... yeah. Your guess is probably better than mine.
If the uber-bean counter, herald of the language of bean counters, demonstrates unwillingness to count beans, maybe the beans are better counted in another way.
Well, actually... the `main` function is handled specially in the standard. It is the only function with a non-void return type from which you don't need to return explicitly - if control reaches the end, it is treated as if you had returned 0.
(You will most definitely get a compiler error if you try this with any other function.)
You might say this is very silly, and you'd be right. But as quirks of C++ go it is one of the most benign ones. As usual it is there for backwards compatibility.
And, for what it's worth, the uber-bean counter didn't miss a bean here...
Depends. For certain fields the pay is great and there’s a dearth of candidates.
For other fields there is also a dearth of candidates but the pay falls short and you’ll be leaving tens of thousands of dollars on the table compared to what you could get with other languages.
I have often thought about writing something vaguely similar. We’ll see if I ever do. It wouldn’t be the same because I don’t hold the same position Bjarne did in the early days, but I am very interested in Rust history, and want to preserve it. It would be from my perspective rather than from the creator’s perspective.
I did give a talk one time on Rust’s history. It was originally at FOSDEM, but there was an issue with the recording. The ACM graciously asked me to do it again to get it down on video https://dl.acm.org/doi/10.1145/2959689.2960081
Unfortunately, Rust is significantly less expressive than C++ and therefore is unlikely to replace it for high-performance systems code. As much as I don’t like C++, it is very powerful as a tool. The ability to express difficult low-level systems constructs and optimizations concisely and safely in the language are its killer feature. Once you know how to use it, other languages feel hobbled.
C++ doesn't allow you to express low level systems constructs concisely and safely though. You usually get neither.
Look at the first example in the article, where the increment can overflow and cause UB despite that overflow having completely defined semantics at the hardware level. Fixing it requires either a custom addition function or C++26, another include, and add_sat(). I wouldn't consider either concise in a program that doesn't include all of std.
Just reading the first 1/5 of this made me bored. I started my career with C++, being heavy into it for 10 years. But I've been doing Swift for the last 10 at least. I had a job interview last week for a job that was heavy C++, with major reliance on templates and post-C++ 11... and it didn't go well. You know what? I don't give a shit.
It's crazy that with that amount of experience you wouldn't get the job, just because you lack some modern C++ info in your brain's memory. Stuff you could search for or ask an LLM in 5 seconds (or even look up in a freaking physical book). You'd probably be fully up to date within a few weeks.
Says a lot about the people hiring imo. Good luck to them finding someone who can recite C++ spec from memory.
If you last worked on pre-templates C++ and now need to work on a template-heavy codebase, you are effectively writing in a different language. I don't think it will be a few weeks of catching up.
Ha, thanks, and obviously true. But I can understand companies wanting people who can just march into their codebase and "hit the ground running," I guess.
I don't need the stress anyway. The dough would've been nice, though...
Something about the formatting of the code blocks used is all messed up for me. It seems to be independent of browser; it happens in both Firefox and Chrome.
This is a Bjarne issue. For personal reasons he uses proportional fonts in his code blocks (in his texts) instead of monospaced and the code snippets always look bad. I guess he is stuck in his ways, just have to work around this ugly look.
The font is selected by the HTML/CSS of the ACM site, not by Bjarne.
There may be a bug in the CSS of the ACM site, but I think that it is more likely that anyone who does not see correctly formatted code on that page has forgotten to open the settings of their browsers and select appropriate default fonts for "serif", "sans serif" and "monospace".
As installed, most browsers very seldom have appropriate default fonts, you normally must choose them yourself.
In this case, whoever does not see a monospace font must have a proportional font set in their browser as the default monospace font, and should correct that. A monospace font is mandatory for rendering the code on that page, because the indentation is done with spaces, which become too narrow when rendered with a proportional font.
It's typical Stroustrup style to write code in a variable-width font. I'd wager they didn't have an option to use a fixed-width font in their code blocks in their CMS, and normal paragraphs are trimmed automatically.
I didn't see the author at first. However, immediately after seeing the code I checked for the author, because I was sure it was Stroustrup.
While you are right about the books of Stroustrup, here your inference is wrong, because Stroustrup cannot have anything to do with the CSS style sheets of the ACM Web site, which, in conjunction with the browser settings, determine the font used for rendering the text.
On my browser, all the code is properly indented, most likely because my browsers are configured correctly, i.e. with a monospace font set as the default for "monospace".
Whoever does not see indentation, most likely has not set the right default font in their browser.
The code blocks aren't in a preformatted tag like <pre>, so the whitespace gets collapsed. It seems the intention was to turn spaces into non-breaking spaces, but however it was done got messed up, because lots of spaces didn't get converted.
Have you verified that your browsers have correct settings for their default fonts, i.e. a real monospace font as the default for "monospace"?
Here the code is displayed with my default monospace font, as configured in browsers, so the formatting is fine.
There are only 2 possible reasons for the bad formatting: a bug in the CSS of the ACM site, which selects a bad font on certain computers or a bad configuration of your own browsers, where you have not selected appropriate default fonts.
This doesn't seem to be a code blog, but a general science communication blog. The editors may not be familiar with code syntax, and may simply be using a content management system and copy-pasting from source material.
> ACM, the Association for Computing Machinery, is the world's largest educational and scientific society, uniting computing educators, researchers and professionals to inspire dialogue, share resources and address the field's challenges.
Most of programming language conferences are organized by ACM.
Communications of the ACM has had unbelievably bad typography for code samples for decades (predating the web). No idea how this is allowed to continue.
> Between Rust and Zig, the problems of C++ have been solved much more elegantly
Those languages occupy different points in the design space than C++. And thus, in the general sense, neither of them, nor their combination, is "C++ with the problems solved". I know very little Rust and even less Zig. But I do know that there are various complaints about Rust, which are different than the kinds of complaints you get about C++ - not because Rust is bad, just because it's different in significant ways.
> It is so objectively horrible in every capacity
Oh, come now. You do protest too much... yes, it has a lot of warts. And it keeps them, since almost nothing is ever removed from the language. And still, it is not difficult to write very nice, readable, efficient, and safe C++ code.
> it is not difficult to write very nice, readable, efficient, and safe C++ code
That's a fine case of Stockholm Syndrome you've got there. In reality, it is hard. The language fights you every step of the way. That's because the point in the design space C++ occupies is a uniquely stupid one. It wants to have its cake and eat it too. The pipe dream behind C++ is that you can write code in an expressive manner and magically have it also be performant. If you want fast code, you have to be explicit about many things. C++ ties itself in knots trying to be implicitly explicit about those things, and the result is just plain harder to reason about. If you want code that's safe and fast, you go with Rust. If you want code that's easy and fast, you go with Zig. If you want code that's easy and safe, you go with some GCed lang. Then if you want code that's easy, safe, and fast, you pick C++ and get code which might be fast. You cannot have all three things. Many other languages find an appropriate balance of these three traits to be worthwhile, but C++ does not. It's been 40 years since the birth of C++ and they are only just now trying to figure out how to make it compile well.
Even Cobol code hasn't been ported in its entirety, and the whole codebase at its peak was probably orders of magnitude smaller than C++'s. It's also far easier to port Cobol - with it being used mostly for data processing and business logic - than C++, which was used for all manner of strange, esoteric, and complicated pieces of software requiring thousands to millions of man-hours to port (for example, most of Gecko and Blink).
C++ will be here forever, at least in some manner.
We can all at least appreciate that COBOL is something you try to get rid of where possible. If we took the same attitude to C++ as we do COBOL, then I think the issue would be much less severe.
That in and of itself is a failure. The decision to continually bolt more stuff onto this mess instead of developing a viable alternative is honestly painful. When you look at something like Zig, it gets you much of what C++ offers and in a way that doesn't cause you pain. Is the argument that Zig simply wasn't possible 30 years ago? I doubt it. As best I can tell, Zig comes as the result of a relatively experienced C programmer making the observation that you could improve C in a lot of easy ways. Were it not for the existing mess, he might have called his language C++. Instead a Scandinavian nut-job decided to heap some mess on top of C and everyone just went along with it.
Honestly, I am a happier and more productive developer since I left C++ behind for other languages. And it's not just the language, but the lack of an ecosystem too. Things like the build system, managing dependencies, etc., are all such a pain compared to modern languages with good ecosystems (Rust, Flutter, Kotlin, etc.)
Rust doesn't "depend" on LLVM in the sense you seem to imagine, you can instead lower Rust's MIR into Cranelift (which is written in Rust) if you want for example.
LLVM's optimiser is more powerful, and it handles unwinding, so today most people want LLVM but actually I think LLVM's future might involve more Rust.
The C++ Core Guidelines have existed for nearly 10 years now. Despite this, not a single implementation in any of the three major compilers exists that can enforce them. Profiles, which Bjarne et al have had years to work on, will not provide memory safety[0]. The C++ committee, including Bjarne Stroustrup, needs to accept that the language cannot be improved without breaking changes. However, it's already too late. Even if somehow they manage to make changes to the language that enforce memory safety, it will take a decade before the efforts propagate at the compiler level (a case in point is modules being standardised in 2020 but still not ready for use in production in any of the three major compilers).
[0] https://www.circle-lang.org/draft-profiles.html
> The C++ committee, including Bjarne Stroustrup, needs to accept that the language cannot be improved without breaking changes.
The example in the article starts with "Wow, we have unordered maps now!" Just adding things modern languages have is nice, but doesn't fix the big problems. The basic problem is that you can't throw anything out. The mix of old and new stuff leads to obscure bugs. The new abstractions tend to leak raw pointers, so that old stuff can be called.
C++ is almost unique in having hiding ("abstraction") without safety. That's the big problem.
I find the unordered_map example rather amusing. C++’s unordered_map is, somewhat infamously, specified in an unwise way. One basically cannot implement it with a modern, high performance hash table for at least two reasons:
1. unordered_map requires some bizarre and not widely useful abilities that mostly preclude hash tables with probing:
https://stackoverflow.com/questions/21518704/how-does-c-stl-...
2. unordered_map has fairly strict iteration and pointer invalidation rules that are largely incompatible with the implementations that turn out to be the fastest. See:
> References and pointers to either key or data stored in the container are only invalidated by erasing that element, even when the corresponding iterator is invalidated.
https://en.cppreference.com/w/cpp/container/unordered_map
And, of course, this is C++, where (despite the best efforts of the “profiles” people), the only way to deal with lifetimes of things in containers is to write the rules in the standards and hope people notice. Rust, in contrast, encodes the rules in the type signatures of the methods, and misuse is deterministically caught by the compiler.
You absolutely can throw things out, and they have! Dynamic exception specifications and the old meaning of `auto` are two I know of. There were also some minor breaking changes to comparison operators like operator== in C++20.
They absolutely could say "in C++26 vector::operator[] will be checked" and add an `.at_unsafe()` method.
They won't though because the whole standards committee still thinks that This Is Fine. In fact the number of "just get good" people in the committee has probably increased - everyone with any brains has run away to Rust (and maybe Zig).
While I sort of agree with the complaint, personally I think C++'s best spot in this ecosystem is still its great backward compatibility plus marginal safety improvements.
I would never expect our 10M+ LOC performance-sensitive C++ code base to be formally memory safe, but so far only C++ has allowed us to maintain it for 15 years with partial refactors and minimal upgrade pain.
I think at least Go and Java have as good backwards compatibility as C++.
Most languages take backwards compatibility very seriously. It was quite a surprise to me when Python broke so much code with the 3.12 release. I think it's the exception.
The language is improving (?), although IME it went beside the point - I'm finding the new features less useful for everyday code. I'm perfectly happy with C++17/20 for 99% of the code I write. And keeping backwards compatibility is a feature for most real-world software, not a bug, ok? Breaking it would actually make me move away from the language.
CLion, clang-tidy, and Visual C++ analysers do have partial support for the Core Guidelines, and they can be enforced.
Granted, it is only those that can be machine verified.
Office is using C++20 modules in production, Vulkan also has a modules version.
>Despite this, not a single implementation in any of the three major compilers exists that can enforce them
Because no one wants it enough to implement it.
I feel like a few decades ago, standards intended to standardize best practices and popular features from compilers in the field. Dreaming up standards that nobody has implemented, like what seems to happen these days, just seems crazy to me.
Or it's better to have other languages besides from C++ for that.
I hoped Sean would open source Circle. It seemed promising, but it's been years and I don't see any tangible progress. Maybe I am not looking hard enough?
He's looking to sell Circle. That must be the reason he's not open sourcing it.
I think Carbon is more promising to be honest. They are aiming for something production-ready in 2027.
Profiles will not provide perfect memory safety, but they go a long way to making things better. I have 10 million lines of C++. A breaking change (doesn't matter if you call it new C++ or Rust) would cost over a billion dollars - that is not happening. Which is to say I cannot use your perfect solution, I have to deal with what I have today and if profiles can make my code better without costing a full rewrite then I want them.
Changes which redefine the language to have less UB will help you if you want safety/correctness and are willing to do some work to bring that code to the newer language. An example would be the initialization rules in (draft) C++26. Historically C++ was OK with you just forgetting to initialize a primitive before using it; that's Undefined Behaviour in the language, so if that happens, too bad, all bets are off. In C++26 that will be Erroneous Behaviour: there's some value in the variable. It's not always guaranteed to be valid (which can be a problem for, say, booleans or pointers), but just looking at the value is no longer UB, and if you forgot to initialize, say, an int or a char, that's fine, since any possible bit sequence is valid. What you did was an error, but it's not necessarily fatal.
If you're not willing to do any work then you're just stuck, nobody can help you, magic "profiles" don't help either.
But, if you're willing to do work, why stop at profiles? Now we're talking about a price and I don't believe that somehow the minimum assignable budget is > $1Bn
Enforcing style guidelines seems like an issue that should be tackled by non-compiler tools. It is hard enough to make a compiler without rolling in a ton of subjective standards (yes, the core guidelines are subjective!). There are lots of other tools that have partial support for detecting and even fixing code according to various guidelines.
It's part of a compiler ecosystem. ie. The front end is shared.
See clang-tidy and clang analyzer for example.
ps: That's what I like most about the core guidelines, they are trying very hard to stick to guidelines (not rules) that pretty much uncontroversially make things safer _and_ can be checked automatically.
They're explicitly walking away from bikeshed painting like naming conventions and formatting.
What are you talking about, the language gets better with each release. Using C++ today is a hell of a lot better than even 10 years ago. It seems like people hold "memory safety" as the most important thing a language can have. I completely disagree. It turns out you can build awesome and useful software without memory safety. And it's not clear if memory safety is the largest source of problems building software today.
In my opinion, having good design and architecture are much higher on my list than memory safety. Being able to express my mental model as directly as possible is more important to me.
The top memory safety bug in shipped C and C++ code is out-of-bounds array indexing.
Does it matter whether it is a common class of bugs or a not so common one? The point is, this is a class of bugs you do not have when picking a different language.
C++ claimed for decades to be about eliminating a class of resource management bugs you can have in C code, that was its biggest selling point. So why is eliminating another class of bugs a nice to have now?
C++ has been losing projects to memory-safe languages for decades now - just think of all the business software in Java, scientific SW in Python, ... . The industry has been moving towards memory-safe software for decades. Rust is just the newest option - and a very compelling one, as it has no runtime environment or garbage collector, just like C++.
> And it's not clear if memory safety is the largest source of problems building software today.
The Chromium team found that
> Around 70% of our high severity security bugs are memory unsafety problems (that is, mistakes with C/C++ pointers). Half of those are use-after-free bugs.
Chromium Security: Memory Safety (https://www.chromium.org/Home/chromium-security/memory-safet...)
Microsoft found that
> ~70% of the vulnerabilities Microsoft assigns a CVE each year continue to be memory safety issues
A proactive approach to more secure code (https://msrc.microsoft.com/blog/2019/07/a-proactive-approach...)
It’s possible you hadn’t come across these studies before. But if you have, and you didn’t find them convincing, what did they lack?
- Were the codebases not old enough? They’re anywhere between 15 and 30 years old, so probably not.
- Did the codebases not have enough users? I think both have billions of active users, so I don’t think so.
- Was it a “skill issue”? Are the developers at Google and Microsoft just not that good? Maybe they didn’t consider good design and architecture at any point while writing software over the last couple of decades. Possible!
There’s just one problem with the “skill issue” theory though. Android, presumably staffed with the same calibre of engineers as Chrome, also written in C++ also found that 76% of vulnerabilities were related to memory safety. We’ve got consistency, if nothing else. And then, in recent years, something remarkable happened.
> the percentage of memory safety vulnerabilities in Android dropped from 76% to 24% over 6 years as development shifted to memory safe languages.
Eliminating Memory Safety Vulnerabilities at the Source (https://security.googleblog.com/2024/09/eliminating-memory-s...)
They stopped writing new C++ code and the memory safety vulnerabilities dropped dramatically. Billions of Android users are already benefiting from much more secure devices, today!
You originally said
> And it's not clear if memory safety is the largest source of problems building software today.
It is possible to defend this by saying “what matters in software is product market fit” or something similar. That would be technically correct, while side stepping the issue.
Instead I’ll ask you, do you still think it is possible to write secure software in C++, but just trying a little harder. Through “good design and architecture”, as your previous comment implied.
> Profiles, which Bjarne et al have had years to work on, will not provide memory safety
While I agree with this in a general sense, I think it ought to be quite possible to come up with a "profile" spec that's simply meant to enforce the language restriction/subsetting part of Safe C++ - meaning only the essentials of the safety checking mechanism, including the use of the borrow checker. Of course, this would not be very useful on its own without the language and library extensions that the broader Safe C++ proposal is also concerned with. It's not clear as of yet if these can be listed as part of the same "profile" specifications or would require separate proposals of their own. But this may well be a viable approach.
I have seen 3 different safe C++ proposals (most are not papers yet, but they are serious efforts to show what safe C++ could look like). However, there is a tradeoff here: the full borrow-checker-in-C++ approach is incompatible with all current C++, and so adopting it is about as difficult as rewriting all your code in some other language. The other proposals are not as safe, but offer different levels of "you can use this with your existing code". None are ready to be added to C++, but they all provide something better, and I'm hopeful that something gets into C++ (though probably not before C++32).
Last weekend, I took an old cross-platform app written by somebody else between 1994-2006 in C++ and faffed around with it until it compiled and ran on my modern Mac running 14.x. I upped the CMAKE_CXX_STANDARD to 20, used Clang, and all was good. Actually, the biggest challenge was the shoddy code in the first place, which had nothing to do with its age. After I had it running, Sonar gave me 7,763 issues to fix.
The moral of the story? Backwards compatibility means never leaving your baggage behind.
> [M]any developers use C++ as if it was still the previous millennium. [...] C++ now offers modules that deliver proper modularity.
C++ may offer modules (in fact, it's been offering them since 2020), however, when it comes to their implementation in mainstream C++ compilers, only now things are becoming sort of usable with modules still being a challenge in more complex projects due to compiler bugs in the corner cases.
I think we need to be honest and upfront about this. I've talked to quite a few people who have tried to use modules but were unpleasantly surprised by how rough the experience was.
Ya that is rather disingenuous, modules aren't ready, and likely won't be for another 5 years.
Also they are difficult to switch to, so I would expect very few established projects to bother.
Modules were known to be difficult to implement and difficult to migrate to. If modules are mainstream in 5 years, it would be an excellent result.
Office is one of such established projects.
I was an extreme C++ bigot back in the late 90's, early 2000's. My license plate back then was CPPHACKR[1]. But industry trends and other things took my career in the direction of favoring Java, and I've spent most of the last 20+ years thinking of myself as mainly a "Java guy". But I keep buying new C++ books and I always install the C++ tooling on any new box I build. I tell myself that "one day" I'm going to invest the time to bone up on all the new goodies in C++ since I last touched it, and have another go.
When the heck that day will actually arrive, FSM only knows. The will is sort-of there, but there are just SO many other things competing for my time and attention. :-(
[1]: funny side story about that. For anybody too young to remember just how hot the job market was back then... one day I was sitting stopped at a traffic light in Durham (NC). I'm just minding my own business, waiting for the light to change, when I catch a glimpse out of my side mirror, of somebody on foot, running towards my car. The guy gets right up to my car, and I think I had my window down already anyway. Anyway, the guy gets up to me, panting and out of breath from the run and he's like "Hey, I noticed your license plate and was wondering if you were looking for a new job." About then the light turned green in my direction, and I'm sitting there for a second in just stunned disbelief. This guy got out of his car, ran a few car lengths, to approach a stranger in traffic, to try to recruit him. I wasn't going to sit there and have a conversation with horns honking all around me, so I just yelled "sorry man" and drove off. One of the weirder experiences of my life.
The programmers on the sound team at the video game company I worked for as an intern in 1998 would always stash a couple of extra void pointers in their classes just in case they needed to add something in later. Programmers should never lose sight of pragmatism. Seeking perfection doesn’t help you ship on time. And often, time to completion matters far more than robustness.
Vulkan does that with `void* pNext` in a lot of its structs so that they can be extended in the future.
Funny, sounds like the Simpsons gag from the same time period: “what’s wrong with this country? Can’t a man walk down the street without being offered a job?”
https://youtube.com/watch?v=yDbvVFffWV4
Interesting. I was SO into the Simpsons at one time, but somehow I'd never seen that episode (as best as I can remember anyway). Now I feel the urge to go back and rewatch every episode of the Simpsons from the beginning. It would be fun, but man, what a time sink. I started the same thing with South Park a while back and stalled out somewhere around Season 5. I'd like to get back to it, but time... time is always against us.
AIEXPERT here I come!
Awesome! My current tag is /DEV/AGI :-)
Note to the above: I am wrong. My license plate back then was C++HACKR, with the actual "+" signs. NC license plates do allow that, although while the +'s are on the tag, they don't show up on your registration card or in the DMV computer system.
I mixed up the tag and my old domain name, which was "cpphacker.co.uk" (and later, just cpphacker.com/org).
what is the job market like now for C++ programmers? I'm looking for a job.
Here's how Bjarne describes that first C++ program:
"a simple program that writes every unique line from input to output"
Bjarne does thank more than half a dozen people, including other WG21 members, for reviewing this paper, maybe none of them read this program?
More likely, like Bjarne they didn't notice that this program has Undefined Behaviour for some inputs and that in the real world it doesn't quite do what's advertised.
The collect_lines example won't even compile, it's not valid C++, but there's undefined behavior in one of the examples? I'm very surprised and would like to know what it is, that would be truly shocking.
Really? If you've worked with C++ it shouldn't be shocking.
The first example uses the int type. This is a signed integer type and in practice today it will usually be the 32-bit signed integer Rust calls i32 because that's cheap on almost any hardware you'd actually use for general purpose software.
In C++ this type has Undefined Behaviour if allowed to overflow. For the 32-bit signed integer that will happen once we see 2^31 identical lines.
In practice the observed behaviour will probably be that it treats 2^32 identical lines as equivalent to zero prior occurrences and I've verified that behaviour in a toy system.
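The kind of loop under discussion can be sketched like this (a hypothetical reconstruction of the idea, not Bjarne's exact code):

```cpp
#include <map>
#include <string>
#include <vector>

// Hypothetical sketch of the line-counting idea being discussed.
// The int counter is the trap: once any single line has occurred
// INT_MAX times, the next increment overflows a signed int, which
// is undefined behaviour in C++.
std::map<std::string, int> count_lines(const std::vector<std::string>& lines) {
    std::map<std::string, int> counts;
    for (const auto& line : lines)
        ++counts[line]; // UB after 2^31 - 1 occurrences of the same line
    return counts;
}
```

In practice most compilers wrap to a negative value, or at higher optimization levels exploit the assumption that overflow never happens; using long long or an unsigned type would at least give defined behaviour.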
2 replies →
"Undefined behavior" is not a bug. It's something that isn't specified by an ISO standard.
Rust code is 100 percent undefined behavior because Rust doesn't have an ISO standard. So, theoretically some alternative Rust compiler implementation could blow up your computer or steal your bitcoins. There's no ISO standard to forbid them from doing so.
(You see where I'm going with this? Standards are good, but they're a legal construct, not an algorithm.)
> "Undefined behavior" is not a bug. It's something that isn't specified by an ISO standard.
An ISO standard? According to who, ISO?
1 reply →
I haven't read much from Bjarne but this is refreshingly self-aware and paints a hopeful path to standardize around "the good parts" of C++.
As a C++ newbie I just don't understand the recommended path I'm supposed to follow, though. It seems to be a mix of "a book of guidelines" and "a package that shows you how you should be using those guidelines via implementation of their principles".
After some digging it looks like the guidebook is the "C++ Core Guidelines":
https://isocpp.github.io/CppCoreGuidelines/CppCoreGuidelines
And I'm supposed to read that and then:
> use parts of the standard library and add a tiny library to make use of the guidelines convenient and efficient (the Guidelines Support Library, GSL).
Which seems to be this (at least Microsoft's implementation):
https://github.com/microsoft/GSL
And I'm left wondering, is this just how C++ is? Can't the language provide tooling for me to better adhere to its guidelines, bake in "blessed" features and deprecate what Bjarne calls, "the use of low-level, inefficient, and error-prone features"? I feel like these are tooling-level issues that compilers and linters and updated language versions could do more to solve.
The problem with 45 years of C++ is that different eras used different features. If you have 3 million lines of C++ code written in the 1990's that still compiles and works today, should you use new 202x C++ features?
I still feel the sting of being bit by C++ features from the 1990s that turned out to be footguns.
Honestly, I kinda like the idea of "wrapper" languages. Typescript/Kotlin/Carbon.
>footguns
I was expecting that someone would have posted this by now:
How to Shoot Yourself In the Foot:
https://www-users.york.ac.uk/~ss44/joke/foot.htm
I'm curious about that now, too. Is there the equivalent of Python's ruff or Rust's cargo clippy that can call out code that is legal and well-formed but could be better expressed another way?
Clang-tidy can rewrite some old code into a better style. However, there is a lot of working code from the 1990s that cannot be automatically rewritten to a new style. That is what makes adding tooling hard - somehow you need to figure out which code should follow the new style and which is old style where updating to modern C++ would be too expensive.
> As a C++ newbie I just don't understand the recommended path I'm supposed to follow, though
Did you even read the article? He has given the recommended path in the article itself.
Two books describe C++ following these guidelines except when illustrating errors: “A tour of C++” for experienced programmers and “Programming: Principles and Practice using C++” for novices. Two more books explore aspects of the C++ Core Guidelines
J. Davidson and K. Gregory Beautiful C++: 30 Core Guidelines for Writing Clean, Safe, and Fast Code. 2021. ISBN 978-0137647842
R. Grimm: C++ Core Guidelines Explained. Addison-Wesley. 2022. ISBN 978-0136875673.
> And I'm left wondering, is this just how C++ is? Can't the language provide tooling for me to better adhere to its guidelines
Well, first, the language can't provide tooling: C++ is defined formally, not through tools; and tools are not part of the standard. This is unlike, say, Rust, where IIANM - so far, Rust has been what the Rust compiler accepts.
But it's not just that. C++ design principles/goals include:
* multi-paradigmatism;
* good backwards compatibility;
* "don't pay for what you don't use"
and all of these in combination prevent baking in almost anything: It will either break existing code; or force you to program a certain way, while legitimate alternatives exist; or have some overhead, which you may not want to pay necessarily.
And yet - there are attempts to "square the circle". An example is Herb Sutter's initiative, cppfront, whose approach is to take in an arguably nicer/better/easier/safer syntax, and transpile it into C++ :
https://github.com/hsutter/cppfront/
How does enforcing profiles per-translation unit make any sense? Some of these guarantees can only be enforced if assumptions are made about data/references coming from other translation units.
This is the one major stumbling block for profiles right now that people are trying to fix.
C++ code involves numerous templates, and the definition of those templates is almost always in a header file that gets included into a translation unit. If a safety profile is enabled in one translation unit that includes a template, but is omitted from another translation unit that includes that same template... well what exactly gets compiled?
The rule in C++ is that it's okay to have multiple definitions of a declaration if each definition is identical. But if safety profiles exist, this can result in two identical definitions having different semantics.
There is currently no resolution to this issue.
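To make the hazard concrete, here is a sketch in which a preprocessor macro stands in for a hypothetical safety profile (profiles are not a shipped feature, so the macro is purely illustrative):

```cpp
#include <cstddef>
#include <vector>

// The same template text means different things depending on a
// per-translation-unit setting. If tu1.cpp defines PROFILE_BOUNDS and
// tu2.cpp does not, both instantiate get<int>, the linker keeps one
// copy arbitrarily, and identical-looking source has two semantics.
#ifdef PROFILE_BOUNDS
template <class T>
T get(const std::vector<T>& v, std::size_t i) { return v.at(i); } // checked: throws on a bad index
#else
template <class T>
T get(const std::vector<T>& v, std::size_t i) { return v[i]; }    // unchecked: UB on a bad index
#endif
```

This is exactly an ODR violation in spirit: the definitions are textually identical but semantically different, and no diagnostic is required.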
I guess modules are supposed to be the magic solution for that, Bjarne has shown them in this article, even using import std.
It's a bit optimistic because modules are still not really a viable option in my eyes: you need proper support from the build systems, and notably cmake only has limited support for them right now.
2 replies →
I definitely wouldn't have used "<<" in an "ad" for C++ :)
(I must say that I was happy to see/read that article, though)
Generalizing Overloading for C++2000
Bjarne Stroustrup, AT&T Labs, Florham Park, NJ, USA
Abstract
This paper outlines the proposal for generalizing the overloading rules for Standard C++ that is expected to become part of the next revision of the standard. The focus is on general ideas rather than technical details (which can be found in AT&T Labs Technical Report no. 42, April 1, 1998).
https://www.stroustrup.com/whitespace98.pdf
Modules sound cool for compile time, but do they prevent duplicative template instantiations? Because that's the real performance killer in my experience.
Modules don't treat templates any differently than non-modules so no, they don't prevent duplicate template instantiations.
The best way that I know of to do this is the """ "Manual" export templates """ idea discussed here: http://warp.povusers.org/programming/export_templates.html
(It's a great post in general. N.B. that it's also quite old and export templates have been removed from the standard for quite some time after compiler writers refused to implement them.)
TL;DR: Declare your templates in a header, implement them in a source file, and explicitly instantiate them inside that same source file for every type that you want to be able to use them with. You lose expressiveness but gain compilation speed because the template is guaranteed to be compiled exactly once for each instantiation.
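The scheme looks roughly like this (shown as one file for brevity; in practice the first half lives in a header and the second in a .cpp file — the names are illustrative):

```cpp
#include <string>
#include <vector>

// --- stack.h: declarations only; no template bodies leak to users ---
template <class T>
class Stack {
public:
    void push(T v);
    T pop();
private:
    std::vector<T> data_;
};

// --- stack.cpp: definitions plus explicit instantiations ---
template <class T> void Stack<T>::push(T v) { data_.push_back(v); }
template <class T> T Stack<T>::pop() {
    T v = data_.back();
    data_.pop_back();
    return v;
}

// Only these instantiations exist; Stack<double> would fail to link.
template class Stack<int>;
template class Stack<std::string>;
```

Each instantiation is compiled exactly once, in one translation unit, instead of once per including file.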
You can declare a template in a header file, and only provide its definition (and hence expansion) in a source file. See for example Firefox doing this for its string implementation here: https://searchfox.org/mozilla-central/source/xpcom/string/ns... (extern template declarations are at the end of the header file, and the actual template definitions are in https://searchfox.org/mozilla-central/source/xpcom/string/ns...).
Which is to say, "extern template" is a thing that exists, that works, and can be used to do what you want to do in many cases.
The "export template" feature was removed from the language because only one implementer (EDG) managed to implement them, and in the process discovered that a) this one feature was responsible for all of their schedule misses, b) the feature was far too annoying to actually implement, and c) when actually implemented, it didn't actually solve any of the problems. In short, when they were asked for advice on implementing export, all the engineers unanimously replied: "don't". (See https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2003/n14... for more details).
> You lose expressiveness
Or, more correctly, the following happens:
1. You gain the ability to use the compilation unit's anonymous namespace instead of a detail namespace, so there is better encapsulation of implementation details. The post author stresses this as the actual benefit of export templates, rather than compile times.
2. You lose the ability to instantiate the template for arbitrary types, so this is probably a no-go for libraries.
3. Your template is guaranteed to be compiled exactly once for each explicit instantiation. (Which was never actually guaranteed for real export templates).
Bjarne Stroustrup (the creator of C++) is the best language designer. Many language designers will create a language, work on it for a couple years, and then go and make another language. Stroustrup on the other hand has been methodically working on C++ and each year the language becomes better.
Prof. Bjarne's commitment to C++ is beyond comparison!
So now even Hacker News is being polluted with AI.
Seeing badly formatted code snippets without color highlighting in an article called "21st Century C++" somehow resonates with my opinion of how hard C++ still is to write and to read after working with other languages.
This honestly looks like C++ being jury-rigged with features to the degree that it doesn't even look like what C++ is: a C-derived low-level language.
Everything is unobvious magic. Sure, you stick to a very restricted set of API usages and patterns, and all the magic allocation/deallocation happens out of sight.
But does that make it easier to debug? Better to code it?
This simply looks like C++ trying not to look like C++: like a completely different language, but one that was not built from the ground up to be that language, rather a bunch of shell games to make it look like another language as an illusion.
Yeah, I didn't have a problem keeping my shit straight in C++ in the '90s. The kitchen-sink approach since then hasn't been worth keeping up with. The fact that we're still dealing with header files means that the language stewards' priorities are not in line with practical concerns.
I want to love C++.
Over my career I’ve written hundreds of thousands of lines of it.
But keeping up with it is time consuming and more and more I find myself reaching for other languages.
Same. Luckily my team switched to Rust almost 100%. So I don't need to learn the godforsaken coroutine syntax and the pitfalls it lays when you use char wrong with it, or in which subset of calls std::ranges does something stupid and causes a horrible performance regression.
Bjarne has been criticized for accepting too many (questionable) things into the language even at the dawn of C++ and committee kept that behavior. Moreover they have this pattern that given the options they always choose the easiest to misuse and most unsafe implementation of anything that goes into standard. std::optional is a mess, so is curly bracket initialization, auto is like choosing between stepping on Legos or putting your arm into a spider-full bag.
The committee is the worst combination of "move fast and break things" and "not on my watch". C++98 was an okay language, C++11 was alright. Anything after C++14 is a minesweeper game with increasing difficulty.
> Bjarne has been criticized for accepting too many (questionable) things
He even writes that way in his own article... The quote from the last section of the introduction was hilarious, and actually made me laugh a little bit for almost those exact reasons.
BS, Comm ACM > "I would have preferred to use the logically minimal vector{m} but the standards committee decided that requiring from_range would be a help to many."
I went from being curious about C++, to hating C++, to wanting to love it, to being fine with it, to using it for work for 5+ years, to abandoning it and finally to want to use it for game development, maybe. It's the circle of life.
The masochist in me keeps coming back to c++. My analogy of it to other languages is that it’s like painting a house with a fine brush versus painting the Mona Lisa with a roller. Right tool for the job I suppose.
It's my job and career(well, C and C++) but I often try to avoid C++. Whenever I use it(usually writing tests) I go through this cycle of re-learning some cool tricks, trying to apply them, realizing they won't do what I want or the syntax to do it is awkward and more work than the dumb way, and I end up hating C++ and feeling burned yet again.
1 reply →
Same here.
>>contemporary C++30 can express the ideas embodied in such old-style code far simpler
IMO, newer C++ versions are becoming more complex (too many ways to do the same thing), less readable (prefer explicit types over 'auto', unless unavoidable) and harder to analyse performance and memory implications (hard to even track down what is happening under the hood).
I wish the C++ language and standard library would have been left alone, and efforts went into another language, say improving Rust instead.
I have used auto liberally for 8+ years; maybe I'm accustomed to reading code containing it, but I really can't think of it being a problem. I feel like auto increases readability; the only thing I dislike is that they didn't make it a reference by default.
Where do you see difficult to track down performance/memory implications? Lambda comes to mind and maybe coroutines (yet to use them but guessing there may be some memory allocations under the hood). I like that I can breakpoint my C++ code and look at the disassembly if I am concerned that the compiler did something other than expected.
12 replies →
You don't 'have' to keep up with the language and I don't know that many people try to keep up with every single new feature - but it is worse to be one of those programmers for whom C++ stopped at C++03 and fight any feature introduced since then (the same people generally have strong opinions about templates too).
There are certainly better tools for many jobs and it is important to have languages to reach for depending on the task at hand. I don't know that anything is better than C++ for performance sensitive code.
I’ve been using c++ since the late 90’s but am not stuck there.
I was using c++11 when it was still called c++0x (and even before that when many of the features were developing in boost).
I took a break for a few years over c++14, but caught up again for c++17 and parts of c++20...
Which puts me 5-6 years behind the current state of things and there’s even more new features (and complexity) on the horizon.
I’m supportive of efforts to improve and modernize c++, but it feels like change didn’t happen at all for far too long and now change is happening too fast.
The ‘design by committee’ with everyone wanting their pet feature plus the kitchen sink thrown in doesn’t help reduce complexity.
Neither does implementing half-baked features from other ‘currently trendy’ languages.
It’s an enormous amount of complexity - and maybe for most code there’s not that much extra actual complexity involved but it feels overwhelming.
4 replies →
I've been writing C++ since 1996-ish.
Less and less, for sure.
Nothing the past few years.
They killed it.
If you only read HN, you would think C++ died years ago.
As someone who worked in HFT, C++ is very much alive, and new projects continue to be created in it simply because of the sheer number of experts in it. (For better or for worse)
8 replies →
Took me a moment to realize that "killed it" was being used in the negative sense.
Almost a haiku :)
Since C++14 or C++17 I feel no need to keep up with it. That's cool if they add a bunch more stuff, but what I'm using works great now. I only feel some "peer pressure" to signal to other people that I know C++20, but as of now, I've put nothing into it. I think it's best to lag behind a few years (for this language, specifically).
The compilers tend to lag a few years behind the language spec too, especially if you have to support platforms where the toolchains lag latest gcc/clang (Apple / Android / game consoles).
Respectfully, you might want to add at least a few C++20 features into your daily usage?
consteval/constinit guarantees to do what you usually want constexpr to do. Have personally found it great for making lookup tables and reducing the numbers of constants in code (and c++23 expands what can be done in consteval).
Designated initializer is a game-changer for filling structures. No more accidentally populating the wrong value into a structure initializer or writing individual assignments for each value you want to initialize.
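For example (C++20; the struct and field names here are made up for illustration):

```cpp
#include <string>

struct ServerConfig {
    std::string host = "localhost";
    int port = 80;
    bool verbose = false;
};

// Designated initializers: name exactly the fields you set, in
// declaration order; everything else keeps its default. No more
// silently dropping a value into the wrong positional slot.
ServerConfig cfg{ .host = "example.org", .port = 8080 };
```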
You don't have to "keep up with it", if by this you mean what I think you mean.
You don't have to use features. Instead, when you have a (language) problem to solve or something you'd like to have, you look into the features of the language.
Knowing they exist beforehand is better but is the hard part, because "deep" C++ is so hermetic that it is difficult to understand a feature when you have no idea which problem it is trying to solve.
Wrong. Most programmers spend tremendous amounts of time reading and maintaining someone else's code. You absolutely have to keep up with it.
1 reply →
I think it's good enough for side projects. More powerful than C, so I don't need to hand-roll strings and some algorithms, but I tend to keep to a minimal set of features because I'm such an amateur.
I mean, right from Bjarne's mouth:
> I used the from_range argument to tell the compiler and a human reader that a range is used, rather than other possible ways of initializing the vector. I would have preferred to use the logically minimal vector{m} but the standards committee decided that requiring from_range would be a help to many.
Oh so I have to remember from_range and can't do the obvious thing? Great. One more thing to distract me from solving the actual problem I'm working on.
What exactly is wrong with the C++ community that blinds them to this sort of thing? I should be able to write performant, low-level code leveraging batteries-included algorithms effortlessly. This is 2025, people.
On the other hand, the decline of robust and high-quality software started with the introduction of very immature languages such as those of the JavaScript and TypeScript ecosystems.
It's really any language other than those two.
For someone who wants to get into systems programming professionally, is C++ going to be a hard requirement or can one mostly get away with C/Rust?
The only places where C++ failed to take C's crown have been UNIX clones (naturally, due to the symbiotic relationship) and embedded, where even modern C couldn't replace C89 + compiler extensions from the chip vendor; many shops are stuck in the past, even though most toolchains are already up to C++20 and C17 nowadays.
Rust is still too new for many folks to adopt, it depends on how much you would be willing to help grow the ecosystem, versus doing the actual application.
It will eventually get there, but also have the same issues as C++, regarding taking over C in UNIX/POSIX and embedded, and C++ has the advantage of having been a kind of Typescript for C, in terms of adoption effort, being a UNIX language from AT&T, designed to fit into C ecosystem.
Depends exactly what you want to do. C is not very popular at all in professional settings - C++ is far more popular. I would say if you know Rust then C++ isn't very hard though. You'll write better C++ code too because you'll naturally keep the good habits that the Rust compiler enforces and the C++ compiler doesn't.
That's why C++ is still around today, it was built on some solid principles. Bjarne is such a good language designer because he never abandoned it. Lesser designers make a language and start another in 5 or 10 years. Bjarne saw the value in what he created and had a sense of responsibility to those using it to keep making it better and take their projects seriously.
Whenever I have an idea and I start a project, I start with C++ because I know if the idea works out, the project can grow and work 10 years later.
I always hear about "import std" but still don't see much support for it. Is it still experimental?
Let us know when C++ gets rid of the mess that is header files.
Until then... YAWN.
The article does mention modules.
But it doesn’t mention that you can’t actually use modules without passing a bunch of random compiler flags and hoping that they work.
Loving that he goes 'int main() { ... }' and never returns an int from it. Even better: without extra error/warning flags the compiler will just eat this and generate some code from it, returning... yeah. Your guess is probably better than mine.
If the uber-bean counter, herald of the language of bean counters, demonstrates an unwillingness to count beans, maybe the beans are better counted in another way.
Well, actually... the "main" function is handled specially in the standard. It is the only non-void function from which you don't need to explicitly return - if control flows off the end, it is treated as if you had returned 0. (You will most definitely get a compiler diagnostic if you try this with any other function.)
You might say this is very silly, and you'd be right. But as quirks of C++ go it is one of the most benign ones. As usual it is there for backwards compatibility.
And, for what it's worth, the uber-bean counter didn't miss a bean here...
To me, it's kinda funny that he starts with
> using namespace std
something you get told not to do easily! :D
C++ should be known for the amount of collective brain cycles wasted on arguing what subset of C++ is the right one to use.
Professionals know what tool to use for a job. Does it take time to become good? Of course, like anything.
Not a question of difficulty or skill. I am saying professionals can't agree what subset to use!
6 replies →
is the job market for C++ developers still good?
Depends. For certain fields the pay is great and there’s a dearth of candidates.
For other fields there is also a dearth of candidates but the pay falls short and you’ll be leaving tens of thousands of dollars on the table compared to what you could get with other languages.
Tangential question: is there a Rust equivalent for the book “The Design and Evolution of C++”?
There is not.
I have often thought about writing something vaguely similar. We'll see if I ever do. It wouldn't be the same because I don't hold the same position Bjarne did in the early days, but I am very interested in Rust history, and want to preserve it. It would be from my perspective rather than from the creator's perspective.
I did give a talk one time on Rust’s history. It was originally at FOSDEM, but there was an issue with the recording. The ACM graciously asked me to do it again to get it down on video https://dl.acm.org/doi/10.1145/2959689.2960081
Thank you. It would be interesting to read the history, including the design decisions, the influences (and distractions), the trade offs, etc.
When I read “The Design and Evolution of C++”, it gave me a better understanding of the language.
I would 100% buy a hardback gold embossed version of this book.
1 reply →
21st century C++? AKA Rust?
Unfortunately, Rust is significantly less expressive than C++ and therefore is unlikely to replace it for high-performance systems code. As much as I don't like C++, it is very powerful as a tool. The ability to express difficult low-level systems constructs and optimizations concisely and safely is its killer feature. Once you know how to use it, other languages feel hobbled.
C++ doesn't allow you to express low level systems constructs concisely and safely though. You usually get neither.
Look at the first example in the article, where the increment can overflow and cause UB despite that overflow having completely defined semantics at the hardware level. Fixing it requires either a custom addition function or C++26, another include, and add_sat(). I wouldn't consider either concise in a program that doesn't include all of std.
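The pre-C++26 workaround looks something like this (C++26's std::add_sat in <numeric> reduces it to a one-liner):

```cpp
#include <limits>

// Saturating add for int: clamp instead of overflowing, since signed
// overflow is undefined behaviour. Doing the arithmetic in a wider
// type makes the comparison against the int range well-defined.
int add_sat_int(int a, int b) {
    long long s = static_cast<long long>(a) + b;
    if (s > std::numeric_limits<int>::max()) return std::numeric_limits<int>::max();
    if (s < std::numeric_limits<int>::min()) return std::numeric_limits<int>::min();
    return static_cast<int>(s);
}
```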
6 replies →
Most high performance code is vectorized and Rust is better at autovectorization and aliasing analysis than C++, so I'm not really seeing your point.
Having to drop down to intrinsics early is not a strength.
1 reply →
After decades of C++ development, I prefer C, modern Fortran and Rust.
Your 200+ git repos attest to that!
Just reading the first 1/5 of this made me bored. I started my career with C++, being heavy into it for 10 years. But I've been doing Swift for the last 10 at least. I had a job interview last week for a job that was heavy C++, with major reliance on templates and post-C++ 11... and it didn't go well. You know what? I don't give a shit.
It's crazy that with that amount of experience you wouldn't get the job, just because you lack some modern C++ info in your brain's memory. Stuff you could search for or ask an LLM in 5 seconds (or even look up in a freaking physical book). You'd probably be fully up to date within a few weeks.
Says a lot about the people hiring imo. Good luck to them finding someone who can recite C++ spec from memory.
If you last worked on pre-templates C++ and now need to work on a template-heavy codebase, you are effectively writing in a different language. I don't think it will be a few weeks of catching up.
1 reply →
Ha, thanks, and obviously true. But I can understand companies wanting people who can just march into their codebase and "hit the ground running," I guess.
I don't need the stress anyway. The dough would've been nice, though...
C++ and C still force a usage of header files.
For whatever reason this is probably the biggest reason I've struggled with it (aside from tooling... makes me miss npm).
Excellent article. Thanks for sharing.
Previous from 3 days ago: https://news.ycombinator.com/item?id=42952720 (103 points, 85 comments)
Thanks! I've merged that thread hither since this one is currently on the front page.
(How is that possible, someone may ask? It's the SCP! - see https://news.ycombinator.com/item?id=26998308)
Something about the formatting of the code blocks used is all messed up for me. Seems to be independent on browser, happens in both Firefox and Chrome.
This is a Bjarne issue. For personal reasons he uses proportional fonts in his code blocks (in his texts) instead of monospaced ones, and the code snippets always look bad. I guess he is stuck in his ways; we just have to work around this ugly look.
Looking at how aesthetically charming the C++ syntax is, I wouldn't expect anything less than Comic Sans code blocks
> This is a Bjarne issue.
I have come to find this category of error to be distressingly large.
1 reply →
This is not a Bjarne issue.
The font is selected by the HTML/CSS of the ACM site, not by Bjarne.
There may be a bug in the CSS of the ACM site, but I think that it is more likely that anyone who does not see correctly formatted code on that page has forgotten to open the settings of their browsers and select appropriate default fonts for "serif", "sans serif" and "monospace".
As installed, most browsers very seldom have appropriate default fonts, you normally must choose them yourself.
In this case, a monospace font is mandatory for rendering the code on that page, because the indentation is done with spaces, which become too narrow when rendered with a proportional font. Whoever does not see a monospace font must have a proportional font set in their browser as the default monospace font, and should correct this.
No, the formatting was definitely botched. It should look much better than it does even in a proportional font.
3 replies →
this is definitely an issue for the editors of the ACM journal
It's typical Stroustrup style to write code in a variable width font. I'd wager they didn't have an option to use a variable-width font in their code blocks in their CMS and normal paragraphs are trimmed automatically.
I didn't see the author at first. However, immediately after seeing the code I checked for the author, because I was sure it was Stroustrup.
The other give away is that he wants to use his awful "I/O streams" feature even though he also wants very modern features like modules.
Normal people who have a modern environment would std::println but Bjarne insists on using the I/O streams from last century instead
While you are right about the books of Stroustrup, here your inference is wrong, because Stroustrup cannot have anything to do with the CSS style sheets of the ACM Web site, which, in conjunction with the browser settings, determine the font used for rendering the text.
On my browser, all the code is properly indented, most likely because my browsers are configured correctly, i.e. with a monospace font set as the default for "monospace".
Whoever does not see indentation, most likely has not set the right default font in their browser.
1 reply →
There’s a better formatted PDF on Stroustrup’s website: https://stroustrup.com/21st-Century-C++.pdf .
That is massively better, thank you!
The code blocks aren't in a preformatted tag like <pre>, so the whitespace gets collapsed. It seems the intention was to turn spaces into &nbsp; entities, but however it was done was messed up, because lots of spaces didn't get converted.
The code blocks are formatted as "wp-block-code", which seems to select the default monospace font of the browser.
My browser has an appropriate default monospace font (JetBrains Mono), so the code is formatted and indented correctly, as expected.
Where this does not happen, the setting for the default monospace font must be wrong, so it should be corrected.
1 reply →
Have you verified that your browsers have correct settings for their default fonts, i.e. a real monospace font as the default for "monospace"?
Here the code is displayed with my default monospace font, as configured in browsers, so the formatting is fine.
There are only 2 possible reasons for the bad formatting: a bug in the CSS of the ACM site, which selects a bad font on certain computers or a bad configuration of your own browsers, where you have not selected appropriate default fonts.
Firefox reader view seems to be a slight improvement since it removes the random right alignments in the article.
This doesn't seem to be a code blog, but a general science communication blog. The editors may not be familiar with code syntax, and may simply be using a content management system and copy-pasting from source material.
> ACM, the Association for Computing Machinery, is the world's largest educational and scientific society, uniting computing educators, researchers and professionals to inspire dialogue, share resources and address the field's challenges.
Most of programming language conferences are organized by ACM.
1 reply →
Yeah. Looks nasty. Don't the editors of the ACM have a say on how the article is presented?
https://news.ycombinator.com/item?id=42952720
[dead]
Who the hell typeset this?
Communications of the ACM has had unbelievably bad typography for code samples for decades (predating the web). No idea how this is allowed to continue.
I'm guessing someone pasted from what went into the print edition. Or Bjarne himself.
It's just the first code snippet that's messed up. The rest is merely wonky.
You don't use 10 spaces of indentation? It's the 21st century.
It’s a wchar_tab.
[flagged]
Ok, but please don't fulminate on Hacker News. We're trying for something different here.
https://news.ycombinator.com/newsguidelines.html
> Between Rust and Zig, the problems of C++ have been solved much more elegantly
Those languages occupy different points in the design space than C++. And thus, in the general sense, neither of them, nor their combination, is "C++ with the problems solved". I know very little Rust and even less Zig. But I do know that there are various complaints about Rust, which are different than the kinds of complaints you get about C++ - not because Rust is bad, just because it's different in significant ways.
> It is so objectively horrible in every capacity
Oh, come now. You do protest too much... yes, it has a lot of warts. And it keeps them, since almost nothing is ever removed from the language. And still, it is not difficult to write very nice, readable, efficient, and safe C++ code.
> it is not difficult to write very nice, readable, efficient, and safe C++ code
That's a fine case of Stockholm Syndrome you've got there. In reality, it is hard. The language fights you every step of the way. That's because the point in the design space C++ occupies is a uniquely stupid one. It wants to have its cake and eat it too. The pipe-dream behind C++ is that you can write code in an expressive manner and magically have it also be performant. If you want fast code, you have to be explicit about many things. C++ ties itself in knots trying to be implicitly explicit about those things, and the result is just plain harder to reason about. If you want code that's safe and fast, you go with Rust. If you want code that's easy and fast, you go with Zig. If you want code that's easy and safe, you go with some GCed lang. Then if you want code that's easy, safe, and fast, you pick C++ and get code which might be fast. You cannot have all three things. Many other languages find an appropriate balance of these three traits to be worthwhile, but C++ does not. It's been 40 years since the birth of C++ and they are only just now trying to figure out how to make it compile well.
Even Cobol code hasn't been ported in its entirety, and the whole codebase at its peak was probably orders of magnitude smaller than C++'s. It's also far easier to port Cobol - with it being used mostly for data processing and business logic - than C++, which was used for all manner of strange, esoteric and complicated pieces of software requiring thousands to millions of man-hours to port (for example most of Gecko and Blink).
C++ will be here forever, at least in some manner.
edit: spelling
We can all at least appreciate that COBOL is something you try to get rid of where possible. If we took the same attitude to C++ as we do COBOL, then I think the issue would be much less severe.
> It is so objectively horrible in every capacity,
Total hyperbole and simply not true.
> but it still somehow managed to limp on for all these years
Before Rust became somewhat popular, there was simply no serious alternative to C++ in many domains.
That in and of itself is a failure. The decision to continually bolt more stuff onto this mess instead of developing a viable alternative is honestly painful. When you look at something like Zig, it gets you much of what C++ offers and in a way that doesn't cause you pain. Is the argument that Zig simply wasn't possible 30 years ago? I doubt it. As best I can tell, Zig comes as the result of a relatively experienced C programmer making the observation that you could improve C in a lot of easy ways. Were it not for the existing mess, he might have called his language C++. Instead a Scandinavian nut-job decided to heap some mess on top of C and everyone just went along with it.
Honestly, I am a happier and more productive developer since I left C++ behind for other languages. And it's not just the language, but the lack of ecosystem too. Things like the build system, managing dependencies, etc., are all such a pain compared to modern languages with good ecosystems (Rust, Flutter, Kotlin, etc.).
Start by removing Rust's dependency on GCC and LLVM, both written in C++.
Rust doesn't "depend" on LLVM in the sense you seem to imagine, you can instead lower Rust's MIR into Cranelift (which is written in Rust) if you want for example.
LLVM's optimiser is more powerful, and it handles unwinding, so today most people want LLVM but actually I think LLVM's future might involve more Rust.
I dislike the style of code used to write this. I understand that, given who wrote the article, this is blasphemy.
Opening braces should be inline with the expression or definition.
Comments can be above what they're referred to.
Combined, this makes any code snippet look like crap on mobile and almost impossible to follow as a result.