Honestly, this sentiment and most of the comments here are nerd-sniping and oneupmanship masquerading as wisdom.
Obviously very few people, possibly none, know 100% of all possible knowledge (how various compilers work, all possible UB, gotchas of stdlib, patterns to use to avoid bad things) to be known about C++. Congratulations to you if you are able to find an example of something someone who “knows C++” does not know. But nobody claiming to know C++ is claiming to know 100% about it anyway, it’s obviously a way to phrase that you’re familiar with it and competent enough with it to be productive based on experience and exposure.
You wouldn’t hold a programmer in other languages to the same standard. Most people who “know Java” aren’t JVM hackers, most people who “know JavaScript” aren’t familiar with all the actual details of how JavaScript gets interpreted and executed by the browser, or how V8 works.
Yes, the main difference is that it can be easier to introduce certain classes of bugs in C++ than in other languages. You don’t need to know 100% of C++ to know it well enough to mostly avoid these. Buffer overflows, memory leaks, and UB are all things a person who “knows C++” can avoid by conforming to common patterns that limit the use of the features that lead to them, or that structurally rule them out. Stop being pedantic.
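A minimal sketch of what those "common patterns" look like in practice (the function names here are illustrative, not from any particular codebase): RAII containers and smart pointers own their memory, so leaks and manual bounds bookkeeping mostly disappear without needing to know every corner of the language.

```cpp
#include <cstddef>
#include <memory>
#include <string>
#include <vector>

// std::string grows itself: no manual buffer sizing, no overflow arithmetic.
std::string join(const std::vector<std::string>& parts, const std::string& sep) {
    std::string out;
    for (std::size_t i = 0; i < parts.size(); ++i) {
        if (i) out += sep;
        out += parts[i];
    }
    return out;  // no free() to forget
}

// Heap allocation via unique_ptr: ownership is explicit, release is automatic.
std::string demo() {
    auto owned = std::make_unique<std::vector<std::string>>(
        std::vector<std::string>{"no", "leaks"});
    return join(*owned, " ");
}   // the unique_ptr frees the vector here; nothing to delete, nothing to leak
```

None of this requires knowing the aliasing rules or the fine print of value categories; it's the subset most working C++ programmers live in.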
Where C++ stands out is being wide and leaky at the same time. It allows high level programming, and yet you have to know about all those tiny low level nuances in order to avoid shooting yourself in the foot. In comparison, Java and JavaScript allow you to stay at one level and not look down.
That’s totally true. In practice I think most effective C++ programmers, while still not knowing nearly everything there is to know, adopt the mindset of “if I follow these rules, avoid XYZ, and don’t get overly fancy, I can avoid aiming the gun at my foot”.
I’d be warier of someone claiming to know all the C++ features without a lot of experience maintaining it or running it at scale than of someone who claims to know fewer features but has maintained it and run it at scale. It’s very much a language where you need to be humble about what you don’t know and careful about introducing things you don’t understand well.
I think this “use what you know” is actually true in most languages including Java and Javascript, it’s just that the failure modes usually manifest as poor performance rather than unexpected semantics like in C++.
...and then those effective C++ programmers get a code base where their peers throw ints as exceptions and do other freakish stuff, their whole mental model is destroyed, and they have to sink or swim.
lol, try using "modern" C++ without knowing all the little subtleties and you get undefined behavior, leaky programs, and an unrecognizable monstrosity that you can't possibly debug because you'd have to stare at walls of text. No, it is far, far too easy to slip into madness with C++. Which is why I would rather hire someone who admits they don't know C++ and is humble about it, than someone who says they are pretty confident with their C++. Nah, I don't trust any C++ code. It's all broken and must be handled with extreme care AFAIC.
When you are writing "real" programs, not just "leetcode solutions" you will come across all the myriad ways that c++ is fucked up.
I just think you and others are arguing against a straw man that does not represent those who would off the cuff/colloquially/on their resume claim to “know C++”. Yes if you then ask to clarify and the person says they know the whole thing back to front that’s suspicious and worthy of validation.
But as someone who has been primarily working with C++ (14,17,20) for several years I will continue to colloquially simplify that as “knowing C++” while having the humility to note that does not mean I know nearly everything about it. I know how to prevent subtleties and unexpected outcomes reasonably well within the subset that I commonly work with, and also know that when working outside that subset or using things I don’t understand well, that I’m wielding a footgun that must be treated with extreme caution. And most people who “know C++” would also be in that camp, and we’d know that’s what each other means when we say we “know C++”
I have worked with C++ codebases that are very conservative. They are basically C with a sprinkle of templates but a fuckton of OOP. C++ actually makes it harder to reason about the program, not easier. I have worked with C code that is much easier to reason about. There are way too many moving parts in C++; sometimes things just get fucked and no one knows why.
This actually happened to me: I was trying to fix one thing, an entirely different thing broke, and no one knew why. So I just left the PR to rot. I was only 1 or 2 years into C++ at that point, and I already knew I didn't think this shit was viable at all. I mean, half of C++ talks/discussions are about the language fixing itself, not actual real-world problems.
I think you are underestimating what you know and overestimating what the average person knows. I have seen interviews where there is a very large gap in knowledge, but the candidate doesn't seem to know it's there. It seems to me that the general population's idea of C++ is very different from that of the people who talk about and use it a lot.
I've migrated codebases from C to C++. The resultant codebases were an order of magnitude easier to support, easier to extend, and breathed new life into existing product lines. I had this experience at 3 different companies over the course of 20 years.
Having said that, there are definitely "gotchas" that trip people up, but honestly, the biggest problem I've seen is that people have a poor understanding of object-oriented analysis and design. Plus I'd love to get back all those hours of people arguing over whether methods should be private or protected, or virtual, and whatnot.
It took the industry a long time to figure out the best practices and by the time those emerged a lot of bad habits had been formed. That's the biggest advantage Rust has - they started with a clean slate after the best practices had been learned. C++ needs to maintain backward compatibility and as such, it's quite a complicated language. But not so complicated it can't be learned. It just requires more effort than most other languages.
> it’s obviously a way to phrase that you’re familiar with it and competent enough with it to be productive based on experience and exposure.
I think the point is that almost nobody can say they "know" C++ in this sense. You have to be more specific -- what parts of C++ do you know well enough to claim competency? It's probably not all of it.
That’s fair, but also very clunky in practice, and essentially should be an implied point by anybody who “knows C++.” It would be very clunky and annoying to write on a resume “knows C++20 except limited knowledge of exceptions, template metaprogramming, and default linker/make tools” when you can just caveat all your blind spots once you actually get to talking about which parts you know (and when your resume is being reviewed first by nontechnical folks who will interpret carve-outs like this negatively out of ignorance).
It’s not wrong or misleading to claim you are eg a cancer researcher or study English literature while not knowing all cancer research topics or all works from all niches of English literature. Because everybody who works in those areas knows that it’s implied you were exposed to only a limited subset. If laypeople assume that “knowing” a broad and complex thing means you have 100% of knowledge across the entire domain, that’s a problem with their interpretation of the claim.
> It’s not wrong or misleading to claim you are eg a cancer researcher or study English literature while not knowing all cancer research topics or all works from all niches of English literature.
True, but also cancer researchers and Eng Lit professors would never say they "know" cancer or English literature. They'd say what portion of those topics they know.
Also, a big part of the problem is that you really can "know" most languages in their entirety at a competent level. C++ is unusual in that you can't really do this with it. So assuming that "know" means "truly expert in" is not really a ridiculous thing to do if you yourself aren't very familiar with C++.
When someone says they “know” some other language, it’s rarely all of it either.
Take Python for example, I know some very good Python programmers who have been using the language for 20 years and yet wouldn’t claim they know every single feature, dunder method and standard library feature, but would absolutely say (rightly) that they know python.
The only real measure of C++ skill is years using it, and the industry you used it in. Game development will give you skills in C++ but you'll always have more to learn. It is one of the only places where you will be pushed into new areas and need the performance. You'll have fights over memory, string classes (custom vs. standard), template hell, all sorts of C/Obj-C library integrations, networking, compiler tours, and many more adventures.
We used lots of C++, Objective-C and even Objective C++ (g++ mixing Objective-C + C++, connecting to C++ engine) in a custom game engine that worked on mobile for lots of shipped titles. It is powerful and fun. I have 5-6 years using it everyday across games and media apps, bits across the rest of years, but so much territory uncharted.
I actually like C++ but it is like a lightsaber, it can be dangerous. Most apps still run on C++ at some level, even virtual machines. C++ is the stick shift manual transmission of programming languages. I'd build more in it beyond games if allowed and use it on personal things quite a bit.
In my opinion, if you can work daily in C++ for some years, all other languages are easier from there. Your understanding of the machine is better, from memory to stack/heap to performance, CPU/GPU bounding, networking, and the sheer power of the root of all applications. C/C++ are where the machine meets code, just above assembly; there is maneuverability there but also danger.
Fun fact: Objective-C was created before C++. Objective-C goes all the way back to 1981, but officially 1984. [1] C++ goes all the way back to 1982, but officially 1985. [2]
Objective-C++ is one of those things that sounds like an abomination when you first hear about it, but when you try it, it's surprisingly straightforward and useful.
Obj-C and C++ are really very different, so it makes sense to use each one for what it's best at; and the syntax of each is so different that it clashes much less than you'd expect.
When I first learned about it in 2008ish I was blown away, really just using g++ to compile and you are set. Very interoperable and more than expected, C/C++/Objective-C++ all work well together, maybe even better than layers on top today in mobile/web. It isn't recommended to use all these unless there is a reason, but for integration points it works great. Using C libs in C++. Using C++ libs/systems in Objective-C++. Accessing Objective-C from C++ etc.
The game engine is in C++ so to use it on iOS we needed to have an integration point for the system level Objective-C libraries to the C++ game engine (same with Android NDK integration in a way).
Another game engine from around the time Oolong Engine [1] also used this and was used for an early Quake to iOS port.
Like our custom engine Oolong used PowerVR/PVRTexTool to simulate PVRTC/PVRTC2 textures.
This setup made it so you could just develop your whole desktop/console/mobile game on PC then it would run on iOS with the Objective-C++ integration points. That was huge for game studios that only had PCs pretty much at the time.
I ran into Steve Naroff a couple of years ago and he said, "it's still amazing that we got Objective-C++ to work" ("we" being his team at NeXT and ours at Cygnus). I'm impressed Jobs went for it.
About the lightsaber thing: from my experience it's worse. Even if you are really knowledgeable, C++ will bite you out of nowhere, because things never compose that well and something breaks for some arcane reason, which you will only notice because your customer's build segfaults randomly.
Also the amount of template voodoo necessary for certain things to be ergonomic to use is insane at times.
I vaguely remember a weapon in an old text based multi-user dungeon (MUD).
It was cursed: once picked up it couldn’t be put down. It was also very powerful, with only one downside: randomly you’d hit an ally in the same room instead of the enemy.
Reminds me of a lot of things in the world.
It was also just at the right level that you’d be getting self-confident and even a bit cocky, but not experienced enough to recognise such an obvious trap.
Of course, I dual-wielded two of them like a maniac to the dismay of my party. I only killed a few allies! An accident, I swear. Not my fault.
My first tech job involved working with Obj C++. It was at an educational app platform targeted at iOS apps, but we also started targeting MS and Android apps after a few months. Quite a few devs of the 3rd party apps wrote their core logic in C++, and it was my job to help them get set up with our iOS SDKs.
The core logic wasn't too bad (except that I was very green at the time), but the interop between C++ and Obj C parts was truly painful and part of why I became a web dev for years after.
I don't know, years might be contraindicative. I've met a lot of people who learned paleolithic C++ in the 90s and never reformed or learned anything in C++11 or later and they're the worst C++ colleagues you can imagine. I'd rather have some kid who never used C++ before than someone who has been using C++98 for 25 years.
Hmm, I know a person who says he knows C++. He has spent more than 10 years studying the language, and at all the HFT companies where he works micro-optimizing code, they all say his technical ability is better than that of whatever senior person works above him. I don't know C++ much, but he'll happily tell you how compiler passes work and what the differences between GCC and Clang/LLVM are.
I've noticed he doesn't know everything, but he knows a lot.
I have worked with a person who was the "big C++ gun" at the company, taught C++ in a big university two days a week, and IIRC also was on some sort of standards committee (in any case his name was on some document from them). His knowledge of the language was breathtaking.
In a personal conversation, that person told me that he estimated that he knew about half of the language, and that average career length wasn't enough to learn all of it.
Entirely this. C++ has grown into such a monstrosity that I think anyone claiming to "know" it is fooling themselves. Nobody can "know" it in the sense usually meant when saying you know a programming language.
I've been working primarily in C++ since the days when there weren't even C++ compilers (you instead ran the C++ code through a program that turned it into C that you compiled with a C compiler). By any reasonable measure, I am proficient in the language -- but I think I'm actually proficient in about half of the language, same as your coworker.
This is also why I've moved away from C++ in the last few years. It's too unwieldy for me to use as a default language anymore. Now, my default language has become C++-as-a-better-C rather than full-blown C++.
It's partly a question of how you define "knowing something".
For one thing, when a language feature doesn't really work out, the user community will notice and it'll disappear from most codebases. If a Java developer with 20 years of experience has never worked with Java Web Start - is their mastery incomplete?
And even for features that aren't deprecated, almost no jobs will demand every feature a language offers. A veteran Java developer with 20+ years working on high-performance backend server code might not know much about JavaCard, or packaging their code for use on Windows, or the state of the art in making GUIs, or the latest kitchen-sink enterprise framework - is their knowledge incomplete?
Of course, if your definition of "knowing a language" is so demanding that nobody meets it, some might say that's an unrealistic bar....
Exactly this - The language is big and getting bigger, and so is the standard library.
But this is true of almost any popular language! By the metric people are applying to C++ here, there is no one on earth who "knows Java", which is patently ridiculous.
A better metric would be "Can write code in that language for some domain".
> A better metric would be "Can write code in that language for some domain".
Can you though? C++ chooses to sidestep Rice's Theorem with IFNDR (Ill-formed, No Diagnostic Required). IFNDR is sprinkled all over the ISO document, and what it basically does is this: everywhere the language needs semantic constraints for soundness, where Rice's Theorem would otherwise make it theoretically impossible to write a C++ compiler, they just say: well, if you break this semantic constraint you haven't written a C++ program. The compiler won't know, and neither will you, but it's not our fault if your program doesn't work; it was never actually a C++ program anyway.
This is a great rhetorical device - if the goal was to win a debate championship this sort of sidestep is very clever. But obviously it's going to produce a vast amount of nonsense, it's terrible engineering practice, and yet for decades people acted like this was OK.
I too have 10 years of C++ development under my belt, also in HFT, but this was from 2003-2013. I do find it easier to understand and solve a lot of problems by going one level lower than most developers (TCP packets, call stack, heap memory, disk blocks, CPU instructions). However, this depth, at least for me, seems to come at the expense of breadth of knowledge of different technology offerings and the best ways to apply them. E.g. I could write a queue from the ground up, but I don't know enough about the features and behaviours of SQS, RabbitMQ, etc.
Ten years of in the field work is about right for that level. It’s probably accurate to say that if you’ve spent less than that writing C++, you’ll discover things that will surprise you. The alias keyword was mine.
I remember going through all the specified points in the graph over my first 5 years of C++-ing, except the exception handling (probably because we had set rules on using exceptions for error handling).
Also, this graph is from 2010, so it's missing some lambda magic, auto vomit, and static object initialization got easier since.
The TLDR of TFA is there are people who think they "know C++" but actually don't, and there's a simple test to tell them apart from the ones that do actually "know C++".
If anyone says they know C++, ask them back which one they are referring to?
GNU g++, MSVC C/C++?
ISO C++98, C++03, C++11, C++14, C++17, C++20, draft C++23?
I like the OP's curve, but realistically, there needs to be a set of curves next to each other, because every 5 years, C++ changes a lot to the extent that you can write "idiomatic" code that looks very different from half a decade before.
I've been using C++ since the 1990s, but mostly, overall whenever I used C++, I stuck to a subset of C++11 or C++17.
Compiler errors always have been a pain, which is still the case now, and no two compilers compile the same language (unlike Java, whatever complaints you might have against it).
I did a lot of C++ 20 years ago, and so figured I "knew" it. I recently tried writing some C++ again, and couldn't recognize the language at all... the symbols, functions, libraries, etc. were so radically different that it more than exceeded what I would consider to be the threshold for an entirely new programming language.
Every nontrivial C++ project ends up picking a working subset, sometimes explicitly. These don't particularly correspond to standards versions. I ‘know C++’ according to coding standards X and Y pretty well, but I don't ‘know C++’ according to project Z. (As a concrete example, I don't think I've ever worked on a code base that used exceptions or RTTI.)
TBF this is common in any mature language. In fact one reason I don't like Go is that it was explicitly designed to have "One True Way" to do anything and if you don't like that choice, too bad. I know why that approach was taken, but it doesn't work for me.
I find that most CVs I read this day explicitly specify the standard the candidate is experienced in/worked at a specific gig.
I normally expect at least C++11 when interviewing, but let the candidate use whatever standard they are more familiar with (and even non-standard features, especially around threading).
I once worked with a true 10x programmer. The guy used C++ metaprogramming to avoid boilerplate code. And by "used" I mean he needed the best preprocessors and compilers at the time just to write his code. He once single-handedly ported our >1M LOC codebase to Java over a weekend (admittedly, a 3-day weekend, but still).
When new people asked him if he knew C++ his standard answer was "I know enough to know I don't know C++".
> He once single-handedly ported our >1M LOC codebase to Java over a weekend
How do you even physically do that?
Generous estimate:
1M LOC
18hr days * 3 = 54hrs
1,000,000 / 54 ~ 18,500 LOC/hr
There must have been a lot of help from automation here or you had an enormous amount of boilerplate code, because otherwise I don't understand how this is even possible.
IME a lot of real-world 'enterprise quality' million-line code bases only have a few thousand lines of actual functionality in them - if at all (take a typical Java or Java-style C++ class which is mostly constructors, setters and getters, and maybe a single method that actually does something).
Then replace some large modules with existing 3rd party libraries, and the whole feat sounds a lot more believable.
Back in the day that those codebases were written, the 3rd party libraries did not exist or they were GPL. Everyone actually had to reimplement a linked-list library or macros in C. Nowadays you should probably fire anyone attempting to do that inhouse.
I've reduced codebases by more than 100x, and simultaneously closed out 90+% of bugs, and added many orders of magnitude in performance.
It is far less impressive than it sounds, because it relies on having an unimaginably bad starting point.
I guarantee you that the starting point for this story was a pile of copy-pasted classes, each of which had some minor tweak, and that the program exposed some combinatorially large number of similar flows. When a bug was fixed, it would need to be manually applied to dozens of classes, and this did not happen, so each of the copy-pasted classes diverged and replicated functionality.
The output of the weekend was almost certainly a simple set of software layers, each with a clean mathematical abstraction, and the composition of the layers expressed all possible flows from the legacy system.
The best example of this I've heard of was with inkjet printer drivers from HP. They used to fork their entire driver stack, including font rendering and dithering, for each printer they released. They produced dozens of models per year. Then they assigned 5-10 full time engineers to maintain each fork of the driver.
After the open source people had already done it with reverse engineered stacks, someone at HP wrote a unified driver framework where the only model-specific stuff was parameterized inputs to the ditherer (DPI, etc) or the actual wire protocol over USB to send the list of dots to put on the paper.
They ended up replacing something like 10,000 engineers with a dozen people or so, and printout quality increased dramatically (though I doubt it was as good as the open source stacks).
When doing a major rewrite like that, you’ll often end up with significantly less lines of code, maybe 1/4 of the original. Even if rewriting in the same language.
This is the result of both being able to cleanup & simplify, as well as completely ignoring edge cases and regressions :)
A very consistent codebase makes it possible to do things like this, using automatic refactoring and very smart find/replace, but such codebases are few and far between.
If you map out the entire system and understand how all the pieces fit together, you can: make something simpler that fulfills the reqs/does the exact same thing; or use template metaprogramming to generate the source code for you, instead of writing it by hand.
The only issue is very few people will have the skills or the patience to sit down and try to understand metaprogramming. So you won't be able to take advantage of it in most business use-cases (i.e. won't easily be able to find another cog to work on it), despite how powerful it is.
It's like why more people don't work with K (or functional langs, etc.): it's not a simple procedural or OO language, so it's harder to learn -- and harder to get started with.
I once wrote a Perl translator to translate a codebase from a custom business rule language into C++. Not my proudest moment: rather than using a parser, I just tokenised and processed the code line-by-line. It worked.
I tend to agree with the result, despite having spent much of the late 1990s and 2000s writing C++ code. By 2011, I considered myself pretty close to language-lawyer level, since I had made an explicit effort in the late 2000s to dig into the dark areas of the spec I didn't personally use much. Then C++11 came out, and that reset my ability to understand feature interaction by 5+ years. Then before I got comfortable with the changes, it changed again, and again. And not in insignificant ways.
These days I don't write much C++ code (I've regressed to largely C), and when someone asks me how proficient I am, I place myself at a 4 out of 10, and then explain how the spec has changed faster than I think most people can actually absorb it unless it's their full-time job to sit on the standards committee. So I don't believe anyone who gives themselves a score over 6 unless their job actually required a deep level of standards understanding (ex: compiler developers).
OTOH, all that doesn't mean my actual C++ style has changed much. I still use C++ as a C with objects, only now some of the restrictions of the past have reasonable workarounds. So the idea that only beginners treat it as a C-with-objects language is misleading. Sure, I can create all kinds of fancy language constructs, but I only use things like template metaprogramming in toy projects, and the results are frequently set up as "C" language extensions (e.g. a matrix class where the matrix T can be a bignum, etc.) which happen to look more like a C compiler with some extra features for manipulating matrices.
A lot of the problems (ex: threading is hard) come from the fact that it's far too easy to dig a comprehension hole that even the initial programmer cannot get themselves out of. Hence projects like Firefox/Thunderbird from ~15 years ago, where the C++ code was hacked together in a style that needed to be cleaned up but never was, leading to piles of hidden bugs, and to problems where newer/stricter compilers would simply refuse to compile the code due to undefined behavior on every 10th line.
> These days I don't write much C++ code (I've regressed to largely C), and when someone asks me how proficient I am, I place myself at a 4 out of 10, and then explain how the spec has changed faster than I think most people can actually absorb it unless it's their full-time job to sit on the standards committee. So I don't believe anyone who gives themselves a score over 6 unless their job actually required a deep level of standards understanding (ex: compiler developers).
I generally factor in a ~5 year adoption period for newer standards, as not only does compiler support have to complete, but distributions have to pick up newer compiler versions and make them the default.
So right now, I'd consider c++17 the expected standard for people to generally be comfortable and competent with.
You can get quite far treating C++ as C with classes, and I recommend this approach for anyone trying to learn it. Especially nascent gamedevs. You’ll be able to learn the other details as you go.
One tip: template metaprogramming can consume your life. Avoid it, but use templates sparingly; they do simplify code, especially for vector types.
The key with C++ zen is “all things in balance.” You can go too far with basically every aspect of the language. And it’s important to go too far, so that you know why not to.
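On the "templates do simplify code, especially for vector types" point, a minimal sketch of what that looks like (the names `Vec` and `Vec3f` are illustrative, not from any particular engine): one definition covers float/double vectors of any dimension, with no macros and no copy-paste.

```cpp
#include <cstddef>

// One template covers Vec<float,3>, Vec<double,2>, etc. This is plain
// generic programming, not metaprogramming: no recursion, no type tricks.
template <typename T, std::size_t N>
struct Vec {
    T v[N];

    Vec operator+(const Vec& o) const {
        Vec r{};
        for (std::size_t i = 0; i < N; ++i) r.v[i] = v[i] + o.v[i];
        return r;
    }
    T dot(const Vec& o) const {
        T s{};
        for (std::size_t i = 0; i < N; ++i) s += v[i] * o.v[i];
        return s;
    }
};

using Vec3f = Vec<float, 3>;  // the usual gamedev workhorse
```

This is about as far as a "C with classes" style needs to push templates, and it already removes most of the duplication macros were used for.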
> You can get quite far treating C++ as C with classes
I'm teaching an HPC (threading/OpenMP/MPI) class and that's exactly what we do. We switched from regular C to C++ for classes and some standard library data structures, and that's that. It makes our lives easier and stuff is still "easy" enough to understand and work with.
I've once had to work with a cryptography codebase using excessive templating. It was an absolute nightmare to deal with and the only person I know of able to understand the codebase was the original author.
Using templates that way is totally possible. The problem is the nesting you end up with makes it hard to follow what is going on as you discovered.
I like to use templates to replace #define style programming. As it makes it easier to debug what is going on and get the compiler to work for you instead of in the background and hoping it spits out the right thing to feed to the next stage in the compiler pipeline.
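A hypothetical illustration of that trade (the `SQUARE`/`square` names are mine, not from any real codebase): the classic macro

```cpp
// The macro version:
//   #define SQUARE(x) ((x) * (x))
// evaluates its argument twice (SQUARE(i++) is a bug) and is invisible to
// the debugger. The template below is typed, evaluates its argument exactly
// once, and shows up in stack traces and error messages as a real function.
template <typename T>
constexpr T square(T x) { return x * x; }
```

Same compile-time evaluation where you want it (`constexpr`), but the compiler is working for you in the open instead of the preprocessor pasting text behind your back.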
Many languages have these sorts of 'yeah, you can do that, but you probably shouldn't' things. C++'s version of that is templates (for a long time it was giant trees of classes). You get sucked into how cool it is, then realize you have made a horrible mistake and created a giant mess that you can barely comprehend, much less debug.
I don’t consider template metaprogramming an official core language feature; it is a side effect of templates. Generic programming itself with templates is great, and the superpower of C++.
Readability and maintainability are the key features of good source code. Therefore I back off if someone tries to make the code itself “art”. The result should be reliable craftsmanship. As usual, apply reasonable exceptions ;)
Yarp, I do the same. It's rudimentary but works fine and the code is clean enough if you pick a couple conventions (starting "private" variables with an underscore for example) and stick with them.
I've become somewhat convinced that no one has written a correct nontrivial C or C++ program since 1989.* My go-to example in C++ is examining a floating-point number as an integer:
• Cast a (pointer to a float) to a (pointer to a uint32_t). No; even if you make sure the size of a float on your target platform is 32 bits, this is undefined behavior.
• Make a union of a float and uint32_t. No; this works in C (since C99...?), but in C++ "it is undefined behavior to read from the member of the union that wasn't most recently written." [1]
• Use reinterpret_cast<uint32_t*> on the float's address. No; this violates the type aliasing rules, and the behavior is undefined. [2]
• Use reinterpret_cast<unsigned char*> or reinterpret_cast<std::byte*> and then reassemble the bytes into a uint32_t. Yes! This works, although it's slightly laborious.
• Use std::memcpy. Yes! This is perhaps the "right" answer. The compiler should recognize the idiom and elide any actual copy.
I'm not even confident that I have that correct. It's an enormous language to fit into one's head; the language specification runs about 500 pages long and the standard library another 1500. [3]
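For concreteness, the memcpy idiom from the last bullet looks like this (a sketch, assuming 32-bit IEEE-754 floats, which the static_assert partially checks):

```cpp
#include <cstdint>
#include <cstring>

// Well-defined bit inspection: copy the float's object representation into
// a uint32_t. Optimizing compilers recognize this idiom and emit a single
// register move; no actual copy survives.
std::uint32_t float_bits(float f) {
    static_assert(sizeof(float) == sizeof(std::uint32_t),
                  "float is assumed to be 32 bits here");
    std::uint32_t bits;
    std::memcpy(&bits, &f, sizeof bits);
    return bits;
}
// Since C++20, std::bit_cast<std::uint32_t>(f) does the same in one line,
// and is constexpr-friendly to boot.
```
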
Serialization of floating point numbers makes some sense. Even if you can't reliably read them on another machine, reading them on the same machine should be possible.
Yes, being able to fwrite() floats straight to binary files is invaluable when working with large datasets. It's common to see binary 'cache' files being created for ASCII inputs (e.g. CSV) when there is any chance of them being fed in repeatedly.
I'm not 100% sure, but your second example does not imply UB by itself. UB is caused by the act of accessing an object via a pointer of the wrong type; just casting a pointer, as you do, should not be a problem.
Another use is doing radix sort of floating point numbers. Since the exponent is in the most significant bits you can almost treat them as integers for radix sort (you just need to be a bit careful with negative values): http://www.codercorner.com/RadixSortRevisited.htm
I stand corrected: there is one very good reason to want to treat a float like an integer. But if you can come up with such a solution, you are definitely not one to stop at their first guess, and you'll double-check the standard before the final commit.
It depends on what you do! The reason I use this as my go-to example is that I came across the incorrect use of reinterpret_cast for this exact purpose in a previous job (which was firmware-adjacent).
edit: are you also expressing disagreement with the claim that few C++ programs are correct (and the implied claim that this is because many people's first-guess approaches are actually incorrect), or just saying that you think this is a bad example of the language being complex or misleading?
That's bytes. If you want bytes you need to specify what order you think the bytes are in; Rust's to_be_bytes, to_ne_bytes and to_le_bytes give you the bytes in big, native, or little endian order respectively.
Rust provides methods to get your f32 (a 32-bit IEEE floating point number) or f64 (the 64-bit one) as, appropriately, a u32 or u64. Rust calls these methods to_bits because that's what you get. Conveniently Rust doesn't need to care about byte order for these operations because no extant computer makes a different choice for its integer and float representations. Good.
You insisted you want char *, by which I understood you meant bytes. Rust can do that too, but now you need to care about byte order. Seeing another answer in this sub-thread I realise you actually meant you wanted a string which is doubly hilarious in a C++ thread. The char * is a pointer to a byte, C++ has actual strings (and, these days, even a passable fat pointer style string reference called std::string_view) and they're different.
It is reasonable, but you must pay attention not to incur undefined behaviour (which the post is about).
An array of chars is the way to go for the C++ committee.
This bugs me as well, since there still isn't a general enough official solution, even with C++20's bit_cast. Unofficially, the C way of type punning (pointer conversions and unions) still works in probably all C++ compilers, but it's UB, so potentially dangerous.
I think most use cases for type punning fall into two categories: conversions between types with the same size in memory, for writing or reading as a new type (e.g. read an f32 as a u32), or accessing only a part of the data, like the first or last two bytes of a u32.
Ideally, there would be a way to specify that a union is used for type punning and what conversion types are allowed (using something similar to concepts or constraints). That would probably solve most use cases and also allow the compiler to easily detect it.
>[type punning via union] No; this works in C (since C99...?), but in C++ "it is undefined behavior to read from the member of the union that wasn't most recently written."
It is technically UB, but it works as a documented extension on pretty much every compiler.
It looks like it's okay even technically! From a section marked "(since C99)":
> If the member used to access the contents of a union is not the same as the member last used to store a value, the object representation of the value that was stored is reinterpreted as an object representation of the new type (this is known as type punning). [...] Before C99 TC3 (DR 283) this behaviour was undefined, but commonly implemented this way.
That does a different thing. It converts the actual number represented by the float to an integer. We want to get an integer that contains the same bit pattern as the float.
That's a different problem: that simply converts a pointer-to-float to a uint32 (probably truncating it). What the post mentions is 'uint32_t my_int = *(uint32_t*)&my_float;'
(Of course you had it right, and HN ate your formatting, but it is funnier this way)
I find it amusing that the post is from early 2010, almost 2 years before C++11 came out and people were already saying C++ is too complex.
Now, 13 years later, if someone knows all of the language itself, libc++, common idioms, best practices, and template metaprogramming intricacies, I think it's safe to say they are more of a devoted learner/teacher than a programmer, as retaining and keeping up with all that knowledge won't leave much time for actual programming.
Maybe at least 10+ years of continuous C++ use would be a better measure of actual C++ skills?
Indeed. Pre-C++-11, I would actually have said that I know C++ very well. The language had a pretty stable surface area for a long time.
C++11 was such a major change to the language that not only was there a lot new to learn, but a lot of new best practices needed to be developed, and a lot of the common sense on those wasn't practical because of the need to interface with existing code and libraries. I feel like reasonable API design and coding style still hasn't stabilized post-C++11.
A lot of the new best practices of c++11 were already being developed and advocated for years before C++11. There was a part of the community that was focused on RAII, smart pointers, etc. Even lambdas had their beginnings in functors.
I think the difference is a lot of people were ignoring these trends in their c++ use, and were slightly blindsided when the standard appeared to endorse them.
If I'm doing a project in C++, it's because there's some library that does a lot of the heavy lifting for whatever project I'm on. (This is typical for me: I usually pick languages for a project based on how good the library landscape for the problem domain in that language is.) And before smart pointers were in the standard, they weren't typically used in library APIs. Even now that they are in the standard, a lot of those aforementioned libraries don't use smart pointers in their APIs, making usage in applications mixed at best.
A lot of times libraries want to have a C-callable ABI. This means you don't see a lot of C++ features in use between library boundaries, even if they use a lot of it internally.
Also, further back in history, the ABIs for a lot of C++ features were subject to change. If you used g++ on Linux in the 90s and 2000s you've been bitten by this. I'm not sure if this still happens in the modern era with the standards still iterating a lot; it would not surprise me if it does. This is another reason to avoid interfaces with a high risk of breaking ABI, which C++ features can be.
This matches my experience. In 2010 I had been using c++ for ~15 years and at that time would have said (correctly) that I knew pretty much the entire language. This was at least partially confirmed by an interview I had at that time for a c++ role, for which I immediately knew the answer to every c++ question.
Since then I've continued to use the language daily but have given up hope of ever getting back to that relative level of knowledge. I just try to learn little bits here and there whenever it's relevant to something I'm working on.
Yes, complexity-wise, it was the start of the big bang. But it was also a badly needed update and C++ would have probably died if it had continued on that path. Pre C++11, I remember having to include Boost, which is huge, just so I could replace the manual delete calls with smart pointer equivalents and use the thread and file system API.
Maybe the additions could have been managed better, but I don't think there's an easy answer as to how. Hopefully, this is something that can be improved. We'll see.
While the language itself has certainly gotten larger, and I have had my own share of rants about the workings and products of the standards committee, the day-to-day reality is that C++ has gotten easier to write and read, often significantly so.
Agreed; writing seems the most obvious one, as several pain points have been fixed: smart pointers, lambdas, an expanded standard library, and a lot of other small things. Also the tools are great and mature, and the fact that Bjarne is still involved is a bonus. I think the language complexity is much more of a problem for someone trying to learn the language than for someone already comfortable with it. Which may affect its future adoption, who knows.
Still, the people jumping ship to other, younger languages and then writing about it remind me a bit of someone trying to justify swapping their ageing partner for a younger one. Perhaps understandable, but after some time, the problem may turn out to be elsewhere.
We'll just have to see how sexy Rust and the other current alternatives will be when they get to C++'s age, if they still use programming languages by then.
I don't blame people for jumping ship. There are a lot of mature, safe, and fast-enough languages (no, I'm not talking about you, Python), and for many projects there is very little justification for writing them in C++.
I expect Rust to grow to be about as complex as C++, except for the most part memory safe. I still think it would be a big win overall but I don't know if it will significantly displace it though.
It's probably unavoidable, since no single person has the power to impose a direction for Rust.
There's no technical reason why C++ could not be made memory safe if needed, by piling on complexity, like compiler flags and checked smart pointers or something like that, but I don't know if it's worth it.
C++ on a CV is a gift for the interviewer because it's trivial to discover if the person is bullshitting when they say that they know it. I'm not talking about the esoteric corners here, I mean the absolute basics - it's amazing how many people think they can pretend to know a complex technology and are then knocked for six when the interviewer actually quizzes them on it.
I'm guessing, depending on the niche you work in, your basics are very different from many other niches' C++ basics. If you're working in FPGA development you'll have very different basics than at a fintech. Likewise, game development is very different from backend web dev. C++ at Facebook is probably very different from C++ at Microsoft. Microsoft kernel-development C++ is probably very different from Microsoft Windows 11 adware C++.
I'm glad that interviewers exist who think they can pwn the interviewee by quizzing them on the "extreme basics", because it's an immediate red flag to me: the interviewer is acting with extreme hubris in claiming that such a thing exists at all. I go back to this all the time, but when the guy who has literally been writing entire books on the esoteric features of C++ for 20 years says he doesn't trust himself to evaluate potential errata[0], I highly doubt any single person is capable of understanding the voluminous pitfalls and complexities of this large, bolted-together amalgamation of a language.
You're confusing the STL with C++. Kernel C++, gamedev C++, and appdev C++ are all mostly the same. There are some small differences, like not being able to have a global constructor in the kernel, but the language features themselves are the same.
What does change is the STL. Asking someone what a constructor is, or templates, or auto, or constexpr, constinit, and consteval and the differences between them: that's the exact same code on any platform or machine. Asking someone what a unique_ptr or ranges is strays a bit into the bullshit realm of an interview and should be shamed. But C++ is vastly separate from the STL. And odds are in an interview you can ask someone about the specific domain of C++ they use in most cases, since usually a win32 shop is going to interview win32 people.
I guess they're different for different interviewers.
The one I would use is something about undefined behavior. I mean, the fact that it exists and its consequences. For example: "this program works, but if I add this line of code at the very end, the program crashes. Can I be sure that it crashed because of that line, or because of some incorrect inputs to that line?"
In (almost) all other languages the answer is "Yes". Not in C++ or C. You may have a faulty program in the first place that just happen to not crash.
All other quirks and syntax can be picked up when needed. You can play around and investigate. But undefined behavior requires one to think and debug in a very different mindset. One cannot assume that a program that does not crash and gives a correct answer will do that if extended.
I've taught freshman students who have personally encountered dozens of different false positives and false negatives, not only with sanitizers but with compiler warnings as well. Nothing extraordinary, just `-Wall -Wextra -Werror` with the latest available GCC/Clang on the latest available Ubuntu LTS.
Some examples (some library bugs are also included):
I've also seen fun examples of a program working with all sanitizers in 14 modes across 3 OSes (GCC and Clang on Ubuntu, Apple Clang on macOS, Visual Studio on Windows), and _only_ failing with Visual Studio Release mode. It was an array overflow. Just was not caught by sanitizers.
Mind you: no weird C++ magic, just standard coding exercises.
What is a smart pointer? Or if I'm sadistic, what's the difference between unique_ptr and shared_ptr? To me, smart pointers are one of the tells that you are actually writing C++, and not just C with classes.
That said, testing syntax is generally awful and I only use to gauge someone's familiarity with the language (same with Go, I'll say what is a goroutine, but Go is easy enough to learn that it wouldn't factor into my decision unless the candidate was brazenly lying about past experience).
It's more that I've had so many interviews crash on this question that I've started to feel bad for asking it (and started to assume that, no, they don't know C++).
Most of these jobs we didn't even use C++, it was just identifying C++ engineers was usually a very strong signal, especially because we were early adopters into Rust.
well in one case I had to go right back to "how do you declare an integer variable?", which the candidate still didn't know how to do, before eventually admitting that their C++ experience was actually testing a system written in C++. That was an extreme example admittedly.
The problem with that statement is that you have to, in a very small time period, pick enough corners to find what they know, and not the many parts you think are not esoteric but that are in fact unused by many people.
The reaction I've seen from some clueless managers has verged on: "Wow he knows C++! He must be a genius!"
It's surprisingly easy to fool people with just words. I think this is why we have so many incompetent people working in tech. Most people don't know how to qualify people.
As someone who worked professionally on a c++ application for two years around 2010, I took c++ off my resume. C++ is the ultimate "stump the chump" language with tons of corner cases and esoteric gotchas. I hated it and I hate interviewers who do this. You all can have it.
The reality is, in my area, Java enterprise development pays a lot more. I'm sure this is the case in plenty of other areas as well. Surprising but true.
I've been writing C++ for 4 years now professionally. I don't claim to be some kind of god at it but what am I supposed to do? Leave it off my resume because I don't know every intricate detail? I've used it to get shit done and make a company money, everything else is fluff.
> I've used it to get shit done and make a company money, everything else is fluff.
I find it so frustrating how hard this is to sell in an interview. I understand how important it is to avoid bad hires, but I'm a self-taught web developer so I just flat out don't know a lot of CS "basics". I've been a professional SWE for 5 years, have made all of my teams very happy, have accomplished some very good work, and have dug into the docs enough to get the most out of the many tools/libraries I've had to use.
But, sometimes there's a coding challenge on a topic I've just never seen before, and I'm dropped. As unrealistic as it is, I wish interviews had options for demonstrating my ability to learn a new tool quickly and make use of it. I'd spend a work day taking on a mock ticket for some new security procedure I've never touched before if it showed them that I can actually get the job done.
Every place that uses C++ uses some narrow fraction of the capabilities. Teams try to find an intersection of features that hopefully a majority of the team can understand.
In an interview setting, keep answers contextual and tight. In my previous professional setting, we would solve the problem like this : <however>. Try to solve problems using only the subset you know.
If you're pushed into a corner and they really want to overload [] or whatever, be clear that, because C++ is a large and sprawling language, for production code you'd need to check the spec or consult with a teammate. With that understanding in place, you can take a stab at it.
If you get dinged for that, you probably don't want to work there anyway.
I suppose it depends on the work the interviewer's company is doing, but when I last had a C++ job, knowing what a copy constructor was and the implications involved was important.
I agree that asking details related to syntax are not good questions. But copying objects is a very natural thing to do in most languages, and if you don't have some idea of the implications, that's a red flag.
Also: some understanding of default copy constructors. I think it's OK not to know whether a default one is created for you or not, but just knowing that it might be is worthwhile to probe in an interview.
Of course, these concepts exist due to poor language design, but what can you do? If your team uses C++, you need to deal with the poor design, and you need people who understand the poor design.
> C++ is the ultimate "stump the chump" language with tons of corner cases and esoteric gotchas.
certainly, a bad interviewer can ask bad questions (for any language), but the copy constructor (and call by value or call by reference) is one of the basic features of c++.
Add it back. You know that any place like this is going to be full of people who spend all their time jerking themselves about how wonderful they are. Every place that I have interviewed where interviewers have done that has been full of people who are too maladjusted to work at a place that actually succeeds in producing anything.
It's because it is fast. I write small video games, and I have performance metrics I need to hit. Other languages may be nicer to write, but if it is at the cost of performance, it is making my life harder. Up until recently, if you wanted a language which didn't hurt you in performance, your choices were basically C or C++. You can debate whether C++ is better than C, but it has a lot more features, many of which I would miss if I had to use C.
Every language which attempted to compete with C++ did so with automatic memory management or other features which were bad for performance. If the goal is to kill C++, they made the wrong tradeoffs and introduced anti-features from a C++ pov. Only recently have we gotten languages which actually compete with C++ on this level like Rust, Zig, Odin, etc.
It's also because it extends C, and C is the lingua franca of programming.
C++ is the base of pretty much all our applications. It "won" over Objective-C/C++ by adding to C in the right way when the application age came. C/C++/Objective-C/Objective-C++ all sit at the balancing point / center of mass between machine code/assembly and something that is a real language, so they are the base of everything, including applications, virtual machines, and more. They took the application layer of the OSI model by storm.
C/C++ will always be around no matter how hard replacements are thrown at it. C++ is the Grand Canyon and attempts to dethrone it from what it does best are mere rainstorms adding to a river at the bottom of the canyon, maybe one day but that day is not close.
C/C++ work well together and this made them win along with the power and timing of coming about in the application age.
This is a goal, sort of. They've publicly stated that they want Carbon to be an easy transition from C++, like Kotlin did for Java. But they want a particular kind of language. A challenge with C++ today is that users want very different things. Google doesn't like being limited by strict ABI stability. Other people want to link their code against binaries compiled a decade ago. Carbon is promoting a particular language evolution model that makes sense for a lot of use cases but not all use cases.
It also remains to be seen if it'll work. A lot of people don't trust Google and languages take a long time to build.
C didn't have enough abstractions to scale to large projects. Most of these projects would go on to choose better languages (Perl, PHP, Java, Python, C#). If you still wanted speed, you chose C++.
Well the title (and contents) pretty much summarizes the industry: never trust a programmer who says they program.
No matter how many years they worked, how many projects they have shipped, how much code they share on GitHub, disregard everything and require multiple rounds of rectal examinations every time they apply for a job. They could have written millions of lines of code; it doesn't matter 'cause it's not the latest fad, and anything before last week's paradigm was just monkeys hanging out in the trees; it was clearly impossible to write any business logic before last week's invention of <insert latest craze here>.
Find some obscure crap you freshly read about that they don't know or don't remember since using it once 20 years ago and apply the very correct and fair logical inference "IF THEY DON'T KNOW EVERYTHING THEN CLEARLY THEY KNOW NOTHING".
Something along this: "So... you call yourself an English speaker? Native, ehh? Read plenty of books, wrote Medium articles? We'll see about that. OK, use 'Floccinaucinihilipilification' in a sentence that makes sense! Don't know? What a surprise! OK, one last chance, try not to omnishamble again. What's the velleity of the quincunx tintinnabulation? ... Just as I thought, you're a fraud and a sham and clearly don't speak English".
> Write a program which accepts a letter and display it in uppercase
That task is a lot harder than it sounds, given that casing is locale-specific. In a Turkish locale for example, I would expect a lowercase "i" to be converted to U+0130 "İ" instead of "I" like in en-US.
And then there's the whole ambiguity around the word "letter". Is a letter a single code point, or a grapheme cluster? What if someone passes in an emoji? With multiple zero-width joiners? Better make sure your dependencies are up to date so you know what's a single letter and what's too many...
This is clearly not what GP meant, and although i18n is rather complicated, I suspect that pointing all these things out to a junior dev in an interview would actually cause them to BSOD. The only correct answer to that question if you take i18n into account is "I'm not qualified to write that off the top of my head."
Haha, that's largely my resume, but I write pretty hardcore high-performance multithreaded code in C++ every day. Reason: I started with C++, got a job in it, and still work in it 15 years later, having mostly not needed another language in my day job.
To be fair, I also write some Lua, Javascript, Python, and I have sideline love affairs with Rust, Zig, Scheme, Common Lisp and others. But I've never had to write serious software in any of these, so applying for jobs where that's the main language would at least require me to honestly say "it'll take a little while before I reach maximum productivity".
The title is correct, but I very much doubt that most new C++ programmers (or anyone starting their programming career after ca. 2005) started with C and the "C++ is just C with classes" mindset. You don't need to know C to have ample opportunities to shoot yourself in the foot for the first few decades of writing C++ code, and the C language really cannot be blamed for any of the problems that C++ stacked on top of it.
I believe lots of courses still teach C-esque C++ first, starting with "three kinds of memory: stack, heap, global", then adding classes, followed by templates, and finally getting into STL.
Here is an example MIT course from 2011, it introduces C-style arrays and C-style strings in lecture 4, pointers in lecture 5, and classes in lecture 6: https://ocw.mit.edu/courses/6-096-introduction-to-c-january-... (although memory management is postponed)
Well, I started on the Mac (Metrowerks) and Visual C++ 1.0, and at that time you still had to interoperate with either Windows or Mac "pseudo-classes" (even down to having to set Pascal stack calling conventions, etc.).
Surviving that (and the ample choice of exquisitely carved tribal foot guns on either OS) was an interesting experience.
I think the c++ community needs to get past the idea that people learning it are coming at it from a "c with classes!" perspective.
It's going to transition to "python with static types", "rust without a borrow checker", "java with manual memory management", etc.
I don't think that as time progresses the newcomer pipeline is going to be largely represented by people coming from C.
The curve described is what it's like to pick up any language after being proficient with another though -- nothing specific to c++. c++ is just larger and more sprawling with more footguns and other things to make you suffer.
Find similarity, be comfortable enough to be productive, gain experience and realize mistakes, get frustrated, either walk away or accept the language and ecosystem as it is and learn how to work within it.
You can still use it as "C with classes" and STL containers and get a lot done. For that, you don't need to know any template wizardry or the subtleties of rvalue references.
Yup. In fact, boost is often a code smell, especially in modern C++. The standard library packs a great deal of functionality now, and things like boost threading aren’t needed anymore. Lambda functions (closures) and range-based for loops also greatly reduced the need for boost.
It’s still useful, just not super critical anymore. You won’t feel like you’re missing out if you avoid it.
Having something like Boost these days is quite far from essential and unless it was in a company's stack already or someone loves it I think you're unlikely to find it in greenfield projects.
Except if some policy enforces C++11 or C++14, especially in embedded systems, where compilers might be years behind in following the specs. For example, gcc has (almost) full C++17 support starting from version 8, which might seem old enough to be omnipresent, but still some QNX-based distributions might have stuck with 5.x.
I've been saying this about hardware engineers since forever. If a hardware engineer tells you they're a good software engineer, they're going to be a problem. I guess one day I'll find a unicorn, but practically always it's a massive red flag. It just raises the question: in which way are they flawed? Do they write shit software and think it's great? Do they have so little experience that they think software is simple? Do they just disregard everyone else's skillset? Do they fundamentally not understand their own limits? What is it that leads them to make such an utterly silly claim?
I'm a software developer who's led some embedded development projects in the past, and I definitely know where you're coming from. There are hardware developers who are fantastic at some parts of software development, but other parts require totally different skills.
In my projects, where we partly used DSPs and software-defined radio, I found that the hardware guys could be amazing at writing neat modules of DSP assembly, with carefully benchmarked time and space usage. I felt that they thought "hey, software is easy, it's just wiring a bunch of filters together!" But for anything requiring more sophisticated architecture (like interpreting an EPG embedded in the data stream) they wrote spaghetti code.
Often their stuff would be nice and terse, so it could look good at first glance if you didn't really understand software. Some parts would be efficient but buggy; other parts would be correct but with terrible scaling or arbitrary size limits.
Similarly, I'm sure if I tried to get involved in hardware, and confidently thought "this is easy, it's just wiring a bunch of functions together!", I might get some of the small stuff right, but the actual higher-level hardware architecture would be equally terrible.
This is reminding me of similar issues I've had with engineers that have been stuck in the backend bubble too long. They seem to operate in a constraint free environment and it's other devs who have to live with the awkward limitations the backend dev thought "is the right way".
You can solve this problem by performing schema design with stakeholders in the loop. If everyone agrees upon the tables/columns/relations ahead of time, then constraints can be enforced before any code is touched.
Do you think there is something specifically about HW eng thinking SW eng is easy, or just the classic underestimating a domain you have no understanding of?
Well, I see it from the hardware engineering side because I'm a hardware engineer who works closely with software engineers and has read one too many CVs listing "C/C++" as a language. Normally "So, tell me about SFINAE" or "Explain RAII" is enough to cut through that pretty quick.
I think a massive part of it is that software engineering tooling is better than hardware engineering tooling. The software engineers have it great! They have all these tools and documentation, and they're free and open source! So there's no barrier to entry for a hardware engineer to pick up C++. But if a software engineer wants to pick up, say, Verilog... Well firstly, you're going to need expensive hardware; secondly, the tooling is all closed-source, proprietary, and costs thousands to license. Oh, and there's no CI; the debuggers are Tcl-based tools from the 90s, etc.
A hardware engineer can pick up software tools and solve the tiny part of the orchestration that needs to be done in software easily, but there's no equivalent where a software engineer would pick up hardware.
RAII is a good question to weed out inexperienced C++ programmers, but I don't think asking about SFINAE is a very good interview question, unless the job involves heavy template metaprogramming, and even then it is such an obscure name for a fairly intuitive concept ("yeah so the compiler will only use the template definitions that work") I am sure there are many people good at template metaprogramming that don't know the term.
Edit: changed "incompetent" to "inexperienced" as it better reflects my opinion.
It's not a pure "if you don't know SFINAE you're out" but it gives you a very strong indication of which parts of the language they're actually familiar with (although maybe it's a little dated now), and for a lot of candidates you could go through a list of dozens of these acronyms and they wouldn't be able to reasonably identify any of them.
I'm pretty sure my brother doesn't know what RAII is, but he's a way better C++ developer than me.
The principle of scope and scope-bound resources (and scope-bound objects) really took off recently, and I'm pretty sure a better name than RAII exists now (especially since RAII doesn't mention deletion, and resource deletion is literally the point of RAII).
He might know about SFINAE; I didn't (but I never got into C++ metaprogramming, and he did).
> The principle of scope, and scope-bound (and scope-bound objects) really took of recently
"Recently"? You are joking. I suspect that neither you nor your brother knows much at all about C++. RAII and scope are both core ideas in C++ and always have been.
> RAII and scope are both core ideas in c++ and always have been.
I would say that's debatable. RAII definitely dates back to the earliest days of C++, but you don't have to know it in order to use the language. Even today there are many professional C++ devs who've never really learned RAII because they write C with classes style code. I first started learning C++ about 20 years ago and IIRC at the time usage of RAII was still in the minority but there was real momentum towards it being considered the "right" way to do C++.
Well, I started using C++ in the late '80s, and even back then RAII (possibly not under that (bad, IMHO) name) was widely seen as an important feature of the language. How else could you implement a string class (which you had to do yourself back then)?
The deletion (or better, the release, since RAII is not just for memory) is implied in the acquisition part.
Also, RAII is not (just) about scope, but about tying, recursively, the lifetime of one resource to another. You might even have manual destructor calls at the bottom of the tree, and the rest would still be RAII. If it were just about scope, higher-order functions would be sufficient.
In the vast majority of fields of engineering, it's possible to exhaustively test your product.
Your power supply is specified to accept 90v-260v 50-60Hz and deliver 12v at 1 to 20 watts, in ambient temperatures between -5°C and 55°C? You can test over the entire input range, your expert knowledge assuring you that if it's tested at 20°C and 22°C there's no need to test at 21°C.
On the other hand, if your software is specified to correctly validate a SAML assertion? It's simply not possible to enumerate all the states the system might encounter.
I wouldn't say this is actually true: while there are generally fewer dimensions in hardware testing, it's not actually a given that if you have tested over the range in each of them that you will certainly be OK across the entire space (in the same way that having unit tested each of your functions at extremes doesn't mean the whole system will work as intended), not to mention it's not very common that you actually have anywhere near proper coverage of each of those dimensions (and don't get me started on lifetime and aging testing). Also, testing is often very expensive so it's also very common that you don't actually have test results for the version you're actually building (one of the reasons safety critical hardware is so expensive is because regulations generally require you do this as opposed to handwaving).
(This isn't to say software isn't more complex than hardware: just that I think software engineers have a tendency to romanticize the level of competence and confidence other engineering disciplines have in their designs)
> I wouldn't say this is actually true: while there are generally fewer dimensions in hardware testing, it's not actually a given that if you have tested over the range in each of them that you will certainly be OK across the entire space
I agree that the point where the power supply dissipates the most power, or the point where it starts making an audible whine, might be in the middle of its operating range rather than at its limits. You would certainly want an experienced engineer who can say what density of parameter sweep is appropriate.
This is what I was trying to say when I said that when testing a system designed to operate between -5°C and 55°C, expert knowledge might assure you there was no need to test at 21°C if it had been tested at 20°C and 22°C successfully.
I've designed power supplies and I can tell you that the "density" of the test doesn't matter so much as the validity of the test.
One way we assess stability is by doing frequency sweeps. What allows us to infer stability from that test is the assumption that the system is linear time invariant (LTI). We assume it's LTI because we tried to design it to mostly act that way, even though it's really not.
For an LTI system, a frequency sweep and a step response are completely equivalent and interchangeable; it doesn't matter which one you use. But what actually happens is that the step response test reveals different information. This can only happen if the system is not LTI. Therefore neither test is conclusive about stability, though they're still informative, which is why we do them. 10x density would be no more conclusive.
Another problem is that some stability issues have very low observability. The blip or offset they cause in testing is indistinguishable from normal switching noise and measurement imperfections. They only really reveal themselves when they become a problem in the field. 10x density wouldn't find the issue because it's caused by some other combination.
Explaining and fixing those issues is tough, and anticipating them requires an arsenal of models developed from someone's hard earned experience.
I don't mean to diminish the density issue. That's still a real thing, but it's kind of an orthogonal problem. The formal term is optimal design of experiments, and in simulation we use search methods like Monte Carlo because it's not feasible to enumerate the design space even on a computer, much less in a real test.
Testing doesn't enumerate the physical design space. It only enumerates a model. Only very simple theoretical models can be enumerated in practice. We have to design with much more sophisticated and accurate models that can't be enumerated, and we have to settle for knowing that even those models are inadequate to describe everything we need to account for. We do our best to design in a way that makes the simple, testable models valid and informative, but it's impossible to completely succeed, so we only ever test "enough" (to make money).
There's probably a danger in commenting on adjacent fields. If you're a HW guy and you write some software to exercise your hardware, you might think it's easy because you've only stepped one layer into the interface.
It's probably similar in a lot of areas, you stand on the tip of an iceberg and you think you've seen the whole thing.
I see something similar in trading code. You get someone who knows a lot about how markets work, and they figure out that you can hack a few things together in python. Now they're a software engineer.
>classic underestimating a domain you have no understanding of
To be clear, I'm not a hw eng myself, so I don't have a deep understanding of hw eng, but I have a friend who's studying EE, and from what he asks/tells me sometimes, there's a non-trivial amount of coding involved depending on what you're working on. So maybe it's because hw engineers may have experience writing lots of code that is non-trivial, yet still nowhere close to what a sw eng does, hence giving a wrong impression.
Same reason we (SW engineers) think X field is easy and can be self-taught: the fundamentals are easy to learn, but figuring out where and how to deal with edge cases when they pop up is something that can only be learned with experience.
I see the same with a certain subset of finance people who learn Python and start thinking coding is easy. Yeah, fair, you can whip up a simple Jupyter notebook using Pandas to analyze a time series. Now build out a distributed, fault-tolerant ETL (CRUD++) system that follows all business rules, is maintainable/readable, and can scale to at least 100 "servers."
Perhaps not the most apt comparison -- but the fundamentals in every field are easy to learn; but working at the edge is something you have no experience in until you do.
> Same reason we (SW engineers) think X field is easy and can be self-taught: the fundamentals are easy to learn, but figuring out where and how to deal with edge cases when they pop up is something that can only be learned with experience.
Ironic post from the person who thinks Reddit would be trivial to recreate. Your whole account reads like a parody.
People who are honest about their skills don't get the job. People who say they are the living embodiment of sunshine and rainbows get the job.
They say it because it's the correct answer in an interview; maybe they believe it, maybe they don't. Either way, they're saying the things they need to say to be employable.
I dip in and out of C++ (have done so since it came about, which sort of dates me) and I don't see it as a unified language. I see it as a set of idioms that depend on the use case and libraries you have to use (boost, Arduino, XML processing, etc.).
It literally has far too many ways to skin a cat, from light grooming to full on mecha brain transplant.
That being said, I concur that C++ is the language where "we need some rules" is the most prominent. C++ has awesome features, but the number of people on the planet who know all of them is way too low to rely on them (you probably can't hire any of them anyway). But a team of 10 people is likely to know a lot of them between them, so you need to work with the intersection if you want your code to be maintainable.
Many years ago as an undergrad student I applied for an internship. They asked if I knew "C++", and having studied it and programmed in the only 'C++' I knew, I said yes. After arriving at the internship they showed me the codebase, and a lot of it looked very foreign to me. Turns out they were using Symbian C++. I had quite a few weeks of extended evenings and a few all-night coding sessions to try to keep up and get basic things done. Luckily, I somehow managed to deliver on what was asked of me, but lesson learnt!
I sat next to a guy on the C++ Standards Committee and he'd be the first to admit that he didn't actually know C++ (he knows more about C++ than anyone else I've ever met).
The overwhelming majority of people who say they know C++ well do not. This isn't peculiar to C++, of course, but it does seem to be a larger effect with C++ for some reason.
LOL, someone told me in an interview that they are a C++ expert. I asked them if that refers to their company, or the wider community, or the C++ Committee members or the C++ compiler frontend maintainers.
It should be a science. It should be closer to core math. And yet it often feels closer to performance arts w.r.t. the individual's level.
I also immediately noticed it was an old post, because the new standard versions have all but eliminated the frustrating complexities of the language. :D
When a crooter asks me "On a scale of one to ten, how well do you know C++?" I start by calibrating the scale: one is a complete neophyte and ten is Bjarne Stroustrup. I then say I'm a 5, maybe a low 6. Or, I did back in the early 2010s.
The joke is that everyone, from the most junior neophyte to the expert wizard will judge themselves as a 7. Bjarne and a few elder gods might go as high as an 8.
I’ve definitely met people who actually did know C++. Turns out they were also on the committee or frequently talked to other people on the committee. If you spend a large amount of time investing in learning it.. it is possible. However, few people, including myself, are willing to make that trade when there are so many other things to learn. If a new grad out of school were to claim they completely knew C++, ha! I’ve been a C and C++ programmer for 20 years and I’d never say that!
Building on this a little. I’m very worried that Rust is growing the same problem. Although it does seem that one can be more successful in learning a subset and interacting with other’s code without everything falling over.
There is a general rule that people who know less think they are experts, until they find out that they don't and start learning again... It's called the Dunning–Kruger effect.
It's also generally the case that people who just discovered the Dunning-Kruger effect don't think it applies to themselves, ESPECIALLY if they just climbed "Mount Stupid".