I’ll offer a perspective unrelated to pay: It’s a pain to start learning C++, and even after you do, older devs will roast the hell out of your code because your book/tutorials of choice forgot to mention a crucial (in their opinion) feature that you absolutely should/shouldn’t use! Not to mention you’ve only programmed on Mac/Linux so far & Windows is totally different, has a different compiler, different ways to install libraries, different C++ standard features supported, etc.
I like C++, and tooling has come a long way, but it’s so much easier to download rust/Python/node and you’re basically set on every platform & immediately ready to go. NOW consider pay, and even someone enthusiastic about programming C++ will reconsider.
As a C++ enjoyer I would 100% have learned Rust if it was around when I started... just for Cargo alone.
Problem is, now that I've already done my time in the Makefile trenches, there's little incentive for me to re-learn another systems lang and have to compete with lots of smarter people, with more Rust experience, for jobs whilst giving up all my arcane knowledge of CMake and friends.
Rust being popular atm is great for C++ if I'm honest as it siphons away a new gen of systems programmers over to another lang allowing me to sell my dark services for more coins.
> it siphons away a new gen of systems programmers over to another lang allowing me to sell my dark services for more coins
I've always been curious about whether it really works this way. To first order, one would expect supply and demand to lead to increased wages. But whenever I've looked into the reports of "COBOL programmers are getting paid a fortune because there are so few of them alive!", the reality has been that wages are ... unimpressive.
My hypothesis is that as the talent pool shifts elsewhere, the market dries up. For instance, new projects aren't started in COBOL any more (well, at least anywhere I've seen...). You're left doing maintenance on ancient systems, where the calculus is always "this needs to be cheaper than a new solution". Maybe it's the "liquidity" of the job market for a given language?
No doubt, though it's worth considering the tech industry is a bit more diversified now than the days of COBOL number crunchers and Space Invaders. What you're describing seems closer to PHP vs. Javascript than something as entrenched as idiomatic legacy C++.
And heck, if I'm wrong I'm sure somewhere like AWS will entomb me as an SRE, or some such, so even long after I'm RMA'd my spaghetti abominations can continue to haunt the cloud.
I did my time in the Makefile trenches, too. I'm happy that I've learned other languages. There will always be smarter people than you, but you will bring your particular knowledge and do new things. Keep on learning!
I was a Cobol developer for a year in 2016 in Prague. It's hard to do proper market research because the Cobol market is small. My limited point of view is that it's not worth the stress. There is nothing new to build with it, and young developers are there just to replace the old ones and maintain what's left. Even though it's hard to find a Cobol developer, I couldn't find a Cobol position that would pay more than a React developer gets.
The funny part about Cobol is that even if you say you know the syntax, there are site-specific implementation details/features which are even MORE critical than just the language.
you have picked up so much meta knowledge (debugging, testing, build optimizations), domain understanding and soft skills that making the switch might be less tough than you think. Plenty of coins in Rust land, too!
This. Learning the language as a whole is an incredibly daunting task. It's ugly, it's built on a combination of OOP and procedural C, and quite frankly, the class syntax in combination with header files has aged pretty badly. I feel like I have a lot of code I need to write twice.
The standard committee keeps tacking on new features and decade old footguns are promised to never be fixed (and somehow, people consider this a feature).
It's only ugly because it was meant to be a federation of different programming paradigms. You can mix low level C calls with homegrown RAII frameworks, or mix traditional OOP with functional programming. Once you start throwing in preprocessor macros, meta programming, and templates, you can have a codebase that is incredibly complex to understand and maintain.
As for the header/implementation separation, I always thought it was a good idea to have that flexibility. I've worked in codebases where the compilation units were fairly large and complex and would be implemented in separate files; this would be similar to partial classes in C#. I've also worked with some code that would determine the target compilation units at build time. Not saying it was the best approach at the time, but it was one option.
If anything scares off people from C++, it would definitely be the breadth of the language, though. You could work in C++ for over ten years and possibly not even encounter or use half of the functionality it provides. And because of its breadth, going from one C++ codebase to another could look completely different. Especially when looking at something written for a Unix/Linux/Posix system vs Windows.
templates are one of the best features of c++. they're complicated because they're powerful. they certainly aren't perfect, but you won't find anything like them in other popular languages.
I coded in C++ professionally for about 15 years and don't remember it being "incredibly daunting". Not easy, but doable. But yes, I also exited the pool; now I work in Golang and just wouldn't go back to C++.
As someone who has been writing C++ at Google, this exactly. Despite all the tooling, guidelines and "internal magic", C++ is still an abomination. And no it has nothing to do with memory management, I actually do like C.
I love how Eric Raymond describes it as "anti-compact", because, well, it really is. C++ as a whole should be deprecated -- and no new projects should use C++ (unless for some very odd and specific reason).
>C++ as a whole should be deprecated -- and no new projects should use C++ (unless for some very odd and specific reason).
And what can be used instead of C++? C? If C was better, then C++ wouldn't have been invented. Rust? It's much more painful to use than C++. Zig? It's immature and has very low usage. Nim? Has very low user base. Julia? It isn't solving the same problems.
I don't think that's true. I started C++ using the 'cfront' system; the major push wasn't because C was somehow lacking, it was because it was fashionable to do object-oriented programming, which is hard in C. Various horrible patterns were invented using function pointers so C programmers could feel like they were doing object-oriented programming, and all of them sucked.
When cfront turned up, the first versions basically automated that suckage. C++ did get better, but it was still horrible compared to CLOS or Smalltalk. This was largely due to weirdness in how the constructor/destructor ordering worked, the giant bogosity that is multiple inheritance, and a few other massive undefined behaviours that every compiler handled differently, but I think at this point it's fair to point out the language is bad and things like C# are so much better it's not even funny any more.
Yes, OOP kind of sucks. But you can use C++ without OOP if you so wish.
C# is much better but still it isn't a systems programming language due to garbage collection. So while you can solve some classes of problems easier and better, it can't replace all of C++ use cases.
Learning rust is like cycling over 2 big hills. It’s exhausting and painful if you’re in any way out of shape.
Learning C++ is like cycling from SF to LA. It starts easy enough and every day you’re making a lot of satisfying progress. But you have so far to go because of all the features and quirks of the language.
It’s probably easier to get started learning C++. But it’s also much faster to finish learning rust and be able to read almost all rust code. (Pin still scares me though)
Having cut my teeth in Z80 Assembly, I could never understand people's aversion to pointers. You cannot do anything of any practical use, on hardware, without them.
Regular Rust code uses references almost exclusively. (Unless you're working in kernel space or building fancy data structures.)
The big difference between C++ and Rust is what happens when you get something wrong. C++ has lots of undefined behavior and nasty surprises for the unwary. (I led a C++ project for a decade and saw it all.) In Rust, if you get something wrong, the compiler typically refuses to compile it. Which is also very frustrating, but the frustration is all up front.
It would also be easy to get started learning Rust, if people cared enough about doing it. The best way to learn Rust as a first programming language is to start with pure value-oriented programming at first, liberally using .clone() when necessary. Then introduce borrowing, with shared and mutable references; continuing with features involving interior mutability (Cell, RefCell etc.).
This is admittedly quite different from the way people usually learn C++, but it makes sense on its own terms. It's much closer to how higher-level languages like ML and Haskell are taught, and people have successfully learned those languages in introductory programming courses.
I like your analogy, but I do think there are features of C++ that are big hills as well. To me it would be SF to LA with some big ups and downs on that long ride.
I'm also curious what's difficult about pin in rust? It basically just disables moving of the object for the lifetime of the pin.
> I'm also curious what's difficult about pin in rust?
I understand the concept. It’s the syntax which trips me up. You don’t mark structs as Pin. You mark them as !Unpin. And then, when is it safe to pin_project fields? Every time I read the documentation I understand it for about an hour. 2 weeks later I need to re-read the docs again to convince myself I’m not subtly messing anything up.
I’ve also gotten myself in hot water writing custom Futures, trying to wade through the barrage of compiler errors at the intersection of lifetimes, Pin and async. It’s a lot harder than it needs to be. The compiler holes around GAT, async closures and async trait methods don’t help.
I’m still not comfortable with Pin, UnsafeCell, raw FFI (lifetime and ownership handling across the boundary is tricky), and complex macros. But I also think that you don’t need to understand these at all to be an effective Rust dev.
Except that the C++ substitutes for those features (where applicable; C++ has nothing like GAT, and has to make do with complex template meta-programming) are a lot harder too.
Microsoft has apparently ported DWrite to Rust (the C++/WinRT team is now having all the fun in Rust/WinRT, while ignoring the lack of tooling in C++/WinRT after they killed C++/CX), Azure IoT unit is adopting Rust on Azure Sphere alongside C while not supporting C++, and at Ignite Mark Russinovich did mention they are planning to port some sysinternals tools into Rust as kind of POC.
I do respect Mark and his Sysinternals products. However, I could not care less what he thinks is the language I should be using. I choose what works for me, and as long as it brings me healthy dosh I am not in need of anyone's "approval".
As someone who has used C++ professionally for two decades, I disagree. Rust's pain is superficial and all up front. C++ pain is death by a thousand paper cuts, especially if you have people on your team that aren't intimately familiar with its pitfalls and its more modern constructs. I don't plan to write a new C++ project ever again, unless there's some very compelling reason to do so. Rust is an absolute breath of fresh air.
I think this is probably the most salient point. C/Rust then Java. I don't know why people hate on Java so much. And I think C lives fine along side Rust.
> Java forces OOP and it's verbosity is worse than COBOL.
Java doesn't force OOP in any meaningful way. I mean it does, in that you need to wrap all code in a class, but that's a non-issue (one line of code at the top and a closing bracket). You can write Java code where all functions are static and nothing is object-oriented, when that's the best match for your needs.
On verbosity, you can latch on to the ConstructorAccessorMapFieldGetterFactorySingleton nonsense if you want but that's on you. Nothing in the Java language forces that on anyone. Having been writing Java code since 1996 I've never written such code.
Java is based on Objective-C without any of the nice flexibility, doesn't have value types, has C-like numeric types except they're less flexible yet not any safer, and its culture thinks you organize code by putting it into 6+ layers of namespaces inside other namespaces.
My rule is that languages are good if they have value types, which explains why PHP is good.
> Java is not very fast compared to c++, or you'd see it in embedded systems all over.
Those are quite different domains, with only minimal overlap.
Embedded systems more often than not are not seeking maximum performance. What matters is smaller code size and running on minimal hardware. Java doesn't do so well there since you have the overhead of the VM. Java is rarely a sensible choice for embedded code. Just use simple C, or Rust if it works for the use case.
(Yes I know project green was originally about embedded set-top boxes! But times changed.)
Performance critical systems are usually large servers for either high throughput (web or other server traffic) or low latency (HFT) applications. The opposite end of the spectrum from embedded. That's where Java shines. You might be able to beat Java with carefully hand-tuned C++, but just as likely the JIT might beat you. So for maximum-performance server code combined with more sane developer productivity, Java cannot be beat.
What's fast really depends a lot on what you are building - without more details about the challenge at hand any statements about performance are unhelpful. A huge backend system? The logic in a toaster? A space ship?
Unless you're doing something really highly specific to C++ (like Cuda for instance or deep integration with big C++ codebase), saying that rust is painful compared to C++ is laughable.
Julia is solving many of the same problems as C++. GPU compute, HPC, high performance algebra kernels are all well within Julia's purview. It's not (at least yet) good for things like writing OS kernels but there is a large amount of overlap with C++.
Static types are not required for things like type inference and optimizing compilers. It's just that many dynamic languages are not written with performance in mind, and have semantics that make optimization impossible.
Julia was designed from the ground up to have C level performance, and well written Julia code does that easily in throughput focused scenarios.
Julia's intermediate representation which it compiles dynamic code down to is statically typed, and any dynamism just manifests itself as the compiler waiting until the types are resolved at runtime before running again and generating new specialized code.
If your code is written so that the types are all inferrable, there's no pauses.
it's not. it uses type inference to infer types and llvm to compile down to native code. differentialequations.jl is often faster than the fastest C and Fortran solvers, and Octavian.jl often beats MKL at matrix multiplication.
Write a sufficiently complex memory safe program in C++. I dare you. It's been proven again and again that humans can't do it. And calling Rust more painful than C++ is just absurd.
> C++ as a whole should be deprecated -- and no new projects should use C++ (unless for some very odd and specific reason).
It's quite a bit easier than Rust and no other popular language has its most important features (cross platform, interfaces with syscalls and other libraries easily, manual memory management possible, likely to be supported for a long time).
Having used both C++ and Rust extensively, I would say that it is easier to get something to compile in C++ than in Rust, but I find Rust overall much easier. C++ is really very complex, and even after more than a decade of using it there are so many things I don't know. With Rust I feel like I have a pretty solid grasp of most of the language.
I agree.. I've been writing in Rust for about 2 months and I find it to be a much easier surface than C++.. getting over the borrow checker isn't as bad as some make it out to be.. in fact, if it compiles it largely works.. you might be cloning one too many strings as a newbie, but you get the hang of it quickly.. when I wrote a lot of C++, I used a really small feature set.. but that was like 20 years ago.. now (apparently) it's a lot better.. with Rust, you need to think about memory, stack, and heap, but it's not ridiculous. The type system is really great. Granted, I'm not writing a database, but so far, I see a lot of really good libraries that integrate really easily, and it's just fun to use. The functional features and futures feel a lot like scala... and it's fast.
oh and the last time I wrote in C++, I was using gmake.. all those compiler and linker switches.. header and linker search paths.. ugh.. I felt like I was launching a rocket to the moon just getting some of that to build. don't miss that.. chasing down memory corruption in threads with gdb.. also painful. I get that C++ is much better now, but I haven't really used it in a long time so can't comment.
Yes, but when you want to do linked structures, really generic code without repetition, or decent compile-time programming, then C++ is very powerful at that.
I have found table-generation and processing at compile-time to be of big help.
For example, pre-computing values or parsing at compile-time.
This is useful for example in the following situations:
- You want to embed things at compile-time but keep things apart in a file. This file can be used at run-time when developing but is embedded and literally vanishes (you put your config directly in a struct) when deploying.
- You do not want to spend extra time at compile-time.
- No need to protect data at run-time with mutexes, etc.
The simplification it can yield is not immediately obvious. It can take a bit more work but the result is worth it. D is more powerful than C++ at this, btw.
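A minimal sketch of the kind of compile-time table generation I mean (assuming C++17; the squares table is just an illustrative stand-in for whatever you'd precompute):

```cpp
#include <array>
#include <cstddef>
#include <cstdint>

// Build a lookup table at compile time. The table is baked into the binary,
// so there's no run-time initialization to protect with mutexes or pay for
// at startup.
constexpr std::size_t kTableSize = 256;

constexpr std::array<std::uint32_t, kTableSize> make_square_table() {
    std::array<std::uint32_t, kTableSize> table{};
    for (std::size_t i = 0; i < kTableSize; ++i) {
        table[i] = static_cast<std::uint32_t>(i * i);
    }
    return table;
}

constexpr auto kSquares = make_square_table();

static_assert(kSquares[12] == 144);  // checked by the compiler, not at run time

int main() {
    return static_cast<int>(kSquares[3]);  // returns 9
}
```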
It _looks_ easier because it lets you do anything you want. C++ makes you feel like you're going faster, but then you spend two weeks debugging a weird memory issue.
I come from functional programming background, so I'm all for taking a little bit longer to make my code compile if that means it'll work. I'd rather deal with compilation errors than waking up at 2am to debug a stupid segfault.
C++ has improved a lot over the past decade or so. Compilers can add runtime checks now that make most memory bugs easy to detect and diagnose. C++ has plenty of warts and legacy stuff that you wouldn't keep if you were designing a language from scratch, but it is way better than it used to be and there still isn't anything that fills the same space C++ targets. Rust tries to but IMO it is too opinionated.
> It _looks_ easier to me because it lets me do anything I want. C++ makes me feel like I'm going faster, but then I spend two weeks debugging a weird memory issue.
Not only that: toolchains, IDEs, static analyzers, mature frameworks for Protobuf, Capnproto, C compatibility, Python wrapping of APIs including little friction: deriving classes and exception conversion in pybind11 for example...
There are lots. It is just that some ppl think real world is just like when they sit down to code a zero-dep, no time-pressure thing.
If you take criticism of any activity you partake in as being "damned rude" and a violation of site T&C, then there'd never be any discussion at all.
You are not your job, and prepare yourself for listening to honest criticism of the things you base your identity around or you will find it very difficult to learn and grow.
Apologies if it came as rude or hateful, that wasn't my intention.
Funny thing is I'm a C++ programmer myself right now as I mentioned :) Even if we decide to "deprecate" C++ today, Google alone would have large enough C++ codebase left to maintain for the next two generation of programmers.
So no, I don't think it means we should demote or fire all C++ programmers -- but at the same time I'd like to note that as a programmer no one should be married to a single language -- languages come and go.
There's also the opposite problem: older devs will use outdated features and have best practices in place that are now considered antipatterns. This exacerbates the existing difficulties to modernize legacy code bases and creates incentives to not do so. New devs are forced to learn C++ from 20 years ago which is much worse than modern C++.
My experience with C++ is that every five years I come in and see people claiming that not only should I never do whatever I was told five years ago, but actually it was never popular and nobody ever did it. Mostly about ways to allocate objects or use smart pointers.
I'm still wondering why `(int)` is spelled `reinterpret_cast<int>` in C++. Do they just like hitting keys on the keyboard?
> I'm still wondering why `(int)` is spelled `reinterpret_cast<int>` in C++
1. because it is greppable and unsafe
2. because there is also static_cast and dynamic_cast
3. Because in C (int) is all three at once and not greppable -> C will let you do whatever, C++ will not let you do with a static_cast everything you can do with a (T) cast.
All in all, it is nice to ask, but if you do not know what you are talking about except for the surface, then, you should not say:
> Do they just like hitting keys on the keyboard?
No, we do not, but we hate even more to get an ungreppable, undecipherable (semantically) casting lost somewhere in several tens of thousands of lines of code :) This eases finding the suspicious code more easily.
A reinterpret_cast is something that shows up very rarely, so it’s fine that it’s not concise. And when it is used, it usually can do with a function naming it. I try to have a “no raw reinterpret_cast” view unless it’s chars to unsigned chars for string stuff. And as others have said, it’s grepable and can’t cast away constness. If I’m handing a const unsigned char* to a function taking a char* that I know will modify the data, I don’t want that to be (char*)ptr. I want it to be const_cast<char*>(reinterpret_cast<const char*>(ptr)) because yikes, it should stand out because it’s awful.
And then I’d wrap that godawful cast in a function overloading the legacy C interface, so the overload has one job: to encapsulate the logic that the legacy function isn’t const correct. So then I’d have like void wrappedFoo(std::string_view s) { foo(const_cast<char*>(reinterpret_cast<const char*>(s.data())), s.size()); } with lots of comments about the cast.
But it is not unsafe, or rather, the C++ casts aren't more safe. They have the same semantics.
There are differences casting between class types, but with numeric types the issues are how to handle the value not fitting in (or being imprecise) in the destination type. Casting a value to int in C++ that overflows is still UB.
It doesn't fix C's other strange numeric issues either, like how `unsigned short` * `unsigned short` produces `int`.
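A small illustration of that promotion quirk (assuming the usual 16-bit `unsigned short` and 32-bit `int`):

```cpp
#include <cstdio>

int main() {
    unsigned short a = 65535;
    unsigned short b = 65535;

    // Both operands are promoted to (signed) int before the multiply, so
    // 65535 * 65535 = 4294836225 overflows a 32-bit int: undefined behaviour,
    // even though every value involved started out unsigned.
    unsigned int c = a * b;

    std::printf("%u\n", c);
}
```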
C++ casts are safer. I think you are confusing concepts here.
There are C++-style casts that are compile-time errors that C would allow you to do in C style. For C everything is potentially a reinterpret cast basically.
they are safer because if you try to cast something that's not valid the compiler won't let you. you can then choose to ignore it and be unsafe, or figure out the problem and fix it.
There are more casts than those. And a reinterpret cast is a superset of a static cast.
If you want to narrow, you would use static cast, not reinterpret cast.
If you want to reinterpret a set of bytes as another object then you reinterpret cast (actually, use std::bit_cast; it will catch more errors). So yes, you can still do that, but consciously.
In C you could even turn a cast that is essentially a static cast into a reinterpret cast by accident and the compiler would say nothing.
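Roughly the scenario I mean, sketched out (the types are made up for illustration):

```cpp
struct Base    { int x; };
// Originally: struct Derived : Base { int y; };
struct Derived { int y; };          // inheritance removed in a later refactor

void use(Base*) { /* ... */ }

void f(Derived* d) {
    use((Base*)d);                  // still compiles: the C-style cast quietly
                                    // became a reinterpret_cast between
                                    // unrelated types
    // use(static_cast<Base*>(d));  // error: static_cast refuses the conversion
}

int main() {
    Derived d{};
    f(&d);
}
```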
I'm assuming they didn't mean "grep" in a literal sense, rather that it is easier to find keywords like "reinterpret_cast" when scanning your eyes through code, whereas with C-style casts you'd have a much harder time.
Now find casts that are known to be as unsafe as reinterpret cast for the whole set of types your codebase deals with to audit a violation of the type system. What would you grep?
I would grep 'reinterpret_cast' and I would set warning as error for the c-style casts. So you can assume my codebase does not have any.
I will find any cast to reference, pointer, etc. for any type. How would you do that if you do:
> I'm still wondering why `(int)` is spelled `reinterpret_cast<int>` in C++. Do they just like hitting keys on the keyboard?
They are doing different things. For example, `reinterpret_cast<int>` cannot cast away constness but `(int)` can. There are other differences and you should never use C-style casts. In most C++ code bases I worked on, there were static code analysis checks pre merge that would prevent anybody from merging code with C-style casts to master for that reason.
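A tiny illustration of the constness point (hypothetical function names):

```cpp
void mutate(char* p) { if (p) *p = 'X'; }

int main() {
    char buffer[] = "hello";
    const char* msg = buffer;                  // a const view of writable data

    mutate((char*)msg);                        // compiles: the C-style cast silently
                                               // drops const (a hidden const_cast)
    // mutate(reinterpret_cast<char*>(msg));   // error: cannot cast away constness
    // mutate(const_cast<char*>(msg));         // compiles, and the intent is explicit
}
```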
This is a very good example of C++ being difficult to master. There are heaps of ways of doing one thing and usually there are just one or a few of those a good practice, but to explain the best practice's rationale, you need to know and tell a whole story. A simple cast even needs a story. Let alone smart pointers combined with normal pointers; ampersands and const qualifiers that have different meanings depending on where you put them and so on. You can fill a small book with explaining initializers, a big book with explaining templating. In the same amount of pages you can explain the complete ANSI C.
Absolutely. The reason for this is that C++ has been around for a long time while maintaining backwards compatibility (to C too mostly). I know no language where it's as important to have excellent automated tooling that restricts usage of the language for your project. Thankfully tooling is pretty good nowadays but there is very little documentation on how to start if you don't have an expert on your team to help with that.
Most people in 2022 have plenty of free bytes on their disks so they don't really care about how long their code is. If you want short code, I can recommend Fortran 77, no identifiers longer than 6 characters allowed.
The difference between C++ today and what we had 20 years ago is not that much. C++ today has functionality built in that we didn't have but the OS provided, and the language now has many features we wished we had years ago.
Example: Variadic templates. When that happened, the problems we would bang our head against the wall with magically vanished. Twenty years ago, every implementation of the std library had serious bugs in it. It was avoided. Not the case today. I remember std::map iterators not working. Watcom's C++ had serious flaws with it, but many exciting products were still created with it.
We didn't complain. We were happy, and worked around our problems.
I think the root of people's issue with C++ is the template metaprogramming portion of it, which you can completely ignore, and grow into.
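To make the variadic-template point above concrete, a minimal sketch (the fold expression is C++17, used here for brevity):

```cpp
#include <iostream>

// Before C++11, forwarding an arbitrary number of arguments meant writing
// (or macro-generating) one overload per argument count. With a parameter
// pack it is a single function.
template <typename... Args>
void log_all(const Args&... args) {
    ((std::cout << args << ' '), ...);  // fold over the comma operator
    std::cout << '\n';
}

int main() {
    log_all("answer:", 42, 3.14);
}
```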
Template metaprogramming is hardly ever needed anymore. C++20 has better built-in features for most of what was done with it, and they compile much faster.
Almost all C++ programmers completely ignore it. You can too.
template metaprogramming is used extensively in libraries. c++20 didn't really change that; it makes metaprogramming nicer to read, but it does not obviate the need for it.
C++ is a tool and it still has its use. If you have ever tried to slowly sprinkle Rust into a large C++ code base you will know that it's hard, laborious and error-prone. For completely new projects that don't need existing C++ libraries: sure, I wouldn't use C++ nowadays. There's tons of good reasons that lots of actively developed code is still C++ though.
As an older dev, screw those older devs that 'roast your code'.
The really good ones won't do that. The ones that do suffer from some weak superiority complex. I had three awesome mentors when I started writing C, and a couple great mentors when I did what little C++ work that I had to do.
The only scold I ever dealt with wrote overly wrought, overly complex crap. Kitchen sink patterns to handle any possible future variation, instead of solving the problem at hand.
Stay away from people like that, they'll just turn you into the same old grouch.
90% of what people do today is import some open source library or package to do the heavy lifting and weave them into an app using a scripting language. It is sort of the MS Basic approach of the web app world. Nothing wrong with that. C++ is used for a different class of problems that are not the main focus of the industry anymore. I would likely choose something like Rust today, but at the time C++'s star was rising there really weren't a lot of other better options.
I also love Python the language, but it’s hard to keep using it when it’s so slow. Parallel processing helps, but it’s still slow. Definitely dumping pandas the first chance I get. It’s one of two major bottlenecks with the other being anaconda for Windows. Maybe the culprit is running Python on Windows since it relies on so many parts of nix?
Try updating your Python environment and also install ALL recommended dependencies for Pandas (and for Geopandas if you use it). I had one old env in Anaconda (Python 3.9.12, Pandas 1.4.2) and then created a new one in Mambaforge (Python 3.10.6 with Pandas 1.5.0). That sped up one of my experimental projects from ≈30 min to ≈5 min. Use Python code only for glue and leave extensive calculations to C/C++ code. Pandas and Geopandas have recommended C/C++ dependencies which dramatically speed up calculations, and in newer versions, I guess, they have improved integration with these dependencies.
PS: I also recommend everyone use Mambaforge instead of Anaconda, because it uses the Mamba dependency solver written in C++, which is orders of magnitude faster.
> Use Python code only for glue and leave extensive calculations for C/C++ code.
That would completely defeat the point of Python for me. I’d rather switch to typescript or C# / Java before I code in C again, but you’re right. Fast Python is an oxymoron. However, in my case my bottleneck is the pandas library. I have to see whether workarounds like Dask work
> It’s a pain to start learning C++, and even after you do, older devs will roast the hell out of your code because your book/tutorials of choice forgot to mention a crucial
I was a C++ programmer for 25 years. For the last 3 years, I've been using the Rust language in my daily job and for all my hobby projects. I will not return to work in C++ again. No money can make me change my decision. Nowadays, programming is a pure joy for me again. This was not the case when I had to work with C++.
I doubt that's true. I haven't used C++ for many years now. I was in your boat a few years ago (C/C++ since 1990) but now 6+ years Python then Rust centric. I still think in C++ terms when writing rust, although most of the time it's "I would need 10 lines to do this 1 line of rust code were I still using C++."
I dabble in C++ occasionally as a contractor. No problem picking up the new features and making useful contributions.
I moved into managed compiled languages back in 2006, still dabble in C++ pretty much ok, and code is relatively modern (enjoying modules in VC++ nowadays).
same here. I've pounded C++ and C code for 20+ years. Rust is the right tool to replace both of these. Long live Ritchie (well maybe not him but his work), Kernighan and Stroustrup.
long time C++ programmer here too, 30ish years, but I don't use it for much anymore. While I like that the language has improved, it's just a nicer experience coding in other languages.
I agree. No matter how much the graybeards bash Rust on HN every time it comes up, programming in Rust brings me joy and I will never use C++ again if I have a choice.
Not the person you asked, but for me (coming from C++20, with lambdas, async, etc), the big win is that the borrow checker automates away boring PR comments about "you used std::unique_ptr in a non-idiomatic way that is technically safe, and it bleeds memory unsafety into some random API, so write this level N+1 magic instead".
It also checks that all my threaded code is data race free.
On the downside, its support for safe (in the sense that the compiler checks it) lock free programming is basically non-existent, which means that stuff that would be easy in C++ ends up being in Rust unsafe blocks that you need a PhD in type theory to reason about.
> On the downside, its support for safe (in the sense that the compiler checks it) lock free programming is basically non-existent, which means that stuff that would be easy in C++ ends up being in Rust unsafe blocks that you need a PhD in type theory to reason about.
I'm not familiar with the C++ built-in facilities for lock-free stuff (but learning about them currently)
Could you expand on this if you're willing, maybe with some pseudocode?
I've also been curious about things like cache alignment, aligned memory, and false-sharing size detection in Rust -- all of which C++ has as std built-ins
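For reference, this is the sort of thing I mean by the C++ std built-ins (a sketch assuming C++17 and a standard library that actually ships std::hardware_destructive_interference_size; some older ones don't):

```cpp
#include <atomic>
#include <functional>
#include <new>      // std::hardware_destructive_interference_size
#include <thread>

// Two counters bumped by different threads. The alignas padding puts each
// atomic on its own cache line, so the threads don't invalidate each other's
// line on every increment (false sharing).
struct Counters {
    alignas(std::hardware_destructive_interference_size) std::atomic<long> a{0};
    alignas(std::hardware_destructive_interference_size) std::atomic<long> b{0};
};

int main() {
    Counters c;
    auto bump = [](std::atomic<long>& n) {
        for (int i = 0; i < 1'000'000; ++i)
            n.fetch_add(1, std::memory_order_relaxed);  // lock-free on mainstream targets
    };
    std::thread t1(bump, std::ref(c.a));
    std::thread t2(bump, std::ref(c.b));
    t1.join();
    t2.join();
    return (c.a.load() == 1'000'000 && c.b.load() == 1'000'000) ? 0 : 1;
}
```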
> ... but can also see its appeal for someone that's not as confident over the minutia.
I think this is mischaracterization of why people choose Rust (and FWIW it comes across as condescending). People choose Rust over C++ partially for the same reason people write unit tests. We accept that no matter how good we are at writing software, we are still fallible, and so we introduce structures around our work to minimize the fallout from that fallibility.
No, it's exactly the opposite: I enjoyed C++11 but trying any feature tacked on beyond that just makes me want to smack my head onto my desk.
The standard isn't slowing down: for C++23 the committee has announced even more features when compilers haven't yet implemented every C++17 feature. What good is a standard if no one is following it?
> but can also see its appeal for someone that's not as confident over the minutia
It's not confidence, I just don't care. I don't want to learn the minutia of C++. Rust is the thing that has got me interested in lower level programming.
There are just as many written-in-Rust posts as ever, although the practice of appending "in Rust" to the title has fallen off (there's literally one of these at #3 on the front page right now). Meanwhile, an article from IEEE evangelizing Haskell was literally on the front page this morning with 256 votes.
As someone who has been painfully self-teaching C++ for the last ~1.5 years on and off, these are my hangups:
- The features that make C++ decent are often found in C++ 20/23, for which there are woefully few resources
- Code taking advantage of coroutines and generators isn't commonplace yet, rare to find examples
- C++ 20 concepts are a near mirror copy of Rust Traits and enable composition that's an alternative to inheritance, again difficult to find examples of
- C++'s way of "implementing" interfaces (IE iterator<T>) by having a magic defined set of type aliases and methods is not intuitive for me
- Writing code to control things like parallelization and thread scheduling that's portable across libraries is difficult. I think std::executor is supposed to fix this but if I'm manually scheduling threads and I want to use IE oneTBB it's not obvious how to do this.
- Dependency and package management is a nightmare. Need to condense CMake + vcpkg into a Cargo like tool and make it a standard
- error messages are indecipherable. GCC 13-dev colors make this a bit better, but human-written "ELI-5" errors that succinctly explain the problem and suggest a solution (like Rust's) would go a long way
- flags like "-fsanitize=address,leak,undefined -fanalyzer -Wthread-safety -D_FORTIFY_SOURCE=" should be baked in to a default "dev"/"debug" mode newbies shouldn't need to know/think about
- Profiling and instrumenting/monitoring should be a few CLI flags with nice UI visualizers (IE LLVM X-ray)
- I have to Google what headers cstd/std stuff comes from half the time
C++11 resources are just fine to start with. That will bring you to "modern C++".
C++ is not about bleeding edge. Nor is it about being a good, modern language.
Its key feature is to be able to tweak performance based on profiler data.
So, you want to write code you understand well enough, so you can adapt it based on the feedback profiler gives you.
If you don't need to write code that needs to get every last bit of performance, or you don't have to use some specific libraries, or maintain some legacy code, there is no reason to use C++.
Also, a humorous rant:
1.5 years? I've been programming C++ professionally for 15 years and it's still a daily struggle. It's a minefield of footguns, within which are buried unexploded munitions from previous generations, all administered by a posse of savants gleefully adding more and more convolutions to the language to "modernize" it.
I have to also Google what headers cstd/std stuff comes from all the time
FWIW the concept feature of c++20 has very little in common with rust traits. C++ concepts are closer to type assertions and aren't really a type system for templates.
The old c++0x concepts proposal was much closer (and of course precedes them), but it turned out it was harder to make it work.
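A small sketch of what I mean by concepts being assertions on types rather than something a type "implements" (C++20; the names are made up):

```cpp
#include <concepts>

// A concept is a named compile-time predicate over a type: it checks,
// structurally, that the expressions below are valid. Nothing is ever
// "implemented for" Circle; it just happens to satisfy the check.
template <typename T>
concept Drawable = requires(const T& t) {
    { t.draw() } -> std::same_as<void>;
};

// Used as a constraint, a bad call is rejected at the call site instead of
// erroring deep inside the template body.
void render(const Drawable auto& shape) {
    shape.draw();
}

struct Circle {
    void draw() const {}
};

int main() {
    render(Circle{});
    // render(42);  // error: int does not satisfy Drawable
}
```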
Interesting: I've also been learning C++ for some months now, and I have very similar feedback to yours.
Just to add something:
> - C++ 20 concepts are a near mirror copy of Rust Traits and enable composition that's an alternative to inheritance, again difficult to find examples of
I don't know Rust, but they remind me of Scala traits. I guess it's conceptually the same, and Scala comes first :)
> - The features that make C++ decent are often found in C++ 20/23, for which there are woefully few resources
Somewhat, yes, but I think lots of cool things happened in 11 and 17. At least when I read the docs, it seems so. C++20 seems to have simplified and ported a lot of more modern features to the language, though. But 11 was a game changer from my understanding.
> - Dependency and package management is a nightmare. Need to condense CMake + vcpkg into a Cargo like tool and make it a standard
Tooling is indeed something that requires a sort of unofficial community decision to say "just start with this and this". That's so true! But that also goes against the main philosophy of C++ which makes it so super cool!
> - flags like "-fsanitize=address,leak,undefined -fanalyzer -Wthread-safety -D_FORTIFY_SOURCE=" should be baked in to a default "dev"/"debug" mode newbies shouldn't need to know/think about
Did you try project_options[0]? I recommend it to all newbies like me. Just set the "enable_developer_mode=true" and you get all that stuff for free.
> - I have to Google what headers cstd/std stuff comes from half the time
In CLion you can cmd+click on the symbol and it leads to the header.
Rust's dynamic traits are like Java or C# Interfaces.
Rust's Static traits are used like C++ templates but with more compiler checks and guidance.
In both cases, you can separate the code that implements the trait from the class itself (like a partial class). In Rust you can also retroactively declare a trait and then implement that trait for a class you like (even someone else's class, or a standard class).
You're right; Scala also has type classes, though. Generics, traits, variance, etc., are all quite sophisticated in Scala. I wouldn't say it's taken from Rust. The idea had been there for a while.
I tried it around 6-9 months ago and the modules basically broke IntelliSense - in even the smallest project with modules, the update of IntelliSense's internal state took 15-30 seconds. Hopefully it's better now, but I think I'll be trying out Zig instead.
Usually people use the lower case with dots, i.e. `i.e.`. Also one usually uses `e.g.` in those situations, with `i.e.` reserved for "that is". To illustrate:
"We had to come up with some way to cut our burn rate, e.g. lay off (i.e. fire) some of our workforce; cut facilities (i.e. get out of our leases),..."
As an aside, I wish there were "minor aside" conversations that wouldn't pollute the main thread so I could respond without occupying massive comment-space.
In AE it's also usual to have a comma following them, so
"We had to come up with some way to cut our burn rate, e.g., lay off (i.e., fire) some of our workforce; cut facilities (i.e., get out of our leases),..."
The reasoning being that these expressions are "parenthetic and should be punctuated accordingly" (The Elements of Style).
Coincidentally, I had years of Latin so I knew those, but thought this had some other contextual meaning, a name of something (like IE for Internet Explorer). Apparently not!
The article focuses on the financial world. Historically in the UK (and elsewhere) finance roles have had strong interest for applicants, and high salaries were on offer. So the industry could pick the "most talented".
Now the model has broken down which is the motivation for the article. That's because the "talented" C++ developers have been hived off by the big tech firms (which can pay equivalent salaries or more), or the top video game companies (which generally pay low but make up for it in fun).
What remains are the "normal" developers. But the nature of their business means you can't achieve a market edge with that level of talent, all else being equal.
This was never the case for COBOL because that was deployed to keep the business running, not for creating unique value and market edge. So "normal" developers work out just fine.
Finance firms hire a lot of talented graduates. They need to work against their natural inclinations, and provide training and mentorship to get those folks up to speed with C++; a long journey. They don't want to do it due to being seen as providing a cross-subsidy to their competitors when the graduates switch companies.
Because Python pays more. Or Javascript. Or Ruby. More demand, more salary. Apart from finance, C++ pay is lower than for the web languages. And finance is small. Embedded systems programming, which also uses the language, paid 30% less than web jobs as of my last job hunting period.
Employees may be leaving the embedded space (and C++) for web tech because of this. This is the feeling I get from my local job market (western Europe). Maybe as the old timers retire, job offers will align? Who knows.
Yes, the embedded space pays terribly, and the employers don't seem great on the whole. When I was at Google I got to work on embedded stuff and really liked it; but I was getting a Google salary. When I left Google I pursued IoT and embedded jobs a bit and, while I was not expecting Google-level compensation at all, I was astounded at what was going on there, pay wise. General software eng back end jobs pay better.
The problem is I really like writing C++ (and Rust, etc.)! So I'm cultivating other "systems programming" type career paths; DB internals have always fascinated me, so I'm trying that direction. Working fulltime in Rust now, but it's hard to find work there that isn't crypto-tainted.
Other people have pointed out that lower pay in embedded has to do with the influence of EE salaries. Which are sadly lower than they rightfully should be.
I did a few months of contracting at a major voting machine company. They make a significant portion of all US voting machines. They had 4 developer teams: Firmware (C++, where I was), UI (web tech on a voting machine), poll book (Java), and a web/support team. Before I was hired in a massive influx of contractors, each team was something like 3~5 people, except UI, which was a new team created with the contractor hiring spree.
After the work was done, they shed nearly all the contractors and about half of their previous full time employees. Just quadrupled their staff to make a voting machine then fired them all.
They hired me as "Embedded Software" on their Firmware team. It was a total shitshow: we didn't have unit tests or CI. The new hires insisted on it and I spent a bunch of time maintaining a Jenkins setup for the team that really helped.
The pay wasn't great, a little less than defense contracting, which was a little less than insurance companies and slow finance companies.
If that is what most embedded development is like then I see why it brings the average down.
Well the bug reports were like: "I clicked around and the UI froze/crashed"… no info on how to reproduce, no logs, nothing. Just that bit of information.
When was that? I am so glad that for the past 5~6 years every contract I have worked has had unit tests and for the past 10~12 every place has at least accepted their value.
The last time I actually had to argue for unit tests was in defense contracting, and not for the team I was working on. Some idiot at a lunch-and-learn community thing tried to claim there was no short-term gain from them, and we had defined short term in months. He could not believe that unit tests can help the developer writing them and help the team the very next sprint.
I hope he learned better or got forced out of tech.
I have worked on codebases where full coverage was obtained using service level tests in a proper pipeline. If you couldn't add a service level test that got your pr coverage, then you were referred to yagni and told to stop doing stuff that wasn't part of your task. I was ok with that, it worked well, and the tests were both easier and faster to write. If the services had been too large maybe it would have fallen apart?
I have also worked on codebases where there were only tests at the level of collections of services. Those took too long to run to be useful. I want to push and see if I broke anything, not wait hours and hours. If a full system level test could complete in a few minutes I think I would be fine with that too. The key is checking your coverage to know you are testing stuff, and to write tests that matter. Coverage can be an antimetric.
> I have worked on codebases where full coverage was obtained using service level tests in a proper pipeline.
Sounds ideal to me. Add testing where it is cheap enough to justify, and maybe just a little more than you think you really need because they pay off so often.
If your mocks and fixtures get too big you might not be testing the code in question meaningfully.
Coverage and test quality/quantity need to scale with the associated risk. Perhaps "Flight planner for Air Force Weather" gets more testing than "Video game User Interface" and that's ok. Even in gaming, the engine team should have more tests than the game teams.
Yeah, but in real life scenarios, the difference in actual numbers, as opposed to percentages, matters.
Let's imagine that the split for all software shops is 80/20, with 80% being crappy, and 20% being decent. If there are 10 embedded software shops out there, it means there are only 2 decent embedded shops out there that an engineer can work at. Meanwhile, if there are 1000 non-embedded software shops, it means that there are 200 decent shops an engineer can work at.
This creates a wild disparity, even if the ratio of crappy to decent is exactly the same for all software shops in general.
The 20% decent shops are retaining their engineers and only growing at a sustainable rate. Available new jobs are filled with a referral since every employee is constantly bragging to their friends. So they post few / no new jobs online.
The 80% crappy shops are shedding employees (turnover) and also poorly managed so they fire everyone and rehire later. Only the worst employees decide to remain during such a purge. So most new posted jobs (more than 80%) are for such companies.
Then the 80% crappy companies talk about their issues finding staff and you get articles complaining how hard it is to find XYZ employees (interns, C++, even supermarket staff). But the real problem is the company in question, not the industry as a whole.
In real-life, engineers aren't just cogs in a wheel that are interchangeable, who can seek work in any organization. There is also a smaller number of people who can/want to do systems level/embedded programming.
Yes, I agree with you. Which is why I explained that despite the overall ratio of crappy/decent shops might be the same for all software work areas, embedded devs are the ones who get the short straw.
Just another project manager trying to hire enough people to make the project happen on time. I am in another one of those situations right now. Nothing to do with anything sensitive, just a team of 9 mothers trying to make a baby in 1 month.
The code is quite secure, but the process and company are... typical processes and company people. Paper ballots and physical boxes are more secure if good practices are followed.
At one point I was tasked with shuffling the data layout on disk in real time to mitigate de-anonymization attacks. Security was a real concern.
Crypto everywhere. The voted ballots were encrypted with keys generated and delivered immediately before the election. No networking by default. The end product had all the right things.
That said, no one had clearances, third party auditors were morons, and pay wasn't great. So if I were an attacker I would just try to bribe people to make the changes I want. Can't bribe a ballot box company to election tamper, because they just make boxes.
With all that effort they are still needless voting machines: they each count only a few thousand votes, and not all produce a physical paper trail. Because they have software and logic in them, they need a constant chain of custody to make sure that the code we wrote is what is actually run.
Just use a box and paper; it is safe in all the ways digital things suck. A precinct counting votes only needs to tally a few thousand ballots, so it might take a team of people an hour or two, less time than it takes to fix a potential technical problem.
And paper can more easily have bipartisan oversight and can have physical security measures that are impractical on a computer.
All that said I have no reason to believe our elections have been tampered with on a national level or that anyone other than a local republican may have used our machines to steal elections, even then no firm or even circumstantial evidence, just baseless suspicions and conspiracy theory level anomalies.
I am from Brazil. If you saw the news, the current president, who just lost the election, has been insisting for years that elections here are untrustworthy.
The reason is simple: electronic voting machines with no logging, paper trail or anything. And common people don't have permission to do penetration tests or read the entire source. All of it is proprietary and secretive, with basically no public testing.
For years the now-president, when he was still a congressman, had been trying to pass a law under which the voting machines would print the vote and deposit it in a box. That way people could count the printed votes rather than just trust the machine, but the government kept inventing reasons not to allow this, and even when a law passed, the judiciary struck it down.
Thus today people are protesting; seemingly almost half of the country voted for him, and the difference was tiny. The winner insists the elections were fair, but how do you prove it when the machines are proprietary and secret? How do you prove it when they have no log of votes and instead just print the totals? In a country full of corruption, where the mafia literally threw a party to celebrate a specific person becoming chief election judge, how do you trust that nobody bribed the manufacturer or the programmers?
Most American voting machines print a ballot and let the voter review it, but not all. There have been some jurisdictions that have given up on that for reasons that seem bad and vague to me.
I think mandating that voting machines be open source is a good idea. Here in the US we have 3rd party auditing companies. Various US states and the federal government all have different testing/auditing labs that they have certified they trust. Then each voting machine company has to convince those labs that its product is good to sell to the governments that trust them. The final build that the lab signs off on gets a cryptographic signature, and the poll workers are supposed to check that it matches what they are given to run on their machines just before they set up their machines for voting.
Does Brazil have anything similar with auditors or inspectors? Or at least some crypto connecting the vendor to the polling locations?
This is really interesting. Here in Australia we still use paper ballots for the lower house of parliament. I volunteered as a “scrutineer” for one of the parties, which let me go into the warehouse where the ballots were being counted and watch. As a scrutineer, you physically look over the shoulder of the person counting votes and double check their work. You can’t touch anything, but if you disagree with the vote, you can flag it. The voting slip gets physically sent to a committee somewhere for final judgement.
I highly recommend the experience if you’re Australian - it was very cool seeing democracy in action. I personally have a lot more faith in our system of voting after seeing it in action first hand.
That said, the senate votes are all typed into a computer by the election officials. It’s just too hard to do preferential voting by hand with ~200 candidates on the ballot.
>EE salaries are sadly lower than they rightfully should be.
Profit margins of an EE will almost always be lower than profit margins of a software engineer. A team of software engineers can quickly scale to selling to millions of users (and collect nearly 100% of the resulting revenue as pure profit), whereas a team of EE's cannot a) scale their customer base as quickly, since scaling up manufacturing takes time and b) realize a profit anywhere close to 100% of revenue, since much of their revenue goes towards manufacturing and distribution costs.
In other words, the marginal cost of selling one unit of a physical product is always nonzero, whereas the marginal cost of selling one unit of software is often (very close to) zero. That differential goes towards higher salaries for the software engineer.
There's also a shorter-term effect where, for at least a generation, there have been too many new grads able to design hardware I2C devices, resulting in too many new grads also able to write I2C driver software as a backup career, resulting in low pay across the board for both fields.
Just because a student likes the field, and can pass the ever more difficult filter classes along the way, doesn't mean there's a job waiting after graduation in that field. For some reason students keep signing up for an EE education even though the odds of them getting an EE job after graduation are very low. The odds of them getting any job, even a high paying one, are good because the majority of the graduating class goes into software development, mostly embedded, but most kids who can, like, bias a class-C amplifier transistor, will never have a job doing EE stuff, there's just too many EE grads for too few EE jobs.
As another example of that effect, see also K-12 education where for at least one generation, the bottom half of the graduating class was never employed in the field, at least in my state. Enrollment for K12 has absolutely cratered in recent years, and now most grads have a reasonable chance of getting a job in their field.
I understand this but I think the biggest driver for software salaries is the sheer number of companies that are interested in hiring software engineers. Plenty of hardware companies are very profitable but do not raise their salaries because there is no market pressure to do so as the more limited job market means EEs/embedded engineers do not switch companies nearly as frequently and switching companies is generally the best way to get a substantial salary increase.
Which hardware companies have SaaS margins? I think 10% margin is very good for a hardware company. A software company would aim for multiple times that.
I'm really hoping the salaries for EE type roles start to match software as the grey beards start to retire and talent becomes scarce. We've got a legion of grads going into CS, but EE classes are a fraction of that. Despite that, software roles are often more than double the salary. Any role I go into as an EE/Embedded Systems engineer, I'm more often than not the youngest by 20-30 years. I wonder how the industry in the West is going to survive it, beyond hiring contractors from India/South Asia.
Yeah, same. I’m an EE camping out in software because of the pay. It’s also just easier work. I would much rather be intellectually challenged doing firmware or embedded work; I didn’t go to school to build web widgets. It’s just that EE pays so badly you can’t pay the bills. I was getting offered numbers that wouldn’t have covered rent on my own studio apartment. For EE work. It’s insulting.
...which is ridiculous because of what it takes to become an EE VS what it takes to become a "web developer". Basically anyone who can handle basic logic can be a web developer if they just put in a bit of effort. Degree or not!
To become an EE you need a 4-year degree and a whole heck of a lot of knowledge about things that are a real pain in the ass for laypeople like calculating inductance, capacitance, and impedance (<shudder>).
You don't need much knowledge to make a circuit board, no. But when your boss wants to add a USB 3.0 hub to your product it suddenly becomes a, "wow, we really need an EE" job (because the spec has so many requirements and you're not going to get your product certified unless you can demonstrate that you followed it).
> Basically anyone who can handle basic logic can be a web developer if they just put in a bit of effort. Degree or not!
A "modern" web dev needs to know a whole bunch of crap nowadays. Not saying it's insanely hard but its not that easy. But sure, getting a job as a junior should be way easier than EE.
> You don't need much knowledge to make a circuit board
Not quite.
For most modern high speed designs, PCBs are very far from simple. Signal and power integrity are critical. It doesn't help that these can be "voodoo" fields where, a bit like RF, years of experience as well as the theoretical foundation are really important.
That said, I think I know where you are coming from. A ton of low-performance embedded designs these days can be done by people with very little EE education. Anyone can learn anything online. There are plenty of resources. This is a good thing, of course.
As someone who's not an EE (with no degree in anything at all) and has made many circuit boards... No, they're not that complicated. Not really.
I've even designed an analog hall effect keyboard PCB with integrated IR sensor, dual power regulators (to handle 95 ultra bright RGB LEDs), invented-by-me analog hall effect rotary encoders (incremental and absolute), and more. It wasn't rocket science.
> I've even designed an analog hall effect keyboard PCB with integrated IR sensor, dual power regulators (to handle 95 ultra bright RGB LEDs), invented-by-me analog hall effect rotary encoders (incremental and absolute), and more. It wasn't rocket science.
Sorry to burst your bubble...
Glad you learned enough to do it and had fun with it.
Yet, such PCBs are trivial to design. Heck, one could auto-route something like that and get a working board for prototyping. In fact, I have done exactly that many times over the last four decades for keyboard/control-panel boards. And auto-routers suck. The fact that one can actually use one for a PCB is a good indicator of how trivial that design might be.
One of the big differences between hobby PCBs and professional EE-driven PCBs is in manufacturing and reliability.
It's one thing to make one or a few of something, anything. Quite another to make hundreds, thousands, tens of thousands, millions. As an example, I am pretty sure you did not run your design through safety, environmental, vibration, susceptibility and emissions testing.
For an example of complex design one can look at such things as almost any dynamic RAM implementation, from SDR to DDRn. Timing, signal integrity and power integrity are a big deal and can make a massive difference in performance and reliability.
Another example is just-about any PCB used in automotive designs. They have to survive brutal power, thermal, vibration and RF environments for decades. This is not trivial.
Other fields with critical needs are medical, aerospace (which includes civilian flight) and industrial.
Consumer electronics is actually quite critical at the limit because you are dealing with very large numbers of units being manufactured. In other words, while a design for something like an industrial CNC machine might only require a few hundred or a few thousand boards per year, in consumer electronics one can easily be in a situation where we are running 50K to 200K boards per month. Bad designs can literally sink a company.
I understand though. From the frame of reference of a hobbyist or enthusiast everything can look simple. That's pretty much because they just don't have enough knowledge or information. This means they only have access to the most superficial of constraints, which makes PCBs seem easy, maybe even trivial.
As my wife likes to say: A google search is not a substitute for my medical degree.
No, analog keyboard PCBs are not trivial at all. You have to keep a lot of things in mind when routing your analog VS digital tracks. Especially if you've got per-key RGB LEDs right next to your hall effect sensors (can be a lot of noise if you don't do it right).
Not only that but you also have to figure out how to get loads of analog sensors into a microcontroller that may only have 4 analog pins (e.g. RP2040). In a way that can be scanned fast enough for 1ms response times (again, without generating a ton of noise).
It's not as simple as an electromechanical keyboard PCB, which is quite trivial.
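To give a concrete idea of that constraint (an illustrative sketch of mine, not the poster's actual design; the pin choices, mux wiring and channel counts are assumptions), here is roughly how a large bank of hall-effect sensors can be scanned through 16:1 analog multiplexers on an RP2040 using the Raspberry Pi Pico SDK:

    // Hypothetical wiring: four 16:1 analog muxes, one per ADC-capable pin
    // (GPIO26-29), sharing four select lines. None of this is taken from the
    // poster's board; it only shows how few ADC pins can cover many sensors.
    #include "pico/stdlib.h"
    #include "hardware/adc.h"
    #include "hardware/gpio.h"

    constexpr unsigned kMuxSelectPins[4] = {2, 3, 4, 5};     // shared S0..S3 lines
    constexpr unsigned kAdcGpio[4]       = {26, 27, 28, 29}; // one mux output each
    constexpr int kChannelsPerMux        = 16;               // 4 x 16 = 64 sensors

    static uint16_t readings[4][kChannelsPerMux];

    static void select_mux_channel(int ch) {
        for (int bit = 0; bit < 4; ++bit)
            gpio_put(kMuxSelectPins[bit], (ch >> bit) & 1);
    }

    int main() {
        adc_init();
        for (unsigned gpio : kAdcGpio) adc_gpio_init(gpio);
        for (unsigned pin : kMuxSelectPins) {
            gpio_init(pin);
            gpio_set_dir(pin, GPIO_OUT);
        }
        while (true) {
            for (int ch = 0; ch < kChannelsPerMux; ++ch) {
                select_mux_channel(ch);
                sleep_us(5); // let the mux output and any RC filtering settle
                for (int mux = 0; mux < 4; ++mux) {
                    adc_select_input(mux);          // ADC inputs 0..3 map to GPIO26..29
                    readings[mux][ch] = adc_read(); // 12-bit sample
                }
            }
            // ...turn readings into key travel / pressed state here...
        }
    }

Even with a few microseconds of settling per channel, a full pass over every sensor fits well inside a 1 ms scan budget; the hard part, as described above, is the layout, i.e. keeping LED switching noise out of those analog readings.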
> For an example of complex design one can look at such things as almost any dynamic RAM implementation, from SDR to DDRn. Timing, signal integrity and power integrity are a big deal and can make a massive difference in performance and reliability.
...except 99% of all PCBs aren't that complicated. You don't need to know the specifics of RF in order to design a board that controls some LEDs.
> No, analog keyboard PCBs are not trivial at all. You have to keep a lot of things in mind when routing your analog VS digital tracks. Especially if you've got per-key RGB LEDs right next to your hall effect sensors (can be a lot of noise if you don't do it right).
Sorry. This isn't meant as an insult at all. Yes, this stuff is trivial. I know it might not seem that way to you because you are not an EE. I get it. That does not make it complex. For you, maybe. Not for me or any capable EE.
Yes, having designed plenty of challenging analog products I can definitely say that analog has its own set of challenges. Designing keyboards with hall effect switches isn't in that category.
In fact, I could easily make the argument that high speed digital is actually analog design.
> You don't need to know the specifics of RF in order to design a board that controls some LEDs.
I would like to see your boards pass FCC, CE, TUV and UL certification.
Look, there's nothing wrong with being a hobbyist and having a great time designing stuff. Bravo for having learned enough to have done what you shared. That is definitely something to admire. Just understand that your experience does not give you the ability to fully grasp professional EE reality.
I don't really see why you would create a keyboard in this way.
> ...except 99% of all PCBs aren't that complicated. You don't need to know the specifics of RF in order to design a board that controls some LEDs.
There is a difference between creating something that works, which is easy enough to do, and creating something that is competitive on the consumer market, i.e. that BARELY works. This is the difference and why you would pay an EE to do this job.
Honestly all of that sounds like it maps pretty well to programming.
I sometimes run little 30-minute programming workshops where I teach people enough of the basics that they can walk away with something they’ve made. Give a novice 3 months to go through a bootcamp and they can become a half useful programmer.
But the “other half” of their knowledge will take a lifetime to learn. In just the last 2 weeks my job has involved: crypto algorithms, security threat modelling, distributed systems design, network protocols, binary serialisation, Async vs sync design choices, algorithmic optimization and CRDTs.
It’s easy enough to be a “good enough” programmer with a few months of study. But it takes a lifetime of work if you want to be an all terrain developer.
> Honestly all of that sounds like it maps pretty well to programming.
Yes, definitely. And, BTW, this also means that lots of useful work can be done without necessarily having golden credentials.
Here's where I see a huge difference between hardware and software at scale (I have been doing so for 40 years): Hardware, again, at scale, represents a serious financial and technical commitment at the point of release. Software gives you the ability to release a minimum-viable-product that mostly works and issues fixes or updates as often as needed.
If we imagine a world where v1.0 of a piece of software must work 100% correctly and have a useful service life of, say, ten or twenty years, we come close to the kind of commitment real electronics design requires. You have to get it right or the company is out of business. Not so with most software products, be they embedded, desktop, industrial or web.
If I go back to the late 80's, I remember releasing a small electronic product that gave us tons of problems. The design went through extensive testing --or so I thought-- and yet, contact with actual users managed to reveal problems. I had to accelerate the next-generation design, put it through more extensive testing and release it. We had to replace hundreds of the first generation units for free because we felt it did not represent what we wanted to deliver. This is where knowledge and experience can be invaluable.
I design the majority of the electronics for my company and pretty much all the firmware as well.
Wages are not bad for the area I'm in, which is fairly rural, but could be a lot better for the work involved. Moving to a big city would probably help, but I like the quieter lifestyle.
I've not done any web development full time for close to 20 years; I first started writing JSP code and have dabbled with a few personal website designs since then. I'm sure if I went back to web development it might pay more, but I don't think it would have the same level of job satisfaction for me. I try to keep up to date on some of the technologies used, but it seems overwhelming from the outside.
Part of it is resistance to change, but I do find the work for the most part enjoyable, so it's a risk to change jobs as well.
The demand for EE roles is far less than the demand for Software roles.
For a simple thought experiment, imagine if you could get a good developer for $20 an hour. Every single company on the planet, from a mom and pop shop to big corporations could turn a profit off their work.
Now imagine you could get an electrical engineer for the same price. What percent of businesses could profit from electrical engineering? 2%?
My point wasn't about demand though. I'm well aware it lags behind SW companies by a staggering margin. A small team of SEs with enough money to buy some laptops between them can create millions of dollars' worth of value in a few years. It would take a team of EEs 5x the time and 25x the initial investment to create the same. Of course there are going to be hundreds of SE companies for every EE one.
My comment was regarding supply. EE is an art that blossomed in the 80s and 90s in terms of practicing engineers, and has shrunk per capita since. This is largely driven by kids getting drawn into SWE over EE as people look at salaries and modern day billionaires, and figure it to be a no-brainer. Today EEs are a small fraction of the total engineering disciplines, despite being essential for the communication, power generation, distribution, consumer electronics, aerospace, automotive, and of course, the computer hardware industry on which the software one is built; amongst many other growing sectors like robotics, medical, and IoT.
If a legion of EEs is set to retire in the next 5-10 years, and all the would-be EEs are now designing web apps, surely at some point the supply/demand scales start to tip one way? Many of the above industries are abstracting everything to software platforms as time goes on, but no amount of money can make a SW dev design a power-train for a car, an antenna for a 5G device, or program an FPGA for silicon verification.
Bear in mind, though, that a lot of those EEs going into software are doing so not because they love software, but because they can't find EE jobs. Sure, many are no doubt doing it for the money, but if they really wanted to be programmers, they'd have majored in CS.
The context OP set up was “when grey beards retire.”
The idea being that demand is low while the senior EEs stay put.
Mom and pop shops could use Excel and did successfully for years. Big banks even ran on gigabyte sized Excel sheets before the 2010s hype bubble (Source: direct experience working in fintech 2010-2015)
Anyone in tech believing the last 10-15 years was about anything but the US government juicing its economy to stay relevant, titillate, and ingratiate itself on now 30-40 something college grads is fooling themselves. All those students are now bought in to keeping the dollar alive.
Software has gotten so overthought and bloated given a "too many cooks in the kitchen" situation. Templating a git repo with appropriate dep files given mathematical constraints is not rocket science. The past needed to imagine software as out of this world to gain mindshare. Correct and stable electrical state is what really matters.
We are entering a new era of tearing down the cloud monolith for open ML libs that put machines to work, not people.
Behavioral economics has been running the US since before Reagan.
Alternatively, web is generally more valuable. You don’t buy a new washing machine because the current firmware sucks, but you will shop somewhere else if Newegg’s website is terrible. That relationship is generally true where people rarely test embedded software until after a purchase, but people tend to jump ship more frequently online.
The net result is that a lot of critical infrastructure and devices suck as much as possible while still getting the job done.
I’m building a house at the moment and I have been insisting that I am able to actually test all the built in appliances with power to see if the software is garbage.
I have found that most of the high end brands have a completely horrible user experience. Miele is the worst I’ve tried, and I found that as you go up the price range even inside that brand the experience gets worse.
The top end Miele induction cooktop takes over 5 seconds to boot up before you can even turn a hob on. The interface has a second of latency on presses. It took me probably 20 seconds to work out how to turn a hob on. I happened to be with my mother at the time and I asked her to try to work out how to turn a hob on and she had failed after 1 minute of trying and gave up and asked me.
It looks nice though.
The thing I find the most infuriating about it is that my attitude towards this stuff is just not understood by designers at all. They complain at my choices because the Miele appliances which they specified are “better quality”. And yet I feel like they can’t have actually tried to use them because as far as I can tell the quality is total garbage.
The mere idea of waiting for a kitchen appliance to "boot up" makes me angry. How did we normalize this madness? Telephones, TVs, car engine instruments, HVAC thermostats, why can't any of these be instant-on like in the 80s? Apply power and it starts working is a basic design principle.
Meh. Bootup time is irrelevant if the thing is always on. Many "dumb" microwaves won't let you use them until you set the clock after a power loss which creates an artificial "boot up time" of 5-120 seconds (depending on how complicated the procedure is; I remember microwaves that had absolutely obtuse clock-setting procedures).
Slightly off topic, but imagine an induction cooker with the original iPod control wheel as its power control.
We opted for a gas hob when we installed our kitchen. Mostly because I like the controllability when cooking. Obviously it's a nightmare for health and the environment but man it makes cooking easier.
Touch controls on induction cooktops/hobs are almost ubiquitous, and they have extremely poor usability in my experience. Liquids cause problems, and you need to be very careful not to move a pan or any utensils over the controls, or brush against them while concentrating on cooking. Apart from the other awful usability issues with the UI or icons.
I did a survey of all the cooktops/hobs I could find in my city, looking for something that would suit my elderly mum, and I didn’t find a single unit that was usable. Fortunately a salesperson knew of a recently developed “cheap” model from a noname brand, which had individual knobs, so I ordered that; it arrived a month ago, I got it installed, and it has worked very well for my mum.
Usability is not something that most people know to look for when making purchases, so most whiteware ends up with a hideous UI. People will buy shit, then complain, but it doesn’t change their future purchasing habits (e.g. looking for features, especially useless features!)
I bought a middling brand microwave with knobs that has reasonable usability, despite providing all features. The iPhone is another possible counterexample, although I fucking hate many of their usability decisions (remove all multi-tasking shit from my iPad - I only ever initiate it by mistake and I always struggle to revert my mistake - fucking floating windows and split windows and fucking ... at top of the screen).
The ability to clean the cooker is the only advantage of touch controls. I don't know how well the original iPod touch wheel would hold up in that environment but from a usability point of view it was excellent.
how is it a nightmare?
If you aren't getting that energy from natural gas, you'd mostly get it from a CO2-producing power plant, with efficiency losses going from heat (steam) -> electric -> heat (cooktop).
Even gas cooktops without a pilot light are surprisingly inefficient, with under 40% of the energy ending up in your pan. (Which is why the air several feet above the pan is so hot.) On top of this you end up venting air your HVAC system just used a lot of energy to make pleasant outside, and/or breathing noxious fumes from incomplete combustion such as carbon monoxide, NOx, formaldehyde, etc.
Induction stoves powered by natural gas power plants are more efficient than cooking directly with natural gas, plus you can use clean solar/wind/nuclear/hydro power or oddballs like geothermal.
It’s even worse if you don’t size the burner to the pan. My wife always uses the largest burner with an 8 inch pan, probably 70% of the heat goes around and over it. Really made me want to switch to induction but I noticed the same thing that most induction cooktops have stupid, unreliable touch controls.
I think efficiency of a hob is pretty low on the priority list right? Certainly when framed in cost terms (gas being cheaper than electric). The total amounts are too small relative to hot water / home heating to make much difference. Especially if you go out of your way to find an induction cooker with a decent interface (there is at least one out there with knobs).
For most things which would need to be cooked on a hob for a long time we use an Instant Pot electric pressure cooker anyway (out of preference rather than efficiency concern).
It depends on what you're paying for fuel. Propane is shockingly expensive at $3/gallon right now plus delivery fees, but let's use $3 for 91,452 BTU, which works out to 11.2c/kWh before you consider efficiency.
At an optimistic 40% efficiency for a stovetop vs 90% for an induction cooktop, the breakeven is 25c/kWh, which is well above average US electricity prices. Worse, that 40% assumes properly sized cookware in contact with the burner and no pilot light, and ignores the cost of venting air outside.
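The arithmetic behind those figures, for anyone checking (using 1 kWh = 3412 BTU):

    \frac{\$3}{91{,}452\ \text{BTU}} \times 3412\ \tfrac{\text{BTU}}{\text{kWh}} \approx \$0.112/\text{kWh} \quad\text{(propane, before efficiency)}
    \$0.112 / 0.40 \approx \$0.28/\text{kWh} \quad\text{(per useful kWh delivered to the pan)}
    \$0.28 \times 0.90 \approx \$0.25/\text{kWh} \quad\text{(breakeven electricity price at 90\% induction efficiency)}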
As to total costs, at full blast a propane burner only costs around $1/hour, but some people do a lot of cooking.
Same goes for car MMIs. Tesla is almost fine when it comes to latency (still far behind an iPad, for example), but other manufacturers are just atrocious in this respect.
The industry will do just fine. In all my years assisting in the hiring process (I'm software, but due to my EE background I was often asked to help with interviewing EEs), I've never noticed a shortage of EE applicants. OTOH, we had a lot of trouble finding enough software people to hire.
The reality is that EE jobs are a small fraction of the software ones and supply is keeping up with demand, so there's no upward salary pressure.
> Yes, the embedded space pays terrible, and the employers don't seem great on the whole.
In Europe C++ pay is in general ridiculously bad. I got some job ads this morning: senior job in real-time trading in C++ in Paris, multithreading and Linux knowledge, English first: 55-75k. Embedded senior C++ FPGA engineer in Paris: 45k-65k. No bonus in either position. Thanks but no thanks.
Those job ads are both better than my current position. £40k for cross-platform C++ desktop app with both multi-core and distributed parallelism. PhD required. GPGPU experience preferred (notice that it's not CUDA experience because some users have AMD cards). Now, with two consecutive promotions, I could bump my salary up to £50k. Of course, to qualify for the second of those promotions, I need to receive personal commendations from three different professional organizations across at least two different countries.
This is true; I'm trying to switch from FPGAs/RTL design to something higher up the stack over the next few months for this reason. My employer does seem to have great difficulty hiring anyone with these skillsets, but funnily enough, the salaries never seem to improve.
I wonder how much is just EEs looking at SWE resumes and going "why would I pay that much for this?! writing code isn't that hard" I definitely get that vibe from some of the local hw-eng companies.
And they may not be wrong, but.. sorry, that's supply and demand. If I have to go write stupid NodeJS stuff to get paid decently, I guess I'll have to go do that.
I worked at a place once where one of the EEs who wrote firmware told me that algorithms and data structures were pointless because in the end it's just bits in a linear address space in RAM.
The industry has basically screwed itself. It's pretty typical for companies to consider embedded/firmware as EE work that is done in the gaps of the hardware schedule. EEs generally make bad programmers which shouldn't be a surprise as their background is usually not in software development; I similarly shouldn't be hired to do EE work. Because of this the code bases tend to be abysmal in quality.
The salary for these positions tends to be tied to EE salaries which for some reason are quite low. So it's hard to attract good talent willing to deal with the extremely poor code quality and all of the other extra challenges this field has on top of normal software challenges.
Since few software developers are attracted to this niche there's not a lot in terms of libraries or frameworks either, at least not in comparison to most other software ecosystems. I've had a start-up idea for a while now to really close that gap and make embedded development far more sane in terms of feature development and such, but I worry nobody would even bother to use it.
I've been in the embedded space for years now and I've been considering bailing because the problems just aren't worth the pay.
> one of the EEs who wrote firmware told me that algorithms and data structures were pointless because in the end it's just bits in a linear address space in RAM.
This is, of course, wrong. However, I think I understand where this EE was coming from.
At the end of the day, once all is said and done, there's a minimal set of instructions necessary for a CPU to perform any task. One could add to that two more variables: minimum time and minimum resources (which is generally understood to be memory).
So, at least three optimization vectors: instructions, time and resources.
Today's bloated software, where everything is layers upon layers of object-oriented code, truly is pointless from the perspective of a CPU solving a problem along a stated combination of the three vectors listed above.
The way I think of this is: OO exists to make the programmer's life easier, not because it is necessary.
I believe this statement to be 100% correct. OO isn't a requirement for solving any computational problem at all.
Of course, this cannot be extended to algorithms. That part of the EE's claim is likely indefensible.
How about data structures?
Some, I'd say. Again, if the data structure exists only to make it easier for the programmer, one could argue it being unnecessary or, at the very least, perhaps not optimal from the perspective of the three optimization vectors.
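A toy illustration of that last point (my own example, not anything from the thread): both of these lookups are "correct", but one is written for programmer convenience while the other is much closer to minimal in instructions and memory.

    #include <array>
    #include <cstddef>
    #include <cstdint>
    #include <unordered_map>

    enum class Channel : uint8_t { Red = 0, Green = 1, Blue = 2, Count = 3 };

    // Convenient for the programmer: hashing, heap allocation, iterator machinery.
    std::unordered_map<Channel, uint16_t> gains_map = {
        {Channel::Red, 100}, {Channel::Green, 200}, {Channel::Blue, 150}};

    // Closer to minimal instructions/memory: one indexed load, no allocation.
    std::array<uint16_t, static_cast<std::size_t>(Channel::Count)> gains_array = {
        100, 200, 150};

    uint16_t gain_via_map(Channel c)   { return gains_map.at(c); }
    uint16_t gain_via_array(Channel c) { return gains_array[static_cast<std::size_t>(c)]; }

Which one is "right" depends on which of those three vectors you are optimizing for; neither choice makes data structures as a discipline pointless.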
It's nothing groundbreaking, although my idea alone wouldn't really help in the safety critical space.
If web development were like embedded development every single company would be building their own web server, browser, and protocol the two communicate over. It would take a phenomenal amount of time and the actual end product, the website, would be rushed out the door at the very tail end of this massive development effort. As the complexity of the website grows, the worse it gets. All of the features being sold to customers take a backseat to the foundational work that costs the company money either through initial development or ongoing maintenance. Plus there's very little in the way of transferable skills since everything tends to be bespoke from the ground up which poses a problem when hiring.
In this analogy that base layer is really just hardware support. This is starting to change with projects like mbed, zephyr, etc. There's still a lot to be desired here and these realistically only work in a subset of the embedded space.
My idea comes in after this. Keeping with the analogy, consider it Ruby on Rails or NodeJS for the embedded world. Certainly not appropriate for all things, but a lot of what I have worked on professionally would benefit from this.
> one of the EEs who wrote firmware told me that algorithms and data structures were pointless because in the end it's just bits in a linear address space in RAM.
At a previous job, the project lead (mechanical) assigned the embedded team (2 people) to write the firmware for 3 boards (multi-element heater control, motor controller and move orchestrator with a custom BLDC setup, multi-sensor temperature probes) in 2 weeks over Christmas, because the junior EE said “I can control a motor with an Arduino in 30 minutes.” My only guess as to why such a disconnect from reality was possible is that the EE had an MIT degree, while I’m self-taught, and that we had always delivered our firmware on time and without bugs.
I mean, it's the same phenomenon I've seen even in webdev where a PM or UX person who has produced a whole series of mocks then hands it off to the "programmers" and demands a short schedule because... well... they did all the hard stuff, right? You're just making it "go."
People naturally see their own hard work and skills as primary. I know enough about HW Eng and EE to know that it's actually really hard. That said, it doesn't have the same kind of emergent complexity problems that software has. Not to say that HW eng doesn't have such problems, but they're a different kind.
If you see the product as "the board", then the stuff that runs on the board, that can end up just seeming ancillary.
Oh, no, this was super common. When the Arduino (and, soon afterwards, the Pi) were launched, for several years about 20% of my time was spent explaining to higher-ups why there's a very wide gap to cross between a junior's "I can control a motor with an Arduino in 30 minutes" and "We can manufacture this, make a profit, and safely ship it to customers".
Don't get me wrong, the Arduino is one of the best things that ever happened to engineering education. Back in college I had to save money for months to buy an entry-level development kit. But it made the non-technical part of my job exponentially harder.
Ha. Try telling a customer that even though he's prototyped his machine with three arduinos (he used three because he couldn't figure out how to do multitasking with just a single one...) in a couple of weeks, it will be a $100k project to spin up a custom circuit board and firmware to do the same thing. And no, we can't reuse the code he already wrote.
Physical design and logic design talent is actually _super_ in demand right now, but you have to have real silicon experience, which FPGA work can help you get.
Google/Apple/Nvidia/Qualcomm/Broadcom and gang are having problems retaining talent right now.
I have an EE background but worked in webdev for many years. I got pretty bored with webdev and had the opportunity to get into embedded Rust development, so I did. It's been really awesome; I've learnt so much, both in embedded and in hardware engineering.
But now I think I'll head back to web development for my next job - I think web is better as an employee or as a contractor. It seems to me there is more freedom in webdev; often it's possible to work from home or abroad... Embedded, on the other hand, is encumbered with equipment - oscilloscopes, devboards, protocol analyzers, you name it - and often requires onsite hours.
And then there is the pay and job availability... I recall interviewing for a role that involved designing a full-blown operating system for use in the auto-industry. The role was paying 40-50K euro a year in Germany, which is insanely low. React developers earn substantially more, but are required to know substantially less.
The only reason why (I can imagine) someone would choose embedded is probably because it's very rewarding and mentally stimulating. It's awesome creating physical devices. It's awesome interfacing with the real world. It's awesome deep diving into bootloaders and memory allocations and exercising a fundamental understanding of computing.
Fully agree. Rust statically linking its stdlib made its binaries too large for many embedded boards, though, which is one reason I could not switch to it.
In embedded it's hard to get remote positions due to the hardware involved, which sucks. On the positive side, the job can sometimes be more secure, but the low pay truly ruins everything; overall it remains a negative.
I mostly do backend and devops at work, and C++ is quite present, not as main language, but writing libraries to be plugged into Java, .NET and node frameworks.
> You may also look into Kernel Programming for a lucrative systems programming career.
This is the road I have taken since I started to work professionally, but I have yet to find a lucrative job. I know that I am paid more than microcontroller devs, but less than web devs. The market for kernel developers is not that big either.
I’ve been in both web and embedded for the last 20 years, and to me web dev “done right” is just as complicated as embedded, if not more, and very similar. In both cases you have a distributed system (every action you take, system-wise, is asynchronous and very uncertain). Debugging is a pain in both cases, because you have only limited access to the system under test (especially in the field), and things like minification / optimizing compilers can make it hard to track bugs.
Embedded has the advantage that you can usually trust your peripherals more (they’re not a user that randomly presses CTRL-R), there is less framework and third-party stuff in your way, and the timing constraints are usually better understood. Webdev also suffers from a ton of UX and UI (animations, wizards, complicated workflows, error handling that needs to be error handled that needs to be error handled), which often results in very complex state machines.
In both cases, observability is key, especially for debugging purposes. I use the same patterns in both cases: a lot of state machines and event driven design, because I get “debugging” for free (I just need to log state + events and I can reproduce any scenario).
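A minimal sketch of that pattern (hypothetical states and events of my own, not the commenter's code): keep all behaviour in one pure transition function and log every (event, state) pair, so a captured log is enough to replay a scenario later.

    #include <cstdio>
    #include <initializer_list>

    enum class State { Idle, Connecting, Streaming, Error };
    enum class Event { Connect, Connected, Disconnect, Fault };

    static const char* name(State s) {
        switch (s) {
            case State::Idle:       return "Idle";
            case State::Connecting: return "Connecting";
            case State::Streaming:  return "Streaming";
            case State::Error:      return "Error";
        }
        return "?";
    }
    static const char* name(Event e) {
        switch (e) {
            case Event::Connect:    return "Connect";
            case Event::Connected:  return "Connected";
            case Event::Disconnect: return "Disconnect";
            case Event::Fault:      return "Fault";
        }
        return "?";
    }

    // All state changes happen here and nowhere else.
    static State transition(State s, Event e) {
        switch (s) {
            case State::Idle:       return e == Event::Connect    ? State::Connecting : s;
            case State::Connecting: return e == Event::Connected  ? State::Streaming
                                         : e == Event::Fault      ? State::Error      : s;
            case State::Streaming:  return e == Event::Disconnect ? State::Idle
                                         : e == Event::Fault      ? State::Error      : s;
            case State::Error:      return e == Event::Connect    ? State::Connecting : s;
        }
        return s;
    }

    // One choke point that logs (event, state) -> next state.
    static State dispatch(State s, Event e) {
        State next = transition(s, e);
        std::printf("event=%s state=%s -> %s\n", name(e), name(s), name(next));
        return next;
    }

    int main() {
        State s = State::Idle;
        for (Event e : {Event::Connect, Event::Connected, Event::Fault, Event::Connect})
            s = dispatch(s, e);
    }

The same skeleton works for a browser UI or a firmware peripheral driver; only the logging sink changes.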
The big advantage of web, and one that I always have to adjust to when I come back from a period of time in embedded, is that you can YOLO a lot. YOLO to prod, you can always easily revert. YOLO the UI, because you can trust the user to refresh their page or workaround it (your hardware peripheral won’t). YOLO everything because you’ll never really brick stuff. YOLO timing because you usually don’t have hard or even squishy-hard realtime requirements. YOLO behaviour because you can have realtime feedback on how your system is doing, and pushing a new version is only minutes away.
But “web dev” done right, and I really like having something fast, robust, repeatable and observable, is quite the challenge too.
I realize I mostly focused on the frontend side here, but you can easily see how backend dev is highly complex too (but that often falls under system programming too).
Lots of frameworks; in fact, most of the runtime environment is not under your control at all (cloud services, for example). Complicated deployment and distributed patterns, often requiring many services to collaborate for a single piece of functionality (DB, monitoring, cache, load balancing, the backend itself, storage, and that's just the simpler cases!). And none of this is something you can just plug your debugger into and hack away at. Very similar to embedded in how I approach it.
Deployment is similar too, in that you will often have a builder system that creates artifacts that then get deployed asynchronously, resulting in heterogeneous environments at least for a while, with a need for proper API boundary design.
Seeing the parallels between both worlds allowed me to use CI/CD, blue/green, feature flags, data pipelines to the cloud, and UI patterns from the then nascent JavaScript framework explosion back in the late aughts, when that stuff was almost unheard of in embedded environments. I scripted my JTAG environment using Rhino (JavaScript on the server, back before Node came out) to collect and hot reload pieces of code, while being controlled in the browser. I made a firmware app store for MIDI controllers I was building.
Embedded UIs also benefit highly from knowing patterns from web frontend, because they are highly event based too, and really benefit from attention to detail (say, animations, error handling, quick responsiveness). Any time the user interacts with the device, through a button, a touchscreen, a sensor, UI feedback should be immediate and obvious (even if it’s just an LED turning on). Good web applications are absolutely amazing in how they achieve that (through CSS, through JS, with nice layout / graphical design patterns).
It’s good to know this, I think I take for granted the experience I have in web dev. It’s just intimidating to be at the bottom of a large climb in a new discipline.
I did Linux kernel work for a decade at my old company. Left due to low pay.
Also worried about my employability. Not much call for C programmers in 2022. You’ll always fear losing your job.
I love low level though; I do embedded projects for fun! I can probably sling back-end Python for 1.5x the salary. I wish embedded paid better, but it doesn’t, and therefore I won’t help alleviate this “shortage”.
If you are ever looking for C opportunities, my team would probably like to be aware of you when the hiring freeze is over. We work on next-generation volatile and non-volatile storage projects including an open-source storage engine.
Not many. I did it; jobs are scarce. Most of the time you port support for a new CPU to the kernel or add a few device drivers. The industry does not need a lot of those engineers and, in my experience, they're not compensated that well either. These days many kernel programmers work for big companies.
I remember Apple having a lot of related listings so I'd assume companies that are somehow involved in OS development (Microsoft, Google, maybe RedHat/IBM and Intel).
A significant portion of kernel code is written by FAANG, for example. There are other companies that also pay reasonably well. You can check some statistics of contributions to the Linux kernel here: https://lwn.net/Articles/909625/
The defense industry has a few such jobs, working a lot with RTOSes, network devices, sometimes even embedded for signal processing/control systems, etc... The big defense contractors probably pay better than working directly for the government, depending on where you live.
Isn't this a sign of a problem, where important domains with hard problems pay little, while some dubious applications are throwing money at CSS plumbers?
There's a strike happening here in Ontario schools by janitors and education assistants and early childhood educators, because they want more than a 2% raise on their $40-50k/year jobs ($30k USD, and look at inflation #s...). The government is going to use a special "shouldn't be used" clause in the Canadian Charter of Rights and Freedoms to force a contract on them, ban the strike, and forbid collective bargaining despite it being a charter right. These are people who clean poop, shape young minds, keep critical systems running, and so on.
All of this to say: difficulty and importance of a job seems to have almost nothing to do with either the pay one gets, or the respect one gets.
No, it's always been the case. Just because something is difficult, doesn't mean it pays well. Otherwise, teachers and mathematicians would all be millionaires.
I feel almost exactly the same way as you. I've flitted around the research/applied research boundary for ML for the last decade+, so I write plenty of Python. I enjoy the way Python gets out of my way so I can focus on interesting research problems. But the actual act of writing code in C++ is so much more fun, once you get good enough at it that the footguns don't trip you up.
The embedded AI space is a pretty good place to make money writing C++. I was in autonomous vehicles for a bit. It didn't really interrupt my post-Google compensation trajectory, and I got to write performance- and safety-critical C++.
My local bus/transit agency was hiring an embedded programmer a couple of years ago, and while I thought it would be fun to do embedded stuff and get to work on buses/trains (!), the pay was like half my web dev salary. (Granted, there is a pension, but it's not that good.)
If the government did its job and we had sound money, and taxation were explicit instead of this wacky adjustable-and-unpredictable-devaluation that is inflation, there would be no need for cryptocurrency.
The point of money is to be spent, not to hold it. You can't have an asset that's both good to hold over the short and long term. (I forget where this is stated.)
That's because the point of an economic system is to trick other people into making food for you, and holding money instead of trading it obviously isn't going to lead to that.
> The point of money is to be spent, not to hold it.
Why? Why prioritize spending now rather than later? If I can't defer consumption, I will always need to work, and I can't retire. That would be financial oppression.
> You can't have an asset that's both good to hold over the short and long term.
I am abnormally curious why this is the case.
> That's because the point of an economic system is to trick other people into making food for you
I'd rather they make food for me when I'm old, instead of when I'm young and I can make it for myself. How is this an argument against saving?
> holding money instead of trading it obviously isn't going to lead to that.
While it's true that if everyone saved in the short term, we'd see persistent recessions, it's bound to end, as people start to want to spend their earned money.
In "Die with Zero", an argument is made to allocate and spend everything you've made, because this life is all you've got to do so. I agree with this book.
Even in extreme deflation, people buy things they need. For example, technology prices have been in exponential free fall for decades, yet today the world's largest companies have a lot to do with selling computers, phones, and/or software.
The only reason for government currency inflation is balancing the (wasteful) budget, after the government spends beyond its means. This allows soft-defaults (government paying bond coupons in a diminishing currency) instead of hard-defaults (government failing to pay bond coupons). But both kinds of defaults should be seen as bad, by investors.
To get an idea of the scale of the misallocation, compare the tax revenue to GDP with government spending to GDP. The US government pays for 44% of the yearly domestic product, while only taxing 9.9%. This amounts to a LARGE benefit to those printing money and spending it before price inflation hits.
> Why? Why prioritize spending now rather than later? If I can't defer consumption, I will always need to work, and I can't retire. That would be financial oppression.
I should've said spent or invested. You can save by turning money into I bonds or stocks for retirement, and that works because it funds something productive (stocks/corp bonds) or the government would like you to defer consumption due to inflation (I bonds).
But remember money (vaguely) represents stored up labor. In nature you can't retire because you can't save up labor; saving money isn't just like a squirrel storing nuts for later, it's also like if the squirrel could put off gathering them at all.
Long term investments (stocks) are better for retirement because they're riskier.
> I'd rather they make food for me when I'm old, instead of when I'm young and I can make it for myself. How is this an argument against saving?
By "other people" I meant farmers, so you're probably not doing that work yourself. There will probably be farms because other people are continually buying enough from them to keep them producing, but if nobody buys something for long enough it won't get cheaper, the market will cease to exist because nobody will produce it anymore. Saving money/retiring in this way is kind of parasitic.
> Even in extreme deflation, people buy things they need.
There was a Great Depression where people stopped being able to do that, you know. Deflation really upsets people. Deflation in Germany also got the Nazis elected.
It's not good to think about "the government spending beyond its means" as if it was a household. The government's the one that invented the money in the first place. A fixed money supply doesn't make sense on a planet with an increasing population that all want to use your money because of how awesome the US financial empire is.
And not only did the US fail to get inflation despite best efforts from ~1980-2020, other countries are seeing inflation now without extra deficits.
You are making a lot of interesting arguments. Thanks!
> In nature you can't retire because you can't save up labor
That is true. What I can do, I guess, is ensure that I will have what I want in the future. If I don't know what I want, then I want to buy a small piece of everything (index funds).
> are better for retirement because they're riskier.
From the very article you linked: "Having no earnings and paying no coupons, rents or dividends, but instead representing stake in an entirely new monetary system of questionable potential, cryptocurrencies are undoubtedly the highest risk investment known to man."
Of course, here it seems Wikipedia is a bit opinionated, and gambling would be an even higher risk investment. But at that point I'm sure the risk-return relationship would break down.
The Kelly criterion is the optimal way to size how much risk to take over time. If there's even the slightest chance that losing a bet/investment will leave you with zero wealth, then you may not place all your wealth on that bet.
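For reference, the single-bet form of the Kelly criterion, for a wager that pays net odds b with win probability p:

    f^{*} = p - \frac{1 - p}{b}

f^{*} only reaches 1 (stake everything) when p = 1; any nonzero chance of losing keeps the optimal fraction below your full bankroll, which is exactly the point about bets that can wipe you out.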
> Saving money/retiring in this way is kind of parasitic.
As some people save, others spend. As I mentioned with "Die with Zero", I will spend all my money eventually. If people do not synchronize their spending with the rest of the economy, the effects will average out, and one individual does not matter. Unfortunately, people tend to buy high and sell low, going on trends. And I've noticed both national and cryptocurrencies go through this - albeit with the interest rate mechanism, national currencies don't drop 80-90% from time to time.
> A fixed money supply doesn't make sense on a planet with an increasing population that all want to use your money because of how awesome the US financial empire is.
As the population growth slows, or capital reaches diminishing returns as some other finite resource is depleted, it is only responsible to think of the economy as a household, and the money as a reflection of real, existing goods and services, rather than future ones, because future ones might not exist, and debt will become less "productive".
I distinguish between "productivity" of a debt and its yield. Taking on debt means signing up to pay future interest. But the resources you receive in exchange might make it worth paying interest, or might not. This is what I call "productivity" for lack of a better vocabulary. And interest rates or yields are orthogonal to this.
> The government's the one that invented the money in the first place.
The government merely partly captured the monetary velocity multiplier effect caused by fractional reserve.
Fractional reserve was invented by private banks, which create most of the money supply. In spite of their enormous power, and the enormous profits in fees and interest as a result of money creation, banks still go bankrupt by abusing their power, requiring bail-outs (with public money) or bail-ins (with depositors' money).
One such bail-out was immortalized in Bitcoin's first block ("The Times 03/Jan/2009 Chancellor on brink of second bailout for banks").
> And not only did the US fail to get inflation despite best efforts from ~1980-2020
In 1980-2020, the CPI went from 82.4 to 258.8, a ~3.14-fold increase, or a 3.14^(1/40) - 1 ~= 2.9% compounded average growth rate. That is not failure to get inflation; it is overinflating by 45% compared to the 2% objective.
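Spelled out, the compounding works out as:

    \left(\frac{258.8}{82.4}\right)^{1/40} - 1 \approx 3.14^{0.025} - 1 \approx 0.029 \approx 2.9\%\ \text{per year, vs. the}\ 2\%\ \text{target}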
What we are seeing now (>10% inflation) is the result of irresponsible pandemic government budgets being mopped-up by the central banks.
By the way, PPP cost $170,000 to $257,000 per retained job-year. I bet employees on payroll during the pandemic were not paid that much.
Yep. I build glorified CRUD apps in NodeJS + React, my friend works on some embedded C++ stuff.
- My working hours are way more flexible. I pretty much only have to attend meetings which are rare, so I can basically work whenever I want during the day. That means that I can go to the dentist and stuff like that without taking the day off. She has pretty strict hours.
- I can work from anywhere, only requirement is decent internet connection. She has to go to the office because that's the only way she can actually test the code she writes for physical devices.
- My salary is basically double what my friend makes.
She's currently learning JS so she can just move into the web space. If someone can choose an easier job with a much better salary, benefits and working conditions, they will do that without thinking, unless they reeeeeally like C++.
Probably less related to C++ as a language and more so an "embedded" issue or down to the specific industry that your friend is in.
E.g., there are hundreds of C++ devs at my company that have the same work from home options and flexible hours as their frontend peers. So these jobs exist.
Why is API design/backend engineering the only software discipline that gets maligned like this? These are bread-and-butter operations. I don't mean to attack you, just noting that I never hear anyone talk about mobile development in the same way, for example.
A lot of CRUD app development feels like tedious, repetitive busy work. Data entry was a solved problem in COBOL if not earlier, it's not gotten any harder in the decades since, it's just gotten more tedious.
There are generic data entry tools that solve the entire class of problems in that space. In web tooling there are things like Django's Admin app. In "the ancient world" there is Excel for good and bad and ugly. But those aren't "branded" enough. Those aren't "experiences" that some UI designer has sweated and cried over until they were beautiful. Those "don't understand the domain" or "don't support our processes" or "don't know that we are a special snowflake with special snowflake data" hard enough.
So you just rebuild the same types of things over and over again every time the UI designers get bored or the company decides to use a slightly different version of the same data. Sometimes it can feel a bit Sisyphean.
The same can be said for dentists or architects or chemical engineers or whatever. Teeth and houses and oil refineries are “solved” problems in that we know how to do it.
But each instance is a little different. Each customer wants their flavour of the problem solved.
Long story short: don’t get into a line of work if you don’t like churning out multiples of the same thing for years.
> The same can be said for dentists or architects or chemical engineers or whatever.
- Dentists have dental hygienists that do the day-to-day grunt work so that dentists can focus on the real problems/exceptional cases (cavities, root canals, etc).
- Architects build the plans, but they leave it to construction workers to actually construct the project.
- Chemical engineers generally work with staffs of chemists and other roles that take the engineered solution and apply it day-to-day.
Right now, software uses the same job titles "for everything". There's (perhaps intentionally) no differentiation between the people expected to solve the hard problems/engineer the tough solutions and the people hired to plug-and-chug their way through their 15th yet-another-CRUD-app that year. There are complaints in the surrounding threads even that some of the "drone work" pays better salaries and has better hours/corporate cultures than the harder stuff. It's an interesting "upside down" situation compared to even just the three fields specifically referenced here.
I went to an engineering school with the expectation that I would be doing software engineering not just in name but in role, but most of the jobs I've ever worked were paying me to do not that. I certainly know friends who are chemical engineers that also perform the role of chemists for their companies, but those are clearly distinct enough job descriptions with a big enough salary distance that those companies know that any hours my friends put in as chemists rather than their hired job is over-paying those hour rates by a considerable enough amount that they have reason to hire cheaper chemists. I have never seen a software job consider that they may be hugely over-paying a software engineer to do "software development grunt work". Without truly separate job titles and salary considerations that is forever going to be opaque to the accountants at companies.
Long story short: other professions clearly delineate between jobs that are (creative) problem solving and jobs that are more "grunt work" like Ikea-assembling CRUD apps. Why don't we?
> Long story short: other professions clearly delineate between jobs that are (creative) problem solving and jobs that are more "grunt work" like Ikea-assembling CRUD apps. Why don't we?
Is that even possible? It's difficult to separate grunt work and problem solving, because you often need similar levels of context to solve both. They also tend to intertwine a lot.
Of course it is possible. There's just currently more reasons for companies not to care and not to do it than to do it: Capital P Professions have education requirements and licensing/certification commitments. Capital P Professions have ethics bodies and mandate professional standards. Capital P Professions have professional societies that sometimes can organize industry wide negotiations (not quite to the same extent as Unions, but kin to it).
I don't think it is a technical problem keeping software from better sorting its various types of jobs by difficulty and types of problem solving. I think it's far more corporate politics and sociopolitics and a general lazy preference for the current status quo (because it works to company's favors in terms of job description opacity and keeping pay scales confused and under-valued and, uh, not having to worry about "quaint" "old timey" things like professional ethics investigations).
Software Engineering is also a capital P Profession in the countries where it is a professional title, and not something that one is allowed to call themselves after a six-week bootcamp.
I think there's truth to this but you're glossing over details that are critical. If the amount of variation between products were as countable and predictable as you paint it, then you'd only need designers and a CMS specialist who can configure the product. As a web shop, this is much cheaper to do. There are tons of website builders today, which have saturated the "simple" market, but "intermediate" customers have small variations that still need custom integration work.
All in all, saying that dev work is repetitive is a hard sell, because if it were, you could just automate it. And we clearly haven't automated even the space of medium-complexity web apps yet.
I pointed to two clear examples where we as an industry have automated it (Django's Admin app, Excel), and I could name tons more (Access, Power Automate, InfoPath, Power Apps, SharePoint Apps, SharePoint Lists, and those are just the Microsoft side of the list; you mention CMS specialists and we could list out of the box CMSes for days).
> still need custom integration work
Define "need" here. I already threw some shade at this by accusing many companies of thinking their every need is a special little snowflake that needs lots of custom tweaks and custom specifics. In my practical experience so much more of that is "want" rather than "need". They want to feel special. They want to feel like they have control. They don't want to pay for the cheap out of the box CMS because it might imply to their executives and shareholders that their entire business is cheap and easily automated.
Some of these CRUD apps are truly "John Hammond syndrome" apps: they want the illusion that no expense was spared. (And just like John Hammond sometimes confusing building out the gift shop and adding fancy food to the restaurant in the Visitor Center with spending enough on redundancy among the critical operations work and staff.)
As someone who has done .NET, C++ embedded, Python and NodeJS, I have to say that picking up NodeJS and creating APIs at scale with full automated nightly test suites using Docker and Postman/Newman was very easy to learn and very fun. Python is up there as well, but I had to work on Django and not some of the simpler API frameworks that look nice.
It's not maligned. I've worked on some complex backends some time ago and I would never call those glorified CRUD apps. But my current project is basically a Node backend with almost zero business logic. You hit a GET endpoint, it returns someORMRepository.find('events').where({ category: 'FUN' }). That's it. The React side displays a table with some basic sorting and filtering. Editing and creating an entry is just a basic form. I don't see what else I could call it; it's not that much different from the CRUD demos you see in blog posts.
> Why is ... backend engineering the only software discipline that gets maligned like this?
Where do I even begin? The intrinsic difficulty of most backend problems is very low - read some customer data, save it to a database, call an external API, send data back to customer. The only effort you should have to put in is fighting boredom.
The web dev industry managed to overcomplicate this task to the point where even small startups targeting niche markets have architectures inviting race conditions over distributed systems with tens/hundreds/thousands of working parts.
It doesn't have to be like this. The problem is that your average web dev doesn't know how to scale down (optimize for space/memory/disk consumption), so instead they scale up (more computers). Scaling up isn't necessarily a problem if you know what you're doing, but I've seen a bunch of super-principal engineers regurgitating the popular scaling up buzzwords without actually understanding the tradeoffs. They choose a technology because Google is using it.
It's not fun to fix deep systemic problems in distributed systems when the system has already been running for a long time, and there's a large number of devs working on it. You can't just say "ok, everyone stop working, for a while, we'll take a couple of months to rewrite everything, the customer can wait".
What's worse is that these types of issues would've been obvious from the very beginning to anyone mildly curious enough to imagine what the future of such a system would look like.
Another type of common issue is slow queries, and the common "solution" results in eventual consistency.
I'll stop now.
> I never hear anyone talk about mobile development in the same way
Mobile development is just as bad, maybe worse. One overly complicated framework (Android), and another one that's fenced-off to non-Mac developers.
It's also the companies that use C++ in my market (Embedded, Germany): They are either "old" industries (cars, car-parts, industrial machines, military equipment, embedded stuff) or consultancies working for these companies. Very few of them have any real flexibility nor do they care about their employees' wishes much. I have been looking for a 20h/week remote job (I have 5+ YOE) in this field for a few months and basically all offers were crap in one way or another. Negotiating your contract beyond salary and vacation days is extremely non-standard. Working remote is not a thing - best you can do is work from home, often with clauses allowing them to cancel this agreement any time, or with very weird restrictions around your workplace. There is tons of red tape in every single bigger company. I'm still deciding between two offers, but it is very likely I will leave the C++ Embedded field and work in the python market in the future.
I am at a bit of a loss here. On the one hand these very companies cry publicly about a lack of skilled workers, on the other hand you have to fight hard to get your market price and they will not budge on downright immoral clauses (such as not getting paid for x amount of overtime per week) or remote work.
Sounds like a completely different world from where I work in Cologne. We're having trouble finding good Java developers, so we're basically dropping requirements left and right. We'll even interview people without a resume and we're far more flexible on remote work than we are in the rest of the company.
Embedded opportunities have been slowly shrinking for years. For whatever combination of reasons, a lot of employers think that embedded work is easy or otherwise doesn’t require a large budget.
It’s increasingly bizarre to get a well-designed IoT device with a very polished mobile app and web UI, then struggle with hardware factory resets and firmware upgrades because the embedded side of the product didn’t get the same level of attention.
It’s like embedded somehow became an afterthought in the industry. Perhaps because it’s the only part of the system that doesn’t have a highly polished UI layered on top of it? Over the past decade I’ve witnessed multiple companies over-focus on anything that goes well into slide decks (UX mock-ups, animations, etc.) or generates vanity metrics (number of backend servers, requests per second to the cloud) while ignoring anything that doesn’t have a visual pop to it (embedded firmware).
"Programming is just typing" was a typical management refrain when I was in the embedded field (more properly, now I'm adjacent to it). It was frustrating. Computer scientists and programmers aren't Real(tm) Engineers so they don't deserve as much money. They can't be in charge because you can't have engineers answering to non-engineers (ie, very few leads let alone managers coming from the software side). Which leads to a culture that's overly hardware centric with insufficient leadership/management understanding of what software actually entails.
Also, "This doesn't meet the current requirements, fix it with software!" The best one was when the case wasn't waterproof... How the fuck is software supposed to fix that? They literally expected the software team to work magic. A lot of pushback got that requirement kicked back over to the mechanical engineering team to address, but it took months. Moronic.
It might also be that the management chains in embedded are likely former engineers or EE people. In web you get a lot of management layers where the boss can't do their subordinates' jobs: that's the perfect recipe for high pay.
In embedded and other engineering (but non-software) firms, the management is typically engineers who CAN do their subordinates' jobs; they just don't want to.
> They can't be in charge because you can't have engineers answering to non-engineers (ie, very few leads let alone managers coming from the software side). Which leads to a culture that's overly hardware centric with insufficient leadership/management understanding of what software actually entails.
In hardware-centric orgs, software developers are a small step above technicians in their pecking order, sometimes below. The pecking order itself is annoying enough, but when you switch from designing your own ASICs to buying COTS dev boards and primarily only adding software to it, you're not really a hardware company anymore. But it'll take another generation for them to realize it, or a severe crunch if someone comes along and realizes that they can pay embedded devs $200k+ and eat the lunch of half these companies.
Many hardware companies still see software as just another line item on the BOM: Like a screw or a gasket. It's something you build cheaply or buy from a supplier and sprinkle it on the product somewhere on the assembly line. These hardware companies have no concept of typical measures of software quality, building an ecosystem, release management, sometimes even no concept of source control. They tell an overworked embedded software engineer: "Go build something that barely meets these requirements, ship it, and then throw the scrap away." like metal shavings from their CNC machine.
At a previous company our firmware was literally called by a part number. So I would regularly work on the repos 5400-520, 5400-521, 5400-524, 5400-526, etc.
I remember an embedded company I joined; when I asked how they manage releases, the eng manager said, "well, we find an engineer who has a copy of the source code that can successfully build with no errors, copy it off their workstation (debug symbols and all), and send it to the factory to get flashed onto devices." Total clown show.
Thanks for bringing up weird memories. I remember software not having a name and version but a part number. As if it wasn’t living and evolving as it needed to be as a networked firmware.
Perhaps I'm missing some deeper use case here. More complicated firmware projects can have only part of the system loaded during production, namely the bootloader and some system level image(s). The firmware that has all of the business logic can be pushed/pulled when a customer activates it much later on. How would a part number for this image (or really set of images) be useful?
In your first case, imagine that you have a contract manufacturer that is told to build something according to a particular Bill of Materials. You change the firmware and assign it a new part number (or assume that the version is embedded in the part number). Internally, the BOM is updated with this new part# and as part of your process, the manufacturer is sent the new BOM. Manufacturer goes to build the product and discovers that the firmware they have is a different part number than on the BOM. If not for this, they'd be building with the wrong firmware version.
In your second case, if the only person loading it is the customer, a part number may not solve anything other than the business managing inventory. However, if you're already in the habit of assigning part numbers to everything you build (I have come to be a big advocate of this), then it really is just part of the process.
I've seen a mix of both: there is a standard firmware version for the hardware combined with a set of customer customizations. In this situation, not having a unique part number for each combination (of firmware + customer config) resulted in confusion, angry customers and a manufacturing department having no idea exactly what it was that they were supposed to be building.
Yes, there are other ways of solving these problems but assigning unique numbers works well enough.
To play devil's advocate - are there any (useful) measures of software quality? Even this place is mostly programmers and we can't even agree whether we should be writing unit tests or not.
Sort of. There are accurate measures with verifiable predictive power. But useful depends on cost/benefit, which in turn depends on ability to implement and market forces.
There's a company that looked at reducing critical defects from a sort of actuarial perspective. They have a few decades of cross-industry data. I've used their model, and it works. If you don't need a numerical result, you can just read the white paper about what's most important [1].
So to partially answer your question: unit testing reduces defects, but reducing defects might not be worth the costs to you.
And defects might not be the only thing that matters. There are other measures of goodness, like maintainability, which complicates the answer. You'd have to collect your own data for that.
I’d say for microservices and large distributed systems, you do need a pyramid of testing, with most coverage at the unit level. The system is just too large and continuously changing as all the different versions of services release.
this is grimly funny to me because where I work, software is a literal line item in the manufacturing BOM, each release gets a part number and is physically shipped to the factory
it makes some sense, but the company mindset about the role of software is very clear
One thing I came to see working in both web and embedded for two decades now: a lot of embedded developers often miss the “product” side of what they are building. This probably doesn’t explain the lower pay, but it might be a reason why embedded overall doesn’t get the recognition it deserves: the embedded engineers don’t know how to communicate their value / provide more value to the business.
This is becoming increasingly important as you well note, where devices are all connected, and things like setup and updating and connectivity are crucial. Designing not only a robust, but a user-friendly firmware update process is actually a lot more work than just building a bootloader: you need to communicate to the user, in realtime, what is going on. Cancelling an action needs to be immediate and provide feedback on the process of the cancelling. Error handling needs to provide useful information, and probably a special UX.
These do need to be factored into the embedded software right from the start, because they significantly increase the complexity, and it’s extremely easy for management to miss how crucial that part is. I keep a few horrible Chinese consumer electronics devices on hand (webcam, MP3 player, mobile phone) to show what I mean. The only difference between an iPod Touch and a no-name MP3 player with a touchscreen is… the software.
Having to press 3 inaccessible buttons, connect a USB volume named “NO NAME”, have it hang for 2 minutes when unmounting, then show a black screen for 3 more minutes, before showing … that it didn’t update, vs a smoothly progressing update progress bar showing the steps, the devices showing up in my online dashboard as soon as it reboots, that’s what my value as an embedded engineer is.
There was a time in the late 90s/early 2000s where this happened to driver development on the (Classic) Mac. Companies would make some USB device and get a reasonable driver made for Windows (I assume - I wasn't using Windows at the time). Then they would say, "Well, MacOS is 10% the market of Windows, so we'll pay 1/10th for someone to develop a driver for this." But it turned out that USB worked completely differently on the Mac from how it did on Windows, so none of the Windows code was relevant at all for the Mac devs. They would either get what they paid for (which was terrible for users) or they would not get a Mac driver. This is around the time I stopped buying any device that required installing a driver. Many of these devices didn't really need one because they were regular USB-spec devices (keyboards, scanners, etc.) To this day, I will not install a driver for a fucking mouse. Why would that be required?
> It’s increasingly bizarre to get a well-designed IoT device with a very polished mobile app and web UI, then struggle with hardware factory resets and firmware upgrades because the embedded side of the product didn’t get the same level of attention.
Why? The issue is that you have to actually ... you know ... PLAN when you have an embedded device.
You can churn the app and the web and the backend infinitely so there is no penalty for doing so. If you take that attitude and apply it to embedded you wind up with an expensive pile of scrap.
Yeah, I sorta specialize in that whole IoT firmware update/fleet monitoring/make sure everything at the edge runs smoothly end of things, and if you find a company that realizes that this is something that MUST work smoothly if they're going to scale, then it's a very sweet place to be. Even better, that sorta work combines low-level C++ with lots of back-end web service work, so you're never 'just' a C++ programmer.
A lot of C++ jobs are at FAANG companies. At least at the ones I’ve worked, nothing serious (ie in prod) is implemented in Python or Ruby. It’s Java for stuff that doesn’t need to be fast, C++ for stuff that needs to be fast, and Go for random stuff where people were able to shoehorn it in.
I think the problem is more that asking someone to accept low pay to work in C++ (one of the hardest languages to be productive in) doesn’t make any sense. If I’m good at software and know C++ I’ll work at a FAANG, AI company, self driving, or HFT/Hedge Fund for 3x-10x what a random C++ embedded role would pay.
I left embedded for web about a decade ago and doubled my salary overnight while taking on a role that was less demanding. My experience from embedded gave me an advantage over colleagues, specifically with regards to troubleshooting systems and performance problems, (edit) and the ability to read / understand the C/C++ code that so many of these languages, their standard libraries, their extensions, etc., are implemented in, that has carried forward to this day.
I'd love to go back to embedded but I can't cut my salary in half to do it.
This has been my experience in the US. C++ is my favorite language. I learned programming using it in the mid nineties. I still keep up with developments although I wouldn't consider myself the most skilled with it anymore.
The only job I've ever had that used it was a civil engineering company in a small city in the deep south, and it was mostly just C. The pay was good for the area, but nothing spectacular.
I moved to Seattle and the only C++ jobs were at FAANGs, and only a small portion of jobs at those companies. I worked at two FAANGs and only used Java and C#. I learned frontend web stacks largely due to job flexibility and it almost always pays the best vs amount of stress/work needed to put in.
Yeah I could probably write C++ at <insert FAANG> for 2x the salary but I'd also have to work 80h work weeks and deal with FAANG internal politics and sabotage from coworkers, depending on FAANG and which variation of "don't call it stack ranking" they use this year.
On the other hand I can use TypeScript and work from my home office. I've been considering moving back to the south just because much lower living expenses, family, and availability of remote work for web stacks. I can't get that with C++.
I don’t know, I guess it depends on location and such. I work as a C++ dev with computer vision related stuff. I have an ok salary and very flexible working conditions. And I haven’t seen any web related jobs in my region which seem technologically more interesting.
You are correct about the number of job openings, though.
I'll second this, I work as a low level C dev on embedded stuff. Good salary for the region, company pays embedded software devs at the same rate as high level app and web devs, and have somewhat flexible working conditions. This is just an anecdote I know, but I am very surprised at the general consensus at how bad the embedded positions are, it hasn't been my experience or my peers at least.
> Apart from finance, pay is lower than web languages. And finance is small.
And it's blasted to hell with bureaucracy, red tape, toxic work environments and a reputation for having to deal with infrastructure best described as "fossilized". Banks have no one to blame but themselves (and maaaaybe a bit the sometimes insane requirements of financial regulation agencies) for being unable to attract programmers.
At least in Germany, the fintechs have never had much trouble attracting developers... so it's not the finance industry itself, here in Germany it is definitely a culture problem in the established banks that historically have treated IT purely as a cost center instead of the integral part of business they are.
i have worked for several investment banks, and have always been highly paid, and had top-notch hardware and software to work with - i am sure that most competent banks realise that they are basically software hosts.
Investment banks aren't your typical consumer banks though - less red tape (because they're not consumer banks), less historical baggage (consumer banks have accounts that are sometimes well over a century old, which makes everything that touches account data incredibly sensitive as the data needs to be always consistent), and way more money available. There's a reason why a lot of advances in communication came from the needs of the investment banks, particularly specialized hedge funds / "quant banks".
It's amazing how badly embedded programming and C++ programming in general pay compared to the others you mention like Python & Ruby. A good C++ programmer has to know a whole lot more (and be careful about a whole lot more) than a Python or Ruby programmer does. C++ is well known to be a complicated beast - probably the most complicated programming language in existence with plenty of footguns. And an embedded developer needs to know a lot about both software and hardware.
Yes - I used to program in C++, and left it for another job. 2 years later, when looking for other opportunities, I realized how much of the small details in C++ I'd forgotten, and didn't want to go back to all those minutiae unless it paid more.
I'm about a decade removed from a C++ shop and I disagree with this.
I've found C++ shops have "lower" standards for C++ developers. I'm putting "lower" in quotes here because I'm talking relative skill within a given language. It just seems way more common in C++ shops to have situations where "20% of the developers do 80% of the work". This isn't to say there's dead weight in Python/Ruby shops, but my experience in the C++ world was there was always a small group of developers doing most of the work and this is considered normal whereas the same situation in a Python/Ruby shop would be a major crisis.
Despite the demand, if you're a low output Python/Ruby dev you'll likely struggle to hold a career together; hiring will be a slog and you'll get squeezed out of orgs with PIPs every 6 months. The same low output C++ developer could probably stay gainfully employed once hired.
Hopefully this fact might encourage others to pursue C/C++ jobs. There is zero expectations towards being a "rockstar" - if you know the fundamentals and can plod through work at whatever pace you're comfortable with there's probably a job out there for you.
I've gone from Python to embedded C++ recently and this is my experience, although I would add that C++ devs know a lot more at the lower level of abstraction such as Linux, toolchains, etc which makes them seem like wizards. Outside of embedded, a good Python or NodeJS engineer has opportunities to do more automation and value added activities such as CI/CD, test automation, devops, etc.
This might be true for embedded C++ programmers, but it's not true at FAANG or finance companies, which accounts for a lot of C++ programmers. I'm in San Francisco and I wrote mostly Python/Go at web companies for the first 10 or so years of my career, and write C++ at a FAANG now. I'm getting paid significantly more now than I was before. At my previous job where I was writing C++ I was making $286k in cash ($220k base salary + 30% bonus target) plus generous stock compensation. Most people writing Python, Javascript, or Ruby are not getting paid that much in cash even if they're working at a unicorn startup in SF.
Yeah, but it is kind of unique in that the skillset is in demand for two different types of business, and one has a drastically higher profitability and demand for people. That by itself isn't that unique, there are any number of jobs that don't exist because the qualifications would make employees too expensive. But we actually rely on this stuff for our modern world, we need these jobs to exist, but we won't pay for it, so you only get those who love the work, and those who are too bad to do anything else. So far, that has been enough. Teachers, Nurses, and Vets are similar, so I don't guess it really is that unique. And we are seeing shortages in all of those too.
a few issues here - c++ is not used that widely in embedded (most prefer c or a small c++ subset). and ruby? i can't remember the last time i saw a post about ruby here. and finance is huge.
Ruby still has relevance as it was the lingua franca of the 2010s-ish startup scene. These days those startups have become veritable big tech companies in their own right - Stripe, Uber, AirBnB, and so forth. While many of those companies have started integrating other languages, they still have massive legacy Ruby codebases and thus demand for Ruby engineers.
all those companies you mention seem to me to have a 50/50 chance of going down the tubes. not because of their use of ruby, of course. still, i don't see any company started today to base their software on ruby. probably just me being wrong.
as a rails dev, if I were to start a new project today I would still pick rails. It makes building web apps a breeze. The technology is mature, stable, active and still staying modern in terms of integration with modern JS
you start running into problems as you scale, but the reality is you will run into scaling problems regardless of what technology you use, and the ability to move quickly and iterate is much more important for new projects than solving scaling problems before they exist
haha but then I have to learn the entire .NET / windows ecosystem which is a huge jump considering i've only ever developed on mac/linux. I am using wsl now though
and running circles won't matter because for most web apps the DB is usually the bottleneck anyway
But you can use .NET on both Linux and Mac. As for DB being the limit, usually that's only the case for simple CRUD apps. In microservices and high load apps, performance matters.
microservices start being useful when your monolith becomes too large for your engineering department to work on simultaneously. If you force good engineering practices and quality code reviews, you can scale this up to at least 100 devs. Microservices are more about Conway's law
high load apps I agree with, pick the technology that is appropriate, but again, for new projects I would say any technology that gives you speed of development (like rails) is far far far superior to speed of the technology.
I'm also surprised C++ is paid less than Python and JavaScript these days; embedded C/C++ jobs also pay less, even though they take years of experience to get good at.
C++ shops are a diverse beast. There's the legacy MFC desktop app from 1999 dentists are using to upload dental imagery, there are high-profile Windows applications and games, there are also cutting-edge ML, computer vision and simulation-related domains.
And it seems like most of the jobs in the domains where C++ is common want established domain experts and maybe a handful of new people coming in through university pipelines.
My last C++ job was for a robotics company a few years ago (pre pandemic). The job was not very “embedded”, but quite challenging - processing noisy images from lidars, etc. I worked 60 hour weeks and my salary was 80k or so. Then I realized I can get twice as much just writing Python micro services. So I became a Python developer instead. Much less stress and a lot more free time too.
There are no 7 figure dev roles in finance except in very select hedge funds where one time bonuses in 5 or 6 years may be that large. 6 figure is the norm. At 7 figures, you’re likely in management and not working on technical details.
Not at all. This is a complete falsehood spread most likely by the financial companies themselves. I worked at Investment Banks for many years, doing low level C/C++ type stuff in various flavors of algorithmic trading and high frequency trading. I left in 2014, because I got an offer for 40% more just doing pure web stuff in Javascript. In the years since I have more than quadrupled my TC, and my neighbor, who is essentially sitting in the seat I sat in when I was working in finance, in that period has upped his comp by maybe 40%.
And on top of that, I rarely log in after hours or on a weekend. In finance, my real breaking point came because there was just an absolute refusal to architect to be able to make changes during market hours, which are essentially 9-6 these days- most securities have a big enough of an extended session that you can't push changes until after. Any significant network changes, host swaps, etc... all had to be done on the weekends. In the web world, you had to bake the ability to make changes on the fly from the very get go, there is no off-time when there is no traffic. And in my last team, we actually avoided pushing changes with any significant risk on Fridays, because if something really bad did happen, it was going to be very hard to get ahold of the right people to diagnose and fix it...
I should have added that I worked in prop funds during that period as well, I left finance for a bit to do "pure tech" and then went back to a top N hedge fund until recently (and while there are always silly arguments about these things, N was rarely considered greater than 5) for about 5 years, and while yes everyone was paid nicely there, no one was paid 7 figures for their C++ skills. Quant researchers that were writing C++, different story, but they were paid entirely for their research/alpha generating ability, C++ was just a tool they used to get there. In fact, hearing about their hiring process, it was mostly math questions, I am not even sure there was a big in depth technical portion to their interview loop.
Similarly, there were some AI/ML guys that were rumored to be hauling it in, but this was not for their tech skills - though they were doing mostly Python, it was for their AI/ML specific knowledge. As was kind of typical at that place, and most places like it, those guys I think all flamed out and were let go by the time I left. While it's not easy to "score a deal" and get promised a very high package for a year or two, it's actually much harder in those types of roles to actually keep your seat there. But... if you are actually producing models that generate alpha/profit for the firm, then you are golden.
AI/ML was really just a specific manifestation of a larger trend: if you were on the bleeding edge of a capability that the firm wanted - IE had invented it, or were a very early successful adopter - my firm would have been willing to pay well above typical market to get that. Think along the lines of cloud (2016ish), Kubernetes (2017/18ish), "big data" (2016ish) capability, etc... an alternate route would be to have successfully engineered change in an org to adopt something like real SRE. Even for those types of things, I don't believe anyone was over 7 figures, but maybe? Regardless, the typical path there would be to kind of "burn and churn" those types - IE they build it, maybe it's even quite successful, and then that's your niche for the rest of your time there (which is not what most leader types want), or you don't succeed and just get pushed out pretty quick - SRE as a concept was something my previous firm took several stabs at hiring guys from Google for, but they just never made any real inroads.
At my shop, my boss makes 7 figures. Some of the other very senior engineers make those too. At HRT, Jump, it definitely happens more. Jane Street is not a C++ shop, they have devs making 7 figures too.
No. I work in HFT and this happens in only two cases:
1. At top-tier firms like HRT, Citadel Securities, Jump, TGS, RenTech; there are a decent amount of C++ devs making 7-figures. In many cases, it may depend on how profitable their desk is.
2. At most other firms (mine included), only very senior devs are making 7-figures. These are people managing or overseeing many teams.
This BS that HFT C++ devs make craptons of money has been spread by tech bros and college kids, who have never worked in HFT.
I've wondered why embedded tends to pay lower. C++ (and C) tend to be 'harder' languages for the average mainstream developer, particularly web developers. I guess I expect embedded jobs to pay more, yet they don't and like you said, pay less.
I started as an embedded software engineer in the early 90's, and at the time there were lots of well-paying jobs compared to other software engineering disciplines.
In the 2000s/2010s, at least in my area, embedded jobs dried up. Mobile development produced a lot of very high performance SoCs that were cheap and had high-quality, already-developed middleware layers (Android, for instance). They sort of conquered a lot of the embedded media processing space I was an expert in.
As a result I jumped ship to mobile, but it was much higher level programming far away from the SoC, and most of the lower level code was being written in China/South Korea.
This basically meant that the engineers who weren't able to shift weren't scouting around for, or finding, other jobs (in general).
So even though there is a small pool of engineers with these skills, a lot of people left the embedded space at a time when some of those jobs are starting to shift back, leaving a shortfall, but also a pay gap.
Yea, I moved from embedded to mobile (iOS) development pretty quickly. Similar problems/constraints but the tooling was an order of magnitude better. No more cobbling together non-working cross-compilers from some vendor's crappy BSP and praying they produced binaries that worked.
Said finance companies are also at fault. They are not willing to scale up their operations. They demand only the creme de la creme, but there’s simply not enough incentive to do C++ when the compensation is so bimodal.
I'm really surprised at how stable and widely supported Rust's FFI is.
I have several C++ projects that integrate a portion written in Rust, where the Rust project produces a .a file that is ultimately linked with clang into a larger C++ project.
I definitely agree Rust has a long road to adoption in embedded/low level systems, and particularly areas with custom compilers/toolchains that rely heavily on system specific undefined behavior.
But it's a lot closer than I had thought it was a year or so ago.
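To make that concrete, here's a minimal sketch of what the Rust side of such an integration can look like (the rust_add function is purely illustrative, not taken from any real project). Built with crate-type = ["staticlib"] in Cargo.toml, it produces the .a archive that clang then links into the C++ binary:

    // lib.rs -- compiled with crate-type = ["staticlib"], which yields a
    // .a archive a C++ linker can consume.

    /// Exposed with C linkage and an unmangled name, so the C++ side can
    /// declare it as: extern "C" { int32_t rust_add(int32_t, int32_t); }
    #[no_mangle]
    pub extern "C" fn rust_add(a: i32, b: i32) -> i32 {
        a.wrapping_add(b)
    }

Tools like cbindgen can generate the matching C/C++ header from a crate like this, which keeps the two sides from drifting apart.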
I agree. But I think it'll be hard to see Rust really make progress until hardware makers worldwide start really doing 'Rust First'. And the problem there, is that Rust is a bit inaccessible to many.
Rust trades off absolutely everything for performance - and that's just not the trade-off we want to make in most scenarios. Even for most embedded systems, something that's easy to program, easy to read, easy to support, with great tooling etc. is worth more than 'a bit faster performance'.
If we were to have created something ideal for embedded systems, it would not be Rust. I think it'd be a bit more like Go. Or just like a 'Safe C' with a lot better built-in libraries.
I like Rust but I fear it is not 'the one' and the bandwagon has already left the station so we have to go with it.
In the past few years I’ve discussed salaries with dozens of companies as a staff level IC. C++ companies pay significantly less even if the work is far more specialized and challenging. The real money is in “Cloud + python/golang”.
Dozens of highly profitable public tech companies?
There's really only AAPL, GOOG, MSFT, AMZN, FB.
Pinterest, AirBNB, Adobe, Intuit, Snap, Roblox, etc are usually a pretty big drop in pay - but usually above all but the highest of high paying startups.
The vast majority of actual cloud jobs - building cloud infrastructure - are low-level - not Python.
Are you talking about startups using AWS? I'm not sure that's a "cloud" job.
I left my last job which was entirely C++ because of lower wages compared to the industry and low upside in wage growth potential. While I enjoy the lower level nature of that kind of work why stay somewhere solving hard C++ problems when I can go do some easier web backend stuff somewhere else making 15% more or become a kubernetes expert and break into new pay band all together?
>solving hard C++ problems when I can go do some easier web backend stuff
As a guy who worked with both C++ and backend, I would assume you don't have much experience if you say one is harder than the other. Different beasts, different problems to solve; the complexity lies in different parts.
I did low level C/C++ stuff in the algo trading world until 2014, and since then have done a plethora of other things from node.js for a BIG e-commerce player, python, cloud architecture, SRE type stuff, etc... and every single job has been an absolute cakewalk compared to fighting against the various footguns and headaches C++ has to offer. No more fighting huge object hierarchies and having to put in hacks because making a change to the base class would require half the company to recompile, no more migraines from template compiler errors vomiting out on my screen, debugging template metaprograms, memory leaks, "oh crap this copy constructor doesn't do what I assumed it would do " type errors, "I have to read 10 different files worth of code to track down whether this legacy library is going to delete the object for me or I have to do it myself" headaches, dealing with huge build times, etc... I could go on.
C++ is essentially 4 different languages rolled into one (C, C with classes/OOP, templates, template metaprogramming), and while I am sure greenfield entirely modern C++ projects exist and are a bit nicer to deal with, they are unicorns for most devs out there using the language daily.
Why would you dismiss my comment on my presumed experience? Seems a bit arrogant. Did I say all backend problems are easier or that all C++ problems are harder? No, I merely stated why work on hard C++ problems for less pay when one can work on easier backend problems for more pay.
Wasn't even a good comparison either, would be like calling a Ferrari the same speed as a push-bike because you saw the former driving slowly alongside the latter.
I've seen that too; ironic, since most scripting languages are written in C or C++, but languages such as Go and Rust are now self-hosted, so there are finally meaningful alternatives to either C++ or Java/C# (in the case of Go).
> In the meantime, JavaScript and Python is a lot easier to work with, with a higher salary.
I don't know - I legitimately think programming languages are simpler than web applications. Mostly stateless, mostly a big pure function. Compared to the anarchy and chaos of web services seems easy.
I meant that increasing amount of projects related to Python/JS and others that would previously be created in C/C++ is now created in Rust. Some examples:
TypeScript type checker written in Rust
Ruff – a fast Python Linter written in Rust
Introducing Turbopack: Rust-based successor to Webpack
Deno is a simple, modern and secure runtime for JavaScript, TypeScript, and WebAssembly that uses V8 and is built in Rust.
OK, but few are actively recruiting people for those projects, as a proportion of the whole job market. A few juicy jobs there and a huge pile of less well-paying ones means that any average is going to be low. The existence of those roles is great for those who have/get them, but it doesn't help the wider pool, who need to use other tech to get the better wages – a situation that results in fewer people newly training in C++ because those outside the pool see the low average.
>In my latest talk, I computed that we have 2 developers paid at full time to maintain Python: I am full time, Barry, Brett, Eric, Steve and Guido have 1 day per week if I understood correctly.
Now from what I understand situation is way better, but still - that's what it looked like just three years ago, when Python already had millions of people writing code in it.
You understand that it's the ratio and comparative numbers right? A single team of C++ developers creating that stuff can support infinity python programmers building on top
Isn't that nuts? It basically comes down to, if you're further from management, you're not valued.
Oh, you can change the color of the button on my website? 200k/year!
You eke out maximum performance from poorly documented devices and apis using obscure toolchains and custom built linux kernels to run on small chips that are the backbone of our business? 75k/year!
There's lots of C++ programmers out there. But they're bottled up in FAANG, I think. So you have to be able to compete with that.
Working at Google is on the whole a lot of C++. Major parts are moving to Go, certainly. But there's absolutely giant code-bases of pretty cleanly written C++ services, and most Googlers are quite proficient in it and there's whole teams at Google that work on improving C++ standards and techniques etc etc etc. Not to mention the Chromium codebase as well, which I also worked in. I get the impression this is also a thing at Facebook.
All of this to say: if you want C++ programmers, you need to pay competitive enough rates to pull them away from there.
(That said, if you're hiring for C++, I'm looking.. and I don't expect a Google salary, just a fair one. I'm not the most elite C++ programmer in the world, but I can write good code and understand your system...)
> All of this to say: if you want C++ programmers, you need to pay competitive enough rates to pull them away from there.
I'd modify this to say "All of this to say: if you want programmers, you need to pay competitive rates."
I worked in real estate for a while (as a software dev) and my boss, who was a realtor, always said "there's no house problem that price can't solve".
The labor analogy for me is "there's no labor shortage that salary can't solve". It may take some time (as people get trained up), but employers complaining about talent pool issues while refusing to pay more get little sympathy from me.
> The labor analogy for me is "there's no labor shortage that salary can't solve".
Perhaps not in SW, but there are plenty of labor shortages that salary can't solve. Some positions are simply not economically viable in certain markets - the pay required to get people to do them is more than a given business can afford. And I don't mean in the "cutting into the compensation of the business owner" type, but "not enough revenue to pay such salaries".
Try living in countries with very different economies (1st world, 3rd world, etc) and this becomes obvious. You'll see jobs in one country that simply cannot exist in another.
> labor shortages that salary can't solve. Some positions are simply not economically viable in certain markets
Your two sentences are opposite of each other? If someone can't afford to pay employees market rate, they need to rethink viability of the business not say that they can't find someone.
> If someone can't afford to pay employees market rate, they need to rethink viability of the business not say that they can't find someone.
Your statement is pretty much what I'm saying, with the added "in certain markets" clause. That a market can't support such jobs doesn't preclude it from being a labor shortage.
Think of it this way: A given business has a certain job that was economically viable, and they could find the labor to do it. Then over a few years the economy changes significantly (feds raise rates, cost of living changes, etc), and slowly that job no longer is viable in that market. The change will not be sudden, so there will be a period of a few years where businesses still find people to do the work, but the pool of such employees keeps shrinking before it hits zero. It's fair to call that a labor shortage in the transition period.
> It's fair to call that a labor shortage in the transition period.
I don't know about fair, but I find it misleading. When I hear "labor shortage", my intuition is that just having more people around (Advertise job offers better? Give more work permits?) would solve it, not that the market is such that this type of work doesn't make financial sense anymore.
"Labor shortage" sounds like some macroeconomic temporary situation to which businesses shouldn't need to adapt to beyond temporary measures, while what you're describing seems like a failure from a business to adapt to a new stable situation.
Although the pound has recently crashed, the general rule for comparing UK salaries to US ones is to 1.5x it to dollars. A £150k salary in London is equiv to a ~$200-250k salary in the US. Not bad, I'd say.
While it's of course good by local UK standards, I still think London sometimes needs to recalibrate to pay closer to US norms. Cost of living in London is often just as bad as in major US coastal cities. At my work, they recently introduced a London office paying eng roles in a similar salary range, with lots of opportunities to move there and help provided for relocation fees. For many, the interest evaporates when they see the typical dev pay package and CoL in London.
London can be as expensive or cheap as you want it to be. I worked for a startup on 23k when I first started working and my now wife was an undergrad, we still had a studio in Central London (zone 1). On the other hand you can have a town house in Belgravia for prices similar to Atherton.
> The labor analogy for me is "there's no labor shortage that salary can't solve".
Assuming the labor you need is fungible.
For a sandwich shop, raising pay will probably solve a labor shortage. But even salaries of $10,000,000 a year won't produce more surgeons. It might inspire more people to get into the pipeline, but that's a 10-15 year lag.
It will solve the shortage by allocating labor to the most economically valuable uses.
If you can't find developers at the price you can afford to pay, then make do with fewer devs, raise your prices or go out of business. The market doesn't owe it to you to make your business model profitable.
There might be a 10 year lag creating more surgeons from scratch, but you'll have existing surgeons suddenly working for your company today.
Your 10-15 year lag assumes untrained workers. You'll have a stream of people transitioning from other medical professions or surgical specialisations to your $10mm job category, many of whom require much less than 10 years of additional training (1-2 years? on the job?).
UK salaries on the whole are shockingly poor. That's actually a really decent salary here. Devs can start anywhere from £20k to £35k depending on background/where in the country you are.
Perhaps. I note that position is being advertised by Oxford Knight, a high-touch recruitment firm. Their main modus operandi is to make contact with someone looking for a job, find out what they are looking for, and then suggest particular openings on their books to them; they would have plenty of opportunity to explain about the compensation. I think these public job listings are a bit of a cheap additional thing, and aren't that important to them.
I would recommend getting in touch with some recruiters from Durlston Partners. They are fairly upfront about the compensation on offer, and for C++ in London it is significantly better than the numbers I've seen in this HN thread. For example, I've seen roles with base salaries of £250k and generally large (or even uncapped) bonuses on top of this...
In my very limited experience (n = 1), hours per week is the same as anywhere else i've worked.
One quirk is that the trading day starts at seven in the morning, so at least one dev in the team has to be awake(-ish) and logged on to handle any technical issues. Before the pandemic, that meant being in the office, so an 05:45 alarm clock for me, but these days we do it from home. Another quirk is that trading winds down after six in the evening (exact time varies), and some releases have to wait until after that, so sometimes you've finished a piece of work but have to loiter to actually release it. Again, these days we can do that from home, so less of an impact, but still can be annoying.
Also, my team does its own out of hours support on a very unstructured basis, so you could get an alert in the middle of the night (but not weekends) you need to respond to, although this is rare (and is mostly in our hands to keep rare!).
I expect that better-organised teams have better ways of dealing with all this!
That's another load of BS spread by people not in the industry. They equate HFTs and quantitative hedge funds with investment banks. At most HFTs, mine included, devs are mostly doing 40-45 hours a week. At some places, like Citadel and Headlands, the work hours are terrible (still not IB-level bad), but at the rest it varies between 40-50.
In central London perhaps. But it's by no means important to live actually in London and work in London. Commuter culture is (or at least was) big. Many people commute from the surrounding 100+ miles by train each morning and pocket those London salaries while living in low cost of living areas.
From what I understand, Google is really about a crippled subset of C++, that people jokingly call "C+-".
I ran a C++ shop for 25 years. I used to program in it, but stopped, many moons ago. The new C++ is a huge change from what I knew.
I am expecting to see a lot of hate for the language, in this thread.
Regardless, it is a very powerful language, and it is not for the faint of heart.
I attended a Swift conference, many moons ago, and one of the speakers was this wonderful woman that had recently moved to Adobe (a C++ shop).
She was supposed to speak about Swift, but ended up speaking about C++, and the wonderful, supportive community of older, experienced developers she found, around the language.
I loved it, but I'll bet a lot of the folks around me, were squirming.
All sane C++ codebases must use a defined subset of the language. Google's is one that works for them, with some pretty strong standardization. It's a good set of compromises. But you won't get far programming in it if you don't know the broader semantics of the language.
I didn't do C++ from about 2003 to 2013. When I came back I was delighted. It's so much better.
I liken C++ (and other "industrial" languages, like PHP), to "advanced" tools, like the specialty tools and brands, that only professional mechanics know, like OTC. You won't get them at Home Depot.
The language definitely has its niche, and I am glad to see it not being used for standard GUI programming, anymore.
But for that niche, there's nothing better, and it's a big niche.
There are only two types of C++ shops: the ones that use a rigorously-enforced subset of it, and the ones where the codebase is an impenetrable hornet's nest. You must decide on a subset of C++ to use or you'll go insane. Google made some opinionated choices and tradeoffs about which subset to use (as did Microsoft and every other company with a huge C++ codebase), and it's possible to disagree with the specific boundaries they chose, but the mere fact that they chose to do this is unimpeachable.
C+- is pretty common. I last read the Google C++ guidelines maybe ten years ago. They were pretty much in line with the decisions that other large C++ projects I've worked on had made, like we all encountered the same footguns and language misfeatures.
Every 5-10 years there's a big shakeup and the C++ culture changes, but it takes a long time for things to filter down into the embedded world (for instance).
Perhaps in the past, but they aren't too terribly far behind C++20 and a bunch of stuff that has been added to the language over time (like stringviews) was made available much earlier via library support. I don't think I've ever heard this "C+-" joke at Google despite working in C++ here for a very long time.
Yeah. If anything the Chromium codebase is bloated with uncomfortably intricate OO patterns. Google3, less so, but that's because that kind of server-stuff doesn't lend itself to the same kind of thing, maybe.
I like the Google style guide overall. I tend to use it on new projects even though I'm not there anymore, and when I get into other people's code often my first instinct is to "clean it up" into that style :-)
It is true that the adoption of C++11 was later than ideal. Since then, the C++ build and library maintainers have made a priority of not letting that happen again and have done a really, really good job (IMO).
Maybe more like C++ without (most) pointers. A couple of years ago they allowed mutable references in function arguments, so the use cases for passing pointers around were reduced even further.
Well ... like I said, it was a long time ago (I don't know if Google had even made their first hundred billion, yet). I suffer from CRS. You know how us old "boomers" are...
Not interested in fighting about it. I'm sure that I'm wrong. Being right buys me absolutely nothing. It's not my wheelhouse.
Learning the language fundamentals robustly is the hard part, and everyone has incomplete knowledge. The C++ standard library is easy to learn for someone in your position. If you want to, start with something like iterator pairs and a few trial problems; it will feel just like pointer arithmetic, and then the standard algorithms will be easy.
I have tried to focus on C++ during my career and I still haven't even touched ranges or needed about half the std algorithms.
It was try!Swift New York, back in 2017, or so. I don't think they published vids, but maybe. I'll have to go look at my badge, to see which one it is.
That was a lovely talk and changed my perspective on C++. My impression had been that C++ is an ancient language with lots of footguns.
The talk reminded me of its value. C++ is still evolving. Improvements are introduced more slowly compared to other languages, but still, C++ is making progress and many find it pleasant to work with. It just needs more time.
Once a language no longer has a developer pipeline, it becomes a language with no future. No matter how important and widely used. You wind up with jobs begging for qualified developers. Given how systems last, the language may survive indefinitely. As nobody dares migrate projects off of it. But employers will struggle more and more to find employees.
The first language that this happened to was COBOL. There are still a lot of COBOL systems out there. And a lot of COBOL jobs. But nobody wants to go into COBOL for fairly obvious reasons.
C++ is not going anywhere. All operating systems and almost all compilers for every other language (plus many of their runtimes) are written in it.
<s>When the heat death of the universe is upon us and all other languages have ceased to exist and re-emerged thousands of times over, C++ will still be here, driving the lower-most layers on top of hardware.</s>
First of all, I never said C++ was going anywhere. I compared it to COBOL, which is still estimated to be involved in something on the order of 60-80% of financial transactions. C++ has a similarly bright future.
But secondly, your claim is wildly overstated. C++ is a lot less fundamental and essential than you think.
Operating systems:
Linux and the *BSD family are written in C, with Rust making some headway. (But C++ would be over Torvalds' dead body - see http://harmful.cat-v.org/software/c++/linus for example.) OS X is written in a combination of C and Objective C. Android is written in C and Java. Windows has a lot of C++, but the kernel is straight C for reasons that Raymond Chen explains at https://learn.microsoft.com/en-us/shows/one-dev-minute/one-d....
Not only are not all operating systems written in C++, but most of the most successful ones aren't. And multiple groups, INCLUDING ones otherwise sympathetic to C++, have concluded that C++ is a terrible choice for kernels that are close to the hardware.
Compilers:
Well GCC is written in C. (Though Clang is C++.) So are the interpreters for Python, Ruby, Perl, and PHP. One of the backends for Go was originally C but has been ported to Go. (They maintain another C++ backend.) Julia's backend is mostly C. (A few libraries are C++.) However JavaScript and Java are both written in C++.
Reality is a long, long ways from your claim that almost all compilers for every other language are written in C++. In fact straight C is a more popular choice.
You didn't talk about GUI applications. But there you'd have more of a point - C++ is far more popular there. However even that it isn't a slam dunk. Rust was explicitly developed as a response to the fact that C++ makes security very hard. The idea being that it would be easier for Mozilla to port security critical modules from C++ to Rust than to secure C++. Since security is getting ever more important, it now makes sense to write in a different language first.
I learned something, thanks. One of Chen's arguments is, that C allows better memory control compared to C++. For example, it's easy to place the vtable in pageable memory instead of non-pageable memory. Do you know if rust has this problem too, since it also uses vtables?
Rust uses fat pointers for references and raw pointers to dynamically sized types (DSTs) – slices or trait objects. A fat pointer contains a pointer plus some information that makes the DST "complete" (e.g. the length for a slice, or, in the case of trait objects, a pointer to the vtable).
Generally, you have control over where you store your fat pointer.
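To illustrate, here is a small sketch (the Draw trait and Button type are made-up placeholders): a reference to a sized type is one pointer wide, while references to trait objects and slices are fat pointers, i.e. two words wide on typical targets:

    use std::mem::size_of;

    trait Draw { fn draw(&self); }

    struct Button;
    impl Draw for Button { fn draw(&self) { println!("button"); } }

    fn main() {
        // Reference to a sized type: one pointer wide.
        assert_eq!(size_of::<&Button>(), size_of::<usize>());
        // Reference to a trait object: (data pointer, vtable pointer).
        assert_eq!(size_of::<&dyn Draw>(), 2 * size_of::<usize>());
        // Reference to a slice: (data pointer, length).
        assert_eq!(size_of::<&[u8]>(), 2 * size_of::<usize>());
    }

So the vtable pointer travels with the reference rather than living inside the object itself, and where the referenced data ends up (stack, heap, a static) is up to the code that creates it.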
It depends how much you like working with legacy code, and how much you love C++.
If you stay the "C++ expert" route you will always have a job, but it will be every more gnarly legacy code. Because those who had relatively good legacy code are also those who are going to find it easiest to migrate off of it.
Judging by what happened in Perl (the example that I know best), I'm guessing that most developers will leave C++. Mostly for languages in a similar space. A lot will go to Rust.
C++ jobs are not going anywhere, quite the contrary. A lot of systems won't be rewritten.
Besides, at least at the moment, "C++ expert" is a perfect background for becoming a Rust expert should the need arise.
Rust *is* much more enjoyable to program in than C++, though
> But nobody wants to go into COBOL for fairly obvious reasons.
On YouTube, type in "COBOL Mainframe", and you'll find out pretty quickly that there are plenty of guys from India, including juniors, who work with or on COBOL and mainframes.
Having recently learned C++/C.. I don't see why it would be taught as a first language outside of specializations. The gains from C/C++ coding are vastly outweighed by the costs. The reality is that there is no good, agreed upon standard in C++ for how to manage memory... how would you teach this to junior engineers at university?
Having learned C/C++ as my first languages I can say I'm quite glad for it. I was able to learn so fucking much.
1. Object oriented programming, complete with inheritance and virtual dispatch
2. Pure Functional Programming, lambdas, const correctness, template metaprogramming
3. Type system fuckery - the things you can do with C++'s TMP are insane. Both nominal and duck typing in the language.
4. Hardware - I watched so many talks on how CPUs work that explained things via C/C++. I learned about tools like valgrind and cachegrind, how to profile code, etc.
5. Security - The original reason I wanted to learn them, to understand how to exploit C and C++ programs and their common vulnerabilities
6. Data structures - C++ is awesome for building data structures from scratch and understanding their low level semantics
I don't want to write them professionally, but learning them was massively helpful, with skills propagating through every other language I've used.
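For anyone curious what point 3 looks like in practice, here is a hedged toy sketch (C++17; the trait name is invented for illustration) of compile-time duck typing: a trait that detects whether a type has a `.size()` member, evaluated entirely by the compiler.

```cpp
#include <string>
#include <type_traits>
#include <utility>
#include <vector>

// Primary template: assume no .size() member.
template <typename T, typename = void>
struct has_size : std::false_type {};

// Specialization chosen only if T has a callable .size().
template <typename T>
struct has_size<T, std::void_t<decltype(std::declval<T>().size())>>
    : std::true_type {};

static_assert(has_size<std::vector<int>>::value);
static_assert(has_size<std::string>::value);
static_assert(!has_size<int>::value);

int main() { return 0; }  // everything above is checked at compile time
```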
I'm glad someone pointed out these things, as I concur. I learnt C++ via The C++ Programming Language book, and it gave me many of the above benefits/insights (the book has its faults, though). It is akin to trawling through the CLRS book: it does make you a better programmer but no one knows why. (An attempt at sarcasm - I know most of us never need to do advanced data structures/algorithms, but it helps refine the mind for programming challenges.)
7. Fundamental underpinnings - Every other language is essentially written in C - you can often understand how their features WORK if you know C extremely well.
... I'd argue that 1,2 are covered by rust. 3 and 6 by unsafe rust.
Once you have that, a transition to pure C for the full 7 makes sense, and would be trivial for a rust dev.
I'm fairly proficient in Rust and I think it has its own strengths for learning, but its own downsides too.
Pros:
1. Way easier to get "live" help, I've found the barrier to ask for Rust help is lower than any other language I've ever used (and I've used a lot)
2. Way easier tooling like cargo, error messages are far better, especially with regards to generics vs templates
3. Some things are more explicit in Rust. Like `Box<dyn Trait>` is very explicit about dynamic dispatch, I don't recall that being the case in C++.
Cons:
1. Rust's "codex" of knowledge is nowhere near C++. There are tons of conference talks, books, blog posts, etc on C++. Not just on C++ but also on hardware as seen through C++, security, etc.
2. Implementing low level data structures in Rust is an advanced practice. Implementing them in C++ is trivial. I get that it's trivial because C++ doesn't give a fuck about your pointers, but if you're just trying to learn data structures you really just need to be focusing on happy paths and whatnot (as a student).
So idk, at this point I tend to recommend Rust, but I also end up having to link people to talks on C++ sometimes! Herb Sutter and Scott Meyers are amazing speakers and writers and that's a very hard thing to replace.
An accurate summary. I was reflecting exactly your cons when I left gaps in what I said Rust was good for.
Con (1) (Existing examples / training) is a legacy language strength in general, C++ is going to lead in this for a long time.
Con (2) (Low level data structures) I 100% agree. A toy example of a data structure in C++ is going to be really easy to express but also dangerous to use.
And you are correct (Con 2 again), it IS an advanced practice. The result of making a data structure the Rust way is a bulletproof class anyone can use, rather than a C++ foot-gun, so it's worth the effort (but less so for learning).
You might find you can mark your whole module unsafe and write a very similar data structure in Rust to your C++ solution (pointers and all), and it would compile and run fine.
The big thing to know about Rust data structures is that you need unsafe to implement anything cyclic. There is nothing wrong with using unsafe for a few lines of code - just review them carefully.
My gut says any new student should start with an interpreted language like Python or JS/TypeScript. As that gets you to running code, and core concepts like variables, loops and if statements in little to no time.
However, there is value in learning some of the under the hood concepts such as pointers, structs, memory layout, endianness, pass by reference, compilers etc.
I don't think schools need to teach employable C/C++ skills, but C/C++ is a great language to play with and experience these core concepts.
However, I'm not sure if the value in learning these concepts is real, or whether it's just my own interests/nostalgia. You can have a successful career in this industry without having to manage a single byte of memory, and it arguably makes sense to accept abstractions at face value so you can focus on what builds your skills/product.
> pointers, structs, memory layout, endianness, pass by reference, compilers etc.
C++ is a bad language for teaching any of these concepts. Sure, people will be exposed to the concepts, but they are presented in a rather esoteric fashion. Not to mention, actually leveraging some of those concepts is considered bad practice nowadays, e.g., using pointer arithmetic to loop over arrays instead of iterators or the like.
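To make the contrast concrete, a small hedged sketch of the same loop written both ways (a plain example, nothing project-specific):

```cpp
#include <cstdio>
#include <vector>

int main() {
    std::vector<int> v{1, 2, 3, 4};

    // Pointer-arithmetic style: exposes the concept, but is easy to get
    // wrong and is the kind of code that gets flagged in review.
    for (const int* p = v.data(); p != v.data() + v.size(); ++p)
        std::printf("%d ", *p);
    std::printf("\n");

    // Idiomatic modern C++: same traversal, no manual pointer handling.
    for (int x : v)
        std::printf("%d ", x);
    std::printf("\n");
}
```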
I didn't grok a lot of those concepts until I took computer architecture, which was taught in assembly language. And we weren't taught x86, but a toy assembly language designed for teaching.
Another big pain point I had in school is that every professor / TA had different opinions on what was a right and wrong way of doing things in C++. And sometimes their opinions would conflict with the damn documentation too. There's nothing like having to relearn core language concepts every year at the whims of professors. This is probably where most of my disdain for the language has come from.
The chance of you being taught correctly by your so-called "professors" (they are not professors unless they have been appointed to a chair) is vanishingly remote, but this has zero to do with the language.
C and C++ are still horrible languages even if you want to teach those concepts, because of how many footguns they have. That's why Pascal was so popular as a teaching language, historically speaking - it still has pointers and other stuff you need for manual memory management, but it's much simpler and more regular both syntactically and semantically.
No, it has exactly the same issues as C and C++, and some of its own, such as arrays of different sizes being different types. Guess why it isn't used anymore.
Arrays of different sizes are different types in C, as well - this is obvious when you are dealing with pointer-to-array types.
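A minimal illustration of that claim, written here in C++ (the function name is made up; the same holds in C with the C-style equivalents):

```cpp
#include <type_traits>

// Array length is part of the type.
static_assert(!std::is_same_v<int[3], int[4]>,
              "int[3] and int[4] are distinct types");

// Pointer-to-array types make the length visible in the signature.
void takes_exactly_three(int (*p)[3]) { (void)p; }

int main() {
    int a[3]{};
    int b[4]{};
    takes_exactly_three(&a);    // OK: &a has type int (*)[3]
    // takes_exactly_three(&b); // error: int (*)[4] does not convert to int (*)[3]
    (void)b;
}
```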
That aside, Pascal removes a lot of UB and other footguns by forcing explicitness for e.g. casts and pointer arithmetic, or providing (verifiably safe) byref argument separately from raw pointers. Strings and arrays are a mess in standard Pascal, which is why everybody used dialects that solved them - most notably Borland's, of course, which was used for a lot of DOS and early Win32 software.
Anyway, I'm not suggesting Pascal specifically today. The point is that C and C++ were never good teaching languages, which is why something else was usually used as one whether we look at 80s, 00s, or today.
>However, there is value in learning some of the under the hood concepts such as
>[...]
>I don't think schools need to teach employable
My University taught the intro CS class in Scheme; years after I graduated they switched to Java, and last I saw it was Python (based on visits back to campus and wandering through the bookstore to see what textbooks were for sale). I just checked and it's still Python, based on the course description ("how to design and implement algorithmic solutions in Python"). I see a few 2xx-level classes are in Java, and after that it stops mentioning specific languages.
Anyway, it's tough since there is pressure to teach the concepts, which argues for certain languages, yet also produce employable graduates, which argues for certain other languages.
Finding overlap is tricky... teaching theory in Haskell, under-the-hood concepts in assembly, software development gluing libraries together in javascript/c++, may in fact be the superior approach... but there is fatigue associated with learning languages just to learn more languages when maybe a nice general language that serves many educational needs is a better way.
Python might be the sweet spot to start out with, and indeed it looks like the 3 intro classes at my alma mater are taught in Python. I'd like to think the driving force behind this is that 1) Python works well, and 2) using one language for first-year students (well, 2nd semester 1st year or perhaps 1st semester 2nd year) lowers the mental overhead on the students.
Going heavy on C/C++ early essentially selects people that already come in with a programming background. Some folks don't get that, or not much of it, in high school and want to enter the field anyway. And I think it is fair for them to reasonably expect, like you can with every other academic field, that they can do that via the starting curriculum.
I have written maybe one binary search that went into production code in 20 years of software development work. It is absolutely an under the hood concept.
But knowing how it works, so that I can leverage the concept efficiently, is super important. Having a sorted or unsorted list/array/tuple/whatever-linear-thing and functions that search them, and then knowing what the performance characteristics will be and how I should put those two things together, is not something that can easily be googled (see the sketch below).
I agree it doesn't need to be a first year thing, but it does need to be part of a robust computer science education.
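A tiny hedged sketch of the pairing knowledge meant above, using only the standard algorithms (nothing project-specific):

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

int main() {
    std::vector<int> sorted{1, 3, 5, 7, 9};
    std::vector<int> unsorted{9, 1, 7, 3, 5};

    // Right pairing: sorted data + binary search, O(log n).
    assert(std::binary_search(sorted.begin(), sorted.end(), 7));

    // Right pairing: unsorted data + linear search, O(n).
    assert(std::find(unsorted.begin(), unsorted.end(), 7) != unsorted.end());

    // Wrong pairing: binary search over unsorted data compiles fine and
    // silently gives garbage answers - the mistake this knowledge prevents.
}
```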
Exactly. And understanding WHY you should prefer using a Struct (or class) instead of a dictionary / hash-map. Can you feel the PAIN of all that additional cost??
The Python / JS world is all dictionaries. Such a developer might never understand why their language runs slower.
One could learn in C++ using a variety of methods… including the ignore memory management variety.
The question is why? For a student learning dfs for the first time or hash tables/queues etc. python will be much easier to learn. Java/rust/go will likewise be easier if you want to talk about types.
Java and Python's notions of type safety are completely brain damaged. I can't imagine how things would have gone if I'd learnt them as first languages.
Rust, C and C++ get that better, in my opinion (you have to consciously choose to write type safe code in two of those languages, but at least it is possible in all three). Go's thread safety is way behind Rust's. meh.
Could you tell me some differences that you see as significant between the C++ and C# type system?
I can think of a few but they seem like compiler sugar only. For instance, C# classes are treated as a &ClassName or std::shared_ptr<ClassName> (handled by reference transparently to the programmer).
Otherwise I think that C, C++, and C# have very similar type systems. They don't have alternatives to dynamic dispatch (virtual / override) like enums with values (a Rust feature). They don't have ways to add functionality to existing 3rd-party classes. C# interfaces are basically simple C++ classes with every function marked virtual.
C# feels far more dynamic with reflection and whatnot but you say type system, which is a different thing. LINQ again stacks well with C# and makes it more expressive, but doesn't change the type system.
I'd argue that '99 C++ is very similar to C in the type system, with a little sugar around classes having an implied *this and vtable management around virtual. Otherwise you can implement something like C++ classes using C structs.
Are you talking about C++ template programming and meta-programming? Because I personally view that as an advanced preprocessor / precompiler step that outputs large amounts of simpler C++ code without templates and doesn't change the "type system" a great deal.
Perhaps you are referring to some very modern C++ features I haven't used yet? Most likely they only brought C++ back in line with a subset of Rust / C# features.
C++'s type system is both more consistent and more expressive.
GC'd languages fall into roughly 2 categories.
1. those that are pure but lack strong primitives (Python, Ruby, et al),
2. those that bifurcate their type system and treat primitives as a special case (Java, C#, et al).
Category 2 is a compromise for performance, since those languages can then make guarantees such as preferring stack allocation for primitives. This is the primary difference between 1 and 2: what the language promises about these types (if it has them).
C# learned from Java but added "value types". So now understanding what a piece of code actually does requires you to know not just the type, but whether it's a value or a reference. Create a tuple with a string and 2 ints and default-construct it: you get (null, 0, 0). Now do it with MyTypeA, MyTypeB, and MyTypeC. You have no idea what that tuple will default-construct to unless you know more about MyTypeA, MyTypeB, and MyTypeC.
Default equality is its own bag of worms.
Whereas in C++ it's a lot simpler. It's a value, a reference, or a pointer, and you know which it is by looking at the call site. C++ has a similar ambiguity at function call sites with references vs pointers, but the convention in the C++ community is to pass by pointer if the argument is going to be mutated and by const reference if it is not (sketch at the end of this comment).
IOW, the rules for the C++ type system are easier to learn and use effectively, and their semantics are easier to understand from the code.
The existence of typedef automatically makes it more expressive than C#.
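A small hedged sketch of the call-site convention described above (the Config type and function names are invented for illustration):

```cpp
#include <cstdio>
#include <string>

struct Config { std::string name; int retries = 0; };

void apply_defaults(Config* cfg) {       // may mutate: takes a pointer
    if (cfg->retries == 0) cfg->retries = 3;
}

void print_config(const Config& cfg) {   // read-only: takes a const reference
    std::printf("%s retries=%d\n", cfg.name.c_str(), cfg.retries);
}

int main() {
    Config c{"worker"};
    apply_defaults(&c);   // the & at the call site signals possible mutation
    print_config(c);      // plain pass: caller knows c won't change
}
```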
Fair points. I agree that having to memorize which types are arbitrarily treated as pointers in Python is a huge pain. Python also adds pain with the "default argument" issue.
The C# standard library is more consistent and powerful (batteries included, more modern). That means there is a "standard, correct" way to do most things in C# in a multi-platform way. C/C++ has massive variety here.
Memory management is trivial in C#.
So yeah, C# is easier to learn. Perhaps you're super smart?
Ah, I guess I was responding to this paragraph of your message
> But more to the point, people claim it's somehow harder in C++ to learn programming so either I'm super smart or they're wrong.
TBH I consider the type systems of most imperative languages to be extremely similar. The only difference worth discussing is enumerators which can hold data. C++ doesn't have these, not sure if C# has them now. Rust has them.
I'll create a new comment under yours for a more specific discussion.
The C++ of the 90s was a much simpler language, closer to "C with classes" (and often used as such) than modern C++. It was much easier to learn everything about C++ back then.
But there is! Use local variables and value semantics. Use references to borrow stuff. Never touch new or delete. Use smart pointers when you need an owning pointer (rare). Anything beyond that: ask the next in seniority to supervise. That should be enough for a junior to live a happy C++ life.
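A minimal sketch of that happy path, assuming nothing beyond the standard library (Report and the helper function are made-up names):

```cpp
#include <cstdio>
#include <memory>
#include <string>
#include <vector>

struct Report { std::vector<std::string> lines; };

void add_line(Report& r, const std::string& text) {  // borrow via reference
    r.lines.push_back(text);
}

int main() {
    Report r;                                 // plain local value
    add_line(r, "hello");

    auto owned = std::make_unique<Report>();  // rare owning pointer, no new/delete
    add_line(*owned, "world");

    std::printf("%zu %zu\n", r.lines.size(), owned->lines.size());
}   // everything cleans itself up; no delete anywhere
```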
My recent hot take is that students should never learn on a language older than they are. There are so many options, why did I have to battle linker errors in my intro to CS class with C++? Nothing we were doing required C++.
I mean, I don't think it's a legacy language at all. But I do think it's a minefield of annoying problems, with no two codebases speaking the same subset of C++ evolved to work around those annoying problems.
All told, I would find it more frustrating than enjoyable to work in C++.
A recent curriculum of languages taught at a German TU I skimmed over includes C, C++, and Haskell. Can't help but agree with that choice: C gives you understanding of low level operations and appreciation of higher level concepts such as GC and/or verification etc. while Haskell lets you experiment with functional and logic idioms, type theory, DSLs, etc. Starting with Java or C# might be ok for training and getting a job, but would be unsuitable for foundational education IMO.
That hasn't changed; you are still expected to pick languages up yourself, except (for me) Java, the first and only language that really got "taught". Maybe that was C++ a few decades ago or something. We had to do a little bit of C for a systems programming lecture.
Would not say it's true at least for robotics related fields. I personally taught C++ some years ago and the new guys have taken over since, adopting and improving on my course. There are many specializations where C++ is still a must.
Not surprised to see Python but when I went to uni in Germany 10 years ago Java absolutely seemed to dominate as a teaching language (and on the job market). Has C# gained ground?
Yes, Java is still big. Python is popular because it is a synonym for machine learning at this point. C# mostly because Unity3D is the preferred game engine.
I switched from C++ because the type of work is typically boring. It's either system-level stuff or HFT, usually some market data system or similar. Not to mention it's likely a big old system with a legacy code base.
Seeing the salaries now I kinda regret switching out, but the companies and business problems you get in Python & Java are much more interesting.
May be a personal taste thing but graphics programming, computer vision, scientific computing, robotics and even some embedded systems work is far from boring imo.
But certainly the above are niche fields in a market where software engineering is almost always limited to webdev.
Each of the industries you mention has a tech stack which must remain modern. That means that new work will migrate to the language easiest to do the work and find workers.
It's gonna migrate away from C++ if C++ becomes "legacy."
Embedded systems work is perhaps the exception - since there will be old hardware in the field. But firmware parts go out of production within 2-10 years and so new drivers require creation and new features encourage a migration to a whole new system.
As a game developer I agree that a lot of C++ work can be really fun, but this is less than 10% of what C++ is used for. Beyond that you have databases, non-robotic embedded work, crypto (ew), systems work, etc. And they all pay terribly; except finance, but that's the most boring of all C++ work.
As a personal anecdote, all of my jobs were actually very interesting with lots of interesting design problems spanning from low level systems to very high level design decisions. Granted I worked in automotive and later in AR and have been lucky enough to be at the start of some projects, but there definitely are interesting projects for C++ out there.
The last time I searched for jobs, there were two competing automotive makers among the prospective employers. The one that offered the C++ position for programming Lidar systems offered a lower salary than the one that just wanted a generalist to wrangle scripts and systems. Beats me.
Auto companies notoriously pay their people poorly. They are losing many talented engineers as a result. I know quite a few in the industry here in Michigan. The engineers who remain at the autos, are blown away by the salaries their former colleagues are getting from SV companies.
I was mentoring an EE who wanted to up his game. He wanted to get into programming, and I pointed out that Python, C++, etc. are in demand at many companies right now. I asked him for his salary range, and for a 50+ year old engineer with 20 years of experience, it was pretty awful. Though, completely normal for the autos and their suppliers.
I found it equally hilarious that (at the last place I was) the Python devops jobs paid 50% more than the core developers of the company's core product.
I actually just tried to play around with what seems to be a "modern c++" boilerplate project.
It uses CMake, conan for packaging, clang-tidy and cpp-check, and has templates for fuzz and unit testing[1].
I found it because qtcreator and kdevelop were weirdly clunky and created partly broken qt projects and I figured I wanted to add a package manager and qt to the mix.
The template looks really fancy, but it's so incredibly slow, to the point of being unusable.
It's a ramble, yes. But the point is that modern C++ tooling seems to have added some niceties to the language, while also bringing out more of the main C++ issues, i.e. slow compile times and nasty boilerplate in the build process. Yes, I realize CMake isn't modern and there are a bunch of newer build tools.
That is the most bloated boilerplate project I've ever seen. Does anyone actually use this for their projects? I would hope not. I know Javascript people are used to starting a project with a template/boilerplate, but that's not a good idea in C++ land. You actually need to understand the stack you're using, and all of the tools (unless you're working on a team, in which case that could be someone else's responsibility)
Also, Qt projects aren't real C++ since they require an additional preprocessor to compile custom syntax additions for "signals" and "slots". As a consequence, tooling for those projects tends to be more complicated and clunky.
This linked project seems to be trying to stuff as many things as they possibly can into a hello world project. With Conan + CMake you only need like 3 files total in your entire project for something like this:
* conanfile.txt: for declaring requirements/dependencies
* CMakeLists.txt: describe how to compile your project
* main.cpp: source code for the hello world app
Also,
> Yes, I realize CMake isn't modern and there are a bunch of new build tools.
No!! That thinking is a trap. CMake is modern, and using something else is more likely to cause problems for you and anyone who might want to use your project. That boilerplate is a bad example. A fully functional SDL2 hello world with Conan can be done with a ~4 LoC CMakeLists.txt.
Unless you have a very good reason (there are good reasons, but they're the exception rather than the rule), don't build your C++ project with anything other than CMake.
I wouldn't put much stock into such starter templates, the developer who put them together has a goal and priorities that are not yours.
Qt tends to be particularly painful outside of Qt Creator, so I also wouldn't use that as a general knock against C++.
Having said that, C++ build times are slow due to both the #include mechanism and templates. The C++ modules that are coming will help deal with the #include mechanism, and templates can be explicitly instantiated to assist the compiler.
None of it is amazing, but the compile times are better than rust imo.
C++ modules have been promised for a long time; I wouldn't hold my breath on them arriving anytime soon.
As for build times, it is sadly very easy in C++ to end up with long build times. Avoiding it takes careful work: knowing when and what to include, when to forward-declare instead (see the sketch below), always maintaining includes as requirements change, etc.
There's nothing to make that process any easier or automatic, which is very frustrating. There's the include-what-you-use project, but I could never get it to work.
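As a hedged illustration of the forward-declaration habit mentioned above, collapsed into one file for readability (in a real project the sections would be separate headers; Engine and Widget are made-up names):

```cpp
// --- widget.h ----------------------------------------------------------
class Engine;                   // forward declaration, not #include "engine.h"

class Widget {
public:
    explicit Widget(Engine& e) : engine_(&e) {}
    void tick();
private:
    Engine* engine_;            // a pointer/reference doesn't need the full type
};

// --- engine.h -----------------------------------------------------------
class Engine {
public:
    void step() { ++steps_; }
    int steps() const { return steps_; }
private:
    int steps_ = 0;
};

// --- widget.cpp (the only place that actually needs engine.h) ------------
void Widget::tick() { engine_->step(); }

int main() {
    Engine e;
    Widget w(e);
    w.tick();
    return e.steps() == 1 ? 0 : 1;
}
```

The payoff is that every translation unit including widget.h no longer pulls in engine.h and everything it transitively includes.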
Any developer can learn C++ (or at least the subset used in most real-world code). It's not like theoretical quantum physics or something. Hire people and then send them to training.
Can't afford to train your employees? Then you don't have a viable business in the first place. Might as well liquidate and return the capital to investors instead of wasting time on a slow death spiral.
Looks like C++ developers / holdouts will be commanding 400-600k salaries in the coming decades, like those arcane COBOL wizards still tinkering on banking mainframes.
This is literally my career wind-down/retirement plan. I'll be 62 when the Year-2038 problem hits, and they're going to need to overturn some rocks to find C and C++ systems programmers who still can read from a monitor.
I could see AR/VR dev environments being viable enough to not get laughed at in 15 years, but there's no way we give up 2D displays in the majority by that time.
In 15 years? Potentially we'll have brain interfaces by then. Just imagine the job you want done and you'll "see" in your mind's eye the existing solutions in your code-base and then on the (contemporary version of the) internet.
Why not be a proficient Python developer and make the same compensation[0] with better docs, better libraries, better codebases, no compilation annoyances[1], easier-to-read code, and less not-defined-by-the-spec weirdness?
C++ would have to pay 4x what Python pays for me to even consider doing it again. It literally makes me sad when I write software in it. Especially the libraries.
[0] Mid-six digits as an IC is very doable in North America.
[1] If you want to count the occasional Cython thing, fine. Almost no compilation annoyances.
Do you mean $500k? On levels.fyi for San Francisco you're talking about the 95th percentile of people who upload their salaries (which tend toward the higher range anyway), and that's mostly stock. So in the highest-paying part of North America, at the highest-paying companies where employees put their salaries on levels.fyi, a small minority can reach $500k. North America as a whole? Nope.
I worked for an SF company as an IC remotely from Toronto. Total comp was $500k USD yearly. I wasn't the only one, either, and the other guy didn't even have a technical education. He was completely self-taught and just steadily made progress by being humble and curious and hard-working.
Look, I don't really care about money after a certain point, so I'm not trying to encourage people to do stuff for money. What I'm trying to say is that there is a level of software development remuneration that is achievable if you keep developing skills, and it is far less challenging than dealing with a C++ codebase.
They're just working on other things in other languages.
If there was strong demand for embedded C/C++ devs that paid well, I wouldn't be doing C# REST APIs on Azure. Not that I don't enjoy working on APIs, but I do like working with bare metal more.
There are now more options for many tasks that used to land squarely in C++ land: Rust for low-level and low-latency stuff; Python as a high-level glue for some of the latest algorithms; Ruby and JS for the web.
On the other hand, C++ has grown into a very complex language. For example, if someone is not well versed in template metaprogramming, meaningfully contributing to an existing project that leverages it could take a while.
So when a new project starts, C++ is rarely the top choice. And as the job pool shrinks, the developer pool does too; usually even faster, as younger developers do not want to be known as experts in a complicated legacy language. My 2c.
The article very specifically states "very high levels"
That has two effects:
1) Given how ineffective the hiring process is at selecting skill, they would never hire even Bjarne Stroustrup. Average to incompetent people trying to select "very high level" people usually get scammed by the most imaginative resume bandits.
2) Programming languages are not as fixed as spoken languages, and "very high level" people are making more money in other languages, or less tedious languages.
The article also uses language like "programmers" in a generic sense, but the site is specifically for the financial services career field, as are all the quotes. I've worked in financial services, and the highly skilled people who can learn anything would never learn C++, because it doesn't pay nearly as well as, well, anything else in financial services IT departments. Someone made a huge architectural error in designing a system that requires the highest-level C++ programmers while never telling HR to pay enough to find any. If I were still doing financial services work I'd never learn C++; that would be dumb. I'd make a lot more money learning... anything else in that industry, or learning skills that transfer out of the industry.
The article also mentions automotive, another example of a giant slow moving industry that underpays and over-Dilberts its C++ programmers.
What I enjoyed about my time in financial services is they understood their money depends on generous IT spending, they understood the value of documentation and code review, and it was a stable industry. Not perfect in an absolute sense but "more so relative to other industries". I would have to think for awhile if those values fit the C++ community. My immediate guess is "no" but I'm not certain, everyone liked the generous budget but the constant review and oversight was often a little excessive.
This is good, right? There are better languages for most (although certainly not all) purposes, whether you measure "better" through features like memory-safety or through developer happiness or anything in between.
Most of the systems that were written in C++ in the past didn't have to be written in C++. Now they'll become hard to support, because they are in fact hard to support, and eventually they will be refactored or rewritten.
The same thing happened when Assembly and COBOL programs were rewritten in C++.
a) Yes I run technology at a more serious HFT firm than Jane Street.
b) Jane Street also uses FPGAs, so you cannot exactly say they just use OCaml; it's more nuanced than that, they are using a mixture of technologies. I think they made a very unfortunate choice early on and are still paying for it.
Rust becomes extremely painful as soon as you want to push the boundary: if you need to ensure you fit a struct into a cache line, or you need to ensure an object is reused. All of these things can be done, even things like recycled intrusive structures, but it's constant friction.
If you don’t try and push the boundaries it’s fine (and if you make everything unsafe you’d also be fine), but otherwise life quickly becomes painful.
If you are going to go to that level of effort then you’ll find it much easier in c++ and the downsides of c++ become insignificant.
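For what it's worth, a hedged C++ sketch of the cache-line control mentioned above; the struct and field names are invented, and the 64-byte line size is an assumption about the target:

```cpp
#include <cstdint>

// Pin the layout to one (assumed) 64-byte cache line and let the build fail
// if the struct ever outgrows it.
struct alignas(64) OrderBookEntry {
    std::uint64_t order_id;
    std::uint64_t price_ticks;
    std::uint32_t quantity;
    std::uint32_t flags;
    std::uint64_t timestamp_ns;
    // alignas pads the remainder of the 64 bytes.
};

static_assert(sizeof(OrderBookEntry) == 64,
              "OrderBookEntry no longer fits in one cache line");

int main() { return 0; }
```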
On the other hand if you’re writing a web backend c++ would be a terrible terrible choice.
> Hickling pointed to Java, which has long “seemed to be replacing C++ itself,” but hasn't.
This is inaccurate. Java completely ate C++'s lunch in the enterprise space back around the turn of the millennium. Java doesn't need to continue to eat into C++'s marketshare in other domains, because it has more than enough mindshare to sustain itself (there are more Java programmers than C++ programmers), and its very existence keeps C++ from reasserting itself in the enterprise space.
Correct me if I'm wrong here, but now that everything is run in a docker container anyway, doesn't most of Java's appeal of "write once, run anywhere" from the turn of the millennium go away? At this point all it has is garbage collection and a network effect.
I don't think WORA was ever a really big deal for server-side work. In practice, you know what platform you're developing for, and even if you were developing on Windows and deploying on Solaris, you would have plenty of opportunities to test on Solaris before deploying.
I think the reason that Java beat C++ for boring line of business apps is that it was much, much quicker to write code which ran fast enough and didn't crash much. Having nice easy to use URL/HttpURLConnection classes built in was probably also a surprisingly big part of it!
Java has a number of other advantages... you can effectively develop for Linux on a Windows box. The VM architecture makes debugging easier. It has a large and useful body of code, including lots of libraries and features that ease interfacing with legacy systems (like CORBA and SOAP).
The time and effort needed to train Java developers versus C++ developers is also huge.
This means more Java developers are available for hire and they typically cost less than C++ developers. This reduced on-boarding time and reduced cost is extremely attractive for enterprises.
You say that like it's a minor thing, but GC is probably the largest language-related productivity improvement I've ever experienced as a developer, second maybe only to static types.
Unless you need bare-metal speed, or need some specific piece of code, never use C++. It is a horrible, convoluted mess that will consume developer hours on pointless trivia that are not issues in any other language. So C++ has that as a point against it, unless you want to waste developer hours on purpose. I'm saying this while writing C++ daily, because in our domain (CAD, geometric computing) it's the best option for our use case, mostly because it's the only option.
Most enterprise Java jobs at the mid/junior engineering level are similar to an old-time production assembly line: everything has a Factory or Impl from dependencies. All you have to do is add a bunch of methods and connect APIs and the service bus (of course I'm extremely simplifying it, but) it's not hard to find people who can do that, and it's not hard to get them up to speed on what needs to be done, so the churn rate doesn't impact enterprises much, which is a big win. In my personal experience I don't think that's possible with C++, so Java may have started with "write once, run anywhere" but now it's more of a glue between long-running actors and a constantly changing workforce.
Yes and the best part is that companies instead of migrating from Java to something lighter are simply trying new things like GraalVM which makes native executables.
So your developers can keep on writing crappy and heavy Spring Boot applications (which need 10-15 seconds in best-case scenarios to start) that eventually get "graalvm-ed" to produce a native executable that starts in < 1 sec.
I mean, that's a brilliant evolution (no jokes). This is how you keep in the game without getting devoured by Go and similar.
> all it has is garbage collection and a network effect.
It didn't have much of an advantage at the time either. Its advantage was that Sun was willing to shower SPARC boxes on your university for almost nothing if you were willing to move your core curriculum to Java.
This, pretty much. Long before there was anything like Rust, Java was the memory-safe alternative to C++ for larger projects. And yes, it was slow and clunky, but devs still put up with it because the benefits were so compelling.
Late 90s. It was before Java got HotSpot; Java prior to that was incredibly slow. Microsoft's J++ was a more performant runtime than Sun/Oracle Java for several years. That work ultimately became C#/.NET.
C++ had the benefit of running fast, but it had all the problems you can still run into today. I'll never forget the app I worked on where a function deep in the stack was returning a reference to a local string variable containing HTML...
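For readers who haven't hit it, a hedged reconstruction of that class of bug (not the original code, obviously); it often "works" in testing because the stack memory hasn't been reused yet, then fails far away in production:

```cpp
#include <string>

const std::string& build_html() {
    std::string html = "<html><body>hello</body></html>";
    return html;   // dangling: html is destroyed when the function returns
}                  // most compilers warn here, but it still compiles

int main() {
    const std::string& page = build_html();
    // Touching `page` is undefined behaviour: it may print garbage, crash,
    // or appear to work, which is exactly why such bugs survive review.
    (void)page;
}
```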
Late 90s to early 2000s for sure. Tons and tons of backends of various kinds were written in it. If it was Windows it was typically running on COM/DCOM, and in UNIX it was usually CORBA. Sure, the UI was typically written in something else. Dawn of the Web with CGI, and if we are talking desktop people would use something like Visual Basic, Power Builder, or Delphi. But C++ pretty much dominated that space until Java got decent enough and fast enough.
Before Java, C++ was basically the only game in town for frontend and backend. There was always Visual Basic on Windows, and C, and a smattering of various pascal flavors.
I've done a little bit of C++ in the last couple of years. Compared to pretty much any other common language out there the developer experience is full of pain. I can program C++ but why would I want to?
The problem with C++ is that it's a bad personal investment, since you will need to re-learn the language all the time. Almost every other language respects your time more. Not going back... ever.
Not just the language; the entire toolchain and community are like this. Yesterday I upgraded my Go compiler, everything worked straight away, and I got a free performance improvement on my code. I still shudder when I remember upgrading to a new version of GCC, Clang, or MSVC. It took weeks to set everything up and fix newly created bugs. Not going back... ever.
I've been C++ the majority of my career, doing simply amazing shit... and a recent recruiter did not recognize anything I was discussing and asked if I even wrote code. I worked on the original PSX OS, I've delivered over 40 commercial products, all written in C++ and that meant nothing.
I wouldn't take that personally. In my experience most recruiters are complete idiots. The vast majority of recruitment agencies in my geography (UK) just employ children for recruitment roles and as such all their understanding of tech is almost entirely cargo cult and buzzword driven.
This is why every tech CV that goes to a recruiter should have a "buzzword page" so they can play pairs, feel like they're adding value and get out of your way.
When a recruiter is like that then their function is simply to forward your CV and arrange an interview. They are not qualified to actually vet your ability in any way.
I get this too. I wrote a number of design/game engines, design applications, and frameworks in C++, and I get recruiters who don't understand that and start asking me basic C++ questions. One time they were reaching out to me about a design engine I wrote and didn't even realize it.
Indeed they will end up asking you little questions that have zero to do with your engine and ding you for not knowing some particular trick. It drives me crazy.
That is why I think it is never a good idea to talk to recruiters. Most of them would not spot the difference between a golden-egg-laying goose and some mediocre person. There are enough companies out there where one can apply without going through a recruiter.
I can code in C++, have been coding in C++ since it was first released, and C before that, on almost every conceivable platform and processor. I've done some seriously hard-core systems programming, hard real-time work, deep embedded kernel level projects, and stupendously fast algorithms. But the thing is, I can get paid 2x to 3x as much to write Python or Typescript, and I'm not in some toxic fintech/cryptobro environment. Every employer that has hired me for my C++ has been "not chill" and just outright toxic at times, and the pay is crap. Why put myself through that?
Footnote: I've heard the same nonsense about not enough COBOL programmers (I've reluctantly done quite a bit of COBOL over the years), with companies desperate to hire and willing to pay $$$ for the privilege of retaining them. But when I talk to these desperate companies willing to pay top rates for a decent COBOL developer, I find out they are offering less than I was making 20 years ago, and they choke when I say "to get me to move, you'd need to pay at least 10% more than what I currently make."
I find it funny that the headline states it as a general trend while it's looking at it only through the prism of "the finance/crypto industry struggles to find C++ developers".
Maybe part of the problem is that people don't want to work on your bullshit crypto products, or just on HFT.
Bingo. I've been approached continuously by recruiters looking for finance and/or crypto firms and I won't go anywhere near it.
2x the salary but 10x the stress with type A finance bros yelling at you constantly? Fuck that.
Full disclosure: I'm a sucky C++ programmer. If they're approaching me, that means they've chewed up everyone else they can find. So, another red flag.
Yep - you're right. This is a red flag I learned to recognize when I was young. When I turned 16 and could legally apply to jobs, I walked into a local restaurant and asked about a dishwashing position. I filled out the application right in front of the hostess, handed it to her, and without her even looking at the name she asked "Can you start today?". I took the job. I know better...now....
>2x the salary but 10x the stress with type A finance bros yelling at you constantly? Fuck that.
I am in that industry and this hardly happens. Probably at places like Citadel, not otherwise. In HFT's, traders are basically quantitative traders now. They do both strategy development and trading i.e they have a strong STEM background and are much different than finance bros at most traditional hedge funds.
Rust should make the pool of C++ developers decrease exponentially in the coming decades. There's little reason to code a greenfield project in C++ unless it involves leveraging some niche libraries, and those niche libraries will eventually have Rust equivalents.
Or be a part of a discord group that flies a pride flag in your face for 6 months of the year. A helping of ideology is what I've really been looking for in a programming language.
I wrote several hobby projects in scala but once I realized what kind of people were in that community I decided not to have anything more to do with it. (And I'm not talking about Tony Morris)
May not be rough for me, but it's rough for @ChoooCole. You're being dismissive, acting like the ideology is harmless. She has to live with the consequences of the actions that the pride community influenced her to take at 16. It's not harmless.
Hmm, I think that's a little heavy-handed. Just off the top of my head: CUDA (maybe other gpgpu stuff too), gamedev, most projects with a UI. The tooling you'd be using in those spaces is still bleeding edge; too much so to use with a greenfield project that you intend to put in production, imo.
I guess you said "in the coming decades," which might be true, but for the time being it's more than just niche libraries that might push one to use C++.
There's much more reason to do a greenfield project in C++ than Rust - experienced C++ hiring is still considerably easier! Not everything has a purely technical motivator.
> experienced C++ hiring is still considerably easier
Perhaps, but you can take experienced devs with a background in other languages and expect them to write solid Rust code. You probably only need to hire 1 or 2 people who already know Rust.
I don't know any people who program in Rust. I've been programming 25 years professionally. On my LinkedIn and through friends I probably know 50 people who do c++ programming in some capacity, including myself, to a poor level.
As someone who only knows C++ to a poor level, you’re exactly the sort of person I wouldn’t want to hire for a C++ job, but I would consider hiring for a Rust job. The bar is way higher for C++ because it’s so easy for even experienced developers to introduce mistakes.
I'd be tempted to pick Rust for a greenfield project even if I had only a team of C++ devs with no prior Rust experience. Having one person who can teach it would help, but it's not an absolute necessity. And luckily every team has that one person who is the Rust evangelist...
If you can program C++ you'll pick up Rust more quickly than any other convert. Unless it's a startup with a short runway, where you might not have the luxury of a slower start, I think it would probably pay off in productivity, staff retention, ease of recruiting (later), and a lot of other parameters.
I'd say the trend in the industry is to hire engineers rather than language specialists. An experienced C++ guy should be able to learn the Rust basics in a few months.
If you're writing a GUI app, you usually don't need the exceptional efficiency that is C++'s selling point, so you are probably better off using Swift or C#, instead of either C++ or Rust.
Claims like these always reduce C++ to a systems programming language, neglecting the fact that it is primarily used in computationally expensive projects in which Rust cannot be a valid substitute.
I worked in C++ in the UK for a number of years and interviewed for a couple of positions like these. The main reason I didn't want to work for a financial company is that they insist on high number of working hours and being located in-person in London.
When I studied Physics, I did 2 years (4 semesters) of C++. Nowadays, nobody would do that, you'd learn Python by default.
When I graduated from school in 2010, my primary experience from a previous internship was working in an outdated style of C++. Interviewing for C++ positions was so intimidating that I ended up removing all references to it from my resume, essentially disavowing that I had any knowledge of the language, and applying for Java and Python jobs instead.
To quote Yogi Berra, "it's déjà vu all over again."
Those of us of a certain vintage remember this precise issue with Fortran a couple decades ago, and those out there of a certain further vintage probably remember it with COBOL. So on and so forth back through PL/I, Algol, and so on to the dawn of time.
While there have certainly been consequences of the dearth of COBOL and Fortran devs, ultimately the world has thus far survived. We'll survive the obsolescence of C++ (if indeed that's what this is).
The main issue for me is pay. I’ve used C++ for 15 years, written many design/game engines, frameworks, etc, but C++ jobs don’t pay well. In recent years I’ve been using C# with Unity and I make 25-30% more than I would working in C++. C++ has also changed a lot over the last 5-10 years (a lot for the worse) and requires a big commitment to stay an expert. In order for me to go back to C++ full time I’d need a substantial salary increase and a project that I want to work on.
I really don't get this language thing. I do understand that having experience with a language is important, as any developer trying out a new language has a learning curve of months to years depending on the needed scope.
There are also "cpp lawyers" that you'll see in cpp conventions and are really "part" of the language.
But...
From my knowledge of the ones I admire as software engineers: it's not the language. It can be Haskell, cpp, or python. It doesn't matter.
A real engineer should understand at least the basic concepts of how a CPU works, and memory concepts including manual alloc/dealloc. So language is a fraction of your time.
I've "learned" Java in my university and did a C course.
I've started without knowing any C++. But since working at my current place, I write only CPP.
There's infinite learning curve but that's with any language (or knowledge in any field)
Personally, I'm currently transitioning from functional oriented JavaScript to Object Oriented PHP and I'm struggling a lot with the different paradigm. It feels like everything happens by magic rather than according to the neatly organized assembly lines that I'm used to.
Question: what kind of skills do financial companies typically look for from C++ devs?
Whenever I hear “C++ expert”, I don’t know what that means. Are they masters of the STL? Are they part of the “C with classes” church? Are they an unhinged template meta programmer? Are they a major gcc/clang contributor? Did they co-author a part of the language spec?
Or are they just some dude with 7 years of experience working on a single C++ codebase and the subset of features it happens to be using?
More specifically in trading, the most important thing seems to be to know how to write fast code, while keeping the codebase maintainable, because it's going to keep growing, and will need to change as the world changes.
That means avoiding polymorphism (and so a fair chunk of OO design), making heavy use of templates, and understanding the performance characteristics of language constructs and standard library functions. Expect to be slinging godbolt links back and forth, benchmarking obscure hashmap implementations, and occasionally sweating over the precise layout of structs to maximise their cache-friendliness.
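A hedged toy sketch of the "templates instead of polymorphism" style being described (handler and function names invented; real feed handlers are far more involved):

```cpp
#include <cstdio>

struct PrintHandler {
    void on_trade(double price, int qty) { std::printf("%f x %d\n", price, qty); }
};

// The handler is a template parameter, so dispatch is resolved at compile
// time and can be inlined, instead of going through a virtual call.
template <typename Handler>
void process_feed(Handler& h) {
    // A real feed would parse packets; here we just fake two trades.
    h.on_trade(101.25, 10);
    h.on_trade(101.30, 5);
}

int main() {
    PrintHandler h;
    process_feed(h);   // no vtable, no indirect call
}
```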
As a talented C++ developer... yes there is a dearth of talented C++ developers.
I've been to several C++ conferences. Even there, many of the engineers end up learning "new" things that are IMO basic concepts that have been around for years.
I recently went through a hiring phase for Senior C++ engineers and most of the applicants were familiar with old tech. It was really disheartening to me to realize how many "senior" C++ engineers weren't senior in real C++ experience. Even one of the hires ended up using a C style array in their first commit instead of std::array.
It's really frustrating and makes me wonder if perhaps I'm in the wrong job: maybe I shouldn't be a software engineer, maybe I should be a software educator to teach modern concepts and un-teach ancient concepts. Indeed, a lot of the ancient concepts directly contribute to the modern world's problem of software safety.
I can't help but wonder if developers are simply off-put by the complexity of the language itself.
Way back in 1989 I was fortunate enough to get Borland's C/C++ compiler, the K&R C book, and Stroustrup's C++ book - which, at the time, was about the same size as the K&R C book. The C++ language was relatively simple to learn at that time: no STL, no odd pointer types, no template metaprogramming, etc.
Fast-forward to 1997 and I'm listening to a talk about C++ compilers and hear that templates are now a Turing-complete language, and how the compiler developers used the template language to coerce the compiler into printing out the prime numbers.
Fast forward again to 2022 (with trips to Java, Kotlin, Scala, and Haskell in between) and I've got a project that requires JNI and Python bindings to a core C engine. Great, I'll use C++! I proceed to start searching for containers/constructs and realize that at least 50% of the C++ language and standard library is unreadable to me. I only have a guess as to what a `shared_ptr` or `unique_ptr` type is, and I certainly don't know when to use them, and concepts/traits outlined in the STL documentation seem to be some sort of generalizations that the STL might provide default implementations for, but might not? What exactly is the syntax and/or type of a lambda function and how do they interact with C function pointers? etc., etc.
I get the sense that C++'s evolution has been driven by esoteric corner cases that a handful of developers have encountered and the language & STL design has been driven by those cases. I'm still surprised by how many problems I can easily address in something like Scala or Haskell that C++ simply does not provide the facilities to express easily.
> I can't help but wonder if developers are simply off-put by the complexity of the language itself.
I certainly fall into this category. At one point I could have almost been considered a C++ language lawyer.
But then two things happened:
(1) I got stuck working on codebases that assumed C++11. Even now, I'm using Python3 and C++ (mostly 11, with a little 14/17 sprinkled in).
(2) The C++ language spec got significantly more complex after C++11. Without writing C++14/17 code on a regular basis, I just couldn't justify the time spent trying to keep up with it.
So for me, the cost/benefit ratio of trying to stay "current" with C++ is no longer worth it. I expect to prefer Rust over C++ for new projects.
> I only have a guess as to what a `shared_ptr` or `unique_ptr` type is, and I certainly don't know when to use them, and concepts/traits outlined in the STL documentation seem to be some sort of generalizations that the STL might provide default implementations for, but might not? What exactly is the syntax and/or type of a lambda function and how do they interact with C function pointers? etc., etc.
And that's exactly my point though! `shared_ptr` and `unique_ptr` solve (or, at least, simplify) a lot of the memory management problems endemic to all C code and old C++ code. And they've been around for over a decade now.
I would absolutely be willing to spend some time with you to teach you what the new things are, what they do (and what problems they solve), and how to use them without introducing new problems. I think several one-on-one teaching sessions would help you a lot with that. But one-on-one teaches you, and not the thousands of other old-experienced C++ developers in your shoes.
> I get the sense that C++'s evolution has been driven by esoteric corner cases that a handful of developers have encountered and the language & STL design has been driven by those cases.
While some of C++'s evolution fits that category... I would also add that a lot of changes in modern C++17, C++20, and C++23 are aimed directly at the masses of C++ developers doing "everyday" work. shared_ptr and unique_ptr are definitely not esoteric corner cases: they were designed specifically to solve memory management problems that have directly contributed to some very significant CVEs in many thousands of products.
A series of expert-level blog articles guided by your experiences teaching a pre-2010 C++ guru would probably be well received by other such "ex-gurus." They would need to be definitive, detailed, verbose and bring the reader up to C++ lawyer in the relevant new feature to be interesting.
A trip through the graphics pipeline has the correct level of detail:
I'm afraid I haven't read any C++ books. I'm sure that's where some of my strong biases come in; I'm all self-taught with some cppcon videos and attendance sprinkled in for topics that I know I'm weak in.
I can recommend a Slack [0] or Discord [1] though. There are plenty of other C++ people who can make such recommendations.
C++ has fantastic docs, books, videos, so much. You shouldn't need to read the std library code, and shouldn't read it to know what is safe to do. There are many implementations and they differ in details but all have really good standards compliance, except where they document deviation (like EASTL omits some slow stuff).
If you want to iterate over a container, know promises about algorithmic complexity, have strong guarantees about type safety, or know what smart pointers promise to do then you can get all that without digging into the stdlib's code.
I know this wasn't the main point of your comment, but shared_ptr can be thought of as a reference-counted pointer: it cleans up the pointed-to object when all the pointers to that object go away. It doesn't need to be reference counted (there are implementations that do goofy ring lists under the hood), but all the operations on it are cheap O(1) operations and it is only slightly slower than a raw pointer; looking at the implementation code might cause someone to miss those complexity promises. For objects you only want one of, there is unique_ptr: you give it a pointer (or construct the object into it) and when the unique_ptr leaves scope it cleans up the object. Both are great for managing things like memory, connections, and file handles - anything you want automatically cleaned up when the pointers leave scope.
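A minimal sketch of that behaviour, assuming only the standard library (Connection is a stand-in for any resource you want cleaned up automatically):

```cpp
#include <cstdio>
#include <memory>

struct Connection {
    Connection()  { std::puts("open"); }
    ~Connection() { std::puts("close"); }
    void ping()   { std::puts("ping"); }
};

int main() {
    {
        auto sole = std::make_unique<Connection>();  // exactly one owner
        sole->ping();
    }                                                // "close" printed here

    std::shared_ptr<Connection> a = std::make_shared<Connection>();
    {
        std::shared_ptr<Connection> b = a;           // two owners now
        b->ping();
    }                                                // b gone, a still owns it
    a->ping();
}                                                    // last owner gone: "close"
```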
Not all languages have the robustness that comes with a 40 year history, so learning last year's hotness and C++ are going to be different.
I'm with you mostly, although as the mostly-sole maintainer of a 22-year-old and still evolving C++ codebase (among some other responsibilities) I do try to keep up with the language. But if I didn't feel this was necessary for my job I probably wouldn't do it, or at least wouldn't sink too much time into it.
I've also programmed in Haskell and I've had the same reaction as you, as to the things I can easily express there and not in C++. However I think the 2 cultures (Haskell and C++) are similar in that they both attract people who like to produce clever code which is unreadable to other people, even though the base language is not that complicated.
As for your questions:
1) shared_ptr / unique_ptr: Personally I liked this presentation. Watch it on x2 speed until it gets to parts that are difficult for you.
This one is denser than the previous one. As for your question "how do lambdas interact with C function pointers": the rough explanation is that they are not the same, since a lambda may allocate memory if it captures. But if a lambda doesn't capture anything, it is just like a (static) function and can be converted to a function pointer (sketch below).
As for concepts / traits, the rough answer is that you don't need to know about them for everyday work unless you write a library / API for consumption by other C++ developers. In that case, they help the compiler provide better/shorter error messages when template functions/classes aren't used properly.
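A small sketch of the lambda point above (plain standard C++, names invented):

```cpp
#include <cstdio>

using callback_t = int (*)(int);           // a plain C-style function pointer

int apply(callback_t f, int x) { return f(x); }

int main() {
    auto doubler = [](int x) { return 2 * x; };   // captures nothing
    std::printf("%d\n", apply(doubler, 21));      // OK: converts to int(*)(int)

    int offset = 10;
    auto shifter = [offset](int x) { return x + offset; };  // captures offset
    // apply(shifter, 21);  // error: a capturing lambda has state to carry
                            // and cannot become a plain function pointer
    std::printf("%d\n", shifter(21));
}
```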
You can do almost anything in C++ at least three different ways. Most places that care about engineering will not have you do abusive template metaprogramming or go crazy with macros, even though it’s possible to do in the language.
The std library is filled with bad implementations that can't change because of backwards compatibility, so it's not your fault that they look confusing. The pointer types you mentioned are actually quite simple: think of them as thin wrappers over raw pointers, with constructors/destructors that implement reference counting (for shared_ptr) and whose destructors perform the delete (for unique_ptr, and for shared_ptr once the count reaches zero).
Working with C++ can be easy and fun as long as you aren’t in a codebase where people went overboard with anti patterns or “cleverness”.
To be fair and assume competence: A lot of C++ programmers have been burned by adopting the latest and greatest bling, then having to port it to some backwards platform that doesn't have a recent compiler. Many of us use the legacy stuff when possible, because we know it works, there are already robust third party libraries that interoperate with the legacy stuff, and it is more likely to be portable than std::omfgbbq.
This is getting better. Businesses that force a specific compiler for reasons other than technical merit are being punished in the market.
I am currently at a not-super-advanced C++ shop that has to write for several compilers on locked platforms, and even the worst/oldest compilers are clang or gcc forks that support C++17 and some C++20. We have robust CI, and if it passes there we can use it (presuming it doesn't violate the style guide/best practices/code review/etc.). It is unlikely we will be burned by adopting the latest thing for any longer than the time one developer spends working on it.
I think the companies that are so afraid of upgrading software tooling and related processes have incurred huge project costs because of otherwise needless major rewrites. One I worked for was a college bookstore management software developer that was trying to rewrite its whole point-of-sale software. They were simply crushed by tech debt: any fix in the original software took months, and the rewrite wasn't finished before the company was bought by a competitor. Better unit tests, better practices for staying up to date, and better CI all would have contributed to business success.
Raises hand. Everyone needs to read "Effective C++", which is mostly a book about how not to shoot yourself in the foot with C++. The book is old enough now that the way he recommends writing C++ wouldn't make it into a PR.
So for that reason, I tended to write C+, which is pretty much C with some classes.
after rereading effective c++ with a decade away from c++, my main takeaway is that c++ shouldn't ever be used. The title should be "55 reasons to not use c++ for your next project"
That's the problem with the C++ ecosystem, it is actually many vastly different ecosystems mashed together. It's impossible to find a "C++ developer" because there is no agreed upon "one C++ style". It seems what you're looking for is a "C++ developer who likes to write the same C++ style as myself" ;)
> It seems what you're looking for is a "C++ developer who likes to write the same C++ style as myself" ;)
No, I'm perfectly content with someone writing C++ in a different style.
What I'm not content with is someone who uses old constructs without a technical reason why. C-style arrays, for example, are 100% inferior to std::array. Old loops with indices are maybe 80% (off-the-cuff guesstimate) inferior to ranged-for. There's absolutely no reason whatsoever to use `delete` in any C++ code that doesn't directly handle allocations (and so is very esoteric), and similarly for `new`. But I keep seeing these things (and many other examples) show up in newly-written C++ code from "experienced" developers. Maybe 90% of the time they'll fix their code when I point out the modern solution and what problems it solves, and the other 10% of the time it ends up in a technical argument about the merit of the old code (and that's fine, as long as there is technical merit to it).
> Are the advantages of std::array over C arrays big enough to add 8kloc to each compilation unit though?
I say yes, absolutely. C style arrays are _very easy_ to get wrong in many ways. Three just off the top of my head:
- iterating using a size_t instead of an iterator
- calculating the size of the array (and often using a preprocessor macro to do it)
- leaving things uninitialized
So a std::array provides iterators and works with a ranged-for loop. The only reason to use a size_t is if you truly need an index number (and I would argue: use `std::distance()` instead).
A std::array provides a `size()` giving the total number of objects in it. It also provides the type, so you can do sizeof(type) * array.size() -- though that's still error prone.
A std::array ensures that objects are correctly initialized.
And, if you still need to dangerously decay the data to a pointer, you can use .data() to grab that pointer.
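A short sketch of those points side by side (sizes and values are arbitrary):

    #include <array>
    #include <cstddef>
    #include <cstdio>

    void demo() {
        int c_arr[4];                 // uninitialized; size only via sizeof tricks or a macro
        std::array<int, 4> arr{};     // value-initialized, all zeros

        for (int x : arr)             // ranged-for works directly, no index bookkeeping
            std::printf("%d ", x);

        std::size_t count = arr.size();                  // element count, no macro needed
        std::size_t bytes = sizeof(arr[0]) * arr.size(); // possible, but still a bit error-prone

        int* p = arr.data();          // explicit, visible "decay" when a raw pointer is required
        (void)c_arr; (void)count; (void)bytes; (void)p;
    }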
> A range-checked std::array replacement can probably be written in a few dozen lines of code.
Can you provide an example?
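Something like the following is roughly what that claim has in mind; a minimal sketch, assuming all that's wanted is a fixed-size buffer with bounds-checked indexing (std::array's .at() already does this, so this is purely illustrative):

    #include <cstddef>
    #include <stdexcept>

    template <typename T, std::size_t N>
    struct checked_array {
        T elems[N];

        T& operator[](std::size_t i) {
            if (i >= N) throw std::out_of_range("checked_array index out of range");
            return elems[i];
        }
        const T& operator[](std::size_t i) const {
            if (i >= N) throw std::out_of_range("checked_array index out of range");
            return elems[i];
        }

        constexpr std::size_t size() const { return N; }
        T* begin() { return elems; }
        T* end()   { return elems + N; }
    };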
> That's the problem with all C++ stdlib headers, they are incredibly overengineered for what they bring to the table.
I would argue that the standard library isn't overengineered. It's engineered for more than just your use case. Just because code "is there" doesn't mean that code makes it into your product. Pay for what you use, don't pay for what you don't use.
Last time I needed to write `delete` I was either fixing some low level garbage code that was super old or writing a smart pointer for a case not handled by the standard library. Either way it was so long ago that I really can't remember which.
Complaints about `delete` aren't about a part of common, modern C++; and if they aren't coming from subject-matter experts, they aren't well-structured complaints.
Last time you needed to, sure. The problem is that if you find out how to do dynamic memory allocation, there will be tons of resources pointing at new/delete. Parts of the language that all the experts agree are terrible are just sitting there, poking out behind a shiny facade, waiting to scratch the unwary.
How is that not the case for any language that lets experts get at the gory details? What is the alternative?
Even in languages like Ruby this problem exists. Superficially, Ruby has a decent garbage collector and you never need to dereference a pointer. In practice as soon as it gets slow you hit an optimization stopping point and need to write an extension in C. Then you have all this mess again except with all the baggage another whole language brings to the table and none of the sheltering of a type system.
At some point you just have to trust software devs to use the tools.
Holy mother of god! I used C++ around 2000 for some old projects, when new and delete were the object-oriented "equivalent" of malloc() and free(). So, if you don't use those, what do you use in 2022 C++?
`std::make_unique()`, `std::make_shared()`, or another function that wraps `new` and `delete` into an RAII type (commonly called a smart pointer) so that you, the developer, worry less about explicit memory management.
And the standard smart pointers are perfectly extensible enough to wrap things allocated from C libraries (I like to pick on opengl's glalloc() and glfree(), though malloc() and free() are acceptable to pick on too), C-style `FILE` pointers, or even memory-mapped things from `mmap()`
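For example, a C `FILE*` can be given the same scope-based cleanup with a custom deleter (a rough sketch; the file name is arbitrary and error handling is minimal):

    #include <cstdio>
    #include <memory>

    // Deleter that closes the stream; unique_ptr calls it when the handle leaves scope.
    struct FileCloser {
        void operator()(std::FILE* f) const noexcept {
            if (f) std::fclose(f);
        }
    };
    using unique_file = std::unique_ptr<std::FILE, FileCloser>;

    void demo() {
        unique_file f(std::fopen("data.txt", "r"));
        if (!f) return;               // fopen failed, nothing to clean up
        // ... use f.get() with the usual C stdio calls ...
    }                                 // file closed here automatically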
I would also point out that even in 2000, `std::auto_ptr` existed. So even in 2000 you probably should not have been using `new` and `delete`.
In the GP's opinion, you use the STL classes that implement RAII for you.
Which is, obviously, only one way to do it, with its upsides and downsides. Granted, the upsides are much more numerous than the downsides, but there is a reason it's not the only option available.
> C style array in their first commit instead of std::array.
In my neck of the C++ woods, neither are a good choice. My point is: C++ is not a language, it is a language group.
Most north Europeans speak a Germanic language, which have many shared features and often partially shared dictionaries, yet they mostly won't understand each other without some further study. As a Dutch speaker I can't judge the quality of somebody's German, even though it sometimes feels like I should be able to.
> In my neck of the C++ woods, neither are a good choice.
Just off the top of my head: "it's not resizeable" (use a std::vector instead) or "it's allocated on the stack instead of the heap" (you can use a std::unique_ptr<std::array> to put it on the heap). But what are the reasons to not use `std::array` in your neck of the C++ woods?
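Sketches of those two workarounds, for concreteness (sizes are arbitrary):

    #include <array>
    #include <memory>
    #include <vector>

    void demo() {
        // Need to resize? Use std::vector.
        std::vector<int> growable(1024);
        growable.push_back(42);

        // Too big for the stack? Put the std::array on the heap.
        auto big = std::make_unique<std::array<int, 1'000'000>>();
        (*big)[0] = 1;
    }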
> I can't judge the quality of somebody's German, even though it sometimes feels like I should be able to.
The cool part about software languages is that they're implementations of data structures and algorithms. Understanding those gets you most of the way to understanding what the code does regardless of whatever native language is spoken.
I've had the same experience in Java. Experienced engineers that are learning "new" Java even though those features came out years ago. I really think this will be the case for any language. Most people are not the HN type that are going to live and breathe programming.
I'll probably get down voted, but I honestly really enjoy Java. The language, while it has some legacy verbosity, is in my opinion the perfect balance of simplicity and complexity. It has just enough language features without becoming Scala/Rust/C++ levels of complexity, while still being competitive on performance.
I'd love to get into low level programming, game development or somewhere where C++ has to be used (I'd rather use another language probably but not gonna get picky); however, in my experience (and this might differ vastly since I'm in South America):
1. There are really few jobs doing this here compared to web development; there are maybe fewer than ten new openings on LinkedIn a month.
2. Most if not all of these ask for Sr positions with 5+ years of experience; there are no Jr jobs to get this experience from (except maybe at universities or public research entities, which just don't pay enough to cover rent).
3. Most of these are just maintaining old legacy software written in C++; the jobs in games, embedded, low-latency, etc. are really few and far between, or nonexistent.
4. Many don't allow for remote work, whereas in web development it's the norm.
5. You can say goodbye to part-time work; it's not the norm in web development either, but since there are so many jobs there, you will eventually find one that allows you to work fewer hours.
6. Even if you get the perfect job, it might still pay less than a Sr React position or whatever; I'd personally rather work in something interesting for myself, but someone with a family might not.
I haven't been investing time in looking for remote work in the US or EU yet; in my limited experience, they usually do not allow remote work, or only for Sr positions. However, I'm sure it's possible, since there are many more openings abroad.
So personally I believe there's a lot these companies could improve so that this pool of talent grows, and it doesn't even imply higher wages: they could focus on hiring from other countries and/or more Jr candidates. There won't be more "talented C++ developers" if these people can't learn or grow anywhere. There are a lot of talented people right out of college; they just don't have the work experience yet.
Why doesn't c++ offer a language version that's trimmed down to "modern" rules? Meaning all things that were deprecated, or not recommended, are cut out of the language. What I find difficult is that there's several ways to do the same thing but each option has some nuance or gotcha with it. This is said as a beginner who did half of learncpp
It'd be a different version that's not intended for backward compatibility. This would be beneficial for any new projects or new learners who don't need to sift through decades of changes.
There can still be the standard updates every 3 years for backwards compatibility, but I'd assume version upgrades don't happen very frequently. Meaning, if a project is started on C++11 then it's probably going to stay on C++11 indefinitely, never making it to C++17 or some other later version.
If that's true then that would imply in general the projects using c++17 or later are created from scratch using that standard and therefore wouldn't suffer from backwards compatibility.
This is all said as someone who works with Java where I don't see folks migrating from Java 8 -> 11 or later.
In Rust this is done using Cargo (the build system) and each crate (library / binary) can choose a compiler edition that gates certain features which are not backward compatible.
Your code can use (depend on) a crate that itself uses an older edition.
In C++, your idea would require enforcement at the compiler level, but the linker should still link "old" and "new" C/C++ together. The switch would then happen at the .cpp file (translation unit) level.
However, the culture around C/C++ compiler development, with formal standards and multiple compilers, would require huge committees to decide what "modern C++" is. There would likely be a lot of debate.
Much of the innovation in Python, C# and Rust is in how the process of language and standard library development is managed in the community. I wonder if C++ could do that?
I have been a big fan of C++ and have used it for about 8-9 years, professionally as well as for side projects. The baggage that comes with it is just not worth it. My recommendation for any new project is to go with either Go or Rust as a C++ replacement, unless the environment, hardware, or resource constraints don't allow it.
As an individual developer, learning and being an expert in C++ intricacies is not worth the time and effort. Instead, learn more about data layout in memory, hardware, language tooling, etc., and write code accordingly in a much better language as the _need_ dictates; C++ would really only qualify for very niche requirements.
1. Juniors are not hired for C++-type roles. Employers want someone with 5+ years of experience, because just knowing the syntax is not enough. And training on the job is also not offered.
2. It pays shit in comparison.
3. Job positions are relatively scarce.
My actual niche is numerics (so writing numerical libraries and simulations), btw. I was really dismayed to see how shit the pay is.
8-12 years ago I knew modern C++ inside-and-out. I worked on high-performance template developer libraries. These days whenever a C++ opening comes up in my job feed, it's in game development or it requires extensive domain-specific embedded/firmware experience. I just shrug and move on.
Andy Kelley noted that contributions to the self hosted zig compiler were way higher than to the C++ one. This matches up with my experience. I’d love to work on Swift but do I want to work on it so much that I’d write C++? Nah. Ditto PyTorch, XLA, LLVM, etc. I should probably bite the bullet and learn C++, but dammit Rust land is so nice.
Once upon a time I knew the c++ standard like the back of my hand. I was excited about the first few standard revisions, and read proposals as they came out.
Now I think it's a dead end and would sooner start a new project in C than C++ (but would prefer neither). The language that exists now is spending all its efforts fighting yesterday's battles, and it's rapidly losing relevance.
It used to be my bread and butter, but the only way I'd take a job that was for c++ programmers now is if it was to help migrate it to a language that lives in 2022 instead of endlessly relitigating the PL battles of 2008.
I could have been a talented C++ developer. Hell, I _am_ a good C++ developer. I should be, I've been doing it for a decade. Across a range of projects, from hardware to web backends to games and art.
But aside from a brief 3 years as a C developer (embedded hardware product) I've never been close to being one. Because when I was coming out of university seven years ago with a good CV and projects I could competently talk about, and I _could_ do the whiteboard segments, somehow I ... couldn't do them the way they wanted? And they, the very sparsely spread places with listings at the time, were offering no money for a new grad to somehow make their way across the country to interview, only to be offered a low salary (the C job I got was at the risk of a £180 train ticket three weeks before I would have had nowhere to live).
Now? I maintain a fleet of Wordpress sites for a digital agency for a modest £40k a year as the company's sole "someone who knows anything about Linux" person. Because my coworkers are lovely, my boss doesn't stress about anything and lets us flex our time and the clients are nice. Money is tight and definitely less than my skillset but going through the hoops of the "clever" side of the industry to be well paid and well stressed sounds daunting. There was no nurture for anyone not conforming to a very specific template which shows its face on the ranty side of Twitter.
My personal anecdote. I used to be a top-notch C++ developer, if I do say so myself. I was able to quote the ARM by chapter and verse as it were. I left C++ a little over 20 years ago. Honestly, Java had gotten to the point it could do everything I needed to do. I've stayed abreast with the changes in C++11, 14, and 17 - but only out of curiosity. I really wouldn't want to go back to that world.
If I were starting new and needed a systems language I wouldn't even think about C++. Rust would be my choice, no question about it - and I'm certainly not a Rust fanboi by any means! As others have mentioned I'd use Rust just for Cargo alone! Seriously, package management for C++ is, was and will probably forever be an absolute mess. Who wants to go back to that?
Now that I think about it that's the difference between "legacy" languages and "modern" languages - dependency and artifact management. In modern languages I expect that to be taken care of uniformly regardless of operating environment. The legacy languages do not do this. This is akin to the separation of classical and modern physics based off quantum theory.
That's why new developers don't want to use "legacy" languages and even many of us older ones don't want to mess around with those environments anymore! Too much work is involved in maintaining your tools rather than getting work done!
Well, hire me and train me. I'm a mediocre Python data engineer but self taught myself a bit of C and C++ plus I don't ask for high salary. Something like 90K CAD is good for me.
I have been playing with C++ since 2003 (university years) and still haven't had any opportunity to get hired due to my lack of previous professional experience.
For some reason, my web development experience does not matter to them, nor do they care that I do my best to learn from the C++ experts we all know.
If they are so scared they cannot find talented C++ developers, it's about time to start training people and let them gain some experience.
Do they actually hire inexperienced people and train them, or have they just been complaining for years now? Because quite frankly, I have been reading this since the release of C++11, and they still haven't found their "talented C++ developers"!
I'm here people! Hire me, train me, and prove me wrong with what I have said!
Most places need people who have production C++ experience. If someone is self-taught and hasn't done production-level C++, then some places are willing to train but don't expect to be hired in anything above a junior-level role.
I'm surprised to not see any comments mentioning Rust and Go. (Rust is mentioned in the article)
Go intends to bring better team efficiency without giving up much performance. Rust intends to bring safety with similar performance to C++.
There are mentions of Python (and some others) in the comments, but those languages stray a bit too far from being useful as general-purpose languages in the places C++ has historically been used (not to say there aren't efforts to use Python/Cython embedded).
It's not that the pool is running dry, it's that they're burning through the pool. Places like banks and most hedge funds are usually really bad to work for, and they aren't paying that well compared to FAANGs and unicorns (who they are competing with for talent). The exception is a few of the HFTs: they can have good engineering-focused cultures, but you will still work 50 hours per week or more.
These companies are more selective about who they hire than FB, Netflix, or Google, and almost universally, the talent they want forces them to bid against very high-paying tech companies.
The last few times I spoke to a recruiter, I asked for salaries over $500k because that would make it worth it to go there instead of staying at Google - I was making more than half that at G, and working 20-30 hours a week. Most of them said that they would not offer near that. The ones that would wanted me to be the next coming of Bjarne (in terms of C++ knowledge) to pay that rate. They were hiring a lot of people from defense contractors at the time.
Anecdotally, a lot of developers who enter one of those firms early and stay tend to get to the $500k-1M range pretty easily, but they seem to be a lot less willing to hire in that range. Comparatively, Google and Meta are happy to make $400k+ offers.
I was contemplating working with C++, but coincidentally I got into C (microcontrollers) and C#/.NET (desktop apps). And every time I am forced to work with C++ (.NET CLR DLL libraries) I am reminded of the general typing overhead that I do not have in C#: an HPP file with declarations and a CPP file with definitions; enums that can't be translated to their string representation directly, so you need some helper function for that... gross.
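(The enum complaint refers to hand-written mappings like the one below, which C# gives you for free via ToString() on the enum value; Color is just an illustrative type.)

    enum class Color { Red, Green, Blue };

    // The helper you end up writing by hand in C++.
    const char* to_string(Color c) {
        switch (c) {
            case Color::Red:   return "Red";
            case Color::Green: return "Green";
            case Color::Blue:  return "Blue";
        }
        return "unknown";
    }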
It depends on what you do. There are things that can be very easily done in C++ with a macro or a template that are simply impossible in C# and you just end up hammering out brain-dead code.
My brother actually likes C++ (and is probably better at it than most devs I've worked with, even those writing a DB in C). He is also finishing school and is searching for an internship. C++ internships flat out don't pay enough: 600 euros/month. He is looking in a specific field (audio and/or radio) and is currently expanding to drones, so this might not help, but I find this crazy.
C++ is still dominant in the AAA games world, and there are no signs of it ending its reign, at least at the engine level.
IMO it's not so much that there aren't talented C++ developers or that "Rust is the new hotness"; it's the long-standing WLB/culture issues of the games and finance industries that are causing many of them to burn out and move to FAANG/Microsoft/etc. (I'm one of them.)
One more reason why: there are no jobs for Junior, Mid-level, or even Senior developers. Only postings asking for 10+ years of experience with a narrow specialization; that's not for a Senior, it's for a Senior++ ;)
Two decades ago, there were plenty of jobs for newcomers: shareware desktop apps for Windows, growing CAD/EDA/simulation tools, small and mid-sized games, and enterprise apps. Today there is no room for desktop apps for Windows, CAD/EDA/simulation seems stagnant, games require Unreal/Unity/Godot with their own languages, and enterprises use C# and Java.
Embedded C++ positions differ from "General C++ Senior++ Engineer". Yes, they are open to novices, but they use their own subset of an outdated version of C++, and you cannot grow with them, because the projects are small and after 10+ years you still won't be a Senior++ with the required rare specialization (and of course the salaries are low).
What else is left for novices to get more experience? Only writing and supporting your own open-source project with a few users (or no user base at all) for a decade… So it is not an option for many people.
Is the pool running dry or are they simply not putting enough water in or allowing enough people through the gate? I've seen the job ads. Must have five years. Must have x, y, z. For a junior position. Unrealistic.
Same person can get a web job elsewhere with much less nonsense. So off they go.
Like a lot of things in IT these shortages are nonsense. They are lobbying for visa relaxation or similar so they can pay less.
If this was true, salaries would be way up. Are they? Not from what I can tell. I recently switched to a web dev role and didn't have the impression that demand for my (imho considerable) C++ skills was crazy high.
My impression always was that the problem with C++ is the high skill/brain cost in relation to usable ability payoff… which makes a limited talent pool a logical consequence.
(What I mean by that is — comparatively speaking, C++ has a much larger set of things that you need to know, understand, and apply correctly to get the same kind of thing done relative to other languages.)
I remember this exact same recruiting agency spammed me with a $120k offer to become Lead Developer for a new quantitative trading algorithm. For that salary, they also expected me to relocate to London. Some C++ consulting gigs pay $2k per day, so their offer just seemed really cheap in comparison.
Ah, employer thinking. There should be a "pool" of available C++ programmers. Not that they should hire reasonably good programmers and train them in C++.
Dear finance folks: Please do write your trading systems in Rust; then at least the poor beeper-wearing support folks maintaining the code don't have to come in on the weekend as often to debug those unnecessary null-pointer and dangling-reference errors causing crashes.
Dear language righteous: I am one of those "poor souls" who writes web and other enterprise backends in C++. I can assure you that I do not come in on weekends or get woken up at night to fix "null pointer & dangling reference errors causing crashes" or memory leaks. Modern C++, along with libs and tooling, provides more than enough features to do away with those in backend-style development. I simply do not remember the last time I had this kind of problem.
Firmware is a bit different but frankly all of the gadgets I own work just fine ;)
Edit. I am not a C++ zealot. Have used and using many languages.
Rust is solving for the wrong problem. Null and dangling pointers aren't the main problem in building financial trading systems (I built a financial trading system).
Part of the equation is that C++ devs just aren't as fungible as Python, Rust, and devs in newer languages (not that being a C++ and a Python dev is mutually exclusive). The reason for this is that a lot of the C++ codebases were written before adoption of open source 3rd party libraries was widespread and codebases were more bespoke in general. So, as a consequence, the effectiveness of a C++ dev in any given org is highly correlated to tenure. It's tough to penetrate as a new dev, and if you have to move companies, it sets you back considerably. It's like a forest with poor soil and a mature canopy; difficult to take root in.
I love C++ and I have 15+ years of experience. But I have no desire to work in finance. Apart from that there don't seem to be many good job opportunities. So instead I'm slinging JS like everyone else and making 500k...
"Not enough talent" means "not enough talent for your shitty pay and/or horrible work conditions".
I spent my first ~5 years at Google working in C++. Google didn't seem to have a problem finding enough C++ folks whose C++ skill is basically "writes compilers and maintains a template library as a hobby" or "can explain a garbled stack trace in a crash dump". Also folks without prior C++ experience did reach this level while working there.
C++ is my favorite language. When I started my 'side project' (a new kind of data management system called Didgets) I used it exclusively.
Before I semi-retired to work on the project full-time, I would sometimes check out the help-wanted postings for various software companies. It became more and more rare to see a job posting that focused on C++ skills. Systems level programming and being able to write very performant code does not seem to be nearly as high a priority today.
Unless proven otherwise, I always assume that when employers say there's a shortage of any skilled labour, they just mean they can't find someone cheap enough.
Exactly this. I know C++; I'm not using that knowledge. So, there's at least me in the talent pool. And my inbox continues to indicate that recruiters are anything but serious about hiring.
C++ was great in its era but has been supplanted by other languages for safety and performance reasons and it's fine that it's on its way out.
The tail will be long (there are billions of lines of C++ code that will need to be replaced or rendered so fundamental that they cannot be mutated without breaking expectations), but there's no shame in a language running its course.
As an embedded systems developer for 38 years, with 26 years writing for small threaded RTOS, including 16 and 32 bit DSPs, and the remainder being platform work on QNX Neutrino, NetBSD, and Linux, most of my code has been K&R, then ANSI C.
A very small amount of C++, but by comparison, a very small amount.
Everything else, Shell, Perl, Python, is a shadow by comparison.
I did my time in C++, had fun, learnt plenty, then got an offer to do some Python with a way better salary. At some point companies have to seriously up their offers if they want me to care about ABI breakage, weird bugs caused by undefined behavior, and the usual 100k LOC of legacy code with 10+ MB objects and no coherent memory ownership.
"eFinancial careers"? "ProfitView, a crypto trading tools developer"? "former software engineer at Barclays and Bank of America"? ... cry me a river.
Maybe they're seeing fewer C++ developers because more of us developed a bit of a moral backbone and don't go working for those socially parasitic enterprises.
Also, programming jobs in finance can be really boring.
There are a few neat technical challenges, but usually the subject matter is quite dry and fairly annoying (regulatory frameworks, etc.). Also there's usually a lot of red tape to get anything done.
I don't want to work for a financial company again if I can help it.
It's a poorly written sentence. The key idea is: "Rust got an 87% approval rate in the "most loved" category of the Stack Overflow Survey. [...] C++, meanwhile, languished at 48%"
The number I was looking for was in an earlier paragraph, which I somehow missed when I made my post.
> So where have all the C++ developers gone? The Stack Overflow Survey 2022 reported a drop of almost two percentage points in respondents this past year using C++ (from 24.3% to 22.5%), even while the percentage of professional developers using it rose. The good news, though, is that 34.7% of respondents learning to code are using C++, placing it in the top 6 programming languages of that category.
Huh, I need to pick me up one of those plentiful jobs.
Just thinking out loud: for those of us who have done this for a long time, I can see more reasons why people don't look for more gigs once they have stepped off the wheel (aside from the real or imagined problem of ageism, of course).
. The torture of the modern software interview. Not worth it if you have enough money to live on already.
. The tendency for modern companies to simply grind out more of the same. There was a time when there was a lot of genuinely new product development going on in the larger embedded and workstation app space.
. The strong notion of small stepwise increments in development rather than being cut loose for some time. Writing to a series of tickets really sucks if you are used to being left alone.
I'm not saying that things are worse, just different.
I am one of the few stubborn people, I guess, who insist on wanting to work with C++, C, Lua...
All I see is Python, Ruby, JS, Java, etc. jobs. So why bother learning a language that has no jobs for it? No wonder the number of developers is decreasing. It is just supply meeting demand.
TBH, high-level skills in C++20 are not what these companies are looking for; they want C++14 or so. Meanwhile, anyone learning C++ in 2022 would rather pick up C++20 and beyond with its niceties; old C++ is too tiresome without the extra rewards.
I've wondered why C/C++ isn't appreciated more by engineers. I hear a lot of comments along the lines of "Rust is safer". While I get that it is safer in some respects, a lot of the time I've heard that people end up writing unsafe Rust anyway. Then the next argument goes: well, you should write as little of that unsafe Rust as possible. Fact of life is that you will need to write unsafe Rust. Actually, unsafe INSERT_ANY_LANGUAGE_HERE. My point is that as an engineer you will need to learn how the computer works. And I don't mean just at a surface level, but really deep. So once you have that knowledge, why not just use it?
I agree for C. It's simple, it's fast, it's portable, and it's a fantastic gateway into ASM. To your point, I believe learning it is an important part of foundational computer science - and for that reason I don't think it will ever go away completely.
It gets worse: try finding a Windows C++ developer with kernel experience. They seem to be a rare breed... And finding one who has even a bit of security experience and a defensive coding style is even harder
I'd love to learn C++ and jump ship from the Data Science job I work, but I'm not even sure where I would start. Sure, there are lots of videos about programming in modern C++, but the reality is that none of the codebases you work on will probably use it. And a lot of places just use a subset of C++, or just C with classes, so how much of C++ do I actually need to know? That, coupled with the fact that these jobs pay less, makes me think I should just learn a Python web framework since I already know Python, plus some HTML and CSS, and just apply for a Python job.
The comments here worry me. I've been programming C++ professionally for 15+ years and I think my comp is pretty good. I didn't shop around though (maybe I should just to see what the market is like, but I hate LeetCode or whatever it is kids do nowadays :) ).
We've noticed the C++ pool getting dry when hiring though; nowadays everyone that comes to interview uses Python and JavaScript. We're switching part of our stack to TypeScript because we simply can't find qualified C++ developers (or folks in general willing / able to learn C++).
What industry are you in? From the comments, what I've gathered is that anything in the embedded space is awfully underpaid unless you work for Google/Apple/Microsoft/etc.
And finance can pay well, but web development pays better.
I am not sure what they mean by "talented C++ developers". If that means a person who knows all of C++ and constantly uses all the features, then sure, you will not find many people with that kind of experience.
The reason, I think, is that they're not really needed in almost all cases. Most people will do just fine using a specific subset that is adequate for a given project.
What is really needed is people who can think and "properly" architect and implement the system. They're way more valuable and will do just fine in most languages.
I believe it. C/C++ was one of the earlier languages I knew. It was my workhorse language.
But now it's been supplanted by C#, Python, and JavaScript. And even with JavaScript, I'm constantly finding out stuff that was new a few years ago that makes the language better and easier to use.
I've taken a look at C++ recently and there's so much new stuff, it would almost be like learning it all over again. Hell, even C# would be like that if I were to compare it from when I first learned it to what it looks like now.
If having a VM/GC is acceptable then C# is the overall best language/framework today, meaning that it ranges from good to great in everything: Web, Game Dev, Desktop Dev, Mobile etc.
Otherwise there's no general contender. With C#, these days you can go for unsafe code and deal with native code in safer ways using new-ish features of the language: Span/Memory, refs, ref attributes, etc. Or just use good old pointers directly and manage everything yourself.
This seems like a good thing frankly. C++ sucks to work in (I have 10 years exp with it professionally). I’d never start a new project in C++. Maybe C for an exotic arch, but not C++.
Most C++ programming is tied up in legacy code and tech debt; that is why the language has that image. There are only a few modern greenfield C++ projects on the latest standard out there.
That's true, but if you can find a place that isn't just new features all the time and you can refactor, it's so satisfying cleaning up some shit ancient c++ with clean modern c++.
I've had C++ projects at work over the years, and generally despise the language (it feels too complex.) I actively avoid it. C is fine. C++, no thanks.
This is interesting to see. I don’t use c++ professionally but I have started to use it in my spare time for some side projects.
I find it interesting as it gives a lot of control over OS primitives.
One thing I found a little odd was the various different ways to build and import third party libraries. Finding the include directories and linking the libraries can be a pain.
I use cmake and ninja which seems to be somewhat easier to wrangle with.
What do these presumably highly-paid C++ developers work with exactly? And what is their background? How much do they make in London?
I have a degree in mathematics, am ~36 years old, but only worked as a web dev.
I'm thinking about taking courses in mathematical statistics or physics. I want to "get out" of web dev. This type of job would be a huge step up for me. Just unsure how to begin a career change...
Have to say this thread really took me by surprise.
I was seriously considering moving from web dev (ruby/python etc) to C++ or even C to increase my job security.
I guess the feeling many devs have of job insecurity and/or frustration with constant changes and becoming obsolete and/or burned out is inherent to the field.
Back to the drawing board...
Many people here compare C++ jobs to, say, JS jobs, mentioning that the latter are easier and at the same time pay more. I wonder if all of them really don't care what they do as long as the wage is large enough.
I tend to think that in many cases one would deal with something more interesting/involving while working on a cpp project than on a python web app.
I had the most fun ever writing c++ 20 years ago. The power of the language amazed me, combining high level constructs with low level control and many, many (too many) powerful features. And so many ways to completely screw yourself up.
I vowed never to use it again. I miss it still but it's just not worth it.
I personally know c++ and know others who are also well versed in c++. I have a disagreement with the computer industry and won't work in an embedded space. Others won't be hired because of their lifestyle, looks, or past record. The pool isn't running dry it's being drained.
I spent some time developing in C++ 10 years back or so. I really don't want to use OOP. I fucking hate OOP. No amount of money will make me use C++ again.
These days I do embedded programming in C with some Python on the side. As a plus the nightmares about debugging c++ code disappeared.
I find a subset of C++ to be pretty enjoyable to use to be honest. But I think if I hadn’t cut my programming teeth on it so early in my life I would probably share the opinion of most of my coworkers about how it’s a bit of a monstrosity.
Actually, I've learned C++ in the past few years and want to look into working in embedded development. Now I'm reading that the industry is in need of C++ developers but pays badly at the same time? I'm confused.
I wrote in C++ for a good chunk of my programming career. I'm a manager/architect now, but if you forced me back into programming I'd much rather use C# or Rust than C++ again.
A job description might say C++ at the top, but it will be followed by a page of other requirements and weeks of interviews. They won't hire you unless you are already doing everything.
Personally, I would love to work in c++. It’s my favorite language to work in. I have over a decade of experience with it professionally. But, seems like most jobs these days are python.
I have no interest in HFT or cryptocurrency, which the article says are lacking C++ programmers. I consider them Ponzi schemes. They don't create any value for society as far as I'm concerned.
Should I move away from C++? Has anyone done this? Every time I try, I get dropped because I lack "professional" webdev experience. Thanks in advance.
Is this for graduates coming directly from high school or are you seeing success with people that already hold degrees and are going for retraining or up-skilling?
New college grads who went to college right after high school. They have a strong intern program, so the students usually have some real-world experience using C++.
>The real problem is that C++ is neither easy nor loved. Rust got an 87% approval rate in the "most loved" category of the Stack Overflow Survey. However, only 9.3% of respondents used Rust at all and only 8.8% did so professionally. C++, meanwhile, languished at 48%.
It seems that developers who never used Rust love Rust. I love neither, but if I had to pick one to use, I would pick C++, because for me it is much easier.
Talent is a myth. The pool is only limited at a particular point in time. Employers who require skills in a particular language can just provide training.
I believe they're misguided to invest in C++. Enormous amounts of time and effort are spent solving weird pointer errors in C++ programs which simply don't occur in other languages such as Rust and C#.
Why is there this almost religious belief that their program needs the fastest execution time? It's madness IMHO. Trading a few percentage points of performance for safety, reliability and security is more than worth it.
I still use C++ on a very regular basis, but C# is my mainstay at the moment.
I am no expert, but I am under the impression that for HFT high performance is the absolute goal, and slight percentage differences in performance can affect profits.
HFT entails only a small part of the usage of C++.
It's still being used in many embedded systems (including automotive, aviation and building automation), where Rust is also available. Interop is always possible with C/C++ so there's little excuse not to use Rust there.
The West repeated the same mistakes as when they decided to let China become the factory of the world.
Now they pay the price.
They thought they could outsource embedded/systems devs too; if you go check in Indonesia and Taiwan, the number of people who dabble in embedded/robotics stuff is insane.