So I have to hope for the best when some piece of hardware, coded in C, gets plugged into the network.
The Morris Worm is 30+ years old, and yet the best we can do is platforms like CHERI, Solaris SPARC ADI, and ARM MTE, which also happen to be quite niche.
C has a bad rep, and deservedly so in the context of anything related to security. We have not been able to sanitize the language because the ability to surprise the writer of some bit of code is almost a core concept in the language. This sucks. But at the same time, the way newer languages are held up as manna from heaven is rather tiresome as well. By the time they are deployed in the quantities that C code is today, there will be exploits in those languages too, because it is mostly the programmers that are fallible. Now, some languages - again, C - make it easier to shoot yourself in the foot than others. But I've seen plenty of code in Java that was exploitable, even though memory safety in Java is arguably at a higher level than, say, Rust.
So you'll be hoping for the best for some time to come, no matter what the stuff you install was written in.
It is possible to foreclose on some kinds of error entirely, and if you compare like-for-like, those benefits become obvious. If, on the other hand, you take the productivity gains of a higher-level language as a reason to write more ambitious programs, well, new functionality can find new problems.
Very nice, but I'd nitpick memory corruption and expand it to hardware errors, or add that category.
Sending digital signals over an analog medium (read: always) can fail, and may do so regularly, depending on the hardware itself and environmental conditions.
Simple examples: digital 3.3V signals sent over 10m of high-capacitance line (lose a bit here and there), or an overclocked CPU undervolting at just the right time, etc...
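That is also why link layers wrap frames in parity bits or CRCs rather than trusting the wire. A tiny, hypothetical sketch of the weakest form of that idea in C (a single even-parity bit, which catches one flipped bit but not two that cancel out):

    #include <stdint.h>
    #include <stdio.h>

    /* XOR all bits together: 0 means an even number of 1-bits. */
    static uint8_t parity8(uint8_t byte)
    {
        uint8_t p = 0;
        while (byte) {
            p ^= byte & 1u;
            byte >>= 1;
        }
        return p;
    }

    int main(void)
    {
        uint8_t sent = 0x5A;
        uint8_t sent_parity = parity8(sent);

        uint8_t received = sent ^ 0x08;   /* simulate one bit flipped in transit */

        if (parity8(received) != sent_parity)
            puts("bit error detected, request retransmission");
        else
            puts("frame accepted");
        return 0;
    }

Real hardware uses far stronger codes than this, but the principle is the same: the software has to assume the analog layer will lie to it occasionally.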
EDIT: I'd also argue that UB is a type of logical_error. Not in the sense that UB exists within the language, but in that the logic failed to account for the scenario?
Except that unless we are talking about hardware design languages, no programming language accounts for hardware errors as such.
UB is its own kind of error, especially since ISO C documents over 200 cases of it, and unless you are using static analysers you most likely won't be able to tell that you are running into one of those scenarios, as no human is able to know all of them by heart, whereas logical errors are relatively easy to track down, even by code review.
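To make that concrete, here is a minimal sketch (function names and values made up for illustration) of two of those documented cases hiding in code that reads as perfectly ordinary; a reviewer can easily wave it through, while building with something like -fsanitize=undefined and -fsanitize=address will flag both at run time:

    #include <limits.h>
    #include <stdio.h>

    /* Signed overflow is undefined behaviour: the compiler may assume
       len + 1 never wraps, so this "overflow check" can be optimised
       away entirely. */
    static int next_offset(int len)
    {
        int n = len + 1;
        if (n < len)
            return -1;
        return n;
    }

    int main(void)
    {
        int buf[4];
        /* Off-by-one: writing buf[4] is out of bounds, also undefined
           behaviour, and easy to miss in review. */
        for (int i = 0; i <= 4; i++)
            buf[i] = i;
        printf("%d %d\n", next_offset(INT_MAX), buf[0]);
        return 0;
    }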
But every language that can reach the hardware will have the ability to wreck things in spectacular and hard to predict ways. Every new language ever was touted as the one that would finally solve all our problems. For Java and COBOL the historical record borders on the comical. I have no doubt that the same will go for every other language that we just haven't found the warts in yet. Two steps forward, one step back, that seems to be the way of the world in the programming kingdom.
Nearly every piece of networked hardware runs critical software written in C, and the consequences are not nearly as disastrous as reading HN would make us believe.
A possible outcome is that you would trade pointer bugs etc. for "Java bugs" if embedded Java were used everywhere. Embedding a complete runtime increases the attack surface a lot.
Many of those Java exploits are in the layers written in C and C++, yet another reason to get rid of them in security-critical code.
According to the Microsoft Security Response Center and the Google-driven Linux Kernel Self Protection Project, the industry's losses due to memory corruption in software written in C run into billions of dollars per year.
Then, unless we are speaking about a non-conformant ISO C implementation for bare-metal deployments, C does have a runtime: it provides the initialization that runs before main() starts, floating-point emulation, handling of signals on non-UNIX OSes, and VLAs.
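For what it's worth, a minimal sketch of two of those items in ordinary ISO C (the crt0/__libc_start_main and soft-float details in the comments are typical of glibc-based hosts and FPU-less MCUs, not universal):

    #include <stdio.h>

    static void fill(int n)
    {
        /* A VLA: the frame size is not known at compile time, so the
           compiler emits stack-allocation code (and on some targets a
           call into a support routine) at run time. */
        int buf[n];
        for (int i = 0; i < n; i++)
            buf[i] = i;
        printf("last = %d\n", buf[n - 1]);
    }

    int main(void)
    {
        /* Before this line runs, the C startup code (crt0, plus
           __libc_start_main on glibc-based systems) has already set up
           the environment, called any constructors and then main();
           on bare metal it is also what zeroes .bss and copies .data. */
        volatile float a = 10.0f, b = 3.0f;
        float x = a / b;   /* on an MCU without an FPU this becomes a
                              call into the soft-float support library,
                              e.g. __aeabi_fdiv on ARM EABI */
        fill(4);
        printf("x = %f\n", x);
        return 0;
    }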
The way Apple, Google, Microsoft, Sony, and ARM are now steering the industry through their OS SDKs has already proven who is on the right side.
I have been at this since the BBS days, I don't care about Internet brownie points.
The only thing left is actually having some liability in place for business damages caused by exploits. I am fairly confident that it will eventually happen, even if it takes a couple more years or decades to get there.
Claiming that a little process startup code (which isn't really part of a particular language, but is much more part of the OS ABI) is easier to exploit than an entire JRE is just dishonest.
I would never think of "floating point emulation, handling of signals on non-UNIX OSes, VLAs" as anything resembling a "runtime". These are mostly irrelevant anyway, but apart from that they are just little library nuggets or a few assembly instructions that get inserted as part of the regular compilation.
By "runtime", I believe most people mean a runtime system (like the JRE), and that is an entirely different world. It runs your "compiled" byte code because that can't run on its own.
I care about computer science definitions, not what most people think.
Dishonest is selling C to write any kind of quality software, especially anything connected to the Internet, unless one's metrics for quality are very low.