
Except it doesn't break on all platforms.

So I have to hope for the best when some piece of hardware, coded in C, gets plugged into the network.

The Morris Worm is 30+ years old, and yet the best we can do is platforms like CHERI, Solaris SPARC ADI, and ARM MTE, which also happen to be quite platform-specific.




C has a bad rep, and deservedly so in the context of anything related to security. We have not been able to sanitize the language because the ability to surprise the writer of some bit of code is almost a core concept of the language. This sucks. But at the same time, the way newer languages are held up as manna from heaven is rather tiresome as well. By the time they are deployed in the quantities that C code is today, there will be exploits in those languages too, because it is mostly the programmers that are fallible. Now, some languages - again, C - make it easier to shoot yourself in the foot than others. But I've seen plenty of code in Java that was exploitable even though memory safety in Java is arguably at a higher level than, say, Rust's.

So you'll be hoping for the best for some time to come, no matter what the stuff you install was written in.


It is possible to foreclose on some kinds of error entirely, and if you compare like-for-like, those benefits become obvious. If, on the other hand, you take the productivity gains of a higher-level language as a reason to write more ambitious programs, well, new functionality can find new problems.


It all boils down to:

Σ UB + Σ memory_corruption + Σ logical_errors ≥ Σ logical_errors
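
To make that concrete, here is a minimal C sketch (hypothetical, just for illustration): the same one-line off-by-one that stays a plain logical_error in a memory-safe language (a bounds check fires) also becomes memory_corruption and UB in C.

    #include <stdio.h>

    int main(void) {
        int totals[4] = {0, 0, 0, 0};

        /* Logic error: the loop should stop at i < 4.  In a bounds-checked
           language this is "just" a logical_error; in C the write to
           totals[4] goes past the array, so it is also memory_corruption
           and UB, and may well clobber i or other stack data. */
        for (int i = 0; i <= 4; i++) {
            totals[i] = i * i;
        }

        printf("%d\n", totals[3]);
        return 0;
    }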


Very nice, but I'd nitpick memory corruption and expand it to hardware errors, or add that as a separate category.

Sending digital signals over an analog medium (read: always) can fail, and may do so regularly depending on the hardware itself and on environmental conditions.

Simple examples: digital 3.3V signals sent over 10m of high-capacitance line (you lose a bit here and there), or an overclocked CPU undervolting at just the right time, etc.

EDIT: I'd also argue that UB is a type of logical_error. Not in the sense that UB exists within the language, but in that the logic failed to account for the scenario?


Except that unless we are talking about hardware design languages, no programming language accounts for hardware errors as such.

UB is its own kind of error, especially since ISO C documents over 200 cases of it, and unless you are using static analysers you most likely won't be able to find out that you are falling into such scenarios, as no human is able to know all of them by heart, whereas logical errors are relatively easy to track down, even by code review.
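
As a small, hedged example (assuming GCC or Clang with UBSan available): signed integer overflow is one of those documented UB cases that reads like perfectly reasonable code and sails through review, yet a tool spots it immediately.

    #include <limits.h>
    #include <stdio.h>

    /* Looks like an innocent average and passes most code reviews. */
    static int midpoint(int lo, int hi) {
        return (lo + hi) / 2;   /* UB if lo + hi exceeds INT_MAX */
    }

    int main(void) {
        /* Build with: cc -fsanitize=undefined midpoint.c
           At run time UBSan reports "signed integer overflow" here;
           a human reviewer rarely notices it. */
        printf("%d\n", midpoint(INT_MAX - 1, INT_MAX - 1));
        return 0;
    }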


Very true.

But every language that can reach the hardware will have the ability to wreck things in spectacular and hard-to-predict ways. Every new language ever was touted as the one that would finally solve all our problems. For Java and COBOL the historical record borders on the comical. I have no doubt that the same will go for every other language that we just haven't found the warts in yet. Two steps forward, one step back: that seems to be the way of the world in the programming kingdom.


Nearly every piece of networked hardware runs critical software written in C, and the consequences are not nearly as disastrous as reading HN would make us believe.


The CVE database proves otherwise.

Unfortunately liability is not yet a thing across the industry.


What does that site prove?

https://cve.mitre.org/cgi-bin/cvekey.cgi?keyword=stack (3496 entries)
https://cve.mitre.org/cgi-bin/cvekey.cgi?keyword=pointer (2389 entries)

https://cve.mitre.org/cgi-bin/cvekey.cgi?keyword=java (2034 entries)

A possible outcome is that you would trade pointer bugs etc. for "Java bugs" if embedded Java were used everywhere. Embedding a complete runtime increases the attack surface a lot.


Contrary to urban myths, C has a runtime as well.

Many of those Java exploits are in the C- and C++-written code layers, yet another reason to get rid of them in security-critical code.

According to the Microsoft Security Response Center and the Google-driven Linux Kernel Self Protection Project, industry losses due to memory corruption in C-written software run into billions of dollars per year.

Several reports are available.


> Contrary to urban myths, C has a runtime as well.

Would you like to tell us how it compares to e.g. Java's runtime as well?


For starters, easier to exploit.

Then, unless we are speaking about a non-conformant ISO C implementation for bare-metal deployments, it provides the initialization before main() starts, floating-point emulation, handling of signals on non-UNIX OSes, and VLAs.
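
A rough sketch of what that support code means in practice (assuming a typical hosted GCC or Clang toolchain; the names here are made up for illustration): the zeroing of statics happens before main() ever runs, and the VLA forces the compiler to emit runtime stack-allocation code.

    #include <stdio.h>

    /* Guaranteed to be zero before main() runs; on hosted platforms the
       loader/CRT startup code takes care of that, not the program itself. */
    static long counters[1024];

    static void fill(int n) {
        /* VLA: the size is only known at run time, so the compiler emits
           stack-adjustment (and on some targets stack-probing) code here,
           i.e. language-support machinery rather than user logic. */
        int scratch[n];
        for (int i = 0; i < n; i++)
            scratch[i] = i;
        counters[0] = scratch[n - 1];
    }

    int main(void) {
        fill(16);
        printf("%ld\n", counters[0]);
        return 0;
    }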


So basically all the stuff that no one cares about? Maybe it's only a runtime if you really want to win a pointless argument?


How Apple, Google, Microsoft, Sony, and ARM are now steering the industry with their OS SDKs has already shown who is on the right side.

I have been at this since the BBS days; I don't care about Internet brownie points.

The only thing left is actually having some liability in place for business damages caused by exploits. I am fairly confident that it will eventually happen, even if it takes a couple more years or decades to get there.


Claiming that a little process startup code (which isn't really part of a particular language, but is much more part of the OS ABI) is easier to exploit than an entire JRE is just dishonest.

I would never think of "floating point emulation, handling of signals on non-UNIX OSes, VLAs" as anything resembling a "runtime". These are mostly irrelevant anyway, but apart from that they are just little library nuggets or a few assembly instructions that get inserted as part of the regular compilation.

By "runtime", I believe most people mean a runtime system (like the JRE), and that is an entirely different world. It runs your "compiled" byte code because that can't run on its own.


I care about computer science definitions, not what most people think.

What is dishonest is selling C for writing any kind of quality software, especially anything connected to the Internet, unless one's metrics for quality are very low.


So "computer science" defines a little process startup code to be equal to the JRE? If that is so, I admit defeat to your infallible logic.


Computer science defines any kind of code used to support language features as a language runtime.

Assume whatever you feel like.


So that seems to be about how balanced and deep you want discussions to go. Thanks for the propaganda anyway.


Propaganda goes both ways.


Most of the notorious security breaches in practice (Stuxnet, NSA, IoT mega-botnets…) had nothing to do with buffer overflows.


A few high-profile cases don't hide the impact of the billions of dollars spent fixing buffer overflows and their related exploits.

> 70% of the vulnerabilities Microsoft assigns a CVE each year continue to be memory safety issues

https://msrc-blog.microsoft.com/2019/07/18/we-need-a-safer-s...

If you are going to argue that it's just because it's Windows, there are similar reports from Google regarding Linux.



