
There is no such language as C/C++. There is C, which cannot be written safely, and there is C++, which can be, and quite often is.

It has been many years since I shipped a memory bug in C++. It is just not a real worry for me. I am constantly dealing with design, specification, and logic flaws, which affect Rust equally, or more so.

I am aware that there are plenty of other programmers out there, writing bad code in what they would call C++. I would like them to write good code. If it takes Rust to make them write good code, so be it. But if they began writing decent C++ code, that is just as good.

The threshold is not zero memory errors. The threshold is many fewer memory errors than logic or design errors. The more attention your language steals from logic and design, the more of those errors you will have. Such errors have consequences just as dire as memory errors, and are overwhelmingly more common in competent programmers' code, in C++ and in Rust.

C++ is (still) quite a substantially more expressive language than Rust, which is to say it can capture a lot more semantics in a library. Every time I use a powerful, well-tested library instead of coding logic by hand because it can't be captured in a library, that is another place errors have no opportunity to creep in.

So it's great that Rust makes some errors harder to make, but that is no grounds for acting holier-than-thou. Rust programmers have simply chosen to have many more of the other kinds of errors, instead.

Every programmer who switches from C to Rust makes a better world; likewise Java to Rust, or C# to Rust, or Go to Rust. Or, any of those to C++.

Switching from C++ to Rust, or Rust to C++, is of overwhelmingly less consequence, but the balance is still in C++'s favor because C++ still supports more powerful libraries.

You might disagree, but it is far from obvious that you are correct.



> It has been many years since I shipped a memory bug in C++. It is just not a real worry for me.

The whole comment sounds so much like well-written satire, but I think he's being serious.


I agree with him. In many practical applications with well-designed class hierarchies it really isn't much of an issue. It hasn't been for me either.


> with well-designed class hierarchies

:eyes:


You can roll your eyes at me all you want, but I've been programming in C++ for a long time. These memory access issues just don't seem to be a big problem for us in practice. That's because we wrap all raw memory manipulation in classes appropriate for our application, so it's just not an issue. I agree it could be an issue in theory.


He rolls his eyes at "hierarchies". Libraries do make the difference.

Somebody else interjected Design Patterns. You can define a design pattern as a weakness in your language's ability to express a library function to do the job.


... and with proper use of Design Patterns!


Why is it difficult to believe? I've also written plenty of C++ code without memory bugs. It's not that hard if you play by a few simple rules.


> I've also written plenty of C++ code without memory bugs.

The classic response to this is "That you know of." Consider that even quality-conscious projects with careful code review like Chrome have issues like this use-after-free bug from time to time.

https://googleprojectzero.blogspot.com/2019/04/virtually-unl...

So when people claim that they personally don't write memory bugs I tend to assume that they are mistaken, and that the real truth is that they haven't yet noticed any of the memory bugs that they have written because they are too subtle or too rare to have noticed.


Chrome is in an exceptionally hard place because of its JIT. Your language cannot tell you if it's safe for your JIT to omit a bounds check.


That post describes two vulnerabilities: one is in the JIT, but the other one is in regular old C++ code. More generally, JIT bugs are a relatively small minority of browser vulnerabilities. More often you see issues like use-after-free in C++ code that interacts with JS, such as implementations of DOM interfaces, but the issues are not directly JIT related and would be avoided in a fully memory-safe language.


Chrome, like Firefox, is not an example of modern C++ code. Google's and Mozilla's coding standards enforce a late-'90s style. It is astonishing they get it to work at all.


In this case, I mean a subsystem that has been in production since 2006 and has been processing hundreds of thousands of messages a day. I don't claim that it's perfect or bug-free, but if it had significant memory errors I'd have heard about it. I designed and implemented it to use patterns like RAII to manage memory, and it's worked quite well.


That is why you use tools like valgrind to verify that you got it right.


When I worked on a mobile C++ project at Google, we went exceptionally out of our way to avoid memory issues.

We ran under valgrind and multiple sanitizers (and continuously ran those with high coverage unit and integration tests). We ran fuzzers. We had strictly enforced style guides.

We still shipped multiple use-after-frees and UB-tripping behavior. I also saw multiple issues in other major libraries that we were building from source, so it can't be written off as mere incompetence on my team.

Basically, it might be possible, but I think writing memory-safe C++ is far more difficult than this thread is making it sound.


Writing memory safe programs in C++ is possible. Most coding styles and some problem domains don't lend themselves to it naturally, though. In my experience, restricted subsets used for embedded software vastly reduce the risk of introducing errors and make actual errors easier to spot and fix.


> Writing memory safe programs in C++ is possible.

Everything "is possible" in the sense that in theory you can do it. But when, time and time again, people fail to do it, even people who invest almost heroic levels of effort (see above: valgrind, multiple sanitizers, and so on), you get to the point where you have to accept that what is possible in theory doesn't work in practice.


I have seen it done in practice, on rather large systems. But it requires actual, slow software engineering instead of the freestyle coding processes that are used in most places.


My main rule is "no naked new," meaning that the only place the new operator is allowed is in a constructor, and the only place delete is allowed is in a destructor (unless there's some very special circumstance). This style lends itself to RAII. The other rule is to use the standard library containers unless there's a very good reason not to do so. That seems to cover most of the really basic errors.


Yes, I know how you are obliged to code at Google. It is astonishing that anything works.

The "strictly enforced style guides" strictly enforce '90s coding habits.


Together with a test-suite that covers the exponential number of paths through your code...


Changing programming language neither reduces the need for test coverage nor does it magically increase coverage.


A type system changes the need for test coverage because it eliminates whole classes of bugs statically that would need an infinite amount of tests to eliminate dynamically.


That leaves an infinite number of logic bugs to be tested for. Types cannot fix interface misuse at the integration and system level. So no, this does not reduce the need for testing.


Whether they reduce the need for testing overall is arguable. But what matters in this discussion is that types can guarantee memory safety, meaning that the cases that you forgot to test – and there will always be such cases, no matter how careful you are (just look at SQLite) – are less likely to be exploitable.


Types can only provide limited memory safety. There is a real need to deal with data structures that are so dynamic as to be essentially untyped. Granted, this usually happens in driver code for particularly interesting hardware, but it happens. Also, I have not yet seen a type system that is both memory safe and does not prohibit certain optimizations.


I haven't written C++ seriously for a number of years. Do you still have to do all that "rule of three" boilerplate stuff to use your classes with the STL? Is it better or worse now with move constructors?


It's a bit better with C++11 syntax where you can use = delete to remove the default constructors/destructors, e.g.:

  class Class
  {
      Class();                                  // declared, never defined
      Class(const Class&) = delete;             // copying removed
      Class& operator=(const Class&) = delete;  // copy assignment removed
      ~Class() = default;
  };
I find this slightly cleaner than the old approach of declaring them private and not defining an implementation, but the concept hasn't changed much. I'd love a way to say 'no, compiler, I'll define the constructors, operators, and destructors I want - no defaults' but that's not part of the standard.

Move constructors are an extra that, if I remember correctly, don't get a default version, thankfully.


So, so much better. Nowadays we use what has been called the "rule of zero". Write a constructor if you maintain an invariant. Rely on library components and destructors for all else.


> https://jaxenter.com/security-vulnerabilities-languages-1570...

There's a world of difference, in terms of safety, between C and C++.


The comparison in that link is pretty meaningless; it scores languages by how many vulnerabilities have been reported in code written in them, without even making an attempt to divide by the total amount of code written in them, let alone account for factors like importance/level of public attention, what role the code plays, bias in the dataset, etc.


To be fair the report explicitly states this limitation. jcelerier just conveniently forgot to mention it.


You're misrepresenting the report in order to justify your bias. Direct quote from the report:

    This is not to say that C is less secure than the other languages. The high number of open source vulnerabilities in C can be explained by several factors. For starters, C has been in use for longer than any of the other languages we researched and has the highest volume of written code. It is also one of the languages behind major infrastructure like Open SSL and the Linux kernel. This winning combination of volume and centrality explains the high number of known open source vulnerabilities in C.
In other words the report explains this with 1) there being more C code in volume and 2) more C code in security-relevant projects (which are reviewed more by security researchers). It also states explicitly that your conclusion is not to be drawn from this.


Readable version of the quote:

> This is not to say that C is less secure than the other languages. The high number of open source vulnerabilities in C can be explained by several factors. For starters, C has been in use for longer than any of the other languages we researched and has the highest volume of written code. It is also one of the languages behind major infrastructure like Open SSL and the Linux kernel. This winning combination of volume and centrality explains the high number of known open source vulnerabilities in C.

Please, never ever use code snippets for quotes, unless you hate mobile users. Just put "> " in front.


> unless you hate mobile users

Or just never use them, period. I'm reading this on a 4K desktop display, and I still have to scroll. They're only useful for actual code, which is very rarely posted on HN.


> It has been many years since I shipped a memory bug in C++. It is just not a real worry for me.

Can you write down the algorithm that you use to avoid writing memory bugs? Can you teach others how to do it? Experienced C++ programmers do seem to learn how to avoid those bugs (although very often what they write is still undefined according to the standard - but e.g. multithreading bugs may be rare enough not to be encountered in practice). But that's of limited use as long as it's impossible for anyone else to look at a C++ codebase and confirm, at a glance, that that codebase does not contain memory bugs.

> C++ is (still) quite a substantially more expressive language than Rust, which is to say it can capture a lot more semantics in a library.

> So it's great that Rust makes some errors harder to make, but that is no grounds for acting holier-than-thou. Rust programmers have simply chosen to have many more of the other kinds of errors, instead.

Citation needed. What desirable constructions are impossible to express in Rust? I've no doubt that you can write some super-"clever" C++ that reuses the same pointer several different ways and can't be ported to Rust - but such code is not desirable in C++ either (at least not in codebases that more than one person is expected to use). Meanwhile Rust offers a lot of opportunities for libraries to express themselves clearly in a way that's not possible in C++: sum types let you express a very common return pattern much more clearly than you can ever do in C++. Being able to return functions makes libraries much more expressive. Standardised ownership annotations make correct library use very clear, and allow a compiler to automatically check that they're used correctly.

> Every programmer who switches from C to Rust makes a better world; likewise Java to Rust, or C# to Rust, or Go to Rust. Or, any of those to C++.

> Switching from C++ to Rust, or Rust to C++, is of overwhelmingly less consequence, but the balance is still in C++'s favor because C++ still supports more powerful libraries.

> You might disagree, but it is far from obvious that you are correct.

On the contrary, it's obvious from the frequency with which we see crashes and security flaws in C++ codebases that the average programmer who switches from Java to C++, or C# to C++ makes the world a worse place. It's overwhelmingly likely to be true for Rust to C++ as well.


>Can you write down the algorithm that you use to avoid writing memory bugs? Can you teach others how to do it?

Yes. Code using powerful libraries. Every use of a powerful library eliminates any number of every kind of bug.

Rust has not caught up to C++'s ability to code powerful libraries, and might never. C++ is a moving target. C++20 is more powerful than C++17, which was more powerful than 14, 11, 03.

There are certainly niches for less powerful languages. Rust is more powerful, and nicer to code in, than many that occupy those. It will completely displace Ada, for example.


> Yes. Code using powerful libraries. Every use of a powerful library eliminates any number of every kind of bug.

So if I find that a C++ project is using powerful libraries, I can be confident that it doesn't have memory errors? History suggests not.


If I find a Rust program that is (perforce) not using powerful libraries, can I be confident that it does not harbor grave errors?

Certainly not. Rust takes aim at memory errors, and misses the rest that would be avoided by encapsulating bug-prone code in libraries. C++ enables capturing bug-prone code in well-tested libraries, eliminating whole families of bugs, including, in my recent experience, memory bugs.

That is not to say all C++ code is bug-free. Google and Mozilla code, by corporate fiat, is forbidden to participate.


> If I find a Rust program that is (perforce) not using powerful libraries, can I be confident that it does not harbor grave errors?

You can be confident that it doesn't harbour memory errors. You can be confident that it doesn't contain arbitrary code execution bugs, which is a much better circumstance than with any C++ project I've seen (C++ by its nature turns almost any bug into a security bug).

IME you can also have a much higher level of confidence that it does what you expect (including not having bugs) than you would for a C++ project, because of Rust's more expressive type system.

> C++ enables capturing bug-prone code in well-tested libraries, eliminating whole families of bugs, including, in my recent experience, memory bugs.

And yet in practice you can neither be confident that there are no memory bugs, nor that there are no other bugs. Even the big name C++ libraries are riddled with major bugs. Perhaps libraries that are written in a certain fashion avoid this bugginess, but that's of little use when it's not possible to tell from a glance whether a given library is one of the buggy ones or not.


This is the classic False Dichotomy.

Rust programs have bugs. Rust programs have security bugs. Are they mediated by memory usage bugs? Probably not, unless the program has unsafe blocks, or uses libraries with unsafe blocks, or libraries that use libraries that have unsafe blocks, or call out to C libraries. Or tickle a compiler bug.

Can it leak my credentials to a network socket as a consequence of any of those bugs, memory or otherwise?

Putting your memory errors in unsafe blocks may make them invisible to you, but that does not make them go away.

So, yes, of course it can.


> Can it leak my credentials to a network socket as a consequence of any of those bugs, memory or otherwise?

Sure, that class of bugs still exists. But they're rarer and less damaging (even with stolen credentials, an attacker can't do as much damage as one who had arbitrary code execution).

Rust eliminates many classes of bugs. C++ does not: the fact that theoretically there could be non-buggy C++ libraries doesn't help you out in practice, because there's no way to distinguish those libraries from the very many buggy C++ libraries.

> Putting your memory errors in unsafe blocks may make them invisible to you, but that does not make them go away.

It's just the opposite: it makes the risk very visible, so in Rust you can choose to avoid libraries with unsafe. Whereas in C++ any library you might choose is likely to have memory safety bugs and therefore arbitrary code execution vulnerabilities.


Kind of true. AFAIK Rust binary libraries don't expose safety information, as happens in ClearPath or .NET assemblies.

Still too many libraries make use of unsafe when they could be fully written in safe Rust.


Rust cannot displace Ada until it fulfills the business and security requirements that keep Ada alive.


> Can you write down the algorithm that you use to avoid writing memory bugs? Can you teach others how to do it?

Structure the code in a way such that it is obvious what happens. Use "semantic compression" (e.g. be clear about your concepts and factor them into free-standing functions), but don't overabstract/overengineer.

Eliminate special cases. If the code has few branches and data dependencies, then successful manual testing already gives high confidence that it will be pretty robust in production.

Prefer global allocations (buffers with the same lifetime as the process), not local state. This also makes for much clearer code, since it avoids heavy plumbing / indirections.

I tend to think that modern programming language features mostly enable us to stay longer with bad structure. And when you hit the next road block, fixing that will be correspondingly harder.


> Structure the code in a way such that it is obvious what happens. Use "semantic compression" (e.g. be clear about your concepts and factor them in free standing functions), but don't overabstract/overengineer.

This sounds little different from "write good code, don't write bad code." I'm sure we all agree on these things, but I'm sure the people who write terrible code weren't trying to be unclear or trying to overengineer.

> Eliminate special cases. If the code has few branches and data dependencies, then successful manual testing already gives high confidence that it will be pretty robust in production.

True enough, but that's so much easier in a language with sum types.

> Prefer global allocations (buffers with the same lifetime as the process), not local state. This also makes for much clearer code, since it avoids heavy plumbing / indirections.

That's a pretty controversial viewpoint, since it makes composition impossible (indeed taken to its logical extreme this would mean never writing a library, whereas the grandparent was convinced that more use of libraries was the way to write good code).

> I tend to think that modern programming language features mostly enable us to stay longer with bad structure. And when you hit the next road block, fixing that will be correspondingly harder.

Interesting; that's the opposite of my experience. I find modern language features mostly guide us down the path that most of us already agreed was good programming style, enforcing things that were previously only rules of thumb (and that we had to resist the temptation to bend when things got tricky). And so the modern language forces you to solve problems properly rather than hacking a workaround, and the further you scale the more that will help you.


>> Eliminate special cases. [...]

> True enough, but that's so much easier in a language with sum types.

These languages make it easier to have more special cases. There's a difference.

> That's a pretty controversial viewpoint, since it makes composition impossible (indeed taken to its logical extreme this would mean never writing a library, whereas the grandparent was convinced that more use of libraries was the way to write good code).

I don't see why that should be the case. Aside from the fact that composition/"reuse" is way overrated, libraries can always opt for process- or thread-wide global state. Another possibility would be to have global state per use (store pointer handles), and to pass a pointer only to library API calls. The latter is also the most realistic case, since most libraries take pointer handles. I absolutely have these handles stored in process-global data. For example, the FreeType handle, windowing handle, sound card handle, network socket handle, etc.

Also called a "singleton" in OOP circles. Singletons are nothing but global data with nondeterministic initialization order and superfluous syntax crap on top. Other than that, they are indeed good choices (as is global data), since lifetime management and data plumbing is a no-brainer.

> I find modern language features mostly guide us down the path that most of us already agreed was good programming style

But just a paragraph earlier you said you didn't agree with mine? In my opinion, OOP, or more specifically, lots of isolated allocations connected by pointers/references, makes for hard-to-follow code, since there is so much hiding and indirection even within the same project/maintenance boundaries, without benefit. In any case I absolutely agree that this style is not doable in C. You need automated, static or dynamic (runtime) ownership tracking.


> I don't see why that should be the case.

At the most basic level, if project A makes use of library B and library C, then you want to be able to verify the behaviour of library B and library C independently and then make use of your conclusions when analysing project A. But if library B and library C use global state then you can't have any confidence that that will work. E.g. if both library B and library C use some other library D that has some global construct, then they will likely interfere with each other.

> Another possibility would be to have subproject-wide global state, and passing a pointer only to library API calls. The latter is also the most realistic case since most libraries take pointer handles.

At that point you're not using global state in the library, which was the point.

> you can always opt for process- or thread-wide global state

That doesn't solve the problem at all.

> Also called "singleton" in OOP circles. Singletons are nothing but global data with nondeterminstic initialization order and superfluous syntax crap on top.

Indeed, and they're seen as bad practice for the same reason as global state in general.


> At that point you're not using global state in the library, which was the point.

Yes. But I want to make clear that you are still using global state for all uses within the project itself. The library can be implemented in whatever way. For example, setting the pointer in a global variable on API entry ;-)

> That doesn't solve the problem at all.

WHICH problem? I don't think there is one.

> Indeed, and they're seen as bad practice for the same reason as global state in general.

This is foolish. There is no problem with global state. Global state is a fact of life. Your process has one address space. It has (probably) one server socket listening for incoming requests. It has (probably) one graphics window to show its state. Whenever you have more (e.g. file descriptors, memory mappings, ...), well, then you have a set of that thing, but you have ONE set :-). And so on.

You are not writing a thousand pseudo-isolated programs. But ONE. One entity composed of a fixed number of parts (i.e. modules, code files) that work together to do what must be done.

Why add indirection? Why make it hard to iterate over all open file descriptors? Why thread a window handle through 15 layers of function calls when you have only one graphics window? It adds a lot of boilerplate. It even brings some people to invent hard-to-digest concepts like monads or objects just to make that terrible code manageable. It makes the code unclear. Someone once described it with this analogy: "I don't say 'I'm meeting one of my wives tonight' unless I have more than one."


> Yes. But I want to make clear that you are still using global state for all uses within the project itself.

But if we believe in using libraries then often our project will itself be a library.

> The library can be implemented in whatever way. For example, setting the pointer in a global variable on API entry ;-)

And then you have the problem I mentioned: if there is a diamond dependency on your library then the thing using it will break.

> WHICH problem? I don't think there is one.

The problem of not being able to break down your project and understand it piecemeal.

> Global state is a fact of life. Your process has one address space. It has (probably) one server socket for listening to incoming request. It has (probably) one graphics window to show its state.

All those global things are a common source of bugs, as different pieces of the program make subtly different assumptions about them. Perhaps a certain amount of global state is unavoidable. That's not an argument against minimizing it.

> You are not writing a thousand pseudo-isolated programs. But ONE. One entity composed of a fixed number of parts (i.e. modules, code files) that work together to do what must be done.

If you write a program that can only be understood in its entirety, you'll be unable to maintain it once it becomes too big to fit in your head. Writing a thousand isolated functions gives you something much easier to understand and scale.


> The problem of not being able to break down your project and understand it piecemeal.

That's just incredibly untrue. It's FUD spread by OOP and FP zealots.

> All those global things are a common source of bugs, as different pieces of the program make subtly different assumptions about them.

Do you want to say that my logging routine is more complex because my windowing handle is stored in a globally accessible place?

> Perhaps a certain amount of global state is unavoidable. That's not an argument against minimizing it.

My advice is to make clear what the data means. Make it simple. Don't put a blanket over what's already hard to grasp.


> Do you want to say that my logging routine is more complex because my windowing handle is global data?

If your logging routine touches your windowing handle that certainly makes it more complex. If I'm meant to know that your logging routine doesn't touch your windowing handle, that's precisely the statement that it isn't global data.


It is global data, because it can (and should) be used without threading it through 155 functions.

In terms of the relational data model, it is global data because there is always one, and only one, of it.


> But if we believe in using libraries then often our project will itself be a library.

How about making the project good first? Let's try to get something done instead of theorizing.


You mean start by building something that can be used and tested in isolation, rather than trying to build an enormous system in one go? Isn't that what you've been arguing against?


No I mean solve the problem "we need to build a program that does what it's required to do" (and no more) before trying to build a library that will cure diseases.


That's a total non sequitur. Libraries can, and usually should, be much smaller than applications.


Libraries are much harder than applications because they must work for a large number of applications with diverse requirements. They need to be more abstract, and therein lies the danger.

Regarding size, that's clearly wrong. It depends a lot on the library. A windowing or font rasterization library will be a lot larger than your typical application.

And for libraries that are much smaller than the application itself, why bother depending on them? (Anecdote: I heard the Excel team in the '90s had their own compiler.)


At this point I'm really unsure whether this is trolling or not.


Just discussing. Why would it be trolling what I do and not what the other guy does?



