Slightly off topic, but something I've been curious about. Go decided on garbage collection to manage memory. Why do so few languages offer something like C++'s RAII?[1]
In C++, when you define a class, you define the constructor(s) and the destructor. Then the destructor is called whenever the object goes out of scope. Easy. This is the gist of RAII. Furthermore, in C++, a default destructor is automatically generated for you, which will ensure all the object's members are destroyed along with your object,[2] so you can often avoid worrying about the destructor at all.
Yeah, it's not as simple as GC, but it seems like a great compromise between manual memory management and garbage collection.
[2]: Provided no members are dynamically allocated. If you allocate an array in your constructor, for example, you need to define a destructor that frees it, or does whatever other cleanup is required.
Preferring RAII over garbage collection is the decision that Rust made.
Probably the reason that more languages don't use RAII for memory management is safety. C++, even with RAII, is not safe—you can violate memory safety with certain uses of references and iterators. Rust has to go to great lengths (lifetimes and the borrow checker, things that until recently existed only in the research landscape) to make this safe.
I'm actually a huge fan of Rust's memory model and wish Go had something similar, since smart pointers are faster and fewer calls to the garbage collector are required. I also love how Rust uses three different kinds of pointers depending on how the memory is managed; although it adds some complexity, it gives more semantic value to the data you are working with.
Unfortunately for me, the Rust compiler is written in Rust, so I would have to go grab a Linux machine and try my hand at a cross compile before I can play with it on my OpenBSD box.
This approach starts getting brittle when you get into objects that aren't procedure-scoped; for instance, a global session table in a web application, or a request token tracker in an RPC system. These are also the kinds of object lifecycles that tend to be behind leaks and corruption in programs that don't use RAII.
People use shared pointers to mitigate that problem, but shared pointers are (a) a reference counting scheme with all the attendant problems and (b) not compatible with code that holds references but doesn't speak the same shared pointer scheme.
(It's been a bunch of years for me since I wrote C++ professionally; that may show here.)
I think the biggest fans of RAII are those who use it more academically -- in real life it is brittle and scopes are tricky. It is also just an idiom as compared to a compiler feature. And, IMHO, RAII tends to play poorly with others.
It's more error prone than GC and there aren't really enough advantages to RAII to make it worth it much of the time. Sometimes, sure, but GC is nicer and usually fits the bill.
Are you asking what kinds of errors RAII-style C++ programs make that you can't make in Golang (an easy question), or what kinds of errors you can make in Rust that you can't make in Golang (harder)?
I'm just questioning the assumption that RAII as a concept must be memory unsafe, specific language aside.
Edit: To be clear it's the latter question, but more general. C++'s implementation has many opportunities for errors, but I do not think the technique is inherently unsafe. (PL research has proven many similar systems based on unique types and borrowing to be type- and memory-safe.)
A safe RAII system based on unique types is, by definition, strictly less powerful than GC: it allows no operations that GC doesn't allow, but forbids some operations that GC allows (namely aliasing).
If you're talking about C++'s unsafe implementation of RAII, that's true of course.
C++ is in fact where my experience with RAII comes from. Could you, for example, forget to make a destructor virtual in "safe" implementations of RAII?
Perhaps the real answer to "Why isn't RAII more popular?" is that C++ has poisoned the well for many programmers.
> C++ is in fact where my experience with RAII comes from. Could you, for example, forget to make a destructor virtual in "safe" implementations of RAII?
No, that would be unsound.
> Perhaps the real answer to "Why isn't RAII more popular?" is that C++ has poisoned the well for many programmers.
Totally agreed. (But that said, I think that Go, and most other languages, made the right decision in going with pervasive GC. It makes the language much simpler, in both implementation and interface. It costs some performance, of course, but being as fast as C in all cases was never a design goal of Go, according to the FAQ. The complexity that languages buy into if they want safe manual RAII-style memory management is not trivial.)
[1]: http://en.wikipedia.org/wiki/Resource_Acquisition_Is_Initial...