> It looks like GCC is overtaking Clang as the compiler with the best C++11 support
There seems to be a myth going around that Clang has or had better C++11 support. Apparently when the author did his last shootout Clang had a slight edge over GCC, but over the long run GCC has had the better support most of the time. I started using C++11 (or C++0x as it was still called) in February 2011 and at that time GCC 4.5 had significantly better support than Clang.
Not to denigrate Clang's other qualities - I wanted Clang's better error messages but had to stick with GCC to get the C++0x support I needed.
C++11 rocks and I think Bjarne was right in saying that it feels like a new language. This isn't your grand-daddy's C++. It's certainly worth considering for your next project and valuable to learn if only to make you a better programmer.
See, the funny thing is, you're still going to be interfacing with your grand-daddy's C++, and his daddy's C. And you'll be adding cruft for your grandki--ah, who am I kidding, hopefully they'll be using Ruby.
EDIT: Downvote all you want--you know it's true. You'll seldom get to write a new C++ project totally from scratch, using only standard libraries.
It's stuff like https://github.com/marshray/qak in which I indulge my impulse to recreate the universe from scratch in C++11 on the weekends so I don't act foolishly during the week.
I've been porting it to the limitations of MSVC 2012 and looking at implementing a lightweight Node.js-like IO system in C++11.
A bit off-topic, but since I discovered Intel's compiler years ago, I've been wondering why I, the generic desktop programmer, would want to pay for a compiler when the free ones are pretty damn good. I can think of only a small set of use cases where the CPU manufacturer might know the best optimization for some code that's heavily utilized in compute-intensive projects, but that seems like a small market.
It's much better at vectorising code (using SSE and AVX) than MSVC and GCC, it's got better loop unrolling heuristics (it's better at working out when unrolling isn't worth it or will slow things down), and its maths functions are much faster than the native ones on all platforms.
In my experience writing high end VFX software for three platforms, Linux's libc is the slowest with normal maths functions, and Windows is the fastest.
Intel's math libs are often 4-5x faster - if you do a microbenchmark of a loop of powf() or sin() calls, it'll be that much faster using the Intel libs.
If building with fast floating point math (-ffast-math and friends), Intel's fast versions are also quite a bit more accurate, and don't produce NaNs or Infs as much.
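A minimal version of that kind of microbenchmark might look like the sketch below (the iteration count and functions are just placeholders; the idea would be to build it once against the platform libm and once against Intel's math library and compare):

    #include <chrono>
    #include <cmath>
    #include <cstdio>

    int main()
    {
        const int n = 10000000;
        volatile float sink = 0.0f; // volatile so the loop isn't optimised away

        auto start = std::chrono::steady_clock::now();
        for (int i = 0; i < n; ++i)
            sink = sink + std::sin(i * 0.001f);
        auto stop = std::chrono::steady_clock::now();

        auto ms = std::chrono::duration_cast<std::chrono::milliseconds>(stop - start).count();
        std::printf("%d sin() calls in %lld ms\n", n, (long long)ms);
        return 0;
    }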
> Linux's libc is the slowest with normal maths functions
Do you happen to remember which one? There have been a few "sporks" of glibc in the last few years and, if memory serves me correctly, Debian now uses eglibc.
I think the sporks were done to address bloat, but I'm also curious if they address speed.
The Intel compiler tends to have an edge when you write rampantly inefficient code to begin with. If you structure your code reasonably, the only reliable performance difference I have seen is that the Intel compiler takes longer to compile and produces much larger binaries (4x is typical). The run-time performance is rarely significant. Also remember the "run slow on AMD" feature, which implies that two code paths are often generated.
Private companies that deal with CFD (Computational Fluid Dynamics) and aerodynamics simulations have deep pockets and generally pay for a bunch of licenses for these compilers. It's not a small market, I can assure you.
It's been ages since I used icc. But last time I used it, it could really do magic when it came to vectorisation and SSE support. So if you have heavy numeric code then it might be worth it. For Linux you can get a free (as in beer) edition for non-commercial use.
For normal desktop applications it probably doesn't make sense.
Support is crucial. You may not hear about toolchain bugs often but I see them every day working on binutils/LLVM/Clang. If you run up against one, it isn't fun.
Intel's compiler generates code that runs twice as fast. Speed should really have been in the list.
On that note, the whole list looks like it was written from the point of view of gcc. Features on other compilers that vary from gcc are given 'partial' credit, since I guess they do it differently and different is bad. And extensions in other compilers are not mentioned at all.
It looks like the 'shootout' was done by standing in gcc's corner and taking pot-shots at everybody else in the room.
It was an article about C++11 feature compliance. You're saying that no one should be allowed to write such an article unless they're also prepared to do an (essentially unrelated) benchmarking study?
And ICC tends to do quite well, though "twice as fast" is pretty spun (I'm sure there's a vectorizable routine somewhere that does that, but most typical code is going to be more in the "break-even to 10% better" range).
I recently wrote an entity-component framework [1] for C++ using a bunch of C++11 and it was really quite enjoyable. At the time, VC++ didn't support variadic templates which was a bummer, but they've since released a feature pack with it. It's great to see adoption coming along so quickly.
I've been following the development of C++0x from the side-lines and had recently come across an excuse to get back into C++ (developing games).
I recently wrote about using smart pointers to wrap resources acquired from C libraries. I was able to load textures that I could freely share throughout a tonne of procedural code and know that there would be no memory leaks. Having such implicit run-time support goes against the C philosophy, but I think that with smart pointers there is finally a good defence for breaking that rule.
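The pattern is roughly a shared_ptr with a custom deleter. A minimal sketch (texture_load/texture_free here are hypothetical stand-ins for whatever C API is being wrapped):

    #include <memory>

    struct Texture; // opaque handle from the C library

    // Hypothetical C API, standing in for the real library:
    extern "C" Texture* texture_load(const char* path);
    extern "C" void texture_free(Texture*);

    std::shared_ptr<Texture> load_texture(const char* path)
    {
        // The custom deleter ties the C clean-up call to the handle's
        // lifetime: the texture is freed exactly once, when the last
        // shared_ptr referring to it goes away.
        return std::shared_ptr<Texture>(texture_load(path), texture_free);
    }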
Not the OP, but I'm going to put my vote down for a simple and comparatively less discussed feature: initializer lists. Declarative programming is a huge win, and being able to declare objects just like you do "static data" is a huge readability win.
Other stuff is mostly about fixing bugs in the standard (<cough> rvalue references </cough>), cleaning up the syntax (auto) or providing new syntax for useful but comparatively rare operations (lambdas). That's all good, but initializer lists can change the paradigm of how the code is presented, and that's better.
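To illustrate the declarative style being described, a small sketch:

    #include <map>
    #include <string>
    #include <vector>

    // Containers can now be declared like static data:
    std::map<std::string, std::vector<int>> scores = {
        { "alice", { 90, 85, 100 } },
        { "bob",   { 70, 75 } },
    };

    // Uniform initialization works for your own aggregates too:
    struct Point { int x; int y; };
    std::vector<Point> path = { { 0, 0 }, { 1, 2 }, { 3, 5 } };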
"useful but comparatively rare operations (lambdas)"
I think it is rare just because people aren't used to them; once they discover the flexibility of lambda functions you'll start seeing them more often (even abuse of them).
I am remaking an event system that I wrote for a turn-based card game, and the use of lambdas is so natural that I now realize how painful and weak the previous code was.
Lambda functions are just syntax, they aren't "flexible" in any meaningful way I can see. And I argue that they certainly are rare -- virtually all major languages (other than C and C++03) have some form of straightforward anonymous function with some form of local scope closure (and to be clear: C++11's implementation of that bit is sort of a mess!).
Other than node.js, virtually none of them make regular use of them. When they do, it's mostly just to have a convenient way of passing a callback.
Lambdas are good. But as implemented in C++11 they really don't do anything to change the nature of the code being written. On the other hand, proper use of initializers does, by virtue of not having to write a thousand setXXX() functions, etc...
"Lambda functions are just syntax, they aren't "flexible" in any meaningful way I can see"
I disagree with that statement. To the best of my knowledge, lambda functions are implemented as functors which are created by the compiler. So to have comparable code you have to write those functors by hand. Which can be a lot of repetitive and boring work if you are doing something like an event system.
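To make that concrete, a small sketch of a lambda next to the functor you'd otherwise write by hand (names are made up):

    // What the compiler generates for a capturing lambda, written by hand.
    struct ScaleBy {
        int factor;
        explicit ScaleBy(int f) : factor(f) {}
        int operator()(int x) const { return x * factor; }
    };

    void example()
    {
        int factor = 3;
        ScaleBy by_hand(factor);                                  // boilerplate version
        auto by_lambda = [factor](int x) { return x * factor; };  // one line
        by_hand(2);   // 6
        by_lambda(2); // 6
    }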
"virtually all major languages (other than C and C++03) have some form of straightforward anonymous function"
What about Java? :)
"virtually none of them make regular use of them. When they do, it's mostly just to have a convenient way of passing a callback."
Coming from Scheme (which yeah we can argue whether is a major language or not) I can see a lot of benefits of using closures, way more than just a convenient way to pass a callback.
"Lambda's are good. But as implemented in C++11 they really don't do anything to change the nature of the code being written."
Even if it's not as powerful as the implementations in other languages, it's still a huge gain over not having them at all. If you use them appropriately you'll end up with more concise and clearer code than without them. At least that was my personal experience.
For me, the lack of lambda functions meant I was less likely to use many of the algorithms from the standard library. In general, you want to provide a functor to such functions, and it doesn't make sense to define a class, overload operator(), and get the member variables and instantiations to line up when you could just write a for loop. With lambdas, the compiler will do that boilerplate for me.
(Yes, I say "will" - I'm not yet working on something that lets me use C++11 features. Sigh.)
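A minimal sketch of the difference (count_over is just an illustrative name):

    #include <algorithm>
    #include <vector>

    // With a lambda, the predicate sits right at the call site instead of
    // in a functor class defined somewhere else entirely.
    int count_over(const std::vector<int>& v, int threshold)
    {
        return (int)std::count_if(v.begin(), v.end(),
            [threshold](int x) { return x > threshold; });
    }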
Not the parent poster either, but for me, it has been rvalue references/move semantics and variadic templates.
Rvalue references/move semantics: being able to explicitly express ownership with std::unique_ptr<T> and moving ownership with std::move() has made correct code so much easier to write (especially with unique_ptrs in containers). I don't ever use the delete operator anymore, yet my code leaks no memory and doesn't have the overhead of reference counting or garbage collection.
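A minimal sketch of that ownership style (Widget is just a placeholder type):

    #include <memory>
    #include <utility>
    #include <vector>

    struct Widget { /* ... */ };

    int main()
    {
        // Exactly one unique_ptr owns the Widget at any time.
        std::unique_ptr<Widget> w(new Widget); // std::make_unique is C++14

        std::vector<std::unique_ptr<Widget>> widgets;
        widgets.push_back(std::move(w)); // ownership moves into the container

        // w is now null; the vector deletes every Widget on destruction,
        // so there is no delete, no leak, and no reference counting.
        return 0;
    }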
Variadic templates: these have made templates so much more useful. I've been able to write amazing helper functions that take an arbitrary number of arguments of any types, such as concat_string() [1], which is probably the most useful helper function I've ever written. I've also done some pretty far-out template meta-programming which I'm slightly embarrassed to talk about.
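The actual concat_string() isn't shown here, but a minimal sketch of that kind of variadic helper might look like this:

    #include <sstream>
    #include <string>

    // Base case: nothing left to append.
    inline void concat_impl(std::ostringstream&) {}

    // Recursive case: stream one argument, recurse on the rest.
    template <typename T, typename... Rest>
    void concat_impl(std::ostringstream& os, const T& first, const Rest&... rest)
    {
        os << first;
        concat_impl(os, rest...);
    }

    // Accepts any number of arguments of any streamable type.
    template <typename... Args>
    std::string concat_string(const Args&... args)
    {
        std::ostringstream os;
        concat_impl(os, args...);
        return os.str();
    }

    // concat_string("x = ", 42, ", ratio = ", 1.5) -> "x = 42, ratio = 1.5"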
I was surprised by how useful move semantics were, unsurprised by variadic templates, but also surprised that lambdas weren't as significant as I had expected. They're very useful but haven't transformed the way I write code, perhaps because my C++ code was already pretty functional before C++11. Also lately I've been annoyed with lambdas because it's not possible to move an object into a closure, putting them in conflict with my beloved move semantics.
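C++11 has no init-captures (C++14's [p = std::move(p)] fixes this), so the usual workaround is to fall back to a copyable handle, which reintroduces exactly the overhead move semantics were supposed to avoid. A sketch:

    #include <functional>
    #include <memory>

    std::function<void()> make_task(std::unique_ptr<int> p)
    {
        // Capturing a unique_ptr by copy won't compile, and C++11 has no
        // way to move it into the closure. Workaround: transfer it into a
        // shared_ptr, which is copyable, at the cost of reference counting.
        std::shared_ptr<int> sp(std::move(p));
        return [sp]() { /* use *sp */ };
    }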
I've also found lambdas less life-changing than I'd hoped. In a lot of languages I use short single-expression closures all over the place, but C++11's lambda syntax is too verbose for that to work well. The difference between "o => o.bar" (in C#) and "[](FooObject *o) { return o->bar; }" may seem somewhat superficial, but it greatly cuts down on how often the lambda is the natural and clean way to do something.
OTOH, having local functions with upvalues was a benefit of lambdas I hadn't even originally considered, but that I've found myself delighting in.
Yeah they are verbose (like so much in C++ sadly).
For frequently recurring patterns like "return o.bar" you can write a templated helper function that returns a functor which you can use like this:
getmem(&FooObject::bar)
It's longer than the C# version but shorter than the lambda and easier to type because your editor can auto-complete most of it (unlike the lambda which has a lot of syntax). It becomes even shorter than the lambda if FooObject needs to be const.
I use this technique a lot. For instance I can pass getmem(&Person::last_name) into a sort call to sort a container of Persons by last_name. I've been doing that since long before C++11 and still prefer it over lambdas because it's shorter, easier to type, and clearer when read.
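The original helpers aren't shown, but a sketch of what a getmem-style helper could look like (all the names here are guesses):

    #include <algorithm>
    #include <string>
    #include <vector>

    // Functor that extracts a data member from an object.
    template <typename Class, typename Member>
    struct MemberGetter {
        Member Class::*ptr;
        const Member& operator()(const Class& obj) const { return obj.*ptr; }
    };

    template <typename Class, typename Member>
    MemberGetter<Class, Member> getmem(Member Class::*ptr)
    {
        MemberGetter<Class, Member> g = { ptr };
        return g;
    }

    // Adapts any key-getter into a '<' comparator.
    template <typename Getter>
    struct LessBy {
        Getter get;
        explicit LessBy(Getter g) : get(g) {}
        template <typename T>
        bool operator()(const T& a, const T& b) const { return get(a) < get(b); }
    };

    template <typename Getter>
    LessBy<Getter> less_by(Getter g) { return LessBy<Getter>(g); }

    struct Person { std::string first_name, last_name; };

    void sort_people(std::vector<Person>& people)
    {
        std::sort(people.begin(), people.end(),
                  less_by(getmem(&Person::last_name)));
    }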
The table is inaccurate there: GCC is listed as having full support, but it too fails to implement the one feature Clang is missing (floating point pragmas), which as far as I know is Clang's only missing C99 feature, so I don't understand the difference.
C99 is probably going to be the last C standard most commercial compiler vendors will care about.
On the desktop and server the world is moving from C to C++/Objective-C, with C still being very important in the embedded space.
Microsoft's official position is C++ is the future and C is legacy on Windows platforms.
Even the two most important open source C and C++ compilers are now both written in C++, although given their open source nature, I expect them to keep supporting C, contrary to the commercial vendors.
Objective-C still cares about the C standard. Since it's a superset of C, the C standard that it's a superset of matters. So just as there's C89, C99 and C11, there's an Objective-C89, Objective-C99 and Objective-C11.
Certain C11 features (such as generic macros) could be very useful in Obj-C.
Intel's C compiler also claims to support most of C11, as, apparently, does Clang. That covers the popular Unix compilers. Microsoft seems to be the big outlier, but they don't even care about C99.
I admit, though, I have no idea what commercial compilers are popular in the Windows world. I am under the impression that you either used Intel's compiler, Microsoft's compiler, or a free port of the Unix toolchain, but I don't really do Windows development.
HPC is usually one of IBM XLC, Intel ICC, or GCC. All of them support most or all of the C11 language features.
For commercial unixes, it's hard to find documentation on C versions supported by Oracle Solaris Studio, but it was last released in 2011. HPUX ACC was last released in 2010, so I would be very surprised if it supported the 2011 C standard already. That doesn't mean it won't, but it does mean that it moves slowly. Not surprising, since large unix vendors operate in a market where stability is valued above most other things. IBM XLC already supports C11.
The embedded market tends to use GCC, from what I can tell, although it may vary widely by company. ArmCC is around, but the last major release looks like it was in 2011 (minor updates since). I don't know what future versions will bring, but I'd be surprised if they don't at least end up supporting the memory model and atomics. These are /useful/ when writing embedded code.
RTOS development is pretty much the same situation as embedded.
From what I understand, game development usually isn't done in C, but I'd be very curious to know how the compilers there differ from compilers used for normal desktop applications. Which ones are normally used for game development?
EDIT: And, apparently, Microsoft is going to be adding some C11 features to its C compiler, as well as most of C99, because it will be piggybacking off the C++11 updates. So even the company that said it was ignoring anything newer than C90 is adding C11 features. That was a surprise to me.
Perhaps you could list some widely used compilers that have said they will not be moving towards C11 support?
> For commercial unixes, it's hard to find documentation on C versions supported by Oracle Solaris Studio, but it was last released in 2011. HPUX ACC was last released in 2010, so I would be very surprised if it supported the 2011 C standard already. That doesn't mean it won't, but it does mean that it moves slowly. Not surprising, since large unix vendors operate in a market where stability is valued above most other things. IBM XLC already supports C11.
While it's true the last release was in Dec. 2011, Solaris Studio receives updates and fixes between releases (just as Visual Studio, etc. do) that may contain significant improvements.
C11 and C++11 support are not yet available in any form, although the Solaris Studio compiler is one of the few that actually has full C99 compliance (yes, even those annoying floating point pragmas).
> There is support for some more features from the C11 revision of the ISO C standard. GCC now accepts the options -std=c11 and -std=gnu11, in addition to the previous -std=c1x and -std=gnu1x.
> * Unicode strings (previously supported only with options such as -std=gnu11, now supported with -std=c11), and the predefined macros __STDC_UTF_16__ and __STDC_UTF_32__.
> * Nonreturning functions (_Noreturn and <stdnoreturn.h>).
> * Alignment support (_Alignas, _Alignof, max_align_t, <stdalign.h>).
> * A built-in function __builtin_complex is provided to support C library implementation of the CMPLX family of macros.
http://gcc.gnu.org/gcc-4.6/changes.html
> There is now experimental support for some features from the upcoming C1X revision of the ISO C standard. This support may be selected with -std=c1x, or -std=gnu1x for C1X with GNU extensions. Note that this support is experimental and may change incompatibly in future releases for consistency with changes to the C1X standard draft. The following features are newly supported as described in the N1539 draft of C1X (with changes agreed at the March 2011 WG14 meeting); some other features were already supported with no compiler changes being needed, or have some support but not in full accord with N1539 (as amended).
> * Static assertions (_Static_assert keyword)
> * Typedef redefinition
> * New macros in <float.h>
> * Anonymous structures and unions
> * ISO C11 support:
> + define static_assert
> + do not declare gets
> + declare at_quick_exit and quick_exit also for ISO C11
> + aligned_alloc. NB: The code is deliberately allows the size parameter to not be a multiple of the alignment. This is a moronic requirement in the standard but it is only a requirement on the caller, not the implementation.
> + timespec_get added
> + uchar.h support added
> + CMPLX, CMPLXF, CMPLXL added
> Implemented by Ulrich Drepper.
But I don't know what's still missing and I doubt that glibc will implement the bounds-checking library (Annex K).
Why is it that Microsoft seems to lag so far behind? Don't they have lots of funds and competent programmers to take the lead? Or is it simply not a priority?
On the MSVC blog, there's a post announcing C++11 support in VS2012. The comments, of course, are filled with angry devs posting about what a joke the initial VS2012 support is.
Buried in those comments is one from a MS dev who claims that he is the only guy working on the standard library implementation. Kinda sad, really.
I can't be sure, but the fact that libstdc++ is ABI incompatible with the current stdlib probably doesn't help adoption and thus usage and testing. If you want to use libstdc++ every dependency must also be compiled against it, including any system packages.
libstdc++ is gcc's C++ stdlib; libc++ is clang's (although clang can use libstdc++). libc++ supports all of C++11, although there are still some bugs. libc++ is ABI incompatible with libstdc++, but it's explicitly designed to support using both libstdc++ and libc++ within the same program, so that's often not a problem.
Sorry, rereading my post I realise I omitted some text (I was on my phone), and shouldn't have said "ABI incompatible". What I meant to say was that the C++11 versions of both libstdc++ [1] and libc++ [2] are effectively incompatible with the C++98 version of libstdc++. The main point still stands: you effectively can not link against libraries using different versions of the C++ standard unless you're very lucky.