A bit off-topic, but since I discovered Intel's compiler years ago, I've been wondering why I, the generic desktop programmer, would want to pay for a compiler when the free ones are pretty damn good. I can think of only a small set of use cases where the CPU manufacturer might know the best optimization for some code that's heavily utilized in compute-intensive projects, but that seems like a small market.
It's much better at vectorising code (using SSE and AVX) than MSVC and GCC, it's got better loop unrolling heuristics (it's better at working out when unrolling isn't worth it or will slow things down), and its maths functions are much faster than the native ones on all platforms.
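To make that concrete, here's the kind of loop where the difference shows up (a made-up saxpy-style example, not from any real codebase):

    /* icc, gcc and msvc can all auto-vectorise a loop like this,
       but in my experience icc is more willing to use the wider
       SSE/AVX forms and to skip unrolling when it won't pay off */
    void saxpy(float *restrict y, const float *restrict x,
               float a, int n)
    {
        for (int i = 0; i < n; i++)
            y[i] = a * x[i] + y[i];
    }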
In my experience writing high-end VFX software for three platforms, Linux's libc is the slowest at the standard maths functions, and Windows's is the fastest.
Intel's math libs are often 4-5x faster: if you do a microbenchmark of a loop of powf() or sin() calls, it'll be that much faster with the Intel libs.
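Roughly this sort of thing, if you want to try it yourself (a sketch, names and constants made up; build with -O2 and link with -lm, and expect the ratio to vary by platform and libm):

    #include <math.h>
    #include <stdio.h>
    #include <time.h>

    int main(void)
    {
        volatile float sink = 0.0f;   /* stops the loop being deleted */
        clock_t t0 = clock();
        for (int i = 0; i < 10000000; i++)
            sink += sinf(i * 0.0001f)
                  + powf(1.0f + i * 1e-7f, 2.5f);
        clock_t t1 = clock();
        printf("%.2fs (sink=%f)\n",
               (double)(t1 - t0) / CLOCKS_PER_SEC, sink);
        return 0;
    }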
If you're building with fast floating-point math (-ffast-math or its equivalent), Intel's fast versions are also quite a bit more accurate, and don't produce NaNs or infs as often.
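A toy example of the kind of thing that bites you under relaxed float math (assuming gcc's -ffast-math, which implies -ffinite-math-only; the function is made up):

    #include <math.h>

    float safe_div(float x, float len)
    {
        float r = x / len;           /* len can underflow to 0 -> inf/NaN */
        if (isnan(r) || isinf(r))    /* fast-math lets the compiler assume
                                        this is always false and drop it */
            return 0.0f;
        return r;
    }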
> Linux's libc is the slowest with normal maths functions
Do you happen to remember which one? There have been a few "sporks" of glibc in the last few years and, if memory serves me correctly, Debian now uses eglibc.
I think the sporks were done to address bloat, but I'm also curious if they address speed.
The Intel compiler tends to have an edge when you write rampantly inefficient code to begin with. If you structure your code reasonably, the only reliable difference I have seen is that the Intel compiler takes longer to compile and produces much larger binaries (4x is typical); the run-time performance difference is rarely significant. Also remember the "run slow on AMD" feature, which implies that at least two code paths are often generated.
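For the curious, the effect of that dispatch is roughly this (my own sketch using gcc's x86 builtins, not Intel's actual runtime; the point is that the branch keys on the cpuid vendor string, not just on feature bits):

    #include <stdio.h>

    static void avx_path(void)      { puts("fast AVX path"); }
    static void baseline_path(void) { puts("baseline path"); }

    int main(void)
    {
        __builtin_cpu_init();
        if (__builtin_cpu_is("intel") && __builtin_cpu_supports("avx"))
            avx_path();        /* vendor check, not just a feature check */
        else
            baseline_path();   /* what non-Intel chips historically got */
        return 0;
    }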
Private companies that deal with CFD (Computational Fluid Dynamics) and aerodynamics simulations have deep pockets and generally pay for a bunch of licenses for these compilers. It's not a small market, I can assure you.
It's been ages since I used icc. But last time I used it, it could really do magic when it came to vectorisation and SSE support. So if you have heavy numeric code then it might be worth it. For Linux you can get a free (as in beer) edition for non-commercial use.
For normal desktop applications it probably doesn't make sense.
Support is crucial. You may not hear about toolchain bugs often but I see them every day working on binutils/LLVM/Clang. If you run up against one, it isn't fun.
Intel's compiler generates code that runs twice as fast. Speed should really have been in the list.
On that note, the whole list looks like it was written from the point of view of gcc. Features in other compilers that differ from gcc's are given 'partial' credit, since I guess they do it differently and different is bad. And extensions in other compilers are not mentioned at all.
It looks like the 'shootout' was done by standing in gcc's corner and taking pot-shots at everybody else in the room.
It was an article about C++11 feature compliance. You're saying that no one should be allowed to write such an article unless they're also prepared to do an (essentially unrelated) benchmarking study?
And ICC tends to do quite well, though "twice as fast" is pretty spun (I'm sure there's a vectorizable routine somewhere that manages that; most typical code is going to be more in the "break-even to 10% better" range).