Swift is not (splasmata.com)
96 points by mpweiher on June 4, 2014 | 96 comments



These tests are meaningless without comparing the produced assembly (maybe the Obj-C compiler is very well optimized, since they've been doing that for quite a while now), and even then these microscopic tests don't really tell you much about performance in real scenarios. Swift has the advantage, as far as performance goes, that methods are not dispatched dynamically. In Obj-C, every single time you call a method, you call objc_msgSend, which executes at least 12 (IIRC) additional instructions (or more, depending on the situation). Plus, as far as I know, not all Swift objects are heap-allocated, which will also make things faster.
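For contrast, a minimal sketch of what static dispatch buys you (the Counter type here is hypothetical, purely for illustration):

    // Methods on a struct can be resolved at compile time, so the
    // call below can be a direct jump (or get inlined) rather than
    // going through a dynamic-dispatch routine like objc_msgSend:
    struct Counter {
        var count = 0
        mutating func increment() {
            count += 1
        }
    }

    var c = Counter()
    c.increment()   // statically dispatched; a candidate for inlining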


I'd go further: Never trust a benchmark you can't run yourself. That goes for Apple's, and it appears to go for the ones in this post as well.

And microbenchmarks (and it doesn't get much more micro than "loop without doing anything") in particular have always been very poor predictors of real performance in my experience. They can be useful when trying to improve a specific area, but they should not be used for bold pronouncements.


Well, Apple is known for misleading benchmarks, and that guy is not.


I don't think it really matters what reputation anyone has: if you throw benchmarks out into the world and make claims about them, you should throw the code out too, so people can repeat it and find flaws.

I'm not suggesting malicious intent at all here. People make mistakes, and contrary to what seems to be popular opinion, good benchmarks are hard.


Examining assembly is especially important since mature optimisers are very good at eliminating code. A loop to assign the same variable a million times obviously has no side effects beyond the first assignment so the loop could easily be removed entirely.
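A minimal sketch of the kind of loop that's a candidate for removal (using the beta's C-style loop syntax, like the other examples in this thread):

    var x = 0
    for (var i = 0; i < 1000000; i += 1) {
        x = 42   // same value every pass, no side effects
    }
    // With optimization on, this can legally compile down to just
    // x = 42 -- or to nothing at all if x is never read again.
    // Only the generated assembly tells you what actually happened.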


Yeah but "C" is a subset of objective C and the idea is that tight code generally gets written in C without the dynamic dispatch. So comparing Swift to objc_msgsend seems to be a conscious misunderstanding of Objective"C".


I'm not sure I understand what you are trying to say. What do you mean by "the idea is that tight code generally gets written in C"?


He means that the performance-intensive portions of Objective-C programs (the tight loops, aka the 10% of the code that your program spends 90% of its time in, or whatever) are written in the subset of Objective-C closer to C. E.g. they avoid dynamic dispatch in favor of static dispatch.

I'm not an Objective-C programmer if I can avoid it (C++ most days), so I can't really say whether he's right or not. (Though if I had to guess, I'd say he is.)


Here's some assembly, if you're curious https://gist.github.com/koke/553424954b16d65d1cb9


Method dispatch tends to not be a major factor in ObjC performance.


I'm inclined to disagree, and this article seems to disagree as well: http://it.toolbox.com/blogs/macsploitation/bypassing-objecti...

(it's not a great test but it's a test)

Granted, I've found some other articles that claim it's not that slow, but none of them really present any data.

I guess that it depends on your definition of "major factor".


Yes, if you were doing a quicksort and each op required a msgSend, it'd be a factor. But you'd have to be crazy to write code like that. This is why Apple has written a lot of C APIs that you call directly and bridge with ObjC.

With Swift you can't do that. You have to wrap the calls and generate bridging headers. What a pain.


Are you surprised that a language that's officially 2 days old has worse support than a language that they've been using for the last maybe 25 years?


Do you have an application where message sending is a performance bottleneck?


Whether it is a bottleneck or not is a different question.


I think mpweiher's point was that in normal Objective-C applications, the fact that message sending is slow is irrelevant. Yes, it is slower than a C function call, but it's rare that it is an issue in real code. When it was a problem before, in something unusual like your article's heavy tree traversal, you'd inline the code or drop down to C. But that was fairly rare. Apple has also improved objc_msgSend within the past few years.


My wild guess is that with Swift, Apple is trying to get a more lightweight language not due to performance issues but due to battery consumption. I think that around 40% of CPU time of your average obj-c app is spent in objc_msgSend. That is potentially a lot of power essentially wasted.


The author did not say whether he compiled with compiler optimizations, which appear to be very important for Swift being "swift". [1]

[1] https://twitter.com/Catfish_Man/status/473752917347139584


I made similar tests with optimization and saw similar numbers. Without optimization it was significantly worse.


Same here. There seems to be a stiff price to pay for all that convenience...


Is it that convenient?


A bit. It does cut down on a lot of code and it is a bit easier to follow. But I would not trade it in for Obj-C based on performance.


I don't get it. "Similar numbers" but "significantly worse"? You measure performance and compile without optimization?!

If you have better numbers, please share them!


> I don't get it. "Similar numbers" but "significantly worse"?

GP said with optimization he got similar numbers, and without optimization he got significantly worse numbers. It was crystal clear.

> You measure performance and compile without optimization?!

GP clearly stated he compiled both with and without optimization, and characterized (qualitatively) the results of each compared to the results in the article.


> "Similar numbers" but "significantly worse"?

Yes. Similar numbers to the ones posted when compiling with optimization. Significantly worse when compiling without optimization.

> You measure performance and compile without optimization?!

No, I tend to measure with optimization. In fact, I personally don't do Debug builds at all. However, when the numbers were so bad, I did a Debug build to cross-check, and lo and behold, those numbers were even worse.

That clear things up?


Yep. Thank you.


@milend via twitter: "He's definitely running the Swift code in Debug mode as I saw numbers in the same ballpark." [1]

    https://gist.github.com/anonymous/4df3610970891c6afc5a
    https://gist.github.com/anonymous/7d7fcf0a5ce24c37999c

    ObjC: 0.063209, Swift: 0.255287 (Release)
    ObjC: 0.060600, Swift: 5.200207 (Debug) [2]

[1] https://twitter.com/milend/status/474349620311883776
[2] https://twitter.com/milend/status/474354899527155712


I did some further investigation [1] by running 20 million iterations in Release and profiled the code.

Turns out, at least 82.9% of it was just swift_retain / swift_release (ref counting). Whether those retains / releases can be optimised away, I don't know; we'll have to wait and see.

[1] https://twitter.com/milend/status/474360984413683713


Benchmarking things like "assignment" in compiled languages is pretty meaningless. The compiler may or may not eliminate the code altogether, depending on whether it determines that it's dead code. Assignment may not even actually do anything. Even with more complicated code that definitely does something, you have to be a bit careful to make sure that the whole computation isn't deemed to be dead. It's not unlikely that Swift's current LLVM optimization passes aren't doing as much as Objective-C does, even with optimizations turned on. This won't have as much effect on benchmarks that do some actual work, even simple ones.


I think the optimizations for both need to be looked into carefully. For the following micro-benchmark, Swift came in 1.5x slower than the C version. Both were built in Release mode with "-O3", and both with "Debug" checked OFF in their respective schemes. This factor is way smaller than the orders of magnitude being reported by the OP. I can't seem to find the OP's benchmark source, either.

Swift -

    var j : Int = 0
    var k : Int = 0;
    var i : Int = 0;
    for (i = 0; i < 1000; i += 1) {
        k = 0;
        for (j = 0; j < 1000000; j += 1) {
            k += j % 7;
        }
    }
C -

    int j = 0, k = 0;
    int i = 0;
    for (i = 0; i < 1000; ++i) {
        k = 0;
        for (j = 0; j < 1000000; ++j) {
            k += j % 7;
        }
    }
update: If I use "fastest unchecked" mode when building Swift instead of the "fastest" mode, the Swift version is 8% faster than the C version.

machine: 1.7GHz Core i5 MacBook Air (mid 2011), 4GB RAM.

edit: For clarity, "1.5x slower" means "swift took 1.5x the time C took" and "8% faster" means "C took 1.08x the time swift took".

edit: To OP - Please put a link to your benchmark source and projects somewhere, or show your project settings as a snapshot or something. As it stands, I don't believe any of your results, based on my own trials.

edit: Changed ++i and ++j to i+=1 and j+=1. This is the version that is 8% faster than C. OP seems right in that i+=1 seems faster than ++i.

edit: (It's becoming impractical to keep this in sync with my project.) When using Int/int, I get Swift 8% faster, and when using Int64/int64_t, I get C to be 17% faster.
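(For anyone trying to reproduce this: a minimal timing harness along the lines below would do. This is a sketch using NSDate and the beta's C-style loops, not the actual project's code.)

    import Foundation

    let start = NSDate()
    var k = 0
    for (var i = 0; i < 1000; i += 1) {
        k = 0
        for (var j = 0; j < 1000000; j += 1) {
            k += j % 7
        }
    }
    let elapsed = NSDate().timeIntervalSinceDate(start)
    // Print k so the whole computation can't be removed as dead code.
    println("took \(elapsed)s, k = \(k)")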


I'm curious what the effect of using the overflow arithmetic operators in Swift would be. (See near bottom of https://developer.apple.com/library/prerelease/ios/documenta...)
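Presumably the inner loop would become something like this (a sketch; &+ is the wrapping-addition operator from that chapter, which skips the overflow check that ordinary + performs):

    var k: Int = 0
    var j: Int = 0
    for (j = 0; j < 1000000; j += 1) {
        k = k &+ (j % 7)   // wraps on overflow instead of trapping
    }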


Finally, some actual data, thank you! Do you know the functionality behind 'fastest unchecked'? Does it only leave out memory bounds checks, or arithmetic overflow checks, or all of it? Depending on how it works, Swift could be quite interesting for HPC programming in the future. My dream workflow would be something like

create working and correct program with all the comfort of modern languages / IDEs

-->

switch to unchecked-but-warn (or whatever that's called), run some test suites and get rid of all warnings.

-->

switch on highest optimizations and switch off checking, verify results.

-->

port to multicore / multinode / accelerators using some directive based approach.

note that you won't have much safety on today's accelerators anyway, so removing that safety net while still in an easily debuggable environment is going to make life easier when debugging the accelerated code - it's all about being able to exclude error classes.


> Does it only leave out memory bounds checks, or arithmetic overflow checks, or all of it?

My guess is only arithmetic checks, not array bounds checks. If array bounds checking were removed, the language wouldn't be any safer (security-wise) than C, hence I think that's kept throughout.

Btw "Int" and "Int64" in Swift are actually "struct"s and not a primitive type like in C. So the compiler is indeed able to keep performance for these small structs on par with primitives. This is promising for swift's claimed speed I think. These structs have extension methods like uncheckedAdd(), uncheckedDivide() and so on, which lends some evidence to my "unchecked refers only to arithmetic" guess.

edit: That Int/Int64 are structs means that + and += are overloaded operators, compared to the raw operators used in C. Given that, the compiler is, I must say, impressive.
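To make that concrete: a toy value type (hypothetical, not the standard library's actual Int definition) behaves the same way, and the benchmark above suggests the optimizer can boil such calls down to a plain machine add:

    // A struct wrapping a machine word, with an overloaded +,
    // loosely analogous to how Swift's Int works:
    struct MyInt {
        var value: Int
    }

    func + (lhs: MyInt, rhs: MyInt) -> MyInt {
        return MyInt(value: lhs.value + rhs.value)
    }

    var k = MyInt(value: 0)
    k = k + MyInt(value: 7)   // a function call the optimizer can inline away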


Interesting. When you say 'So the compiler is indeed able to keep performance for these small structs on par with primitives', is this based on your experiments or on some theoretical analysis? If the former, that's indeed impressive - I would guess that behind the curtains it uses some kind of macro-based approach rather than actually implementing it as a struct, where you're always going to have some additional integer ops to calculate addresses. If the latter, I don't quite get how you arrive at that conclusion; could you expand?


A lot of this is in the category of "not-entirely random guess" :) .. need to dig deeper to see this.

Here is why I said what I did (and I did mean the former) --

My micro-benchmark is actually comparing the struct-based code in Swift with primitive-based code in C. If "k += j % 7" is to be interpreted literally in Swift sans optimization, it would be two function calls and two stack word copies (structs go on the stack and by copy, whereas classes go on the heap and by reference). k and j are both structs and so operator overloading is needed to write this expression. That the Swift compiler generates comparable-to-C performance in these cases is good news I think.

One major caveat with these benchmarks is that the same LLVM backend is being used for both. I'd like to be able to compare Swift with C/ObjC as compiled by GCC as well for a truer picture of "faster than C".

For practical purposes, I think this is nicely close. I haven't tried the ObjC bridge yet though.


The default optimization level is -Os (instead of -O3) for iOS projects. Would you please also benchmark the two under the -Os option?


Here are median values based on a few trial runs with -Os (but I'm still building and running for MacOSX only).

Int/int : Swift = 2.7s, ObjC = 3.4s (Swift is 25% faster)

Int64/int64_t : Swift = ObjC = 2.7s (indistinguishable)

edit: Swift compiler is still building in "fastest unchecked" mode for both.


For the empty loops, I'd be willing to bet that the ObjC compiler is optimizing them away entirely. I suspect that the Swift compiler may not be that smart yet. There are plenty of times I've wanted to benchmark some C operation, looked at the generated assembly to figure out what was going on, and found that it was empty.

Without source, disassembly, and something that you can test for yourself, this benchmark is rather uninformative.


So we are comparing a language with 10+ years of optimizations against a language that was just barely released in beta? We have no idea what settings the author chose (I'm not implying the author is attempting to deceive), and, assuming they are the defaults, it seems safe to assume Apple would choose very conservative defaults for the beta, especially the first release (it doesn't matter how fast it goes if it doesn't do what it's supposed to).

In two months, after people start to understand what's going on, where the trouble spots are, and what the right way is to fiddle the bits, if we are still seeing performance like this, I'll be interested.


I would assert, rather, that we have 60 years of compiler and runtime experience, both academic and industrial.

Consequently, I would propose that there is no excuse for a public release of a commercial programming environment to be so slow unless it introduces significant novelty sufficient to render the preceding work inapplicable.


I agree. Especially if the language is named something that implies high performance.

When Golang first appeared it was quite fast. Swift is not, apparently.


Go's compilation was fast, which made development much more pleasant. Performance of the actual compiled code was a different matter (it's hard to get both fast binaries and fast compilations).

However, I understand that 1.1 and 1.2 have much improved code performance.


Golang also doesn't do anything particularly interesting as a language, and didn't have to maintain interoperability with a system like Objective-C.


Well, to be fair on the optimization story, I assume this is LLVM-backed ObjC vs. LLVM-backed Swift, and Swift is using the same well-known, very optimized message-dispatch code that ObjC does. So there should be similar heft behind the optimizations in each.


Not always[1] (though, of course, when it does, it should take advantage of it). As TFA mentioned, there will also be a difference between class-based references and C scalar values. Of course, if that is the difference the author is seeing, it is probably far less meaningful for most applications (games would be an obvious exception, but that may be where Metal comes in).

1. http://stackoverflow.com/questions/24022172/does-swift-use-m...


I think the speculation in the blog post about it using boxed primitive types exclusively must be wrong, or at least a temporary state of affairs if it's currently correct.

I don't have a Mac capable of running any of the dev tools at the moment, but the section in the manual on the Int* types doesn't indicate they're boxed at all, nor give any reason why they should have to be.

Of course, if he's declared everything as an NSNumber, that could explain some of the results. But obviously, so could their currently being boxed. Either would probably account for the ++ performance problem, especially if it's post-increment, since that probably involves making a copy.

I think the OP is just mistaking the fact that they're extensible types for their being virtual types.


Isn't the point of a high-level dynamic language that you shouldn't need to do any bit-fiddling to write performant code?

Swift seems self-defeating.


I think that's hardly the case. Or are JavaScript and Python and Ruby self-defeating as well?


Depends on the way you look at it - on the one hand, JavaScript is a terrible language; on the other hand, it is (more or less) the only thing you have available in a browser.


If you could use a high-level dynamic language to write performant code without "bit-fiddling", why would anyone write in anything else?


The most commonly-heard excuse is that programmers do not like all the parens.


No, the point is to ease application development.


Google cache of the post, since the blog 'sploded: http://webcache.googleusercontent.com/search?q=cache:www.spl...


OK, so I won't necessarily be writing engine code in Swift. Or servers. But I'm not sure that's what it's for in the first place.

How about user-facing apps? Haven't we been through this a billion times, on choosing the language for the job? Java, Python, PHP, etc. Each has a use. For a lot of what Swift will probably (and should) be used for, I don't see this being an issue that hasn't already been beaten to death.


For normal apps I would mostly agree, but for a lot of games performance matters.


Benchmarks like this miss the point of what it is to write production software. The only people who care about this style of benchmark are people with very specialised use cases and academics (and their students), the latter of whom are trying to prove something that is easy to prove but doesn't really tell us anything of practical use.

For most purposes, a user of your software is not going to feel the difference between 0.0006 seconds and 0.06 seconds. One is 100x faster, so is "better", but no user will care.

The stuff that is really expensive (3D graphics, etc.), which users do care about, is being handed off to custom APIs that are highly optimised and hardware-accelerated anyway.

What's more important for most programmers when choosing a language is not "how fast is it for this code to run?", but "how quickly can I ship code?"

Code in the App Store next quarter is going to beat code you never ship, every single time.

Not long ago I had a project I thought would be perfect for Haskell. I don't know Haskell but I am a very experienced Ruby-ist. I thought about using the project as a means to learn Haskell but wanted the code shipped within a month. I did it in Ruby. It's probably slower, it's perhaps not as elegant as a Haskell solution. But it's shipped. And that therefore beats the Haskell solution which does not exist.

Swift appears to have modern features that mean many programmers will feel more comfortable developing applications using it, many of whom would have struggled with Obj-C.

In that sense, it's going to beat Objective-C for performance in the only benchmark that matters commercially: the one that gets ideas into users' hands the quickest.

If you think the rate of development so far has been fast, well, just watch. It's about to get mental.

I don't expect an academic to understand that, nor would I expect most junior/undergrad devs to, and I can see that in some special cases you would want to think about optimisation (in which case, turn that little piece of code into C and link it in), but in the real world, that's how coding works.


We might be missing something here, and I think ultimately Swift will be faster than Obj-C, but it does highlight that it's probably not wise to start blindly converting all your code to Swift just yet, until we get a better feel for what it is (and is not) good at.

Also, given that Apple says they will be breaking code as time goes on, I'll be staying on the Swift sidelines for now.

Edit: Why are people finding this comment so offensive?


> I think ultimately Swift will be faster than Obj-C

Is there something about the semantics of Swift that you think would inherently permit faster code than well-written ObjC?

Edit: I don't know why this comment was downvoted -- it's a perfectly reasonable question that others have responded to. Oddly, the parent comment _I_ was responding to is apparently also being downvoted. WTF?


Yes, the types.

Strong types are far easier to optimize.


Well, Swift does a lot of type inference, it's true, but it doesn't appear to be more strongly typed than ObjC (I could be wrong, because I have only skimmed the ebook). A lot of ObjC code simply returns an id when it could return a tighter type. They seem semantically equivalent to me.

Typically a language's efficiency comes from its ability to restrict ambiguity. E.g. a scoped sequence of expressions without gotos allows the compiler to eliminate variables, reorder statements, unroll loops etc because it can see 100% of the uses of a variable by examining that scope. On the other extreme a language like BASIC is harder to optimize in such a fashion because the program counter could be set into the middle of the loop, and because all variables are global, they have to survive the scope.
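A sketch of that point in Swift terms (a made-up function, using the beta's C-style loops):

    // Because i and total are confined to this scope and never
    // escape, the compiler sees 100% of their uses: it can keep
    // them in registers, unroll the loop, or reorder freely.
    func sumTo(n: Int) -> Int {
        var total = 0
        for (var i = 0; i < n; i += 1) {
            total += i
        }
        return total   // only the result survives the scope
    }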

I am genuinely interested in whether there are features in Swift that aren't also in ObjC.

A language can be equivalent to another and still be easier to program in because the cognitive load on the programmer is lower so good code can be written more quickly. Swift seems to be an interesting attempt at doing that. But that's orthogonal to the question of compiled efficiency.


It is definitely more strongly typed than Objective-C. Swift's type inferencing is up there with Haskell's, and Objective-C/Xcode's 'find all uses' is still string-based.

Totally agree with your 'reduce ambiguity' observation, and I think long-term Swift will do very well there. It will also allow tooling to radically improve from what it is, and give multicore a reasonable chance to be helpful.

IMO it seems a lot of the language decisions they have made are there to make it more compiler/memory friendly (e.g. the way assignment works with arrays). That bodes well for speed/memory/battery, I think.


Not semantics specifically but the engineering effort Apple will be investing in it.


The issue might be that the benchmarks shown during the presentation were for iOS, and the author here might have built for OS X. The other possibility is that we may need to tweak the compiler settings.

I certainly don't think that Apple was outright lying in their benchmarks. Those sorts of shenanigans are a PR nightmare, and nobody thinks it's worth it anymore.


> nobody thinks it's worth it anymore.

I'm curious which period you're referring to here. I haven't been following Apple presentations in detail for a while, but I remember the iPad 2 announcement was chock-full of flat-out lies. Do you mean that this is something that's changed since then (March 2011)? Or do you mean that it's not worth it for developer-focused presentations vs consumers (who would presumably be more credulous)?

EDIT w/ source: http://fortune.com/2011/03/03/steve-jobs-reality-distortion-...


That article would be more credible if it didn't give two examples of dual-core tablets (Dell and Motorola) that shipped "in volume" but actually flopped. Which raises the question of just what kind of volume they were shipping in - probably orders of magnitude lower than the kind of volume the iPad 2 ships in.


> the iPad 2 announcement was chock-full of flat-out lies

Do you have anything concrete to back this up with?


Ah, of course, that was dumb of me. I was remembering this from a few years ago, but I should've taken the time to dig up a source for it. For some reason I was thinking "everybody must remember that".

Here's an article that's pretty succinct about the inaccuracy and lies in the iPad 2 announcement: http://fortune.com/2011/03/03/steve-jobs-reality-distortion-...


Shock news: Marketing puts company's products in positive light!

All companies tweak the truth to make themselves look good (which company was it that was touting huge sales numbers when they had barely sold any products to actual customers, just retailers?). Apple aren't alone; they just get the most press.


I don't know why almost everyone in this thread is acting as if anything even incidentally negative about Apple is a personal attack on them and their family; it's really rather pathetic.

As you said, this is common practice, and Apple may or may not be one of the more egregious offenders [1], but it doesn't matter. I wasn't saying anything about Apple being shittier than other companies in this regard; I was responding to the parent commenter's claim that a company wouldn't do something like that these days. Now how the fuck is your claim of "Every company does this, leave Apple alone!" not in full support of my point (and arguing against a point that, as near as I can tell, nobody made)?

[1] IMO the lie about the Samsung quote is a level of dishonesty you don't see all that often, but my point is that differences in degree like that aren't really relevant.


> Do you have anything concrete to back this up with?

A smug sense of superiority.


Bullshitting developers who are about to use your compiler to verify your claims would be silly, right?

I mean, as opposed to users who won't run benchmarks.


I know they said on stage that they've spent a lot of time getting Swift fast, but aren't we still in beta? Isn't it a bit early to throw Swift under the bus for performance reasons?


The problem is everyone wants to know whether to jump to Swift right now or wait a few years until the tech matures. Claiming a 2x speedup and delivering a 2x slowdown is a 4x difference. We're not talking minor differences here.

As a side note, I've recently heard Core Data isn't used internally by Apple. Makes me think I'll jump to Swift once Apple gives me a list of which of their apps are written in this language.


Based on the differences between ++i and i+=1 alone, I would think there are just bugs they need to squash, and that the open beta, and all these eyes on it, are going to help find the cases that are stumbling.

Also, for many of the tests where there was a 400% or order-of-magnitude knock in performance, there are a lot of comments in here refuting those results, so I'm not sure what to think yet.


The author seems to argue that these benchmarks disprove that Swift is fast. While I have no interest in Swift whatsoever, I'm unswayed by a few tight loop, simple benchmarks. It would be interesting to compare larger programs that are part of cpu-bound computations.


If you watch the keynote, some bold, sort-of-specific claims are made. There is a big disparity between what was said at WWDC and what this person found. They are claiming over 100x speedups, and this person is seeing it be 10x slower.


Maybe Swift is in reference to developer speed.

From a PL / compiler standpoint, it's pretty obvious that Swift is not going to match Objective-C from a performance standpoint. That's why Apple used a micro-benchmark comparing a static, compiled language vs. a dynamic, interpreted language (Python) for marketing purposes.

> We can’t know exactly what’s going on behind the scenes, but my hunch is some of what we take for granted in Objective-C – the straight C scalar data types – are actually classes in Swift. And the more you rely on classes, the more Automatic Reference Counting is in there somewhere, retaining and releasing like there’s no tomorrow, often for no good reason.

If they switched from primitive types to classes for built-in types, that means vtables and an additional memory lookup per element access, weakening caching and memory locality.

Creating a Swift int would involve instantiating a class, incrementing the ARC count, and storing the data, compared to only storing the data for primitive types. That would explain why appending Swift's Ints to an array is so much slower than appending NSNumbers.
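A minimal sketch of the two representations being contrasted (hypothetical types; we don't actually know what the standard library does here):

    // Boxed: heap-allocated, reference-counted on every copy.
    class BoxedInt {
        var value: Int
        init(value: Int) { self.value = value }
    }

    // Unboxed: a plain value type -- just bits, no allocation,
    // no retain/release traffic when it's copied around.
    struct ValueInt {
        var value: Int
    }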

On a side note, this isn't as much of a problem with Java—despite everything being implemented with vtables—because of JIT and better CPU branch prediction.

Disclaimer: I have no Objective C / Swift experience.


> From a PL / compiler standpoint it's pretty obvious that Swift is not going to match Objective C from a performance standpoint.

This shows a deep misunderstanding of Objective-C and Swift. Swift has way more type information at compile time than Objective-C, and thus gives the compiler much more room to optimize (in addition to being safer). My guess is we'll see Swift eventually be faster than Objective-C at just about everything. For instance, they could unbox class instances into values where possible, which is something they're probably not doing much of yet.


This applies to naive code, where you let the compiler fix it. In those cases, Swift can be better because of the reasons you mention.

However, if you actually put your mind to it, for the 1-5% of code that actually matters performance-wise, it is hard to beat the performance, and more importantly the predictability, of C.


They advertised favorable performance comparisons against both Objective-C and Python.


ObjC has an extreme performance range, from slower than Ruby to faster than Java with primitives (= C). It's easy to pick and choose.


I don't disagree; I was just clarifying on the parent's claim that Apple was benchmarking only against Python:

> That's why Apple used a micro benchmark comparing a static, compiled language vs a dynamic, interpreted language (Python) for marketing purposes.


Shouldn't we wait until Swift is in production, and we're no longer using a compiler still in beta, before we come up with accusations like these? Swift may not end up being the greatest thing since sliced bread, but how about we get our hands on the production-ready build before we decide.


Apple posted a benchmark during the presentation. It was validated. They lied. Maybe they shouldn't boast about performance before releasing?


The tools they used to benchmark may not be the same ones we all got in the beta. Maybe our validations are the wrong data point.


It'd be interesting to see how Objective-C with ARC enabled would compare to Swift. For the majority of us, who use ARC in our projects, these benchmarks don't mean a whole lot.


I never trust Apple keynote benchmarks, but I was really holding out for the Swift ones to be true.

It probably makes no difference in the majority of common iOS/OSX development, though.


It would be interesting to see something like Octane ported to Swift just so we can compare it to other languages that are supposedly "slow".


These appear to be in line with unoptimized Swift. mpweiher, can we have some gists of the project or something for these numbers?


Not my numbers, but I've gotten similar results with optimized builds. Yes, unoptimized is even worse. See also:

https://devforums.apple.com/message/974858#974858

https://devforums.apple.com/thread/227905

https://devforums.apple.com/message/971211#971211


And this is why C++ can still be fast, even when the common STL implementations are a steaming pile performance-wise. The naturally developed style has been one-reference-only, so a lot of this work never has to happen. With Objective-C, anything that isn't C is on the heap and reference-counted. Even Java can allocate objects on the stack in tight loops (though this is done by the JIT, not by a change in how variables are declared).


What parts of modern, popular STL implementations are notably slow?

The only issue I've ever had is the blatantly obvious one of reallocating std::vector as it grows. You really want to reserve() enough space in advance if at all possible, because copying large chunks of memory is not fast.
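(For what it's worth, the same concern carries over to the language under discussion: Swift's Array grows the same way, and its reserveCapacity method is the analogue of reserve(). A sketch:)

    var items = [Int]()             // grows like a std::vector
    items.reserveCapacity(1000000)  // one allocation up front, so
    items.append(42)                // appends won't keep triggering
                                    // reallocate-and-copy cycles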


You don't always get to use a modern, popular STL. Sometimes you need to use the hardware vendor's crappy, unoptimized STL (because they assume you won't use it). This is part of the reason the STL is unpopular in game development, though the story here has been getting better.

To answer your question: in my experience, when the standard doesn't mention the desired performance characteristics of a function or method, even the popular STLs (libstdc++ and libc++, certainly) optimize for maintainability over performance. I'm not sure I can pull up an example of this right now, but they are numerous. A good place to start is the standard containers other than deque or vector.

A bigger issue, IMO, is that the existence of some methods in various standard classes/templates really murders performance, no matter how good the implementation is. One particularly bad example is the bucket() method, the local_iterator types, etc. in C++11's unordered_map (and probably the rest of the unordered family too). These force the table to be implemented in such a way that every insertion requires a heap allocation, and iteration requires much more pointer chasing than would otherwise be necessary (e.g. compared with a probed implementation), which is... unfortunate for the cache.


Interesting. Did you run your tests on iOS 8, OS X 10.10, or OS X 10.9.3?

Xcode or command line?


Brutal. Hopefully Swift makes up for it in developer productivity.


Wise people know to avoid micro-benchmarks, but this is not even a micro-benchmark; it's a nano-benchmark...

Testing isolated things like empty loops or... a single assignment really doesn't make sense for modern compilers. The dead giveaway is having to modify your code in weird ways so you don't trigger dead code elimination. If you trigger dead code elimination, it means your code isn't producing anything, and you're benchmarking various edge cases that won't occur in the real world.

You need to at least implement a simple algorithm or some small unit of functionality that makes sense in a real program. Swift's compiler is designed to aggressively inline and remove reference-counting overhead in cohesive units of code; if you test everything statement by statement, then whether you enable optimization or not, it can't optimize much.
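For example, a sketch of a benchmark body that at least computes something and lets the result escape (beta-style C loop), so it can't be dropped wholesale as dead code:

    var sum = 0
    for (var j = 0; j < 1000000; j += 1) {
        sum += j % 7
    }
    // Using the result forces the compiler to do the work
    // (or something provably equivalent to it):
    println(sum)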

That said I don't mind the article. It might help Apple discover places where Swift might be made even faster.



