Rust vs. C++: Fine-grained Performance (cantrip.org)
211 points by beshrkayali on Feb 7, 2016 | 126 comments



Most insightful part for me was this one:

> Many variations [he's talking about Rust] that seemed like they ought to run the same speed or faster turned out slower, often much slower. By contrast, in C++ it was surprisingly difficult to discover a way to express the same operations differently and get a different run time.

This is an insidious pitfall that may earn Rust the "slow" label, akin to what happened to Common Lisp.


> This is an insidious pitfall that may earn Rust the "slow" label, akin to what happened to Common Lisp.

This doesn't match my experience at all with Rust, for what it's worth. Not having copy constructors makes up for any difference in that regard: in C++ it's way too easy to accidentally deep copy a vector and the entire object graph underneath it--all it takes is "auto a = b" as opposed to "auto& a = b"--with very bad performance consequences. Rust, on the other hand, doesn't let you deep copy unless you explicitly ask it to.
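For what it's worth, the asymmetry is easy to demonstrate in a few lines (a minimal Rust sketch): assignment moves, and the deep copy has to be spelled out.

```rust
fn main() {
    let b = vec![1, 2, 3];

    // `let a = b;` would *move* the vector: `b` would become unusable
    // and no elements would be copied. To get the C++ `auto a = b`
    // behavior, you have to ask for the deep copy explicitly:
    let a = b.clone();

    assert_eq!(a, b);                   // both still usable
    assert_ne!(a.as_ptr(), b.as_ptr()); // independent backing buffers
}
```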


When I went back to C++ after learning Rust and used a vector again, I realized this and it hit me like a brick. I used to program in C++ before, so it wasn't like I didn't know this, but it was something rather unremarkable to me. Then I go back to C++ and throw vectors around willy-nilly and stuff works. Huh, I thought, this feels strange, I wonder how C++ accomplishes this without a garbage collector or ownership. Then it hit me -- it deep copies All The Things. Again, I already knew this, but I hadn't ever thought about it much. But the change in perspective made it stick out like a sore thumb.


The Qt C++ library does some neat tricks to avoid copying. For example its vector class (like nearly all of its container/value classes) overloads the "=" operator to perform shallow copies. So even though default behavior in C++ is to deep copy, it's possible to override.

Qt even takes it further by providing automatic copy-on-write. Methods that read data operate on the shared copy. Methods that change data cause the object to deep copy and detach first. This allows writing programs using values rather than references, while retaining the performance of references.
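Rust's standard library offers a comparable copy-on-write pattern through Rc::make_mut: handles share one allocation, and a mutation deep-copies (detaches) only when the data is actually shared. A rough sketch, with a hypothetical push_cow helper:

```rust
use std::rc::Rc;

// Hypothetical helper: appends to a shared vector, detaching
// (deep-copying) first only if other handles still point at the
// same allocation, much like Qt's implicit sharing.
fn push_cow(handle: &mut Rc<Vec<i32>>, value: i32) {
    Rc::make_mut(handle).push(value);
}

fn main() {
    let original = Rc::new(vec![1, 2, 3]);
    let mut copy = Rc::clone(&original); // shallow: just a refcount bump

    push_cow(&mut copy, 4); // shared, so this clones before mutating

    assert_eq!(*original, vec![1, 2, 3]); // the original is untouched
    assert_eq!(*copy, vec![1, 2, 3, 4]);
}
```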


I got so worried about accidentally deep-copying stuff that I ended up with an ugly C++ hack:

    C(const C &) = delete;
    C &operator=(const C &) = delete;
    C(C &&) noexcept = default;
    C &operator=(C &&) noexcept = default;

    struct MakeCopy {
        const C &src;
        MakeCopy(const C &src) : src(src) { }
    };

    MakeCopy make_copy() const { return *this; }
    C(MakeCopy src) : field1(src.src.field1), (blah blah) { }
Then, when you really want to copy stuff, you have to say "a = b.make_copy();".

This isn't really clean, because it's too easy to add a field to C and then forget to add it to the "manual copy constructor", but so far I've found it good enough for me.


In Rust, if you want a deep copy you (IIRC) implement the Clone trait, which allows you to explicitly clone everything. Many collections in the standard library already implement this, so you get it by default :).

Edit: I should point out that deep copies can only be explicit in Rust -- there's no implicit deep copy AFAIK.


There is no implicit deep copy, Clone is usually how a deep copy is implemented, but not all Clones are deep copies. Rc, for example, bumps a reference count on Clone.
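A small sketch of that distinction: cloning a Vec duplicates the elements, while cloning an Rc only bumps the count.

```rust
use std::rc::Rc;

fn main() {
    // Vec::clone is a deep copy: a new allocation, elements copied.
    let v = vec![1, 2, 3];
    let v2 = v.clone();
    assert_eq!(v, v2);
    assert_ne!(v.as_ptr(), v2.as_ptr()); // distinct backing storage

    // Rc::clone is shallow: same allocation, refcount incremented.
    let r = Rc::new(vec![1, 2, 3]);
    let r2 = r.clone();
    assert!(Rc::ptr_eq(&r, &r2));        // same backing storage
    assert_eq!(Rc::strong_count(&r), 2);
}
```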


> This is an insidious pitfall that may earn Rust the "slow" label, akin to what happened to Common Lisp.

Could it just be that the language is still young and all the edge cases in the optimizer haven't been implemented yet?


Not just that: there are also a lot of optimizations that would be nice but that we haven't implemented yet. (Though some of these will end up helping compile times more than runtime performance.)


I thought Rust left the optimization to LLVM. Is that not the case?


LLVM is great, and it's the reason why Rust is able to match C++ in performance on this workload, but it's not a magic-optimizer-of-everything: it's tuned to C and C++. We have added a couple of Rust-specific optimizations to it, but not a whole lot.

I suspect what the author was seeing was random performance differentials between iterators and loops, which often boils down to little missed optimizations in LLVM. If you find them, please file them in the Rust bug tracker--we've fixed many of them and will continue to do so in the future!

Note that when MIR lands, progress on which is well underway, we'll have the ability to do optimizations at the Rust IR level in addition to the LLVM level. This should make it easier to do these kinds of things.


Figured this might be the case. LLVM is great but C++ compilers have had a long time to mature.

It's impressive that the filter and map methods produced such good performance. The lazy evaluation scheme must be well set up; shout-out to the Rust team!

Are the lazy iterators combined and effectively stripped out at compile time?


Yes.
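For instance, a chain like this (a sketch) is typically fused by inlining into the same code as the hand-written loop, with no intermediate collections allocated:

```rust
// The filter/map adapters are lazy structs; nothing runs until
// `sum()` drives the iteration, and after inlining the chain
// usually compiles to the same code as the explicit loop below.
fn sum_even_squares(limit: u32) -> u32 {
    (1..limit).filter(|n| n % 2 == 0).map(|n| n * n).sum()
}

fn sum_even_squares_loop(limit: u32) -> u32 {
    let mut total = 0;
    for n in 1..limit {
        if n % 2 == 0 {
            total += n * n;
        }
    }
    total
}

fn main() {
    assert_eq!(sum_even_squares(10), sum_even_squares_loop(10)); // both 120
}
```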


As I understand it, there are a number of potential optimizations based on Rust's implicit understanding of the lifetime of a memory region. LLVM doesn't know about those because they can't be represented in the IR. Last I heard, any kind of optimization work based on the extra information rustc has is waiting for the compiler refactor that's currently underway.


Note that there aren't any optimizations that C++ can do that Rust can't in this area.


Can Rust's aliasing rules lead to extra avenues of optimization compared to C++ (along the lines of C vs. Fortran)?


Quite possibly yes.


I dunno, it's very easy for me to express things differently in C++ and get a different runtime. It's easy enough for the bunch of C++ programmers I've known who write needlessly slow code. It must be equally easy for the author. The reason he doesn't stumble is presumably his vast experience with C++, which keeps him from trying obviously dumb things (things that needlessly malloc a ton of stuff, etc.), whereas his Rust experience is necessarily smaller. I think you need quite a bit of experience before you can declare that, well, experience just doesn't help because the performance model of this thing is weird!


I would never call software good to go until I have profiled it to weed out all the places where a slow technique meets an important path.

In my experience, profiling and optimization are usually sort of a victory lap: low-hanging fruit yielding substantial improvements.


This is a problem in Haskell as well, from everything I've read.


Can you share some links?


One particular example that I recall was that if you calculate something like

    foldl (+) 0 [1..1000000]
(i.e. calculate the sum of the first 1000000 natural numbers)

Then the obvious way to do it is to simply keep a running total, but Haskell, being lazy, doesn't do that; it constructs the expression:

    ((((1+2)+3)+4)+...)+1000000
And tries to evaluate that, which tends to cause stack overflows and is just generally slow. To fix it you have to use the strict variant instead:

    foldl' (+) 0 [1..1000000]
This problem is described in more detail here: https://wiki.haskell.org/Foldr_Foldl_Foldl'


That adding a single tick mark makes a big difference in performance is one of many things in Haskell that make performance-minded development a pain. With these high-level languages, you really have to understand the implementation of the language to get anything fast written.



Yeah, I wish he had gone through this in more detail, too.


I laughed at this. I STILL write "++foo;" rather than "foo++;" as a default because C++ used to create copies at odd places.


The author compares gcc vs rustc.[1]

I'm curious if the author considered comparing clang vs rustc since both use the LLVM backend. (I'm guessing the more mature C++ code generation would still win the benchmark but one could study the intermediate code emitted by clang/rustc instead of the final machine code emitted by gcc.)

[1]https://github.com/ncm/nytm-spelling-bee/blob/master/Makefil...


On my MacBook Pro, the Rust version seems consistently faster (20% or so) than the C++ version compiled with clang.


As a followup, if I use __builtin_expect as recommended by the article, the situation reverses, with C++ being significantly faster.


> Curiously, most variations of the C++ version run only half as fast as they should on Intel Haswell chips, probably because of branch prediction failures

I haven't looked at the code yet (time for bed) but this note from Agner seems like it might be relevant:

  3.8 Branch prediction in Intel Haswell, Broadwell and Skylake

  The branch predictor appears to have been redesigned in the 
  Haswell, but very little is known about its construction.

  The measured throughput for jumps and branches varies 
  between one branch per clock cycle and one branch per two 
  clock cycles for jumps and predicted taken branches. 
  Predicted not taken branches have an even higher throughput   
  of up to two branches per clock cycle.

  The high throughput for taken branches of one per clock was 
  observed for up to 128 branches with no more than one 
  branch per 16 bytes of code. If there is more than one 
  branch per 16 bytes of code then the throughput is reduced 
  to one jump per two clock cycles. If there are more than 
  128 branches in the critical part of the code, and if they 
  are spaced by at least 16 bytes, then apparently the first 
  128 branches have the high throughput and the remaining 
  have the low throughput.

  These observations may indicate that there are two branch 
  prediction methods: a fast method tied to the μop cache and 
  the instruction cache, and a slower method using a branch 
  target buffer.
http://www.agner.org/optimize/microarchitecture.pdf

At a glance, the symptoms seem like this might match. Separately, I think there is still a limit of 3 branches per 16B before the branch prediction starts to fail. It's rare to hit this in normal code, but something this optimized might.


I've recently experienced what I believe was likely branch predictor failure. I rewrote essentially the same algorithm, with the same complexity, in a completely new way, to reduce constant factors. The new code was beautiful, and I was expecting perhaps 15-20% performance improvement. (40% of the runtime in the old version was in one function, the new version should, and did, call that function with less data to grind on average). Memory access patterns should be identical. Grand.

So what happened with my standard benchmark on Haswell (GCC 5.3, -march=native -O2)?

Old version, without PGO: ~8 minutes (baseline)

Old version, with PGO on: ~9 minutes (yes, PGO made it slower)

New version, without PGO: 11 minutes (60% slower)

New version, with PGO on: 7 minutes (~15% faster)

All of these benchmarks were replicated with a common set of functions tagged noinline, and I inspected the assembly to see if there was a difference with regard to loop unrolling. Nope, no unrolling. Afaict it appeared to be entirely to do with the way GCC had laid out code with respect to branches. The differences were minor in key areas and seemed like arbitrary compiler choices.

I eventually narrowed this down with callgrind's branch-predictor simulator to a few hot spots. I added a redundant if-statement to check for and short-circuit a common case, even though it shouldn't have mattered, and a few __builtin_expects, and suddenly I was getting the 15% speedup without PGO.


I found the branch prediction claim to be a bit weak without perfmon data to support it. The references point out that the Intel 'popcnt %r1, %r2' instruction has a false input dependency on the output register, and neither gcc nor clang seem to know about that and adjust register scheduling. This is usually because the chip uses the same scheduling bucket as instructions like 'add'. So putting 'popcnt %rax, %rax' inside a tight loop will prevent that loop from being unrolled in the processor. As a result the code is highly sensitive to random changes, because the compiler occasionally gets lucky and breaks the chain.


I've now looked at it more closely, and tested on Skylake and Haswell. I don't think the case described in https://gcc.gnu.org/bugzilla/show_bug.cgi?id=67153 is actually a bug in either the compiler or the processor. Rather, it's the behavior you would expect if the compiler (at compile time) guesses wrong about whether a branch is likely to be taken or not.

Which choice is fastest depends on the runtime word list, and thus there is no reason to fault the compiler for either choice. For the processor, the difference is simply that a branch not taken is faster than a branch taken. The fast version involves a loop that executes in a single cycle:

   77.21 │210:┌─→add    $0x4,%rcx 
    0.66 │    │  cmp    %rcx,%r8  
         │    │↓ je     260       
    4.20 │219:│  mov    (%rcx),%edi 
    0.88 │    │  test   %r9d,%edi   
         │    └──jne    210
The cmp/je and test/jne instructions fuse into one µop each, so that (incredibly) the processor can execute the remaining 4 fused µops at an iteration per cycle. If the loop is rearranged so that the middle branch is taken, each iteration takes a minimum of 2 cycles instead.

The solution (as the author mentions in a parenthetical) is to give more information to the compiler with "__builtin_expect". I wasn't able to test on Sandy Bridge or earlier, but my strong guess would be that Haswell and Skylake have a potential speedup, rather than a regression. I'll try to add a comment to the bug later tonight.


Don't know about Clang/LLVM, but in GCC you can give the compiler a hint if a branch is likely to be taken. As far as I can tell from disassembly, most of the time it just puts that branch first in the instruction stream, but I have measured significant benefit in very hot code paths.


I'm surprised by the small warts in the Rust code. In this line:

  let fname = &*env::args().nth(1).unwrap_or("/usr/share/dict/words".into());
Neither the &* nor the .into() adds much to the meaning for me as a reader. Why are they necessary for the compiler?

Why is there an .unwrap() on writeln! ? To force a panic if the write failed?


They used `into()` because it's the shortest. The more idiomatic options are all of the other ones: String::from, .to_string(), and .to_owned() (Strings implement a lot of generic interfaces.)

The &* converts a String to a &str. I would have done it in the match rather than here, personally.

  > Why are they necessary for the compiler?
Rust does not heap allocate things automatically nor convert types automatically. This is the side effect: you can see exactly how much allocation is going on, and know exactly how stuff is being converted.

(We do have a very small and very specific set of conversions that will happen, they don't here, though.)
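To spell those conversions out (a sketch): every route from a borrowed literal to an owned String is an explicit call, and going back is a cheap reborrow.

```rust
fn main() {
    // A literal is a borrowed &'static str; getting an owned,
    // heap-allocated String always takes an explicit conversion:
    let a: String = "words".into();
    let b = String::from("words");
    let c = "words".to_string();
    let d = "words".to_owned();
    assert!(a == b && b == c && c == d);

    // The other direction (&String -> &str) allocates nothing:
    let s: &str = &*a; // or a.as_str(), or &a[..]
    assert_eq!(s, "words");
}
```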

  > Why is there an .unwrap() on writeln! ? To force a panic if the write failed? 
Yes. The compiler will warn you that you're ignoring the errors on the write, and panicking is one way of handling it.


So the string literal is a str, but it gets turned into a String with the .into(), and then back into a str (well, &str) on the same line. That seems like a lot of travelling to go a short distance.

Is this because args() ultimately gives you a String, and you need to get that and the string literal out of a single expression?

Why does args() give you String rather than str? Couldn't command-line arguments be static more-or-less constants?

Would there be any way to turn the output of args straight into a &str, so you didn't need to .into() the literal?

Why are String and str separate types, rather than different uses of a single type? Are there any other examples of pairs of types with the same relationship? Arrays and slices? Any others? Is there something like this for maps/dictionaries/hashes?

I realise i've asked a lot of basic questions about strings here, and that you (and any other knowledgeable people reading this) are probably pretty tired of explaining this over and over again. I would be equally grateful for a link to some relevant documentation or whatever as for a detailed answer!


  > Is this because args() ultimately gives you a String
Correct.

  > Why does args() give you String rather than str?
Because the command-line arguments to the program are dynamically created. To be a &'static str, like a literal, they would have to be baked into the binary. &str is a pointer, it doesn't own any backing store.

  > Would there be any way to turn the output of args straight into a &str, so you didn't need to .into() the literal?
Well, that is what this code is doing. It's just also handling error cases, the primary one being: what if the argument isn't passed in?

  > Why are String and str separate types, rather than different uses of a single type? 
Because String actually owns the data it's attached to, and is heap-allocated. &str is a pointer; it can only point at some other string type, since it doesn't store its own data.

  > Are there any other examples of pairs of types with the same relationship?
Arrays/Vectors and slices are a great example: String is a wrapper around Vec<u8>, and &str is one around &[u8]. But any owned/pointer variant with some sort of restriction on the type of data is going to be like this; it's a pretty common pattern.

Don't forget other string types too: there's OsString, CString, and external crates provide other types as well. Strings are not simple.
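The owned/borrowed pairs side by side, as a sketch:

```rust
fn main() {
    // Owned types: heap-allocated, growable, responsible for freeing.
    let owned: String = String::from("hello");
    let bytes: Vec<u8> = vec![1, 2, 3];

    // Borrowed views into storage someone else owns:
    let view: &str = &owned;        // &String coerces to &str
    let slice: &[u8] = &bytes[1..]; // &Vec<u8> slices to &[u8]
    assert_eq!(view.len(), 5);
    assert_eq!(slice, &[2, 3][..]);

    // A &str can also point at static memory with no owner at all:
    let literal: &'static str = "hello";
    assert_eq!(view, literal);
}
```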

  > Is there something like this for maps/dictionaries/hashes?
Slices point to _contiguous_ spots in memory, so these data types don't really have slices, because that property isn't true.

  > I would be equally grateful for a link to some relevant documentation or whatever as for a detailed answer!
Hehe, no worries. It's hard to say what _specifically_ would be the best docs, but if you're interested in the _future_ docs about this, http://rust-lang.github.io/book/understanding-ownership.html is an explanation of Rust's ownership system through String and &str that will become the next version of the official docs. There are still some rough edges, as it's a draft, though!


If a panic is forced and used like an exception, that's just so wrong ...


> .into()

> Why are they necessary ...

"foo" in Rust has the type &'static str. That is, a pointer and a length to an immutable piece of memory that never goes out of scope, which in the case of a literal is allocated statically. Because Rust tracks ownership of memory, this kind of a reference is of a different type than a heap-allocated owned string.

Rust also does no allocations or conversions without you telling it to do so. The programmer wanted a heap-allocated String, so he had to call a conversion function, which allocated memory and copied the string.


I'm not sure why the author kept &*s, I have suggested &s[..] instead (i.e. "take a slice of the whole string").

And .into() seems to be used as a &str -> String (allocating) conversion for brevity only.

The more idiomatic form would therefore be:

  let fname = &env::args().nth(1).unwrap_or(String::from("/usr/share/dict/words"))[..];
Although I would move the &...[..] slicing to the places where fname is used, which should work fine with &String instead of &str (or automatically coerce to get the latter).


I need to learn Rust again. I haven't used it in a year, and the language has changed a lot. Everything now seems to require a closure or some ".into()" idiom. Like this:

    let file: Box<Read> = match fname {
        "-" => Box::new(stdin.lock()),
        _   => Box::new(fs::File::open(fname).unwrap_or_else(|err| {
                   writeln!(io::stderr(), "{}: \"{}\"", err, fname).unwrap();
                   process::exit(1);
               }))
    };
Type Result has its very own set of control flow primitives.[1] So does type Option.[2] Any type can have its very own flow control primitives. Are we going to see

    date.if_weekday(|day| { ... }) 
and similar cruft for every type? I hope not.

Hopefully the setup of closures that probably won't be executed isn't too expensive. Do those require a heap allocation and release when not used, or is this all on the stack? The run time variation for small changes indicates that some constructs are more expensive than others, but it's hard to know which ones are bad.

I understand the rationale behind the Rust approach to error handling, but it's just painful to look at. After seeing the gyrations people go through in Go and Rust to deal with the lack of exceptions, it looks like leaving exceptions does not make a language simpler. From a compiler perspective, the compiler can generally assume that exceptions are the rare case, and can optimize accordingly. Rust has no idea which closure a function will call, if any.

[1] https://doc.rust-lang.org/std/result/enum.Result.html [2] https://doc.rust-lang.org/std/option/enum.Option.html


> Hopefully the setup of closures that probably won't be executed isn't too expensive. Do those require a heap allocation and release when not used, or is this all on the stack?

Creating a closure with |...| { ... } is literally as expensive as creating a tuple containing (references to) the variables it captures. That is, they're on the stack by default like everything else in Rust, and there's no implicit heap allocations. For more details, see, for instance, http://huonw.github.io/blog/2015/05/finding-closure-in-rust/ .

> Rust has no idea which closure a function will call, if any.

It does. As my link above discusses, each closure has a unique type, allowing monomorphisation to kick in and hence the compiler can easily optimise and inline calls to closures (as long as the author of the closure-taking function doesn't opt-in to only allowing virtual dispatch).
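A sketch of the two dispatch strategies (in today's syntax): a generic parameter is monomorphised per closure type and can be inlined, while a trait object goes through a vtable.

```rust
// Monomorphised: the compiler emits a specialised copy of this
// function for each closure type, so the call can inline away.
fn apply_static<F: Fn(i32) -> i32>(f: F, x: i32) -> i32 {
    f(x)
}

// Virtual dispatch: the closure sits behind a fat pointer and the
// call is indirect, which blocks inlining.
fn apply_dyn(f: &dyn Fn(i32) -> i32, x: i32) -> i32 {
    f(x)
}

fn main() {
    let double = |x| x * 2;
    assert_eq!(apply_static(double, 21), 42);
    assert_eq!(apply_dyn(&double, 21), 42);
}
```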


Closures are just as expensive as normal control flow, if they're not passed as a trait object (which is unusual).

To expand: No closures require heap allocation. If you pass the closure as trait object, calling it will require a virtual function call, which is more expensive than a normal one.


Closures aren't more expensive than what you'd otherwise use; they're not heap-allocated by default, and a closure that doesn't close over anything should end up being a regular old function.

We just merged an RFC for ? syntax, which should make error handling less verbose.


> it looks like leaving exceptions does not make a language simpler.

I don't think that this ever was the claim. Rust leaves exceptions out because they make code hard to work with and because it prefers errors be acknowledged. I don't think monadic errors are "simpler" than exceptions.

(Though I suspect they are simpler to teach than exceptions to people who have learned neither)

Once `?` lands in Rust I expect these "gyrations" will become much simpler.

> Are we going to see date.if_weekday(|day| { ... }) and similar cruft for every type? I hope not.

Not sure where this comes from. As the others have mentioned, closures are pretty cheap, but anyway you don't need to use the control flow primitives. There are many ways of dealing with errors, including using `if let`, `match`, `unwrap_or_else`, `try!`, and the upcoming `?`.
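A quick sketch of a few of those options applied to the same Result:

```rust
use std::num::ParseIntError;

fn parse(s: &str) -> Result<i32, ParseIntError> {
    s.parse()
}

fn main() {
    // `match`: handle both arms explicitly.
    let a = match parse("42") {
        Ok(n) => n,
        Err(_) => 0,
    };
    assert_eq!(a, 42);

    // `if let`: only care about the success case.
    if let Ok(n) = parse("42") {
        assert_eq!(n, 42);
    }

    // `unwrap_or_else`: compute a fallback from the error via a closure.
    let b = parse("oops").unwrap_or_else(|_| 0);
    assert_eq!(b, 0);
}
```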


    Incidentally, I don’t know why I can write  
        let (mut word, mut len, mut ones) = (0u32, 0, 0);
    but not
        (word, len, ones) = (0, 0, 0);
Marking variable declarations with let is a good thing and makes it clear if something is an assignment or a variable declaration. It also leads to cleaner code when you use nested functions (I always hated the "nonlocal" keyword in Python)


I think the author was indicating that while you can use that syntax to declare and initialize variables, you can't use it later to assign new values to them. I don't think they had any problems with the concept of `let`.

And it does strike me as a little weird as well to only be able to take advantage of the pattern match destructuring on initialization, but I'm sure there's a good technical reason.


I read it the same way. There's an open issue for it:

https://github.com/rust-lang/rfcs/issues/372

I think one of the main reasons this isn't done is because of grammar issues. I dimly remember that one of the goals of Rust was to be parseable with an LL(1) grammar.


There are grammar issues, yes. We unfortunately have one tiny case that's context-sensitive, though, so we didn't quite get there :/ The vast majority of the grammar is very straightforward, though. There's a lot of benefit to it.


The main reason this isn't done is that `let (a, b) = (…)` is the pattern matching of two tuples, but assignment doesn't do pattern-matching.


let (word, len, ones) = (0, 0, 0)

looks like a cleaner version that works in F# (I am not sure about Rust). This is extremely useful when initializing values from a function that returns a tuple, when you do not want to change them afterwards (unlike the mut word etc.)

let (x, y, z) = GetStartPoint3d()


That works in Rust if you add a ; on the end.
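i.e., with the semicolon the destructuring let compiles fine; a sketch with a made-up get_start_point:

```rust
// Hypothetical helper returning a tuple, for illustration only.
fn get_start_point() -> (f64, f64, f64) {
    (1.0, 2.0, 3.0)
}

fn main() {
    // A `let` pattern destructures the whole tuple in one binding:
    let (x, y, z) = get_start_point();
    assert_eq!((x, y, z), (1.0, 2.0, 3.0));
    // Plain assignment to existing variables, `(x, y, z) = ...;`,
    // was not accepted at the time of this thread.
}
```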


Nice. The original article does an interesting job, but what truly blows my mind is that Rust gives systems-level performance while supporting higher-order functions and aspects of functional programming. That is truly remarkable. On a side note, adding the ; at the end also works in F# but is not idiomatic.


Yeah, the optimizations are pretty great. I really love our closures too, it enables all kinds of good stuff.

(And thanks for the idiom note, makes sense. F# is pretty cool.)


If you want to compare the verbosity of both languages, why not use 'using namespace std' ?

Why write:

> std::vector<unsigned> counts; counts.resize(sevens.size());

When you can write:

> auto counts = vector<unsigned>(sevens.size());

Doesn't seem like a fair comparison to me.


I don't agree with the general usage of "using namespace std", so that part does not bother me.

For an even shorter init:

> std::vector<unsigned> counts(sevens.size());


Bringing namespaces into scope is perfectly idiomatic C++, just don't do it in your headers.


IIRC, importing an entire namespace into scope is a feature that was added exclusively for backwards compatibility and is not idiomatic modern C++.

What is idiomatic is explicitly importing individual symbols into scope:

    #include <iostream>
    #include <string>
    #include <vector>

    using std::cout;
    using std::string;
    using std::vector;

    int main() {
        vector<string> v {"explicit ", "is better ", "than implicit\n"};
        for (auto p : v) cout << p;
        return 0;
    }
I'm on mobile so please forgive errors.


Wow, I never knew about that. I always hated having std:: everywhere, just so I could have "correct" C++. But I also hated referencing to std implicitly. This is the perfect tradeoff: explicitly state intentions in the beginning, then implicitly reference them as you go. (works until you use eighty five billion lines just to import your symbols)


Sure, sure, but I don't know, personally it feels sloppy. Renaming namespaces to shorten them feels ok, as is importing specific names. Bringing in the whole shazam gives the impression that the person who was writing didn't really care very much.

But again, personal opinion - it can definitely be used in some contexts (just be consistent in your project!).


> I don't agree on the general usage of "using namespace std", so that part does not bother me.

It should: we're comparing to Rust, whose namespace system seems very similar to a "using namespace std".


Rust only provides short reexports for a stable set of a few dozen items, rather than the "entire universe" of `using namespace std`. The only types that have a default reexport are `Option`, `Result`, `Box`, `String` and `Vec`.


To be even more specific, http://doc.rust-lang.org/std/prelude/


> we're comparing to Rust, whose namespace system seems very similar to a "using namespace std".

Not at all; only if you use globs ("use foo::*"). D's namespace system is like that, but not Rust's.


I think one thing of note here is that the author did not want to make the program parallel. One of the benefits touted by Rust is "fearless concurrency." Had parallelism been introduced, that is where line counts or performance might have diverged.


Parallel is of course very important. But if serial speed isn't in the same range, "go parallel" will become the same perf crutch as Python folks saying "drop to native". Rust programs shouldn't be parallel to beat a single threaded C++ program.


>Rust programs shouldn't be parallel to beat a single threaded C++ program.

I don't write code in either language, but as an outsider my impression is that Rust was created to make programming for parallel execution easier.

If that's the case, why are we comparing a use case C++ is optimized for against a use case Rust isn't optimized for?


We do want Rust to be excellent at parallel execution, but that does not mean we don't pay attention to single-threaded performance. The way that we make parallel/concurrent code better has no negative impact on single-threaded performance. In fact, sometimes you can use more efficient data structures when you know that you're not using multiple threads: Rc<T> instead of Arc<T>, for example.
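A sketch of that trade-off: Arc pays for atomic refcounts and may cross threads; Rc is cheaper, but the compiler refuses to let it cross.

```rust
use std::sync::Arc;
use std::thread;

fn main() {
    // Arc: atomic refcount, so a clone can move to another thread.
    let shared = Arc::new(vec![1, 2, 3]);
    let handle = {
        let shared = Arc::clone(&shared);
        thread::spawn(move || shared.iter().sum::<i32>())
    };
    assert_eq!(handle.join().unwrap(), 6);

    // Rc: plain (cheaper) refcount, but it is !Send, so moving a
    // clone into thread::spawn would be a compile error.
    let local = std::rc::Rc::new(vec![1, 2, 3]);
    assert_eq!(local.iter().sum::<i32>(), 6);
}
```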


Exactly. Most C++ codebases lean too heavily on `shared_ptr` because they don't/can't know when something will cross a thread boundary.


Rust wasn't created to make programming for parallel execution "easier". What Rust does is make implicitly unsafe parallel code impossible to write.

This is a good thing, but is orthogonal to single threaded performance (which from what I've read, the Rust team definitely cares about).


It's absolutely intended to make it easier. Easier in the sense for example, that you can create a multithreaded program that uses stack-allocated but shared data, and you don't have to debug complex synchronization yourself.

The rayon library is an exciting example of some of the possibilities http://smallcultfollowing.com/babysteps/blog/2015/12/18/rayo...

Rayon, being a multithreading library, is of course itself not so trivial to write. What it lives up to is that users of rayon can use the regular rust type system rules to ensure that their use of rayon is thread safe if it compiles.


I find Rust is still a complex language. I don't find concurrency any easier to write using Rust than using the C++11 concurrency API, for example. I don't have much real experience with concurrent Rust applications, though.

What Rust does have is that once your code compiles in safe mode you're very confident.

Overall, I like the complexity and I have been enjoying my time with Rust. I began using it for bare-metal ARM programming. After using Rust in "no standard library mode", I'm now convinced there are no more valid reasons for using C in this century. To me, Rust is a no-brainer replacement for C.

It's not, however, going to affect the adoption of modern C++, which has a slightly higher-level niche in performance-critical applications. It's the libraries that make C++ awesome, definitely not the language itself (although the core is improving).


Is there a rayon replacement in C++ (lightweight tasks for parallelism with a lock-free work-stealing scheduler), other than Cilk itself, which isn't really C++?


Take a look at Intel Threading Building Blocks. https://www.threadingbuildingblocks.org/tutorial-intel-tbb-t...

Also there is Microsoft REST SDK which includes task based programming. https://casablanca.codeplex.com/wikipage?title=Programming%2...

Also there is async++ https://github.com/Amanieu/asyncplusplus/blob/master/README.... which is an implementation of the proposed C++ concurrency TS.

In January 2016 the concurrency TS was accepted so we should start seeing implementations from the compiler vendors. https://gist.github.com/StephanTLavavej/996c41f7d3732c968ede


I have no idea, sorry.


I think well-designed C++ can maintain parity, especially with std::async() from http://en.cppreference.com/w/cpp/thread.

The problem is more that Rust is going to be automatic in your security more often, while C++ would require more thorough review.

But the code in this article is micro-optimized, and that overlooks a common pattern: micro-optimization in C++ often leads you into a twilight zone of unexpected, undocumented behavior, where you get the desired results through luck rather than well-defined semantics. Comparing the two with this kind of artful precision means you don't get to notice the very common situation where what is safe Rust is unsafe C++ in the default design.


> I think well designed C++ can maintain parity, especially with async() using http://en.cppreference.com/w/cpp/thread.

No, the entire point of Rust's "fearless concurrency" is guaranteed data race freedom by the compiler, something C++ doesn't even try to enforce. Modern C++ can still easily have data races.


To my knowledge std::ifstream is pretty slow. I think the author should load the whole file into memory with a cache-friendly layout and try the benchmark again.


> but I'm amazed that Intel released Haswell that way. I don't know yet if Intel fixed it in Broadwell or Skylake.

I would love to know more about this; how curious indeed.


This is really fantastic for getting an understanding of how Rust works in a real-world setting. As a non-systems developer, though, I am finding the Rust language a little cryptic, in particular the many new syntax elements that don't exist in other languages.

I would be curious to know if there are plans, like Swift's, to simplify the language syntax in the future, or if it is just something to get used to.


It depends on what you mean by "simplify". We've just accepted an RFC for ?, for example, which should make certain code more concise, but it is an additional feature, so is that more simple?


General notes:

- Inconsistent semicolon usage makes Rust harder to process. Is that semicolon extra? Is it not? Will that be a compilation error? Saving single keystrokes will be paid for in cognitive load; programming is already a demanding task, and this isn't helping.
- Chaining multiple statements on a single line is bad for various reasons.

Dependencies / headers / modules:

- C++ is better precisely because it's more fine-grained on the initialization side. The paragraph starting with 'Rust wins, here' is just a general turnoff to reading the rest of the article. When I read the title 'Rust vs. C++: Fine-grained performance' I was honestly expecting either a table of performance times across various algorithms or a savvy optimization guide, not a LoC duke-it-out.

Input file processing:

- Both are conceptually boilerplate code; there is apparently more of it for Rust.
- There is an interesting &⊛env:: in there that makes the language look like it needs a bit more work. Three operators just to do something with the first object?
- Are they intentionally trying to copy C++ with the :: operator while removing visually arbitrary semicolons? Just replace the :: with something else.

Data structure and input setup:

- Remove the 'let' from the language, and it would look cleaner than the C++.

Input state machine:

- Why isn't "(0..7).rev()" just "(7..0)"?
- In the bottom part, the lambda-like usage makes it less comprehensible (and probably undebuggable the way it's written).

General lack of performance numbers:

- "I found that iterating over an array with (e.g.) “array.iter()” was much faster than with “&array”, although it should be the same..."
- "Curiously, changing scores to an array of 16-bit values slowed down (earlier versions of) the C++ program by quite a large amount – almost 10% in some tests – as the compiler yields to temptation and forces scores into an XMM register. The Rust program was also affected, but less so."

Edit: Unicode


I didn't write this blog post, though I did help the author optimize their Rust code when they dropped by #rust, so I'll respond to the generic Rust stuff rather than those details.

  > Semicolons inconsistent usage
You've asserted this, but not demonstrated it: semicolon usage is consistent in Rust. Rust is not C++, so it doesn't necessarily follow the same rules, but it is consistent, even though it may be different.

  > C++ is better precisely because it's more grained on the initialization side.
Can you elaborate on this? I'm not exactly sure what you're saying.

  > There is an interesting &⊛env:: in there that makes the language
  > looks like it needs a bit more work. 
See above: this kind of thing is about how Rust views coercion and heap allocation: explicit, not implicit.

  >  Just replace the :: with something else.
We tried that. We still ended up with :: as a scope operator.

  > Remove the 'let' from the language, and it does look cleaner than the C++.
`let` has a few advantages, in that it allows you to take advantage of patterns, and makes initialization explicit.

  > Why isn't "(0..7).rev()" "(7..0)"
Because ranges always iterate forward, from start to end. rev() reverses the direction.

  > On the bottom part, lamba-like usage
I'm not sure what you're referring to here.


I get the feeling that Rust's design is concerned more with the write-ability of code than its readability. Code has to be maintained, read by random people, and comprehended. It will grow to hundreds of megs on large projects, and it will take time to compile.

With the above in mind:

> You've asserted this, but not demonstrated it: semicolon usage is consistent in Rust. Rust is not C++, so it doesn't necessarily follow the same rules, but it is consistent, even though it may be different.

Writing code is consistent and unaffected, code comprehension (debugging, integration efforts, etc) by third parties will be hampered.

>Can you elaborate on this? I'm not exactly sure what you're saying.

Not importing the entire library will result in faster compilation, unless Rust has a concept similar to precompiled headers, etc. I apologize at this point: I'm not sure what Rust does in that respect, but my initial thought was that as the project grows, the parser will slow down on includes, and the internal symbol resolver will slow down on lookups because too many things are included.

All of the above is only if you are trying to match what C++ does for large teams. I guess it's up to the developers of the language what direction they want to take it in. More agile or more enterprise oriented.

(From personal experience: I was on a team looong time ago, where codebase took 12 hours to compile on a powerful cluster. Any gains, even by few hours, changed schedules drastically)

>See above: this kind of thing is about how Rust views coercion and heap allocation: explicit, not implicit.

I understand your viewpoint, however three operators on a single object still seems conceptually excessive to me.

>Because ranges always iterate forward, from start to end. rev() reverses the direction.

Since both are supported, maybe the parser can be made smarter?

> I'm not sure what you're referring to here.

Well this piece of code:

  let scores = words.iter()
      .filter(|&word| word & !seven == 0)
      .fold([counts[count]; 7], |mut scores, &word| {
          for place in 0..7 {
              scores[place] += (word >> bits[place]) & 1;
          }
          scores
      });
Is not easily read or debugged ( on which line do you set the breakpoint, etc).

Outside of minor things like this, I like what Rust is trying to do. I think it's a great effort and I do look forward to where it will lead.


> Not importing the entire library will result in faster compilation. Unless rust has a concept similar to precompiled headers, etc. I apologize at this point. I'm not sure what Rust does in that respect, but my initial thoughts were that as the project grows, the parser will slow on includes. The internal symbol resolver will slow on lookups because too many things are included.

The compilation time problems associated with headers are problems with headers: avoiding headers makes it easy to avoid the problems. The crate system of Rust is somewhat similar to precompiled headers, and C++ is in fact in the process of putting something similar (modules) into the standard.

Interestingly, this makes the Rust compiler easier to work with: the basic things of lexing, parsing, name resolution etc. are not at all bottlenecks (any particular file is only parsed once, there's not megabytes and megabytes of headers to parse for every invocation). This means these parts don't have to be micro-optimised, and hence don't get hit by an increase in complexity due to those optimisations.

Concerns about internal symbol resolution aren't a problem: hashmaps give (expected) O(1) access no matter how many elements they hold, and even using a tree has O(log n) access (i.e. going from 1_000 symbols to 1_000_000 only doubles the time for a look-up, and the next doubling happens at 1_000_000_000_000 symbols).

> Since both are supported, maybe the parser can be made smarter?

Not sure what this has to do with the parser (7..0 parses fine, it just creates an empty iterator), but implicitly reversing ranges is one of the most annoying features of R: it makes it annoyingly hard to do things like `x..y - 3`.

> Well this piece of code:

Two things: the author is effectively golfing here (they say they want the code to fit on a single page), and the Rust version can be written just like the C++ version, with a loop:

  let mut scores = [counts[count]; 7];
  for &word in words {
      if word & !seven == 0 {
          for place in 0..7 {
              scores[place] += (word >> bits[place]) & 1;
          }
      }
  }


Can LLVM lift the bounds check out here?


Yeah, it should be able to handle that easily, since both bits and scores have static length 7. Note, however, that the loop version does no more or less [] indexing than the iterator/fold version.


  > I get the feeling that Rust design is concerned more about write-ability
  > of the code than readability.
We're usually accused of the opposite: being explicit about things really helps in the long run. We've made several choices that we think, at least, helps "in the large" but hurts "in the small". We can always be wrong, of course :)

  >  code comprehension (debugging, integration efforts, etc) by third parties will be hampered.
In what way? I still don't understand what you're getting at here.

  > Not importing the entire library will result in faster compilation.
Rust's unit of compilation is a "crate", which gets compiled at once. Metadata is then stored in the artifact (.rlib) so that you can know the interface, etc, and you don't need to recompile the rlib when you change what aspects of the library you use. So I'm a _little_ bit unsure of the _exact_ details, but I think it's the same as precompiled headers, to my knowledge.

  > where codebase took 12 hours to compile on a powerful cluster.
Well, I'm sure we'll get to really huge projects someday, but we tend to do the "small packages" philosophy, so an initial compile might take a while, but subsequent ones aren't as bad. I just did a clean build of Servo, which is roughly 6MM LOC (575k Rust, 500k C, 1MM C++ (Spidermonkey, I'm guessing)), and it took 16 minutes on the first (debug) build, and 3 minutes on the second, after touch-ing the main file. Lots of the initial time is building the 170 (!) dependencies, which then don't need to be compiled again. (And your second build-times will be different based on which sub-package you're building. Some are faster, some are slower.)

We also have a lot of compiler performance improvements coming down the pipeline, including incremental recompilation within a single crate.

  > Since both are supported, maybe the parser can be made smarter?
It's been debated, but it's another edge case to remember. It's not clear that adding another special rule to the language is worth saving four characters.

  >  Is not easily read or debugged ( on which line do you set the breakpoint, etc).
I think it depends. I come from a functional background, so reading it feels fairly straightforward to me (though that for in a fold is a bit odd), and debugging should work as usual, though I don't feel the need to use a debugger a whole lot in Rust.

Thanks for taking the time to reply. Skepticism is good! Constructive criticism is the only way things move forward.


It seems Rust sacrifices too much for the sake of safety. If Swift evolves into a truly cross-platform language, I think it hits the golden middle of the safety-vs-usability trade-off (without requiring a nondeterministic GC like D).


> It seems Rust sacrifice too much for the sake of safety.

Like what? Given the constraints that Rust has, I don't see much that could radically be changed. Little things like the ? operator (equivalent to try!()) can improve common tasks, but how do you ensure memory safety in the absence of a garbage collector without the programmer being explicit about what he wants to do?


I think after about Swift 4.0, if you're writing a native application in [systems language] and you're using Arc/Rc/shared_ptr all over the place you need to seriously take a step back and ask yourself why you're not using Swift.

If Swift had structural references (for all intents and purposes in Swift, all structs are value types and all classes are automatically reference counted pointers) and move semantics Swift would absolutely kill it.

It's an exciting time to be a PL nerd.


I'm a Rust fan, but I'll still be excited if Swift manages to add some sort of borrow checker to the language in a future revision (C++, D, and Nim have all also at least alluded to such a direction, but only C++ has made any progress so far). From what I've seen though, I'm still not sure that Chris Lattner has given serious thought to such an addition. I'm waiting for a concrete proposal before getting my hopes up, because Rust had to put significant effort into tailoring the base language to borrow checking, and implementing such an analysis after the fact runs the risk of serious divergence (or worse, breakage) in the ecosystem.

(There's also the fact that, as of the hypothetical Swift 4k, none of the library ecosystem will be leveraging the borrow checker and so Rust will still have a substantial head start for people who want thoroughly static ownership.)

Other than that I'm a bit dismayed that Swift just punts concurrency to GCD, given how many other new languages make concurrency such a major focus (especially Rust).


How is this related to the article?

And what, exactly, does Rust sacrifice for safety?


> How is this related to the article?

The article compares Rust vs C++. No surprise that the [potential] language in the middle of these two extremes is mentioned....

> And what, exactly, does Rust sacrifice for safety?

Clarity, usability... Here is Alexandrescu's quick overview of it:

https://www.quora.com/Which-language-has-the-brightest-futur...


I've addressed the "bulging muscle" criticism many times before. Suffice it to say that it has nothing to do with safety and everything to do with a cultural difference between untyped and typed generics. C++ and D use untyped generics, while pretty much every other language in existence, Rust included, uses typed ones. It's a tradeoff, and I think Rust made the right choice. In fact, Rust chose "clarity" and "usability" over the expressive power of C++ and D templates!


Rust = c syntax with Erlang conventions


You would've hit closer to home if you'd said "ML" or "Haskell" instead of Erlang, though you'd still be shooting wide.

Rust's borrow checker is its single most distinctive feature, and that is inspired by the work done in the Cyclone language.



Ehm no not even close.


Do you want the C++ compiler to also prevent data races? Then use const functions by default, seriously.

http://youtu.be/Y1KOuFYtTF4 (starts at 29:00)

The C++ template metaprogramming language is such a powerful and expressive tool for writing libraries and safe abstractions. Imagine if all the effort of the Rust team's 8 years had gone into a modern C++ safety library.

Such a library would have been a more useful contribution, and more people would have benefited.


> Do you want the C++ compiler to also prevent data races? Then use const functions by default, seriously.

Tell me how to use const functions to enforce that I take a lock before accessing data guarded by a mutex. Or how to mutate data local to the current thread only, while forbidding mutation of shared data.

> Imagine if all the effort of 8 years of Rust team went into a modern C++ safety library.

I imagine our 8 years would have been wasted chasing the impossible.


Using const does not prevent data races.

We made Rust because you can't actually retrofit its guarantees onto C++ without breaking backwards compatibility. The CPP Core Guidelines are an example of this: Herb said that data race prevention is a non-goal, and in general, how they interact with concurrency is not yet understood.


Did you watch the video?

Curious why you think using const functions doesn't prevent data races after watching it.

At 29:00 he says the C++ standard guarantees const member functions are data-race free.


I mean, data races need mutability, so something that's const can't have a race, sure.

What I meant was something like "const itself is not a panacea against data races generally." It is a useful tool.


Don't get me wrong I respect Rust core team as theorists and think rust is an important research project.

But in C++ if you have a const member function F(), then F() can only change member variables that use the "mutable" keyword.

If you default to all methods being const, you have much, much safer C++. As safe as Rust? Probably not, but combined with the lifetimes proposal and static-analysis rules, we have a language good engineers can work with sufficiently well.

    struct Foo
    {
         mutable int a = 4;
         auto increment() const -> void
         {
              ++a;
         }
    };
If we train engineers to use const member functions, then the programmer is forced to think about state and mutation.

As C++98 PTSD begins to fade and C++17 becomes the cultural mental image of the language, the argument for switching to rust will become less and less compelling.

Eventually, the only "advantage" will be ML influenced syntax.

The reason is that there are only three languages that can easily call into C++ code: Objective-C (Swift by proxy), D, and of course C++ itself.

Why Rust didn't prioritize compatibility with C++, like C++ did with C, boggles my mind. Why do you think C++ became so popular? Near-perfect C compatibility (until recently) is literally the only reason. Instead of focusing on pragmatism, Rust focused on language purity.

At what cost?


> But in C++ if you have a const member function F() then F() can only change member variables that use the "mutable" key word.

This is not nearly enough. See the examples I gave in my other post.

Besides, all "const" means is "I won't mutate 'this'". It doesn't mean someone else who has a non-const reference to your object won't mutate your object. That makes it largely useless for reasoning at a local level.

> If we train engineers to use const member functions, then the programmer is forced to think about state and mutation.

The programmer may think about it, but the compiler doesn't enforce it at all.

> As C++98 PTSD begins to fade and C++17 becomes the cultural mental image of the language, the argument for switching to rust will become less and less compelling.

Only if you don't understand how Rust works.

> Why rust didn't prioritize compatibility with C++ like C++ did with C boggles my mind.

Because C++ compatibility is impossible without being a derivative of C++, and C++ is hopeless in the safety department. I believe it is impossible to make C++ memory safe without making it not C++ anymore.

> Instead of focusing on pragmatisim Rust focused on language purity.

No, Rust focused on what is possible.


> Because C++ compatibility is impossible without being a derivative of C++, and C++ is hopeless in the safety department. I believe it is impossible to make C++ memory safe without making it not C++ anymore.

Do you consider D to be a derivative of C++? I don't. It has a near-perfect C++ interface; D can even catch C++ exceptions. It's pretty sweet. You can hear about the black magic Andrei used to get that to work here:

http://cppcast.com/2015/10/andrei-alexandrescu/

Side note: I'm actually thinking the Rust and C++ comparison misses the mark. First of all, C++ cannot be obsoleted by Rust; it's economically impossible. Secondly, I don't even think they compete in the same domain. Rust is actually lower-level than C++, in that it does lower-level stuff in a more natural way. It's C that Rust makes obsolete, which is something C++ totally failed to do (because of its RTTI, exceptions, and other runtime features -- all things you and the rest of the Rust team intelligently scrapped). After trying embedded Rust, I'm completely done using C (and the C parts of C++).

IMO the advantages C++ still has over Rust are:

- libraries, libraries, libraries

    - C++ has so many great libraries

    - *writing* C++ libraries with SFINAE sugar is the bees knees.
- much better support and tooling, but this is for sure a temporary advantage.

- template meta-programming feels like a different language, it's dynamic, functional, and the difference is refreshing. I like the mental switch you have to do to go from writing a C++ template to writing a C++ model.

- as much as header files suck, they let you skim an API much more easily than Rust or Swift.

- performant, memory safe data structures:

For example, it's trivial to write a low-overhead pointer that can't dangle, throws an exception on nullptr dereference, and safely points to an object it doesn't own (even if that object is its owner). With this, and a unique_ptr composed with similar safety mechanisms, you have the primitives needed to write any data structure or algorithm so that it is protected from undefined and unsafe behavior, yet remains performant. This solution has significantly less overhead than using reference-counted pointers like shared_ptr derivatives or Rc. The performance tradeoff over raw pointers is minuscule and totally dwarfed by the gain in safety guarantees. From what I've observed, you can't do this in Rust: if you try to write a doubly linked list, you either have to use C pointers that can dangle inside unsafe blocks, or eat the Rc overhead.

EDIT: How could I forget to mention the glory that is constexpr?

Advantages of Rust over C++ (again, IMO):

- the correct defaults (aka opposite of C's insanity)

- best C++11 features built in and defaulted (move semantics, unique_ptr is Box<T>, Rc is shared_ptr etc..)

- Bad ass embedded capabilities. You don't nearly feel as crippled like you do in C++ without the STL.

- the safety guarantees are real, and they're awesome

- move semantics into closures looks so goddamn elegant. I love it

- inline assembly doesn't feel like a hack

- modules and cargo rock

- secretly and surprisingly similar to ruby in a lot of places :)

- it wants you to think functionally, and unlike Swift, it means it. This is a good thing.


> Do you consider D to be a derivative of C++? I don't. They have a near perfect C++ interface. D can even catch C++ exceptions.

It's not a near-perfect C++ interface, because D can't instantiate C++ templates on its own.

> First of all, C++ cannot be obsoleted by rust, it's economically impossible.

Rust can make C++ obsolete in a technical sense. But I agree of course that C++ is immortal.

> For example, it's trivial to write a low-overhead pointer that can't dangle, throws an exception on nullptr dereference, and safely points to an object it doesn't own (even if that object is its owner)

Wrong. You can't write a pointer that can't dangle in C++. Think iterator invalidation, "this" invalidation, etc. I've given innumerable code samples to prove this in the past.

It's hard to argue about this abstractly, so let's do this. If you give me the C++ code for a smart pointer that you claim is safe, I will construct you a code example showing use-after-free using that pointer.

> From what I've observed, you can't do this in Rust, if you try to write a doubly linked list you either have to use C-pointers that can dangle inside unsafe blocks, or eat the Rc overhead.

Yes, you can't do this in Rust. You also can't do it in C++.

C++ is good at making you think that you're using pointers that can't dangle, as evidenced by posts like yours. It's not good at actually preventing things like use-after-free.

> - template meta-programming feels like a different language, it's dynamic, functional, and the difference is refreshing. I like the mental switch you have to do to go from writing a C++ template to writing a C++ model.

I much prefer the static approach of Rust's generics to the untyped templates of C++--the fact that templates feel like a different language is a disadvantage in my view--but this is a huge tradeoff and it largely comes down to taste. This is unlike the memory safety guarantees of Rust pointers, which are not a matter of taste; safety is a formal property that C++ references simply do not satisfy and cannot satisfy no matter what.


Here's an implementation of <experimental/observer_ptr>, slated for C++17, aka the world's dumbest (and most poorly named) smart pointer.

https://github.com/martinmoene/observer-ptr/blob/master/incl...

I'm using a modified version in a personal library. It throws an exception on a nullptr dereference. I'm not sharing that version here because A. it's not ready, and B. You're definitely a better programmer than me and I don't feel like getting totally shit on today :)

I thought the whole point of the "world's dumbest smart pointer" was that it's automatically nullptr'd for you so it can't dangle. I'm looking forward to hearing why this isn't true. I've actually been brooding over this perceived advantage of C++ for a while now.


    std::vector<std::string> strings;
    strings.push_back("Hello");
    std::observer_ptr<std::string> p(&strings[0]);
    strings.clear();
    std::cout << p; // use after free


That wouldn't compile.

To use the header I linked it should be `nonstd::observer_ptr<std::string> p(&strings[0]);`

You also need to dereference p to use the stream operator.

I'm disappointed you presented such a contrived example. You also deliberately avoided const-correctness, which is a little insidious. You need to const everything manually in C++, so as a good C++ programmer you should const everything as you write it, and only carefully omit const when you need mutability.

We all know C++98 is dangerously unsafe. Let's write your example in C++11.

    #include <iostream>
    #include <string>
    #include <vector>
    #include <memory>
    #include <utility>
    #include "observer_ptr.h"

    // kill the noise
    using std::cout;
    using std::vector;
    using std::string;
    using nonstd::observer_ptr;

    int main() {
      vector<string const> const strings { "Hello" };
      observer_ptr<string const> const p(&strings[0]);
      strings.clear(); // not gonna happen
      cout << *p;
    }

    » clang++ -std=c++11 immutable.cpp
    immutable.cpp:11:5: error: member function 'clear' not viable: 'this' argument has type 'const std::vector<const std::string>' (aka 'const
          vector<const basic_string<char, char_traits<char>,     allocator<char> > >'), but function is not marked const
        strings.clear(); // not gonna happen
        ^~~~~~~
    /usr/local/Cellar/llvm/HEAD/bin/../include/c++/v1/vector:735:    10: note: 'clear' declared here
        void clear() _NOEXCEPT
             ^
    1 error generated.

EDIT: Down voted for legitimate rebuttal. Nice.


You changed the vector from a mutable one to an immutable one. That isn't the same code. What if you need a mutable vector?

Also, consider this:

      std::observer_ptr<std::string const> const foo() {
          std::vector<std::string const> const strings { "Hello" };
          return std::observer_ptr<std::string const>(&strings[0]);
      }

      std::cout << *foo(); // use after free, even with const!
This is how these discussions always go: someone presents an example of use-after-free in C++, someone else says "that's contrived"/"not real C++", and the discussion continues indefinitely. At some point we are going to just start debating "does anyone actually write use-after-frees in modern C++?", which is also a question we have empirical answers to by way of vulnerability databases and bug trackers, and the results don't look good for C++ there either.

If your contention is that observer_ptr is completely safe, then we could have easily added it to Rust. We didn't for a reason: it's unsafe in Rust and unsafe in C++.


Yes, if you make a dangling-pointer factory or introduce global state, then you can break things. Luckily you wouldn't use anything like that to write the safe and performant data structures I'm talking about.

They are protected from undefined behavior, are performant, and are an excellent middle ground between reference counting and raw pointers.

I like Rust I just don't understand how I'm wrong about this single little pro C++ has over it.


> Luckily you wouldn't use anything like that to write the safe and performant data structures I'm talking about.

Are you talking about how you can write data structure abstractions in C++ using unsafe features (like std::observer_ptr) that are themselves safe to use? If so, that's exactly Rust's model! Rust's linked lists, for example, are written using unsafe code internally and present a safe interface.

> I like Rust I just don't understand how I'm wrong about this single little pro C++ has over it.

Because it's not a pro that you can write unsafe code in C++ without saying "unsafe"!


Hijack the term safety, use contrived examples, and slippery-slope arguments all you want. You know exactly the advantage I'm talking about.

It's fine, I get you're very invested in this. I'm gonna go back to making stuff. I like the work you guys are doing and I don't want to give the impression I do not.


It's not "hijacking" the term "safety"--it's having a precise definition of it. It's not a slippery slope to show both examples of unsafety and to show that this lack of safety leads to security problems in the real world.

I understand I sound irritated here, but that's because we went through a lot of work to make Rust safe. It is irritating when people claim that languages that didn't even try to do any of that work somehow did it better than Rust did.


Rust is CLEARLY better at safety than C++. Stop seeking validation for that; everyone who knows about Rust is convinced. I'm sorry, it must be frustrating talking to someone emotionally disinterested in the whole thing. I've just been thinking about C++'s advantage in that particular instance from a software-architecture point of view, and you have kind of failed to persuade me otherwise.

That being said, Rust is definitely a language I will continue to study. I have no doubt it's making me a better programmer.

EDIT: I find it funny that I'm always defending C++ here. C++ is like that kid at school who's not really your friend but you always get stuck doing projects with. You end up spending a lot of time with him, so when you hear someone talk bad about him, you feel obligated to point out that he's actually a pretty alright dude.


It seems you're sort of missing a big point here. You demonstrate how, with sufficient vigilance, you can write safe C++. The goal of Rust is to move the vigilance into the language and make the unsafety opt-in rather than something you constantly have to guard against.


> with sufficient vigilance

So now using const is some kind of difficult task? Not much cognitive overhead there. I thought everyone was onboard with immutability these days.

My point is that safe, low-overhead data structures with non-dangling pointers, protected from nullptr dereferencing, ARE possible in C++ and not Rust. Yes, you have to put const after every type. Call that vigilance if you want. It's a really small price to pay IMO.

Edit: Downvoted again. What's wrong with my point or code? I'm curious.


> So now using const is some kind of difficult task? Not much cognitive overhead there. I thought everyone was onboard with immutability these days.

The fact that if you make everything totally immutable you can write safe C++ is (a) not true; (b) even if it were true, a totally irrelevant point in practice, since not everything can be immutable. "C++ is memory-safe if you don't mutate anything anywhere" is a completely uninteresting claim to the real world.

> My point is that safe low-overhead data structures with non-dangling, pointers protected from nullptr dereferencing ARE possible in C++ and not Rust. Yes, you have to put const after every type. Call that vigilance if you want. It's a really small price to pay IMO.

No, it's completely wrong. You have no protection from dangling references in C++.


> No, it's completely wrong. You have no protection from dangling references in C++.

I'd love to see an example without globals or an undefined-behavior factory function.


A "factory function" is just a function. Unless you have some way of statically distinguishing "factory functions" from other functions, you are now arguing "you can write safe C++ if you make all your data immutable and you don't use functions". Besides the fact that this is not true (consider the destruction order for rvalues), how is this a statement that is possibly relevant to the real world?


> as much as header files suck, they let you skim an API much more easily than Rust or Swift

Just to reply to this individual point, rustdoc is fantastic. Personally I much prefer having an HTML-rendered "header file" for a crate's API. Even if they aren't hosted, it's trivial to generate one yourself using `cargo doc`. To be maximally useful, the author needs to have made doc comments, but at least there's a standard format for that, and you might not have anything like that in a header file anyways.


You can try crates.fyi for hosted docs, e.g.

https://crates.fyi/crates/itertools/0.4.7/


Super cool. I like everything they have going on with the triple-slash comments too. Now all we need is an IDE that can get those docs up in a keystroke!



