Three months of Rust (scattered-thoughts.net)
244 points by abecedarius on June 5, 2015 | 114 comments


> The Rust community seems to be populated entirely by human beings.

:D <3

Regarding your borrow checker example, note that your code is now prone to blowing up if `step` is modified too much. You have created an implicit invariant (`step` must not be popped from the vector) which may be broken by later cleverness.

See http://manishearth.github.io/blog/2015/05/17/the-problem-wit... for more details.

Note that in this specific case you could just use `&str` over `&String` everywhere and push "some new thing" directly; `&String` is a double pointer, whereas `&str` is a fat pointer (pointer plus length).
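
A minimal sketch of the difference (the function names are just for illustration):

    // `&String` points at a `String`, which itself points at the heap bytes:
    // two indirections to reach the data.
    fn takes_string(s: &String) -> usize { s.len() }

    // `&str` is a (pointer, length) pair aimed straight at the bytes, and a
    // `&String` coerces to it automatically.
    fn takes_str(s: &str) -> usize { s.len() }

    fn main() {
        let owned = String::from("some new thing");
        println!("{}", takes_string(&owned));
        println!("{}", takes_str(&owned)); // deref coercion: &String -> &str
        println!("{}", takes_str("a literal works too"));
    }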

> Nor am I totally sure what the tradeoffs are between having a self argument and not.

It's not a tradeoff thing; it's a "do I want a static method, or a member method" thing.

> Some kinds of constraints cannot be used in where clauses, so I believe the former is strictly more powerful.

Actually, where clauses are much more powerful. With inline type bounds, stuff like `A: Foo` works but not `Vec<A>: Foo`; the latter is allowed in where clauses.
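
For example, a quick sketch using `Debug` as a stand-in trait (not from the original discussion):

    use std::fmt::Debug;

    // An inline bound can only constrain the type parameter itself:
    fn print_one<A: Debug>(a: A) {
        println!("{:?}", a);
    }

    // A where clause can constrain an arbitrary type built from it:
    fn print_all<A>(items: Vec<A>) where Vec<A>: Debug {
        println!("{:?}", items);
    }

    fn main() {
        print_one(1);
        print_all(vec![1, 2, 3]);
    }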

Overall, loved reading this post! It identified some areas of diagnostics that we can try to improve (I'm very interested in fixing diagnostics), and is a pretty accurate picture of the language :)


> Note that in this specific case you could just use `&str` over `&String` everywhere

It's an awkwardly construed example. Here is the actual code - https://gist.github.com/jamii/ae46e8e0c9757330e9ea . There the borrow makes more sense since the value is being created by calling a function on the current solver state.

I've edited the post to include a solution that was suggested in the reddit discussion - replace `&'a Value` with `Cow<'a, Value>`.
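
Roughly what that looks like (a sketch; this `Value` and function are invented, not the post's actual code):

    use std::borrow::Cow;

    #[derive(Clone, Debug)]
    struct Value(String);

    // A Cow can hold either a borrow into existing state or a freshly
    // created owned value, so both cases share one return type.
    fn lookup_or_make<'a>(values: &'a [Value]) -> Cow<'a, Value> {
        match values.first() {
            Some(v) => Cow::Borrowed(v), // no copy, just a borrow
            None => Cow::Owned(Value("some new thing".to_string())),
        }
    }

    fn main() {
        let values = vec![Value("existing".to_string())];
        println!("{:?}", lookup_or_make(&values));
        println!("{:?}", lookup_or_make(&[]));
    }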

> It's not a tradeoff thing; it's a "do I want a static method, or a member method" thing.

> Actually where clauses are much more powerful.

I really don't understand this part of the language yet, but...

I linked to an active rfc about constraints that cannot currently be expressed in where clauses (https://github.com/rust-lang/rfcs/blob/master/text/0135-wher...), including the example you gave.

In cases where a function takes two arguments it isn't always obvious which argument should be self. If that affects whether constraints end up being `A: Foo<B>` or `B: Foo<A>` or `where Foo<A,B>`, it seems like one could run up against those limitations?


I think you may have misread that RFC. With my emphasis:

> Here is a list of limitations with the current bounds syntax that are overcome with the where syntax:

The list is of the limitations of normal bounds, not the limitations of where clauses. This was actually the RFC that added where clauses, and that list was the rationale for doing so.


Oh... well that's embarrassing.

So, I half-remembered the actual problem I ran into and found something that half-looked like it mentioned it. Not my finest hour :S

I dug up the IRC exchange for the problem I actually ran into:

    jamii
    How do I write the type of a byte iterator:
    fn next_path<N: Iterator>(nibbles: &mut N) -> u32 where <N as Iterator>::Item = u8 {
    That gives me 'equality constraints are not yet supported'
    FreeFull
    jamii: <N: Iterator<Item = u8>>
Equality constraints still seem to be unimplemented (https://github.com/rust-lang/rust/pull/22074), but I can write this instead:

    where N : Iterator<Item=u8>
So that whole section of the post is incorrect. I've removed it and linked to this discussion instead.
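
For completeness, the fixed signature compiles like this (the body is just a placeholder to make the sketch self-contained):

    // "N is an iterator over bytes", written as an associated-type bound:
    fn next_path<N>(nibbles: &mut N) -> u32
        where N: Iterator<Item = u8>
    {
        nibbles.next().map_or(0, |b| b as u32)
    }

    fn main() {
        let bytes = vec![1u8, 2, 3];
        println!("{}", next_path(&mut bytes.into_iter()));
    }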


I'm the author of the above pull request (as well as other pieces of where clauses). Unfortunately, there are some internals that need refactoring before we can complete equality constraints, and they weren't the highest priority before 1.0, especially since you can encode a trait that acts in the same way. I'm starting at Mozilla for the summer next week and hope to fix a couple of the outstanding issues around them and associated types.


> There the borrow makes more sense since the value is being created by calling a function on the current solver state.

Agreed. And yes, a Cow is better in this case though of course there's a slight runtime cost. In such a case I would heavily document the new invariants with warning comments though :P

> I linked to an active rfc about constraints that cannot currently be expressed in where clauses

Those are the constraints that cannot currently be expressed in bounds, not where clauses. Bounds are the stuff within <>, where clauses were the new thing proposed to supplement them.

> In cases where a function takes two arguments it isn't always obvious which argument should be self.

I'm not sure why you seem to be troubled by self (here and in the post). It might be due to you looking at method dispatch as a sugar for function calls -- it sort of is (because of UFCS), but it really isn't.

The second argument can never be self. It's always the first. (nitpick: functions can't have self args, methods can, but that's just terminology).

For direct impls, you write a method without `self` when you want it to be a static method. E.g. `impl Foo { fn x() {} }` gives a method called as `Foo::x()` which works independently of any instance of the type. `impl Foo { fn x(self) {} }` needs to be called as `foo.x()`, where `foo` is a `Foo`; here the method is able to access the state of `foo`.
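
Spelled out as a compilable sketch (the names are purely illustrative):

    struct Foo {
        n: u32,
    }

    impl Foo {
        // No `self`: a static method, called as `Foo::x()`.
        fn x() -> u32 {
            0
        }

        // Takes `self` by reference: a member method, called as `foo.y()`.
        fn y(&self) -> u32 {
            self.n
        }
    }

    fn main() {
        let foo = Foo { n: 42 };
        println!("{} {}", Foo::x(), foo.y());
    }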

The behavior so far is the same as in most languages like Python or Java (Java has an implicit self and uses `static` to say "no self").

Now, traits just let you provide a way to classify objects based on what methods -- both self and nonself/static -- they have. So, a trait `Cloneable` would be `trait Cloneable { fn clone(&self) -> Self }`, and its implementation would be `impl Cloneable for Foo { fn clone(&self) -> Foo { .... } }`. This would mean "given a `Foo`, I should be able to get another `Foo` out of it by calling `foo.clone()`". On the other hand, sometimes you want to run an operation on the type itself, e.g. `trait TypeName { fn type_name() -> &'static str }`, implemented as `impl TypeName for Foo { fn type_name() -> &'static str { "Foo" } }`. In this case, it's no longer "given a `Foo`, do X"; it's "given a type that implements TypeName, do X".
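
The same two kinds side by side, as a sketch (`clone` is renamed `clone_it` here only to avoid colliding with the standard `Clone` trait):

    // A trait with a `self` method: "given a Foo, get another Foo out of it".
    trait Cloneable {
        fn clone_it(&self) -> Self;
    }

    // A trait with a non-self method: an operation on the type itself.
    trait TypeName {
        fn type_name() -> &'static str;
    }

    struct Foo;

    impl Cloneable for Foo {
        fn clone_it(&self) -> Foo { Foo }
    }

    impl TypeName for Foo {
        fn type_name() -> &'static str { "Foo" }
    }

    fn main() {
        let foo = Foo;
        let _copy = foo.clone_it();       // dispatches on the value
        println!("{}", Foo::type_name()); // dispatches on the type
    }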

Generally nonself methods on traits don't make much sense unless you're doing type gymnastics (in fact, Java doesn't even allow static methods in interfaces). For example, they're used in hyper for dynamic dispatch on strongly typed headers (http://hyper.rs/hyper/hyper/header/trait.Header.html)

Also, `where Foo<A,B>` doesn't make sense, where clauses need a colon; `where Something: SomethingElse`.


> Those are the constraints that cannot currently be expressed in bounds, not where clauses.

Yeah, I got that whole thing totally wrong. I've removed that part of the post and linked to this discussion instead. Thank you for de-confusing me :)

> The second argument can never be self. It's always the first.

I think we are talking past each other on this point. I'm thinking of the design-time choice between eg:

    trait Observe<Observee> {
      fn observe(self, observee: Observee);
    }

    trait Observe<Observer> {
      fn observe(self, observer: Observer);
    }

    trait Observe<Observer, Observee> {
      fn observe(observer: Observer, observee: Observee);
    }
Analogous to the typical OO problem where it's not clear which class a method should belong to.

Because I mistakenly believed that where clauses are less powerful, I thought that the above choice had additional significance beyond code organisation, because it would affect the kind of constraints I could write. But they aren't, so it doesn't matter :)


Oh, I get it now.

Yeah, any of these choices will work^. Well, the last choice shouldn't be a trait, really, just a standalone function.

Associated types would also help simplify this. (Note: if you have a trait `Foo<A>`, the trait can be implemented multiple times on the same type, with different `A`. If you have a trait `Foo` with an associated type `A`, only one implementation is allowed; the associated type is a property of the implementation.)

^ In more complicated situations coherence may disallow one or more of those choices.
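
To make the associated-type point concrete, a sketch (the `Celsius` type and both traits are invented):

    struct Celsius(f64);

    // Type-parameterized trait: one type may implement it many times,
    // once per choice of `A`.
    trait ConvertFrom<A> {
        fn convert_from(a: A) -> Self;
    }

    impl ConvertFrom<f64> for Celsius {
        fn convert_from(a: f64) -> Celsius { Celsius(a) }
    }

    impl ConvertFrom<i32> for Celsius {
        fn convert_from(a: i32) -> Celsius { Celsius(a as f64) }
    }

    // Associated type: each implementation fixes exactly one `Output`.
    trait Doubled {
        type Output;
        fn doubled(self) -> Self::Output;
    }

    impl Doubled for Celsius {
        type Output = Celsius;
        fn doubled(self) -> Celsius { Celsius(self.0 * 2.0) }
    }

    fn main() {
        let c = Celsius::convert_from(20.0f64);
        let d = Celsius::convert_from(20i32);
        println!("{} {}", c.doubled().0, d.doubled().0);
    }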


> the trait can be implemented multiple times on the same type, with different A

That was the intention - a function that dispatches on the type of both arguments (it probably shows that I secretly think of traits as typeclasses). I'm still trying to figure out what the implications of the above choices are.

> In more complicated situations coherence may disallow one or more of those choices.

I hadn't thought of that. If the trait is in another crate, does coherence require that the self type is in this crate, or that all the types are in this crate? How do the coherence rules work if I don't have a self type?


Traits are typeclasses :)

Uh, the coherence rules are complicated and I forgot them. It's a mixture of where the impl, type, trait, and type parameters are.


In general, the coherence rules stop you from defining an implementation that could possibly be defined elsewhere.

The important takeaway implications are:

- you cannot implement a trait defined in another crate for a type defined in another crate, unless the trait is parametrized by a type from the current crate

- you can't define overlapping implementations
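
A sketch of how that plays out in practice (the local names are invented):

    // Implementing a foreign trait for a foreign type is rejected by
    // coherence (the "orphan rule"):
    //
    //     impl std::fmt::Display for Vec<u8> { ... }   // error
    //
    // But it's fine if either the trait or the type is local:
    struct Local;

    impl std::fmt::Display for Local { // foreign trait, local type: OK
        fn fmt(&self, f: &mut std::fmt::Formatter) -> std::fmt::Result {
            write!(f, "Local")
        }
    }

    trait MyTrait {} // local trait, foreign type: OK
    impl MyTrait for Vec<u8> {}

    fn main() {
        println!("{}", Local);
    }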


So the coherence rules don't treat self specially? That would be good to know.


Is the Rust community really made of humans? I always thought it consisted mostly of crustaceans.


I think the term I've seen is Rustaceans :)


Crustaceans are those coming to the language from C.


The take-away for me is this:

"Despite the restrictions of the type system, I am more productive in Rust than I am in either Javascript or Haskell. It manages somehow to hit a sweet spot between safety and ease of use."

When I toyed with Rust last year (so, I'll admit my knowledge is outdated, I need to refresh it), I had a pleasant experience on the productivity side. The big reward for me, coming from C/C++, is that my programs simply worked as expected once I was past fixing all the errors the compiler reported. It usually doesn't work that way in C/C++, where you spend additional time fixing whatever null derefs and whatnot break your program in subtle ways at runtime.


Doesn't using C++ RAII eliminate some of that? Or do you use raw pointers often?


Anything in C++ generally comes with its very own footgun.

The one for RAII looks like this:

  {
    mutex_guard(some_mutex);
    foo();
  }
What does this do? It locks some_mutex, then immediately unlocks it, then calls foo().


I don't understand the example. Why would it do that and not unlock after foo()?


The following code would do the right thing; see if you can spot the difference, and think about whether you would catch that in code review (g++/clang++/visualc++ won't warn about it).

  {
    mutex_guard guard(some_mutex);
    foo();
  }


Ah, now I see it and it makes sense.

The answer to the code review question is obvious. I didn't see the error even though I was told what happens.


I guess you meant std::lock_guard.

Anyway, the first example creates a temporary object which doesn't lock anything since it will be destroyed right away because it's an rvalue (not sure why anyone would do that for locking, unless unknowingly). The second example creates a proper guard which lives until the end of the scope.


Rust uses RAII too. But borrowing makes it much more powerful.


Yeah, Rust controls much more than C++ RAII does.
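
For contrast, a minimal sketch of the same pattern in Rust, where the locked data is only reachable through the guard, so the unnamed-temporary footgun above has no direct equivalent:

    use std::sync::Mutex;

    fn main() {
        let counter = Mutex::new(0u32);
        {
            // The data is only accessible through the guard; when the
            // guard goes out of scope, the lock is released.
            let mut guard = counter.lock().unwrap();
            *guard += 1;
        } // unlocked here
        println!("{}", *counter.lock().unwrap());
    }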


"For our 2400 loc it takes 20s for a dev build and 70s for a release build. "

I have played with Rust, but not written any large amounts of code. This makes me a bit sad though: I have 7000 lines of Go which builds in less than a second. I think there is a bunch of bloat in software compilation which the plan9/Go people were wise to stamp out.

Compare gcc/clang/rustc build times from source with building Go 1.5 from source, which bootstraps itself. It comes down to something like 20 minutes vs 20 seconds.


The advantage, of course, is that LLVM has probably over a hundred man-years of optimizations in it, including two IRs (SSA LLVM IR, SelectionDAG), tons of target-specific optimizations (ScheduleDAG, Machine-level optimizations), great tooling (e.g. DWARF support, Address/Thread Sanitizer), and lots of community support. None of it is bloat as far as Rust is concerned. I optimize bare-metal Rust code nearly every day and if I had to spend time fixing suboptimal codegen due to missing optimization passes in the compiler I'd never get my job done. Especially since I work on ARM, where bad codegen punishes you hard.

There's certainly a lot of room for improvement in Rust's build times, but LLVM was completely indispensable for the language to get off the ground, and I don't regret using it for a minute.


What would benchmarking say the slowest part of rustc is? Typechecking/semantic analysis, LLVM, or something else?

LLVM is a great innovation when it comes to making new languages from scratch. High performance will hopefully be one of Rust's strong points, so it's good to have so many companies working on LLVM performance for free.


On debug builds, it's about evenly split between typeck/borrowck and codegen (including LLVM IR construction and LLVM passes). On release builds, LLVM optimization time tends to dominate everything.

Niko Matsakis is actively working on improving typechecking time--there should be plenty of tricks we can try once we have enough data as to the lowest-hanging fruit. Felix Klock and others are working on reducing the amount of LLVM IR we generate to speed up codegen. There have also been designs for incremental and parallel compilation that will be long-term but have potential for big improvements down the road.

I think with careful work we can get down to a good edit/compile/run experience. Compilation speed probably won't ever be as good as that of a direct-to-assembly compiler for a language with an extremely simple type system and no real IR for optimization, but I don't believe there's a magic bullet that will beat LLVM's compiler performance while providing excellent optimization, and for Rust's goals we need the runtime performance.


I think the big point is that even if it takes a few seconds more, it is still a big win compared with C and C++ builds, which are usually measured in hours.

But I guess it is a blocker for those used to programming languages with traditional interpreter implementations.

My biggest complaint when trying out Rust was the C++ build times of the bootstrapping process. As for using pre-compiled Rust, I think the compile times are pretty acceptable for 1.0.


> C and C++ builds, usually measured in hours.

Do you have a source or a story about this? I hadn't heard numbers like that before.


Is rustc able to run typechecking without doing codegen?


`rustc .... -Z no-trans`. kibwen is working on a mode where rustc can generate crates with just metadata, and use it for a `cargo check` mode.


The slowest part is LLVM. (You can pass "-Z time-passes" to rustc to get detailed timing info.)

This isn't just an "LLVM is slow" problem, though. It's also a "rustc generates extremely verbose LLVM IR" problem. Optimizing the IR before it gets to LLVM is part of the plan for solving it.


It's also a "let's rebuild everything every time" problem. Hopefully, that will be fixed with support for building incrementally.

(Also note that compiling a C++ file with 2000 lines can take a very long time too)


You must not forget that, thanks to the preprocessor, what looks like 2000 lines may in fact be around 200k lines.

E.g. I have a very innocent 2000-line MainWindow.cpp that weighs 244419 lines when measured with `gcc -E -o - | wc -l`.


Good point, it really is quite staggering what a C++ compiler has to digest to compile a standard "Hello, World!". Hopefully the much-anticipated module system will help here. Generally I find that avoiding Boost libraries (including std::regex) helps enormously ;)


> What would benchmarking say the slowest part of rustc is? Typechecking/semantic analysis, llvm, or something else.

I feel like there are some projects out there that trigger things with bad runtime complexities in Rust. I had to stop using the piston image library because compiling it takes 15 seconds every time, which I'm not up for.

Compiling racer currently needs 2GB of RAM for no good reason.

So I'm pretty sure there is ample room for optimizations.


If I remember properly, when Go was first developed, compilation time was one of the primary metrics that Rob Pike et al were optimizing for, and it drove major aspects of the language's design. It shouldn't be surprising that Go blows other systems out of the water in this regard.

Here he is talking about it:

https://www.youtube.com/watch?v=rKnDgT73v8s#t=8m53


My point is that you pay a price for this: LLVM's optimization passes are much, much more sophisticated than those of the Plan 9 toolchain. In optimized builds of Rust, the LLVM optimization and compilation time tends to dominate, so having a simpler type system wouldn't really help.

You could have a more C-like language that isn't so dependent on expensive optimization passes like multi-level inlining and SROA, granted, but I think most any high-level language—i.e. one that isn't a bare-metal language like C and Pascal—is going to have lots of opportunity for expensive optimizations.


If optimization is the problem, then compilation at Go speed should be possible with -O0.


Just having the ability to perform such optimizations requires an architecture that is sure to have some overhead no matter which optimizations, if any, are actually executed.


Rust could still have a toolchain like DMD devoted to fast compilation with minimal optimization. It just doesn't, yet (and likely won't for quite some time, since the present advantages of having a single toolchain are fairly significant and Rust doesn't have a formal specification yet).


There are C compilers out there that work like this (like, say, the Plan 9 C compiler) and they rarely ever get used in practice, because GCC and clang -O0 are fast enough. I think making a second compiler backend just to eliminate the IR generation step isn't going to be the solution we want in the long run.


I think they are only fast enough because developers have not been exposed to better. In general, I would like to see more aggressive build modes for CI servers, and less aggressive modes for dev.


Also, I'm confused by the word "aggressive" here. Could you elaborate please?


Aggressive optimization. I just mean on a build server time isn't as much of a factor as local development.


Multirust makes working with multiple toolchains on *NIX pretty great, but no formal spec can be a pain. Ruby doesn't have a spec, yet has multiple implementations, so it's not the _end_ of the world...


> It shouldn't be surprising that Go blows other systems out of the water in this regard.

For any developer who never used Turbo Pascal, Modula-2, or Oberon compilers, just to cite a few examples among many possible ones.

Those who did cannot comprehend why companies invested in languages like C and C++, which created this notion that all compilers should be slow.


Oh yes, Turbo Pascal (and even Delphi / Object Pascal) had a really fast compiler. Sigh... those were the days...


C++ isn't so bad either. 2400 loc is a really dinky program; we have a 10,000 loc C++11 program which takes 10s for a full rebuild, and typical rebuild times are about a second. I don't know how Rust handles incremental rebuilds, but that kind of build time feels excessive.


Other comments have mentioned LLVM (and its optimization passes) as a reason for slow compile times; just for comparison, using LDC (the D compiler with an LLVM backend) on an older computer (1.6 GHz Athlon 2650e, 2 GB RAM, spinning rust), a release build at -O2 takes about 35 seconds for 10,000 sloc.


> I think there is a bunch of bloat in software compilation which the plan9/Go people were wise to stamp out.

Be more specific.


The Linux kernel takes 20 minutes to build on my workstation; a Plan9 kernel takes something tiny like 60 seconds on a Raspberry Pi. This is due to a few reasons.

1. Plan9 C does not allow headers to include other headers, which speeds up compilation, as there is far less useless preprocessor churn.

2. Plan9 C/Go does not do heavy optimisation, but does a decent job.

3. The system has less code overall. Something like 1 million lines of code for something like gcc seems like bloat to me; they don't remove useless and dated features as fast as they pile new ones on.

The whole Go compiler builds from source for me before the gcc configure script has completed.


> 2. Plan9 C/Go does not do heavy optimisation, but does a decent job.

That's not good enough for Rust.

> 3. The system has less code overall. Something like 1 million lines of code for something like gcc seems like bloat to me; they don't remove useless and dated features as fast as they pile new ones on.

Those GCC optimizations matter. You simply cannot get competitive performance out of an AST-to-assembly compiler plus peephole optimizations on the assembly anymore. SROA, SCCP, loop-invariant code motion, algebraic identities, vectorization, aggressive alias analysis, etc. are all very important.


Imo compilers should only optimize specific functions which are known to be code hotspots.


That will result in very poorly-performing code. I liked DannyBee's quote (paraphrased): "Most applications have flat profiles. They have flat profiles because people have spent a lot of time optimizing them."


There are times when squeezing out every last bit of performance is great. What I am saying is that in general there is no need to run -O3 on a 500-line function which is run once at initialization, compared to a compute kernel. Our compilers and languages don't currently have the granularity to specify this.

I do want more specialized optimization tools however. Things like automatic superoptimizers [1] which can be targeted by programmers with special directives.

[1] http://theory.stanford.edu/~sbansal/superoptimizer.html


According to a quick Google search, gcc, clang, and msvc all allow turning optimizations on/off at a function level, via #pragma/attributes?


interesting!


Well in that case... if you consider optimizing non-hotspots as "bloat"(?) then you are coming at this from a whole other viewpoint than most of us who have embraced optimizing compilers. :-)


1) seems like a language fault and not a compiler fault.

2) heavy optimisation is not "bloat".

3) "seems like bloat" is a too vague argument. Fine enough point about them supposedly not removing useless features.


"Modern machines are a huge pile of opaque and unreliable heuristics and the current trend is to add more and more layers on top. The vast majority of systems are built this way and it is by all accounts a successful strategy. That doesn’t mean I have to like it."

This is a really valuable observation.

"Smart" compilers seem great for letting you write code without thinking too hard when performance requirements are loose, but they make it difficult to achieve peak performance in two ways:

1. The fact that details of the machine are abstracted away from you means that you may never learn them well.

2. It ends up being insufficient just to know the details of the machine, because you also need to know how to coax the compiler into producing the low level result that you want, and then you need to be vigilant that later changes to the compiler don't break your assumptions.


I think the opposite is true. Modern hardware is so complex[1], made even more so by its constant interaction with a complex OS, that any sense of familiarity with the actual performance model is illusory, unless you're doing something very controlled and very specific (like, say, DSP). Modern hardware itself is an abstraction, hiding its operation away from you. We can no longer hope to tame hardware with meticulous control over instructions as we were able to up until the nineties.

Forget about clever compilers; forget even about smart JITs; even if you look at such a big abstraction as GCs and only consider large pauses (say anything over a few tens of milliseconds), it is now the case that in a well-tuned application using a good GC, most large pauses aren't even due to GC, but to the OS stopping your program to perform some bookkeeping. Careful control over the instruction stream doesn't even let you avoid 100ms pauses, let alone trying to control nanosecond-level effects.

[1]: http://www.infoq.com/presentations/click-crash-course-modern...


Most of us don't have the time for meticulous control over instructions but those who do can certainly use it to good effect eg http://www.reddit.com/r/programming/comments/hkzg8/author_of...

My aversion to piles of opaque heuristics is not because I'm against smart compilers, just that for certain projects I want to form a mental model of what code I should write to get a certain effect. The trend of modern languages with heavy heuristic optimisations or complex JITs is towards less certainty and less stable optimisations, so that a program that runs fine today might be unusably slow tomorrow.

Staging and compiler-as-a-library is a promising compromise for projects that really care about stable performance eg http://data.epfl.ch/legobase . You can still have an LLVM-smart compiler underneath but you get to make the first pass.

Rust is actually very predictable in some respects eg generic functions will be monomorphised. I prefer it to wrangling GHC or the V8 JIT.
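
A small sketch of that predictability (the function is invented for illustration):

    use std::ops::Add;

    // A generic function like this...
    fn twice<T: Add<Output = T> + Copy>(x: T) -> T {
        x + x
    }

    fn main() {
        // ...is compiled ("monomorphised") into separate machine code for
        // each concrete type it's used with, with no dynamic dispatch:
        let a = twice(2i32);   // instantiates twice::<i32>
        let b = twice(2.0f64); // instantiates twice::<f64>
        println!("{} {}", a, b);
    }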


> I want to form a mental model of what code I should write to get a certain effect

And how do you do that with hyperthreading, virtual memory, power management that may decide to power down your core because what you're doing doesn't seem important enough (and that differs greatly from one processor to another) and cache effects on code, data and TLB (all are strongly affected by other threads and processes running on your machine[1])?

While those effects didn't exist much before the 90s, and they don't exist today in GPUs and small embedded devices, on desktops and servers those effects may be much greater in magnitude than any difference you're able to get by better control over generated code. Not running a hypervisor, turning off virtual memory, pinning threads and isolating cores have a much more profound effect on predictability than which language or compiler you're using. Focusing on compilation before taking care of those much more powerful sources of unpredictability is like trying to get a faster car by reducing the weight of the upholstery fabric.

> so that a program that runs fine today might be unusably slow tomorrow.

I think that slowdown actually applies to assembly programs much more than to, say, Java. As CPU architecture changes, it's actually easier to keep higher-level code performant. I mean, why do you assume that compiler changes will hurt your code performance more than CPU changes?

> You can still have an LLVM-smart compiler underneath but you get to make the first pass.

There are many ways to produce good machine code (my favorite is Graal, HotSpot's next-gen JIT), but none of them really give you a good mental model of what's going on. You may like one approach over another for personal aesthetic reasons, one approach may actually produce better results for some workloads than others, and some approaches really are more predictable -- but no approach produces categorically predictable results, and more predictability doesn't buy you better performance (though it still requires more effort).

It used to be that if you knew what instructions your compiler would emit, you knew how your program would perform. That is just no longer the case (well, it is to some degree, but other effects are stronger). A single instruction may perform anywhere within 7 orders of magnitude (L1 cache hit to virtual memory miss) depending on effects outside the program's control! (of course, those high-volatility costs are usually amortized, but so is a less unpredictable compiler output).

[1]: That is the key to cryptographic attacks that let a process sense what a cryptography algorithm running in another process is doing by the way the cryptographic computation affects the performance of the first process.


I think you are taking a very black and white point of view. Yes, hardware is complex and unpredictable. That doesn't mean that we can't reason at all about performance.

I take a program, measure its performance on a wide range of real-world workloads across multiple different machines. Then I change some numeric routine to use unboxed integers instead of boxed integers. I measure it again on a wide range of real-world workloads across multiple different machines and find that it is significantly faster in all cases. My approximate mental model of how the machine works allowed me to make a change that empirically improved performance. My model is not perfect so I do have to measure carefully, but it is what allows me to make sensible decisions about which changes to measure rather than just changing things at random.
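
In Rust terms, the kind of change described might look like this (a sketch; the routine is invented):

    // Unboxed: the integers sit contiguously in the slice.
    fn sum_unboxed(xs: &[i64]) -> i64 {
        xs.iter().sum()
    }

    // Boxed: one pointer chase per element, scattered across the heap.
    fn sum_boxed(xs: &[Box<i64>]) -> i64 {
        xs.iter().map(|b| **b).sum()
    }

    fn main() {
        let unboxed: Vec<i64> = (0..1000).collect();
        let boxed: Vec<Box<i64>> = (0..1000).map(Box::new).collect();
        assert_eq!(sum_unboxed(&unboxed), sum_boxed(&boxed));
    }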

In a language where the compiler controls unboxing, my mental model is much more approximate. I have to figure out how to influence the heuristics to lead them into making the correct choice, and the solutions tend to be hacks that are highly sensitive to small changes to the heuristics, leading to conversations like https://groups.google.com/forum/#!topic/clojure/GvNLOrN3lGA .

Performance for non-tuned code may be better on average but my ability to tune important areas is reduced. If the compiler was more predictable, or had an interface that allowed me to add information, or if I could make my own passes, then that trade-off would go away. I'm not against smart compilers, I'm against smart compilers that don't talk to me.


> I'm not against smart compilers, I'm against smart compilers that don't talk to me.

There are some extremely interesting advances in that area in OpenJDK. Java 9 will contain two relevant changes. The first, JEP 165[1] (fine-grained and method-context dependent control of the JVM compilers), lets you control compilation with metadata depending on context (e.g. inline method foo when called from bar); a much more interesting and powerful enhancement targeted for Java 9 is JEP 243[2] (Java-Level JVM Compiler Interface). It will do the following:

* Allow the JVM to load Java plug-in code to examine and intercept JVM JIT activity.

* Record events related to compilation, including counter overflow, compilation requests, speculation failure, and deoptimization.

* Allow queries to relevant metadata, including loaded classes, method definitions, profile data, dependencies (speculative assertions), and compiled code cache.

* Allow an external module to capture compilation requests and produce code to be used for compiled methods.

This opens the door to what I think is the most impressive compiler of the last decade, and a true breakthrough in (JIT) compiler design: Graal[3]. Graal supports languages of any level (it already has frontends for Java, C, Ruby, Python, R and JavaScript), and then allows complete control over code generation and optimization decisions at runtime. E.g. you tell it what kind of speculations to make, and it tells you which speculations failed. Unlike LLVM, you compile your language into a semantic AST (that may or may not match the language's AST) and feed it to Graal, but each node may contain not just semantics but instructions on speculation and code-gen control at any level you wish. During compilation, Graal interacts with the node and the node gives further instructions. As I understand it, JEP 243 will allow to plug Graal into the standard OpenJDK HotSpot (though at reduced speed), until Graal matures enough to become HotSpot's default compiler.

So what Graal will do is let the developer (if the language designer allows), write simple, high-level code, but tell the compiler, "listen, compile however you like, but when you get to this function, talk to me because I have some ideas on how to compile it just right".

[1]: http://openjdk.java.net/jeps/165

[2]: http://openjdk.java.net/jeps/243

[3]: https://wiki.openjdk.java.net/display/Graal/Publications+and...


Thanks, that is really interesting. I'll have to look into it.


And yet I consistently find that checksum tools, compression libraries, and things like video decoders (such as H264 decoders) written in assembly outperform all other implementations I've had to deal with. "Sufficiently smart compiler" is a tired meme at this point. There are few programs that benefit from being entirely written in assembly, but quite a lot benefit from having parts of them hand-optimized. Some, like game emulators, particularly one-man jobs like No$GBA, are still fully written in assembly and their performance is a sight to behold. No$GBA would lose a lot if it were rewritten in a high-level language.


> and things like video decoders

That's precisely the example I gave. Although many modern decoders use GPUs, which are much simpler than CPUs (simpler even than 90s era CPUs). The GPU performance model is very simple to comprehend.

> No$GBA would lose a lot if it were rewritten into a high level language.

That's a nice sentiment, but I don't think it is supported by the facts. You could probably write a JIT in Python that would perform much, much better (but that would be overkill, given that you're emulating a very slow, very small machine), and a trivial implementation in Java would probably perform just as well.

The ability to achieve significantly better performance for general-purpose tasks (let's call that "branchy code") with low-level languages today is more myth than reality. What is true is that some high-level languages consciously give up some performance to make development easier, but that's a design choice. That's not to say that optimizing JIT and AOT compilers get everything right -- they don't -- but they get it right often enough that they're very hard to beat.


+1, at the very least, because of the nod to Terra. Terra, imho, feels a lot like the perfect middle-ground between Lua and Rust. It has a clean syntax with some handy/fancy features but keeps a simple static typing system that makes me feel comfortable.

Also, nice to read a review of Rust that didn't reduce to "C is the worst evar!" or "Haskell makes no sense!". Reasoned and clearcut. I certainly disagree with various parts of the review, but I don't do any dev for webservices so it is unsurprising that the author and I have a difference of opinion.


I would be interested to hear which parts you disagree with.

> I don't do any dev for webservices so it is unsurprising that the author and I have a differing of opinion

I don't either, I'm working on a database / language runtime. It happens to have a html interface rather than a console interface, but the majority of the work is very far away from normal web-dev :)

> Terra, imho, feels a lot like the perfect middle-ground between Lua and Rust

In particular, I really like the idea that the Terra type-system is just Lua code, so you can have different kinds of static analysis in different places instead of one-type-system-to-rule-them-all. Of course, it's a totally unproven idea at this point so it's hard to say how that would turn out in a real project.


Great to see some feedback on Rust. I've only played with it a bit but was quite impressed, despite being a mostly high-level programmer.

However, it sounds like Eve is a simple, dynamically typed programming language/environment. So it's super weird to me to see him rave about the safety and type system of Rust...


Right tool for the job. Static typing works well for systems software. Eve is aimed at scripting / knowledge work, where you are mostly manipulating collections of messy data and it's useful to be able to start with a loosely specified program and only nail it down with types once it settles down. Imagine a relational database where you can vary between setting every column to Any and not caring about relationships, or strictly typing everything and adding integrity constraints all over the place.


Coming from C#, that syntax looks completely alien to me. I need to write a very small monitoring app to run on a tiny armel box so I may try Go. I will still miss Visual Studio's debugger, though.


When I tried to write my first program in Rust, I failed miserably. With a strong background in C# as well, I tried to write a function that returns an interface (a trait in Rust), which was apparently not something you do in Rust. (With an unboxed trait that is, but there are some proposals to add support for this.) The compiler diagnostics have improved tremendously since then. I don’t think the syntax is that much different. You use `let` instead of `var` and types go after the argument instead of before.

After some time I got the hang of it, and I am sure you can do it too. It is surprising how much I _didn’t_ miss the Visual Studio debugger. The VS debugger is amazing and in my opinion there is no debugger that comes close, but Rust has a much higher “if it compiles it works”-factor than C#. There are still times where I wish for a debugger like that of Visual Studio, but this is relatively rare. Instead of spending time debugging, you spend your time staring at compiler errors in denial until you eventually accept that the compiler is right and your code is wrong.
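
For the returning-a-trait point above, the boxed version looks roughly like this (a sketch; `dyn Trait` is the modern spelling, and unboxed `impl Trait` returns arrived later):

    trait Greeter {
        fn greet(&self) -> String;
    }

    struct English;

    impl Greeter for English {
        fn greet(&self) -> String {
            "hello".to_string()
        }
    }

    // Returning a boxed trait object works; returning an unboxed trait
    // was only a proposal at the time of this thread.
    fn make_greeter() -> Box<dyn Greeter> {
        Box::new(English)
    }

    fn main() {
        println!("{}", make_greeter().greet());
    }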


Apparently Visual Rust [1] is a thing now. VS2015 will support GDB as a debugger, too (!)

[1]: https://github.com/PistonDevelopers/VisualRust


Go is shit. Trust me, Rust will suit you better. Coming from C#, it will be a lot closer to what you're used to than the bizarro-world of Go.


Nonsense. Go is fine; so is Rust.


Both are really great. I think Go is better for servers, and Rust can be great for embedded and high-performance applications.


I've been tempted to try getting Rust going on some of the small ARM micros I've got lying around. The memory safety would be a tremendous boon for a lot of development there, same with the ownership/lifetime management. I think I'd probably have to strike most of the standard library, but it'd still be really useful.


For me, I keep seeing buffer overflows in router management stacks. It seems like Rust would help protect routers from attack, while allowing low footprints.


Yeah, that's a common one I've hit in a bunch of microcontroller things I've done. And there it's even more dangerous, because you'll overflow the buffer and change something, and never have ANY idea you've done it until the consequences have happened. I ran into quite a few of those while setting up an allocator to get Lua running in <30k of RAM without any floating-point math.


Rust is likely to be popular for embedded work, where debugging is hard and crashing is often not tolerated.

Go is primarily for server-side web applications. That's Google's main business area, that's what they built it for, and that's where all the libraries are well-debugged.


The truth is that Rust is in its infancy; it's all guesses at this point. Go didn't attract C++ people as people first guessed.


I saw them talking about that, but it's completely obvious.

Nobody today is running C/C++ unless they need either A) complete speed or B) bare metal. Go can't do either, so there was going to be very little transfer from C/C++ to Go.


C++ is still mostly what I use, but I need neither speed nor bare metal. Maybe add C) people doing UI and applications?

There aren't many other good options: Java: UI looks terrible; it's annoying living in Noun-land when CPUs are, if anything, more about verbs than nouns; and running properly on Windows is not trivial.

C#: Up until recently, not cross-platform unless you're ok with Mono. (But, might be worth looking in to now)

Python: great for scripts and experiments. The lack of static typing is a real problem when working with code other people wrote. Also, UI is not a strong point.

Go: what UI libraries? (I'm sure they exist)

Objective-C: even on iOS/Mac-only projects, I still prefer C++, because it's so much easier to use STL objects than NS data structures.

C++: you can pretty much get it done, and often you can use Qt to get it done quickly and cross-platformy.

I'm hoping Rust will be the C++ I always wanted.


Most GUI toolkits for Rust are in their infancy, but I hear that Gnome will be working on integrating Rust with GObject this year, which should allow Rust to leverage Gtk:

https://wiki.gnome.org/GUADEC/2015/BOFs/Rust

Some preliminary work towards this:

https://github.com/gi-rust/glib-sys


> I hear that Gnome will be working on integrating Rust with GObject this year, which should allow Rust to leverage Gtk:

This disappoints me greatly.

The idea of an object hierarchical GUI is not a guaranteed given. It causes things like "primary UI thread" which doesn't need to exist anymore. It is a reflection of a time when memory and CPU was more scarce.

In the face of massively parallel CPUs and huge memory, a tile-based, constraint-based GUI with publish-subscribe is probably a much better choice. This is especially true with a language that promotes concurrent programming.

I suspect, sadly, that we're simply going to continue down the same broken, single-threaded UI path because of the installed codebases.


> Java: UI looks terrible

Java GUIs can be made to look quite nice, just many developers never spend the time to go beyond the default settings.

If anything, the old Filthy Rich Clients blog from Chet Haase and Romain Guy was full of examples of how to achieve great UIs in Java.

However most 9to5ers don't read such sources of information.


I am a Qt/C++ programmer by day and I have been looking at Rust. There are some bindings to GTK and such, but to me nothing really comes close to the cross-platform-ness of Qt. There seem to be simple QML bindings for Rust too, but that's not really the same.


I'm not against you, but to convince anyone you need to say why. Why is it shit? Why should we trust you?


Lack of a runtime exception model would be one huge thing. So you basically have to check for errors after every operation, and whatever errors you don't check for just... get lost.


I need more info than that.


The Eve language is an offshoot of what started as LightTable, which was heavily focused on Clojure/Clojurescript.

Interesting that Clojurescript was not the language of choice here, both given the roots of that project and the reputation of lisps for being languages to write other languages.

I wonder if the team would be willing to comment on why they are moving away from Clojurescript?


Writing a language runtime in Javascript is hard enough; Clojurescript adds yet more layers of runtime overhead that we have to work around.

Clojurescript is a fine applications language, but it's not a good systems language. Most languages aren't.


This is a super interesting read! Having come into Rust from an experience with mostly object-oriented languages (Python, Java, C++), what you seem to have taken for granted, I found surprising and new, and what you are surprised by (such as self parameters), I found quite normal. It's great to see the other side of this.


I've actually written a ton of python too. The self parameter didn't confuse me because of OO, but because traits can actually dispatch on the types of all of their arguments so it wasn't clear to me what the meaning of the self argument was and how much it should affect my design.

Manishearth cleared up most of my confusion - it affects namespacing and auto-borrow but doesn't interact with constraints. Traits are very similar to typeclasses and I was just thrown by the surface level syntax.


A Rust programmer can't seem to be Rusty. The more he's Rusty, the more he knows about it, which seems like a contradiction.




I ported a project to Rust and then ported it to C++14 (which is awesome!), but I only did that because Rust was in a state of massive flux at the time and I got tired of the project breaking every two days due to changes to the compiler/syntax/stdlib. I plan to reevaluate Rust once it's settled down a bit.


Are you performing a social experiment to see how low your karma can get?


New languages are always a trap. Programmers always waste too much time on new languages or some tricky language syntax. They should focus on the business.

I will never try Rust.


Well, we first spent six months on 'cannot call function undefined of undefined'. We could have spent the last three on 'segfault' instead. Rust let us spend that time on actually experimenting instead of just fighting the computer all the time.


Spending 3 months on a segfault? In any language, even a mediocre programmer can usually resolve a null reference exception / segfault in a couple of hours at most. With a complex program and a debugger it could be a minute or so. Maybe if it only happens in production and you don't capture stack traces or core dump files... but this happens once and then you learn your lesson.


He obviously meant the general class of errors, not one specific bug.


Rust isn't what you seem to think it is. It targets a very specific area that's been dominated by C/C++ only because there wasn't anything better out there.

Rust still has a long way to go, but it's making progress and I would personally like to see a world where Rust is the go-to language for systems software.


Why can't you practice Rust in a non-business environment?


Programmers always spend too much time on studying which hammer is better, but they forget what they really want to do, right?


I agree there is some truth in what you're saying. However, without programmer's curiosity and desire to create new interesting and beautiful languages, we would still be "focused on business" and writing code in COBOL, Fortran, C, Pascal...


I don't think that was the case for me, at least. I felt like (to use your own analogy) the more I studied Rust, the better I learned how to use every other hammer. Learning a new programming language helped me learn C++ better, since I was constantly on the lookout for potential memory leaks and other common pitfalls that Rust prevents. Now when I code in C++, I always try to write the code with safety in mind.


I actually agree with you in general, but in this case we spent six months hammering in nails with a wet sandwich because the only available hammer has a chainsaw for a handle ("it's perfectly safe, just don't ever press the on-switch"). This blog post is me seeing something that looks like an actual hammer and suspiciously looking for a trap.


Oh of course. People are too hung up on tools to get the job done, but your case is oh-so different.


I agree with you generally - for me, given that I already know Java well, it's clearly not worth the effort to spend much time on (say) Go. I make an exception here because I think Rust brings genuinely useful new ideas to the table.


This is the stone-age of computer science. Things aren't really going to get better at the behest of programming language luddites who think their 40-odd years of experience constitutes the whole future of technological progress.



