The Road to Rust 1.0 (rust-lang.org)
570 points by steveklabnik on Sept 15, 2014 | 238 comments



We've been using Rust in production for Skylight (https://www.skylight.io) for many months now, and we've been very happy with it.

Being one of the first to deploy a new programming language into production is scary, and keeping up with the rapid changes was painful at times, but I'm extremely impressed with the Rust team's dedication to simplifying the language. It's much easier to pick up today than it was 6 months ago.

The biggest win for us is how low-resource the compiled binaries are.

Skylight relies on running an agent that collects performance information from our customers' Rails apps (à la New Relic, if you're more familiar with that). Previously, we wrote the agent in Ruby, because it was interacting with Ruby code and we were familiar with the language.

However, Ruby's memory and CPU performance are not great, especially in long-running processes like ours.

What's awesome about Rust is that it combines low-level performance with high-level memory safety. We end up being able to do many more stack allocations, with less memory fragmentation and more predictable performance, while never having to worry about segfaults.

Put succinctly, we get the memory safety of a GCed language with the performance of a low-level language like C. Given that we need to run inside other people's processes, the combination of these guarantees is extremely powerful.

Because Rust is so low-level, and makes guarantees about how memory is laid out (unlike e.g. Go), we can build Rust code that interacts with Ruby using the Ruby C API.
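A minimal sketch of what that boundary can look like; the function name and behaviour here are illustrative, not Skylight's actual code. Because Rust guarantees the layout of plain data and can expose C-ABI functions, Ruby can call into it through its C API or an FFI gem:

```rust
// Hypothetical sketch: exposing a Rust function over the C ABI so it
// can be called from Ruby's C API or an FFI binding. The name and the
// unit conversion are made up for illustration.

#[no_mangle]
pub extern "C" fn agent_record(duration_us: u64) -> u64 {
    // Plain integers cross the boundary; Rust guarantees their layout.
    duration_us / 1000 // microseconds -> milliseconds
}

fn main() {
    // Calling it directly from Rust just to demonstrate the behaviour.
    assert_eq!(agent_record(2500), 2);
    println!("ok");
}
```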

I'm excited to see Rust continue to improve. Its combination of high-level expressiveness with low-level control is unique, and for many use cases where you'd previously use C or C++, I think Rust is a compelling alternative.


Somewhat OT, but...

I'm looking at your pricing, and I don't have any idea how you define the notion of a "request". Right now, I can't even begin to guess what pricing bucket my apps might fit into, and that reduces my motivation to try out your service. I speculate I'm not the only person who's felt this way.

(also, I know why you include the Ember mention at the bottom, but I really don't care. I'd rather see your awesome UI than hear what technology it's based on.)

Edit: Seriously, I'm getting downvoted for offering someone feedback on their product experience?


Don't let it get to you - upvote and downvote are very close for mobile users. Here you have a +1 to compensate. :)


Love that you guys are using Rust in production. Very elegant design (AS::Notification IPC -> Rust).

I'd dump the part of the marketing site talking about the fast UI on Ember. It's cool that you guys use Ember and you're proud of it, but I want to see more about your product. Show me how easy it is to drop into an app, or reports you generate, alerts you flag, etc..

Ultimately, while I nerd out on what a product is built with, I only care about what it can offer me. Fast UI isn't a feature, it's an expectation.


"We end up being able to do many more stack allocations, with less memory fragmentation and more predictable performance, while never having to worry about segfaults."

But isn't this stuff the job of the VM? There's no reason why a program written in Ruby can't do the same stack allocations, reduced memory fragmentation and predictable performance automatically - if the Ruby VM was better designed.

If Ruby had a better VM, would you choose to use it over Rust? In Rust, are you doing things that the VM could be doing for you?


> There's no reason why a program written in Ruby can't do the same stack allocations, reduced memory fragmentation and predictable performance automatically - if the Ruby VM was better designed.

Escape analysis falls down pretty regularly. No escape analysis I know of can dynamically infer the kinds of properties that Rust lets you state to the compiler (for example, returning structures on the heap that point to structures on the stack in an interprocedural fashion).

This is not in any way meant to imply that garbage collection or escape analysis are bad, only that they have limitations.


Rust must statically prove, at compile time, that the lifetime of a reference to a stack-allocated variable does not exceed the lifetime of the variable it points to, in order to be 100% memory safe. How is that different from just an advanced escape analysis? Theoretically a VM could do much more, because it could do speculative escape analysis (I heard the Azul guys were working on such an experimental thing, called escape detection) and even move variables from stack to heap once it turns out they do escape.


You could do escape analysis equivalent to Rust's only if you inlined all the functions. Sometimes, that's simply not possible, e.g. with separate compilation.

On the other hand, Rust is still able to perform its kind of escape analysis (via lifetime-tracking type system), because the relevant facts are embedded in type signatures of Rust functions, and as such must be present even for separate compilation (even if the actual implementation of the function is unknown).
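A small sketch of what "the relevant facts embedded in type signatures" means (function and variable names are illustrative). The lifetime `'a` tells any caller, without ever seeing the function body, that the heap-allocated vector it returns borrows from the input:

```rust
// The signature alone carries the escape information: the returned
// Vec (on the heap) holds references that cannot outlive `rows`.
// No inlining or whole-program analysis is needed to check callers.
fn firsts<'a>(rows: &'a [Vec<i32>]) -> Vec<&'a i32> {
    rows.iter().filter_map(|row| row.first()).collect()
}

fn main() {
    let rows = vec![vec![1, 2], vec![3]];
    let heads = firsts(&rows); // heap structure pointing at borrowed data
    assert_eq!(*heads[0] + *heads[1], 4);
    println!("ok");
}
```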


A VM could see all the functions and do whole-program analysis. Inlining is an orthogonal concept.


You could in theory, but that analysis would likely be really slow.

In any case, without the type system and compiler to enforce the discipline the programmer is going to lose a lot of control and predictability.


Not necessarily. As you said, Rust encodes that information in type signatures. Exactly the same information can be used in a VM and it could do escape analysis one method at a time then.


That's true. Maybe it would work, but I wonder if anyone has attempted it before...


For one thing, Rust can reject programs at compile time if it's not satisfied with its analysis. For Ruby to get that, you might have to restrict the language in a similar way.


The JVM, which is several orders of magnitude better than the Ruby VM, constantly gets smashed in terms of performance and memory usage by well-designed C++ programs.

Why? Because in theory, a VM should be able to do as well as, if not better than, a programmer.

The limit of a VM is that you are extremely limited in the amount of resources you can allocate to the VM itself to determine which resources should be freed. See what I mean?

In the end the VM is itself just a program. With RAII, for example, there is zero overhead associated with the decision of releasing a memory block, because it's compiled into the program.

On top of that, once you add the nasty memory optimization tricks manual management allows you, you start to see why manually managing memory is still very interesting when performance and/or memory usage is important.
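The RAII point can be sketched in Rust, which uses the same scope-based destruction (a toy example, not tied to any real allocator):

```rust
// Minimal RAII sketch: the decision to release is compiled into the
// program at the end of the scope; no collector runs at runtime.
struct Buffer {
    data: Vec<u8>,
}

impl Drop for Buffer {
    fn drop(&mut self) {
        // Runs deterministically when `b` goes out of scope below.
        println!("releasing {} bytes", self.data.len());
    }
}

fn main() {
    {
        let b = Buffer { data: vec![0u8; 1024] };
        assert_eq!(b.data.len(), 1024);
    } // <- `b` is freed exactly here, at a compile-time-known point
    println!("done");
}
```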


Both the JVM and the CLR are regularly beaten by C/C++, despite claims that they are "as fast". What gives?

In my view the whole problem comes down to whether or not you are using an idiomatic approach or not. In idiomatic C#/Java you use a lot of heap allocations, garbage collection, you may be using dynamic dispatch and so on.

If you write a C# program that uses stack allocation only (no classes, only structs and primitives), no inheritance/polymorphism, no exceptions, you should find that the CLR stands up pretty well to a C++ program. Sadly, what you have done then is essentially code in a very small subset of C#, OR you have achieved something that is so hard and so prone to foot-shooting you could just as well have used C++ to begin with.

To reverse the argument: if you use C++ with loads of garbage collected objects etc. you will end up with performance similar to a java/C# program. But in idiomatic C++, you usually don't.


C# lacks the language features to make dealing with such coding style sane and the libraries use the "idiomatic" approach - if you're willing to put on a straitjacket and throw away the libraries then why bother with C# ?

That's not saying that C# can't be made more efficient by avoiding allocation in performance-critical sections, but overall it's going to perform worse than C++, both the idiomatic and non-idiomatic versions. It's just that for most use cases C# is put to, the performance difference isn't relevant anyway.

Java doesn't even have value types, so it can't even come close in terms of memory layout efficiency and avoiding allocations without perverse levels of manual data structure expansion; e.g. consider how you would do the equivalent of: struct small { int foo; float bar; char baz; }; std::vector<small>.
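For what it's worth, Rust sits on the C++ side here. A rough analogue of the `std::vector<small>` example, with illustrative names: values are stored inline in the `Vec`, with no per-element heap allocation or pointer chasing.

```rust
use std::mem::size_of;

// Rough Rust analogue of `struct small` + `std::vector<small>`:
// elements live by value, back to back, inside one allocation.
#[derive(Clone, Copy)]
struct Small {
    foo: i32,
    bar: f32,
    baz: u8,
}

fn main() {
    let v = vec![Small { foo: 1, bar: 2.0, baz: 3 }; 4];
    // Adjacent elements sit exactly size_of::<Small>() apart.
    let a = &v[0] as *const Small as usize;
    let b = &v[1] as *const Small as usize;
    assert_eq!(b - a, size_of::<Small>());
    println!("element size: {} bytes", size_of::<Small>());
}
```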


I agree completely. For an "inner loop" scenario in C# you have to use the straitjacket version of C#. Luckily you can use idiomatic C# for the rest of the program.

The alternative to using "straitjacket C#" is using C/C++ interop for part of the program. If that part is large enough, such as in most high-end games, it's usually worth biting the bullet and going C++ throughout. Luckily again, those programs are rare.

The point is still that C-style C# is almost as fast as C++ (modulo compiler-specific optimizations), but for the reasons above, that fact isn't very interesting.


Java will have value types soon. Also, C++ does some heap allocations behind the scenes quite often (e.g. with vector or string) which are hard to get rid of, and they are much more expensive than in the JVM/CLR. So idiomatic C++ doesn't always smash the JVM/CLR in performance. YMMV, and typical differences are small enough not to matter for most server-side software like web apps, web servers or databases.


> What gives?

What gives is this: the people who write these fast C/C++ programs that beat Java/C# are usually far more skilled and trained.

Any programmer who knows neither language well and has to write a big application in one of them will probably find Java/C# far easier.

I remember a long blog post where somebody set out to test this on himself. And that guy was a very good C++ programmer. He found that his C++ program was slower, but he then set about improving the speed and in the end beat Java by quite a bit. However, the amount of effort was completely unreasonable for most programmers.

So "what gives" is this, and it has been true for a long time: if you are an expert in a low-level language and spend time optimizing, you will probably beat Java/C#.

I would suggest that you look into what a JIT or GC can and cannot do. Some of the performance problems you identify are really almost never a bottleneck anymore.


(I think shin_lao was actually talking along the same lines, saying that even the hyper-optimised JVM often isn't as good as a good C++ program.)


I agree, my point is merely that the bulk of the difference between idiomatic Java programs and idiomatic C++ programs is due to the wildly different ways of programming idiomatically in Java vs. C++.

The small (steady state) performance difference remaining when doing the "exact same thing" in both programs is just down to how good the C++ compiler is vs. the VM JIT at optimizing (usually better, sadly).

What intrigues me about Rust is that hopefully we won't have to choose between readability and elegance vs. performance and safety. Keep up the good work.


It is very hard to have all this done reliably, especially in a rather dynamic language like Ruby. Maybe the Ruby VM could be better designed to perform these sort of optimisations, but it will never give the same control as a language like Rust, C or C++; i.e. a small code change could cause the optimiser to fail, leading to slow-down and bloat.

Furthermore, the hypothetical "sufficiently smart VM" isn't much value for code written now.


[Chris has done some fantastic work on a Truffle / Graal backend for jruby; for my part I'm (slowly) working on a "mostly-ahead-of-time" Ruby compiler]

I'm not sure I'd agree with "never", though I do agree Ruby is a hard language to optimize.

There are two challenges with optimizing Ruby: What people do and don't know the potential cost of, and what people pretty much never do, but that the compiler / VM must be prepared for.

The former includes things like "accidentally" doing things that triggers lots of copying (e.g. String#+ vs String#<< - the former creates a new String object every time); the latter includes things like overriding Fixnum#+, breaking all attempts at inlining maths, for example.

The former is a matter of education, but a lot of the benefits for many things are masked by slow VMs today that in many cases make it pointless to care about specific methods, and by an expectation not to think much about performance (incidentally, it's not that many years ago that C++ programmers tended to make the same mistakes en masse).

The latter can be solved (there are other alternatives too), or at least substantially alleviated, in the general case by providing means of making a few small annotations in an app. E.g. we could provide a gem with methods that are no-ops for MRI but that signals to compilers that you guarantee not to ever do certain things, allowing many safeguards and fallbacks to be dropped.

Ruby's dynamic nature is an asset here in that there are many things where we can easily build a gem that provides assertions that on e.g. MRI throws an error if violated or turns into a no-op, but that on specific implementations "toggle" additional optimizations that will break if the assertions are not met. E.g. optional type assertions to help guide the optimizer for subsets of the app.

In other words: how optimized the Ruby implementations we get are depends entirely on whether people start trying to use Ruby for things where they badly need those optimizations.


Meta note: I saw that this post was being pretty heavily downvoted and upvoted it to keep it from falling out of the discussion. I disagree with the assumptions in the post, but they are handily refuted by replies. I'd rather have a robust discussion with every reasonable side well represented than a boring echo chamber.

I urge other folks here to not merely blindly downvote posts they disagree with, reserve downvoting for posts that don't deserve to be seen at all because they don't contribute to the discussion.


I've never built a Rails app, so perhaps there's something simple that I'm missing (I'm mostly a C# guy), but how would I use Skylight to monitor my on-premise application? The pricing makes it seem like a hosted service. I would have expected some sort of profiler to be needed on the server and then perhaps a centralized location for the data to be pushed to for display.

The design of the site is great and I'm mostly curious because it sounds like something I wish was available in the .NET world!


Thanks for asking. If you're interested in the nitty-gritty, Yehuda and I gave a talk on the architecture behind Skylight at RailsConf: http://www.confreaks.com/videos/3394-railsconf-how-to-build-...

The short version is that the agent runs on your servers and collects information from your Rails app using the ActiveSupport::Notifications instrumentation built in to the framework. We serialize that into a protobuf that's transmitted via IPC to a background daemon (written in Rust).

That daemon batches multiple reports into a single payload that is sent to the Skylight servers, where we use Storm and Cassandra to process the requests, and periodically do aggregate roll-ups.

Unlike New Relic, Skylight gives you access to the entire distribution of response times, not just the average response time. (According to DHH, averages are "useless.")[1] This ends up being a lot of data, and a lot of CPU-intensive processing, which is why we sell it as a hosted service.

[1]: https://signalvnoise.com/posts/1836-the-problem-with-average...


Minor point, but New Relic has had built-in support for collecting response time distributions for over a year. Please don't lie about this.

https://docs.newrelic.com/docs/apm/applications-menu/feature...


Just an FYI: while I think that to set up Skylight I just need to add a gem to my project, I'm still not really sure. It would have been nice to go to your homepage and have it confirm that it is easy to configure/deploy into your application.


Is it possible to get the names of the PhD theses that were read?


Hey Tom, did you also consider Google's Go? Currently I'm building my backends with Go and I am pretty satisfied with the performance and the whole flow. Maybe it is sometimes a bit cumbersome to check for errors like a paranoid, but in the long term it helps to predict what happens in error cases. How about Rust? Is the workflow comparable to Go's? Is there an equivalent to gofmt, and some IDE support for code completion? Thx and regards, Bijan


I don't speak for tomdale/wycats, but embedding a language with a GC (Go) inside another GC'd language (Ruby) only leads to worlds of pain, especially when trying to efficiently transfer data between them.

In languages without compulsory GCs (like Rust, which doesn't have one at all) this works because it's easy to have complete control about when memory is freed, but a GC'd language may free memory that was passed back into Ruby since the GC can no longer find any references to it.

I'm sure there's ways around this (e.g. using unsafe/raw pointers), but this control is the default in Rust, and there are a pile of language mechanisms (ownership, in particular) that make making this safe much easier.

(Also, AIUI, Go's FFI story is relatively inefficient and has some rather bad bugs; apparently https://code.google.com/p/go/issues/detail?id=7978 is the Go GC trying to free FFI memory.)


Actually, that Go bug is the scanning code getting confused about an incorrectly stored stack pointer value during a very small race between returning from a FFI call into Go and generating a traceback for scanning. I'm not familiar enough with the internals to comment on if that can lead to freeing FFI memory, but I don't think so. Not that crashing the process is much better. :)


From what I understood, it's a Rust agent communicating protobuf structs over IPC, not embedded into the gem.

In such a case, memory safety and GC is less of an issue.


Memory safety is still good for long term code maintenance and refactorability. wycats also mentions that these processes are reasonably long lived - memory leaks can become a big problem over time if you are not careful. These can be mostly purged by static analysis and Valgrind, but are still a headache to deal with. I would also add that Rust is much easier to hack on safely if you don't have a great deal of prior systems programming experience.


I believe there is a Rust library embedded in the gem to serialize and communicate with the outside agent (IIRC, this is actually the most important part to be in Rust, it needs to be efficient to avoid interfering with the normal operation of the main program as much as possible).


I've never looked at it too closely, but wycats has talked about how the Ruby code calls Rust code through the FFI and how Skylight does some special things to make that work especially smoothly, by directly calling some internal functions of Rust's failure-handling bits, and about how the Rust code crashing shouldn't take down the Rails process. That seems to suggest it's not just an IPC thing.


There are some work-in-progress code completion tools (e.g. https://github.com/phildawes/racer).

No format tool yet, but it's often talked about and I'd be surprised if it doesn't happen.

The biggest infrastructure difference from Go at the moment is that it's tricky to cross-compile binaries in Rust.


From time to time I use "rustc --pretty normal <file.rs>". Not a real formatting tool, and it has a tendency to produce some funny results at times, but it's better than nothing.


I used to describe my preferred family of languages as:

- C when I absolutely had to (kernel/modules/plumbing).

- Python for scripting and broad accessibility.

- Haskell when I had the choice and I knew everybody who would work on the project.

I was skeptical of Rust when it first came out, due in large part to the many different kinds of pointers it originally had, many of which involved significant manual memory management. But now, with a strong static type system, garbage collection, pattern matching, associated types, and many other features, Rust is looking like a serious contender to replace all three of those languages for me.

Still waiting to see if it develops a strong following, community, and batteries-included library ecosystem, but I need to start doing more experiments with Rust.

Disappointing to see yet another language-specific package management system (Cargo), though.


> Disappointing to see yet another language-specific package management system (Cargo), though.

It's been a huge step up from makefiles in developing Servo. I can barely ever bring myself to go back to using make and git submodules now.

The workflow of breaking up your project into small, self-contained packages that people hack on independently and are all built with a package manager that natively understands the language and can build without a single line of shell is a huge improvement over vendoring dependencies and dealing with a maze of configure scripts or arcane make/(insert your preferred replacement here) files. It improved compilation time a lot too!


> It's been a huge step up from makefiles in developing Servo.

As a maintainer of a number of Rust libraries I highly agree! It has definitely won over a few skeptical collaborators of mine (who were fans of Make).


As others have said, having a package management system that deeply understands the language and tooling is awesome. Examples:

- Rust has a distributed-by-default documentation generator (rustdoc), cargo knows this and provides `cargo doc` to render a library's docs with it.

- rustdoc can run code examples in the documentation as tests, to check that everything is up-to-date, `cargo test` does this (along with running the in-source unit tests and any external tests).

- the Rust compiler allows for plugins, which are dynamic libraries loaded into the compiler and can be used for things like custom macros (aka procedural macros aka syntax extensions) and custom compiler warnings. cargo understands these, allows them to be 'imported' via the normal dependency mechanism, and specifying `plugin = true` in a package makes cargo do the right thing, e.g. building as a dynamic library (static libraries are the default) and compiling for the correct target when cross-compiling.

I'm sure all of this is possible with other systems, but it seems unlikely to be so nice to use.
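As a small illustration of the in-source unit tests that `cargo test` discovers and runs alongside the documentation examples (a sketch with made-up names):

```rust
// A library function plus the conventional in-source test module.
// `cargo test` compiles the crate with `cfg(test)` enabled and runs
// every `#[test]` function it finds.
pub fn add(a: i32, b: i32) -> i32 {
    a + b
}

#[cfg(test)]
mod tests {
    use super::add;

    #[test]
    fn adds() {
        assert_eq!(add(2, 3), 5);
    }
}

fn main() {
    // The function is also usable normally outside of tests.
    assert_eq!(add(40, 2), 42);
    println!("ok");
}
```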


It's very similar to Racket, and yes, it is nice to use!

Other systems can get you much of the way there (node, Python are the only ones I'm really familiar with) but I suspect you need a little language help to achieve the same kind of convenience.


Rust does not have any garbage collection, to be clear. All your other features are correct though :)

(We have previously said "opt-in GC" but that was a lie. See https://news.ycombinator.com/item?id=8312327 for more.)


Yeah, I should have said that more clearly. Automatic memory management (never having to call free), not garbage collection. Which is arguably more awesome: whenever the compiler can figure out at compile time when you'll stop using memory, it can statically decide to reclaim it there.


Yes! The fact that Rust has zero-runtime-overhead automatic memory management is a big deal. I've been using Rust for almost a year, and it's been a source of great delight for me.


I am a huge fan of no required GC, and of precise refcounting. But does Rust have anything on the roadmap to handle truly cyclic structures that don't lend themselves to a weak pointer approach?


One option would be to allocate objects out of an arena[0] and then destroy the entire thing at once when you're finished with the contained objects.

[0] http://doc.rust-lang.org/arena/index.html
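The arena library linked above is unstable, but the idea can be sketched with a plain `Vec`-backed arena where nodes refer to each other by index: cycles are representable, and everything is freed at once when the arena is dropped (all names here are hypothetical):

```rust
// Toy arena: nodes live in one Vec and point at each other by index,
// so cyclic structures are fine without refcounts or a GC.
struct Node {
    value: i32,
    next: Option<usize>, // index into the arena; may form a cycle
}

struct Arena {
    nodes: Vec<Node>,
}

impl Arena {
    fn new() -> Arena {
        Arena { nodes: Vec::new() }
    }
    fn alloc(&mut self, value: i32, next: Option<usize>) -> usize {
        self.nodes.push(Node { value, next });
        self.nodes.len() - 1
    }
}

fn main() {
    let mut arena = Arena::new();
    let a = arena.alloc(1, None);
    let b = arena.alloc(2, Some(a));
    arena.nodes[a].next = Some(b); // a <-> b: a cycle, no leak
    let sum = arena.nodes[a].value + arena.nodes[b].value;
    assert_eq!(sum, 3);
    println!("sum: {}", sum);
} // the whole arena, cycle included, is freed here at once
```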


Makes sense.

If anyone is interested, I think it could be interesting to see if the refcounting approach I developed for handling cyclic references in my library "upb" would work as a Rust library:

https://github.com/haberman/upb/blob/master/upb/refcounted.h

The basic idea is that you refcount groups of objects instead of refcounting objects independently. You compute the groups dynamically such that no cycle can span groups. This is sort of like an arena, except that no arena is ever explicitly created, and the collection can be more precise than an arena.

In my library, objects go through a two-phase lifecycle. First they are mutable, then they are frozen, after which no further mutations can be made. When an object is frozen, its outbound pointers are also frozen.

The nice property of this scheme is that you can perfectly compute these "virtual arenas" by computing strongly-connected components at freeze time. This makes the scheme optimal for frozen objects. For mutable objects, the groups are computed more conservatively and objects may not get freed as often as they otherwise would with a perfect scheme (ie. the same downside of an arena).

I would love to know how this scheme might possibly be modeled in Rust. Since Rust also has strong semantics around mutability and immutability, it seems possible that a very nice idiomatic Rust interface could be implemented around this scheme, giving more precise cleanup than an arena while still allowing circular structures.
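A heavily simplified sketch of the "refcount groups, not objects" idea in Rust (all types and ids here are hypothetical; the real scheme computes the groups via strongly-connected components at freeze time). Objects in the same cycle share one group and one count, so releasing the last external reference frees the whole cycle:

```rust
use std::collections::HashMap;

// Simplified group refcounting: object ids map to group ids, and
// only groups carry an external refcount.
struct Groups {
    group_of: HashMap<u32, u32>, // object id -> group id
    refs: HashMap<u32, u32>,     // group id -> external refcount
}

impl Groups {
    fn new() -> Groups {
        Groups { group_of: HashMap::new(), refs: HashMap::new() }
    }
    // Put an object into a group (in the real scheme, after SCC analysis).
    fn assign(&mut self, obj: u32, group: u32) {
        self.group_of.insert(obj, group);
        self.refs.entry(group).or_insert(0);
    }
    fn acquire(&mut self, obj: u32) {
        let g = self.group_of[&obj];
        *self.refs.get_mut(&g).unwrap() += 1;
    }
    // Returns true when the whole group (cycle included) can be freed.
    fn release(&mut self, obj: u32) -> bool {
        let g = self.group_of[&obj];
        let n = self.refs.get_mut(&g).unwrap();
        *n -= 1;
        *n == 0
    }
}

fn main() {
    let mut groups = Groups::new();
    groups.assign(1, 10); // objects 1 and 2 form a cycle,
    groups.assign(2, 10); // so they share group 10
    groups.acquire(1);
    groups.acquire(2);
    assert!(!groups.release(1)); // one external ref still holds the group
    assert!(groups.release(2));  // count hits zero: free both objects
    println!("ok");
}
```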


This sounds like an interesting variant on a Train collector.


I think what you want is a cycle collector, and to invoke it at appropriate points. I don't see why Rust couldn't have a refcounted pointer type with an invokable cycle collector, but maybe someone on the core team could chime in here.


I'm curious if other approaches are viable also, like I mentioned here: https://news.ycombinator.com/item?id=8321618


Interesting!


Maybe try to avoid these as much as you can - they make the code complicated in any language - and if you must, then manage the memory yourself?


Curious to hear more about language-specific (though OS-agnostic!) package management systems. IMO composer is the best thing ever happened to PHP, Ruby gems are huge, Python eggs also make a very useful ecosystem.

OpenSUSE's Open Build System would be great to ship independent packages, but those are again heavily tied to Unices, hence leaving other platforms behind.


> Curious to hear more about language-specific (though OS-agnostic!) package management systems.

As far as I can tell, one of the main justifications for most language package management systems is "we also run on Windows/OSX, which has no package management, so we'll invent our own". As a result, users of systems that do have sane package management get stuck with multiple package management systems, one for the distro and one for every language. Even then, I find it disappointing that nobody has built a cross-platform package management system for arbitrary languages to unify those efforts.


The other justification is generally a clash of cultures: the people who maintain distro/OS package managers generally come out of the culture of sysadmins, who value stability over feature-richness, while the people working the language communities generally come out of the culture of developers, whose priorities are the exact opposite.

When languages try to hook into existing OS-level systems, the people on the language end get frustrated by the way the people on the package-manager end don't hurry to rush out bleeding-edge versions of packages the second they hit Github. To the package-manager people, that's no big deal; their orientation towards stability and predictability makes them comfortable with waiting a little for the coffee to cool. But to the developers, who want to get their hands on the Latest and Greatest Right Now!, it feels like slogging through molasses.

So the developers eventually end up blowing their stacks and stomping off yelling "Yeah? Well fine, we'll build our own package manager then! With blackjack! And hookers!"


Maybe OS-level package managers should default to stable, but let the user check a box to get the latest and greatest. Developers want a stable system like everyone else, but for the stuff we're hacking on, we have a legitimate need to get the most recent, so our software isn't obsolete by the time we finish it.


Most OS-level package managers also aren't designed to install more than one version of a package at a time. They don't tend to integrate with build systems as well, either.


That's not so simple. A distro is a fine-tuned collection of packages which work more or less well together. Debian, for instance, comes in stable/testing/unstable/experimental flavours, depending on how daring you are. But even this isn't a universal solution. If you are deploying, for instance, a web application, you will want to deploy a locked-down set of dependencies as well, regardless of what is present on the target system. And you may need to deploy multiple applications side by side. Few system package managers have an answer to this.


So developers end up installing later versions manually. And in many cases it's no big deal. If the distro has Julia 0.2.1 and Emacs 23, I can upgrade to Julia 0.3 and Emacs 24 and it's not likely to damage anything. It'd just be nice if I could do it with the package manager instead.

But just because I'm doing that doesn't necessarily mean I want, say, the latest unstable version of the window manager.


Debian will let you do that. You can run, say, your machine on testing but get the latest Firefox from experimental if you want. This may, however, upgrade other dependencies on your system, but it's pretty much unavoidable.


I'd be happy enough if the OS-level packagers stopped modifying the package-level packages they packaged.


The problem with language package management systems is they've been used for installing user facing software. As a developer tool I think it is the perfect way to go.

And you should add Linux to your Windows/OSX as being an issue, which Linux package management tool would you build packages for? All of them?

The end user package management provided by the OS should be for installing end user packages and the language tool for installing and publishing libraries and dev tools.


> The problem with language package management systems is they've been used for installing user facing software. As a developer tool I think it is the perfect way to go.

Precisely so.


> As a result, users of systems that do have sane package management

Given the diversity of OS in the IT landscape, which systems are those?


I met only one package manager that I don't need to fight in order to get what I want: Gentoo's Portage. With a local overlay and language specific functionality concentrated in eclasses it's trivial to add new packages, do version bumps, have fine grained control over installed versions, enabled features, etc.


Most of the unix package management systems are not really that good for development: they don't support sandboxing, or non-root installation/execution, or don't support the right kind of versioning, or don't support multiple versions of a library, or some other issue.

The only existing system that I might trust enough for this kind of thing is Nix, and at least for Haskell it is a pretty solid alternative to the language-standard package manager. Unfortunately, its popularity is way lower than that of apt or yum, which are pretty shitty for development compared to something like pip or cabal.


The distro only contains a small selection of the packages (even if there are hundreds or thousands of them) and the language package system is usually the source the distro maintainers use to find the packages anyway.


Lately, IMHO, Gradle in Android development (applicable to Java development as well) is a huge improvement over managing dependencies with Maven's pom.xml or Ant and linking jar files manually. Besides, you can totally customize build.gradle too.


>Disappointing to see yet another language-specific package management system (Cargo), though.

As a packager in a Linux distro, I'm disappointed every time somebody tries to cram in PL-specific packages inside distro packages.


Interesting, how do you get around it, when those packages are depended upon by user facing software?


> Disappointing to see yet another language-specific package management system (Cargo), though.

Coming from Python, I find Cargo very smart and very well thought out so far. It is not feature-heavy, but everything has a very clear and useful purpose. For instance, today I found that if I created a file .cargo/config I could override my dependencies to make Cargo search projects on my fs instead of grabbing them from GitHub; while doing development, that's a big thing, I think.
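For reference, the override mechanism described above looked roughly like this in pre-1.0 Cargo (hedged: the config format has changed since, and the path below is made up for illustration):

```toml
# .cargo/config (assumption: the pre-1.0 "paths override" mechanism)
# Cargo will use this local checkout instead of fetching the dependency.
paths = ["/home/me/src/some-dependency"]
```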


> Disappointing to see yet another language-specific package management system (Cargo), though.

I don't think it is. You need support for Rust modules on various platforms: Linux/Mac/Windows (possibly Android). No single tool works on all those platforms. Cargo does, and it has minimal dependencies.

Not having to juggle three different configurations (CMake, Makefile, etc.) on different platforms is actually pretty great.


> Disappointing to see yet another language-specific package management system (Cargo), though

So what is the solution to have portable packages for:

- RPM systems

- Debian systems

- tarball systems

- Pkg systems

- MSI systems

- Mainframe OS

- Embedded OS

- Aix/HP-UX/Solaris package systems

- ...


The goal of the [nix](http://nixos.org/) project is to solve this, and every time anyone brings up a package manager on HN, someone has to mention nix. The reality is that nix is really nice, but isn't any better than making a new package manager until it has wide adoption, so no one is using it.


Nix was brought up during the discussion that led to Cargo, but no Windows support is a deal breaker.


I would probably make the same decision, but I hope in the end Cargo is easy to wrap with Nix, which is a breath of fresh air, particularly when needing to mix dependencies that cross language boundaries and share those build recipes with a team.

Previously, I wrote shell scripts and worried whether everyone on the team had rsync installed, or xmlstarlet, or some other less common tool. Now I wrap those scripts in a Nix package that explicitly depends on all those and distribute with confidence. It's fantastic.

Bundler and rubygems, for example, do various things that make good support within Nix rough. Two examples: 1. rubygems has no standard way of declaring dependencies on C libraries; 2. as far as I know there is no way to ask Bundler to resolve dependencies, create a Gemfile.lock, but not install any gems (I realize github gems must be downloaded to see the gemspec...)


Cargo has the second, and there's a plan for the first.

That said, the reason that you want it to do the installation is that a lockfile is supposed to represent the way to do a build successfully. Without building everything, you can't actually be sure that the lockfile is correct. In theory, it should be...


> reason that you want it to do the installation is that a lockfile is supposed to represent the way to do a build successfully

Sure, and I'd like to do that build within Nix (and someone else might want to do it with another packager), which gives a stronger guarantee than Bundler since it incorporates C library dependencies and more. Anyway, the specifics aren't relevant to this discussion, and it seems you have a grasp of the issues, so carry on!


Wouldn't it still have been less effort to port Nix to Windows, than to write an entirely new package manager and then port it to every OS?


If that were the only downside, possibly. I don't really do Windows development, so I can't tell you how difficult porting Nix would be. There's a large advantage to having a packaging system that knows your language well: it's going to have tighter integration than a generic one ever could.


It seems to be only for GNU/Linux systems, what about all other OSs out there?


I've been experimenting with Nix on Mac OS X lately and it works fine. I've heard that it works on FreeBSD as well. The big gap is Windows.

The good news is that you can integrate your language-specific tools with Nix as well, such as has been done for Haskell, node.js and other things. (I'm looking at it so that we can integrate our Dylan stuff with it.)


When this discussions happen on HN, I always see a narrow discussion of Mac OS X, GNU/Linux, Windows and with luck *BSD.

But the world of operating systems is so much bigger than the desktop under the desk.

Good work on Dylan by the way.


I'd love to have the time and the resources to deal with more OSes. :) 20 years ago, I had to keep stuff running on Solaris and lots of other platforms, and I still did some work on VMS on actual VAX hardware! It wasn't that long ago that we had the possibility of BeOS either. Comparatively, we have quite a monoculture (of POSIX) these days, with Windows being the non-POSIX representative.

Maybe unikernels like OpenMirage will help make things interesting.

And thanks! The work on Dylan is a lot of fun and keeps me semi-sane by keeping me busy.


Nix is much more than just a package manager though.


How would (or how does) Nix deal with Windows anyway? Are its abstractions portable enough?


> Disappointing to see yet another language-specific package management system (Cargo), though.

That's a much bigger problem to tackle, and one that nobody has really thought much about or done much work on. Package management is a tough problem in the abstract, and it's a big challenge to create a meta-package-management infrastructure, more so when there doesn't seem to be any money in it (at least not easily).


> Disappointing to see yet another language-specific package management system (Cargo), though.

No, I don't want my language to be bound to someone else's package manager at all.


The ownership idioms are very similar to idiomatic C++11 and std::unique_ptr. Which is to say that Rust has got an industrial-strength safe memory management system.

But Rust stands out because the rest of the language is such a joy to use, compared to pretty much any other 'systems' language out there.

Congratulations to the team!


One of the cool things about the Rust ownership system is the lease/borrow system. Moving is cool, but much of the time you want to synchronously call another piece of code and give it a temporary (stack-frame-long) lease for that pointer.

Rust starts with ownership, but makes it easy to ergonomically and safely lend out that ownership (including one-at-a-time mutable leases) of both stack and heap allocated pointers.
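The move/lease distinction can be sketched like so (post-1.0 syntax, which differs in details from the pre-1.0 code in this thread):

```rust
// Ownership: `v` is moved into `consume`, so the caller can't use it after.
fn consume(v: Vec<i32>) -> usize {
    v.len()
}

// Borrowing: the caller only hands out a temporary, stack-frame-long lease.
fn total(v: &[i32]) -> i32 {
    v.iter().sum()
}

fn main() {
    let v = vec![1, 2, 3];
    let t = total(&v); // immutable borrow; `v` is still usable afterwards
    let n = consume(v); // `v` is moved here...
    // let again = total(&v); // ...so this line would not compile
    println!("total = {}, len = {}", t, n);
}
```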

I've been programming with Rust since last December, and I have had essentially zero segfaults coming from Rust code during that time frame, roughly equivalent to what I would have expected writing code in a language whose safety guarantees come with a runtime cost (GC or ARC).


> The key to all these changes has been a focus on the core concepts of ownership and borrowing. Initially, we introduced ownership as a means of transferring data safely and efficiently between tasks, but over time we have realized that the same mechanism allows us to move all sorts of things out of the language and into libraries. The resulting design is not only simpler to learn, but it is also much “closer to the metal” than we ever thought possible before. All Rust language constructs have a very direct mapping to machine operations, and Rust has no required runtime or external dependencies.

Almost sounds like they borrowed this thinking from Exokernel design... I think Rust is shaping up to be a very exciting language.


Rust looks fantastic, and has a lot of things I wish I could do while in a higher level language like F#.

I just wish Rust was a bit less verbose. Requiring, for instance, type annotations on function arguments because it's sometimes helpful is such a weird decision. Let the programmer decide when an annotation is needed. This gets annoying when you get into functions with complex arguments. Especially for local functions where the signature could be messy, but the limited scope means annotations just clutter things. I'm not sure why Rust forces us to compromise here.


> Requiring, for instance, type annotations on function arguments because it's sometimes helpful is such a weird decision.

In separate compilation you have to annotate functions anyway, and in most large ML projects I've worked on people tend to annotate just because the error messages get much better. This is a common design in functional languages these days.


Fully typed function signatures form a kind of contract you can program against.

Haskell and other extremely strongly typed languages can infer the types of function parameters, yet the community still agrees it is good practice to annotate your work.


Would Haskell be better off if the compiler enforced this community agreement, instead of letting users decide?

Also, the type annotations can be added on later. While you work and play with ideas, leave everything unannotated. After it's cemented and perhaps refactored a bit, add the "contract". In Rust, even while working things out, the user has to figure out and jot down the types.


> Would Haskell be better off if the compiler enforced this community agreement, instead of letting users decide?

Unequivocally, yes. My logic is that, while writing the function may be slightly quicker and more convenient if you can leave off the type, reading that same code is made at least an order of magnitude easier if the type annotation is sitting there in the code.

Actually, it gets better than that. Writing down the type of a function before writing the function often helps you write the function.

Protip: Use `ghc` with `-fwarn-missing-signatures -Werror`, or even better, `-Wall -Werror`. :-)


The thing is, with current tooling, the current design means that I can write a function, type a key combination, and have the type signature printed in a buffer (where I can then copy it into place). And until I do that, there is warning highlighting on the function.

Now in some ways this may seem silly - are you really going to understand code without understanding types? But especially for people new to the language, _or_ when you're dealing with new libraries (if you've ever written a little wrapper around a function from a complicated library, you know what I mean), it's nice to choose whether you want to work from values or from types (where undefined is your friend).

Which isn't to say that I think Rust's decision is bad, just that having flexibility makes this kind of tooling easier (and I'm assuming here that in all cases, the end result will be all annotated top-level functions). And part of Rust's choice was probably to make type checking easier, which is an important thing (especially given how sophisticated the borrow checker is).


> Now in some ways this may seem silly

Not at all!

Typing should be a conversation with the compiler. If you have a strong understanding of what you are writing then, yes, writing the types first makes sense. On the other hand, sometimes I only understand how some particular pieces fit together—at this point, I want the compiler to throw its inference engine at my code fragment and tell me everything it can!

Typing and programming is exactly the same as theorem stating and proving in mathematics. It would be idiotic to have one-way information flow only.

That said, it's also practically criminal to just hand someone a proof without stating what you think it's supposed to be proving. Ultimately, that is where you must wind up.


Interesting point. Getting the type annotation inferred for you there is definitely useful. I've used it a few times myself.

The `undefined` trick is also immensely useful. I use it a lot when starting a new module. Rust also has a notion of bottom, indicated by `!`, which will unify with all types. I frequently use this in Rust in a similar way that I use `undefined` in Haskell. (In Rust, you would speak `fail!()` or `unreachable!()` or `unimplemented!()` or define-your-own.)
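For the curious, a hedged sketch of using `unimplemented!()` as a typed hole, in post-1.0 syntax (where `fail!` became `panic!`; `parse_header` and its types are made up for illustration):

```rust
struct Header { version: u32 }
struct ParseError;

// Stub the body with a typed hole: `unimplemented!()` has the bottom
// type `!`, so it unifies with any return type and the program compiles.
fn parse_header(_input: &str) -> Result<Header, ParseError> {
    unimplemented!()
}

fn main() {
    // Calling the stub panics at runtime, but everything type-checks,
    // letting you work out the surrounding code first.
    let result = std::panic::catch_unwind(|| parse_header("HTTP/1.1"));
    assert!(result.is_err());
}
```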

> (if you've ever written a little wrapper around a function from a complicated library, you know what I mean)

Yes, absolutely. I haven't really run into this problem with Rust yet though. Types are generally pretty simple. If and when Rust gets higher-kinded types, that would assuredly change.


I thought ! in Rust indicates a macro.


A `!` followed by an identifier does, yes. But a `!` also lets you define diverging functions:

    fn diverging() -> ! {
        unreachable!()
    }
The two `!` in that code are completely orthogonal things. See the manual on diverging functions: http://doc.rust-lang.org/rust.html#diverging-functions


Certainly the user should be allowed to decide when they feel the code needs further documentation to be readable. Trying to drag people into something seems unfriendly.


OCaml has a decent compromise between no signature at all and signatures everywhere: you put signatures in your interface. Works pretty well in practice, I find.


Haskell compilers can, and its probably better for production code to have them do it.

OTOH, it can be better for exploratory coding in some circumstances not to. (For one thing, it can be a tool to find cases where you accidentally write something that is more general than the types you were thinking of, but perfectly valid for the more general type -- which, at least as someone fairly new to Haskell, I find myself doing a lot.)


It's also much more convenient for REPLs.


Yes. Many times the compiler will actually infer a type different than the one you wanted. In my experience this just leads to confused programmers who have trouble deciphering what went wrong when presented with the error message.


Even in languages with whole-program inference, it's generally regarded as best practice to write out the types of your functions, hence Rust's choice here. You're right that there's a tradeoff. In general, Rust follows 'explicit over implicit.'

> This gets annoying when you get into functions with complex arguments.

Have you seen the where clauses yet? This should significantly help complex function declarations.


I just don't understand why there has to be a tradeoff. I just don't get why the compiler should decide on such a large thing, instead of letting the programmer do it. One can always be more explicit if one feels they're getting value from it. If someone wants to write a bunch of terse code, why stop them? Does the compiler gain a large benefit from not having to include this feature? Who loses by allowing users to do what they want?

Comparing C# and F#, the extra annotations change the frequency in which I'll introduce an inner (local) function. For instance, here's a little helper in a piece of code I'm writing at the moment. It's part of a larger function and isn't exposed beyond 5 lines of code.

  let runWithLog f name =
    try run f name with ex -> logExn ex
Used in:

    runWithLog collectSessionStats "sessions"
    runWithLog collectVisitorStats "visitors"

Having to add "(f: string -> RunParams -> Whatever -> unit) (name: string)" almost doubles the runWithLog helper yet provides no benefit. And this is an extremely simple case! Once the arguments are generic higher-order functions themselves, it gets quite noisy.

Sure, if it's a top-level export, then maybe annotating is a good idea. But if it's limited in scope then what's the harm?

Not that it'll change when I use Rust - there's nothing competing in this category. It'd just be nice if the language let the user decide on the style.


Rust will do type inference on lambdas for you. :-)

    fn fubar(x: uint) {
        let times = |n| x * n;
        println!("{} * 5 = {}", x, times(5));
    }
Rust only enforces type annotations on top-level functions.

> Does the compiler gain a large benefit from not having to include this feature? Who loses by allowing users to do what they want?

FWIW, I feel precisely the opposite as you. I'd rather have an ecosystem of code where top level functions must be annotated by types than an ecosystem where types are there only if the author feels like adding them. There is a small cost but a large gain, IMO.


Oh if nested functions don't need annotation, then I suppose that saves most of the problem.

Can top-level definitions be of the lambda form? If not, what's the reason to have separate ways?


There are `fn` types---which are just function pointers---and there are `||` types, which are closures that correspond to a pointer to a function and an environment.

You can actually define `fn`'s within other `fn`'s, but their types are not inferred. Closures cannot be defined at the top level, presumably because there is no global environment to capture. (Rust does have global "static" constants, though, which are available everywhere in the enclosing module I think.)

I can probably hazard a few guesses at why there is a split here (probably relating to lifetimes), but I don't know for sure. Perhaps someone else will chime in.


But if a closure doesn't capture any variables then how it is different?


The immediate difference is the body of what you need to infer against. In the local case it's quite small, and the compiler doesn't have to worry about a lot of what-ifs.

Personally I like seeing types and I'm glad people are forced to write them. (This is an opinion I've had long before Rust existed, so I'm not rationalizing excuses.)


> Who loses by allowing users to do what they want?

Everyone else who looks at your code who would have preferred them.

I dabble with both Go and Haskell, and this seems like the best of both worlds: from Go they enforce a uniform standard across libraries, coworkers' code, etc, and from Haskell, they're adopting the philosophy that "types are documentation".

I, too, would be a little annoyed at documenting lambdas, like this, but I think it's eminently reasonable to require it for all top-level function definitions. And it sounds like from another comment here, that that's the case. :)


It seems you could look at unannotated code with a tool which gives you inferred annotations.


Still sitting on the fence as to which language I should pick up on next - the only contenders are C++11 and Rust.

How does Rust compare with C++11 as a language? C++11 seems to (in some ways) have caught up with what Rust has to offer (compared to older C++ versions) e.g. smart pointers, concurrency and regexes part of the standard library


I'd definitely pick C++11 unless you need to use Rust.

Rust is inherently memory safe - however, in practical terms this isn't important for most applications. If you are writing security-critical applications, Rust will provide you with some very important guarantees (i.e. there are certain mistakes which are inherently not possible in the language). C++ doesn't really guarantee anything, and if you're an idiot you can shoot yourself in the face. However, in practical terms, memory management in C++11 is very straightforward, and C++11-compliant code (i.e. using the STL and not writing it like C) is very safe and clean. You're not mucking with raw pointers anymore.

The main issue I see is that Rust is still in early development. It may or may not get "big" in the coming years. And library support is ... lacking

In contrast, C++ has the STL and boost and every library under the sun. I haven't worked with a lot of other languages extensively, but I've never seen anything as clean, robust and thorough as the STL and boost. C++ will remain relevant for a long, long time. If Rust takes off in a big way, you'll be well positioned to jump ship.


I definitely understand where you are coming from, and I agree that C++11's ecosystem maturity is a great reason to choose it at the moment.

However, I cannot agree with you that Rust's safety guarantees are not useful for most C++ programs, or that you have to be an "idiot" to do memory-unsafe things in C++11. Someone at Yandex recently did a presentation about Rust [1] in which they pointed to a bit of (completely idiomatic!) C++11 code that caused undefined behavior. The audience, full of seasoned C++ and Java developers, was asked to identify the problem. Not one of them could point to what was causing it (the compiler certainly didn't). The next slide demonstrated how Rust's compiler statically prevented this issue. The issue could have taken weeks to surface and days to track down, and the C++ compiler simply didn't have enough information to determine that it was a problem. This is something that happens over and over to anyone using C++, in any application, not just security-critical ones.

I'm not saying C++11 doesn't improve the situation, because it does--it would be disingenuous to say otherwise. But it's equally disingenuous to imply that C++11 makes memory management straightforward or safe. It does not.

[1] http://habrahabr.ru/company/yandex/blog/235789/ (note: the presentation and site are in Russian).


> Someone at Yandex recently did a presentation about Rust[1] in which they pointed to a bit of (completely idiomatic!) C++11 code that caused undefined behavior.

It would be great to have at least that segment of the talk translated. Sounds like a good example.


The example:

  std::string get_url() { 
      return "http://yandex.ru";
  }

  string_view get_scheme_from_url(string_view url) {
      unsigned colon = url.find(':');
      return url.substr(0, colon);
  }

  int main() {
      auto scheme = get_scheme_from_url(get_url());
      std::cout << scheme << "\n";
      return 0;
  }


Can you say what is the problem here? What is a string_view?


A non-owning pointer into string memory owned by someone else (effectively a reference into some string). AIUI, the problem is the temporary string returned by get_url() is deallocated immediately after the get_scheme_from_url call, meaning that the string_view reference `scheme` is left dangling.


The string_view pattern is a pretty bad idea and useless with a decent compiler.


What do you mean by useless?

If I have a string like "foo bar baz" and I want the second word, should I copy out that data into a whole new string? That seems rather inefficient.

(How is a compiler going to optimise that away?)


For small strings, a copy is not only faster but more multithreading friendly.

Keep in mind that on a 64-bit architecture a view is at least 16 bytes large and that small strings can be copied to the stack resulting in better locality and reduced memory usage.

Last but not least, with copy elision, your temporaries might not even exist in the first place.

Example:

    std::string data;
    // ...
    auto str = data.substr(2, 3);
    // pretty sure str will be optimized away
    if (str[0] == 'a')


I don't think copy elision[1,2] means what you think it means, it simply allows the compiler to avoid e.g. allocating a new string when returning a string, or avoid allocating a new string to store the result of a temporary. That is, copy elision allows

   std::string str = data.substr(2, 3);
   return str;
to only allocate one new string (for the return value of substr), instead of two. There's no way the compiler can get out of constructing at least one std::string for the return value, especially if there's any form of dynamic substr'ing (e.g. parsing a CSV file with columns that aren't all the same width).

Sharing is only multithreading unfriendly if there's modification happening, and modification of textual (i.e. Unicode) data is bad practice and hard to get right, since all Unicode encodings are variable width (yes, even UTF-32, it is a variable width encoding of visible characters).

Furthermore, a string_view is strictly better than a string for many applications, since a string_view can always be copied into a string by the caller if necessary (i.e. each function can choose to return the most sensible/most performant thing, which is a string_view if it's just a substring of one of the arguments).

The only sensible argument against string_view in C++ I know is: it's easy to get dangling references. Which is correct, but that's a general problem with C++ itself, not with the idea of string views (Rust has a perfectly safe version in the form of &str, which cannot become dangling like in C++).
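A hedged sketch of the Yandex example redone with `&str` (post-1.0 syntax; the function names mirror the C++ snippet above):

```rust
fn get_url() -> String {
    "http://yandex.ru".to_string()
}

// &str is Rust's string_view: a borrowed slice whose lifetime is
// checked by the compiler against the owning String.
fn get_scheme_from_url(url: &str) -> &str {
    let colon = url.find(':').unwrap_or(url.len());
    &url[..colon]
}

fn main() {
    let url = get_url(); // the String owns the buffer...
    let scheme = get_scheme_from_url(&url); // ...and outlives the borrow
    println!("{}", scheme);

    // The dangling C++ version is a compile error here:
    // let scheme = get_scheme_from_url(&get_url());
    // println!("{}", scheme); // error: temporary value dropped while borrowed
}
```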

> Keep in mind that on a 64-bit architecture a view is at least 16 bytes large and that small strings can be copied to the stack resulting in better locality and reduced memory usage.

No, a string_view points into memory that already exists, there's no increased memory usage; a small string copied on to the stack will be part of the string struct, which is at least 3 * 8 = 24 bytes: a pointer, the length and the capacity. Also, a memcpy out of the original string is always going to be more expensive than just getting the pointer/length (or pair of pointers) for a string_view, since the memcpy has to do this anyway.

[1]: http://en.wikipedia.org/wiki/Copy_elision

[2]: http://definedbehavior.blogspot.com/2011/08/value-semantics-...


Yeah, my example for copy elision sucked, but that doesn't mean working by value can't play in your favor.

> Sharing is only multithreading unfriendly if there's modification happening, and modification of textual (i.e. Unicode) data is bad practice and hard to get right

Read-only access to data indeed scales "infinitely" on modern architectures.

> No, a string_view points into memory that already exists

Yes. Right. How do you store that? You need at least one pointer and an int, or two pointers. That's 16 bytes. Memcpy for a couple of bytes is very quick when it's stack to stack thanks to page locality.

Also, if you are using pointers you will have aliasing issues which will have an impact on performance. If you work by values you allow the compiler to optimize things better.

For small strings, string views are just dumb, and "most of the time" strings are very small.

To give a better example of why working with a string view is both a bad idea and dangerous: it's as if you said "I don't want to copy this vector, therefore I will work on iterators". That's obviously a bad idea.


> Yeah my example for copy elision sucked, but that doesn't mean it cannot play in favor when you work by value.

Not just sucked; it was entirely wrong. Copy elision is not related to std::string vs. string_view. Even with copy elision turned up to 11, returning a std::string will be more expensive than a string_view.

> How do you store that? You need at least one pointer and and an int or two pointers. That 16 bytes. Memcpy for a couple of bytes is very quick when it's stack to stack thanks to page locality.

I was very careful to cover exactly this in my comment.

Computing the memcpy is strictly more work than creating a string_view, since you need the information that is stored in a string view (i.e. pointer and length) to call memcpy.

Furthermore, the 'stack string' is actually stored contained inside a std::string value, which is larger than 16 bytes. There is no way that returning a string_view causes higher memory use at the call site than returning a std::string. (If you're complaining that it forces old strings to be kept around, well, you can always copy a string_view to a new std::string if you need to, i.e. a string_view can do the expensive 'upgrade' option on demand.)

Here's the quote from my comment above:

> a small string copied on to the stack will be part of the string struct, which is at least 3 * 8 = 24 bytes: a pointer, the length and the capacity. Also, a memcpy out of the original string is always going to be more expensive than just getting the pointer/length (or pair of pointers) for a string_view, since the memcpy has to do this anyway.

> Also, if you are using pointers you will have aliasing issues which will have an impact on performance. If you work by values you allow the compiler to optimize things better.

You do realise that a std::string contains pointers and so on inside it? Furthermore, the small string optimisation (copying to the stack) means every data access to a std::string includes an extra branch.

> For small strings string view are just dumb and "most of the time" strings are very small.

So instead of just having a cheap reference into a string you're happy with the overhead of a function call (memcpy) and a pile of dynamic branches? I wouldn't be surprised if the branches are the major performance burden for std::string-based code that is processing a pile of substrings of some parent string. In this case, the data from the string_views will normally be in cache anyway (i.e. it will've been recently read by the function that decides where to slice into the string_view).

> To give a better example of why working a string view is both a bad idea and dangerous, it's as if you said "I don't want to copy this vector, therefore I will work on iterators". That's obviously a bad idea.

It's not obviously bad to me. In fact, it seems very reasonable to work with iterators rather than copying vectors (isn't that exactly what the algorithm header does?).

If your problem is that it is unsafe and hard to avoid dangling pointers etc, that's just a fundamental problem of C++ and is unavoidable in that language. One fix would be to use Rust; it handles iterators and string_views safely.


The slides on Slideshare are surprisingly easy to follow (and were a good introduction for me): http://www.slideshare.net/yandex/rust-c


See slide 42 of STL's recent talk for what I guess will be a similar example: https://github.com/CppCon/CppCon2014/tree/master/Presentatio...


Reproduced here:

  const regex r(R"(meow(\d+)\.txt)");
  smatch m;
  if (regex_match(dir_iter->path().filename().string(), m, r)) {
      DoSomethingWith(m[1]);
  }
- What's wrong with this code?

  - Haqrsvarq orunivbe va P++11
  - Pbzcvyre reebe va P++14
  - .fgevat() ergheaf n grzcbenel fgq::fgevat
  - z[1] pbagnvaf vgrengbef gb n qrfgeblrq grzcbenel
(http://rot13.com/ 'd if you want to guess.)


Seems like this was fixed in C++14 by adding a std::string&& overload.

http://en.cppreference.com/w/cpp/regex/regex_match


The underlying problem is still there, fixing a few of the worst cases in the standard library is helpful but only up to a point. (E.g. anyone with a custom function that does something in a similar vein needs to remember to do the same.)


There's more to memory-unsafety than raw pointers; all of these are problems even in the most modern C++ versions:

  - iterator invalidation
  - dangling references
  - buffer overruns
  - use after move (and somewhat, use after free)
  - general undefined behaviour (e.g. overlong shifts, signed integer overflow)
And there's more to memory safety than security critical applications. Rust means you spend a little more time fighting the compiler, but a lot less time fighting the debugger and a lot less time trying to reproduce heisenbugs caused by data races/undefined behaviour.
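A quick sketch of the first item in that list, iterator invalidation, being rejected at compile time (post-1.0 syntax, so details differ from the 2014 language):

```rust
fn main() {
    let mut v = vec![1, 2, 3];
    for x in &v {
        println!("{}", x);
        // v.push(*x); // iterator invalidation, caught at compile time:
        //             // cannot borrow `v` as mutable because it is also
        //             // borrowed as immutable by the loop
    }
    v.push(4); // fine once the shared borrow ends
    assert_eq!(v.len(), 4);
}
```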

Of course, the library/tool support is indisputably in C++'s favour.

> if you're an idiot you can shoot yourself in the face

If you're a human you will shoot yourself in the face. It just takes far too much brain power to write correct C++ always (a single mistake leads to brokenness), especially in a team where there can be implicit knowledge about how an API works/should work that may not be correctly transferred between people.


Iterator invalidation, dangling references, and use-after-move are all essentially the same thing---references outlasting their owner---no need to multiply the issues. Buffer overflows are an issue, yes, unavoidable due to the C legacy.

On the other hand, it's somewhat ironic that you point to overlong shifts as a C++ problem when Rust has the exact same behavior. What does this function return?

    pub fn f(x: uint) -> uint { x >> 32 }
Honestly, I loved the idea of Rust. I was sold a memory-safe C++, and that sounded awesome. But what I got instead was an ML with better low-level support; it felt like an enormous bait-and-switch, as nobody is interested in yet-another-functional-language.


Overlong shifts are currently not handled correctly, yes, but they will not be undefined behaviour; they will possibly be implementation-defined but will not lead to memory unsafety.
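As an aside, the standard library also grew an always-defined query form, so you can sidestep the question entirely; a sketch using today's `checked_shr` API (which post-dates this thread):

```rust
fn main() {
    let x: u32 = 1;
    // An overlong shift is a well-defined None, not garbage or UB.
    assert_eq!(x.checked_shr(32), None);
    assert_eq!(x.checked_shr(0), Some(1));
}
```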

> use-after-move [...] ---references outlasting their owner---

Not really, e.g.

  std::unique_ptr<int> x(new int(1));
  foo(std::move(x));
  std::cout << *x; // undefined behaviour
Unless you mean something other than `&` references.
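For comparison, the Rust analogue of that snippet is a compile error rather than undefined behaviour (a sketch; the commented line is what the compiler rejects):

```rust
fn foo(b: Box<i32>) -> i32 {
    *b // takes ownership; the allocation is freed when `b` goes out of scope
}

fn main() {
    let x = Box::new(1);
    let v = foo(x);        // `x` is moved into `foo` here
    // println!("{}", x);  // error[E0382]: use of moved value: `x`
    assert_eq!(v, 1);
}
```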

> Honestly, I loved the idea of Rust. I was sold a memory-safe C++, and that sounded awesome. But what I got instead was an ML with some low-level extensions; it felt like an enormous bait-and-switch, as nobody is interested in yet-another-functional-language.

Something in this sentence has to be wrong, since people are clearly interested in Rust: either people are interested in YAFL or Rust isn't what you seem to think it is.

Anyway, that just sounds like a 'problem' with your background/expectations and/or whoever sold it to you. Rust is a C++ competitor (i.e. it targets a similar low-level space), but it is definitely not trying to just be a C++ rewrite that fixes the holes. I don't think there's any official marketing implying the latter.


My comment appears to have been more polarizing than I ever expected. Let me clear some things up.

In your unique_ptr example, you're right: the reference doesn't outlast its owner, but it becomes a dangerous zombie after getting its guts removed. It is worth mentioning that the behavior may or may not be UB depending on how `foo` takes its parameters: std::move is really just a cast.

Maybe interest is the wrong word to use; many functional languages have generated a lot of interest, but this interest has historically not translated into actual mass usage. Instead, popular languages have adopted certain functional features over time (lambdas, comprehensions, type classes, etc), but have remained fundamentally Algolian for the most part. Rust seems to go in the opposite direction: start with ML (or something ML-like, anyway), and strip it down until it fits into the C++ space.

I am definitely interested in C++ replacements, to be clear. I have explored things that stray from it much more than Rust, such as Haskell and ATS, but I went into those fully expecting to see something different. But look at documents such as [1], and tell me that it doesn't create the expectation that Rust is trying to fit C++'s shoes a little too tightly. Additionally, trawling through mailing list discussions, familiarity with C-like languages seems to have been a design principle since the start (see for example the <> vs [] for generics debate).

Finally, I wasn't (and am not) passing judgment on Rust for being what it is. I was conveying my experience from being excited about it, to being less excited about it after actually learning it. I don't expect a productive discussion to come out of it; I've also seen how defensive the Rust community can be [2].

[1] https://github.com/rust-lang/rust/wiki/Rust-for-CXX-programm...

[2] https://pay.reddit.com/r/rust/comments/2bbeqe/it_started_out...


I think the situation of your [1] is that Rust has ended up close enough to C++ that it's useful and meaningful to provide a translation guide between concepts, to help C++ programmers get up to speed more easily; it's certainly not a design document or anything like that. Maybe I'm missing your point. (As I said elsewhere, C++ has had a lot of experience in this space, and so has a lot of good ideas, Rust is not ashamed to borrow them.)

On that note, would you interpret [a] as meaning Rust is trying to be a functional language? The reality is more plagiaristic: functional languages have nice features, and so Rust borrows some of them. (In my mind the correct interpretation of both documents would be: Rust is a mesh of various languages, with enough similarity to many for translation guides to be helpful.)

There have been syntactic decisions tilting towards C++/Java/C# programmers (like the <> for generics), but as far as I can remember those sorts of decisions are all minor in terms of semantics. For the most part the actual semantic behaviours are considered in terms of "does Rust need this" rather than "will this move us more towards C++" (even if the feature was inspired by C++).

[a]: http://science.raphael.poss.name/rust-for-functional-program...


You're right, arriving at that kind of conclusion from the existence of a tutorial is bad reasoning. I had wrong expectations, I suppose.

I must thank you for pointing that link out to me, though: it said what I was trying to say much better than I could in its prologue. Namely, how hard it is to sell a functional language to old-school C people, and how Rust may have a hard time with that (even if it's not a pure functional language).


> But what I got instead was an ML with some low-level extensions; it felt like an enormous bait-and-switch, as nobody is interested in yet-another-functional-language.

Rust is not functional. It may draw heavy inspiration from statically typed FP, and closures, ADTs, pattern matching, and an expression-heavy programming style might give that impression, but it is at its heart a procedural systems language. As stated in the blog post, most of Rust's core features map directly to underlying machine instructions, and there is always the opportunity to cede control from the type system if you absolutely have to. Indeed, core library types like `Box<T>` and `Vec<T>` are fundamentally built on `unsafe` code.


What do you mean by 'low-level extensions'? There's nothing in the language proper that can't run on bare metal, how much lower can you get?

If anything, it's the functional parts that feel bolted on: closures are crippled (though getting better soonish), the types that get spit out of iterator chains are hideous, no currying, no HKTs, functional data structures are much harder to write w/o a gc, etc.


> But what I got instead was an ML with better low-level support; it felt like an enormous bait-and-switch, as nobody is interested in yet-another-functional-language.

Leaving aside whether I think that's a fair description of Rust, I think plenty of people are interested in a functional programming language without the overhead of GC that is suitable for use as a low-level systems language.

You probably shouldn't write "Nobody is interested..." when what you really mean is just "I am not interested..."


Well, people tried that with D. Didn't catch on.


It caught on a little.


I've had a vastly happier Rust experience than C++ experience (I've written in both, well beyond the "zomgz 50line starter program").

The Rust compiler is vastly smarter and gets you type checking you have to pay out the nose for in C & C++. I'm a fanboy of Rust, but I would suggest looking hard at Rust for any C or C++ production code going forward. (My default going forward for this space will be Rust unless overriding considerations say otherwise).


I'm not sure in what world writing C++11 is "very straight forward ... safe and clean". You can easily write unsafe code without thinking about it. For example, one can write a function that lends out a reference or pointer to memory that may or may not exist after the call exits; this is impossible in Rust.
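A sketch of what that looks like: the dangling version simply doesn't compile, and the compiler pushes you toward returning ownership instead.

```rust
// Rejected by the borrow checker (a reference can't outlive its owner):
// fn dangle() -> &i32 {
//     let x = 5;
//     &x // error[E0106]: missing lifetime specifier
// }

// The compiling alternative: hand the value back by ownership.
fn no_dangle() -> i32 {
    let x = 5;
    x
}

fn main() {
    assert_eq!(no_dangle(), 5);
}
```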


To be precise, impossible outside defined `unsafe` blocks of code. It's still important to be able to drop down once in a while if you absolutely need to... you just don't want that ability all the time.


Wow, that was a great comment and exactly the type of info I was after.

I think (coming from a dynamic language world) the memory safeness is what pulls me towards Rust. But from what you say and what I've read elsewhere, that was old-style C++ and not modern C++11/14.

Thanks!


Read what the other commenters are pointing out. Maybe the situation is better in C++ now than it was before, but it doesn't mean you can't shoot yourself in the foot, especially as a beginner. Rust was built with safety in mind from the start; there are errors you can make in C++ that the Rust compiler simply won't let you make.

My advice is, if you're learning it for work, then go with C++. Even if Rust succeeds, it will take some years for it to become mainstream, and as pointed out, C++'s library support is far ahead.

If you're learning it for fun or for the sake of learning something new, then Rust is a very nice and promising language, bringing things from functional languages that C++ lacks and offering very interesting tooling around it.

Whatever you choose, after you feel confident with one, go and learn the other, as it will probably give you a better perspective on the strengths and/or weaknesses of both.


I would just point out that the impetus for the Rust language was Mozilla looking for a better language than C++ to implement a browser in. They obviously have a lot of experience writing C++ and know how to do it as well as possible, but found it coming up short, especially as multi-core starts becoming the bottleneck.


C++11 is still C++ though. I've been writing pretty much exclusively C++11 for the last year, and, while they've added a lot of very nice features and I find it painful to go back to C++03, it's still a huge language with lots of hoary edge cases and everything and the kitchen sink built in (now with rvalue reference and move semantics for extra verbosity!).

I haven't touched Rust since 0.7 (where I abandoned it after filing two bugs on the project before I even got argument parsing working), but it's the most promising of the new batch of languages in my mind, and I have some side projects in mind I'd like to try when 1.0 comes out.


Rust is memory-safe; C++11 is not.


C++ is useful when learning Rust, since a lot of documentation assumes the reader is familiar with C++, and there's plenty of C++ documentation that has no prerequisites.


Hmm. I learned Rust without any C++ knowledge (though I did have a fair amount of experience with C), and it was pretty easy. Recently I've had to start writing some C++ (to my dismay), and it has not been so easy... the language is really complex.


I think things have changed :)

Steve Klabnik has been doing a ton of work on the docs, and that has brought a lot of "coming from an HLL" perspective to them.


Hm, I learned it more than a year ago now (and have kept up), so it was before Steve's time :)

Also, I'm coming mainly from a C background, not higher-level language.


Well, before my time being paid anyway :)


Why are those your only choices? What makes your problem unsuited to e.g. Nimrod?


> Green threading: We are removing support for green threading from the standard library and moving it out into an external package.

I only ever looked at Rust from a 500 foot view while toying with it at a hackathon, but I had no clue it had so many different types of threading models. This seems like a step in the right direction, indeed. If Task is going to be your unit of concurrent execution, as much transparency around that as possible is a good thing.


Green threading used to be the default, and used libuv for IO. Then native threading became available. Then it became the default. Now green threading is being moved out of the built-in libs.

Starting with green threads is really easy too:

    extern crate green;
    extern crate rustuv;

    #[start]
    fn start(argc: int, argv: *const *const u8) -> int {
        green::start(argc, argv, rustuv::event_loop, main)
    }

    fn main() {
        // this code is running in a pool of schedulers, all powered by libuv
    }
Edit: You can also start up a new libgreen scheduler pool at any time and add some native threads to the pool. So you can have some threads running with libgreen and some with libnative. (So you could theoretically embed a go program inside a rust program)


It had just two: libgreen backed tasks with "green threads", while libnative backed them with OS threads.


Does that mean every task will be an OS thread? Or are tasks handled by a runtime specific scheduler and will use a pool of threads for IO?


> Does that mean every task will be an OS thread?

By default, yes.


Hmmmm, I hope there are other alternatives in the future (besides libuv). It would be nice to see Rust have a lightweight unit of execution similar to goroutines or Erlang processes.


When will it be pleasant to use on windows?


Hopefully the tooling will take off once the language stabilizes. Using a "newish" language is a total exercise in frustration when you are a spoiled kid who expects an IDE to come with your language configured, and a nice big play button for running your first program.

For a language to take off, it badly needs a very good (ideally "official") development experience, such as a custom Eclipse implementation or a very good IntelliJ plugin. When a dev experience comes with batteries included, it lowers the threshold substantially compared to "use whatever text editor you like and compile on the command line, here is a readme".


How difficult is it to start from a huge C++ codebase and start adding new features in Rust? How bad is such an idea? I know there is interoperability, but those are toy examples, does anybody have real life experiences?


This is all good. No higher kinded types for v1.0?


Nope. We're pretty sure they'll be backwards compatible, and there are too many other outstanding issues (eg, syntax) and too little time. Lots of us want them though!


There isn't an RFC for them, at least that I could find. Have they just not gotten that far in terms of thought?


I have an RFC sitting here (and a few blog posts talking about it), but considering it'll be post 1.0 when such a thing is even entertained, I haven't submitted it. There are more pressing and more appropriate things to work on first.


Awesome - looking forward to seeing that. :)


I'm pretty sure at least a few people have ideas/are working on them. I have a 75% finished RFC that I have been sitting on because the language has been in such flux lately. I wouldn't be surprised if Niko, Aaron or someone else on the core team doesn't already have strong ideas about them.


That's right, there hasn't even been enough work for an RFC.


I think one of the core challenges is how traits play with HKTs. You can only allow for the defining of higher-kinded type parameters and type arguments so many ways, but it becomes more complicated when dealing with the resolution of traits around HKTs. In Haskell, for example, there is no `Self`, so the type class can easily be dispatched on a higher-kinded type without needing a receiver type. In Rust, `Self` must currently be fully applied, as it is resolved from a value directly: `value.trait_method`, and we pick our instance based on the value's type. If we want to implement Functor (Mappable or w/e) we need to come up with another approach. I've thought about a few, but they seem to play best with multi-dispatch + associated types + where clauses. I'm hoping to finish the RFC once all these features land and I can actually hack a prototype, instead of scribbling on paper.


> I'm hoping to finish the RFC once all these features land and I can actually hack a prototype, instead of scribbling on paper.

This would be amazing. One of the big issues blocking HKTs has been that though many folks want it there hasn't really been anyone willing to champion it yet. No pressure though - it is a tough problem.


Yeah I'm game to be that person. I have been willing to work on it for months, but between school and making money to support myself to go to school time has been scarce. Feel free to bother me online, pressure is good.


What I think is an important feature of this language is the ease with which it can interact with other languages. Especially the possibility for Rust code to be called from foreign languages such as C very easily.

I'm looking forward to even better iOS support with arm64; I think it is really important to offer an alternative.

BTW is there an RFC on dynamically sized types? I can't find any; I'm looking to learn how it works.


> Especially the possibility for Rust code to be called from foreign languages such as C very easily.

The second production deployment of Rust is a Ruby gem, written in C, that calls out to Rust. It's used in skylight.io, if you're curious.

> BTW is there an RFC on dynamically sized types?

IIRC, DST was before the RFC process even existed, it's just taken forever to implement. The Duke Nukem Forever of Rust. :) http://smallcultfollowing.com/babysteps/blog/2014/01/05/dst-... is what you want to read, IIRC.


> The second production deployment of Rust is a Ruby gem, written in C, that calls out to Rust. It's used in skylight.io, if you're curious.

Yep! I'm one of the authors of that project. The fact that Rust provides automatic memory cleanup and the attendant safety without runtime overhead (even ARC has non-trivial runtime overhead) was a huge win for us, as was the transparent FFI.

We were looking for a way to write fast code that was embeddable in a Ruby C extension with minimal runtime overhead and without a GC (two GCs in a single process is madness). We also wanted some guarantees that we wouldn't accidentally SEGV the Rails apps we were embedded in. Even last December, Rust was a clear winner for us.

We've been shipping Rust code to thousands of customers for many months, and given the language instability, it's worked really well for us.


Are there any open source libraries spun off from the Skylight agent? It would be nice to see some examples of production-quality Rust code.


A few:

* https://github.com/carllerche/hamcrest-rust - a (badly in need of more fleshing out) testing library

* https://github.com/carllerche/nix-rust - bindings of Linux/OSX-specific APIs to Rust

* https://github.com/carllerche/curl-rust - a binding of libcurl to Rust

* https://github.com/carllerche/pidfile-rust - a library for using a pidfile for mutual exclusion across processes

* https://github.com/carllerche/mio - a low-level IO library that attempts to implement an epoll-like interface across multiple platforms


> IIRC, DST was before the RFC process even existed, it's just taken forever to implement. The Duke Nukem Forever of Rust. :) http://smallcultfollowing.com/babysteps/blog/2014/01/05/dst-.... is what you want to read, IIRC.

Thanks for the info and the link!


Agreed. Two areas I can think of:

1. A Rust library that is callable from Python. Many are using C/C++ for optimising Python programs.

2. The ability to create an iOS framework library in Rust callable from Swift/Objective-C; similarly, a Rust library callable from the Android NDK.


Congratulations to the Rust team! Can't wait to start learning the language and building stuff using it.

I'm looking to learn about how Rust's refcounting memory management works (and how it differs from how, e.g. Objective-C or Swift's runtime-based reference counting works), mostly for personal edification. Can anyone point me to any good resources?


Steve already told you that refcounting isn't reached for by default, and I wanted to share an anecdote to emphasize that.

I've written a regex library, CSV parser, command line arg parser, elastic tabs and quickcheck in Rust. Behold:

    $ find ./ -type f -name '*.rs' -print0 | xargs -0 grep Rc
    $
I've definitely reached for it a few times as an escape hatch, but I've always ended up finding a cleaner approach that let me do without it.

Of course, there are plenty of legitimate uses of refcounting. I just haven't hit them yet. :-)


You're logged in as root? O_O :p :)


Haha, no, I added that in after-the-copy-and-paste. :P

Fixed nonetheless. D'oh.


It's worth noting that you don't reach for reference counting by default in Rust: you reach for references, then boxes, THEN Arc.

You can find documentation on these types here: http://doc.rust-lang.org/guide-pointers.html


Well, you wouldn't go for Arc (atomic RC) unless you need to share data across tasks (threads); within the same task you can use Rc.

This is different from shared_ptr in C++ which is always atomic and LLVM has to try hard (in clang, no idea what GCC does) to eliminate some redundant ref-counts.

Oh and since Rc is affine, you only ref-count when you call .clone() on it or you let it fall out of scope.

Most of the time you can pass a reference to the contents, or, if you need to sometimes clone it, a reference to the Rc handle, avoiding ref-counting costs.
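A sketch of that, using today's `std::rc::Rc` API (`strong_count` is a later addition, but the counting behaviour described above is the same):

```rust
use std::rc::Rc;

fn main() {
    let data = Rc::new(vec![1, 2, 3]);

    // Passing a reference to the contents touches no counts at all.
    let total: i32 = data.iter().sum();
    assert_eq!(total, 6);

    // Only an explicit clone bumps the (non-atomic) count.
    let handle = data.clone();
    assert_eq!(Rc::strong_count(&data), 2);

    drop(handle); // falling out of scope decrements it again
    assert_eq!(Rc::strong_count(&data), 1);
}
```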


That's true, thanks.


Awesome! Thanks.


It'd be wonderful if they kept the ability to define a destructor that is guaranteed to zero memory.

Sometimes, you need that.


Zeroing out memory isn't sufficient, as discussed here: http://www.daemonology.net/blog/2014-09-06-zeroing-buffers-i...

That said, this was discussed on reddit[0], it sounds like there is a way to guarantee that you did zero out memory (but not necessarily copies of that memory, as discussed in the link above), and because Rust is intended to be memory safe, it's not as much of an issue if you don't/can't.

[0] http://www.reddit.com/r/rust/comments/2fnb82/zeroing_buffers...


Not in C. Rust is, as you say, intended to be memory safe.

That means it has the hope of getting it right.


FWIW, you can implement the 'Drop' trait to provide a custom destructor, and then use, e.g. 'volatile_set_memory'[0] to zero out the memory of the object. This isn't subject to the same problems as C, AFAIK.

[0] http://doc.rust-lang.org/std/intrinsics/fn.volatile_set_memo...
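A sketch of that pattern, using stable `std::ptr::write_volatile` as a stand-in for the unstable intrinsic (`Secret` is an illustrative name, not a real library type, and per the sibling comment this doesn't cover shallow copies left behind by moves):

```rust
struct Secret([u8; 32]);

impl Drop for Secret {
    fn drop(&mut self) {
        for b in self.0.iter_mut() {
            // Volatile writes so the compiler can't elide the zeroing.
            unsafe { std::ptr::write_volatile(b, 0) };
        }
    }
}

fn main() {
    let s = Secret([0xAB; 32]);
    assert_eq!(s.0[0], 0xAB);
} // `s` is dropped here; the buffer is zeroed before the memory is reused
```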


You would have to force the sensitive content to be dynamically allocated. All types in Rust can be moved via a shallow memcpy and that will leave around dead shallow copies. For example, `Vec<T>` will leave around dead versions of the values when it needs to do a reallocation that's not in-place.


I'm excited for the release too. I know many people who hesitate to touch Rust, even if interested, because the language is still in active development.

One minor concern though: I don't see how "where clauses" are simplifying the language. Looks like something that could be added after the release.


> On minor concern though, I don't see how "where clauses" are simplifying the language. Looks like something that could be added after the release.

Where clauses simplify Rust code, they don't simplify the language itself. They're also important for associated items. For more: https://github.com/aturon/rfcs/blob/associated-items/active/...


It seems that 1.0 is going to be a solid release. But the post-1.0 Rust is going to be even more exciting once they have added inheritance and subtyping which enable true polymorphic reuse!


Object inheritance is only useful in rare edge cases so your statement doesn't make much sense. Traits have default methods, inheritance and can be used as bounds on generics (no dynamic dispatch, type not lost) or as objects (dynamic dispatch / type erasure).

What makes you think object inheritance is such a sure thing anyway? I don't expect either object inheritance or optional garbage collection to materialize, ever. In fact, I'd be pretty sad if the language was degraded with that extra complexity - I think it would be a worse situation than the mess that is C++. There would no longer be a shared idiomatic Rust.


I think optional garbage collection might materialize, but I would imagine it would end up being a third-party library.


Why are trait objects not "true polymorphic reuse" already? Inheritance is pretty much just a performance and expression optimization to be avoided in most situations (and I believe is going to be in 1.0)


Adopting the channels system is interesting. Are there any other languages that have a scheduled release pattern like this?


Go sort of does this, though on longer timescales. Developers are at "tip". Actual releases are every 6 months, and before the release there is always at least one release candidate, and sometimes an actual beta.


I'm not aware of any, but if there are, I'd like to talk to them about their experiences doing so. :)


> We are removing support for green threading from the standard library and moving it out into an external package. This allows for a closer match between the Rust model and the underlying operating system, which makes for more efficient programs.

That's an interesting move in comparison to Go, which multiplexes coroutines onto threads.


It's more systems-like. We don't pay any overhead for calling into C code, and M:N scheduling doesn't really provide advantages over 1:1 scheduling when you don't have a GC (and even when you do, the differences are fairly negligible on modern Linux).


With 1:1 scheduling, how do you limit stack size to something reasonable (a few kB per thread), which is necessary when you need to launch tens of thousands of threads?


If you know you only need a small amount of stack size, you can set the stack size to be small via the task builder: http://doc.rust-lang.org/master/std/task/struct.TaskBuilder....
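(Aside: today's standard library exposes the same knob on plain OS threads via `thread::Builder`; a sketch with the current API rather than the 2014 `TaskBuilder` one:)

```rust
use std::thread;

fn main() {
    let handle = thread::Builder::new()
        .stack_size(32 * 1024) // 32 KiB instead of the platform default
        .spawn(|| (0..100).sum::<i32>())
        .unwrap();
    assert_eq!(handle.join().unwrap(), 4950);
}
```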


Thanks for the link. But what if I don't know the stack size in advance? I guess the model with a stack that starts small and grows on demand is only possible with M:N scheduling, not with 1:1 native scheduling?


Can Rust be used to power HTTP endpoints like a REST API? Or is it more designed for system-type daemon stuff? I guess I don't fully understand the marketing of it; however, I haven't ever written anything in C or C++ either.


There are already a number of libraries and frameworks that implement an HTTP server in Rust, though things seem to be coalescing around Teepee.

http://chrismorgan.info/blog/introducing-teepee.html#main

There are a few REST API libraries as well.


When I first looked at Rust, I recall being very confused about the distinctions between crate, package, module, and library. It seemed like an area which could use some simplification.


"Package" and "library" are two words that mean the same thing. They're more generic terms for "crate," which is Rust specific. "module"s are ways of splitting up your code inside a crate: one crate has many modules, and each module belongs to one crate.


Thanks, Steve! Have not yet had a chance to go through the new guide, but looking forward to it!


Any time. There'll be a full guide to the module system when I get some time...


Wait, they've got rid of unique pointers? When did that happen? They were there a couple of months ago. That was one of my favourite language features...


My understanding is that they're still there, just as part of the standard library rather than as a language feature.


That's correct. What used to be ~T is now Box<T>. We say 'boxes' instead of 'unique pointers' now.


I haven't looked at Rust, but it seems from the outside that releasing a stable version of a language every six weeks is very aggressive?


I'm sure that seemed true about web browsers too. The trick is feature flagging all new API surface and only including flagged code when it's deemed stable.


Continuous deployment is common in the web world. It's true that it's aggressive, but we think it's going to have significant benefits.


Yes, I'm not saying it's a bad thing. It does imho put a lot more pressure on the language developers than a cd'd web app, with regards to backwards compatibility and such. (With ~10 releases a year there's bound to be accidental breakage that passes the beta period.)

Ambitious might have been a better word than aggressive.


People tend to expect more stability from a systems language than a web one. Can I trust that my Rust 1.0 code will work, unchanged, 20 years from now? If not, the language is likely to remain in the enthusiast realm.


To be clear, Rust is following SemVer, and the six-week releases are 1.x versions. So they should be backwards compatible. There's no current timeline for a 2.x.


Could someone lay out the advantages of Rust over C/C++/Dart/Go/other languages that cater to a similar space?


AFAIK Rust is the only language that offers memory safety without garbage collection.


C++11's std::unique_ptr and std::shared_ptr are also nice features. Does Rust provide more guarantees?


Rust provides memory-safe versions of them; e.g. in C++, using a unique_ptr after moving it leads to undefined behaviour, while Rust rejects use-after-move at compile time. Rust also allows 100% safe references into the memory owned by those types (no possibility of accidentally returning a reference into memory that has been freed).

Lastly, Rust's type system actually allows 'this value must be kept local to a single thread' to be expressed, meaning there are two shared_ptr equivalents:

- Arc (Atomic Reference Counting), which uses atomic instructions like shared_ptr

- Rc, which uses non-atomic instructions, and so has much less overhead.

Rust also has move-by-default semantics, so there's no extraneous reference counting due to implicit copying. (Which is particularly bad with the atomic instructions of shared_ptr.)
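A sketch of the Rc/Arc split with today's API (the types have moved around since 2014, but the semantics match): only Arc's atomic counts may cross threads, and each count change is an explicit clone.

```rust
use std::sync::Arc;
use std::thread;

fn main() {
    let shared = Arc::new(vec![1, 2, 3]);

    let handle = {
        let shared = shared.clone(); // one atomic increment, explicit
        thread::spawn(move || shared.iter().sum::<i32>())
    };

    assert_eq!(handle.join().unwrap(), 6);
    // Swapping Arc for Rc here is a compile error: Rc is not Send,
    // so the cheaper non-atomic counts can never cross threads.
}
```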


What about Ada?


I'm not sure about Ada, because I never used it, but it seems to have some lifetime checking comparable to Rust's borrow checker. However, it also seems less powerful (in Rust you can specify the lifetime explicitly if need be, in Ada it seems you would have to give up safety).


Swift


Reference counting is just another form of garbage collection, http://www.cs.virginia.edu/~cs415/reading/bacon-garbage.pdf


Does it use the GPU, memory compression, code rewriting, automatic vectorisation, multiple cores, or any other performance techniques from the last 10 years?


Rust is a low-level language so everything user-space can be implemented, and the main (and only) compiler rustc uses an industrial strength optimiser (LLVM) which has support for automatic vectorisation.

Furthermore, the type system is designed to be very good for high-performance concurrency.

See http://blog.theincredibleholk.org/blog/2012/12/05/compiling-... for an example of using Rust on a GPU, and I can only imagine that it has become easier since then.



