It never gained widespread adoption because it was off by default. Rails 4 turns it on by default on multithreaded web servers like Puma.
> And even in Rails 4, concurrent request handling is multiplexed on a single CPU core, leaving Rails unable to take advantage of the parallelism of modern architectures.
This isn't true on JRuby, where Ruby threads are 1:1 with JVM threads (which are in turn 1:1 with native threads) that execute in parallel on multiple cores.
At my employer (Square) we run many of our Rails apps in thread safe mode on top of JRuby, providing parallel request processing across multiple cores on a single JRuby/JVM instance.
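For concreteness, running multithreaded is mostly just a Puma config concern; a generic sketch (not our actual production config, and the thread/worker counts are placeholders):

    # config/puma.rb -- a minimal multithreaded setup
    threads 8, 16      # min/max threads per worker
    workers 2          # forked worker processes on MRI; omit on JRuby, where one process can use all cores
    preload_app!       # load the app before forking so workers share memory via copy-on-write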
JRuby is of course incompatible with gems that use C extensions; the place where I used Rails had such dependencies, so JRuby was not an option. I agree that JRuby is otherwise preferable to CRuby, though.
I'd be curious to hear how Square has found ruby (independent of rails) to be from a maintenance standpoint.
> JRuby is of course incompatible with C extensions to Rails
The fact that old-style native extensions are tied to MRI is one of the motivations for Ruby FFI, which provides a common mechanism for interfacing to native libraries on MRI, JRuby, and Rubinius (and maybe some other implementations, as well.)
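As a rough sketch of what the FFI style looks like (binding a libc function here, purely as an illustration):

    require 'ffi'

    module LibC
      extend FFI::Library
      ffi_lib FFI::Library::LIBC
      # Bind strlen(const char *s); the same binding works on MRI, JRuby, and Rubinius.
      attach_function :strlen, [:string], :size_t
    end

    LibC.strlen("hello")   # => 5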
> And even in Rails 4, concurrent request handling is multiplexed on a single CPU
Actually, even on YARV you will use more than one CPU; the GVL just won't allow them to be used efficiently.
My point is: there is nothing in Rails that forces requests to be multiplexed onto a single CPU; it all comes down to which language implementation and web server you choose.
The post is also very inconsistent in its own arguments. If using multiple cores and sharing memory between them is a criterion, Node is also not a good option compared to what you have in the JVM, Erlang VM, Haskell, Go, etc.
What is the state of thread safety among gems? I have thought a lot about it, but the danger of threading issues in the gems you pull in is a bit daunting, to be honest.
Not as great as in some other language ecosystems; however, many people are successfully using Sidekiq, a multithreaded job queue (and, more generally, the Celluloid concurrency framework that Sidekiq is built on).
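For anyone unfamiliar, a Sidekiq worker is just a Ruby class with a perform method; a minimal sketch (the model and method here are invented):

    class HardWorker
      include Sidekiq::Worker

      def perform(user_id)
        # Runs on one of many threads in the Sidekiq process,
        # so anything touched here needs to be thread safe.
        User.find(user_id).recalculate_stats!
      end
    end

    HardWorker.perform_async(42)   # enqueue now; executed later by the worker threads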
Having spent a lot of time in both Rails and Node, the author's unequivocal endorsement of Node as better than Rails is just silly.
Building a non-trivial app on Node requires that you do an awful lot of plumbing. Yes, the performance is very nice. And it's going to be great when the community grows up a bit and things start to Just Work with each other. But that day is not here yet.
A lot of the components that people depend on are still really buggy. In the last three days I've had to debug and fix four bugs in other people's npm modules. The community's attention is spread all over the place, because there isn't one dominant way of doing things.
People can't even agree on sane interoperable packaging for code that needs to work on both browsers and in node. It's a mess.
I don't see an unequivocal endorsement of Node in this rant. He only said that Node is better by his criteria if you really want to use a dynamic language.
For what it's worth, I'm using express, and it doesn't come close to solving my plumbing problems.
Example 1: express can't reliably send a 500 if your code hits an unexpected exception somewhere down in the callback chain. To do it correctly, it would need to either use Node's Domains feature, or would need to use Promises. My own code uses Promises, so I had to extend express to respect them.
Example 2: I have quite a few modules that need to be shared by both node and the browser. Browserify is almost good enough, but to get it actually working, I had to find and fix bugs in four different npm packages.
Example 3: Then I try to add engine.io, which generates client code completely incompatible with browserify. To get the engine.io client included in my javascript bundle, I literally do a regexp replace on its source code.
Example 4: Then I try to add Ember.js to the client bundle. It's packaged in yet another incompatible way. I need to write a wrapper for it so it can find its dependencies, and so that the top level symbol can actually get exported other than on window. I write another slightly different wrapper for ember_runtime so I can load the models on server side.
Example 5: Does anybody do automated management of templates as clientside dependencies? It doesn't seem like it -- it seems like most people just package up all templates into one big pile. Well, I did some more plumbing, and now my code can say "require('templates/foo')" and template foo will automatically get precompiled into the relevant bundle.
Except that nearly all of the cool kids in the community hate Express and nearly all of them believe that chaining micro-utilities together is the wave of the future. (Because, of course, researching 25 utilities takes the same time as researching one.)
Express uses a convention that is not harmonious with many of the later best practices that have been emerging.
Does the community hate Express? You can't really hate on it that much considering TJ is one of the people leading the movement: http://component.io
I don't like Rails, and I don't like Ruby, but this wasn't a particularly enlightening piece. He doesn't explain why using something like Django in place of Rails makes sense; he handwaves a bit over Node. There's some statically typed language dogma, and that's it.
I'd love to read a good hit piece on Rails, written by someone who spends the time to lay out a strong technical justification for their opinion. This was disappointingly Twitterish.
Couldn't agree more with this sentiment (though I do like ruby as a language quite a bit). I have used other frameworks: sinatra in ruby, ring/compojure in clojure, flask in python, and there are a myriad of reasons to choose one of these over something like rails, but the author discusses none of this. I think much of the problem with rails is that it's so monolithic and has become rather bloated in certain respects.
I would very much appreciate a more thorough, thoughtful discussion of the pros and cons of various design decisions in rails. It does a lot of things right which is why you see people modeling various tools after rails (database migrations for example... I recently wrote a clojure plugin that closely emulates the simplicity of making schema changes ( https://github.com/ckuttruff/clj-sql-up )).
I work with rails every day and a lot frustrates me about it, but it's also quite effective for a lot of things (which may be why "everyone and their dog" seems to use it). If you really want to have any influence over this fact, it would help tremendously to create a more compelling argument; I'm sure that would inspire much more interesting dialogue.
I've worked with rails a lot over the years, including high traffic sites, and the general comment I'd make is that Rails scales more or less like PHP, but consumes far more ram per process doing so. Beyond that, whether it's good for your project probably depends a lot more on culture and the people involved than any technical distinction. Deploying rails was historically rather clunky, but that's been resolved over the last couple years thanks to folks like Heroku.
This piece highlights the inherent drawback of all dynamic languages: performance. We've been through the performance argument too many times to count with both Rails and PHP.
I don't know what kind of taste I would have if I were to follow this piece as advice. Am I expected to do CGI in straight C? Ridiculous.
For a midsized-to-big project, static typing is really nice. E.g., a few years ago I had a Perl project that only let you know a function was missing at run time (i.e., when the user asked for that code).
Thanks, I didn't know about that. Of course I used a test suite. But checking that all functions actually exist (in my case some had been renamed) is something you should not have to write manually.
Well, the idea is that your test suite should cover ALL code you have, so you don't have to check all your functions. Devel::Cover helps you by clearly marking code your tests don't run in red, so you know where you need to add tests and don't need to go through manually.
Why do I care if my simple CRUD app can only use a single CPU core? Because someday I might have millions of users and then I'll have Twitter's old problems? I'd love to have to Twitter's problems.
Use the right tool for the job. And claiming that Rails is never the right tool is just silly.
I can't resist (this is the OP): you are missing the point. It's not just throughput, it's high-percentile latency. Latency is critical if you have 1 billion users or 100 users, and it is difficult to bring the high percentiles of the latency distribution down into reasonable territory on Rails since, by default, all operations are essentially serialized.
I guess the problem of high-percentile latency is not widely understood; I'm not sure I understand it myself. Can you explain in more detail? In particular, are you talking about requests that take a while to complete because they have some complex processing, or requests that take a long time to complete because they can't be processed until some other long-running request finishes? The bit about everything being serialized suggests that the main concern is the latter. Does this apply even when using multiple threads under the C Ruby implementation? Why does running multiple web server processes on the same machine not mitigate the problem?
BTW, I don't use Rails or Ruby, but I do use Python for web apps at work (currently CPython, GIL and all). I'm curious to find out if this problem of high-percentile latency applies to Python as well.
So, for any black-box service endpoint, the latency for any given request is obviously just the time it takes for that operation to complete. Ideally one measures both end-to-end latency from the client and server-side latency in order to understand the impact of the network and, for high-throughput applications, any kernel buffering that takes place.
All of that is obvious, I imagine. By "high-percentile latency", I'm referring to percentiles of a distribution of all latency measurements gathered from a given endpoint over some period of time. If you imagine that distribution as a frequency histogram, the horizontal axis ends up being buckets of latency ranges (e.g., 0-10ms, 10-20ms, 20-30ms, etc), and the bars themselves of course represent the number of samples in each such bucket. What we want to do is determine which bucket contains the 95th percentile (or 99th, or 99.9th) latency value.
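Computing it is nothing fancy; a rough nearest-rank sketch with made-up sample numbers:

    # samples are latency measurements in milliseconds
    def percentile(samples, pct)
      sorted = samples.sort
      sorted[((pct / 100.0) * (sorted.length - 1)).round]
    end

    latencies = [12, 9, 14, 11, 8, 13, 10, 15, 110, 250]
    percentile(latencies, 50)   # => 13  (the median looks fine)
    percentile(latencies, 95)   # => 250 (the tail tells the real story)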
You can see such a latency distribution on page 10 of this paper which I published while at Google:
http://research.google.com/pubs/pub36356.html
Anyway, it is a mouthful to explain latency percentiles, but in practice it ends up being an extremely useful measurement. Average latency is just not that important in interactive applications (webapps or otherwise): what you should be measuring is outlier latency. Every service you've ever heard of at Google has pagers set to track high-percentile latency over the trailing 1m or 5m or 10m (etc) for user-facing endpoints.
Coming back to Rails: latency is of course a concern through the entire stack. The reason Rails is so problematic (in my experience) is that people writing gems never seem to realize when they can and should be doing things in parallel, with the possible exception of carefully crafted SQL queries that get parallelized in the database. The Node.js community is a little better in that they don't block on all function calls by convention like folks do in Rails, but it's really all just a "cultural" thing. I don't know off the top of my head how things generally work in Django...
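To sketch the kind of thing I mean (invented URLs): three independent fetches that a typical gem would run back-to-back, even though they could trivially overlap, since threads make progress while blocked on network I/O even under the GVL:

    require 'net/http'

    urls = %w[http://service-a.internal/x http://service-b.internal/y http://service-c.internal/z]

    # Kick off all three requests at once, then wait for the results;
    # total wall time is roughly max(latencies) instead of their sum.
    threads = urls.map { |u| Thread.new { Net::HTTP.get(URI(u)) } }
    bodies  = threads.map(&:value)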
One final thing: GC is a nightmare for high-percentile latency, and any dynamic language has to contend with it. Especially if multiple requests are processed concurrently, which is of course necessary to get reasonable throughput.
In my experience, when using Django or one of the other WSGI-based Python web frameworks, the steps to complete a complex request are serialized just as much as in Rails. The single-threaded process-per-request model, based on the hope that requests will finish fast, is also quite common in Python land.
You mention that GC is a nightmare for high-percentile latency. Isn't this just as much of a problem for Go? Would you continue to develop back-end services in C++ if not for the fact that most developers these days aren't comfortable with C++ and manual memory management?
For my own project, the GC tradeoff with Go (or Java) is acceptable given the relative ease of development w.r.t. C++. Since there are better structures in place to explicitly control the layout of memory, you can do things with freepools, etc., that take pressure off the GC.
For high-performance things like the systems I had to build at Google, I don't know how to make things work in the high percentiles without bringing explicit memory management into the picture. Although it makes me feel like a hax0r to talk about doing work like that, the reality is that it adds 50% to dev time, and I think Go/Clojure/Scala/Java are an acceptable compromise in the meantime.
It is possible to build things that minimize GC churn in python/ruby/etc, of course; I don't want to imply that I'm claiming otherwise. But the GC ends up being slower in practice for any number of reasons. I'm not sure if this is true in javascript anymore, actually... it'd be good to get measurements for that, I bet it's improved a lot since javascript VMs have received so much attention in recent years.
Final point: regardless of the language, splitting services out behind clean protobuf/thrift/etc APIs is advantageous for lots of obvious reasons, but one of them is that, when one realizes that sub-service X is the memory hog, one can reimplement that one service in C++ (or similar) without touching anything else. And I guess that's my fantasy for how things will play out for my own stuff. Ask me how it went in a couple of years :)
Just to be clear, do you mean that writing in C++ and doing manual memory management doubles dev time, or makes it 1.5 times as long as it would be in a garbage collected language?
Also, where does most of that extra dev time go? Being careful while coding to make sure you're managing memory right, or debugging when problems come up?
I don't think that doing manual memory management doubles dev time for experienced devs, no... I just mean that, if you're trying to eliminate GC hiccups by, say, writing a custom allocator in C++ (i.e., exactly what we had to do with this project I was on at Google), it just adds up.
I.e., it's not the manual memory management that's expensive per se, it's that manual memory management opens up optimization paths that, while worthwhile given an appropriately latency-sensitive system, take a long time to walk.
I submitted this because I thought it might make for some good discussion, but:
> This post is my attempt to be fair, objective, and, by consequence, unrelentingly negative about Rails :)
To me it seems arrogant to assume that you are being fair and objective when you only point out the negative attributes of something. I'd be more inclined to assume that there are positives I'm overlooking, that might even outweigh the negatives. Especially for something as beloved by developers as Rails.
What evidence is there that running a process per core has a significant negative effect on throughput and/or latency in practice? Benchmarks? Data from developers who tried both approaches while holding all else constant? (The latter is probably quite difficult in practice.) Also, JRuby supports real threads without a global interpreter lock.
I think I agree with this post about the benefits of static typing, though.
I agree that the post would be more compelling if I wrote a benchmark to demonstrate how much faster an in-memory cache is than an off-process or off-machine cache.
From first principles, though, I believe it should be obvious (yes?) that the ~300 nanoseconds it takes to grab a read lock and read from main memory is going to beat the ~1000000 nanoseconds it takes to get a response back from a remote cache over the network. Inasmuch as an application blocks on such cache reads, these sorts of things add up to troublesome latency numbers (and Rails – or at least dallistore – does indeed block on reads like these).
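If anyone wants to reproduce the gap themselves, a crude sketch of the comparison (assumes the dalli gem and a memcached instance on localhost:11211):

    require 'benchmark'
    require 'dalli'

    local  = { "user:42" => "cached payload" }          # in-process "cache"
    remote = Dalli::Client.new("localhost:11211")
    remote.set("user:42", "cached payload")

    n = 10_000
    Benchmark.bm(12) do |x|
      x.report("in-process") { n.times { local["user:42"] } }
      x.report("memcached")  { n.times { remote.get("user:42") } }
    end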
JRuby was off the table for the place I used rails due to reliance on some C extensions.
And I'm sorry if the argument seemed arrogant: I was being tongue-in-cheek about the "unrelenting negativity" part. My point about objectivity was that I tried not to rely on my opinions as much as demonstrable statements. That said, I didn't take the time to actually demonstrate most of those, and for that, shame on me.
How about the performance difference between an in-process cache and a memcached instance on localhost? Surely anyone who knows what they're doing will run the cache on the same machine as the web server.
(I guess that makes a good argument for not using a PaaS that gives you too little control over locality.)
This is a lazy answer, but it needs to be said: it depends on the context and the number of roundtrips to the cache.
If we're talking about a vanilla object cache, I think that it probably only costs 50% or so to make the extra copies and system calls. However, an in-memory "cache" can be something more than just a key-value store. E.g., for the thing I'm building right now, there are some more structurally complex graph structures that need to be traversed with every request to the "cache", and of course making a localhost RPC call for each entry/exit to the cache is really problematic. Though I admit that most caching is unsophisticated stuff, and in those cases it's just a ~2x difference.
Incidentally, we will be adding a cached test in a later round [1] and leaving the implementation particulars in large part to the best-practices of the platform or framework in question. So on Rails, use of memcached is anticipated. On a JVM platform, by comparison, use of a [distributed] in-process cache is anticipated.
Preliminary results suggest an even wider spread of results than seen in the existing single-query test.
Objectivity is a separate thing from the topic of the post. He came into it wanting to write an explanation of why he thinks you should never use Rails, and attempted to be objective about it by including data and describing some caveats to his argument (i.e. scenarios where you might be able to get away with it).
A post that says 'why you should or shouldn't use rails' is a different post and no more or less implicitly objective. Objectivity does not mean 'covering two sides of an argument'; sometimes one side is less or more supported than the other and sometimes you simply don't have the knowledge to cover both sides fairly.
> And it’s no wonder. Until Rails 4, the core framework didn’t even support concurrent request handling. And even in Rails 4, concurrent request handling is multiplexed on a single CPU core, leaving Rails unable to take advantage of the parallelism of modern architectures.
Every single rails installation I have seen runs multi-process and takes advantage of multiple cores. And then he goes on to say something similar, but claims that's a disadvantage because the processes aren't using shared memory to communicate and have to use memcached.
The thing is, the rails model means rolling restarts are a lot easier, and you are a lot more flexible with deployment strategies.
> However, modern dynamic languages (and their incapacity to do the sort of meaningful pre-runtime verification of basic semantic well-being one expects from a compiler) place a burden on test coverage. Of course it is essential in any language to provide test coverage for core algorithms and other subtle aspects of a software module. However, when trying to “move fast" and get to market quickly, one shouldn’t have to write tests for every souped-up accessor method or trivial transformation.
I'm still undecided about this angle. I simply don't find types to be a problem. While I agree fixed types are more performant, not dealing with types makes the code more fun because you're not held back from running it just because you forgot a typecast.
Tests are essential, but the flipside is that tests are quicker to write in a duck-typed language, too.
The post is obviously trolling for clicks - even the title says nobody should use rails, and then he gives the times when you should use it at the bottom.
I don't think anyone is arguing that companies fail to grow because of these languages. It's merely that they would grow more quickly once at scale if they didn't have to spend several years rearchitecting while adding few innovations in the meantime (this is what happened at Twitter, for example).
Hmm, I didn't think it took any of them years to rearchitect per se? Regardless, somebody has to do it =)
I for one am grateful for these giants pushing our current platforms and languages to their limits. On top of that, they've gone and found solutions, each in their own way, to the problem of scalability.
No?
It really did take twitter many years to rearchitect and break their most problematic dependencies on Rails, yes... my understanding is that it was a 4-year process.
Rails is a tradeoff: quick dev time, slow execution.
It's not designed for low-profit-per-request applications. Recent benchmarks suggest about 400 req/sec on a $10/month DigitalOcean server; maxing that machine 24/7 works out to roughly a billion requests a month (400 × ~2.6 million seconds), so you'd need to make at least about 1 cent per million requests. If your profit margin is below that, then rails would not be a good solution.
There's no way you could serve 400 req/s with Rails on a $10 server for any sort of complex web app that requires data lookups and html rendering, partials, etc.
Maybe possible if you cache everything, but that can be quite complex.
It was 200 req/sec on a $5 server; most of the SQL issues are going to be HTTP-framework independent. Russian doll caching is not that complex to set up at all on rails 4, which removes most of the HTML/caching issues.
Yes, you can do 200 r/s on a site that doesn't do much.
If you are doing something like https://www.tanga.com/deals/watches where what's shown changes completely depending on who's looking at it, you will not get that sort of performance.
Unless you cache the heck out of everything, but that just works around Ruby being slow at generating HTML and Rails being slow at rendering partials, creating URLs, etc.
Slurping in a whole tonne of data from SQL and sorting it on the app side is going to be slow anyway. Better off leveraging SQL, or pre-computing the result sets in SQL.
Can you back up "complex"? I did extensive work on caching with 2u.fm on Rails and found it exceedingly simple, especially coming from Java or PHP.
Plug in the dalli gem, and pass in the updated_at timestamp as part of the key.
Edit: Perhaps if you had to re-write an app to support caching you'd run into trouble, but that's outside the scope of this article. If you build a rails app, you plan for caching from the start.
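Something along these lines, roughly (a sketch; the Product model and the expensive_summary_for helper are stand-ins):

    # Rails.cache backed by dalli (config.cache_store = :dalli_store)
    def cached_summary(product)
      # cache_key already embeds the id and updated_at timestamp,
      # so saving the record naturally expires the old entry.
      Rails.cache.fetch([product.cache_key, "summary"]) do
        expensive_summary_for(product)
      end
    end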
If users are shown different content, you can't cache easily; content_for quits working, etc.
Caching in Rails is used to fix the symptom of Ruby/Rails having terrible performance for rendering templates/html/helpers/urls. It's fine if you have a small, simple site. For anything complicated/large, it can become a pain.
By "complex", I mean something with hundreds of routes (I have close to a thousand), hundreds of controllers, hundreds of views/partials, etc.
But if you have a small site that won't have lots of code, then sure, Rails is going to perform great.
What I don't get is how number of controllers or views has anything to do with this. Size of the app has no effect on the app speed at all so I'm not sure why you'd bring that up. Complexity of pages does.
But still, rendering even complex views is usually no more than 200ms on MRI, and less on others. With multi-core and multi-threading and (easy, nested) caching it's trivial.
How do you explain 37signals and Basecamp? You're spreading FUD. The real world proves that Rails can scale, and like I said, it's only getting better, not worse.
I hate rails and everything it stands for, but it's often the right choice for a business. The choice to develop faster in exchange for a heavier maintenance burden later on (which is what a dynamic language gives you) is often a good one. I don't see the concurrent request handling of node or django outweighing greater library availability and ease of hiring (often the hardest part of any business). True, any great engineer can learn a new language - but it takes time, time you may not have. (To say nothing of the inconvenient fact that many businesses can get by perfectly adequately with mediocre programmers).
I have my fair share of issues with Rails, but I think this is a bit extreme.
When I look into other ecosystems, the amount of library support for building modern web applications just doesn't compare. I think it's this aspect of Rails that really helps get from zero to viable product so quickly.
That said, the performance issues do start to take a toll after a while. And forget it if you want to do something counter to the way Rails wants you to do it.
For reasons like this, and I would add, scary security history, I decided against using Rails.
I've experimented with Django and lightweight Python frameworks, Node.js and Java 6 EE, among others.
What has worked best for me was C# + ASP.NET MVC. The C# language has several features that I find appealing and lead to clean, efficient code (such as dynamic, lambda, LINQ and async, among others). The modern incarnation of ASP.NET running under IIS is quite efficient and productive both in development, profiling & diagnostics, and in production.
I can't imagine why a startup would lock themselves into Microsoft's ecosystem knowing that the more you grow the more those licenses are going to bite.
Unless you are doing it totally wrong the license will cost far less than paying developer wages and if your team is more productive in that environment it is a net gain. I agree that if you have a startup with no revenue and everyone works for free the licensing can be an issue (unless you use Bizspark).
I can't speak for others but in my case, consuming services of the Windows Azure platform takes care of licensing (there are no additional license costs). There is a degree of lock-in but there are also many benefits. As I said, it's my personal opinion and experience, based on the kind of applications I usually develop and the kind of market I work for (enterprise).
That reminds me of the reasoning during the housing bubble when people took adjustable rate mortgages with a low teaser rate, then got nailed when the higher rates kicked in. BizSpark payment shock after year 3.
As I understand it, companies get to keep any licenses they procure during the three year period at no charge upon graduating the program. Procuring additional licenses then starts to cost, but those are discounted for a two year period post-graduation.
Saying it's analogous to shady adjustable-rate mortgage practices is a bit of a stretch.
Maybe a small stretch.. But if you expect your company is going to be rapidly growing going into year four, you are going to go from zero software cost to very high software costs. After three years of commitment and investment are you going to spend the time to switch platforms? At that point you are stuck. Better to commit to free from the start.
I've done some web dev with node, python (GAE) and php. I ended up using C# + mono's web server for a recent project and was pleasantly surprised.
ASP.net is showing its age but there are some solid libraries, reasonably comprehensive documentation, and some nice new frameworks like MVC out there to use (I didn't even use a framework, just wrote 500 lines of wrapper logic so I could expose some REST services that handled JSON input/output and then built a connection pool for my Redis connections). If you know C# (or another .NET language) well enough, you can lean heavily on the type system and write automated tests for everything else and get all your errors/red tests displayed to you in your IDE. I hit very few runtime errors while building the services.
It's also pleasantly surprising how easy ASP.net deploys seem to be: rsync your app folder to the web server; the app.config and bin/ folders inside ensure that all your dependencies and configuration move over to the target machine. Mono's ASP.net server seems to support almost everything Microsoft's does, so I think I only ran into one behavioral difference (and it was documented, albeit a little hard to find).
ASP.net also has some of those same code reuse benefits you get with node.js, just in a different direction. Now if you have any native C# applications or libraries for doing interesting things, you can expose them as a web service trivially. JSIL's online sandbox (http://jsil.org/try) is literally just the compiler libraries deployed to a linux box with a 100-line shim over them that does compilation and caching and error reporting.
I took a look at F# but for the kind of applications I write, I prefer to follow mainstream, for the bigger ecosystem and less surprises, and C# is powerful enough for me.
I've actually been quite impressed with C# and .net as well, I can't always afford to use it, but it wasn't at all unpleasant to develop an app in.
The thing is, by the criteria and benchmarks the author of this post is using, asp.net MVC would be relegated to the dust bin for the same reason Rails was; it's at the bottom of his performance graph.
I'm interested in data about that. Can you share what you have? I've been able to find these datapoints[1] and they show ASP.NET MVC to be the fastest full-stack web framework they tested on Windows.
I may be reading it wrong, but in the benchmarks the author linked to showing database-access responses per second on EC2 Rails was getting ~700 and .net MVC getting a max of 1400 with a min of 400. That seems about equivalent given the best .net MVC version is using MongoDB for storage and Rails is using MySQL. Would be nice to see .net MVC with SQL Server or Rails with Mongoid.
This is nothing but link baiting. At the end of the post he adds the qualifier.
>If you already know how to get things done in Rails, you’re in a hurry, you don’t need to maintain what you’re building, and performance is not a concern, it might be a good choice.
This is exactly why Rails will remain as the goto framework for many developers.
Time to market and developer productivity matter so much more than capital expenses that it is ridiculous. I could buy another server in the time it takes to pay a developer for a day's work. Not to mention, not all the features on a web application need to serve more than 2500 requests per second. The ones that do can be refactored into a web service with higher throughput or designed to be scaled separately from the rest of the features. It doesn't make sense to daisy-tank (to pick daisies with a tank) every single feature to support throughput it doesn't need at the expense of developer time. Moreover, until there are analytics for what your users are actually using, deciding what to optimize is entirely speculation. Bad science. The best way to get those analytics is to be live, and the best way to be live is to have a built product.
Benchmarks of Linux on a physical machine rather than a virtual machine are not representative of performance for (gawd help me) cloud-centric deployments. These benchmarks assume the maximum benefit from compiler optimizations, which would not be available on a virtualized machine.
I'm not sure what compilers check beyond syntactic errors. Those aren't any slower to fix in a dynamic language. I'd recommend checking out Sandi Metz on testing.
The rest is just language preference. And my preference is ruby. Python is fine with me too. Javascript makes me a sad panda, but it runs in all the browsers and it's a lot better with the magic of coffeescript.
Recently I spent around an hour writing a small image resizing server in Ruby/Rails and Go (a fixed amount of time in each). I have several years of Rails experience and a year of Go experience. The Go server code was significantly more verbose and had far fewer features and worse error handling given the amount of time I had. I realize it is not the best comparison given that I am more experienced with Rails - I believe, though, that it would not be that much different even given a few more years of Go experience. Yes, the speed of Ruby and Rails can be frustrating, but I find it is often not needed, and if it is, you can just rethink that piece or do it in a language that is more suited to the task. For development productivity I have yet to find a combination of language/framework that lets me write maintainable code in as short a time (having worked over the years with frameworks in C/PHP/.NET/Java etc.).
I agree with your comment, and I shan't try and justify my comment, for it is libellous and unjustified. However, I guess I could have used "failed to describe Go with an adjective that you used in a positive context when discussing another language" over "slight". That said, it doesn't roll off the tongue as well.
It's worse than that. He links to a set of benchmarks to demonstrate that rails is slow without even considering the caching utilities that rails provides.
Who makes speed their sole criterion for choosing a web framework, and then ignores the baked-in caching functionality that the framework provides?
To be clear, the particular test cited is expressly designed to exercise the framework and its ORM+database connectivity. None of our present test types permit the use of a reverse proxy cache because we are specifically interested in measuring the performance of the frameworks/platforms and not the performance of reverse proxy software; you can find such tests elsewhere. Further, reverse proxies are only suitable for a subset of real-world applications where it is acceptable to cache output for consumption by a wide audience.
In other words, Rails wasn't unique in being tested without a preferred reverse proxy. Every single framework on that list was tested without a reverse proxy.
A future test type [1] will exercise back-end caching (e.g., memcached in the case of Rails), but we are not planning to ever include reverse proxies (of any form) in the project.
Tornado is using MongoDB while Rails is using MySQL. For a test that's essentially about responding to a query with a database access, I can't figure out what I'm supposed to take from this.
* Access a database table or collection named "World" that is known to contain 10,000 rows/entries.
* Query for a single row from the table or collection using a randomly generated id (the ids range from 1 to 10,000).
* Set the response Content-Type to application/json.
* Serialize the row to JSON and send the resulting string as the response.
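In Rails terms, that whole spec amounts to roughly this (my sketch, assuming a World ActiveRecord model mapped to that table):

    class HelloWorldController < ApplicationController
      def db
        world = World.find(1 + rand(10_000))
        render json: world   # Rails sets Content-Type: application/json for us
      end
    end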
10,000 is a couple of orders of magnitude too small to be interesting for a primary key lookup. And encoding two integers as JSON is not exactly a test of JSON encoder performance either.
I'm not saying that this kind of multi-way-shootout benchmark is a bad idea, I'm just saying that the current rounds of results are unlikely to have much predictive power ...
Obviously I am biased about the benchmarks project, but I disagree concerning the predictive power of the rounds to-date.
Thus far, the rank order of each round, as we add more tests, remains largely consistent. The most extensive test--Fortunes--exercises request routing, database connectivity and pooling, the ORM (if available), entity object instantiation, dynamic-sized collections, sorting, server-side templates, and XSS counter-measures. On the whole, where a framework has received an implementation of Fortunes, we see roughly the same order as in the other test types.
To clarify some points:
* The magnitude of the Worlds table is intentionally small enough to easily fit into the database server's in-memory cache. This is an exercise in measuring the frameworks' and platforms' database drivers, connection pooling, and ORM performance; not the performance of the database server. As an unintended side-effect--largely thanks to the contributions of readers--the scope of the project has broadened to include some Mongo and Postgres tests so it is to a very small degree a rough comparison of the request-processing capacity of three popular database platforms. But it is expressly not a benchmark of database servers.
* The response payload is intentionally extremely small because these tests are designed to exercise framework fundamentals such as request routing and header processing among others. Increasing the payload size directly increases the number of frameworks that will saturate gigabit Ethernet. As it is, even with a trivial payload, high-performance frameworks saturate gigabit Ethernet on the trivial JSON-encoding and plaintext tests.
* A larger payload JSON encoding test type is planned for the future [1], but I would caution that it is unlikely to shuffle the rank order seen in tests to-date in any notable fashion.
> But it is expressly not a benchmark of database servers.
Oh, okay, cool. I'd misinterpreted it as being a kind of a "full stack" test. I didn't look at any of the "fortunes" examples, either, as it happens.
I can see where you're coming from, but I'd still be concerned that the concurrency and connection pooling behaviour could be quite different depending on the DB query behaviour.
What I should have said is "... predictive of how well your website will actually work under load". But nothing much is predictive of that except for trying it with synthetic data ...
You're right that it's impossible to fully disentangle the performance of the database server from the frameworks' ORMs and their platforms' drivers and connection pools. We indicate which database server is being used in each test permutation and one can make very rough observations from the data--such as "MySQL appears very slightly faster in this type of use-case than Postgres"--but like I said, that's not really the purpose of the project. If one wants to compare database servers, there are many better resources for that insight.
You're also right that nothing can predict how your application will perform under load until you build it and test it.
By testing the fundamentals of web application frameworks, however, we hope to inform a preliminary selection process (along with self-selects such as comfort level with code type and community) to give you a rough idea of capacity before you build out the full application. I feel especially that the massive spread of the performance numbers--covering many orders of magnitude as it does--is illuminating to newbies and also valuable to seasoned pros.
Rails is pretty awesome for building monolithic CMS systems and other CRUD-type apps.
It's not awesome for a lot of other things.
I don't really get the point of this article and the author seems to be contradicting himself all over the place. Like, how can you possibly talk about the pitfalls of dynamic typing and then a paragraph later tell people to use JavaScript or Python? WTF?
Most problems at scale have to do with I/O, which Go isn't going to help you with. Your data stores are gonna be your pain points.
And oh boy, talk about a mentality of premature optimization... seriously folks, worry more about making something that someone is gonna want to use than how many requests per second the thing can serve. Right now your product has zero requests per second so you could just fulfill them by hand if you had to.
FWIW, my biggest issue with Rails (and much of what's produced by the Ruby community) is magic: I find the amount of stuff that's implicit, either depending on naming conventions, or automatic inclusion, or overriding default language behavior, or whatever, to be absolutely maddening. And as magic, it's mostly ungreppable and ungoogleable. Trying to figure out what exactly a given line of code does can take way too long. If I was writing an app from scratch as the only developer, I suppose this would be ok, but trying to maintain an existing app, or working as a team, this is a major problem.
The (lack of) speed in my local development environment has been pretty annoying as well.
"If you already know how to get things done in Rails, you’re in a hurry, you don’t need to maintain what you’re building, and performance is not a concern, it might be a good choice. Otherwise, never."
Take the first line and the last line and it sounds like Rails works perfectly in an intranet-ish application as a front end to lots of data in a database. I have a lot of experience with that. It does work really well... for a while.
Needless to say only having a couple hundred theoretical possible users means that worrying about handling 400 reqs/second doesn't come up as an issue very much.
His line about rails maintenance is dead on and the source of much internal push to run (not walk) away from rails. Push something out to users on rails 1.1 or whatever from 2007 and it won't run in 2013. A rails app needs constant continuous rewriting just so an apt-get upgrade won't kill it. Can't just deploy and walk away like a perl CGI script. Even if absolutely everything except rails stays the same, you can't just walk away and expect it to keep working.
Building something on rails isn't a capital investment where you lean back and productivity/money pours out of it. On the continuum of this, it's on the far edge of continuous labor required. More like a million dudes building the pyramids by hand than like one dude building a crane.
I'm not a Rails fan. I've overseen the development of about a dozen medium sized Ruby web apps in my career and Sinatra gets us up and going much faster. Also, the relative lack of magic compared to Rails makes uptake for new team members 10x quicker.
With that stated, this article is pretty weak on substance for such a provocative title. I am tempted to flag it as sensational link-bait but Rails discussion is about as on-topic as it gets for HN so I'll resist.
Anyway, the article fails to demonstrate "Why Nobody Should Use Rails". The concurrent requests issue is a valid criticism but that's about the only substantive claim in the article. People should use Rails when it's the right tool for the job. In my experience, Rails shines in situations when there isn't a strong architectural lead managing the project. Rails is very opinionated so it forces disparate developers into a more cohesive application structure where you'd normally end up with a spaghetti-code special.
*edit: I was unaware of this but comments suggest Rails has supported concurrency for a long while, it was just off by default.
This assertion is, categorically and empirically, ridiculous. Yes, the runtime and framework are slow compared to something like go and Revel, but that is not the principal benefit of Rails. Rails' principal benefit is taking care of Maslow's Hierarchy of Needs, or whatever its analogue is for developers. The most recent Rails app I wrote has average render times of 7ms (excluding network latency) on dedicated hardware largely because most views can be rendered with just 3 memcached hits. What's more, implementing the nested caching strategy necessary to achieve this was pretty simple with the out-of-the-box capabilities of Rails 4. If I'm busy worrying about lower-level concerns it doesn't matter if my runtime and framework are blazing fast - I don't have the confidence or time to implement strategies like this, or at least not with nearly as little effort. The proof is in the pudding. Anecdotes to the contrary are welcome.
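For reference, the nested ("Russian doll") part is literally just cache blocks inside cache blocks in the views; a stripped-down sketch with invented model names:

    <% cache project do %>
      <h1><%= project.name %></h1>
      <ul>
        <% project.todos.each do |todo| %>
          <% cache todo do %>
            <li><%= todo.description %></li>
          <% end %>
        <% end %>
      </ul>
    <% end %>

With belongs_to :project, touch: true on the todo model, editing one todo expires both its own fragment and the enclosing project fragment, while the sibling fragments stay warm.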
This is something from one of my previous write-ups for startups, specifically ones using rails:
1) If you KNOW FOR SURE beforehand that your startup will face a lot of page views, requests, etc. (example: real estate portals like Airbnb, Trulia, etc.), then you should not make the serious mistake of ignoring high-performance frameworks. This could literally be anything, but the alternative I personally recommend is something Scala based [Read section 4 for WHY].
2) As usual, use Rails (or django/similar) ALWAYS to build your v1 prototype. I say always because you will be incredibly surprised to see how much rails gets stuff done for you. While this is a huge advantage initially, this can also become a nightmare, later. Hence, I suggest you use rails only initially and not forever.
You should be writing tests for all code - not just in dynamic languages. Compilers catch things like typos and type differences, as you said; but there's so much more to test suites than that.
Your last 3 points have nothing to do with the post; they're just essentially more information added to other headings.
So your main point in the end is "Rails is slow" - I personally haven't dug too much into the performance of rails compared to others - But in general: It will improve - as all things do.
Static typing and unit testing are complementary tools. It seems to me that proponents of dynamic languages tend to treat unit testing as the one tool that makes everything look like a nail.
RoR doesn't "encourage" unit testing; it forces you to write thousands of meaningless unit tests which just make up for the lack of a strong type system. Unit tests should test code logic, not parameter types.
Perhaps I've just missed it in the past but where does Rails advocate testing parameter types? There are examples of testing model validations, but you would want to test that in a strongly typed language as well. I don't think I've ever run across a test for something like "this id should be an integer" in Rails, if that's what you mean.
Then don't write meaningless tests. If you find yourself constantly testing parameter types, then you need to work harder on ensuring your input is the correct type before calling a method. It should subconsciously tell you to put more effort into knowing what you're working with.
Unit tests are meant to test behavior - When I do X, I want Y to happen.
Rspec is great for this - I did X, I should see A, B, and C get called - and if they return to me this value then my final result should be Z.
There are gems like shoulda (https://github.com/thoughtbot/shoulda) that make it really easy to do the 'types and typos' kind of unit tests. The marginal time required for any given class is negligible.
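For example (a sketch; the Post model is made up):

    describe Post do
      it { should belong_to(:author) }
      it { should validate_presence_of(:title) }
      it { should validate_uniqueness_of(:slug) }
    end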
The main turnoff for me when it comes to Rails is watching other major companies like Twitter completely rewrite their application away from Rails.
Growing pains -- the kind of pains we all want to endure some day. If you are planning to grow, why wouldn't you start with something you know has an end game (like PHP or Java)?
The often cited slowness of Rails, mixed with companies rewriting their codebases, and the flurry of vulnerabilities in the past year has caused me to take Rails less seriously.
> The main turnoff for me when it comes to Rails is watching other major companies like Twitter completely rewrite their application away from Rails.
I do not like Ruby and have never used Rails, but I feel your conclusion is wrong. Twitter would never have outgrown Rails (at that time they were already wildly successful) if it hadn't been the right (or at least a sufficient) choice for a prototype and initial production framework.
An overwhelming majority of web sites will never grow to the point where they need to drop Rails for technical reasons, and some of them will still be hugely successful. If you reach the point where Twitter needed to switch, congratulations: you will be able to afford it easily.
TL;DR: I would put node at the top for many new web apps because of its surface similarity to the frontend.
--
After some evaluation in the past, Rails had some ghetto-like speed-bumps: I patched devise to use scrypt because the existing shit besides bcrypt was broken. devise-encryptable is a CWOT, don't bother. Maintainers promise all sorts of future dismissive bullshit but don't care, that's abundantly clear.
Security in RoR is a fallacy. As a signal, almost none of the gems were signed, especially Rails. If you don't mind being unable to prove that the code the author pushed has not been tampered with, by all means, run code from the public straight to production. I wrote a gem to make gem signing simple, waxseal, but since signing is optional, it's never going to change unless RubyGems has a major security incident.
I just don't have the bandwidth to waste on a community that doesn't get it.
--
As to what's coming up in the systems world: Go is more suitable for crunchy backend services typically in the systems domain of Erlang, Java or C(|++). C is portable at the expense of autotools and so much extra boilerplate to do anything. A lot of people that matter know and are comfortable with C, so libs and kernels will use that for sometime in the future. Go or similar would take decades to adopt because there's not yet enough compelling chicken-and-egg evidence to change to a different flavor of Turing-completeness.
Go is qualitatively solid. It addresses entire categories of software engineering problems that happen at scale.
If you're doing bank software, you're probably stuck with a mostly JVM &| CLR runtime stack.
I actually started to use Rails recently, having come from Node and fooled around with Go in the past too. I switched because I found myself to be much more productive with Rails, and it's really easy with Rails to do fairly complicated things.
The article is a joke because everything he says about performance is irrelevant. You can get a single affordable VPS to kick out about 80 reqs/s with Rails 4 without any difficult caching/craziness on an app that's not super complicated but isn't a basic blog tutorial clone.
The same app in Node might perform at 350 reqs/s, but who the heck cares? 80 reqs/s is close to 7 MILLION requests a day. How many apps do you have with ~7 million requests a day, and how hard would it be to add a second server into the mix... wow, that's tough.
Rails 4 is more than capable of responding with low latency too as long as you cache things responsibly. Fortunately that's brain dead simple with Rails since it handles all of the dirty work for you.
But my question is, what if the app doesn't, and isn't ever meant to, support a large number of users? What if even with the concurrency issues, it's fast enough?
Why is Django better? The author seems to latch on to the concurrency aspect of node.js (what it was made for) and compare it with Rails.
Does the author know there are projects like JRuby and Rubinius, and multi-threaded servers like puma?
One thing is very clear, the author is either writing articles for attention or does not like Rails' popularity. Classic link baiting! He has written 3 blog posts in total, all for the popularity.
Don't get me wrong: I love working with Rails (it's my bread and butter, basically). But sometimes I fall back on rants like these myself, though mostly focused on the whole "there's a gem for that" and "let's create instances because we can but complain about memory usage" angle.
I'm not even sure what I just read. Did the author really mean to imply that statically typed languages don't require unit testing? Did the author really mean to claim that a low position in a single benchmark is a disqualifier for ever using a piece of technology?
These are not reasonable things to say. It's basically just religious zealotry.
The problem is not Rails or Ruby, it's the community. There's this vibe that "Ruby is fast enough", which rubs off on Rails. It manifests in many ways, not least in "developer time is more important".
Well, it sure is. But one shouldn't have to make such a big tradeoff.
While we are tearing down the whole article, in some corner of the world Jeff Atwood is neglecting us, having donuts and building Discourse (http://www.discourse.org/) on rails.