> And those requirements kept growing. If my calculations are correct, the standard setup for engineers now is a machine with 20 or more gigabytes of RAM just to RUN the software.
Close. In 2011, all the engineer desktops got upgraded to 36 gigs. At the time, the eng department still hadn't figured out how to deploy without duplicating hundreds of jar files everywhere.
Systems actually come standard with 48 gigs :) As it turns out, that's really useful for the part of my job requiring data modeling.
Somebody suggested LinkedIn is one large application, which is absolutely false. It's composed of many small applications, each deployed as a WAR file, since the non-Node.js part of the site runs on the JVM. There's a lot of overhead that comes from deploying this way. It doesn't matter in production or staging, but it adds up on a desktop.
At this point, though, dev machines have stopped growing, because we have shared stacks that you can deploy a couple of pieces of the application against when you want to test the whole thing.
I interpreted the original article as saying that their new node.js approach was wildly different from a server-based MVC framework.
Sure, they ended up with 20x greater speed, but they never said it was because of node.js. They even provide a high-level description of all the things they did with their new approach to achieve this performance.
"LinkedIn Moved from Rails to Node: 27 Servers Cut and Up to 20x Faster" should say something like "LinkedIn did a rewrite and rearchitecture that cut 27 Servers and increase speed aby 20x"
This is a great post for validating management concerns about pulling in sexy new technologies for the hell of it. Every place I've worked, I've been unable to convince management to use, e.g., Rails (5 years ago) or node.js (recently). Even though I love these technologies and wish I'd had more time in full-time employment to learn and play with them, I understand and appreciate the risks implicit in adopting a shiny new technology in your company's IT dev/production environments.
It's also a great post illuminating how, in hindsight, some things can be really obvious (that building a high-capacity web service dependent on a single-threaded server will give you problems down the road), but at the time it's not always easy to see the wood for the trees.
For me though, the big takeaway was that one line summary: "You’re comparing a lower level server to a full stack web framework." Node.js has a pretty nice library/module ecosystem now, but for a complete full-stack solution with maximum productivity I would venture that there is nothing out there that compares to Rails currently.
> This is a great post for validating management concerns about pulling in sexy new technologies for the hell of it.
But their old technology requires towers with 36GB of RAM. It seems like more of a choice between the devil you know and the devil you don't. Obviously there needs to be a cost-benefit analysis, even if it's imperfect. Otherwise you'll get stuck in a local maximum, and before you know it you're paying $10 per million cycles on a mainframe running batch COBOL jobs maintained by old men who are dying faster than you can replace them.
> This is a great post for validating management concerns about pulling in sexy new technologies for the hell of it.
At the end of the day, the Rails stuff got the job done: LinkedIn stayed up and was able to grow and add mobile features during that time. The current solution, the node.js stack, is even newer than Rails. So no, I don't think this validates management's desire to stay with old technology.
> It's also a great post illuminating how, in hindsight, some things can be really obvious (that building a high-capacity web service dependent on a single-threaded server will give you problems down the road), but at the time it's not always easy to see the wood for the trees.
Um, the new solution is single-threaded too. Threading and concurrency are not the same thing.
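A minimal sketch of the distinction, using a plain Node.js HTTP server (the routes and timings here are made up for illustration): a single thread can still serve many requests concurrently as long as the work is non-blocking I/O, but one CPU-bound handler stalls everything.

```typescript
import * as http from "http";

// One thread, many in-flight requests: while the simulated 200 ms
// "database call" is pending, the event loop is free to accept and
// service other requests. That's concurrency without threads.
const server = http.createServer((req, res) => {
  if (req.url === "/io") {
    // Non-blocking wait: other requests keep flowing during these 200 ms.
    setTimeout(() => res.end("io done\n"), 200);
  } else if (req.url === "/cpu") {
    // CPU-bound loop: this blocks the one and only thread, so every
    // other request (including /io) stalls until it finishes.
    let sum = 0;
    for (let i = 0; i < 1e9; i++) sum += i;
    res.end(`cpu done: ${sum}\n`);
  } else {
    res.end("ok\n");
  }
});

server.listen(3000);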
It's definitely true that node.js isn't a full-fledged framework, but I still wrote several projects using it, and you know what? I don't regret it, just as I don't regret my move from C++ to C over the past few years. And as for memory usage: yes, even my biggest project never needed more than 100MB of RAM (when not using the cluster module).
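For context, the cluster module the commenter mentions forks one single-threaded worker per core, which multiplies that per-process footprint. A minimal sketch (the port and handler are illustrative assumptions):

```typescript
import cluster from "cluster";
import * as http from "http";
import { cpus } from "os";

if (cluster.isMaster) {
  // Fork one single-threaded worker per core; total memory becomes
  // roughly the per-process footprint (the ~100MB above) times cores.
  for (let i = 0; i < cpus().length; i++) {
    cluster.fork();
  }
} else {
  // Workers share the same listening port; the master distributes
  // incoming connections among them.
  http
    .createServer((_req, res) => {
      res.end(`handled by pid ${process.pid}\n`);
    })
    .listen(3000);
}
```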
But I completely agree with "the rewrite thing". I guess the other factors made it necessary to do it anyway...
I met Ikai through the Silicon Valley Rails Meetup, which I co-hosted back in 2008-2009 and which met at LinkedIn HQ in Mountain View. This post is a great contribution to the recent discussion about Rails at LinkedIn, and I hope it gets the attention it deserves.
Was there a recent discussion? I'm fairly certain everyone has moved on from Rails, and I'm not sure if they're still using it anywhere at LinkedIn. There are a few folks using JRuby, but I believe they're using Sinatra:
The concept that I got from the main article was that they tailored their application for "long-lived connections" to avoid multiple resource calls and make their web server more responsive. They also mention things like "aggressive caching", "storing templates locally", "using timestamps to stream only required resources", and a "rearchitecture and rewrite."
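None of those techniques is node-specific. As a rough sketch of the "timestamps to stream only required resources" idea (the template names, the x-client-timestamp header, and the cache shape are my own illustrative assumptions, not details from the article):

```typescript
import * as http from "http";

// Hypothetical server-side store of templates with last-update times.
const templates = new Map<string, { body: string; updatedAt: number }>([
  ["profile", { body: "<div>{{name}}</div>", updatedAt: Date.now() }],
]);

http
  .createServer((req, res) => {
    // The client reports the timestamp of its locally stored copy;
    // the server streams the body only if it holds something newer.
    const clientTs = Number(req.headers["x-client-timestamp"] ?? 0);
    const entry = templates.get((req.url ?? "/").replace(/^\//, ""));
    if (!entry) {
      res.statusCode = 404;
      res.end();
    } else if (entry.updatedAt <= clientTs) {
      res.statusCode = 304; // client keeps its locally stored template
      res.end();
    } else {
      res.setHeader("x-updated-at", String(entry.updatedAt));
      res.end(entry.body);
    }
  })
  .listen(3000);
```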
Never once did I feel this article was advising that node.js was superior to RoR - they only ever justified, at a high level, a way better approach (in terms of server load) to an "MVC"-like architecture, leveraging client-side frameworks and techniques to lessen server load.
The author of this article also makes it clear at the end by stating that comparing the solutions is apples to oranges, but so did the original article...so I don't get the need for "clarification".
EDIT: I retract my "way better" statement - I mean "way better" in the sense of server load.
"•Firefighting? That was probably a combination of several things: the fact that we were running MRI and leaked memory, or the fact that the ops team was 30% of a single guy."
:o
That may explain why they had spam and security issues.