There's a good reason most Ruby and Python projects don't rely heavily on system calls: portability. If you care at all about portability, it's just easier not to hit the system calls directly; otherwise you'll have to detect the host OS and make sure you're using its system calls correctly.
If threads are "out", then pre-fork is way out. Just look at the history of the Apache project. I realize this all happened before the RoR era, but Apache used to use a pre-fork MPM almost exclusively. In more recent years it has added the threaded MPM and the async MPM. They have also put in the work I mentioned above to achieve cross-platform compatibility.
I'm just using Apache as an example here; I'm not suggesting that we should all use Apache. It's just funny to me to see a post that basically says, "All the stuff we've been doing for the last 5 years is out. We should be doing the same stuff they were doing 15 years ago, but in Ruby this time around instead of C."
Maybe software trends are like music trends? Everything from 5 years ago is lame, but the stuff from 20 years ago is super groovy, man.
Counter example: nginx, which uses fork() and seems to smoke Apache while offering features like binary reloading without dropping connections (not sure if Apache supports this, but I seem to remember it doesn't - please correct me if I'm wrong).
Portability is not always necessary - when talking about fork() and friends, you're talking about trade-offs.
When I'm only ever deploying to Unix environments, I accept the lack of portability in exchange for features I value.
That's not really a counter example. nginx does indeed fork off child processes, but it uses async I/O in each child process. It doesn't use the traditional pre-fork process-per-connection model that Apache's pre-fork MPM uses or that Unicorn is using.
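To make that distinction concrete, here's a rough, hypothetical Ruby sketch (nginx itself is C, and this is not its code) of the few-workers-with-async-I/O shape: a handful of forked children all accept on the same listening socket and multiplex their clients with IO.select, instead of forking one process per connection.

    require 'socket'

    # Toy illustration only: a few forked workers, each juggling many
    # connections with non-blocking I/O, instead of one process per connection.
    server = TCPServer.new(8080)

    3.times do
      fork do
        clients = []
        loop do
          readable, = IO.select([server] + clients)
          readable.each do |io|
            if io == server
              # a sibling worker may have grabbed the connection first
              clients << server.accept_nonblock rescue nil
            else
              begin
                io.write("echo: #{io.read_nonblock(1024)}")
              rescue EOFError, Errno::ECONNRESET
                clients.delete(io)
                io.close
              end
            end
          end
        end
      end
    end

    Process.waitall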
Oh, you were referring to the syscalls/portability tradeoff. In that case it's still not really a counter example, because Apache made the same decision in its early versions (Unix only). And that's a reasonable decision to make given the tradeoffs, I think.
The point that I was more upset with was the "threads are dead, pre-fork is the way to go" section of the article.
However, thinking about it more now, even the syscalls vs. portability arguments presented here are another example of forgetting the past (or just never being aware of it). After all the work from the Python devs to encapsulate syscalls in the standard library and provide a portable API to them, Jacob Kaplan-Moss says, "I’m a bit dismayed to see [syscalls] relegated to the dusty corners of our shiny dynamic languages."
Hello!? nginx is a POSIX network application written in C. Of course it uses syscalls. As the previous poster pointed out, you are also wrong that nginx uses anything like a prefork model. It _may_ fork off one or more processes if it detects that it is running on an SMP system, to take advantage of more than one CPU. But each process has a strictly async I/O architecture, handling requests/responses within a big loop.
The point of the post you are responding to is that it uses syscalls. I'm not entirely certain what about that warrants a "!?"; could you enlighten me?
nginx relies heavily on system calls at the expense of portability
It's an oxymoron, as you cannot expect a C application that mostly does network and disk I/O not to use syscalls. That said, the author of nginx does an admirable job of reducing the number of syscalls and making the application as efficient as possible. In this context a syscall, i.e. a kernel call, is heavy and something one wants to minimize.
The blog post, on the other hand, was talking about using syscalls from Ruby. I can understand Ruby programmers who wrinkle their nose at this. If you want your Ruby application to be portable, then using, for instance, fork is not the best idea. In fact, programming anything long-lived such as a server in Ruby is not the best idea. The GC in 1.8.x leaks memory over time, and it is common for Ruby servers to have to be restarted often as they consume all the memory on the machine they run on. Adding fork on top of this is bad. A fork will copy the whole Ruby interpreter into the new process, and you will end up with a lot of top-heavy processes - not a good idea unless you sell RAM. Basically, using Ruby as a systems programming language and for long-lived applications is a bad idea.
Nonsense. My multitudes of small web services, many of which have been running for months without a restart while taking up no more memory than they did at the start, say you're wrong. Even if there are some obscure bugs, and there undoubtedly are, that doesn't make Ruby an unsuitable language for systems programming.
Making nginx portable means using more syscalls -- the ones specific to the kernel you're calling. Across Unixes that just means using the appropriate epoll/kqueue/etc, for Windows support it means a total refactoring to use NT's Completion Ports.
When the author says that threads are out, he means that they are a difficult way to write concurrent code. Pre-fork is just one (classic) example of how to write concurrent code without shared state. Instead of threads communicating via shared state, you have processes communicating over pipes or sockets.
That said, you're right about portability. The attraction of Ruby's Thread is that it works identically across operating systems.
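As a minimal, hypothetical Ruby sketch of that style - the parent and a forked child share nothing except the pipe they pass messages over:

    reader, writer = IO.pipe

    pid = fork do
      # child: no shared state with the parent, just its end of the pipe
      reader.close
      writer.puts "result computed in process #{Process.pid}"
      writer.close
    end

    # parent: read the child's message, then reap it
    writer.close
    puts reader.gets
    reader.close
    Process.wait(pid)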
As productive as dynamic languages make us, I think we sometimes forget that this is all typically built on Linux/C, and that's not changing anytime soon. A _good_ hacker should at least have a basic knowledge of what's under the hood.
This refers to the first edition of APUE - there is also a very good (IMHO) second edition co-authored by S. Rago (a former colleague of W. Richard Stevens). The second edition mainly adds better coverage of POSIX (much of which was developed after the first edition was published) and current UNIX variants (Linux, FreeBSD, Solaris, Mac OS X) while leaving out obsolete stuff.
Thanks! Rago is the copy I actually have on my bookshelf; didn't notice the site I linked was only the first edition. Guess with Stevens's unfortunate passing they made a new site.
fork() in Ruby would be much better if MRI's garbage collector wasn't so awful -- because it marks every reachable object in each collection cycle, it's impossible for MRI processes to take advantage of the kernel's copy-on-write memory sharing post-fork. You can't even just let the processes gobble up space and let the kernel's VM sort it out -- anything that gets swapped out will have to be paged back in to be marked by the GC. Churn.
But if you're already taking that memory hit with separate processes, ala mongrel cluster, fork() still provides a number of juicy advantages (which the article explains).
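If you want to see the effect for yourself, here's a crude, Linux-only sketch (the /proc/smaps parsing is an illustrative assumption, not a proper measurement): build a big heap in the parent, fork, then run the GC in the child and watch how much memory is still shared.

    # crude helper: total of the "Shared_*" fields, in KB, for this process
    def shared_kb
      File.readlines("/proc/#{Process.pid}/smaps")
          .grep(/^Shared/)
          .map { |line| line.split[1].to_i }
          .reduce(0, :+)
    end

    big = Array.new(500_000) { "x" * 32 }   # sizeable heap built in the parent

    pid = fork do
      before = shared_kb
      GC.start                              # MRI's mark phase writes into every live object...
      after  = shared_kb
      puts "shared before GC: #{before} KB, after GC: #{after} KB"   # ...so shared pages get copied
    end
    Process.wait(pid)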
I'm not familiar with the MRI source, but in most collectors, I'd think that switching from marking the object directly to using a separate mark map should be pretty trivial (like an afternoon's work). You'd still have to copy the pages for the mark map as soon as you GC in the child process, but that's only 3% of your total heap size.
Wrong -- almost all of the most visible GCs in the most popular languages are either 1) still awful or 2) were formerly awful for such a long time, they're still living it down.
It's a vicious cycle.
- GCs have a bad rep.
- Precocious programmer implements their own dynamic language.
- They settle for Mark/Sweep or ref counts to "get it done"
(Hey, GCs are all awful anyhow, yeah?)
- Many people experience the awfulness.
- GCs have a bad rep -- REPEAT
Chicken & egg? GCs were bad. Experts have since figured out how to make them good. The programmer culture in general is slowly getting this knowledge by diffusion.
The VisualWorks GC is so good that, as a lark, I once put an infinite loop into the app I was working on that did nothing but instantiate new objects. I could barely tell it was there!
Yes, the GCs you've heard of constitute an encyclopedic listing of them. </sarcasm>
Hmmm, you just gave me an idea. Interview question to see if the prospect knows what they don't know. Do they even have the order of magnitude right on that?
I'm sorry I'm coming to this with the thread dead. I was on a boat for a team-building offsite most of yesterday.
People also overlook the great benefit of fork(2) for static languages: it's like GC for your address space. In a long-enough-lived multi-threaded C/C++ program, heap fragmentation will eventually eat you just as badly as a memory leak would have. Since there's no GC to compact the heap, the only real solution for memory-intensive servers is scheduled restarts.
A good, old-fashioned fork(2) resets the address space to a known-ok state; after the client connection is done with, whatever fragmentation the request has introduced disappears with its container process. The canonical, ancient structure of UNIX servers (fork after accept) was what enabled those servers to stay up for months and years at a time, but very few people made that connection. When processes were the only concurrency primitive available, we saw only their costs, and assumed threads would be better, since their costs were lower. In some ways, processes were the devil we knew, and threads an unfamiliar devil; in addition to all the usual complaints (e.g., about how hard it is to synchronize), threads mean that the global heap lives forever.
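The fork-after-accept shape being described looks roughly like this in Ruby (a bare-bones sketch, not any particular server's code):

    require 'socket'

    server = TCPServer.new(8080)
    loop do
      client = server.accept
      pid = fork do
        # child: handle exactly one connection; whatever fragmentation or
        # garbage it accumulates vanishes when it exits
        client.puts "handled by #{Process.pid}"
        client.close
      end
      client.close          # the parent no longer needs this descriptor
      Process.detach(pid)   # reap the child so zombies don't pile up
    end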
Generally, threads are difficult to use because of what is known as the shared data problem. With threads, some global data structure is typically shared among all the threads. These threads write to and read from this data structure, and those operations are interleaved.
This means that if you're not careful, when a thread is reading data, that data could be corrupted by another thread writing into the same data structure. This in turn means that threads need to lock the data structure when they are accessing it so that other threads can't get to it.
This leads to a whole host of potential problems that can be very difficult to debug because many of them end up depending on subtle timing differences.
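In Ruby terms, a minimal sketch of the locking this forces on you (the counter here is just a stand-in for whatever shared structure the threads touch):

    require 'thread'

    counter = 0
    lock    = Mutex.new

    threads = 10.times.map do
      Thread.new do
        1_000.times do
          # without synchronize, two threads can read the same old value
          # and one of the increments is silently lost
          lock.synchronize { counter += 1 }
        end
      end
    end

    threads.each(&:join)
    puts counter   # 10000 with the lock; anything up to that without it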
Processes avoid this because they can't have shared data. So instead the problem is broken up between processes and message passing is used to share data in between those processes. It's something that generally makes life easier to understand and steers clear of problems that can be very difficult to reproduce and debug.
Processes don't share mutable data, not without explicitly using IPC.
In nearly all Unixes, both processes share their entire address spaces after the fork(), just immutably -- pages are only copied when they're written to by one of the processes, otherwise they are shared till exit. Copy-on-write is what makes fork() tractable performance-wise.
Copy-on-write is also why people overestimate Apache memory consumption so dramatically and then get into the hassle of running php apps under fcgi to save a trivial amount of memory.
For a while now I've had a feeling that threads -- in the sense of constructs manually created and manipulated directly by an application developer -- are going to go the way of manual memory management.
There are still problem domains where you need to obsessively and manually handle memory, there are still people who don't work in such domains but have convinced themselves they do, and there are still people who feel, for whatever reason, that C or C++ are the only tool for Real Programmers(TM). But the trend is and for many years has been away from that and toward managed runtimes, because they're far less complex to work with and far less susceptible to the sorts of easy errors which plague C/C++.
I think that a few years down the line we'll be in a similar situation with threads: there will still be problem domains where you absolutely need them, and people who still believe for whatever reason that manually managing a thread pool and shared resources will make their penis bigger, but most of the world will be moving on to something that's less complex and less error-prone, and probably managed automatically by a language runtime or something similar.
I agree with this. I was talking to someone recently who works on embedded systems. They do quite a bit of threaded code to deal with network, UI, and other aspects. I mentioned I prefer message passing and processes to threads. His response was that within the limited resources of their devices, that was not feasible. Shared memory and a single process saved on very valuable, limited memory.
I mainly work on server-side code where, for the most part, the overhead of a separate process is not an issue. The overhead of not sharing memory is not an issue.
Thread programming is hard. A lot of people point out different ways of doing it and say "if you're not a terrible programmer, threads work fine", which is all well and good, but they are still much more error prone than async and process models. This is one of the reasons Apple wrote GCD, which is a very interesting approach to threading that makes the messy parts of dealing with threads easier.
I would agree with the author. Unless you have a compelling reason to use threads, processes and async tend to make for more correct programs. However, they're often more difficult to get started with. Trade-offs, it's what we do :).
It's not clear whether he's saying that threads are bad in Ruby, which is true because of the GIL, or that threads are bad in general, which is just unfounded bashing.
I'm always surprised to hear things like this. It's certainly not unfounded - it's just so obvious and universally accepted that it doesn't require explanation - one would think.
I think it may be obvious to people with a C/C++ background, but when I was in school in the early 2000s, they taught you concurrent programming in Java via threads.
And the concurrency primitives and frameworks in languages like Java and C# are so easy to use there's a whole generation of us who think that threading is the easy way to do concurrency.
There is a link in the article to a presentation. The author has not used threads because (apparently) threads in Ruby are green-threads. These are not that useful for a type of application that needs to block on I/O.
This also breaks the idea that Ruby=Unix somewhat: far more power is available when native threads are available, including the ability to make use of multiple CPUs.
In MRI Ruby 1.8, Ruby creates a single native thread and all other threads run inside it. In Ruby 1.9, each Ruby thread is a native thread. In JRuby, Ruby uses native threads created via the JVM. Not sure about IronRuby, Rubinius, or Ruby EE. A tangled web of options for sure.
When I read stuff like this, I should go... Yay, they are back on the path. However, I'm stuck on the... why are they even off the path in the first place? Why should rediscovery of processes and the 'select' call be news?
... I should just write a big blog post about this ... (which I'll never do... too busy working :-)
At least last time I checked, he was running a custom written blog engine that several of us in #sinatra made a bit more generic. It's probably been heavily modified or even replaced since then. Link: http://github.com/rtomayko/wink
That's fine and dandy, but when I want to use your libraries with JRuby or MacRuby, I'm SOL. I kind of like the portability of avoiding fork() and exec() for those reasons.
Point taken. In the general case, though, (i.e. we're talking about more than Rack awesomeness) I think it's important that people keep this in the back of their minds. I was burned on this recently with the use of the Daemon Kit gem in a project.
It's definitely a "do-better" on my part to more closely examine the libs I'm working with, but as an author in a language wherein numerous OSes and implementations of the interpreter are used, it's something to keep in the back of your mind as well.
If threads are "out", then pre-fork is way out. Just look at the history of the Apache project. I realize this all happened before the RoR era, but Apache used to use a pre-fork MPM almost exclusively. In more recent years it has added the threaded MPM and the async MPM. They have also put in the work I mentioned above to achieve cross-platform compatibility.
I'm just using Apache as an example here; I'm not suggesting that we should all use Apache. It's just funny to me to see a post that basically says, "All the stuff we've been doing for the last 5 years is out. We should be doing the same stuff they were doing 15 years ago, but in Ruby this time around instead of C."
Maybe software trends are like music trends? Everything from 5 years ago is lame, but the stuff from 20 years ago is super groovy, man.