It sounds wrong to me to refer to Apache and nginx as "alternatives" to IIS. They're the web servers, and IIS is the "alternative", kind of like how Excel is the spreadsheet application and LibreOffice Calc is the alternative. I'm aware that I cocoon myself in something of an anti-Microsoft echo chamber, though. Does reality match up to my prejudices?
At the risk of second-guessing the parent, they're considering the classical 'enterprisey' stuff as mainstream. If you look at what people use in most companies of a reasonable size, IIS is surprisingly common and Apache tends to only turn up when Java or PHP is involved, with appropriate nods to Oracle and IBM for their respective technologies.
Outside on the open Internet I absolutely agree that Apache and Nginx are the default HTTP servers. I switched from Apache to Nginx years ago (with a few exceptions) as it just seemed as though I could do more with less, and it made sense for me to standardise.
I see nginx as more a general application server, and IIS as a specialised server you run when you need NTLM/Windows integrated authentication and ASP.NET et al.
Similar to how Apple takes ~75% of the handset market profits, IIS takes almost all the profit in the web server market and is increasing revenues year after year.
It does quite well for itself given that the competition is fierce and free.
My point being that CF has a license, whereas Ruby and Python do not - which has little value in gauging success, a la IIS v Apache/nginx. (Ignoring the reality of open source Railo and Open BlueDragon for the moment)
My rule of thumb for OSS software is that if a type of software is integral to developers' jobs, then there are some very good tools available. Operating systems, web servers, text editors, database systems, ... are all good examples of this. If the type of software is not integral to developers' jobs, then there are fewer good alternatives. There are some pretty good ones (OpenOffice, GIMP), but they're often not as polished and lack power features.
That said, I agree that it would be better if it weren't a thousand lines, and maybe there are better options than XML.
The company I work for is a Microsoft shop (for now, switching to Java sadly), and I've edited a lot of web.configs in servers, sometimes using Notepad, and it wasn't much of a hassle (though I did have to research what to edit beforehand in some cases).
Yes, because the registry is terrible. If the registry were any good, IIS configuration would use it, and you'd be able to programmatically reconfigure IIS, or any other program, in a standard registry-based way.
I have never configured an IIS server. Do you find that the programmatic configuration API is helpful? I rarely find httpd.conf or nginx.conf to be difficult to manage, but it's possible that we're missing out on a good thing in the OSS world. It's been known to happen.
I built a system that generated IIS configuration XML files, and a script to apply/import them on any/all servers in an array. It worked, and was substantially better than using the supplied GUI, but still inferior to editing nginx conf files by hand. These days I wouldn't touch IIS with a [very long object].
That's Augeas. ( http://augeas.net/ ) It takes some getting used to (some files are much easier to manage with plain old sed), but it's quite a cool and useful tool. Puppet and Kickstart are good use cases.
I think it reflects the general Microsoft philosophy of having APIs for everything. After all, they are an API company. It shows in the PowerShell design with its object API approach.
Also, with the API you should be able to configure IIS without restarting it, while changing web.config recycles web apps (in ASP.NET scenario). With load-balancing, however, this is less of an issue.
I wonder this as well, especially with "top" websites. How much high profile traffic these days is actually served by a single piece of software?
All of our high profile sites have layers of caching and load balancing between them and the Apache backends that actually generate the content. In our case Nginx is in fact on the public facing end performing reverse proxy and TLS termination only. That is not to say Nginx isn't a fantastic product, but it may not be entirely fair to Apache, Varnish, and friends to measure HTTP server marketshare in this way.
Several sites I have insight into use nginx in front of Tomcat, Jetty, or Netty, so I agree this might be the main purpose of nginx, and it excels there.
It's likely a significant number still do this for legacy reasons, but for new deploys, utilising the likes of PHP-FPM, PSGI, uWSGI, and Rack should allow you to avoid using Apache at all.
A major chunk of PHP-based applications would be doing this; otherwise it would be standalone Nginx serving static files on most of those servers.
Nginx works very well with PHP-FPM through FastCGI, and there is about zero configuration needed as everything is up and running after an apt-get (minus some commented configuration regarding php in Nginx).
I think more and more PHP-based apps are powered by this kind of stack.
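For reference, "about zero configuration" really does boil down to something like this (the socket path and web root are assumptions; the Debian/Ubuntu defaults move around between releases):

    # Minimal sketch of an nginx server block handing PHP to php-fpm over FastCGI.
    # Socket path and root are assumptions - adjust to whatever your packages set up.
    server {
        listen 80;
        server_name example.com;
        root /var/www/example;
        index index.php index.html;

        # Static files are served directly by nginx.
        location / {
            try_files $uri $uri/ /index.php?$args;
        }

        # Anything ending in .php is passed to the php-fpm pool.
        location ~ \.php$ {
            include fastcgi_params;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            fastcgi_pass unix:/var/run/php5-fpm.sock;
        }
    }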
I've seen a good number of benchmarks that show modphp for Apache to be as fast or faster than FastCGI on nginx. If you're running a PHP application I'm not sure there's a huge impetus to switch from nginx in front of Apache to just nginx (other than reducing a point of failure).
Of course modphp would be faster, since PHP becomes part of Apache with this approach. The problem is that not every Apache process is going to serve PHP. Most of them will be serving static files but will still carry the overhead of the embedded PHP library. The real overhead, though, comes from the fact that you can't control how many Apache processes serve PHP and how many serve static files. Processing PHP eats a lot of RAM and you can quickly bring your server to its knees. You could opt for fewer Apache processes, but then you would end up serving far fewer visitors.
Anyway, Apache these days is mostly set up with fastcgi or php-fpm because of these issues, which has the added benefit that you can use the worker or event mpm, though most benchmarks are still run with the older prefork mpm.
I use nginx with php-fpm because (a) I find nginx easier to configure than Apache; (b) the configuration required for setting up nginx + php-fpm on Debian and Ubuntu is practically nil; and (c) this allows me to run PHP and Python apps on the same server on different subdomains.
The memory requirements of Apache with mod_php are one of the main reasons people hosting PHP applications use Nginx (or lighttpd, although enthusiasm for it has died down). I imagine it is still a common setup to have Nginx in front of Apache, serving static content itself and forwarding the requests for dynamically-generated content to Apache.
Yuuuup. I remember discovering nginx/PHP-FPM a while ago. It's a pretty amazing combo, especially in terms of raw performance efficiency. Now it's even easier to do and a lot more sites are catching on.
"Apache and nginx, both open source web servers, have
lost market share this month whilst Microsoft gained
significantly, up by 2.43 percentage points, to just shy
of 20% of worldwide sites. For the second consecutive
month, nginx is powering fewer sites than in the previous
month's Web Server Survey, which is due, in part, to
almost 2M sites moving from nginx and to Apache. Within
the million busiest sites, a similar picture emerges:
nginx lost over 4,000 busy sites, many of which have
moved to Apache."
Many people recognize that it's much cheaper to place nginx in front of other web servers than to buy more servers. We could say that nginx is mainstream now, so any serious website would use it. If you take "top million websites" instead of "top 1000", you might see different percentages. Automated tests just see nginx in the header, although something else is the actual workhorse for the site.
Some of the stuff does not need Apache anymore. As I moved almost all my projects from PHP to Node.js, there's little use for Apache now. And nginx serves static content with less CPU and RAM.
No reason to use Apache for PHP - I run all of my PHP sites on nginx with php-fpm or fastcgi. It has been much easier for me to manage performance, especially on low-memory VPSs with php running in a separate process. I suppose it would work just as well with Apache behind nginx, but I don't see any good reason to have it in the way unless you rely on htaccess files.
What kind of opcode cache do you employ? Depending on how you set up APC, for example, php-fpm can use much more memory than running Apache mod_php with one shared segment.
I saw the benchmarks for G-WAN earlier this week (http://gwan.com/benchmark) and am curious why most people would choose nginx over the performance that G-WAN provides?
Other than claims by the author of G-WAN, I've yet to see anybody claiming that G-WAN outperforms Nginx. I also cannot find any evidence of anybody using G-WAN in serious production. I could be wrong of course, but every time I've inquired about it, nobody responds. I also cannot find any kind of community around G-WAN.
Also, it seems that a lot of the "performance" in G-WAN is thanks to its microcaching feature. When it's under high concurrent load, it caches requests for about 1 second. I suppose it's useful on public cacheable responses and useful in benchmarks, but does this really count as "faster" in the sense that G-WAN is a faster system overall? I don't know.
That being said, G-WAN has many interesting aspects indeed.
It doesn't really help that the author gives off some mild "losethos" style vibes (the rantings about open source software, the weird "buy an encrypted archive of the source" thing, etc.)
I'm sure it is very impressive in some very limited circumstances, but it really isn't something that I want to get close to. I can't trust people so.. untrusting.
G-WAN is not open source, and has no peer source review that I know of. I was not even able to run the binary to test it due to my distro being too old.
The self-made benchmarks cannot be taken at face value. If you dig deeper, you'll find outside benchmarks showing poor performance in other metrics.
I don't mean to insult the author or product, just providing my 2 cents into its lack of popularity.
I'm not too familiar with G-WAN, but there are many more factors than just some benchmarks published by the author of G-WAN:
1) G-WAN is closed source. This means you're tied to their paid support for fixes and improvements. Web servers are pretty critical pieces of infrastructure that are not replaced easily (if you're doing large deployments, custom config, etc.) and so having this risk is huge.
2) Documentation, configurability, support for well-tested integration with various application servers. For example, Nginx supports uWSGI out-of-box. The G-WAN site seems to have limited documentation.
3) Nginx is well proven in production. Sometimes just being incumbent is a good reason. This often takes some large project or company taking a risk on the platform, proving it in the process; this is unlikely due to #1.
4) Other benchmarks indicate otherwise in regards to G-WAN's performance; not to say it's bad, but the selection of benchmarks on its site is likely biased towards it.
I'd never heard of G-WAN. It looks like it isn't open source, which is a major turn-off these days for core infrastructure components like web servers.
From their website: "But G-WAN was written to have an application server, so it also supports 7 of the most popular scripted languages: Java, C#, C/C++, D and Objective-C/C++."
As far as I can tell, G-WAN does not support the features or languages that many people want. As a Ruby programmer, G-WAN is (probably) impossible for me to use, and if I could fiddle with it enough to get it working, it would not provide enough of a benefit to justify replacing a working nginx setup.
There's also the prevalence of tutorials, books, and StackOverflow answers related to nginx; G-WAN is pretty sparse in this regard.
Basically: the eternal balancing act of "features" vs "speed." nginx hits a sweet spot for features & speed for more people than G-WAN.
Nginx doesn't really support Ruby either. You need an app server, either compiled in like Passenger or one you proxy to like Unicorn. Is the second option not available here?
Never heard about G-WAN before, but from a quick read-through it is not quite standard. It seems like a somewhat unusual design, but interesting. From the G-WAN website:
G-WAN is an all-in-one solution because communicating with other servers (FastCGI, SCGI, etc.) takes time (enlarging latency), and wastes CPU and RAM resources when the network can be avoided. Also, supporting many programming languages lets G-WAN serve most needs instead of having to use slower backend servers and frontend caches. Remember that our goal here is to use the ultimate low-latency and resource-saving solution. This is why G-WAN is a:
-Web server
-App. server
-Cache server
-Key-Value store server
-Reverse-proxy and elastic load-balancer server
I know it's cool to say good things about Nginx, and Nginx is good software, but your point (2) doesn't mean anything. I'd like to see some serious research and explanation into Nginx's code.
Full disclosure: I'm one of the authors behind the Phusion Passenger application server (https://www.phusionpassenger.com), built on Nginx. What I'm saying here is based on my experience with Nginx module development and research into Nginx code.
As far as I see, the main reason why Nginx is fast is because the programmer is brilliant. No, seriously. Look at the code. There are almost no comments, there are no unit tests, and yet Nginx has no well-known stability bugs, or even performance regressions. There are also tons of micro-optimizations, some of which are of questionable value on modern CPUs (e.g. the huge switch statements in the HTTP parser). The Nginx code goes to great pains to ensure that there are no unnecessary copies or memory allocations. Everything assumes that you use static string structs that point to already-allocated memory instead of dynamically allocated strings. In return, writing an Nginx module requires extreme programmer discipline. Almost everything is hard.
Compared to Apache, Nginx's main reasons for being faster are that (1) it's evented and (2) it has a lot fewer features than Apache. No .htaccess - implementing .htaccess is a huge performance penalty because suddenly on every request you have to perform additional filesystem operations. No dynamic spawning of processes (unless you use Phusion Passenger). No dynamically loadable modules, so if you want to add anything you have to recompile Nginx entirely.
"Nginx is fast because the programmer is brilliant." Isn't that enough? ) What I'm saying is that it is good software. I don't even care that it was written by Russian guys. The code speaks for itself.
As for praising, well, I had no intention to read the code very carefully, but from what I've been able to gather, it was well designed before being written, and its evented nature - processing multiple requests at different stages in a single process - looks very much like the instruction pipeline of a modern CPU, at least to me. That's probably the reason behind its name - an engine.
It's likely that our website would be counted as nginx since the response headers include "Server: nginx...". And there is an nginx load-balancing layer at the front.
But it's a pretty thin layer with the real "web servers" behind it.
Same here. I'm working with a big corporate client who is installing an nginx box in front of an IIS one. The IIS box does the work; the nginx box handles security, user throttling and reporting.
And no one will know IIS is behind it because it's a proxy. These stats don't tell the whole story.
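For the curious, the proxy layer itself is not much more than this (the backend address and certificate paths are hypothetical, and the real config adds the throttling and reporting pieces):

    # Rough sketch of a thin TLS-terminating reverse proxy; addresses and
    # cert paths are made up for illustration.
    server {
        listen 443 ssl;
        server_name app.example.com;
        ssl_certificate     /etc/nginx/ssl/app.crt;
        ssl_certificate_key /etc/nginx/ssl/app.key;

        location / {
            proxy_pass http://10.0.0.5:80;              # the IIS box doing the real work
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }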
Also note the inverted shape of the trends: Apache is much more popular among less visited sites, but less so as you go up the rankings. nginx is unpopular with the masses, but gains share among the most popular sites.
I can see at least two ways to interpret this:
1. nginx is just strictly better, and the sites at the top need that performance/capability and employ the top/best-informed architects to maintain their systems, so they're the early adopters. Everyone else is still on Apache by virtue of inertia, but will slowly move over as time goes on.
2. nginx is faster for high concurrency, and mostly gets used as a front-lines layer in front of something else, maybe even Apache itself. In this case, the charts will stay about like they are now, because only the most stressed sites need that much engineering.
I don't know that there's an easy way to differentiate between the two without either waiting and watching for change or knowing more about the internals of the top few thousand sites.
I imagine a majority of small sites are still on cheap shared hosting providers, and nginx doesn't support anything like .htaccess, which allows individual site owners in a chroot jail to configure the web server.
Apache is a process- and thread-driven application, but Nginx is event-driven. The practical effect of this design difference is that a small number of Nginx "worker" processes can plow through enormous stacks of requests without waiting on each other and without synchronizing; they just "close their eyes" and eat the proverbial elephant as fast as they can, one bite at a time.
Apache, by contrast, approaches large numbers of requests by spinning off more processes to handle them, typically consuming a lot of RAM as it does so. Apache looks at the elephant and thinks about how big it is as it tucks into its meal, and sometimes Apache gets a little anxious about the size of its repast. Nginx, on the other hand, just starts chomping.
The difference is summed up succinctly in a quote by Chris Lea on the Why Use Nginx? page: "Apache is like Microsoft Word, it has a million options but you only need six. Nginx does those six things, and it does five of them 50 times faster than Apache."
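In config terms the whole evented model surfaces as just a couple of knobs - the numbers below are illustrative, not tuning advice:

    # Minimal sketch of nginx's concurrency settings; values are illustrative only.
    worker_processes  4;              # usually one worker per CPU core
    events {
        worker_connections  4096;     # each worker multiplexes thousands of sockets
    }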
event_mpm is only partially evented. Thanks to the Apache module API, which assumes blocking I/O, it's impossible for Apache to be fully evented. That said, event_mpm handles a lot of the important cases where evented I/O is desired.
"Apache is like Microsoft Word, it has a million options but you only need six. Nginx does those six things, and it does five of them 50 times faster than Apache."
But can all those features be implemented efficiently? There are a lot of things that Nginx does not support that would make it slower if it did support them - .htaccess support, for example.
This may be arguing semantics, but I don't consider that a feature, as anything doable via .htaccess is doable via the main config (not to mention that nginx supports graceful hot reloads, so it's not a problem to manually force it to rescan the main config).
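For example, a redirect that a shared-hosting user would normally drop into .htaccess just lives in the server block instead - roughly like this (the paths are made up):

    # Hypothetical equivalent of an .htaccess Redirect/RewriteRule, kept in the
    # main config; paths are illustrative only.
    location /old-blog/ {
        return 301 /blog/;
    }
    # After editing, "nginx -s reload" re-reads the config gracefully without
    # dropping in-flight connections.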
The WordPress Supercache plugin automatically sets up a .htaccess with mod_rewrite rules to enable caching. On Apache, it just works. On Nginx without .htaccess support it would have to tell the user to copy-paste a snippet. Ouch, not so friendly. And this is even assuming the user has access to the web server config file.
In my experience, switching to a properly configured NGINX server running PHP-FPM with memcached negates much of the need for Supercache to begin with. Caching plugins tend to cause subtle problems all over the place for WordPress plugins, so I prefer to run "bare metal" as much as I can.
Also, I can't imagine anyone running NGINX without having access to the config file.
Actually, WP Supercache and nginx can form an awesome team.
You configure WPS to spit out gzipped copies of each page on disk, then you configure nginx to serve gzipped pages directly whenever they are found (gzip_static and try_files).
Basically you can serve compressed pages straight off disk. And thanks to the miracle of operating system file caching, it's usually straight from memory.
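The relevant bit of config is small - something along these lines, assuming WP Super Cache's default cache layout and an nginx built with the gzip_static module (treat the paths as assumptions for your own setup):

    # Sketch of "serve pre-gzipped pages straight off disk"; cache path follows
    # WP Super Cache's default layout and is an assumption.
    location / {
        gzip_static on;   # if a matching .gz file exists, serve it as-is
        try_files /wp-content/cache/supercache/$host$uri/index.html
                  $uri $uri/ /index.php?$args;
    }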
I completely agree with this and anyway many of the caching programs are starting to understand that nginx is out there. I am pretty sure w3 total cache does.
However since using nginx I have done away with caching as it is so much faster than apache + caching that I don't really need it.
That's my point. For any small/medium site, switching to NGINX will make a dramatic improvement on all aspects of your site, while dicking around with caching will cause you endless headaches.
>Also, should I stop using LAMP stack for my small and experimental PHP websites that very few people visit, or is it good enough?
It's plenty good enough; it's worked for decades already. If it ain't broke don't fix it. If traffic on a site is ramping up to the point where you're thinking "I might need to get new hardware for this soon", that's the time to start looking at a higher-performance webserver.
And honestly I'd look at moving away from PHP more urgently than moving away from Apache, if only because of their respective security records.
Well, you did also say "If it ain't broke don't fix it", so we might be agreeing. You wouldn't want any new programming language to roll its own SSL library rather than using OpenSSL or GnuTLS, would you?
I wouldn't want a project to write its own SSL library no, but I would not start a new project in C, and if I had an existing C codebase I would seriously think about migrating it to a safer language (unless it needed particular C features. Crypto code does, because it needs to resist timing attacks, but that's a very rare case).
Completely agree with this. After a lot of tweaking I could get Apache (w/ the worker MPM) pretty close to the speed of NGINX, but the difference in configuration files was enough to convert me.
And with NGINX I understand _every_ line in the configuration files, with Apache it was almost like configuring by coincidence.
It seems to me like Apache was designed more for hosting multiple websites for different users (the .htaccess for example). With Nginx it's not so easy for a jailed user to change his settings unless he's given access to the config.
Even for small scale stuff, the "default" nginx way of using PHP is much better - with php running in its own process via php-fpm or fastcgi. If you are running Apache with PHP-FPM you can get similar performance, but you find fewer how-tos and documentation on that since most Apache users leave modphp in place.
With modphp in limited memory environments you run into memory swap issues super fast unless you are careful about your Apache configs, since it will, in most default environments, spawn many more Apache processes than you have memory for. PHP makes the Apache process use 20-200M of memory for many standard apps like drupal or wordpress. Put it on a 1Gig linode without turning down the MaxClients and add 10 concurrent users and you will see the site grind to a halt. Now install nginx and php-fpm and follow anyone's basic how-to, it will only spawn as many php processes as your memory and cpu cores can handle, and nginx will keep spinning out the static files while php is churning in the background, living within reasonable memory bounds.
You can achieve the same thing in Apache, but I never really knew how to do that until Nginx taught me. I have never seen Apache set up right for this in any of my clients' existing servers and they can never scale at all.
Modphp is a bit faster if you have enough memory for your PHP application's footprint times the number of simultaneous connections, but that is hardly ever the case, since Apache reuses the same high-memory processes for every jpg and css file that it uses for the php requests, and 2-3 simultaneous users can make 20-30 concurrent requests. You had better make sure that your MaxClients is set to your physical memory divided by your php app's memory use. On a 1G VPS this might be 1024M/64M = 16. Save some memory for MySQL if it is on the same server and you probably shouldn't have MaxClients over 10. Now all of the css, jpg and png files are waiting in line behind the 500ms php scripts, and the site loads slowly. Many PHP apps eat up over 128M per process; even though a typical request is smaller, that one crazy request eventually bloats all of the processes and you are swapping. If PHP starts hitting swap, your request times climb to 3-5 seconds real quick.
I had a small VPS with apache and modphp that could only handle about 20 concurrent requests because it'd hit the memory limit so fast. I switched to nginx and php-fpm and it could then easily handle 5x the concurrent requests in the same memory footprint.
Even for small scale stuff there are some benefits to using Nginx.
There is the ability to use Lua with MySQL, PostgreSQL, MemCache, Redis, Upstream etc to perform functions of an app server e.g. authenticate users, serve JSON from a database. It's fast and keeps your app layer focused on real business logic.
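As a purely hypothetical sketch of what that looks like (assuming OpenResty with its bundled lua-resty-redis and cjson; the location and key name are made up), answering a JSON request straight from Redis without touching the app layer:

    # Hypothetical sketch only: assumes OpenResty (nginx + lua-nginx-module)
    # with lua-resty-redis and cjson available.
    location /api/counter {
        default_type application/json;
        content_by_lua_block {
            local redis = require "resty.redis"
            local cjson = require "cjson"
            local red = redis:new()
            red:set_timeout(100)                  -- milliseconds
            local ok, err = red:connect("127.0.0.1", 6379)
            if not ok then
                ngx.status = 502
                ngx.say(cjson.encode({ error = err }))
                return
            end
            local hits = red:get("page_hits")     -- hypothetical key
            ngx.say(cjson.encode({ hits = hits }))
            -- connection pooling (set_keepalive) omitted for brevity
        }
    }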
Not that nginx isn't great technology, but Apache has mod_lua too. Also, mod_perl used to be very popular for embedding the kind of functionality you talk about in the web server.
nginx is a good tool, and you should probably twiddle with it sometime so you know what it's about.
As far as I know, nginx can do anything Apache can do. nginx is generally faster than Apache. Apache can include modules without a recompile, nginx often requires you recompile to add features. nginx can be used as a reverse-proxy, a load balancer, or a thin SSL / SPDY layer, and succeed; Apache isn't as well suited to non-traditional web serving.
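For instance, the load-balancer / TLS-termination role is only a handful of lines (the addresses and certificate paths below are made up):

    # Sketch of nginx as a load balancer terminating TLS in front of
    # hypothetical app servers.
    upstream app_backends {
        server 192.168.0.11:8080;
        server 192.168.0.12:8080;
        keepalive 32;                       # reuse upstream connections
    }

    server {
        listen 443 ssl;
        server_name www.example.com;
        ssl_certificate     /etc/nginx/ssl/www.crt;
        ssl_certificate_key /etc/nginx/ssl/www.key;

        location / {
            proxy_http_version 1.1;         # required for upstream keepalive
            proxy_set_header Connection "";
            proxy_pass http://app_backends;
        }
    }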
Personally, I find nginx's configuration syntax nicer to read and write than Apache's.
LAMP is fine for small experimental websites. Do what you're comfortable with! The scale at which the differences between nginx and Apache actually matters is pretty large.
You shouldn't stop using your LAMP stack. The differences between how the two behave really only come into play when you're talking about thousands upon thousands of concurrent users.
Unless you are doing it all on the same server... having less load from one part of the server means more available for other parts, which can lead to a better overall user experience. It's not quite a drop in replacement, but could be well worth considering... trying to get under 300ms page loads shouldn't be just for the big boys.
For me the main reasons to switch from Apache to Nginx were that it's smaller, faster, very flexible and easier to configure. Plus the additional features (eg. reverse proxy, ...).
Interesting - how can the trend seen in those charts be explained? The clearly falling popularity of Apache the larger the website and clearly rising popularity of Nginx complement each other nicely.
At work, we have Nginx as a reverse proxy for some static files. Oh boy, is this thing fast, handling a big number of requests with a very, very small amount of resources (CPU, RAM).
If you're on Azure anyhow, why not use Windows and skip Mono? I don't have personal experience with either, but I hear that Mono is still quite a bit slower (although still fast enough for most purposes, I suppose).
This is just a PR and propaganda. We all know that JVM is the best "platform" ever developed by humanity and Jetty is the most efficient, fast and especially reliable under a high load web server.)
IIS must be in second place, btw, considering how many man-hours and millions of USD were spent on it. We all know that MS technology is superior and very scientific.)
I know you were joking but actually in those web benchmarks that have been posted some of the JVM frameworks actually beat the Nginx based one (OpenResty).
Unfortunately, as far as I've been taught, there must be some catch. A general-purpose bytecode interpreter, even with a JIT, cannot beat an optimized thin layer of C code on top of an OS's specialized syscalls. Just according to some laws of nature.)
Nope. nginx is rising because it is a masterpiece of programming. Take a look at the code - it is clean, readable, every syscall counted and thought of. No unnecessary copying of data, no idiotic casting from one kind of data structure to another. It is a piece of art of systems programming. Look at the code. Quality wins here.
I didn't choose nginx because of how the code looks. I doubt any program gets any kind of mainstream attention because of that. (Of course, it is good that the code is good.)
On the other hand, I use nginx as a reverse proxy in front of hypnotoad[1] so I might be unusual..
Both nginx and Apache are absolutely solid pieces of software. I wish we had as many alternatives to Microsoft Excel as we do IIS.