This was an interesting inside view. One thing I did notice is the statement that the rate of change on the MSFT side was causing him grief. I'd be curious, as a follow-up, to see whether he continues to develop on Linux. It seems like he has generalized the ease of the port into other perceived benefits of Linux (e.g. a lower rate of change, less need to adapt code) that might not hold true in practice, or at least not as true as he expects based on the porting experience.
Sounded a little bit like some kind of initial success euphoria bias. Yes, I just made that up. I remember it with Rails too. This isn't a knock on Rails--just using it as an example--but a lot of people had such a blast porting code (me included) that they assumed everything was peaches and cream until they realized that the big, dirty things in web apps like scaling, localization and the impact of poor planning didn't magically disappear.
I think there's a honeymoon period with just about any new technology. It generally starts once you've switched to solve a problem and the new technology solves it well. At this point any faults or frustrations are generally just because you don't know the technology well enough to do it right. It isn't until you become more familiar and try to expand what you're doing that you start to find and explore the limits of the new technology.
All technology has limitations and frustrations. The question is which works best for each person. I think this might work well for him, as it sounds like he is doing some things that do port very easily.
Choosing Linux for stable APIs would be rather silly. X11 is evolving rapidly nowadays and audio APIs are in a state of constant flux, where the different APIs often interfere with each other and there's no obvious best solution. Linux changes ABIs all the time and is notorious for C library incompatibilities. This recent mess with filesystem write semantics (http://lwn.net/Articles/322823/ ) has proven you can't even rely on POSIX.
Meanwhile, Win32 has been far more stable over the years and Microsoft has put a ton of effort into backwards compatibility. Don't get me wrong; I like Linux but API stability is not the reason to choose it over Windows.
"This recent mess with filesystem write semantics (http://lwn.net/Articles/322823/ ) has proven you can't even rely on POSIX."
I have to disagree.
The "problem" resides in the way ext4 writes data to the disk. If you need crash-resistance (the data-loss occurs on computer crashes) ext4 is not for you. And keep in mind if you really need it, ext3 isn't for you either. JFS or XFS, perhaps, and ZFS, but I never intentionally crashed a machine with unwritten data since, perhaps, my Apple II days.
I have several hundred million files stored on XFS, not sure if that's a lot or a little by HN standards, but maybe it counts for something.
We will never, ever, have another XFS setup for a filesystem where deletions are to be expected. XFS works fine as long as you are just adding files to the filesystem; deletions are pathologically slow.
We've tried just about every trick in the book and have finally made the decision to switch back to ext3, which given the number of files will take a while.
So, that's nothing against XFS from a reliability point of view, but definitely from a performance point of view.
Interesting. I had a meeting where XFS got high praises from a very knowledgeable person. The catch is that in this solution, files are added and appended and very seldom deleted.
Deletion can be much slower than file creation when the files are hundreds of MB, too!
Worst of all, "rm -R" runs right into XFS's most pathological case -- it's doing tons of metadata reads while it unlinks files, and XFS shoves a lock in every orifice.
Ext4 is essentially a badly configured Ext3 (data=writeback mode) with extent support. By default, in recent kernels, Ext3 is configured the same way as Ext4 (data=writeback). In RedHat/Fedora/CentOS/etc., Ext4 is configured properly (data=ordered): "Don't worry about the new default journaling mode for ext3 planned for 2.6.30 (data=writeback, which is much faster than the old default, data=ordered, but has enormous security and data integrity problems). No distro would ship this as the default. The only way it could happen at Red Hat is over the dead bodies of the security team, who, let me tell you, keep an eagle eye on file system data leaks like this."
Ext4 is not alone -- other filesystems (Ext3, XFS, maybe others) are being patched to support the new file-rewrite semantics too.
PS.
I saw a lot of zero-padded files on XFS until I switched to Ext3 with data=ordered.
The X11 protocol was standardized in 1987. Any X application which worked then would work now. There are extensions and new capabilities being added, but every one is backwards-compatible.
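For what it's worth, a bare-bones Xlib program of the sort you could have written back then still compiles and runs unchanged against a modern X server. A minimal sketch (purely illustrative, not from the article):

    /* Minimal Xlib program -- compile with: gcc hello_x.c -lX11 */
    #include <X11/Xlib.h>
    #include <stdio.h>

    int main(void)
    {
        Display *dpy = XOpenDisplay(NULL);   /* connect to the X server */
        if (!dpy) {
            fprintf(stderr, "cannot open display\n");
            return 1;
        }

        int screen = DefaultScreen(dpy);
        Window win = XCreateSimpleWindow(dpy, RootWindow(dpy, screen),
                                         10, 10, 200, 100, 1,
                                         BlackPixel(dpy, screen),
                                         WhitePixel(dpy, screen));
        XSelectInput(dpy, win, ExposureMask | KeyPressMask);
        XMapWindow(dpy, win);

        for (;;) {                           /* simple event loop */
            XEvent ev;
            XNextEvent(dpy, &ev);
            if (ev.type == KeyPress)         /* any key quits */
                break;
        }

        XCloseDisplay(dpy);
        return 0;
    }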
Ditto with audio APIs -- the last major change was the changeover from OSS to ALSA, in 2002, and to this day there's an OSS compatibility layer.
I've used Linux since early 2001, and in all that time I've never encountered an incompatibility with libc.
The filesystem semantics issue is due to idiot application developers trying to squeeze out a few msec of performance at the risk of their users' data. The issue exists on every modern filesystem, no matter which OS is used. Anybody who relies on POSIX -- i.e., uses fsync() -- doesn't have to worry about that problem.
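Concretely, the POSIX-portable way to replace a file without risking a zero-length result after a crash is to write a temp file, fsync() it, then rename() it over the original. A rough sketch (the function name and paths are just illustrative, error handling kept minimal):

    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int save_file(const char *path, const char *data)
    {
        char tmp[4096];
        snprintf(tmp, sizeof tmp, "%s.tmp", path);

        int fd = open(tmp, O_WRONLY | O_CREAT | O_TRUNC, 0644);
        if (fd < 0)
            return -1;

        if (write(fd, data, strlen(data)) < 0 ||   /* write the new contents */
            fsync(fd) < 0) {                       /* force them to disk */
            close(fd);
            unlink(tmp);
            return -1;
        }
        close(fd);

        /* rename() is atomic: readers see either the old file or the new one,
         * never a truncated mix -- but only if the data was fsync()ed first. */
        return rename(tmp, path);
    }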
Yes, X11 is backwards compatible and that's been really great for the Linux desktop. But the article bashes Windows for rapidly introducing new APIs even though the old ones remain compatible; I'm just pointing out that Linux does that too.
As for the audio situation, the problem is sharing between multiple apps. ALSA sucks at it so "sound servers" were invented but they caused more problems than they solved. Now it seems like every distro has a different sound solution, and meanwhile OSS was never removed from the kernel so you can't even rely on ALSA always being there.
The filesystem problem can't be blamed wholly on apps or kernel devs; IMHO the problem really lies in POSIX which doesn't specify a way to achieve what app developers need in a way that can also be easily implemented in the kernel with good performance. I would argue that the behaviors app developers were using became a de facto part of POSIX and the way kernel devs broke them was irresponsible even if it didn't break the letter of the standard. Contrast with Microsoft which goes far, far out of its way to avoid breaking apps even when they flagrantly violate good practices (there are some great examples on The Old New Thing: http://blogs.msdn.com/oldnewthing/ ).
Yes, X11 is backwards compatible and that's been really great for the Linux desktop. But the article bashes Windows for rapidly introducing new APIs even though the old ones remain compatible; I'm just pointing out that Linux does that too.
Linux's graphics APIs have remained largely static: they are GTK+ and Qt. Nobody writes their own implementations of X11.
Your audio paragraph is completely divorced from reality. Sound servers were originally created to implement software mixing, which is not supported by OSS. When ALSA was imported into the mainline kernel, software mixing became possible without servers and they largely died out.
Now, ALSA is the dominant standard for Linux audio. Recently, the sound server PulseAudio was created, but apps use it through the standard ALSA API. To my knowledge, there is no distribution using a sound server other than Pulse.
You can't rely on sound being enabled, true. But every mainline distribution ships with sound enabled, and that means they support ALSA. If a user disables their sound subsystem and then complains they can no longer hear music, that's their problem.
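And the "standard ALSA API" an app needs for simple playback really is small. A minimal sketch (device name, format, and rates are just illustrative; error handling trimmed; compile with gcc play.c -lasound -lm):

    #include <alsa/asoundlib.h>

    int main(void)
    {
        snd_pcm_t *pcm;
        /* "default" routes through whatever the distro configured
         * (dmix, PulseAudio, raw hardware...) -- the app doesn't care. */
        if (snd_pcm_open(&pcm, "default", SND_PCM_STREAM_PLAYBACK, 0) < 0)
            return 1;

        /* 16-bit, stereo, 44.1 kHz, allow resampling, 500 ms latency */
        snd_pcm_set_params(pcm, SND_PCM_FORMAT_S16_LE,
                           SND_PCM_ACCESS_RW_INTERLEAVED,
                           2, 44100, 1, 500000);

        static short buf[44100 * 2];            /* one second of samples */
        for (int i = 0; i < 44100; i++) {       /* crude 440 Hz square wave */
            short s = ((i / 50) % 2) ? 3000 : -3000;
            buf[2 * i] = buf[2 * i + 1] = s;
        }
        snd_pcm_writei(pcm, buf, 44100);        /* count is frames, not bytes */

        snd_pcm_drain(pcm);
        snd_pcm_close(pcm);
        return 0;
    }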
The filesystem problem can't be blamed wholly on apps or kernel devs; IMHO the problem really lies in POSIX which doesn't specify a way to achieve what app developers need in a way that can also be easily implemented in the kernel with good performance.
Sure it does -- fsync(). To my knowledge, the only filesystem which has poor performance when using fsync() is ext3 in data=ordered mode (which is not the default).
When ALSA was imported into the mainline kernel, software mixing became possible without servers and they largely died out.
My experience was completely different. A few years ago, ALSA software mixing was not enabled by default and didn't work well when enabled. When not enabled, apps couldn't share the audio device at all. KDE created aRts to allow sharing at the app level, but aRts sucked, plus it hogged the audio device so non-aRts apps wouldn't work at all. Later aRts added a timeout after which it would release the device but this obviously wasn't a good solution. Gnome had ESD which I didn't use but it conflicted with aRts. JACK came along but was only ever used by high-end audio programs.
ALSA finally did get decent software mixing support, but now people are used to running sound servers. PulseAudio is the newest thing but last I heard a lot of people are still unsatisfied with it (e.g. http://jeffreystedfast.blogspot.com/2008/06/pulseaudio-solut... ). Furthermore, people who know what they're talking about are recommending a move away from PulseAudio and ALSA and back toward OSS! Personally, I think the case is quite convincing: http://insanecoding.blogspot.com/2009/06/state-of-sound-in-l...
A few years ago, ALSA software mixing was not enabled by default and didn't work well when enabled. When not enabled, apps couldn't share the audio device at all.
Yes, disabling sound mixing will prevent multiple applications from using the sound card at once, in much the same way as disabling graphics drivers will prevent X11 from working.
KDE created aRts to allow sharing at the app level, but aRts sucked, plus it hogged the audio device so non-aRts apps wouldn't work at all.
aRts is for pre-ALSA (ie OSS) applications. It doesn't belong on an ALSA-based system, and will obviously not get along well with a modern stack. I'm not denying there are lots of distributions which are configured poorly, but any decent distribution such as Red Hat or Debian worked well.
I disagree that glitch-free playback and per-application volumes are "solutions in search of problems", but that's personal taste. If somebody wants to run without PulseAudio, they can. Reading the blog post, it seems he was surprised when upgrading to a bleeding-edge development version caused problems.
The last link is written by an OSSv4 developer. OSSv4 is unlikely to ever gain mainstream acceptance because it contains insanity such as performing floating-point math in the kernel. Aside from people who are literally hired by the company developing OSSv4, I have heard no good news about it, and there does not appear to be any movement back to an OSS-based stack.
Yes, disabling sound mixing will prevent multiple applications from using the sound card at once, in much the same way as disabling graphics drivers will prevent X11 from working.
X11 never came with graphics drivers disabled by default.
aRts is for pre-ALSA (ie OSS) applications.
aRts isn't "for" ALSA or OSS applications; in fact it doesn't play nice with either. aRts is for aRts applications. aRts has backends for both OSS and ALSA.
There's no reason glitch-free playback and per-application volume control can't be done in the kernel. The only feature that makes sense to do in user space is network transparency, which is of limited utility.
That first link you provided to people being 'unsatisfied with PulseAudio' is over a year old. Around a year ago most major distros jumped on PulseAudio a) before it was ready and b) using messed up configurations which didn't help anything.
I don't know much about how it stacks up for OSSv4, but I like the ability to see multiple audio streams from various programs and to tweak a specific program's volume from PulseAudio (sometimes programs don't provide volume control of their own). I'm assuming here that everything talking about 'audio mixing' just means taking multiple software audio outputs and blending them together to create the output to the hardware. Per-application/per-process volume control is an advanced feature that I've seen provided on OS X and, I believe, Windows (through 3rd-party software); I would like to see functionality like this on Linux as well.
And to be fair, that blog post about going 'back to OSS' is claiming that OSSv3 -> OSSv4 was a major overhaul that adds in things like mixing support. When you say 'back to OSS', most people are going to read that as 'back to OSSv3' not 'ditch ALSA for the revamped OSSv4.'
Ditto with audio APIs -- the last major change was the changeover from OSS to ALSA, in 2002, and to this day there's an OSS compatibility layer.
ALSA, as an API for building modern audio applications, is useless. Yes, literally, useless.
I have tried for over ten years to move my home studio recording to Linux-based tools...but, to this day, reliable, glitch-free, low-latency, multi-track 24 bit 96kHz audio from multiple applications and external sources is simply not possible under Linux (without dedicating a machine to the task, which I've never been willing to do, and which I don't have to do under Windows or Mac OS X).
First up, audio requires an rt kernel. The rt kernels trail behind the mainline kernel by several revisions, and usually include a few major security or stability issues. The latency of a non-rt kernel is such that even with extremely large buffers, audio still stutters.
You also need a sound server to permit multiple apps to share and route sound amongst other apps. The competing and barely compatible sound servers for Linux are targeted to different audiences, and generally don't do a very good job even for their specific niche. JACK is the professional's choice, but in my experience it's never been stable enough to be usable for any real work. PulseAudio is laughably unreliable... it consistently fails when returning from sleep on my desktop, and makes weird popping noises after playing sounds on my laptop. It never worked at all on my mid-range audio devices (Delta 66 on the desktop and FireWire Focusrite Saffire LE on the lappy), as it seems to just freak out when there are multiple devices (to be fair, Vista also freaks out a bit when changing sound devices and using both for various tasks; I've been able to find a delicate balance that allows Flash and movies and such to play through the built-in audio and audio software to use the good devices).
Anyway, I use, and have used, Linux as my primary operating system for all of my machines for over ten years, for everything except audio. I try to make the switch at least once a year, and I've failed every single time because of failures of both the kernel and the audio APIs on Linux.

I'm not a neophyte at this stuff, either. I've written and maintained patches for the Linux kernel over the years, and I'm not nervous about any of the required steps to make things work. It's just that when all is said and done, the system is either not capable of working with high-quality audio at all (non-rt kernel, normal desktop PulseAudio configuration) but usable as a desktop system for normal work, or it's configured exactly right for pro audio (rt kernel, jackd, etc.) and sort of works for a few minutes or hours at a time, but is otherwise useless as a desktop system because the 3D video drivers are too flaky in this configuration, the kernel has stability or security issues, and normal software (like Firefox+Flash) can't talk to the jackd server and can't get access to the audio device because JACK is holding it.
Even for music software that does happen to support Linux (Renoise, to name one of my favorite tools), there is confusion among the developers about what they should be doing to provide good support to Linux users... which of the several sound servers is the "right" one? It depends on who you ask.
So, while I spend 90% of my day running Linux, when I want to record music, I have to reboot into Windows.
There are benefits and drawbacks to maintaining an API vs. updating it. Switching every few months will alienate your developers, but making sure everything stays backwards compatible for 20 years will lead to strange errors (particularly because programmers will expect their idiosyncratic workarounds to continue to work).
I think the more important question is how the groups responsible will handle the transition. A well planned and executed transition will give many developers a chance to switch gradually and by the time the old API isn't supported the older versions of the program will likely only be run on the older versions of the API anyway.
Maybe I'm missing something, but who develops on Windows for applications that aren't using the Windows APIs anyway? I mean, the whole point of Windows is its APIs. If you aren't going to use them, then sure, go and move to a Unix system -- indeed you should have done so years ago.
On the other hand, if your product is designed for Windows, then you can't really switch, can you? (that's a real question by the way, not rhetorical).
The whole point of Windows is that it's an operating system. Depending on your market/skills you either develop for it or not. I prefer to develop for it because when it comes to C++ programming, it's the best out there.
This story made porting look easy, but rest assured it is hard, and he won't be so thrilled when the problems start cropping up.
I develop Java applications on Windows. Within my company there are at least 3 other software engineers doing web development, targeting deployments that are not on Windows servers, who still use Windows as their development boxes. We also do this by choice.
If you are familiar with Windows XP then it is a fairly stable, simple, and reliable OS.
Jonathan Blow, the developer of Braid, has had a thread on his blog (running from last year, when he gave up trying to port Braid to Linux, until July of this year) about the (sorry?) state of developer tools on Linux.
An interesting read - given that his primary editor was emacs and he has been using *nixes for 20 years.
Long story short: sorry state of sound on Linux, sorry state of debugging tools on Linux, and OpenAL+SDL don't cut it.
I still use Windows, mostly because of my primary tools (Maya, Photoshop and Fusion, plus the occasional game), but I have a VM with Linux in it and just run a fullscreen PuTTY on a second monitor. For Django dev, for example, I have a vertically split screen where the bottom is a :resize of 10 lines running manage.py runserver_plus, and in the top one I run bash or cycle through several shells (whatever I need). Similar stuff happens when I dev in D or C++.
I edit files via tramp on that VM dev server. A neat side effect is that I can move the VM from workstation to laptop and vice versa, basically having a portable dev server with me all the time. It would be nice, though, if I could find a binary-diff copy tool for files, since copying the VM over the network can take some time (luckily I don't do that all the time).
Didn't it also take that long to port the OS X version as well? (I seem to remember the 'developer' builds of Chrome for Linux and Chrome for Mac coming out at the same time)
Keep in mind that he seems to be mostly using Python and doing mostly number crunching (which would seem to be rather OS independent) from what I could glean.
Isn't the whole point of Chrome to take advantage of OS-level protection mechanisms, so that stuff coming from the web doesn't get the chance to mess with your machine? I believe they're doing completely different things in this regard on each OS, basically building sandboxes from scratch.
Developers are responsible for debugging their platform then? He states that he had a significant speedup in Vista too, which is pretty good evidence that he'd isolated the problem to something in XP. So he needs to decide on what to do, and fixing XP retroactively doesn't seem to be an option.
And in any case, the thread scalability of the allocator in some of MS's older C runtimes is a known performance issue. I'm not surprised by this at all.
What happened to: first check your code, then your compiler and as a last resort your OS? The problem was most likely in his hand-crafted threading code.
Whether there's an underlying fault or not, it's an interesting datum that it ran 5 to 10 times faster under Linux. If nothing else, compiling and running under both platforms gives clues to places problems are happening. We target three different platforms and seeing the performance change from one to the other is really useful.
All facts indicate the problem _was_ the platform. The program ran much faster on Vista, even with the original manual threading. When ported to Linux with OpenMP the problem went away, suggesting it never really existed inside the program.
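For reference, the kind of change involved in moving hand-rolled threading to OpenMP is small. A hypothetical sketch (the loop body is made up, not the article's code; compile with gcc -fopenmp crunch.c):

    #include <omp.h>
    #include <stdio.h>

    #define N 10000000

    int main(void)
    {
        static double data[N];
        double sum = 0.0;

        /* OpenMP splits the iterations across cores and handles thread
         * creation, scheduling and the reduction itself -- no manual
         * worker threads or locks to maintain. */
        #pragma omp parallel for reduction(+:sum)
        for (long i = 0; i < N; i++) {
            data[i] = (double)i * 0.5;
            sum += data[i];
        }

        printf("sum = %f (threads available: %d)\n", sum, omp_get_max_threads());
        return 0;
    }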
He stated that one of the biggest reasons he did the switch was because Linux allowed him more freedom in terms of platform, libraries, and tools, while his windows options were limited by his company to Visual Studio 2005 on Windows XP x64.
He wanted to just start working on the newer operating systems, but was not allowed to due to corporate budgets or the like.
When you start to hit issues that you have good reason to believe have nothing to do with you, finding the root cause of the issues is not always the best choice.
free() under Windows XP can be incredibly slow. I once created a big hash table with more than 10000 allocated hash buckets, and freeing it took around 3 seconds (!) on a 3GHz P4. The same code on Linux and OSX took around 3 milliseconds.
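To make the scenario concrete, a rough sketch of that kind of test (bucket count and allocation sizes are made up here, not the original code): allocate a pile of buckets, then time the free() loop.

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    #define BUCKETS 100000

    int main(void)
    {
        void **bucket = malloc(BUCKETS * sizeof *bucket);
        for (size_t i = 0; i < BUCKETS; i++)
            bucket[i] = malloc(64 + (i % 256));   /* varied small allocations */

        clock_t t0 = clock();
        for (size_t i = 0; i < BUCKETS; i++)      /* the part that was slow on XP */
            free(bucket[i]);
        free(bucket);
        clock_t t1 = clock();

        printf("free() loop took %.3f s\n",
               (double)(t1 - t0) / CLOCKS_PER_SEC);
        return 0;
    }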
Looking at what he was doing (Python + C++) it seems that switching was a no brainer. Hell, I'm a MS fanboy (seriously love .NET) and I use Linux for some projects (Java/Tomcat and investigating Mono).
I've always been a Windows developer. Three months ago my co-worker suggested I give Linux a shot. Fuck it, why not.
I realized the following:
1) Linux is more stable. (NEVER have to reboot it, unless I choose to conserve energy at night.)
2) Linux boots faster. By the time Linux boots and auto-launches Firefox and Eclipse, Windows is only reaching the desktop, at which point it's still 1-2 minutes before it becomes usable.
3) I still use the same tools in Linux and Windows: Eclipse, grep, Firefox, a real Linux command prompt (god, Cygwin is not as good as the real thing). I still need a VM to run Toad, which is fine since Toad makes Windows 95 appear stable; putting Toad in a VM is the best way to make it work well :). Also I get to test IE6, 7 and 8 without any problems because I already have VMs set up.
4) Multiple desktops = easier at times.
5) Windows made me need extra tools for mounting, like sftp -> drive. Linux comes with sftpfs (ok, fine, you need to apt-get it). It's all free too, so cost savings.
Switching to Linux only raised one question: why didn't I do it earlier?
The only problem is KDE fonts suck! Gnome ones are good out of the box. Also KDE apps (like Plasma) crash like a wasted, doped-up narcoleptic. Sooooo Ubuntu, not Kubuntu, wins for me.
If you have to reboot Windows for anything other than updates you should be looking into problems with your hardware and drivers. The "Windows is unstable" meme should really be dead and buried by now.
I can believe Ubuntu is faster than Windows at booting, especially considering they focused on it in their last release, but for me the much bigger issue was flaky Sleep/Suspend-to-RAM and Hibernate/Suspend-to-Disk support. I suspend my laptop several times a day but only reboot on updates.
> If you have to reboot Windows for anything other than updates you should be looking into problems with your hardware and drivers. The "Windows is unstable" meme should really be dead and buried by now.
If it should be 'dead and buried' by now, then why does it persist?
It seems that there are enough people that share these experiences (not necessarily your own) to stop that from happening.
My own take is that it is getting better, but it isn't there yet.
One thing that I personally don't understand at all is why the industry did not long ago standardize on ECC RAM; the cost would have been marginal (due to the increase in volume) and the potential benefits in perceived stability would be huge.
Servers have it pretty much as standard; any bit flipped in critical RAM can cause an OS crash (both for Linux and for Windows -- RAM failures are OS agnostic).
Maybe Microsoft's code doesn't cause crashes any more, but 3rd party drivers and shoddy OEM hardware are definitely still to blame. Users don't care why their stuff crashes, only that it does. And first hand, I see XP-era machines crash less after converting them to Ubuntu. I haven't tested a Vista machine, I've never seen anyone run one.
Re: ECC RAM, my non-server, "consumer" Linux boxes typically have uptimes of 100, 200, 300 days. They only go down for hard drive failures and OS upgrades.
My copies of XP and Vista were fairly peppy booters when first installed, but adding in assorted services (databases, Web server, VNC) slowed things down, plus I had to install antivirus software, which makes things even slower.
Of more concern to me is shutdown time, as in "eternity", and when I have to just hold the power button to force a machine off. Each time there is a real risk of corrupted drives (I'm recovering data right now from a drive corrupted because something hung Vista and I had to power off the box).
Each time it refuses to cleanly shutdown there is nothing displayed to tell me why.
Is there even an option to have Windows show what it's doing when it shuts down, instead of the often untruthful 'Windows is shutting down' message?
> I haven't tested a Vista machine, I've never seen anyone run one.
I'm a designer and programmer and was stuck on Vista for a little over 3 months after starting my current job. Photoshop required at minimum two reboots a day. Absolute hell.
At work, I have to restart my computer at least once a week. I'm not sure if it is Windows XP's fault or if it is another program I run. All I know is that after about a week the computer grinds to a halt, taking way too long to do anything reasonable.
In my experience, Windows doesn't need to be rebooted very often - although it does seem to leak some RAM. Also, a few games and other applications (like GTA:SA) cause lockups or BSODs in edge cases. GTA:SA freezes the computer if I pause it, close the laptop lid, reopen the lid, and try resuming.
Also, as you mention, drivers cause problems occasionally. I think twice Windows suddenly stopped recognizing a wireless mouse, not sure why. It worked on a different USB port.
Disclaimer: I'm talking about XP. Also, I have little experience in Ubuntu, but quite a bit in FreeBSD on the desktop. Once it's set up, it works quite nicely actually - apart from some quirky behavior while suspending to RAM.
Windows' swapping algorithms suck big time. After coming back to work I need to wait for Windows to swap Eclipse/FF back into RAM... WTF was it doing overnight that warranted swapping them to disk? Heavy thinking?
After leaving Ubuntu on all night, the performance the next morning is as if I never left the keyboard.
At my last job, Windows shutdown took 3 minutes and startup took 10. The system did suffer from memory leaks, or appeared to, since after about 2 weeks it had to be rebooted or it worked like crap. Coulda been the antivirus POS software, though. Reboots were also needed at times; that's just the nature of not having kill -9.
Another complaint: you can't delete open files. Our AV program broke our build because we could not run cleanup scripts where applicable.
In any case, after 3 months of bliss I have nightmares that at my next job I'll be coding on Windows. Maybe 7 ain't so bad...
I have a 2-year-old Vista machine; the biggest problem I have is that it sometimes (2-3x per day) locks up while running a major HD write or read, usually with no warning, sometimes when I haven't even been doing anything on the machine. Also, I wiped my Norton Antivirus: it would start running virus checks every week, which it was supposed to, but for some f-ed up reason it ALWAYS did it when I was in the middle of a game, and I don't spend much time playing games.
I have found Kubuntu's KDE4 a bit flaky and font-poor (especially the earlier versions). I run KDE4 at home from the Gentoo repository and it is rock solid and the fonts are fine, so I am not sure what is going on there.
A bit beside the point, as I would not recommend switching to Gentoo as your first Linux distro (or even for most people).
I used Windows Server 2008 for development this summer and generally had uptimes of 2-3 weeks...only had to restart for updates. Never had any problems.