I have to ask what this even means. The last time the world started holding Microsoft publicly accountable for its security indiscretions was the "Summer of Worms" in '03. From that point on, starting with WinXP SP 2, Microsoft has put an absolutely huge amount of effort into security, including:
* Training virtually all of their developers on secure coding
* Modifying their core libraries to avoid dangerous idioms
* Spending tens of thousands of dollars per product per release on external security testing
* Slowing down dev cycles with "SDL" measures like threat modeling and code review
* Holding off releases to audit for new bug classes
I'm not saying that Microsoft ships perfect software, because what I'm saying is that it's impossible to ship perfect software.
When you write it in C/C++, anyway.
(I think Microsoft has almost solved this problem, though, with their heavy investment in the CLR and languages like F# on top of the CLR. Even C# is fine, compared to C or C++. You can still write insecure software in managed languages, but it will be because of a careless design, not forgetting to tack a "\0" on the end of a block of memory.)
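To make the "\0" point concrete, here is a minimal, hypothetical C sketch (my own toy example, not taken from any shipped product) of the forgotten-terminator bug class that managed runtimes rule out by construction:

    #include <stdio.h>
    #include <string.h>

    int main(void) {
        char src[] = "hello, world";
        char dst[8];

        /* strncpy does NOT add a '\0' when the source is longer than the
           destination, so dst is left unterminated. */
        strncpy(dst, src, sizeof dst);

        /* printf("%s\n", dst);  -- would read past the end of dst:
           undefined behavior, the exact class of flaw a managed runtime
           makes impossible to express. */

        /* The fix is a single byte, but a human has to remember it every time. */
        dst[sizeof dst - 1] = '\0';
        printf("%s\n", dst);

        return 0;
    }

The point isn't that this particular bug is hard to fix; it's that nothing in the language forces anyone to fix it.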
It may be very difficult, it may be insanely expensive, and it may require an immense effort to thoroughly inspect the whole stack beneath the software (down to the hardware level, with some attention to how external events may influence the hardware), but no. It's not impossible to ship perfect software, especially if you restrict the definition of "perfect" to doing what it's supposed to do, no more and no less.
I suspect rbanffy is talking about things like avionics, medical device control, and the space shuttle. This is a whole other ball game, and it is possible to ship software that is pretty darn close to perfect in those domains. They cost a few orders of magnitude more per requirement, and the requirements are sharply limited. These systems usually operate on a very limited number of input values and internal states, so you can often simply enumerate all the possible inputs and all the possible states to verify that the system works as designed.
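As a toy illustration of that "enumerate everything" approach (my own sketch, not drawn from any avionics codebase): when the input space is small enough, you can simply check the implementation against an independently stated specification for every possible input.

    #include <assert.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Implementation under test: saturating 8-bit addition. */
    static uint8_t sat_add(uint8_t a, uint8_t b) {
        uint16_t sum = (uint16_t)a + (uint16_t)b;
        return sum > 255 ? 255 : (uint8_t)sum;
    }

    /* The specification, stated separately from the implementation. */
    static int spec_holds(uint8_t a, uint8_t b, uint8_t result) {
        int sum = a + b;
        return result == (sum > 255 ? 255 : sum);
    }

    int main(void) {
        /* Only 65,536 input pairs exist, so check every single one. */
        for (int a = 0; a <= 255; a++)
            for (int b = 0; b <= 255; b++)
                assert(spec_holds((uint8_t)a, (uint8_t)b,
                                  sat_add((uint8_t)a, (uint8_t)b)));

        puts("all 65536 input pairs verified against the spec");
        return 0;
    }

Exhaustive checking like this only scales to deliberately tiny input and state spaces, which is exactly why those domains keep their requirements so sharply limited.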
I wasn't thinking about these markets, but their mere existence proves perfect software can be written - it's possible.
What I was thinking of was the bold statement that doing something right (or perfectly right) is impossible, and its widespread use as an excuse for not even trying. A software developer must aim for perfect software, even if shipping it is not always economically viable. The fact that end-users have been beaten into submission and now tolerate having to restart their computers after software crashes, or to redo calculations because the results were absurd the first time, is not relevant to this discussion. If they tolerate less than correctness, we shouldn't. The current security problems we have are entirely the fault of people who regard immediate profit as more important than long-term quality.
And I take exception with anyone who suggests this is how things should be.
Dan Bernstein didn't "regard immediate profit as more important than long-term quality". Neither did Wietse Venema. Both were at extreme pains to make sure their free software was free of security problems. Unlike with any commercial software, security software included, software security was the #1 priority for both of these projects.
Both failed to ship code free of game-over security flaws.
I find your point about "tolerating" failure to be irrelevant to the real world at best. At worst, it's just fatuous.
I've shipped hundreds of thousands of lines of code, and I've led projects to review millions of lines more: chipset to JVM, embedded to desktop, open source and proprietary. It is a point of pride with us: we always find things.
You think you're being respectful of users when you chant "don't tolerate failure". What you're really doing is being disrespectful of computer science. All evidence available to us suggests that it is practically impossible to ship code --- or at least, real-world code --- without security flaws.
Is secure software an impossible goal? Not for computer scientists. For instance, a Stanford team built a system for evaluating code for flaws based on abstract analysis. That project became Coverity, a commercial product for scanning source code for flaws. It competes with a larger company called Fortify in a hundred-million-dollar market for static code analysis.
Projects like Stanford Checker, or the OS and runtime hardening work being done in countless labs, have a chance of advancing the state of the art to where you think it really is now. A line developer on a browser project has no such chance.
All I need to do is to point to a single example of a perfect program. I am sure there are a lot of perfect programs around us - built into phones, smartcard readers, TV sets. Nostrademus also pointed out a couple of areas where bug-free is the norm. If we can go for simple ones, I have written a couple of them in my career - simple programs that do one thing and do it perfectly. I wouldn't be a professional engineer if I didn't pass those exams.
What you have to do, on the other hand, to back up your bold statement that developing perfect software is impossible, is to prove it impossible.
You apparently think the code inside of phones, smart card readers, and TV sets is likely to be free from security defects, and at the same time you feel qualified to argue about this?
You could, if you had given it 5 seconds more thought, have chosen examples that were hard to falsify (even if they weren't based on any experience of your own). You might have suggested "weapons guidance" and "space shuttle life support". You wouldn't be right, except to the extent that those systems have virtually no attack surface, but at least your message-board argument would be viable.
You have an impressive reputation, but, still, you fail to grasp a simple point of logic. You said shipping perfect software is impossible. To disprove you, all I have to do is show a single example of a perfect program. Many simple programs you use every day of your life certainly fall into this category. Maybe we got overly ambitious with our building blocks. Maybe we made the job harder than it needed to be. You say you always find something. Maybe that's because you only look where you know you'll find it.
I am not against security reviews. Quite the contrary. What I am saying is that you made a very impressive statement you can't possibly prove. There is software that does exactly what it's intended to do, with no side effects and no bugs, and that lives in a space where security flaws are non-existent.
You fail to understand that not all software is as complex as, say, a JVM. I am pretty sure you would have a hard time finding a flaw in, just to pick a very simple example, the code that runs my coffee machine or that calculates my car's mileage. A security expert who claims it's impossible to write perfect software is nothing but arrogant and self-serving.
So, if you want to say it's impossible to have perfect software, please, prove it.
Your argument changes with each comment. It's less about defending any coherent perspective about software and security, and more about prolonging a pointless argument on Hacker News. Despite opening with a tirade about not accepting the failures of commercial software, you attempt now to wrap up your argument by referring to "the code that runs my coffee machine".
I don't care about your relative inexperience in software security. I care about the disrespect to computer science; the utter certainty that your comments betray that an unsolved problem that implicates vast swaths of terrain in CS is in fact solved, and could be put into practice if only every software practitioner would behave in some unspecified and abstract way that Ricardo Banffy has determined for them.
I care about the disrespect you showed towards computer science. And to basic logic.
You claim shipping perfect software is an impossibility. I would find it entertaining to see you try to prove this. Unfortunately, you are avoiding that burden.
I took a stand against the pernicious impact that baseless assertions like yours have on attitudes throughout our industry. They imply it's OK to write something faulty and then debug it into shippable form. And then ship it. This is not the way it should be done.
I would be OK with it if you said shipping software that does (only) what it's supposed to do was "horribly difficult", "prohibitively expensive", or something along those lines. Instead, you equate perfection with invulnerability to external attacks, narrowing this "perfection" to fit your expertise.
I "specify" no "abstract way" for software builders other than to have faith, at least until it's proven it's impossible to do our job right, that perfection is achievable and we should try really hard to get to it, something you seem to disagree with.
So, again, please prove our efforts are futile so we can finally shed our hopes and follow your lead.