Hacker News
German Government: Stop Using Internet Explorer (mashable.com)
52 points by AndrewWarner on Jan 15, 2010 | hide | past | favorite | 31 comments



Misleading title: the recommendation is only to stop using IE until a security fix is released.


These problems are well known, yet enterprise IT management seems to persist in sticking with IE, even spreading disinformation to their employees with claims that FF, Safari, Opera and Chrome are more vulnerable.

One incident still boggles my mind. A friend of mine attended a security conference last year with a corporate IT manager who had been insisting that IE is the better choice from a security standpoint. One speaker after another stood up and presented a paper, a talk, a lecture, or a round-table, all saying essentially the same thing:

The security of networked systems is only as good as its weakest link, and IE has consistently been the weakest link for years. Even with hundreds of security patches since XP was released, it's still such a threat to the network that it's irresponsible to continue using it. Not one presenter used it anymore. They continued that enterprise IT should not only be recommending other browsers, they should be enforcing other browsers as part of their security policy and disabling IE as much as possible on all systems under their control. It was damning.

After we got back to the office, he sent out a corporate wide email reminding everyone that browsers other than IE are vulnerabilities to the network and won't be tolerated.


Is IE entirely to blame, really? What about an operating system that gives a web browser enough prominence within the system to allow a vulnerability that enables attackers to “perform reconnaissance and gain complete control over the compromised system”?

I understand that a government would be hard-pressed to suggest that a nation forgo an OS so deeply embedded in many organizations' and people's day-to-day operations, but even a brief acknowledgement of the underlying problem (worldwide deployment of an out-of-date operating system inextricably and systemically intertwined with an abhorrently insecure electronic portal to the entire world of internet-enabled machines [I'm referring here to Internet Explorer]) would really help the much-needed movement to spread awareness of the more secure, cheaper alternatives (I'm referring here to Linux-based systems).

In any case, it's always great to see organizations with clout holding Microsoft publicly accountable for its indiscretions.


I have to ask what this even means. The last time the world started holding Microsoft publicly accountable for its security indiscretions was the "Summer of Worms" in '03. From that point on, starting with WinXP SP 2, Microsoft has put an absolutely huge amount of effort into security, including:

* Training virtually all of their developers on secure coding

* Modifying their core libraries to avoid dangerous idioms

* Spending tens of thousands of dollars per product per release on external security testing

* Slowing down dev cycles with "SDL" measures like threat modeling and code review

* Holding off releases to audit for new bug classes

I'm not saying that Microsoft ships perfect software, because what I'm saying is that it's impossible to ship perfect software.


I'm not saying that Microsoft ships perfect software, because what I'm saying is that it's impossible to ship perfect software.

When you write it in C/C++, anyway.

(I think Microsoft has almost solved this problem, though, with their heavy investment in the CLR and languages like F# on top of the CLR. Even C# is fine, compared to C or C++. You can still write insecure software in managed languages, but it will be because of a careless design, not forgetting to tack a "\0" on the end of a block of memory.)


Python, Ruby, and Java web applications are riddled with domain-specific vulnerabilities. I'm sorry, there simply aren't any easy answers here.


I think the problem is "web applications" and not Python, Ruby, or Java. Mostly.

(If you were writing your web applications in C, you'd have to worry about memory corruption problems AND cross-site request forgeries.)


"it's impossible to ship perfect software."

That is simply not true.

It may be very difficult, it may be insanely expensive, and it may require an immense effort to thoroughly inspect the whole stack beneath the software (down to the hardware level, with some attention to how external events may influence the hardware), but no, it's not impossible to ship perfect software, especially if you restrict the definition of "perfect" to doing what it's supposed to do, no more and no less.


Dan Bernstein couldn't do it. But sure, let's argue about this point. A lot.


I suspect rbanffy is talking about things like avionics, medical device control, and the space shuttle. This is a whole other ball game, and it is possible to ship software that is pretty darn close to perfect in those domains. They cost a few orders of magnitude more per requirement, and the requirements are sharply limited. These systems usually operate on a very limited number of input values and internal states, so you can often simply enumerate all the possible inputs and all the possible states to verify that the system works as designed.


I wasn't thinking about these markets, but their mere existence proves perfect software can be written - it's possible.

What I was thinking of was the bold statement that doing something right (or perfectly right) is impossible, and its widespread use as an excuse for not even trying. A software developer must aim for perfect software, even if shipping it is not always economically viable. The fact that end-users have been beaten into submission and now tolerate having to restart their computers after software crashes, or to redo calculations because the results were absurd the first time, is not relevant to this discussion. If they tolerate less than correctness, we shouldn't. The current security problems we have are entirely the fault of people who regard immediate profit as more important than long-term quality.

And I take exception with anyone who suggests this is how things should be.


Dan Bernstein didn't "regard immediate profit as more important than long-term quality". Neither did Wietse Venema. Both were at extreme pains to make sure their free software was free of security problems. Unlike virtually any commercial project, security software included, these two projects made software security their #1 priority.

Both failed to ship code free of game-over security flaws.

I find your point about "tolerating" failure to be irrelevant to the real world at best. At worst, it's just fatuous.

I've shipped hundreds of thousands of lines of code, and I've led projects to review millions of lines more: chipset to JVM, embedded to desktop, open source and proprietary. It is a point of pride with us: we always find things.

You think you're being respectful of users when you chant "don't tolerate failure". What you're really doing is being disrespectful of computer science. All evidence available to us suggests that it is practically impossible to ship code --- or at least, real-world code --- without security flaws.

Is secure software an impossible goal? Not for computer scientists. For instance, a Stanford team built a system for evaluating code for flaws based on abstract analysis. That project became Coverity, a commercial product for scanning source code for flaws. They compete with a larger company called Fortify, in a hundred-million-dollar market for static code analysis.

Projects like Stanford Checker, or the OS and runtime hardening work being done in countless labs, have a chance of advancing the state of the art to where you think it really is now. A line developer on a browser project has no such chance.


All I need to do is point to a single example of a perfect program. I am sure there are a lot of perfect programs around us - built into phones, smartcard readers, TV sets. Nostrademus also pointed out a couple of areas where bug-free is the norm. If we can go for simple ones, I have written a couple of them in my career - simple programs that do one thing and do it perfectly. I wouldn't be a professional engineer if I hadn't passed those exams.

What you have to do, on the other hand, to back up your bold statement that developing perfect software is impossible, is to prove it impossible.

Good luck.


You apparently think the code inside of phones, smart card readers, and TV sets is likely to be free from security defects, and at the same time you feel qualified to argue about this?

You could, if you had given it 5 seconds more thought, have chosen examples that were hard to falsify (even if they weren't based on any experience of your own). You might have suggested "weapons guidance" and "space shuttle life support". You wouldn't be right, except to the extent that those systems have virtually no attack surface, but at least your message board argument would be viable.


You have an impressive reputation, but, still, you fail to grasp a simple point of logic. You said shipping perfect software is impossible. To disprove you, all I have to do is show a single example of a perfect program. Many simple programs you use every day of your life certainly fall into this category. Maybe we got overly ambitious with our building blocks. Maybe we made the job harder than it needed to be. You say you always find something. Maybe that's because you only look where you know you'll find it.

I am not against security reviews. Quite the contrary. What I am saying is that you made a very impressive statement you can't possibly prove. There is software that does exactly what it's intended to do, with no side effects, no bugs, and that lives in a space where security flaws are non-existent.

You fail to understand that not all software is as complex as, say, a JVM. I am pretty sure you would have a hard time finding a flaw in, just to make a very simple example, the code that runs my coffee machine or that calculates my car's mileage. A security expert who claims it's impossible to write perfect software is nothing but arrogant and self-serving.

So, if you want to say it's impossible to have perfect software, please, prove it.


Your argument changes with each comment. It's less about defending any coherent perspective about software and security, and more about prolonging a pointless argument on Hacker News. Despite opening with a tirade about not accepting the failures of commercial software, you attempt now to wrap up your argument by referring to "the code that runs my coffee machine".

I don't care about your relative inexperience in software security. I care about the disrespect to computer science; the utter certainty that your comments betray that an unsolved problem that implicates vast swaths of terrain in CS is in fact solved, and could be put into practice if only every software practitioner would behave in some unspecified and abstract way that Ricardo Banffy has determined for them.


I care about the disrespect you showed towards computer science. And to basic logic.

You claim shipping perfect software is an impossibility. I would find it entertaining to hear you trying to prove this. Unfortunately, you are avoiding that burden.

I took a stand against the pernicious impact baseless affirmations like yours have on attitudes throughout our industry. They imply it's OK to write something faulty and then debug it into shippable form. And then ship it. This is not the way it should be done.

I would be OK if you said shipping software that does (only) what it's supposed to do was "horribly difficult", "prohibitively expensive" or something along those lines. You equate perfection to invulnerability to external attacks, narrowing this "perfection" to fit into your expertise.

I "specify" no "abstract way" for software builders other than to have faith, at least until it's proven it's impossible to do our job right, that perfection is achievable and we should try really hard to get to it, something you seem to disagree with.

So, again, please prove our efforts are futile so we can finally shed our hopes and follow your lead.


Seems a bit pre-emptive, as the suggestion is based on "a critical yet unknown vulnerability" that was "probably exploited" in the recent attack on Google & others.

That said, the very fact that a browser is not super-sandboxed in the first place is frightening, and IE has a huge amount of integration compared to other browsers. Mayhaps that's why it's been such a problem?


It was probably preemptive when it was written, but it's been borne out by the facts.

The only browser that is "super sandboxed" is Chrome, and nobody knows how effective the process security model in Chrome is going to end up being. The actual vulnerability appears to be a basic object lifespan problem in DOM handling code; in other words, this could happen to any browser.

After Chrome, the browser with the next most involved security model is IE with DEP, and, in fact, when DEP is enabled, the exploit doesn't work (that's no guarantee that it could never work, though).


Cool, didn’t know we had something like that: http://en.wikipedia.org/wiki/Federal_Office_for_Information_...

(Uh, and it’s one of those government offices still in Bonn. Just around the corner, for massive 5km amounts of just around the corner.)


The "Bundesamt für Sicherheit in der Informationstechnik" advised users to avoid IE at least since 2004: http://translate.google.com/translate?js=y&prev=_t&h...


Did they advise users to avoid Firefox, too? Their users would have avoided a lot of security pain.


No, they just recommend using alternative browsers and to switch often. Although the BSI warned against lots of things in the interest of privacy, mostly Google products (Chrome Beta, Google Wave, ...).

On the other hand they have the power to intercept and analyze the complete data- and phone communication without anonymization. They of course use it not only to find malware and security risks, as was intended in the bill, but to provide 'suspicious' data to the police and intelligence services (http://www.golem.de/0901/64639.html).

We also have data retention for all communication data for six months and have planned a nice censorship infrastructure to block domains (against child pornography for now), not unlike Australia or China.

But at least IE has only 40% market share... (http://www.browser-statistik.de/)


Seriously, any company still forcing IE6 on their employees should be reported to the government and taxed. These stupid corporate IT decisions are impacting everyone, and there needs to be regulation. It's no different than if the company was pumping toxins into the air.


This vulnerability is no longer unknown, though it is (to my knowledge) still unpatched. It is known not to be exploitable on IE7 with DEP enabled, so most IE instances should in fact be easy to lock down.


China and the USA both have access to the Windows and IE source code, correct? So presumably they have as many pre-0day vulnerabilities as are ever needed.


You're crazy if you think access to IE source code is anything more than a speed bump for vulnerability researchers. Microsoft code is among the easiest to reverse out in a disassembler, and they publish symbols.

In any case, this vulnerability looks remarkably straightforward. You could conceive of the fuzzer that might have found it. It would be an extremely clever fuzzer, but not an unprecedented one.


(I know very little about security research, which is readily apparent... should have phrased the second part of previous question as a comment as well)


Maybe, but you shouldn't get downvoted for it. ;)


How hard, or easy, is it to make a really secure browser while at the same time keeping all the jazz that makes say, GMail work?


Someone ought to edit

   &utm_content=Google+Reader
out of the link so that the visits from HN aren't reported as referrals from Google Reader. By the way, this also tells me that the poster found the article in their reader and copied the link directly from the reader.



