The blame for this falls squarely on vendors that make their crap hard to keep up to date. I'm not blaming Microsoft here - they're actually pretty decent - the problem is usually with third parties that don't validate against the newest releases.
Seriously, people - there's a reason OS vendors have prerelease software programs. If you're not testing your software on, say, the Windows 8 beta or on prerelease versions of OS X 10.7.3, then when it doesn't work on day one of release it's likely your fault, not the OS vendor's for breaking stuff.
The larger issue is that software testing is a PITA... this is probably a shared problem between Dev and Ops. Every time I see a new single-function "installs software X outside of package management" program, I have to think that both sides are doing the other a disservice. Devs should be able to get their software onto multiple platforms easily, and Ops shouldn't have to fight to get it installed. We've only been wrestling with this problem for the last 30 years...
I wouldn't say Microsoft is pretty good at updates. I once installed a patch for a DNS security flaw on our Windows DNS server and got reamed by my boss for bringing our site down. I've found the policy of installing all updates when first setting up a Windows server and then never touching it again works quite well. We also keep all non-HTTP access behind a VPN.
Alternatively, don't use Windows. App Engine is my preferred platform for new projects, and the security on that is most likely top-notch and constantly upgraded.
This is why we use QA servers for most of our systems. We try our best to test all functionality on a patched QA server before deploying patches to our production servers. We've had good success doing this (with Windows, at least).
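With the "patched QA server first" workflow, the part that's easy to get wrong is knowing whether production actually matches the baseline you validated on QA. Here's a minimal sketch of that comparison in Python, assuming you've already exported the installed hotfix/KB IDs from each box (e.g. via Get-HotFix or wmic qfe) into plain text files; the file names are made up for illustration.

    # Minimal sketch: compare the patch baseline validated on a QA server
    # against what's actually installed on a production server. Assumes
    # you've already exported the installed hotfix IDs (one KB number per
    # line) from each box -- file names here are placeholders.

    def load_kbs(path):
        """Read one hotfix/KB identifier per line, ignoring blanks."""
        with open(path) as f:
            return {line.strip() for line in f if line.strip()}

    qa = load_kbs("qa_server_hotfixes.txt")
    prod = load_kbs("prod_server_hotfixes.txt")

    missing_on_prod = sorted(qa - prod)   # validated on QA, not yet deployed
    untested_on_prod = sorted(prod - qa)  # on prod but never went through QA

    print("Patches validated on QA but missing from prod:")
    for kb in missing_on_prod:
        print("  ", kb)

    print("Patches on prod that never went through QA:")
    for kb in untested_on_prod:
        print("  ", kb)

Anything in the first list is a deployment you've already paid to test and just haven't rolled out; anything in the second is a patch that bypassed your own process.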
The servers that were attacked weren't the actual exchange servers; they were the servers for Director's Desk.
To be frank, it's a logistical nightmare to keep all your servers patched to the very latest patch level, so "unpatched" is really non-descriptive. Was it vanilla W2K3, or was it a couple of months behind? I would imagine most Windows servers housing mission-critical services are at least a month unpatched, because it takes time to test patches and make sure they don't cause problems, so the title is a bit misleading.
I've had customers with a window of 8 hrs per quarter to upgrade their mission-critical servers for any reason (upgrading the OS, upgrading software, etc.), so I can imagine they would have been considered "unpatched" as well.
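If you want "unpatched" to mean something, one number you can actually track is how long it's been since the newest patch landed on a given box. A rough sketch, assuming you have an export of installed hotfixes with install dates (the "KBxxxxxxx,YYYY-MM-DD" line format here is an assumption - adjust it to whatever your inventory tooling actually emits):

    # Rough sketch of quantifying "how unpatched": given an export of
    # installed hotfixes with install dates (assumed format:
    # "KBxxxxxxx,YYYY-MM-DD" per line), report how many days it has been
    # since the newest patch was installed.

    from datetime import date

    def patch_lag_days(path, today=None):
        today = today or date.today()
        newest = None
        with open(path) as f:
            for line in f:
                line = line.strip()
                if not line:
                    continue
                _kb, installed = line.split(",")
                d = date.fromisoformat(installed)
                if newest is None or d > newest:
                    newest = d
        if newest is None:
            return None  # no patches recorded at all
        return (today - newest).days

    lag = patch_lag_days("server_hotfixes_with_dates.txt")
    print("Days since last patch:", "unknown" if lag is None else lag)

"30 days behind because we test on a monthly cycle" and "two years behind because nobody owns the box" are very different stories, and this is the cheap way to tell them apart.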
That being said, there's no excuse for misconfigured firewalls. You would have thought the 10 companies they hired for security would have noticed this, but I guess they were just feeding from the trough.
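Catching that kind of thing doesn't even require a pentest; a periodic check from a host outside the firewall that only the ports you intend to expose actually answer goes a long way. A quick-and-dirty sketch - the hostname and port lists are placeholders, and this is a sanity check, not a substitute for a real scanner like nmap:

    # Quick-and-dirty external exposure check: run from a host *outside*
    # the firewall and confirm that only the ports you intend to expose
    # actually answer. Hostname and port lists are placeholders.

    import socket

    HOST = "www.example.com"   # public-facing host to probe (placeholder)
    EXPECTED_OPEN = {80, 443}  # ports that should answer
    SHOULD_BE_CLOSED = {21, 23, 135, 445, 1433, 3389}  # shouldn't be exposed

    def is_open(host, port, timeout=2.0):
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    for port in sorted(EXPECTED_OPEN | SHOULD_BE_CLOSED):
        open_ = is_open(HOST, port)
        expected = port in EXPECTED_OPEN
        status = "OK" if open_ == expected else "MISCONFIGURED?"
        print(f"port {port:5d}  open={open_!s:5}  expected_open={expected!s:5}  {status}")

Run it from outside on a schedule and alert on any line that doesn't match the intended policy; that's the kind of thing ten security vendors apparently didn't do.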
It is extraordinarily rare for talented security firms (which tend to be - no, wait, are invariably - boutique operations) to get true site-wide discover-audit-recommend-retest-repeat-til-fixed projects.
The closest approximations that boutique firms get regularly are "sitewide pentests", which compromise the objective you're talking about by (a) timeboxing the project (and particularly its earlier discovery phase), (b) deliberately hiding information from the team (often in the name of "judging what attackers would find in the real world"), and (c) if retesting is part of the project at all, it's one cycle. While large companies use sitewide pentests as their "once a year, do the whole network" project, the reality is that these projects are designed first to prove that it's possible to pop the network (it always is), second to provide a triaged list of the most glaring security flaws, and only lastly as a true survey of all the weaknesses on the network.
This makes sense, because the true survey would cost 5-10x more, involve weeks (not days) of discovery, doc review, and staff interviews, and take many months to run start-to-finish.
There's a lot of value in sitewide pentests (I think past a certain size every company should do them every year), but a lot of people have unrealistic expectations about what they can really accomplish. For most companies, if you need to catch things like "one unpatched Win2k3 server in an obscure application cluster from a company you bought 5 years ago", you're going to have to match the external consultant pentest with a very focused, very diligent, very well-designed internal security effort, designed from the outset with an understanding of the limitations of pentesting projects.
My perception is that larger security firms do in fact get the all-singing all-dancing sitewide audit/fix projects, but those companies often are just feeding at the trough, sending people with no real talent for the work to bill lots of hours. It's also, frankly, hard to get talented people excited about spending months reviewing firewall rules.
It's sad to see that most companies don't really care about security; they care about securing their own jobs by covering their asses. The pentest you describe sounds exactly like that: just enough to cover someone's ass when the shit hits the fan, without actually solving the problem. "See, I followed procedures and got the audit, and we fixed the top 5 most critical things. It's not my fault. How were we to know that the hackers would break in using the 15th thing on the list?"
I guess there's not much we can do about this, it seems ingrained in our culture these days, at least in the larger corporations. I know a bunch of Big 4 public auditors and some of the stories I would hear sounded like outright fraud. I would point that out, and they would say "Public auditors aren't in the business to detect fraud. We are only supposed to ensure that whatever gets published is accurate." The same goes for the financial "controls" that were supposed to be put in place with SarBox. Higher-level directors are supposed to sign off every quarter or every year that certain financial procedures are done, but they robosign the forms because they don't care. It's only when the shit hits the fan and people start scratching beneath the surface that everyone realizes that nothing is actually working as it's supposed to.
Never ceases to amaze me. I know upgrading servers can be royally difficult, but for something as large and critical as NASDAQ - and given that security experts consistently cite keeping software up to date as the #1 rule - you would have thought that updating servers would be part of their general security strategy.
I'm guessing what would really amaze you is the scale of an IT operation like NASDAQ, the velocity of day-to-day changes that need to be kept up with, and the extreme difficulty of staffing a competent security team to handle it.
I take your point --- if you're ostensibly made of money (NASDAQ is more or less a "tier 1 ISP for money"), you should just be able to spend more money to address this problem.
However, as important as security has become in the last several years, the strategic scale of the problem really hasn't sunk in at the highest levels of most companies. Operations and development are still adversarial to security; they still run the table; and all three groups (ops, dev, and security) are still treated as cost centers by COOs.
They're considered cost centers, because they are cost centers. Unless you generate revenue by selling software or providing operational or security services to others, there is no direct sales upside to spending more money in these areas. There is only downside risk to not spending enough and screwing something up -- that makes them cost centers, not profit centers.
That doesn't mean they're not important, of course. Any cost center that didn't perform an important function would just be cut. But it's just like running payroll -- it would be disastrous if you couldn't do it in a consistent and timely fashion, but as long as everything seems to be working correctly, nobody outside that area is ever going to care that much about it.
Not at all. They are only cost centers because that's how the business guys divided them up to get the toys they wanted. Consider the following two divisions:
1. Revenue center model
Currently we have no computers. If we build a system that securely allows financial transactions, we can make a lot of revenue. We will assemble a team that builds a secure system for financial transactions.
2. Cost Center model
We will build a system that handles financial transactions. That will be our revenue center. The system will not have security as a requirement. Then we will have a cost center that is responsible for making this system secure.
PHB: Which costs more? Option 1? Ok, do option 2 then. Build me a system that isn't secure first, and then we'll work on that when we can.
Did those security experts suggest that the new patches would have any fewer holes than the current ones?
Meanwhile, all the other experts suggest not disturbing a high-performance, high-throughput system by immediately installing every new patch that arrives.
One thing that Reuters failed to disclose while it continued to cover the NASDAQ breach is that it offers a competing product. They finally added the disclosure after being called out and asked to by NASDAQ.