
Governments take IT security very seriously, I don't see where you've seen they don't.

The problem is that you just can't secure a whole infrastructure overnight and that security is very hard.




I would like to clarify on this issue a bit.

1. On paper, governments do indeed take IT security seriously. Many guidelines, laws, and other formal documents regarding IT security have been adopted. Unfortunately, many of these are internally inconsistent or in conflict with one another. The net result is that, in the name of "security", governmental IT systems are unnecessarily complicated and expensive.

2. In practice, prescribed security measures are mostly not enforced, for many reasons (e.g. the measures are so strict and rigid that enforcing them would prevent various legitimate users from actually doing their work; the people in charge of implementing and administering these systems are plainly incompetent and/or uninterested).

3. Securing everything that is currently deemed necessary, to the extent prescribed by relevant law, will turn out to be prohibitively expensive due to various logistical problems (who will implement the necessary auditing systems for legacy systems that nobody understands and there are no funds to replace? Where will you store all the auditing information? Where will you get competent engineers who actually understand the infrastructure and are willing to work proactively to secure it, for a laughable wage?).

So, as a matter of fact, I have to state that governments don't really take IT security seriously. They don't understand the issues and they don't even care. They care about scapegoats and that's it.

Disclaimer: Most of my work is on government related IT projects/systems.


Security is hard. Security on a large-scale system is very hard. Securing a legacy system is extremely hard. Securing a large legacy system is near impossible.

Yes, the government wants to cover its ass first and foremost.

But that doesn't mean they don't take IT security seriously; they just don't understand it well enough to select people to work with who do.

disclaimer: I've designed COMSEC systems


If they took IT security seriously, it wouldn't be a checkbox in a Lockheed or SAIC contract.


The problem is that you just can't secure a whole infrastructure overnight and that security is very hard.

Actually, you can never completely defend against every possible attack. Any finite limit can be exceeded. As long as an attacker can use up some sort of finite resource on your box or network, you're toast.

SYN floods don't happen anymore because SYN cookies make your TCP half-open connection table effectively unbounded, so that particular DoS is gone. But other things are not that easy to fix: you can prevent one IP from opening a million slow connections to your server, filling up your state table, and running your web server out of fds. But you can't prevent a million people from all opening one connection each without blocking legitimate users.
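The SYN-cookie trick works because the server derives its initial sequence number from a keyed hash of the connection 4-tuple and a coarse timestamp instead of storing half-open state, and only reconstructs the connection when the final ACK echoes the cookie back. A minimal sketch of the idea (the secret key, 64-second window, and function names are invented for illustration; real kernel implementations also pack an encoded MSS into the cookie):

```python
import hashlib
import hmac
import time

SECRET = b"server-secret"  # hypothetical key; a real server would rotate this

def syn_cookie(src_ip, src_port, dst_ip, dst_port, now=None):
    """Derive a 32-bit initial sequence number from the connection
    4-tuple and a coarse timestamp, so the server stores nothing
    for half-open connections."""
    t = (now if now is not None else int(time.time())) // 64  # 64 s window
    msg = f"{src_ip}:{src_port}>{dst_ip}:{dst_port}|{t}".encode()
    digest = hmac.new(SECRET, msg, hashlib.sha256).digest()
    return int.from_bytes(digest[:4], "big")

def check_ack(src_ip, src_port, dst_ip, dst_port, ack):
    """On the final ACK of the handshake, recompute the cookie for the
    current and previous time windows and compare it against ack - 1.
    A match proves the client completed the handshake; only then is
    connection state actually allocated."""
    now = int(time.time())
    for skew in (0, 64):  # tolerate a window rollover mid-handshake
        expected = (syn_cookie(src_ip, src_port, dst_ip, dst_port,
                               now - skew) + 1) & 0xFFFFFFFF
        if expected == ack:
            return True
    return False
```

A flood of spoofed SYNs then costs the server one hash computation per packet and zero memory, which is why the half-open table stops being an exhaustible resource.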

DoS is very hard to defend against. I think in this case, there is probably something easy to fix (get rid of those Windows 95-based routers and app servers), but in the general case, there is nothing that can be done.


This is not true.

I work for a major DDoS mitigation equipment company and we see SYN floods all the time.

And you CAN block one million users from all opening up one connection. Our software/hardware makes this happen. There are MANY MANY ways that DDoS can be dealt with; the biggest hurdle in many cases is convincing a customer that they might be next. Until then, they often don't see the need to spend the money on sufficient capacity or properly test their system against the range of "probable" attack vectors.

That's likely what you're seeing here.
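The claim that a million single-connection clients can be blocked splits into an easy half and a hard half. The easy half, capping what any single source can hold open, is simple bookkeeping in front of accept(); a toy sketch (class name and threshold are invented for illustration, and real mitigation gear does this in hardware, combined with many other signals):

```python
from collections import defaultdict

class PerSourceLimiter:
    """Reject new connections from any single source IP once it holds
    more than max_conns simultaneous connections. This caps one noisy
    IP; it does nothing against a million sources with one connection
    each, which is the hard half of the problem."""

    def __init__(self, max_conns=10):
        self.max_conns = max_conns
        self.active = defaultdict(int)  # source IP -> open connections

    def on_connect(self, ip):
        if self.active[ip] >= self.max_conns:
            return False  # drop: this source is over its cap
        self.active[ip] += 1
        return True

    def on_close(self, ip):
        if self.active[ip] > 0:
            self.active[ip] -= 1
```

Distinguishing a million distinct legitimate-looking sources requires the behavioral analysis discussed further down the thread, not a static cap like this.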

It /is/ true that attacks are becoming more sophisticated and targeting applications rather than just pure network resources (e.g. bandwidth). A big part of our effort is trying to abstract away the potential attack vectors common to several application stacks rather than developing solutions for each one at a time.

P.S. Paypal seems to work fine for me right now.


Well, I worked for Arbor, and while it's true that you can readily block packet-y attacks even from a million sources (as long as you can characterize the attack), you're kind of missing 'jrockway's point.

During the Olympics in Korea, which Arbor ran DDoS protection for, attackers set up web pages that simply directed hundreds of thousands of computers at URLs on the MSNBC sites. How are you going to filter against that? If you have a botnet, you can saturate a target with totally legitimate traffic.

You can talk all you want about anomaly detection and attack characterization, but if your attacker has a botnet that generates totally legitimate traffic patterns, you have a very hard problem to solve. It isn't intractable, but probably will require code changes to your application to address.

A lot of anti-DDoS gear that gets sold to enterprises is snake oil. Most companies aren't in a position to filter their own traffic.


I'm not going to get into specifics but I'd like to address your points.

The problem of distinguishing between legitimate traffic and attack traffic gets harder when the attack starts to look more like legitimate traffic. It doesn't get impossible.

You can have a more effective attack if you have a LOT of machines you can use to generate legitimate requests. Of course, after a short while, it's going to be possible to determine which of those hosts are part of the botnet because you can build a history of their requests over time.
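The request-history idea above can be caricatured in a few lines: keep a short per-source history and flag sources whose traffic is implausibly uniform. This is a toy stand-in for the far richer behavioral models real mitigation products build over time (class name and thresholds are invented for illustration):

```python
from collections import defaultdict, Counter

class RequestHistory:
    """Flag sources whose request stream is suspiciously uniform,
    e.g. hammering a single URL, once enough evidence accumulates.
    A real system would also weigh timing, headers, session flow, etc."""

    def __init__(self, min_requests=20, max_uniformity=0.9):
        self.history = defaultdict(Counter)  # source IP -> URL counts
        self.min_requests = min_requests
        self.max_uniformity = max_uniformity

    def record(self, ip, url):
        self.history[ip][url] += 1

    def looks_like_bot(self, ip):
        counts = self.history[ip]
        total = sum(counts.values())
        if total < self.min_requests:
            return False  # not enough evidence yet
        top = counts.most_common(1)[0][1]
        return top / total > self.max_uniformity
```

The point of the sketch is the shape of the tradeoff: the more history you keep per source, the better you can separate bots from humans, and the more memory and time the classification costs.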

So it's not an impossible problem to solve; just a hard one. Most enterprises don't really want to pay the money necessary to protect the bulk of their enterprises.

I don't think I missed jrockway's point, but I do think you're missing mine: namely, that effective DDoS protection is expensive and time-consuming from a training standpoint relative to any individual company's exposure. That's why we don't see more anti-DDoS features in high-profile websites, not because it's ineffective.

Even so...we STILL see lots of packet-y type attacks. It's often overlooked but crafting effective attacks requires really good programming skills. Such skills are often unevenly distributed in script-kiddie kommunities.


I agree that you missed my point :)

My point is, any finite limit can be exceeded. In days of old, it was state tables and file descriptors. Now it's bandwidth. Filtering doesn't matter once the packet has traveled down your finite link to your packet filter. That bandwidth has been used, and denied service to a legitimate user that wanted his packet to go to your server.

Mostly, you're right, it comes down to luck. Attackers don't get the chance to do a daily dev / qa / release cycle. They write something, push it to a bunch of users who hate Amazon and Paypal today, and that's the end of it. If they wrote good code, the attack will be good. If they need to tweak something, they missed the opportunity.

That's what's saving everyone here -- luck.


Filtering of DDoS attacks is moving into the network. The days of sorting attack traffic at the destination have always been numbered.

Current best practice has been to use technologies like FLOWSPEC to get the traffic off the network much closer to the origin.


Yeah, it's hard to speculate as to what's going on, because we are not Paypal or Mastercard. Maybe someone from Anonymous works there and changed their uplink media to 10BaseT :)

So about the SYN floods you see in real life, how do those work? Do routers not do SYN proxying for the servers behind them? Do SYN cookies not work? Are sequence numbers being forged? Is the link saturated? Something else?


Routers don't do SYN proxying. SYNs are just regular packets and are passed along to a host.

FIREWALLs on the other hand, might use a SYN to make an entry in a table that's used to track connection state. That table might be overloaded by a SYN flood. Same thing applies to load balancers.

SYN cookies work just fine at the ENDPOINTs.


That has been the opposite of my experience. To a large extent, the government can't take IT security seriously, because they've outsourced it. There are smart people in and around the government, but no coherent strategy. I don't want to get too specific, but no DoD network I've seen, or heard about from people who ran one, ranks with the least of my financial services clients.


> Governments take IT security very seriously, I don't see where you've seen they don't.

The very fact that this whole cablegate debacle even exists clearly demonstrates that parts of the US government do not.


I take death seriously, that does not mean I get to live forever.


You don't give some random army grunt access to your on/off switch.


Actually, I happen to work for the Army so I am often near well armed "grunts" with access to that off switch. It's a judgment call, but I assume walking around the Pentagon is probably safer than driving which I am also willing to do. More to the point, I think being respectful to well armed people is prudent, hiding under the bed is pointless. So, while I recognize the risk to life and limb at some point you need to focus on risk mitigation rather than avoidance.

PS: To put this into perspective, one of the guys I work with was there for 9/11. He sustained significant injury while several people in the room with him died. Yet he is also willing to work in the building, and most people in the building were not harmed.


Not interested. My original post was not about your completely-missing-the-point simile, but about the fact that the US government demonstrably sucked at IT security when they let the great unwashed have the kind of access they had to State Department cables.


IMO, the government does a reasonable job balancing how well it protects information and the costs of that protection. The current strategy will lead to leaks, but so did paper documents. Millions of people work for the government and many of them are going to try to cause problems.

So, if you are going to equate a single low-impact release with “sucking”, then go for it. But I would point out that, unlike banks, which often lose large numbers of SSNs, the government keeps the whole list for everyone, and that has not gotten out. And (as my original post pointed out) sometimes, when dealing with hard problems, mitigation really is the best you can hope for.


He doesn't mean governments' IT security (which is so-so overall), but the Internet as a whole.

And he's right: the stuff that will come out of it probably will not be very encouraging.



