I personally think that the focus on open recursive resolvers is misplaced. All authoritative nameservers have to be "open" to queries for the domains they serve. So instead of amplifying by doing:
$ host -t any random-site.com. ns1.example.org.
you can just as well do:
$ host -t any example.org. ns1.example.org.
Even if all nameservers supported good per-IP throttling (far from the case today), I think there are still enough valid nameservers on the internet to stage a decent amplification attack. So once all of the open resolvers are shut down, the DDoS pricks will just target more important infrastructure to accomplish the same goal.
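Back-of-the-envelope, the amplification factor is just response bytes divided by request bytes. A minimal sketch with assumed sizes (the ~64-byte spoofed query and ~3000-byte ANY response are illustrative figures, not measurements):

```python
# Rough amplification-factor arithmetic for a spoofed UDP DNS query.
# Both sizes below are assumptions for illustration.
QUERY_BYTES = 64       # spoofed "ANY" query, headers included (assumed)
RESPONSE_BYTES = 3000  # large ANY response, e.g. with DNSSEC records (assumed)

amplification = RESPONSE_BYTES / QUERY_BYTES
print(f"amplification factor: ~{amplification:.0f}x")

# With these numbers, 1 Mbit/s of spoofed queries becomes roughly
# 47 Mbit/s aimed at the victim whose address was forged.
attacker_mbps = 1
victim_mbps = attacker_mbps * amplification
print(f"{attacker_mbps} Mbit/s in -> ~{victim_mbps:.0f} Mbit/s out")
```

The exact numbers vary per zone, but the point stands: any server that answers small UDP queries with large UDP replies is usable, open resolver or not.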
It might be that we'll have to switch all authoritative DNS requests to be TCP-only but I can't imagine what a pain that transition will be.
The worse news is that egress filtering has been something we've clearly needed for 15+ years, and it doesn't seem we've gotten very far. Part of the problem is that these amplification attacks usually don't cause much pain to the real source of the attack. Plus, since it's very hard to tell where the true source is, they don't even get publicly shamed for it. It's so much easier to point a finger at the middleman in this case.
For egress filtering to be effective protection, it needs to cover nearly all of the network. As long as a botnet can get its hands on a decent amount of unfiltered bandwidth to amplify, it's game on.
The question is whether it can be done without the ISP's co-operation. For example, I've got transit service from XO (a Tier-2 provider), and they could conceivably port-mirror the inbound side of my connection, do a source IP check on packets (even statistical sampling would be fine), and then raise a flag if anything came out smelly.
Since we're comparing a 32-bit number, and we can construct a logic gate which defines 'legal' values for that number, even a modest FPGA implementation could just sink (equivalent to /dev/null) all traffic and generate a pulse when the match failed. It's been a while since I was in a company building DSLAMs and edge gear, but up to a 10Gbit pipe that isn't a killer problem.
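The gate-level check described above reduces to a masked 32-bit compare. A minimal Python sketch of that logic (the customer prefix is a made-up example):

```python
# A source-address "legality" check reduced to the masked compare an
# FPGA would implement in hardware: (addr & mask) == (prefix & mask).
# The /24 customer prefix below is an illustrative assumption.

def ip_to_u32(ip: str) -> int:
    """Pack a dotted-quad IPv4 address into a 32-bit integer."""
    a, b, c, d = (int(x) for x in ip.split("."))
    return (a << 24) | (b << 16) | (c << 8) | d

PREFIX = ip_to_u32("198.51.100.0")  # customer's assigned /24 (assumed)
MASK = 0xFFFFFF00                   # /24 netmask

def source_is_legal(src: str) -> bool:
    # One AND and one compare per packet: trivially cheap in silicon,
    # which is why line-rate checking isn't a killer problem.
    return (ip_to_u32(src) & MASK) == (PREFIX & MASK)

print(source_is_legal("198.51.100.7"))  # inside the prefix -> True
print(source_is_legal("203.0.113.9"))   # spoofed source -> False
```

In hardware the same compare runs in parallel against every prefix assigned to the port, but per packet it stays constant-time.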
Near the edge, it's just a matter of using the same silicon you have for routing and asking "would I have routed a packet to this claimed source IP down the pipe I'm receiving it from?" In cisco-speak this is "ip verify unicast source reachable-via rx".
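That "reachable-via rx" check can be modeled in a few lines. A toy sketch with a hypothetical three-entry routing table (the prefixes and interface names are made up):

```python
# Strict uRPF in miniature: accept a packet only if the route back to
# its claimed source address points out the interface it arrived on.
# The routing table below is a made-up example, not real config.
import ipaddress

ROUTES = {
    ipaddress.ip_network("198.51.100.0/24"): "eth1",  # customer A
    ipaddress.ip_network("203.0.113.0/24"): "eth2",   # customer B
    ipaddress.ip_network("0.0.0.0/0"): "eth0",        # default, upstream
}

def urpf_strict(src: str, in_iface: str) -> bool:
    """Longest-prefix-match the claimed source, then compare the
    route's egress interface against the receiving interface."""
    addr = ipaddress.ip_address(src)
    best = max((net for net in ROUTES if addr in net),
               key=lambda net: net.prefixlen)
    return ROUTES[best] == in_iface

print(urpf_strict("198.51.100.7", "eth1"))  # A's address on A's port -> True
print(urpf_strict("198.51.100.7", "eth2"))  # spoofed on B's port -> False
```

This is exactly why the check is cheap at the edge and breaks in the core: at the edge the reverse route is unambiguous, while further in, asymmetric routing makes the interface comparison fail for legitimate traffic.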
Once you move further towards the core, the problem becomes exponentially more complex. Asymmetric routes are common, so it's not weird at all to get a packet handed to you by a different ISP than the one you would reply through. Any filtering there would break a large percentage of valid traffic.
The problem is that there are a hell of a lot of "edges", you need nearly all of them to be fixed, and they don't have the motivation to do much about the problem.
It's not a solution at all, because it will never happen. You'd need every network on the internet to spend money and time to fix a problem that nobody is holding them responsible for, and that they have no monetary incentive to fix.
The solutions are clear. They happen to be doing the laziest of them, which is closing the open resolvers.
Open resolvers are like open WiFi and open Tor nodes. They can all be misused, but I would still like to see them around in the future. A strict hierarchy can be abused too, and it's not at all clear which is the lesser evil. I think you can solve a temporary problem created by an abuse of trust with another temporary use of trust... Ideally one could, if they were the target of a flood, control the flow towards them (egress filters) upstream (the farther the better, which itself assumes a trust relationship) before it has a chance to concentrate.
It needs to change, but I highly doubt it will for at least a few decades. Too many things rely on it, and it would require way too many resources and a miserably long time to change everything to the new thing.
Egress filtering would happen within a few days if someone DDoSed every open resolver with other open resolvers. In fact, saturation of the outbound links would effectively be egress filtering.
Haha, this would be kind of awesome. Obviously this would be bad news for the open resolvers' home networks, but could their upstreams handle it (i.e. maybe they would have to drop resolver traffic but not anything else)? If so, maybe Cloudflare should prepare such a response for the next time someone pulls this.
Yes, they do have to be open. Properly configured authoritative-only name servers will allow full zone transfers (AXFR) only to their slaves.
The amplification part of the attack comes from the fact that they ask for zone transfers, and zone transfer replies are much larger than the requests.
If they were just doing normal DNS lookups for records you wouldn't have an amplification factor.
I'm not optimistic.