This article, published by the Association for Computing Machinery and written by no less than Paul Vixie, is a detailed look at a basic fact of the Internet: it is smart at the edge and dumb in the middle.
> By design, the Internet core is stupid, and the edge is smart. This design decision has enabled the Internet’s wildcat growth, since without complexity the core can grow at the speed of demand. On the downside, the decision to put all smartness at the edge means we’re at the mercy of scale when it comes to the quality of the Internet’s aggregate traffic load. Not all device and software builders have the skills—and the quality assurance budgets—that something the size of the Internet deserves. Furthermore, the resiliency of the Internet means that a device or program that gets something importantly wrong about Internet communication stands a pretty good chance of working “well enough” in spite of its failings.
Many service providers have attempted to add “intelligence” to the network with QoS, MPLS, and many other hacks that ultimately provide only short-term benefits. Vendors have produced the products that service providers asked for, and the providers then complain when no value is derived from them.
The article then moves on to Source Address Validation (SAV) and its importance in protecting the Internet from reflection attacks against DNS and NTP. This conclusion is depressing:
> There is no way to audit a network from outside to determine if it practices SAV. Any kind of compliance testing for SAV has to be done by a device that’s inside the network whose compliance is in question. That means the same network operator who has no incentive in the first place to deploy SAV at all is the only party who can tell whether SAV is deployed. This does not bode well for a general improvement in SAV conditions, even if bolstered by law or treaty. It could become an insurance and audit requirement in countries where insurance and auditing are common, but as long as most of the world has no reason to care about SAV, it’s safe to assume that enough of the Internet’s edge will always permit packet-level source-address forgery, so that we had better start learning how to live with it—for all eternity.
The conclusion is that operators do not care enough about their customers, or the source of their revenue, to implement self-preservation tools. That care has a cost, of course: SAV requires some small amount of engineering and operations work, and short-term profits override long-term shareholder value. Service providers are suffering from a few things, in my view:
- A lack of good tools and standards that make networks easier to manage. They reasonably expected vendors to provide those tools; vendors didn’t, and eventually OpenFlow and the ONS kickstarted the SDN disruption. Tools to handle SAV are in sight.
- There is very little movement toward innovation among service providers. Few are investing in their networks; most are largely taking profits instead.
- It’s easy to be lazy in a big company with an incumbent position.
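The SAV idea itself (BCP 38-style ingress filtering) is simple to illustrate. This is a minimal sketch, not production code: the interface names and prefix table are hypothetical, and a real implementation lives in router forwarding hardware, not Python.

```python
import ipaddress

# Hypothetical table mapping a customer-facing interface to the prefixes
# assigned to that customer (names and addresses are illustrative only).
ASSIGNED_PREFIXES = {
    "cust-eth0": [ipaddress.ip_network("203.0.113.0/24")],
    "cust-eth1": [ipaddress.ip_network("198.51.100.0/25")],
}

def sav_permits(interface: str, src_ip: str) -> bool:
    """BCP 38-style ingress check: accept a packet only if its source
    address falls within a prefix assigned to the arriving interface."""
    src = ipaddress.ip_address(src_ip)
    return any(src in net for net in ASSIGNED_PREFIXES.get(interface, []))

# A legitimate source passes; a spoofed source is dropped at the edge.
print(sav_permits("cust-eth0", "203.0.113.7"))  # True
print(sav_permits("cust-eth0", "8.8.8.8"))      # False
```

The point of the article stands out here: this check only works at the edge where the operator knows which prefixes belong on which port, which is exactly why no outside party can audit it.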
The article goes on to discuss DNS Response Rate Limiting (DNS RRL):
> The economics of information warfare is no different from any other kind of warfare—one seeks to defend at a lower cost than the attacker, and to attack at a lower cost than the defender. DNS RRL did not have to be perfect; it merely had to tip the balance: to make a DNS server less attractive to an attacker than the attacker’s alternatives. One important principle of DNS RRL’s design is that it makes a DNS server into a DDoS attenuator—it causes not just lack of amplification, but also an actual reduction in traffic volume compared with what an attacker could achieve by sending the packets directly. Just as importantly, this attenuation is not only in the number of bits per second, but also in the number of packets per second. That’s important in a world full of complex stateful firewalls where the bottleneck is often in the number of packets, not bits, and processing a small packet costs just as much in terms of firewall capacity as processing a larger packet.
It’s a good suggestion, but the author is also quick to highlight DNS RRL’s problems: a marginal performance impact and possible false positives.
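The attenuator idea can be sketched as a per-bucket rate limiter keyed on the (claimed) client subnet and the answer being repeated. This is a toy model under assumed parameters, not the real RRL algorithm: real RRL also “slips” occasional truncated replies so legitimate clients behind a spoofed address can retry over TCP, which is omitted here.

```python
import time
from collections import defaultdict

RATE = 5      # identical responses allowed per window per bucket (assumed)
WINDOW = 1.0  # window length in seconds (assumed)

# bucket key -> [window_start, count]; keyed by (client /24, qname)
_state = defaultdict(lambda: [0.0, 0])

def rrl_decision(client_ip: str, qname: str, now=None) -> str:
    """Toy RRL: identical answers to the same /24 are rate-limited.
    Returns 'respond' until the per-window budget is spent, then 'drop'.
    Dropping attenuates both bits/sec and packets/sec toward the victim."""
    now = time.monotonic() if now is None else now
    subnet = ".".join(client_ip.split(".")[:3])  # crude IPv4 /24 key
    bucket = _state[(subnet, qname)]
    if now - bucket[0] >= WINDOW:
        bucket[0], bucket[1] = now, 0  # new window: reset the count
    bucket[1] += 1
    return "respond" if bucket[1] <= RATE else "drop"
```

A spoofed flood aimed at one victim subnet quickly exhausts its bucket and gets mostly drops, while queries from other subnets are unaffected, which is the “less attractive than the attacker’s alternatives” balance the quote describes.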
The final conclusion doesn’t offer much hope of a resolution:
> Reflective and amplified DDoS attacks have steadily risen as the size of the Internet population has grown. The incentives for DDoS improve every time more victims depend on the Internet in new ways, whereas the cost of launching a DDoS attack goes down every time more innovators add more smart devices to the edge of the Internet. There is no way to make SAV common enough to matter, nor is there any way to measure or audit compliance centrally if SAV somehow were miraculously to become an enforceable requirement.
>
> DDoS will continue to increase until the Internet is so congested that the benefit to an attacker of adding one more DDoS reaches the noise level, which means, until all of us including the attackers are drowning in noise. Alternatively, rate-limiting state can be added to every currently stateless protocol, service, and device on the Internet.
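The “rate-limiting state” Vixie proposes adding to stateless services is, at its core, something like a token bucket. This is a generic sketch with assumed rate and burst parameters, not a design from the article:

```python
class TokenBucket:
    """Minimal rate-limiting state that could sit in front of any
    currently stateless service (illustrative sketch only)."""

    def __init__(self, rate: float, burst: float):
        self.rate = rate          # tokens replenished per second
        self.capacity = burst     # maximum burst size
        self.tokens = burst
        self.last = 0.0

    def allow(self, now: float) -> bool:
        # Replenish tokens for the elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False
```

The state cost is a couple of floats per client, which is exactly the trade-off the article weighs: a little memory and bookkeeping in exchange for bounding what any one sender can extract from the service.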
Tragedy of the commons: society will damn itself. Cue the wailing of virgins at the altar of social media, because that’s probably the only thing that will drive some sort of change to the problem.