Internet Backbone DDOS "Largest Ever" 791
wontonenigma writes "It seems that yesterday the root servers of the internet were attacked in a massive distributed DoS fashion. I mean jeeze, only 4 or 5 out of 13 survived according to the WashPost. Check out the original Washington Post article here."
Re:And... (Score:5, Informative)
Article: "The Domain Name System (DNS), which converts complex Internet protocol addressing codes into the words and names that form e-mail and Web addresses, relies on the servers to tell computers around the world how to reach key Internet domains."
The "IP system" should have been fine. The DNS system, which has become an integral part of the "internet," is not decentralized the way regular internet infrastructure is. Yes, it is supposed to withstand a nuclear war, and yes, it would have. By the way, the system worked yesterday: only 4 of 13 may have survived, but the system still ran.
We can have the internet without dns, but we cannot have dns without the internet
Caching saves the day... (Score:5, Informative)
Thus the hour-long attack was not enough to meaningfully disrupt things, as most lookups would not require querying the root, unless you were asking for some oddball TLD like
Change the attack to be several hours, or a few days, and then cache entries start to expire and people are unable to look up new domain names. But that attack would be harder to sustain, as infected/compromised machines could be removed.
It is an interesting question who did this and how it was achieved. There also seems to have been a lot of scanning for open Windows shares in the past couple of days (Yet Another Worm? Who knows), but there is no clue whether it is related.
Re:13 servers (Score:2, Informative)
I didn't have any trouble (Score:1, Informative)
If DNS ever goes down totally, (Score:3, Informative)
We'll have to rely on IP addresses, obviously, so start changing your bookmarks now!
http://64.28.67.150/index.pl
instead of
http://slashdot.org/index.pl
Re:Ah ha. (Score:2, Informative)
I work for JPNIC (Score:4, Informative)
I'm at JPNIC & JPRS; we manage the Japanese servers here. The attack progressed through our networks and affected 4 of our secondary mapped servers (these servers are used as a backup and are in no way real root servers). The servers were running a suite of Microsoft products (Windows NT 4.0) and a firewall by Network Associates.
Here is a quick log review:
Oct 20: The attackers probed our system around 2100 hours on Oct 20 (Japan). We saw a surge in traffic onto the honeypot systems (yes, these backups are honeypots) right around then.
2238: We saw several different types of attacks on the system, starting with mundane XP-only attacks (these were NT boxes). We then saw tests for clocked IIS and various other things that didn't exist on our system.
2245: We saw the first BIND attacks; these attacks were very comprehensive. We can say they tried every single BIND exploit out there, but nothing was working.
Attacks ended right then.
Then on the 22nd they resumed (remember, we are ahead).
22nd: A new type of attack resumed. The attack started with port 1 on the NT box; we have never seen this type of attack, and the port itself responding was very weird. Trouble started and alarms went off. We were checking but couldn't figure out what happened; then we saw a new BIND attack. The attack came in and removed some entries from the BIND database (we use Oracle to store our BIND data).
The following entries were added under ENTRI_KEY_WORLD_DATA
HACZBY : FADABOI
CORPZ : MVDOMIZN HELLO TO KOTARI ON UNDERNET
Several other things were changed or removed.
Till now, we have no idea what the exact type of hack this was; we are still looking into it. The attacker calls himself "Fadaboi" and has been seen attacking other systems in the past.
We are now working hard with Network Solutions.
Thank you.
Re:Why attack (Score:5, Informative)
I am not an expert, but surely these servers connect to the net through some sort of router/hub/whatever. The servers are made to handle a lot of traffic, but what about the connecting hardware? If the routers were attacked directly, wouldn't the DDoS attack still be successful without touching or alerting the DNS servers themselves?
It's an interesting idea, but it doesn't quite work like that. The routers we're talking about here (I imagine that most of the root servers are on 100BT or Gigabit Ethernet LANs which then plug into one or more DS-3s [45 Mbps] or more likely OC-3s [155 Mbps]) are designed to be able to handle many, many times more traffic than the servers are. Your average Cisco 7xxx or 12xxx router is built to handle far more traffic than any given server might see. Think about it ... you generally have many servers being serviced by one router, not the other way around. Additionally, each root server is most likely connected to multiple routers (say, they're hosted at an ISP with three DS-3s to different providers and each DS-3 is plugged into a different Cisco 7500).
Also, I doubt that the routers are set up to recognize any kind of attack, as they are just relays between the net and the server. Possibly the attack could go on for quite some time before anyone realized what was going on.
Actually, it's the other way around. Most good routers are designed to have the ability (if you enable it) to look inside of the packets that pass through them and filter out "bad" ones based on various criteria. Thus, routers are actually perfectly suited to stopping attacks like this, while servers are expected to burn their CPU cycles doing other things (yes, servers can do this sort of filtering, but they generally have something more important to do). The only real problem is that it's often very difficult to tell the "good" packets from the "bad." After all, how do you distinguish automatically between a distributed flood of HTTP malicious requests and a Slashdotting? You get the idea.
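A sketch of the kind of inbound filter the parent describes, in Cisco IOS access-list style. This is an illustration, not an actual root-server config: the flood source address is a placeholder from the documentation range, and 198.41.0.4 stands in for a root server's address.

```
! Hypothetical filter: drop a known flood source, still permit ordinary
! DNS queries to the server, and drop ICMP floods aimed at it.
access-list 101 deny   ip   host 203.0.113.66 any
access-list 101 permit udp  any host 198.41.0.4 eq domain
access-list 101 deny   icmp any host 198.41.0.4
access-list 101 permit ip   any any
!
interface Serial0
 ip access-group 101 in
```

As the parent notes, the hard part is not applying such a filter but deciding what counts as "bad": a distributed flood of well-formed DNS queries looks a lot like legitimate load.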
Re:And... (Score:5, Informative)
DNS is hierarchical, both in naming and in server implementation. Small ISPs cache their DNS from more major providers, up to the A through M.ROOT-SERVERS.NET main Internet servers. There is in fact one critical file, but it is mirrored to the 13 root servers, and domain look-ups are cached at the ISP level. I'm not surprised most Internet users were not affected; you wouldn't be affected if several large mail servers were DDoSed, would you?
Re:Well, I would guess... (Score:4, Informative)
-Kevin
Re:I work for JPNIC (Score:5, Informative)
Re:Couldn't have been that bad... (Score:3, Informative)
http://news.com.com/2100-1023-204904.html?legac
Can you say "SPIKE"? (Score:4, Informative)
Re:And... (Score:4, Informative)
You're correct in that there are more than 13 DNS servers. I've got my own, which may or may not lie; it's these 13 that are "trusted" ... so to speak.
Now, when you're configuring your network stack, or when you describe the various DNS servers to me, what is the important part: the name or the IP number? The number, which helps to prove my point that IP is more important than DNS.
mrtg charts (Score:4, Informative)
Root-servers.net [root-servers.net]
The legendary cymru.com data. [cymru.com]
I haven't looked yet but LINX mrtg charts might show something interesting. [linx.net]
Of course, even if someone could knock all the root servers over, the net as we know it wouldn't stop working instantly. That's what the time to live value is for :)
Re:And... (Score:4, Informative)
Traffic Stats (Score:5, Informative)
The stats for the h.root servers are available for the time period [root-servers.org] of the attack. Seems as though the h servers were taking in close to 94Mbits/second for a while.
More links to server stats can be found at Root Servers.org [root-servers.org] and some background is available at ICANNWatch [icannwatch.org].
Re:And... (Score:5, Informative)
Actually that is an Internet myth. Look at the IETF RFCs: the first occurrence of the word 'nuclear' is several decades after the Internet was created.
The DNS cluster is designed with multiple levels of fault tolerance. In particular the fact that the DNS protocol causes records to be cached means that the DNS root could be switched off for up to a day before most people would even notice.
The root cluster is actually the easiest to do without. There are only 200 records; in extremis it would be possible to code them in by hand. Or, more realistically, we simply set up an alternative root and then use IP-level hacks to redirect the traffic. The root servers all have their own IP blocks at this stage, so it is quite feasible to have 200-odd root servers around the planet accessed via anycast.
The article does not mention which of the servers stayed up apart from the VeriSign servers. However, those people who were stating last week that the .org domain can be run on a couple of moderately specced servers had better think again. The bid put in by Paul Vixie would not have covered a quarter of his connectivity bill if he was going to ride out attacks like this one.
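For a sense of what "code them in by hand" would mean: the root data is just pairs of NS and A records, in the format of a BIND root hints file. The two entries below use the real server names and their circa-2002 addresses as an illustration.

```
; Fragment of a root hints file: one NS line naming a root server,
; one A line giving its address, repeated for each server.
.                     3600000  IN  NS  A.ROOT-SERVERS.NET.
A.ROOT-SERVERS.NET.   3600000      A   198.41.0.4
.                     3600000  IN  NS  B.ROOT-SERVERS.NET.
B.ROOT-SERVERS.NET.   3600000      A   128.9.0.107
```

With anycast, several physical servers around the world could answer for any one of those addresses, which is what makes the "200-odd root servers" idea feasible.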
Lots of people didn't notice (Score:4, Informative)
Re:One critical (Score:5, Informative)
It wouldn't have bothered them if.... (Score:2, Informative)
Re:And... (Score:5, Informative)
That is actually pretty much how it works (Score:3, Informative)
Re:And... (Score:5, Informative)
That's what I do with BIND9.
Not quite. (Score:4, Informative)
It is hierarchical with regard to namespace, but not so much with regard to lookups.
Re:Where's the Inter in the 'Net? (Score:3, Informative)
Actually that is not the reason. By the time DNS came along the Internet was already international. And never confuse the claim that the US invented the Internet with the idea that the US invented computer networking. Lots of countries had computer networks, the idea of protocol design to overcome the political problems of connecting disparate networks was what came out of the US.
The DNS servers are where they are because they are expensive to maintain and are run on a volunteer basis. Most of the people prepared to provide the necessary resources happened to be in the US. This is the reason why 9 of the root servers went down: you cannot expect someone to pay for multiple OC-3 or better connectivity to support a volunteer effort.
As far as geography goes, China and Russia should have a root server. There should also be servers in Australia, South America, and northern and southern Africa. This is actually likely to happen when it becomes feasible to turn on the use of anycast. At present there is a hard limit of 13 root servers. Some of those servers are multiple machines in fault-tolerant configurations, but they are still bound by the IP assumption that an IP address is served at a single location.
With anycast we simply fiddle the router tables so that there are multiple servers around the world all responding to the same IP address. This will make it possible to have 50 sites serving each of the 13 root DNS addresses. In practice it is likely that only one of those addresses will need to be anycast and the BIND software tweaked to favor it.
Re:And for all you tech support people out there.. (Score:2, Informative)
You only get to use the ignorance excuse once. Not following instructions when you've been explicitly given them is stupidity.
Re:And for all you tech support people out there.. (Score:2, Informative)
I have found AoA to be extremely useful in my understanding of Boolean algebra [ucr.edu]; Chapter 2 covered the basic postulates, theorems, and functions very well. I printed the "16 Possible Boolean Functions of Two Variables" table he included and kept it in a handy location. It's where I first came across minterms/maxterms and how they are used to find the canonical expression, as well as K-maps [ucr.edu] for optimization. I don't particularly like Hyde's assembly library, however; for me the Intel Programmer's Manual Volumes 1-3 dead-tree book was the most clear and straightforward, unlike assembly "tutorials".
I challenge you to provide a link to a better reference than Hyde's AoA that explains Boolean algebra more clearly and more comprehensively. Go ahead.
Re:And... (Score:5, Informative)
The DNS system provides an "MX" resource record for handling mail exchangers. Before the MX record, to send mail one would resolve the hostname using an A record and connect to the resulting IP address. Nowadays, *@foobar.com doesn't always have to be handled by 140.186.139.224. In fact, there is a nice system for prioritizing mail handlers built into DNS's MX records:
To answer your question, you can use IP addresses, but you'll be missing out on the prioritized DNS mail system. And don't worry about this being offtopic; the article isn't all that interesting anyway. I'd rather teach someone something interesting than write lame drivel about some "backbone DDoS" that's not even a backbone DDoS. Hey, it's about the structure of the Internet...
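The MX prioritization works like this: each record pairs a preference number with a mail host, and senders try the lowest preference first. A minimal sketch (the hostnames and preference values below are invented for illustration):

```python
def order_mail_hosts(mx_records):
    """Return mail hosts in the order a sender should try them:
    lowest preference number first, falling back down the list."""
    return [host for pref, host in sorted(mx_records)]

# Hypothetical MX records for foobar.com.
mx_records = [
    (20, "backup-mx.foobar.com"),    # secondary: used if the primary is down
    (10, "mail.foobar.com"),         # primary mail exchanger
    (30, "last-resort.foobar.com"),  # tried last
]
print(order_mail_hosts(mx_records))
```

This is why pointing mail at a raw IP address loses something: there is no fallback list, just the one host.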
blocking public DNS while hosting domains (Score:3, Informative)
You're right, you wouldn't want to block all queries, but you can do almost as well: you can block all queries except those for the domains that you're hosting. In fact, doing so is generally considered a very good idea, since it protects you against some forms of cache-poisoning attacks.
Check out the allow-recursion option in the named.conf (5) man page, which does exactly what I describe.
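A minimal named.conf sketch of that setup, assuming a BIND server that is authoritative for a hosted zone while only recursing for its own clients. The zone name and netblock are placeholders:

```
// Answer authoritative queries for hosted zones from anyone,
// but only perform recursive lookups for our own clients.
options {
    allow-recursion { 192.0.2.0/24; 127.0.0.1; };
};

zone "example.com" {
    type master;
    file "db.example.com";
};
```

Outsiders can still ask about example.com (which is the whole point of hosting it), but they cannot use the server to resolve arbitrary names, which closes off the cache-poisoning angle described above.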
Re:And... (Score:5, Informative)
Not a myth (Score:3, Informative)
Of course, the poster's original question still makes sense (even though it was a joke) -- the Internet is at least potentially vulnerable to an attack on the Internet, even if it could survive a nuclear war. The idea that an enemy would attempt a decapitating nuclear first strike without targeting C3I assets (command, control, communications and intelligence) is absurd. The beauty of Baran's solution is that it makes such a strike very difficult -- and yesterday's DDOS certainly supports this, since most people didn't even notice it.
It certainly does provide that capability. (Score:4, Informative)
For dynamically updating zones, I use a small Perl DBI script which dumps zones from the DB into a directory. All files in the directory are sorted (via sort) into a main text file, which is hashed into data.cdb. I also have a big text file from the other DNS server scp'd over and included in the hash. The entire system is dynamic, with every important entry controllable from within an easily backed-up (and restored) SQL server. Adding things like DynDNS to this setup would be trivial (all I'd need is another table for actual accounts, which would allow people to modify their own zone files).
Best of all, because there is an order of magnitude less code running, TinyDNS is a lot easier to inspect for correctness. You can spend a couple of evenings reading over all the code for the package (even if it's not the best looking C code in the world), and really understand it.
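The dump step described above can be sketched as a small converter from database rows into tinydns's "data" file line format (the '+', '.', and '@' line types are tinydns's real ones; the row layout, names, and addresses here are invented). Shown in Python rather than Perl for brevity:

```python
def row_to_tinydns(rtype, name, value, ttl=86400):
    """Convert one (hypothetical) DB row into a tinydns-data line."""
    if rtype == "A":
        # '+fqdn:ip:ttl' -- an A record.
        return f"+{name}:{value}:{ttl}"
    if rtype == "NS":
        # '.fqdn:ip:ns:ttl' -- NS (ip left empty here).
        return f".{name}::{value}:{ttl}"
    if rtype == "MX":
        # '@fqdn:ip:mx-host:distance:ttl' -- MX (ip left empty here).
        host, dist = value
        return f"@{name}::{host}:{dist}:{ttl}"
    raise ValueError(f"unsupported record type: {rtype}")

rows = [("A", "www.example.com", "10.0.0.5"),
        ("MX", "example.com", ("mail.example.com", 10))]
for r in rows:
    print(row_to_tinydns(*r))
```

In the pipeline the parent describes, the resulting lines would then be sorted into the main text file and fed to tinydns-data to build data.cdb.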
Re:And... (Score:5, Informative)
You don't know what you are talking about. There are two different types of DNS servers: authoritative servers and recursive resolvers. djbdns comes with tinydns, an authoritative server and dnscache, a recursive resolver. The two are completely separate. BIND includes both in the same server, which is why many people are confused into thinking they are the same thing.
tinydns does not restrict queries to only certain IP addresses. However, it can return different information depending on the source address of the query. This is usually called split horizon DNS.
dnscache does have access control. You do not want just anyone to be able to query your recursive resolvers. With dnscache, you need to explicitly allow access [cr.yp.to] for IP's that can query it.
There are no risks in opening your content (authoritative) DNS servers to everyone. There are risks in opening up your resolvers to everyone.
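The split-horizon behaviour mentioned above looks like this in tinydns's data file: '%' lines assign a location code to a client IP prefix, and a trailing location code on a record restricts which clients see it. The zone and addresses below are placeholders:

```
# Clients whose address starts with 192.168 get location 'in';
# everyone else matches the empty prefix and gets location 'ex'.
%in:192.168
%ex
# Same name, different answer per location
# (format: +fqdn:ip:ttl:timestamp:location).
+www.example.com:192.168.1.5:86400::in
+www.example.com:203.0.113.10:86400::ex
```

So internal clients are told the RFC 1918 address while the rest of the world gets the public one, all from a single authoritative server.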
Re:Lots of people didn't notice (Score:1, Informative)
Look at the dig below. The root servers resolve
In your example, because you looked up google.com, your ISP doesn't need to contact the root servers, because it has already cached where the
(the results of "dig +trace slashdot.org" should be here, but the lameness filter doesn't like 'junk')
Re:And... (Score:3, Informative)
-hosting DNS service for customer domains (on servers which don't recurse, but are Internet accessible), and
-resolving DNS hostnames for downstream customers (on servers which recurse, but are inaccessible from the Internet due to name server configuration or packet filtering).
This strategy puts hosted DNS service in a sandbox, so that those servers can have zone data that is no longer valid (or not valid yet) without conflicting with the authoritative servers. It also prevents utilization of bandwidth for DNS resolution by non-customers... which isn't really in the spirit of the 'net. For mammoth ISPs like Earthlink, it could make a noticeable difference in bandwidth usage (with a tradeoff of potentially making them seem like jerks).
Re:If DNS ever goes down totally, (Score:2, Informative)
yo.
Re:And... (Score:2, Informative)
The Internet is not the WWW. The WWW uses the Internet as its transport.
The Internet would still function fine at the IP level it was originally designed for. The complete failure of the DNS system would merely harm users reliant on names as network addresses.
My first email account was made up of numbers.
Re:Not quite. (Score:2, Informative)
Well... (Score:5, Informative)
You can use any physical layer with IP: Ethernet, a modem, a cell phone, WiFi, Bluetooth, FireWire, USB, power lines, etc. Similarly, you can use many other protocols with Ethernet or any other link, such as IPX, NetBEUI, AppleTalk, etc.
TCP, UDP, and ICMP are tied to IP and won't work with anything else.
Then there are higher-level protocols that sit on top of TCP or UDP: DNS sits on UDP; FTP, telnet, Gnutella, and others sit on TCP. Interestingly, HTTP should work on other protocols, as long as you can establish a link between a server and a host on it and you have software that implements it on those other links.
There's also IPv6, which is a newer version of IP.
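To make "DNS sits on UDP" concrete, here is a sketch that builds the wire-format query a resolver would send to UDP port 53, following the RFC 1035 message layout. No network traffic is actually sent; the code only constructs the bytes.

```python
import struct

def build_dns_query(hostname, query_id=0x1234):
    """Build the raw bytes of a standard DNS query for an A record."""
    # Header: ID, flags (0x0100 = standard query, recursion desired),
    # QDCOUNT=1, then zero answer/authority/additional counts.
    header = struct.pack(">HHHHHH", query_id, 0x0100, 1, 0, 0, 0)
    # Question name: each label is length-prefixed, terminated by a zero byte.
    qname = b"".join(bytes([len(label)]) + label.encode("ascii")
                     for label in hostname.split(".")) + b"\x00"
    # QTYPE=1 (A record), QCLASS=1 (IN).
    question = qname + struct.pack(">HH", 1, 1)
    return header + question

packet = build_dns_query("slashdot.org")
# To actually send it, you would hand the bytes to a UDP socket, e.g.:
#   sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
#   sock.sendto(packet, (some_dns_server_ip, 53))
```

The UDP layer just carries this blob; everything DNS-specific lives in the payload, which is exactly what "sits on top of UDP" means.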
Re:Not a myth (Score:3, Informative)
The Internet uses packet switching, but it was certainly not based on the RAND design. By the time the IP protocol was written there were several packet data networks in use. The Internet was designed for ease of configuration.
Vint Cerf, who was not the 'father of the Internet' but did manage the research budget for it, has repudiated the nuclear fable several times. If being nuclear-proof had been a goal, we would have used flood-fill routing and not built MAE West and East.
Re:And... (Score:5, Informative)
You make some good points, but the Domain Naming Server system is in fact largely distributed.
and then you say:
DNS is hierarchical, both in naming and in server implementation.
Okay, hold on here. It's hierarchical, implying something at the top that everything is based on, and at the same time distributed, implying that it's not dependent on some central source? Dude, you're contradicting yourself, and so you're wrong.
The truth is that the DNS system IS hierarchical. ICANN runs the root. They say what information goes in at the highest level: the dot-com, and dot-aero, and dot-useless and so on. That is why there is so much scrutiny on ICANN for operating fairly [icannwatch.org]. They are the people who decide how the DNS system will be run, because they are at the top of the hierarchy.
"But wait!" you say, "Aren't there 13 root servers? That's distributed right there." Yes, but you are only half right. The LOAD is distributed, not the information. So you're distributing the LOAD, but the info is exactly the same on each one. And that info is controlled by ICANN.
Oh and yes, you CAN get that one file of information that the root servers have. Really you can. Take a look for yourself. Log into ftp://ftp.rs.internic.net/domain [internic.net] and get root.zone.gz [internic.net]. If you look at that file, you'll see it's a list of all the servers for all the TLDS.
It wouldn't matter if it did (Score:3, Informative)
If root DNS went down, you'd have to have Slashdot's DNS move as well.
Re:And... (Score:2, Informative)
Re:And... (Score:3, Informative)
I would guess that someone who runs one of the root servers would have a pretty good grasp of the costs of running a root name server.
Re:Couldn't have been that bad... (Score:4, Informative)
If you believe this article [com.com] on news.com [com.com], it looks more like a storm in a glass of water.
Quote: the peak of the attack saw the average reachability for the entire DNS network dropped only to 94 percent from its normal levels near 100 percent.
Actually this wouldn't affect _any_ sensible setup (Score:2, Informative)
1. every DNS zone (including the . root zone) has a TTL (time to live), the amount of time you are allowed to keep the results of a query. The idea being that if a server looks up a zone, e.g. foobar.com, it doesn't have to look again until the TTL runs out. This is typically about 24 hours for an average .com domain (but can be set to whatever the controller of the domain's DNS likes).
2. The TTL of the . root zone is* 6 months. This means an ISP's server only has to recheck a top level domain (.org, .com, .net) every 6 months. This means that if all the top level DNS servers were out for say a day, then 99% of the other servers out there wouldn't even notice, as they wouldn't need to query the roots for on average another 3 months. Sure, if the root servers were down for longer, the TTL would run out on more and more DNS servers, but in principle the root servers would have to be down for a sustained time to start to significantly affect the Internet's DNS.
* - the TTL of the root domains at the moment has been changed to 3 hours, presumably because they are changing the top-level infrastructure and need to have the changes propagate quickly.
3. this is why all ISPs who have correctly set up DNS servers would not have noticed anything. If you run your own DNS server on your home box and don't run it all the time, you'll be checking the root servers the first time you do a DNS query after you switch your machine on, so you would probably notice something. Lesson: use your ISP's DNS server to resolve domains!
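The caching behaviour in points 1 and 2 can be sketched as a tiny TTL cache. This is an illustration of the mechanism, not how BIND implements it:

```python
import time

class TTLCache:
    """Keep each answer until its TTL expires, like a caching resolver."""

    def __init__(self):
        self._store = {}  # name -> (answer, expiry timestamp)

    def put(self, name, answer, ttl_seconds):
        self._store[name] = (answer, time.time() + ttl_seconds)

    def get(self, name):
        entry = self._store.get(name)
        if entry is None:
            return None          # cache miss: must ask an upstream server
        answer, expires = entry
        if time.time() >= expires:
            del self._store[name]
            return None          # TTL expired: must re-query
        return answer

cache = TTLCache()
# With a six-month TTL on the root data, the cached entry outlives
# any root outage shorter than six months.
cache.put(".", "list of 13 root server addresses", ttl_seconds=6 * 30 * 24 * 3600)
print(cache.get("."))  # served from cache; no root query needed
```

A root outage only bites once `get` starts returning None, i.e. once the attack has lasted longer than the TTL, which is exactly why a one-hour flood went largely unnoticed.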
Re:And... (Score:5, Informative)
The root servers run BIND.
Re:And... (Score:3, Informative)
Virtualization of computing resources is going very mainstream these days. You have products such as VMWare, competitors for Sun hardware, and even the staunch favorite, User Mode Linux.
I'm running DNS right now in a UML sandbox. Although chroot is an excellent security policy for services, if you want true isolation from the main system in case of break-in, it's hard to beat a UML. There is even a special image provided at the UML home page [sourceforge.net] which runs DNS, and only DNS. It's very handy, and is designed to run while taking only 16 MB of RAM.
Suffice to say, I'm very impressed. For running critical services which, in the past, have required a chrooted environment (such as DNS), user mode linux is a powerful alternative.
Now, would it have had anything to do with helping stop a DoS attack? Nope, but I'm just following the thread here.