Internet Backbone DDOS "Largest Ever" 791
wontonenigma writes "It seems that yesterday the root servers of the internet were hit by a massive distributed denial-of-service attack. I mean jeeze, only 4 or 5 out of 13 survived according to the WashPost. Check out the original Washington Post article here."
And... (Score:4, Funny)
Re:And... (Score:4, Insightful)
A subterranean bunker is designed to withstand nuclear wars, but what do you think would happen if the nuke was inside the bunker?
Re:And... (Score:5, Funny)
Ummm... a lot more people would be safe? That is, the people who didn't fit in the bunker...
Re:And... (Score:5, Funny)
I think everybody outside the bunker would be like "What the hell was that?!"
Re:And... (Score:4, Funny)
It's nice to know that you do not have to quit your [favorite online game] 'just because' a nuclear war breaks out.
Re:And... (Score:5, Funny)
They'll have to pry my nuclear weapon out of my cold dead fingers. A man has a right to protect himself. Would you want to participate in a nuclear war without a nuclear weapon? Bringing a knife to a nuclear war ain't smart.
Re:And... (Score:5, Funny)
Ask Slashdot: My bunker had a nuclear weapon which disassembled itself as designed. Should I repair the bunker the way it was? Or should I remodel to make use of the larger space which is now available? Is water cooling better than air chillers? What bunker mods are your favorites?
Re:And... (Score:5, Informative)
Article: "The Domain Name System (DNS), which converts complex Internet protocol addressing codes into the words and names that form e-mail and Web addresses, relies on the servers to tell computers around the world how to reach key Internet domains."
The "IP system" should have been fine. The DNS system, which has become an integral part of the "internet," is not decentralized the way regular internet infrastructure is. Yes, it is supposed to withstand a nuclear war, and yes, it would have. By the way, the system worked yesterday: only 4 of 13 may have survived, but the system still ran.
We can have the internet without dns, but we cannot have dns without the internet
Re:And... (Score:5, Informative)
DNS is hierarchical, both in naming and in server implementation. Small ISPs cache their DNS from more major providers, up to the A through M.ROOT-SERVERS.NET main Internet servers. There is in fact one critical file, but it is mirrored to the 13 root servers, and domain look-ups are cached at the ISP level. I'm not surprised most Internet users were not affected; you wouldn't be affected if several large mail servers were DDoSed, would you?
Re:And... (Score:4, Interesting)
Re:And... (Score:4, Informative)
Re:And... (Score:5, Informative)
That's what I do with BIND9.
Re:And... (Score:5, Informative)
Re:And... (Score:5, Informative)
You don't know what you are talking about. There are two different types of DNS servers: authoritative servers and recursive resolvers. djbdns comes with tinydns, an authoritative server and dnscache, a recursive resolver. The two are completely separate. BIND includes both in the same server, which is why many people are confused into thinking they are the same thing.
tinydns does not restrict queries to only certain IP addresses. However, it can return different information depending on the source address of the query. This is usually called split horizon DNS.
dnscache does have access control. You do not want just anyone to be able to query your recursive resolvers. With dnscache, you need to explicitly allow access [cr.yp.to] for the IPs that can query it.
There are no risks in opening your content (authoritative) DNS servers to everyone. There are risks in opening up your resolvers to everyone.
Re:And... (Score:5, Interesting)
What my DNS server does is mandate an ACL (list of IPs allowed to make recursive queries; this can be set to "all hosts on the internet" if desired) if recursion (talking to other DNS servers) is enabled. Recursion takes a lot more work to do than authoritative requests; it is best to limit access to this.
Unlike Dan, I feel that a DNS server should be both recursive and authoritative because it allows one to customize the resolution of certain hostnames. The idea is similar to /etc/hosts, but it also works with applications which ignore /etc/hosts and perform DNS queries directly. For example, I was able to continue to connect to macslash.com [slashdot.org] when a squatter bought the domain and changed its official IP; I simply set up a zone for macslash.com, and made MaraDNS both recursive and authoritative.
SMTP servers have IP restrictions at the application layer because this gives people some idea why they can't send email to a given host. A firewall restriction gives a vague "connection timed out" message in the bounce email message; application-level filtering allows the bounce message to say something like "You're from a known Spam-friendly ISP; go away".
- Sam
Re:And... (Score:5, Informative)
The root servers run BIND.
Re:And... (Score:4, Informative)
You're correct in that there are more than 13 DNS servers. I've got my own, which may or may not lie; it's these 13 that are "trusted"... so to speak.
Now, when you're configuring your network stack (in fact, when you described the various DNS servers to me), what is the important part: the name or the IP number? The number, which helps to prove my point that IP is more important than DNS.
Re:And... (Score:4, Interesting)
Re:And... (Score:5, Informative)
The DNS system provides an "MX" resource record for handling mail exchangers. Before the MX record, to send mail one would resolve the name using an A record and connect to the resulting IP address. Nowadays, *@foobar.com doesn't always have to be handled by 140.186.139.224. In fact, there is a nice system for prioritizing mail handlers built into DNS's MX records.
To answer your question, you can use IP addresses. But you'll be missing out on the prioritized DNS mail system. And don't worry about this being offtopic; the article isn't all that interesting anyway. I'd rather teach someone something interesting than write lame drivel about some "backbone DDoS" that's not even a backbone DDoS. Hey, it's about the structure of the Internet...
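As a sketch of the priority logic described above (the preference values and hostnames here are made up for illustration):

```python
# Hypothetical MX records for foobar.com: (preference, exchanger).
# Lower preference values are tried first; ties are load-shared.
mx_records = [
    (20, "backup-mx.foobar.com"),
    (10, "mail.foobar.com"),
    (20, "backup-mx2.foobar.com"),
]

def delivery_order(records):
    """Return the exchangers in the order a sending MTA would try them."""
    return [host for pref, host in sorted(records, key=lambda r: r[0])]

print(delivery_order(mx_records))
# mail.foobar.com comes first; the backups are tried only if it is down
```

If mail.foobar.com is unreachable, delivery falls back to the preference-20 hosts, which is exactly the redundancy MX buys you over a bare A record.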
Well... (Score:5, Informative)
You can use any physical layer with IP: ethernet, a modem, a cell phone, wifi, bluetooth, firewire, USB, power lines, etc. Similarly, you can use many other protocols with Ethernet or any other link, such as IPX, NetBEUI, AppleTalk, etc.
TCP, UDP, and ICMP are tied to IP and won't work with anything else.
Then there are higher-level protocols that sit on top of TCP or UDP: for example, DNS sits on UDP, while FTP, telnet, gnutella, and others sit on TCP. Interestingly, HTTP should work over other protocols, as long as you can establish a link between a server and a host and have software that implements it on those other links.
There's also IPv6, which is a newer version of IP.
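To make the layering concrete, here is a minimal sketch of a DNS question as it rides inside a UDP datagram. The 12-byte header layout follows the standard DNS wire format; the query ID value is arbitrary:

```python
import struct

def build_dns_query(name, qid=0x1234):
    """Build a minimal DNS A-record query: this byte string is the entire
    UDP payload you would send to port 53 of a name server."""
    # Header: ID, flags (recursion desired), QDCOUNT=1, other counts 0.
    header = struct.pack(">HHHHHH", qid, 0x0100, 1, 0, 0, 0)
    # Question: length-prefixed labels, zero terminator, QTYPE=A, QCLASS=IN.
    labels = b"".join(bytes([len(l)]) + l.encode() for l in name.split("."))
    return header + labels + b"\x00" + struct.pack(">HH", 1, 1)

pkt = build_dns_query("slashdot.org")
print(len(pkt), "bytes")  # a complete query fits in one small datagram
```

Everything below this payload (IP addressing, the physical link) is someone else's problem, which is the whole point of the layering.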
One critical (Score:5, Funny)
Re:One critical (Score:5, Informative)
Re:One critical (Score:4, Funny)
OK, I'll send you my HOSTS.TXT file. But remember to update it every few weeks, because the ARPAnet is growing faster than ever after the adoption of this new, fancy, so-called "TCP/IP" technology.
Re:One critical (Score:4, Funny)
"Hey xant,
I've attached the critical file you alluded to in your comment at http://slashdot.org/comments.pl?sid=43025&cid=450
Keep it on your hard drive in case we all need it.
Heh. In case his hard drive goes, maybe a couple other people should get it from here [internic.net].
Re:And... (Score:5, Informative)
Not quite. (Score:4, Informative)
It is hierarchical with regard to namespace, but not so much with regard to lookups.
But not distributed enough (Score:4, Interesting)
Bullshit.
I saw obvious impact trying to resolve DNS names during the time period of the attack (Delaware AT&T), despite having a caching name server on my local net, which queries AT&T's caching (primary?) servers.
ISPs should be responsible for providing DNS service to their customers in a timely and reliable fashion, querying their backbone providers in turn. Direct queries of the root servers by subnets should be verboten and expressly blocked by the ISP firewalls. If you need to resolve or refresh a name, probe the ISP DNS and let their system handle the distribution. That way the root servers become repositories and key distribution points instead of failure points like yesterday.
I'm sure someone will object that they have the "right" to use whatever ports they want and that they don't want to rely on the stability of their ISP's servers, but we're talking about the infrastructure people! We have no more "right" to hit the root directly than to clamp a feed from the power company mains to the house or splice into the cable TV/broadband wiring.
If we don't protect and distribute infrastructure resources adequately, everyone is affected. And if your ISP has servers that are too unreliable for this type of filtered distribution to work, change providers!
Sure, let's just do that (Score:5, Insightful)
After all, 99.5% of people wouldn't notice, and who *really* cares about the remaining 0.5%?
I really loathe the growing trend towards firewalling everything that moves. Mail outbound, other than to the ISP's mail server. Napster. Ping packets. It's really annoying to the people who actually *do* want to use said functionality.
Internet "license"? (Score:5, Insightful)
You want full functionality? Sign off with your ISP for the appropriate connection service. If you pay for a small business link, you get the higher level access, and also take responsibility for the maintenance and security of your node. You get hacked, you participate in DDOS attacks, you should be financially responsible. If you really know your stuff to use the extra functionality, you should have no issue with taking responsibility for the risks incurred.
Don't want to pay more? Don't want to be responsible? Don't get the access.
There is no such thing as "rights" when your activities impact others. If you aren't willing to stand up and be responsible for your traffic (subnet/link/servers), then internet "society" has the responsibility to protect the rest of the community from you.
If the internet is truly as critical to business as we all hope it to be, it only stands to reason that people are going to have to get "licenses" to run full service nodes and subnets. You don't get to drive without a license to demonstrate that you at least have the education and skills to do so safely -- why would you expect to do otherwise on the 'net?
"License"? WTF are you talking about? (Score:5, Interesting)
Yes, I do. The same peer-to-peer functionality that hosts on the Internet have had forever. I got my fill of "Internet access", but not being an Internet peer when everyone was selling dialup shell accounts but not PPP.
Sign off with your ISP for the appropriate connection service.
So *I* should pay *more* for them to do *less* work?
That's as bad as the pay-extra-if-you-don't-want-your-number-listed phone company procedure.
If you pay for a small business link, you get the higher access level, and also take responsibility for the maintenance and security of your node.
I *already* take responsibility for the maintenance and security of the node. I don't need to pay any more money to take said responsibility.
You get hacked, you participate in DDoS attacks, you should be financially responsible.
There's no legal difference between a business and a home account from a financial responsibility point of view. What are you talking about?
If you really know your stuff to use the extra functionality, you should have no issue with taking responsibility for the risks incurred.
I *don't* have an issue with that. I just don't want to pay inflated business-class prices for standard peer-to-peer access.
Don't want to pay more?
Not particularly, no.
Don't want to be responsible?
Well, I'd kind of prefer to not be responsible (
Don't get the access.
Conclusion does not follow.
There are [sic] no such thing as "rights" when your activities impact others.
You seem to have misquoted me. I did not use the word "rights" anywhere in my original post, or claim that I had any such rights (legal or ethical) whatsoever. I did say that it was *annoying* to me.
If you aren't willing to stand up and be responsible for your traffic
Where, where, did you get the impression that I said this at all?
If the internet is truly as critical to business as we all hope it to be, it only stands to reason that people are going to have to get "licenses" to run full service nodes and subnets.
That has no bearing whatsoever on my argument. I also don't think that the potentially critical relationship to business can be said to imply that one needs a license. Electricity is quite critical to US industry (hell, it's physically dangerous), yet one doesn't need a license to utilize it.
You don't get to drive without a license to demonstrate that you at least have the education and skills to do so safely -- why would you expect to do otherwise on the 'net?
Still has no bearing on my argument.
Furthermore, I'd like to point out again that screwing up while driving can easily end up with many people dead. Even with the license system, cars are the leading cause of death of teens and young adults. I don't think you can compare that at all to the Internet, where maybe someone gets a Code Red infection. The Internet is important, but not knowing what you're doing on the Internet is wildly different (at least currently) from being an active threat to the lives of others.
Good point (Score:4, Insightful)
Amen.
The only reason we hear the words "web services" at *all* is because the bejeezus has been firewalled out of everything except web access at most companies. From a technical standpoint, "web services" are a massive step backwards... we had much superior systems before we had to run all communication through http.
Web services are the ongoing rejection, by developers and users, of the blocking of services crossing the firewall. Eventually, everything will be tunneled over http, and we'll be back where we started (the same things accessible across the firewall), albeit with a somewhat less efficient system.
"The Internet treats censorship as damage, and routes around it."
-- John Gilmore
Re:And... (Score:5, Informative)
You make some good points, but the Domain Name System is in fact largely distributed.
and then you say:
DNS is hierarchical, both in naming and in server implementation.
Ok, hold on here. It's both hierarchical, implying something at the top that everything is based on, and at the same time distributed, implying that it's not dependent on some central source? Dude, you're contradicting yourself, and so you're wrong.
The truth is that the DNS system IS hierarchical. ICANN runs the root. They say what information goes in at the highest level: dot-com, dot-aero, dot-useless, and so on. That is why there is so much scrutiny on ICANN for operating fairly [icannwatch.org]. They are the people who decide how the DNS system will be run, because they are at the top of the hierarchy.
"But wait!" you say, "Aren't there 13 root servers? That's distributed right there." You are only half right: it's the LOAD that is distributed, not the information. The info is exactly the same on each one, and that info is controlled by ICANN.
Oh and yes, you CAN get that one file of information that the root servers have. Really, you can; take a look for yourself. Log into ftp://ftp.rs.internic.net/domain [internic.net] and get root.zone.gz [internic.net]. If you look at that file, you'll see it's a list of all the servers for all the TLDs.
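The structure of that file is simple enough to sketch. The lines below imitate root-zone NS records; the server names are illustrative, not copied from the real file:

```python
# Lines in the style of the root zone: TLD, TTL, class, type, name server.
sample = """\
com.  172800  IN  NS  a.gtld-servers.net.
com.  172800  IN  NS  b.gtld-servers.net.
org.  172800  IN  NS  tld1.ultradns.net.
"""

def tld_servers(zone_text):
    """Collect, for each TLD, the name servers it is delegated to."""
    table = {}
    for line in zone_text.splitlines():
        fields = line.split()
        if len(fields) == 5 and fields[3] == "NS":
            table.setdefault(fields[0].rstrip("."), []).append(fields[4].rstrip("."))
    return table

print(tld_servers(sample))
```

That mapping of TLD to delegated servers is essentially all the root servers know; everything below a TLD is someone else's zone.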
Re:And... (Score:5, Funny)
Re:And... (Score:4, Insightful)
hint: read the last paragraph of Cmdrtaco's last journal.
just run a local DNS cache; if something is unreachable, you have the cached entry to work off of. When changes are made, you get the update automatically.
DNS (Score:4, Funny)
We can have the internet without dns, but we cannot have dns without the internet
Why would we want DNS without the Internet?
Re:And... (Score:5, Funny)
Oh my, my face is burning off, and I'm thirsty like a mother grabber... I hope the internet is still up. Oh hey, look, there goes a cockroach.
Yeah... (Score:4, Insightful)
Re:And... (Score:5, Funny)
Well, if it does happen, I hope they finish them off. Otherwise, the cockroaches may try to revive XML and web services based on an archaeological dig in a few hundred million years. Then again, let's punish the little bastards for infesting our kitchens. Let them suffer dumb tech bubbles and useless fads, after all.
Re:And... (Score:5, Informative)
Actually, that is an Internet myth. Look at the IETF RFCs: the first occurrence of the word 'nuclear' comes several decades after the Internet was created.
The DNS cluster is designed with multiple levels of fault tolerance. In particular the fact that the DNS protocol causes records to be cached means that the DNS root could be switched off for up to a day before most people would even notice.
The root cluster is actually the easiest to do without. There are only about 200 records; in extremis it would be possible to code them in by hand. More realistically, we could simply set up an alternative root and then use IP-level hacks to redirect the traffic. The root servers all have their own IP blocks at this stage, so it is quite feasible to have 200-odd root servers around the planet accessed via anycast.
The article does not mention which of the servers stayed up, apart from the VeriSign servers. However, those people who were stating last week that the .org domain can be run on a couple of moderately specced servers had better think again. The bid put in by Paul Vixie would not have covered a quarter of his connectivity bill if he was going to ride out attacks like this one.
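The caching behavior described above (records surviving a root outage until their TTL runs out) can be sketched as a toy cache; the names and TTLs below are examples only:

```python
class TTLCache:
    """Toy resolver cache: an entry stays usable until its TTL expires, so an
    upstream (root) outage shorter than the TTL is invisible to clients."""

    def __init__(self):
        self._store = {}

    def put(self, name, value, ttl, now):
        self._store[name] = (value, now + ttl)

    def get(self, name, now):
        entry = self._store.get(name)
        if entry and entry[1] > now:
            return entry[0]
        return None  # expired or missing: the resolver must re-query upstream

cache = TTLCache()
cache.put("com.", ["a.gtld-servers.net."], ttl=172800, now=0)  # 2-day TTL
assert cache.get("com.", now=86400) is not None   # roots down a whole day: fine
assert cache.get("com.", now=200000) is None      # past the TTL: must re-query
```

This is why a one-hour attack on the root is invisible to almost everyone: the TLD delegations nearly every resolver needs are already cached with TTLs measured in days.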
Re:And... (Score:5, Insightful)
1) This was not an attack on the surrounding world. This was an attack on the network itself, from inside the network itself.
2) The Internet was designed to be able to route around problems in a specific global region (nuclear war) by having each node or site connect to multiple other nodes, creating a redundancy that would be almost impossible to get around (in the worst case, you could try to route a region through someone's 56K if that region's main providers went down). This redundancy is nowhere near what it should be.
Also, the number of nodes is orders of magnitude greater than the original founders ever imagined. The number of sites back then was around 20-30, and it was fairly easy for most of them to connect to each other and form a semi-mesh network.
3) Dependence on centralized services. This attack was on one of the Internet's centralized services, the Alliance of 13 (DNS root servers). With a limited number of root DNS servers, it's easy to point somewhere and say "There's the weakness, let's hit it". The root DNS servers are a balance between complexity (having more than one root server takes time to propagate changes among all of them) and redundancy (having only one or a few servers makes an even more vulnerable point than the Alliance of 13).
Another major weakness is the continental backbones (for example, North America has the East Coast, West Coast, and transcontinental backbones) and their switching stations, like MAE East and West. Imagine if someone was able to take out all of MAE East in one shot, how crippled most of the Internet would be, for at least 12-36 hours while the alternate routing was put in place.
DDOS? (Score:4, Funny)
Watch Out! (Score:4, Funny)
Everyone! Run for your lives, Jackie's comin!
And for all you tech support people out there... (Score:4, Funny)
Re:And for all you tech support people out there.. (Score:3, Insightful)
Re:And for all you tech support people out there.. (Score:3, Interesting)
Re:And for all you tech support people out there.. (Score:3, Funny)
One would assume you still have to check periodically to see if the IP address from DNS is the same as your cached one. Either way, you are not the majority of Internet users, so for most everyone, DNS going dead == Internet going dead.
Determining whether or not kicking the majority of users off the Internet is a bad thing is left as an exercise to the reader.
Couldn't have been that bad... (Score:4, Insightful)
I'd say this just goes to show how reliable the root name servers are. I didn't notice any dns problems yesterday. In fact, I don't remember any root name server problems since the infamous alternic takeover.
Re:Couldn't have been that bad... (Score:4, Interesting)
Twenty minutes later, though, everything seemed fine, and the sites that wouldn't resolve earlier finally did. I wondered if something... erm.. unusual was going on, and it looks like there was...
As always, your mileage will undoubtedly vary...
Re:Couldn't have been that bad... (Score:4, Informative)
If you believe this article [com.com] on news.com [com.com], it looks more like a tempest in a teapot.
Quote: the peak of the attack saw the average reachability for the entire DNS network dropped only to 94 percent from its normal levels near 100 percent.
Re:Couldn't have been that bad... (Score:3, Informative)
http://news.com.com/2100-1023-204904.html?legac
And...? (Score:3, Funny)
You don't think the military puts any critical systems on the Internet, do you?
13 servers (Score:3, Funny)
Re:Well, I would guess... (Score:4, Informative)
-Kevin
Well there we go! (Score:4, Interesting)
Article:
"Despite the scale of the attack, which lasted about an hour, Internet users worldwide were largely unaffected, experts said."
All I can say is that if you think of this as a test, I'm happy it passed.
(Insert joke about Beowulf cluster of DDOS attacks / the servers ability to withstand the slashdot effect.)
Re:Well there we go! (Score:5, Interesting)
The attackers were idiots. They used ICMP echo requests (easily filterable, since the DNS servers don't _have_ to answer those) and quit after an hour. More publicity stunt than actual attempt to damage, IMNSHO.
I've been trying to publish a paper about exactly this (and how to redesign DNS to avoid the vulnerability) and I'm just pissed that they didn't tell me in advance so that I could do some measurements. :)
Comment removed (Score:5, Interesting)
Before anybody gets their panties in a knot (Score:5, Interesting)
"when uunet or at&t takes many customers out for many hours, it's not a problem. when an attack happens that was generally not even perceived by the users, it's a major disaster. i love the press"
With something like the root nameservers, if it had been an important attack, you would have noticed. I run an ISP and we had zero complaints, even from the EverQuest whiners who complain at the drop of a hat about anything.
I would draw an opposite conclusion (Score:4, Interesting)
Fine, so the attack was unintelligent. What will happen when someone attacks MAJORLY and INTELLIGENTLY?
This gets my panties in a knot. A piddly attack brought down 65% of the root name servers! A good attack would have brought them all down! Doesn't that worry you?
Ah ha. (Score:4, Funny)
I'm going to beat the crap out of that 12-year-old as soon as I find him; he made me look like I had no skillzzz.
Re:Ah ha. (Score:5, Funny)
Caching saves the day... (Score:5, Informative)
Thus the hour long attack was not enough to meaningfully disrupt things, as most lookups would not require querying the root, unless you were asking for some oddball TLD like
Change the attack to be several hours, or a few days, and then cache entries start to expire and people are unable to look up new domain names. But that attack would be harder to sustain, as infected/compromised machines could be removed.
It is an interesting question who did this and how it was achieved. There seems to have been a lot of scanning for open Windows shares (Yet Another Worm? Who knows) going on in the past couple of days, but there is no clue whether it is related.
The important caching (Score:4, Interesting)
For the most common 2LD names, any major ISP will have cached the addresses for them, and won't need to hit the .com server until the typical 1-week or 24-hour cache timeout periods. If your nameserver is ns.bigisp.net, somebody there will have looked up google.com in the last 2 seconds, even though nobody at your ISP has looked up really-obscure-domain.com this week - but even that one may be in the cache because some spammer was out harvesting addresses. An obvious scaling/redundancy play for the root servers and for the major ISPs would be to have them cache full copies of the root server domains to keep down the load and reduce dependency. It's not really that much data - 10 million domains averaging 30 characters for name and IP addresses is only half a CD-ROM. An interesting alternative trick would be for the Tier 1 ISPs to have some back-door access to root-level servers for recursive querying.
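The half-a-CD-ROM figure checks out, assuming the stated average of about 30 bytes per domain entry:

```python
# Back-of-envelope: 10 million domains at ~30 bytes each (name + address).
domains = 10_000_000
bytes_per_entry = 30
total_mb = domains * bytes_per_entry / 1_000_000  # ~300 MB
cdrom_mb = 650  # nominal CD-ROM capacity
print(f"{total_mb:.0f} MB, about {total_mb / cdrom_mb:.0%} of a CD-ROM")
```

So a full mirror of the .com delegation data is well within what any Tier 1 ISP could store and refresh daily.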
Preaching to the choir... (Score:3, Interesting)
I'd love to see a breakdown of what networks the attacks came from and what the OS distribution was... pie charts optional.
-B
Test run (Score:3, Insightful)
Maybe to cause a false sense of security, maybe to analyse how those crucial networks cope with DOS attacks so as to be more successful next time.
Whether these people were Bin Laden's boys or garden-variety hax0rs, don't get too comfortable. The worst is yet to come.
Sophisticated? (Score:5, Insightful)
I've never considered DDoS all that sophisticated myself. It seems to me more like "wow, a script kiddie got more systems under his control than usual" than "a great cracker is on the loose". Though I suppose if it were a great cracker, they could have been proving themselves by predicting the attack.
DDOS Sophistication Varies (Score:4, Interesting)
But, yeah, some of the attacks aren't much different than using a loudspeaker to announce "Free Beer at Victim.com"
OMG OMG (Score:4, Funny)
If DNS ever goes down totally, (Score:3, Informative)
We'll have to rely on IP addresses, obviously, so start changing your bookmarks now!
http://64.28.67.150/index.pl
instead of
http://slashdot.org/index.pl
And...? (Score:5, Insightful)
Indeed: no traffic slowdown, no more support calls than usual. The system works as expected, even under attack.
Worth a read: Caida DNS analysis [caida.org], and more specifically those graphs [caida.org]. It would be interesting to know which DNS sustained the attack, in regard to the graphs.
Looks worse then it is (Score:4, Insightful)
If you really want to, build your own root server [ipal.net]
I work for JPNIC (Score:4, Informative)
I'm at JPNIC & JPRS; we manage the Japanese servers here. The attack progressed through our networks and affected 4 of our secondary mapped servers (these servers are used as backups and are in no way real root servers). The servers were running a suite of Microsoft products (Windows NT 4.0) and a security firewall by Network Associates.
Here is a quick log review:
Oct20: The attackers probed our system around 2100 hours on Oct 20 (Japan). We saw a surge in traffic onto the honeypot (yes these backups are honeypots) systems right around then.
2238: We saw several different types of attacks on the system, starting with mundane XP-only attacks (these were NT boxes). We then saw tests for clocked IIS and various other things that didn't exist on our system.
2245: We saw the first BIND attacks; these attacks were very comprehensive. We can say they tried every single BIND exploit out there. But nothing was working.
Attacks ended right then.
Then on the 22nd they resumed (remember we are ahead)
22nd: A new type of attack resumed. The attack started with port 1 on the NT box; we have never seen this type of attack, and the port itself responding was very weird. Trouble started and alarms went off. We were checking but couldn't figure out what happened; then we saw a new BIND attack. The attack came in and removed some entries from the BIND database (we use Oracle to store our BIND data).
The following entries were added under ENTRI_KEY_WORLD_DATA
HACZBY : FADABOI
CORPZ : MVDOMIZN HELLO TO KOTARI ON UNDERNET
Several other things were changed or removed.
Till now, we have no idea what the exact type of hack this was; we are still looking into it. The attacker calls himself "Fadaboi", and has been seen attacking other systems in the past.
We are now working hard with Network Solutions.
Thank you.
Re:I work for JPNIC (Score:5, Informative)
Re:I work for JPNIC (Score:5, Interesting)
CORPZ : MVDOMIZN HELLO TO KOTARI ON UNDERNET
Well, this shouldn't take the FBI long. A quick Google search shows that Undernet's Kotari owns the domain www.kotari.com, which he recently took down but which still shows whois records.
"Most sophisticated attack ever" (Score:4, Funny)
And that's just a little fragment of it. I'm really worried about these guys taking over the internet!!
Re:"Most sophisticated attack ever" (Score:5, Funny)
Re:I work for JPNIC (Score:5, Funny)
Unbreakable.
Running NT and BIND? (Score:5, Interesting)
It's really easy to set up a system which dumps your SQL database out to a TinyDNS file [www.fefe.de]. TinyDNS [cr.yp.to] is about as security-hardened as DNS software gets. I would expect that you would use it on the root servers, since it's designed to work at very high levels of output/uptime, and to be attack resistant to the point of being attack proof.
Say what you will about D. J. Bernstein [cr.yp.to], he does have a very capable DNS solution [cr.yp.to] available.
It certainly does provide that capability. (Score:4, Informative)
For dynamically updating zones, I use a small Perl DBI script which dumps zones from the DB into a directory. All files in the directory are sorted (via sort) into a main text file, which is hashed into data.cdb. I also have a big text file from the other DNS server scp'd over and included in the hash. The entire system is dynamic, with every important entry controllable from within an easily backed-up (and restored) SQL server. Adding things like DynDNS to this setup would be trivial (all I'd need is another table for actual accounts, allowing people to modify their own zone files).
Best of all, because there is an order of magnitude less code running, TinyDNS is a lot easier to inspect for correctness. You can spend a couple of evenings reading over all the code for the package (even if it's not the best looking C code in the world), and really understand it.
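The database-to-data.cdb pipeline described above (done there in Perl) might look roughly like this in Python; the rows and zone names are hypothetical, and only plain `+fqdn:ip:ttl` A-record lines are shown:

```python
# Hypothetical rows as they might come out of the SQL database.
rows = [
    ("www.example.com", "192.0.2.10", 86400),
    ("mail.example.com", "192.0.2.25", 86400),
]

def to_tinydns(rows):
    """Render rows as tinydns-data lines; the real setup would write this
    file out and run tinydns-data to compile it into data.cdb."""
    return "".join(f"+{name}:{ip}:{ttl}\n" for name, ip, ttl in rows)

print(to_tinydns(rows), end="")
```

The flat-text intermediate format is what makes the whole thing so easy to inspect, diff, and back up.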
In other news.... (Score:4, Funny)
HA! Jumping through their own ass. (Score:3, Funny)
A certain mil/gov organization I consult with was jumping through their own asses worried about this. The funny thing is, ummm... NOTHING CHANGED! We experienced NOTHING. I think they wanted us to do something... ANYTHING.
You know... next time this happens, I'm setting up my own root servers... errr... wait...
Can you say "SPIKE"? (Score:4, Informative)
My Brain Hurts (Score:5, Funny)
And I suppose the person who wrote this article would consider arithmetic a complex system of digits and symbols.
mrtg charts (Score:4, Informative)
Root-servers.net [root-servers.net]
The legendary cymru.com data. [cymru.com]
I haven't looked yet but LINX mrtg charts might show something interesting. [linx.net]
Of course, even if someone could knock all the root servers over, the net as we know it wouldn't stop working instantly. That's what the time to live value is for :)
Traffic Stats (Score:5, Informative)
The stats for the h.root servers are available for the time period [root-servers.org] of the attack. Seems as though the h servers were taking in close to 94Mbits/second for a while.
More links to server stats can be found at Root Servers.org [root-servers.org] and some background is available at ICANNWatch [icannwatch.org].
Thoughts from a DNS implementor (Score:5, Insightful)
I only noticed it because I use my own DNS server [maradns.org] to resolve requests; and pay close attention whenever I see any problems resolving host names (there is the possibility of it being a bug with my software).
The person who orchestrated this attack is not very familiar with DNS. Attacking the root name servers is not very effective; all the root servers do is refer people to the .com, .org, or other TLD (top-level domain) name servers. Most DNS servers remember the list of the name servers for a given TLD for a period of two days, and do not need to contact the root servers to resolve those names. While some lesser-used country codes may have had slower resolution times, an attack on the root servers which only lasts an hour cannot even be felt by the average end user.
In the case of MaraDNS, if a DOS (denial of service) is happening against the root servers, MaraDNS will be able to resolve names (albeit more slowly for lesser-used TLDs) until every single root server is successfully DOS'd.
- Sam
Follow-up Washington Post article... (Score:5, Funny)
Followup article, after slashdot story, was: "Attack on Washington Post Called Largest Ever".
Ah.. behold the mighty power of
What's the difference between a DoS attack & /.? (Score:5, Funny)
--Joey
Re:al qaeda? (Score:5, Funny)
I was using the computer in Afghanistan to surf pr0n.
Re:oh my... (Score:4, Interesting)
And *nix systems are infinitely more scriptable, so I think it's more likely those were used for the attack (if I remember correctly, unsecured Linux boxes were used for the big DDoS attacks on Yahoo and eBay etc. some years ago).
Re:That's why! (Score:4, Funny)
(It can't just have been me!)
graspee
Patent Infringement (Score:5, Funny)
Re:Reminds me... (Score:3, Funny)
Re:Why attack (Score:5, Informative)
I am not an expert, but surely these servers connect to the net through some sort of router/hub, whatever. The servers are made to handle a lot of traffic, but what about the connecting hardware? If the routers were attacked directly, wouldn't the DDoS attack still be successful without touching or alerting the DNS servers themselves?
It's an interesting idea, but it doesn't quite work like that. The routers we're talking about here (I imagine that most of the root servers are on 100BT or Gigabit Ethernet LANs which then plug into one or more DS-3s [45 Mbps] or more likely OC-3s [155 Mbps]) are designed to be able to handle many, many times more traffic than the servers are. Your average Cisco 7xxx or 12xxx router is built to handle far more traffic than any given server might see. Think about it ... you generally have many servers being serviced by one router, not the other way around. Additionally, each root server is most likely connected to multiple routers (say, they're hosted at an ISP with three DS-3s to different providers and each DS-3 is plugged into a different Cisco 7500).
Also, I doubt that the routers are set up to recognize any kind of attack, as they are just relays between the net and the server. Possibly the attack could go on for quite some time before anyone realized what was going on.
Actually, it's the other way around. Most good routers are designed to have the ability (if you enable it) to look inside of the packets that pass through them and filter out "bad" ones based on various criteria. Thus, routers are actually perfectly suited to stopping attacks like this, while servers are expected to burn their CPU cycles doing other things (yes, servers can do this sort of filtering, but they generally have something more important to do). The only real problem is that it's often very difficult to tell the "good" packets from the "bad." After all, how do you distinguish automatically between a distributed flood of malicious HTTP requests and a Slashdotting? You get the idea.
WD40 (Score:5, Interesting)
Hmmm, last I looked at the Cisco feature set (or the like from Foundry and Nortel and what have you), it was a challenge to put in rules that
a) didn't take out significant "good" traffic, and
b) did take out significant "bad" traffic.
I agree that rate limiting ICMP traffic is an appropriate answer, especially in the light of this particular attack, but I'm appalled by the number of illiterate dorks who copy snippets titled "how to block all ICMP" from a textbook into their firewall without the slightest understanding of why ICMP was implemented in the first place.
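Rate limiting rather than blocking can be sketched as a token bucket (illustrative Python only; real routers implement this in the forwarding path, not in a script):

```python
import time

class TokenBucket:
    """Toy token-bucket limiter: the idea behind rate-limiting ICMP
    instead of dropping it outright."""
    def __init__(self, rate_per_sec, burst):
        self.rate = rate_per_sec      # tokens refilled per second
        self.capacity = burst         # maximum burst size
        self.tokens = burst
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # this packet is dropped, but ICMP is not blocked

bucket = TokenBucket(rate_per_sec=10, burst=5)
results = [bucket.allow() for _ in range(20)]
# The first few packets of a burst pass; a sustained flood gets shed.
print(results.count(True))
```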
I hate to think of what could happen if the 31337 hackers really start mixing attacks.
I positively _love_ WD-40, but I will not apply it to reduce the squeaking of my car's brakes. Too many people use the Internet equivalent of WD-40 on their network brakes.
Re:Where's the Inter in the 'Net? (Score:5, Insightful)
The Internet's roots have nothing to do with democracy. Quite the opposite, your military wanted a communications network that could survive a nuclear holocaust so that it would be the first to rebuild and conquer the world when the evil reds launched the first nuke.
Most of the TLDs are in the USA because the DNS system was created in the USA and was largely hosted by US providers. It's too much trouble to move them, and of limited benefit. If they ever decide to add new ones, it's likely that they'll put at least one in Japan, and probably a couple in Europe.
Even so, though, the main reason for their dispersal is to survive a nuclear attack that takes out one or two. I don't know if you've looked at a map recently, but the USA is big. It's not like all 13 of the TLD servers are located in a trailer in rural Kentucky. You'd have to carpet bomb the entire USA to be sure of taking out all 13 of them, and frankly, if somebody had the resources to turn the entire country into a self-illuminating glass-floored parking lot, the Internet would be the least of my worries.
Re:undisclosed location (Score:5, Interesting)
Disclaimer: I work for VeriSign. This is a personal opinion, not company policy. The details of the disaster-recovery scheme are of course confidential. However, I can tell people that we did think about these issues during the design. We have always known that people might think the DNS was a single physical point of failure for the internet. That is why we designed it so that it is not.
There are multiple locations. The 'A root' is NOT a single machine. There are actually multiple instances of the A root with multiple levels of hotswap capability.
Incidentally, it is no accident that the VeriSign root servers stayed up. They were designed to handle loads way beyond normal load. The ATLAS cluster is reported to handle 6 billion transactions a day, with a capacity very substantially in excess of that.
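For scale, the reported figure works out to roughly 69,000 queries per second on average (conversion mine, using the parent comment's number):

```python
# 6 billion transactions/day, averaged over the 86,400 seconds in a day.
transactions_per_day = 6_000_000_000
average_qps = transactions_per_day / 86_400
print(round(average_qps))  # prints 69444
```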
Even if all the A roots were physically destroyed, the roots could be reconstructed at other locations. Basically all that is needed is a site with a very fast internet connection. In the case of a major terrorist attack, AOL or UUNet or even an ARPAnet node could be commandeered. The root could even be moved out of the country entirely; British Telecom is a VeriSign affiliate, and there are also several other affiliates with nuclear-hardened bunkers.
Most Americans have only been thinking about terrorism since 9-11. VeriSign security was largely designed by people who thought about terrorism professionally, unless of course they were in charge of securing nuclear warheads.
All a terrorist could do is kill a lot of people; there is absolutely no single point of failure. Even if the entire constellation were destroyed, it would result in an outage of no more than a day, given the resources that would become available in the aftermath.
Re:Punishment options. (Score:5, Insightful)
Seriously, how do you plan on enforcing this? It's a huge expenditure of resources to track down the computers used in the attacks, get their IP addies, obtain the court orders needed for their ISPs' logs, parse those logs to find out who was logged on, and *then* go about prosecuting the offenders. And what would it accomplish?
If Code Red taught us anything, it's that the dumb won't change a thing about the way they work, regardless of how much the internet community ridicules them. It's also completely nuts to punish the ISPs for this... where does it stop? I'm pretty sure that some AOL clients were responsible (and while I wouldn't complain about no AOL'ers for a while, I bet they would). How about people who buy their access directly from UUNet? Gonna block out UUNet for a month?
Even if you could implement that punishment of the ISPs, it wouldn't accomplish much. It wouldn't hurt me at all if I was blocked from direct access to the TLD servers, because inside my network I'm running a mirror. My ISP is running a mirror. I know of a dozen open DNS servers on the internet. I'm betting I could find at least one that wouldn't block me.
Seriously, though. It's great to say we should punish these people for not securing their systems, but you have to understand just how many computers would be needed for this attack. The TLD servers aren't running on 64k ISDN: they're on OC-48 at least. There are 13 of them. The kind of bandwidth needed to adequately DoS them is obscene. You either do it the dumb way and use 50 computers running on the fastest connection available, or you use *hundreds* of computers, possibly thousands or tens of thousands.
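A rough back-of-envelope under the parent's assumptions (all numbers illustrative, not measurements of the actual attack):

```python
# If each of 13 servers sat behind an OC-48 link, saturating every
# link with DSL-class zombies would take an enormous number of hosts.
oc48_mbps = 2488.32          # OC-48 line rate in Mbit/s (~2.5 Gbit/s)
servers = 13
total_mbps = oc48_mbps * servers

zombie_upload_mbps = 0.128   # a typical 2002-era home DSL upload
hosts_needed = total_mbps / zombie_upload_mbps
print(round(hosts_needed))  # hundreds of thousands of DSL-class hosts
```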
Looks great on paper, but realistically there's not much point in ranting like this. Besides... if it wasn't for the article, I'm betting that most of the world wouldn't have noticed.
Lots of people didn't notice (Score:4, Informative)
That's the scary part.... (Score:4, Interesting)
here's one; (Score:5, Funny)
4711 Mission Rd. - Westwood, KS (sub. of Kansas City), Tel: (913) 432-5678
Good enough for a lot of professional athletes, and they straightened me up after my car wreck.
But I don't think they can fix uunet.