Internet Backbone DDOS "Largest Ever" 791

wontonenigma writes "It seems that yesterday the root servers of the Internet were hit by a massive distributed denial-of-service attack. I mean jeeze, only 4 or 5 out of 13 survived according to the WashPost. Check out the original Washington Post article here."
This discussion has been archived. No new comments can be posted.


  • Re:And... (Score:5, Informative)

    by no soup for you ( 607826 ) <jesse.wolgamott@noSPaM.gmail.com> on Tuesday October 22, 2002 @07:49PM (#4508930) Homepage
    it's supposed to withstand a nuclear war?

    Article: "The Domain Name System (DNS), which converts complex Internet protocol addressing codes into the words and names that form e-mail and Web addresses, relies on the servers to tell computers around the world how to reach key Internet domains."

    The "IP system" should have been fine. The DNS system, which has become an integral part of the "internet" is not decentralized as regular internet infrastructure is. Yes it is supposed to withstand a nuclear war, and yes, it would have. btw, the system worked yesterday. only 4 of 13 may have survided, but the system still ran.

    We can have the internet without dns, but we cannot have dns without the internet

  • by nweaver ( 113078 ) on Tuesday October 22, 2002 @07:50PM (#4508940) Homepage
    The root DNS servers are required to go from a TLD to that TLD's nameservers, e.g. to go from ".com" to the .com nameservers. As a result, although critical, their results are cached with very, VERY long cache timeouts (TLD DNS servers seldom change).

    Thus the hour long attack was not enough to meaningfully disrupt things, as most lookups would not require querying the root, unless you were asking for some oddball TLD like .su.

    Change the attack to be several hours, or a few days, and then cache entries start to expire and people are unable to look up new domain names. But that attack would be harder to sustain, as infected/compromised machines could be removed.

    It is an interesting question who did this and how it was achieved. There seems to be a lot of scanning for open Windows shares (Yet Another Worm? Who knows) also going on in the past couple of days, but there is no clue whether it is related.
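The caching behavior described above is the crux of why a one-hour attack failed. Here is a minimal sketch, in Python, of how a resolver cache with TTLs works; the name, address, and TTL values are illustrative, not taken from a real delegation:

```python
import time

# A toy resolver cache: answers are kept until their TTL expires.
class ToyCache:
    def __init__(self):
        self.entries = {}  # name -> (address, expiry_time)

    def put(self, name, address, ttl, now=None):
        now = time.time() if now is None else now
        self.entries[name] = (address, now + ttl)

    def get(self, name, now=None):
        now = time.time() if now is None else now
        hit = self.entries.get(name)
        if hit is None or now > hit[1]:
            return None  # miss or expired: only now must we ask upstream
        return hit[0]

cache = ToyCache()
# TLD delegations carry long TTLs (on the order of days),
# so a cached entry outlives a short attack on the roots.
cache.put("com.", "192.5.6.30", ttl=172800, now=0)   # a 2-day TTL
print(cache.get("com.", now=3600))    # one hour into an attack: still cached
print(cache.get("com.", now=200000))  # past the TTL: miss, must query the root
```

At one hour in, the delegation is still served from cache and the root is never consulted; only once TTLs start expiring would a sustained multi-day attack begin to hurt.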

  • Re:13 servers (Score:2, Informative)

    by Kevin Stevens ( 227724 ) <kevstev&gmail,com> on Tuesday October 22, 2002 @07:51PM (#4508946)
    Well, since the servers cache the lookup info, they don't need to be as impressive as you might think. Each domain has its own domain server with its own cache. So... when you look up slashdot.org, your machine might have the IP cached, and your ISP's DNS server might have it cached (which is very likely). A lot of requests do not need to go through the root servers. This is especially true nowadays, since people for the most part tend to have 5 or 10 or whatever sites they visit often, and rarely stray from them.
  • by Anonymous Coward on Tuesday October 22, 2002 @07:53PM (#4508970)
    I haven't had any trouble.
  • by Bobulusman ( 467474 ) on Tuesday October 22, 2002 @07:55PM (#4508990)
    Which could happen if these guys tried again:

    We'll have to rely on IP addresses, obviously, so start changing your bookmarks now!

    http://64.28.67.150/index.pl
    instead of
    http://slashdot.org/index.pl

    :)
  • Re:Ah ha. (Score:2, Informative)

    by Kevin Stevens ( 227724 ) <kevstev&gmail,com> on Tuesday October 22, 2002 @07:58PM (#4509007)
    It is unlikely that you would experience lag due to the root servers going down unless you were using the same routes that were experiencing the DDOS, and it is unlikely that routes even a few hops away from the servers' main links were completely saturated. All the DNS server does is resolve the domain name to an IP address; once that is done for a site, it does not need to be done again. Also, the result is often cached either on your local machine or on your ISP's DNS server, so you rarely need to actually go all the way up to the root servers.
  • I work for JPNIC (Score:4, Informative)

    by Anonymous Coward on Tuesday October 22, 2002 @08:02PM (#4509043)
    Hi,

    I'm at JPNIC & JPRS; we manage the Japanese servers here. The attack progressed through our networks and affected 4 of our secondary mapped servers (these servers are used as a backup and are in no way real root servers). The servers were running a suite of Microsoft products (Windows NT 4.0) and a security firewall by Network Associates.

    Here is a quick log review:

    Oct20: The attackers probed our system around 2100 hours on Oct 20 (Japan). We saw a surge in traffic onto the honeypot (yes these backups are honeypots) systems right around then.

    2238: We saw several different types of attacks on the system, starting with mundane XP-only attacks (these were NT boxes). We then saw tests for clocked IIS and various other things that didn't exist on our system.

    2245: We saw the first BIND attacks; these attacks were very comprehensive. We can say they tried every single BIND exploit out there. But nothing was working.

    Attacks ended right then.

    Then on the 22nd they resumed (remember we are ahead)

    22nd: A new type of attack resumed. The attack started with port 1 on the NT box; we have never seen this type of attack, and the port itself responding was very weird. Trouble started and alarms went off. We were checking but couldn't figure out what happened; then we saw a new BIND attack. The attack came in and removed some entries from the BIND database (we use Oracle to store our BIND data).

    The following entries were added under ENTRI_KEY_WORLD_DATA ::

    HACZBY : FADABOI
    CORPZ : MVDOMIZN HELLO TO KOTARI ON UNDERNET

    Several other things were changed or removed.

    Till now we have no idea what the exact type of hack this was; we are still looking into it. The attacker calls himself "Fadaboi", and has been seen attacking other systems in the past.

    We are now working hard with Network Solutions.

    Thank you.

  • Re:Why attack (Score:5, Informative)

    by schnell ( 163007 ) <me@schnelBLUEl.net minus berry> on Tuesday October 22, 2002 @08:04PM (#4509056) Homepage

    I am not an expert, but surely these servers connect to the net through some sort of router/hub/whatever. The servers are made to handle a lot of traffic, but what about the connecting hardware? If the routers were attacked directly, wouldn't the DDOS attack still be successful without touching or alerting the DNS servers themselves?

    It's an interesting idea, but it doesn't quite work like that. The routers we're talking about here (I imagine that most of the root servers are on 100BT or Gigabit Ethernet LANs which then plug into one or more DS-3s [45 Mbps] or more likely OC-3s [155 Mbps]) are designed to be able to handle many, many times more traffic than the servers are. Your average Cisco 7xxx or 12xxx router is built to handle far more traffic than any given server might see. Think about it ... you generally have many servers being serviced by one router, not the other way around. Additionally, each root server is most likely connected to multiple routers (say, they're hosted at an ISP with three DS-3s to different providers and each DS-3 is plugged into a different Cisco 7500).

    Also I doubt that the routers are setup to recognize any kind of attack as they are just relays between the net and the server. Possibly the attack could go on for quite some time before any one realized what was going on.

    Actually, it's the other way around. Most good routers are designed to have the ability (if you enable it) to look inside of the packets that pass through them and filter out "bad" ones based on various criteria. Thus, routers are actually perfectly suited to stopping attacks like this, while servers are expected to burn their CPU cycles doing other things (yes, servers can do this sort of filtering, but they generally have something more important to do). The only real problem is that it's often very difficult to tell the "good" packets from the "bad." After all, how do you distinguish automatically between a distributed flood of malicious HTTP requests and a Slashdotting? You get the idea.

  • Re:And... (Score:5, Informative)

    by Istealmymusic ( 573079 ) on Tuesday October 22, 2002 @08:04PM (#4509058) Homepage Journal
    You make some good points, but the Domain Naming Server system is in fact largely distributed. Ever notice how when you configure your network stack you have to enter a DNS server? That's your ISP's DNS server; it's not one of the 13 root servers. Verizon gives its users 3 servers for translating numbers to names: vnsc-pri.sys.gtei.net (4.2.2.1), vnsc.bak.sys.gtei.net (4.2.2.2), vnsc-lc.sys.gtei.net (4.2.2.3), and for internal use, i-will-not-steal-service.gtei.net (4.2.2.4). Earthlink has 207.217.120.109, and even the smallest local ISP has its own DNS server.

    DNS is hierarchical, both in naming and in server implementation. Small ISPs cache their DNS from more major providers, up to the A through M.ROOT-SERVERS.NET main Internet servers. There is in fact one critical file, but it is mirrored to the 13 root servers, and domain look-ups are cached at the ISP level. I'm not surprised most Internet users were not affected; you wouldn't be affected if several large mail servers were DDoSed, would you?

  • by khuber ( 5664 ) on Tuesday October 22, 2002 @08:10PM (#4509098)
    You can definitely get to the root servers. Ping only works if the host responds to ICMP echo requests. Try doing a DNS lookup :).

    # nslookup b.root-servers.net a.root-servers.net
    Server: a.root-servers.net
    Address: 198.41.0.4#53

    Name: b.root-servers.net
    Address: 128.9.0.107

    -Kevin

  • Re:I work for JPNIC (Score:5, Informative)

    by irregular_hero ( 444800 ) on Tuesday October 22, 2002 @08:13PM (#4509116)
    If you want to see in gory detail what a DDOS attack looks like in relation to what NORMALLY happens to these servers, try here [root-servers.org]. Notice the really big spike. As if you could miss it.
  • by pythas ( 75383 ) on Tuesday October 22, 2002 @08:15PM (#4509135)
    Do a google search for AlterNIC. Or, you could look here:

    http://news.com.com/2100-1023-204904.html?legacy=cnet
  • Can you say "SPIKE"? (Score:4, Informative)

    by irregular_hero ( 444800 ) on Tuesday October 22, 2002 @08:17PM (#4509150)
    I think I can. The US Army-operated root server looks like it took the brunt of the attack [root-servers.org], as opposed to the JPNIC servers, which seem to have had a much lower rate [root-servers.org] (perhaps because most of the attacking hosts were US-based?).
  • Re:And... (Score:4, Informative)

    by no soup for you ( 607826 ) <jesse.wolgamott@noSPaM.gmail.com> on Tuesday October 22, 2002 @08:27PM (#4509212) Homepage
    Earthlink has 207.217.120.109, and even the smallest local ISP has its own DNS server.

    You're correct in that there are more than 13 DNS servers. I've got my own, which may or may not lie; it's these 13 that are "trusted"... so to speak.

    Now, when you're configuring your network stack (in fact, when you described to me the various DNS servers), what is the important part: the name or the IP number? The number, which helps to prove my point that IP is more important than DNS.

  • mrtg charts (Score:4, Informative)

    by Cally ( 10873 ) on Tuesday October 22, 2002 @08:29PM (#4509225) Homepage
    Links courtesy of Sean Donelan.

    Root-servers.net [root-servers.net]
    The legendary cymru.com data. [cymru.com]

    I haven't looked yet but LINX mrtg charts might show something interesting. [linx.net]

    Of course, even if someone could knock all the root servers over, the net as we know it wouldn't stop working instantly. That's what the time to live value is for :)

  • Re:And... (Score:4, Informative)

    by Istealmymusic ( 573079 ) on Tuesday October 22, 2002 @08:33PM (#4509251) Homepage Journal
    Correct; I know of no DNS servers, not even djbdns [cr.yp.to], which restrict queries to a limited IP range as is common with SMTP. There's not really a large risk in opening up your DNS to everyone; in fact, there are plenty of alternate DNS root servers [jerky.net].
  • Traffic Stats (Score:5, Informative)

    by HappyPhunBall ( 587625 ) on Tuesday October 22, 2002 @08:33PM (#4509254) Homepage

    The stats for the h.root servers are available for the time period [root-servers.org] of the attack. Seems as though the h servers were taking in close to 94 Mbits/second for a while.

    More links to server stats can be found at Root Servers.org [root-servers.org] and some background is available at ICANNWatch [icannwatch.org].

  • Re:And... (Score:5, Informative)

    by Zeinfeld ( 263942 ) on Tuesday October 22, 2002 @08:37PM (#4509278) Homepage
    it's supposed to withstand a nuclear war?

    Actually, that is an Internet myth. Look at the IETF RFCs; the first occurrence of the word 'nuclear' is several decades after the Internet was created.

    The DNS cluster is designed with multiple levels of fault tolerance. In particular the fact that the DNS protocol causes records to be cached means that the DNS root could be switched off for up to a day before most people would even notice.

    The root cluster is actually the easiest to do without. There are only 200 records. In extremis it would be possible to code them in by hand. Or, more realistically, we simply set up an alternative root and then use IP-level hacks to redirect the traffic. The root servers all have their own IP blocks at this stage, so it is quite feasible to have 200-odd root servers around the planet accessed via anycast.

    The article does not mention which of the servers stayed up apart from the VeriSign servers. However, those people who were stating last week that the .org domain can be run on a couple of moderately specced servers had better think again. The bid put in by Paul Vixie would not have covered a quarter of his connectivity bill if he was going to ride out attacks like this one.

  • by billstewart ( 78916 ) on Tuesday October 22, 2002 @08:40PM (#4509292) Journal
    The attack only lasted an hour or so, didn't affect all the servers, and if most of the sites you were looking at were in your ISP's DNS caches, you wouldn't have hit the root servers anyway. If you're looking for google.com, your ISP's cache has it because somebody else looked at it 2 seconds ago - it's when you want really-obscure-domain.com that you need to hit the root servers.
  • Re:One critical (Score:5, Informative)

    by Istealmymusic ( 573079 ) on Tuesday October 22, 2002 @08:42PM (#4509308) Homepage Journal
    Sure, do an AXFR (full zone transfer) with dig on a root server. Of course, you have to be a privileged user; AXFR requires a full-duplex TCP connection instead of an ordinary UDP query, so unfortunately *.root-servers.net and *.gtld-servers.net don't allow transfers. Yet some of the international country-code TLDs (ccTLDs) allow AXFR transfers [securityfocus.com]; if you wanna host .AG or whatever, just do a dig axfr and you're good to go.
  • by billstewart ( 78916 ) on Tuesday October 22, 2002 @08:43PM (#4509318) Journal
    ...if they'd looked up their favorite pr0n and warez sites first, so the names were in their DNS caches and their ISP's caches.
  • Re:And... (Score:5, Informative)

    by aredubya74 ( 266988 ) on Tuesday October 22, 2002 @08:50PM (#4509351)
    Verizon gives its users 3 servers for translating numbers to names: vnsc-pri.sys.gtei.net (4.2.2.1), vnsc.bak.sys.gtei.net (4.2.2.2), vnsc-lc.sys.gtei.net (4.2.2.3), and for internal use, i-will-not-steal-service.gtei.net (4.2.2.4)

    Actually, an interesting note on how this is configured. Genuity (aka GTEI, aka BBN Planet), which hosts these DNS resolvers, has a simple but effective distribution system for redundancy. There are actually several servers on AS 1 that will respond as 4.2.2.1 or .2. /32 routes are sprinkled into IGP within the network to try to route requests to the "closest" server that can answer the request. If one is in trouble, simply pull the route to it, and requests route elsewhere. It's not foolproof, as a DDOS would likely come from all borders and overwhelm all of the various servers, but it's pretty effective nonetheless.
  • by Perianwyr Stormcrow ( 157913 ) on Tuesday October 22, 2002 @08:59PM (#4509404) Homepage
    It's just change propagation that's a bitch.
  • Re:And... (Score:5, Informative)

    by Neon Spiral Injector ( 21234 ) on Tuesday October 22, 2002 @08:59PM (#4509405)
    You mean like
    acl XXX {
    xxx.xxx.xxx.xxx/20;
    }

    options {
    allow-query { localhost; XXX; };
    ...
    };
    ?

    That's what I do with BIND9.
  • Not quite. (Score:4, Informative)

    by mindstrm ( 20013 ) on Tuesday October 22, 2002 @09:02PM (#4509419)
    Smaller ISPs don't cache info from larger ones... most DNS servers simply use the root servers directly. There is no hierarchy beyond that with regards to caching.

    It is hierarchical with regards to namespace, but not so much with regards to lookups.

  • by Zeinfeld ( 263942 ) on Tuesday October 22, 2002 @09:04PM (#4509427) Homepage
    Because that country invented the Internet. It's the most powerful, the most prosperous, the most democratic country in the world. Where would you rather the root servers be... Iran, Iraq, China, Russia? Use your fucking mind.

    Actually that is not the reason. By the time DNS came along the Internet was already international. And never confuse the claim that the US invented the Internet with the idea that the US invented computer networking. Lots of countries had computer networks, the idea of protocol design to overcome the political problems of connecting disparate networks was what came out of the US.

    The DNS servers are where they are because they are expensive to maintain and are run on a volunteer basis. Most of the people prepared to provide the necessary resources happened to be in the US. This is the reason why 9 of the root servers went down: you cannot expect someone to pay for multiple OC3-or-above connectivity to support a volunteer effort.

    As far as geography goes, China and Russia should have a root server. There should also be servers in Australia, South America, and northern and southern Africa. This is actually likely to happen when it becomes feasible to turn on use of anycast. At present there is a hard limit of 13 root servers. Some of those servers are multiple machines in fault-tolerant configurations, but they are still bound by the assumption that an IP address is served at a single location.

    With anycast we simply fiddle the router tables so that there are multiple servers around the world all responding to the same IP address. This will make it possible to have 50 sites serving each of the 13 root DNS addresses. In practice it is likely that only one of those addresses will need to be anycast and the BIND software tweaked to favor it.

  • >Users wank up their software configuration and then blame it on "the server" instead of their own ignorance (notice I didn't say stupidity, I said ignorance).

    You only get to use the ignorance excuse once. Not following instructions when you've been explicitly given them is stupidity.
  • by Istealmymusic ( 573079 ) on Tuesday October 22, 2002 @10:02PM (#4509716) Homepage Journal
    Okay...I Googled for "randall hyde sucks" in both web and groups, and couldn't find anything. You're right about me not being a UCR student...though I might be soon, depending on my SAT. Maybe you could enlighten me on Hyde's assholeness, if you would be so kind.

    I have found AoA to be extremely useful in my understanding of Boolean Algebra [ucr.edu]; Chapter 2 covered the basic postulates, theorems, and functions very well. I printed the "16 Possible Boolean Functions of Two Variables" table he included and kept it in a handy location. I first came across minterms/maxterms and how they are used to find the canonical expression, as well as k-maps [ucr.edu] for optimization. I don't particularly like Hyde's assembly library, however; for me the Intel Programmer's Manual Volumes 1-3 dead-tree book was most clear and straightforward, unlike assembly "tutorials".

    I challenge you to provide a link to a better reference than Hyde's AoA that explains boolean algebra more clearly and more comprehensively. Go ahead.

  • Re:And... (Score:5, Informative)

    by Istealmymusic ( 573079 ) on Tuesday October 22, 2002 @10:11PM (#4509768) Homepage Journal
    Sure, you can send to @123.123.123.123, but it wouldn't go anywhere as 64-126.*.*.* [flumps.org] is reserved by the greedy IANA. Just kidding.

    The DNS system provides an "MX" resource record for handling mail exchangers. Before the MX record, to send mail one would resolve the name using an A record and connect to the resulting IP address. Nowadays, *@foobar.com doesn't always have to be handled by 140.186.139.224. In fact, there is a nice system set up for prioritizing mail handlers, built into DNS's MX records:

    host google.com
    google.com mail is handled (pri=10) by smtp1.google.com
    google.com mail is handled (pri=20) by smtp2.google.com
    google.com mail is handled (pri=40) by smtp3.google.com

    To answer your question, you can use IP addresses. But you'll be missing out on the prioritized DNS mail system. And don't worry about this being offtopic; the article isn't all that interesting anyway. I'd rather teach someone something interesting than write lame drivel about some "backbone DDoS" that's not even a backbone DDoS. Hey, it's about the structure of the Internet...
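As a footnote to the `host google.com` output above, the way a sending mail server uses those priority numbers can be sketched in a few lines of Python. The hostnames are copied from the comment; the rule (lowest preference value tried first) is standard MX behavior:

```python
# MX records as (preference, exchanger) pairs, deliberately out of order,
# using the hostnames from the host output quoted above.
mx_records = [
    (20, "smtp2.google.com"),
    (10, "smtp1.google.com"),
    (40, "smtp3.google.com"),
]

def delivery_order(records):
    """Return exchanger hostnames in the order a sender should try them
    (lowest preference first; equal preferences may be tried in any order)."""
    return [host for pref, host in sorted(records)]

print(delivery_order(mx_records))
# ['smtp1.google.com', 'smtp2.google.com', 'smtp3.google.com']
```

This is exactly the failover you give up by mailing a raw IP address: with an IP there is one host to try, while MX records give the sender an ordered list of backups.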

  • by David Jao ( 2759 ) <djao@dominia.org> on Tuesday October 22, 2002 @10:11PM (#4509772) Homepage
    if you're hosting domains then you wouldn't want to make that above change to your named.conf

    You're right, you wouldn't want to block all queries, but you can do almost as well: you can block all queries except those for the domains you're hosting. In fact, doing so is generally considered a very good idea, since it protects you against some forms of cache poisoning attacks.

    Check out the allow-recursion command in the named.conf (5) man page, which does exactly what I describe.

  • Re:And... (Score:5, Informative)

    by Neon Spiral Injector ( 21234 ) on Tuesday October 22, 2002 @10:17PM (#4509798)
    Ahh, in that case you'll want to add something like this:
    zone "xxx.tld" {
    type master;
    allow-query { any; };
    file "zone/domain-hosting";
    };
    The "allow-query { any; };" being the key. That overrides the more restrictive ACL for the primary use of the name server. You'll have to add that line to any zone you want to be able to be queried by the world.
  • Not a myth (Score:3, Informative)

    by commodoresloat ( 172735 ) on Tuesday October 22, 2002 @10:27PM (#4509845)
    It's not a myth. Nuclear survival may not have been discussed in the RFCs, but the idea of a distributed packet-switching communication network was conceived by Paul Baran in a series of papers [rand.org] for the RAND Corporation between 1960 and 1962. Follow the link and read that "While working at RAND on a scheme for U.S. telecommunications infrastructure to survive a 'first strike,' Baran conceived of the Internet and digital packet switching, the Internet's underlying data communications technology." The Internet was designed to survive a nuclear war.

    Of course, the poster's original question still makes sense (even though it was a joke) -- the Internet is at least potentially vulnerable to an attack on the Internet, even if it could survive a nuclear war. The idea that an enemy would attempt a decapitating nuclear first strike without targeting C3I assets (command, control, communications and intelligence) is absurd. The beauty of Baran's solution is that it makes such a strike very difficult -- and yesterday's DDOS certainly supports this, since most people didn't even notice it.

  • by Inoshiro ( 71693 ) on Tuesday October 22, 2002 @10:28PM (#4509848) Homepage
    To provide caching, use DNScache. If your box is exposed to the internet, you likely don't want to be doing cache requests for the world. You can easily configure DNScache to broker for several internal (TinyDNS) systems. Note that only TinyDNS will set the authoritative flag; DNScache will not.

    For dynamically updating zones, I use a small Perl DBI script which dumps zones from the DB into a directory. All files in the directory are sorted (via sort) into a main text file, which is hashed into data.cdb. I also have a big text file from the other DNS server scp'd over and included in the hash. The entire system is dynamic, with every important entry controllable from within an easily backed-up (and restored) SQL server. Adding things like DynDNS to this setup would be trivial (all I'd need is another table for actual accounts, which would allow people to modify their own zone files).

    Best of all, because there is an order of magnitude less code running, TinyDNS is a lot easier to inspect for correctness. You can spend a couple of evenings reading over all the code for the package (even if it's not the best looking C code in the world), and really understand it.
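A rough illustration of the dump step described above, in Python rather than Perl DBI: database rows get flattened into tinydns-data lines before being compiled into data.cdb. The rows, names, and addresses here are invented, and the record syntax (`+fqdn:ip:ttl` for an A record, `@fqdn:ip:exchanger:dist:ttl` for MX) should be double-checked against the tinydns-data documentation:

```python
# Hypothetical rows as they might come out of the SQL server.
rows = [
    {"type": "A",  "name": "www.example.com", "ip": "10.0.0.5", "ttl": 86400},
    {"type": "MX", "name": "example.com", "mx": "mail.example.com",
     "dist": 10, "ttl": 86400},
]

def to_tinydns(row):
    """Flatten one database row into a tinydns-data line."""
    if row["type"] == "A":
        return "+{name}:{ip}:{ttl}".format(**row)
    if row["type"] == "MX":
        # empty IP field: tinydns resolves the exchanger's own A record
        return "@{name}::{mx}:{dist}:{ttl}".format(**row)
    raise ValueError("unhandled record type: " + row["type"])

for row in rows:
    print(to_tinydns(row))
```

The emitted lines would then be concatenated, sorted, and fed to tinydns-data to build the cdb hash, mirroring the sort-then-hash pipeline the comment describes.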
  • Re:And... (Score:5, Informative)

    by Electrum ( 94638 ) <david@acz.org> on Tuesday October 22, 2002 @10:28PM (#4509850) Homepage
    Correct; I know of no DNS servers, not even djbdns [cr.yp.to], which restrict queries to a limited IP range as is common with SMTP. There's not really a large risk in opening up your DNS to everyone; in fact, there are plenty of alternate DNS root servers [jerky.net].

    You don't know what you are talking about. There are two different types of DNS servers: authoritative servers and recursive resolvers. djbdns comes with tinydns, an authoritative server, and dnscache, a recursive resolver. The two are completely separate. BIND includes both in the same server, which is why many people are confused into thinking they are the same thing.

    tinydns does not restrict queries to only certain IP addresses. However, it can return different information depending on the source address of the query. This is usually called split horizon DNS.

    dnscache does have access control. You do not want just anyone to be able to query your recursive resolvers. With dnscache, you need to explicitly allow access [cr.yp.to] for IP's that can query it.

    There are no risks in opening your content (authoritative) DNS servers to everyone. There are risks in opening up your resolvers to everyone.
  • by Anonymous Coward on Tuesday October 22, 2002 @10:32PM (#4509866)
    How is this informational? It's WRONG, WRONG, WRONG.

    Look at the dig below. The root servers resolve .org. Your ISP will cache the response from the root name servers for 2 days. How often do you need to look up .org?

    In your example, because you looked up google.com, your isp doesn't need to contact the root servers, because it has already cached where the .com servers are.

    (The results of "dig +trace slashdot.org" should be here, but the lameness filter doesn't like 'junk'.)
  • Re:And... (Score:3, Informative)

    by Anonymous Coward on Tuesday October 22, 2002 @10:34PM (#4509879)
    Smart ISPs maintain separate servers for:
    -hosting DNS service for customer domains (on servers which don't recurse, but are Internet accessible), and
    -resolving DNS hostnames for downstream customers (on servers which recurse, but are inaccessible from the Internet due to name server configuration or packet filtering).

    This strategy puts hosted DNS service in a sandbox, so that those servers can have zone data that is no longer valid (or not valid yet) without conflicting with the authoritative servers. It also prevents utilization of bandwidth for DNS resolution by non-customers... which isn't really in the spirit of the 'net. For mammoth ISPs like Earthlink, it could make a noticeable difference in bandwidth usage (with a tradeoff of potentially making them seem like jerks).
  • by yo303 ( 558777 ) on Tuesday October 22, 2002 @10:38PM (#4509890)
    Or just http://1075594134; it's shorter.

    yo.
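The single-number URL works because a dotted quad is just a notation for a 32-bit integer. A quick Python sketch of the conversion, using the 64.28.67.150 address suggested earlier in the thread:

```python
# Pack the four octets of a dotted-quad IPv4 address into one 32-bit integer;
# browsers of the era accepted this decimal form directly in URLs.
def ip_to_decimal(dotted):
    a, b, c, d = (int(x) for x in dotted.split("."))
    return (a << 24) | (b << 16) | (c << 8) | d

print(ip_to_decimal("64.28.67.150"))  # 1075594134
```

That is, http://1075594134/ and http://64.28.67.150/ name the same 32-bit address, just written in different bases.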

  • Re:And... (Score:2, Informative)

    by Shanep ( 68243 ) on Tuesday October 22, 2002 @10:45PM (#4509920) Homepage
    I'm pretty sure that about 95% of the world's email and web browsing not working does not constitute "the internet working fine".

    The Internet is not the WWW. The WWW uses the Internet as its transport.

    The Internet would still function fine at the IP level it was originally designed for. The complete failure of the DNS system would merely harm users reliant on names as network addresses.

    My first email account was made up of numbers.

  • Re:Not quite. (Score:2, Informative)

    by Scott Hale ( 574751 ) on Tuesday October 22, 2002 @10:50PM (#4509937)
    Or if you're using Windows 2k/XP, you can pull up a command prompt and type 'ipconfig /flushdns' to flush the cache.
  • Well... (Score:5, Informative)

    by Find love Online ( 619756 ) on Tuesday October 22, 2002 @10:54PM (#4509954) Homepage
    Ethernet is a physical transport, while TCP/IP is a protocol. In fact, TCP (Transmission Control Protocol) sits on top of IP (Internet Protocol). There is also UDP on top of IP (though no one says UDP/IP that I've ever heard), and ICMP on IP. UDP carries short messages that are sent without establishing a connection, and ICMP is for things like ping, traceroute, etc. You can create your own protocol and use it on the Internet.

    You can use any physical layer with IP: Ethernet, a modem, a cell phone, WiFi, Bluetooth, FireWire, USB, power lines, etc. Similarly, you can use many other protocols with Ethernet or any other link, such as IPX, NetBEUI, AppleTalk, etc.

    TCP, UDP, and ICMP are tied to IP and won't work with anything else.

    Then there are higher-level protocols that sit on top of TCP or UDP: for example, DNS sits on UDP, while FTP, telnet, Gnutella and others sit on TCP. Interestingly, HTTP should work on other protocols too, as long as you can establish a link between a server and a host on it, and you have software that implements it on those other links.

    There's also IPv6, which is a newer version of IP.
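To make "DNS sits on UDP" concrete: an entire DNS question fits in one small datagram. A Python sketch that builds the wire format of a query per RFC 1035; actually sending it over UDP is omitted, and the query ID here is an arbitrary example value:

```python
import struct

def build_query(name, qtype=1, qid=0x1234):  # qtype 1 = A record
    # Header: ID, flags (RD bit set), 1 question, 0 answer/authority/additional.
    header = struct.pack(">HHHHHH", qid, 0x0100, 1, 0, 0, 0)
    # QNAME: each label is length-prefixed, terminated by a zero byte.
    qname = b"".join(
        bytes([len(label)]) + label.encode() for label in name.split(".")
    ) + b"\x00"
    question = qname + struct.pack(">HH", qtype, 1)  # QCLASS 1 = IN
    return header + question

packet = build_query("slashdot.org")
print(len(packet))  # 12-byte header + encoded name + 4 bytes of type/class
```

The whole question is a few dozen bytes, which is why DNS can use connectionless UDP for ordinary lookups and only falls back to TCP for big responses and zone transfers.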
  • Re:Not a myth (Score:3, Informative)

    by Zeinfeld ( 263942 ) on Tuesday October 22, 2002 @11:14PM (#4510062) Homepage
    Nuclear survival may not have been discussed in the RFCs, but the idea of a distributed packet-switching communication network was conceived by Paul Baran in a series of papers [rand.org] for the RAND Corporation between 1960-2.

    The Internet uses packet switching, but it was certainly not based on the RAND design. By the time the IP protocol was written, there were several packet data networks in use. The Internet was designed for ease of configuration.

    Vint Cerf, who was not the 'father of the Internet' but did manage the research budget for it, has repudiated the nuclear fable several times. If nuclear-proofing had been a goal, we would have used flood-fill routing and not built MAE West and MAE East.

  • Re:And... (Score:5, Informative)

    by mysticalreaper ( 93971 ) on Tuesday October 22, 2002 @11:37PM (#4510170)
    You say:
    You make some good points, but the Domain Naming Server system is in fact largely distributed.
    and then you say:
    DNS is hierarchical, both in naming and in server implementation.

    OK, hold on here. It's both hierarchical, implying something at the top that everything is based on, and at the same time distributed, implying that it's not dependent on some central source? Dude, you're contradicting yourself, and so you're wrong.

    The truth is that the DNS system IS hierarchical. ICANN runs the root. They say what information goes in at the highest level: dot-com, and dot-aero, and dot-useless and so on. That is why there is so much scrutiny on ICANN for operating fairly [icannwatch.org]. They are the people who decide how the DNS system will be run, because they are at the top of the hierarchy.

    "But wait!" you say, "Aren't there 13 root servers? That's distributed right there." Yes, but you are only half right. The LOAD is distributed, not the information. So you're distributing the LOAD, but the info is exactly the same on each one. And that info is controlled by ICANN.

    Oh and yes, you CAN get that one file of information that the root servers have. Really you can. Take a look for yourself. Log into ftp://ftp.rs.internic.net/domain [internic.net] and get root.zone.gz [internic.net]. If you look at that file, you'll see it's a list of all the servers for all the TLDs: .ca, .uk, .fr, .com, .net. Everything. There's also a list of all the root servers: named.root [internic.net]. There's other info there, but I'm sure you can find it yourself.
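    The named.root file mentioned above is just whitespace-separated records, so it's trivial to parse. A rough sketch (the sample lines mirror the file's format; the two root-server addresses shown were the published values at the time, but treat the sample data as illustrative):

```python
# Each named.root line maps a name to a record: name, TTL, type, data.
named_root_sample = """\
.                        3600000      NS    A.ROOT-SERVERS.NET.
.                        3600000      NS    M.ROOT-SERVERS.NET.
A.ROOT-SERVERS.NET.      3600000      A     198.41.0.4
M.ROOT-SERVERS.NET.      3600000      A     202.12.27.33
"""

def parse_hints(text):
    records = []
    for line in text.splitlines():
        if not line or line.startswith(";"):  # the real file uses ';' comments
            continue
        name, ttl, rtype, rdata = line.split()
        records.append((name, int(ttl), rtype, rdata))
    return records

hints = parse_hints(named_root_sample)
# The NS records for "." are exactly the list of root servers:
roots = [rdata for _name, _ttl, rtype, rdata in hints if rtype == "NS"]
```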
  • by 0x0d0a ( 568518 ) on Tuesday October 22, 2002 @11:40PM (#4510189) Journal
    The caching nameserver pdnsd does something like this -- if it can't manage to get a new record, it uses the old (stale) copy. So you have a cached copy of Slashdot's NS for a long, long time.

    Even if the root DNS went down, that cached copy would only go stale if Slashdot's DNS servers happened to move in the meantime.
  • Re:And... (Score:2, Informative)

    by EelBait ( 529173 ) on Tuesday October 22, 2002 @11:44PM (#4510206)
    Not off topic at all. In fact, you can send an email to an address like that, as long as that IP address is a mail exchanger. Normally, when you send an email to someone@domain.org, there is actually a machine named something like mail.domain.org that handles email. The DNS manages an "MX" record that directs email destined to domain.org to mail.domain.org. However, if domain.org is actually the name of a machine that accepts email, no MX record is needed. By the same token, if 123.123.123.123 is the IP address of your mail server, it will work just fine.
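    The delivery rule described above - use the MX target if one is published, otherwise fall back to the domain (or IP literal) itself - can be sketched in a few lines. This is a toy model of the selection logic, not a real mail client; the names are the ones used in the comment.

```python
def pick_mail_host(domain, mx_records):
    """Return the host that should receive mail for `domain`.

    mx_records: list of (preference, target) tuples from a DNS MX lookup,
    possibly empty. An empty MX set means the domain itself is tried
    directly, which is why mail to someone@123.123.123.123 can work.
    """
    if mx_records:
        return min(mx_records)[1]  # lowest preference value wins
    return domain  # implicit MX: deliver to the domain's own address

# domain.org publishes an MX pointing at its mail machine:
pick_mail_host("domain.org", [(10, "mail.domain.org")])  # -> "mail.domain.org"
# a bare host or IP literal with no MX record still accepts mail directly:
pick_mail_host("123.123.123.123", [])  # -> "123.123.123.123"
```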
  • Re:And... (Score:3, Informative)

    by micheas ( 231635 ) on Wednesday October 23, 2002 @02:23AM (#4510727) Homepage Journal
    The article does not mention which of the servers
    The article says that Paul Vixie's DNS server was one of the thirteen to survive.

    I would guess that someone who runs one of the root servers would have a pretty good grasp of the costs of running a root name server.

  • by Anonymous Coward on Wednesday October 23, 2002 @03:11AM (#4510901)
    I'd say this just goes to show how reliable the root name servers are.
    I'd say this just shows how reliable the Washington Post is.

    If you believe this article [com.com] on news.com [com.com], it looks more like a storm in a glass of water.

    Quote: the peak of the attack saw the average reachability for the entire DNS network dropped only to 94 percent from its normal levels near 100 percent.
  • by AmunRa ( 166367 ) on Wednesday October 23, 2002 @05:22AM (#4511204) Homepage
    People should really read up on how things work before they start posting like they know _all_ about DNS; so here are a few facts:

    1. every DNS zone (including the . root zone) has a TTL (time to live) - the amount of time you are allowed to keep the results of a query. The idea is that if a server looks up a zone, e.g. foobar.com, it doesn't have to look again until the TTL runs out. This is typically about 24 hours for an average .com domain (but can be set to whatever the controller of the domain's DNS likes)

    2. The TTL of the . root zone is* 6 months. This means an ISP's server only has to recheck a top-level domain (.org, .com, .net) every 6 months. So if all the top-level DNS servers were out for, say, a day, 99% of the other servers out there wouldn't even notice, as they wouldn't need to query the roots for, on average, another 3 months. Sure, if the root servers were down for longer, the TTL would run out on more and more DNS servers, but in principle the root servers would have to be down for a sustained time to significantly affect the Internet's DNS.

    * - the TTL of the root domains has at the moment been changed to 3 hours, presumably because they are changing the top-level infrastructure and need to have the changes propagate quickly.

    3. this is why all ISPs with correctly set up DNS servers would not have noticed anything. If you run your own DNS server on your home box, and don't run it all the time, you'll be querying the root servers the first time you do a DNS lookup after you switch your machine on, so you probably would notice something. Lesson - use your ISP's DNS server to resolve domains!
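    The TTL mechanism in points 1 and 2 is easy to model: a resolver cache is just a map from name to (answer, expiry time), and entries keep being served without touching upstream servers until they expire. A toy sketch (the .com server name and the 6-month TTL are illustrative, matching the figures above):

```python
import time

class DnsCache:
    """Toy resolver cache showing why long root-zone TTLs hide short outages."""

    def __init__(self):
        self._store = {}  # name -> (answer, expires_at)

    def put(self, name, answer, ttl, now=None):
        now = time.time() if now is None else now
        self._store[name] = (answer, now + ttl)

    def get(self, name, now=None):
        now = time.time() if now is None else now
        entry = self._store.get(name)
        if entry and entry[1] > now:
            return entry[0]      # still fresh: answered locally, roots untouched
        return None              # expired or absent: must re-query upstream

cache = DnsCache()
# .com delegation cached with a (hypothetical) 6-month TTL, as in point 2:
cache.put("com.", "a.gtld-servers.net", ttl=6 * 30 * 86400, now=0)
cache.get("com.", now=86400)           # one day later: served from cache
cache.get("com.", now=7 * 30 * 86400)  # past the TTL: None, must hit the roots
```

    During an hour-long root outage, only lookups whose cached delegations had just expired would fail, which matches the "94 percent reachability" figure quoted elsewhere in the thread.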

  • Re:And... (Score:5, Informative)

    by Alioth ( 221270 ) <no@spam> on Wednesday October 23, 2002 @08:17AM (#4511607) Journal
    Only if you're running older versions of BIND. Current versions of BIND can easily be chroot-jailed and run as a user that isn't root (even the old, vulnerable versions could be run as non-root - a lot of the problem is that Red Hat 6 installed BIND running as root by default).

    The root servers run BIND.

  • Re:And... (Score:3, Informative)

    by Doc Hopper ( 59070 ) on Wednesday October 23, 2002 @09:27AM (#4511985) Homepage Journal
    Darnit, I have mod points, but I have to contribute to the discussion!

    Virtualization of computing resources is going very mainstream these days. You have products such as VMWare, competitors for Sun hardware, and even the staunch favorite, User Mode Linux.

    I'm running DNS right now in a UML sandbox. Although chroot is an excellent security policy for services, if you want true isolation from the main system in case of break-in, it's hard to beat a UML. There is even a special image provided at the UML home page [sourceforge.net] which runs DNS, and only DNS. It's very handy, and is designed to run while taking only 16 MB of RAM.

    Suffice to say, I'm very impressed. For running critical services which, in the past, have required a chrooted environment (such as DNS), user mode linux is a powerful alternative.

    Now, would it have had anything to do with helping stop a DOS attack? Nope, but I'm just following the thread here :)
