News

WCArchive sets new Record

dcs writes "The hardware upgrade for wcarchive came not a single second too soon. In it's first full day of operation with the new hardware, a new record was set... 969 gigabytes of traffic was generated, thanks mainly to the recent release of RedHat 6.0. I'm looking forward to the first terabyte in day mark, but it seems an upgrade on network capacity is due before that can happen. "
  • by Anonymous Coward
    Hey, I find it kinda fascinating too to know what such large sites run. Someone mentioned in a previous thread, though, that download.com runs Netscape Communications/Solaris 1.12. I know that Yahoo! uses FreeBSD (an old copy of the FreeBSD newsletter had an article about Yahoo! and the different OSes they tried when they were starting up). Dejanews and Slashdot are Linux based, and Hotmail uses FreeBSD and Solaris (from the Kirch paper).

    That's pretty much all I know about who uses what.
  • by Anonymous Coward
    No wonder you're posting as an AC!

    windows "nt" could never handle this load!

    In microsoft's "best practices" documentation, they recommend a GROUP of machines more powerful than ftp.cdrom.com, just to serve 6-8 GB/day.

    Install windows "nt" on ftp.cdrom.com, and watch it crash, just like the debacle that occurred when microsoft tried to move hotmail from Unix to windows "nt"!
  • by Anonymous Coward
    That's for serving 6-8 GB/day as typical HTTP requests from a web server, NOT AN FTP SERVER. The load placed on a machine or set of machines in these two roles is very different.

    I've got 4 Linux machines with dual 300 MHz PIIs and half a gig of RAM each, using round-robin DNS to handle a very busy web site, and it doesn't serve anywhere near 1000 gigs a day, yet it needs hardware that is much more powerful than cdrom.com's, precisely because web serving is a much harder job than FTP serving.
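
    A minimal sketch of how a client sees that round-robin in action: several A records are published for one name, each lookup returns the full set, and the answer order typically rotates between queries, which is what spreads connections across the boxes. The hostname below is just a placeholder, not the poster's site.

        /* Illustrative only: print every A record returned for a name, to watch
         * round-robin DNS rotate the answer order between runs.
         * "www.example.com" is a placeholder, not a real load-balanced site. */
        #include <netdb.h>
        #include <sys/socket.h>
        #include <netinet/in.h>
        #include <arpa/inet.h>
        #include <stdio.h>
        #include <string.h>

        int main(void)
        {
            struct addrinfo hints, *res, *p;
            memset(&hints, 0, sizeof hints);
            hints.ai_family = AF_INET;          /* IPv4 A records only */
            hints.ai_socktype = SOCK_STREAM;

            int err = getaddrinfo("www.example.com", "http", &hints, &res);
            if (err != 0) {
                fprintf(stderr, "getaddrinfo: %s\n", gai_strerror(err));
                return 1;
            }
            for (p = res; p != NULL; p = p->ai_next) {
                char buf[INET_ADDRSTRLEN];
                struct sockaddr_in *sin = (struct sockaddr_in *)p->ai_addr;
                inet_ntop(AF_INET, &sin->sin_addr, buf, sizeof buf);
                printf("%s\n", buf);            /* order usually rotates per query */
            }
            freeaddrinfo(res);
            return 0;
        }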

  • by Anonymous Coward
    Are you taking into account that they only have to pay for what they download? i.e., TCP ACKs.

  • I wonder if MS has any real-world statistics concerning how much of a load NT has been able to handle in the past. I mean, what's download.com run on? Is it NT?

    Forget Mindcraft, this is where it really counts.

    ----

  • No, it shows that FreeBSD is capable, stable, powerful, and robust.

    How does a FreeBSD machine's stability and power somehow prove something about Linux or NetBSD? It proves nothing more about Linux than an NT box doing the same thing would.
  • As of 7:20 pm EST:

    Welcome to wcarchive - home FTP site for Walnut Creek CDROM.
    There are currently 3111 users out of 5000 possible.

    Doesn't look slashdotted to me.

  • Don't forget that Slashdot itself runs Linux and Apache and handles about half a million hits a day, much of that dynamically generated. By my calculations, at peak times, Slashdot tops 10 hits/sec.


    --Phil (Way to go, Rob!)
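
    For scale, the average behind those numbers is easy to work out (the half-million figure is the one quoted above, purely a back-of-the-envelope check):

        /* Quick check of the Slashdot figures above: half a million hits/day is
         * under 6 hits/sec on average, so a peak above 10/sec is quite plausible. */
        #include <stdio.h>

        int main(void)
        {
            double hits_per_day = 500000.0;               /* figure quoted above */
            double avg_per_sec  = hits_per_day / 86400.0; /* ~5.8 hits/sec */
            printf("average: %.1f hits/sec (peaks reported above 10/sec)\n",
                   avg_per_sec);
            return 0;
        }
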
  • It doesn't -prove- anything. It's merely an impressive feat. I have no doubt that another OS could achieve a similar accomplishment, however. Regardless, it is certainly a testament to FreeBSD's performance under extreme load (not just raw speed, but functionality too).
  • "Bear in mind though, that all _independent_ testing has shown exactly the opposite to be true"

    Proving that Linux is faster than NT on a desktop box with 64 megs of RAM doesn't really satisfy the statement "all testing has shown exactly the opposite."

    And if you are referring to Smart Reseller's test, it was hardly independent. When I spoke with the authors of the test, they readily admitted to being biased against NT and to going out of their way to cripple it.
  • That 17% was server OS sales for 1998. This is not 17% of total server installations.

    Microsoft and Netscape dominate the Intranet web server market. Apache has only a small minority of this market.

  • But a 486 is hardly what people are running NT on.

    The Oracle benchmarking that was posted to Slashdot a couple of weeks ago was also done in a biased manner.

    By selecting hardware which is known to give good performance on Linux and poor performance on NT, the test is just as biased as the Mindcraft study.

    Which is fine, but don't pretend that they are unbiased and independent when they are not.

    Oh, and BTW, all of the production servers at my company are running SMP. The intranet servers are quad-processor Proliants, and the Oracle boxes are Sequents with 16 processors.

    How many Linux servers do you see in production environments at Fortune 500 companies?

    Why benchmarks? The claim made by the original poster was that the new record proved the stability and usefulness of the free Unices. Now, ftp.cdrom.com runs FreeBSD. Obviously, the new record can only prove anything at all about FreeBSD. It doesn't prove that Linux is _not_ just as capable either, but that's not the point made here.

    The original poster was trying to make a point that seems to have gotten missed. You see, the original poster was saying that Free Unices are in fact capable of handling extremely heavy load. FreeBSD just happens to be a free Unix variant. Thus, this provides an example supporting the claim that 'Free Unices can handle really heavy load.'

    Everyone seems intent on picking them apart and treating them all as totally and completely separate entities. However, that wasn't how the original post treated them.

    If you put product "A" to a certain test, this proves absolutely nothing about product "B".

    Perhaps not, but that isn't the comparison that was made. A much more apt comparison would be something like this:

    You put product A, B, and C all together, and packaged them as product Foo. Now, Product B, part of Foo, does something really good. You now say, see, this is an example that shows that product Foo is really good.

    Now, replace A with Linux, B with FreeBSD, and C with NetBSD. Now replace Foo with Free Unices. Perhaps this will clear up the intended point of the original poster?
  • Anyone who ever claims that the free Unices aren't up to handling heavy load ought to see this.

    I think this proves very conclusively that the free Unices (Linux, NetBSD, FreeBSD, etc.) are all very capable, stable, powerful, and robust. I'd love to see a box running a commercial OS try to match this. ;-)
  • It's now stable as a rock on Alphas too :)

    Is it? Neat. ;-)

    I've heard lots of mixed reports on how far the Alpha port has progressed; last I heard it was still fairly beta, but improving rapidly. The SPARC port, though, was still pre-alpha last I checked...
  • I love Linux as much or more than the next guy, and NetBSD sounds pretty cool, but how the heck does this record prove anything conclusively about NetBSD and Linux?

    You misunderstand my point. I recently suggested at work that we use one of the various free Unices for a couple of servers. My suggestion was shot down, with the comment that none of the free Unices had ever been proven in a high stress, high load situation.

    This is, in my opinion, quite clearly an example where one of them has.

    I love the free Unices. FreeBSD is stable as a rock on Intel hardware (though, unfortunately, not portable for crap yet). NetBSD has the stability of FreeBSD, along with the ability to run on damn near every single architecture available (even more than Linux). Linux just plain rocks, with its stability, features, and amazingly fast evolution.

    I also believe in using the right tool for the right task, and I often don't bother to differentiate between which one is better, or anything else. They're all Free Unices to me, each with its own strengths and weaknesses.

    This says a lot about FreeBSD, and the potential for Free Software in general. Don't make more out of it than there is to be made, though.

    This is exactly my point. This is an example that shows very clearly that the Free Unices, and Free Software in general, *can* work, and *does* work. I'm not here to argue the specifics of each OS, or anything like that.

    I look at this from the point of view that, when I show my boss evidence like this, all of the Free Unices win, and all of them become better recognised for their abilities by him.

    It makes Free Software/Open Source advocates look intellectually dishonest.

    I disagree. I see it as using a single example to prove a concept, as opposed to 'My OS is better than your OS.'
  • Why do we need to wait for more network upgrades for a terabyte in a day? Where is the bottleneck in this situation? Do we know? Or are we just spouting out that 'oh, it's the network that's slow' because it sounds good? Of course, that was the last-second comment on the post, so that's probably the case.

    I don't see that there would be much problem in boosting the total gigabytes served by 31 GB! Only about 3% more! But heck, it made a great line to end the story on (yeah, OK).

    yacko
  • David Greenman, the Co-founder/Principal Architect of the FreeBSD Project [freebsd.org], just posted a new picture of the new wcarchive; it is now available here [cdrom.com].

    Updated hardware description is also available here [cdrom.com].

    It would be amazing if someone could pull off some nice effects with The Gimp [gimp.org] and make a cool-looking "ftp.cdrom.com" theme for Windowmaker or something...
  • How many Linux servers do you see in production environments at Fortune 500 companies?

    Probably a lot more than you would believe.

    Nobody takes an assay of "servers" if it's "just that box in the corner there". The only time people worry about their servers is when they're not doing what they're supposed to.


    Chas - The one, the only.
    THANK GOD!!!

  • That's for serving 6-8 GB/day as typical HTTP requests from a web server, NOT AN FTP SERVER. The load placed on a machine or set of machines in these two roles is very different.

    I've got 4 Linux machines with dual 300 MHz PIIs and half a gig of RAM each, using round-robin DNS to handle a very busy web site, and it doesn't serve anywhere near 1000 gigs a day, yet it needs hardware that is much more powerful than cdrom.com's, precisely because web serving is a much harder job than FTP serving.

    You are assuming FreeBSD and Linux have identical load-handling patterns - they don't. It is not inherently harder to serve static HTML pages than FTP files, and if one used a special lightweight HTTP server (ftp.cdrom.com uses a special lightweight FTP server), then I do not think it would be unfeasible to serve similar amounts of HTTP data.

    In order to make ftp.cdrom.com capable of transferring that much data, however, sendfile() was needed. The FreeBSD sendfile API is, if I've understood correctly, different from the Linux one, in order to be able to support HTTP. If you wanted to serve web data competitively from a Linux machine, I think you would want to implement a similar API for Linux.

    You'd probably also need to do a number of mods to the Linux VM system if you want similar performance to FreeBSD; however, I can't state that conclusively, as it is a long time since I've seen any benchmarks between the two.

    Eivind.
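
    A rough idea of what that zero-copy path looks like, for the curious: the sketch below uses the Linux sendfile(2) signature to push a file down an already-connected socket without copying it through user space. It is only an illustration, not wcarchive's actual code; FreeBSD's sendfile(2) differs (it takes separate offset/nbytes arguments plus an sf_hdtr for prepended headers/trailers, which is the part that makes it convenient for HTTP).

        /* Illustrative only: send one file over a connected socket with the
         * Linux sendfile(2).  FreeBSD's sendfile(2) has a different signature
         * (separate offset/nbytes plus an sf_hdtr for header/trailer iovecs). */
        #include <sys/sendfile.h>
        #include <sys/stat.h>
        #include <fcntl.h>
        #include <unistd.h>

        /* Send the whole file at `path` down `sock` without a user-space copy.
         * Returns 0 on success, -1 on error. */
        int send_whole_file(int sock, const char *path)
        {
            int fd = open(path, O_RDONLY);
            if (fd < 0)
                return -1;

            struct stat st;
            if (fstat(fd, &st) < 0) {
                close(fd);
                return -1;
            }

            off_t offset = 0;
            while (offset < st.st_size) {
                ssize_t sent = sendfile(sock, fd, &offset,
                                        (size_t)(st.st_size - offset));
                if (sent <= 0) {          /* error, or peer went away */
                    close(fd);
                    return -1;
                }
            }
            close(fd);
            return 0;
        }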

  • I've always gotten horribly slow connections from them; too many people are always hitting it (mostly gamers, I think). It was great back in '95 or so, but that place is too crowded now...

    what did Yogi Berra say ... "No one goes to that restaurant anymore - it's too crowded."
  • If I remember right, Microsoft had the record for a time, after releasing Windows 95. That one was set by a big bunch of servers, though, not a single machine. The traffic lasted for several days; I don't remember how much it was.

    Later, when cdrom.com moved their server, they copied all the data over a 100 Mbit connection and got the new record. I don't remember how much this was, either.

    I haven't heard of anybody breaking this record before now.
  • OK, cdrom.com had the record around December 24th or so, 1998: 820 GB in a single day.

    I can't recall what the Microsoft record was.
    --

  • Is the Smart Reseller test the same that was published on ZDNet?

    If so, the configs were hardly "out of the box" - the Linux box in the ZD test was heavily tuned by a member of the Samba team. Furthermore, ZD didn't publish this information, whereas at least Mindcraft admitted that they tuned the hell out of the NT box.

    --
  • I used to work for CRL up until 4/1/99. I've stood and drooled over that machine on more than one occasion. ;)

    I'm not too surprised that it continues to break its own records. CRL is a Tier-1 backbone provider, so probably about 1/4 of the traffic is from within the same network, and the other 3/4 goes across the NAPs on pipes dedicated to wcarchive.


    --Jason Bell

  • Don't forget that Slashdot itself runs Linux and Apache and handles about half a million hits a day, much of that dynamically generated.

    With respect to the free Unices and stability, why does Slashdot disappear occasionally? Is it network problems? Bugs in Slashdot code? Or does Linux hose up?

    Just curious...

  • Mindcraft's test was a bunch of BS; they were very favorable to NT and not to Linux. Anyone here have an MCSE? We could do our own Linux/NT test with the exact same machine and see which one does better. And how about a real-world test: Linux can support 200+ users on a network, while NT has trouble with more than 40. I've seen NT networks drop where a Linux network wouldn't have even noticed the workload.
  • Huh? Did you read the article? Red Hat is the reason for the bandwidth, not the reason the machine can handle the bandwidth.

    Although I do agree that the article should have mentioned that wcarchive is a FreeBSD box. Since nobody else serves that sort of bandwidth, it's hard to see if Linux would be able to keep up (but it's certainly an experiment worth trying).
  • It's now stable as a rock on Alphas too :)
  • by SandHawk ( 15347 ) on Sunday May 02, 1999 @02:36PM (#1907471) Homepage
    By my rough calculations, their net connection should be costing them about $750,000/yr at their average rate of 800GB/day. (I looked at their ISP's pricing, which is about the best I've seen.) They must sell a lot of CD-ROMs to be able to write that off as advertising and/or good-will expenses.

    Actually, their whole architecture seems strange. This seems like something much better handled by multiple machines with connections to different ISPs. Oh, but they're colocated in their ISP's machine room....

    I'd love to see this kind of information (bandwidth, machine, OS) and more (time-of-day loading curve, ...) for all the big data providers, whoever they are... (download.com? yahoo? aol? conxion? ...) They don't seem to brag about it much. If they have little info pages like cdrom.com's, I haven't been able to find them.

    I don't know why, but this kind of stuff just grabs me. Lifestyles of the bandwidth-rich and cache-famous? Packet-Transfer Pr0n?
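
    The arithmetic behind that estimate is easy to sanity-check. The figures below are the poster's (800 GB/day, $750,000/yr), not a published price list; the snippet just converts them into an average bit rate and the per-Mbit/s monthly price they imply.

        /* Back-of-the-envelope check of the numbers quoted above (the poster's
         * figures, not official ones): 800 GB/day works out to roughly a
         * 74 Mbit/s average, and $750k/yr to roughly $840 per Mbit/s/month.
         * Note that peaks sit well above the average, so the real pipe (and
         * bill) has to be bigger than the average rate alone would suggest. */
        #include <stdio.h>

        int main(void)
        {
            double gb_per_day   = 800.0;                      /* quoted average */
            double dollars_year = 750000.0;                   /* quoted cost */

            double avg_mbps = gb_per_day * 1e9 * 8.0 / 86400.0 / 1e6;
            double price    = dollars_year / 12.0 / avg_mbps; /* $/Mbit/s/month */

            printf("average rate : %.1f Mbit/s\n", avg_mbps);
            printf("implied price: $%.0f per Mbit/s per month\n", price);
            return 0;
        }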

  • > I can't recall what the Microsoft record was.

    Probably people downloading some "minor bugfixes" for Win95.

    I believe Microsoft's record was for IE4, or possibly IE5 most recently.
  • Umm, my 5.9 Grand Cherokee does 0-60 in 7 seconds; I don't think a Hyundai can beat that.
    (a little sensitive about my car :-)
  • I love Linux as much or more than the next guy, and NetBSD sounds pretty cool, but how the heck does this record prove anything conclusively about NetBSD and Linux?

    This says a lot about FreeBSD, and the potential for Free Software in general. Don't make more out of it than there is to be made, though.

    It makes Free Software/Open Source advocates look intellectually dishonest.
  • My point was that it's a uniprocessor machine, and that benchmarking (and real-world deployment) has shown Linux to pretty conclusively outperform NT on uniprocessor machines. Quite likely FreeBSD would outperform Linux in some server benchmarks, but that's beside the point.

  • Well, Linux currently has 17% of the server market, and is estimated to be growing at 25% a year for the next few years...

    If you check http://www.netcraft.net/survey/ you'll see that Apache massively dominates the web server market with around a 60% share. Being open source rather than commercially backed obviously hasn't stopped it from putting a huge dent in Microsoft's sales.

    I'm sure Microsoft wishes that Linux _was_ a traditional single company commercial vendor, since that would give them a target to shoot at.
  • As Mindcraft's web site says (paraphrasing), "you identify your goals, we do the testing to satisfy them". Given that the paying customer was identified as Microsoft, it should come as no surprise that the goal was to show NT being faster than Linux. Bear in mind, though, that all _independent_ testing has shown exactly the opposite to be true, certainly for uniprocessor machines such as the ftp.cdrom.com server.

    There have yet to be any standard SMP benchmarks (TPC-D, SPECweb96, etc.) published, although an unofficial Oracle benchmark indicated Linux beats NT there also.

    Also bear in mind that the "Mindcraft" testing has since been shown to have been performed in a Microsoft lab (the "Mindcraft" e-mails originated from a Microsoft domain)...

    Ultimately, all the "Mindcraft" tests really proved is that Microsoft is starting to take Linux _very_ seriously as a threat to NT - not surprising given the Linux server market share and growth numbers.

    Microsoft is attempting to recover from the PR nightmare resulting from this testing by redoing the tests with "unimpeachable" Linux configuration expertise supplied by Linus and Alan Cox ... but as those two have indicated, this is a complete farce, and you can expect the "retest" results to be as information free as the first ones.

  • How many net servers in the real world run off SMP boxes? Most ISPs use server farms of uniprocessor machines - much better bang for the buck. No one's denying that Linux's SMP performance could be improved, but exactly how it compares to NT (which has its own set of problems) is really unknown at this stage due to lack of fair testing.

    The Oracle test I mentioned took one approach to fairness in testing both NT and Linux out of the box with no tuning on either side.

    Given how artificial benchmarks are, the real-world observations of NT vs Linux performance should probably be given more weight anyway. A quad Xeon box is hardly what people are running Linux servers on - many are running on 486s! Try that with NT...


  • by SpinyNorman ( 33776 ) on Sunday May 02, 1999 @08:56PM (#1907479)
    You can use this site:

    http://www.netcraft.com/cgi-bin/Survey/whats

    to find out what server and OS are being used by a given domain name. Try egg.microsoft.com !

    This works by recognising the characteristic signatures of the different OSes' TCP/IP stacks as they respond to a bunch of weird packets.
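
    That OS guess really does need raw TCP/IP probing, which is beyond a short example. The other half of the answer, the web server software, usually just falls out of the HTTP "Server:" response header; here is a rough, illustrative way to fetch it (the hostname is a placeholder, not a recommendation of what to probe):

        /* Illustrative only: connect to port 80, send a HEAD request, and print
         * the "Server:" header.  This identifies the HTTP server software, not
         * the OS; the OS fingerprinting requires raw TCP/IP stack probing. */
        #include <netdb.h>
        #include <stdio.h>
        #include <string.h>
        #include <strings.h>
        #include <unistd.h>
        #include <sys/socket.h>

        int main(void)
        {
            const char *host = "www.example.com";   /* placeholder hostname */
            struct addrinfo hints, *res;
            memset(&hints, 0, sizeof hints);
            hints.ai_socktype = SOCK_STREAM;
            if (getaddrinfo(host, "80", &hints, &res) != 0)
                return 1;

            int s = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
            if (s < 0 || connect(s, res->ai_addr, res->ai_addrlen) < 0)
                return 1;

            char req[256];
            snprintf(req, sizeof req, "HEAD / HTTP/1.0\r\nHost: %s\r\n\r\n", host);
            write(s, req, strlen(req));

            char buf[4096];
            ssize_t n = read(s, buf, sizeof buf - 1);  /* headers usually fit in one read */
            if (n > 0) {
                buf[n] = '\0';
                for (char *line = strtok(buf, "\r\n"); line != NULL;
                     line = strtok(NULL, "\r\n"))
                    if (strncasecmp(line, "Server:", 7) == 0)
                        printf("%s\n", line);          /* e.g. "Server: Apache/..." */
            }
            close(s);
            freeaddrinfo(res);
            return 0;
        }
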
  • Your reply is quite correct. Furthermore, I wrote "recent release of Red Hat 6.0", which should have made it even more clear.

    Alas, a second paragraph existed, which did mention FreeBSD in as least offensive a manner as I could manage. I guess CmdrTaco did not like my slamming of Windows instead... :-)
  • So what if that machine handled 900 MB per day without breaking a sweat? It would take 1024 days to reach the 960-gigabyte mark.
  • What wcarchive needs now is a nice gigabit connection to an OC-12 or so. It's actually mentioned in /archive-info/slow.txt, too. At that point, wcarchive will truly be the best. (It already is, but its link is a bit slow for its popularity ;)
  • Didn't their "tests" "prove" NT was *faster* than Linux? A Hyundai is "faster" than a Jeep Cherokee, but guess which is more powerful. And after all, power counts more than speed in 90% of cases.
  • It depends on your definition. That server simply can't be overloaded with the relatively low bandwidth it has. It's simply a "stuffed pipe."
  • I doubt Microsoft considers Linux a threat, at least in the sense I'm conveying (an actual entity actively attempting to cause MS to lose business).

    Microsoft more likely considers it a frustration or an obstacle, because at this point, that's what it is. I don't think Linux powers more servers out there than NT yet, and I doubt MS can actually perceive it as a threat; it's an operating system, not a commercial venture. It might consider a commercial vendor a threat, but I don't see that either.

    Regardless of whether they're wrong, I don't think they really see it as a threat.
  • You can see what kind of machine it is at ftp://ftp.cdrom.com/config.txt [cdrom.com] and see a picture of the actual machine at ftp://ftp.cdrom.com/archive-info/wcarchive.jpg [cdrom.com]
  • Why benchmarks? The claim made by the original poster was that the new record proved the stability and usefulness of the free Unices. Now, ftp.cdrom.com runs FreeBSD. Obviously, the new record can only prove anything at all about FreeBSD. It doesn't prove that Linux is _not_ just as capable either, but that's not the point made here.

    If you put product "A" to a certain test, this proves absolutely nothing about product "B".

    Sheesh.
    Patrick

"The four building blocks of the universe are fire, water, gravel and vinyl." -- Dave Barry

Working...