Supercomputing Education Technology

Purdue Plans a 1-Day Supercomputer "Barnraising"

An anonymous reader points out an article which says that "Purdue University says it will only need one day to install the largest supercomputer on a Big Ten campus. The so-called 'electronic barn-raising' will take place May 5 and involved more than 200 employees. The computer will be about the size of a semi trailer. Vice President for Information Technology at Purdue Gerry McCartney says it will be built in a single day to keep science and engineering researchers from facing a lengthy downtime." Another anonymous reader adds "To generate interest on campus, the organizers created a spoof movie trailer called 'Installation Day.'"

  • The Amish are great "barn raisers"; maybe they can help.
    • by mikael ( 484 )
      Are the builders of this system required to wear beards and black hats?

      I've seen the websites where the Amish organise barn-raising parties. It's quite impressive. The womenfolk make sandwiches and other light meals, while the menfolk completely construct and assemble the parts to make a three or four floor structure. Presumably they can construct a house in the same amount of time?

      • They can frame a house in about the same amount of time. There is a lot of work to get the foundation ready and to finish the outside. A normal 4-6 man crew can frame a 3,000 square foot house and get it weather-tight in about a week; the Amish do about the same in a day or two.
      • Wait, you're alleging the Amish(!) use a website to organise barn-raisings? Linky to prove?
        • by mikael ( 484 )
          No, I've just visited photography web-pages dedicated to the craft of barn-raising:

          Amish Barn-Raising [amishphoto.com]

          A discussion [ittoolbox.com]

          I'm amazed that so many people can be coordinated in such a confined space. There's a new building going up on my local campus, and there are never more than 10 workmen on site at any time; even then, they are always working in separate areas, operating machinery (elevators, cranes, clamps for plate glass).
  • Biggest on Big10 campus is a lie.

    The article lists BigRed at Indiana (#43 on Top500) based on a technicality. But even the technicality is incorrect. The ABE cluster at NCSA@UIUC (#14 on Top500) is literally on the UIUC campus.

    I doubt the Purdue one will beat Abe on the Top500 list.
    • Re: (Score:2, Informative)

      by navygeek ( 1044768 )
      Someone needs to go back and read (re-read?) the article. It says ABE is the biggest on a Big Ten campus; Purdue's will be the largest not connected to a national center. Semantics? Maybe, but it doesn't invalidate the claim.
    • Purdue does have (at least) one very cool thing UIUC doesn't have - an operational nuclear reactor. Sure, sure UIUC may still have the facilities, but it's under a decommission order and will be shut down soon.
  • Making fun of the Amish on the internet is like mooning a blind guy.
  • Dumb (Score:4, Insightful)

    by Spazmania ( 174582 ) on Thursday May 01, 2008 @05:24PM (#23268962) Homepage
    built in a single day to keep science and engineering researchers from facing a lengthy downtime

    Sounds like poor planning to me. The correct way to keep science and engineering researchers from facing a lengthy downtime: don't turn off the old computer until the new one is running and tested.
    • by maxume ( 22995 )
      I'm sure if you built them a redundant building with proper environmental control systems (that is, cooling), that they would be happy to keep everything online while they are putting in the new one.
    • by Corbets ( 169101 )
      Sure. If you have lots of space, enough resources to cover the cost of maintaining dual systems, etc. etc. etc.

      Sounds to me like you've never had to upgrade servers in an already overloaded data center. ;-)
      • Sounds to me like you've never had to upgrade servers in an already overloaded data center. ;-)

        Sure I have. I solved the problem by moving to a data center that wasn't overloaded.

        When you're installing that expensive a piece of hardware, you don't try to fit it to the environment; you fit the environment to it.
    • I am a participant in this event. Spazmania, that is the usual modus operandi; however, there is limited space, and there is nowhere to put one thousand rack-mounted machines while the previous 75+ RACKS are emptied. Secondly, all users are QUITE aware, I'm sure, that their jobs are being temporarily placed on hold so that we can install this cluster. Sometimes, my friend, out here on the bleeding edge, exceptions must be made.
    • by mscman ( 1102471 )
      Well if we had the space, we would have. Unfortunately, our data center is rather small and fairly stressed on cooling and power. This was the only way possible to fit the new cluster in.
      • Like I said elsewhere in the thread: you'd have been better off sacrificing a few machines in the cluster and spending the money improving the space instead. Reliable computing starts with reliable infrastructure. If you're running that close to the edge then you don't have reliable infrastructure.

        • by mscman ( 1102471 )
          And as others have said elsewhere, if you would like to come convince Purdue's Board of Trustees along with our CIO to give us that money, we would be happy to. The reality is that everyone wants better computers, but can't afford a new facility. If you're interested in making a large donation... :)
          • if you would like to come convince Purdue's Board of Trustees along with our CIO to give us that money, we would be happy to

            No thanks. I was in my element at the DNC, but university politics are deadly. ;)

        • First, the amount of time wasted by trying to do this incrementally would be a much bigger hassle than doing this all at once. The last cluster we built a rack at a time was 512 nodes (16 racks plus one network rack), and required about 2 months to construct. Even if we could achieve something like a zero-downtime switchover, the months it would take to assemble the systems and test them out incrementally would be completely unacceptable to our customers.

          With the 1 week downtime, we were able to clean out
  • TFA mentioned the Dell 2*quad Xeon hardware, but failed to mention what kind of storage will be attached to it, what kind of network(s) they plan to use to rope it all together, what OS & filesystem they plan to use, & other stuff that would be fun to know.

    If they don't tell us what they're using, how can we have flame wars over whose technology really should have been used in it? We'll be stuck with nothing to do but make up bad car analogies.

    It would be like, "GM is announcing a barnraising
    • You can make a pretty good guess at interconnect based on the cost (if it's there, I don't care enough to read the article) ... remember to add a factor of 2 or 3 to the price to account for the edu discount...
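
      To spell out that method: every number below is an invented placeholder (the article doesn't give a price, a node count, or vendor list prices), but the arithmetic goes something like this:

      # Back-of-the-envelope interconnect guess from a published price tag.
      # EVERY figure here is a hypothetical placeholder, not from TFA.
      EDU_DISCOUNT_FACTOR = 2.5      # "add a factor of 2 or 3" -- split the difference
      published_price = 2_000_000    # hypothetical press-release sticker price, USD
      node_count = 1000              # hypothetical node count
      list_price_per_node = 3_500    # hypothetical dual quad-core Xeon 1U list price, USD
      ib_cost_per_node = 1_000       # hypothetical HCA + switch port + cable, USD

      approx_list_budget = published_price * EDU_DISCOUNT_FACTOR
      leftover = approx_list_budget - node_count * list_price_per_node
      print(f"Implied non-node budget at list prices: ${leftover:,.0f}")
      if leftover >= node_count * ib_cost_per_node:
          print("Enough headroom for InfiniBand on every node.")
      else:
          print("Probably gigabit Ethernet, maybe IB on a subset.")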
    • I'd say it's highly likely that the interconnect will be Infiniband. As for storage... when you get that big, I think there are generally "service nodes" that connect to the storage systems on behalf of the compute nodes; I'm just gonna go out on a limb and say they'll use either Lustre or NFS. I wish there was more information somewhere...
      • I'd guess either Lustre or gFarm -- I really don't see NFS working. Maybe NFS4 does more than I think it does?
        • by Troy Baer ( 1395 )
          NFSv3 can scale this big for home directories if you spread the namespace and load across several beefy servers, especially if you also train your users to stage data in and out of parallel file systems (GPFS, Lustre, PVFS, etc.) and/or node-local file systems for I/O intensive jobs. There's no "silver bullet" file system that does everything well*, and there's no shame in using multiple file systems for different parts of your workload where they will work well.
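
          To make the staging idea concrete, here's a toy stage-in / run / stage-out wrapper of the sort you'd put around an I/O-intensive job. Every path, the scratch mount point, and the "solver" binary are invented placeholders, not anything from TFA or a particular site:

          #!/usr/bin/env python3
          # Toy stage-in / run / stage-out wrapper (all paths are placeholders).
          # Inputs are copied from the NFS-served home directory to a parallel
          # scratch file system, the job runs there, and only results come back,
          # so the heavy I/O never lands on the NFS servers.
          import os
          import shutil
          import subprocess
          import sys

          HOME_INPUT = os.path.expanduser("~/project/input")     # on NFS home
          HOME_RESULTS = os.path.expanduser("~/project/results")
          SCRATCH = os.path.join("/scratch",                      # Lustre/GPFS/PVFS/...
                                 os.environ.get("USER", "nobody"),
                                 os.environ.get("PBS_JOBID", "interactive"))

          def main():
              workdir = os.path.join(SCRATCH, "work")
              shutil.copytree(HOME_INPUT, workdir)                # stage in

              # Run the I/O-heavy application with scratch as its working directory.
              rc = subprocess.call(["./solver", "--input", "."], cwd=workdir)

              os.makedirs(HOME_RESULTS, exist_ok=True)            # stage out results only
              for name in os.listdir(workdir):
                  if name.endswith(".out"):
                      shutil.copy2(os.path.join(workdir, name), HOME_RESULTS)

              shutil.rmtree(SCRATCH, ignore_errors=True)          # tidy up scratch
              return rc

          if __name__ == "__main__":
              sys.exit(main())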
          • I thought we were talking about a giant supercomputer, though -- I don't think we're talking about home directories.

            Also, what's the footnote on your "no silver bullet" line?
            • by Troy Baer ( 1395 )
              Er, supercomputers do have home directories, or at least rationally administered ones do.

              The footnote I'd intended to put in was "There are, however, several file systems that do everything poorly", but I figured I'd be in trouble with several vendors if I gave specific examples...

            • I thought we were talking about a giant supercomputer, though -- I don't think we're talking about home directories.
              Well, users of the supercomputer need somewhere to keep files that their jobs will need to access. Sure, you could use the users' central campus home directories, but that is likely to be bad for performance and may also cause other issues (for example, some universities are pretty tight when it comes to quotas for central storage).
              • Fair enough. I'm not so much debating that there will be any home directories...

                I'm suggesting that the NFS solution may well work for central campus home directories, but looks like it would not work well at all for the kinds of files you'd be dealing with on a supercomputer.
      • by gregsv ( 631463 )
        A subsection of the cluster will have Infiniband interconnect. Most nodes will be GigE connected. Storage will be NFS, served from several very high end dedicated NFS servers. The cluster will run RedHat Enterprise Linux.
    • The majority of the cluster will use Ethernet for the networking. (All machines will connect to the *same* switch.) A small number will use our existing Infiniband infrastructure. The machines will all have a single 160GB SATA disk, formatted with ext3, and run RHEL4. Half of the cluster will have 16GB of RAM and the other half 32GB. All clusters are served networked storage over NFSv3 from four BlueArc NAS devices.
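
      If you want a picture of how a mixed inventory like that gets carved up into scheduler classes, here's a rough sketch. The hostnames and attribute list are invented examples, not our real node list:

      # Sketch: split a mixed node inventory into scheduler classes.
      # Hostnames and attributes are invented examples.

      # hostname -> (RAM in GB, has InfiniBand?)
      inventory = {
          "node0001": (16, False),
          "node0002": (16, False),
          "node0003": (32, False),
          "node0004": (32, True),
      }

      def node_class(ram_gb, has_ib):
          """Map a node's hardware to a queue/property name."""
          if has_ib:
              return "ib"        # small IB partition for tightly coupled MPI jobs
          return "bigmem" if ram_gb >= 32 else "standard"

      queues = {}
      for host, (ram_gb, has_ib) in sorted(inventory.items()):
          queues.setdefault(node_class(ram_gb, has_ib), []).append(host)

      for queue, hosts in sorted(queues.items()):
          print(f"{queue:8s} {len(hosts)} node(s): {', '.join(hosts)}")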
    • The storage will be provided by our already-in-place BlueArc Titan 2200 and 2500 systems. Both are two-head clusters with a few racks of disk behind each, the 2200 with 4Gb of uplink per head for home directory storage, and the 2500s each with 10Gb of uplink for scratch (high-speed) storage. They export their filesystems using NFS v3. We also provide an archival storage system using EMC's DXUL and a tape library that has a capacity of a couple of petabytes. We've tested the BlueArc Titans (which are FPG
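
      For a rough sense of scale, the wire-speed ceiling on those uplinks works out as below. This assumes "each" means each head and ignores NFS/TCP overhead entirely, so real throughput will be lower:

      # Wire-speed ceiling for the NAS uplinks described above.
      # Assumes "each" means each head; real NFS throughput will be lower.
      systems = {
          # name: (heads, uplink Gb/s per head, role)
          "Titan 2200": (2, 4,  "home directories"),
          "Titan 2500": (2, 10, "scratch"),
      }

      for name, (heads, gbps, role) in systems.items():
          total_gbps = heads * gbps
          print(f"{name} ({role}): {total_gbps} Gb/s aggregate "
                f"~ {total_gbps / 8:.1f} GB/s theoretical ceiling")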
  • "The so-called 'electronic barn-raising' will take place May 5 and involved more than 200 employees."

    Either the date or the tense is wrong.
  • 'tis a fine pool, english, but a supercomputer it ain't.
    • by jimrob ( 1092327 )
      (nitpick)

      Actually, since the referenced line is to the effect of, "'Tis a fine barn, English, but 'tis no pool." the line should read:

      "'Tis a fine barn, English, but 'tis no supercomputer."

      (/nitpick)
  • What a crappy day to pick for such a big job.
  • Is it me or is/was the moderation system broken? At least all of the comments in the earlier story about SCO were unmoderated.
  • by eap ( 91469 )

    "...it will be built in a single day to keep science and engineering researchers from facing a lengthy downtime."
    I'm afraid the damage has already been done on the downtime front, since it has not existed up until this point in time.
  • a beowulf cluster of these...
  • The singular of "megaflops" is not "megaflop".

    //pet peeve
  • Now that there will be a working Babbage engine around, can the Amish use it?
  • When did we start calling clusters "supercomputers"?
  • As a Purdue IT employee, though not one associated with central IT (who are creating the cluster), I am approaching this as a way to take part of the day off and rub shoulders with other geeks. It should be fun and maybe even informative. They are even providing a probably poor but at least free lunch. :-) As for what central IT gets, aside from a bunch of free skilled labor, it's good publicity and a sense of community. The borg gets to inspect its potential minions.
  • I know 200 people is going to be a disaster. Can you guarantee me Jim Stoner and his buddies can assemble rail kits or anything else? Wait until they get to the infiniband cabling. One bend in that cable at 100 dollars a foot will cause all kinds of problems for the budget. No Thanks. Instead give me a software engineer and four hardware techs three days to do it properly. I guarantee you at least 195 of these folks have never installed one and the concept scares me.
    • by tuomoks ( 246421 )
      Having installed computer centers... It is scary. It is not so much what a bent cable (fiber?) costs, but do you have a spare? Are you sure that the electricity is connected correctly? Will the cooling work? Hopefully it doesn't include the fire extinguisher system installation - a scary thought! Did you remember to secure the raised floor? Did you label everything - correctly? And so on..

      We used to do that over weekends in very large installations, but it was scary every time. Too many things could have gone wrong.

      Keep up the good work! But please don't ask me to help.
