
Rent A Bit Of Weta Digital

An anonymous reader writes, linking to this story at stuff.co.nz and excerpting: "Five hundred powerful computers used by Weta Digital to help create the special effects for the Lord of the Rings may be put up for hire.... The pizza-box sized IBM blade servers each incorporate dual 2.8 gigahertz Intel Xeon processors and 6 [gigabytes?] of memory." Update: 03/22 07:08 GMT by S: The linked story says 6 megabytes of memory; we don't believe 'em.
  • 6MB? (Score:2, Interesting)

    by Biogenesis ( 670772 )
    Shouldn't that read 6GB?
    • How much RAM can they put in those Xeon boards? I'm assuming they figured 6 GB per board would be sufficient, but how much RAM would be the max for the board?
    • Maybe it's 6MB of L2 or other on-CPU high-speed cache. An odd number, but it makes a lot more sense than any other explanation I can think of.

      I'm betting it's another marketroid amalgamation... something along the lines of:

      "1MB of L1 cache and 2MB of L2 cache per processor, for a total of 6MB per machine!"

      Just like those old '64 bit!' console advertisements. Uhh, yeah, a 16-bit pipeline times three pipelines plus two extra 8-bit memory thingamajiggies may add up to 64 bits, but it for damn sure isn't...
  • Imagine (Score:5, Funny)

    by Anonymous Coward on Monday March 22, 2004 @02:08AM (#8632125)
    A beowulf .. oh wait ...
  • Mildly off topic, but I seriously miss good pizza-box desktop boxes. Something simple, plain, fast, and with room for a couple of PCI slots on a riser card.

    The world needs more of them.
  • Cost? (Score:5, Funny)

    by nb caffeine ( 448698 ) <nbcaffeine@g[ ]l.com ['mai' in gap]> on Monday March 22, 2004 @02:11AM (#8632145) Homepage Journal
    What would this cost? Do they charge something like CPU-hours or the like? Will the average person be able to rent some clock cycles? I just want something that will be able to run Doom 3 when it comes out.
  • by The_Ace666 ( 755363 ) on Monday March 22, 2004 @02:11AM (#8632146)
    Now where can I find a pizza-delivery company to get one of these babies delivered to my door?
  • Wow (Score:4, Funny)

    by KU_Fletch ( 678324 ) <bthomas1@NOsPam.ku.edu> on Monday March 22, 2004 @02:12AM (#8632150)
    A whole 6 megabytes of memory?! Way to beat up my 486.
    • Or my IBM PCjr, with the fully upgraded 128K of RAM. And on an off-topic tangent: a Beowulf Jr. cluster. Any takers?
    • Re:Wow (Score:3, Funny)

      by xkenny13 ( 309849 )
      A whole 6 megabytes of memory?!

      Yup!! It's amazing what you can accomplish once you get rid of all the bloatware.
    • A whole 6 megabytes of memory?! Way to beat up my 486.

      Of course it supports 6 MB of memory. It didn't say exactly or at most 6 MB. Oh no... I've turned into that annoying person at work that corrects everyone's grammar.
    • by torpor ( 458 )
      If you've got renderman set up to render to disk, and your disk arrays are pretty fast, I don't see any reason why these dedicated render machines shouldn't have only 6 megabytes of RAM per CPU.

      okay, it doesn't make a -ton- of sense to render direct to disk, but maybe it can be done and not require so much RAM?
      • You said it right there... 6MB of RAM per CPU... Five hundred computers... That's a mighty big number. Since it's a cluster, each node has to render only a small part of the image and can render to disk, along the lines sketched below.
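
        (A minimal Python sketch of that render-to-disk idea -- hypothetical, not Weta's pipeline: shade one small strip of the frame at a time and append it straight to disk, so peak RAM is one strip rather than the whole frame.)

          WIDTH, HEIGHT, TILE = 1920, 1080, 64

          def shade_pixel(x, y):
              # stand-in for the real per-pixel computation
              return (x ^ y) & 0xFF

          with open("frame.raw", "wb") as out:
              for ty in range(0, HEIGHT, TILE):
                  # Only one 64-row strip (~120 KB here) is ever held in memory,
                  # so total RAM stays tiny no matter how large the frame is.
                  strip = bytearray()
                  for y in range(ty, min(ty + TILE, HEIGHT)):
                      for x in range(WIDTH):
                          strip.append(shade_pixel(x, y))
                  out.write(strip)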
      • Insightful??? How about funny?

        We're talking dual 2GHz Pentiums... I doubt that you could find DIMMs that would fit into those boxes and held less than 64MB without special ordering. (That's right -- for the price of 512MB of RAM, we can give you a whopping 6 megabytes!)

        Possible sane explanations would include:

        1. 6MB of cache
        2. 6GB of RAM
        3. 6GB of disk (a bit too small to be believed)
        4. 6MB video cards (again, a bit small these days)

        I'd be betting on #1 or #2.
        6 gig disks are hard to come by these days...

        • by torpor ( 458 )

          (i'm thinking less general-purpose computing purchase, and i think you are thinking more ...)

          yeah, so i thought 6Megs was a typo at first, but then i considered the mere possibility that they may just have spec'ed their RAM to their direct process requirements, 'embedded system' style.

          and, i still don't see why not... though your point about RAM being available in sizes less than 64 megs is valid, i've seen 8MB DIMMs for 2GHz Pentium systems, cheap, all over the place. remember, this is New Zealand we...
    • Ahh 6 megabytes... What I could do with that

      # cat meminfo
               total:    used:    free:  shared:  buffers:   cached:
      Mem:   3076096  2887680   188416        0    221184   1466368
      Swap:        0        0        0
      MemTotal:  3004 kB
      MemFree:    184 kB

  • this reminds me of this [despair.com].
  • Update (Score:5, Funny)

    by hlopez ( 220083 ) on Monday March 22, 2004 @02:13AM (#8632156)
    Update: 03/22 07:08 GMT by S
    -we don't believe 'YOU-
  • by FS1 ( 636716 )
    Less than one minute after posting the story (and without proofreading it), there's an update correcting an obvious mistake. I'm surprised you corrected it that quickly.
    • If you proofread the update, you'd realise they're referencing the article...

      But it goes to show how ingrained assumptions and habits are, and what happens when they're broken. No one assumes MB anymore...
      Remember the days when 600 MHz was blazing fast? Now people might say "0.6 GHz" just to be able to express speed on the same order of magnitude...

  • by SexyKellyOsbourne ( 606860 ) on Monday March 22, 2004 @02:13AM (#8632158) Journal
    I'm rather tired of waiting for graphics to progress to the level they'll reach around the year 2010. I'd like to see these machines, which rendered Lord of the Rings, use their nearly unlimited processing power to let me play a game -- perhaps Half-Life or Quake 2 with a new rendering DLL -- that spits out 60fps of pure ray-traced bliss.

    Or just fire up InTrace [intrace.com] with a super-detailed scene of sunflowers, 1 billion polygons, multiple reflections, and all the other goodies, and crank it to 1600x1200.

    I can dream, can't I? :)
    • by troon ( 724114 ) on Monday March 22, 2004 @03:24AM (#8632393)

      I'm rather tired of waiting for graphics to progress to the level they will be in in the year 2010 or so.

      Just give it six years or so, and you should see the improvements you are waiting for.

    • by Viceice ( 462967 ) on Monday March 22, 2004 @05:41AM (#8632688)
      You know, if you ONLY wanted to SEE something like that, you could go outdoors and look for a field covered in sunflowers.
    • by QuantumFTL ( 197300 ) * on Monday March 22, 2004 @05:49AM (#8632699)
      Check out http://www.worley.com/fprime.html

      My part-time employer (when I'm not working for NASA/JPL), Maas Digital [maasdigital.com], just bought a copy of the software... it utilizes stochastic methods to allow flexible real-time raytrace rendering (with good motion blur!).

      It turns out that motion blur in 3D graphics is a very hard problem because it's essentially a high-dimensional integral, and the best method of doing generalized high-dimensional numerical integration is a stochastic algorithm (the Monte Carlo method), so it's not surprising to me that it's a great way to do motion blur.

      My favorite aspect of stochastic methods is their ability to be continuously refined (for instance, in a video game, the longer you spent looking at an object, the better it would get etc, and the graphics performance would degrade very smoothly with changes in system load etc). It is also ideal for parallel processing, as it can be dynamically parallelized to completely heterogeneous computing nodes.

      Dan and I agree that there's going to be a lot of stochastic algorithms in the future of computer graphics (though he is hopeful that analytical methods will eventually make a comeback, as they have better asymptotic performance).

      Cheers,
      Justin Wick
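
      (A minimal Python sketch of the Monte Carlo idea above -- hypothetical, not FPrime's actual code: estimate a pixel's motion-blurred value by averaging a shading function at random times across the shutter interval; adding samples refines the estimate continuously, which is the property described above.)

        import random

        def shade(x, t):
            # Stand-in renderer: a unit-wide object moving at speed 10 covers
            # pixel x at time t if |x - 10t| < 0.5.
            return 1.0 if abs(x - 10.0 * t) < 0.5 else 0.0

        def motion_blurred_pixel(x, samples=256):
            # Monte Carlo estimate of the integral of shade() over the shutter
            # interval [0, 1]; averaging N random samples converges to the true
            # integral as N grows, so the image can be refined incrementally.
            total = sum(shade(x, random.random()) for _ in range(samples))
            return total / samples

        print(motion_blurred_pixel(5.0))  # ~0.1: the object sweeps past this pixel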
    • I mean, if you're tired of waiting and everything.

  • Woot! (Score:2, Funny)

    by SillySnake ( 727102 )
    Finally, a computer able to run the super-ultra-mega-high-detail Duke Nukem Forever! Yes, that's right, the game is finished and just waiting for the computer graphics and processing worlds to catch up to it.. err, right? I mean.. Doom 3! err.. wait.. bah.. Never mind that. Still, I would think that unless a company needed results very quickly, a SETI-like application would be much cheaper, if the software guys can code one that runs on the company's network overnight or just at random downtime...
  • One thing to say... (Score:5, Interesting)

    by linuxkrn ( 635044 ) <[moc.nigolxunil] [ta] [nostawg]> on Monday March 22, 2004 @02:14AM (#8632165)
    seti@home!

    • by MagicDude ( 727944 ) on Monday March 22, 2004 @02:38AM (#8632255)
      No, we need something nerdier and more useless, like the biggest prime number ever [newscientist.com].
    • by CGP314 ( 672613 ) <CGP@NOSpAM.ColinGregoryPalmer.net> on Monday March 22, 2004 @03:37AM (#8632430) Homepage
      folding@home

      I used to run seti@home instead of folding@home, but then one day I realized I needed to switch. While finding extraterrestrial life would be the most important development in human history to date, the chances of finding it in my lifetime are very small.

      On the other hand, the chances of my getting cancer or any of the other diseases folding@home works on are very great. Plus, if folding@home cures any of these diseases, it will extend my life and increase the chances that extraterrestrials will be found within my lifetime.


      -Colin [colingregorypalmer.net]
      • Using your backassward logic, it seems more logical to devote your CPU time to researching automotive traffic patterns, so you don't get killed in an auto accident or get hit by a bus.
        • by CGP314 ( 672613 ) <CGP@NOSpAM.ColinGregoryPalmer.net> on Monday March 22, 2004 @05:33AM (#8632664) Homepage
          Using your backassward logic, it seems more logical to devote your CPU time to researching automotive traffic patterns, so you don't get killed in an auto accident or get hit by a bus.

          If there were a project to which I could devote my CPU cycles that could reduce the possibility of my getting into a car accident, then I would drop folding@home for dontgethitbyacar@home. What's backassward about risk assessment?


          -Colin [colingregorypalmer.net]
          • You obviously have no sense of irony. Or humor.

            If you really did the risk assessment, you'd give up the stressful job you work at to earn money for overpriced computer hardware; it will give you a heart attack. You'd find a job with no daily commute, since commuting is where your risk of traffic death is highest; you'd go work on a chicken ranch in Montana, selling eggs. Or maybe you should move to Alaska and become a hermit who avoids all human contact, so you don't pick up communicable diseases like influenza or AIDS.
            • Uh, no, it's risk analysis... choices are weighted.
              The opportunity cost of giving up a few CPU cycles for something that may save your life later is not high, and therefore this decision is weighted more heavily than, say, abstaining from sexual intercourse to avoid disease and possibly death.
              Don't they teach this stuff anymore? Decision trees, risk analysis diagrams?
      • Did you consider that ETs might already have the cure for cancer?

        Go SETI!

        Fh
  • Nice (Score:1, Troll)

    by Duncan3 ( 10537 )
    While renting out unused machines is not even close to a new thing, these are the LoTR machines, so it's way cool here on /.

    This is what all that "on demand" hype is about after all... *yawn*

    but machines with that much memory in each aren't the norm, so it is a rather sweet cluster.
  • Distributed.net... (Score:5, Interesting)

    by rthille ( 8526 ) <(gro.tagnar) (ta) (todhsals-bew)> on Monday March 22, 2004 @02:18AM (#8632182) Homepage Journal
    Imagine distributed.net being a CPU co-op. They take problems from clients in need of a ton of CPU, farm it out to distributed.net members, and at the end of the month/year you get a small check for all the CPU cycles you spent helping solve problems.
    • A great idea in theory, but how would they track the amount of work you did in a way that would be one hundred percent hack-proof? I don't think you'd want to pay people to analyze every packet you get back to make sure it's had whatever needed doing done to it. Granted, it would be possible to eliminate most unwanted results with a couple of filters, but when money becomes an issue, the community will do what they can to get the most out of it.
      • Easy -- make the 'money' be not real money, but a lack of ads/nagging.

        Imagine getting prompted upon installing an application whether you want to A) pay, B) have ads, or C) donate CPU cycles.
        This would then allow developers to make money off their software without making it unusable due to ad annoyance (xfire, AIM, most shareware).
      • A great idea in theory, but how would they track the amount of help that you did in a way that would be one hundred percent hack proof?

        Obfuscate the work being done and insert test/checking operations between the 'real' work operations. Verify that the results of the test operations were what would be expected if they had been carried out.

        Do not reveal what the test operations are (or even what the 'real' work is). Do not reveal what percentage of operations are test operations. Change the test operations...
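
        (A minimal Python sketch of that scheme -- hypothetical, not distributed.net's actual protocol: hide known-answer test units among the real work units, indistinguishable to the worker, and reject any batch where a test unit comes back wrong.)

          import random

          def make_batch(real_units, known_answers, test_fraction=0.2):
              # known_answers: list of (unit, expected_result) the server solved itself
              n_tests = max(1, int(len(real_units) * test_fraction))
              tests = random.sample(known_answers, n_tests)
              batch = real_units + [unit for unit, _ in tests]
              random.shuffle(batch)           # worker can't tell test from real work
              return batch, dict(tests)       # expected results stay server-side

          def verify(results, expected):
              # Accept the batch only if every hidden test unit was computed honestly.
              return all(results[unit] == answer for unit, answer in expected.items())

          # Toy example where the "work" is squaring a number:
          batch, expected = make_batch([3, 7, 11], [(2, 4), (5, 25), (9, 81)])
          results = {unit: unit * unit for unit in batch}   # an honest worker
          print(verify(results, expected))                  # True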
    • by Motherfucking Shit ( 636021 ) on Monday March 22, 2004 @02:49AM (#8632282) Journal
      Imagine distributed.net being a CPU co-op. They take problems from clients in need of a ton of CPU, farm it out to distributed.net members, and at the end of the month/year you get a small check for all the CPU cycles you spent helping solve problems.
      This was already tried, by a company called ProcessTree. The idea was that they'd sell your CPU cycles out and you'd get a cut. They also had it set up in a pyramid fashion, so that you also got an extra few cents for each person you referred to the program.

      The best I could find was this mirror of the FAQ [multyportal.com]. Since ProcessTree.com now belongs to a domain poacher, I'm guessing they never did find a paying client...
  • I'm going to re-fake the moon landing, but do it right this time! No numbers on rocks, no waving flag, or overlapped crosshairs.

    I may have to re-release the Mars landing too, depending on how well they did...

    Beagle was a great idea, btw. Spend the money and then oops! no mission to render. Sheer genius.

  • Maybe they're right (Score:5, Interesting)

    by ctr2sprt ( 574731 ) on Monday March 22, 2004 @02:42AM (#8632265)
    Update: 03/22 07:08 GMT by S: The linked story says 6 megabytes of memory, we don't believe 'em.
    They might mean 6MB of L2 cache. I don't know what cache sizes are available for Xeons, but presumably when you order 1000 CPUs at once, Intel is willing to give you hard-to-find stuff.
    • I suppose that's possible, but really, when you describe a box, the first two pieces of info are usually speed and memory. I mean, those are the two most important variables, right? This is a typo, though perhaps the reaction is kind of overblown. It is amazing to me that we could use 6 gigabytes of RAM... I can't even conceive of 640,000 pieces of info, much less millions of bits. It's crazy crazy stuff.
      • by drudd ( 43032 )
        Trust me, 6 GB goes by very quickly.

        So let's say I'm doing a simulation of structure formation in the universe.

        I have a cube grid of cells, 512 on a side (my own code uses adaptive mesh refinement to increase resolution, but we'll ignore that for simplicity).

        So each cell requires 3 floating-point variables to compute gravity and 8 floating-point variables to calculate hydrodynamics. At 4 bytes per variable, that's a total of 5.5 GB just for the mesh.

        Then you need to add dark matter particles, allow for star formation...
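
        (Checking that arithmetic with a couple of lines of Python -- the 11 variables per cell are the 3 gravity and 8 hydrodynamics values above:)

          cells = 512 ** 3                  # 512 cells on a side
          vars_per_cell = 3 + 8             # gravity + hydrodynamics variables
          bytes_per_var = 4                 # single-precision float
          print(cells * vars_per_cell * bytes_per_var / 2**30)  # ~5.5 GiB for the mesh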
    • by slash-tard ( 689130 ) on Monday March 22, 2004 @05:09AM (#8632606)
      Xeons only go up to 4 megs of cache and those were just recently released. At the time these were bought the max was 2 megs.
    • Each processor in the cluster has 512k of cache. These are Xeon DP systems, not Xeon MP.
  • by Anonymous Coward on Monday March 22, 2004 @02:50AM (#8632284)
    Posting anon as I have an interest in some of these companies :
    http://www.respower.com/ - 250+ machines (~500GHz), 250GB ram
    http://www.rendercore.com/ - 700 machines
    http://www.render-it.co.uk/ - 82 CPUs (131GHz), 82GB RAM

    The only 'interesting' thing here is that it's Weta's farm. Other than that, I doubt they offer as wide a selection of software (unless they've struck deals lately), not to mention field experience with 'oddball' files.

    Good luck to them, though
  • Some PCs, mostly older systems used to help create the first film in the trilogy, The Fellowship of the Ring, have been donated to a local school.

    Those machines would still have to be pretty good, even if they are called 'older systems'... Some of the local school geeks would love to think they are working on a machine that may have been used to create Gollum!
  • by ultranova ( 717540 ) on Monday March 22, 2004 @02:54AM (#8632301)

    Surely they used Token Ring to connect them?

  • by pariahdecss ( 534450 ) on Monday March 22, 2004 @03:11AM (#8632349)
    Nasty fat hobbit probably sold the extra RAM to buy Twinkies(R)
  • The linked story says 6 megabytes of memory, we don't believe 'em.
    Be careful, you might just give old Billy boy a way out of the comment that's been dogging him for years...

    "No, no, I meant nobody would ever need more than 640 gigabytes of memory!!!"
  • by ProfessionalCookie ( 673314 ) on Monday March 22, 2004 @03:24AM (#8632392) Journal
    Correct me if I'm wrong here, but aren't Xeons currently 32-bit? Doesn't that mean they can't address more than 4 gigs? I thought that's what the whole big deal was with 64-bit. Now maybe if they were G5s...
  • by GloomE ( 695185 ) on Monday March 22, 2004 @03:27AM (#8632398)
    Wouldn't they make more by selling them as (framed) collector's items?
    Blade 1 of 500: current bid $1(insert zeros here).
  • interconnect (Score:5, Insightful)

    by painehope ( 580569 ) on Monday March 22, 2004 @03:29AM (#8632406)
    The real killer is that quite a few industries can't rent time on their cluster, because the gigabit interconnect has too high a latency for their applications. (IBM blade chassis have a switch module internal to each chassis, and I don't think you can get any HSLL - high-speed, low-latency - network interconnect modules (Myrinet, SCI, Quadrics, etc.) for them.)

    Bandwidth-wise they should be fine, as each chassis has at least four ports that could be trunked to a top-level switch with a beefy backplane (I could tell you the number of ports per chassis if I were at work, as I've been messing with some of their blades lately), giving a peak per-chassis bandwidth of > 400 MB/sec.

    Of course, I'm wondering how Weta got around it themselves, as I would think that rendering digital video is fairly heavy on inter-node communication. This would still be awesome for web servers or problems that are "embarrassingly parallel".
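
    (Sanity-checking that bandwidth figure: four trunked gigabit ports give 4 Gbit/s, or 500 MB/s raw, so > 400 MB/sec after protocol overhead is plausible.)

      ports, gigabit = 4, 1_000_000_000
      print(ports * gigabit / 8 / 1_000_000)   # 500.0 MB/s raw, before overhead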
    • Re:interconnect (Score:5, Insightful)

      by 2megs ( 8751 ) on Monday March 22, 2004 @03:38AM (#8632432)
      Rendering digital video is about as parallel as compute loads get. Generally each frame can be an independent computation. For most ray-tracing algorithms, computing each pixel of each frame is fully parallelizable too.

      The global AI work they did to have 10,000 troops all interacting together is obviously not quite so independent, but I'm willing to bet the bulk of the compute load goes into creating pictures of those interactions, not the interactions themselves.

      • Re:interconnect (Score:5, Informative)

        by sakusha ( 441986 ) on Monday March 22, 2004 @05:56AM (#8632721)
        You obviously have never worked in CG. Many common, simple effects cannot be parallelized. For example, Maya's particle effects are notorious for their inability to be parallelized and run on render farms if they use randomness (and most particle fx do use randomness in positioning). Those fx must be rendered sequentially on a single CPU: each frame's particle positions are used to calculate the next frame's positions, and they're all calculated at runtime.
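
        (A minimal Python sketch of why such effects serialize -- hypothetical, not Maya's code: each frame's particle state is a function of the previous frame's state, so frame N can't start until frame N-1 is done, no matter how many CPUs you have.)

          import random

          def step(positions, dt=0.04):
              # Each new position depends on the previous frame's position plus a
              # random kick, so frames must be computed strictly in order.
              return [p + random.gauss(0.0, 1.0) * dt for p in positions]

          state = [0.0] * 1000          # initial particle positions
          for frame in range(240):      # a 10-second shot at 24 fps
              state = step(state)       # frame N needs frame N-1's output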
        • This is what happens when Computer Scientists try to play Mathematician.
      • Yeah, basic raytracing is certainly an embarrassingly-parallel problem. I wrote a basic parallel raytracer using straight TCP to communicate between the nodes. Using 15 computers located all over the place (several in the same room, several across campus, one on the other side of the country) resulted in basically a 15-times speedup with no noticeable overhead. Of course, the raytracer itself was very simple (no fancy effects, just shadows, reflection, refraction, etc.) and very slow, which helps make it p...
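
        (A minimal Python sketch of that embarrassingly-parallel pattern -- hypothetical, not the poster's TCP raytracer: hand each frame to a separate worker process and collect the results; beyond distributing the work, no inter-node communication is needed.)

          from multiprocessing import Pool

          def render_frame(n):
              # Stand-in for a real renderer: each frame is an independent
              # computation, so frames can be farmed out with no coordination.
              return f"frame_{n:04d}.png"

          if __name__ == "__main__":
              with Pool(processes=8) as pool:   # one worker per CPU (or per node)
                  frames = pool.map(render_frame, range(100))
              print(frames[:3])  # ['frame_0000.png', 'frame_0001.png', 'frame_0002.png']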
    • Re:interconnect (Score:3, Insightful)

      by Obasan ( 28761 )
      Actually, rendering is fairly light on network requirements and very heavy on memory/CPU. (Download scene files & textures, then crunch numbers for 10-40 minutes depending on layer complexity.)

      But the BladeCenter chassis also does in fact support a Myrinet interconnect if you so desire.
  • by Anonymous Coward on Monday March 22, 2004 @03:34AM (#8632422)
    The IBM HS20 has 4 DIMM slots used in banks of 2. No reason to think 2x2GB and 2x1GB would not work.
    Linux, FreeBSD or Windows 2000 AS would support PAE, allowing an app to use close to 4GB while leaving 2GB for the OS kernel, so 6GB seems reasonable.

    Anyone who doesn't believe me can check at crucial.com. I won't provide a URL, but look for IBM, BladeCenter, HS20.
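
    (The arithmetic behind that: 32-bit virtual addresses cap each process at 4GB, while PAE's 36-bit physical addressing lets the box as a whole hold far more.)

      print(2**32 // 2**30)   # 4 GiB: per-process limit with 32-bit addresses
      print(2**36 // 2**30)   # 64 GiB: physical limit with 36-bit PAE addressing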
  • by robbyjo ( 315601 ) on Monday March 22, 2004 @04:24AM (#8632523) Homepage

    Please...

    This may be old news, but the details of that machine are here [sgi.com]. That's some stuff to drool over. Some excerpts:

    ... provide a combination of 4TB of online storage and more than 20TB of nearline storage as a global storage repository ...

    ... create and manage up to 100TB of data ...

    And now this machine is up for rent. Here's [wetadigital.com] the company website.

  • by zmollusc ( 763634 ) on Monday March 22, 2004 @05:33AM (#8632660)
    to develop the military tactics used in the battle scenes. Cavelry charge (with lances) against infantry dug into rocks and buildings. Most inept castle defence ever devised. Etc etc. I assume it was all worked out on an unplugged (insert archaic/obscure home computer).
    • You might get a bit more respect for your opinion if you spelled things like "defense" and "cavalry" correctly.
      Either way, it was a lot of action. Sieges are not action. They're not fun. They were usually months of nothing. Bombardments and nothing else are also not fun. They lost appeal after about 60 seconds. They're not what movie-goers want. Besides, the orcs had the advantage of overwhelming numbers. Who cares if a lot of them died, as long as enough of them didn't?
  • by Animats ( 122034 ) on Monday March 22, 2004 @01:00PM (#8636100) Homepage
    OK, "grid computing" fans, here it is, a big CPU resource open for commercial customers. Let's see if people line up to buy cycles. There must be paying customers out there who want to do rendering, or VLSI simulation, or numerical wind-tunnel tests of wing sections, or something.

    We're waiting...

    As I've pointed out before, if there were a market for this, ISPs would be selling off-peak CPU time on their hosting farms.
