
Affordable Supercomputers 117

Brian writes "CNN Online has a story on a company that has introduced supercomputers for under $100,000 and they hope to have four-layer supercomputers for under $4,000 before long. The computers use AMD processors and according to the company's Web site, they come running Linux. I can't wait to add one of these to my collection!" As always, we've heard about Patmos before. Check out an older story here.
This discussion has been archived. No new comments can be posted.

  • Somebody showed me this UK site a few days ago: Applied Dataflow Research. [dataflow-research.co.uk]

    There's a lot of interesting reading about dataflow architectures and benchmarking, and they say they'll port Linux to their computer! I get the impression that it's still in development, but it looks quite interesting.

  • by Anonymous Coward
    There was a project here at UM called ZMOBS that used an array of Z80 chips. Later it was migrated to 68000 chips. This was some years ago, of course :-)
  • by Anonymous Coward
    First, Crusoe was never designed to have any sort of floating-point performance whatsoever. It was designed to work very nicely in a laptop. Note there are *no* performance numbers on the Transmeta site.

    What you care about is performance/watt. This is where the G4 starts to look real interesting. 1 GFLOP on a processor that uses 5 watts is pretty damn good. That's sustained performance on an FFT, not peak. (Peak is something like 3.2 GFLOPS on a 400MHz G4.)

    For some interesting stuff on performance/watt see systemresearch.drgw.net [drgw.net]

  • This seems to be aimed more at the high availability (HA) market than the high performance computing (HPC) market.

    Agreed. It COULD be used as a small Beowulf cluster but doesn't appear to be set up that way out of the box. At $99K it's a damned expensive Beowulf.

    As for the neural net blah blah blah, I can't see how that gains much over more conventional load balancing software. The choice of K6 shows that they really don't understand what supercomputing is all about. Athlon would have been a much better choice for an AMD CPU.

    Sadly, there's an ocean of snake oil in the cheap supercomputer market right about now. I see MANY systems that are supposedly 'next generation' Beowulf that are really a rackmount Beowulf with a (marginally useful) GUI system monitor.

  • I think you are right on the money. I remember seeing a comment on the Transmeta presentation Wednesday that they were thinking of doing an SMP based system in the future.

    It makes sense. If you can run at an order of magnitude lower power, you should be able to scale the performance higher per board.

    Imagine a motherboard with 16 or 32 cheap CPUs running on a workstation. Yummie.
  • Who the F*** are you to step on my freedom to buy a supercomputer. I don't need a note from my mother to buy one and I sure don't need your permission.

    The only thing fascists like you would accomplish, besides destroying our freedom, is to drive personal supercomputer production out of the USA.
  • Funny, there's a cracker who has been attacking my machines for more than a year and operates from Brazilian domains only. He regularly tries all the open ports (especially pop3, imap and rpc).
    Perhaps it's the same guy?
    So far, all he has managed is killing my inetd with overflow attempts. There are a lot of crack scripts with comments in Portuguese on the web.

  • If you don't know what you're doing, don't start doing it yet.

    That's a universal rule of any public forum, whether it be Usenet, Fidonet, Slashdot, or butting into a large conversation at a party.

    BTW, if you set your posting mode to "plain old text" you won't have to worry about formatting markup; WYSIWYT (What You See Is What You Typed).
  • I'm guessing that vectorizable loops are loops in which one iteration doesn't depend on the previous ones. Thus the tasks that were formerly done one after another could all be done at the same time.
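    A minimal C sketch of that distinction (the array names and values below are made up purely for illustration):

    #include <stdio.h>
    #define N 8

    int main(void) {
        double x[N] = {1, 2, 3, 4, 5, 6, 7, 8}, y[N] = {0};
        double a = 2.0;
        int i;

        /* Vectorizable: each iteration touches only its own y[i],
           so all N multiply-adds could run at the same time. */
        for (i = 0; i < N; i++)
            y[i] = a * x[i] + y[i];

        /* Not vectorizable as written: iteration i reads y[i-1],
           the result of the previous iteration, forcing the loop
           to run one step after another. */
        for (i = 1; i < N; i++)
            y[i] = y[i - 1] + x[i];

        for (i = 0; i < N; i++)
            printf("%g ", y[i]);
        printf("\n");
        return 0;
    }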

  • So what do you propose? Government intervention?

    Attorney General Janet Reno, or her *fill in country here* equivalent, deciding for you whether you "need" that much computing power?

    Government psychological profiling, deciding who's a "likely hacker"?
  • $99K seems a bit more than most people can afford to pop on a computer system. "Supercomputers for everyone?" Highly unlikely. More like supercomputers for businesses running parallel applications.

    George Lee

  • This appears to be a very customized Beowulf cluster. It has an interconnection network that ties the machines together nicely, and it has a nice data storage unit that makes it more hot-swappable than a more traditional Beowulf cluster.
    But both of these things can be done by someone constructing their own system. Yes, it would raise the price, but to $100K? I'm not sure.


    Further, it looks as if it is an 8-node cluster. That isn't a very large cluster. It would be nice to think that this would scale similarly to other supercomputers, but I can't see the performance of any network approaching the speed of something like a crossbar switch anytime soon.


    Perhaps I'm missing something, but at $100K I think it would be safer to build your own cluster, or purchase a tried-and-tested machine in a similar price range (such as the SGI Origin).


    It is nice they have some packaged software but is it better than what we can get at http://www.beowulf-underground.org/ [beowulf-underground.org]?
    Perhaps, perhaps not. I hope to see what people who have used this machine, other supercomputers AND Beowulfs have to say about it; perhaps then we can decide if this is a good value.


    Doesn't http://www.paralline.com/ [paralline.com] sell similar machines anyway?

    For now I just don't see the value in it.
    But that's just my opinion; I could be wrong. :)

  • They use 450MHz K6 processors in their eight compute boxes, but the two control boxes use Athlon processors. How they can claim that adds up to a full 200MHz bus is beyond me.
  • Aha! Found it: the Fibre Channel network runs at 200MHz. I had to refer to an outside article, so YMMV.
  • As a matter of due diligence, I have checked articles from several sites (CNN, some linked on their own page) and came up with the following conflicting data:

    Patmos mentions a 200MHz bus on their page -- they do not mention WHAT runs at 200, however.

    CNN says they run on 200MHz K6-2 chips -- not so powerful, considering there are many options out there that run at higher speeds and have better FP performance.

    PC Week says each PC on the rack is running a 450MHz K6-2 and the Limbix controllers run on 750MHz Athlons. They mention the Fibre Channel network runs at 200MHz and a 1.065 Gbps rate.

    So the FC connections are 200MHz, and the Limbix boxes run a 200MHz bus, shared with the K6 450 servers running at 66MHz bus speeds.

    I would have built it with all athlons, if given the choice.
  • Quite affordable. nws.noaa.gov is LEASING an IBM SP supercomputer for $36 million for 2 years!

    hell yeah, it's affordable. :)
  • Linux's scalability problem lies in scaling MP within a single box. This is a modified Beowulf cluster, and is not MP. There is not yet a chipset to run the Athlon under MP. That shouldn't be a problem here, though, or a Beowulf cluster wouldn't work either.

    Though, I would like to see one of these with 4-way SMP Athlons on each node. That would be killer! It would get a bit hot though :)
  • K6-2/450's are 100MHz bus. Other than that, your post sounds accurate.

    My mistake. Thanks!
  • It's funny, because when I checked the definition on m-w.com, they listed a supercomputer (1968) as: "a large very fast mainframe used especially for scientific computations"... That's kinda laughable today. By that definition, a 1-way database server is the same as a Cray is the same as an IBM 390...

    I do agree with your assessment: it has to be at least 50-100 times faster before I'd consider it "super"... a Cray Y-MP ain't even that super anymore...
  • (out of interest, how does the fp performance of one node of a proper supercomputer compare to an equivalent x86 chip?)

    I have an acquaintance with a Beowulf cluster of PIIIs or something like that. He says (running the same plane-wave density functional software I use) he gets about half as much speed as he gets using an Alpha with about the same clock speed. And that is running real-world (at least for me) software, not just some meaningless benchmark. Of course, your mileage will vary. And for some reason he is having trouble with the Myrinet that makes MPI so slow as to be worthless. :(

  • The Patmos site never really describes their systems as "supercomputers"

    Actually, the first paragraph of the Patmos site reads:
    The Perpetua(TM) supercomputer/ supersystem by Patmos International Corporation(TM) is state-of-the-art, leading-edge technology


    Good point about fp performance, though. IIRC the K6-2's had poor fp performance anyway. (Out of interest, how does the fp performance of one node of a proper supercomputer compare to an equivalent x86 chip?)
    --
  • Please! Trolling is easy - you just have to have a target for your troll.

    Example:
    Perl is so obscure - it's completely unmaintainable. Anyone who thinks Perl could be used in place of Java for large-scale problems is wrong. Perl sucks.


    Now, sit back, and watch Mr. Christiansen come running....

    But that's just one example. Red Pen would similarly be easy to troll for....

    You just gotta know your audience.

    I think I'll check that "No Score +1 Bonus" box (seems like I'm always doing that....)
  • Ummmm, try more like 16,000 x 12,000 ;-)


    ----
    Lyell E. Haynes

  • Well, it's more like a beowulf cluster of low-speed K6-2s, but they use AI and Linux, which is pretty cool.

    Right now, I don't think you can buy anything cheaper than a K6-2, except perhaps an i486 (which is an order of magnitude slower...).

    I wonder if you could build a supercomputer of 6510s. That would be cool. Send all your old Commodore 64s and 128s to me, please.
  • I know next to nothing about how this site works (this is my FIRST day).

    I do not know if this applies to /., but the advice for Usenet runs along the lines of "if new in a forum, read first, then read more, and then consider posting yourself".

    This is hackerzone, and if you put forward interesting thoughts while exposing a lack of technical knowledge, you're more likely to be moderated "Offtopic" than "Interesting". There's a want for a "JonKatz" category here. Not that I am in the know technically, but then I do not post often either.

    My suggestion would be to get over it, read the forum, see if you like it, then participate actively.
  • Remember, the G4 (a _complete_ load of BS) is considered a supercomputer (according to Apple, which is also BS) because it can do 1 GFLOP. I think in the Top500 SC lists, not one of them was below a TFLOP.

    Mark Duell
  • UHMMMMMMMMM this is reasonably priced?

    Gigabit Ethernet and 8 Athlons running Linux in an enclosed rackmount case. I don't see how this is worth more than $35K. They don't say how much RAM, if any, is in these boxes, or whether it's RDRAM or what. All they do is use a lot of buzzterms, a bad template-based HTML system (they seem to have optimised this page for IE too; disgusting), and then compare it to other equally overpriced boxes from Compaq, without stating benchmarks or a component-by-component breakdown. Slashdot editors, was this REALLY worth an article? Instead of business people like this who are out to exploit the Linux market, check out www.beowulf-underground.org, www.beowulf.org and the Beowulf mailing list to find reputable vendors... or look at my user info for the URL of a decent company (not to boast about the place I work for, but at least we're not prone to marketing lies).
  • Hehe. Oookay, you're right, WV is like another country.
  • I'm from Tennessee and I don't even get you. Who do you think you are? Mark Twain?
  • It sure would be interesting to see some of the mods they did to the open source segment of the OS.

    Unfortunately, they almost certainly developed their own code for most of the really interesting stuff.
    Another wrinkle might be that we would essentially have to wait for someone inclined to redistribute the source to buy the machine. I don't think they are under any requirement to distribute it on request. Take this statement as a request for information; I don't have time to rephrase the question properly, and I am not entirely sure it is true.
  • While I don't disagree with your conclusion, K6's do have their own breed of SMP; if you are building your own hardware, you are free to develop a chipset that supports it. That is all the K6's are missing: support.

    BTW, as mentioned by someone else, they do not mention using K6's on the site. They do mention a 200MHz bus. Looks like CNN screwed the pooch.
  • You don't need OS X.

    Check out Project Appleseed [ucla.edu] for an example of a MacOS cluster supercomputer.

    Yes, it's getting a bit old now (G3/266 beige towers.) Imagine what they could do now.

    There are also no vectorizing compilers for the PPC 7400; the Metrowerks compiler will do inline AltiVec assembler, but it doesn't recognize vectorizable loops automatically and it doesn't support the lingua franca of scientific computing (i.e. Fortran).

    Some of what you are saying is Greek to me, but here's a link from the Project Appleseed site that seems to answer the need for a Mac OS Fortran compiler: the Absoft [absoft.com] Fortran 77 compiler.

    Rather just read the text-intensive abstract on the system? Appleseed Report [ucla.edu]


    --

  • 200MHz K6-2's aren't on the market anymore... a query on Pricewatch for 300MHz K6-2's turned up chips for 27 dollars... He had 11 processors... that's well under 300 dollars... I will use 100 processors, which would be 2700 bucks.

    Yeah, why not just build a bunch of individual K6 (or better yet, low-end Athlon) boxes, put them on 100Base-T and do a Beowulf cluster? Sounds to me like you could get similar or better performance for much less money.

    The only thing you *might* lose is the redundancy and reliability, but if you set up the boxes right that shouldn't be a problem either. Hell, you can duplicate their power reliability with a backyard generator from Sears and a bunch of APC UPSes.

  • 200MHz was the bus speed, not the processor speed.
  • I mean, I looked through the page, and I haven't found a single reference to open source and/or licensing of the software developed by Patmos. The whole site is quite small, the technical info very brief, no names, no addresses... well, I only have experience with biotech companies, but if this were one, I would be very suspicious.

    Regards,

    January

  • Sorry to split hairs, but some hairs need splitting. Many in the supercomputer industry have debated for a long time what the definition of supercomputer is. The answer they've come up with is beautifully vague and that is: the biggest computer you can build to solve a problem.

    If you have to ask the price, you aren't supercomputing (ie there is no price/performance, only performance).
  • Actually, no. I tried to do something that would qualify as hacking exactly once, and it didn't work. However, I'm a computer science student trying to specialize in networking. I hear lectures with titles like "secure computer systems"...

    In general, to be a good hacker, you need knowledge (about the system you're trying to break into), not computation power.

  • Actually, many of the most powerful supercomputers are nothing but an array of x86 chips. IIRC, the currently leading supercomputer, the Intel-Sandia ASCI Option Red, uses Pentium-II CPUs
  • No operating system simply "scales to 100 processors". All supercomputers can only really be used to their full potential on extraordinarily specialized tasks. The real supercomputers are usually a bit better in that regard, though, because they allow faster communication between nodes.
  • What I meant is that the operating system is no help at all in making a single program scale to 100 processors. Having 100 different processes run on 100 processors is still tricky but no fundamental problem.
  • Great, and then it'd only take you 20 Billion years to crack a 2048-bit RSA message instead of 100 Trillion...
  • Bullshit, pretty much. There is nothing in the way of hacking that you can do better with a "supercomputer" than with a cheap dime-a-dozen PC, except for password cracking, and that's not nearly as big a problem as you think.

    Besides, a computer like this one can't even be USED properly by standard software, because of the massive parallelization involved. Basically, you have to custom-write your program from scratch to really make use of the hardware, so speaking of Quake framerates in this context is just plain ridiculous.
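    To make "custom-write your program" concrete, here is a minimal C sketch of the kind of explicit restructuring a cluster needs, using MPI, which Beowulf-style clusters typically run; the work being split here (summing a range of numbers) is an invented example and has nothing to do with Patmos's software:

    #include <stdio.h>
    #include <mpi.h>

    /* Each node sums its own slice of the range; a message-passing
       reduction combines the partial sums on node 0. */
    int main(int argc, char **argv) {
        int rank, size;
        long i, n = 1000000;
        double local = 0.0, total = 0.0;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        /* split the iteration space across the nodes */
        for (i = rank; i < n; i += size)
            local += (double)i;

        MPI_Reduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM,
                   0, MPI_COMM_WORLD);

        if (rank == 0)
            printf("sum = %.0f\n", total);
        MPI_Finalize();
        return 0;
    }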

  • A troll's aim is to get responses. Which he can't achieve when no-one reads his stuff because it's moderated down. Therefore, being a troll on /. is anything but easy...
  • Someone who deliberately tries to provoke people into a flamewar.
  • Finally a good animated linux mascot. Now to make him sing and dance UF tunes ....
  • >I want to see what the big players (IBM, Sun, NEC, Fujitsu) have to comment [if you note the absence of SGI in the above list, it was intentional.]

    So I guess the company with 6 of the top 10 systems isn't big enough for ya, huh? From the Top 500, the top 10 systems are made by 4 manufacturers: Intel, IBM, SGI/Cray, and Hitachi. Those are what I would call the "Big players". (Well, not Intel). NEC and Fujitsu, I guess are also pretty big. But Sun?? Sorry - not yet in the supercomputing market. Take another look at the list and then tell me SGI isn't a "big player".

    Despite all perceptions to the contrary, SGI and Cray aren't dead yet!! Both companies (though at the moment we're one) have phenomenal new products coming out.

    I speak for myself, not my employer (SGI/Cray Research).

  • Any definition I've ever seen defines a supercomputer as any device that can execute one billion floating-point operations per second, i.e. 1 GFLOP/sec. According to all the specs and benchmarks I've been able to find on this topic, the Motorola PPC 7400 microprocessor (used in the Apple G4 computer) meets this requirement. Since the federal government placed export restrictions on these computers under the same guideline, it seems that we already have affordable supercomputers for under $4,000 (unless you order the supercool 22" flat panel display with it for $3,900). Even though those export restrictions have been relaxed, the G4 still cannot be exported to Russia, Iraq, Libya, or any other countries deemed unfriendly to the Western world.

    Gonzodoggy
  • I think you should consider becoming a troll. trolling is a satisfying way of putting your creativity to work. as an added bonus, your goal when trolling is to be moderated down! you can't lose! it rules! it rules!

    go to the troll forum [slashdot.org] for more information on how to sign up.


    thank you.
  • 1600x1200? You could probably do that with an overclocked Athlon and a Voodoo 5. If you even needed that.
  • You ain't seen nothing yet, that was a rather polite reply by /. standards.
  • At this point in time, I feel that the general public is highly unfit to receive the arrival of such a portentous computer system at neither financial nor psychological levels.

    Man, forget hackers. What pains me is the thought that, once these become widely available to the public, the vast majority of the lusers will use them exclusively to run Word and Solitaire.

  • Is 100k really affordable?
  • It seems odd to me that they are using 200MHz K6-2 chips; the cost of this chip is so low as to not even be a factor in the price for only 11 of them. To be a supercomputer I think it would need about 50 of those chips, but that would still be a mediocre supercomputer in my opinion :P
  • No... k, look, I am not affiliated with janet reno in any manner (and i am damn happy that i'm not) nor am i in charge of any of these companies deciding. and given how i just rhetorically ass-raped about how hackers could make no more use out of these computers, i don't really feel like saying anything. but my point was simply that: most ppl nowadays wouldn't really b up to this technology, which is a shame (and i don't want any1 to start saying I'M a shame, gimme a break, I don't work in a computer firm, and i'm not staring at a screen 24//7 yada yada)
  • oops. i made a mistake. i meant that i GOT rhetorically ass raped by whoever it was (i think it was brazil) that proved me wrong about hackers
  • Haha, that's pretty true. Oh well. I've learned one thing today that I didn't expect to know, thanx.
  • Hmmm... Good point. I have often looked at options and ideas similar to yours: manufacturing my own machine, the costs involved, and why some company machines are so overpriced. I can only think of the following arguments:

    1. Manufacturing, specific parts, and mass production add to the cost.
    2. Installation of the operating system, the price of hiring people to do so, etc.
    3. Publicity campaigns, or the cost of hiring folks to promote the product.
    4. Adding all of this up, the price increases substantially. However, the sale price would of course (and this is simple logic; I do not presume to know marketing laws or ideas, but this is a concept any geek (or jock) could grasp) have to be higher than the manufacturing cost; if not, how could you make a profit? This probably accounts for the bloated prices.
  • what/who is a troll?????
  • I stand corrected... Hmm obviously you must know QUITE A DEAL about hacking, eh?? :) j/k
  • Starbridge Systems claim to be alive and kicking, but something does not seem right on their site. In other words, they are high on hyperbole, but when it comes to specifics, such as independent benchmark tests, they are strangely silent. Also, two so-called "ground-breaking" seminars/presentations in Florida appear to have fallen flat... I see vapourware signs everywhere.
  • Since K6es aren't SMP-able, your chassis count needs to be == to your processor count, so:

    100 * 100 == 10,000.

    And I'd imagine you want more than 1.5 fiber net cards per proc? More like 4, at a low estimate?

    400 * 150 == 60,000.

    And if you can get a MB + case + hd for $100, I have a bridge to sell you. At best:

    $150 * 100 = 15,000

    So, with your admittedly low RAM counts, that's:

    10000 +
    60000 +
    15000
    ------

    85000 just in hardware....

    Knock yrself out....


    --
  • Thanks for the URL.

    It's not exactly the same concept since the Transmeta chips aren't gate-level reconfigurable computers, but the dynamic compilation stage seems to have close parallels in both products.

    I never did learn though whether Starbridge use layout caching in order not to have to recompile parts of the code already traversed previously. It sounds to me like this nice feature of Crusoe would be equally useful in dynamic RC designs like Starbridge's.

    Regarding CPU cores versus FPGA arrays, an FPGA like Xilinx's RC series (6200 onwards) can be regarded as just a core for a microcoded processor because layout control is performed by writing the layout info into a memory-mapped store, which in concept is no different to writing microcode to a conventional microcoded controller. It might be a bit difficult to identify an "instruction set" among all this funky layout data, but hey, when discussing concepts one has to be flexible. :-)
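    A purely hypothetical C sketch of that "memory-mapped store" idea; the base address, window size, and one-word-per-cell layout below are invented for illustration and do not match any real Xilinx register map:

    #include <stdint.h>
    #include <stddef.h>

    /* invented numbers, for illustration only */
    #define FPGA_CFG_BASE  ((volatile uint32_t *)0x40000000)
    #define FPGA_CELLS     256

    /* Loading a layout is conceptually just a loop of stores into the
       memory-mapped configuration window, much like loading a
       microcode store in a conventional microcoded controller. */
    static void load_layout(const uint32_t *layout, size_t ncells) {
        volatile uint32_t *cfg = FPGA_CFG_BASE;
        size_t i;
        for (i = 0; i < ncells && i < FPGA_CELLS; i++)
            cfg[i] = layout[i];   /* one configuration word per logic cell */
    }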
  • Chassis count was equal to CPU count; the 32 is the price of a chassis.
    --

  • >400 * 150 == 60,000.
    >And if you can get a MB + case + hd for $100, I have a bridge to sell you. At best:

    How about a hub? Gadzoox and 3Com have them out.
    --
  • One of the serious problems with massively parallel supercomputers is heat dissipation. I'll leave the rest of the calculation as an exercise for the reader.


    ----
  • Hopefully Linux will benefit from the association with quality, high-powered, serious computing applications at affordable licensing prices - while those other guys fill the mass market for wealthy customers who need a superwastey bloto-proactive 3D VR wiz to help them figure out 'double clicking'.

    The Scarlet Pimpernel
  • I haven't heard anything. To tell the truth, I suspected fraud... that it was a bogus claim. Now.. it would be nice if it wasn't.. but they didn't give *any* technical specs... just made some bold claims.
  • The url is www.starbridgesystems.com [starbridgesystems.com] and they appear to be alive, well, and selling systems.

    Though I wouldn't call this the same concept as Transmeta. FPGAs are fully programmable and have no real core; it all lies outside the FPGA.
  • 200MHz K6-2's aren't on the market anymore... a query on Pricewatch for 300MHz K6-2's turned up chips for 27 dollars... He had 11 processors... that's well under 300 dollars.

    Much to the dismay of those tearing the system apart based on this statement, the BUS is 200MHz, and last I checked the Athlon ran at that speed (or rather a 100MHz bus clocked on both edges). Now, if you look at the statement on the site where they mention a 200MHz bus, the only chipset this system could be using is an Athlon one.

  • Well, that may be true, but Linux does not scale to 100 processors. Sure, you could put 100 different machines into a Beowulf cluster, but that would hardly be 100x as fast as a single machine, except for extraordinarily specialized tasks suited to parallel, networked processing.

    -----------

    "You can't shake the Devil's hand and say you're only kidding."

  • Today's PCs have enormous power already for most everything a person would need to do, individually. These affordable supercomputers are not meant for that customer base.

    The real benefits are going to be for small businesses, small universities, high schools and other organizations like these, which will begin to see the need for more processing power than an individual PC can handle. Supporting a large user base, applications run over a network (the trend back towards consoles hooked to a mainframe), and so on. These are the places where this type of machine will really be useful.

    The organizations that purchase these cheap supercomputers will be looking especially at the tradeoffs of getting one supercomputer with 50 console stations vs. 50 high-end PCs.

    It's a fantastic leap in providing computing power to everyone, in my honest opinion.


    ----
    Lyell E. Haynes

  • On the other hand, if you put 100 average computer CPUs to work together in a Beowulf cluster, it would be 100 times as fast as your average computer, thus matching your definition of the term "supercomputer".

    Please take a look at the Top 500 list of supercomputers (forgot the link, use a search engine)...

    The real news in this story is the target price of $100K. Linux/Alpha-based supercomputers are old news and have been around quite some time now... A Top 500 supercomputer for only a few thousand US$ would definitely shake the grounds of this market.

    I want to see what the big players (IBM, Sun, NEC, Fujitsu) have to comment. [If you note the absence of SGI in the above list, it was intentional.]

    --
  • 200MHz K6-2's never WERE on the market. The K6-2 started at 300. The K6 was available from 166-300, but was replaced fairly quickly by the K6-2 at 300, 333, and 350.

    I think the 200MHz K6-2 must be a typo. I think these are actually 450MHz K6-2's.
  • K6-2/450's are 100MHz bus. Other than that, your post sounds accurate.
  • A Few points:

    - CNN was obviously erroneous about the Patmos using AMD K6's instead of AMD Athlons.
    -- A 200MHz bus and the word "cheap" obviously read like a K6 to the semi-technically knowledgeable author. In fact (as has been pointed out here several times), the only 200MHz bus available for a PC is the Athlon's.
    -- They mentioned Patmos reaching 1GHz within the year. The K6 does not have the potential to reach this speed; in fact, few processors in the world other than the Athlon are currently capable of reaching this goal within a year, especially without requiring a new motherboard, which obviously would be bad for Patmos, since I assume they provide _some_ form of customization of their motherboard. Or at the very least they have carefully selected a board, and would not think kindly of choosing a new one so soon after their initial product release.

    - The unqualified use of the word supercomputer.

    I've noticed several posts from people thinking that they could design a "supercomputer" even cheaper than Patmos. But really, all they could achieve is a "theoretically fast" machine. A supercomputer is the sum of all its parts, and therefore the weakest link can break the chain.

    As a disclaimer, I am not formally trained in supercomputer concepts; much of this is based on my experience and common sense (which may differ from horse to horse).

    A supercomputer must have top-tier performance (obviously), must have data integrity (you don't spend half a mil just to get a core dump or system freeze), and reliability (24/7 uptime while performing its work). It should also be scalable (growing with your company or the task as is fiscally justifiable).

    --Simple points: when selling a supercomputer, you must choose high-quality parts (or at least make things n-way redundant).

    In my experience, IDE drives don't cut the mustard, due to their high-volume, minimal-quality, price-focused nature (skimp on a heat sink or shock absorber to save 5 bucks per drive, etc.). When you buy IDE, you think disk space and low price. When you buy SCSI, you think of performance and quality (and usually expense). Thus they are designed around that marketing paradigm.

    An IDE drive also uses an IDE controller, and is thus inherently sequential. A SCSI device can queue multiple independent requests, and so can use the disk geometry to determine an optimal seek path. Additionally, due to the paradigm above, more cache and higher rotational speeds go into SCSI devices (not that they couldn't be applied to IDE... but why should they?).

    As for the network, some posts referred to hubs and Ethernet. Ethernet does _not_ scale well. Sure, you can get a faster/more intelligent hub, but you never achieve the maximal theoretical bandwidth. I'm not completely sure of the network technology used here, but it seems to be peer-to-peer and bi-directional (to facilitate rapid acq's).

    --Memory. This is really the key to a good supercomputer design. SGI made use of wide 256-bit multi-ported memory busses with interleaved memory (16 consecutive bytes were segmented across 4 memory controllers, so linear reads were faster, AND independent concurrent accesses got a statistical speed-up). Of course a 256-bit memory bus is expensive, especially in a multi-CPU configuration. Sun's Starfire, for example, has up to 64 fully interconnected CPUs (I don't recall the bus width). This required a humongous backplane with added cost.
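    A small C sketch of the interleaving idea just described; the 4-bytes-per-controller granularity is only one plausible way to spread 16 consecutive bytes across 4 controllers, not a statement about SGI's actual hardware:

    #include <stdio.h>

    /* With 16 consecutive bytes spread across 4 controllers, 4 bytes
       apiece, bits [3:2] of the address pick the controller, so a
       linear read cycles through all four of them. */
    static int controller_for(unsigned long addr) {
        return (int)((addr >> 2) & 0x3);
    }

    int main(void) {
        unsigned long addr;
        for (addr = 0; addr < 32; addr += 4)
            printf("bytes %2lu-%2lu -> controller %d\n",
                   addr, addr + 3, controller_for(addr));
        return 0;
    }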

    AFAIK, the Athlon uses regular SDRAM (and a cost-effective solution would have made use of off-the-shelf parts). SDRAM is nicer than older PC memory in that its pipelining allows multiple concurrent memory-row accesses within a chip. Several memory addresses within the same row can be in the pipeline, and up to 2 rows can have elements in the pipeline. This is a more sophisticated approach than interleaved memory, BUT it introduces a longer/slower pipeline. RAMBUS furthers this concept by narrowing the bus width and furthering the segmentation. It allows greater concurrency, but latency (and thus linear logical dependency) is increased.

    RAMBUS's theory of high latency, high concurrency benefits non-linear programming, such as Itanium's (Intel's Merced) deep speculative memory prefetching, or Alpha's and Sun's multi-threaded processors (where cache misses cause a rapid internal thread-level context switch, thus hiding the latency). Existing x86 architectures, however, cannot fully take advantage of such concurrency, and the net effect is slower execution time for linearly dependent algorithms (non-local/consistent branching, and non-parallelizable math calculations). In this case, making use of high-speed/low-latency interleaved EDO (as is/was done in several graphics boards) seems a better alternative (but hasn't come to pass in mainstream motherboards).

    --Multi-CPU. This is an interesting topic. Multiple CPUs can connect to the same memory (with large internal caches), or can use a NUMA architecture with shared, segmented memory (isolated, with interconnecting buses). Or they can be autonomous units connected via a network. There are pros and cons to each mechanism. The last requires the most redundant hardware (which is actually good in terms of hot-pluggability) and has the slowest inter-CPU communication. It thus works well in message-passing systems, as opposed to symmetric decomposition of large data arrays (e.g. parallel vector processors). Personally, I like the NUMA approach the best, but it requires proprietary hardware, and hot-pluggability would have to take the form of a VME bus, etc.

    It would seem that the approach here is multi-CPU (2 or 4) performing a single task. Concurrent threads are distributed across machines in a message-passing system (hopefully with minimal data sharing). The AI controller probably handles messaging and arbitration, in addition to the advertised load balancing. The multiple CPUs on a board are most likely for redundancy. My guess is that 2 or 3 CPUs are used for user-thread redundancy and 1 or 2 CPUs are dedicated to OS operations (using spin loops in the user threads). Thus minimal context switches are required, and minimal memory bandwidth is used (since only one virtual CPU is ever accessing memory at a time, though 2 CPUs are simultaneously requesting that information). They may actually allow the Linux scheduler to rotate procs, but as I've learned, this isn't Linux's strong point. A single-tasked CPU is a happy CPU.

    I know Sun has optimizations for context switching (keeping the most pertinent info within the CPU, along with a unified virtual memory model, as opposed to the offset-based x86 virtual memory model). Unfortunately this is offset by register window swapping, but such context-switch-centric processors would allow for more efficient concurrent operations such as multi-threaded apps (such as Java. Before you laugh, one application of this low-cost supercomputer is web servers, and servlets are an emerging technology; people will ask how they can make existing code run faster in a short period of time).

    -Concept. ASICs/FPGAs. SGI had the concept of making a simple, cheap, reliable, and fast logic CPU, then coupling it with extremely proprietary logic/processing chips that offload the code logic. Combinational logic is faster than any sequential logic, though much more prone to bugs and higher production costs. High-performance reprogrammable FPGAs could help the industry, since the hardware could maintain high volume and low cost (as with current CPUs). Thus you could make PCI/AGP expansion boards that handle load balancing, message passing, Java extensions, OS operations, etc.

    I'm sure their AI logic is done similarly, but it's a completely separate box; my thought would be that the "boxen" would have these expansion boards, and the customer could request optimizers for, say, the web, or weather calculation, chess designs, what have you. The goal being that these expansion boards become as common as modern graphics accelerators, modems, sound cards, etc., without all the trouble of designing the hardware of those boards.

    -Michael
  • They currently use AMD K6-2's in their machines, not Athlons. But they say the 1-Gig Athlon-based machines are coming soon.
  • Well, I won't claim to be the world's expert on this, but I do work with real supercomputers and (to a lesser extent) clusters on a daily basis (disclaimer: I work for SGI at the old Cray Research, though I don't speak for them).

    I too am very skeptical of this. From what I can tell of their web page, they don't scale very far, since they only have 8 compute nodes. Even assuming 4 processors per node (and I think they only have 1), that would only be 32 processors. Granted, for 99.9% of the users out there, that is a whole boatload of processors. However, our large systems go to 512 processors (1024? Maybe before too long). The Cray T3E goes up to 2048p. Even our clustering product generally has more than that. The cluster we recently installed at Ohio Supercomputer Center was 256 compute nodes, I believe. I just don't see how 8 to 32 processors is going to compete with that. Their reliability looks pretty solid, though.

    Another problem, in general, with x86 "supercomputing" is that a lot of scientific code out there likes 64 bit math. Merced^H^H^H^H^H^HItanium, MIPS, Alpha, and the Cray Vector processors have a nice lead there.

    Lastly, someone earlier in this thread said something to the effect of "wouldn't it be great to make the Top 500 for..." In short, I don't consider this to be a supercomputer. An HA cluster, maybe. But it's hard to tell, since their site is pretty sparse on technical details. I am *very* suspicious of a "supercomputer" company that doesn't post benchmarks. One of Seymour Cray's rules of supercomputing is that your machine should be the fastest in the world at *something*. Lastly, they need to learn how to put in "'s instead of ?'s. Their HTML is inept at best...

  • I suggest you take another look at Irix. We regularly have 128 processor systems internally that are running everything from vi to MPI simulations to software compiles all at the same time. I doubt that we are using the system at 100% of its potential, but that's because a lot of what we need (interactive jobs and compiles) are I/O bound. So yes, there are operating systems that "just scale to 100 processors." Not that I am saying it's easy. My group (long prior to my joining it) put a *ton* of work into making Irix scale well on large single system images. We now have a 512 processor system running a single copy of the OS.

    I speak for myself, not for SGI.

  • Oxford University were doing a lot of work with FPGA systems but they were configured per application using a modified compiler. Their technique was something like:

    // code in this region runs normally on the host CPU
    ...
    #define OPTIMIZE
    ...
    // code in this region is compiled to the FPGA by the modified compiler
    ...
    #undef OPTIMIZE

    They were quoting 20x speed increases compared to a standard Pentium. The downside is that you had to know in advance which bits of code required the speed increase.
  • For $99K it's nice to know that the OS is open source.

    Patmos's Limbix software, based on the Linux operating system, monitors and manages workloads using neural networking and fuzzy logic, two artificial intelligence methods.

    This would be really fun to have at the house. More power! Argh Argh Argh!

    Never knock on Death's door:

  • You're missing one crucial point on this. If you read the story carefully, it says "Fibre Channel bus", not "gigabit Ethernet". Fibre Channel is used in SANs (it's just like SCSI, except the hard drives can be a mile apart and it runs at 1 gigabit, not 160 megabit) (makes Ultra66 look like MFM), and if you look on Pricewatch, a Fibre Channel card will run you from $350 to $5000 for EACH COMPUTER. This does NOT include the cost of the cabling, or the cost of the Fibre Channel hard drives. If you look for prices on Fibre Channel HARD DRIVES, a single 18 gig 10,000 rpm drive will run you a hair over $1000. PLUS you have the cost of the fuzzy logic development. Have you ever heard of a supercomputer that can automatically detect a failing node and reroute the traffic and load all by itself, seamlessly? Seems pretty cheap when you look at it this way.
  • It would be if you got something for the money! Come on. 11 K6-2/200's for $99,000? Get real. I can almost overclock an Athlon that fast! Those chips can't cost over $25.00 each, on a $70.00 motherboard! I can easily build a 16-processor 800MHz Athlon system with 8GB of ECC RAM, 500+GB of process storage, and a 1.2GB/s SAN to connect it all together. That is the system I can build for $99,000. It runs the same parallel code that his would run, and I get a theoretical top of 12.8 GFLOPS, much more if the programs are optimized for the Athlon.
  • 1 2 3 testing testing

    is it working????
    (i'm tryin ta test around with it)

    ....


    Thanx! (if it worked)
  • They don't describe the CPU as being a 200MHz part; in the absence of real information, they never actually indicate what kind of CPU the system uses.

    What they describe as being "200MHz" is the bus speed, and that is a fairly different matter. If you look at those AMD K6 chips, they're connecting to motherboards that have bus speeds of (in these inflated days!) either 66MHz or 100MHz. That's rather less than 200MHz.

    The bus technology being billed as "200MHz" is the Alpha EV6 bus, which suggests that the CPUs in these systems are either:

    • Compaq Alpha, or
    • AMD Athlon.
    I'd sort of anticipate the latter, but it is surprising that they are not trumpeting their use of whichever CPU they are using.

    The paucity of solid technical information and the proliferation of TM-this and TM-that is a bit distressing.

  • by Troy Baer ( 1395 ) on Friday January 21, 2000 @07:57AM (#1351161) Homepage

    AMD processors with SMP Linux, what a joke. Can you say PowerPC 7400 G4's with OS 10 or another UNIX variant?

    If the version of OS X Server I saw last spring is any indication, OS 10 is a total non-competitor. It had serious problems even compiling fairly generic ANSI C code (lmbench, MPICH).

    And the G4 is not all it's cracked up to be. There's not enough memory bandwidth on the PC100 bus to sustain anything close to the FP rates Motorola and Apple like to point at. There are also no vectorizing compilers for the PPC 7400; the Metrowerks compiler will do inline AltiVec assembler, but it doesn't recognize vectorizable loops automatically and it doesn't support the lingua franca of scientific computing (i.e. Fortran).

    --Troy
  • by Morgaine ( 4316 ) on Friday January 21, 2000 @03:37AM (#1351162)
    Several months ago Slashdot featured a supercomputer-on-a-desktop that used on-the-fly reprogrammable FPGAs (Xilinx chips almost certainly) to gain massive speedup over conventional microprocessors. It featured a dynamic pre-compilation stage that fulfils a function very similar to that of the Code Morphing Software in the Transmeta products. (This general area is called Reconfigurable Computing, RC.)

    Has anyone heard any more about the company that was manufacturing the supercomputers? I seem to have lost the URL.
  • by larien ( 5608 ) on Friday January 21, 2000 @03:27AM (#1351163) Homepage Journal
    I think Linux is going to have to make sure that all its scalability issues are sorted out before this really kicks off. As the Mindcraft survey showed (yes, I know the tests were flawed, but the underlying problems are still there), Linux has some scalability problems. Admittedly, these are being fixed, but until they are, Solaris, IRIX and even *BSD are going to be better options for large-scale "supercomputers".
    --
  • by doomy ( 7461 ) on Friday January 21, 2000 @03:37AM (#1351164) Homepage Journal
    200MHz K6-2's aren't on the market anymore... a query on Pricewatch for 300MHz K6-2's turned up chips for 27 dollars... He had 11 processors... that's well under 300 dollars.

    I will use 100 processors, which would be 2700 bucks.

    Chassis -- 32 * 100 -- 3200
    Fiber net cards -- 150 * 100 -- 15000
    RAM -- 64 (say 64MB is 64 bucks) * 100 -- 6400
    HD/MB & misc (need not be fast and need not be that much) -- 100 * 100 -- 10000

    Total == USD $37,300 (37% of what he quoted). And this is with 100 300MHz K6-2's instead of 11 200MHz K6-2's.

    All prices quoted from Pricewatch's listings (especially CPU & NIC).
    --
  • by yellowstone ( 62484 ) on Friday January 21, 2000 @06:25AM (#1351165) Homepage Journal
    Only if you absolutely need the processing power, and need it today.

    A (true) cautionary tale: company A develops some software for company B. Company B provides several high-end big money machines (multiple pentiums, hot-swappable SCSI raid array, rack mounted, etc).

    The development process (which goes through a couple phases) takes more than two years. When the project is done, Company B will probably abandon the servers because (relative to what's available today) the machines are no longer worth the shipping costs it would take to come and get them.

    The moral of the story is a corollary to Moore's law [intel.com]: the power of today's high-end supercomputer will very soon be matched by tomorrow's mid-range workstation (and then by the low-end home system, and then by embedded chips...)

    My Palm VII has more RAM in it than the mainframe I wrote code for as an undergrad <mumble> years ago...

  • by JoeDecker ( 102304 ) on Friday January 21, 2000 @10:42AM (#1351166)
    PATMOS Perpetua attempts to mimic the way the human brain codes and processes information, using multiple nodes arranged in layers and inter-connected by fibre optic channels. The nodes, which PATMOS calls NBoxen, process input data locally and pass the signals to the NBoxen to which they are connected.

    The NBoxen in a Perpetua are inter-connected and are part of a network that has a minimum of three layers consisting of an input layer (explicate), a middle layer (implicate) and an output layer (explicate). The middle layer (implicate) is hidden and cannot be directly observed from outside. The input and output nodes (Limbix) shape the specific problem.

    The cool phenomenon of a PATMOS Perpetua is that its remarkable neural network operates in the hidden or "implicate" NBoxen, which are the mathematical forum in which the system inter-relates the "explicate" input and output signals. This is the forum in which the decisive calculations operate, by which complex, non-linear relationships are learned and the solutions evolve.

    The "implicate" layer is able to deal with the very "noisy" data as it searches for patterns hidden in the noise. Because of the configuration of Perpetua and the presence of Limbix controllers, the system can deal with both the data which responds to hard laws and the end data which is not inimitable to hard laws because the underlying process is unknown.

    It is my view that there would be no golden world of Linux without Richard Stallman (RMS). I and my company support the Free Software Foundation, and believe that RMS should be nominated for the Nobel Prize. To tread on the principles laid down by RMS would, in my view, be anti-social behavior. In the near future we will be posting our source code under the GPL.

    When we pushed the button on the spreadsheet and realized that we could sell super computers for the prices that came up, we had to rush to the side of our Chief Financial Officer, a Harvard MBA, and hold him up.

    While we appreciate the exuberance of the press, we must tell you they occasionally make critical errors. One of these concerns the speed of the CPUs in our device. Indeed, our choice is the Athlon. For those who question the scalability of Linux, we agree with you. Take a look at DSI at Indiana University. While we would like to go into greater detail on this matter, we must abide by certain biological principles regarding gestation. We have a lot more ideas about a co-processor option, which may allow us all to have our cake and eat it too. Finally, certain statements in the press bring a smile to my face; "Super Computers for the Rest of Us" is one such statement.

    Perpetua is a supercomputer that can act as a high-availability server, and we do go past 8 nodes. In fact, we can make a Perpetua with any number of nodes you want. I await Sledgehammer with bated breath.

    Sincerely,
    James A. Gatzka
    CEO Patmos International Corp.

  • by Troy Baer ( 1395 ) on Friday January 21, 2000 @03:48AM (#1351167) Homepage

    (Disclaimer: I work for Ohio Supercomputer Center but don't speak for them, yada yada yada...)

    This seems to be aimed more at the high availability (HA) market than the high performance computing (HPC) market. Comparing with a Compaq Himalaya is *not* a way to win points with HPC centers, because HPC centers don't buy Himalayas -- they buy mostly various breeds of Crays, SGI Origins, and IBM SPs, with a smattering of Beowulf clusters and large Sun configurations as well. The Patmos site also doesn't talk about floating point performance, which the HPC centers consider critical.

    The Patmos site never really describes their systems as "supercomputers" (although the phrase "super system" is used once or twice), so this seems like bad reporting and/or a misunderstanding on CNN's part of what a supercomputer really is.

    (In case you're wondering what I consider a supercomputer, I personally think a super is anything capable of multiple GFLOPS that is used for scientific computations.)

    --Troy
  • by The Dodger ( 10689 ) on Friday January 21, 2000 @04:00AM (#1351168) Homepage

    I did a bit of looking into using Linux and new technologies like fibre channel, etc., to create high performance, high availability load-balanced, infinitely-scalable systems, as an idea for a company. Unfortunately, the venture capitalists I approached didn't seem to like the idea that there was no Intellectual Property involved. Perhaps it's something unique to British and Irish VCs...

    Anyway, my point is I'm not a virgin when it comes to using this sort of technology for these sorts of purposes. I've had a quick look at the Patmos website, but detailed information seems to be in pretty short supply. They should definitely have some form of benchmarks available for viewing if they're describing this thing as a supercomputer, but they don't appear to have any.

    In fact, I'm trying to figure out why they're describing this as a supercomputer, because it seems to me that the way they've set it up is more like an HA cluster.

    I've got to admit that when I see a company selling what they describe as supercomputers, but which are really just Linux clusters, with little or no technical detail forthcoming, I get skeptical.

    YMMV. Any HPC/HA/Clustering experts care to give an informed opinion?

    The Dodger

  • by Durinia ( 72612 ) on Friday January 21, 2000 @03:31AM (#1351169)
    I have a lot of trouble with this being called a "supercomputer". This term is thrown around a lot these days, and most of the time, it's not deserved.

    A "supercomputer", by a more professional definition is a computer that runs at least 100 times faster than your "average" computer. Often, its more like 1000. Mainstream news agencies like CNN, CNET, etc. seem to like to use this word for just about anything more than 8 processors. SGI doesn't even consider its Origin line to be a supercomputer until it passes at least 32 processors.

    As far as this product goes, I think it's got a good place in the dedicated server business, and possibly some low end batch computation. I do have to admit, using AI concepts for system monitoring was a pretty neat trick!
