Affordable Supercomputers 117
Brian writes "CNN Online has a story on a company that has introduced supercomputers for under $100,000, and they hope to have four-layer supercomputers for under $4,000 before long. The computers use AMD processors and, according to the company's Web site, they come running Linux. I can't wait to add one of these to my collection!" As it happens, we've heard about Patmos before; check out an older story here.
Here's another supercomputer that uses Linux... (Score:1)
Somebody showed me this UK site a few days ago: Applied Dataflow Research. [dataflow-research.co.uk]
There's a lot of interesting reading about dataflow architectures and benchmarking, and they say they'll port Linux to their computer! I get the impression that it's still in development, but it looks quite interesting.
Re:AMD-based supercomputers (Score:1)
Re:Crusoe for supercomputers? (Score:1)
What you care about is performance/watt. This is where the G4 starts to look really interesting: 1 gigaflop on a processor that uses 5 watts is pretty damn good. That's sustained performance on an FFT, not peak. (Peak is something like 3.2 Gflops on a 400 MHz G4.)
For some interesting stuff on performance/watt see systemresearch.drgw.net [drgw.net]
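For scale, here's a back-of-the-envelope sketch of what "1 Gflop sustained on an FFT" means, using the common 5·N·log2(N) operation-count convention. The FFT size below is an assumption for illustration, not something stated in the post:

```python
import math

# Rough FLOP count for an N-point complex FFT, by the usual 5*N*log2(N) convention.
def fft_flops(n):
    return 5 * n * math.log2(n)

# Assume a 1024-point FFT (the actual benchmark size is NOT stated above).
n = 1024
flops_per_fft = fft_flops(n)
ffts_per_second = 1e9 / flops_per_fft   # rate needed to sustain 1 Gflop/s

# Performance per watt for the figures quoted above (1 Gflop/s sustained, 5 W):
sustained_gflops, watts = 1.0, 5.0
print(f"{flops_per_fft:.0f} flops per FFT")
print(f"{ffts_per_second:.0f} FFTs/s to sustain 1 Gflop/s")
print(f"{sustained_gflops / watts:.2f} Gflop/s per watt")
```

At those numbers, sustaining a gigaflop means churning through tens of thousands of FFTs per second, which is why memory bandwidth (not just peak vector throughput) matters.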
Re:Not really a supercomputer, IMHO... (Score:1)
This seems to be aimed more at the high availability (HA) market than the high performance computing (HPC) market.
Agreed. It COULD be used as a small Beowulf cluster but doesn't appear to be set up that way out of the box. At $99K it's a damned expensive Beowulf.
As for the neural net blah blah blah, I can't see how that gains much over more conventional load balancing software. The choice of K6 shows that they really don't understand what supercomputing is all about. Athlon would have been a much better choice for an AMD CPU.
Sadly, there's an ocean of snake oil in the cheap supercomputer market right about now. I see MANY systems that are supposedly 'next generation' Beowulf that are really a rackmount Beowulf with a (marginally useful) GUI system monitor.
Re:Crusoe for supercomputers? (Score:1)
It makes sense. If you can run at an order of magnitude lower power, you should be able to scale the performance higher per board.
Imagine a motherboard with 16 or 32 cheap CPUs running on a workstation. Yummie.
Re:I don't back it (super computers)... Yet... (Score:1)
The only thing fascists like you would accomplish, besides destroying our freedom, is to drive personal supercomputer production out of the USA.
Re:I don't back it (super computers)... Yet... (Score:1)
Perhaps that's the same?
So far, all he has managed is killing my inetd with overflow attempts. There are a lot of crack scripts with comments in Portuguese on the web.
Re:To whoever moderated me (Score:1)
That's a universal rule of any public forum, whether it be Usenet, Fidonet, Slashdot, or butting into a large conversation at a party.
BTW, if you set your posting option to "plain old text" you won't have to worry about formatting markup; WYSIWYT (What You See Is What You Typed).
Re:Huh (Score:1)
Re:I don't back it (super computers)... Yet... (Score:1)
Attorney General Janet Reno, or her *fill in country here* equivalent, deciding for you whether you "need" that much computing power?
Government psychological profiling, deciding who's a "likely hacker"?
What market is this aimed at? (Score:1)
to pop on a computer system. "Supercomputers for everyone?" Highly unlikely. More like supercomputers for businesses running parallel applications.
George Lee
How useful is this? (Score:1)
But both of these things can be done by someone constructing their own system. Yes, it would raise the price, but to $100K? I'm not sure.
Further, it looks as if it is an 8-node cluster. That isn't a very large cluster. It would be nice to think that this would scale similarly to other supercomputers, but I can't see the performance of any network approaching the speed of something like a crossbar switch anytime soon.
Perhaps I'm missing something, but at $100K I think it would be safer to build your own cluster, or purchase a proven machine in a similar price range (such as the SGI Origin).
It is nice they have some packaged software but is it better than what we can get at http://www.beowulf-underground.org/ [beowulf-underground.org]?
Perhaps, perhaps not. I hope to see what people who have used this machine, other supercomputers AND Beowulfs have to say about it; perhaps then we can decide if this is a good value.
Doesn't http://www.paralline.com/ [paralline.com] sell similar machines anyway?
For now I just don't see the value in it.
But that's just my opinion; I could be wrong.
Re:Scalability (Score:1)
Re:Scalability (Score:1)
Re:Over bloated price? (Score:1)
Patmos mentions a 200 MHz bus on their page -- they do not mention WHAT runs at 200, however.
CNN says they run on 200 MHz K6-2 chips. This is not so powerful, considering there are many options out there that run at higher speeds and have better FP performance.
PC Week says each PC on the rack runs a 450 MHz K6-2 and the Limbix controllers run on 750 MHz Athlons. They mention the Fibre Channel network runs at 200 MHz with a 1.065 Gbps data rate.
So the FC connections are 200 MHz, and the Limbix boxes run a 200 MHz bus, shared with the K6-2 450 servers running at 66 MHz bus speeds.
I would have built it with all athlons, if given the choice.
Re:Affordable? (Score:1)
hell yeah, it's affordable.
Re:Scalability (Score:1)
Though I would like to see one of these with 4-way SMP Athlons on each node. That would be killer! It would get a bit hot, though.
Re:Over bloated price? (Score:1)
My mistake. Thanks!
Re:Supercomputer??? (Score:1)
I do agree with your assessment: it would have to be at least 50-100 times faster before I'd consider it "super"... a Cray Y-MP ain't even that super anymore...
Re:Not really a supercomputer, IMHO... (Score:1)
I have an acquaintance with a Beowulf cluster of PIIIs or something like that. He says that (running the same plane-wave density-functional software I use) he gets about half the speed he gets using an Alpha with about the same clock speed. And that is running real-world (at least for me) software, not just some meaningless benchmark. Of course, your mileage will vary. And for some reason he is having trouble with the Myrinet that makes MPI so slow as to be worthless. :(
Re:Not really a supercomputer, IMHO... (Score:1)
Actually, the first paragraph of the Patmos site reads:
Re:To whoever moderated me (Score:1)
Example:
Now, sit back, and watch Mr. Christiansen come running....
But that's just one example. Red Pen would similarly be easy to troll for....
You just gotta know your audience.
I think I'll check that "No Score +1 Bonus" box (seems like I'm always doing that....)
Re:mmmmmmm..AMD (Score:1)
----
Lyell E. Haynes
AMD-based supercomputers (Score:1)
Right now, I don't think you can buy anything cheaper than a K6-2, except perhaps an i486 (which is an order of magnitude slower...).
I wonder if you could build a supercomputer of 6510s. That would be cool. Send all your old Commodore 64s and 128s to me, please.
To whoever was moderated (Score:1)
I do not know if this applies to
This is hackerzone, and if you put forward interesting thoughts while exposing a lack of technical knowledge, you're more likely to be moderated "Offtopic" than "Interesting". There's a need for a "JonKatz" category here. Not that I am in the know technically, but then I do not post often either.
My suggestion would be to get over it, read the forum, see if you like it, then participate actively.
Re:Supercomputer??? (Score:1)
Mark Duell
Yeah, my Jetta's a Super Computer (Re:Supercomputer???) (Score:1)
Gigabit Ethernet and 8 Athlons running Linux in an enclosed rackmount case; I don't see how this is worth more than $35K. They don't say how much RAM, if any, is in these boxes, or whether it's RDRAM or what. All they do is use a lot of buzzterms and a bad template-based HTML system (they seem to have optimised this page for IE too; disgusting), and then compare to other equally overpriced boxes from Compaq, without stating benchmarks or a component-per-component breakdown. Slashdot editors, was this REALLY worth an article? Instead of business people like this who are out to exploit the Linux market, check out www.beowulf-underground.org, www.beowulf.org and the Beowulf mailing list to find reputable vendors... or look at my user info for the URL of a decent company (not to boast about the place I work for, but at least we're not prone to marketing lies).
Re:Affordable? (Score:1)
Re:Affordable? (Score:1)
Re:Linux Powered (Score:1)
Unfortunately, they almost certainly developed their own code to do most of the really interesting stuff.
Another wrinkle: essentially we would have to wait for someone inclined to redistribute the source to buy the machine. I don't think they are under any requirement to distribute it on request. Take this as a request for information; I'm not entirely sure whether that's true.
K6's do SMP (Score:1)
BTW, as mentioned by someone else, they do not mention using K6s on the site. They do mention a 200 MHz bus. Looks like CNN screwed the pooch.
Re:Huh (Score:1)
Check out Project Appleseed [ucla.edu] for an example of a MacOS cluster supercomputer.
Yes, it's getting a bit old now (G3/266 beige towers.) Imagine what they could do now.
There are also no vectorizing compilers for the PPC 7400; the Metrowerks compiler will do inline AltiVec assembler, but it doesn't recognize vectorizable loops automatically, and it doesn't support the lingua franca of scientific computing (i.e. Fortran).
Some of what you are saying is Greek to me, but here's a link from the Project Appleseed site that seems to answer the need for a Mac OS Fortran compiler: Absoft [absoft.com] Fortran 77 compiler.
Rather just read the text-intensive abstract on the system? Appleseed Report [ucla.edu]
--
Re:Over bloated price? (Score:1)
Yeah, why not just build a bunch of individual K6 (or better yet, low-end Athlon) boxes, put them on 100base-T and make a Beowulf cluster? Sounds to me like you could get similar/better performance for much less money.
The only thing you *might* lose is the redundancy and reliability, but if you set up the boxes right that should not be a prob either. Hell, you can duplicate their power reliability with a backyard generator from Sears and a bunch of APC UPSes.
Re:Over bloated price? (Score:1)
...and where are the sources?! (Score:1)
Regards,
January
'taint supercomputing (Score:1)
If you have to ask the price, you aren't supercomputing (i.e., there is no price/performance, only performance).
Re:I don't back it (super computers)... Yet... (Score:1)
In general, to be a good hacker, you need knowledge (about the system you're trying to break into), not computational power.
Re:Not really a supercomputer, IMHO... (Score:1)
Re:Supercomputer??? (Score:1)
Re:Supercomputer??? (Score:1)
Re:I don't back it (super computers)... Yet... (Score:1)
Re:I don't back it (super computers)... Yet... (Score:1)
Besides, any computer like this one can't even be USED properly by standard software, because of the massive parallelization involved. Basically, you have to custom-write your program from scratch to really make use of the hardware; so speaking of Quake framerates in this context is just plain ridiculous.
Re:To whoever moderated me (Score:1)
Re:To whoever moderated me (Score:1)
Get that penguin icon! (Score:1)
Re:Supercomputer??? (Score:1)
So I guess the company with 6 of the top 10 systems isn't big enough for ya, huh? From the Top 500, the top 10 systems are made by 4 manufacturers: Intel, IBM, SGI/Cray, and Hitachi. Those are what I would call the "Big players". (Well, not Intel). NEC and Fujitsu, I guess are also pretty big. But Sun?? Sorry - not yet in the supercomputing market. Take another look at the list and then tell me SGI isn't a "big player".
Despite all perceptions to the contrary, SGI and Cray aren't dead yet!! Both companies (though at the moment we're one) have phenomenal new products coming out.
I speak for myself, not my employer (SGI/Cray Research).
Re:I thought the new macs were supercomputers? (Score:1)
Re:To whoever moderated me (Score:1)
go to the troll forum [slashdot.org] for more information on how to sign up.
thank you.
Re:mmmmmmm..AMD (Score:1)
Re:I don't back it (super computers)... Yet... (Score:1)
Re:I don't back it (super computers)... Yet... (Score:1)
At this point in time, I feel that the general public is highly unfit to receive the arrival of such a portentous computer system, on either a financial or a psychological level.
man, forget hackers. what pains me is the thought that, once these become widely available to the public, the vast majority of the lusers will use them to run exclusively Word and Solitaire.
Affordable? (Score:1)
Re:Supercomputer??? (Score:1)
Re:I don't back it (super computers)... Yet... (Score:1)
Re:I don't back it (super computers)... Yet... (Score:1)
Re:I don't back it (super computers)... Yet... (Score:1)
Re:I don't back it (super computers)... Yet... (Score:1)
Re:Over bloated price? (Score:1)
Re:To whoever moderated me (Score:1)
Re:I don't back it (super computers)... Yet... (Score:1)
Re:What about that FPGA-based Transmeta-style s/co (Score:2)
Re:Over bloated price? (Score:2)
100 * 100 == 10,000.
And I'd imagine you want more than 1.5 fiber net cards per proc? More like 4, at a low estimate?
400 * 150 == 60,000.
And if you can get a MB + case + hd for $100, I have a bridge to sell you. At best:
$150 * 100 = 15,000
So, with your admittedly low RAM counts, that's:
10000 +
60000 +
15000
------
85000 just in hardware....
Knock yrself out....
--
Re:What about that FPGA-based Transmeta-style s/co (Score:2)
It's not exactly the same concept since the Transmeta chips aren't gate-level reconfigurable computers, but the dynamic compilation stage seems to have close parallels in both products.
I never did learn, though, whether Starbridge uses layout caching in order not to have to recompile parts of the code already traversed previously. It sounds to me like this nice feature of Crusoe would be equally useful in dynamic RC designs like Starbridge's.
Regarding CPU cores versus FPGA arrays, an FPGA like Xilinx's RC series (6200 onwards) can be regarded as just a core for a microcoded processor, because layout control is performed by writing the layout info into a memory-mapped store, which in concept is no different from writing microcode to a conventional microcoded controller. It might be a bit difficult to identify an "instruction set" among all this funky layout data, but hey, when discussing concepts one has to be flexible.
Re:Over bloated price? (Score:2)
--
60,000/4 (Score:2)
>400 * 150 == 60,000.
>And if you can get a MB + case + hd for $100, I >have a bridge to sell you. At best:
How about a hub? Gadzoox and 3com have them out.
--
Crusoe for supercomputers? (Score:2)
----
Nice tie in (Score:2)
The Scarlet Pimpernel
Re:What about that FPGA-based Transmeta-style s/co (Score:2)
Re:What about that FPGA-based Transmeta-style s/co (Score:2)
Though I wouldn't call this the same concept as Transmeta. FPGAs are fully programmable and have no real core; it all lies outside of the FPGA.
Re:Over bloated price? (Score:2)
Much to the dismay of those tearing apart the system based on this statement, the BUS is 200 MHz, and last I checked the Athlon ran at that speed (or rather 100 MHz, clocked on both edges). Now, if you look at the statement on the site where they mention a 200 MHz bus, the only chipset used on this system could be an Athlon one.
Re:Supercomputer??? (Score:2)
-----------
"You can't shake the Devil's hand and say you're only kidding."
Supercomputers... Maybe (Score:2)
The real benefits are going to be for small businesses, small universities, high schools, and other organizations that will begin to see the need for more processing power than an individual PC can handle: supporting a large user base, applications run over a network (the trend back toward consoles hooked to a mainframe), and such. These are where this type of machine will really be useful.
The organizations that purchase these cheap supercomputers will be looking especially at the tradeoffs of getting one supercomputer with 50 console stations vs. 50 high-end PCs.
It's a fantastic leap in providing computing power to everyone, in my honest opinion.
----
Lyell E. Haynes
Re:Supercomputer??? (Score:2)
If you put 100 average computer CPUs to work together in a Beowulf cluster, it would be 100 times as fast as your average computer, thus matching your definition of the term "supercomputer".
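One caveat on that "100 times as fast" figure: it assumes the workload parallelizes perfectly. A quick sketch of Amdahl's law (the standard formula, added here purely for illustration) shows how even a small serial fraction caps the speedup:

```python
# Amdahl's law: speedup on n processors when a fraction s of the work is serial.
def amdahl_speedup(s, n):
    return 1.0 / (s + (1.0 - s) / n)

# Perfectly parallel work really does give the full factor of 100...
perfect = amdahl_speedup(0.0, 100)
# ...but with just 5% serial code, the same 100 CPUs top out below 17x.
realistic = amdahl_speedup(0.05, 100)
print(f"0% serial: {perfect:.1f}x, 5% serial: {realistic:.1f}x")
```

So whether a 100-CPU Beowulf counts as "100 times faster" depends entirely on the application.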
Please take a look at the Top 500 list of supercomputers (forgot the link; use a search engine)...
The real news in this story is the target price of $100k.
Linux/Alpha-based supercomputers are old news and have been around quite some time now...
A Top 500 supercomputer for only a few thousand US$ would definitely shake up this market.
I want to see what the big players (IBM, Sun, NEC, Fujitsu) have to say about this.
[If you note the absence of SGI in the above list, it was intentional.]
--
Re:Over bloated price? (Score:2)
I think the 200 MHz K6-2 must be a typo. I think these are actually 450 MHz K6-2s.
Re:Over bloated price? (Score:2)
Concepts (Score:2)
- CNN was obviously wrong about the Patmos using AMD K6s instead of AMD Athlons.
-- A 200 MHz bus and the word "cheap" obviously looked like a K6 to the semi-technically knowledgeable author. In fact (as has been pointed out here several times), the only 200 MHz bus available for a PC is the Athlon's.
-- They mention Patmos reaching 1 GHz within the year. The K6 does not have the potential to reach that speed; in fact, few processors other than the Athlon are currently capable of reaching this goal within a year, especially without requiring a new motherboard, which would obviously be bad for Patmos, since I assume they provide _some_ form of customization of their motherboard, or at the very least have carefully selected a board and would not think kindly of choosing a new one so soon after their initial product release.
- The unqualified use of the word super-computer.
I've noticed several posts from people thinking they could design a "supercomputer" even cheaper than Patmos. But really, all they could achieve is a "theoretically fast" machine. A supercomputer is the sum of all its parts, and the weakest link can break the chain.
As a disclaimer, I am not formally trained in supercomputer concepts; much of this is based on my experience and common sense (which may differ from horse to horse).
A supercomputer must have top-tier performance (obviously), data integrity (you don't spend half a mil just to get a core dump or system freeze), and reliability (24/7 uptime while performing its work). It should also be scalable (growing with your company or the task as is fiscally justifiable).
--Simple points: when selling a supercomputer, you must choose high-quality parts (or at least make things n-way redundant).
In my experience, IDE drives don't cut the mustard, due to their high-volume, minimal-quality, price-focused nature (skimp on a heat sink or shock absorber to save 5 bucks per drive, etc.). When you buy IDE, you think disk space and low price; when you buy SCSI, you think performance and quality (and usually expense). The drives are designed around those marketing paradigms.
An IDE drive also uses an IDE controller and is thus inherently sequential. A SCSI device can queue multiple independent requests and use the disk geometry to determine an optimal seek path. Additionally, due to the paradigm above, more cache and higher rotational speeds go into SCSI devices (not that they couldn't be applied to IDE... but why would they be?).
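The request reordering that command queuing enables can be sketched as a toy elevator (SCAN-style) scheduler: sweep upward through the queued cylinder numbers, then back down, instead of servicing them in arrival order. This is only an illustration of the idea, with made-up numbers, not how any particular drive firmware actually works:

```python
# Service queued cylinder requests in one upward sweep, then a downward sweep,
# instead of strict arrival order - the kind of reordering command queuing allows.
def elevator_order(head, requests):
    up = sorted(r for r in requests if r >= head)
    down = sorted((r for r in requests if r < head), reverse=True)
    return up + down

# Total head movement (in cylinders) to service requests in the given order.
def total_seek(head, order):
    dist, pos = 0, head
    for r in order:
        dist += abs(r - pos)
        pos = r
    return dist

queue = [98, 183, 37, 122, 14, 124, 65, 67]   # an invented request queue
head = 53
fifo = total_seek(head, queue)                       # arrival order
scan = total_seek(head, elevator_order(head, queue)) # elevator order
print(f"FIFO: {fifo} cylinders, elevator: {scan} cylinders")
```

For this queue the elevator ordering cuts total head travel by more than half, which is the whole argument for letting the device reorder independent requests.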
As for the network, some referred to hubs and Ethernet. Ethernet does _not_ scale well; sure, you can get a faster / more intelligent hub, but you never achieve the maximal theoretical bandwidth. I'm not completely sure of the network technology used here, but it seems to be peer-to-peer and bidirectional (to facilitate rapid acks).
--Memory. This is really the key to a good supercomputer design. SGI made use of wide 256-bit multi-ported memory buses with interleaved memory (16 consecutive bytes were segmented across 4 memory controllers, so linear reads were faster AND independent concurrent accesses got a statistical speed-up). Of course a 256-bit memory bus is expensive, especially in a multi-CPU configuration. Sun's Starfire, for example, has up to 64 fully interconnected CPUs (I don't recall the bus width); this required a humongous backplane with added cost.
AFAIK, the Athlon uses regular SDRAM (and a cost-effective solution would have made use of off-the-shelf parts). SDRAM is nicer than older PC memory in that its pipelining allows multiple concurrent memory-row accesses within a chip: several addresses within the same row can be in the pipeline, and up to 2 rows can have elements in flight. This is a more sophisticated approach than interleaved memory, BUT it introduces a longer / slower pipeline. RAMBUS furthers this concept by narrowing the bus width and increasing segmentation; it allows greater concurrency, but latency (and thus linear logical dependency) is increased.
RAMBUS's theory of high latency / high concurrency benefits non-linear programming, such as Itanium's (Intel's Merced) deep speculative memory prefetching, or Alpha's and Sun's multi-threaded processors (where cache misses cause an internal thread-level rapid context switch, thus hiding the latency). Existing x86 architectures, however, cannot fully take advantage of such concurrency, and the net effect is slower execution time for linearly dependent algorithms (non-local/inconsistent branching and non-parallelizable math calculations). In this case, making use of high-speed / low-latency interleaved EDO (as is / was done in several graphics boards) seems a better alternative (but hasn't come to pass in mainstream motherboards).
--Multi-CPU. This is an interesting topic. Multiple CPUs can connect to the same memory (with large internal caches), can use a NUMA architecture with shared segmented memory (isolated, with interconnecting buses), or can be autonomous units connected via a network. There are pros and cons to each mechanism. The last requires the most redundant hardware (which is actually good in terms of hot-pluggability) and has the slowest inter-CPU communication; it thus works well in message-passing systems, as opposed to symmetric decomposition of large data arrays (e.g. parallel vector processors). Personally, I like the NUMA approach best, but it requires proprietary hardware, and hot-pluggability would have to take the form of a VME bus, etc.
It would seem that the approach here is multi-CPU (2 or 4) per single task, with concurrent threads distributed across machines in a message-passing system (hopefully with minimal data sharing). The AI controller probably handles messaging and arbitration in addition to the advertised load balancing. The multiple CPUs on a board are most likely for redundancy; my guess is that 2 or 3 CPUs are used for user-thread redundancy and 1 or 2 are dedicated to OS operations (using spin loops in the user threads). Thus minimal context switches are required and minimal memory bandwidth is used (since only one virtual CPU is ever accessing memory at a time, though 2 CPUs may be simultaneously requesting that information). They may actually let the Linux scheduler rotate processes, but as I've learned, this isn't Linux's strong point. A single-tasked CPU is a happy CPU.
I know Sun has optimizations for context switching (keeping the most pertinent info within the CPU, along with a unified virtual-memory model, as opposed to the offset-based x86 virtual-memory model). Unfortunately this is offset by register-window swapping, but such context-switch-centric processors would allow for more efficient concurrent operations such as multi-threaded apps (such as Java; before you laugh, one application of this low-cost supercomputer is web serving, and servlets are an emerging technology; people will ask how they can make existing code run faster in a short period of time).
-Concept: ASICs / FPGAs. SGI had the concept of making a simple, cheap, reliable, and fast logic CPU, then coupling it with extremely proprietary logic / processing chips that offload the code logic. Combinational logic is faster than any sequential logic, though much more prone to bugs and higher production costs. High-performance reprogrammable FPGAs could help the industry, since the hardware could maintain high volume and low cost (as with current CPUs). Thus you could make PCI / AGP expansion boards that handle load balancing, message passing, Java extensions, OS operations, etc.
I'm sure their AI logic is done similarly, but it's a completely separate box. My thought would be that the "boxen" would have these expansion boards, and the customer could request optimizers for, say, the web, weather calculation, chess, what-have-you. The goal being that these expansion boards become as common as modern graphics accelerators, modems, sound cards, etc., without having to go through all the trouble of designing the hardware of those boards.
-Michael
Re:Scalability (Score:2)
Re:Supercomputer??? (Score:2)
Re:Skeptical (Score:2)
I too am very skeptical of this. From what I can tell of their web page, they don't scale very far since they only have 8 compute nodes. Even assuming 4 processors per node (and I think they only have 1) that would only be 32 processors. Granted, for 99.9% of the users out there, that is a whole boatload of processors. However, our large systems go to 512 processors (1024? Maybe in not too long). The Cray T3E goes up to 2048p. Even our clustering product generally has more than that. The cluster we recently installed at Ohio Supercomputing Center was 256 compute nodes, I believe. I just don't see how 8 to 32 processors is going to compete with that. Now their reliability looks pretty solid, though.
Another problem, in general, with x86 "supercomputing" is that a lot of scientific code out there likes 64 bit math. Merced^H^H^H^H^H^HItanium, MIPS, Alpha, and the Cray Vector processors have a nice lead there.
Lastly, someone earlier in this thread said something to the effect of "wouldn't it be great to make the Top 500 for..." In short, I don't consider this to be a supercomputer. An HA cluster, maybe. But it's hard to tell, since their site is pretty sparse on technical details. I am *very* suspicious of a "supercomputer" company that doesn't post benchmarks. One of Seymour Cray's rules of supercomputing is that your machine should be the fastest in the world at *something*. And they need to learn how to put in "'s instead of ?'s; their HTML is inept at best...
Re:Supercomputer??? (Score:2)
I speak for myself, not for SGI.
Re:What about that FPGA-based Transmeta-style s/co (Score:2)
// code runs normally
...
#define OPTIMIZE
// this code compiled to FPGA
...
#undef OPTIMIZE
They were quoting 20x speed increases compared to a standard Pentium. The downside is that initially you had to know in advance which bits of code required the speed increase.
Linux Powered (Score:2)
Patmos's Limbix software, based on the Linux operating system, monitors and manages workloads using neural networking and fuzzy logic, two artificial intelligence methods.
This would be really fun to have at the house. More power! Argh Argh Argh!
Never knock on Death's door:
Re:Over bloated price? (Score:2)
Re:Affordable? (Score:2)
Re:To whoever moderated me (Score:2)
is it working????
(i'm tryin ta test around with it)
....
Thanx! (if it worked)
That's BUS speed, not CPU speed. (Score:3)
What they describe as being "200MHz" is the bus speed, and that is a fairly different matter. If you look at those AMD K6 chips, they're connecting to motherboards that have bus speeds of (in these inflated days!) either 66 MHz or 100 MHz. That's rather less than 200 MHz.
The bus technology getting billed as a "200MHz thing" is the Alpha EV6, which suggests that the CPUs in these systems are either:
The paucity of solid technical information and the proliferation of TM-this and TM-that is a bit distressing.
Re:Huh (Score:3)
AMD processors with SMP Linux, what a joke. Can you say PowerPC 7400 G4s with OS 10 or another UNIX variant?
If the version of OS X Server I saw last spring is any indication, OS 10 is a total non-competitor. It had serious problems even compiling fairly generic ANSI C code (lmbench, MPICH).
And the G4 is not all it's cracked up to be. There's not enough memory bandwidth on the PC100 bus to sustain anything close to the FP rates Motorola and Apple like to point at. There are also no vectorizing compilers for the PPC 7400; the Metrowerks compiler will do inline AltiVec assembler, but it doesn't recognize vectorizable loops automatically, and it doesn't support the lingua franca of scientific computing (i.e. Fortran).
What about that FPGA-based Transmeta-style s/comp? (Score:3)
Has anyone heard any more about the company that was manufacturing the FPGA-based supercomputers? I seem to have lost the URL.
Scalability (Score:3)
--
Over bloated price? (Score:3)
I will use 100 processors,
--- which would be 2700 bucks ($27 each for 300 MHz K6-2s).
Chassis -- 32 * 100 = 3,200
Fiber net cards -- 150 * 100 = 15,000
RAM -- 64 (say 64 MB is 64 bucks) * 100 = 6,400
HD/MB & misc (need not be fast and need not be that much) -- 100 * 100 = 10,000
Total == USD $37,300 (about 38% of what he quoted). And this is with 100 300 MHz K6-2s instead of 11 200 MHz K6-2s.
All prices quoted from Pricewatch's listings (especially CPU & NIC).
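The arithmetic above does add up; a quick check (all per-unit prices are the poster's estimates, with the $27 CPU price implied by the $2,700 line):

```python
# Itemized parts list from the post above: (unit price in $, quantity).
parts = {
    "CPU (300 MHz K6-2)": (27, 100),   # implied by "2700 bucks" for 100 CPUs
    "chassis":            (32, 100),
    "fiber net card":     (150, 100),
    "RAM (64 MB)":        (64, 100),
    "HD/MB & misc":       (100, 100),
}
total = sum(price * qty for price, qty in parts.values())
print(f"total: ${total}")
print(f"{100 * total / 99000:.0f}% of the quoted $99K system price")
```

Of course, as the replies point out, this ignores interconnect quality, assembly, software, and support, which is where the rest of the $99K presumably goes.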
--
Are high-end systems worth the money? (Score:3)
A (true) cautionary tale: company A develops some software for company B. Company B provides several high-end, big-money machines (multiple Pentiums, hot-swappable SCSI RAID arrays, rack-mounted, etc.).
The development process (which goes through a couple phases) takes more than two years. When the project is done, Company B will probably abandon the servers because (relative to what's available today) the machines are no longer worth the shipping costs it would take to come and get them.
The moral of the story is a corollary to Moore's law [intel.com]: the power of today's high-end supercomputer will very soon be matched by tomorrow's mid-range workstation (and then the low-end home system, and then embedded chips...).
My Palm VII has more RAM in it than the mainframe I wrote code for as an undergrad <mumble> years ago...
From James A. Gatzka CEO of Patmos International (Score:3)
The NBoxen in a Perpetua are inter-connected and are part of a network that has a minimum of three layers consisting of an input layer (explicate), a middle layer (implicate) and an output layer (explicate). The middle layer (implicate) is hidden and cannot be directly observed from outside. The input and output nodes (Limbix) shape the specific problem.
The cool phenomena of a PATMOS Perpetua is that its remarkable neural network operates in the hidden or "implicate" NBoxens that are the mathematical forum in which the system inter-relates the "explicate" input and output signals. This is the forum in which the decisive calculations operate by which complex, non-linear relationships are learned and the solutions evolve.
The "implicate" layer is able to deal with very "noisy" data as it searches for patterns hidden in the noise. Because of the configuration of Perpetua and the presence of Limbix controllers, the system can deal both with data that follows hard laws and with data that is not amenable to hard laws because the underlying process is unknown.
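Stripped of the marketing language, the input / hidden ("implicate") / output arrangement described above is an ordinary three-layer feed-forward network. A minimal sketch of a forward pass (purely illustrative; the layer sizes and weights are invented and have nothing to do with Patmos's actual Limbix code):

```python
import math
import random

# One forward pass: input layer -> hidden ("implicate") layer -> output layer.
def forward(x, w_hidden, w_out):
    hidden = [math.tanh(sum(w * xi for w, xi in zip(row, x))) for row in w_hidden]
    return [sum(w * h for w, h in zip(row, hidden)) for row in w_out]

random.seed(0)
n_in, n_hidden, n_out = 4, 3, 2   # arbitrary sizes for the sketch
w_hidden = [[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_hidden)]
w_out = [[random.uniform(-1, 1) for _ in range(n_hidden)] for _ in range(n_out)]

outputs = forward([0.5, -0.2, 0.1, 0.9], w_hidden, w_out)
print(len(outputs), "output signals")
```

The hidden layer is "hidden" only in the sense that its activations are intermediate values; everything the CEO describes about relating explicate inputs to explicate outputs happens in that middle computation.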
It is my view that there would be no golden world of Linux without Richard Stallman (RMS). I and my company support the Free Software Foundation, and believe that RMS should be nominated for the Nobel Prize. To tread on the principles laid down by RMS would, in my view, be anti-social behavior. In the near future we will be posting our source code under the GPL.
When we pushed the button on the spreadsheet and realized that we could sell super computers for the prices that came up, we had to rush to the side of our Chief Financial Officer, a Harvard MBA, and hold him up.
While we appreciate the exuberance of the press, we must tell you they occasionally make critical errors, one of which goes to the speed of the CPUs of our device. Indeed, our choice is the Athlon. For those who question the scalability of Linux, we agree with you; take a look at DSI at Indiana University. While we would like to go into greater detail in this matter, we must abide by certain biological principles regarding gestation. We have a lot more ideas about a co-processor option, which may allow us all to have our cake and eat it too. Finally, certain statements in the press bring a smile to my face; "Super Computers for the Rest of Us" is one such statement.
Perpetua is a super computer that can act as a high-availability server, and it does go past 8 nodes. In fact, we can make a Perpetua with any number of nodes you want. I wait for Sledgehammer with bated breath.
Sincerely,
James A. Gatzka
CEO Patmos International Corp.
Not really a supercomputer, IMHO... (Score:4)
(Disclaimer: I work for Ohio Supercomputer Center but don't speak for them, yada yada yada...)
This seems to be aimed more at the high availability (HA) market than the high performance computing (HPC) market. Comparing with a Compaq Himalaya is *not* a way to win points with HPC centers, because HPC centers don't buy Himalayas -- they buy mostly various breeds of Crays, SGI Origins, and IBM SPs, with a smattering of Beowulf clusters and large Sun configurations as well. The Patmos site also doesn't talk about floating-point performance, which the HPC centers consider critical.
The Patmos site never really describes their systems as "supercomputers" (although the phrase "super system" is used once or twice), so this seems like bad reporting and/or a misunderstanding on CNN's part of what a supercomputer really is.
(In case you're wondering what I consider a supercomputer, I personally think a super is anything capable of multiple GFLOPS that is used for scientific computations.)
Skeptical (Score:4)
Supercomputer??? (Score:4)
A "supercomputer", by a more professional definition, is a computer that runs at least 100 times faster than your "average" computer. Often, it's more like 1000. Mainstream news agencies like CNN, CNET, etc. seem to use this word for just about anything with more than 8 processors. SGI doesn't even consider its Origin line a supercomputer until it passes at least 32 processors.
As far as this product goes, I think it's got a good place in the dedicated server business, and possibly some low end batch computation. I do have to admit, using AI concepts for system monitoring was a pretty neat trick!