On the Supercomputer Technology Crisis 347
scoobrs writes "Experts claim America has been eating our 'supercomputer feed corn' by developing clusters rather than new supercomputer processors and interconnects. Forbes says America is playing catch-up and that the new federal budget items are too little too late. Cray is laying people off due to decreased federal spending and claims lower margin products have forced them to create products based on commodity parts. Red Storm, one of their new Linux-based products, is being delayed to next year."
it makes sense (Score:5, Insightful)
Re:it makes sense (Score:5, Insightful)
My inclination is to let the market sort itself out, although if supercomputer makers go under, they won't necessarily reappear the moment they're needed.
Re:it makes sense (Score:5, Insightful)
This doesn't mean that clusters don't have some use in these regards; it just means that for these types of problems no one has figured out an efficient parallel algorithm to use on them.
Re:it makes sense (Score:2)
Can you give a specific example? When I think about it, the most CPU-heavy problems that occur to me are highly parallelizable. Things like solving partial differential equations, for instance, which means physical simulations. Or, in the case of research centers, I suppose neural networks might be one heavy user.
I can't think of any super-computer application that doesn't involve lots of data being proces
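For what it's worth, the PDE case really is the poster child for clusters: split the domain across nodes, and each time step only needs the boundary values from the neighbors. A minimal sketch in C with MPI (the sizes and constants are illustrative, not from any comment here):

    /* 1-D heat equation split across MPI ranks; each rank exchanges only
     * its boundary ("halo") cells per step.
     * Compile: mpicc heat.c -o heat ; run: mpirun -np 4 ./heat */
    #include <mpi.h>
    #include <stdio.h>
    #include <string.h>

    #define N 1000        /* cells per rank (illustrative) */
    #define STEPS 100

    int main(int argc, char **argv) {
        int rank, size;
        double u[N + 2], unew[N + 2];   /* +2 for halo cells */
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        memset(u, 0, sizeof u);
        if (rank == 0) u[1] = 100.0;    /* a hot spot at one end */

        for (int s = 0; s < STEPS; s++) {
            int left = rank - 1, right = rank + 1;
            /* Exchange one boundary value with each neighbor; this is
             * the only communication per step. */
            if (left >= 0)
                MPI_Sendrecv(&u[1], 1, MPI_DOUBLE, left, 0,
                             &u[0], 1, MPI_DOUBLE, left, 0,
                             MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            if (right < size)
                MPI_Sendrecv(&u[N], 1, MPI_DOUBLE, right, 0,
                             &u[N + 1], 1, MPI_DOUBLE, right, 0,
                             MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            for (int i = 1; i <= N; i++)   /* explicit Euler update */
                unew[i] = u[i] + 0.25 * (u[i - 1] - 2.0 * u[i] + u[i + 1]);
            memcpy(&u[1], &unew[1], N * sizeof(double));
        }
        if (rank == 0) printf("u[1] after %d steps: %f\n", STEPS, u[1]);
        MPI_Finalize();
        return 0;
    }

The point of the sketch: per step, each rank computes on N cells but communicates only two values, so the compute-to-communication ratio grows with problem size. That is exactly why commodity interconnects are good enough here.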
Re:it makes sense (Score:3, Informative)
Re:it makes sense (Score:3, Informative)
Re:it makes sense (Score:4, Informative)
Re:it makes sense (Score:2)
Possible bad example: all I'd do with this is create a virtual shared memory store on a gigabit network and use a reasonable data engine such as MySQL in SQL Server Mode to create a shared memory space. To make it really handy, put the whole thing on a terabyte ramdisk with battery backup.
The vector pro
Re:it makes sense (Score:2)
Aren't the people running these top 6 clusters part of various Universities and Research Labs?
I would think they would know if their cluster is doing what they want it to do. And if they have something that does the job, why should they go out and hand Cray several million dollars to build them something that Cray says is better?
Re:it makes sense (Score:4, Insightful)
Yes, there are tasks where supercomputers are needed. Most tasks are not among these. If there is a single parallelizable task in a CPU-intensive process, odds are that a cluster is your best bet. For example, even if your core algorithm requires intensive memory locking and must be done in a completely serial manner, if you are going to be running that core algorithm over a range of possible inputs, a cluster will probably be your best choice.
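To make the "range of possible inputs" case concrete, here is a minimal sketch in C with MPI. run_core() is a hypothetical stand-in for the serial core algorithm, not anything from the thread:

    /* Embarrassingly parallel parameter sweep: each MPI rank runs the
     * same serial core over its own slice of the input range, with no
     * communication until the final reduction. */
    #include <mpi.h>
    #include <stdio.h>

    static double run_core(int input) {
        /* placeholder for the serial, lock-heavy core algorithm */
        double x = input;
        for (int i = 0; i < 100000; i++)
            x = x * 0.9999 + (double)(input % 13);
        return x;
    }

    int main(int argc, char **argv) {
        int rank, size, total_inputs = 10000;
        double local = 0.0, best = 0.0;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);
        /* Static round-robin split of the input range across ranks. */
        for (int i = rank; i < total_inputs; i += size) {
            double r = run_core(i);
            if (r > local) local = r;
        }
        /* One collective at the end; the interconnect barely matters. */
        MPI_Reduce(&local, &best, 1, MPI_DOUBLE, MPI_MAX, 0,
                   MPI_COMM_WORLD);
        if (rank == 0) printf("best result: %f\n", best);
        MPI_Finalize();
        return 0;
    }

With one collective at the end and no other communication, even cheap Ethernet is fine. This is the workload shape clusters were built for.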
Re:it makes sense (Score:3, Informative)
The problem with clusters is that they don't scale well in all cases. Programming tricks may help, but profiling and testing on a sub-cluster may not reveal bottlenecks in the full cluster.
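To put a number on that (my own back-of-the-envelope, not the parent's): Amdahl's law gives the speedup on P nodes as 1 / (s + (1 - s)/P), where s is the serial-or-communication-bound fraction. A code that measures s = 1% on a 32-node sub-cluster shows a healthy ~24x speedup there, but on the full 1024 nodes it tops out near 91x rather than 1024x, and that is before the interconnect contention that only appears at full scale.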
Re:it makes sense (Score:2)
In previous years, clusters weren't
Solving the Wrong Problems with Other People's $$ (Score:3, Insightful)
Re:Solving the Wrong Problems with Other People's (Score:3, Interesting)
Re:it makes sense (Score:3, Interesting)
The d
Re:it makes sense (Score:3, Insightful)
Take for example Deep Crack (luminaries, remember that one?). Perfect example of specialized hardware for a single job.
Although you are probably right in that most of today's supercomputing needs can be met by clustering together
Re:it makes sense (Score:3, Interesting)
Re:Moreover I think those industry panelists... (Score:3, Interesting)
Reminds me of my early programming days on the TI99/4A. The brilliant bit that made that computer more powerful than most other micros on the market at the time (except maybe the Commodores and Ataris, but neither of those had this as much as the 99er did) was multiple specialized subprocessors. Most others had *maybe* a video processor and a sou
Re:it makes sense (Score:2)
Not only that, you can still use that cluster in the future for other jobs and not have to reinvest millions of dollars again.
Re:it makes sense (Score:3, Informative)
As 3D rendering and sophisticated media codecs are becoming the primary reasons for upgrading a home PC, the front side bus and CPU (especially the CPU) have
Re:it makes sense (Score:3, Interesting)
When you can build a top-5 supercomputer for under 6 million dollars using off-the-shelf parts, why spend hundreds of millions of dollars?
Because instead of fundamentally advancing the science of computing, the industry is simply scaling commodity technology. The American supercomputer industry has gone from innovator to an assembly operation.
We're in need of a paradigm shift. Where's the next Seymour Cray?
Jason.
Re:it makes sense (Score:2)
Yes and so what?
The American space industry is taking the same path with SpaceShipOne. Don't you think it's good?
If computers hadn't commoditized, you'd still be posting your comment from a terminal connected to a mainframe, with a line of people waiting behind you at the library. But instead, you have a great computer at home, just for you, because you can afford it. Heck, if you wanted, you could afford a s
Re:it makes sense (Score:5, Funny)
Re:it makes sense (Score:2, Insightful)
Not that I have ever done anything like that
Re:it makes sense (Score:4, Insightful)
Re:it makes sense (Score:5, Insightful)
Expected fallout from the Beowulf takeover (Score:4, Insightful)
Don't see this as bad news... it's a sign that we're winning.
Re:Expected fallout from the Beowulf takeover (Score:3, Insightful)
Re:Expected fallout from the Beowulf takeover (Score:3, Insightful)
Re:Expected fallout from the Beowulf takeover (Score:5, Insightful)
That being the case, wouldn't it make more sense to invest heavily in R&D to solve the cluster's problems and remove its limitations, than to invest heavily in R&D into next-gen mainframes?
Re:Expected fallout from the Beowulf takeover (Score:4, Insightful)
Re:Expected fallout from the Beowulf takeover (Score:4, Insightful)
Re:Expected fallout from the Beowulf takeover (Score:3, Insightful)
The problem with the big investments in supercomputing by the U.S. government in the last decade is that, time after time, they've put one huge white elephant system after another in one national lab after another. Some problems:
- They take a long time to build, from first RFP to contract award to first delivery to full deployment. By the time they are done they are usually starting to look long in the tooth, and the lab starts the whole process all ov
Re:Expected fallout from the Beowulf takeover (Score:2)
One thing about technology is that each generation feeds off the last. If we get into a cycle where our expectations for hardware emphasize quantity rather than superiority, how will we ever achieve the ultimate ends of computing: to... uh... accomplish, uh, something... um...
Maybe it's all headed somewhere.
M
Re:Expected fallout from the Beowulf takeover (Score:2)
It's bad news for Cray (Score:3, Insightful)
Right. The Cray folks have just realized that they are about to go the way of the buggy whip and the slide rule. They don't like it one bit, but all they can do is complain by making a lot of noise. It won't work. When you're extinct, there is no coming back.
Re:It's bad news for Cray (Score:2)
That's the way it's normally done.
Re:It's bad news for Cray (Score:5, Informative)
That's not the sign of a dying business model. If they are having problems, it's down to the management, not lack of demand.
There are problems that don't work well on clusters but rocket on a proper supercomputer. These cover a lot of interesting areas, so there will always be demand for a few pieces of big iron. At the risk of echoing the ghost of IBM CEOs past, I think there's room for somewhere around 20-30 serious top-end supercomputers in the world [0]. Most of the rest of the jobs will do just fine on high-end clusters.
If you read the article, there are no quotes from Cray people. The quotes are from the people who used to get to play with special hardware and who now admin those clusters.
It's toys for the boys, not a buggy whip issue.
[0] That's informed by being someone who uses high-performance computing, both cluster and supercomputer.
Re:Expected fallout from the Beowulf takeover (Score:3, Insightful)
Don't see this as bad news... it's a sign that we're winning.
Not necessarily. There are plenty of computational problems that, so far, do not lend themselves well to parallelized solutions.
The point of this post and the linked article is that the hype about Beowulf and similar che
Inevitable (Score:3, Insightful)
Re:Inevitable (Score:5, Informative)
In some cases.
Unfortunately, some problems are particularly unsuitable for clusters of commercial computers, and really benefit from specialized architectures such as shared memory or vector processors.
A while ago the US government decided to essentially abandon such specializations and buy COTS. It is certainly cheaper, but not necessarily effective.
And if they really want it... (Score:4, Insightful)
It's a simple cost tradeoff. If you can save millions in purchasing computers, it means more money to pay for people to run those computers and do the real work.
Re:Inevitable (Score:2)
My guess is that most of these problems could be done massively parallel, it's just harder to program (and thus hasn't been pursued yet). You can buy a lot of programmer-years for $10 million, though, and unlike a big vector mainframe purchase, you can share the results if you spend the money on software development ins
Re:Inevitable (Score:2)
Re:Inevitable (Score:4, Insightful)
It could be argued that at least *some* of the ASCI (Accelerated Strategic Computing Initiative) computers had specialized architectures with loads of bandwidth & low-latency interconnect (in their day).
It's a bit of a joke complaining about a lack of vector computing when every Intel and AMD CPU sold today has floating point vector instruction set extensions with very interesting operators.
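Concretely, those extensions look like this from C. A minimal SSE sketch (SSE1 intrinsics; nothing assumed beyond an x86 compiler):

    /* SSE adds four single-precision floats per instruction. This is
     * what a vectorizing compiler emits for you behind the scenes. */
    #include <xmmintrin.h>
    #include <stdio.h>

    int main(void) {
        float a[4] = {1, 2, 3, 4}, b[4] = {10, 20, 30, 40}, c[4];
        __m128 va = _mm_loadu_ps(a);      /* load 4 floats */
        __m128 vb = _mm_loadu_ps(b);
        __m128 vc = _mm_add_ps(va, vb);   /* one instruction, 4 adds */
        _mm_storeu_ps(c, vc);
        printf("%f %f %f %f\n", c[0], c[1], c[2], c[3]);
        return 0;
    }

One instruction does four single-precision adds; with SSE2 and beyond the same idea covers doubles and integers. Which is the point: "vector computing" quietly shipped in every desktop.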
I'd argue that if you take those lamented early-90s supercomputers, there's not a problem they can solve faster than a relatively small contemporary cluster, or even a single desktop system. A standard 4-CPU desktop system with the right architecture could also spank those legacy systems in memory bandwidth; shocking but true. Supercomputers just didn't keep pace with the scale and cost reduction of small systems and clusters.
The real problem here is the *relative* performance of supercomputers and commodity components. It takes hundreds of millions, if not billions, to develop and manufacture a new competitive CPU and architecture, and scientists' pockets aren't deep enough to pay for that (thank goodness, because it's our tax dollars). It is rather pathetic to lament that supercomputers have been outpaced by clusters; the economics make it impossible for machines sold in such low numbers to keep pace. Or, more reasonably stated, the economics of consumer PC systems make powerful computing ubiquitous and affordable to the point where it no longer makes economic sense to pursue specialized processors and architectures to try to outperform them.
If anything is to be done it would be to increase the bandwidth and reduce the latency of cluster interconnect, and guess what, that's EXACTLY what smart people are working on right now.
As for eating America's seed corn, it is Intel and AMD that sell most CPUs used in clusters today. It is that competition and the pressure of increased development costs that make custom hardware untenable.
It is just false to imply that supercomputing technologies fed lower-end development. That is a romantic vision of trickle-down technology, but it is not actually how technological development works. Look at computer graphics: since commodity PC graphics cards beat big iron from SGI, there has been more innovation and development in graphics hardware, not less. There is competition and a willingness to experiment with new features. The same is true of CPUs from Intel and AMD and the architectures and innovations in memory bandwidth they constantly drive forward.
Re:Inevitable (Score:2)
Way back in the day, I had it working smoothly on a 386DX-40. This was actually *before* 486s came out, let alone Pentiums. Of course, that was under DOS.
As for the Just as Good point- 95% of people using Word Processors are just using them as a glorified typewri
Re:Inevitable (Score:2)
Law of Diminishing Returns (Score:4, Insightful)
Of course people are going to cry that companies like Cray are falling by the wayside, but the truth is that their services simply aren't as needed as they were in years past.
Re:Law of Diminishing Returns (Score:5, Interesting)
It's true that supercomputers aren't really all that useful or necessary these days. However, a future computing problem may arise that requires a next-generation supercomputer to solve. So we'd be well served to have a next-generation supercomputer fresh from R&D to apply to the problem.
We may only encounter one or two more supercomputer-class problems, but they might be important ones. We should be prepared.
On the other hand, we may encounter a problem that can only be solved by horses. But we don't see a lot of buggy-whip subsidies these days...
"Feed' Corn? (Score:4, Insightful)
Re:"Feed' Corn? (Score:3, Informative)
Mangled analogy (Score:2)
Makes more sense this way. You eat feed corn (or rather, livestock does); you save your seed corn to plant next year's crop. Eating your seed corn is thus a very bad, short-sighted thing.
Expert complains: (Score:4, Insightful)
I Need A RAIS (Score:5, Interesting)
If the 'supercomputers' of today are increasing performance, does the design really matter?
Maybe that is a signal that monolithic computer tasks are best handled in a hive mentality - have the Queen issue the big orders, have the warriors performing security, have the workers transporting the goodies (data), and have the requisite extra daughters and suitors to grow the hive and assure its viability (redundancy).
The fact that it is cost-effective is even better.
A lost metaphor brings out my inner language nazi (Score:4, Informative)
Kids these days.
Re:A lost metaphor brings out my inner language na (Score:2)
Kids these days.
Maybe they did mean feed corn, as in corn that's harvested specifically to be fed to cattle.
Geezers these days.
Re:A lost metaphor brings out my inner language na (Score:2)
Idiots these days.
You are all missing the point (Score:5, Insightful)
Re:You are all missing the point (Score:2)
If the cost of the system plus the cost of the geek to run it is cheaper per unit of work than it is for a vector machine then that's all there is to it.
We are innovating by squeezing more and more processing power into smaller and smaller spaces and by improving on the interfaces for interco
No, YOU are actually missing the point. (Score:4, Insightful)
Just like it's more difficult to write multithreaded code than it is to write single-threaded code.
That's where software, and platforms come in. There is a TON of research being done, which uses technologies like Infiniband and Myrinet as interconnects, and can make a cluster "look" like a big monolithic machine. If you as an end user write code that goes down into the TCP stack itself, you're working too hard, and you're going about it the wrong way.
Put it this way: In 5 years the odds are overwhelming that there will be a good software platform that can let you pick 5000 servers and run your app 10,000 threaded, with everything appearing just like a single process, and running "as it would on a Cray." It's easier to solve this stuff with software -- take your problem (distributed computing) and solve the problem with a different set of technologies (high performance/low latency interconnects, shared address space/DMA across machines, etc).
Apple's Xgrid is a step in this direction. It's missing a ton of "Supercomputer" functionality right now, but it's a nice cross-machine GUI scheduler. Right now this type of app can address maybe 20% of what supercomputer apps need... in the future maybe more like 98%.
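Some of that "shared address space/DMA across machines" machinery already exists in MPI's one-sided operations. A hedged sketch using standard MPI-2 RMA calls (needs at least two ranks to run):

    /* One-sided communication: rank 0 writes directly into rank 1's
     * exposed memory window, with no matching receive on rank 1. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv) {
        int rank, size;
        double buf = 0.0;
        MPI_Win win;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);
        if (size < 2) { MPI_Finalize(); return 1; }  /* needs 2+ ranks */
        /* Every rank exposes one double for remote access. */
        MPI_Win_create(&buf, sizeof buf, sizeof buf, MPI_INFO_NULL,
                       MPI_COMM_WORLD, &win);
        MPI_Win_fence(0, win);
        if (rank == 0) {
            double val = 42.0;
            /* Write straight into rank 1's window; no recv far side. */
            MPI_Put(&val, 1, MPI_DOUBLE, 1, 0, 1, MPI_DOUBLE, win);
        }
        MPI_Win_fence(0, win);   /* fence completes the remote write */
        if (rank == 1) printf("rank 1 sees %f without calling recv\n", buf);
        MPI_Win_free(&win);
        MPI_Finalize();
        return 0;
    }

On hardware with RDMA-capable interconnects (Infiniband, Myrinet), that MPI_Put can map onto a direct DMA into the remote node's memory, which is exactly the "looks like one big machine" direction the parent describes.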
Re:No, YOU are actually missing the point. (Score:2)
The protocol stack. Much of the speed problem with clusters is that people still use TCP (and it doesn't help that a bunch of idiots are trying to get people to use TCP/IP even at the subnet level with Infiniband... talk about crippling Infiniband by doing that), with all the performance hits that entails.
Software vs hardware (Score:3, Insightful)
We have to be careful about measuring these things, however. One of the goals of cluster computing was to lower the cost of computing. If the government is spending less and still meeting needs, that's not necessarily an indicator of a problem. If that means we aren't writing code to fit a vector platform, so be it!
There is no crisis (Score:4, Interesting)
Screw 'em. If there's a need, the market will provide. If it turns out that the important tasks can be parallelized and run on much less expensive clusters, then all that means is that we have a more efficient solution to the problem.
Pork Barrel, not Feed Corn (Score:2, Interesting)
Poor little babies, now wher
The classic supercomputer is the modern desktop (Score:4, Interesting)
Re:The classic supercomputer is the modern desktop (Score:4, Informative)
Catchup == lawsuits? (Score:2, Interesting)
(pat pending)
Trickle Down (Score:4, Interesting)
Not that many people really need a race car, but advances in fuels, materials, and engineering in race cars eventually lead to better passenger cars. And for raw performance, strapping together a bunch of Festivas will not get you the same as an Indy racer.
Without a market you can't survive long term (Score:5, Insightful)
There seems to be some historical revisionism going on regarding the demise of the "supercomputer industry". People are coming out of the woodwork now saying that lack of government support caused the great supercomputer die off.
As Eugene Brooks predicted in his paper Attack of the Killer Micros, the supercomputer die-off was caused by the increasing performance of microprocessor-based systems. Many of us now own what used to be called supercomputers (e.g., 3GHz Pentium processors, capable of hundreds of megaFLOPs).
The problem with supercomputers is that high performance codes must be specially designed for the supercomputer. This is very expensive. As people were able to fill their needs with high performance microprocessors they quit buying supercomputers.
Many people who need supercomputer levels of performance for specialized applications (e.g., rendering Finding Nemo or The Lord of the Rings) are able to use walls of processors or clusters.
There are, of course, groups for whom systems built from off-the-shelf parts will not suffice. But these groups are few and far between. As far as I can tell they consist of the government and a few corporations doing complex simulations. The problem is that this is not much of a market. Even if the government funds computer and interconnect architectural research, there does not seem to be a market to sustain the fruits of this research.
In the heyday of supercomputers, there were those who argued that when cheap supercomputers became available, the market would develop. The problem is, again, programming. High-performance supercomputer codes tend to be specialized for the architecture. Also, no supercomputer architecture is equally efficient for all applications. It is difficult to build a supercomputer that is good at doing both fluid-flow calculations for Boeing and VLSI netlist simulation for Intel (the first application tends to be SIMD, the second MIMD). The end result of these problems tends to suppress any emerging supercomputer market.
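A toy contrast of those two workload shapes (my caricature in C, not Boeing's or Intel's actual codes): the first loop is regular and stride-1, exactly what a vector unit wants; the second is branchy and indirect, where you want many independent processors instead.

    #include <stdlib.h>

    #define N 1000000

    /* SIMD-friendly: same operation over contiguous data. A vector
     * unit or a vector supercomputer eats this alive. */
    void fluid_like(float *a, const float *b, const float *c) {
        for (int i = 0; i < N; i++)
            a[i] = b[i] * 0.5f + c[i];
    }

    /* MIMD-shaped: data-dependent control flow and indirection, e.g.
     * event-driven gate evaluation. Vector hardware gains little. */
    void netlist_like(int *state, const int *fanout, const int *kind) {
        for (int g = 0; g < N; g++) {
            int in = state[fanout[g]];        /* gather: irregular */
            state[g] = (kind[g] == 0) ? !in   /* per-gate branching */
                                      : (in & state[g]);
        }
    }

    int main(void) {
        float *a = malloc(N * sizeof(float));
        float *b = calloc(N, sizeof(float)), *c = calloc(N, sizeof(float));
        int *state = calloc(N, sizeof(int)), *fanout = calloc(N, sizeof(int));
        int *kind = calloc(N, sizeof(int));
        if (!a || !b || !c || !state || !fanout || !kind) return 1;
        fluid_like(a, b, c);
        netlist_like(state, fanout, kind);
        free(a); free(b); free(c); free(state); free(fanout); free(kind);
        return 0;
    }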
The reality right now seems to be that those who are doing massive computation must build specialized systems and throw a lot of talent into developing specialized codes.
What tasks require high-speed interconnects? (Score:5, Insightful)
So, what tasks still require a high-speed shared data memory? Answer that, and you'll understand where you can still sell a supercomputer.
Bruce
Re:What tasks require high-speed interconnects? (Score:3, Funny)
A high-speed shared memory test program?
Re:What tasks require high-speed interconnects? (Score:3, Interesting)
Re:What tasks require high-speed interconnects? (Score:3, Interesting)
A long time ago, at Pixar, I got an ARPA grant to work on an image-processing application for the feature film industry. The purpose of the grant was economic and military at the same time. I was to help create a market for multiprocessor computers (not really supercomputers) so that there would be U.S. manufacturers of them when/if the Army needed them for military purposes. This is what often gets called corporate welfare, although I could see the defense purpose was valid. I don
It's about time to face the facts (Score:2)
Former Cray Folks Move On (Score:2, Informative)
About time... (Score:5, Informative)
For things like weather forecasting, maybe big vector machines still have an edge, but I suspect that's changing as the weather guys get more experience in using machines with large numbers of micros. This seems to have already occurred, in fact; NCAR [ucar.edu] appears to have mostly IBM RS6000 and SGI computers these days, with nary a Cray in sight.
The most common term I used to hear in the early 90's was Killer Micros [wikipedia.org]; I think the term dates back to David Bailey in the 80's sometime. If you want more evidence that the death of the supercomputer has been going on for a long time, check out The Dead Supercomputer Society [paralogos.com], which lists dozens of failed companies and projects over the years; this page was apparently last updated 6 years ago!
Re:About time... (Score:2)
Re: (Score:2)
and in other news... (Score:4, Funny)
Forbes promoting socialism? (Score:4, Interesting)
Of course the government should continue its current policy of funding a few leading-edge machines that are too costly to sell into the general market but will test new technology. The government itself is a customer, with energy testing, weather modeling, medicine development, etc.
supercomputer research doesn't do me any good... (Score:2)
Sorry, I'm selfish, but I like the previous status quo.
Bryan
Complex issues that have to be solved (Score:5, Informative)
If this were a simple issue, the HPC community would have completely moved to clusters 3 or 4 years ago and never looked back. But it's not, kiddies.
Want to run a physics projection for more than 1 microsecond? That takes real horsepower that clusters cannot provide, even distributed. Just too much damn data. Chem codes that include REAL data for usable time slices? Too slow for clustered memory. Every auto maker in the world (almost) has been whining about the lack of BIG horsepower for a few years now (crash codes and FEA). I could go on forever. Sure, some problems work awesome on clusters, which is why we have them. But definitely not all of them.
The problem is partly diminishing returns, partly the pathetic amount of usable memory on a cluster and its joke of a memory throughput, partly the growth in power of the low end and clustered networking, and partly the ridiculously long development cycles involved in High Performance Computing and the low $ returns.
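The memory-throughput complaint is easy to probe yourself. Here is a minimal triad-style loop in the spirit of McCalpin's STREAM benchmark (a rough sketch in C, not the official benchmark, and the array size is my assumption, chosen to defeat caches):

    /* Rough probe of sustainable memory bandwidth via a triad kernel.
     * Compile with optimization, e.g. gcc -O2 triad.c -o triad */
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    #define N 20000000   /* ~160 MB per array: larger than any cache */

    int main(void) {
        double *a = malloc(N * sizeof *a), *b = malloc(N * sizeof *b),
               *c = malloc(N * sizeof *c);
        if (!a || !b || !c) return 1;
        for (long i = 0; i < N; i++) { b[i] = 1.0; c[i] = 2.0; }

        clock_t t0 = clock();
        for (long i = 0; i < N; i++)
            a[i] = b[i] + 3.0 * c[i];        /* the triad kernel */
        double secs = (double)(clock() - t0) / CLOCKS_PER_SEC;

        /* 3 arrays touched, 8 bytes each, per iteration */
        printf("triad: %.2f GB/s\n", 3.0 * 8.0 * N / secs / 1e9);
        printf("%f\n", a[N / 2]);            /* keep the compiler honest */
        free(a); free(b); free(c);
        return 0;
    }

Run that on one cluster node and it tells you the ceiling every node-local computation lives under; the interconnect between nodes is slower still, which is the punk's whole complaint.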
One of the biggest things congress sees is that this country will more than likely NEVER again lead the world in computing power for defense and research.
And thats something we ought to do as the last real Superpower.
The national labs TRIED clusters; they don't get all the jobs done that the labs wanted (see testimony before Congress, writings in HPC journals, the last couple of RFPs from US government labs, and heck, every auto maker in the world). People in HPC _know_ it now, but having let what little there was of the supercomputer industry die out, there isn't much of an industry left to turn to. It just may be too darned late. HPC hasn't been a money-making industry since the early 80s.
Heck, even Intel abandoned their clustered machine they custom built for the government.
Most folks in HPC will readily admit the Top500 is kind of a joke. The HPC Challenge numbers are a little more realistic for the tests, but we really do need something that approximates real-world applications, not just a 70s CPU benchmark.
For those who think this is a 'Linux wins' issue, consider that it was mostly fast interconnect networks that allowed clustering, not the OS. Examine the history of clusters and you'll see this is true. BTW, the last few SC companies are already mostly moving to Linux anyway (NEC, Fujitsu, Cray; IBM dabbles in HPC).
Hopefully the industry will survive long enough to allow for even better mergers of supercomputing power with low-end cost, but at this point I doubt it. Cray has been on the ropes since '96, Fujitsu's SC division is a loss leader, and NEC has been trying to get out of it for a while, for something with a margin.
Ed -gov labs HPC research punk
-former Cray-on
-former CDC type
Re:Complex issues that have to be solved (Score:5, Interesting)
Clusters and supercomputers... (Score:5, Interesting)
I've seen a lot of naive comments suggesting that supercomputers are being replaced by clusters. The truth is, anyone who can replace their supercomputer with a cluster didn't need a supercomputer in the first place:
(each of the following compared to a supercomputer):
- The prime advantage of an x86-based server is that it is cheap, and it has a fast processor. It is only fast for applications in which the whole dataset resides in memory - and even then, it is still the slowest of the group.
- Clusters are a little better, but suffer from severe scalability problems when driving IO-bound processes. As with the x86 server, if you can't put the full dataset into memory, you might as well forget using a cluster. Node-to-node throughput is several orders of magnitude slower than the processor bus in multiple-CPU systems (6.4GB/s vs 17MB/s for regular Ethernet, or 170MB/s for Gigabit; see the worked numbers after this list).
- Multiple-CPU servers do better, but still lack the massive storage capacity of the mainframe. They work better than clusters for parallel algorithms requiring frequent synchronization, but still suffer from a lack of overall data storage capacity and throughput.
- Mainframes, OTOH, possess relatively modest processors, but the combined effect of having several of them plus massive IO capability makes them very good for data processing. However, their processors aren't fast at anything, and often run at 1/2 or 1/3 the speed of their desktop counterparts.
- Supercomputers combine the IO throughput of a mainframe with the fast processors typically associated with RISC architectures (if you can still consider anything RISC or CISC nowadays). They have faster processors, more memory, and much greater IO throughput than any other category.
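To make those numbers concrete (my arithmetic, using the figures quoted above): moving a 100 GB working set over a 6.4 GB/s processor bus takes about 16 seconds; over Gigabit at the quoted 170 MB/s, roughly 10 minutes; over plain Ethernet at 17 MB/s, about an hour and a half. Any algorithm that must stream the dataset between nodes on every iteration dies right there.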
It used to be that the prime push for faster computers came from the scientific and business communities. But now that the internet has turned computers into glorified televisions, the challenge has gone from crunching numbers to serving content. As our economy has shifted from a technological base to an entertainment one, the need for supercomputers has begun to evaporate. We outsource innovation overseas so that we can lounge around on the couch watching TV and drinking beer (or surfing the net and drinking beer). The primary purpose of technological innovation has shifted from discovering the universe to merely bringing us better entertainment.
I agree-- clusters are limited (Score:2, Interesting)
The problem is that these simulatio
Hammertime (Score:3, Insightful)
Obviously the big SC vendors and designers are seeing less business roll their way; why pay them tons of money when you can have grad students assemble your cluster for the price of some pizzas? That isn't to say clusters are the end-all be-all of computing, but they're very useful and relatively inexpensive. Realistically they're simply an extension of what Cray started with their T3D supercomputer. The T3D was very impressive in its day, but now the technology to build such systems is in the hands of just about everyone.
Taco: What the hell is up with the IT color scheme? This is even worse than the scheme for the Games section. I know the Slashdot editors don't actually read the site but other people try to and we're not all colorblind or reading from grayscale monitors.
I have one word for you...Quantum Computing! (Score:3, Interesting)
Seriously, I don't see the problem, so long as companies like IBM [ibm.com] and (dare I say it) Microsoft [microsoft.com] continue to do research in this area. That is the real value of companies that are committed to *real* research in revolutionary sciences and technology.
Of course, US companies don't have a hammerlock on this research. There is a lot of work being done internationally in the area, by corporations, and by educational/research institutions.
---anactofgod---
Since when is /. a capitalist playground? (Score:3, Insightful)
Letting supercomputing die may be harmless; after all, the US doesn't have to be the best at everything in the world, and some other country will fund the research. But from some of the more coherent posts I've read, it seems like supercomputing has a definite niche in the natural sciences, something we should be pushing for a better society - learning for learning's sake - and paying for out of public coffers. My taxes go to a lot of shitty things I'd rather them not go to, like subsidizing Halliburton with no-bid contracts. Why is it so offensive to
So what (Score:4, Insightful)
There are many reasons for that, too. For one, other than in stellar, nuclear, mathematics, and bio research fields, few industries need more computing power than can be had off the shelf any day of the week. That was not true yesterday: it took all sorts of custom hardware to make CGI happen in films that can now be done in my basement in reasonable time frames. So no more supercomputer market there; the ROI is gone. I am sure this plays out in all sorts of other engineering fields as well.
Many places where you do need supercomputing power can be served by clustered systems that are cheap to build and cheap to maintain.
At least people in the pure science and research fields have learned to be better thinkers and programmers; they found ways to do things in parallel that were traditionally serial. Things that are still serial can be made to work on a cluster. Sure, it might take longer than on a single computer considered equal FLOPS-wise, but I could spend all the money I saved either on making my cluster bigger and more powerful, to get back to equal time, or on other profitable efforts while I wait; again there is no ROI in the big iron.
It so happens that many of the most interesting questions in math, physics, and computer science, such as quantum theory, need massive amounts of parallel work rather than serial, so that works better on a cluster anyway.
If there is a real reason to do it, people will build supercomputers, because there is nothing stopping them other than economics. No need to fear: supercomputers are not going away. Everyone else who needs that kind of processing power will settle for clusters, as well they should. This is just another largely obsolete industry wanting someone to bail them out because they have failed to adapt to a changing market. If they are going to die, we should let them, just as we should let the universities adapt or die, and the RIAA adapt or die; we need to stop propping up obsolete industries so new ones can replace them!
Interesting... (Score:3, Insightful)
Oh really. Don't blame me for not trusting a guy with that kind of potential bias.
1024 Chickens (Score:3, Funny)
It is about cost (Score:4, Insightful)
There never really was a supercomputer market. There was a cold war that subsidized the supercomputer market.
Then there is the cost. Companies stopped making SCs because they were too expensive. If the guy from Ford wants to pay 1 billion for a supercomputer, I am sure someone will build him one. The cost to build a fab is over 4 billion. Why do you think HP teamed with Intel? Why do you think there are so few processor families? You have to make a living in the commodity market, where you can sell things in the millions, because supercomputers even in their heyday were sold in the hundreds.
Then there is the problem that many problems are solvable on clusters, so the specialized problems cannot depend on other parts of the HPC market to help subsidize their corner of the market; i.e., clusters make the really hard problems more expensive.
It is a question of how much you want to pay to solve your problem. Simple economics, actually. If the numbers don't work, the problem doesn't get solved. If the Gov. wants to solve some problems (and during the cold war they did), then they can step in and subsidize the market.
And don't cry about Japan and the Top500. When the Top500 has a price column, then it will start to be meaningful.
SGI is running Linux on a 512-CPU NUMA (Score:3, Interesting)
http://www.finux.org/proceedings/ [finux.org]
Re:Isn't this good? (Score:3, Informative)
Better? (Score:2)
Re:Better? (Score:2)
That cluster with twice as many CPUs might just perform at 1/3 (and that's optimistic) of the speed at which the supercomputer does the job when it comes to such tasks.
Re:Isn't this good? (Score:3, Interesting)
Re:Please make it stop! (Score:2)
And if you really dislike the color scheme that much, there are simple instructions in my journal about how to use Firefox to change the colors displayed on here.
Re:Please make it stop! (Score:2)
Re:Two sides to this... (Score:3, Interesting)
I was talking about a center whose purpose was the creation of ever-more-powerful supercomputers. The rental section would just be there to make use of the tech, and put it through its paces.