The Story Behind a Failed HPC Startup

jbrodkin writes "SiCortex had an idea that it thought would take the supercomputing world by storm — build the most energy-efficient HPC clusters on the planet. But the recession, and the difficulties of penetrating a market dominated by Intel-based machines, proved to be too much for the company to handle. SiCortex ended up folding earlier this year, and its story may be a cautionary tale for startups trying to bring innovation to the supercomputing industry."

  • 1 down (Score:3, Informative)

    by Locke2005 ( 849178 ) on Tuesday November 03, 2009 @07:31PM (#29970688)
    Lightfleet [lightfleet.com] soon to follow. How is the company that was using Transmeta chips doing?
  • Re:Entrenched? (Score:3, Informative)

    by jd ( 1658 ) <imipak@yahoGINSBERGo.com minus poet> on Tuesday November 03, 2009 @11:41PM (#29973136) Homepage Journal

    First, the HPC world has a lot of commodity computers, but it also has a lot of very special-purpose computers.

    Second, the odds of someone buying an HPC machine and then running pre-compiled, generically-optimized code on it are virtually zero.

    Third, HPC computers (as compared to loosely-clustered "pile-of-PCs" systems) are expensive and almost invariably use components that aren't "run-of-the-mill" (such as Infiniband or SCI for the interconnect).

    In consequence, not only is the ix86 not "entrenched", it can't be "entrenched". It can only be popular in specific segments of the HPC market and even then only until something better comes along.

    If HPC were tied that firmly to Intel, they'd all be using Windows Cluster Edition rather than Beowulf or MOSIX. Why? Because Beowulf and MOSIX require engineers who think; Windows does not. If thinking were superfluous to requirements, they'd be using an OS to suit. They aren't.

    Now, will MIPS/MIPS64 ever do well in HPC as a whole? Probably not. MIPS is great for the embedded market, which means most MIPS engineers understand the embedded terrain, and that's not a skill you can readily migrate to other areas. I do expect, however, MIPS/MIPS64 to do extremely well in some HPC domains. It's low-power (which is why it's popular in embedded systems), which is great when you can't cart around huge generators, can't dispose of the heat easily, or have to minimize the radio noise. Plenty of markets there.

    The Cell processor is an interesting design and seems to do great, but problems tend not to split 6-ways very often. I'd have preferred them to have a 4-way grouping of number-crunchers and have the other 2 cores really good at something else entirely. Perhaps as the manufacturing scale gets smaller, they'll be able to increase the variety of cores.

    But sooner or later, someone is going to build a chip that is absolutely just what the HPC world needs. The GNU C compiler is easily enough extended, and although it's not quite as good as Green Hills or some of the other high-end compilers, the gap isn't so great that HPCers won't use it.

    My guess is that such a chip will be very easily reconfigured through microcode and that it'll really be not much more than a bunch of core operations on silicon, a block of largely unsegmented memory and enough networking logic to allow the operator to fully exploit what's there. Oh, and a hell of a lot of internal bandwidth. To pull this off, you'd need to do for CPU internal buses what Infiniband and SCI have done for machine-level networking. That's the only truly hard part.

    Such designs have been attempted before, where CPUs have no registers but just a block of memory you can use as you will. This idea goes a little further, since it replaces both Intel's notion of hyperthreading and the modern idea of multiple cores with the idea that hyperthreads x cores would be fixed with the microcode deciding the values. The compiler for the program can then section the CPU according to the needs of the program, rather than sectioning the program according to the needs of the CPU. (A rough sketch of this partitioning idea follows at the end of this comment.)

    Could Intel borrow this? No. The above has no architecture, per se, and no real instruction set, just processor elements. There's nothing to copy, nothing to patent, and with no fixed instruction set, nothing to lock customers in with. The only thing they could really steal would be the faster internal bus, which would keep them on desktops for decades to come; but because general-purpose is ALWAYS slower than special-purpose, it wouldn't keep them in the HPC market.

    We've seen the same with other components of computers, of course. Long-gone are the days of proprietary floppy drives (yes, some companies really tried to tie customers to their brand of floppy disk), proprietary printers, proprietary tape drives, proprietary hard disk interfaces, even proprietary RAM (Got RAMBUS?).

    Transmeta came close, but didn't go all the way (their CPU had an architecture of some sort) and was far too interested in the secrets business.
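
    A minimal sketch of the partitioning idea from a few paragraphs up, in C. Everything here is hypothetical: the PE pool size, the struct, and the heuristic are invented purely to illustrate a loader sectioning a pool of identical processing elements to fit the program; nothing corresponds to a real chip, instruction set, or API.

        /* Hypothetical sketch: a pool of identical processing elements (PEs)
         * that a loader partitions into "cores x hyperthreads" to fit the
         * program, instead of carving the program up to fit a fixed CPU.
         * All numbers and names are made up for illustration. */
        #include <stdio.h>

        #define TOTAL_PES 64   /* assumed size of the PE pool */

        struct partition {
            int cores;             /* independent groups of PEs         */
            int threads_per_core;  /* "hyperthreads" sharing each group */
        };

        /* Pick a partition from what the program says it needs. */
        static struct partition configure(int parallel_tasks, int tasks_share_data)
        {
            struct partition p;
            if (parallel_tasks < 1)
                parallel_tasks = 1;
            if (tasks_share_data) {
                /* Data-sharing tasks: fewer cores, more threads per core,
                 * so the sharers sit close to the same memory. */
                p.cores = 8;
            } else {
                /* Independent tasks: give each its own core if possible. */
                p.cores = parallel_tasks < TOTAL_PES ? parallel_tasks : TOTAL_PES;
            }
            p.threads_per_core = TOTAL_PES / p.cores;
            return p;
        }

        int main(void)
        {
            struct partition p = configure(16, 0);
            printf("%d cores x %d threads from %d PEs\n",
                   p.cores, p.threads_per_core, TOTAL_PES);
            return 0;
        }

    The only point of the sketch is the direction of the decision: the program's needs drive how the silicon is grouped, which is the reverse of compiling a program down to whatever core-and-thread split the CPU vendor fixed in hardware.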

  • by Iphtashu Fitz ( 263795 ) on Wednesday November 04, 2009 @12:19AM (#29973392)

    I work as a sysadmin at a Boston-based university, and one of my jobs is managing an HPC cluster. We actually had SiCortex come give us a demo of one of their systems a little over a year ago, and we were rather impressed from a basic technology standpoint. However, the biggest drawback we saw, and it was a significant one, was that their cluster wasn't x86-based. We run a number of well-known commercial apps on our cluster, like Matlab, Mathematica, Fluent, Abaqus, and many others. Without those vendors all actively supporting MIPS, SiCortex was simply a non-starter for us when we were researching our next-generation cluster. And by actively I mean rolling out MIPS versions of their products on a schedule comparable to their x86 product releases; having to wait 6 months or more for MIPS versions simply isn't acceptable. If they could get firm commitments from those commercial vendors then we might have pursued SiCortex, but that simply wasn't the case. Even the inability to run a standard commercial Linux distro was a huge drawback, since many commercial software vendors specifically require a commercial distro like Red Hat or SUSE if you're trying to get support from them.
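
    As a small aside on the architecture gate described above, the sketch below is illustrative only: the compiler macros are real GCC predefines, but the gate and the messages are made up. The practical issue is that closed-source packages ship x86 binaries, so a MIPS cluster hits this wall before any benchmark is ever run.

        /* Illustrative only: the kind of architecture check that makes
         * closed-source HPC software effectively x86-only in practice.
         * __x86_64__, __i386__, __mips__ and __mips64 are GCC predefines;
         * the messages are invented. */
        #include <stdio.h>

        int main(void)
        {
        #if defined(__x86_64__) || defined(__i386__)
            printf("x86 build: the vendor-supported path\n");
        #elif defined(__mips__) || defined(__mips64)
            printf("MIPS build: no vendor binaries exist to run here\n");
        #else
            printf("some other architecture\n");
        #endif
            return 0;
        }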

  • Re:GPU's? (Score:4, Informative)

    by peawee03 ( 714493 ) <mcericksNO@SPAMuiuc.edu> on Wednesday November 04, 2009 @01:26AM (#29974010)
    Currently, Teslas are the single-precision future. All my work is in double precision (64-bit), which is where most GPUs are much, much slower. IIRC, the next-generation GPUs are going to have respectable double-precision performance, but they're way down the road; hopefully I'll have moved on to a job where it doesn't matter by then. Hell, I consider it a victory when I've gotten a code translated from FORTRAN 77 to Fortran 95. GPUs? I'll wait until next decade. More normal cores are low-hanging fruit I can use with any MPI code *now*.
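
    For context, the kind of "any MPI code" meant here can be as plain as the sketch below. The harmonic-sum workload is invented for illustration, but the MPI calls are the standard ones; compile with mpicc, launch with mpirun, and every value stays in 64-bit double precision the whole way.

        /* Minimal sketch of the "more normal cores via MPI" point: a
         * double-precision reduction spread over however many ranks the
         * launcher provides. The workload itself is just an example. */
        #include <mpi.h>
        #include <stdio.h>

        int main(int argc, char **argv)
        {
            int rank, size;
            MPI_Init(&argc, &argv);
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);
            MPI_Comm_size(MPI_COMM_WORLD, &size);

            /* Each rank sums its stride of the first N harmonic terms. */
            const long N = 1000000;
            double local = 0.0;
            for (long i = rank + 1; i <= N; i += size)
                local += 1.0 / (double)i;

            double total = 0.0;
            MPI_Reduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

            if (rank == 0)
                printf("harmonic sum over %d ranks: %.15f\n", size, total);

            MPI_Finalize();
            return 0;
        }

    Adding more conventional cores means changing nothing here but the rank count at launch time, whereas moving the same loop onto a 2009-era GPU would mean either dropping to single precision or taking the double-precision penalty.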
