The Story Behind a Failed HPC Startup

jbrodkin writes "SiCortex had an idea that it thought would take the supercomputing world by storm — build the most energy-efficient HPC clusters on the planet. But the recession, and the difficulties of penetrating a market dominated by Intel-based machines, proved to be too much for the company to handle. SiCortex ended up folding earlier this year, and its story may be a cautionary tale for startups trying to bring innovation to the supercomputing industry."
  • Lesson learned (Score:1, Insightful)

    by Anonymous Coward on Tuesday November 03, 2009 @07:30PM (#29970680)

    Don't try anything new.

  • Re:Lesson learned (Score:3, Insightful)

    by sopssa ( 1498795 ) * <sopssa@email.com> on Tuesday November 03, 2009 @07:39PM (#29970840) Journal

    The thing is, industries like these are already really, really dominated by single players, and everyone uses them. It's the same with Windows - its existing market share is what keeps it holding that market share. In the aircraft industry, all the European companies had to merge just to be able to compete with Boeing.

    When something becomes a de facto standard, it's really hard to break in.

  • Fool's errand (Score:4, Insightful)

    by Locke2005 ( 849178 ) on Tuesday November 03, 2009 @07:40PM (#29970864)
    In a blog post after SiCortex shut down, Reilly says he believes there is still room for non-x86 machines in the HPC market. He is wrong. Much more money is spent every year on improving x86 chips than on all the competitors combined. Basing a supercomputer on MIPS was short-sighted; even if it offers a price/performance or power/performance advantage now, in a couple of years it won't, because x86 is being improved at a much faster rate. Where is Sequent now? The only way to build a successful desktop HPC company is to turn system designs as fast as new x86 generations come out and ship soon after the new CPUs become widely available, i.e. a complete new product every 6 months. That requires a partnership with either Intel or AMD, not a MIPS chip that no one is spending R&D resources on anymore.
  • by Wrath0fb0b ( 302444 ) on Tuesday November 03, 2009 @08:03PM (#29971196)

    Why not use something based on the Atom chip but massively parallel?

    You are probably one of those guys who thinks that if you can get 36 women working together on making a baby, it will be ready in 1 week.

    Not all problems can scale out to many CPUs (or wombs, for that matter). Threading overhead, network latency/bandwidth, and mutual exclusion (or the overhead of atomic data types) all conspire to defeat attempts to scale. This is, of course, assuming your problem is even amenable to straightforward parallelization in the first place -- many problems (for instance, Monte Carlo lattice simulations) are excruciating to scale to even 2 CPUs. The usual back-of-envelope version of this is Amdahl's law: if only a fraction p of the work can be parallelized, the speedup on N CPUs is capped at 1 / ((1 - p) + p/N), so even p = 0.95 tops out at 20x no matter how many cores you throw at it.

    In my own (informal) tests on our HPC cluster (x64, Linux; see my post above for details), I concluded that you need to be able to discretize your work into independent (and NONBLOCKING) chunks of ~5ms in order to make spawning a pthread worth it. Of course, "worth it" is a relative term -- some people would be glad to double the CPU time required for a 25% reduction in wall-clock time while others would not, so I'll concede that my measurement is biased. IIRC, I required a net efficiency (versus the single-core version) of no worse than 85% -- i.e., spend less than 15% of your CPU time dealing with thread overhead or waiting for a mutex. This was for 8 cores on the same motherboard, by the way; if you are spawning MPI jobs over a network socket, expect much, much worse.
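
    To make the overhead point concrete, here is a minimal sketch (mine, not the parent poster's) of the kind of measurement described above: average the cost of a pthread create/join pair, then back out the smallest work chunk that keeps that overhead under a chosen efficiency target. The 85% target and the pthread focus come from the comment; the constants and the code itself are assumptions, and it ignores mutex contention, cache effects, and MPI entirely.

        /*
         * Sketch: how big does a work chunk need to be before spawning a
         * pthread per chunk is "worth it"?  We time SPAWNS create/join pairs
         * of a no-op thread and apply efficiency = chunk / (chunk + overhead).
         */
        #include <pthread.h>
        #include <stdio.h>
        #include <time.h>

        #define SPAWNS 1000              /* create/join pairs to average over */
        #define TARGET_EFFICIENCY 0.85   /* figure taken from the comment     */

        static void *noop(void *arg) { return arg; }

        static double now_sec(void)
        {
            struct timespec ts;
            clock_gettime(CLOCK_MONOTONIC, &ts);
            return ts.tv_sec + ts.tv_nsec * 1e-9;
        }

        int main(void)
        {
            pthread_t tid;
            double start = now_sec();

            for (int i = 0; i < SPAWNS; i++) {
                pthread_create(&tid, NULL, noop, NULL);
                pthread_join(tid, NULL);
            }

            /* average cost of one create+join pair, in seconds */
            double overhead = (now_sec() - start) / SPAWNS;

            /* smallest chunk with chunk / (chunk + overhead) >= target */
            double min_chunk = overhead * TARGET_EFFICIENCY / (1.0 - TARGET_EFFICIENCY);

            printf("create+join overhead: %.1f us\n", overhead * 1e6);
            printf("minimum chunk for %.0f%% efficiency: %.2f ms\n",
                   TARGET_EFFICIENCY * 100.0, min_chunk * 1e3);
            return 0;
        }

    Compile with something like "gcc -O2 -pthread"; on a typical Linux box the create+join overhead is in the tens of microseconds, so the 85% threshold alone gives a chunk well under 5ms -- the rest of the parent's 5ms figure presumably comes from the locking and blocking costs this sketch does not measure.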

  • Wile E Coyote (Score:1, Insightful)

    by chill ( 34294 ) on Tuesday November 03, 2009 @08:03PM (#29971198) Journal

    Whenever I hear a story about some new type of "super" computer, I think of an old Road Runner cartoon. Wile E. Coyote, Genius, is mixing chemical explosives in his little shack, not knowing it has been moved onto the train tracks.

    He says to himself, "Wile E. Coyote SUPER genius. I like the sound of that." He then gets hit by the train.

    Some of these companies remind me a LOT of good, old Wile E. Coyote. The one in this article just found the train.

  • Re:Lesson learned (Score:3, Insightful)

    by serviscope_minor ( 664417 ) on Tuesday November 03, 2009 @08:07PM (#29971254) Journal

    Single player? Have you looked at the top 100? It's roughly equal parts Intel (x86), AMD (x86), and IBM (POWER-related), with a smattering of others: Cell (mostly the SPUs), Itanium, SPARC, NEC, and so on.

    There's certainly no dominant player, and not even much of a dominant instruction set. The thing is, supercomputers are so expensive and unique that porting to a different instruction set is usually the least of the work, except for Roadrunner, which is fast but rather hard to use.

  • Re:1 down (Score:3, Insightful)

    by fm6 ( 162816 ) on Tuesday November 03, 2009 @08:10PM (#29971278) Homepage Journal

    Orion? Long gone.

    http://www.theregister.co.uk/2006/02/14/orion_shuts_down/ [theregister.co.uk]

    The weird thing here is that the Register quotes Bill Gates as calling Orion's deskside supercomputers part of a "key trend". Now, I've always thought Bill's understanding of the marketplace was overrated. But you'd think that somebody whose immense fortune comes almost entirely from the triumph of commodity processors would know that this kind of effort is doomed.

    Some people are just in love with these fancy RISC architectures and stick with them in the face of their total failure in the marketplace. When I was at Sun, the Sparcophiles would quote impressive raw numbers for SPARC architectures, even trying to sell them to people who already had a solid commitment to commodity systems. And yet every single Sun product in the HPC Top 500 runs Intel or AMD!

  • Re:Lesson learned (Score:2, Insightful)

    by tphb ( 181551 ) on Tuesday November 03, 2009 @08:24PM (#29971466)

    Lesson learned: there is no market for proprietary CPUs in MPP supercomputers. It's gone. If Cray and SGI couldn't do it, how are a couple of guys from DEC and Novell going to pull it off?
    It's always sad when someone's dream fails, but come on, guys. You're pursuing a market from 15 years ago, just like DEC and Novell did when they died (okay, Novell still exists, but it is irrelevant).

    Supercomputers are commodity processors, increasingly in commodity boxes, running commodity open-source software. A supercomputer running slower processors is not going to cut it.

  • by timmarhy ( 659436 ) on Tuesday November 03, 2009 @08:49PM (#29971760)
    These guys failed in a very typical geeky fashion: they understood the technology but not the business, and at the end of the day your customers need a business case to use your services. It's the tail attempting to wag the dog.
  • Re:Fool's errand (Score:3, Insightful)

    by BikeHelmet ( 1437881 ) on Tuesday November 03, 2009 @09:04PM (#29971918) Journal

    Right now x86 has only two viable competitors.

    -ARM
    -Whatever IBM can design. (but IBM's stuff is expensive)

    ARM CPUs tend to be cheap, power efficient, and pack a ton of performance for the price - and the company has enough cash to keep developing for years and years. Other companies do the fabbing, which lets ARM stay focused on what it's good at. It's a relationship that mirrors the GPU makers - ATI/nVidia and TSMC. However, ARM has a very low performance ceiling compared to x86, and that limits its usage scenarios: good for low-power servers, but not so great for scientific computing or anything that hits the CPU hard. ARM hopes to release dual-core Cortex-A9 chips in 2010, so maybe they'll catch on - only time will tell.

    IBM has always been the leader in performance, but the price would knock you flat. 5GHz POWER6, anyone? It still beats everything Intel puts out, and it's years old - assuming you can foot the bill and deal with the different architecture. And look at the Cell - on release it was something like 30x more efficient than Intel's highest-end CPUs when used in supercomputers (because Intel's CPUs of the time failed completely at scaling past a few cores). It was also cheaper - but the architecture isn't exactly friendly, and most companies prefer to toss a few dozen extra $2000 servers at a problem rather than deal with training/hiring employees who can work with a new architecture.

    And that's the problem - everyone knows x86, so even if an x86 server costs five times as much, it comes out more economical.

    But luckily for ARM, lots of people are getting more familiar with its instruction set. These days just about every tiny device has an ARM CPU powering it... finding developers will not be a problem.

  • by MarkvW ( 1037596 ) on Tuesday November 03, 2009 @09:45PM (#29972274)

    Somebody is going to crack the market--and it won't be one of the people who sit at home and cry in their beer about how Intel rules the world and how nobody has any hope of success!!

    Thank goodness for the entrepreneurs who spit on lassitude and take their shot! Those wozniaks are the people who end up delivering really cool stuff for the rest of humanity, and leave the conventional wisdom people in the dust.

  • Re:Lesson learned (Score:4, Insightful)

    by jd ( 1658 ) <imipak@yahoGINSBERGo.com minus poet> on Tuesday November 03, 2009 @10:19PM (#29972552) Homepage Journal

    Having worked in one HPC startup (Lightfleet), I can say that one of the biggest dangers any startup faces is its own management. Good ideas don't make themselves into good products, and they don't turn themselves into good profits by selling themselves. Good ideas don't even make it easier - you only have to look at how many products are both defective by design AND sell astronomically well to see that.

    I can't speak to SiCortex's case, but it looks to me like they had a great idea but lacked the support system needed to get very far in the market. It's not a unique story - Inmos didn't fail on technological grounds. Transmeta probably didn't, either.

    Really, it would be great if some effort went into examining the inventions of the past to see which ideas are worth trying to recreate. For example, would there be any value in Content Addressable Memory? Cray got an MPI stack into RAM, but could some sort of hardware message-passing be useful in general? Although SCI and InfiniBand are not extinct, they're not prospering too well either - could they be redone in a way that didn't hurt performance but did bring them into the mass market?

    Then there are all sorts of ideas that have died (or are dying - Netcraft confirms it) and probably should stay dead. Bulk Synchronous Processing is fading, distributed shared memory is now only available in spiritualist workshops, CORBA was mortally wounded by its own specification committee, and parallel languages like PARLOG and UPC are not exactly running rampant even though there are huge problems with getting programs to run well on SMP and/or multicore systems.

  • by labradore ( 26729 ) on Tuesday November 03, 2009 @11:44PM (#29973158)
    They were ahead of schedule to profitability. They lost funding for next-gen equipment development because one of their VCs was overextended (read: losing too much money on other risky ventures) and decided to pull out. The risk with a company like that may be high, but once you reach profitability you can fund further product development internally. They had sold about twenty $1.5M machines in about a year on the market. They said they were about 1.5 years from profitability, so I'm guessing they were expecting to sell another 75 or 100 top-end machines to get to break-even. At that rate, they were probably spending less than $20M a year on development. I'm guessing they burned up $100M to get where they got. In the overall scheme of things, that's not a big bet. If they had managed to develop 20- to 50-thousand-node machines and increase the output per core within 3 years, that would have done more than fill a niche. They probably would have developed some game-changing technology in the bargain. Stuff that Intel and Google might just be interested in.

    To be clear: this was not a failure due to the economics of competing against Intel/x86. This was a failure due to not being lucky. It takes sustained funding to make your way from start-up to profit in most technical businesses. HPC is more technical and thus more expensive than most.

  • Re:Lesson learned (Score:5, Insightful)

    by Jacques Chester ( 151652 ) on Wednesday November 04, 2009 @12:55AM (#29973702)

    They didn't die because their customers abandoned them for something cheaper. They died because they had a cashflow crisis due to investors pulling out of a planned round of fundraising. They had millions of dollars of sales in the pipeline.

    The lesson isn't "Don't compete with Intel", it's "When you run out of money, you're out of business". Or perhaps, "The financial crisis killed lots of otherwise sound businesses". Luck, as the OP pointed out, played a large part.

  • Re:Fool's errand (Score:3, Insightful)

    by Jacques Chester ( 151652 ) on Wednesday November 04, 2009 @01:01AM (#29973750)
    > Much more money is being spent every year on improving x86 chips than all the competitors combined.

    By your logic, General Motors should be crushing Ferrari in the supercar market. After all, GM spends much more on car development than Ferrari does.
