HP Claims Their Moonshot System is a 'New Style of IT' (Video)

Video no longer available.
Didn't we already have something kind of like this called a Blade server? But this is better! An HP Web page devoted to Moonshot says, "Compared to traditional servers, up to: 89% less energy; 80% less space; 77% less cost; and 97% less complex." If this is all true, the world of servers is now undergoing a radical change. || A quote from another Moonshot page: "The HP Moonshot 1500 Chassis has 45 hot-pluggable servers installed and fits into 4.3U. The density comes in part from the low-energy, efficient processors. The innovative chassis design supports 45 servers, 2 network switches, and supporting components." These are software-defined servers. HP claims they are the first ones ever, a claim that may depend on how you define "software-defined." And what software defines them? In this case, at Texas Linux Fest, it seems to be Ubuntu Linux. (Alternate Video Link)

Tim: Perfect. So, Thomas, we are here at the HP booth at the Texas Linux Fest.

Thomas: Yes.

Tim: What is HP doing here?

Thomas: Well, we are showing off the hardware we have. I am a ROM engineer for HP, and I have this actually sitting in my kit, and I thought, “Hey, I can show this to other people, and they can see what goes into these enterprise products.”

Tim: So what is the hardware specifically that you got right in front of you right now?

Thomas: Yeah, so what we have here is a Moonshot System. Moonshot is the latest in our scale-out systems. If you have an application you want to run on a lot of slave nodes, this is what you want.

Tim: So, explain the actual – the smaller pieces that you’ve got stacked in front of you right now?

Thomas: All right. So these are individual cartridges. This is an Intel cartridge that is running an Atom Avoton. But I’m actually more excited today about these two. These are ARM server cartridges. This bottom one here is from a company called Texas Instruments, and there are actually four ARM CPUs on here. Each one is a separate compute node. What’s cool about these is that they also come with eight TI DSP units. They run OpenCL, they run OpenMP, and OpenMPI. So if you can do algorithms, things of that nature, they can run on these systems.
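
For a concrete sense of what running OpenCL on a cartridge like this means, here is a minimal vector-add sketch in Python. It is a generic illustration, not TI's actual DSP toolchain: the pyopencl and numpy dependencies are assumptions, and the device the context picks up depends entirely on the machine it runs on.

    import numpy as np
    import pyopencl as cl

    # Two input vectors on the host.
    a = np.random.rand(1024).astype(np.float32)
    b = np.random.rand(1024).astype(np.float32)

    # Grab whatever OpenCL device is available (on a TI cartridge this
    # could be a DSP; on a laptop it is usually the CPU or GPU).
    ctx = cl.create_some_context()
    queue = cl.CommandQueue(ctx)

    mf = cl.mem_flags
    a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
    b_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=b)
    out_buf = cl.Buffer(ctx, mf.WRITE_ONLY, a.nbytes)

    # The kernel is plain OpenCL C, compiled at runtime for the device.
    prg = cl.Program(ctx, """
    __kernel void add(__global const float *a,
                      __global const float *b,
                      __global float *out) {
        int gid = get_global_id(0);
        out[gid] = a[gid] + b[gid];
    }
    """).build()

    prg.add(queue, a.shape, None, a_buf, b_buf, out_buf)

    out = np.empty_like(a)
    cl.enqueue_copy(queue, out, out_buf)
    assert np.allclose(out, a + b)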

Tim: Can you distinguish what you are calling a cartridge from what I think of as a blade in an ISP or a data center context?

Thomas: So a blade system is analogous to our DL and ML lines, which are rack and tower systems. They are dual-socket Xeons and they come with three or four I/O options, either gigabit or fiber... probably some others I’m forgetting about, sorry. But they are meant to be general-purpose CPUs and you could do anything with them. What’s great about them over other PCs is that they share power and cooling and also management. So if you’re in a data center and you need a lot of these general-purpose CPUs, then you want blade systems. What’s different with these is that we remove a lot of the components you don’t need for your particular application. Say, for instance, we have a web server here. So you are more interested in shoveling a lot of I/O as fast as you can, low latency.

So you don’t really need Xeons to do that. Xeons have a lot of floating-point and compute power, but you are not rendering 3D models here, you are just serving files. So what you would be interested in is this here: the first 64-bit ARM shipped out there. It is the X-Gene from APM. And what’s special about this cartridge is that it has 8 cores at 3.4 GHz and a dedicated 10-gig NIC, front and back. It has 8 DIMM slots, so it can get 64 gigs of RAM. Now, that may sound small for a particular server, but if you look down here, this is the chassis that it is going to fit into. And you can fit 45 of these in here, so if you do the math, hopefully I am doing the math correctly, the camera is kind of throwing me off, but it is 45 nodes total in 4.3U of space, 2.8 TB of RAM. Again, I’ll remind you that’s all 10-gig NIC connected. On the backside, you have up to eight 40-gig ports. So if you need I/O, this is what you want.
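
Those figures are easy to sanity-check. A quick back-of-the-envelope sketch in Python, using only the numbers quoted above (nothing here comes from an HP spec sheet):

    import math

    nodes_per_chassis = 45
    ram_gb_per_node = 64       # 8 DIMM slots, 64 GB per node, as stated
    chassis_height_u = 4.3

    ram_tb = nodes_per_chassis * ram_gb_per_node / 1000
    print(f"RAM per chassis: {ram_tb:.2f} TB")   # ~2.88 TB, matching the quoted "2.8 TB"

    # Density per standard 42U rack, ignoring switches and the 4.3U
    # rounding debate that comes up in the comments below.
    chassis_per_rack = math.floor(42 / chassis_height_u)
    print(f"{chassis_per_rack} chassis, {chassis_per_rack * nodes_per_chassis} nodes per 42U rack")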

Tim: Now what is your role in the development of this system?

Thomas: I am the ROM engineer for this particular system. That means I’m working on U-Boot for this. I am sure you have heard of U-Boot, I think.

Tim: For people who aren’t familiar, explain U-Boot a little bit.

Thomas: Well, historically U-Boot has been on PowerPCs, like your Apple G3s. Other companies like Freescale and Applied Micro had PowerPCs. IBM of course does PowerPC. But U-Boot is widely used on PowerPCs and other embedded systems like MIPS to bring up the kernel after you start the CPU.

Tim: Now once you’ve brought up the kernel on this hardware, what sort of software is going to run on top of that?

Thomas: So for the Intels, we support various Linuxes. For the ones I am working on, the ARMs, on first ship we will support Canonical’s Ubuntu 14.04 (Trusty), and actually here at the demo, sorry you can’t come, we have Trusty running on the 64-bit and Ubuntu 12.04 (Precise) running on the TI cartridges.

Tim: With all the processing power that you’ve got on here, and having distinguished this from a general-purpose computer, what are some instances where you might want to run this sort of very specialized hardware?

Thomas: Yeah. So, for the 64-bit, we’ve mentioned web serving, but more specifically, something like memcached could be good. And I talked to Duplex about it, and they said that this will be very good for Hadoop slave nodes, where you are just shoveling data in and out.

Tim: Now you mentioned that this was the first 64-bit ARM chip out there.

Thomas: Not the TI, but the APM one.

Tim: Right. When you say out there, is this a shipping product yet?

Thomas: This is not shipping yet. If you want, you can go to APM’s website and get the reference board for it today; they will ship it to you happily. HP plans on shipping it this year. So wait a month or two and we will get it out for you.

Tim: Thomas, you mentioned that Ubuntu is one of the Linux distros that is going to be running on this. Talk about that: how does that work?

Thomas: Well, what happens is that a whole bunch of lawyers get together, have lunch, and out comes a product, and I get a job and a paycheck. But more specifically, what we do is we have a three-way NDA: Canonical signs on with us, and we have another NDA with the SoC vendor. So, in this case, we have TI and we have APM. And we sit down, draw some schematics, and we get the firmware running on it.

Tim: So those were a couple of – would you call them partner companies?

Thomas: Yes, these are two of our partner companies, yes. We have other partners going forward, but this is what I can talk to you about today.

Tim: Okay. HP has been talking about Moonshot for quite a few months now. How would you sum up the idea of Moonshot? Is it quick deployment, is it new hardware, is it a new software infrastructure? Let’s talk about that.

Thomas: It is targeted hardware. Now, HP, we’ve done c-Class for about ten years, and we are pretty good in that field. What we are bringing to Moonshot is all of our expertise in infrastructure, management, and power and cooling, and all the quality and support that goes into it. We are putting that into this package and working with SoC vendors to deliver a product targeted at a particular product segment. And we put a pretty little bow on it and send it off.

Tim: But the hardware being tuned to work with Linux means it is probably going to work with a lot of distros.

Thomas: Yes. Officially, on first ship, these ARM servers support Canonical, but they also support PXE boot, so users can do whatever they want with them. But officially, we have tested this for over a year or so, I am not sure exactly how long, it has been at least a year, and we have ironed out the kinks, and we are going to make sure that you get a quality product.

This discussion has been archived. No new comments can be posted.

  • by Anonymous Coward

    I can see this being useful for approximately ten percent of the market.

  • But wait! (Score:3, Funny)

    by djupedal ( 584558 ) on Tuesday July 15, 2014 @05:02PM (#47461163)
    There's more! Buy now and receive a second HP MS System for free! Just pay shipping and handling.

    Not available in any store.
  • But Moonshot servers are a couple of years old, with a few success stories from HP itself (www.hp.com is fully Moonshot-powered) and others. Yes, they are efficient, small, and easy to run, but they are also quite a bit less powerful than a "traditional" server. Now all they do is release new "cartridges" for the platform. Are we soon to hear about generation 2.0? Maybe at HP Discover?

    • by Anonymous Coward on Tuesday July 15, 2014 @05:14PM (#47461309)

      Only HP would call them "server cartridges". I think their CEO cartridge is running low; they should go get a new one.

    • by afidel ( 530433 )

      Nah, HP's all about long-lived chassis; the C7000 blade enclosure is 8 years old and they're still adding new blades and I/O modules for it. The previous p-Class chassis was supported for 6 years.

    • by amorsen ( 7485 )

      www.hp.com is fully Moonshot-powered

      That would explain why the HP site is so ridiculously slow. Except that it has been slow for years, but maybe they were always running it on prototypes.

    • by stiggle ( 649614 )

      It's because HP customers are used to printer ink cartridges being overpriced disposable units. Their thinking is to move this into computer components and release them as overpriced disposable units too.

  • 4.3 U (Score:4, Insightful)

    by digsbo ( 1292334 ) on Tuesday July 15, 2014 @05:13PM (#47461297)
    4.3U? They couldn't have made a reasonable tradeoff to go to an even unit size?
    • by Anonymous Coward

      4.3U? They couldn't have made a reasonable tradeoff to go to an even unit size?

      Maybe they have a 0.7U add-on planned for it :-)

      • by Burdell ( 228580 )

        It is probably either 7.5 inches (4.29U) or 190 millimeters (4.27U) tall. However, I don't know why you'd make something designed to be rack-mounted that is not an integral multiple of 1U, unless you have something that needs cables attached to the front (in which case you still designed it poorly).
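
        The conversions above check out; one rack unit is defined as 1.75 inches (44.45 mm). A quick sketch:

            U_INCHES, U_MM = 1.75, 44.45  # 1U = 1.75 in = 44.45 mm
            print(7.5 / U_INCHES)         # 4.2857... ~= 4.29U for a 7.5-inch chassis
            print(190 / U_MM)             # 4.2744... ~= 4.27U for a 190 mm chassis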

    • Exactly, in a cold/hot aisle rack you are left with a gap which would need plugging with something.

      A 42U rack would have 7U of wasted space; that's almost another 2 servers...
      • Re:4.3 U (Score:5, Funny)

        by mlts ( 1038732 ) on Tuesday July 15, 2014 @05:32PM (#47461517)

        A swimming pool noodle cut to fit works perfectly with gaps in the hot/cold aisles. Don't ask how I know...

      • by Shimbo ( 100005 )

        Exactly, in a cold/hot aisle rack you are left with a gap which would need plugging with something.

        A 42U rack would have 7U of wasted space; that's almost another 2 servers...

        They will sell you a .66U spacer, or a 13U box that fits three of them. It may be a dumb idea but not that dumb.

        • by digsbo ( 1292334 )
          39U of these plus 2-3U of network equipment seems reasonably efficient. I didn't see the bit about the 13U consolidated chassis, but that is pretty sensible.
          • It would have been smarter not to require an additional chassis (who wants to lug an extra 13U chassis into a datacenter?). The offset for the 2nd and 3rd servers should be handled in the rail system; then you only have to fit different rails.
    • Re:4.3 U (Score:4, Informative)

      by radarskiy ( 2874255 ) on Tuesday July 15, 2014 @10:29PM (#47463473)

      This is actually an established size from HP. It allows two 3.5" drives per vertical blade (cheaper than 2.5"), which would not fit inside a 4U chassis, but it fits one more chassis per rack than 5U would.

  • Amazing. (Score:5, Funny)

    by ddt ( 14627 ) <ddt@davetaylor.name> on Tuesday July 15, 2014 @05:15PM (#47461331) Homepage
    "If you do algorithms, things of that nature, you can run on these systems."

    Sold!
  • by guytoronto ( 956941 ) on Tuesday July 15, 2014 @05:21PM (#47461415)
    Being in IT sales, I am often required to surf HP's website. Their site is consistently painfully slow. You would think that a company like HP would make sure their servers could serve up webpages faster than a snail.
    • by Anonymous Coward

      Couldn't agree more on this! At some point I just gave up on HP because of their website. Dell / Lenovo may have a million options and overfancy pages too, but at least load times are predictable. They should all take lessons from some of the newer companies like Google, who seem able to run fast sites (news.google.com, etc.)

    • by duk242 ( 1412949 )
      Oh man, plus one to this... Their support site is so slow that when you're logging warranty jobs, you hit the button to load the page, then alt-tab and do something else for a bit...
  • Totally would buy (Score:5, Interesting)

    by Anonymous Coward on Tuesday July 15, 2014 @05:23PM (#47461435)

    If I had the money, I'd totally buy it and avoid the cluster****ery that is cloud services.

    BUT...

    Notice what the average CPU is: Intel Atom-class hardware. Or in other words, this is designed for doing Dreamhost-style weak cloud VPS, so while you may have 45 servers in the box, the net performance is ... well...

    The Atom processor picked, the S1260 (2 cores, 4 threads @ $64.00), has a Passmark score of 916.
    The highest rated is the Intel Xeon E5-2697 v2 @ 2.70GHz, Passmark 17361.
    So 19 of those Atoms (38 cores, 76 threads) = 1 E5-2697 v2 (12 cores, 24 threads @ $2614.00).
    One dual E5-2697 v2 server is almost equal, and you have 24 usable cores that could be turned into weak VPS servers. Get the point I'm making?
    Moonshot might be a better choice for provisioning weak dedicated hosts instead of VPSs (which are inherently weak; even when running on solid hardware, they are still subject to being oversold). The S1260 is $64, the E5-2697 v2 is $2614, or roughly the cost of 40 of the Atoms. So on paper someone might go "oh look, I can afford an entire Moonshot server for the price of a single-CPU E5-2697 v2 and get twice as many cores," when the single-thread performance of the 2697 is a Passmark of 1,662 (yes, 181% of all 4 threads of the Atom). The arithmetic is re-run in the sketch at the end of this comment.

    The thing is, this kind of configuration is better suited for certain tasks, such as a web server cluster front end (where it's rarely the CPU but the network infrastructure that's the bottleneck), where you can turn identical servers on or off as required, and none of them actually need hard drives connected; they can just be PXE-booted from a NAS.

    Though I'm worried when I see "software defined" anywhere in marketing, as most virtualization software fails hard when under load (75% CPU). So maybe a data center that is space/power constrained can see a use for this, but if you're running a high-usage website, you're better off optimizing the software stack (like switching to nginx, or putting Varnish in front of Apache httpd + php-fpm instead of leaving it at the highly inefficient httpd prefork + mod_php) than deploying more inefficient servers.
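
    Re-running the arithmetic above with the quoted scores and prices (as given, not independently verified), the ratios hold up:

        # Scores and prices exactly as quoted in this comment.
        atom_score, atom_price = 916, 64        # Intel Atom S1260, 2 cores / 4 threads
        xeon_score, xeon_price = 17361, 2614    # Intel Xeon E5-2697 v2, 12 cores / 24 threads
        xeon_single_thread = 1662

        print(xeon_score / atom_score)          # ~18.95: about 19 Atoms per Xeon on throughput
        print(xeon_price / atom_price)          # ~40.8: about 41 Atoms for one Xeon's price
        print(xeon_single_thread / atom_score)  # ~1.81: one Xeon thread vs a whole 4-thread Atom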

    • IBM? (Score:5, Informative)

      by s.petry ( 762400 ) on Tuesday July 15, 2014 @05:57PM (#47461755)

      The whole promotion seems to resemble everything from the IBM PureSystems servers that were introduced about 2 years ago, but of course lacking any type of performance. At least the IBM servers allowed scaling, higher-performance CPUs, integrated disks, etc.

      When management and marketing design computers, this is what we get. HP has not really been a technical player for a long time, at least in terms of innovation. Superdome was okay, but Sun E-class machines made them look like an old mainframe in terms of usability. Itanium flopped, and they never put much into the PA-RISC chips after that. OmniBack and NNM were great, but required manpower, and HP has despised T&M billing for as long as I've worked with them, which goes back to HP-UX 9 and VUE days. (I contracted for them in Michigan, because they would not hire direct technical people.)

    • Your limiting factor is actually cooling. Given the W/ft^2 you can pull out of a room, you can't fill every rack to the top with Xeons.

    • Agreed! In my experience you lose performance, and therefore efficiency, with VMs when running CPU core/frequency-dependent applications. The applications we run are 60-80% faster on "bare metal" Linux than any VM deployment we've tried so far.
    • The Moonshot is targeted at a different workload than general computing. We are currently looking at them for replacing our VDI solution. We have several pieces of software that need a better video card and CPU than what a typical VM could provide. With the Moonshot we can simply install our software on the bare-metal hardware and skip the virtualization layer. The Moonshot supports 45 blades, and you can get a blade that has 4 servers built in, without a hard drive of course. 45 * 4 = 180 desktops per 4.3U.
  • Not so fast (Score:2, Interesting)

    by Anonymous Coward

    This sounds like a great idea, right? 45 servers in a single chassis? With an OA (Onboard Administrator) to allow administration of each individual blade. So about 12 months after you've implemented this monster in your production (no downtime) environment, a blade fails. No problem, you replace the blade. But the NEW blade comes with firmware that requires an update to the OA (entire chassis), and the new OA firmware won't work with the other 44 blades until you update them also. Hmmmm... hey boss, can I ge

    • by dbIII ( 701233 )
      That's why I like the SuperMicro (and I'm sure others') way of doing a dense server. With some models, each machine in the shared case shares the power supply and that's it. You may need third-party software to wrangle the cluster, but it goes no deeper than the OS level, and a different bit of hardware isn't going to upset anything else.
  • Wow!

    Imagine if they could back-port this work to their current range of x86 blade servers!

    :-)

  • They forgot the golden rule of IT. If your company has the #1 worst-rated consumer customer support and the #1 least reliable laptops (eMachines beat them at desktops), then don't create a brand new technology that people will be hesitant to use. You pretty much have to be the exact opposite. Only the best company can come out with something new, claim "just trust us, it works perfectly and you should use it," and have people believe them. I really hope this finally bankrupts them so I can stop having to
    • I thought the golden rule of IT was CYA?

    • Now if each CPU were the size of a USB stick and plugged into a USB 3 socket (or some mini PCI Express socket), but gave the power of an Atom CPU, you could then dynamically plug CPUs in and out like in HAL.

  • Cisco got fr1st post!

    • I was about to say that this sounds very much like Cisco UCS, where everything is defined in 'software'. You define the template and its components, and this includes things like WWNs and MAC addresses, and it allows you to migrate the 'server' to different blades since it is all in 'software'.

      With that said, the UCS kit we run at work doesn't have anywhere near the density claimed by HP with their Moonshot, but claiming they were the first to create a software-defined blade chassis and the like is not correct.

  • We bought some Transmeta-based blades at $LARGE_US_BANK a while back, and they sucked. Hard. Like, don't bother running stuff on them hard. They went back to HP, or in the trash, I forget, and we got real hardware. It looks like HP is reviving the concept of shitty servers for people who don't do a lot with them. Instead of 1 beefy 4U machine, you have a 45-node Beowulf cluster of suck, and most problems AREN'T trivially scalable. Or, if your app really is that scalable (or you've spent the time t

  • Look at a SuperMicro catalogue from around 2008 onwards or Verari from even earlier.
    • by afidel ( 530433 )

      What does SM have that's even remotely like Moonshot? I don't believe they have anything like 45 modules with 4x 8-core ARM processors in 5U. Verari looks interesting, but at 1700 cores per rack it's almost 10x less dense than Moonshot.
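
      Taking the figures above at face value (45 modules of 4x 8-core processors per chassis, roughly 9 chassis per 42U rack), the density gap is checkable, and it lands on the order of the "almost 10x" claim:

          modules, cpus, cores = 45, 4, 8   # per-chassis figures quoted above
          chassis_per_rack = 9              # 42U / 4.3U, rounded down
          moonshot_cores = modules * cpus * cores * chassis_per_rack
          print(moonshot_cores)             # 12,960 cores per rack
          print(moonshot_cores / 1700)      # ~7.6x the quoted 1700 cores/rack for Verari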

      • by dbIII ( 701233 )

        I don't believe they have anything like 45 modules with 4x 8-core ARM processors in 5U

        The numbers are a bit different but the "new style" is not new.

  • by Anonymous Coward

    The company I'm at is looking at some new serious number-crunching servers. We had an HP rep come in and propose a Moonshot system. The head of IT and I looked at each other and laughed out loud... Moonshot uses Atom processors. I don't care how many of them you have; we're not using a rack of low-ball processors in our system. Moonshot is a complete joke.

    I think they use Atom processors because it was the only way to get such high density and still be able to get the heat out of the system. It also may be

  • We got demoed this 6 months or so ago.

    I still fail to see what this buys you over a bunch of regular blades or rackmounts running your virtualisation platform of choice.

    • We got demoed this 6 months or so ago.

      I still fail to see what this buys you over a bunch of regular blades or rackmounts running your virtualisation platform of choice.

      Best use-case proposal I've seen is for something like VDI. Instead of sharing the resources of one server, every desktop gets their own processor and memory.

  • It's 10U with 64 almost-real servers (Haswell Xeons) and has integrated storage and networking. You only need to hook up some power and a beefy uplink to it and you're done. And did I mention a REST API to control it? It works even with OpenStack bare metal if you want that. Last I heard (two weeks ago), Moonshot is still CLI-only.
    Apollos, on the other hand, those are worth considering. But Moonshot... too little, too late.

  • It must be a nightmare if the chassis fails.
