HP Claims Their Moonshot System is a 'New Style of IT' (Video)
Tim: Perfect. So, Thomas, we are here at the HP booth at the Texas Linux Fest.
Thomas: Yes.
Tim: What is HP doing here?
Thomas: Well, we are showing off the hardware we have. I am a ROM engineer for HP, and I had this sitting in my kit, and I thought, “Hey, I can show this to other people, and they can see what goes into all these enterprise projects.”
Tim: So what is the hardware specifically that you got right in front of you right now?
Thomas: Yeah, so what we have here is a Moonshot System. Moonshots are the latest in our scale-out systems. If you have an application you want to run across a lot of slave nodes, this is what you want.
Tim: So, explain the actual – the smaller pieces that you’ve got stacked in front of you right now?
Thomas: All right. So these are individual cartridges. This is an Intel cartridge running an Atom Avoton. But I’m actually more excited today about these two. These are ARM server cartridges. This bottom one here is from a company called Texas Instruments, and there are actually four ARM CPUs on here. Each one is a separate compute node. What’s cool about these is that they also come with eight TI DSP units. They run OpenCL, OpenMP, and OpenMPI. So if you have algorithms, things of that nature, they can run on these systems.
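(Editor's note: for readers who haven't used OpenMP, here is a minimal sketch in C of the kind of data-parallel loop Thomas is describing. The pragma below is standard OpenMP; offloading kernels to the TI DSPs specifically would go through OpenCL instead, which is not shown here.)

    /* Minimal OpenMP sketch: a parallel dot product.
       Compile with: gcc -fopenmp dot.c -o dot */
    #include <stdio.h>

    #define N 1000000

    static double a[N], b[N];

    int main(void)
    {
        double sum = 0.0;
        int i;

        for (i = 0; i < N; i++) {  /* fill test data */
            a[i] = 1.0;
            b[i] = 2.0;
        }

        /* The reduction clause splits the loop across cores and
           safely combines the per-thread partial sums. */
        #pragma omp parallel for reduction(+:sum)
        for (i = 0; i < N; i++)
            sum += a[i] * b[i];

        printf("dot product = %.1f\n", sum);  /* expect 2000000.0 */
        return 0;
    }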
Tim: Can you distinguish what you are calling a cartridge from what I think of as a blade in an ISP or a data center context?
Thomas: So blades are analogous to our DL and ML lines, which are rack and tower systems. They are dual-socket Xeons, and they come with three or four I/O options, either gigabit or fiber... probably some others I’m forgetting about, sorry. But they are meant to be general purpose CPUs, and you can do anything with them. What’s great about them over other PCs is that they share power and cooling and also management. So if you’re in a data center and you need a lot of these general purpose CPUs, then you want blade systems. What’s different with these is that we remove a lot of the components you don’t need for your particular application. Say, for instance, we have a web server here. Then you are more interested in shoveling a lot of I/O as fast as you can, with low latency.
So you don’t really need Xeons to do that. Xeons have floating point and a lot of compute power, but you are not rendering 3D models here, you are just serving files. What you would be interested in is this here: the first 64-bit ARM shipped out there, the X-Gene from APM. What’s special about this cartridge is that it has 8 cores at 3.4 GHz and a dedicated 10-gig NIC, front and back. It has 8 DIMM slots, so it can take 64 gigs of RAM. Now, that may sound small for a server, but if you look down here, this is the chassis that it is going to fit into. You can fit 45 of these in here, so if you do the math (hopefully I am doing the math correctly, the camera is kind of throwing me off), that is 45 nodes total in 4.3U of space and 2.8 TB of RAM. Again, I’ll remind you that’s all 10-gig-NIC connected. On the backside, you have up to eight 40-gig ports. So if you need I/O, this is what you want.
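(Editor's note: a quick check of the numbers as quoted, assuming 45 cartridges at 64 GB each in a 4.3U chassis.)

    /* Sanity-check the chassis math quoted above. */
    #include <stdio.h>

    int main(void)
    {
        const int nodes = 45;
        const int gb_per_node = 64;
        const double chassis_u = 4.3;

        int total_gb = nodes * gb_per_node;          /* 2880 GB */
        printf("RAM per chassis: %d GB (~%.1f TB)\n",
               total_gb, total_gb / 1024.0);         /* ~2.8 TB */

        int per_rack = (int)(42 / chassis_u);        /* 9 chassis per 42U rack */
        printf("Per 42U rack: %d chassis, %d nodes\n",
               per_rack, per_rack * nodes);          /* 405 nodes */
        return 0;
    }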
Tim: Now what is your role in the development of this system?
Thomas: I am the ROM engineer for this particular system. That means I’m working on U-Boot for it. I am sure you have heard of U-Boot, I think.
Tim: For people who aren’t familiar, explain U Boot a little bit.
Thomas: Well, historically U-Boot has been on PowerPCs, like your Apple G3s. Other companies like Freescale and Applied Micro had PowerPCs. IBM of course does PowerPC. U-Boot is widely used on PowerPCs and other embedded systems like MIPS to bring up the kernel after you start the CPU.
Tim: Now once you’ve brought up the kernel on this hardware, what sort of software is going to run on top of that?
Thomas: So for the Intels, we support various Linuxes. For the ones I am working on, the ARMs, at first ship we will support Canonical Trusty, 14.04. And actually here at the demo (sorry you can’t come) we have Trusty running on the 64-bit and Canonical Precise running on the TI cartridges.
Tim: With all the processing power you’ve got here, and given how you’ve distinguished this from a general purpose computer, what are some instances where you might want to run this sort of very specialized hardware?
Thomas: Yeah. So, for the 64-bit, we’ve mentioned web serving, but more specifically something like memcached could be good. And I talked to Duplex about it, and they said that this will be very good for Hadoop slave nodes, where you are just shoveling data in and out.
Tim: Now, you mentioned that this was the first 64-bit ARM chip out there.
Thomas: Not the TI one, but the APM one.
Tim: Right. When you say out there, is this a shipping product yet?
Thomas: This is not shipping yet. If you want, you can go to APM’s website and get the reference board for it today; they will ship it to you happily. At HP, we plan on shipping it this year. So wait a month or two and we will get it out for you.
Tim: Thomas, you mentioned that Ubuntu is one of the Linux distros that is going to be running on this. Talk about that and how does that work?
Thomas: Well, what happens is that a whole bunch of lawyers get together, have lunch, and out comes a product, and I get a job and a paycheck. But more specifically, what we do is we have a three-way NDA: Canonical signs on with us, and we have another NDA with the SoC vendor. So in this case, we have TI and we have APM. And we sit down, draw some schematics, and we get the firmware running on it.
Tim: So those were a couple of – would you call them partner companies?
Thomas: Yes, these are two of our partner companies, yes. We have other partners going forward, but this is what I can talk to you about today.
Tim: Okay. HP has been talking about Moonshot for quite a few months now, the idea of Moonshot being—how would you sum that up? Is it quick deployment, is it new hardware? Is it a new software infrastructure? Let’s talk about that.
Thomas: It is targeted hardware. Now, HP, we’ve done c-Class for about ten years, and we are pretty good in that field. What we are bringing to Moonshot is all of our expertise in infrastructure and management and power and cooling, and all the quality and support that goes into it. We are putting that into this package, and we are working with SoC vendors to deliver a product targeted at a particular market segment. And we put a pretty little bow on it and send it off.
Tim: But the hardware being tuned to work with Linux means it is probably going to work with a lot of distros.
Thomas: Yes. Officially, first ship on these ARM servers is with Canonical, but this supports PXE boot, so if users want to, they can do whatever they want with it. But officially, we have tested this for over a year or so (I am not sure exactly how long, but it has been at least a year), and we have ironed out the kinks, and we are going to make sure that you get a quality product.
So dedicated cloud servers (Score:1)
I can see this being useful for approximately ten percent of the market.
But wait! (Score:3, Funny)
Not available in any store.
But Moonshot is years old (Score:1)
But Moonshot servers are a couple of years old, with a few success stories from HP itself (www.hp.com is fully Moonshot-powered) and others. Yes, they are efficient, small, and easy to run, but they are also quite a bit less powerful than a "traditional" server. Now all they do is release new "cartridges" for the platform. Are we soon to hear about generation 2.0? Maybe at HP Discover?
Re:But Moonshot is years old (Score:5, Funny)
Only HP would call them "server cartridges". I think their CEO cartridge is running low, they should go get a new one.
Re: (Score:2)
Nah, HP's all about long-lived chassis; the C7000 blade enclosure is 8 years old and they're still adding new blades and I/O modules for it. The previous p-Class chassis was supported for 6 years.
Re: (Score:2)
www.hp.com is fully moonshot-powered
That would explain why the HP site is so ridiculously slow. Except that it has been slow for years, but maybe they were always running it on prototypes.
Re: (Score:2)
It's because HP customers are used to printer ink cartridges being overpriced disposable units. Their thinking is to move this into computer components and release them as overpriced disposable units too.
4.3 U (Score:4, Insightful)
4.3U? They couldn't have made a reasonable tradeoff to go to an even unit size?
Maybe they have a 0.7U add-on planned for it :-)
Re: (Score:2)
It is probably either 7.5 inches (4.29 U) or 190 millimeters (4.27 U) tall, given that 1U is 1.75 inches (44.45 mm). However, I don't know why you'd make something designed to be rack mounted that is not an integral multiple of U, unless you have something that needs cables attached to the front (in which case you still designed it poorly).
Re: (Score:3)
A 42U rack would have 7U wasted space that is almost another 2 servers...
Re:4.3 U (Score:5, Funny)
A swimming pool noodle cut to fit works perfectly with gaps in the hot/cold aisles. Don't ask how I know...
Re: (Score:2)
Exactly, in a cold/hot aisle rack you are left with a gap which would need plugging with something.
A 42U rack would have 7U wasted space that is almost another 2 servers...
They will sell you a .66U spacer, or a 13U box that fits three of them. It may be a dumb idea but not that dumb.
Re: (Score:1)
There are three screw holes per U, so 1/3 U makes sense.
Re:4.3 U (Score:4, Informative)
This is actually an established size from HP. It allows two 3.5" drives per vertical blade (cheaper than 2.5"), which would not fit inside a 4U chassis, but fits one more chassis per rack than 5U would.
Amazing. (Score:5, Funny)
Sold!
What does their website run on? (Score:5, Informative)
Re: (Score:1)
Couldn't agree more on this! At some point I just gave up on HP because of their website. Dell / Lenovo may have a million options and overfancy pages too, but at least load times are predictable. They should all take lessons from some of the newer companies like Google, who seem able to run fast sites (news.google.com, etc.)
Totally would buy (Score:5, Interesting)
If I had the money, I'd totally buy it and avoid the cluster****ery that is cloud services.
BUT...
Notice what the average CPU is: Intel Atom class hardware. In other words, this is designed for doing DreamHost-style weak cloud VPS, so while you may have 45 servers in the box, the net performance is... well...
The Atom processor picked, the S1260 (2 cores, 4 threads @ $64.00), has a PassMark score of 916.
The highest rated is the Intel Xeon E5-2697 v2 @ 2.70GHz, PassMark 17361.
So 19 of those Atoms (38 cores, 76 threads) = 1 E5-2697 v2 (12 cores, 24 threads @ $2614.00).
One dual E5-2697 v2 server is almost equal, and you have 24 usable cores that could be turned into weak VPS servers. Get the point I'm making?
Moonshot might be a better choice for provisioning weak dedicated hosts instead of VPSes (which are inherently weak; even when running on solid hardware, they are still subject to being oversold). The S1260 is $64; the E5-2697 v2 is $2614, or roughly the cost of 40 of the Atoms. So on paper someone might go, "Oh look, I can afford an entire Moonshot server for the price of a single E5-2697 v2 CPU and get twice as many cores," when the single-thread performance of the 2697 is a PassMark of 1,662 (yes, 181% of all 4 threads of the Atom).
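(Editor's note: the arithmetic above, spelled out in a few lines of C. The PassMark scores and prices are the ones quoted in this comment and may have changed since.)

    /* The Atom-vs-Xeon comparison from this comment, spelled out. */
    #include <stdio.h>

    int main(void)
    {
        const double atom_mark   = 916.0;    /* Atom S1260 (2c/4t) */
        const double xeon_mark   = 17361.0;  /* Xeon E5-2697 v2    */
        const double atom_price  = 64.0;
        const double xeon_price  = 2614.0;
        const double xeon_single = 1662.0;   /* one Xeon thread    */

        printf("Atoms per Xeon, by throughput: %.1f\n",
               xeon_mark / atom_mark);               /* ~19.0 */
        printf("Atoms per Xeon, by price:      %.1f\n",
               xeon_price / atom_price);             /* ~40.8 */
        printf("One Xeon thread vs whole Atom: %.0f%%\n",
               100.0 * xeon_single / atom_mark);     /* ~181%  */
        return 0;
    }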
The thing is, this kind of configuration is better suited for certain tasks, such as a web server cluster front end (where it's rarely the CPU but the network infrastructure that's the bottleneck), where you can turn identical servers on or off as required, and none of them actually need hard drives connected; they can just be PXE-booted from a NAS.
Though I'm worried when I see "software defined" anywhere in marketing, as most virtualization software fails hard under load (75% CPU). So maybe a data center that is space/power constrained can see a use for this, but if you're running a high-usage website, you're better off optimizing the software stack (like switching to nginx, or putting Varnish in front of Apache httpd + php-fpm instead of leaving it at the highly inefficient httpd prefork + mod_php) than deploying more inefficient servers.
IBM? (Score:5, Informative)
The whole promotion seems to resemble everything from the IBM PureSystems that were introduced about two years ago, but of course lacking any kind of performance. At least the IBM servers allowed scaling, higher-performance CPUs, integrated disks, etc.
When management and marketing design computers, this is what we get. HP has not really been a technical player for a long time, at least in terms of innovation. Superdome was okay, but Sun E-class machines made them look like an old mainframe in terms of usability. Itanium flopped, and they never put much into the PA-RISC chips after that. OmniBack and NNM were great, but required manpower, and HP has despised T&M billing for as long as I've worked with them, which goes back to HP-UX 9 and VUE days. (I contracted for them in Michigan, because they would not hire direct technical people.)
Re:Totally would buy (Score:4, Informative)
Shouldn't you be telling that to HP? From the site: "The HP ProLiant Moonshot Server is available with the Intel® Atom Processor S1260...."
Re: (Score:2)
Your limiting factor is actually cooling. For the W/ft^2 you can pull out of a room, you can't fill every rack to the top with Xeons.
Not so fast (Score:2, Interesting)
This sounds like a great idea, right? 45 servers in a single chassis? With an OA (Onboard Administrator) to allow administration of each individual blade. So about 12 months after you've implemented this monster in your production (no downtime) environment, a blade fails. No problem, you replace the blade. But the NEW blade comes with firmware that requires an update to the OA (entire chassis), and the new OA firmware won't work with the other 44 blades until you update them also. Hmmmm... hey boss, can I ge
97% less complex ???? (Score:2)
Imagine if they could back-port this work to their current range of x86 blade servers!
I have a suggestion (Score:2)
Re: (Score:2)
I thought the golden rule of IT was CYA?
Re: (Score:2)
Now if each CPU were the size of a USB stick and plugged into a USB 3 socket, but gave the power of an Atom CPU. You could then dynamically plug CPUs in and out like in HAL. (Or some mini PCI Express socket.)
Re: (Score:2)
So, like VDI? Because that works SO WELL.
UCS what? (Score:2)
Cisco got fr1st post!
Re: (Score:1)
I was about to say that this sounds very much like Cisco UCS, where everything is defined in 'software'. You define the template and its components, including things like WWNs and MAC addresses, and it allows you to migrate the 'server' to different blades since it is all in 'software'.
With that said, the UCS kit we run at work doesn't have anywhere near the density claimed by HP with their Moonshot, but claiming they were the first to create a software-defined blade chassis and the like is not correct.
Tried with Transmeta (Score:2)
We bought some Transmeta-based blades at $LARGE_US_BANK a while back, and they sucked. Hard. Like, don't-bother-running-stuff-on-them hard. They went back to HP, or in the trash, I forget, and we got real hardware. It looks like HP is reviving the concept of shitty servers for people who don't do a lot with them. Instead of 1 beefy 4U machine, you have a 45-node Beowulf cluster of suck, and most problems AREN'T trivially scalable. Or, if your app really is that scalable (or you've spent the time t
Good idea but not new (Score:2)
Re: (Score:2)
What does SM have that's even remotely like Moonshot? I don't believe they have anything like 45 modules with 4x 8-core ARM processors in 5U. Verari looks interesting, but at 1700 cores per rack it's almost 10x less dense than Moonshot.
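(Editor's note: rough math behind that density comparison, using the parent's figures: 45 modules of 4x 8-core ARM per chassis, and roughly nine 4.3U chassis per 42U rack.)

    /* Rough density math, using the parent comment's figures. */
    #include <stdio.h>

    int main(void)
    {
        const int cores_per_module = 4 * 8;  /* 4 SoCs x 8 cores */
        const int modules = 45;
        const int chassis_per_rack = 9;      /* 42U / ~4.3U each */

        int per_chassis = cores_per_module * modules;   /* 1440  */
        int per_rack = per_chassis * chassis_per_rack;  /* 12960 */
        printf("Moonshot: %d cores/chassis, %d cores/rack\n",
               per_chassis, per_rack);
        printf("vs Verari's quoted 1700/rack: ~%.1fx denser\n",
               per_rack / 1700.0);                      /* ~7.6x */
        return 0;
    }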
Re: (Score:2)
The numbers are a bit different but the "new style" is not new.
Moonshot... shot down... (Score:1)
The company I'm at is looking at some serious new number-crunching servers. We had an HP rep come in and propose a Moonshot system. The head of IT and I looked at each other and laughed out loud... Moonshot uses Atom processors. I don't care how many of them you have; we're not using a rack of low-ball processors in our system. Moonshot is a complete joke.
I think they use Atom processors because it was the only way to get such high density and still be able to get the heat out of the system. It also may be
I don't get it (Score:2)
We got demoed this 6 months or so ago.
I still fail to see what this buys you over a bunch of regular blades or rackmounts running your virtualisation platform of choice.
Re: (Score:2)
We got demoed this 6 months or so ago.
I still fail to see what this buys you over a bunch of regular blades or rackmounts running your virtualisation platform of choice.
Best use-case proposal I've seen is for something like VDI. Instead of sharing the resources of one server, every desktop gets their own processor and memory.
AMD SeaMicro is a better choice (Score:2)
It's 10U with 64 almost-real servers (Haswell Xeons) and has integrated storage and networking. You only need to hook up some power and a beefy uplink to it and you're done. And did I mention a REST API to control it? Works even with OpenStack bare metal if you want that. Last I heard (two weeks ago), Moonshot is still only at CLI.
Apollos, on the other hand, are worth considering. But Moonshot... too little, too late.
HW Failure (Score:1)