
ARM Based Server Cluster Benchmarked

An anonymous reader writes "Anandtech compares the Boston Viridis, a server with Calxeda's ARM server technology, with the typical Intel Xeon technology in a server environment. Turns out that the Quad ARM A9 chip has its weaknesses, but it can offer an amazing performance per Watt ratio in some applications. Anandtech tests bandwidth, compression, decompression, building/compiling and a hosted web environment on top of Ubuntu 12.10." At least in their tests (highly parallel, lightweight file serving), the ARM nodes offered slightly better throughput at lower power use, although from the looks of it you'd just be giving money to the server manufacturer instead of the power company.
  • What if you are nearing the limits of the datacenter: cooling, power delivery, etc.? I don't have exact numbers, but the cost per watt is greater than what you pay the power company.
    • by lucm ( 889690 )

      What if you are nearing the limits of the datacenter: cooling, power delivery, etc.? I don't have exact numbers, but the cost per watt is greater than what you pay the power company.

      That's more complicated than it looks. At first glance it may seem like this does not change cooling requirements; most datacenters simply use a formula such as watts/3 to get a rough idea of the needed BTUs. However, the more you space out heat sources, the more natural cooling (convection) can do a magnificent job, as the airflow is more optimally utilized. Or maybe having more heat sources offsets the benefits; it's hard to tell, which is why God created CFD applications.

      The only hard limits would be p
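
      (For reference, not from the comment: the standard conversion is 1 W ≈ 3.412 BTU/h, so a server drawing 1 kW rejects about 3,400 BTU/h of heat, or roughly 0.28 tons of cooling at 12,000 BTU/h per ton.)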

      • by hawguy ( 1600213 )

        That's more complicated than it looks. At first glance it may seem like this does not change cooling requirements; most datacenters simply use a formula such as watts/3 to get a rough idea of the needed BTUs. However, the more you space out heat sources, the more natural cooling (convection) can do a magnificent job, as the airflow is more optimally utilized.

        Every densely packed datacenter I've seen uses forced-air cooling to pull cool air from the cold aisles and blow warm air into the hot aisles. Natural convection seems less important in such a scenario.

        • by swb ( 14022 )

          Many (most?) data centers I've been in have been buildings converted from some other use -- office buildings, warehouses, etc. But regardless of how they are built, they always seem to have relatively low ceilings, even in converted spaces where they rip out the ceiling grid.

          I get the density argument, but I often wonder if someone built a data center with a 50 foot ceiling (a large, flat building) if you wouldn't gain some cooling benefit from convection that would be worth the sacrifice in vertical density versus the cost in intensive forced air cooling.

          • by hawguy ( 1600213 )

            Many (most?) data centers I've been in have been buildings converted from some other use -- office buildings, warehouses, etc. But regardless of how they are built, they always seem to have relatively low ceilings, even in converted spaces where they rip out the ceiling grid.

            I get the density argument, but I often wonder if someone built a data center with a 50 foot ceiling (a large, flat building) if you wouldn't gain some cooling benefit from convection that would be worth the sacrifice in vertical density versus the cost in intensive forced air cooling.

            I don't see why high ceilings would make a difference - you have X BTU/hr of heat to remove - letting it accumulate at the ceiling doesn't seem to make much difference (unless you have a lot of conductive losses through the walls/ceiling).

            I can't believe there's any economic argument for giving up 66%-75% of your potential floor space (12 or 16 foot ceilings versus 48 foot ceilings) just to let heat rise to the ceiling.

            Hot aisles make the heat exchangers more efficient.

            • by lucm ( 889690 )

              I don't see why high ceilings would make a difference - you have X BTU/hr of heat to remove - letting it accumulate at the ceiling doesn't seem to make much difference (unless you have a lot of conductive losses through the walls/ceiling).

              I can't believe there's any economic argument for giving up 66%-75% of your potential floor space (12 or 16 foot ceilings versus 48 foot ceilings) just to let heat rise to the ceiling.

              Hot aisles make the heat exchangers more efficient.

              That's not the point of improved convection. The idea is not to let heat rise, it is to give the air more room so the flow can establish wider patterns, bringing cooler air in contact with the heat source without requiring additional power for blowers. Contrary to popular misconception, there is no need to blow Arctic air on a server to cool it down.

              • by hawguy ( 1600213 )

                That's not the point of improved convection. The idea is not to let heat rise, it is to give the air more room so the flow can establish wider patterns, bringing cooler air in contact with the heat source without requiring additional power for blowers. Contrary to popular misconception, there is no need to blow Arctic air on a server to cool it down.

                But you get the same benefit from hot aisles/cold aisles for the price of some baffles to separate the hot/cold air - much cheaper than 50 foot ceilings.

    • Actually, the power cost is a savings, but you are correct that physical space is also its own cost. They showed (in contrast to the trollmitter) that the servers were relatively close in performance in real-world scenarios, but there are absolutely workloads that are better suited to x86 at the moment.

  • by Anonymous Coward

    " although from the looks of it you'd just be giving money to the server manufacturer instead of the power company."

    Isn't that the truth. This is the new market paradigm for just about everything. You no longer pay for a product or service. You pay for what you get out of it.

    For example, the way fuels have been priced for the last decade or so (since the first runup after 9/11), you pay for the energy you get out of the fuel, not the fuel itself.

    Case in point: diesel cars are 20% more efficient than gasoline

    • Another way of saying it is that capitalism sort of works. Or at least the laws of supply and demand do.
      Product A is cheaper than Product B. Demand for Product A increases. The price of Product A increases as the price of Product B decreases. Per unit of usefulness they end up costing the same.
      To get back on topic, though: in this case energy costs are set by supply and demand. At the moment ARM cores are, in server terms, a niche product, so you don't get the benefits of bulk supply. Those efficiencies can be improved

      • Actually, no. This is the definition of capitalism failing. Not due to capitalism, or overregulation, or underregulation, but to a lack of clear regulation.

        • Can you explain this please? I don't get your argument.
          I was referring to:

          For example, the way fuels have been priced for the last decade or so (since the first runup after 9/11), you pay for the energy you get out of the fuel, not the fuel itself.

          As far as I am aware, pricing something by its value to society is a good thing. How would you rather it worked?
          The alternative I can see is that some things are socially or politically favoured and so are forced down our throats whether it's a good idea or not. As long as all costs* are taken into account, then what's the problem?
          *As a counter-example, I know not all costs of coal are taken into account, and they should be

    • by gl4ss ( 559668 )

      It's corporations swindling corporations with these, though.

      They're paying a premium for cheaper hardware so they can claim to be green.

    • by afidel ( 530433 ) on Wednesday March 13, 2013 @10:04AM (#43158969)

      so natural gas costs 30% more per BTU input than fuel oil.

      What planet do you live on?

      heating oil: $4.058/gallon [eia.gov] at 138,700 BTU/gallon [energykinetics.com] = $29.26 per million BTU
      natural gas: $0.55143 per hundred cubic feet (ccf) [ohio.gov] at 102,000 BTU per ccf [onlineconversion.com] = $5.41 per million BTU
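
      A minimal sketch of the per-million-BTU arithmetic, using only the prices and heat contents quoted above:

        # cost per million BTU = price per unit of fuel / (BTU per unit / 1,000,000)
        def cost_per_mmbtu(price_per_unit, btu_per_unit):
            return price_per_unit / (btu_per_unit / 1_000_000)

        heating_oil = cost_per_mmbtu(4.058, 138_700)    # ~$29.26 per million BTU
        natural_gas = cost_per_mmbtu(0.55143, 102_000)  # ~$5.41 per million BTU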

      • by afidel ( 530433 )

        Oh, and the natural gas price includes delivery but the fuel oil price does not, so the discrepancy is even larger.

      • He is talking about the furnace cost, not the fuel. The fuel cost nets things out to around zero.

        • by afidel ( 530433 )

          Bryant Preferred series gas furnace, 110,000 BTU, 2-stage, 95% efficient: about $1,725 [qualitysmith.com]

          Bryant Preferred 80 115,000 BTU 374RAN oil furnace, 83.5% efficient: $1,779 [webhvac.com]

          Same quality line from the same manufacturer, same input BTU, yet MORE expensive for the oil unit, and you have to add a fuel tank to the cost of the oil unit. He's simply wrong.

    • Natural gas furnaces are 25% more efficient than fuel oil furnaces, so natural gas costs 30% more per BTU input than fuel oil.

      I have to disagree. I did the math on this years ago when deciding whether to replace my gas furnace with another gas furnace or get an oil furnace. The data pointed very clearly in the opposite direction of what you are saying.

      This may vary geographically, but the most recent data I could find for where I live (upstate New York) is this: Gas costs $11.49 for 1000 cu.ft. [eia.gov] as of last

    • Passing up moderation to ask the poster a question, but first, a recap of the part I'm asking about:

      Corporate America is going to stick it to you no matter what you do to get ahead. If you find a clever way to save money, our greedy corporate masters will STEAL it from you one way or another, because at the end of the day, they are pulling all the strings and turning all the knobs.

      Question: are our government masters ever "greedy", and do they ever "STEAL"?

      The reason I ask is that people who make statements like yours tend to believe that government can do no wrong as long as it "sticks it to the rich". I find this mentality childish and self-serving.

      To the point about fuel prices (and other things), have you ever considered that the government restrictions cause some of the price differentials? I mean, after all, we can't drill, or build pipelines or refineries or ... due to government restrictions.

      • To the point about fuel prices (and other things), have you ever considered that the government restrictions cause some of the price differentials? I mean, after all, we can't drill, or build pipelines or refineries or ... due to government restrictions.

        so .... what does a govt get out of restrictions? under-the-table payments from those deep-pocketed environmental non-profits. or maybe it's Big Solar lobbyists? wait ...

        you can make the point that the govt is misguided in placing restrictions on drilling, pipelines, etc ... but it isn't greed. there's clearly more $ to be had all around by sucking every drop out of domestic oil deposits.

  • ARM-based servers already have file server cluster design wins

    http://www.eetimes.com/electronics-news/4407353/Baidu-taps-Marvell-for-32-bit-ARM-storage-server

    What will be interesting is the ability to leverage phone designs as clusters, because then you can exploit the volume; e.g., an SoC for phones costs roughly $20, so imagine filling a DC with those...

    have fun

    John Jones

    • by Anonymous Coward

      Hell, I wanna build my PC from those! 1024-core (@1.4GHz). 512 GB of RAM. 16TB of 512-associative SSD storage. That's the equivalent of e.g. 512 Samsung Galaxy S III phones. Put them all in copper slots with water channels inside the copper separators, and a nice big passive (optionally active) radiator outside.

      Unfortunately, such a phone is much more expensive because of all the other stuff inside. But if you’d remove all that, including batteries, displays, wireless tech, etc...

    • Except, of course, the Marvell chip sucks.

  • by Anonymous Coward

    Depending on where you are, even a small percentage of power savings could pay for the hardware fairly quickly. Here in the Silicon Valley at least, PG&E charges upwards of $0.30/kWh for the average home power consumer, and their rates go higher based on usage tiers. Running a data center or supercomputer cluster wouldn't be cheap when it costs me ~$300/month to power my desktop PC and toaster oven.
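
    A minimal sketch of that payback arithmetic (only the ~$0.30/kWh rate comes from the comment above; the wattage saved and the hardware premium are hypothetical placeholders):

      # Months to recoup a hardware premium from electricity savings alone.
      # Ignores cooling overhead (PUE), which would shorten the payback further.
      def payback_months(hw_premium_usd, watts_saved, rate_usd_per_kwh=0.30):
          kwh_saved_per_month = watts_saved / 1000 * 24 * 30
          return hw_premium_usd / (kwh_saved_per_month * rate_usd_per_kwh)

      # Hypothetical example: a $2,000 premium and 200 W saved around the clock
      print(payback_months(2000, 200))  # ~46 months at $0.30/kWh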

    • by gl4ss ( 559668 )

      Depending on where you are, even a small percentage of power savings could pay for the hardware fairly quickly. Here in the Silicon Valley at least, PG&E charges upwards of $0.30/kWh for the average home power consumer, and their rates go higher based on usage tiers. Running a data center or supercomputer cluster wouldn't be cheap when it costs me ~$300/month to power my desktop PC and toaster oven.

      But the joke is that the hardware is inferior and has cheaper parts...

      From the article:
      "$20,000 is the official price for one Boston Viridis with 24 nodes at 1.4GHz and 96GB of RAM. That is simply very expensive. A Dell R720 with dual 10 gigabit, 96GB of RAM and two Xeon E5-2650Ls is in the $8000 range; you could easily buy two Dell R720s and double your performance. The higher power bill of the Xeon E5 servers is in that case hardly an issue, unless you are very power constrained. However, these systems are targeted at larger deployments."

      • by Guspaz ( 556486 )

        I think you missed this part:

        However, these systems are targeted at larger deployments.

        And this part:

        Buy a whole rack of them and the price comes down to $352 per server node, or about $8500 per server. We have some experience with medium quantity sales, and our best guess is that you get typically a 10 to 20% discount when you buy 20 of them. So that would mean that the Xeon E5 server would be around $6500-$7200 and the Boston Viridis around $8500.

        It's still more expensive, but the gap narrows substantially.
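
        (For reference: 24 nodes × $352/node ≈ $8,448, which is where the roughly $8,500 per 24-node Viridis chassis at rack quantity comes from.)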

        Running a data center or supercomputer cluster wouldn't be cheap when it costs me ~$300/month to power my desktop PC and toaster oven.

        And of course you are exaggerating. I live in San Jose, and during the summer when I don't have heat running (and I don't have AC), my gas+electric bill is around $70. That's for a small house. Maybe it's those grow lights in your basement?

    • by hawguy ( 1600213 )

      Depending on where you are, even a small percentage of power savings could pay for the hardware fairly quickly. Here in the Silicon Valley at least, PG&E charges upwards of $0.30/kWh for the average home power consumer, and their rates go higher based on usage tiers. Running a data center or supercomputer cluster wouldn't be cheap when it costs me ~$300/month to power my desktop PC and toaster oven.

      PG&E uses a tiered rate structure so while the highest rate may be in the 34 cent/KWh range, the average rate for most homes is lower.

      Here are the tiers and rates (Residential E1, no time-of-use):

      Baseline (up to 100%): $0.13230/kWh
      Tier 2 (101%-130%): $0.15040/kWh
      Tier 3 (131%-200%): $0.30025/kWh
      Tier 4 (201%-300%): $0.34025/kWh
      Tier 5 (>300%): $0.34025/kWh

      Baseline quantities depend on region - my single-family townhome has a baseline of 273 kWh. Baseline is supposed to be 50-60% of an average home's power usage. We tend to stay under 300 kWh/month
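
      A rough sketch of how a monthly bill works out under those tiers, using the E1 rates and the 273 kWh baseline quoted above (illustrative only; real PG&E billing has more moving parts):

        # Tier boundaries are expressed as multiples of the monthly baseline allowance.
        BASELINE_KWH = 273
        TIERS = [                     # (upper bound as multiple of baseline, $/kWh)
            (1.00, 0.13230),          # Baseline
            (1.30, 0.15040),          # Tier 2: 101%-130%
            (2.00, 0.30025),          # Tier 3: 131%-200%
            (3.00, 0.34025),          # Tier 4: 201%-300%
            (float("inf"), 0.34025),  # Tier 5: >300%
        ]

        def monthly_cost(usage_kwh, baseline=BASELINE_KWH):
            cost, lower = 0.0, 0.0
            for multiple, rate in TIERS:
                upper = multiple * baseline
                if usage_kwh > lower:
                    cost += (min(usage_kwh, upper) - lower) * rate
                lower = upper
            return cost

        print(monthly_cost(300))  # ~300 kWh stays mostly in the cheap tiers: about $40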

  • While the page with benchmark data includes an Intel vs. ARM comparison, when it came to the power consumption charts there was no Intel data to be found. None.

    If one of the major themes of the product is power consumption, wouldn't it stand to reason that comparable Intel numbers would be critical to the review?

  • I guess these A9s are not SoCs integrated with a GPU like, say, an Exynos. 24 GPUs + 96 ARM cores in a box could make them attractive for some compute applications. High-end GPUs would probably smoke them, though.

    A9s are nice, but more compelling as a desktop replacement for the spreadsheet and word processor set, or for low-power home servers/appliances. They're just seriously bandwidth-challenged, but the average corporate desktop doesn't need it. Replacing hundreds of x86 desktops with Exynos's and y

    • They could integrate, but high-end GPUs burn 120W/GPU and are good only at crunching numbers. These are good at serving web pages, files, and routine DB work, and then going to sleep (and using very little power) when people aren't asking for web pages. They're not intended to replace Xeons or GPUs for number crunching; the realization is that number crunching isn't that important for a lot of applications.

      It's not entirely unlike the cell phone/tablet vs. desktop debate. For most people, cell phones offer all the

      • by Shinobi ( 19308 )

        " the realization is that number crunching isn't that important for a lot of applications"

        The reality is, it's important for a LOT of applications, but it's in the background. SSL is just one example.

        As a VPN gateway, for example, I think the Xeon would just smash any figures the ARM cluster could put up, including watts/connection, etc.

          • What he means is that the kind of number crunching that would favor a GPU over a CPU isn't needed by many applications.

  • the ARM nodes offered slightly better throughput at lower power use, although from the looks of it you'd just be giving money to the server manufacturer instead of the power company.

    So then... GIVE your money to the server manufacturer instead, for chrissake. There seems to be an obvious environmental benefit to be had.
