Building the Green Data Center

blackbearnh writes "O'Reilly News talked to Bill Coleman, former founder of BEA and current founder and CEO of Cassett Corporation, about the challenges involved in building more energy-efficient data centers. Coleman's company is trying to change the way resources in the data center are used, by more efficiently leveraging virtualization to utilize servers to a higher degree. In the interview, Coleman touches on this topic, but spends most of his time discussing how modern data centers grossly overcool and overdeploy hardware, leading to abysmal levels of efficiency."
  • by BigZaphod ( 12942 ) on Saturday June 21, 2008 @11:46AM (#23886147) Homepage

    Software has an impact, too. Messy, heavy code takes longer to run, takes more CPUs, etc. Imagine how much energy could be saved if there wasn't so much code bloat!

    • Re: (Score:3, Insightful)

      by cp.tar ( 871488 )

      Software has an impact, too. Messy, heavy code takes longer to run, takes more CPUs, etc. Imagine how much energy could be saved if there wasn't so much code bloat!

      So that means that servers should be built the Gentoo way, from scratch, using just the things you need, no more, no less.
      How much does it cost to deploy such a server?
      How much does it cost to pay someone qualified enough to do it properly?

      The code bloat is paired with feature bloat. And the more features there are, the more you have to pick and choose -- or, if you cannot choose, support. Because your users will want them, more likely than not.

      Now, cleaning up the world's code... sounds like great wo

      • So that means that servers should be built the Gentoo way, from scratch, using just the things you need, no more, no less.
        How much does it cost to deploy such a server?
        How much does it cost to pay someone qualified enough to do it properly?

        Frankly, anyone with half a brain can pretty much use mkinitrd to make such a server.

        How much does it cost to hire 500 admins for thousands of machines rather than half a dozen? How much do electricity and AC cost?

        Meh, no point explaining. The price of oil and the economics will do that job.

      • Possibly not; perhaps he means that software should be built without the 'make it easy for the developer' features that modern languages contain. I mean, it's easy to write an app in a scripting language, but it will be bigger, slower, require a VM to host the script, will use more memory (especially so if it has a garbage collector), and so on.

        There is a trend of saying that programmer productivity is everything, and if it requires faster computers with more RAM, then that's just too bad. I'm sure that one d

    • Messy, heavy code takes longer to run, takes more CPUs, etc.

      Do you guys have to bring Vista into every thread?

  • The outback (Score:1, Interesting)

    In all seriousness, I've often wondered why they don't just slap server farms and data centres in the Australian desert - or any desert, for that matter. Solar power galore (almost no cloud cover in central Oz), and if the miners get paid to go live out there and work for $$$, then surely, if you cut out a lot of your overheads, IT guys would take big bucks to do it. Offer the work to residency applicants even, to cut the wage. Also there are enough big mining outfits out that way, so they would probably relish being able t
    • Re: (Score:3, Insightful)

      by FooAtWFU ( 699187 )

      No one in the server farm business is going to try and break into the solar-power business. It's not their area of expertise. It's an entirely different sort of business altogether. If there were a ton of solar power stations littering the outback, or if someone enterprising were ready to put some up in the hopes of attracting power-hungry industries with cheap electricity, that'd be another thing. But I would imagine it's still a rather risky proposition, as far as things go.

      Besides, the bandwidth and l

      • No one in the server farm business is going to try and break into the solar-power business.

        Kinda like how the roof of Google's headquarters isn't covered in solar panels [google.com]?

        Maybe they don't want to get into the business of supplying solar panels, but there's plenty of interest in using renewable technology to try and lower energy costs. (Not to mention that even photovoltaic solar can help reduce cooling bills, because insolation is being used to generate electricity instead of simply heating the interior.)

      • Dust is the problem. Ever try diagnosing a fault in your operating system when everything is correct, the power supply is operational, and (come to find out) a fine layer of micron-sized dust on the contacts of your graphics card has been adjusting the signal just enough to cause the driver to crash? Now multiply that by a thousand machines, and you'll know why nobody is rushing out into the desert to build data centers.

        • by jhw539 ( 982431 )
          While I think desert datacenters are a bad plan, dust is an easily solved issue. Filters are an incredibly mature technology, and so little outside air is brought in (basically just enough to keep the floor positively pressurized so nothing sneaks in through the cracks) that it's a non-issue. That said, the most efficient datacenter designs I see use 100% outside air (with appropriate low-face-velocity, low-fan-power filtration) much of the time to cool the space, but they're in temperate climes.
    • by Bandman ( 86149 )

      I would guess a combination of difficulty and expense in

      a) bandwidth
      b) cooling
      c) supplies
      d) available utilities (i.e. running water, available healthcare)

  • by afidel ( 530433 ) on Saturday June 21, 2008 @11:48AM (#23886183)
    He talks about turning off unused capacity like it's some future panacea; HP and VMware have been doing it for a couple of years already. He also dismisses turning servers off as not being a big deal, but anyone who's run a datacenter knows that servers that have been running for years often fail when they are shut off. There are numerous physical reasons for this, from inrush current to bearing wear. A modern boot-from-SAN server is probably much less likely to fail at boot than older ones with DAS, but the chance is very much non-zero. Of course, with a good dynamic provisioning system a single host failure doesn't matter, because that new VM will just get spun up on a different host that's woken up.
    • Re: (Score:3, Interesting)

      by symbolset ( 646467 )

      Of course with a good dynamic provisioning system a single host failure doesn't matter because that new VM will just get spun up on a different host that's woken up.

      Bingo. A node is just a node. A decent control system will detect a node failing to come up, flag it for service and bring up another one. In some datacenters not designed for this sort of redundancy, a server failure is a big deal and people have to come in on a holiday weekend. If you do it right, the dead server just sits there until you get around to that rack in the regular course of things.
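
      A minimal sketch of that "flag it and bring up another" logic, purely for illustration: power_on(), is_up() and flag_for_service() here are hypothetical stand-ins for whatever the real control system (vCenter, an in-house tool, etc.) actually provides.

      import time

      BOOT_TIMEOUT = 300  # seconds to wait for a woken node to report in

      def bring_up_one(standby_pool, power_on, is_up, flag_for_service):
          """Keep trying standby nodes until one boots; flag the dead ones."""
          while standby_pool:
              node = standby_pool.pop(0)
              power_on(node)
              deadline = time.time() + BOOT_TIMEOUT
              while time.time() < deadline:
                  if is_up(node):
                      return node        # healthy node joined the pool
                  time.sleep(10)
              flag_for_service(node)     # dead box waits for the next scheduled rack visit
          raise RuntimeError("no standby capacity left")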

      • Or, as I've heard it said of Sun's "container server farm": only 90% of the servers are active, every one that breaks is replaced by a previously unused spare, and when there are not enough spares left, the entire container is replaced at the customer

      • Exactly! He's right - in the virtualisation scenario a single server is not important; your virtual servers are important. With virtualisation, if you've designed your clusters correctly, the servers that are brought back online have all their hardware checks take place before your virtual servers start running on those nodes anyway. A dead server, or a required cluster-aware service failing to start - no downtime. You're still running all your virtual servers on other hardware - just not as much of it a
    • ...also, I'm sure server admins would be pretty wary of running their hardware near its thermal failure point instead of "grossly overcooling" it and being able to sleep at night.
      • Re: (Score:3, Informative)

        by Calinous ( 985536 )

        This "grossly overcooling" business is done for several reasons:
        The air is at 18 Celsius just as it leaves the cooling units, but there may be pockets of warmer air in the data warehouse (depending on rack position and use).
        This "gross overcooling" gives the servers a longer window of operation when the air conditioning breaks.
        The PSUs work better at lower temperatures (even if they are perfectly fine otherwise). Also, the cooling

    • The article and this discussion also fail to address the financial and organizational problems inherent in purchasing and allocating systems. Virtualization and shared systems are not always politically and operationally feasible. The computing cloud is a great concept, but the implementation is complicated by people -- discrete individuals with discrete goals and discrete financing methods. When cloud computing can provide simple and comprehensive chargebacks at an effective granular level, then we c
      • "We're a Windows shop." You only hint at your real concerns -- that license tracking and organizational inertia prevents it in your case. That's too bad for you.

        The technology is obviously available and immensely powerful. Some will use it, some will shun it. In the corporate world which do you suppose is going to out-compete the other?

      • by afidel ( 530433 )
        This is an area where the Citrix acquisition of XenSource will help them, since Citrix realized the value of chargebacks way back in the MetaFrame XP days. My problem with a chargeback model is that it is hard to justify spare capacity in a chargeback environment, yet you must have it to provide effective fault tolerance. Do you overcharge a fixed amount for spare capacity, or is it a percentage, so that as an application's requirements grow you can keep up with the bigger spare hardware needed to accommodate i
        • How do you sell it to the business if their budget is getting squeezed (I know this applies to physical servers as well, but if you have capacity in the virtual cluster to fit their app in it's a lot harder to say no).

          Oh, you're looking at it from a salesman's point of view, rather than a customer's. That can't be good for your customer. Since Xen is an open source project, Red Hat's new approach using KVM [cnet.com] could prove more interesting.

          1 dual processor/8 core server running Oracle with in-memory cache option and support: roughly $200,000.

          50 dual processor/8 core servers each running several VM's of postgresql with pgpool-II [mricon.com] and memcached [danga.com]: roughly $200,000. The freedom to PXEBoot a blank box into a replicant node faste

  • by MrMr ( 219533 ) on Saturday June 21, 2008 @11:48AM (#23886187)
    by more efficiently leveraging virtualization to utilize servers to a higher degree
    I should have printed a fresh stack of these. [bullshitbingo.net]
  • by Colin Smith ( 2679 ) on Saturday June 21, 2008 @11:52AM (#23886215)

    Switch the machines off at the socket. You can do it using SNMP.
    Monitor the average load on your machines: if it's too low, migrate everything off one machine and switch it off. If it's too high, switch one on.

    Of course, it assumes you know how to create highly available, load-balanced clusters. Automatic installations, network booting and all that. Not so difficult.
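
    A minimal sketch of that control loop, purely as an illustration: the hostnames, MAC addresses and thresholds below are made up, the workload-migration step is waved away, and a real setup would use the cluster manager's API or an SNMP-controlled PDU rather than plain SSH.

    import socket
    import subprocess

    NODES = {
        "node01": "00:11:22:33:44:55",   # hostname -> MAC (placeholder values)
        "node02": "00:11:22:33:44:56",
    }
    LOW, HIGH = 0.5, 4.0                 # 1-minute load thresholds

    def load_of(host):
        """1-minute load average of a running node, read over SSH."""
        out = subprocess.check_output(["ssh", host, "cat", "/proc/loadavg"])
        return float(out.split()[0])

    def wake(mac):
        """Send a standard Wake-on-LAN magic packet (6 x 0xFF + MAC x 16)."""
        raw = bytes.fromhex(mac.replace(":", ""))
        packet = b"\xff" * 6 + raw * 16
        s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(packet, ("255.255.255.255", 9))
        s.close()

    def power_off(host):
        """Shut a drained node down (migration of its workloads happens before this)."""
        subprocess.check_call(["ssh", host, "sudo", "poweroff"])

    def rebalance(active, standby):
        loads = {h: load_of(h) for h in active}
        avg = sum(loads.values()) / len(loads)
        if avg > HIGH and standby:
            wake(NODES[standby[0]])               # bring one more node online
        elif avg < LOW and len(active) > 1:
            power_off(min(loads, key=loads.get))  # shed the idlest node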

     

    • This is actually getting remarkably easy for Linux clusters, and the help is coming from a bizarre source - LTSP.

      I wrote a journal piece [slashdot.org] about it just recently. I'm setting this up for me and it's interesting.

      People are doing some interesting stuff with LTSP -- call centers with IP softphones, render farms. Soon we may see entire infrastructure with redundant servers powering on to serve demand spikes and shutting off when not in use.

      • We do something like this, but from scratch rather than using LTSP. It's really not difficult, just a slightly different way of looking at how an operating system and server application should work. Think botnet. It's a fundamental shift in the mathematics of computing infrastructure, from linear or worse to logarithmic.

         

        • I'm liking the LTSP model because I can do it without investing my time writing code. I intend to pull up an on demand render farm without writing a single line of code.

          I am also interested in the potential of exploiting the unused resources of desktop computers to turn an entire organization into an on-demand compute cluster and/or distributed redundant storage. Joe the typist doesn't need a quad-core 4GB machine to draft a letter, but as long as he's got one we may as well do something useful under the

          • by Bandman ( 86149 )

            Count me in too. I want config details as well

          • Depending on the size of your organization, and how "corporate" the network is (are workstations in software lock-down?), you may have to spend a lot of time designing software that can ensure some level of code authenticity for any deployed work. Otherwise, don't expect to get approval to run this on "most every" random workstation. I had a project where machines would automatically download code and data sets to run as requested, synchronized by a central server. Three things stood in my way: (a) limit

          • Tiny base OS (Linux), booted from a PXE/TFTP server and running from a ramdisk: networking, storage, SNMP, SSH, grid engine, the "botnet" client, and bugger all else. On top of that you run a VM host - Xen, VMware, vserver or whatever fits your requirements. This is the infrastructure platform. It can be rolled out to anything which supports PXE - tens of thousands of machines if required. They can be functional literally as fast as machines can be fitted into racks.

            Basically you don't touch a machine till it comes up and

            • Thanks. That's the ticket. I assume if it takes 15-30 minutes to configure, you are downloading and chain booting a disk image. I suppose if I take that route I can preload the disk images on a spare server with one boot image that then puts the server back to sleep. Then when the load comes up the provisioned server can be awakened in short order.

              It takes under a minute to bring up my clients because everything runs in the ramdisk so far.

              I'd let the load get much lower -- maybe .5 on each cpu before

              • Thanks. That's the ticket. I assume if it takes 15-30 minutes to configure, you are downloading and chain booting a disk image

                Kind of. The base OS boots and runs 100% from ramdisk; it takes about 15 seconds to download and maybe another 30 to boot - about 100MB or so uncompressed. Doing it from scratch keeps the underlying infrastructure OS small. What takes the time is the application image. It usually has some data packaged with it, anything from a few hundred MB to gigabytes. The local storage is purely for the application packages or VM images while they're hosted on that machine.

                What's got me curious is how to make the management piece redundant and load balanced as well. I'll just have to work on it.

                Not sure which load balancing

    • What you're describing is shutting off computers and then waking them over the network. My own PSU (a 450W unit) uses more than 15W when connected to mains - so I switch it off at the mains when not in use.
      Now, a 15W loss in a 500W PSU when off is a drop in the bucket - yet it might help a bit.
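
      For scale, a quick bit of arithmetic on that standby draw (assuming the 15W figure holds year-round; the fleet size is just an example):

      standby_watts = 15
      hours_per_year = 24 * 365
      kwh_per_year = standby_watts * hours_per_year / 1000
      print(kwh_per_year)          # ~131 kWh per machine per year
      print(kwh_per_year * 1000)   # ~131,000 kWh/year for a thousand plugged-in but 'off' boxes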

    • Am I missing something? Outside of a few very large organizations, isn't the operation of a data center separated from the equipment inside it? Don't most data centers rent out space to customers? If this is the case, outside of increasing customers' bills for energy consumption, there is nothing a data center can do to change the way the customer does business. Not every customer is going to find it practical to have a managed virtual server environment, or be OK with allowing systems to b

    • Depends on the application. Not everything fits in the LAMP, web server model.

  • Overcooling? (Score:1, Interesting)

    by Anonymous Coward

    I think this guy confuses heat and temperature. In datacenters, cooling costs are mostly proportional to the heat produced, and have little to do with the temperature you maintain in the steady state.

    • Re: (Score:3, Informative)

      by jabuzz ( 182671 )

      And I think you don't understand thermodynamics either. Cooling to, say, 18 Celsius when you can happily get away with 25 Celsius will have a big impact on your cooling bill, even though you are getting rid of the same amount of heat.

      • A cooling unit is a thermal machine, and its best-case efficiency depends on both the absolute temperatures (inside and outside) and the temperature difference. That is, with 25 Celsius outside, you need roughly twice the power to cool to 15 Celsius as opposed to cooling to 20 Celsius.
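
        A back-of-the-envelope check of that factor of two, using the ideal (Carnot) coefficient of performance as an upper bound - real chillers are far less efficient, but they scale in roughly the same way:

        def carnot_cop(t_cold_c, t_hot_c):
            """Ideal COP of a chiller: heat moved per unit of work, Tc / (Th - Tc)."""
            t_cold = t_cold_c + 273.15
            t_hot = t_hot_c + 273.15
            return t_cold / (t_hot - t_cold)

        outside = 25.0
        for supply in (20.0, 15.0):
            print(supply, round(carnot_cop(supply, outside)))
        # cooling to 20 C: ideal COP ~ 59; cooling to 15 C: ideal COP ~ 29,
        # i.e. roughly twice the work for every unit of heat removed.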

      • Why cool systems? (Score:2, Interesting)

        by rathaven ( 1253420 )
        Good point - however, since this is just thermodynamics, why do we actively cool systems at all? Managed properly, the heat could be put to uses far more effective than air conditioning. I think people often forget that air conditioning isn't actually a cooling solution if you take the whole picture. You are providing more energy, and therefore more heat, to make a small area temporarily cooler.
        • Forgive me - I seem to have had a case of grammar ineptitude. That should have read,"I think people often forget that air conditioning isn't actually a cooling solution - if you take the complete picture."
    • Exactly. Just maintain the servers at a couple of degrees above absolute zero. The heat you remove each day will be the same, and you get the added benefit of having superconductors! Just don't step in the liquified air on the server floor.
  • by harry666t ( 1062422 ) <harry666t&gmail,com> on Saturday June 21, 2008 @12:17PM (#23886417)
    1. Get a data center
    2. Paint it green
    3. ???
    4. Cthulhu
  • former founder (Score:3, Insightful)

    by rpillala ( 583965 ) on Saturday June 21, 2008 @12:31PM (#23886533)

    How can you be a former founder of something? Someone else can't come along later and found it again can they?

  • He's no longer the founder of BEA? Who is then?

  • I've used Dell's Greenprint Calculator [dell.com] to determine usage in my racks pretty often.

    It's got a nice interface and gives you all the energy information you need on their equipment, plus allows you to insert your own equipment's energy profile to calculate total usage.

    It's very handy

    • by emj ( 15659 )

      Yes, this is a very good tool. I tried it when upgrading a couple of servers, and was amazed how much heat output memory modules have.

  • by 1sockchuck ( 826398 ) on Saturday June 21, 2008 @01:07PM (#23886835) Homepage
    This is a huge topic, since so many different strategies are being brought to bear. For data center operators, energy efficiency is a business imperative since the power bills are soaring. Here are some sources offering ongoing reading about Green Data Centers:

    The Green Data Center Blog [greenm3.com]
    Data Center Knowledge [datacenterknowledge.com]
    Groves Green IT [typepad.com]
    The Big List of Green Technology Blogs [datacenterknowledge.com]
  • Northern Climates? (Score:3, Interesting)

    by photon317 ( 208409 ) on Saturday June 21, 2008 @01:39PM (#23887109)


    What I've always wondered is why we don't build more datacenters in colder climates here in North America. Why put huge commercial datacenters in places like Dallas or San Diego (there are plenty in each) when you could place them in Canada or Alaska? In a cold enough climate, you could just about heatsink the racks to the outside ambient temperature and have little left to do for cooling. I suppose the downside is 20ms of extra latency to some places, and perhaps having to put more fiber and power infrastructure in a remote place. But surely, in the long run, the cooling savings would win, no?

    • Because, I would guess, the other things get much more expensive. Few personnel would want to live in some remote Alaskan or Canadian village, so you will have to pay them more - if you can even find any. Then you need a lot of power; I somehow doubt that is so easily available either. Next is the problem of connectivity: a single connection is not exactly a good thing for a datacenter, you want redundancy. Also you have to move the equipment whenever new hardware is

    • by Herger ( 48454 )

      I've wondered why they don't put datacenters in old textile industry centers like Lowell, MA and Augusta, GA. Both of these places have canals that once supplied the mills with running water that drove turbines. You could rebuild the turbines to generate electricity and draw water off the canal for cooling. Plus mill towns tend not to be too far away from fiber, if there isn't already enough capacity there.

      If someone has a couple million in venture capital to spare, I would like to attempt a project like

    • by jhw539 ( 982431 )
      Most big datacenters I've seen are sited to a great extent based on the availability of power. Finding 20+ MW of unused capacity of adequate reliability is difficult, and it is expensive to have infrastructure at that scale built out just for your datacenter. I have heard of datacenters catching the 'green' bug when they ran out of power and were told tough - build your own damn plant then. The other issue is of course good feeds to the internet, although that seems to be coming up less and less as a problem.
      • Yeah, like Manitoba Hydro.

        http://hydro.mb.ca/ [hydro.mb.ca]

        Winter gets you cold, cold, cold temperatures. Hydro power here costs you 5c/kWh and much cheaper for larger users. Want it even cheaper, get up north to Thompson - closer to the source.

        Yet, no large data centers here. And in a "town" of 600,000+ people.

    • by falstaff ( 96005 )

      Plus Canada has lots of green hydro electricity. And data stored in Canada is exempt from the US Patriot Act.

  • Some companies make it a total no-brainer to control the number of servers running -- according to the current computing demand. Alpiron makes a product that integrates seamlessly with Citrix and Terminal Server [alpiron.com], so that the users are at no point affected. Additionally, it takes 20 minutes to set up, and you get alarms, power-saving reports etc. for free.
  • The headline on /. is "News: Building the Green Data Center". Every IT publication for the past year has put "building the green data center" on its cover. It's not news anymore!
  • Speaking strictly to the cooling-generation side of things, the biggest thing that saves energy is implementing free cooling - that is, bringing in outside air directly when it is cold and using it to cool the building (contamination is an easy, known problem to deal with - filtering is not hard). If you're in a dry climate, use a cooling tower to make cold water and use that in coils. Blindingly simple, but datacenters just don't do it, even though their 24/7 load that is independent of outdoor air temperature
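
    A toy version of that air-side economizer decision, just to make the logic concrete (the setpoint names and values here are invented; real controls also look at dewpoint/enthalpy, minimum humidity and filter pressure drop):

    def use_outside_air(outside_temp_c, outside_dewpoint_c,
                        supply_setpoint_c=18.0, max_dewpoint_c=15.0):
        """True when outside air alone can cool the room without over-humidifying it."""
        return (outside_temp_c <= supply_setpoint_c
                and outside_dewpoint_c <= max_dewpoint_c)

    print(use_outside_air(10.0, 5.0))    # True  - cold, dry night: free cooling
    print(use_outside_air(28.0, 12.0))   # False - too warm: fall back to the chillers
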
    • Outside air can be very humid at night... not sure how high humidity it's safe to run a datacenter at though.

      • No, you are just ignorant, like the vast majority of people. It is called "RELATIVE HUMIDITY", NOT humidity.

        Air that is 100% saturated at 0C is dry once heated to 20C: about 26% RH at 20C.

        http://einstein.atmos.colostate.edu/~mcnoldy/Humidity.html [colostate.edu]

        Air that is 100% humid at -30C, raised to 20C, gives you a relative humidity of about 2% at 20C. That is DANGEROUSLY low for data centers. You want about 30% AFAIK, otherwise you risk static-charge problems, with people zapping servers and servers even zapping themselves.
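
        A quick sanity check of those figures, as a sketch only (this uses one common set of Magnus-formula constants for saturation vapour pressure over water, so the numbers are approximate):

        import math

        def saturation_vapour_pressure(t_c):
            """Approximate saturation vapour pressure in hPa (Magnus formula)."""
            return 6.112 * math.exp(17.62 * t_c / (243.12 + t_c))

        def rh_after_heating(rh_outside, t_outside_c, t_inside_c):
            """Relative humidity after heating air with no moisture added or removed."""
            vapour_pressure = rh_outside * saturation_vapour_pressure(t_outside_c)
            return vapour_pressure / saturation_vapour_pressure(t_inside_c)

        print(rh_after_heating(1.00, 0, 20))    # ~0.26 -> about 26% RH at 20 C
        print(rh_after_heating(1.00, -30, 20))  # ~0.02 -> about 2% RH at 20 C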

        • I know about relative humidity.

          I have no idea where you're finding -30C air... maybe you should sell it.

          I'm in southern California; on a cool summer's night, it might get down to the high 50's Fahrenheit with 70% or higher humidity.

      • not sure how high humidity it's safe to run a datacenter at though

        60% seems to be the common recommendation among datacenter humidifier vendors, even those who could sell more gear by changing that number. Static sucks.

        • Cool. Turns out it's been over 80% humid at night here lately. (so Cal, 63F and 84% humidity at the moment.)

          • Be careful though, you can get from nice humidity to condensing humidity in short order. Static sucks for servers, but drops of water can be worse!

            • I know, that's what I was trying to point out to jhw539 above; that outside air at night may be TOO humid to use for pumping into the datacenter, and I'm guessing that dehumidifying it may not be much cheaper than running the A/C instead.

              And I was thanking you for the 60% ref.

              Personally, I think more swimming pools should be used as liquid cooling to improve the efficiency of air conditioners... you heat the pool for almost free, and your A/C runs cooler.
              I don't know how bad the corrosion would be, but some

  • Get the self-contained Toshiba 5MW reactor and build the data center around it.

    For desalination plant design, see above.
