
Building the Green Data Center

blackbearnh writes "O'Reilly News talked to Bill Coleman, co-founder of BEA and now founder and CEO of Cassatt Corporation, about the challenges involved in building more energy-efficient data centers. Coleman's company is trying to change the way data center resources are used by leveraging virtualization to drive server utilization higher. In the interview, Coleman touches on this topic, but spends most of his time discussing how modern data centers grossly overcool and overdeploy hardware, leading to abysmal levels of efficiency."


  • by BigZaphod ( 12942 ) on Saturday June 21, 2008 @12:46PM (#23886147) Homepage

    Software has an impact, too. Messy, bloated code takes longer to run and needs more CPUs. Imagine how much energy could be saved if there weren't so much code bloat!

  • The outback (Score:1, Interesting)

    by stainlesssteelpat ( 905359 ) on Saturday June 21, 2008 @12:47PM (#23886177)
    In all seriousness, I've often wondered why they don't just put server farms and data centres in the Australian desert (or any desert, for that matter). Solar power galore (almost no cloud cover in central Oz), and if miners get paid big money to live and work out there, surely IT guys would take the money too once you've cut a lot of your overheads. You could even offer the work to residency applicants to cut the wage bill. There are also enough big mining outfits out that way that they'd probably relish being able to outsource their IT needs.
  • by Colin Smith ( 2679 ) on Saturday June 21, 2008 @12:52PM (#23886215)

    Switch the machines off at the socket. You can do it using SNMP.
    Monitor the average load on your machines: if it's too low, migrate everything off one machine and switch it off. If it's too high, switch one on.

    Of course, this assumes you know how to build highly available, load-balanced clusters, with automated installs, network booting and all that. Not so difficult.
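    A minimal sketch of that idea, assuming net-snmp's snmpset CLI is installed and the PDU speaks SNMPv2c. The hostnames, thresholds, community string, and the outlet-control OID (shown here in the style of APC's PowerNet-MIB) are all site- and vendor-specific assumptions:

        #!/usr/bin/env python3
        # Sketch: power nodes on/off through a PDU's SNMP interface based on load.
        import subprocess

        # Hypothetical outlet-control OID (vendor-specific; APC-style shown).
        OUTLET_CTL_OID = "1.3.6.1.4.1.318.1.1.4.4.2.1.3.{outlet}"
        ON, OFF = 1, 2

        def set_outlet(pdu_host, outlet, state):
            """Flip one PDU outlet via SNMP (1 = on, 2 = off)."""
            oid = OUTLET_CTL_OID.format(outlet=outlet)
            subprocess.run(
                ["snmpset", "-v2c", "-c", "private", pdu_host, oid, "i", str(state)],
                check=True,
            )

        def rebalance(nodes, low=0.2, high=0.8):
            """nodes: list of dicts with 'pdu', 'outlet', 'load', 'powered'."""
            powered = [n for n in nodes if n["powered"]]
            if not powered:
                return
            avg = sum(n["load"] for n in powered) / len(powered)
            if avg > high:                        # cluster is hot: wake a spare node
                spare = next((n for n in nodes if not n["powered"]), None)
                if spare:
                    set_outlet(spare["pdu"], spare["outlet"], ON)
                    spare["powered"] = True
            elif avg < low and len(powered) > 1:  # cluster is idle: drain one, cut power
                victim = min(powered, key=lambda n: n["load"])
                # (migrating VMs/jobs off the victim would happen here)
                set_outlet(victim["pdu"], victim["outlet"], OFF)
                victim["powered"] = False

    The hysteresis gap between the low and high thresholds keeps the controller from flapping a node on and off around a single setpoint.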


  • Overcooling? (Score:1, Interesting)

    by Anonymous Coward on Saturday June 21, 2008 @12:55PM (#23886233)

    I think this guy confuses heat and temperature. In datacenters, cooling costs are mostly proportional to the heat produced, and have little to do with the temperature you maintain in the steady state.
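    A back-of-the-envelope illustration of that proportionality (the load, COP, and power price below are illustrative assumptions, not figures from the article):

        # Cooling cost scales with the heat you must remove, i.e. the IT load.
        it_load_kw = 500.0      # heat generated by the servers (kW, assumed)
        cop = 3.0               # chiller coefficient of performance (assumed)
        price_per_kwh = 0.10    # electricity price, USD (assumed)

        cooling_kw = it_load_kw / cop            # electrical power the chillers draw
        annual_cost = cooling_kw * 24 * 365 * price_per_kwh
        print(f"{cooling_kw:.0f} kW of cooling draw, ~${annual_cost:,.0f}/year")
        # Halving the heat halves this bill; nudging the setpoint only nudges COP.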

  • by symbolset ( 646467 ) on Saturday June 21, 2008 @01:17PM (#23886423) Journal

    Of course, with a good dynamic provisioning system a single host failure doesn't matter, because the new VM will just get spun up on a different host that's been woken up.

    Bingo. A node is just a node. A decent control system will detect a node failing to come up, flag it for service, and bring up another one. In datacenters not designed for this sort of redundancy, a server failure is a big deal that brings people in on a holiday weekend. If you do it right, the dead server just sits there until you get around to that rack in the regular course of things.
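    A toy version of that control loop. The node inventory, the heartbeat source, and the start_vm_on and flag_for_service callbacks are all hypothetical stand-ins for whatever the site's provisioning system provides:

        import time

        HEARTBEAT_TIMEOUT = 30  # seconds of silence before a node counts as dead

        def pick_healthy(nodes, exclude, timeout=HEARTBEAT_TIMEOUT):
            """Least-loaded node that is still heartbeating (naive placement)."""
            live = {k: v for k, v in nodes.items()
                    if k != exclude and time.time() - v["last_heartbeat"] <= timeout}
            return min(live, key=lambda k: len(live[k]["vms"]))

        def reconcile(nodes, start_vm_on, flag_for_service, now=None):
            """nodes: dict of name -> {'last_heartbeat': ts, 'vms': [...]}."""
            now = now or time.time()
            for name, node in nodes.items():
                if now - node["last_heartbeat"] > HEARTBEAT_TIMEOUT:
                    flag_for_service(name)        # open a ticket; nobody drives in
                    for vm in node["vms"]:        # respawn its VMs elsewhere
                        start_vm_on(pick_healthy(nodes, exclude=name), vm)
                    node["vms"] = []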

  • Why cool systems? (Score:2, Interesting)

    by rathaven ( 1253420 ) on Saturday June 21, 2008 @02:00PM (#23886777)
    Good point, but since this is just thermodynamics, why do we actively cool systems at all? Managed properly, the heat could be put to far better use than feeding air conditioning. People often forget that air conditioning isn't actually a cooling solution if you take in the whole picture: you're supplying more energy, and therefore more heat, to make a small area temporarily cooler.
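    Rough numbers on what gets thrown away (the recovery efficiency and per-home heating demand are assumptions for illustration):

        # Virtually every watt a server draws ends up as low-grade heat.
        it_load_kw = 1000.0          # a modest 1 MW data hall (assumed)
        recovery_efficiency = 0.6    # fraction capturable for reuse (assumed)
        home_heat_demand_kw = 5.0    # average space-heating demand per home (assumed)

        usable_heat_kw = it_load_kw * recovery_efficiency
        homes = usable_heat_kw / home_heat_demand_kw
        print(f"~{usable_heat_kw:.0f} kW of reusable heat, "
              f"enough to warm ~{homes:.0f} homes")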
  • Northern Climates? (Score:3, Interesting)

    by photon317 ( 208409 ) on Saturday June 21, 2008 @02:39PM (#23887109)


    What I've always wondered is why we don't build more datacenters in colder climates here in North America. Why put huge commercial datacenters in places like Dallas or San Diego (there are plenty in each) when you could put them in Canada or Alaska? In a cold enough climate you could just about heatsink the racks to the outside ambient air and have little left to do for cooling. The downsides are maybe 20ms of extra latency to some places, and having to run more fiber and power infrastructure to a remote location. But surely, in the long run, the cooling savings would win, no?
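    A back-of-the-envelope comparison of the two sites. The PUE figures and power price are assumptions, not data from the article:

        # Conventionally chilled site vs. a mostly free-air-cooled northern one.
        it_load_kw = 1000.0         # IT load at either site (assumed)
        price_per_kwh = 0.10        # electricity price, USD (assumed)
        pue_warm_climate = 1.8      # typical chilled facility of the era (assumed)
        pue_cold_climate = 1.2      # mostly free-air cooling (assumed)

        def annual_cost(pue):
            """Total facility energy cost: IT load times PUE, around the clock."""
            return it_load_kw * pue * 24 * 365 * price_per_kwh

        savings = annual_cost(pue_warm_climate) - annual_cost(pue_cold_climate)
        print(f"~${savings:,.0f}/year saved")  # weighed against ~20 ms extra latency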
