Building the Green Data Center
blackbearnh writes "O'Reilly News talked to Bill Coleman, co-founder of BEA and now founder and CEO of Cassatt Corporation, about the challenges of building more energy-efficient data centers. Coleman's company is trying to change how data center resources are used, leveraging virtualization to drive server utilization higher. In the interview, Coleman touches on this topic, but spends most of his time discussing how modern data centers grossly overcool and overdeploy hardware, leading to abysmal levels of efficiency."
Lean Code = Green Code (Score:5, Interesting)
Software has an impact, too. Messy, bloated code takes longer to run, needs more CPUs, and so on. Imagine how much energy could be saved if there wasn't so much code bloat!
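A minimal illustration of the point (my sketch, not the poster's): the same task done with a leaner algorithm burns far fewer CPU cycles, and cycles are watts.

```python
import timeit

# The same membership test done two ways: a linear scan over a list
# versus a hash lookup in a set. The result is identical; the CPU cost is not.
items_list = list(range(100_000))
items_set = set(items_list)

def scan_list():
    return 99_999 in items_list   # O(n): walks nearly the whole list

def probe_set():
    return 99_999 in items_set    # O(1): a single hash probe

slow = timeit.timeit(scan_list, number=100)
fast = timeit.timeit(probe_set, number=100)
print(f"list scan: {slow:.4f}s, set probe: {fast:.4f}s")
```

Multiply that kind of difference across every hot loop in a data center and the energy argument writes itself.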
Managed power distribution units (Score:4, Interesting)
Switch the machines off at the socket. You can do it using SNMP.
Monitor the average load on your machines; if a machine's load is too low, migrate everything off it and switch it off. If overall load is too high, switch one on.
'Course, it assumes you know how to create highly available, load-balanced clusters. Automatic installations, network booting and all that. Not so difficult.
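The policy above can be sketched in a few lines (my illustration, not a production design; the thresholds are assumed values). The actual outlet switching would go through SNMP, e.g. an snmpset against your PDU vendor's outlet-control OID, which is vendor-specific and omitted here.

```python
# Load-based power policy: drain and power off an underused node,
# signal for another node when the cluster runs hot.
LOW, HIGH = 0.2, 0.8  # utilization thresholds (assumed values)

def plan_power(loads):
    """Given {node: load average}, return (nodes_to_drain, need_another_node)."""
    avg = sum(loads.values()) / len(loads)
    to_drain = []
    if avg < LOW and len(loads) > 1:
        # Drain the least-loaded node; workloads migrate before power-off.
        to_drain.append(min(loads, key=loads.get))
    return to_drain, avg > HIGH

drain, scale_up = plan_power({"node1": 0.05, "node2": 0.10, "node3": 0.12})
print(drain, scale_up)  # node1 gets drained; no scale-up needed
```

The decision logic is the easy part; the hard part, as the comment says, is having the automatic installs and network booting in place so a powered-on node joins the cluster by itself.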
Overcooling? (Score:1, Interesting)
I think this guy is confusing heat with temperature. In data centers, cooling costs are mostly proportional to the heat produced and have little to do with the temperature you maintain at steady state.
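To make the heat/temperature distinction concrete, a back-of-envelope sketch (my numbers, all assumed): at steady state the chillers must remove every watt the servers dissipate, and the electricity that takes is set by the chiller's coefficient of performance (COP), not directly by the setpoint.

```python
# Back-of-envelope: cooling electricity at steady state is the heat removed
# divided by the chiller's COP. Both figures below are assumptions.
it_load_kw = 500.0   # heat the servers dissipate, kW
cop = 3.0            # chiller coefficient of performance (assumed)

cooling_kw = it_load_kw / cop          # electrical power spent on cooling
pue_estimate = (it_load_kw + cooling_kw) / it_load_kw

print(f"cooling draw: {cooling_kw:.0f} kW, PUE (cooling only): {pue_estimate:.2f}")
```

The setpoint only matters indirectly, in that a warmer room lets the chillers run at a somewhat better COP.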
Re:He's missing real world experience (Score:3, Interesting)
Bingo. A node is just a node. A decent control system will detect a node failing to come up, flag it for service, and bring up another one. In data centers not designed for this sort of redundancy, a server failure is a big deal where people have to come in on a holiday weekend. If you do it right, the dead server just sits there until you get around to that rack in the regular course of maintenance.
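The "a node is just a node" policy reduces to a small reconciliation loop; here is a hypothetical sketch (not any particular product's control system):

```python
# On a failed health check, flag the node for service and bring up a
# spare instead of paging anyone on a holiday weekend.
def reconcile(nodes, spares, healthy):
    """nodes/spares are lists of names; healthy(name) -> bool.
    Returns (active_nodes, nodes_flagged_for_service)."""
    active, flagged = [], []
    for n in nodes:
        if healthy(n):
            active.append(n)
        else:
            flagged.append(n)                 # dead server just sits there
            if spares:
                active.append(spares.pop(0))  # bring up a replacement
    return active, flagged

active, flagged = reconcile(["n1", "n2", "n3"], ["spare1"],
                            lambda n: n != "n2")
print(active, flagged)  # n2 is flagged, spare1 takes its place
```

The point is that service capacity is restored automatically; the physical repair becomes a routine chore instead of an incident.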
Northern Climates? (Score:3, Interesting)
What I've always wondered is why we don't build more datacenters in colder climates here in North America. Why put huge commercial datacenters in places like Dallas or San Diego (there are plenty in each) when you could put them in Canada or Alaska? In a cold enough climate, you could just about heatsink the racks to the outside ambient air and have little left to do for cooling. I suppose the downsides are 20ms of extra latency to some places, and having to run more fiber and power infrastructure to a remote site. But surely, in the long run, the cooling savings would win, no?
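A rough check on that 20ms figure (my arithmetic, with assumed numbers): signal speed in fiber is roughly two-thirds of c, so round-trip latency scales with route distance.

```python
# Rough fiber-latency estimate. Assumptions: signal speed in glass is
# about 2/3 the speed of light, and the route is point-to-point fiber
# (real routes are longer than the great-circle distance).
C_KM_PER_MS = 299_792.458 / 1000          # speed of light, km per ms
FIBER_SPEED = C_KM_PER_MS * 2 / 3         # ~200 km per ms in fiber

def round_trip_ms(route_km):
    return 2 * route_km / FIBER_SPEED

# e.g. an extra ~2,000 km of route to a northern site:
print(f"{round_trip_ms(2000):.1f} ms extra round trip")
```

So an extra couple of thousand kilometers of fiber route costs on the order of 20ms round trip, which matches the poster's guess.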