
Round Robin Scheduling Not Power-Efficient

Via_Patrino writes "When load must be distributed between several servers, round robin, or any other technique that balances load equally, is the most common approach because of its simplicity. But a recent study shows that deliberately concentrating load on some servers can improve energy efficiency, because the other servers will be mostly unused during off-peak periods and can then make better use of power-saving methods. Especially where load involves lots of concurrent power-consuming TCP connections, which was the case in the study, a new load-balancing algorithm resulted in an overall 30% power savings. Here's the paper (PDF)."
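A minimal Python sketch of the contrast the summary draws; the class names, the fixed per-server capacity, and the dispatch logic are illustrative assumptions, not the paper's actual algorithm:

import itertools

class RoundRobin:
    """Spreads connections evenly, so every server stays awake."""
    def __init__(self, servers):
        self._cycle = itertools.cycle(servers)

    def pick(self):
        return next(self._cycle)

class PackFirst:
    """Fills servers in a fixed order, so trailing servers sit idle
    long enough to enter their power-saving states."""
    def __init__(self, servers, capacity):
        self.servers = list(servers)     # ordered list of server ids
        self.capacity = capacity         # max concurrent connections each
        self.load = {s: 0 for s in self.servers}

    def pick(self):
        for s in self.servers:
            if self.load[s] < self.capacity:
                self.load[s] += 1
                return s
        raise RuntimeError("all servers saturated")

    def done(self, s):
        self.load[s] -= 1                # call when a connection closes

Off-peak, PackFirst touches only the first few servers, which is what lets the rest sleep.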
  • Logical conclusion (Score:5, Insightful)

    by DoofusOfDeath ( 636671 ) on Friday May 09, 2008 @10:04AM (#23350294)
    So if we're willing to sacrifice speed for energy savings, shouldn't we just use the bare minimum number of computers that can handle the workload without crashing?
    • by TooMuchToDo ( 882796 ) on Friday May 09, 2008 @10:12AM (#23350402)
      What this means is someone needs to architect an intelligent loading system. Ideally, it would manage the load on your base load servers (that are on all the time), and when those servers reach 85-95% of capacity (numbers from my ass) other servers should be brought out of low power/sleep mode to start serving.

      Of course, if you use Amazon EC2, this is all moot, as they can shift load around to have their cluster run at peak efficiency.
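      A rough Python sketch of that idea; the thresholds, the hysteresis gap, and the wake/sleep helpers are all invented for illustration:

      WAKE_AT = 0.90    # "85-95% of capacity (numbers from my ass)"
      SLEEP_AT = 0.50   # lower threshold adds hysteresis so servers don't flap

      def rebalance(active, standby, utilization):
          # utilization: dict mapping server -> fraction of capacity in use
          avg = sum(utilization[s] for s in active) / len(active)
          if avg > WAKE_AT and standby:
              server = standby.pop()
              wake_from_sleep(server)    # hypothetical helper, e.g. Wake-on-LAN
              active.append(server)
          elif avg < SLEEP_AT and len(active) > 1:
              server = active.pop()
              drain_and_sleep(server)    # hypothetical helper
              standby.append(server)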

      • It can be done with your standard job scheduling/load balancing systems and, yes, about half a dozen shell scripts.
         
        • Re: (Score:2, Insightful)

          by TooMuchToDo ( 882796 )
          Unless you're running a hugely complex ASP.net application, in which case, it doesn't work so well.
          • by Colin Smith ( 2679 ) on Friday May 09, 2008 @10:41AM (#23350844)
            Good luck with that. Really.
             
            • by TooMuchToDo ( 882796 ) on Friday May 09, 2008 @12:19PM (#23352370)
              Ahh, Slashdot, where anything Microsoft is thought of as a piece of shit. Mailtrust (formerly Webmail.us) has hundreds of thousands of users using their mail product. The backend? Linux running mail daemons and their storage system. The front end? Windows Server 2008.

              Get over yourself. Use the right tool for the job.

              • Get over yourself. Use the right tool for the job.
                LOL. Hey you were the one telling me you had picked the wrong tool for the job.
                 
                • Umm, no. I said "Unless you're running a hugely complex ASP.net application, in which case, it doesn't work so well." Just because you can't do $latest-greatest-fad-of-the-month with your application doesn't mean it isn't the right tool for the job.

                  As always with Slashdot, a car example: I can't tow anything with my Corvette, nor is it that fuel efficient, but it still works.

      • It's not clear from the article that they're referring only to persistent connections which remain open but which don't have much activity. The one they analyze is Windows Live Messenger.

        They talk about 30% savings in these applications, but also give 59bn kWh as the figure for total power usage for all data centers, the majority of which probably wouldn't benefit from tweaks suited to persistent connections.
      • What this means is someone needs to architect an intelligent loading system. Ideally, it would manage the load on your base load servers (that are on all the time), and when those servers reach 85-95% of capacity (numbers from my ass) other servers should be brought out of low power/sleep mode to start serving.

        Of course, if you use Amazon EC2, this is all moot, as they can shift load around to have their cluster run at peak efficiency.

        The most insightful part of your post was "(numbers from my ass)"...

        No... just kidding... really... actually, seeing that in the middle of an intelligent post gave me my laugh today! Thanks!!!

        So... on to a serious reply. I think certain BladeCenter configurations were designed to do this. I also find that on various single-box SMP solutions from IBM, a noticeable amount of power is saved when the server is not under load. My power usage will vary by as much as 150 watts on my Netfinity M10 7000 (4-way SMP).

    • Re: (Score:2, Informative)

      BTW, dynamic workflow-based provisioning of VMs can (or eventually will) allow you to do this without sacrificing speed.
    • Re: (Score:2, Informative)

      by redcircle ( 796312 )
      I think that's the point of the study and its solution. Round robin doesn't account for under-utilization of resources, so it still balances between multiple servers when that isn't needed. Their new algorithm lets the servers that are not needed use their power-saving features, and maximizes utilization of only the needed resources (servers).
    • Re: (Score:1, Insightful)

      by Anonymous Coward
      Not that I'm going to read the article... but surely the situation is more along the lines of:

      You have a load that varies, the peak load requires 10 servers to keep performance acceptable (where the definition of acceptable can vary according to redundancy requirements, etc). But the usual load is 20% of the peak.

      With round robin each server would be running at 20% load in the usual case. From a power perspective it might be better to have 2 servers running at 100% and 8 servers idle, or 4 servers at 50% and 6 idle.
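      The arithmetic, assuming a simple linear power model with a large idle draw (both wattages are made up for illustration):

      IDLE_W = 100.0            # draw of a powered-on but idle server
      EXTRA_AT_PEAK_W = 100.0   # additional draw at 100% load

      def total_power(servers_on, per_server_load):
          return servers_on * (IDLE_W + EXTRA_AT_PEAK_W * per_server_load)

      print(total_power(10, 0.2))   # round robin: 10 servers at 20%   -> 1200.0 W
      print(total_power(2, 1.0))    # packed: 2 at 100%, 8 powered off ->  400.0 W
      print(total_power(4, 0.5))    # middle ground: 4 at 50%, 6 off   ->  600.0 W

      The idle draw dominates, which is why packing wins even though the total work done is identical.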
    • by MozeeToby ( 1163751 ) on Friday May 09, 2008 @10:23AM (#23350562)
      We've been sacrificing computing power for efficiency for years. New server CPUs tout their energy savings at least as much as, and quite often more than, they tout their computational power. As electricity gets more expensive and data centers continue to grow, this trend can only continue; it's simply too expensive to run a warehouse full of server racks unless you focus on efficiency.

      I'm waiting for the first company to put a data center a few hundred feet under water, where the water temp is low. You'd be surrounded by the world's biggest heat sink. The environmentalists would have a hissy fit, but that's never stopped industry before, and of course you could argue that you are saving electricity on cooling.
      • 1:34AM - "Honey, I have to go to work to replace a faulty power supply..."
        Regular data center - the tech drives to work and replaces the power supply.
        Underwater data center - the tech drives to the end of the pier, where he gets into his 1-man submarine and dives to -100ft.

        Now I ask you this: Does State Farm have submarine insurance?
      • I would totally be willing to have a site hosted in a data center that's under the water table and depends on a reliable source of power to keep its pumps going. It sounds like nothing could go wrong. I'll just throw in a few extra servers for failover if anything happens.

        Screw you, environmentalists!
      • by dpilot ( 134227 )
        1: Then put turbine blades above your data center, so that the upwelling heated water spins them, generating electricity.

        2: Use some of the electricity to power your data center, and the rest to power other thermodynamically impossible projects.

        3: Profit!! (no hidden step needed here, just an impossible one)
      • I'm waiting for the first company to put a data center a few hundred feet under water,
        - that's just silly. It would be far more efficient to pump water in to cool the servers, run it through a heat exchanger, then use the hot water for, well, hot water. Even server farms need hot running water.
      • The environmentalists would have a hissy fit

        Only the stupid ones. Either undersea or in the middle of a city, you're adding heat energy to the planet. The main question is whether you want to spend even more heat energy to relocate some of the excess to a more convenient location.

        Put another way, "server electricity" is better than "server electricity plus air conditioning electricity".

        Now, an easier sell might be to move data centers into less urban, possibly even rural settings. Think about it: no urban heat island to contend with. Usuall

        • Re: (Score:2, Insightful)

          by kesuki ( 321456 )
          'The environmentalists would have a hissy fit'

          "Only the stupid ones."

          So your logic is that, 'since we're using all this energy, what we need to do is heat up the oceans, instead of the atmosphere, because it takes less electricity to do that, thus making us use 5% less energy making us 5% more environmentally friendly'

           the difference is huge though: if we heat up the atmosphere, it radiates into space (if it didn't, LA would be about 6000 degrees Fahrenheit at noon)

          If we heat up the oceans, less CO2 and less
          • Citations for the idea that the ocean holds heat while the air radiates it harmlessly?

      • by kesuki ( 321456 )
        it's much easier to just run a pipe, and draw in fresh water from a nearby waterway/the ocean.

        the former WTC, IIRC, was using seawater to cool the building... you don't need to build underwater, which would be impractically expensive compared to cheaply piping cold water in and piping hot water out.

        and yes, environmentalists are against this for a lot of reasons (one is that warm water doesn't let oxygen or CO2 in or out the way cold water does; it has to do with total available surface area)... but that really h
        • Re: (Score:3, Informative)

          by afidel ( 530433 )
           Enwave Energy Corporation in Toronto, Ontario is already doing this. They have a 59K-ton integrated district cooling plant using deep lake water as an energy sink. Chicago is thinking of doing something similar with the huge volume of water it already draws from the lake for other purposes. The Toronto project probably kept another coal plant from coming online, because it's got a cooling capacity of 207MW, which would require about 400MW of electricity between transmission losses and cooling system inefficiencies.
      • I'm waiting for the first company to put a data center a few hundred feet under water, where the water temp is low. You'd be surrounded by the worlds biggest heat sink. The environmentalists would have a hissy fit but that's never stopped industry before, and of course you could argue that you are saving electricty on cooling.

        Here's a homebrew prototype [thebuehls.com]. Wonder if it has sprung a leak yet?

      • The problem with underwater data centers is the water. How about the Arctic? It's cold there too. This project chose the arctic [npr.org] to reduce their cooling costs and remove the need for redundant cooling systems.
    • Re: (Score:2, Interesting)

      by Anonymous Coward
      You aren't sacrificing speed as long as you have properly benchmarked your servers and understand where the performance hockey stick starts.

      Apply a connection limit slightly below the hockey stick in your load balancers / content switches and you will get maximum utilization with minimum performance impact.

      One other way I see customers getting maximum utilization out of their servers is by using dynamic resource scheduling and VMware ESX to move virtual servers around behind a load balancer. At
    • by Hatta ( 162192 )
      That's exactly what this paper is saying. You still need extra servers around to handle peak load, but leave them in a power saving state until you need them. Round robin should work as a way to distribute work across your pool of servers, but for the sake of energy efficiency you want to shrink your pool to the minimum size that can handle the work.
    • by Detritus ( 11846 )
      You really need more than that to avoid scheduling problems. I'd limit the load to about 60%. Event driven systems under high load can behave strangely.
    • Speed for what? CPU? Honestly, I could suffer a 50% cut in CPU on my servers and never know it. Disk I/O? Already optimized for that; RAM preload saves me there. Network I/O? I run my network at about 60% capacity at peak, 20% at the median. So what would I be sacrificing by turning down all the systems not needed to handle the current load? If one of my systems is handling 100 users without fail or lag, why in heaven's name would I believe that 10 systems running 10 users each would make the user experi
  • by athloi ( 1075845 ) on Friday May 09, 2008 @10:05AM (#23350306) Homepage Journal

    Confronted with distributing food rations to hungry orphans, people would rather be fair than efficient, even if it means letting some of the food go to waste, a US study shows.

    But the tests demonstrated that most people preferred equity in distributing food -- that all the hungry mouths got fed equally -- rather than an efficiency that perhaps meant that one orphan got almost nothing but also that no food went to waste.

    http://news.yahoo.com/s/afp/20080509/ts_alt_afp/ussciencepsychologymoralityresearch_080509123210 [yahoo.com]


    This problem shows up in many places.
    • ...you set necessary goals and then find the most efficient way(s) to go about them.

      OTOH I think the kind of study summarized by the Yahoo link gives science a bad name in human rights circles. In this case they treated a necessity as if it were a luxury where efficiency could become the paramount consideration. So we now know about a bit of human nature within an either/or false dichotomy (which is not very useful), plus we have the nasty suggestion that feeding everyone simply won't do from an efficiency
    • Not that I RTFPaper, but hungry orphans is a bad example if you're trying to show that people are irrational. The objective is to have as few starving orphans as possible, so trading three fewer starving orphans for 300 fewer thanksgiving-stuffed-full orphans is still a win, even if a lot of food is wasted in the process.

      Now if there was research showing that people would rather throw food away than give it as an unfair surplus to some of the orphans, THAT would be news.
  • by dsginter ( 104154 ) on Friday May 09, 2008 @10:08AM (#23350340)
    I don't think that we should go down this road again - why don't we talk about religion or politics, instead?
  • Especially where load involves lots of concurrent power-consuming TCP connections
    I really don't think all those connection-management packets add up to much wattage flowing through the tubes... On a serious note, good hardware load-balancing solutions can already aggregate traffic onto tiers of servers, adding more as the load rises, and minimize the number of backend TCP connections to the servers by doing things like multiplexing/pipelining.
  • by Colin Smith ( 2679 ) on Friday May 09, 2008 @10:11AM (#23350382)
    Just switch them off...

    If the load on your boxes is below a threshold, remove one of them from the load balance list, wait for connections to end, or migrate the processes off to another machine, and switch it off. When the load is above a certain threshold, you power on an additional node, configure it for whichever service and add it to the load balancer.

    Oh come on people, you call yourselves engineers? It really isn't that difficult.
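    A sketch of exactly that loop; balancer, active_connections, power_off, power_on, and configure are hypothetical helpers standing in for your load balancer's API and IPMI-style power control:

    import time

    def retire_node(node, balancer, timeout_s=600):
        balancer.remove(node)             # stop handing it new connections
        deadline = time.time() + timeout_s
        while active_connections(node) and time.time() < deadline:
            time.sleep(5)                 # wait for existing sessions to end
        power_off(node)

    def recruit_node(node, balancer, service):
        power_on(node)                    # e.g. Wake-on-LAN or IPMI
        configure(node, service)          # provision whichever service
        balancer.add(node)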

     
    • by russotto ( 537200 ) on Friday May 09, 2008 @10:18AM (#23350484) Journal

      If the load on your boxes is below a threshold, remove one of them from the load balance list, wait for connections to end, or migrate the processes off to another machine, and switch it off. When the load is above a certain threshold, you power on an additional node, configure it for whichever service and add it to the load balancer.


      Sure, that's not too difficult to do. But it does add complexity. And it does mean your system can't respond to increased load as quickly, as you have to wait for your additional boxes to boot up. If the increased load is predictable, you can anticipate, but that adds more complexity. It doesn't save you on capital costs as you still have to size your power and A/C systems for peak load. Powering the boxes on and off may shorten their lives or reduce their reliability. The question isn't whether it can be done; it's whether it's worth it.
      • If you have a fairly "dumb" system where you're running a webapp across an array of web servers and you have one DB server, adding the complexity to save power is probably not worth it. If you're Google, Amazon, etc. and your power bill every year is bigger than the real estate bill for some medium-sized companies, then you probably should be integrating power-efficiency architecture into your process somewhere.
      • Powering the boxes on and off may shorten their lives or reduce their reliability.
        I thought this was debunked a while ago?
        • We usually see 0.3-1% failure rates on servers in a data center powerdown (scheduled, soft shutdown). Bigger issues with blades and high power 1U servers, fewer with midrange and mainframe equipment. Hard drive and power supply failures are most common, generally attributed to thermal shock on re-start.

           Would love to see any information that can debunk it, so we can hit the equipment manufacturers up for damages...
          • Yeah, I bet what I am thinking of was about desktop computers. I shut all my desktops down after using them, and the laptops get put on standby, hibernate, and shutdown multiple times a day.
          • I would assume that if you did two powerdowns in a row, the 2nd time you brought everything back up you would see very few (if any) failures as the machines that survived the first powerdown would likely survive a second. I would assume that if the machines were regularly powered down, instead of seeing a number of them go in a clump like you observe, you would have a spread out stream of occasional failures. The question becomes, under which scenario are you losing the most machines in a period of time?
        • by dbIII ( 701233 )
          The failure mechanism is thermal fatigue - things moving when temperatures change and cracks opening up. If you don't have a large temperature difference, have a good design and don't expect a life of more than a decade then it isn't likely to be a problem.
      • It isn't a problem. By that, I mean... Watch where you put your state...

        Powering the boxes on and off may shorten their lives or reduce their reliability.
        Who cares? They are disposable $300 boxes. When one dies you take it out, put another in its place, and send the old one back to the manufacturer to be replaced under warranty.

         
      • by Locutus ( 9039 )
        or have the "stand-by" system(s) in a sleep mode so they can be ready for the extra load more quickly. This would trigger bringing in/powering on another box which would go into "stand-by" mode if load keeps going up.

        it would be silly to have all your boxen running at 5% load because of a dumb load balancing scheme. Energy wasteful to say the least.

        LoB
        • by kesuki ( 321456 )
           an efficient operating system should have 0% load on the CPU when it's doing nothing, so the power-saving circuitry on the chip should be able to power it down to nothing; the HD likewise should be able to spin down when there's no disc activity, and modern fan controls can drop the fans into 'power saving mode'

           I mean, of course this means you're not using Windows... but from personal experience, when I switched one of my computers from Windows to FreeBSD (and this was in 1996) I was saving over $5 a month in electricity
      • by afidel ( 530433 )
        The only part of a server that I would be significantly worried about during power cycles is the drives and fans, and with boot-from-network/SAN the drives are no longer a concern. You might have to service fans more often, but decent servers have redundant sets, so it's just some additional scheduled maintenance, not a big cost next to the savings from turning loads of servers off.
    • Re: (Score:3, Informative)

      by iamhigh ( 1252742 ) *
      Agreed. How hard is it to understand that 50% load on 10 servers probably uses more energy than 100% load on 5 servers? It's common sense once you realize that 50% load != 50% power consumption.

      I am starting to think I didn't miss much by not going to a big name computer science school.
      • by barzok ( 26681 )
        What you describe is more closely related to electrical engineering than computer science.
      • You're right, it isn't very hard to understand. Fortunately for them, there's more to it than just understanding that simple fact. Say you want to bring in a system like this.. what are the optimum values for server load to balance speed and energy efficiency? What are the costs involved with bringing hardware on and offline all the time?

        They are performing research to gain further insight and data accumulation, something that takes much more than just "oh sure, I know power consumption != load."

        Aikon-

    • Re: (Score:3, Insightful)

      by PenguiN42 ( 86863 )
      Oh come on people, you call yourselves engineers? It really isn't that difficult.

      You'd be surprised how much of engineering is taking "obvious" ideas and banging your head against them for months/years trying to get all the details to work out right.
  • It doesn't take any account of the load on each box. If one is dying, it will still hand it, say, half the work.

    Load balancing is where you actually check the load and then make an informed decision about where to allocate the work.

    OK, rant over. Now back to your scheduled programming.

    • It depends on what you consider load. If server process load is the load you're balancing, then yes, you have to check that load to balance it. If connections, bandwidth, or people are your load, then round robin is best. For balancing something like serving static files, round robin is probably faster, cheaper, and more reliable.
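      The distinction in a few lines of Python; get_load is a hypothetical probe such as an SNMP query or a load-average agent:

      def pick_round_robin(servers, counter):
          # no feedback needed: just rotate; cheap and predictable
          return servers[counter % len(servers)]

      def pick_least_loaded(servers):
          # an informed decision per request, at the cost of measuring load
          return min(servers, key=get_load)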
  • by Animats ( 122034 ) on Friday May 09, 2008 @10:14AM (#23350428) Homepage

    Operators of multiple steam boilers have been dealing with this problem for a century. The number of boilers fired up is adjusted with demand, with the need for some demand prediction because it takes time to get steam up. This was done manually for decades; now it's often automated.

    The same thing applies to multiple HVAC compressors. Usually there's a long-term round-robin switch so that the order of compressor start is rotated on a daily or weekly basis to equalize wear.

    More and more, IT is becoming like stationary engineering.

    • Re: (Score:3, Interesting)

      by somersault ( 912633 )
      Similar idea to modern fuel efficient engines shutting down cylinders when you're idling as well (probably oversimplifying there but you know what I mean)
    • by russotto ( 537200 ) on Friday May 09, 2008 @10:27AM (#23350648) Journal

      Operators of multiple steam boilers have been dealing with this problem for a century. The number of boilers fired up is adjusted with demand, with the need for some demand prediction because it takes time to get steam up. This was done manually for decades; now it's often automated.


      Which, alas, won't stop someone from patenting it with respect to servers. Even if it's already been done with computers too.

      Incidentally, I've seen descriptions of currently available HVAC control systems for office buildings which take into account the season, the direction the building faces, the thermal mass of the building, demand, etc., and even learn some of these parameters while running, rather than forcing the installer to calculate them. But every office building I've worked in has had crappy systems which amount to running the compressors on a timer and using individually controlled dampers to provide even cooling (poorly). It seems that we have the technology, but not the will (or the capital) to use it.
      • by Hatta ( 162192 )
        Clearly the manufacturer of those systems needs to pay a senator or two to introduce legislation mandating higher energy efficiency in new installations.
    • by dwater ( 72834 )
      Similar issues in electricity generation too. They have big (coal/nuclear) power stations to satisfy the base demand for electricity, and then less efficient gas turbine stations that can fire up (and down) quickly to meet peak demand.

      It just seems too obvious for there to not be a solution to this in computing already, let alone it requiring a study to come to this conclusion.

      This being so suggests there's more to the story than the summary itself lets on; but to test that I'd have to follow all the links.
    • The only thing that makes this hard is a metric of what "fully loaded" means for a server. With generators and boilers, you have a single number which represents output, and you know what the capacity of each unit is, so you know when to start up the next unit. Computer servers are more difficult to characterize.

      So you have to measure some values of server load, convert that to a single number, and use it for load measurement purposes. Then it all works just like boiler scheduling.

      You don't even nee
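      For instance, a weighted combination like the following (the weights are guesses); once every box reports one number, the boiler-style scheduling applies directly:

      WEIGHTS = {"cpu": 0.5, "disk_io": 0.3, "net_io": 0.2}

      def load_score(metrics):
          # metrics: utilization fractions in [0, 1] per subsystem
          return sum(w * metrics[name] for name, w in WEIGHTS.items())

      load_score({"cpu": 0.9, "disk_io": 0.2, "net_io": 0.4})  # -> 0.59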

  • Pound, haproxy (Score:4, Insightful)

    by QuoteMstr ( 55051 ) <dan.colascione@gmail.com> on Friday May 09, 2008 @10:21AM (#23350538)
    We're running a no-frills OpenBSD load balancer at work. Right now, it's running Pound (the quickest thing we could get up once traffic spiked a few weeks ago), but we're considering other approaches too. haproxy's load balancing knobs look interesting. It looks like you can configure it so the maximum number of clients scales with the current load. The problem is that there's no feedback system.

    Some kind of loadavg-based, or even response-time, feedback mechanism would be great! Pound has that (I believe), but since Pound requires downtime for every configuration change, we want to move away from it ASAP.
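    A sketch of the missing feedback loop. It assumes haproxy's runtime stats socket and its "set weight" command, plus a hypothetical probe_ms latency probe; treat it as an outline, since the socket commands vary by haproxy version:

    import socket
    import time

    def set_weight(sock_path, backend, server, weight):
        with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
            s.connect(sock_path)
            s.sendall(f"set weight {backend}/{server} {weight}\n".encode())

    def feedback_loop(servers, probe_ms, sock_path="/var/run/haproxy.sock"):
        TARGET_MS = 50.0
        while True:
            for srv in servers:
                rt = probe_ms(srv)   # e.g. time a cheap HTTP request
                weight = int(100 * TARGET_MS / max(rt, 1.0))
                set_weight(sock_path, "web", srv, max(1, min(100, weight)))
            time.sleep(10)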
    • Re: (Score:3, Informative)

      by Fweeky ( 41046 )
      pen [siag.nu] can perform some configuration changes on the fly using an optional control service; you can set server weightings at least. It's also event driven rather than the thread-per-connection model I believe pound uses, so it should scale better.
    • keepalived.

       
  • more obvious statements

    A cluster of computers doing a job is less efficient than a single server doing the same job. Added to that, a cluster creates more points of failure, and more overhead communicating between those systems.

    If you have the option to run the DB & the application on the same server, try to do so.
  • by mlwmohawk ( 801821 ) on Friday May 09, 2008 @10:27AM (#23350638)
    This is a very cool idea, and I don't think it will affect usability too much either. As long as the load balancer keeps tabs on system loading, via snmp or something, it can turn on/off machines based on need.

    Assuming your system scales smoothly, i.e. gets proportionally slower as load starts to exceed processing capacity. For example, a process may always take 100ms as long as there is CPU time to spare, but once the CPU hits 100% utilization you have to start time-slicing more processes, and that 100ms starts to become 150ms. The load balancer can spin up a new server and start bringing the processing times back down.

    This is an obvious solution to an obvious problem, but until now, we've just never had to examine it.
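    That trigger, sketched with the 100ms/150ms numbers from above; the monitoring feed and boot helper are hypothetical:

    BASELINE_MS = 100.0   # processing time while there is CPU time to spare
    TRIGGER_MS = 150.0    # time slicing has set in; add capacity

    def check_and_scale(avg_processing_ms, standby_servers):
        # avg_processing_ms would come from the balancer's SNMP polling
        if avg_processing_ms >= TRIGGER_MS and standby_servers:
            boot_server(standby_servers.pop())   # hypothetical helper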
  • Redundant systems are not efficient? You don't say.
    Redundant systems are redundantly redundant. That's why they are robust.
    This message brought to you by the Department of Redundancy Department.
  • by JamesP ( 688957 )
    Everybody knows that it should be Round Batman, it is soo much better :P

  • This is not a new idea. VMware is making a killing on this same concept of consolidating load from many servers onto fewer servers. People tend to forget that an idle server still uses 50% of its peak power draw. There is a good write-up here on Green Data Center [colinmcnamara.com]
  • BigIP (Score:3, Interesting)

    by ZonkerWilliam ( 953437 ) * on Friday May 09, 2008 @11:08AM (#23351270) Journal
    BigIPs can use round robin with prioritization; in other words, one server receives more connections than the others. So how is this new?
    • More smarts, I think.

      Does your setup allocate ZERO connections to certain servers over some length of time, servers which are set up to reduce energy use upon reaching zero connections? If not, this looks like it might help.

      They're claiming real-world energy efficiency gains, so it looks like it's an improvement somehow.

      I would assume it's because this now adds dynamic adjustment, which could be based on total system-stack metrics of peak_load_capability, energy_minimization, acceptable_response_time, etc. Somet

      • Does your setup allocate ZERO connections to certain servers over some length of time, servers which are set up to reduce energy use upon reaching zero connections? If not, this looks like it might help.

        F5's BigIP, the load balancer in question, doesn't specifically allocate ZERO connections, you're right about that. Although F5 does allow for a lot of flexibility in load balancing: you can separate out traffic, i.e. HTTP and HTTPS, to go to two separate servers. This could approximate your zero connections. When you're talking zero connections, though, as this algorithm suggests, what's the latency? I can't help thinking that the time for the load balancer to establish sessions (most likely in the thousands) on the fl

        • I'm thinking that this is more like:

          (1)(a) "allocate all connections to servers $EnergyEfficient_1 thru $EnergyEfficient_9 (in a particular order due to their decreasing "EnergyStar rating") as long as their average load is less than 80%"
          (b) "meanwhile, PowerMonsterServers are in [no-op, CPUhalt, powerd'ed down, spindown, standy, Wake-on-LAN] mode, thus saving total energy over having all of these machines idle at 15% load and consuming 100W each "
          (c) "then if total systems load >95% and avg. respo

  • Amazing that all these discoveries can now be repeated with Green Tech phrasing and sound like they're new. Now a new discovery: busy waits R not energy efficient. Where's my Nobel prize?

  • by bill_kress ( 99356 ) on Friday May 09, 2008 @12:15PM (#23352310)
    I think it's probably simplistic to distribute a load evenly across all cores of a CPU. Although asymmetric designs might be tougher, I could see a system with one low-power always-on core to deal with system requests and organization (maybe even low enough power to remain on during a suspend), one to handle all GUI threads and interact with the GPU on a private bus, a couple of normal cores to handle typical user threading, one of which doesn't come on until the first is about 50% loaded, and one or two high-speed high-power cores that run all-out when the system is plugged in and needs them for intensive processing.

    It would take some targeted software design to take advantage of this, but I think we could be looking at a Moore's Law-style increase in power...

  • by viking80 ( 697716 ) on Friday May 09, 2008 @12:26PM (#23352464) Journal
    14 soccer moms are taking the team of 14 kids to a game. They have two options:
    A. Spread the kids among all the cars, and drive all the cars (14 cars)
      or
    B. Fill up a car, and send it off. Repeat until done. (6 cars)

    What is more energy efficient?

    Soccer moms have solved this without statistical analysis or engine torque curves.
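    Option B is just first-fit bin packing. A toy version, with seats per car left as a parameter since the replies dispute the exact numbers:

    def cars_needed(kids, seats_per_car):
        cars = []                     # each car is a list of kid ids
        for kid in range(kids):
            for car in cars:          # first-fit: top up a partly full car
                if len(car) < seats_per_car:
                    car.append(kid)
                    break
            else:
                cars.append([kid])    # all full: send another car
        return len(cars)

    cars_needed(14, 3)   # -> 5 cars, versus 14 if every kid rides separately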
    • Re: (Score:3, Interesting)

      by Culture20 ( 968837 )
      Parent post has it all.
      Car analogy? Check.
      Soccer Moms? Check. Check. (no mention of how many are single though)

      But... a lot of soccer moms don't care. They're busy with their other kids and errands too (each server runs more than just apache), so they want the flexibility of driving their own car. Show me a website where the hardware is designed to be energy efficient, and I'll see a site that can't handle a good slashdotting.
    • by wombert ( 858309 ) on Friday May 09, 2008 @01:08PM (#23352978)
      I believe your calculations are wrong. It's understandable, though, since soccer parenting is a fairly unique branch of mathematics.

      First off, you're assuming a standard car with 1 adult driver and 4 passengers; instead, you should be using an SUV with a capacity of 6-8, including driver.
      (Result: 4-5 vehicles)

      Next, you have to consider that not all parents will attend every game. The primary reason that soccer moms drive SUVs is that they must occasionally transport several of their child's teammates to a game (or, worse, to practice!) when their turn comes up in the rotation. Therefore, you only need enough SUVs to cover the number of child passengers, and the number of adults will follow.
      (Result: 2-3 vehicles)

      However, you might recall that the other reason soccer moms drive SUVs is that they often have additional children that have not yet reached sports-playing age, and these must be transported along with the parent, in a car seat (which, in the case of a standard car, would reduce passenger capacity by at least 20% by rendering the back center seat useless). Assume that approximately 1 in 3 soccer moms has an additional child to transport, and that the child adds to the overall passenger count.
      (Result: 3-4 vehicles)

      Finally, realizing that the overloaded schedules and priorities of child + parent create scheduling conflicts, it is impossible to get optimal performance. At least 1 child per SUV will be late, leaving a seat empty and requiring another parent with a car to transport them.
      (Result: 6-8 vehicles)

      The result is a range of possible values, but your initial calculation of 6 vehicles is optimistic at best.

  • by perlith ( 1133671 ) on Friday May 09, 2008 @12:59PM (#23352882)

    "Round Robin Scheduling Not Power-Efficient when using Windows Live Messenger"

    RTFA; from the abstract: "In this paper, we characterize unique properties, performance, and power models of connection servers, based on a real data trace collected from the deployed Windows Live Messenger."

    The research itself appears pretty solid. I'd be interested to see a followup paper where the model is based on a variety of applications that utilize round robin, not just one.

  • by cabazorro ( 601004 ) on Friday May 09, 2008 @02:57PM (#23354438) Journal
    Here is the solution: in the winter, run your web farm in the Northern hemisphere; in the summer, migrate to the Southern hemisphere. Run it in the basements of large apartment complexes and charge for the heating. Heating oil is going through the roof.

"The whole problem with the world is that fools and fanatics are always so certain of themselves, but wiser people so full of doubts." -- Bertrand Russell

Working...