
1/4 Width Rack-mount Linux Servers

An Anonymous Coward writes: "It seems an unheard-of company, inlogica, is selling a 1/4-width rack-mount server that comes complete with Slackware Linux (woot) in the UK as we speak. They appear to be quite a good spec too: PIII-600MHz, 128MB RAM, 17GB HD. Now if only I could afford one..." They're cute, but I wonder about power requirements; I wouldn't think most racks would be designed to support this many systems.
This discussion has been archived. No new comments can be posted.


  • > That's about 26 Watts! (Don't bother replying about RMS - this is just an estimate)

    It is GNU/Watts. (Sorry, couldn't resist)

    > Unless they draw 1/4 less power and generate 1/4 less heat, the benefit is reduced.

    Since almost all input power is converted into heat, both are reduced by the same amount. (And if you say "1/4 less", it means 75% of the original amount, not 25%.)

    > They also don't include BTU load.

    5 kW electrical power == 5 kW heat. (Converting this to archaic units is left as an exercise for US government agencies that can afford to send probes to Mars and miss it.)
  • Obviously that's not the solution these were meant for (unless you can build a new datacenter just to guarantee you can feed power and cold air to them).

    This strikes me as the ideal solution for someone needing three or four distinct servers who is currently running only one. You can pull your single server, move data to the first 1/4 and keep running, then build #2 into a DB box, #3 into the secure server and #4 into the main web server, then change #1 over into a "gateway" type box. One of these wouldn't have nearly the problems that a rack-full would...

    Of course, the site having nothing but a homepage and no specs at all doesn't help their cause much. Even if that's a good price for four servers, the site puts me right off them.
  • They use 48VDC. You are better off running your servers at 220V AC instead of 120V since they will use half of the current at twice the voltage.
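
    A quick sketch of the arithmetic behind that claim, in Python. The 250 W per-server figure is purely illustrative (the article gives no power spec); the point is just that current scales as I = P / V, so the same load draws half the current at twice the voltage.

        # Current drawn per server at 120 V vs 220 V for the same power draw.
        # 250 W is an assumed, illustrative figure -- not from the product page.
        def current_amps(power_watts, volts):
            return power_watts / volts

        for volts in (120, 220):
            print(f"{volts} V -> {current_amps(250, volts):.2f} A per server")
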
  • Visit www.rlxtechnologies.com [rlxtechnologies.com]. They have a density of 8 computers per 1U of space (it's actually 24 per 3U). On each system, you get a Transmeta CPU at 633MHz, 128MB-512MB of RAM, 2 drives (can be shipped with 10GB or 30GB), and 3 NICs (although one NIC on each system is connected internally).

    The entire chassis of 24 blades runs off a single 450W power supply. However, there is a second power supply for redundancy. In testing, we have even turned off the fans with no harm done. The CPU heatsinks are quite touchable.

    Note: I contracted at RLX, and have actually worked with the hardware. It's great stuff.
  • I've seen systems similar to this where they use a lot of laptop parts, a common power supply and pretty much anything they can do to make more than one system run off of a shared part. I'd imagine this type of integration could cut your per system electrical consumption by 10-20%.

    An even bigger win would be a multisystem chassis with an integrated Fibre Channel switch. It'd cut the cabling and power way down -- run a single Fibre Channel cable to a Xiotech or EMC cabinet.

    Heat is still a killer. I'd like to see a data center installed in a room that spanned more than one floor. I'd put a raised floor about 6 feet off the ground and use perforated floor tiles to draw cool air from the subfloor. The ceiling would be open to big exhaust fans on the ceiling of the second floor. I think what makes heat so tough on data centers is that after you raise the floor there's not a lot of headroom left for the equipment. Having an extra 10-20 feet above the equipment for heat to rise into would go a long way.

  • > 40 Amps @ 120V means you'd have 0.222 Amps per server. That's about 26 Watts! (Don't bother replying about RMS - this is just an estimate)

    Just remember- those are GNU/watts you're using.
  • > Someone has to make a grill front, sectional locking cab for hosting companies.


    APC makes some really nice racks with grill front.
    Plus, it being APC, you can be pretty sure that a 3KVA + battery expansion will not make it collapse into a pile of junk.


  • ...this is why serious companies order kits and standard micro-ATX motherboards and build themselves...

    Serious companies? Computers that aren't CE certified aren't even allowed in an incorporated business (U.S. only, I think). No, it's not the best price, but if my racks are full and I need to fit 5 more machines in a closet or spare cubicle, I can now.

    ...easily replaceable...

    Sure, but if people are busy actually doing work and a machine goes down, try getting warranty service on a non-CE or homebrew machine. You can't run to Johnny's computer shop and expect identical parts 6 months after ~any~ purchase.
  • x86 servers really don't have anything but Intel processors in them, so a semi-high-end Pentium III is expected, and semi-high-end Pentium III units generally cost about as much as that 1.2GHz Athlon you quoted. I'm assuming they're using ECC RAM as well, which generally costs much more (with RAM prices these days, who knows though...). Oh yeah, and probably SCSI drives, though they're not specified either. IDE drives that never get "parked" from spinning down have a shorter lifespan. Also, in general, the case costs much more, though here I really don't know.

    It's not so much that they lag behind the desktops; it's that the quality of the hardware matters more than the quantity. Smaller, more reliable SCSI drives instead of bigger IDE storage. ECC RAM over standard. Intel over AMD just to be safe. You end up paying the same amount for the rack unit as you do for the workstation and getting a slightly weaker machine. In a server, if you know what your needs are going to be, you buy to meet those needs, saving money by not getting the fastest and biggest, and then spend that savings on reliability.

    B1ood

  • If you're smart, you'll get the machines equipped to take 48VDC. If your colo can drop you a DC power connection, this is very useful: it means you have X number of power supplies eliminated that could potentially fail in the future. DC power also tends to be a little more reliable, especially if the place has a large bank of batteries supplying the machines; these end up acting as rather large capacitors...

  • Just one nit: If you're running 120V power through there you're an idiot. However, even doing the math at 240V it doesn't make sense.

    Having said that, these machines have their place. Imagine a rack with four of these and two nice disk arrays [hp.com]: a one-rack HA server.

  • but they're missing the point. If you use exactly the same number of servers in a rack, you get hella airflow.

    Say you want a rack like this:

    • 4U UPS
    • 7U Raid
    • 4U Backup
    • 1U Switch
      +
    • 20 unit Beowulf cluster!!

    That gets pretty crowded with 1U machines. And hot! But if those 20 machines only take up 5U, then things aren't going to get so hot. I assume there's at least a bit of insulation on the sides of the unit, so the heat will go out the top or bottom. Given the size, though, probably not.
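
    A rough U-count for that example, assuming a standard 42U cabinet and quarter-width nodes packed four to a 1U shelf (both assumptions mine, not from the post):

        # Rack units used by the example build-out above.
        fixed = {"UPS": 4, "RAID": 7, "Backup": 4, "Switch": 1}  # sizes from the list
        nodes = 20                                               # Beowulf cluster size

        as_1u_boxes = sum(fixed.values()) + nodes            # 16 + 20 = 36U of a 42U rack
        as_quarter_width = sum(fixed.values()) + nodes // 4  # 16 + 5  = 21U
        print(as_1u_boxes, "U with 1U nodes vs", as_quarter_width, "U with quarter-width nodes")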

  • It used to be that when you bought space in a colo center you paid for the number of racks you used and the amount of bandwidth (plus extras for more crossconnects and stuff). Then along came Google. Google found a way to get, IIRC, 192 -- a metric shitload -- of machines in one rack. They drew so much power that the surrounding cages were unusable. Now you pay for the rackspace, the bandwidth, and for each 20 amp circuit (generally). What does this mean? For people who need large clusters, their hosting costs will go down. The rackspace (generally bought in units of entire racks) runs somewhere between $600 - 1400 (US) a month, depending on your provider. So while it won't be a direct drop in colo costs, this could be big for some people.

    /*
    *Not a Sermon, Just a Thought
    */
  • So how do I mount the middle two units?
  • How hard would it be to build one of these oneself? Aren't they simply taking a server and cramming it into a case, and charging three times too much for it?

    I'm sure a reader here could come up with a home-spun solution at a quarter of the cost. :)

  • Actually, more to the point...

    Couldn't you slip some kind of brace around a laptop, give it a network card, and *still* save money?

  • Let me see... new feature...
    Nope, it's called a watchdog timer; it's incorporated in most microprocessors (from the $3 8-bit parts and up), and your customized Linux 2.4 kernel (don't know about older versions) supports this feature, in either software-emulation or hardware form (rough sketch of the usual interface below).

    But interesting point:
    /. effect, server reboots, /. effect increases, server reboots....
    And you have a run-down admin after 24 hours of /.'ing...

    Robert Christiansen
    Student at the Technical University of Denmark,
    Department of Informatics and Mathematical Modelling,
    Computer Engineering & Technology Division (www.imm.dtu.dk/cet)
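
    For reference, a minimal sketch of the standard Linux watchdog interface the comment is describing, in Python. It assumes the softdog module (the software emulation) or a hardware driver has already created /dev/watchdog, and the 10-second feed interval is just an example:

        # Keep the kernel watchdog fed; if this process (or the whole box) hangs
        # and the writes stop, the kernel reboots the machine after its timeout.
        import time

        with open("/dev/watchdog", "w", buffering=1) as wd:
            while True:
                wd.write("\n")   # any write counts as a keepalive ping
                time.sleep(10)
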
  • http://www.rlxtechnologies.com/
  • Will they have any more luck than VA Linux?

    Three words:

    PC price war
  • Have you ever tried to buy laptop hardware? For something that small it's pretty expensive, and then you'll have to build your own case, make sure the cooling works... Believe me, it ain't cheap.
    --
  • This is UNHEARD OF!!

    Mmm, I love the smell of sarcasm in the morning...


    =============================================
  • Damn foreigners. Always using the comma in the wrong spots. Use decimal places people.

  • Haven't quite decided on how to handle the power density issue, but it should be interesting.

    The heat problem actually has a couple of solutions we're trying to choose between. The simpler is to zip tie a pair of big box fans to the outside, fair them in to suck all the air out of the cage, and crank 'em up. The more complex involves a liquid-cooling rig for the CPUs. Not sure where to mount the radiators yet, though.

  • Well, kinda. We're more interested in CPU density than individual server density, but (pending testing - we're gonna build some next month and see if the design's right) we should be able to cram 4x 1.1 GHz P-3s, a total of 2GB of memory, 40GB of disk, and gigabit uplinks into a 1U slot, configured as 2 physical servers. If I'm right about a couple things beyond that, we _might_ be able to double that - have to see how the first test goes. So, we're looking at (min) 150 to (max) 300 CPUs per rack (gotta leave room for a switch). Should be fun. Oh, and the cost is in the $1000/CPU range, built.
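
    Back-of-the-envelope check on that range, assuming a 42U cabinet with roughly 2U set aside for the switch (my assumption; the CPUs-per-slot figure is the one given above):

        # CPUs per rack at 4 CPUs per 1U slot, and the doubled case.
        usable_u = 42 - 2       # leave ~2U for a switch
        cpus_per_slot = 4
        print("baseline:", usable_u * cpus_per_slot, "CPUs")      # ~160, i.e. the 150 end
        print("doubled: ", usable_u * cpus_per_slot * 2, "CPUs")  # ~320, i.e. the 300 end
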
  • It still comes back to 'I need a separate server per smart customer'. Add the time to set it up/reconfigure (not all customers stay) to the fitting, powering, watering and feeding and you're looking at lots of labour - the most costly element. Better approaches are available. In an IBM S/390 it apparently takes about 9 seconds to create another virtual machine - and you can have around 40.000 systems running in one unit. All we need to see now is a smaller version of this (but a *bit* larger than what VMWare GSX offers ;-) and all that kit will be redundant (or usable for games consoles ;-).
  • Buy a ton of "old" notebooks (you can get cheap 900 MHz machines) and stack those into a cabinet... more CPU, more disk... and a console for every server. What? You say power and heat are a problem... hey, no worse than what this product looks to have.
  • Out of curiosity, what's the thinnest one available in existence? Not keeping it to a particular OS. (Although FreeBSD/Linux would be nice)
  • anyway, if you're really stuck (i would imagine a day or two w/out taking a shit) you can go to a proctologist. Ass doctors are trained for this sort of thing. Maybe then you'll post something relevant? Here's a good ad for being straight edge, poster child. sXe-4-Life!
  • > Ever feel your average laptop when it's been on for a few hours - i.e. you can't wear shorts and sit it on your lap - it burns!

    36 hours with an IBM ThinkPad 755C, and my groin never felt toastier :>

  • Yea, this is just a little off topic.
  • by Anonymous Coward
    The power supplies should be separate from the rack units -- far separate. There's no reason why you should need to drag 110V to each individual computer -- why not just drag the 5V/12V instead?

    With such a setup, you could locate the power supplies in some form of larger, separate box. The heat venting could be much more easily controlled as well...

  • Isn't this exactly what telcos do? I seem to recall that telco kit runs from a 24V or 12V DC supply. I know that some vendors (Sun for one) do provide servers with the appropriate power setup to do this.
    --
    the telephone rings / problem between screen and chair / thoughts of homicide
  • Ah - I do that anyway - we don't have 110VAC mains :)
    --
    the telephone rings / problem between screen and chair / thoughts of homicide
  • A company we talk to, Captec (http://www.captec.co.uk/ [captec.co.uk]), also in the UK, sent me a flyer last week for their new high-density unit. It has 4 redundant PSUs, an integrated KVM, a shared CDROM and FDD, and slots for 20 CPU modules (although the picture shows 10). CPU modules have either three 10/100 Ethernet ports or two 10/100 and one gigabit, with CPUs up to 1GHz each in 'Socket 370' (presumably FC-PGA, actually).
    --
    the telephone rings / problem between screen and chair / thoughts of homicide
  • density for that? I've got 42 Compaq 2U webservers (dual 1GHz, 1.152GB memory, and two 18GB disks each), all crammed into one 19" rack, but I have to drag power from 3 nearby racks to supply everything. Someone needs to invent a rack with 440V power so it can supply the needed juice. I also ran into a heat problem, but that was fixable with the addition of a rear rack extension and a large fan.
    Sun makes a similar box, a Netra I think, but same problem. These things do make great SAN boxes.
  • All of their links seem to point back at their home page. Do they really have a product?

    --
  • These things look like they were designed and assembled by an eight-year-old. I particularly like the Home Depot-standard mounting bracket that doesn't quite fit and the giant screws they use to hold everything together. Gives it that "homebrew" look that's sure to impress. And how do you change these things out or add new units? It looks like you have to remove the entire 1U package to replace a failed unit.

    Dell, HP, Compaq, and others are all about to roll out similar "blade" servers, and these guys will end up in the gutter... Next, Please!

  • Have a look at the RLX System 324 [rlxtechnologies.com]. It's basically a 3U chassis holding 24 independent servers. The chassis provides power (so you only need 14 power cables to power all the machines) and network (3 networks, public, private and management). The power requirements are quite low too (they're based on Crusoe).

  • It weighs 1,739 pounds? They must be using some fairly dense metals.

    And where will I put a VAT?
  • Didn't know they made cases of neutron star material these days -- I've heard of strange and expensive cases, but this takes it to a new height!
  • It is good to see something with Slack installed by default coming off of the production lines. I think Slackware is one of the best server distros out there (toss-up between Slack and Debian). Slack does not get enough attention with Red Hat and Mandrake and SuSE targeting newbies. I think it is great to see Slack pre-installed. If they only had an option for Debian... then we would have the 2 best server OS distros available.
    Not trying to start a distro flame war, just my $.02
  • Re:IBM

    That isn't terribly surprising; a lot of the major vendors' systems cool front to back. HP's do, and it looks like the Compaq I just bought (I love these fire sales) does as well. I take it your cabinets have glass front doors? Someone has to make a grill front, sectional locking cab for hosting companies.

    ostiguy
  • ...hiring a web designer ?!
  • Did you see that there's nothing but a front page? All of the links off the front page go back to the front page.

    The product and its marketing materials aren't ready for prime time, and it's still getting flooded with slashdot referrer hits. I'm sure that just encourages "beta" press releases like this.

  • Dell racks have a grill front.. Very nice
    --
  • Back in August 2000, PCPlus (I never look at the site but it is refusing me now) [slashdot.org] awarded a Best Performer award to this exact product in its October 2000 issue (so the review could have been done as early as June... that's a year ago!). The basic summary:

    1. Good data transfer
    2. Only one fan and Ethernet
    About half of the review deals with the software setup and the Slackware choice, with no complaints other than "butchering the WebMin interface". It even compliments a good, tuned manual.

    The rest of the review probably has more interesting facts for the /. crowd. The main points of interest were that the hard disk (centrally mounted and secured to the base) remained comfortable temperature-wise during testing, though they were worried about it. The expelled air temperature gave little cause for concern, but they didn't like relying on one fan. The machine came with 128MB and the maximum was 256MB. There is an internal picture of the machine and it looks quite neat and tidy, but you can't really see the guts of the machine.

    I don't think this is the machine for putting 48 machines in 12U. I think (and so did PCPlus) it's for people who need a basic server but don't want to take up a load of space. Maybe you might buy a second, third or fourth, but then... you will be best served elsewhere.

  • Stupid, stupid, stupid!

    http://pcplus.co.uk [pcplus.co.uk] is the correct link. It is the link given on the cover of their magazine. I can't get into it now though :-(

  • Small, Rack Mountable, under Warranty, but not nearly as cheap or as much fun as reverse engineering a "coming any day now" Microsoft X-Box, which supposedly costs $425 to manufacture, but Microsoft will sell for $299.

    Ok, the Xbox won't come in a tiny rack mount enclosure, but with a 700-some MHz Pentium III and other pretty cool stuff, you'll come a lot closer to Beowulf-on-a-budget. Of course, there is the issue of having to use USB-based network adaptors, but never mind that.

    Plus there's the fun-factor of seeing Linux run on a piece of hardware made (and sold at a loss) by Microsoft!

    If this sounds like a fun way to void the warranty on a $299 piece of brand new hardware, take a peek at this thread over at linux-hacker.net [linux-hacker.net].

  • Our cabinets (Wrightline) have a plexi front with grill around it, and a full grill back. We can get full grill fronts from Wrightline, but there was something about their choices not aesthetically matching the rest of the cabinets (hey, I don't make the decisions around there, I just keep the network running).

    The customer with the 1U servers got a cage, and we installed fans at the top back of the cabinets. That seems to keep the machines alive, but you could still broil meat from the heat coming off their cabinets.

  • Ever feel your average laptop when it's been on for a few hours - i.e. you can't wear shorts and sit it on your lap - it burns!

    Laptops are NOT designed for 24x7 operation

  • > They appear to be quite a good spec too: PIII-600MHz, 128MB RAM, 17GB HD.

    Good spec? Whenever we replace servers, our current default is a dual PIII 1 GHz with 1 GB of RAM. 128 MB of RAM isn't even good for serving static web pages. And that price of $1700 is outrageous for the specs (I get the feeling what we're really paying for is the 1/4").

    Pre-built rack systems are always more expensive; this is why serious companies order kits and standard micro-ATX motherboards and build themselves (easily replaceable, locked and open hardware config, etc., etc.). This is why VA Linux was nothing more than a buzzword. This company will be good to impress your friends over at Mom & Pop co-hosting, but for serious applications the price, proprietary hardware, and specs don't make sense.

    And, my god, the heat! 180 servers in one rack!? They should probably subtract a few from that number (for good measure, as they're certain to die) as the center starts to heat up like a tomahawk.
  • > Serious companies? Computers that aren't CE certified aren't even allowed in an incorporated business (U.S. only, I think). No, it's not the best price, but if my racks are full and I need to fit 5 more machines in a closet or spare cubicle, I can now.

    Serious companies who are large enough have their own certification system. In fact, we have our own hardware department that is responsible for system configurations, testing of new hardware, and the locking of hardware components.

    It isn't economical to rely on a single third-party entity to do all your servicing, and whose hardware isn't an open standard, as in the case of many rack systems with custom motherboards.

    > Sure, but if people are busy actually doing work and a machine goes down, try getting warranty service on a non-CE or homebrew machine.

    First, our machines are not 'homebrew', but this has been addressed in the previous paragraph. Second, large companies don't need warranties because they have their own service departments. We can do the work ourselves faster, at a lower cost (components + our real labour), and without dealing with third-party manufacturers or service contractors. These companies always have a huge service cost because they have overhead as well to support. We just break even, because we don't charge premiums on our own services.

    Components are very easy to come by. When we have a new spec system, we order several of the discrete components as used in the system for off-hand use. This way, if something goes bad we just swap it for another part.

    This is how all companies like ours work. It would be very foolish to be dealing with RMA numbers, service centers, shipping of servers, etc. when all we would need to do is replace a part. This is in part why we have our own hardware specifications and standards.
  • An automatic fail-safe function restarts the server in the event of the operating system crashing. It comes fully configured as a Linux/Apache server...

    I'm sure the Slashdot crowd is going to love that.
  • I managed a data center for a large R&D lab - about 600 servers, mostly HP's

    HP servers consume more power than other machines, and a lot (if not most) of that is to power the fans. If you've looked inside, for example, a D-class, you'll see some non-impressive hardware, but the fans are impressively big and solid. Same goes for all the HP servers. They all seem to have fans able to handle twice the size of each server..

    Sun, on the other hand, seems to swing too far the other way. Sun servers are equipped with the minimum amount of cooling, and if not given enough external help and plenty of space, they'll shut down all the time. It's funny, though, how they always name their servers something-Fire.

  • "They appear to be quite a good spec too, PIII-600mhz 128mb RAM, 17GB HD"

    Are rack mounted computers always that far behind stuff in cases? You can get a computer with twice as much of everything fairly cheap: 1.2GHz, 256MB RAM, 40GB HD, yada.

    Granted, you usually don't need that much power. But how does this tiny bit of space saving compare to the spec difference?
    ---
  • by mindstrm ( 20013 ) on Sunday July 01, 2001 @05:18AM (#117012)
    it's three times the price it should be.
    Automatic failsafe? Oh whee, a watchdog card. Those are cheap.

    This is a gimmick.. and unless you really really need that rackspace at a huge premium... I'd say you are wasting your money on these.
  • by mpe ( 36238 ) on Sunday July 01, 2001 @09:16AM (#117013)
    > I mean, let's see - 180 servers in a rack. 40 Amps @ 120V means you'd have 0.222 Amps per server. That's about 26 Watts! (Don't bother replying about RMS - this is just an estimate)

    The slight mistake here is that you are assuming a US power supply for a product sold in the UK. A more sensible supply would be something like 32 amp 3-phase 230V, which gives you about 120 watts per server.
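
    The two estimates side by side, taking the 180-servers-per-rack figure from the parent comment and ignoring power factor (a simplification on my part):

        # Watts available per server on a US single-phase feed vs the UK
        # 3-phase feed suggested above.
        servers = 180
        us_feed_w = 40 * 120        # 40 A at 120 V             -> 4,800 W
        uk_feed_w = 32 * 230 * 3    # 32 A per phase, 3 x 230 V -> 22,080 W
        print(f"US feed: {us_feed_w / servers:.1f} W per server")  # ~26.7 W
        print(f"UK feed: {uk_feed_w / servers:.1f} W per server")  # ~122.7 W
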
  • by jaredcat ( 223478 ) on Sunday July 01, 2001 @04:53AM (#117014)
    How many times has this happened in the past 6 months? A marketing rep from some unknown company posts their new product line to Slashdot in hopes of getting some undeserved PR... and it gets approved as newsworthy.

    Anyone remember that DVD player [slashdot.org] that had built in Sega games?

  • by sacremon ( 244448 ) on Sunday July 01, 2001 @05:31AM (#117015)
    I work in a web hosting data center of a backbone provider. I could see some company wanting to be cheap on real estate and rent a half cabinet (the smallest we have) and putting a bunch of these in there.

    However, I have to agree with what others have already commented about power and heat - these will draw a lot. We have a customer who tried to fill a cabinet with the 1U IBM servers (37 of them), and we finally convinced them that it wasn't going to work, despite what IBM claims. The things would cook themselves. Turns out the fine print on IBM's claim was that it would work if you used an open rack - not practical in a hosting environment with other companies, even if you have a cage. These things will be even worse.

    Oh, as far as the cost is concerned, sure you can home build something cheaper - that's not where the cost comes in. The cost comes in the support, which is much more expensive with servers than home machines, as they often are 'replace within 24 hours' at a minimum. We've got contracts with Sun that are 'replace within 2 hours, 24x7' that annually cost something like 40% of the cost of the servers.

  • I agree about the power issue. That and heat. Install these babies in a closed cabinet and make sure you've got a jet engine pulling air from the subfloor. I managed a data center for a large R&D lab - about 600 servers, mostly HPs. Our standard cabinet had TWO 20 Amp feeds, one on each side. Stack a bunch of HP C-Class workstations as a server farm and it'll dry your hair in 2 minutes with all the heat coming out. And they loaded those two circuits too.

    I mean, let's see - 180 servers in a rack. 40 Amps @ 120V means you'd have 0.222 Amps per server. That's about 26 Watts! (Don't bother replying about RMS - this is just an estimate) I'd LOVE to find a Pentium III 600 that could run on that, so I'd LOVE to know how much power they pack into a rack full of these things. Without power specs on the site - well, you can't tell. They also don't include BTU load. Any data center manager needs those specs to know the added load on the environment.

    Remember, its not JUST rack space. The reasons a server costs so much in real estate are:

    • Rack space (your average well built rack runs about a grand or two)
    • Network Port (may or may not include bandwidth costs)
    • Load on electrical PDUs
    • Load on cooling systems

    So just because you can squeeze them into such a small space doesn't mean they are worth the extra money. Unless they draw 1/4 less power and generate 1/4 less heat, the benefit is reduced.

    But that said - they are pretty cool. My other fear is that they probably use more custom or on-board parts than off-the-shelf ones (memory, NIC, etc.), meaning fixing one probably means replacing it, not just a board inside. I know 1U racks have lots on board too... but at least the memory is replaceable. With this - it's hard to tell (no internal shots that I could find).
