1/4 Width Rack-mount Linux Servers 60
An Anonymous Coward writes: "It seems an unheard-of company,
inlogica, are selling a 1/4-width rack-mount server that comes complete with Slackware Linux (woot) in the UK as we speak. They appear to be quite a good spec too: PIII-600MHz, 128MB RAM, 17GB HD. Now if only I could afford one..." They're cute, but I wonder about power requirements; I wouldn't think most racks would be designed to support this many systems.
Re:1/4U doesn't mean 1/4 cheaper for a server spac (Score:1)
It is GNU/Watts (Sorry, couldn't resist)
Unless they draw 1/4 less power and generate 1/4 less heat, the benefit is reduced.
Since almost all input power is converted into heat, both are reduced by the same amount. (And if you say "1/4 less" it means 75% of the original amount, not 25%.)
They also don't include BTU load.
5 kW electrical power == 5 kW heat. (Converting this to archaic units is left as an exercise for US government agencies that can afford to send probes to Mars and still miss.)
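For the "archaic units" crowd, the conversion really is trivial. A quick sketch; the 3.412 BTU/hr-per-watt factor is the standard conversion, and the 5 kW figure is the one from the post above:

```python
# Nearly all electrical input becomes heat, so the cooling load in
# BTU/hr is just a unit conversion away from the power draw in watts.
BTU_PER_HR_PER_WATT = 3.412

def heat_load_btu_per_hr(watts):
    """Heat a data center must remove, in BTU/hr, for a given draw."""
    return watts * BTU_PER_HR_PER_WATT

# 5 kW of electrical power == 5 kW of heat:
print(round(heat_load_btu_per_hr(5000)))  # 17060 BTU/hr
```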
Problems with filling a rack with these? (Score:1)
This strikes me as the ideal solution for someone needing three or four distinct servers who is currently running only one. You can pull your single server, move data to the first 1/4 and keep running, then build #2 into a DB box, #3 into the secure server and #4 into the main web server, then change #1 over into a "gateway" type box. One of these wouldn't have nearly the problems that a rack-full would...
Of course, the site having nothing but a homepage and no specs at all doesn't help their cause much. Even if that's a good price for four servers, the site puts me right off of them.
Re:where do you get the power (Score:1)
336 servers in a single rack (Score:1)
Visit www.rlxtechnologies.com [rlxtechnologies.com]. They have a density of 8 computers per 1U of space (it's actually 24 per 3U). On each system you get a Transmeta CPU at 633MHz, 128MB-512MB of RAM, 2 drives (can be shipped with 10 or 30GB), and 3 NICs (although one NIC on each system is connected internally).
The entire chassis of 24 blades runs off a single 450W power supply. However, there is a second power supply, for redundancy. In testing, we have even turned off the fans, with no harm done. The cpu heatsinks are quite touchable.
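Run the numbers from the paragraph above and you can see why those heatsinks stay touchable. A sketch using the post's own figures (450 W supply, 24 blades):

```python
# One 450 W supply feeds all 24 blades in the chassis.
supply_watts = 450
blades = 24

watts_per_blade = supply_watts / blades
print(watts_per_blade)  # 18.75 W per blade -- a fraction of a typical PIII box
```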
Note: I contracted at RLX, and have actually worked with the hardware. It's great stuff.
Re:1/4U doesn't mean 1/4 cheaper for a server spac (Score:1)
An even bigger win would be a multisystem chassis with an integrated Fibre Channel switch. It'd cut the cabling and power way down -- run a single Fibre Channel cable to a Xiotech or EMC cabinet.
Heat is still a killer. I'd like to see a data center installed in a room that spanned more than one floor. I'd put a raised floor about 6 feet off the ground and use perforated floor to draw cool air from the subfloor. The ceiling would be open to big exhaust fans on the ceiling of the second floor. I think what makes heat so tough on data centers is that after you raise the floor there's not a lot of head room left for the equipment. Having an extra 10-20 feet above the equipment for heat to rise into would go a long way.
Re:1/4U doesn't mean 1/4 cheaper for a server spac (Score:1)
Just remember- those are GNU/watts you're using.
Re:Problems waiting to happen (Score:1)
APC makes some really nice racks with grill front.
Plus, it being APC, you can be pretty sure that a 3KVA + battery expansion will not make it collapse into a pile of junk.
re:not for serious use (Score:1)
Serious companies? Computers that aren't CE certified aren't even allowed in an incorporated business (U.S. only, I think). No, it's not the best price, but if my racks are full and I need to fit 5 more machines in a closet or spare cubicle, I can now.
Sure, but if people are busy actually doing work and a machine goes down, try getting warranty service on a non-CE or homebrew machine. You can't run to Johnny's computer shop and expect identical parts 6 months after ~any~ purchase.
Re:That's a good spec? (Score:1)
It's not so much that they lag behind the desktops; it's that the quality of the hardware matters more than the quantity. Smaller, more reliable SCSI drives instead of bigger IDE storage. ECC RAM over standard. Intel over AMD, just to be safe. You end up paying the same amount for the rack unit as you do for the workstation, with a slightly weaker machine. With a server, if you know what your needs are going to be, you buy to meet those needs, saving money by not getting the fastest and biggest, and then spend that savings on reliability.
B1ood
DC power (Score:1)
Re:1/4U doesn't mean 1/4 cheaper for a server spac (Score:1)
Having said that, these machines have their place. Imagine a rack with four of these and two nice disk arrays [hp.com]. A 1 rack, HA server.
Everyone's talking about heat (Score:1)
Say you want a rack with 20 machines. That gets pretty crowded with 1U machines. And hot! But if those 20 machines only take up 5U, then things aren't going to get so hot. I assume there's at least a bit of insulation on the sides of the unit, so the heat will go out the top or bottom. Given the size, though, probably not.
Re:1/4U doesn't mean 1/4 cheaper for a server spac (Score:1)
/* Not a Sermon, Just a Thought */
Mounting? (Score:1)
Uh... (Score:1)
I'm sure a reader here could come up with a home-spun solution at a quarter of the cost. :)
Re:Uh... (Score:1)
Couldn't you slip some kind of brace around a laptop, give it a network card, and *still* save money?
Re:Just what we need (Score:1)
Nope, that's called a watchdog timer, and it's incorporated in most microprocessors (from the $3 8-bit parts on up). Your customized Linux 2.4 kernel supports this feature too (don't know about older versions), as either a software emulation or a hardware version.
But interesting point.
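As a rough sketch of how the Linux side works (assumptions: the standard /dev/watchdog device node with its default timeout; the helper name here is made up): a daemon keeps writing to the device, and if the writes ever stop, the timer expires and the kernel or hardware reboots the box.

```python
import os
import time

WATCHDOG_DEV = "/dev/watchdog"  # device node exposed by the watchdog driver
PET_INTERVAL = 10               # seconds; must stay under the driver's timeout

def pet_watchdog(dev=WATCHDOG_DEV, interval=PET_INTERVAL, max_pets=None):
    """Keep resetting the watchdog countdown.

    If this process (or the whole userland) wedges and the writes stop,
    the timer expires and the machine reboots itself -- no admin needed.
    """
    fd = os.open(dev, os.O_WRONLY)
    try:
        pets = 0
        while max_pets is None or pets < max_pets:
            os.write(fd, b"\0")  # any write restarts the countdown
            time.sleep(interval)
            pets += 1
    finally:
        # The magic character 'V' tells the driver this is a deliberate
        # close, so it disarms instead of rebooting (unless the driver
        # was built with the "no way out" option).
        os.write(fd, b"V")
        os.close(fd)
```

Running this for real of course needs root and a watchdog driver (hardware or the softdog module) loaded.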
And you have a rundown admin after 24 hours of /. effect and server reboots.
Robert Christiansen
Student at the Technical University of Denmark,
Department of Informatics and Mathematical Modelling,
Computer Engineering & Technology Division (www.imm.dtu.dk/cet)
What about.... (Score:1)
VA Linux (Score:1)
Three words:
PC price war
Re:Uh... (Score:1)
--
What the... (Score:1)
Mmm, I love the smell of sarcasm in the morning...
=============================================
Re:Heavy! (Score:1)
Damn foreigners. Always putting the comma in the wrong spots. Use decimal points, people.
Re:where do you get the power (Score:1)
The heat problem actually has a couple of solutions we're trying to choose between. The simpler is to zip tie a pair of big box fans to the outside, fair them in to suck all the air out of the cage, and crank 'em up. The more complex involves a liquid-cooling rig for the CPUs. Not sure where to mount the radiators yet, though.
Re:Uh... (Score:1)
Waste of time - get an IBM S/390 (Score:1)
If space is soo much a concern... (Score:1)
Thinnest (Score:1)
off topic, but I guess im sympathizing... (Score:1)
Re:Uh... (Score:1)
36 hours with an IBM ThinkPad 755C, and my groin never felt toastier :>
Re:That's a good spec? (Score:1)
Re:where do you get the power (Score:2)
With such a setup, you could locate the power supplies in some form of larger, separate box. The heat venting could be much more easily controlled as well...
Re:where do you get the power (Score:2)
--
the telephone rings / problem between screen and chair / thoughts of homicide
Re:where do you get the power (Score:2)
--
the telephone rings / problem between screen and chair / thoughts of homicide
How about ~7 per U? (Score:2)
--
the telephone rings / problem between screen and chair / thoughts of homicide
where do you get the power (Score:2)
Sun makes a similar box, a Netra I think, but it has the same problem. These things do make great SAN boxes.
Strange (Score:2)
--
very unprofessional look (Score:2)
Dell, HP, Compaq, and others are all about to roll out similar "blade" servers, and these guys will end up in the gutter... Next, Please!
336 individual servers in a rack was seen before.. (Score:2)
Have a look at the RLX System 324 [rlxtechnologies.com]. It's basically a 3U chassis holding 24 independent servers. The chassis provides power (so you only need 14 power cables to power all the machines) and network (3 networks, public, private and management). The power requirements are quite low too (they're based on Crusoe).
Heavy! (Score:2)
And where will I put a VAT?
Re:Heavy! (Score:2)
Slackware!!! (Score:2)
Not trying to start a distro flame war, Just my $.02
Re:Problems waiting to happen (Score:2)
That isn't terribly surprising; a lot of the major vendors' systems cool front to back. HP's do, and it looks like the Compaq I just bought (I love these fire sales) does as well. I take it your cabinets have glass front doors? Someone has to make a grill-front, sectional locking cab for hosting companies.
ostiguy
Web server farms ? What about... (Score:2)
Re:oh not again... (Score:2)
Did you see that there's nothing but a front page? All of the links off the front page go back to the front page.
The product and its marketing materials aren't ready for prime time, and it's still getting flooded with slashdot referrer hits. I'm sure that just encourages "beta" press releases like this.
Re:Problems waiting to happen (Score:2)
--
Old News! (Score:2)
Back in August 2000, pcplus (I never look at the site, but it is refusing me now) [slashdot.org] awarded a Best Performer award to this exact product in its October 2000 issue (so the review could have been done as early as June... that's 1 year ago!). The basic summary:
The rest of the review probably has more interesting facts for the /. crowd. The main points of interest were that the hard disk (centrally mounted, secured to the base) remained at a comfortable temperature during testing, though they were worried about it. The expelled air temperature gave little cause for concern, but they didn't like relying on one fan. The machine came with 128MB and the maximum was 256MB. There is an internal picture of the machine and it looks quite neat and tidy, but you can't really see the guts of the machine.
I don't think this is the machine for putting 48 machines in 12U; I think (and so did PCPlus) it's for people who need a basic server but don't want to take up a load of space. Maybe you might buy a second, third or fourth, but beyond that... you will be best served elsewhere.
Re:Old News! (Score:2)
Stupid, stupid, stupid!
http://pcplus.co.uk [pcplus.co.uk] is the correct link. It is the link given on the cover of their magazine. I can't get into it now though :-(
X-Box(tm) (Score:2)
OK, the Xbox won't come in a tiny rack-mount enclosure, but with a 700-some MHz Pentium III and other pretty cool stuff, you'll come a lot closer to Beowulf-on-a-budget. Of course, there is the issue of having to use USB-based network adaptors, but never mind that.
Plus there's the fun-factor of seeing Linux run on a piece of hardware made (and sold at a loss) by Microsoft!
If this sounds like a fun way to void the warranty on a $299 piece of brand new hardware, take a peek at this thread over at linux-hacker.net [linux-hacker.net].
Re:Problems waiting to happen (Score:2)
The customer with the 1U servers got a cage, and we installed fans at the top back of the cabinets. That seems to keep the machines alive, but you could still broil meat from the heat coming off their cabinets.
Re:Uh... (Score:2)
Laptops are NOT designed for 24x7 operation
not for serious use (Score:2)
Good spec? Whenever we replace servers, our current default is a dual PIII 1GHz with 1GB of RAM. 128MB of RAM isn't even good for serving static web pages. And that price of $1700 is outrageous for the specs (I get the feeling what we're really paying for is the "1/4").
Pre-built rack systems are always more expensive; this is why serious companies order kits and standard micro-ATX motherboards and build their own (easily replaceable parts, locked and open hardware configs, etc., etc.). This is why VA Linux was nothing more than a buzzword. This company will be good for impressing your friends over at Mom & Pop co-hosting, but for serious applications the price, proprietary hardware, and specs don't make sense.
And, my god, the heat! 180 servers in one rack!? They should probably subtract a few from that number (for good measure, as they're certain to die) as the center starts to heat up like a tomahawk.
Re:not for serious use (Score:2)
Serious companies who are large enough have their own certification system. In fact, we have our own hardware department who is responsible for system configurations and testing of new hardware and the locking of hardware components.
It isn't economical to rely on a single 3rd-party entity to do all your servicing, especially one whose hardware isn't an open standard, as in the case of many rack systems with custom motherboards.
> sure, but if people are busy actually doing work and a machine goes down, try getting warranty service on a non-CE or homebrew machine.
First, our machines are not 'homebrew', but this has been addressed in the previous paragraph. Second, large companies don't need warranties because they have their own service departments. We can do the work ourselves faster, at a lower cost (components + our real labour), and without dealing with 3rd-party manufacturers or service contractors. These companies always have a huge service cost because they have overhead to support as well. We just break even, because we don't charge premiums on our own services.
Components are very easy to come by. When we have a new spec system, we order several of the discrete components as used in the system for off-hand use. This way, if something goes bad we just swap it for another part.
This is how all companies like ours work. It would be very foolish to be dealing with RMA numbers, service centers, shipping of servers, etc. when all we would need to do is replace a part. This is in part why we have our own hardware specifications and standards.
Just what we need (Score:2)
I'm sure the Slashdot crowd is going to love that.
HP servers pull a lot of power (Score:2)
HP servers consume more power than other machines, and a lot (if not most) of that is to power the fans. If you've looked inside, for example, a D-class, you'll see some non-impressive hardware, but the fans are impressively big and solid. Same goes for all the HP servers. They all seem to have fans able to handle twice the size of each server.
Sun, on the other hand, seems to swing too far the other way. Sun servers are equipped with the minimum amount of cooling, and if not given enough external help and plenty of space, they'll shut down all the time. It's funny, though, how they always name their servers something-Fire.
That's a good spec? (Score:2)
Are rack-mounted computers always that far behind stuff in cases? You can get a computer with twice as much of everything for fairly cheap: 1.2GHz, 256MB RAM, 40GB HD, yada.
Granted, you usually don't need that much power. But how does this tiny bit of space saving compare to the spec difference?
---
So.. (Score:3)
Automatic failsafe? Oh whee, a watchdog card. Those are cheap.
This is a gimmick.. and unless you really really need that rackspace at a huge premium... I'd say you are wasting your money on these.
Re:1/4U doesn't mean 1/4 cheaper for a server spac (Score:3)
There is a slight mistake here: you are assuming a US power supply for a product sold in the UK. A more sensible supply would be something like 32 Amp 3-phase 230V, which gives you about 120 Watts per server.
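A quick check of that arithmetic, using the simple per-phase approximation (the 180-servers-per-rack figure comes from the post being replied to):

```python
# UK-style feed: 32 A per phase, 3 phases, 230 V per phase.
volts, amps, phases, servers = 230, 32, 3, 180

total_watts = volts * amps * phases       # 22,080 W available to the rack
watts_per_server = total_watts / servers

print(round(watts_per_server))  # 123 W -- roughly the 120 W quoted
```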
oh not again... (Score:5)
Anyone remember that DVD player [slashdot.org] that had built in Sega games?
Problems waiting to happen (Score:5)
However, I have to agree with what others have already commented about power and heat - these will draw a lot. We have a customer who tried to fill a cabinet with the 1U IBM servers (37 of them), and we finally convinced them that it wasn't going to work, despite what IBM claims. The things would cook themselves. Turns out the fine print on IBM's claim was that it would work if you used an open rack - not practical in a hosting environment with other companies, even if you have a cage. These things will be even worse.
Oh, as far as the cost is concerned, sure you can home-build something cheaper - but that's not where the cost comes in. The cost comes in the support, which is much more expensive for servers than home machines, as they are often 'replace within 24 hours' at a minimum. We've got contracts with Sun that are 'replace within 2 hours, 24x7' that annually cost something like 40% of the cost of the servers.
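To put that support figure in perspective, a sketch with a made-up hardware price (only the 40%-per-year rate comes from the comment above):

```python
hardware_cost = 10_000.0  # hypothetical price for the servers
support_rate = 0.40       # annual contract cost as a fraction of hardware

years = 3
support_total = hardware_cost * support_rate * years
print(support_total)                  # 12000.0
print(support_total > hardware_cost)  # True: by year 3, support costs more
```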
1/4U doesn't mean 1/4 cheaper for a server space. (Score:5)
I mean, let's see - 180 servers in a rack. 40 Amps @ 120V means you'd have .222 Amps per server. That's about 26 Watts! (Don't bother replying about RMS - this is just an estimate.) I'd LOVE to find a PIII-600 that could run on that little power.
So I'd LOVE to know how much power they pack into a rack full of these things. Without power specs on the site, you can't tell. They also don't include the BTU load. Any data center manager needs those specs to know the added load on the environment.
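The back-of-the-envelope figures above check out; a sketch using this comment's own assumptions (40 A at 120 V shared by 180 servers):

```python
amps, volts, servers = 40, 120, 180

amps_per_server = amps / servers            # ~0.222 A each
watts_per_server = amps_per_server * volts

print(round(watts_per_server, 1))  # 26.7 W -- nowhere near enough for a PIII-600
```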
Remember, it's not JUST rack space. The reasons a server costs so much in real estate are the power it draws and the heat the facility has to remove, on top of the floor space itself.
So just because you can squeeze them into such a small space doesn't mean they are worth the extra money. Unless they draw 1/4 less power and generate 1/4 less heat, the benefit is reduced.
But that said - they are pretty cool. My other fear is they probably use more custom or on-board parts than off-the-shelf (memory, NIC, etc.), meaning fixing one probably means replacing it, not just a board inside. I know 1U racks have lots on board too... but at least the memory is replaceable. With this one, it's hard to tell (no internal shots that I could find).