Open Compute Project Comes Under Fire
judgecorp writes: The Open Compute Project, the Facebook-backed effort to create low-cost open source hardware for data centers, has come under fire for a slack testing regime. The criticism was first aired at The Register, where an anonymous test engineer described the project's testing as a "complete and total joke." The founding director of the project, Cole Crawford, has penned an open letter in reply. The issue seems to be that the testing for standard highly reliable hardware used by telcos and the like is very thorough and expensive. Some want the OCP to use more rigorous testing to replicate that level of reliability. Crawford argues that web-scale data centers are designed to cope with hardware failures, and that "Tier 1" reliability would be a waste of effort.
Smells like astroturf. (Score:4, Insightful)
Probably Cisco trolling against a movement that's going to put them out of business.
Sooner the better, I say.
Re: (Score:2)
Autism ... Autism Everywhere. (Score:3, Funny)
Some people just have to get a burr up their ass [arse] about everything.
Wait, Register is still up? Do they still say 'boffin' every paragraph? I couldn't bear to click through.
Re: (Score:2)
We aren't talking about a rack full of Dell/HP knock-off "servers". OCP hardware is rows of racks full of stripped-down, barebones systems. If your "mission critical" app fails, it's because you or your data center are a bunch of fools. Resilience comes from redundancy. If you fail to provide the redundant hardware, or the capacity to spin up your crapplication on other systems, then that's your damn fault (just as much as choosing to build your own rack full of budget trash).
OCP hardware is cheap, so you can
Re: (Score:2)
So all this cheap hardware gets deployed, then swapped out a whole bunch of times. The waste stream is much, much bigger because you're routinely scrapping out cheaply thrown together motherboards, etc.
It doesn't sound very green.
Re: Sort of.. (Score:2)
On top of that, the more of these things you expect to deploy, the better an investment in test and verification amortizes. How much does the testing cost, and how long does it take someone to replace a failed system, and how many replacements does it take before the operations cost exceeds the verification cost?
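A rough back-of-the-envelope version of that question, with entirely made-up numbers (none of these figures come from OCP or the article):

    # Break-even sketch: how long does skipping up-front verification take to
    # cost more in field swaps than the verification would have cost?
    verification_cost = 250_000     # hypothetical one-time test/verification spend (USD)
    swap_labor_cost = 150           # hypothetical tech time + logistics per replaced server (USD)
    extra_failure_rate = 0.02       # assumed extra annual failure rate without the testing
    fleet_size = 50_000             # servers deployed

    extra_swaps_per_year = fleet_size * extra_failure_rate
    extra_ops_cost_per_year = extra_swaps_per_year * swap_labor_cost
    print(f"{extra_swaps_per_year:.0f} extra swaps/year, "
          f"break-even after {verification_cost / extra_ops_cost_per_year:.1f} years")

With a big enough fleet the verification spend amortizes quickly, which is the parent's point; with a small fleet it may never pay off.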
Re: (Score:1)
If I understand it then it is not green at all. They, quite literally, plan on chucking out whole stripped-down towers when a single component fails. They will not be replacing fans, hard drives, RAM, failed network cards, or any of that. It is cheaper for them to toss them in the trash than it is for them to debug, fault-check, and/or replace hardware. It is not that the techs are making that much, it is that the hardware is that cheap and the value of uptime is so high. They are probably even going to
Re: (Score:1)
Your understanding is wrong.
OpenCompute systems tend to be more reliable, because they have fewer components that can fail, and there are strict guidelines over which components may be used in their construction based on failure analysis of previous crops. In these sorts of operations, systems are taken offline when they fail, and when a rack reaches a certain level of degradation, the whole rack is taken down, removed, and queued for refurbishment. They typically have a work center at one end of
Re: (Score:1)
Ah - thanks. I was under the impression that they were just going to be grabbing failed units out and chucking them in the bin. I was not surprised that they would do so. I am glad they will be fixing them for a while, at least. Disposal of eWaste is a problem even today though more and more is being recycled or reused.
Re: (Score:2)
What happens when, due to a lack of testing, your cheap OCP hardware has a design flaw and 10,000 servers all fail in a month?
That is the criticism, I think: that there is too little testing of OCP designs to make sure critical design flaws don't exist. No amount of fault-tolerant software design is going to save you from mass hardware failures.
Web-scale (Score:2)
Web-scale? Way to be tone-deaf there, Mr. Crawford.
Or maybe the ridicule heaped on users of that particular term is something indulged only by the neckbeard wannabes that haunt Slashdot. In which case, carry on!
Cheap hardware. Smart Software (Score:5, Insightful)
Re: (Score:2)
Yep. This thread is full of people pooh-poohing this idea and meanwhile it's the strategy used by the most successful corporations on the internet. Welcome to Slashdot!
Re: (Score:2)
Note that their datacenter disciplines are not actually proven to be the best, but boy do they think so.
They are proven to be the best for their specific type of operations. I'm quite sure that their SOPs won't work for the banking or healthcare industry, for example.
If Facebook goes down, a bunch of 30-year-olds are going to complain (teens use other social media these days, and grandparents won't care and will just try again later). If the Sutter Health (NorCal hospital chain) network/DC goes down, people's health will be affected.
Different operations and requirements require different budgets and ways of working.
Re:Cheap hardware. Smart Software (Score:5, Interesting)
I think the point is that so far it is only used by "the most successful corporations on the internet". In fact, you can probably count the number of organisations in the entire world that qualify on the fingers of one hand, though it will take a few more fingers to count how much money they have invested to reach this point.
Unfortunately, as lovely and friendly as all the Software Defined X advances seem with their mantra of openness, almost no-one is actually building a "web-scale data centre" with a 24/7 staff dedicated to just swapping out broken hardware and effectively unlimited resources to devote to designing hardware architectures and building control software that can cope with frequent failures without losing significant amounts of real money. For normal organisations, even those with heavy IT requirements and 12 figure market caps, running your critical infrastructure on hardware that does have a serious level of testing and consequent robustness may still be advantageous.
(Full disclosure: I sometimes work for clients in the networking industry, though whether an industry shift towards things like OCP would benefit or harm them would be open to debate so I think I'm still reasonably neutral here.)
Re: (Score:2)
Unfortunately, as lovely and friendly as all the Software Defined X advances seem with their mantra of openness, almost no-one is actually building a "web-scale data centre" with a 24/7 staff dedicated to just swapping out broken hardware and effectively unlimited resources to devote to designing hardware architectures and building control software that can cope with frequent failures without losing significant amounts of real money.
I think that's because most customers don't want that, partly because they don't understand how they would use it yet — but also because there is the fundamental problem of paying a middleman. If you are depending on someone to build the cloud for you, you're going to have to accept that they're going to want to get paid for their trouble. And nobody likes to write checks, they like to cash 'em.
Re: (Score:2)
Isn't this the point of the cloud: don't buy/build/maintain your own, rent from us and save because we do it cheaper and better than you ever could on your own?
I think by the time you reach a scale where you have 24/7/365.24 staffing adequate to handle the failures as they happen, you can take advantage of the higher failure rate / lower cost equipment. You don't need to be Google scale to do this.
Re:Cheap hardware. Smart Software (Score:5, Interesting)
Well, I have a few issues with the cloud hype, starting with the scarcity of evidence to support claims about cloud services being cheaper and/or more secure and/or more reliable than doing things yourself. Every major cloud provider has had serious downtime, and there is only so much you can attribute to being more visible at greater scale or to users not configuring HA tools properly. Far too many on-line services also run into significant security/privacy problems. And cost-wise going with the cloud rather than your own systems tends to be favourable at certain levels (other things being equal) but it can be outrageously expensive in other cases.
These myths aren't really the point here anyway. The point in this case is that no matter how fast your recovery time may be, whatever was happening on your hardware at the time it failed is lost, and in some cases you simply can't make that transparent to your users. Not everything in the world of programming is a distributed map-reduce where losing a hardware node means you just redistribute the 0.0001% of the job it was doing to another and no-one notices. Not everything in the world of networking can tolerate a multi-second failover process without an observable blip in connectivity. As for redundant/HA storage, the CAP theorem called and asked to speak with you about your database, but I think you were on with physics at the time so I just took a message.
It's not just about whether the wastage due to more frequent failures works out cheaper economically than paying a premium for better hardware. It's also about how much downtime you (or your customers) are willing to tolerate and what proportion of overall system time is spent just recovering from failures. If you've ever had the joy of watching the (N+1)-th drive fail in your RAID with N-way redundancy while it's still rebuilding from replacing the earlier failures, you'll know what I mean.
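The RAID scenario is easy to put numbers on. A minimal sketch, assuming independent drive failures and an exponential failure model (the annual failure rate and rebuild window below are illustrative guesses, not measured values):

    import math

    afr = 0.03           # assumed annual failure rate per surviving drive
    rebuild_hours = 24   # assumed time to rebuild onto the replacement drive
    survivors = 11       # drives the array still depends on during the rebuild

    p_one = 1 - math.exp(-afr * rebuild_hours / (365 * 24))   # one given drive dies in the window
    p_any = 1 - (1 - p_one) ** survivors                      # any surviving drive dies
    print(f"~{p_any:.2%} chance of a further failure before the rebuild completes")

Small per rebuild, but multiplied across many arrays and many years it is exactly the failure mode the parent describes.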
Re: (Score:3)
I've never had an N+1 drive fail in a RAID setup. What I have had happen is the power supply to the whole array fail... then we can talk about redundant power supplies, but, really, the data needs to be mirrored offsite at a place where a serious (fire / flood / riot / meteor strike / whatever) event doesn't take down all copies of the data / service. This was sort of the founding principle of ARPANET, anyway.
Economics varies, people negotiate bad contracts all the time that lead to higher costs of whatev
the main benefit is flexibility (Score:4, Informative)
I don't think I'd ever go to the cloud because it's cheaper or more secure or more reliable. The main benefit that I see is flexibility.
If your loads are stable and known in advance, it's likely cheaper to buy hardware and staff people to take care of it. On the other hand if loads spike wildly from one day to the next the cloud makes perfect sense. Need a thousand cores of compute power right this second? Amazon/Google/Rackspace/HP would be happy to rent it to you.
Nonsense (Score:3)
Re: (Score:2)
There is no such thing as "bullet-proof hardware" except in the sense that some of it would stop a .45 bullet.
Cheaply built hardware fails more often, but *ALL* hardware fails, and you need to plan for it. Ever hear of "RAID"? That's how almost all serious disk storage is deployed these days, and the drives still fail. They used to fail more frequently. ("RAID" == "Redundant Array of Inexpensive Disks".)
Re: (Score:3)
There is no such thing as "bullet-proof hardware"
Uh no... there definitely is. There's no x86-based system that really falls into this category, though. Many mainframe systems are bulletproof, in the sense that the mainframe won't fail or crash, or lose work, or corrupt data, upon any component failure. Tandem Computers' systems and some other past solutions on the market were pretty darned bulletproof.
That didn't mean no components failed -- only that when components died - CPUs and system bus inc
there is some very reliable hardware out there (Score:2)
I worked on a telecom switch that ran processing on cards that had two CPUs in lockstep. If the output of the two ever differed the card was taken out of service and its last transaction was rolled back. Memory contents were stored in at least three places at any given time. The dataplane was inductively coupled to avoid the possibility of DC current damaging things.
We replaced it with commodity hardware and smarter software. It wasn't *quite* as reliable, but it was a whole lot cheaper and the speeds r
Re: (Score:2)
Yes. That was a bit better than a "tell me three times" system. But there are still failure modes (as you note) which was my point.
Re: (Score:3, Informative)
Re: (Score:2)
Since most companies won't be able to hire the top 1% of those people, might it be more worthwhile to buy more reliable and expensive hardware?
Re: (Score:2)
instead of the expensive, and rare, high-end personnel and internal resources that Google and Facebook have.
Then they are destined to fail, if they are unwilling to invest in suitably skilled personnel AND high enough quality development for the chosen architecture to implement their intended plan.
might it be more worthwhile to buy more reliable and expensive hardware?
Paying up to keep the more qualified personnel on staff can have other benefits. I think the competition for good people is much less
Re: (Score:2)
You need cheap commodity hardware with smart software on top. Just ask Google or Facebook.
The software used by the rest of us (e.g. MySQL) isn't that smart, and it's very expensive to get software that is that smart --- potentially hundreds of thousands of ops-engineer and developer man-hours to build that software system.
There are open source products that can be that smart, with enough deployment work. Developing smart custom applications is a bear.
It may very well be cheaper in many cases f
Saying you test is easy. (Score:5, Insightful)
Re:Saying you test is easy. (Score:4, Informative)
Re: (Score:3)
Agreed, but still, even in a non-NEBS scenario, there's still a lot to be tested because you're putting something potentially flammable in someone's data center. It's really easy to think of designing so a server failure doesn't bring a cluster down, but a server failure that results in a fire has the potential to do more.
The one time I had a fire in a test lab, it really scared me, and made me realize as rare as that kind of thing is, it's potentially disastrous. And that's why they test for it.
Re: (Score:2)
toxic material is an important consideration.
but NEBS-testing servers for a data center is ridiculous!
Major manufacturers (HP, IBM, Sun, etc.) only test one or two hardware chassis for NEBS:
one basic 2U server and the next size up multiprocessor.
NEBS servers are designed to be utility servers in a telco switch site.
The power is DC and the site has a big bank of batteries to power the site during outages.
A telco is aiming for NO outages and is very hardware-focused.
Anyone else's datacenter is
Re: (Score:2)
Telco switches are ghost towns... big empty buildings out in the boonies that used to hold massive racks of relays, with a little box in the middle that replaces all that, or tiny shacks built after the tech came up to speed that just hold the little box. They aren't manned, they are critical, and they need to be reliable due to their geographic dispersal.
Datacenters are, as the name implies, centralized. Keep a staff of 4-5 guys on hand at all times, give them a PC gaming center to play epic COD on when thi
Re: Saying you test is easy. (Score:1)
Financially, hardware depreciates for tax purposes in three years anyway. Lately, hardware is a little slow on Moore's Law, but power efficiency and computing performance have kept roughly the same pace... If you're at the top end, you're losing money by not replacing fairly often. What happens after isn't their problem. There's no purpose in testing something to last in the desert for ten years because the vast majority of hardware is "disposable". If you want to complain about the waste, push for more recyclable materials, and of
Re: (Score:1)
and of course boards that use fewer parts they don't need...
I now have a picture in my head of a guy, his name is Ralph, sitting there, drilling holes, and soldering on random extra bits like capacitors, diodes, a spare bios chip bracket, and a USB port. I know what you meant but, really, that is how my brain works.
Yeah we'll just do that in software? (Score:2)
Re: (Score:2)
"web-scale data centers are designed to cope with hardware failures". So.... it's OK if you use my motherboard design and they randomly fail, because you should just make up for that in software or hardware redundancy? Um, no.
That's exactly what it means, and how it works. When you have tens of thousands of nodes, some of them WILL eventually fail during operation, no matter how good the hardware is. Thus, the software must be designed to accommodate hardware failures and seamlessly continue operation without interruption or data loss. If you already have to design the software to handle that anyway, then there is not much incentive to go to great lengths to improve hardware reliability. Whether the failure rate is 1:100000
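The arithmetic behind "some of them WILL eventually fail" is worth spelling out; the figures here are illustrative assumptions, not OCP data:

    nodes = 50_000
    p_node_fails_today = 1e-4   # assumed probability any given node fails on a given day

    expected_failures = nodes * p_node_fails_today
    p_at_least_one = 1 - (1 - p_node_fails_today) ** nodes
    print(f"expected failures today: {expected_failures:.0f}")
    print(f"chance of at least one failure today: {p_at_least_one:.1%}")

Even cutting the per-node failure rate tenfold still leaves node failures a routine event at that scale, which is why the software has to cope regardless.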
"designed to cope with hardware failure" (Score:1)
FTF Crawford.
Isn't this expected? (Score:5, Insightful)
It was my understanding that their premise, from the beginning, was that existing hardware vendors were excessively focused on adding costly, thermally demanding, and often proprietary, features at the hardware level that were unnecessary if you were willing to compensate for their absence in your software design.
There is obviously some level of reliability below which no compensation at the software level is possible (if you can't run the algorithm for detecting errors because it keeps glitching out, it's probably not going to work). But the impression they always conveyed was that many of the more sophisticated reliability mechanisms are really features aimed at people who are substantially less able to cope with failure, and who are therefore willing to pay substantially more for hardware that can invisibly paper over a variety of moderately serious failures and allow the software on top to run without incident, rather than buying lots of cheap hardware that has a risk of going down in a screaming heap.
So long as nobody gets any stupid optimistic ideas, I don't really see the issue. Sure, if Facebook were about sending men to Mars, they should seriously consider having three CPUs running in lockstep and voting on all operations and so on; but this project is about delivering as many ad impressions per dollar as possible, so there's no reason to get worked up over the occasional glitch.
Re: (Score:2)
I certainly can't rule out the former, especially since a bunch of preening software narcissists who "move fast and break things" and are proud of it don't seem like naturals for
Re: (Score:2)
if you can't run the algorithm for detecting errors because it keeps glitching out, it's probably not going to work
Chances are you can't make good assurances about tolerating any kind of Byzantine fault.
I realize there are finally some options for tolerating certain kinds of Byzantine faults in specific kinds of scenarios. In general, it is too hard or expensive, so the fact is, less reliable hardware does mean the application will be less reliable. Buying cheaper hardware is still a cost tradeof
5 9's (Score:5, Insightful)
I'm gonna side with OCP on this one. It is far more economical to deal with reliability via redundancy than it is via expensive parts. This is why we use RAID rather than speccing our drives to last 10 years minimum. All the big players in the datacenter market have put thousands of hours each into designing systems tolerant of missing parts.
The downside is that writing custom stacks tolerant of missing pieces is fucking hard and a huge up-front investment for a company. Most off-the-shelf software does not have that level of redundancy and fault tolerance baked in already. This means that for many small to medium sized deployments it's cheaper to buy a really expensive fault-tolerant rack of servers and run your off-the-shelf software on it than it is to buy into OCP with inexpensive hardware that's more prone to failure, because your software is NOT built to tolerate failure.
Different strokes for different folks and all. Use the right tool for the job. And OCP was made by companies with massive data farms to fit their needs... and their needs are probably not your needs.
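A toy comparison of the two approaches described above, with invented availability figures (the point is the shape of the math, not the specific numbers):

    cheap_node_availability = 0.995      # assumed: commodity/OCP-style box
    premium_node_availability = 0.9999   # assumed: expensive fault-tolerant server
    replicas = 3

    # The service survives if at least one replica is up -- but only if the
    # software layer can actually fail over, which is the expensive part.
    redundant = 1 - (1 - cheap_node_availability) ** replicas
    print(f"3 cheap replicas: {redundant:.6f}")
    print(f"1 premium server: {premium_node_availability:.6f}")

The redundant option wins on paper, but only after you have paid for software that can exploit it.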
Re: (Score:1)
Exactly this. Pick the right tool for the right job. If you are just serving up simple web pages to the masses, go cheap, they can always hit refresh if things fail.
If you have serious money flowing through the platform, plan and purchase accordingly. What is an outage going to cost you? A $50,000 server may end up being very, very cheap if an outage costs you $100,000 per hour.
Re: (Score:2)
Exactly this. Pick the right tool for the right job. If you are just serving up simple web pages to the masses, go cheap, they can always hit refresh if things fail.
If you have serious money flowing through the platform, plan and purchase accordingly. What is an outage going to cost you? A $50,000 server may end up being very, very cheap if an outage costs you $100,000 per hour.
If an outage costs you $100K/hour, you better not be running it on a single server.
Silicon Valley (Score:1)
Cheap, reliable, fast.... (Score:2)
Pick two...
It all boils down to what you want, but of the three things we all say we want, you get only two...
Hardware failures (Score:2)
...Crawford argues that web-scale data centers are designed to cope with hardware failures...
By that logic, the telco data centers are not designed to cope with hardware failures?
Of course, I really don't care if facebook has downtime due to hardware reliability issues. facebook is more a waste of time than anything else.
Re: (Score:2)
Of course, I really don't care if facebook has downtime due to hardware reliability issues. facebook is more a waste of time than anything else.
Facebook's customers would tend to disagree. They are paying a lot of money to Facebook and they want their money's worth.
Facebook's users are not the customers, they are the product.
Re: (Score:1)
I'd imagine Facebook puts more resources into keeping the tracking and Ad-serving hardware 100% operational. The rest of the infrastructure is just the chicken feed sprinkle.
Re: (Score:2)
The rest of the infrastructure is just the chicken feed sprinkle.
That "chicken feed sprinkle" is precisely what the customers are paying for. Facebook is not just selling ads, they are selling everything you type.
testing is for design problems, too (Score:4, Interesting)
it doesn't matter how many redundant servers you have, if they are all going to fail in the same way
Be highly available in software, not hardware (Score:5, Insightful)
I suspect open compute project welcomes additional testing resources for the benefit of everyone... as long as it doesn't involve an oppressive amount of process that simply serve to slow down progress.
But... Web scale IS different, so I can't blame the main sponsors for not prioritizing what isn't as important to them. Once you accept that ALL hardware fails, and that you can either pay more for more reliable hardware, or you can develop better software architecture to handle failures, you look at things differently. Spend your money once on good software engineering, instead of over and over on every server.
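At its most basic, "handle failures in software" just means refusing to depend on any single box. A minimal sketch; the host names and the send() callable are hypothetical placeholders, not part of any OCP tooling:

    import random

    REPLICAS = ["app-01.example", "app-02.example", "app-03.example"]

    def fetch_with_failover(request, send):
        """Try each replica in random order; return the first successful response."""
        last_error = None
        for host in random.sample(REPLICAS, len(REPLICAS)):
            try:
                return send(host, request)   # send() stands in for your RPC/HTTP client
            except OSError as err:           # treat connection errors as "node is down"
                last_error = err
        raise RuntimeError("all replicas failed") from last_error

Real systems layer retries, health checks, and data replication on top of this, which is where the "good software engineering" budget actually goes.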
Re: (Score:2)
Once you accept that ALL hardware fails, and that you can either pay more for more reliable hardware
If you have all the same hardware and it's not adequately tested, then all of your hardware is vulnerable to the same issues, and your application will possibly fail on all of them! Throwing more hardware at the problem just means more failures.
or you can develop better software architecture to handle failures
How can you develop software to work around systemic hardware problems? How can you write software that automatically detects if your floating point hardware is always correct? You say "do it on multiple systems and compare the results" but what if they all hav
Re: (Score:2)
Obviously your designs need to be tested.
The implementations need to be tested, too. All chips are not created equal: this batch works great, the next batch fails under certain circumstances. Without actual ongoing hardware tests you won't catch it.
Who needs OCP? (Score:3)
MongoDB is Web-scale.
Test engineer says... (Score:3)
Test engineer says... big companies need to hire more test engineers.
Are we surprised?
Re: (Score:2)
the reality of massive system outages affecting NYSE and airlines says that more test engineers are needed
If you'd been watching the attack maps, (Score:2)
If you'd been watching the attack maps, you'd know that:
(1) It's China
(2) It's likely at the government level
If you'd been watching current events, you know that:
(3) China's economy has been crashing, going on three weeks now
(4) They're really unhappy about people taking money out of, and shorting, Chinese stocks, adding to the crash
(5) They've lost $3.25T in market cap since June 12th
(6) That's just over 20% of their Gross National Product
So it's likely they are attacking our financial markets over that.
Se
Re: (Score:1)
Software engineers say 'give us much more money to make software that is ten times as complex, so you can run it on cheap hardware.'
Are we surprised?
The trick is, robust hardware is a one-time investment: you design it, you test it, then you build quality metrics into the manufacturing process, and you're done. Complicated software to accommodate less robust hardware is bigger, more complex, and thus more prone to software bugs. And you fix it by making it even more complex.
But the software guys will be there