8TB Drives Are Highly Reliable, Says Backblaze (yahoo.com) 209
An anonymous reader writes from a report via Yahoo News: Cloud backup and storage provider Backblaze has published its hard drive stats for Q2 2016. Yahoo News reports: "The report is based on data drives, not boot drives, that are deployed across the company's data centers in quantities of 45 or more. According to the report, the company saw an annualized failure rate of 19.81 percent with the Seagate ST4000DX000 4TB drive in a quantity of 197 units working 18,428 days. The next in line was the WD WD40EFRX 4TB drive in a quantity of 46 units working 4,186 days. This model had an annualized failure rate of 8.72 percent for that quarter. The company's report also notes that it finally introduced 8TB hard drives into its fold: first with a mere 45 8TB HGST units and then over 2,700 units from Seagate crammed into the company's Backblaze Vaults, which each include 20 Storage Pods containing 45 drives apiece. The company moved to 8TB drives to optimize storage density. According to a chart provided in the report, the 8TB drives are highly reliable. The HGST HDS5C8080ALE600 worked for 22,858 days and only saw two failures, generating an annualized failure rate of 3.20 percent. The Seagate ST8000DM002 worked for 44,000 days and only saw four failures, generating an annualized failure rate of 3.30 percent." For comparison, Backblaze's reliability report for Q1 2016 can be found here.
UPDATE 8/2/16: Corrected Seagate Model "DT8000DM002" to "ST8000DM002."
Yeah, but... (Score:5, Funny)
Re: Yeah, but... (Score:2)
Re: (Score:2)
Re: (Score:2)
This much capacity is overkill for music storage, and only useful in the homes of pr0n enthusiasts. SATAn storage.
Re: (Score:2)
to fix that, just run your 45's at 33.
(GOML)
Re: (Score:2)
speaking of music, I have a hard drive based music player in my car and it's been in the car since about 2003. it had whatever was the best IDE (not sata, too early for sata) notebook drive of its time. I put as much mp3 fileage on that as would fit.
in all these years of hot and cold (bay area, but still we get some hot days when the car is left out in the sun) and vibration from daily driving, I have yet to replace that drive! it could have been a samsung or ibm or maybe hitachi. funny enough, none of t
Re: (Score:2)
I have a 1.5TB Seagate drive that is still spinning.
You can manage to get lucky even with the most notorious hardware.
Re: (Score:2)
Reliability (Score:2)
Reliability is not so great an issue with RAID systems being what they are today. What the bean counters fail to consider is the cost in manpower required to replace Seagate drives on a constant basis: not just swapping them in the racks, but processing RMAs or handling the proper destruction and disposal of drives which may contain sensitive data.
I wonder how those numbers would look if the other vendors were given an equal analysis period. I know WD was mentioned, but it didn't appear they got an equal share.
Also: First. :)
Re: (Score:3)
OTOH, given SSDs and the inability to guarantee the erasure of all data on the drive, unencrypted data should never hit the drives at all, and the key should of course also never be stored on the same media (unencrypted).
That said, only my newer systems use encrypted volumes. My old drives I take apart and shatter/melt the platters.
Re: (Score:2)
the inability to guarantee the erasure of all data on the drive, unencrypted data should never hit the drives at all
Wow, that's not something I had considered. Thanks for that bit of info!
Re: (Score:2)
My old drives I take apart and shatter/melt the platters.
I use a drill press to bore 8 or 10 holes straight through my old drives, then for fun I hit 'em with a hammer a few times while whispering my ex-wife's name. If the CIA/NSA/FBI wants my data bad enough to recover it after that, they're welcome to it.
Re: (Score:2)
Re: (Score:2)
Had to destroy a drive that had a lot of student data on it.
Used a FN-FAL and 147 grain FMJ bullets at about 2700 feet per second.
Re: (Score:2)
Had to destroy a drive that had a lot of student data on it.
Used a FN-FAL and 147 grain FMJ bullets at about 2700 feet per second.
Yeah, I used to take them out and shoot them (it's fun) but I got tired of picking up all the little bits and pieces of the hard drive. I don't want to litter my shooting areas with fragments of stuff like that. But it is fun to shoot a hard drive and watch it turn to metallic confetti. :)
Re: (Score:2)
OTOH, given the inability to guarantee the erasure of all data on any drive, unencrypted data should never hit the drives at all, and the key should of course also never be stored on the same media (unencrypted).
FTFY.
You are absolutely correct though -- you should never rely on making data inaccessible via erasure instead of via encryption.
Incidentally, the ST8000DM002s that we are talking about here support OPAL, which makes it trivial to "throw away the key" by sending the drive a reset-DEK command.
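For anyone curious how "throwing away the key" makes data unrecoverable, here is a minimal Python sketch of the principle using the third-party cryptography package. It only illustrates the concept; it is not how OPAL's DEK handling is implemented inside the drive, and the variable names are placeholders.

    # Illustration of crypto-erase: if only ciphertext ever hits the disk,
    # destroying the key is equivalent to erasing the data.
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()                  # stands in for the drive's DEK
    cipher = Fernet(key)

    plaintext = b"sensitive customer records"
    stored_on_disk = cipher.encrypt(plaintext)   # what actually lands on the media

    # Normal operation: the key is available, reads are transparent.
    assert Fernet(key).decrypt(stored_on_disk) == plaintext

    # "Reset-DEK" equivalent: forget the key. The ciphertext is still physically
    # present but is now computationally unrecoverable.
    key = None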
Re:Reliability (Score:5, Insightful)
OTOH, given SSDs and the inability to guarantee the erasure of all data on the drive,
Wow, SSD even survives incinerators? Where I used to work, the policy for drives was to open them up and strip them for their magnets, then have magnet fun. The platters made good frisbees, but the problem is that they go through car windows, and the dents in cars are deep, so frisbee with care.
Re: (Score:3)
And they can hurt too. Not that I'd have any personal experience with that....
Re: (Score:2)
Re: (Score:2)
HDDs are even worse. They silently remap blocks all the time, with no way to erase the data off the old partially-failed ones. At least most SSDs use encryption, and doing an ATA secure erase command will wipe and regenerate the key.
Also, SSDs are much easier to physically destroy in a shredder or with a blowtorch. Much less metal and armour on them.
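As a rough sketch of the ATA secure erase path mentioned above, the commonly documented hdparm sequence looks like the following; the device path and password are placeholders, the drive must not be in the security-frozen state, and this is illustrative rather than a recipe.

    # DESTRUCTIVE: issues an ATA Security Erase, wiping the whole device.
    # Run only against a disposable test drive.
    import subprocess

    DEVICE = "/dev/sdX"   # placeholder device node
    PASSWD = "p"          # temporary security password, cleared by the erase

    def run(cmd):
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    # 1. Inspect the drive; the output should report "not frozen".
    run(["hdparm", "-I", DEVICE])

    # 2. Set a temporary user password to enable the security feature set.
    run(["hdparm", "--user-master", "u", "--security-set-pass", PASSWD, DEVICE])

    # 3. Issue the secure erase; on self-encrypting drives this typically
    #    regenerates the internal key rather than overwriting every sector.
    run(["hdparm", "--user-master", "u", "--security-erase", PASSWD, DEVICE])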
Re: (Score:2)
Re: (Score:3)
Re: (Score:2)
Jesus H. Jeremiah Christ, I just installed one of those in a ProxMox virtualisation machine on the weekend. Will have to pull it tonight, thanks for the heads-up.
Re: (Score:2)
If you are looking at the economics of this purely as a labor/parts issue, you're already behind the curve.
The real value is the data on the drives, and how much you'll miss that data if and when it goes away. The problem is, nobody values their data, until it is gone. And then it is too late.
Re: (Score:2)
Re: (Score:2)
IMHO raidz2 is a better idea but it's not what they are doing.
Re: (Score:2)
Re: (Score:3)
Reliability is not so great an issue with raid systems being what they are today.
At the scale Backblaze is talking about, I would say it's an issue. Somebody has to keep all those drives in stock and walk back to a cage to replace them. It's not data loss we're worried about here, it's costs.
Re: (Score:2)
And that's exactly what I stated in my post too.
Re: (Score:2)
At the cost of these drives, isn't it just cheaper to rack the arrays on wheels, and shove them out the back door into the recycling trailer?
Re: (Score:2)
Even for a small home array, it's terribly annoying when all of your drives die young and all at the same time (Seagate).
Correct Seagate 8 TB Model (Score:2)
Re: (Score:2)
Re:Correct Seagate 8 TB Model (Score:5, Funny)
I've found one! The mythical Slashdot editor who edits. All hail the editing editor!
Re: (Score:2)
Not SSD Drives (Score:2)
These are all platter drives, but you can only discover that in the comments at TFA.
There are so few 8TB HGST drives, and they're so new, that the current data about them is statistically insignificant/unreliable, as is any model with fewer than 500 units and 200k drive days.
Re: (Score:2)
Why in heaven's name would anyone think that a cheap cloud backup company would be installing 8 TB SSDs in their massive storage arrays? Those things are thousands of dollars per drive!
Re:Not SSD Drives (Score:4, Interesting)
The numbers in the summary come from different places, because the first chart in the linked article, for the April-June quarter says:
Seagate 8TB, 2720 drives, 35840 drive days, 3 failures (13 days average per drive, 3% annual failure rate)
HGST 8TB, 45 drives, 3825 drive days, 0 failures (85 days average per drive, 0% annual failure rate)
The second chart, from April 2013 through the end of June, doesn't show drive numbers, just days, failures, and rates. The numbers in the summary seem to be pulled from both.
Assuming that the 8TB drives stay in use until they die, here's where the stats seem to come from (drive days / # of drives; drive days pulled from the "all time" chart, # of drives from the latest quarter chart):
22858/45= 507 days average use HGST HUH728080ALE600
44000/2700= 16 days average use Seagate ST8000DM002
Now, anyone experienced with Seagate wouldn't expect that 3.3% annualized failure rate to still be that low in another year and a half. The HGST rate, by contrast, _is_ the rate after almost a year and a half.
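For what it's worth, the annualized rates quoted in the summary and the charts do reproduce from the failures and drive days with the formula Backblaze appears to use (failures per drive-year, as a percentage); a quick Python sketch:

    # Annualized failure rate = failures / (drive days / 365), expressed as a percent.
    def afr(failures, drive_days):
        return 100.0 * failures / (drive_days / 365.0)

    # Figures cited in the summary and the Q2 chart:
    print(afr(2, 22858))   # HGST 8TB, all time     -> ~3.19%
    print(afr(4, 44000))   # Seagate 8TB, all time  -> ~3.32%
    print(afr(3, 35840))   # Seagate 8TB, Q2 only   -> ~3.06%
    print(afr(0, 3825))    # HGST 8TB, Q2 only      -> 0.0%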
High failure rate (Score:5, Insightful)
"...the company saw an annualized failure rate of 19.81 percent with the Seagate ST4000DX000 4TB drive"
A failure rate of almost 20% in a data center? Geez, that's pathetic.
A temperature-controlled environment, clean power, low shock and vibration, and 1 out of 5 still fails? Remind me never to buy Seagate. Oh, wait, I already vowed never to buy another Seagate- about 10 years ago after experiencing their unequaled propensity to die fast and hard.
Maybe other people have had better luck with Seagate than I have, but for me they've always been disappointing.
Re: (Score:2)
Re: (Score:2)
I had one of those. 1 year warranty. And it died in 14 months.
Thanks for this. I was just about to read the article, till I saw your comment that you had 1 and it failed in 14 months. Saved me all the trouble.
Re: (Score:2)
Depends on how old the drives are. When I was working in a data centre I was having a lot of hard drive failures, but they were laptop SCSI drives in blades that had been running continuously for over three years, so it was expected for them to be hitting their end of life. (It was around 2007, so that's why those were the drives.)
So if the 4TB drives are a few years old and getting a lot of use then I can see why they would be failing at a high rate. If they are newer then I would be worried. I'm more concern
Re: (Score:2)
Perhaps they don't keep the temperature as cool as they should in order to save a few bucks?
That could be, but the other brands were failing at a much lower rate so it does make you wonder about the overall longevity of Seagate drives.
Re: (Score:2)
Their results should be taken as a strong indication and not as 100% reliable unless you have a situation very similar to Backblaze's.
Re: (Score:2)
They are using consumer drives for data center needs, this is the big reason their failure rate is relatively high. Still, with the redundancy, it is cheaper to run this way. Rumor is that Google ran that way with off the shelf computers. Use dirt cheap commodity products that are good quality, have exceptional redundancy, throw them away as they implode.
Re: (Score:2)
Is there a real mechanical difference between "consumer" and "enterprise" drives, these days, at the bleeding edge of the storage-per-unit curve?
Mostly I see differences in firmware, which (IMHO) ought to be end-user selectable anyway.
(Before anyone replies, I chose those words carefully to avoid outliers like Raptor little-drive-in-a-big-heatsink configurations, or any other stuff that puts any metric other than capacity-per-dollar as the primary criterion.)
Re: (Score:2)
Yes. There are specific mechanical differences in build quality around stability and vibration dampening between enterprise and consumer level drives. It's more than just flashing some different firmware (but that may be a part of what differentiates drives).
The best indicators are length of warranty and specification of purpose, in my experience.
Re: (Score:2)
Don't you mean "damping"?
And isn't the spindle motor still affixed firmly to the chassis, which is affixed to the enclosure?
Re: (Score:2)
Don't you mean "damping"?
That's one of my pet peeves too, but we've lost that war and now we'll never know for sure if people mean getting things soggy or cancelling oscillations.
Re: (Score:2)
They have a few things about their "pod" design on their website and if you look at it you will see that you are correct. It looks like an utterly insane design until you consider that the things are mostly idle, so typically don't generate a lot of heat, and that they have distributed servers with distributed workloads so they can afford to lose one entirely for a while.
Re: (Score:2)
Having owned a lot of the Seagate 2-3TB drives, 20% is way too low in my experience. I think I have 2-3 still running out of a batch of 8, including RMAs.
Re: (Score:3)
If you wrote off every manufacturer that hit a 20% annualized failure rate you would now be unable to buy any drives.
Re: (Score:2)
True, but Seagate are particularly bad as they have a history of releasing unreliable drives on a regular basis and then just endlessly swapping them for more unreliable drives until the warranty expires.
For individuals (not datacentres) HGST is the best bet. As well as being generally very reliable they do proper testing and fix their problems. They might cost a bit more, but what's a few quid here and there to avoid all that hassle?
Re:High failure rate (Score:5, Informative)
A temperature-controlled environment, clean power, low shock and vibration, and 1 out of 5 still fails
The density and structure of a pod mean it is only "temperature-controlled" in the sense that it is going to get hot, quickly.
Remind me never to buy Seagate.
From the numbers Backblaze releases you'll actually see that you shouldn't buy one particular desktop model of hard drive for your "datacenter." Numbers like these are quite fascinating in that you can analyze them: you can find which models from any vendor to prefer or avoid.
Oh, wait, I already vowed never to buy another Seagate- about 10 years ago after experiencing their unequaled propensity to die fast and hard.
Sorry to hear about your loss. I hope you kept backup copies. If not, I hope it taught you that if you don't have a copy then you don't have a backup.
It is certainly reasonable to avoid a vendor when a lot of their products from many lines have defects at a given time. Seagate's desktop line certainly took a hit from the initial Backblaze numbers. The DM1000's huge failure rate is almost as legendary as the IBM Death Star line or the Maxtor click-of-death. But stuff from before or after a given run may have better or worse quality. And of course even manufacturers can get batches of bad parts. (Hidden variables like that are one of the reasons why the singular of data isn't anecdote.)
I also wonder if we'll ever get numbers from Backblaze on things like the actual temperature, decibels and power these drives lived through, beyond just avoiding a particular model. It would be nice to know how hot, loud and nasty you can get before your commodity-class storage starts pooping out.
Re: (Score:2)
We got some numbers earlier from another story but they were entirely useless since they were average temperatures on machines that are idle most of the time. Maximums could tell us something useful. I won't bother linking to the earlier story because it was like a high school project. If it wasn't for their niche use of distributed archiving where their machines are unlikely to get very hot (but possibly individu
Re: (Score:3)
> I also wonder if we'll ever get numbers from Backblaze on things like the actual temperature
The raw data dump includes drive temperatures as reported by "smartctl". You can find a dump here: https://www.backblaze.com/b2/h... [backblaze.com]
We analyzed the failures correlated with temperature in this blog post in 2014: https://www.backblaze.com/blog... [backblaze.com]
In a conversation with some of the Facebook Open Storage people, they said hard drives have
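Incidentally, the raw data dump is a set of daily per-drive CSV snapshots, so a per-model temperature summary of the kind asked for above is a short pandas exercise. A minimal sketch follows; the column names ('model', 'failure', and the SMART 194 temperature attribute) and the directory path are assumptions taken from memory of the published schema, so check them against the actual files.

    import glob
    import pandas as pd

    # One row per drive per day; SMART attribute 194 is the reported temperature (C).
    cols = ["date", "model", "failure", "smart_194_raw"]
    paths = glob.glob("data_Q2_2016/*.csv")      # placeholder path to the extracted CSVs
    df = pd.concat((pd.read_csv(p, usecols=cols) for p in paths), ignore_index=True)

    summary = df.groupby("model").agg(
        drive_days=("failure", "size"),
        failures=("failure", "sum"),
        mean_temp_c=("smart_194_raw", "mean"),
        max_temp_c=("smart_194_raw", "max"),
    )
    summary["annualized_failure_pct"] = 100 * summary["failures"] / (summary["drive_days"] / 365)
    print(summary.sort_values("annualized_failure_pct", ascending=False))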
Re: (Score:2)
They seemed fine to me until they bought Maxtor in 2006; after that you never knew what you were going to get: a Maxtor with a Seagate badge, or an HDD that might have less than a 20% annual failure rate in the first year.
I'd guess that since then they've closed all the Seagate factories and run exclusively from the cheaper Maxtor facilities. (All of that is a guess, but MBAs always think reducing cost > *, so it's probably in the ballpark.)
Re: (Score:2)
Not exactly. Take a look at their web page and their "pod" design. They have jammed in drives where they will fit but they have very different loads to a normal data center so can get away with it most of the time. Unlike a normal data center they will never be running all the drives in a "pod" flat out so something that would be a smoking mess elsewhere merely has drives in the middle that cannot shed heat properly.
That's not to excuse Seagate, it's just to point out
Re: (Score:2)
Re: (Score:2)
I suggest you take a look at their description of how they pack their disks in and you will understand the heat issue I mentioned above.
Re: (Score:2)
If you are doing a file server, then SATA port multipliers are more than adequate, especially for what Backblaze are doing. Let me put it this way: 45 drives is 9 SATA port multipliers, which at 3Gbps SATA is a total of 27Gbps throughput, more than enough to saturate two 10Gbps Ethernet links.
However you can get 6Gbps SATA port multipliers these days, and Backblaze's latest pods have 60 drives, so that is 72Gbps, which is nearly enough to saturate a couple of 40Gbps Ethernet links.
People always and I mean *ALWAYS* overestimat
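The arithmetic behind those figures, assuming 5 drives behind each port multiplier (the 45/9 and 60/12 split implied above), is just:

    def pod_throughput_gbps(drives, drives_per_multiplier, link_gbps):
        multipliers = drives / drives_per_multiplier
        return multipliers * link_gbps

    print(pod_throughput_gbps(45, 5, 3))   # classic 45-drive pod, 3Gbps SATA -> 27.0 Gbps
    print(pod_throughput_gbps(60, 5, 6))   # 60-drive pod, 6Gbps SATA         -> 72.0 Gbps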
Re: (Score:3)
> I think their pods only have GigE interfaces
Originally (up until 3 years ago) that was true, but all new pods have 10 GbE interfaces, and 100% of the pods in our "Backblaze 20-pod Vaults" have 10 GbE interfaces. And there are some really strange (and wonderful) performance twists to using 20 pods to store each file: when you fetch a 1 MByte file from a vault, we need 17 pods to respond, each supplying only about 60 KBytes, to reassemble the complete file from the Reed-Solomon shards.
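The shard math behind that 60 KB figure, for a 17-data / 3-parity Reed-Solomon layout spread across a 20-pod vault (a back-of-the-envelope sketch, not Backblaze's actual encoder):

    import math

    DATA_SHARDS = 17      # any 17 of the 20 shards can rebuild the file
    PARITY_SHARDS = 3
    TOTAL_SHARDS = DATA_SHARDS + PARITY_SHARDS

    file_bytes = 1_000_000                        # the 1 MByte example above
    shard_bytes = math.ceil(file_bytes / DATA_SHARDS)
    stored_bytes = shard_bytes * TOTAL_SHARDS

    print(shard_bytes)                # ~58,824 bytes, i.e. roughly 60 KB per pod
    print(stored_bytes / file_bytes)  # ~1.18x storage overhead
    print(f"tolerates the loss of any {PARITY_SHARDS} of {TOTAL_SHARDS} pods")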
Re: (Score:2)
...so then you must obviously be able to point to some recent notorious models like the 1.5TB and the 3TB Barracudas. You should be able to do this off the top of your head. HELL, you should have included an example in your post.
Although I suspect that you're just talking BS.
I think my first batch of WD replacements are coming up on the age at which my last batch of Seagates started to become bothersome.
22,858 and 44,000 days?!? (Score:5, Funny)
Re: (Score:3)
Drive days.
1000 drives for 44 days each would get you 44000 drive days.
Riiiiiight. (Score:2)
My "prediction" is it will most likely be that there is an 70% failure rate with Seagate being the top offender.
Re: (Score:3, Insightful)
Come back in 3 or 5 years and tell me out of all the 8TB sold in 2016/2017 just how many are still functional and THEN what the failure rate is/was.
My "prediction" is it will most likely be that there is an 70% failure rate with Seagate being the top offender.
By then the data is worthless to anybody except the manufacturer. We necessarily have to accept a deficit of statistical quality to make forward predictions that are actually worth something, like knowing, if I'm building a SAN, what drives I should buy.
In 5 years, I'm not going to be buying 8TB drives, so knowing what the failure rate for some 8TB drive was is inconsequential. Either HDDs continue to improve and I buy 32TB or larger HDDs, or they don't, and I'll be filling my SAN with 8TB or larger SSDs, Xp
Re: (Score:2)
If it's working for them (Score:5, Insightful)
Take it with a grain of salt when Backblaze say a drive is crap since it may only be crap in their very hostile environment, but if they didn't break it then it's very likely to work well anywhere.
Re: (Score:2)
Take it with a grain of salt when Backblaze say a drive is crap since it may only be crap in their very hostile environment, but if they didn't break it then it's very likely to work well anywhere.
What's the typical drive temperature in Backblaze's cases in their environment?
Re: (Score:2)
They are not saying, apart from an entirely useless average for machines that are idle a lot of the time with drives spun down. I'm not entirely sure they know or care what their maximums are or how long the drives run hot.
I suggest taking a look at their web pages that describe their pod designs to get a better idea of the situation instead of just taking my word that they shove drives in wherever they will fit without taking h
Re: (Score:3)
"Short answer: the coolest drives are 21.92 Celcius and the hottest drive was 30.54 degrees."
Based on the Google stats from a few years ago it was pretty clear that drive temperature was only a problem above 55C
I target 45C as the allowable maximum and 35C as normal, with no apparent increase in mortality over colder temperatures, but that saves a lot in terms of running the cooling plant. The batches of Seagate Constellations we had with stupidly high failure rates ran well under 30C.
For home use my fileserver'
Re: (Score:2)
It's the temperature excursions beyond the design limits that matter, not whether the average is 22C instead of 20C over months. So being hostile for a week is quite likely (a lot more than IMHO) to be why they have extremely high failure rates across the full range of brands and models (with some much higher than others, but all higher than what others seem to experience).
Re: (Score:2)
Temperatures beyond design limits do. I've seen it several times especially back when WD drives ran very hot and some people had inadequate server cases and/or no alarms when fans died.
80GB drives also reliable (Score:2)
I had a 1st-gen Seagate 80GB SATA drive fail last month after 11 years and change of 24/7 operation and very few power-off cycles.
Contrasting anecdote (Score:5, Interesting)
I'm an independent white-box NAS guy, and with the exception of the truly awful 1.5TB Seagate drives from 2008-2009 or so, I have not had any significant problems with them. I've got a few thousand 3 to 8 TB drives deployed with my clients, most of them cheap consumer drives (not even the "NAS" editions), and the annual failure rate is roughly 2% across all brands. This has been consistent for many years and I factor these stats into my costs and warranty projections. I have
The thing that bothers me about Backblaze, and the reason why I have a very hard time taking their results seriously, is the way they design their pods. They take a custom fabbed chassis, then fill it with the most ghetto components known to man: SATA port multipliers, ultra-low-end HBAs, dual "gamer" power supplies, very substandard cooling, and until recently they used super sketchy desktop boards. It's only last year that they finally changed the board for a Supermicro, primarily to get 10GbE very cheaply. For that same money, you can buy a ready-made 60-bay Supermicro chassis with redundant power and SAS - and a warranty. Hell, I bet SM would deliver directly to Backblaze's doorstep *and* give them a friendly discount.
Anyway... epic digression aside, when people ask me which brand is better, I tell them to buy whichever has the best warranty. A hard drive *will* die, the question is when, so the only logical course of action is to plan around its inevitable demise by keeping backups and redundancies, and learning the ins and outs of the RMA process.
Re: (Score:2)
Re: (Score:2)
What I've learned... (Score:4, Insightful)
What I've learned from reading the comments here is that people are just as clueless when it comes to storage reliability as they ever were, and are just as capable of throwing the baby out with the bathwater as at any other time.
Dear Slashdot: Never change.
The biggest drives always are (Score:3)
the most unreliable.
That is why you buy in the sweet spot for best value and let someone else prove new technologies and HD densities for you.
Re: (Score:2)
Yeah I always shop for best value. So I now have 8TB drives in my system.
Oh, what, you didn't realise that 8TB SMR drives were the cheapest per megabyte before posting?
Re: (Score:3)
More like Seagate 8TB not being utter trash (like so many other Seagate drives).
Re: (Score:2)
Re: (Score:2)
It's simply down to where they position themselves in the market. HGST cost a little more but you get better testing and reliability. Seagate are cheaper but more hit-and-miss. If your product is at the low end of the market, a basic DVR sold in supermarkets at the lowest possible price, you fit a Seagate drive and don't worry too much about failures after the 1 year warranty period. If it's a quality product that sells for a little more, you fit a Hitachi drive and your reputation for reliability increases.
Re: (Score:2)
If you can stand some failures and upgrade your drives regularly anyway, Seagate might be cheaper for you. If you prefer reliability and long term data retention, pay a little more for HGST.
Makes sense. At the start-up where I'm currently chief engineer, we're foreseeing a storage problem in a couple of months, and writing down specs and requirements for a long-term storage solution. I'm leaning towards the HGST solution, for reasons of long-term data retention: it will definitely be a use case that a customer asks for 5- or 10-year-old data, and we don't want to be unable to fulfill that request because we were penny wise, pound foolish on Seagate. YMMV.
Re: (Score:2)
HGST n = 45
Seagate n = 2700
I'd have much more confidence in the Seagate sample's rating.
Re: (Score:3)
They're both terrible numbers, though perhaps not terrible by Seagate standards. The best of the HGST 4 TB drives had an annualized failure rate of only 0.4%. If these numbers are correct, then these drives are about an order of magnitude less reliable than previous generations of hardware....
Of course, the confidence intervals on these numbers are huge. On the low end, t
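To put numbers on how wide those intervals are, here is a sketch of an exact Poisson confidence interval on the annualized failure rate (the standard chi-square formulation; the failure counts and drive days are the ones from the charts above):

    from scipy.stats import chi2

    def afr_confidence_interval(failures, drive_days, conf=0.95):
        """Exact Poisson CI for the annualized failure rate, in percent."""
        drive_years = drive_days / 365.0
        alpha = 1.0 - conf
        lo = 0.0 if failures == 0 else chi2.ppf(alpha / 2, 2 * failures) / 2
        hi = chi2.ppf(1 - alpha / 2, 2 * (failures + 1)) / 2
        return 100 * lo / drive_years, 100 * hi / drive_years

    print(afr_confidence_interval(2, 22858))   # HGST 8TB:    roughly 0.4% to 11.5%
    print(afr_confidence_interval(4, 44000))   # Seagate 8TB: roughly 0.9% to 8.5%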
Re:comment (Score:5, Insightful)
If you've got 3,000 drives at home to come up with directly home applicable numbers, then please share them.
This is mostly useful to compare models vs models as the environment is kept the same.
It's completely legitimate to say model X is more reliable than model Y; it's not valid to say model X has a Z% failure rate in a home environment, however.
Re: (Score:2)
I've got a cool story, bro. We put a NAS device online in our new data center with 4 Seagate Archive 8000 drives in it, and 2 of them died within 24 hours, trashing the RAID array. Thankfully, since it was a new NAS, it wasn't a big deal.
Re: (Score:2)
You wot mate?
Seagate Archive drives are designed for cold storage, as they say 6 times on their web page for the drive. If you don't know what "cold storage" means, it means "not RAID".
So, you build a RAID array out of drives designed for "not RAID", and they started failing on you. And this is somehow Seagate's fault? The mind boggles.
Re: (Score:2)
Drives of that size are no longer limited to gimped "archival" roles.
On the one hand, a drive is probably likely to be more reliable when you pamper it and don't really do much with it. On the other hand, I've had plenty of Seagates fail in just that kind of use case.
Gimped archive disks? Who cares if they are reliable or not?
Re: (Score:2)
SMR drives are different: the S is for Shingled. It's an oddball recording technology that requires an entire band of overlapping tracks to be rewritten to change any block. They're really the worst choice for random write patterns.
Re: (Score:2)
Aren't we all building RAID arrays with non-RAID disks for SMB and home use nowadays anyway?
If you're being pedantic and take RAID to mean "Redundant Array of Inexpensive Disks" then the Seagate Archive is in fact the most ideal candidate.
I wouldn't, though, but that doesn't mean he shouldn't.
Re: comment (Score:2)
No. The archive disks aren't designed for a long duty cycle. They are meant to have data dumped onto them and then work as a read only disk.
The constant usage of a RAID array will cause drive failure via thermally induced URE in short order
Re: (Score:2)
Drives designed for RAID use typically have different firmware which reacts differently to issues: RAID-friendly drives give up on error recovery sooner, meaning the controller is less likely to drop them from the array over correctable errors. Put a drive not intended for RAID use in an array and you will see more drives dropped over drive-level correctable errors.
Archival disks are one of the drives you will see this issue with.
Re: (Score:2)
I believe that WDC designs RED drives from 1 to 4TB for systems with 5 drives or less.
WD RED drives are available in 6TB and 8TB. Regular drives are 5400RPMs with 64MB cache. Pro drives are 7200RPMS with 128GB cache.
Re: comment (Score:2)
Is that a typo or is there really 2000 times the cache?
Re: (Score:2)
Re: (Score:2)
Yeah, it will be a LONG time before I trust Seagate anything, after burning through every Seagate drive I've bought in the past 6 years. Every. Single. One. Not even heavy duty usage, some were just archival drives.
Utter trash. Ever since they bought Maxtor... they took a terrible turn for the worse.
I can't help but wonder if they saw an order for Backblaze... and said... gee guys, make sure QA sends only the most reliable bins to those guys.
Re: (Score:3)
Yes, it would be unit-days, as in nX = 22858, so each drive in the array (n) had an uptime of X = 22858/n. We know what n is: it's 45. Therefore X = 22858/45, about 508 days. The stated MTBF of the HGST enterprise-class drives is 2.5 million hours, which would put the expected time between failures in the array at about 2,314 days (2.5 million hours divided by the array size, converted to days). Backblaze instead saw two failures in roughly 508 array-days, or about one every 254 days.
So don't be impressed, this is actually a failure report.
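A quick worked version of the parent's comparison, using the same inputs (45 drives, 22,858 drive days, 2 failures, and the quoted 2.5 million hour MTBF):

    HOURS_PER_DAY = 24
    drives = 45
    drive_days = 22858
    failures = 2
    mtbf_hours = 2.5e6                 # vendor-quoted MTBF for the HGST enterprise line

    array_days = drive_days / drives                          # ~508 days of array operation
    expected_gap_days = mtbf_hours / drives / HOURS_PER_DAY   # ~2,315 days between failures
    observed_gap_days = array_days / failures                 # ~254 days between failures

    print(expected_gap_days, observed_gap_days)
    # The observed failure interval is roughly 9x shorter than the MTBF figure implies.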