on how FC or high-end drives have up to 2.5 million hours and low-end drives have up to 500,000 hours MTBF values. 500,000 hours is roughly 57 years. Isn't that enough for any kind of IT investment? It also means that, statistically, only about one in every 57 drives in an array is likely to fail in a given year (often fewer).
So, for a typical user with eight disks per RAID-5 array, the chance of two disks failing on the same day is about 5/100,000, and the chance of two disks failing before restriping completes (assuming a five-hour rebuild) is roughly 1/100,000.
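The arithmetic behind those figures can be sketched under a simple constant-failure-rate (exponential) assumption. This is an illustrative model I am supplying, not the vendor's published math, but it reproduces the 57-year and roughly-1/100,000 numbers quoted above:

```python
# Rough probability sketch, assuming independent drives with a
# constant (exponential) failure rate of 1/MTBF. Real drives do not
# fail at a constant rate, so treat these as ballpark figures only.

MTBF_HOURS = 500_000        # low-end drive MTBF from the text
HOURS_PER_YEAR = 8_760
DRIVES = 8                  # drives in the RAID-5 array
REBUILD_HOURS = 5           # restripe window from the text

# 500,000 hours expressed in years: roughly 57
mtbf_years = MTBF_HOURS / HOURS_PER_YEAR

# Expected drive failures per year across the eight-drive array
annual_failures = DRIVES * HOURS_PER_YEAR / MTBF_HOURS

# Chance some drive fails in a year, then one of the remaining
# seven fails within the five-hour rebuild window
p_first = DRIVES * HOURS_PER_YEAR / MTBF_HOURS
p_second_during_rebuild = (DRIVES - 1) * REBUILD_HOURS / MTBF_HOURS
p_double = p_first * p_second_during_rebuild

print(f"MTBF in years:               {mtbf_years:.1f}")
print(f"Expected failures per year:  {annual_failures:.3f}")
print(f"P(double failure) per year:  {p_double:.2e}")
```

With these inputs the double-failure probability comes out near 1e-5, i.e. about 1/100,000 per year, in line with the estimate above.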
I see no problems with these specs even if they are true.
Drives are going to fail no matter what the MTBF stats say. The idea is to have as few failures as possible over time. Drives with higher MTBF specs tend to last longer than lower-MTBF drives because of the components used in manufacturing and the tolerances they are built to. The higher the MTBF, usually the higher the cost to make them. If failure were not an issue, then folks selling lower duty-cycle disks would not need to offer double parity.
I have worked in the storage business for a long time now, with a few different companies, and in my experience the solutions using higher-MTBF drives tend to have fewer disk failures over time. Even high-MTBF drives may have issues if there is a problem during manufacturing. (Remember those expensive tires blowing out and causing SUVs to roll over?)
If you're using a RAID solution for disk-based data archiving over a long period of time and you are using low-MTBF disks, then use ADG or another double-parity scheme to protect yourself. If you use a solution with high-quality disks that is capable of global sparing of failed drives, then standard RAID should be fine as long as you have enough spares allocated.
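The advice above can be quantified with the same illustrative constant-rate model used earlier (my own sketch, not a figure from the column): double parity protects data unless a third drive fails before the first rebuild completes, which multiplies in another very small factor.

```python
# Sketch of why double parity (e.g. ADG / RAID 6) helps with
# lower-MTBF disks: data survives a second failure during rebuild,
# so loss requires a THIRD failure in the same window.
# Constant-failure-rate model; numbers are illustrative only.

MTBF_HOURS = 500_000
HOURS_PER_YEAR = 8_760
DRIVES = 8
REBUILD_HOURS = 5

def p_next_in_window(remaining: int) -> float:
    """Chance one of `remaining` drives fails during the rebuild window."""
    return remaining * REBUILD_HOURS / MTBF_HOURS

# Chance some drive in the array fails during a year
p_first = DRIVES * HOURS_PER_YEAR / MTBF_HOURS

# Single parity: a second failure during rebuild loses data
p_loss_single_parity = p_first * p_next_in_window(DRIVES - 1)

# Double parity: loss needs a second AND a third failure in the window
p_loss_double_parity = (p_first
                        * p_next_in_window(DRIVES - 1)
                        * p_next_in_window(DRIVES - 2))

print(f"Single-parity annual loss risk: {p_loss_single_parity:.2e}")
print(f"Double-parity annual loss risk: {p_loss_double_parity:.2e}")
```

Under these assumptions the double-parity loss risk is smaller by roughly four orders of magnitude, which is why it is the safer choice for long-retention archives on cheaper disks.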
The views and opinions expressed by Christopher L. Poelker are his alone and not necessarily shared by Hitachi Data Systems.