Since I'm also guilty here with my huge array of Caviar Greens, let me also say that every few weeks I run a batch job that reads *all* data from that array. Why on earth would I need to periodically re-read 21TB of data from something that should already be super reliable? Here's the failure scenario that could play out if I didn't:
* Array starts off operating as normal, but drive 3 has a bad sector that cropped up a few months back. It has gone unnoticed because it sits in a rarely accessed file.
* During operation, drive 1 encounters a new bad sector.
* Since drive 1 is a consumer drive with no time limit on error recovery (no TLER/ERC), it goes into a retry loop, repeatedly attempting to read and correct the bad sector.
* The RAID controller exceeds its timeout threshold waiting on drive 1 and marks it offline.
* Array is now in degraded status with drive 1 marked as failed.
* User replaces drive 1. RAID controller initiates rebuild using parity data from the other drives.
* During rebuild, RAID controller encounters the bad sector on drive 3.
* Since drive 3 is also a consumer drive, it goes into the same retry loop on its own bad sector.
* The RAID controller exceeds its timeout threshold waiting on drive 3 and marks it offline.
* Rebuild fails. The array is now lost even though only one drive actually died; the latent bad sector on drive 3 did the rest.
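The cascade above can be sketched as a toy model. Everything here (drive names, sector numbers, the `Drive` class) is made up for illustration, and real controllers differ in the details, but it shows why the consumer-drive retry loop is what kills the rebuild rather than the bad sector itself:

```python
class Drive:
    def __init__(self, name, bad_sectors=(), tler=False):
        self.name = name
        self.bad_sectors = set(bad_sectors)
        # tler=True models enterprise error-recovery behavior:
        # give up quickly and report the error instead of retrying forever.
        self.tler = tler

    def read(self, sector):
        """True on success, False on a promptly reported read error.
        Raises TimeoutError when a consumer drive's retry loop outlasts
        the controller's patience."""
        if sector not in self.bad_sectors:
            return True
        if self.tler:
            return False  # error reported within the controller's timeout
        raise TimeoutError(self.name)  # retry loop; controller gives up

def rebuild(drives, total_sectors):
    """Model a rebuild: the controller must read every sector of every
    surviving drive. A TimeoutError means a drive got kicked offline."""
    unreadable = 0
    try:
        for d in drives:
            for s in range(total_sectors):
                if not d.read(s):
                    unreadable += 1  # stripe lost, but the rebuild continues
        return f"rebuild completed, {unreadable} unreadable sector(s)"
    except TimeoutError as e:
        return f"rebuild failed: {e.args[0]} marked offline"

# Drive 1 already failed and was replaced; drive 3 carries a latent bad sector.
survivors = [Drive("drive2"), Drive("drive3", bad_sectors={1234})]
print(rebuild(survivors, total_sectors=2000))
# rebuild failed: drive3 marked offline

# Same bad sector, but a TLER drive reports the error promptly, so the
# controller loses one stripe instead of the whole array.
survivors_tler = [Drive("drive2", tler=True),
                  Drive("drive3", bad_sectors={1234}, tler=True)]
print(rebuild(survivors_tler, total_sectors=2000))
# rebuild completed, 1 unreadable sector(s)
```

The periodic full-array read plays the same role as the scrub loop here: it forces every sector through `read()` while the array still has redundancy, so drive 3's bad sector gets found (and rewritten from parity) long before a rebuild depends on it.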