This RAID rebuild time and risk estimator calculates how long a rebuild will take after a drive failure and how much risk of a second failure you carry during that window. Enter your RAID level, drive size, and array age to assess whether your current setup is safe enough.
When a drive fails in a RAID array, the rebuild process reconstructs the lost data across the remaining drives. This can take hours or days, depending on your drive size, RAID level, and how much load the array is under during the rebuild.
During the rebuild window, your array is running degraded. A second drive failure before the rebuild completes could mean total data loss (in RAID 5) or further degradation (in RAID 6). This estimator helps you understand how long your rebuild will take and how much risk you're carrying during that window.
[Interactive estimator: set your array configuration and rebuild speed to see estimated rebuild time, second failure risk, data at risk, usable array capacity, and mitigation tips.]
How we calculate: Rebuild time estimates are based on effective rebuild throughput, which accounts for controller overhead, parity recalculation, and workload contention. The high estimate uses worst-case speed (a full drive rebuild at the low end of the speed range); the low estimate uses best-case (a partial rebuild at the high end). Real-world rebuild speeds vary by NAS hardware, filesystem, and concurrent usage. These estimates represent typical ranges for consumer and prosumer NAS devices; enterprise RAID controllers may differ. Second failure risk assessments are qualitative guidance based on drive age and capacity trends, not statistical modelling.
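A minimal Python sketch of that logic, purely for illustration: the function name, the default 50-120 MB/s speed range, and the 60% partial-rebuild fraction are assumptions, not the tool's actual internals.

```python
def rebuild_hours(drive_tb: float, drives_remaining: int,
                  speed_low_mbs: float = 50.0, speed_high_mbs: float = 120.0,
                  partial_fraction: float = 0.6) -> tuple[float, float]:
    """Return (best_case, worst_case) rebuild time in hours.

    Worst case: a full rebuild (every block on every remaining drive is
    read) at the low end of the throughput range. Best case: a partial
    rebuild at the high end.
    """
    full_bytes = drive_tb * 1e12 * drives_remaining        # total data read
    worst = full_bytes / (speed_low_mbs * 1e6) / 3600
    best = full_bytes * partial_fraction / (speed_high_mbs * 1e6) / 3600
    return best, worst

# 4-drive RAID 5 with 8 TB drives: three survivors, ~24 TB to read.
best, worst = rebuild_hours(8, 3)
print(f"~{best:.0f} to ~{worst:.0f} hours")  # ~33 to ~133 hours
```

The worst-case figure matches the 8 TB row of the reference table below.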
Rebuild Time Reference: Seagate IronWolf NAS Drives in RAID 5
Estimated rebuild times for a 4-drive RAID 5 array on consumer Synology/QNAP NAS hardware. Times assume light concurrent load during rebuild.
| Drive size | Data read (3 remaining drives) | At 50 MB/s | At 80 MB/s | At 120 MB/s |
|---|---|---|---|---|
| 4 TB IronWolf | ~12 TB | ~67 hrs | ~42 hrs | ~28 hrs |
| 6 TB IronWolf | ~18 TB | ~100 hrs | ~63 hrs | ~42 hrs |
| 8 TB IronWolf | ~24 TB | ~133 hrs | ~83 hrs | ~56 hrs |
| 12 TB IronWolf | ~36 TB | ~200 hrs | ~125 hrs | ~83 hrs |
| 16 TB IronWolf | ~48 TB | ~267 hrs | ~167 hrs | ~111 hrs |
Consumer NAS hardware (Synology DS-series, QNAP TS-series) typically achieves 50-100 MB/s rebuild throughput under light load and 30-60 MB/s under concurrent use. Enter your specific throughput in the estimator above for a closer estimate.
What to Do During a RAID Rebuild in Australia
Source a replacement drive immediately. Mwave and PLE ship nationally within 1-3 business days; in regional areas, factor in 5-7 days. Don't wait for the rebuild to finish or fail before ordering; that extends your vulnerable window by the full shipping time.
Reduce concurrent workload. Pause Plex transcoding, backup jobs, Docker containers, and large file copies during the rebuild. Every competing read/write reduces rebuild speed and extends your exposure window.
Don't power off the NAS. An unclean shutdown mid-rebuild can corrupt the parity process and, on RAID 5, cause data loss. Keep the NAS on a UPS if available.
Verify your backup. Before the rebuild completes, confirm your off-site or backup copy is current and restorable. A failed rebuild on RAID 5 means your backup is your only copy.
Monitor SMART data on remaining drives. In Synology Storage Manager or QNAP Storage & Snapshots, run a SMART test on all remaining drives. Reallocated sectors rising under rebuild load signal the next failure; a scripted version of this check is sketched below.
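On a Linux-based NAS with smartmontools installed and shell access, that check can be scripted. A rough sketch, assuming hypothetical device paths (/dev/sda etc.) and a simplistic "any non-zero raw value" alert rule; Synology and QNAP expose the same data in their GUIs, and running smartctl typically requires root.

```python
import subprocess

# SMART attributes that commonly precede drive failure.
WATCH = ("Reallocated_Sector_Ct", "Current_Pending_Sector", "Offline_Uncorrectable")

def check_drive(dev: str) -> None:
    # `smartctl -A` prints the attribute table; the raw value is the last column.
    out = subprocess.run(["smartctl", "-A", dev],
                         capture_output=True, text=True).stdout
    for line in out.splitlines():
        fields = line.split()
        if len(fields) >= 10 and fields[1] in WATCH:
            raw = fields[-1]
            flag = "  <-- investigate" if raw != "0" else ""
            print(f"{dev}  {fields[1]}: {raw}{flag}")

for dev in ("/dev/sda", "/dev/sdb", "/dev/sdc"):  # the remaining drives
    check_drive(dev)
```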
AU Data Recovery Costs: If the Rebuild Fails
If a RAID 5 rebuild fails (URE, second drive failure, or power interruption), professional data recovery is the last resort.
Payam Data Recovery (Melbourne): RAID recovery typically $1,500-$5,000 AUD depending on failure mode and drive count. Free quote; no-recovery, no-fee policy.
Ontrack (AU operations): $2,000-$8,000+ AUD for RAID recovery. Enterprise-grade lab used for critical or large-scale failures.
The cost comparison makes the case clearly: one extra drive for RAID 6 (~$150-$510 AUD depending on size) versus thousands of dollars for professional recovery. For any array using drives 8 TB or larger, RAID 6 or SHR-2 is the right choice.
Frequently Asked Questions
Why do RAID rebuilds take so long?
A RAID rebuild doesn't copy data from a backup; it reconstructs the missing data by reading every block from every remaining drive and recalculating the lost information using parity. For a 16 TB drive in a 4-drive RAID 5 array, that means reading 48 TB of data from three drives while simultaneously writing 16 TB to the replacement drive. Consumer NAS hardware (Synology DS-series, QNAP TS-series) typically achieves 50-120 MB/s rebuild throughput, far below raw drive speeds, because the RAID controller's parity processing is the bottleneck.
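To see why a rebuild must read everything, here's a toy Python illustration of RAID 5's XOR parity, with four tiny byte strings standing in for whole drives. Real controllers repeat this reconstruction for every stripe on disk.

```python
def xor_blocks(blocks: list[bytes]) -> bytes:
    """XOR equal-length blocks together byte by byte."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            out[i] ^= byte
    return bytes(out)

d1, d2, d3 = b"AAAA", b"BBBB", b"CCCC"   # data blocks on three drives
parity = xor_blocks([d1, d2, d3])        # parity block on the fourth drive

# The drive holding d2 fails: rebuild it from every surviving block.
rebuilt = xor_blocks([d1, d3, parity])
assert rebuilt == d2
```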
Can I use my NAS during a rebuild?
Yes, but every file access competes with the rebuild process for drive bandwidth and controller resources. Streaming a 4K Plex movie during a rebuild could halve your rebuild speed, extending a 20-hour rebuild to 40+ hours. If your data is critical, reduce or pause non-essential workloads (media streaming, large file copies, backup jobs) until the rebuild completes. The shorter the rebuild window, the lower your exposure to a second failure.
What is a URE and why does it matter during rebuild?
A URE (Unrecoverable Read Error) occurs when a drive cannot read a sector even after retries. Consumer drives are typically rated at 1 URE per 10^14 bits read, roughly one error per 12.5 TB of data read. During a rebuild of a 4-drive RAID 5 array with 16 TB drives, you may read 48 TB from the remaining drives, making a URE statistically plausible. If a URE occurs during a RAID 5 rebuild, the rebuild fails and the data on that sector is unrecoverable. RAID 6 provides a safety net because its second parity block can compensate for a single URE. This is the strongest argument for RAID 6 on large drives.
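The arithmetic behind "statistically plausible" fits in a few lines. This naive model assumes independent, uniformly distributed errors at exactly the rated URE rate, a well-known oversimplification (drives often beat their rating), so treat the output as illustrative, not predictive.

```python
import math

def p_at_least_one_ure(tb_read: float, ure_rate_bits: float = 1e14) -> float:
    """Probability of >=1 URE while reading tb_read terabytes (Poisson model)."""
    expected_ures = tb_read * 1e12 * 8 / ure_rate_bits
    return 1 - math.exp(-expected_ures)

# 4-drive RAID 5 rebuild with 16 TB drives: ~48 TB read from the survivors.
print(f"{p_at_least_one_ure(48):.0%}")  # ~98% under this pessimistic model
```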
What's the difference between a hot spare and a cold spare?
A hot spare is a drive installed in your NAS and configured to automatically replace a failed drive; the rebuild begins immediately, often before you notice the failure alert. A cold spare is a replacement drive sitting on a shelf, ready to be manually installed. Hot spares dramatically reduce the risk window because the rebuild starts within minutes rather than hours or days (however long it takes you to notice the alert, source a replacement, and physically swap it). Most Synology and QNAP NAS units support hot spare configuration in Storage Manager; if you have an empty bay, configuring a hot spare is one of the highest-value reliability actions available.
Should I use RAID 5 or RAID 6 for large drives?
For drives 12 TB and larger, the consensus among storage professionals has shifted firmly toward RAID 6 (or SHR-2 on Synology). The reasoning: larger drives take longer to rebuild, and during that longer rebuild window the remaining drives are under sustained heavy load, exactly the conditions most likely to trigger a second failure or a URE. RAID 6 tolerates two simultaneous drive failures, providing a safety net that RAID 5 simply doesn't offer. The cost is one additional drive's worth of parity overhead: on a 4-drive array, you get 2 drives of usable space instead of 3. For most users with large drives, that trade-off is worth the protection.
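The capacity trade-off is easy to quantify. A minimal sketch, assuming equal-size drives (standard RAID 5/6; SHR-2 with mixed sizes allocates differently):

```python
def usable_tb(n_drives: int, drive_tb: float, level: str) -> float:
    """Usable capacity after parity overhead for RAID 5 (1 drive) or RAID 6 (2)."""
    parity_drives = {"raid5": 1, "raid6": 2}[level]
    return (n_drives - parity_drives) * drive_tb

print(usable_tb(4, 8, "raid5"))  # 24.0 TB usable, tolerates 1 failure
print(usable_tb(4, 8, "raid6"))  # 16.0 TB usable, tolerates 2 failures
```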
How can I reduce my rebuild risk?
Four practical steps: (1) Run scheduled scrubs monthly: Synology calls this Data Scrubbing, QNAP calls it RAID Scrubbing, TrueNAS calls it a Scrub. These detect UREs and bad sectors before a failure forces a rebuild. (2) Replace drives proactively when they reach 4-5 years old or when SMART attributes (reallocated sector count, current pending sector count) trend upward. (3) Configure a hot spare if you have an available bay. (4) Maintain a verified backup that is separate from your RAID array: RAID is not a backup, and a rebuild failure means relying entirely on that backup to recover your data.
What happens if the rebuild fails?
The outcome depends on your RAID level. On RAID 5 with one drive already down, a failed rebuild usually means total data loss for the array. On RAID 6, a failure during rebuild means you've lost your second layer of protection but the data is still intact, though critically vulnerable; you need to resolve the situation quickly. On RAID 1 or RAID 10, a failed rebuild of one mirror means data still exists on the surviving mirror, but you've lost redundancy. In all cases, a failed rebuild is exactly the scenario that makes backups essential. RAID protects against drive failure; backups protect against everything else, including RAID failures themselves.
Does rebuild speed depend on drive speed or the RAID controller?
Both, but the controller is usually the bottleneck on consumer NAS hardware. The controller must read from all remaining drives simultaneously, calculate missing data using XOR or Reed-Solomon parity, and write to the replacement drive, all while potentially servicing normal file access. Consumer NAS devices use ARM or low-power Intel processors that limit rebuild throughput to roughly 50-120 MB/s in ideal conditions. Drive speed only matters if it falls below the controller's rebuild rate, which is uncommon with modern HDDs. Upgrading to faster drives won't improve rebuild times if the controller is already the bottleneck.
Can I source a replacement NAS drive quickly in Australia during a live rebuild?
In metro areas, Mwave, PLE, and Scorptec offer next-business-day shipping and some have walk-in stores (PLE in Perth and Adelaide, Mwave in Sydney, Scorptec in Melbourne and Sydney). In regional areas, expect 3-7 business days from online retailers. For 8 TB and larger IronWolf/WD Red Plus drives, most major AU retailers carry stock, but availability for your specific model can vary. The only reliable way to eliminate the shipping wait is keeping a cold spare on the shelf, purchased when you built the array. An extra IronWolf 8 TB costs approximately $215-$265 AUD at current AU retail, a trivial cost compared to running RAID 5 degraded for a week while a replacement ships.