Is RAID 5 Still Safe With Large Drives?

RAID 5 was designed for drives that were far smaller than what NAS devices hold today. As drive capacities have grown to 8TB, 12TB, and beyond, the rebuild risk window has grown with them. Here is the honest assessment of when RAID 5 is still acceptable and when you should move to RAID 6.

RAID 5 with 4TB drives is reasonable. RAID 5 with 12TB or 16TB drives is a gamble. The problem is not RAID 5's design - it is drive capacity. As drives have grown larger, the time required to rebuild an array after a drive failure has grown with them. During that rebuild window, every surviving drive must be read in full. On large drives, the probability of encountering an unrecoverable read error (URE) on a surviving drive during that read approaches uncomfortable levels. An unhandled URE during a RAID 5 rebuild typically means a failed rebuild and, on most platforms, total data loss.

In short: RAID 5 is still acceptable with drives up to roughly 4-6TB, particularly if you use pro-tier NAS or enterprise drives with better UBER specs. For drives 8TB and above, RAID 6 is the recommended configuration. For drives 12TB and above, RAID 6 should be considered the minimum. The risk is not theoretical - it scales directly with drive capacity.

The URE Problem Explained

Every hard drive has a specification called the Unrecoverable Read Error (URE) rate - also written as UBER (Uncorrectable Bit Error Rate). This is the rate at which the drive is expected to encounter a bit that cannot be read, even after retries. For consumer and prosumer NAS-class drives, the typical rate is 1 unrecoverable error per 10^14 bits read. For enterprise and pro-tier drives, this is typically 1 per 10^15 bits.

In everyday terms: a consumer NAS drive is expected to encounter one unrecoverable read error approximately every 12.5TB of sequential reads (10^14 bits divided by 8 bits per byte is 1.25 x 10^13 bytes, or 12.5TB). A pro-tier or enterprise drive extends this to approximately 125TB.

During a RAID 5 rebuild, every surviving drive must be read in its entirety to reconstruct the data from the failed drive. The total data read from the surviving drives depends on the drive size and array configuration. In a 3-drive RAID 5 array with 12TB drives, rebuilding one failed drive requires reading approximately 24TB from the two surviving drives. At a consumer drive's URE rate of 1 in 12.5TB, the probability of hitting at least one unrecoverable error during that 24TB read is approximately 85%. If a URE occurs during the rebuild and cannot be corrected, the reconstruction cannot complete and, on most platforms, the entire array fails.
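The ~85% figure can be sanity-checked by modelling UREs as independent events occurring at the drive's specified rate - a simplifying assumption, since real errors can cluster. A minimal sketch, with the function name chosen here for illustration:

```python
import math

def rebuild_ure_probability(surviving_drives, drive_tb, uber_exponent=14):
    """P(at least one URE) when every surviving drive is read in full.

    Assumes UREs occur independently at the drive's specified UBER
    (1 error per 10^uber_exponent bits read), i.e. a Poisson model.
    """
    bits_read = surviving_drives * drive_tb * 1e12 * 8   # TB -> bits
    expected_errors = bits_read / 10**uber_exponent
    return 1 - math.exp(-expected_errors)

# 3-drive RAID 5, 12TB consumer drives: 24TB read across two survivors
print(f"{rebuild_ure_probability(2, 12):.0%}")  # ~85%
```

The same call with `uber_exponent=15` (a pro-tier spec) drops the result to roughly 17%, which is why drive tier matters so much in the sections below.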

How Risk Scales With Drive Size

The URE risk during a RAID 5 rebuild scales directly with the total data read from surviving drives. Larger drives mean more data to read, which means a higher probability of hitting a URE before the rebuild completes.

Estimated RAID 5 rebuild URE risk by drive size (3-drive array, consumer drives at 10^14 URE rate)

Drive size | Data read during rebuild | Approx. URE probability | Rebuild time (typical NAS)
-----------|--------------------------|-------------------------|---------------------------
4TB        | ~8TB                     | ~47%                    | ~18-24 hours
6TB        | ~12TB                    | ~62%                    | ~28-36 hours
8TB        | ~16TB                    | ~72%                    | ~36-48 hours
12TB       | ~24TB                    | ~85%                    | ~55-72 hours
16TB       | ~32TB                    | ~92%                    | ~72-96 hours
20TB       | ~40TB                    | ~96%                    | ~90-120 hours

These probabilities assume consumer NAS drives at the 10^14 UBER spec. The numbers look alarming but come with important context: a URE during rebuild does not automatically mean data loss in all cases. Modern RAID controllers and NAS operating systems can sometimes handle read errors gracefully, particularly for short error bursts. ZFS-based systems (TrueNAS, QNAP QuTS Hero) have better URE handling than ext4-based systems because of block-level checksums. And drives with better UBER specs (IronWolf Pro, WD Red Pro at 10^15) substantially reduce the URE probability.

But the directional finding holds: as drive capacity grows, the risk profile of RAID 5 degrades. The question is not whether RAID 5 is always unsafe - it is whether the risk level is acceptable for your data.

Rebuild Time: The Compounding Factor

The rebuild time problem compounds the URE risk. A RAID 5 array rebuilding a 4TB drive might take 18-24 hours. A 20TB drive rebuild at the speeds typical of a home or small business NAS might take 4-5 days. During that entire period, the array is running in a degraded state with no drive redundancy. A second drive failure during this window means total data loss.

Real NAS rebuild speeds are slower than the theoretical maximum sequential throughput. A NAS handling concurrent read and write operations from users while rebuilding will see rebuild speeds of 50-80MB/s rather than the drive's rated maximum. Rebuilding 20TB of data at 60MB/s takes approximately 93 hours: nearly four days with zero redundancy and, on consumer drives, an approximately 96% probability of encountering at least one URE.
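The 93-hour figure is straightforward arithmetic; a small helper (hypothetical name) makes it easy to estimate a rebuild window at any sustained speed:

```python
def rebuild_hours(drive_tb, speed_mb_s):
    """Hours to reconstruct one drive at a sustained rebuild speed."""
    seconds = drive_tb * 1e12 / (speed_mb_s * 1e6)   # TB -> bytes, MB/s -> B/s
    return seconds / 3600

print(f"{rebuild_hours(20, 60):.0f} hours")  # ~93 hours
print(f"{rebuild_hours(4, 60):.0f} hours")   # ~19 hours
```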

Practical implication: the argument for moving to RAID 6 on large drives is not just about URE probability during any single rebuild event. It is about the combination of URE probability and the extended rebuild window that large drives create.

RAID 6: What Changes

RAID 6 uses two parity drives instead of one, allowing the array to tolerate two simultaneous drive failures without data loss. More importantly for the large-drive URE scenario: RAID 6 can survive one URE during a rebuild. If the reconstruction process encounters a single unrecoverable read error on a surviving drive, the second parity set provides the redundancy needed to complete the rebuild despite the error.

RAID 6 is not free. The additional parity drive costs one drive's worth of capacity compared to RAID 5. In a 4-drive array: RAID 5 gives you 3 drives of usable space, RAID 6 gives you 2. Write performance is also lower due to double parity calculations. On a 4-drive array with modest hardware, RAID 6 write performance can be 15-30% lower than RAID 5 under random write workloads.
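The capacity trade-off follows a simple formula for standard striped-parity layouts (a sketch that ignores filesystem overhead and mixed drive sizes):

```python
def usable_capacity(total_drives, parity_drives, drive_tb):
    """Usable capacity (TB) and storage efficiency for a striped-parity array."""
    usable_tb = (total_drives - parity_drives) * drive_tb
    efficiency = (total_drives - parity_drives) / total_drives
    return usable_tb, efficiency

print(usable_capacity(4, 1, 12))  # RAID 5, 4x12TB: (36, 0.75)
print(usable_capacity(4, 2, 12))  # RAID 6, 4x12TB: (24, 0.5)
print(usable_capacity(8, 2, 12))  # RAID 6, 8x12TB: (72, 0.75)
```

As the 8-drive row shows, the RAID 6 efficiency penalty shrinks as the array grows, which is why dual parity is an easier decision on 6-bay and 8-bay units.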

Synology's SHR-2 is the equivalent of RAID 6 for mixed-size drives: it uses two parity drives and tolerates two simultaneous failures. For NAS devices with Synology DSM, SHR-2 is the appropriate selection when the drive failure risk profile argues for moving beyond single-parity protection.

How ZFS Changes the Equation

ZFS-based NAS systems (TrueNAS, QNAP QuTS Hero) handle the URE problem differently from traditional RAID implementations. ZFS stores a checksum for every block of data written to the pool. During a rebuild (called resilvering in ZFS terminology), each block is verified against its stored checksum. A block that fails its checksum can be detected and, if the parity allows for it, corrected or flagged.

This block-level integrity checking does not eliminate the URE risk but changes the failure mode. Instead of a silent data corruption event where incorrect data is written to the rebuilt drive, ZFS detects the bad block and handles it explicitly. In RAIDZ1 (the ZFS equivalent of RAID 5), a detected bad block during resilvering can still cause failure if no correctable copy exists. In RAIDZ2 (equivalent of RAID 6), one bad block can be reconstructed from the second parity set.

ZFS also has regular data scrubbing built in. Scheduling a weekly or monthly scrub reads every block on the pool and verifies checksums, detecting and correcting silent corruption that plain ext4-based systems cannot detect at all (Btrfs offers comparable checksumming and scrubbing when enabled, though ZFS's implementation is more mature). This scrubbing process is one of the strongest arguments for using a ZFS-based NAS OS (TrueNAS, QuTS Hero) when data integrity is a priority.

The Practical Decision: When to Use RAID 5 and When to Use RAID 6

The following guidance is based on drive size, drive tier, and the file system in use:

Up to 4TB drives, consumer NAS (ext4/Btrfs): RAID 5 is acceptable. URE risk during rebuild is real, but the rebuild window is short enough to be manageable. Keep backups current.
Up to 4TB drives, ZFS (TrueNAS/QuTS Hero): RAIDZ1 (RAID 5 equivalent) is acceptable. ZFS error handling reduces practical risk.
4-8TB drives, consumer NAS (ext4/Btrfs): RAID 5 is workable, but RAID 6 is recommended if the NAS holds important data. Rebuild times are 24-48 hours with meaningful URE exposure.
4-8TB drives, pro-tier (IronWolf Pro, WD Red Pro at 10^15 UBER): RAID 5 is acceptable. Pro drives reduce URE probability by 10x compared to consumer drives.
8TB+ drives, consumer NAS: RAID 6 / SHR-2 recommended. URE probability during rebuild is too high on standard NAS drives to be comfortable.
8TB+ drives, pro-tier or enterprise, with ZFS: RAIDZ1 is defensible with pro drives and ZFS scrubbing. RAIDZ2 is better practice and removes the question entirely.
12TB+ drives, any configuration: RAID 6 / RAIDZ2 minimum. The rebuild window and URE exposure at this capacity make single parity impractical regardless of drive tier.
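The guidance above can be condensed into a rough decision helper. This is a sketch of the judgment calls in the list, not a hard rule; the function and parameter names are illustrative:

```python
def recommended_parity(drive_tb, pro_tier=False, zfs=False):
    """Parity drives per the guidance above: 1 = RAID 5/RAIDZ1, 2 = RAID 6/RAIDZ2/SHR-2."""
    if drive_tb >= 12:
        return 2  # dual parity minimum, regardless of drive tier
    if drive_tb >= 8:
        # single parity is defensible only with pro-tier drives plus ZFS scrubbing
        return 1 if (pro_tier and zfs) else 2
    if drive_tb > 4:
        # consumer drives in the 4-8TB range: RAID 6 recommended for important data
        return 1 if pro_tier else 2
    return 1  # up to 4TB: single parity acceptable with current backups

print(recommended_parity(4))                 # 1
print(recommended_parity(6, pro_tier=True))  # 1
print(recommended_parity(16))                # 2
```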

The one rule that applies regardless of RAID level: RAID is not a backup. A RAID 6 array with current backups is the correct setup. A RAID 5 array with no backup is always inadequate, regardless of drive size. See the RAID is not a backup guide for the full explanation.

Australian Buyers: What You Need to Know

Drive pricing in 2026 and the RAID 6 capacity cost. The capacity penalty of RAID 6 over RAID 5 is real: in a 4-drive array, you lose one full drive to the second parity set. With NAS-grade drives priced above $200 AUD for 4TB and considerably more for larger capacities, the effective cost of the additional RAID 6 parity is now a meaningful dollar amount. Factor this into the drive purchase decision rather than treating RAID level as a configuration choice independent of hardware cost.

Pro-tier drives worth considering in AU. The Seagate IronWolf Pro and WD Red Pro are available through Scorptec, Mwave, and PLE at a premium over consumer NAS drives. For large-capacity RAID 5 deployments where the operator is resistant to moving to RAID 6, using pro-tier drives with 10^15 UBER rates substantially improves the URE probability profile. This is not a substitute for RAID 6 at 12TB and above, but it meaningfully changes the risk at 6-8TB.

Synology DSM drive compatibility note. With the Synology drive compatibility controversy resolved in DSM 7.3 for desktop Plus models, third-party drives including IronWolf and WD Red Plus are fully supported again for standard RAID and SHR pool creation. M.2 NVMe drives for cache and storage still require drives on Synology's official compatibility list. Enterprise and rackmount Synology models maintain stricter compatibility requirements. Verify your specific model's compatibility list at the Synology website before purchasing drives for a new pool.

QNAP QuTS Hero for large drives. QNAP's QuTS Hero operating system (ZFS-based) is available on NAS models with 8GB or more of RAM. For buyers running 8TB and above drives and wanting the strongest practical protection against URE events during rebuild, QuTS Hero with RAIDZ2 configuration provides better data integrity guarantees than QTS (ext4) with RAID 6. The RAM requirement and setup complexity are higher, but the data protection improvement is meaningful at these drive capacities.

Is RAID 5 dead?

No. RAID 5 remains appropriate for smaller drive sizes and pro-tier drives with better UBER specs. The popular claim that RAID 5 is dead came from the transition to multi-terabyte drives in enterprise environments during the early 2010s. For a home NAS with 4TB drives and regular backups, RAID 5 is still a functional and common configuration. The risk scaling at 8TB and above is real and worth taking seriously, but it does not make RAID 5 categorically unsuitable at all drive sizes.

What is RAID 6 in Synology and QNAP terminology?

Synology calls their dual-parity configuration SHR-2 (for mixed drive sizes) or RAID 6 (for matching drives). QNAP calls it RAID 6. TrueNAS uses RAIDZ2, and Unraid's dual-parity array serves the same role. All refer to the same underlying concept: two parity blocks per stripe, allowing the array to survive two simultaneous drive failures. The specific implementations differ but the data protection outcome is the same.

How can I reduce RAID 5 rebuild risk if I cannot switch to RAID 6?

Several steps reduce risk. First, use pro-tier drives (IronWolf Pro, WD Red Pro) with 10^15 UBER rates rather than consumer drives at 10^14. Second, reduce rebuild time by replacing a failed drive as quickly as possible and minimising other I/O activity on the NAS during the rebuild. Third, ensure current backups exist before any rebuild begins. Fourth, run scheduled SMART tests and RAID scrubs to detect marginal drives before they fail outright rather than discovering them during a rebuild.

Does RAID 6 require more drives than RAID 5?

RAID 6 requires a minimum of four drives (versus three for RAID 5). In a 4-drive array, RAID 5 gives you three drives of usable space while RAID 6 gives you two. The capacity cost is one full drive. In larger arrays (6-bay, 8-bay), the capacity penalty of RAID 6 becomes proportionally smaller. An 8-drive RAID 6 array gives you six drives of usable space (75% efficiency), which is close to RAID 5's seven drives (87.5% efficiency) while providing substantially better rebuild resilience.

Does Synology SHR automatically choose RAID 5 or RAID 6?

SHR (Synology Hybrid RAID) defaults to single parity (equivalent to RAID 5) and is labelled SHR. SHR-2 uses dual parity (equivalent to RAID 6) and must be selected explicitly when creating the storage pool. On a 4-bay Synology NAS with four drives, selecting SHR-2 provides dual-parity protection and tolerates two simultaneous drive failures. SHR-2 requires a minimum of four drives. For 8TB and above drives on a 4-bay Synology, SHR-2 is the recommended configuration.

What happens to data if RAID 5 rebuild fails due to a URE?

It depends on the file system and NAS platform. On ext4-based systems (most Synology and QNAP in standard mode), a URE during rebuild that cannot be corrected causes the rebuild to fail and the array to enter a failed or degraded-failed state. The data on the array becomes inaccessible. Recovery requires professional data recovery services, which in Australia typically start at $1,000-2,000 AUD and are not guaranteed for RAID 5 failure scenarios. On ZFS-based systems, the error is detected and handled more explicitly, but recovery still depends on whether the second parity set has sufficient information to reconstruct the affected blocks.

Understanding RAID risk is the starting point. The RAID guide covers all RAID levels, capacity calculations, and how to choose the right configuration for your NAS and drive combination.

Read the Full RAID Guide