Using Unraid? Unraid 7 supports native ZFS pools for cache and unassigned devices. ZFS on Unraid works best for cache pools, application storage, and VM images, not as a replacement for the main parity array. Learn about Unraid cache pools →

ZFS vdev Design Wizard: Capacity, Resilience & Performance

This ZFS vdev design wizard calculates usable capacity, redundancy, and read/write performance for common vdev configurations (mirror, RAIDZ1, RAIDZ2, and RAIDZ3) based on drive count and size, helping you choose the right ZFS pool layout before provisioning.

ZFS pools are built from vdevs. How you arrange your drives into vdevs determines everything: usable capacity, how many failures you can survive, rebuild time, and performance. The math isn't hard, but the trade-offs are easy to get wrong. This wizard walks through your drive inventory and workload, and tells you which vdev layout makes sense and which common mistakes to avoid.

No Redundancy: Single-Drive Pool. A single-drive ZFS pool has no redundancy. Data integrity checking (checksumming) still works, but there is no recovery path if the drive fails.
RAIDZ1 + Large Drives: Elevated Risk. RAIDZ1 with large drives is increasingly risky. Resilver times can exceed 24-36 hours for 16 TB drives. During that window, a second failure means data loss. As drives get larger and resilvers take longer, RAIDZ1's safety margin narrows. For critical or higher-value data, RAIDZ2 is commonly preferred regardless of drive size.
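The arithmetic behind that window is simple: drive capacity divided by sustained read speed gives a best-case floor. A minimal sketch, assuming a 16 TB drive and ~150 MB/s sustained sequential read (both illustrative values, not measurements):

```shell
# Best-case resilver estimate: drive capacity / sustained sequential read speed.
size_tb=16       # drive size in TB (assumed)
speed_mbs=150    # sustained read in MB/s (assumed; real drives vary)

hours=$(awk -v tb="$size_tb" -v s="$speed_mbs" \
  'BEGIN { printf "%.1f", (tb * 1e12) / (s * 1e6) / 3600 }')

echo "best-case resilver: ~${hours} hours"
```

This is a floor, not a wall-clock prediction: pool activity, fragmentation, and RAIDZ reconstruction overhead all stretch it considerably.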
RAIDZ Requires at Least 3 Drives. A 2-drive configuration should use a mirror, not RAIDZ.
Odd Drive Count with Mirror Pool. Mirror pools work best with even drive counts. An odd number means one drive must go elsewhere. Consider adding one more for a clean mirror, or use RAIDZ instead.
Auto-Detect ashift. Auto-detect works for most modern drives. If you are using a drive manufactured before ~2012, verify its physical sector size manually before creating the pool.
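If in doubt, the reported sector sizes are easy to check before creating the pool; the device and pool names below are placeholders:

```shell
# Physical vs logical sector size as reported by the drive:
lsblk -o NAME,PHY-SEC,LOG-SEC /dev/sda

# Some older drives report 512 B logical over 4K physical sectors (512e).
# Forcing 4K alignment explicitly is a safe default on modern drives:
zpool create -o ashift=12 tank mirror /dev/sda /dev/sdb
```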
Workload Mismatch: VM / Container + RAIDZ. Your workload (VMs / containers) would benefit from a mirror pool rather than RAIDZ. Consider adding more drives to enable a mirror configuration, or use SSDs for VM storage on a separate mirror pool.
Recommended Layout
For your inputs, the wizard reports: the recommended layout, vdev count, drives per vdev, usable capacity (after ZFS slop reserve), effective working capacity (at 80%), failures tolerated, and estimated resilver time.
80% capacity guideline: 80% is a common rule of thumb. ZFS performance can degrade at high fill levels and new writes are refused above ~97% full. For random-write or VM workloads, degradation can appear earlier; for sequential media storage, you can often go higher. The 80% guideline gives practical headroom for most workloads.
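To see where an existing pool sits against this guideline (the pool name is a placeholder):

```shell
# The CAP column shows the percent of pool capacity already allocated:
zpool list -o name,size,alloc,free,cap tank
```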
Alternative Layouts
The wizard also lists alternative layouts, compared by usable capacity, failures tolerated, resilver time, and best-fit workload.
Workload Fit
  • Cache pool: ZFS mirror is the recommended layout. Two NVMe drives in a mirror give redundancy and good performance. RAIDZ is not recommended for cache pools.
  • Unassigned Devices: ZFS pools on unassigned devices work well for application data, VM images, and secondary storage.
  • Main array: The main Unraid parity array does not use ZFS. This tool applies to ZFS pools only, not the main Unraid array.
  • RAM for ARC: ZFS uses spare RAM for ARC (Adaptive Replacement Cache). More RAM generally improves read performance. There is no strict per-TB requirement; workload matters far more than capacity. Heavy features like dedup are the main RAM multiplier. 8 GB is a reasonable minimum for a home NAS.
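On a Linux OpenZFS system (including Unraid 7), the ARC can be inspected and capped directly. The paths below assume OpenZFS on Linux, and the 8 GiB ceiling is an illustrative value:

```shell
# Current ARC size and ceiling, in GiB:
awk '/^(size|c_max) / { printf "%s %.1f GiB\n", $1, $3 / 1024^3 }' \
  /proc/spl/kstat/zfs/arcstats

# Cap ARC at 8 GiB until reboot (persist via a modprobe.d option if desired):
echo 8589934592 > /sys/module/zfs/parameters/zfs_arc_max
```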
Methodology: Usable capacity is calculated from vdev geometry with a ~3% deduction for ZFS slop space reserve (pools above ~97% full refuse new writes). Resilver time is estimated from drive size and sequential read speed. Mirror resilvering reads one drive; RAIDZ resilvering reads all surviving drives in parallel. Performance guidance is qualitative based on ZFS I/O patterns. Actual throughput depends on CPU speed, RAM (ARC), recordsize tuning, and workload specifics.
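The capacity arithmetic above can be reproduced by hand. A minimal sketch, assuming 6 x 16 TB drives in one RAIDZ2 vdev (all numbers illustrative; real RAIDZ overhead also depends on recordsize and ashift padding, which this simple model ignores):

```shell
# Illustrative numbers: 6 x 16 TB drives, one RAIDZ2 vdev (2 parity drives).
drives=6
size_tb=16
parity=2

raw_tb=$((drives * size_tb))                 # total raw capacity
usable_tb=$(((drives - parity) * size_tb))   # data capacity before slop

# ~3% slop reserve, then the 80% working-capacity guideline (awk for decimals):
after_slop=$(awk -v u="$usable_tb" 'BEGIN { printf "%.1f", u * 0.97 }')
working=$(awk -v u="$usable_tb" 'BEGIN { printf "%.1f", u * 0.97 * 0.80 }')

echo "raw=${raw_tb}TB usable=${usable_tb}TB after_slop=${after_slop}TB working=${working}TB"
```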
Last reviewed: 6 March 2026

AU Drive Pricing for ZFS Builds (early 2026)

ZFS performs best with enterprise or NAS-rated drives. For TrueNAS Scale or TrueNAS Core builds in Australia, use IronWolf Pro, WD Red Pro, WD Gold, or Seagate Exos for maximum reliability. Prices from Mwave, PLE, Scorptec.

Drive | Capacity | AU retail range | ZFS use case
Seagate IronWolf Pro | 8 TB | $290-$350 | Home/SOHO ZFS, good endurance, 5-year warranty
Seagate IronWolf Pro | 16 TB | $520-$620 | Larger vdevs, RAIDZ2/RAIDZ3 builds
WD Red Pro | 8 TB | $300-$360 | 7200 RPM, good for ZFS write-heavy workloads
WD Red Pro | 16 TB | $530-$630 | Large ZFS mirror or RAIDZ2 vdev
Seagate Exos X18 | 18 TB | $600-$720 | Enterprise, best for high-density ZFS arrays
WD Gold | 16 TB | $600-$720 | Enterprise, 5-year warranty, datacenter rated

ZFS RAM Rule: AU Hardware Context

ZFS benefits significantly from RAM (ARC cache). The traditional guideline is 1 GB RAM per 1 TB of storage, but modern workloads are less demanding. For AU home/SOHO TrueNAS builds:

  • Minimum: 8 GB RAM for a small pool (8-16 TB usable)
  • Recommended: 16-32 GB RAM for a typical home ZFS build (16-40 TB usable)
  • ECC RAM: Strongly recommended for ZFS to prevent silent data corruption. Most dedicated TrueNAS builds use ECC-capable platforms.

Common AU TrueNAS platforms: repurposed Supermicro servers (eBay AU, $300-$800), Topton/Cwwk mini-PCs with ECC support (AliExpress AU import, $400-$700), or purpose-built QNAP TVS-h series NAS with ZFS (Mwave/PLE, $1,200+).

ACL warranty note: Enterprise drives (Exos, WD Gold) purchased from AU retailers are covered by the Australian Consumer Law regardless of the manufacturer's country warranty terms. Seagate and WD service AU RMAs locally; check the manufacturer's AU website for depot locations (Sydney, Melbourne).
Frequently Asked Questions
What is a vdev? The basic building block of a ZFS pool. Each vdev can be a mirror, RAIDZ, or single drive. Data is striped across vdevs. If one vdev loses enough drives to fail, the entire pool is lost. Vdev selection is the most important architectural decision in ZFS pool design.
Can I mix mirror and RAIDZ vdevs in the same pool? Technically yes, but strongly discouraged for data vdevs. Mixing mirror and RAIDZ vdevs creates an uneven performance and reliability profile, and one vdev failing loses the entire pool. Keep all data vdevs the same type and width. Exception: special vdevs (metadata acceleration) and SLOG devices are designed to be added alongside data vdevs and are not subject to this restriction.
Can I expand a RAIDZ vdev by adding drives? Not in traditional ZFS. A RAIDZ vdev has fixed width at creation. To expand the pool, add an entirely new vdev. ZFS 2.2+ introduced RAIDZ online expansion (FreeBSD Foundation overview), but support varies by platform and ZFS module version. Run zpool upgrade -v and look for raidz_expansion in the feature list before relying on it.
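Where the feature is available, expansion is a single attach against the RAIDZ vdev itself. The pool, vdev, and device names below are placeholders (check zpool status for your actual vdev name):

```shell
# Add one disk to an existing raidz vdev (requires raidz_expansion support):
zpool attach tank raidz2-0 /dev/sdg

zpool status tank   # reports expansion progress while it runs
```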
Can I convert a 2-way mirror to a 3-way mirror? Yes. Mirrors can be converted from 2-way to 3-way by attaching a new drive using zpool attach. Data is automatically re-replicated to the new drive. This is one of the key advantages of mirror-based pools.
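A minimal sketch with placeholder names, where ada1 is an existing mirror member and ada2 is the new drive:

```shell
# Attach a third drive to an existing 2-way mirror:
zpool attach tank ada1 ada2

zpool status tank   # shows the resilver copying data onto the new drive
```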
Do I need a SLOG device? SLOG (Separate Intent LOG) is a small, fast device dedicated to synchronous write acceleration. It only helps workloads that issue synchronous writes: databases, VMs using sync writes, NFS with sync semantics. For a typical home NAS doing async writes, a SLOG provides no benefit. If you do run databases or VMs, a small NVMe or Optane device as SLOG can significantly reduce write latency.
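Adding one is straightforward; the device and dataset names below are placeholders. Mirroring the SLOG protects in-flight sync writes if a log device dies:

```shell
# Add a mirrored SLOG to the pool:
zpool add tank log mirror /dev/nvme0n1 /dev/nvme1n1

# Check the sync property on a dataset to see whether it issues sync writes:
zfs get sync tank/vmstore
```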
What recordsize should I use? Default 128K for general NAS workloads. 1M for large media files (set with zfs set recordsize=1M poolname/datasetname). 8K-64K for databases and VMs where record size should match the application's block size. Note: recordsize only applies to new writes after the property is changed. Existing data is unaffected. See the OpenZFS Workload Tuning guide for workload-specific guidance.
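Because the property only affects new writes, it is cleanest to set it at dataset creation. Dataset names are placeholders, and 16K for the database dataset is an assumption; match your database's actual page size:

```shell
zfs create -o recordsize=1M  tank/media     # large sequential media files
zfs create -o recordsize=16K tank/postgres  # match the DB page size

zfs get recordsize tank/media tank/postgres
```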
How do I check whether my system supports RAIDZ expansion? Run zpool upgrade -v on your system and look for raidz_expansion in the supported feature list, or query a pool directly with zpool get feature@raidz_expansion poolname. On Unraid 7, check your current kernel version and ZFS module version via the Unraid GUI under Info. The feature requires ZFS 2.2 or later.