ZFS Explained for Home NAS Users: What TrueNAS Does Differently

ZFS is the storage filesystem that TrueNAS is built on, and it works fundamentally differently from the filesystems used by most other NAS platforms. It runs a checksum on every block of data, detects silent corruption automatically, and can repair it without user intervention. For anyone storing data they cannot afford to lose, this changes what a NAS can guarantee.

ZFS is a combined filesystem and volume manager originally developed by Sun Microsystems for enterprise storage. On a home NAS running TrueNAS, it provides a capability that almost no other common NAS filesystem offers (Btrfs on Synology comes closest): automatic detection and repair of data corruption on drives that are still working. Traditional RAID and parity systems protect you from a dead drive. ZFS also protects you from a drive that is alive but quietly corrupting data, a failure mode called bit rot that most home NAS setups cannot detect until data is already lost.

In short: ZFS stores a checksum alongside every block of data and verifies that checksum on every read. If data has silently changed, ZFS detects it. With redundancy configured, ZFS automatically reads the good copy from a mirror or parity drive and repairs the corruption without any user action. TrueNAS Scale packages all of this as free, ZFS-based NAS software for home use.

What Is Bit Rot and Why Does It Matter?

Bit rot is a colloquial term for silent data corruption: individual bits in a stored file changing from 0 to 1 or 1 to 0 without any reported error from the storage hardware. It happens more often than most people expect, particularly on drives that have been sitting idle for extended periods, on older drives, or on any storage media exposed to cosmic radiation (which affects all storage, including SSDs and HDDs, at a low but non-zero rate).

The problem with bit rot is that conventional RAID and parity systems cannot detect it. If a sector on one of your data drives silently flips a bit, RAID does not notice: parity is calculated at write time and never verified on normal reads, and even an explicit parity check can only tell that a stripe is inconsistent, not which drive holds the bad data. The file reads back with the corrupt data and your system reports everything as healthy. You may only discover the corruption when you try to open the file years later and find it is damaged.

ZFS solves this by storing a checksum alongside every block of data at write time. On every read, ZFS recalculates the checksum and compares it against the stored value. If they do not match, ZFS knows the data has changed since it was written. With a mirror or RAIDZ pool, ZFS transparently reads the good copy from the redundant drive and repairs the corrupted block, with no user intervention and no error surfacing to the application layer (the event is still recorded in the pool's error counters). A scrub runs this same check across every block in the pool; TrueNAS can trigger scrubs on demand or on a schedule.
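The mechanism can be sketched in a few lines of Python. This is a conceptual model only, not ZFS's actual implementation: a toy 2-way mirror stores two copies of each block plus a SHA-256 checksum, and reads verify the checksum and fall back to the surviving copy. All class and method names here are our own illustration.

```python
import hashlib

def checksum(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

class MirrorModel:
    """Toy model of a 2-way mirror with per-block checksums."""
    def __init__(self):
        self.copies = [{}, {}]   # block_id -> bytes, one dict per "drive"
        self.checksums = {}      # block_id -> checksum recorded at write time

    def write(self, block_id: int, data: bytes):
        for drive in self.copies:
            drive[block_id] = data
        self.checksums[block_id] = checksum(data)

    def read(self, block_id: int) -> bytes:
        for drive in self.copies:
            data = drive[block_id]
            if checksum(data) == self.checksums[block_id]:
                # Self-heal: overwrite any stale copies with the good data
                for other in self.copies:
                    other[block_id] = data
                return data
        raise IOError("all copies corrupt: unrecoverable")

pool = MirrorModel()
pool.write(0, b"family photo bytes")
pool.copies[0][0] = b"family phOto bytes"          # simulate bit rot on drive 0
assert pool.read(0) == b"family photo bytes"       # served from the good copy
assert pool.copies[0][0] == b"family photo bytes"  # drive 0 healed on read
```

The key point the sketch illustrates: RAID without checksums cannot do this, because when two copies disagree it has no way to decide which one is correct.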

ZFS Concepts Every TrueNAS User Should Know

Pools and vdevs

A ZFS pool is the top-level storage container. Pools are made of one or more virtual devices (vdevs). A vdev is a group of drives arranged in a specific redundancy configuration. The pool's total capacity is the sum of its vdevs' usable capacities.

The most important design decision in a ZFS pool is vdev configuration, because vdev type determines redundancy. Common vdev types:

  • Mirror: Two or more drives that are exact copies of each other. A 2-way mirror loses half capacity but survives one drive failure. A 3-way mirror survives two.
  • RAIDZ1: Like RAID 5. Can survive one drive failure. Minimum 3 drives. One drive's worth of capacity is used for parity.
  • RAIDZ2: Like RAID 6. Can survive two drive failures. Minimum 4 drives.
  • RAIDZ3: Can survive three drive failures. Minimum 5 drives. Rarely used at home.

Once a vdev is created, you cannot change its type, and traditionally you could not change the number of drives in it without destroying the pool and recreating it. (Recent OpenZFS releases add RAIDZ expansion, which relaxes the drive-count rule; see the FAQ below.) This is the biggest design constraint in ZFS and the main reason pool planning matters before you buy drives.

Drive size matching in a vdev

Drives in a ZFS vdev should match in size. In a RAIDZ1 vdev with 4 drives of 8TB, 8TB, 8TB, and 4TB, ZFS uses only 4TB from each drive. The extra space on the 8TB drives is wasted. This is in direct contrast to Unraid, where mixing drive sizes in the same array wastes no capacity. For TrueNAS, the correct approach is to add a new vdev when you want to expand, rather than adding an individual drive to an existing vdev.
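The size-matching rule is easy to see in numbers. Here is a rough usable-capacity calculator; it is a simplification (ignoring slop space, padding, and ashift overhead), and the function name is our own, not a ZFS API:

```python
def vdev_usable_tb(vdev_type: str, drive_tb: list[float]) -> float:
    """Rough usable capacity of one vdev. ZFS sizes every member
    to the smallest drive in the vdev."""
    n = len(drive_tb)
    smallest = min(drive_tb)
    parity = {"mirror": n - 1, "raidz1": 1, "raidz2": 2, "raidz3": 3}[vdev_type]
    return smallest * (n - parity)

# RAIDZ1 with matched 8TB drives: 3 drives of data, 1 of parity
assert vdev_usable_tb("raidz1", [8, 8, 8, 8]) == 24
# Same vdev with one 4TB drive: every member contributes only 4TB
assert vdev_usable_tb("raidz1", [8, 8, 8, 4]) == 12
# 2-way mirror: half the raw capacity
assert vdev_usable_tb("mirror", [8, 8]) == 8
```

Swapping one 8TB drive for a 4TB drive in that RAIDZ1 halves the usable pool, which is exactly why matched sizes matter before you buy.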

ARC: ZFS memory caching

ZFS uses system RAM as a read cache called the ARC (Adaptive Replacement Cache). The more RAM available, the more data ZFS can cache and serve without a disk read. A common guideline is 1GB of RAM per TB of pool capacity, but this is a performance guideline, not a hard requirement. TrueNAS will run with less RAM but will cache less data and rely more on disk reads.
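As a quick sanity check for a planned build, the guideline reduces to simple arithmetic. This is a rough rule of thumb; the function and the base overhead figure are our own illustration, not a TrueNAS requirement:

```python
def arc_guideline_gb(pool_tb: float, system_overhead_gb: float = 8) -> float:
    """RAM suggested by the common 1GB-per-TB ARC guideline,
    plus a base allowance for the OS and services."""
    return system_overhead_gb + pool_tb

# A 24TB pool suggests roughly 32GB under the guideline.
# 16GB still works; ZFS simply keeps a smaller ARC and reads disk more often.
assert arc_guideline_gb(24) == 32
```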

ECC RAM is strongly recommended for TrueNAS builds. ZFS relies on RAM integrity for its write operations and checksums. Non-ECC RAM corruption can theoretically lead to corrupt data being written to a pool and the checksum being calculated against the corrupt value. ECC RAM catches and corrects single-bit memory errors before they reach ZFS. See our homelab hardware guide for ECC RAM sourcing in Australia.

Snapshots and replication

ZFS snapshots are point-in-time copies of a dataset, created almost instantly and consuming only the space required to store changes since the snapshot was taken. TrueNAS can create scheduled snapshots of every dataset and display or restore previous versions of files through the web interface or via SMB shadow copies (Windows Previous Versions).

ZFS replication sends snapshots to a remote system over SSH. TrueNAS makes this configurable through the web GUI. A scheduled replication task sending daily snapshots to a second TrueNAS instance offsite provides the NAS equivalent of a full backup solution without additional software. This is one of TrueNAS's most valuable capabilities for serious data protection.
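Copy-on-write snapshots and incremental replication can be modelled in the same conceptual spirit (again a sketch, not ZFS's on-disk format, and the names are our own): a snapshot is just a frozen map of block references, and an incremental send ships only the blocks that changed since the previous snapshot.

```python
class DatasetModel:
    """Toy copy-on-write dataset: snapshots share unchanged blocks."""
    def __init__(self):
        self.blocks = {}     # block_id -> bytes (live data)
        self.snapshots = {}  # name -> frozen {block_id: bytes} view

    def write(self, block_id: int, data: bytes):
        self.blocks[block_id] = data

    def snapshot(self, name: str):
        # Near-instant: records references to current blocks, copies no data
        self.snapshots[name] = dict(self.blocks)

    def incremental_send(self, old: str, new: str) -> dict:
        """Blocks added or changed in `new` relative to `old` --
        the payload of an incremental replication."""
        before, after = self.snapshots[old], self.snapshots[new]
        return {b: d for b, d in after.items() if before.get(b) != d}

ds = DatasetModel()
ds.write(0, b"report v1")
ds.write(1, b"photo")
ds.snapshot("monday")
ds.write(0, b"report v2")          # only block 0 changes overnight
ds.snapshot("tuesday")
delta = ds.incremental_send("monday", "tuesday")
assert delta == {0: b"report v2"}  # one changed block crosses the wire
```

This is why nightly replication of a multi-terabyte pool is practical over a home internet connection: after the first full send, only the changed blocks travel.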

TrueNAS Scale: ZFS for Home Builders

TrueNAS Scale is the Debian-based version of TrueNAS, free and open source. It provides all ZFS capabilities through a web interface designed for non-expert users. Pool creation, scrub scheduling, snapshot tasks, and replication are all configurable through the GUI without command-line work. Advanced ZFS tuning (ARC size limits, recordsize, compression settings) requires the CLI but the defaults are sensible for home use.

As of this writing, the current version is TrueNAS 24.10 (Electric Eel). The app system in this version runs standard Docker containers, replacing the older Kubernetes/Helm-based catalogue. It provides containers for Plex, Nextcloud, Immich, and other services, though the interface is less polished than Unraid's Community Applications plugin. A significant improvement roadmap is in progress.

TrueNAS Scale is completely free with no licence fees, no drive count restrictions, and no paywalled features. iXsystems sells enterprise support contracts separately but the software itself is open source and available without registration. Hardware requirements are modest: any x86-64 CPU, 8GB RAM minimum (16GB recommended for home use), and a SATA HBA or motherboard SATA ports for drives.

ZFS vs Unraid Parity vs Btrfs on Synology

Storage Integrity Comparison: ZFS vs Unraid Parity vs Synology Btrfs

| Feature | ZFS (TrueNAS) | Unraid Parity | Btrfs (Synology) |
| --- | --- | --- | --- |
| Per-block checksums | Yes, on all data and metadata | No | Yes on Btrfs (data and metadata); metadata only on EXT4 |
| Silent corruption detection | Yes, on every read | No | Yes (Btrfs), No (EXT4) |
| Automatic repair with redundancy | Yes | No | Partial (Btrfs RAID 1) |
| Drive failure protection | Yes (mirror, RAIDZ1, RAIDZ2) | Yes (single or dual parity) | Yes (RAID 1, SHR, RAID 5/6) |
| Drive size mixing | No; vdev drives should match | Yes, any sizes in same array | Limited (SHR handles some mixing) |
| Snapshots | Yes, near-instant, space-efficient | No native snapshots | Yes, on Btrfs volumes |
| Replication | Yes, ZFS send/receive | No built-in | Yes, via Hyper Backup |
| RAM requirements | Higher (ARC caching) | Lower | Moderate |
| Best for | Maximum data integrity | Drive flexibility | Polished all-in-one NAS |

Is ZFS Right for Your Home NAS?

ZFS is the right choice when the data you are storing is difficult or impossible to recreate. Family photographs taken over many years, video footage, business documents, and archival backups are examples where silent corruption discovered years later would be a serious loss. The overhead of ZFS (matched drive sizes, higher RAM, more complex pool planning) is worth accepting for this data.

ZFS is less critical for data that is routinely replicated elsewhere: media libraries that also exist on streaming services, downloads that can be re-fetched, software that can be reinstalled. For a NAS primarily serving as a media server or download cache, Unraid's simpler model may be adequate.

The practical recommendation: if you use your NAS to store photographs, important documents, or any data that exists nowhere else, TrueNAS with ZFS is the correct choice. If you primarily want drive flexibility, easy Docker containers, and a less complex initial setup, Unraid is appropriate. For most homelab builders who care about their data, ZFS is worth learning.

💡 Running TrueNAS in Australia: TrueNAS Scale runs on any x86-64 PC. For a home build, an Intel Core i3 or Xeon with 16GB ECC RAM and 4 drives in a RAIDZ1 vdev is a solid starting configuration. ECC RAM is harder to find at consumer AU retailers; eBay AU is the best source for used ECC DIMMs from decommissioned enterprise servers. A used Dell PowerEdge tower (T130, T140) comes with ECC RAM installed and is a cost-effective TrueNAS platform sourced locally.

Does ZFS require ECC RAM to work?

No, ZFS works without ECC RAM. ECC RAM is strongly recommended because ZFS relies on RAM integrity for its checksums and write operations. A RAM error could theoretically cause a corrupt block to be written with a matching (but wrong) checksum, defeating ZFS's integrity guarantee. In practice, RAM errors are rare in quality non-ECC RAM, and many home TrueNAS users run without ECC successfully. For data you cannot afford to lose, ECC is the correct choice.

How often should I run a ZFS scrub?

Monthly scrubs are the standard recommendation for home NAS use. TrueNAS allows scheduling automatic scrubs. A scrub reads every block on the pool, verifies its checksum, and repairs any detected corruption using redundant data. A scrub on a 4TB pool typically takes 2-6 hours. Run them during off-hours. If a scrub finds errors, investigate the drive's SMART data immediately.

Can I add a single drive to an existing ZFS pool?

You can add a new vdev to an existing pool, which is the traditional way ZFS pools expand: you add another vdev of the same type, ideally with the same number of drives (another 4-drive RAIDZ1, for example). In the traditional model you cannot add a single drive to an existing RAIDZ vdev. TrueNAS 24.10 and later support RAIDZ expansion, which does add a single drive to an existing RAIDZ vdev, though it is a newer feature and existing data keeps its original parity ratio until it is rewritten. This remains a significant difference from Unraid, which allows adding individual drives to the array at any time.

Is TrueNAS Scale the same as TrueNAS Core?

TrueNAS Scale is the Linux-based (Debian) version. TrueNAS Core is the FreeBSD-based version. Both use ZFS and share the same basic NAS features. TrueNAS Scale is the current focus of development and includes Docker container support and a more modern app system. TrueNAS Core is in maintenance mode. New home builds should use TrueNAS Scale.

What is the minimum RAM for TrueNAS Scale at home?

The official minimum is 8GB. For a home NAS with 4-6 drives and light container use, 16GB provides a comfortable ARC cache and room for running a few services. If you plan to run VMs inside TrueNAS or have a large pool (20TB or more), 32GB or more improves performance. The guideline of 1GB per TB of storage is for performance optimisation, not a hard requirement.

Comparing TrueNAS Scale against Unraid and Proxmox? The full three-way comparison covers which OS is right for each use case.

Read the Full OS Comparison →