TrueNAS: Replace ZFS Disks With Bigger Ones

System: TrueNAS Community Edition 25.04.1

Goal: replace the disks of a 4x 4TB RAID-Z2 ZFS pool with bigger 12TB disks, increasing the usable space from 6.9TiB to 21.01TiB.
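
For reference, the expected capacity is straightforward RAID-Z2 napkin math: two disks’ worth of parity, the rest usable, minus some ZFS overhead:

(4 - 2) x 4TB  =  8TB ≈  7.3TiB raw -> ~6.9TiB usable
(4 - 2) x 12TB = 24TB ≈ 21.8TiB raw -> ~21.0TiB usable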

Follow the instructions for replacing a disk without a hot spare.

Preparation: figure out which drive bay is occupied by which disk by writing down the serial number and drive-bay number of each disk. It’s important to only pull out the one that was marked “offline”!

  1. Unintuitively, this is done under “manage devices” (I first tried “manage disks”, especially since identifying which disk has which serial number is easier there, as that info is part of the UI table). CLI equivalents for these steps are sketched after this list.
  2. Find the disk you want to swap and click the “offline” button to take it offline
  3. Physically remove the disk from the server
  4. Physically insert replacement (same size or larger)
  5. Go to “manage disks” and click wipe -> quick wipe (out of the box the new disks come with a partition and filesystem, which keeps them from being added to a ZFS pool)
  6. Go to “manage devices” and click “replace” -> the new disk should now show up in the drop-down as the replacement
  7. Wait for resilver.
    • Resilver status with an ETA is shown in the TrueNAS GUI under the storage / ZFS Health tab (unintuitively not when clicking the spinning arrows at the top right), or via zpool status in a terminal
    • On my system, holding 2.56TiB of net data (5.12TiB raw including RAID-Z2 parity, i.e. ~1.28TiB the resilver has to write to the new disk), replacing one Seagate ST4000VN008-2DR166 4TB with a Western Digital WD120EFBX 12TB disk took 4h5min
    • NOTE: the resilvering time estimate (on a lightly loaded or idle system) becomes reliable after about 25% is done
      • At the start it announced 6 hours to resilver at a 250M/s rate, but the speed rose to a fairly stable 400M/s for the rest of the run
      • Occasionally, e.g. during the last 5%, it slowed down to 80-200M/s
  8. After the resilver, go to the “Storage dashboard” and click the “Expand” button to claim the newly available space
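
The same procedure can also be driven from a shell. A minimal sketch, assuming a pool named “tank” and placeholder device names sda (old) / sdb (new); note that TrueNAS itself addresses pool members by partition UUID, so the GUI route above is the safer one:

$ zpool offline tank sda            # step 2: take the outgoing disk offline
# ... physically swap the drive (steps 3 and 4) ...
$ wipefs -a /dev/sdb                # step 5: clear the leftover partition table / filesystem
$ zpool replace tank sda /dev/sdb   # step 6: start resilvering onto the new disk
$ zpool status tank                 # step 7: watch progress
$ zpool online -e tank sdb          # step 8: after all disks are swapped, claim the new space
                                    #         (or once, up front: zpool set autoexpand=on tank)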

Example output:

$ zpool status
scan: resilver in progress since Fri Jun 20 16:53:12 2025
2.21T / 5.26T scanned at 601M/s, 1.46T / 5.26T issued at 397M/s
363G resilvered, 27.74% done, 02:47:23 to go
# ...
scan: resilvered 1.28T in 04:05:43 with 0 errors on Fri Jun 20 20:58:55 2025

Upside: pool is online during resilver and can be used without downtime

Downside: replacing all 4 disks takes about 16.5 hours (if you swap each disk right after the previous resilver finishes)

Mental note on speeding this up:

One could consider a faster approach, if you trust the new hard drives:

  1. Send (zfs send) a snapshot of the old pool to the new spare disk (you bought a designated hot/cold spare disk for your array, right?); a command sketch follows after this list.
    • Likely takes about 4 hours at 250MiB/s for e.g. 4TB of net use on the old zpool
  2. Build a new RAID-Z2 array from the new disks
  3. Load the snapshot back from the spare.
    • Takes another ~4 hours
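
A minimal command sketch of this idea, assuming a pool named “tank”, a spare at /dev/sdx, new disks sdb..sde, and a receiving pool named “sparepool”; all names are placeholders, and this is the shape of the procedure, not a tested script:

$ zpool create sparepool /dev/sdx                          # single-disk pool on the spare (no redundancy!)
$ zfs snapshot -r tank@migrate1                            # freeze a consistent state
$ zfs send -R tank@migrate1 | zfs recv -F sparepool/tank   # full copy, ~4h
$ zpool destroy tank                                       # point of no return: the old pool is gone
$ zpool create tank raidz2 sdb sdc sdd sde                 # new RAID-Z2 from the 12TB disks
$ zfs send -R sparepool/tank@migrate1 | zfs recv -F tank   # restore, another ~4h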

Downtime considerations:

  • Worst case: the NAS is down from the moment you take the snapshot (to avoid new data being written) until the restore finishes. (8 hours)
  • Best case: the NAS is only down from the moment you take a second snapshot, which gets incrementally sent to the spare disk (4 hours plus the time it takes to write that incremental snapshot); a sketch of this variant follows below.
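
The best case could look roughly like this (same placeholder names as above): the bulk copy runs while the pool is still serving, and only the small delta needs the NAS offline:

$ zfs snapshot -r tank@migrate1                            # taken while the NAS is still in use
$ zfs send -R tank@migrate1 | zfs recv -F sparepool/tank   # bulk copy, ~4h, no downtime yet
# ... now stop services / take shares offline ...
$ zfs snapshot -r tank@migrate2                            # frozen final state
$ zfs send -R -i tank@migrate1 tank@migrate2 | zfs recv -F sparepool/tank   # small delta only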

This process is potentially twice as fast, taking 8 hours, with a best case of 4+ hours of downtime. But that kind of downtime is hardly doable on business days, so it becomes a weekend stunt. Also, in order to trust the new disk, a burn-in that e.g. writes and verifies the whole disk is needed, which takes about 13.3 hours per full pass (napkin math for a 12TB drive @ 250M/s write speed). Conclusion: the preparations take similar effort to the live resilvering, the downtime is a substantial disadvantage, and an extra disk port / external adapter is needed, making this approach uninteresting.
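
If one did want that burn-in, a common tool for a destructive write-and-verify pass is badblocks (my suggestion, not part of the plan above). The default -w test cycles four patterns, i.e. roughly four times the one-pass napkin math, so restricting it to a single pattern with -t stays closest to the estimate; -b 4096 is needed because the default 1024-byte block size overflows badblocks’ block counter on a 12TB drive:

$ badblocks -wsv -b 4096 -t 0xaa /dev/sdx   # DESTRUCTIVE: overwrites, then reads back, the whole disk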