---
title: Netgrimoire Storage
description: Where is it at
published: true
date: 2026-02-23T18:38:27.621Z
tags: 
editor: markdown
dateCreated: 2026-01-22T21:10:37.035Z
---

# NAS Storage Layout

## Overview

ZNAS is the primary NAS for Netgrimoire. It runs Ubuntu with OpenZFS and serves as the source of truth for all storage, including datasets that replicate out to the Pocket Grimoire portable system.

The system mounts everything under /export/ for NFS sharing, with select datasets mounted under /srv/ for local service consumption (Immich, NextCloud-AIO, Kopia, backup).

## ZFS Pools

- **vault** — primary NAS storage, RAIDZ1×2, 8 drives
- **greenpg** — Pocket Grimoire GREEN SSD (Kanguru UltraLock), docked for sync when present

### Zpool Architecture

```
  pool: vault
 state: ONLINE
  scan: scrub repaired 0B in 2 days 10:24:08 with 0 errors on Tue Feb 10 10:48:10 2026

config:
    NAME                                    STATE     READ WRITE CKSUM
    vault                                   ONLINE       0     0     0
      raidz1-0                              ONLINE       0     0     0
        ata-ST24000DM001-3Y7103_ZXA06K45    ONLINE       0     0     0
        ata-ST24000DM001-3Y7103_ZXA08CVY    ONLINE       0     0     0
        ata-ST24000DM001-3Y7103_ZXA0FP10    ONLINE       0     0     0
      raidz1-1                              ONLINE       0     0     0
        ata-ST16000NE000-2RW103_ZL2Q3275    ONLINE       0     0     0
        ata-ST16000NM001G-2KK103_ZL26R5XW   ONLINE       0     0     0
        ata-ST16000NT001-3LV101_ZRS0KVQW    ONLINE       0     0     0
        ata-WDC_WD140EDFZ-11A0VA0_9MG81N0J  ONLINE       0     0     0
        ata-WDC_WD140EDFZ-11A0VA0_Y5J35Z6C  ONLINE       0     0     0

errors: No known data errors
```

raidz1-0 is 3× Seagate 24TB (~48TB usable). raidz1-1 is 3× Seagate 16TB + 2× WD 14TB (~56TB usable — the 14TB drives are the limiting factor per stripe, leaving ~2TB/drive unused on the 16TB drives). Total: ~104TB usable across both vdevs, which ZFS reports as ~94TiB (55.3T used + 39.0T currently available).
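A quick sanity check of those numbers — RAIDZ1 usable space is roughly (drives − 1) × smallest drive per vdev, ignoring padding and metadata overhead:

```bash
# Approximate RAIDZ1 usable capacity: (drives - 1) × smallest drive in the vdev
echo $(( (3 - 1) * 24 ))   # raidz1-0: 48 TB
echo $(( (5 - 1) * 14 ))   # raidz1-1: 56 TB (16TB drives truncated to the 14TB WDs)
echo $(( 48 + 56 ))        # total:   104 TB ≈ 94.6 TiB, matching what zfs reports
```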

```
  pool: greenpg
 state: ONLINE

config:
    NAME                                       STATE     READ WRITE CKSUM
    greenpg                                    ONLINE       0     0     0
      scsi-1Kanguru_UltraLock_DB090722NC10001  ONLINE       0     0     0

errors: No known data errors
```

greenpg is a portable pool. Export it before physically moving it to Pocket Grimoire.

## ZFS Datasets

| Dataset | Mountpoint | Used | Avail | Refer | Quota | Compression | Purpose |
|---|---|---|---|---|---|---|---|
| vault | /export | 55.3T | 39.0T | 771G | none | 1.00x | Pool root / NFSv4 pseudo-root |
| vault/Common | /export/Common | 214G | 39.0T | 214G | none | 1.06x | General shared storage |
| vault/Data | /export/Data | 38.4T | 39.0T | 36.4T | none | 1.00x | Primary data — 36.4T lives directly in dataset root |
| vault/Data/media_books | /export/Data/media/books | 925G | 39.0T | 925G | none | 1.03x | Book library |
| vault/Data/media_comics | /export/Data/media/comics | 1.15T | 39.0T | 1.15T | none | 1.00x | Comic library |
| vault/Green | /export/Green | 14.7T | 5.31T | 9.66T | 20T | 1.00x | Personal media — 9.66T direct, 5.02T in Pocket child |
| vault/Green/Pocket | /export/Green/Pocket | 5.02T | 2.48T | 5.02T | 7.5T | 1.00x | Pocket Grimoire replication source |
| vault/Kopia | /srv/vault/kopia_repository | 349G | 39.0T | 349G | none | 1.02x | Kopia backup repository |
| vault/NextCloud-AIO | /srv/NextCloud-AIO | 341G | 39.0T | 341G | none | 1.01x | NextCloud data |
| vault/Photos | /export/Photos | 135K | 39.0T | 135K | none | 1.00x | Photos (sparse — see notes) |
| vault/backup | /srv/vault/backup | 442G | 582G | 442G | 1T | 1.00x | Local system backups |
| vault/docker | /export/Docker | 22.2G | 39.0T | 22.2G | none | 1.13x | Docker volumes |
| vault/immich | /srv/immich | 117G | 39.0T | 117G | none | 1.03x | Immich photo service data |
| greenpg | /greenpg | 2.94T | 4.20T | 96K | none | 1.00x | GREEN SSD pool root (portable) |
| greenpg/Pocket | /greenpg/Pocket | 2.94T | 4.20T | 2.94T | none | 1.00x | Personal media + Stash data |

Notes on specific datasets:

**vault/Data** — 36.4T lives directly in the dataset root at /export/Data/. media_books and media_comics are the only child datasets and account for ~2T combined. The remaining ~36T is general data stored directly under the parent.

**vault/Green** — 9.66T lives directly in /export/Green/ with the remaining 5.02T in the Pocket child dataset. The 20T quota caps total Green growth. vault/Green/Pocket has its own 7.5T sub-quota.

**vault/Photos** — nearly empty (135K). Photos are primarily managed through Immich at vault/immich. This dataset may be vestigial or reserved for future use.

**vault/backup** — has a hard 1T quota. Unlike other vault datasets, which draw from the full 39T pool availability, this dataset is capped. Current usage is 442G with 582G remaining.
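Quota headroom can be checked with standard OpenZFS commands before large writes:

```bash
# Show quotas across the pool
zfs get -r quota vault

# Space breakdown for the capped backup dataset
zfs list -o space vault/backup
```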

Compression ratios are near 1.00x across most datasets because content is already compressed (media files, binary data). vault/docker (1.13x) and vault/Common (1.06x) see modest gains from compressible config and text data.
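The ratios in the table come from the read-only `compressratio` property (distinct from `compression`, which names the algorithm):

```bash
# Achieved compression ratio per dataset
zfs get -r -o name,value compressratio vault
```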

## NFS Exports

All exports use NFSv4 with /export as the pseudo-filesystem root (fsid=0).

| Export | fsid | Options | Notes |
|---|---|---|---|
| /export | 0 | ro, no_root_squash, no_subtree_check, crossmnt | NFSv4 pseudo-root — required for v4 clients |
| /export/Common | 4 | rw, no_subtree_check, insecure | General access |
| /export/Data | 5 | rw, no_subtree_check, insecure, crossmnt | Data root |
| /export/Data/media/books | 51 | rw, no_subtree_check, insecure, nohide | Separate ZFS dataset — needs nohide |
| /export/Data/media/comics | 52 | rw, no_subtree_check, insecure, nohide | Separate ZFS dataset — needs nohide |
| /export/Docker | 29 | rw, no_root_squash, sync, no_subtree_check, insecure | Container volumes |
| /export/Green | 30 | rw, no_root_squash, no_subtree_check, insecure | Personal media + Pocket Grimoire source |
| /export/photos | 31 | rw, no_root_squash, no_subtree_check, insecure | Photos |

Current /etc/exports:

```
/export         *(ro,fsid=0,no_root_squash,no_subtree_check,crossmnt)
/export/Common  *(fsid=4,rw,no_subtree_check,insecure)
/export/Data    *(fsid=5,rw,no_subtree_check,insecure,crossmnt)
/export/Data/media/books   *(fsid=51,rw,no_subtree_check,insecure,nohide)
/export/Data/media/comics  *(fsid=52,rw,no_subtree_check,insecure,nohide)
/export/Docker  *(fsid=29,rw,no_root_squash,sync,no_subtree_check,insecure)
/export/Green   *(fsid=30,rw,no_root_squash,no_subtree_check,insecure)
/export/photos  *(fsid=31,rw,no_root_squash,no_subtree_check,insecure)
```

There is also an active loopback NFSv4 mount on the system itself:

```
localhost:/ → /data/nfs/znas  (NFSv4.2, rsize/wsize=1M)
```
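If declared in /etc/fstab, such a mount could look like the line below — a sketch only; how ZNAS actually sets this mount up (fstab, autofs, or a systemd mount unit) is not recorded here:

```
localhost:/   /data/nfs/znas   nfs4   rsize=1048576,wsize=1048576,_netdev   0 0
```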

## SMB Shares

(To be documented.)

## Standard Paths

- `/export/` — NFS root (vault pool root)
- `/export/Data/` — primary data
- `/export/Data/media/books/` — book library
- `/export/Data/media/comics/` — comic library
- `/export/Green/` — personal media
- `/export/Green/Pocket/` — Pocket Grimoire replication source
- `/export/Docker/` — container volumes
- `/export/Photos/` — photos
- `/srv/immich/` — Immich service data
- `/srv/NextCloud-AIO/` — NextCloud data
- `/srv/vault/kopia_repository/` — Kopia backup repo
- `/srv/vault/backup/` — local system backups
- `/greenpg/Pocket/` — GREEN SSD when docked for sync

## Permissions & UID/GID Model

(To be documented — dockhand UID 1964, container access rules.)

## Services Using Local Mounts

These datasets are consumed directly by services on ZNAS and are not NFS-exported:

| Service | Dataset | Mountpoint |
|---|---|---|
| Immich | vault/immich | /srv/immich |
| NextCloud-AIO | vault/NextCloud-AIO | /srv/NextCloud-AIO |
| Kopia | vault/Kopia | /srv/vault/kopia_repository |
| Local backup | vault/backup | /srv/vault/backup |

## Pocket Grimoire Integration

vault/Green/Pocket is the replication source for the Pocket Grimoire GREEN SSD (greenpg). It contains personal media and Stash application data (database, previews, blobs). See the Pocket Grimoire deployment guide for full procedures.

Fast resync when the GREEN SSD is physically docked on ZNAS:

```bash
# List importable pools (the pool retains whatever name it had when last exported);
# zpool list only shows pools that are already imported
sudo zpool import

# Import if needed, then unlock and mount
sudo zpool import greenpg
sudo zfs load-key greenpg
sudo zfs mount -a

# Sync
sudo syncoid vault/Green/Pocket greenpg/Pocket

# Export before physically disconnecting — always do this
sudo zfs unmount greenpg/Pocket
sudo zfs unmount greenpg
sudo zpool export greenpg
```

Network sync runs automatically on Pocket Grimoire via a 6-hour syncoid systemd timer when connected over the network.
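A minimal sketch of what that timer pair could look like — the unit names, the `znas` hostname, and the pull-over-SSH layout are assumptions; check the actual units on Pocket Grimoire:

```ini
# /etc/systemd/system/syncoid-pocket.service (hypothetical name)
[Unit]
Description=Pull vault/Green/Pocket from ZNAS via syncoid

[Service]
Type=oneshot
ExecStart=/usr/sbin/syncoid root@znas:vault/Green/Pocket greenpg/Pocket

# /etc/systemd/system/syncoid-pocket.timer (hypothetical name)
[Unit]
Description=Run syncoid pull every 6 hours

[Timer]
OnBootSec=15min
OnUnitActiveSec=6h

[Install]
WantedBy=timers.target
```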

## Backup & Snapshot Strategy

Snapshots:

```bash
# Manual pre-change snapshot
zfs snapshot vault/Docker@before-upgrade

# List all snapshots
zfs list -t snapshot

# List snapshots for a specific dataset
zfs list -t snapshot -r vault/Green
```

**Kopia:** Repository lives on vault/Kopia, mounted at /srv/vault/kopia_repository. (Document snapshot policy and sources.)
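Connecting to and inspecting that repository uses the standard Kopia CLI; the repository password and any per-client setup are not documented here:

```bash
# Connect to the filesystem repository (prompts for the repository password)
kopia repository connect filesystem --path /srv/vault/kopia_repository

# Inspect existing snapshots and repository status
kopia snapshot list
kopia repository status
```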

**Replication:** vault/Green/Pocket → greenpg/Pocket via syncoid. See Pocket Grimoire Integration above.

## Known Gotchas

**NFSv4 pseudo-root** — The /export entry with fsid=0 is required for NFSv4 clients to enumerate subdirectories. Do not remove it even if it appears redundant.

**nohide on sub-datasets** — vault/Data/media_books and vault/Data/media_comics are separate ZFS datasets mounted beneath the vault/Data export path. NFS does not cross filesystem boundaries by default. Without nohide, clients see empty directories at those paths.
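From a client, NFSv4 mount paths are relative to the fsid=0 pseudo-root at /export. Assuming the server resolves as `znas` (hostname not documented here), a sketch:

```bash
# Paths are relative to /export, the fsid=0 pseudo-root
sudo mount -t nfs4 znas:/Data /mnt/data
sudo mount -t nfs4 znas:/Data/media/books /mnt/books   # exported with nohide
```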

**vault/backup quota** — This dataset has a hard 1T quota and does not share the general pool availability. Current headroom is ~582G. Monitor before large backup operations.

**vault/Green quota** — Capped at 20T total with a 7.5T sub-quota on vault/Green/Pocket. The GREEN SSD itself is ~7TB, so the sub-quota is the effective ceiling for the Pocket sync.

**raidz1-1 mixed drive sizes** — The three 16TB drives in raidz1-1 have ~2TB/drive going unused because RAIDZ1 stripes are limited by the smallest drive in the VDEV (14TB WDs). This capacity is permanently unavailable unless the VDEV is rebuilt.

**Kanguru UltraLock hardware encryption** — The GREEN SSD has hardware-level PIN protection in addition to ZFS encryption. The drive must be hardware-unlocked before zpool import will see it.

**Always export greenpg before disconnecting** — Export flushes writes and marks the pool clean. Pulling the drive without exporting risks a dirty import on next use.

**vault/Data root usage** — 36.4T lives directly in /export/Data/ rather than in child datasets. This is normal for this setup but means zfs list on the parent alone shows the full usage without a breakdown.
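The breakdown is still recoverable by comparing used (dataset plus children) against refer (data stored directly in the dataset):

```bash
# used includes child datasets; refer is data stored directly in each dataset
zfs list -r -o name,used,refer vault/Data
```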

## Command Reference

- Health: `zpool status`
- Space available to pool: `zpool list`
- Space available to datasets: `zfs list`
- Dataset configuration: `zfs get -r compression,dedup,recordsize,atime,quota,reservation vault`
- Create a snapshot: `zfs snapshot vault/Docker@before-upgrade`
- List snapshots: `zfs list -t snapshot`
- Reload NFS exports: `sudo exportfs -ra`
- Show active NFS exports: `sudo exportfs -v`
- Run a scrub: `sudo zpool scrub vault`
- Sync GREEN SSD: `sudo syncoid vault/Green/Pocket greenpg/Pocket`