New Grimoire
Commit cc574f8aed (parent 77d589a13d): 157 changed files with 29420 additions and 0 deletions.

Vault-Grimoire/ZFS/NFS-Exports.md (new file, 393 lines)

---
title: ZFS-NFS-Exports
description: Exporting NFS shares from ZFS datasets
published: true
date: 2026-02-23T21:58:20.626Z
tags:
editor: markdown
dateCreated: 2026-02-01T20:45:40.210Z
---

# NFS Configuration

## Overview

ZNAS exports storage via NFSv4. All exports are ZFS datasets mounted directly to `/export/*` — no bind mounts. NFS is configured to wait for ZFS at boot via a systemd override.

ZNAS also mounts its own NFS exports back to itself at `/data/nfs/znas`. This is intentional: Docker Swarm containers scheduled to ZNAS need to access NAS storage at the same paths as containers running on other swarm members. The loopback mount provides a consistent NFS-backed path regardless of which node a container lands on.

All other clients are Linux systems using autofs.

---

## Server Configuration

### ZFS Mountpoints

ZFS datasets mount directly to `/export/*`. No bind mounts are used.

```
vault                    → /export
vault/Common             → /export/Common
vault/Data               → /export/Data
vault/Data/media_books   → /export/Data/media/books
vault/Data/media_comics  → /export/Data/media/comics
vault/Docker             → /export/Docker
vault/Green              → /export/Green
vault/Green/Pocket       → /export/Green/Pocket
vault/Photos             → /export/Photos
```

Verify at any time:

```bash
mount | grep export
```

### /etc/exports

```
# NFSv4 - pseudo filesystem root
/export *(ro,fsid=0,no_root_squash,no_subtree_check,crossmnt)

# Shares beneath the NFSv4 root
/export/Common *(fsid=4,rw,no_subtree_check,insecure)
/export/Data *(fsid=5,rw,no_subtree_check,insecure,crossmnt)
/export/Data/media/books *(fsid=51,rw,no_subtree_check,insecure,nohide)
/export/Data/media/comics *(fsid=52,rw,no_subtree_check,insecure,nohide)
/export/Docker *(fsid=29,rw,no_root_squash,sync,no_subtree_check,insecure)
/export/Green *(fsid=30,rw,no_root_squash,no_subtree_check,insecure)
/export/photos *(fsid=31,rw,no_root_squash,no_subtree_check,insecure)
```

**Key options:**

- `fsid=0` on `/export` — required for the NFSv4 pseudo-root. Clients enumerate all exports from here.
- `crossmnt` — allows NFS to cross ZFS dataset boundaries when traversing the tree.
- `nohide` — required on `media/books` and `media/comics` because they are separate ZFS datasets mounted beneath the `vault/Data` export path. Without it, clients see empty directories.
- `no_root_squash` — the Docker and Green exports allow root writes. Required for container volume mounts.
- `insecure` — permits connections from unprivileged source ports (1024 and above). Required for some Linux NFS clients and all macOS clients.
- `sync` on Docker — forces synchronous writes for container volume safety.
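
Because `fsid=0` makes `/export` the NFSv4 root, client mount paths are relative to it rather than to the server's real filesystem. A quick illustration (hypothetical client-side test, using the server address from the client section below):

```bash
# /Common here resolves to /export/Common on the server,
# because /export carries fsid=0 and is the NFSv4 pseudo-root
sudo mkdir -p /mnt/test
sudo mount -t nfs4 192.168.5.10:/Common /mnt/test
```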

### systemd Boot Order Override

NFS is configured to wait for ZFS to fully mount before starting.

`/etc/systemd/system/nfs-server.service.d/override.conf`:

```ini
[Unit]
After=zfs-import.target zfs-mount.service local-fs.target
Requires=zfs-import.target zfs-mount.service
```

Apply after any changes:

```bash
sudo systemctl daemon-reload
sudo systemctl restart nfs-server
```

### Autofs Disabled on Server

Autofs is disabled on ZNAS itself. It must only run on NFS clients. Running autofs on the server creates recursive mount loops.

```bash
sudo systemctl stop autofs
sudo systemctl disable autofs
```

---

## Loopback Mount (Docker Swarm)

ZNAS mounts its own NFS exports back to itself at `/data/nfs/znas`. This ensures containers scheduled to ZNAS by Docker Swarm access storage at the same NFS-backed paths as containers running on any other swarm member — consistent regardless of which node a service lands on.

Swarm container volume mounts reference paths under `/data/nfs/znas/` rather than `/export/` directly.
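
As an illustration, a Swarm service bind-mounting NAS storage would point at the loopback path; the service name, image, and target path below are hypothetical:

```bash
# Bind source uses the NFS-backed loopback path, so the same service
# definition works no matter which node the task lands on
docker service create --name media-indexer \
  --mount type=bind,source=/data/nfs/znas/Data/media/books,target=/books \
  alpine:latest sleep 3600
```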

### The Timing Problem

Getting this mount to survive reboots reliably was non-trivial. The loopback has a chicken-and-egg dependency chain:

1. ZFS must import and mount pools before the NFS server can export anything
2. The NFS server must be fully started before the loopback mount can succeed
3. The loopback mount must be established before Docker Swarm containers start

A plain `_netdev` fstab entry is not sufficient — `_netdev` only guarantees the network is up, not that the NFS server is ready. The mount would race against NFS startup and fail silently or hang.

### Solution — fstab with x-systemd.after

The loopback is established via `/etc/fstab` using the `x-systemd.after` option to explicitly declare the dependency on `nfs-server.service`:

```
localhost:/ /data/nfs/znas nfs4 defaults,_netdev,x-systemd.after=nfs-server.service 0 0
```

`x-systemd.after=nfs-server.service` causes systemd-fstab-generator to automatically create a mount unit (`data-nfs-znas.mount`) with `After=nfs-server.service` in its `[Unit]` block. This guarantees the full dependency chain:

```
zfs-import.target
→ zfs-mount.service
→ nfs-server.service (via nfs-server override.conf)
→ data-nfs-znas.mount (via x-systemd.after in fstab)
→ remote-fs.target
→ Docker Swarm containers
```

The generated unit (created automatically at runtime by systemd-fstab-generator — not a file on disk):

```ini
# /run/systemd/generator/data-nfs-znas.mount
[Unit]
Documentation=man:fstab(5) man:systemd-fstab-generator(8)
SourcePath=/etc/fstab
After=nfs-server.service
Before=remote-fs.target

[Mount]
What=localhost:/
Where=/data/nfs/znas
Type=nfs4
Options=defaults,_netdev,x-systemd.after=nfs-server.service
```

**Do not create a hand-written systemd mount unit for this.** systemd-fstab-generator handles it automatically from the fstab entry. A manual unit would conflict.
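
Rather than writing a unit by hand, inspect what the generator actually produced:

```bash
# Prints the generated /run/systemd/generator/data-nfs-znas.mount,
# including its SourcePath=/etc/fstab origin
systemctl cat data-nfs-znas.mount
```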

### Verify Loopback is Active

```bash
mount | grep data/nfs/znas
# Should show: localhost:/ on /data/nfs/znas type nfs4 (...)

systemctl status data-nfs-znas.mount
# Should show: active (mounted)
```

---

## Client Configuration

All non-Swarm clients are Linux systems using autofs.

### Autofs Configuration

`/etc/auto.master` (relevant entry):

```
/data/nfs /etc/auto.nfs
```

`/etc/auto.nfs`:

```
znas -fstype=nfs4 192.168.5.10:/
```

This mounts the full NFSv4 tree from ZNAS at `/data/nfs/znas` on demand — the same path used by the loopback mount on ZNAS itself. All swarm nodes (including ZNAS) access NAS storage via `/data/nfs/znas/`.

**Note:** Autofs must be enabled on clients and disabled on the NFS server. Running autofs on the server creates recursive mount loops.
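
A quick way to confirm which role a machine is in, assuming autofs is managed by systemd on both sides:

```bash
# Clients should report "enabled"; ZNAS itself should report "disabled"
systemctl is-enabled autofs
```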

### Adding a New Client

```bash
# Install autofs if not present
sudo apt install autofs

# Add to /etc/auto.master if not already present
echo "/data/nfs /etc/auto.nfs" | sudo tee -a /etc/auto.master

# Create or update /etc/auto.nfs
echo "znas -fstype=nfs4 192.168.5.10:/" | sudo tee -a /etc/auto.nfs

# Reload autofs
sudo systemctl reload autofs

# Trigger mount by accessing the path
ls /data/nfs/znas/
```

### Manual Mount (testing only)

```bash
# Verify exports are visible from client
showmount -e 192.168.5.10

# Test manual mount
sudo mkdir -p /mnt/znas
sudo mount -t nfs4 192.168.5.10:/ /mnt/znas

# Verify tree is accessible
ls /mnt/znas/Data/media/books/

# Unmount after testing
sudo umount /mnt/znas
```

---

## Adding New Datasets

When creating a new ZFS dataset that needs to be NFS-accessible:

```bash
# Create with the correct mountpoint from the start
sudo zfs create -o mountpoint=/export/Data/new_folder vault/Data/new_folder
```

The dataset will be automatically visible via NFS thanks to `crossmnt` on the parent export — no changes to `/etc/exports` are needed unless the new dataset requires different access controls.

If different permissions are required, add an explicit entry to `/etc/exports` and reload:

```bash
sudo exportfs -ra
```
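
As an illustration, such an entry might look like the following; the `fsid` value is hypothetical and must simply be unique across the exports:

```
/export/Data/new_folder *(fsid=60,rw,no_root_squash,no_subtree_check,insecure)
```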

---

## Current Export List

Verified via `showmount -e 127.0.0.1`:

```
/export/photos            *
/export/Green             *
/export/Docker            *
/export/Data/media/comics *
/export/Data/media/books  *
/export/Data              *
/export/Common            *
/export                   *
```

---

## Known Gotchas

**Loopback mount races NFS at boot** — This was the hardest problem to solve. A plain `_netdev` fstab entry only guarantees the network interface is up, not that the NFS server is ready to accept connections. The loopback mount would attempt before NFS finished starting and fail silently or hang. The fix is `x-systemd.after=nfs-server.service` in the fstab options, which causes systemd-fstab-generator to emit an `After=nfs-server.service` dependency in the generated mount unit. The full required boot chain is: `zfs-import.target` → `zfs-mount.service` → `nfs-server.service` → `data-nfs-znas.mount`. Each link must be explicit.

**Do not hand-write a systemd mount unit for the loopback** — systemd-fstab-generator creates `data-nfs-znas.mount` automatically from the fstab entry at runtime (in `/run/systemd/generator/`, not `/etc/systemd/system/`). Creating a manual unit in `/etc/systemd/system/` will conflict with the generated one.

**Autofs must be disabled on the server** — Running autofs on ZNAS itself creates a recursive mount loop. Autofs belongs on clients only. If autofs is accidentally re-enabled on ZNAS it will fight with the fstab loopback mount.

**NFSv4 pseudo-root is required** — The `/export` entry with `fsid=0` is mandatory for NFSv4 clients. Without it clients cannot enumerate the export tree. Do not remove it even though it looks redundant.

**`nohide` on sub-datasets** — `vault/Data/media_books` and `vault/Data/media_comics` are separate ZFS datasets mounted beneath the `vault/Data` export path. NFS does not cross filesystem boundaries by default. Without `nohide` clients see empty directories at those paths even though the data is present.

**Do not use bind mounts for ZFS datasets** — Configure ZFS mountpoints directly to `/export/*`. Bind mounts in fstab for ZFS datasets cause ordering problems and are unnecessary.

**Always set mountpoints when creating new datasets** — If a dataset is created without an explicit mountpoint it will inherit the parent's path and may not be visible or exportable correctly. Set `mountpoint=` at creation time.
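
If a dataset was created without an explicit mountpoint, it can be corrected after the fact; a minimal sketch using a hypothetical dataset name:

```bash
# Point an existing dataset at the intended export path
sudo zfs set mountpoint=/export/Data/new_folder vault/Data/new_folder
```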

---

## Troubleshooting

### Datasets not visible via NFS

```bash
# Verify dataset is mounted
zfs list | grep dataset_name

# Check NFS can read it
sudo -u nobody ls -la /export/path/to/dataset/

# Reload exports
sudo exportfs -ra
sudo systemctl restart nfs-server
```

### Client shows empty directories

```bash
# Clear NFS cache and remount
sudo umount -f /mnt/znas
sudo mount -t nfs4 192.168.5.10:/ /mnt/znas

# Test without caching to isolate the problem
sudo mount -t nfs4 -o noac,lookupcache=none 192.168.5.10:/ /mnt/znas
```

### After reboot, exports are empty

```bash
# Confirm ZFS mounted before NFS started
systemctl status zfs-mount.service
systemctl status nfs-server.service

# Confirm override is in place
systemctl cat nfs-server.service | grep -A5 "\[Unit\]"
```
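
If the ordering itself is in doubt, `systemd-analyze` can show what the NFS server actually waited on during the last boot:

```bash
# Chain of units nfs-server.service was ordered after, with timing
systemd-analyze critical-chain nfs-server.service
```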

### Loopback mount not working for Swarm containers

```bash
# Check mount unit status
systemctl status data-nfs-znas.mount

# Verify full dependency chain is satisfied
systemctl status zfs-mount.service
systemctl status nfs-server.service
systemctl status data-nfs-znas.mount

# Verify loopback is mounted
mount | grep data/nfs/znas

# If missing, mount manually to test
sudo mount -t nfs4 127.0.0.1:/ /data/nfs/znas

# Check container can see the path
docker run --rm -v /data/nfs/znas/Data:/data alpine ls /data
```

If the unit fails at boot, confirm the fstab entry includes `x-systemd.after=nfs-server.service` — without this the mount races against NFS startup and loses. A plain `_netdev` entry is not sufficient.
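
The boot-time failure itself can be inspected from the journal:

```bash
# Logs from the generated mount unit for the current boot
journalctl -b -u data-nfs-znas.mount
```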

---

## Configuration Files Reference

### /etc/exports

```
/export *(ro,fsid=0,no_root_squash,no_subtree_check,crossmnt)
/export/Common *(fsid=4,rw,no_subtree_check,insecure)
/export/Data *(fsid=5,rw,no_subtree_check,insecure,crossmnt)
/export/Data/media/books *(fsid=51,rw,no_subtree_check,insecure,nohide)
/export/Data/media/comics *(fsid=52,rw,no_subtree_check,insecure,nohide)
/export/Docker *(fsid=29,rw,no_root_squash,sync,no_subtree_check,insecure)
/export/Green *(fsid=30,rw,no_root_squash,no_subtree_check,insecure)
/export/photos *(fsid=31,rw,no_root_squash,no_subtree_check,insecure)
```

### /etc/systemd/system/nfs-server.service.d/override.conf

```ini
[Unit]
After=zfs-import.target zfs-mount.service local-fs.target
Requires=zfs-import.target zfs-mount.service
```

### /etc/fstab (ZNAS system mounts only)

ZFS datasets are not listed here — ZFS handles its own mounting. Apart from the loopback NFS entry documented above, only system partitions appear:

```
# / - btrfs on nvme0n1p2
/dev/disk/by-uuid/40c60952-0340-4a78-81f9-5b2193da26c6 / btrfs defaults 0 1
# /boot - ext4 on nvme0n1p3
/dev/disk/by-uuid/4abb4efa-0b2b-4e4a-bcaf-78227db4628f /boot ext4 defaults 0 1
# swap
/dev/disk/by-uuid/d07437a0-3d0e-417a-a88e-438c603c2237 none swap sw 0 0
# /srv - btrfs on nvme0n1p5
/dev/disk/by-uuid/c66e81ff-436e-4d6f-980b-6f4875ea7c8e /srv btrfs defaults 0 1
```

---

## Command Reference

- Show active exports: `sudo exportfs -v`
- Reload exports: `sudo exportfs -ra`
- Show available exports (from any host): `showmount -e 192.168.5.10`
- Restart NFS: `sudo systemctl restart nfs-server`
- Check NFS status: `systemctl status nfs-server`
- Verify ZFS mounts: `mount | grep export`
- Verify loopback: `mount | grep data/nfs`

Vault-Grimoire/ZFS/Storage-Layout.md (new file, 239 lines)

---
title: Netgrimoire Storage
description: Where is it at
published: true
date: 2026-02-23T18:38:27.621Z
tags:
editor: markdown
dateCreated: 2026-01-22T21:10:37.035Z
---

# NAS Storage Layout

## Overview

ZNAS is the primary NAS for Netgrimoire. It runs Ubuntu with OpenZFS and serves as the source of truth for all storage, including datasets that replicate out to the Pocket Grimoire portable system.

The system mounts everything under `/export/` for NFS sharing, with select datasets mounted under `/srv/` for local service consumption (Immich, NextCloud-AIO, Kopia, backup).

## ZFS Pools

- `vault` — primary NAS storage, RAIDZ1×2, 8 drives
- `greenpg` — Pocket Grimoire GREEN SSD (Kanguru UltraLock), docked for sync when present

## Zpool Architecture

```
  pool: vault
 state: ONLINE
  scan: scrub repaired 0B in 2 days 10:24:08 with 0 errors on Tue Feb 10 10:48:10 2026

config:
        NAME                                    STATE     READ WRITE CKSUM
        vault                                   ONLINE       0     0     0
          raidz1-0                              ONLINE       0     0     0
            ata-ST24000DM001-3Y7103_ZXA06K45    ONLINE       0     0     0
            ata-ST24000DM001-3Y7103_ZXA08CVY    ONLINE       0     0     0
            ata-ST24000DM001-3Y7103_ZXA0FP10    ONLINE       0     0     0
          raidz1-1                              ONLINE       0     0     0
            ata-ST16000NE000-2RW103_ZL2Q3275    ONLINE       0     0     0
            ata-ST16000NM001G-2KK103_ZL26R5XW   ONLINE       0     0     0
            ata-ST16000NT001-3LV101_ZRS0KVQW    ONLINE       0     0     0
            ata-WDC_WD140EDFZ-11A0VA0_9MG81N0J  ONLINE       0     0     0
            ata-WDC_WD140EDFZ-11A0VA0_Y5J35Z6C  ONLINE       0     0     0

errors: No known data errors
```

`raidz1-0` is 3× Seagate 24TB (~48TB usable). `raidz1-1` is 3× Seagate 16TB + 2× WD 14TB (~56TB usable — the 14TB drives are the limiting factor per stripe, leaving ~2TB/drive unused on the 16TB drives). Total pool: roughly 94T of usable capacity as reported by `zfs list` (used plus available), with 39T currently free.
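
Per-vdev capacity (and where the 14TB ceiling shows up) can be checked directly:

```bash
# Breaks SIZE/ALLOC/FREE down per vdev and per member disk
zpool list -v vault
```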

```
  pool: greenpg
 state: ONLINE

config:
        NAME                                        STATE     READ WRITE CKSUM
        greenpg                                     ONLINE       0     0     0
          scsi-1Kanguru_UltraLock_DB090722NC10001   ONLINE       0     0     0

errors: No known data errors
```

`greenpg` is a portable pool. Export it before physically moving to Pocket Grimoire.

## ZFS Datasets

| Dataset | Mountpoint | Used | Avail | Refer | Quota | Compress ratio | Purpose |
|---------|------------|------|-------|-------|-------|----------------|---------|
| `vault` | `/export` | 55.3T | 39.0T | 771G | none | 1.00x | Pool root / NFSv4 pseudo-root |
| `vault/Common` | `/export/Common` | 214G | 39.0T | 214G | none | 1.06x | General shared storage |
| `vault/Data` | `/export/Data` | 38.4T | 39.0T | 36.4T | none | 1.00x | Primary data — 36.4T lives directly in dataset root |
| `vault/Data/media_books` | `/export/Data/media/books` | 925G | 39.0T | 925G | none | 1.03x | Book library |
| `vault/Data/media_comics` | `/export/Data/media/comics` | 1.15T | 39.0T | 1.15T | none | 1.00x | Comic library |
| `vault/Green` | `/export/Green` | 14.7T | 5.31T | 9.66T | 20T | 1.00x | Personal media — 9.66T direct, 5.02T in Pocket child |
| `vault/Green/Pocket` | `/export/Green/Pocket` | 5.02T | 2.48T | 5.02T | 7.5T | 1.00x | Pocket Grimoire replication source |
| `vault/Kopia` | `/srv/vault/kopia_repository` | 349G | 39.0T | 349G | none | 1.02x | Kopia backup repository |
| `vault/NextCloud-AIO` | `/srv/NextCloud-AIO` | 341G | 39.0T | 341G | none | 1.01x | NextCloud data |
| `vault/Photos` | `/export/Photos` | 135K | 39.0T | 135K | none | 1.00x | Photos (sparse — see notes) |
| `vault/backup` | `/srv/vault/backup` | 442G | 582G | 442G | 1T | 1.00x | Local system backups |
| `vault/docker` | `/export/Docker` | 22.2G | 39.0T | 22.2G | none | 1.13x | Docker volumes |
| `vault/immich` | `/srv/immich` | 117G | 39.0T | 117G | none | 1.03x | Immich photo service data |
| `greenpg` | `/greenpg` | 2.94T | 4.20T | 96K | — | 1.00x | GREEN SSD pool root (portable) |
| `greenpg/Pocket` | `/greenpg/Pocket` | 2.94T | 4.20T | 2.94T | — | 1.00x | Personal media + Stash data |

**Notes on specific datasets:**

`vault/Data` — 36.4T lives directly in the dataset root at `/export/Data/`. `media_books` and `media_comics` are the only child datasets and account for ~2T combined. The remaining ~36T is general data stored directly under the parent.

`vault/Green` — 9.66T lives directly in `/export/Green/` with the remaining 5.02T in the `Pocket` child dataset. The 20T quota caps total Green growth. `vault/Green/Pocket` has its own 7.5T sub-quota.

`vault/Photos` — nearly empty (135K). Photos are primarily managed through Immich at `vault/immich`. This dataset may be vestigial or reserved for future use.

`vault/backup` — has a hard 1T quota. Unlike other vault datasets, which draw from the full 39T pool availability, this dataset is capped. Current usage is 442G with 582G remaining.

Compression ratios are near 1.00x across most datasets because the content is already compressed (media files, binary data). `vault/docker` (1.13x) and `vault/Common` (1.06x) see modest gains from compressible config and text data.
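
The same figures can be pulled live rather than read from this table:

```bash
# Space, quota, and compression ratio for every dataset in the pool
zfs list -r -o name,used,avail,refer,quota,compressratio vault
```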

## NFS Exports

All exports use NFSv4 with `/export` as the pseudo-filesystem root (`fsid=0`).

| Export | fsid | Options | Notes |
|--------|------|---------|-------|
| `/export` | 0 | `ro, no_root_squash, no_subtree_check, crossmnt` | NFSv4 pseudo-root — required for v4 clients |
| `/export/Common` | 4 | `rw, no_subtree_check, insecure` | General access |
| `/export/Data` | 5 | `rw, no_subtree_check, insecure, crossmnt` | Data root |
| `/export/Data/media/books` | 51 | `rw, no_subtree_check, insecure, nohide` | Separate ZFS dataset — needs `nohide` |
| `/export/Data/media/comics` | 52 | `rw, no_subtree_check, insecure, nohide` | Separate ZFS dataset — needs `nohide` |
| `/export/Docker` | 29 | `rw, no_root_squash, sync, no_subtree_check, insecure` | Container volumes |
| `/export/Green` | 30 | `rw, no_root_squash, no_subtree_check, insecure` | Personal media + Pocket Grimoire source |
| `/export/photos` | 31 | `rw, no_root_squash, no_subtree_check, insecure` | Photos |

Current `/etc/exports`:

```
/export *(ro,fsid=0,no_root_squash,no_subtree_check,crossmnt)
/export/Common *(fsid=4,rw,no_subtree_check,insecure)
/export/Data *(fsid=5,rw,no_subtree_check,insecure,crossmnt)
/export/Data/media/books *(fsid=51,rw,no_subtree_check,insecure,nohide)
/export/Data/media/comics *(fsid=52,rw,no_subtree_check,insecure,nohide)
/export/Docker *(fsid=29,rw,no_root_squash,sync,no_subtree_check,insecure)
/export/Green *(fsid=30,rw,no_root_squash,no_subtree_check,insecure)
/export/photos *(fsid=31,rw,no_root_squash,no_subtree_check,insecure)
```

There is also an active loopback NFSv4 mount on the system itself:

```
localhost:/ → /data/nfs/znas (NFSv4.2, rsize/wsize=1M)
```

## SMB Shares

*(To be documented.)*

## Standard Paths

- `/export/` — NFS root (vault pool root)
- `/export/Data/` — primary data
- `/export/Data/media/books/` — book library
- `/export/Data/media/comics/` — comic library
- `/export/Green/` — personal media
- `/export/Green/Pocket/` — Pocket Grimoire replication source
- `/export/Docker/` — container volumes
- `/export/Photos/` — photos
- `/srv/immich/` — Immich service data
- `/srv/NextCloud-AIO/` — NextCloud data
- `/srv/vault/kopia_repository/` — Kopia backup repo
- `/srv/vault/backup/` — local system backups
- `/greenpg/Pocket/` — GREEN SSD when docked for sync

## Permissions & UID/GID Model

*(To be documented — dockhand UID 1964, container access rules.)*

## Services Using Local Mounts

These datasets are consumed directly by services on ZNAS and are not NFS-exported:

| Service | Dataset | Mountpoint |
|---------|---------|------------|
| Immich | `vault/immich` | `/srv/immich` |
| NextCloud-AIO | `vault/NextCloud-AIO` | `/srv/NextCloud-AIO` |
| Kopia | `vault/Kopia` | `/srv/vault/kopia_repository` |
| Local backup | `vault/backup` | `/srv/vault/backup` |

## Pocket Grimoire Integration

`vault/Green/Pocket` is the replication source for the Pocket Grimoire GREEN SSD (`greenpg`). It contains personal media and Stash application data (database, previews, blobs). See the Pocket Grimoire deployment guide for full procedures.

**Fast resync when GREEN SSD is physically docked on ZNAS:**

```bash
# Check pool name (retains whatever name it had when last exported)
zpool list | grep greenpg

# Import if needed
sudo zpool import greenpg
sudo zfs load-key greenpg
sudo zfs mount -a

# Sync
sudo syncoid vault/Green/Pocket greenpg/Pocket

# Export before physically disconnecting — always do this
sudo zfs unmount greenpg/Pocket
sudo zfs unmount greenpg
sudo zpool export greenpg
```

**Network sync** runs automatically on Pocket Grimoire via a 6-hour syncoid systemd timer when connected over the network.
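
To confirm the timer is actually scheduled on Pocket Grimoire (assuming the unit name contains "syncoid"):

```bash
# Shows next and last activation of the replication timer
systemctl list-timers | grep -i syncoid
```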

## Backup & Snapshot Strategy

**Snapshots:**

```bash
# Manual pre-change snapshot
zfs snapshot vault/Docker@before-upgrade

# List all snapshots
zfs list -t snapshot

# List snapshots for a specific dataset
zfs list -t snapshot -r vault/Green
```

**Kopia:** Repository at `vault/Kopia` → `/srv/vault/kopia_repository`. *(Document snapshot policy and sources.)*

**Replication:** `vault/Green/Pocket` → `greenpg/Pocket` via syncoid. See Pocket Grimoire Integration above.

## Known Gotchas

**NFSv4 pseudo-root** — The `/export` entry with `fsid=0` is required for NFSv4 clients to enumerate subdirectories. Do not remove it even if it appears redundant.

**`nohide` on sub-datasets** — `vault/Data/media_books` and `vault/Data/media_comics` are separate ZFS datasets mounted beneath the `vault/Data` export path. NFS does not cross filesystem boundaries by default. Without `nohide` clients see empty directories at those paths.

**`vault/backup` quota** — This dataset has a hard 1T quota and does not share the general pool availability. Current headroom is ~582G. Monitor before large backup operations.

**`vault/Green` quota** — Capped at 20T total with a 7.5T sub-quota on `vault/Green/Pocket`. The GREEN SSD itself is ~7TB, so the sub-quota is the effective ceiling for the Pocket sync.

**raidz1-1 mixed drive sizes** — The three 16TB drives in raidz1-1 have ~2TB/drive going unused because RAIDZ1 stripes are limited by the smallest drive in the VDEV (14TB WDs). This capacity is permanently unavailable unless the VDEV is rebuilt.

**Kanguru UltraLock hardware encryption** — The GREEN SSD has hardware-level PIN protection in addition to ZFS encryption. The drive must be hardware-unlocked before `zpool import` will see it.

**Always export `greenpg` before disconnecting** — Export flushes writes and marks the pool clean. Pulling the drive without exporting risks a dirty import on next use.

**`vault/Data` root usage** — 36.4T lives directly in `/export/Data/` rather than in child datasets. This is normal for this setup but means `zfs list` on the parent alone shows the full usage without a breakdown.

## Command Reference

- Health: `zpool status`
- Space available to pool: `zpool list`
- Space available to datasets: `zfs list`
- Dataset configuration: `zfs get -r compression,dedup,recordsize,atime,quota,reservation vault`
- Create a snapshot: `zfs snapshot vault/Docker@before-upgrade`
- List snapshots: `zfs list -t snapshot`
- Reload NFS exports: `sudo exportfs -ra`
- Show active NFS exports: `sudo exportfs -v`
- Run a scrub: `sudo zpool scrub vault`
- Sync GREEN SSD: `sudo syncoid vault/Green/Pocket greenpg/Pocket`

Vault-Grimoire/ZFS/ZFS-Commands.md (new file, 168 lines)

---
title: ZFS Common Commands
description: ZFS Commands
published: true
date: 2026-02-20T04:26:23.798Z
tags: zfs commands
editor: markdown
dateCreated: 2026-01-31T15:23:07.585Z
---

# ZFS Essential Commands Cheat Sheet

---

## Pool Health & Status

```bash
zpool status
zpool status -v
zpool list
```

## Dataset Space & Usage

```bash
zfs list
zfs list -r vault
zfs list -o name,used,avail,refer,logicalused,compressratio
zfs list -r -o name,used,avail,refer,quota,reservation vault
```

## Dataset Properties & Settings

```bash
zfs get all vault/dataset
zfs get -r compression,dedup,recordsize,atime,quota,reservation vault
zfs get -r compression,dedup,recordsize,encryption,keylocation,keyformat,snapdir vault
zfs get -s local -r all vault                      # only locally-set (non-inherited) properties
zfs get quota,refquota,reservation,refreservation -r vault
```

## Mount Encrypted Dataset

```bash
zfs load-key vault/Green/Pocket
zfs mount vault/Green/Pocket
```

## Pool I/O & Performance Monitoring

```bash
zpool iostat -v 1
arcstat 1
cat /proc/spl/kstat/zfs/arcstats
```

## Scrubs & Data Integrity

```bash
zpool scrub vault        # start a scrub
zpool scrub -s vault     # stop a running scrub
zpool status             # check scrub progress
```

## Snapshots

```bash
zfs snapshot vault/dataset@snapname
zfs list -t snapshot
zfs rollback vault/dataset@snapname
zfs clone vault/dataset@snapname vault/dataset-clone
```

## Replication (Send / Receive)

```bash
zfs send vault/dataset@snap1 | zfs receive backup/dataset             # full send
zfs send -i snap1 vault/dataset@snap2 | zfs receive backup/dataset    # incremental send
zfs send -nv vault/dataset@snap1                                      # dry run, estimate size
```

## Dataset Tuning (Live-Safe Changes)

```bash
zfs set compression=lz4 vault/dataset
zfs set recordsize=1M vault/dataset
zfs set atime=off vault/dataset
zfs set dedup=on vault/dataset
```

## Encryption Management

```bash
zfs get encryption,keylocation,keystatus vault/dataset
zfs unload-key vault/dataset
zfs load-key vault/dataset
```

## Disk Preparation & Cleanup

```bash
wipefs /dev/sdX                # list existing filesystem signatures
wipefs -a /dev/sdX             # wipe all signatures
zpool labelclear -f /dev/sdX   # clear old ZFS labels
sgdisk --zap-all /dev/sdX      # destroy GPT/MBR partition tables
lsblk -f /dev/sdX              # verify the disk is clean
```

## Pool Expansion (Add VDEV)

```bash
zpool add vault raidz2 \
  /dev/disk/by-id/disk1 \
  /dev/disk/by-id/disk2 \
  /dev/disk/by-id/disk3 \
  /dev/disk/by-id/disk4 \
  /dev/disk/by-id/disk5
```
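
A dry run is worth doing first; `-n` prints the layout the pool would end up with without changing anything:

```bash
# Same command with -n: shows the resulting configuration, modifies nothing
zpool add -n vault raidz2 /dev/disk/by-id/disk1 /dev/disk/by-id/disk2 /dev/disk/by-id/disk3 /dev/disk/by-id/disk4 /dev/disk/by-id/disk5
```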

## Pool Import / Recovery

```bash
zpool import                        # scan for importable pools
zpool import vault
zpool import -f vault               # force import (pool not cleanly exported)
zpool import -o readonly=on vault   # import read-only for recovery
```

## Locks, Holds & History

```bash
zfs holds -r vault
zpool history
zfs diff vault/dataset@snap1 vault/dataset@snap2
```

## Deduplication & Compression Stats

```bash
zpool list -v
zdb -DD vault
```

## Inventory / Documentation Dumps

```bash
zpool status > zpool-status.txt
zfs list -r > zfs-layout.txt
zfs get -r all vault > zfs-settings.txt
```

## Top 10 Must-Know Commands

```bash
zpool status
zpool list
zpool iostat -v 1
zpool scrub vault
zfs list
zfs get all vault/dataset
zfs snapshot vault/dataset@snap
zfs rollback vault/dataset@snap
zfs send | zfs receive
arcstat 1
```