prep for new grimoire

This commit is contained in:
traveler 2026-04-12 09:39:57 -05:00
parent a72eb28f9e
commit 2aff30ab71
165 changed files with 0 additions and 0 deletions

---
title: Setting Up Kopia
description:
published: true
date: 2026-02-20T04:27:59.823Z
tags:
editor: markdown
dateCreated: 2026-01-23T22:14:17.009Z
---
# Kopia Backup System Documentation
## Overview
This system implements a two-tier backup strategy using **two separate Kopia Server instances**:
1. **Primary Repository** (`/srv/vault/kopia_repository`) - Full backups of all clients, served on port 51515
2. **Vault Repository** (`/srv/vault/backup`) - Targeted critical data backups, served on port 51516, replicated offsite via ZFS send/receive
The Vault repository sits on its own ZFS dataset to enable clean replication to offsite Pi systems. Running two separate Kopia servers allows independent management of each repository while maintaining the same HTTPS-based client connection model for both.
---
## Architecture
```
Clients (docker2, cindy's desktop, etc.)
   │
   ├─→ Primary Backup → Kopia Server Primary (port 51515)
   │                        → /srv/vault/kopia_repository (all data)
   │
   └─→ Vault Backup   → Kopia Server Vault (port 51516)
                            → /srv/vault/backup (critical data only)
                                     │
                              ZFS Send/Receive
                             ┌───────┴───────┐
                             ↓               ↓
                        Pi Vault 1      Pi Vault 2
                        (offsite)       (offsite)
```
---
## Initial Setup on ZNAS
### Prerequisites
- Docker installed on ZNAS
- ZFS pool available
### 1. Create ZFS Datasets
```bash
# Primary repository dataset (if not already created)
zfs create -o mountpoint=/srv/vault zpool/vault
zfs create zpool/vault/kopia_repository
# Vault repository dataset (for offsite replication)
zfs create zpool/vault/backup
```
### 2. Install Kopia Servers (Docker)
We run **two separate Kopia Server containers** - one for primary backups, one for vault backups.
```bash
# Primary repository server (port 51515)
docker run -d \
    --name kopia-server-primary \
    --restart unless-stopped \
    -p 51515:51515 \
    -v /srv/vault/kopia_repository:/app/repository \
    -v /srv/vault/config-primary:/app/config \
    -v /srv/vault/logs-primary:/app/logs \
    kopia/kopia:latest server start \
    --address=0.0.0.0:51515 \
    --tls-generate-cert

# Vault repository server (port 51516)
docker run -d \
    --name kopia-server-vault \
    --restart unless-stopped \
    -p 51516:51516 \
    -v /srv/vault/backup:/app/repository \
    -v /srv/vault/config-vault:/app/config \
    -v /srv/vault/logs-vault:/app/logs \
    kopia/kopia:latest server start \
    --address=0.0.0.0:51516 \
    --tls-generate-cert
```
**Get the certificate fingerprints:**
```bash
# Primary server fingerprint
docker exec kopia-server-primary kopia server status
# Vault server fingerprint
docker exec kopia-server-vault kopia server status
```
**Note:** Record both certificate fingerprints - you'll need them for client connections.
- **Primary server cert SHA256:** `696a4999f594b5273a174fd7cab677d8dd1628f9b9d27e557daa87103ee064b2`
- **Vault server cert SHA256:** *(get from command above)*
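Note that `--server-cert-fingerprint` expects the digest as bare lowercase hex. `kopia server status` prints it in that form already, but if you pull the fingerprint with `openssl` instead, it comes out colon-separated and uppercase. A small sketch of the normalization (`normalize_fingerprint` is an illustrative helper, not part of kopia):

```bash
# Convert an openssl-style fingerprint line into the bare lowercase hex
# that kopia's --server-cert-fingerprint flag expects.
normalize_fingerprint() {
    # Drop everything up to '=', remove colons, lowercase
    echo "$1" | sed 's/^.*=//' | tr -d ':' | tr 'A-Z' 'a-z'
}

normalize_fingerprint "SHA256 Fingerprint=69:6A:49:99"   # → 696a4999
```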
### 3. Create Kopia Repositories
Each server manages its own repository. These are created during first server start, but you can initialize them manually if needed.
```bash
# Primary repository (usually created via GUI on first use)
docker exec -it kopia-server-primary kopia repository create filesystem \
--path=/app/repository \
--description="Primary backup repository"
# Vault repository
docker exec -it kopia-server-vault kopia repository create filesystem \
--path=/app/repository \
--description="Vault backup repository for offsite replication"
```
**Note:** If you created the primary repository via the Kopia UI, you don't need to run the first command.
### 4. Create User Accounts
Create users on each server separately.
**Primary repository users:**
```bash
# Enter primary server container
docker exec -it kopia-server-primary /bin/sh
# Create users
kopia server users add admin@docker2
kopia server users add cindy@DESKTOP-QLSVD8P
# Password for cindy: LucyDog123
# Exit container
exit
```
**Vault repository users:**
```bash
# Enter vault server container
docker exec -it kopia-server-vault /bin/sh
# Create users
kopia server users add admin@docker2-vault
kopia server users add cindy@DESKTOP-QLSVD8P-vault
# Use same passwords or different based on security requirements
# Exit container
exit
```
---
## Client Configuration
### Linux Client (docker2)
#### Primary Backup Setup
1. **Install Kopia**
```bash
# Download and install kopia .deb package
wget https://github.com/kopia/kopia/releases/download/v0.XX.X/kopia_0.XX.X_amd64.deb
sudo dpkg -i kopia_0.XX.X_amd64.deb
```
2. **Remove old repository (if exists)**
```bash
sudo kopia repository disconnect || true
sudo rm -rf /root/.config/kopia
```
3. **Connect to primary repository**
```bash
sudo kopia repository connect server \
--url=https://192.168.5.10:51515 \
--override-username=admin@docker2 \
--server-cert-fingerprint=696a4999f594b5273a174fd7cab677d8dd1628f9b9d27e557daa87103ee064b2
```
4. **Create initial snapshot**
```bash
sudo kopia snapshot create /DockerVol/
```
5. **Set up cron job for primary backups**
```bash
sudo crontab -e
# Add this line (runs every 3 hours, at the top of the hour)
0 */3 * * * /usr/bin/kopia snapshot create /DockerVol >> /var/log/kopia-primary-cron.log 2>&1
```
#### Vault Backup Setup (Critical Data)
1. **Create secondary kopia config directory**
```bash
sudo mkdir -p /root/.config/kopia-vault
```
2. **Connect to vault repository**
```bash
sudo kopia --config-file=/root/.config/kopia-vault/repository.config \
repository connect server \
--url=https://192.168.5.10:51516 \
--override-username=admin@docker2-vault \
--server-cert-fingerprint=<VAULT_SERVER_CERT_FINGERPRINT>
```
**Note:** Replace `<VAULT_SERVER_CERT_FINGERPRINT>` with the actual fingerprint from the vault server (see setup section).
3. **Create vault backup script**
```bash
sudo nano /usr/local/bin/kopia-vault-backup.sh
```
Add this content:
```bash
#!/bin/bash
# Kopia Vault Backup Script
# Backs up critical data to vault repository for offsite replication

KOPIA_CONFIG="/root/.config/kopia-vault/repository.config"
LOG_FILE="/var/log/kopia-vault-cron.log"

# Add your critical directories here
VAULT_DIRS=(
    "/DockerVol/critical-app1"
    "/DockerVol/critical-app2"
    "/home/admin/documents"
)

echo "=== Vault backup started at $(date) ===" >> "$LOG_FILE"

for dir in "${VAULT_DIRS[@]}"; do
    if [ -d "$dir" ]; then
        echo "Backing up: $dir" >> "$LOG_FILE"
        /usr/bin/kopia --config-file="$KOPIA_CONFIG" snapshot create "$dir" >> "$LOG_FILE" 2>&1
    else
        echo "Directory not found: $dir" >> "$LOG_FILE"
    fi
done

echo "=== Vault backup completed at $(date) ===" >> "$LOG_FILE"
echo "" >> "$LOG_FILE"
```
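Before pointing the script at kopia, its control flow can be sanity-checked with a throwaway dry run — the same loop, with `echo` standing in for the kopia call (the paths here are temporary examples, not real backup sources):

```bash
# Miniature dry run of the vault script's loop: one directory that exists,
# one that doesn't, with echo standing in for the kopia invocation.
LOG_FILE=$(mktemp)
EXISTING_DIR=$(mktemp -d)
VAULT_DIRS=("$EXISTING_DIR" "/no/such/dir")

for dir in "${VAULT_DIRS[@]}"; do
    if [ -d "$dir" ]; then
        echo "Backing up: $dir" >> "$LOG_FILE"        # real script runs kopia here
    else
        echo "Directory not found: $dir" >> "$LOG_FILE"
    fi
done

cat "$LOG_FILE"
```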
4. **Make script executable**
```bash
sudo chmod +x /usr/local/bin/kopia-vault-backup.sh
```
5. **Set up cron job for vault backups**
```bash
sudo crontab -e
# Add this line (runs daily at 3 AM)
0 3 * * * /usr/local/bin/kopia-vault-backup.sh
```
---
### Windows Client (Cindy's Desktop)
#### Primary Backup Setup
1. **Install Kopia**
```powershell
# Using winget
winget install kopia
```
2. **Connect to primary repository**
```powershell
kopia repository connect server `
--url=https://192.168.5.10:51515 `
--override-username=cindy@DESKTOP-QLSVD8P `
--server-cert-fingerprint=696a4999f594b5273a174fd7cab677d8dd1628f9b9d27e557daa87103ee064b2
```
3. **Create initial snapshot**
```powershell
kopia snapshot create C:\Users\cindy
```
4. **Set exclusion policy**
```powershell
kopia policy set `
--global `
--add-ignore "**\AppData\Local\Temp\**" `
--add-ignore "**\AppData\Local\Packages\**"
```
5. **Create primary backup script**
```powershell
# Create scripts and log folders (the script below writes to C:\Logs)
New-Item -ItemType Directory -Force -Path C:\Scripts
New-Item -ItemType Directory -Force -Path C:\Logs
# Create backup script
New-Item -ItemType File -Path C:\Scripts\kopia-primary-nightly.ps1
```
Add this content to `C:\Scripts\kopia-primary-nightly.ps1`:
```powershell
# Kopia Primary Backup Script

# Repository password
$env:KOPIA_PASSWORD = "LucyDog123"

# Run backup with logging
kopia snapshot create C:\Users\cindy `
    --progress `
    | Tee-Object -FilePath C:\Logs\kopia-primary.log -Append

# Log completion
Add-Content -Path C:\Logs\kopia-primary.log -Value "Backup completed at $(Get-Date)"
Add-Content -Path C:\Logs\kopia-primary.log -Value "---"
```
6. **Secure the script**
- Right-click `C:\Scripts\kopia-primary-nightly.ps1` → Properties → Security
- Ensure only Cindy's user account has read access
7. **Create scheduled task for primary backup**
- Press `Win + R` → type `taskschd.msc`
- Click "Create Task" (not "Basic Task")
**General tab:**
- Name: `Kopia Primary Nightly Backup`
- ✔ Run whether user is logged on or not
- ✔ Run with highest privileges
- Configure for: Windows 10/11
**Triggers tab:**
- New → Daily at 2:00 AM
- ✔ Enabled
**Actions tab:**
- Program: `powershell.exe`
- Arguments: `-ExecutionPolicy Bypass -File C:\Scripts\kopia-primary-nightly.ps1`
- Start in: `C:\Scripts`
**Conditions tab:**
- ✔ Wake the computer to run this task
- ✔ Start only if on AC power (recommended for laptops)
**Settings tab:**
- ✔ Allow task to be run on demand
- ✔ Run task as soon as possible after scheduled start is missed
- ❌ Stop the task if it runs longer than...
**Note:** When prompted for the scheduled task credential, use the Microsoft account password (Harvey123=), not the Windows PIN — Task Scheduler cannot authenticate with a PIN.
#### Vault Backup Setup (Critical Data)
1. **Create vault config directory**
```powershell
New-Item -ItemType Directory -Force -Path C:\Users\cindy\.config\kopia-vault
```
2. **Connect to vault repository**
```powershell
kopia --config-file="C:\Users\cindy\.config\kopia-vault\repository.config" `
repository connect server `
--url=https://192.168.5.10:51516 `
--override-username=cindy@DESKTOP-QLSVD8P-vault `
--server-cert-fingerprint=<VAULT_SERVER_CERT_FINGERPRINT>
```
**Note:** Replace `<VAULT_SERVER_CERT_FINGERPRINT>` with the actual fingerprint from the vault server.
3. **Create vault backup script**
```powershell
New-Item -ItemType File -Path C:\Scripts\kopia-vault-nightly.ps1
```
Add this content to `C:\Scripts\kopia-vault-nightly.ps1`:
```powershell
# Kopia Vault Backup Script
# Backs up critical data to vault repository for offsite replication
$env:KOPIA_PASSWORD = "LucyDog123"
$KOPIA_CONFIG = "C:\Users\cindy\.config\kopia-vault\repository.config"

# Define critical directories to back up
$VaultDirs = @(
    "C:\Users\cindy\Documents",
    "C:\Users\cindy\Pictures",
    "C:\Users\cindy\Desktop\Important"
)

# Log header
Add-Content -Path C:\Logs\kopia-vault.log -Value "=== Vault backup started at $(Get-Date) ==="

# Backup each directory
foreach ($dir in $VaultDirs) {
    if (Test-Path $dir) {
        Add-Content -Path C:\Logs\kopia-vault.log -Value "Backing up: $dir"
        kopia --config-file="$KOPIA_CONFIG" snapshot create $dir `
            | Tee-Object -FilePath C:\Logs\kopia-vault.log -Append
    } else {
        Add-Content -Path C:\Logs\kopia-vault.log -Value "Directory not found: $dir"
    }
}

# Log completion
Add-Content -Path C:\Logs\kopia-vault.log -Value "=== Vault backup completed at $(Get-Date) ==="
Add-Content -Path C:\Logs\kopia-vault.log -Value ""
```
4. **Create log directory**
```powershell
New-Item -ItemType Directory -Force -Path C:\Logs
```
5. **Create scheduled task for vault backup**
- Press `Win + R` → type `taskschd.msc`
- Click "Create Task"
**General tab:**
- Name: `Kopia Vault Nightly Backup`
- ✔ Run whether user is logged on or not
- ✔ Run with highest privileges
**Triggers tab:**
- New → Daily at 3:00 AM (after primary backup)
- ✔ Enabled
**Actions tab:**
- Program: `powershell.exe`
- Arguments: `-ExecutionPolicy Bypass -File C:\Scripts\kopia-vault-nightly.ps1`
- Start in: `C:\Scripts`
**Conditions/Settings:** Same as primary backup task
---
## ZFS Replication to Offsite Pi Vaults
### Setup on ZNAS (Source)
1. **Create snapshot script**
```bash
sudo nano /usr/local/bin/vault-snapshot.sh
```
Add this content:
```bash
#!/bin/bash
# Create ZFS snapshot of vault dataset for replication

DATASET="zpool/vault/backup"
SNAPSHOT_NAME="vault-$(date +%Y%m%d-%H%M%S)"

# Create snapshot
zfs snapshot "${DATASET}@${SNAPSHOT_NAME}"

# Keep only the newest 7 snapshots on the source (one week at the daily cadence)
zfs list -t snapshot -o name -s creation | grep "^${DATASET}@vault-" | head -n -7 | xargs -r -n 1 zfs destroy

echo "Created snapshot: ${DATASET}@${SNAPSHOT_NAME}"
```
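The pruning pipeline works because `zfs list -s creation` sorts oldest-first, so GNU `head -n -7` emits everything *except* the newest 7 snapshots — exactly the set to destroy. A synthetic demo of that selection (no zfs involved):

```bash
# 10 snapshot names, oldest first, mimicking 'zfs list -s creation' output
snapshots=$(printf 'zpool/vault/backup@vault-%02d\n' $(seq 1 10))

# Everything except the newest 7 -- i.e. the 3 oldest -- would be destroyed
to_destroy=$(echo "$snapshots" | head -n -7)
echo "$to_destroy"
```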
2. **Make executable**
```bash
sudo chmod +x /usr/local/bin/vault-snapshot.sh
```
3. **Schedule snapshot creation**
```bash
sudo crontab -e
# Add this line (create snapshot daily at 4 AM, after vault backups complete)
0 4 * * * /usr/local/bin/vault-snapshot.sh >> /var/log/vault-snapshot.log 2>&1
```
4. **Create replication script**
```bash
sudo nano /usr/local/bin/vault-replicate.sh
```
Add this content:
```bash
#!/bin/bash
# Replicate vault dataset to offsite Pi systems

DATASET="zpool/vault/backup"
PI1_HOST="pi-vault-1.local"          # Update with actual hostname/IP
PI2_HOST="pi-vault-2.local"          # Update with actual hostname/IP
PI_USER="admin"
REMOTE_DATASET="tank/vault-backup"   # Update with actual dataset on Pi

# Get the latest snapshot
LATEST_SNAP=$(zfs list -t snapshot -o name -s creation | grep "^${DATASET}@vault-" | tail -n 1)

if [ -z "$LATEST_SNAP" ]; then
    echo "No snapshots found for replication"
    exit 1
fi

echo "Replicating snapshot: $LATEST_SNAP"

# Function to replicate to a target
replicate_to_target() {
    local TARGET_HOST=$1
    echo "=== Replicating to $TARGET_HOST ==="

    # Get the last snapshot on remote (if any)
    LAST_REMOTE=$(ssh "${PI_USER}@${TARGET_HOST}" "zfs list -t snapshot -o name -s creation 2>/dev/null | grep '^${REMOTE_DATASET}@vault-' | tail -n 1" || echo "")

    if [ -z "$LAST_REMOTE" ]; then
        # Initial replication (full send)
        echo "Performing initial full replication to $TARGET_HOST"
        zfs send -c "$LATEST_SNAP" | ssh "${PI_USER}@${TARGET_HOST}" "zfs receive -F ${REMOTE_DATASET}"
    else
        # Incremental replication
        echo "Performing incremental replication to $TARGET_HOST"
        LAST_SNAP_NAME=$(echo "$LAST_REMOTE" | cut -d'@' -f2)
        zfs send -c -i "${DATASET}@${LAST_SNAP_NAME}" "$LATEST_SNAP" | ssh "${PI_USER}@${TARGET_HOST}" "zfs receive -F ${REMOTE_DATASET}"
    fi

    # Clean up old snapshots on remote (keep newest 30 — ~30 days at daily cadence)
    ssh "${PI_USER}@${TARGET_HOST}" "zfs list -t snapshot -o name -s creation | grep '^${REMOTE_DATASET}@vault-' | head -n -30 | xargs -r -n 1 zfs destroy"

    echo "Replication to $TARGET_HOST completed"
}

# Replicate to both Pi systems
replicate_to_target "$PI1_HOST"
replicate_to_target "$PI2_HOST"

echo "All replications completed at $(date)"
```
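The incremental branch hinges on one parsing step: the remote's newest snapshot name (everything after `@`) becomes the `-i` base against the local dataset. With synthetic names standing in for the script's zfs/ssh lookups:

```bash
# Synthetic values standing in for the script's zfs/ssh lookups
DATASET="zpool/vault/backup"
LATEST_SNAP="zpool/vault/backup@vault-20260412-040001"
LAST_REMOTE="tank/vault-backup@vault-20260411-040001"

# Same parsing as the script: snapshot name is the part after '@'
LAST_SNAP_NAME=$(echo "$LAST_REMOTE" | cut -d'@' -f2)

# The incremental send the script would run:
echo "zfs send -c -i ${DATASET}@${LAST_SNAP_NAME} ${LATEST_SNAP}"
```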
5. **Make executable**
```bash
sudo chmod +x /usr/local/bin/vault-replicate.sh
```
6. **Set up SSH keys for passwordless replication**
```bash
# Generate SSH key if needed
ssh-keygen -t ed25519 -C "znas-replication"
# Copy to both Pi systems
ssh-copy-id admin@pi-vault-1.local
ssh-copy-id admin@pi-vault-2.local
```
7. **Schedule replication**
```bash
sudo crontab -e
# Add this line (replicate daily at 5 AM, after snapshot creation)
0 5 * * * /usr/local/bin/vault-replicate.sh >> /var/log/vault-replicate.log 2>&1
```
### Setup on Pi Vault Systems (Targets)
Repeat these steps on both Pi Vault 1 and Pi Vault 2:
1. **Create ZFS pool on SSD** (if not already done)
```bash
# Assuming SSD is /dev/sda
sudo zpool create tank /dev/sda
```
2. **Create dataset for receiving backups**
```bash
sudo zfs create tank/vault-backup
```
3. **Set appropriate permissions**
```bash
# Allow the replication user to receive snapshots
sudo zfs allow admin receive,create,mount,destroy tank/vault-backup
```
4. **Verify replication** (after first run)
```bash
zfs list -t snapshot | grep vault-
```
---
## Maintenance and Monitoring
### Regular Health Checks
**On Clients:**
```bash
# Linux
sudo kopia snapshot list
sudo kopia snapshot verify --file-parallelism=8
sudo kopia repository status
# Windows (PowerShell)
kopia snapshot list
kopia snapshot verify --file-parallelism=8
kopia repository status
```
**On ZNAS:**
```bash
# Check ZFS health
zpool status
# Check both Kopia servers are running
docker ps | grep kopia
# Check vault snapshots
zfs list -t snapshot | grep "vault/backup"
# Check replication logs
tail -f /var/log/vault-replicate.log
# View server statuses
docker exec kopia-server-primary kopia server status
docker exec kopia-server-vault kopia server status
```
**On Pi Vaults:**
```bash
# Check received snapshots
zfs list -t snapshot | grep vault-backup
# Check available space
zfs list tank/vault-backup
```
### Monthly Maintenance Tasks
1. **Verify vault backups are replicating**
```bash
# On ZNAS
cat /var/log/vault-replicate.log | grep "completed"
# On Pi systems
zfs list -t snapshot -o name,creation | grep vault-backup | tail
```
2. **Test restore from vault repository**
```bash
# Connect to vault repo and verify a random snapshot
kopia --config-file=/path/to/vault/config repository connect server --url=...
kopia snapshot list
kopia snapshot verify --file-parallelism=8
```
3. **Check disk space on all systems**
4. **Review backup logs for errors**
### Backup Policy Recommendations
**Primary Repository:**
- Retention: 7 daily, 4 weekly, 6 monthly
- Compression: enabled
- All data from clients
**Vault Repository:**
- Retention: 14 daily, 8 weekly, 12 monthly, 3 yearly
- Compression: enabled
- Only critical data for offsite protection
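These retention targets can be expressed as Kopia global policies. A sketch, assuming the flag names of the current Kopia CLI (`--keep-daily`, `--keep-weekly`, `--keep-monthly`, `--keep-annual`, `--compression`) and the vault config path used earlier — verify against `kopia policy set --help` before applying:

```bash
# Primary repository (default config): 7 daily, 4 weekly, 6 monthly
sudo kopia policy set --global \
    --keep-daily=7 --keep-weekly=4 --keep-monthly=6 \
    --compression=zstd

# Vault repository: 14 daily, 8 weekly, 12 monthly, 3 yearly
sudo kopia --config-file=/root/.config/kopia-vault/repository.config \
    policy set --global \
    --keep-daily=14 --keep-weekly=8 --keep-monthly=12 --keep-annual=3 \
    --compression=zstd
```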
**ZFS Snapshots:**
- Keep 7 days on ZNAS (source)
- Keep 30 days on Pi vaults (targets)
---
## Disaster Recovery Procedures
### Scenario 1: Restore from Primary Repository
```bash
# Linux
sudo kopia snapshot list
sudo kopia snapshot restore <snapshot-id> /restore/location
# Windows
kopia snapshot list
kopia snapshot restore <snapshot-id> C:\restore\location
```
### Scenario 2: Restore from Vault Repository (Offsite)
If ZNAS is unavailable, restore directly from Pi vault:
1. **On Pi vault:**
```bash
# Clone the latest received snapshot
LATEST=$(zfs list -t snapshot -o name -s creation | grep vault-backup | tail -n 1)
zfs clone $LATEST tank/vault-backup-restore
```
2. **Access Kopia repository directly:**
```bash
kopia repository connect filesystem --path=/tank/vault-backup-restore
kopia snapshot list
kopia snapshot restore <snapshot-id> /restore/location
```
3. **Clean up after restore:**
```bash
zfs destroy tank/vault-backup-restore
```
### Scenario 3: Complete System Rebuild
1. Rebuild ZNAS and restore vault dataset from Pi
2. Reinstall Kopia server in Docker
3. Point server to restored vault repository
4. Reconnect clients to primary and vault repositories
5. Resume scheduled backups
---
## Troubleshooting
### Client can't connect to repository
```bash
# Check both servers are running
docker ps | grep kopia
# Should see both kopia-server-primary and kopia-server-vault
# Check firewall
sudo ufw status | grep 51515
sudo ufw status | grep 51516
# Verify certificate fingerprints
docker exec kopia-server-primary kopia server status
docker exec kopia-server-vault kopia server status
# Check server logs
docker logs kopia-server-primary
docker logs kopia-server-vault
```
### Vault replication failing
```bash
# Check SSH connectivity
ssh admin@pi-vault-1.local "echo Connected"
# Check ZFS pool health
zpool status
# Check remote dataset exists
ssh admin@pi-vault-1.local "zfs list tank/vault-backup"
# Manual test send (dry run; substitute an actual snapshot name)
zfs send -n -v zpool/vault/backup@vault-YYYYMMDD-HHMMSS | ssh admin@pi-vault-1.local "cat > /dev/null"
```
### Windows scheduled task not running
- Check Task Scheduler → Task History
- Verify PIN/password authentication (use password Harvey123= for task credential)
- Check that computer is awake at scheduled time
- Review power settings (prevent sleep, wake for tasks)
- Check log files: `C:\Logs\kopia-primary.log` and `C:\Logs\kopia-vault.log`
### Snapshot cleanup not working
```bash
# Manually clean old snapshots
zfs list -t snapshot -o name,used,creation | grep vault-backup
# Remove specific snapshot
zfs destroy zpool/vault/backup@vault-YYYYMMDD-HHMMSS
```
---
## Security Notes
1. **Passwords in scripts:** Current implementation stores passwords in plaintext in scripts. For production, consider:
- Windows Credential Manager
- Linux keyring or encrypted credential storage
- Environment variables set at system level
2. **SSH keys:** Replication uses SSH keys. Keep private keys secure and use passphrase protection where possible.
3. **Network security:** Kopia server uses HTTPS with certificate validation. Ensure certificate fingerprint is verified on first connection.
4. **Physical security:** Offsite Pi vaults should be stored in secure locations with different risk profiles (fire, flood, theft).
---
## Quick Reference Commands
### Kopia Client Commands
```bash
# List snapshots
kopia snapshot list
# Create snapshot
kopia snapshot create /path/to/backup
# Verify integrity
kopia snapshot verify --file-parallelism=8
# Check repository status
kopia repository status
# View policies
kopia policy list
# Mount snapshot (Linux)
kopia mount <snapshot-id> /mnt/snapshot
# Use alternate config (for vault repository)
kopia --config-file=/path/to/vault/repository.config snapshot list
```
### ZFS Commands
```bash
# List snapshots
zfs list -t snapshot
# Create manual snapshot
zfs snapshot zpool/vault/backup@manual-$(date +%Y%m%d)
# Send full snapshot
zfs send zpool/vault/backup@snapshot | ssh user@host zfs receive tank/backup
# Send incremental
zfs send -i @old @new zpool/vault/backup | ssh user@host zfs receive tank/backup
# List replication progress
zpool status -v
# Check dataset size
zfs list -o space zpool/vault/backup
```
---
## Appendix: System Specifications
**ZNAS:**
- ZFS fileserver
- Docker running **two** Kopia servers:
- **kopia-server-primary** on port 51515
- **kopia-server-vault** on port 51516
- IP: 192.168.5.10
- Datasets:
- `/srv/vault/kopia_repository` (zpool/vault/kopia_repository) - Primary repository
- `/srv/vault/backup` (zpool/vault/backup) - Vault repository (replicated)
**Clients:**
- **docker2** (Linux) - Backs up /DockerVol/
- Primary: Every 3 hours → port 51515
- Vault: Daily at 3 AM (critical directories only) → port 51516
- **DESKTOP-QLSVD8P** (Windows - Cindy's desktop) - Backs up C:\Users\cindy
- Primary: Daily at 2 AM → port 51515
- Vault: Daily at 3 AM (Documents, Pictures, Important files) → port 51516
- Kopia password: LucyDog123
- Task Scheduler credential: Harvey123=
**Offsite Vaults:**
- **Pi Vault 1** - Raspberry Pi with SSD (tank/vault-backup)
- **Pi Vault 2** - Raspberry Pi with SSD (tank/vault-backup)
**Server Certificates:**
- Primary server SHA256: `696a4999f594b5273a174fd7cab677d8dd1628f9b9d27e557daa87103ee064b2`
- Vault server SHA256: *(get from `docker exec kopia-server-vault kopia server status`)*
---
## Workflow Summary
### Daily Backup Flow
**2:00 AM** - Cindy's desktop primary backup runs
**3:00 AM** - docker2 vault backup runs
**3:00 AM** - Cindy's desktop vault backup runs
**4:00 AM** - ZNAS creates ZFS snapshot of vault dataset
**5:00 AM** - ZNAS replicates vault snapshot to both Pi systems
**Every 3 hours** - docker2 primary backup runs
### What Gets Backed Up Where
**Primary Repository (Full Backups):**
- docker2: /DockerVol/ (all Docker volumes)
- Cindy: C:\Users\cindy (entire user profile, minus temp files)
**Vault Repository (Critical Data for Offsite):**
- docker2: Selected critical Docker volumes
- Cindy: Documents, Pictures, Important desktop files
**Offsite (Via ZFS Send):**
- Entire vault repository (all clients' critical data)
- Replicated to 2 separate Pi systems
---
## Future Enhancements
Consider adding:
- Email notifications on backup failures
- Monitoring dashboard (Grafana/Prometheus)
- Backup validation automation
- Additional retention policies per client
- Encrypted credentials storage
- Remote monitoring of Pi vault systems
- Automated restore testing
- Bandwidth throttling for replication
- Multiple ZFS snapshot retention policies
---
## Change Log
- **2025-02-11** - Initial comprehensive documentation created
- Added two-tier backup strategy (primary + vault)
- Added ZFS replication procedures for offsite backup
- Added Pi vault setup instructions
- Added disaster recovery procedures
- Consolidated all client configurations
- Added workflow diagrams and timing
---
## Support and Feedback
For issues or improvements to this documentation, contact the system administrator.
**Useful Resources:**
- Kopia Documentation: https://kopia.io/docs/
- ZFS Administration Guide: https://openzfs.github.io/openzfs-docs/
- Kopia GitHub: https://github.com/kopia/kopia

---
title: Netgrimoire Storage
description: Where is it at
published: true
date: 2026-02-23T18:38:27.621Z
tags:
editor: markdown
dateCreated: 2026-01-22T21:10:37.035Z
---
# NAS Storage Layout
## Overview
ZNAS is the primary NAS for Netgrimoire. It runs Ubuntu with OpenZFS and serves as the source of truth for all storage, including datasets that replicate out to the Pocket Grimoire portable system.
The system mounts everything under `/export/` for NFS sharing, with select datasets mounted under `/srv/` for local service consumption (Immich, NextCloud-AIO, Kopia, backup).
## ZFS Pools
- `vault` — primary NAS storage, RAIDZ1×2, 8 drives
- `greenpg` — Pocket Grimoire GREEN SSD (Kanguru UltraLock), docked for sync when present
## Zpool Architecture
```
  pool: vault
 state: ONLINE
  scan: scrub repaired 0B in 2 days 10:24:08 with 0 errors on Tue Feb 10 10:48:10 2026
config:

        NAME                                      STATE     READ WRITE CKSUM
        vault                                     ONLINE       0     0     0
          raidz1-0                                ONLINE       0     0     0
            ata-ST24000DM001-3Y7103_ZXA06K45      ONLINE       0     0     0
            ata-ST24000DM001-3Y7103_ZXA08CVY      ONLINE       0     0     0
            ata-ST24000DM001-3Y7103_ZXA0FP10      ONLINE       0     0     0
          raidz1-1                                ONLINE       0     0     0
            ata-ST16000NE000-2RW103_ZL2Q3275      ONLINE       0     0     0
            ata-ST16000NM001G-2KK103_ZL26R5XW     ONLINE       0     0     0
            ata-ST16000NT001-3LV101_ZRS0KVQW      ONLINE       0     0     0
            ata-WDC_WD140EDFZ-11A0VA0_9MG81N0J    ONLINE       0     0     0
            ata-WDC_WD140EDFZ-11A0VA0_Y5J35Z6C    ONLINE       0     0     0

errors: No known data errors
```
`raidz1-0` is 3× Seagate 24TB (~48TB usable). `raidz1-1` is 3× Seagate 16TB + 2× WD 14TB (~56TB usable — the 14TB drives are the limiting factor per stripe, leaving ~2TB/drive unused on the 16TB drives). Total pool: ~94T as reported by `zfs list` (55.3T used, 39.0T currently available).
```
  pool: greenpg
 state: ONLINE
config:

        NAME                                         STATE     READ WRITE CKSUM
        greenpg                                      ONLINE       0     0     0
          scsi-1Kanguru_UltraLock_DB090722NC10001    ONLINE       0     0     0

errors: No known data errors
```
`greenpg` is a portable pool. Export it before physically moving to Pocket Grimoire.
## ZFS Datasets
| Dataset | Mountpoint | Used | Avail | Refer | Quota | Compression | Purpose |
|---------|-----------|------|-------|-------|-------|-------------|---------|
| `vault` | `/export` | 55.3T | 39.0T | 771G | none | 1.00x | Pool root / NFSv4 pseudo-root |
| `vault/Common` | `/export/Common` | 214G | 39.0T | 214G | none | 1.06x | General shared storage |
| `vault/Data` | `/export/Data` | 38.4T | 39.0T | 36.4T | none | 1.00x | Primary data — 36.4T lives directly in dataset root |
| `vault/Data/media_books` | `/export/Data/media/books` | 925G | 39.0T | 925G | none | 1.03x | Book library |
| `vault/Data/media_comics` | `/export/Data/media/comics` | 1.15T | 39.0T | 1.15T | none | 1.00x | Comic library |
| `vault/Green` | `/export/Green` | 14.7T | 5.31T | 9.66T | 20T | 1.00x | Personal media — 9.66T direct, 5.02T in Pocket child |
| `vault/Green/Pocket` | `/export/Green/Pocket` | 5.02T | 2.48T | 5.02T | 7.5T | 1.00x | Pocket Grimoire replication source |
| `vault/Kopia` | `/srv/vault/kopia_repository` | 349G | 39.0T | 349G | none | 1.02x | Kopia backup repository |
| `vault/NextCloud-AIO` | `/srv/NextCloud-AIO` | 341G | 39.0T | 341G | none | 1.01x | NextCloud data |
| `vault/Photos` | `/export/Photos` | 135K | 39.0T | 135K | none | 1.00x | Photos (sparse — see notes) |
| `vault/backup` | `/srv/vault/backup` | 442G | 582G | 442G | 1T | 1.00x | Local system backups |
| `vault/docker` | `/export/Docker` | 22.2G | 39.0T | 22.2G | none | 1.13x | Docker volumes |
| `vault/immich` | `/srv/immich` | 117G | 39.0T | 117G | none | 1.03x | Immich photo service data |
| `greenpg` | `/greenpg` | 2.94T | 4.20T | 96K | — | 1.00x | GREEN SSD pool root (portable) |
| `greenpg/Pocket` | `/greenpg/Pocket` | 2.94T | 4.20T | 2.94T | — | 1.00x | Personal media + Stash data |
**Notes on specific datasets:**
`vault/Data` — 36.4T lives directly in the dataset root at `/export/Data/`. `media_books` and `media_comics` are the only child datasets and account for ~2T combined. The remaining ~36T is general data stored directly under the parent.
`vault/Green` — 9.66T lives directly in `/export/Green/` with the remaining 5.02T in the `Pocket` child dataset. The 20T quota caps total Green growth. `vault/Green/Pocket` has its own 7.5T sub-quota.
`vault/Photos` — nearly empty (135K). Photos are primarily managed through Immich at `vault/immich`. This dataset may be vestigial or reserved for future use.
`vault/backup` — has a hard 1T quota. Unlike other vault datasets which draw from the full 39T pool availability, this dataset is capped. Current usage is 442G with 582G remaining.
Compression ratios are near 1.00x across most datasets because content is already compressed (media files, binary data). `vault/docker` (1.13x) and `vault/Common` (1.06x) see modest gains from compressible config and text data.
## NFS Exports
All exports use NFSv4 with `/export` as the pseudo-filesystem root (`fsid=0`).
| Export | fsid | Options | Notes |
|--------|------|---------|-------|
| `/export` | 0 | `ro, no_root_squash, no_subtree_check, crossmnt` | NFSv4 pseudo-root — required for v4 clients |
| `/export/Common` | 4 | `rw, no_subtree_check, insecure` | General access |
| `/export/Data` | 5 | `rw, no_subtree_check, insecure, crossmnt` | Data root |
| `/export/Data/media/books` | 51 | `rw, no_subtree_check, insecure, nohide` | Separate ZFS dataset — needs `nohide` |
| `/export/Data/media/comics` | 52 | `rw, no_subtree_check, insecure, nohide` | Separate ZFS dataset — needs `nohide` |
| `/export/Docker` | 29 | `rw, no_root_squash, sync, no_subtree_check, insecure` | Container volumes |
| `/export/Green` | 30 | `rw, no_root_squash, no_subtree_check, insecure` | Personal media + Pocket Grimoire source |
| `/export/photos` | 31 | `rw, no_root_squash, no_subtree_check, insecure` | Photos |
Current `/etc/exports`:
```
/export *(ro,fsid=0,no_root_squash,no_subtree_check,crossmnt)
/export/Common *(fsid=4,rw,no_subtree_check,insecure)
/export/Data *(fsid=5,rw,no_subtree_check,insecure,crossmnt)
/export/Data/media/books *(fsid=51,rw,no_subtree_check,insecure,nohide)
/export/Data/media/comics *(fsid=52,rw,no_subtree_check,insecure,nohide)
/export/Docker *(fsid=29,rw,no_root_squash,sync,no_subtree_check,insecure)
/export/Green *(fsid=30,rw,no_root_squash,no_subtree_check,insecure)
/export/photos *(fsid=31,rw,no_root_squash,no_subtree_check,insecure)
```
There is also an active loopback NFSv4 mount on the system itself:
```
localhost:/ → /data/nfs/znas (NFSv4.2, rsize/wsize=1M)
```
## SMB Shares
*(To be documented.)*
## Standard Paths
- `/export/` — NFS root (vault pool root)
- `/export/Data/` — primary data
- `/export/Data/media/books/` — book library
- `/export/Data/media/comics/` — comic library
- `/export/Green/` — personal media
- `/export/Green/Pocket/` — Pocket Grimoire replication source
- `/export/Docker/` — container volumes
- `/export/Photos/` — photos
- `/srv/immich/` — Immich service data
- `/srv/NextCloud-AIO/` — NextCloud data
- `/srv/vault/kopia_repository/` — Kopia backup repo
- `/srv/vault/backup/` — local system backups
- `/greenpg/Pocket/` — GREEN SSD when docked for sync
## Permissions & UID/GID Model
*(To be documented — dockhand UID 1964, container access rules.)*
## Services Using Local Mounts
These datasets are consumed directly by services on ZNAS and are not NFS-exported:
| Service | Dataset | Mountpoint |
|---------|---------|-----------|
| Immich | `vault/immich` | `/srv/immich` |
| NextCloud-AIO | `vault/NextCloud-AIO` | `/srv/NextCloud-AIO` |
| Kopia | `vault/Kopia` | `/srv/vault/kopia_repository` |
| Local backup | `vault/backup` | `/srv/vault/backup` |
## Pocket Grimoire Integration
`vault/Green/Pocket` is the replication source for the Pocket Grimoire GREEN SSD (`greenpg`). It contains personal media and Stash application data (database, previews, blobs). See the Pocket Grimoire deployment guide for full procedures.
**Fast resync when GREEN SSD is physically docked on ZNAS:**
```bash
# Check pool name (retains whatever name it had when last exported)
zpool list | grep greenpg
# Import if needed
sudo zpool import greenpg
sudo zfs load-key greenpg
sudo zfs mount -a
# Sync
sudo syncoid vault/Green/Pocket greenpg/Pocket
# Export before physically disconnecting — always do this
sudo zfs unmount greenpg/Pocket
sudo zfs unmount greenpg
sudo zpool export greenpg
```
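The sequence above can be wrapped in one guarded function so that a failed step aborts before anything later runs. This is a sketch, not an existing ZNAS script — the pool and dataset names match the commands above, but the function name and structure are assumptions (run as root, or prefix the calls with `sudo`):

```bash
# Sketch: the dock/sync/export sequence as a single guarded function.
# set -e aborts on the first failure, so a failed sync never reaches
# the export step with a half-written pool.
dock_sync() {
  set -e
  # Import only if the pool is not already visible
  zpool list greenpg >/dev/null 2>&1 || zpool import greenpg
  zfs load-key greenpg
  zfs mount -a
  syncoid vault/Green/Pocket greenpg/Pocket
  # Export is the critical step: flush writes, mark the pool clean
  zfs unmount greenpg/Pocket
  zfs unmount greenpg
  zpool export greenpg
}
```

If any step fails, the pool is left imported so the state can be inspected before retrying.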
**Network sync** runs automatically on Pocket Grimoire via a 6-hour syncoid systemd timer when connected over the network.
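The timer itself lives on Pocket Grimoire; a minimal sketch of what such a unit pair might look like is below. The unit names, the pull direction, and the ssh form of the syncoid call are all assumptions — check the Pocket Grimoire deployment guide for the real units:

```ini
# /etc/systemd/system/syncoid-pocket.service  (hypothetical name)
[Unit]
Description=Pull vault/Green/Pocket from ZNAS via syncoid

[Service]
Type=oneshot
ExecStart=/usr/sbin/syncoid root@znas:vault/Green/Pocket greenpg/Pocket

# /etc/systemd/system/syncoid-pocket.timer  (hypothetical name)
[Unit]
Description=Run syncoid pull every 6 hours

[Timer]
OnBootSec=15min
OnUnitActiveSec=6h
Persistent=true

[Install]
WantedBy=timers.target
```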
## Backup & Snapshot Strategy
**Snapshots:**
```bash
# Manual pre-change snapshot
zfs snapshot vault/Docker@before-upgrade
# List all snapshots
zfs list -t snapshot
# List snapshots for a specific dataset
zfs list -t snapshot -r vault/Green
```
**Kopia:** Repository at `vault/Kopia``/srv/vault/kopia_repository`. *(Document snapshot policy and sources.)*
**Replication:** `vault/Green/Pocket``greenpg/Pocket` via syncoid. See Pocket Grimoire Integration above.
## Known Gotchas
**NFSv4 pseudo-root** — The `/export` entry with `fsid=0` is required for NFSv4 clients to enumerate subdirectories. Do not remove it even if it appears redundant.
**`nohide` on sub-datasets** — `vault/Data/media_books` and `vault/Data/media_comics` are separate ZFS datasets mounted beneath the `vault/Data` export path. NFS does not cross filesystem boundaries by default. Without `nohide` clients see empty directories at those paths.
**`vault/backup` quota** — This dataset has a hard 1T quota and does not share the general pool availability. Current headroom is ~582G. Monitor before large backup operations.
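A quick guard before kicking off a large backup run — this helper is a sketch (the function name and the 200 GiB floor are assumptions), comparing the dataset's machine-readable `avail` against a minimum:

```bash
# Warn when a dataset's remaining quota headroom drops below a floor.
# Usage: headroom_ok <dataset> <min-bytes>; returns nonzero when low.
headroom_ok() {
  avail=$(zfs list -Hp -o avail "$1")   # -Hp = no header, exact bytes
  if [ "$avail" -lt "$2" ]; then
    echo "WARNING: $1 headroom ${avail} bytes (< $2)"
    return 1
  fi
}

# Example: require ~200 GiB free before a big backup operation
# headroom_ok vault/backup $((200 * 1024 * 1024 * 1024)) || exit 1
```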
**`vault/Green` quota** — Capped at 20T total with a 7.5T sub-quota on `vault/Green/Pocket`. The GREEN SSD itself is ~7TB, so the sub-quota is the effective ceiling for the Pocket sync.
**raidz1-1 mixed drive sizes** — The three 16TB drives in raidz1-1 have ~2TB/drive going unused because each member of a RAIDZ vdev can contribute only as much capacity as the smallest drive in that vdev (the 14TB WDs). This capacity is permanently unavailable unless the VDEV is rebuilt.
**Kanguru UltraLock hardware encryption** — The GREEN SSD has hardware-level PIN protection in addition to ZFS encryption. The drive must be hardware-unlocked before `zpool import` will see it.
**Always export `greenpg` before disconnecting** — Export flushes writes and marks the pool clean. Pulling the drive without exporting risks a dirty import on next use.
**`vault/Data` root usage** — 36.4T lives directly in `/export/Data/` rather than in child datasets. This is normal for this setup but means `zfs list` on the parent alone shows the full usage without a breakdown.
## Command Reference
- Health: `zpool status`
- Space available to pool: `zpool list`
- Space available to datasets: `zfs list`
- Dataset configuration: `zfs get -r compression,dedup,recordsize,atime,quota,reservation vault`
- Create a snapshot: `zfs snapshot vault/Docker@before-upgrade`
- List snapshots: `zfs list -t snapshot`
- Reload NFS exports: `sudo exportfs -ra`
- Show active NFS exports: `sudo exportfs -v`
- Run a scrub: `sudo zpool scrub vault`
- Sync GREEN SSD: `sudo syncoid vault/Green/Pocket greenpg/Pocket`

---
title: ZFS Common Commands
description: ZFS Commands
published: true
date: 2026-02-20T04:26:23.798Z
tags: zfs commands
editor: markdown
dateCreated: 2026-01-31T15:23:07.585Z
---
# ZFS Essential Commands Cheat Sheet
---
## Pool Health & Status
```bash
zpool status
zpool status -v
zpool list
```
## Dataset Space & Usage
```bash
zfs list
zfs list -r vault
zfs list -o name,used,avail,refer,logicalused,compressratio
zfs list -r -o name,used,avail,refer,quota,reservation vault
```
## Dataset Properties & Settings
```bash
zfs get all vault/dataset
zfs get -r compression,dedup,recordsize,atime,quota,reservation vault
zfs get -r compression,dedup,recordsize,encryption,keylocation,keyformat,snapdir vault
zfs get -s local -r all vault
zfs get quota,refquota,reservation,refreservation -r vault
```
## Mount Encrypted Dataset
```bash
zfs load-key vault/Green/Pocket
zfs mount vault/Green/Pocket
```
## Pool I/O & Performance Monitoring
```bash
zpool iostat -v 1
arcstat 1
cat /proc/spl/kstat/zfs/arcstats
```
## Scrubs & Data Integrity
```bash
zpool scrub vault
zpool scrub -s vault
zpool status
```
## Snapshots
```bash
zfs snapshot vault/dataset@snapname
zfs list -t snapshot
zfs rollback vault/dataset@snapname
zfs clone vault/dataset@snapname vault/dataset-clone
```
## Replication (Send / Receive)
```bash
zfs send vault/dataset@snap1 | zfs receive backup/dataset
zfs send -i snap1 vault/dataset@snap2 | zfs receive backup/dataset
zfs send -nv vault/dataset@snap1
```
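The send/receive forms above compose into a standard incremental cycle: take a new snapshot, then send only the delta since the previous one. A minimal sketch as a shell function — dataset and snapshot names are placeholders, not real ZNAS paths:

```bash
# Incremental replication cycle: snapshot, then send the delta since the
# previous snapshot into the target dataset.
replicate_incremental() {
  src="$1" dst="$2" prev="$3" next="$4"
  zfs snapshot "${src}@${next}"
  zfs send -i "${src}@${prev}" "${src}@${next}" | zfs receive "$dst"
}

# Example (hypothetical names):
# replicate_incremental vault/dataset backup/dataset daily-01 daily-02
```

Run `zfs send -nv` first to estimate the stream size before committing to a transfer.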
## Dataset Tuning (Live-Safe Changes)
```bash
zfs set compression=lz4 vault/dataset
zfs set recordsize=1M vault/dataset
zfs set atime=off vault/dataset
zfs set dedup=on vault/dataset
```
## Encryption Management
```bash
zfs get encryption,keylocation,keystatus vault/dataset
zfs unload-key vault/dataset
zfs load-key vault/dataset
```
## Disk Preparation & Cleanup
```bash
wipefs /dev/sdX
wipefs -a /dev/sdX
zpool labelclear -f /dev/sdX
sgdisk --zap-all /dev/sdX
lsblk -f /dev/sdX
```
## Pool Expansion (Add VDEV)
```bash
zpool add vault raidz2 \
  /dev/disk/by-id/disk1 \
  /dev/disk/by-id/disk2 \
  /dev/disk/by-id/disk3 \
  /dev/disk/by-id/disk4 \
  /dev/disk/by-id/disk5
```
## Pool Import / Recovery
```bash
zpool import
zpool import vault
zpool import -f vault
zpool import -o readonly=on vault
```
## Locks, Holds & History
```bash
zfs holds -r vault
zpool history
zfs diff vault/dataset@snap1 vault/dataset@snap2
```
## Deduplication & Compression Stats
```bash
zpool list -v
zdb -DD vault
```
## Inventory / Documentation Dumps
```bash
zpool status > zpool-status.txt
zfs list -r > zfs-layout.txt
zfs get -r all vault > zfs-settings.txt
```
## Top 10 Must-Know Commands
```bash
zpool status
zpool list
zpool iostat -v 1
zpool scrub vault
zfs list
zfs get all vault/dataset
zfs snapshot vault/dataset@snap
zfs rollback vault/dataset@snap
zfs send | zfs receive
arcstat 1
```

---
title: ZFS-NFS-Exports
description: Exporting NFS shares from ZFS datasets
published: true
date: 2026-02-23T21:58:20.626Z
tags:
editor: markdown
dateCreated: 2026-02-01T20:45:40.210Z
---
# NFS Configuration
## Overview
ZNAS exports storage via NFSv4. All exports are ZFS datasets mounted directly to `/export/*` — no bind mounts. NFS is configured to wait for ZFS at boot via a systemd override.
ZNAS also mounts its own NFS exports back to itself at `/data/nfs/znas`. This is intentional: Docker Swarm containers scheduled to ZNAS need to access NAS storage at the same paths as containers running on other swarm members. The loopback mount provides a consistent NFS-backed path regardless of which node a container lands on.
All other clients are Linux systems using autofs.
---
## Server Configuration
### ZFS Mountpoints
ZFS datasets mount directly to `/export/*`. No bind mounts are used.
```
vault → /export
vault/Common → /export/Common
vault/Data → /export/Data
vault/Data/media_books → /export/Data/media/books
vault/Data/media_comics → /export/Data/media/comics
vault/Docker → /export/Docker
vault/Green → /export/Green
vault/Green/Pocket → /export/Green/Pocket
vault/Photos → /export/Photos
```
Verify at any time:
```bash
mount | grep export
```
### /etc/exports
```
# NFSv4 - pseudo filesystem root
/export *(ro,fsid=0,no_root_squash,no_subtree_check,crossmnt)
# Shares beneath the NFSv4 root
/export/Common *(fsid=4,rw,no_subtree_check,insecure)
/export/Data *(fsid=5,rw,no_subtree_check,insecure,crossmnt)
/export/Data/media/books *(fsid=51,rw,no_subtree_check,insecure,nohide)
/export/Data/media/comics *(fsid=52,rw,no_subtree_check,insecure,nohide)
/export/Docker *(fsid=29,rw,no_root_squash,sync,no_subtree_check,insecure)
/export/Green *(fsid=30,rw,no_root_squash,no_subtree_check,insecure)
/export/photos *(fsid=31,rw,no_root_squash,no_subtree_check,insecure)
```
**Key options:**
- `fsid=0` on `/export` — required for NFSv4 pseudo-root. Clients enumerate all exports from here.
- `crossmnt` — allows NFS to cross ZFS dataset boundaries when traversing the tree.
- `nohide` — required on `media/books` and `media/comics` because they are separate ZFS datasets mounted beneath the `vault/Data` export path. Without it clients see empty directories.
- `no_root_squash` — Docker and Green exports allow root writes. Required for container volume mounts.
- `insecure` — permits connections from unprivileged source ports (above 1023). Required for some Linux NFS clients and all macOS clients.
- `sync` on Docker — forces synchronous writes for container volume safety.
### systemd Boot Order Override
NFS is configured to wait for ZFS to fully mount before starting.
`/etc/systemd/system/nfs-server.service.d/override.conf`:
```ini
[Unit]
After=zfs-import.target zfs-mount.service local-fs.target
Requires=zfs-import.target zfs-mount.service
```
Apply after any changes:
```bash
sudo systemctl daemon-reload
sudo systemctl restart nfs-server
```
### Autofs Disabled on Server
Autofs is disabled on ZNAS itself. It must only run on NFS clients. Running autofs on the server creates recursive mount loops.
```bash
sudo systemctl stop autofs
sudo systemctl disable autofs
```
---
## Loopback Mount (Docker Swarm)
ZNAS mounts its own NFS exports back to itself at `/data/nfs/znas`. This ensures containers scheduled to ZNAS by Docker Swarm access storage at the same NFS-backed paths as containers running on any other swarm member — consistent regardless of which node a service lands on.
Swarm container volume mounts reference paths under `/data/nfs/znas/` rather than `/export/` directly.
### The Timing Problem
Getting this mount to survive reboots reliably was non-trivial. The loopback has a chicken-and-egg dependency chain:
1. ZFS must import and mount pools before NFS server can export anything
2. NFS server must be fully started before the loopback mount can succeed
3. The loopback mount must be established before Docker Swarm containers start
A plain `_netdev` fstab entry is not sufficient — `_netdev` only guarantees the network is up, not that the NFS server is ready. The mount would race against NFS startup and fail silently or hang.
### Solution — fstab with x-systemd.after
The loopback is established via `/etc/fstab` using the `x-systemd.after` option to explicitly declare the dependency on `nfs-server.service`:
```
localhost:/ /data/nfs/znas nfs4 defaults,_netdev,x-systemd.after=nfs-server.service 0 0
```
`x-systemd.after=nfs-server.service` causes systemd-fstab-generator to automatically create a mount unit (`data-nfs-znas.mount`) with `After=nfs-server.service` in its `[Unit]` block. This guarantees the full dependency chain:
```
zfs-import.target
→ zfs-mount.service
→ nfs-server.service (via nfs-server override.conf)
→ data-nfs-znas.mount (via x-systemd.after in fstab)
→ remote-fs.target
→ Docker Swarm containers
```
The generated unit (created automatically at runtime by systemd-fstab-generator — not a file on disk):
```ini
# /run/systemd/generator/data-nfs-znas.mount
[Unit]
Documentation=man:fstab(5) man:systemd-fstab-generator(8)
SourcePath=/etc/fstab
After=nfs-server.service
Before=remote-fs.target
[Mount]
What=localhost:/
Where=/data/nfs/znas
Type=nfs4
Options=defaults,_netdev,x-systemd.after=nfs-server.service
```
**Do not create a hand-written systemd mount unit for this.** systemd-fstab-generator handles it automatically from the fstab entry. A manual unit would conflict.
### Verify Loopback is Active
```bash
mount | grep data/nfs/znas
# Should show: localhost:/ on /data/nfs/znas type nfs4 (...)
systemctl status data-nfs-znas.mount
# Should show: active (mounted)
```
---
## Client Configuration
All non-Swarm clients are Linux systems using autofs.
### Autofs Configuration
`/etc/auto.master` (relevant entry):
```
/data/nfs /etc/auto.nfs
```
`/etc/auto.nfs`:
```
znas -fstype=nfs4 192.168.5.10:/
```
This mounts the full NFSv4 tree from ZNAS at `/data/nfs/znas` on demand — the same path used by the loopback mount on ZNAS itself. All swarm nodes (including ZNAS) access NAS storage via `/data/nfs/znas/`.
**Note:** Autofs must be enabled on clients and disabled on the NFS server. Running autofs on the server creates recursive mount loops.
### Adding a New Client
```bash
# Install autofs if not present
sudo apt install autofs
# Add to /etc/auto.master if not already present
echo "/data/nfs /etc/auto.nfs" | sudo tee -a /etc/auto.master
# Create or update /etc/auto.nfs
echo "znas -fstype=nfs4 192.168.5.10:/" | sudo tee -a /etc/auto.nfs
# Reload autofs
sudo systemctl reload autofs
# Trigger mount by accessing the path
ls /data/nfs/znas/
```
### Manual Mount (testing only)
```bash
# Verify exports are visible from client
showmount -e 192.168.5.10
# Test manual mount
sudo mkdir -p /mnt/znas
sudo mount -t nfs4 192.168.5.10:/ /mnt/znas
# Verify tree is accessible
ls /mnt/znas/Data/media/books/
# Unmount after testing
sudo umount /mnt/znas
```
---
## Adding New Datasets
When creating a new ZFS dataset that needs to be NFS-accessible:
```bash
# Create with the correct mountpoint from the start
sudo zfs create -o mountpoint=/export/Data/new_folder vault/Data/new_folder
```
The dataset is automatically exported via NFS thanks to `crossmnt` on the parent — no changes to `/etc/exports` are needed unless the new dataset requires different access controls.
If different permissions are required, add an explicit entry to `/etc/exports` and reload:
```bash
sudo exportfs -ra
```
---
## Current Export List
Verified via `showmount -e 127.0.0.1`:
```
/export/photos *
/export/Green *
/export/Docker *
/export/Data/media/comics *
/export/Data/media/books *
/export/Data *
/export/Common *
/export *
```
---
## Known Gotchas
**Loopback mount races NFS at boot** — This was the hardest problem to solve. A plain `_netdev` fstab entry only guarantees the network interface is up, not that the NFS server is ready to accept connections. The loopback mount would attempt before NFS finished starting and fail silently or hang. The fix is `x-systemd.after=nfs-server.service` in the fstab options, which causes systemd-fstab-generator to emit an `After=nfs-server.service` dependency in the generated mount unit. The full required boot chain is: `zfs-import.target``zfs-mount.service``nfs-server.service``data-nfs-znas.mount`. Each link must be explicit.
**Do not hand-write a systemd mount unit for the loopback** — systemd-fstab-generator creates `data-nfs-znas.mount` automatically from the fstab entry at runtime (in `/run/systemd/generator/`, not `/etc/systemd/system/`). Creating a manual unit in `/etc/systemd/system/` will conflict with the generated one.
**Autofs must be disabled on the server** — Running autofs on ZNAS itself creates a recursive mount loop. Autofs belongs on clients only. If autofs is accidentally re-enabled on ZNAS it will fight with the fstab loopback mount.
**NFSv4 pseudo-root is required** — The `/export` entry with `fsid=0` is mandatory for NFSv4 clients. Without it clients cannot enumerate the export tree. Do not remove it even though it looks redundant.
**`nohide` on sub-datasets** — `vault/Data/media_books` and `vault/Data/media_comics` are separate ZFS datasets mounted beneath the `vault/Data` export path. NFS does not cross filesystem boundaries by default. Without `nohide` clients see empty directories at those paths even though the data is present.
**Do not use bind mounts for ZFS datasets** — Configure ZFS mountpoints directly to `/export/*`. Bind mounts in fstab for ZFS datasets cause ordering problems and are unnecessary.
**Always set mountpoints when creating new datasets** — If a dataset is created without an explicit mountpoint it will inherit the parent's path and may not be visible or exportable correctly. Set `mountpoint=` at creation time.
---
## Troubleshooting
### Datasets not visible via NFS
```bash
# Verify dataset is mounted
zfs list | grep dataset_name
# Check NFS can read it
sudo -u nobody ls -la /export/path/to/dataset/
# Reload exports
sudo exportfs -ra
sudo systemctl restart nfs-server
```
### Client shows empty directories
```bash
# Clear NFS cache and remount
sudo umount -f /mnt/znas
sudo mount -t nfs4 192.168.5.10:/ /mnt/znas
# Test without caching to isolate the problem
sudo mount -t nfs4 -o noac,lookupcache=none 192.168.5.10:/ /mnt/znas
```
### After reboot, exports are empty
```bash
# Confirm ZFS mounted before NFS started
systemctl status zfs-mount.service
systemctl status nfs-server.service
# Confirm override is in place
systemctl cat nfs-server.service | grep -A5 "\[Unit\]"
```
### Loopback mount not working for Swarm containers
```bash
# Check mount unit status
systemctl status data-nfs-znas.mount
# Verify full dependency chain is satisfied
systemctl status zfs-mount.service
systemctl status nfs-server.service
systemctl status data-nfs-znas.mount
# Verify loopback is mounted
mount | grep data/nfs/znas
# If missing, mount manually to test
sudo mount -t nfs4 127.0.0.1:/ /data/nfs/znas
# Check container can see the path
docker run --rm -v /data/nfs/znas/Data:/data alpine ls /data
```
If the unit fails at boot, confirm the fstab entry includes `x-systemd.after=nfs-server.service` — without this the mount races against NFS startup and loses. A plain `_netdev` entry is not sufficient.
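That check can be scripted — this helper is a sketch (the function name is an assumption; it takes the fstab path as an argument so it can be dry-run against a copy):

```bash
# Return 0 iff the loopback entry carries the x-systemd.after guard.
loopback_fstab_ok() {
  fstab="${1:-/etc/fstab}"
  # Find the non-comment line mounting /data/nfs/znas, then require the option
  grep -E '^[^#]*[[:space:]]/data/nfs/znas[[:space:]]' "$fstab" \
    | grep -q 'x-systemd.after=nfs-server.service'
}

# loopback_fstab_ok || echo "loopback mount will race nfs-server at boot"
```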
---
## Configuration Files Reference
### /etc/exports
```
/export *(ro,fsid=0,no_root_squash,no_subtree_check,crossmnt)
/export/Common *(fsid=4,rw,no_subtree_check,insecure)
/export/Data *(fsid=5,rw,no_subtree_check,insecure,crossmnt)
/export/Data/media/books *(fsid=51,rw,no_subtree_check,insecure,nohide)
/export/Data/media/comics *(fsid=52,rw,no_subtree_check,insecure,nohide)
/export/Docker *(fsid=29,rw,no_root_squash,sync,no_subtree_check,insecure)
/export/Green *(fsid=30,rw,no_root_squash,no_subtree_check,insecure)
/export/photos *(fsid=31,rw,no_root_squash,no_subtree_check,insecure)
```
### /etc/systemd/system/nfs-server.service.d/override.conf
```ini
[Unit]
After=zfs-import.target zfs-mount.service local-fs.target
Requires=zfs-import.target zfs-mount.service
```
### /etc/fstab (ZNAS system mounts only)
ZFS datasets are not listed here — ZFS handles its own mounting. Only system partitions appear:
```
# / - btrfs on nvme0n1p2
/dev/disk/by-uuid/40c60952-0340-4a78-81f9-5b2193da26c6 / btrfs defaults 0 1
# /boot - ext4 on nvme0n1p3
/dev/disk/by-uuid/4abb4efa-0b2b-4e4a-bcaf-78227db4628f /boot ext4 defaults 0 1
# swap
/dev/disk/by-uuid/d07437a0-3d0e-417a-a88e-438c603c2237 none swap sw 0 0
# /srv - btrfs on nvme0n1p5
/dev/disk/by-uuid/c66e81ff-436e-4d6f-980b-6f4875ea7c8e /srv btrfs defaults 0 1
```
---
## Command Reference
- Show active exports: `sudo exportfs -v`
- Reload exports: `sudo exportfs -ra`
- Show available exports (from any host): `showmount -e 192.168.5.10`
- Restart NFS: `sudo systemctl restart nfs-server`
- Check NFS status: `systemctl status nfs-server`
- Verify ZFS mounts: `mount | grep export`
- Verify loopback: `mount | grep data/nfs`