Compare commits


No commits in common. "fc05032a7cd62be091fa98603fd453b35d51bde8" and "77ed0db95d93451fa6b73a57ff502c71983b3f31" have entirely different histories.

2 changed files with 140 additions and 753 deletions


@@ -1,463 +0,0 @@
---
title: OPNsense - ntfy Integration
description: Security Notifications
published: true
date: 2026-02-23T22:00:37.268Z
tags:
editor: markdown
dateCreated: 2026-02-23T22:00:37.268Z
---
# OPNsense ntfy Alerts
**Service:** ntfy push notifications from OPNsense
**Host:** OPNsense firewall
**ntfy Server:** Your self-hosted ntfy instance on Netgrimoire
**Methods:** CrowdSec HTTP plugin · Monit custom script · Suricata EVE watcher
---
## Overview
OPNsense does not have a built-in ntfy notification channel, but there are three distinct integration points that together provide complete coverage:
| Method | What It Alerts On | Role |
|---|---|---|
| **CrowdSec HTTP plugin** | Every IP ban decision CrowdSec makes | 🔴 Best for threat intel alerts |
| **Monit + curl script** | System health, service failures, Suricata EVE matches, login failures | 🔴 Best for operational alerts |
| **Suricata EVE watcher** | Suricata high-severity IDS hits (via Monit watching eve.json) | 🟡 Covered via Monit |
All three use your self-hosted ntfy instance. None require external services.
---
## Prerequisites
Before starting, confirm:
- ntfy is running and reachable at `https://ntfy.netgrimoire.com` (or your internal URL)
- ntfy topic created: e.g. `opnsense-alerts`
- If ntfy has auth enabled, have a token ready
- SSH access to OPNsense as root
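A quick way to confirm the first two items is to post a test message from any machine that can reach the ntfy host (same URL and topic as above; add an `Authorization` header only if auth is enabled):
```bash
# Should arrive on every device subscribed to the opnsense-alerts topic
curl -d "ntfy reachability test" https://ntfy.netgrimoire.com/opnsense-alerts
```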
---
## Method 1 — CrowdSec HTTP Notification Plugin
This is the cleanest integration for security alerts. CrowdSec has a built-in HTTP notification plugin. Every time it makes a ban decision — whether from community intel, a Suricata match passed through CrowdSec, or a brute-force detection — it POSTs to ntfy.
### Step 1 — Create the HTTP notification config
SSH into OPNsense and create the ntfy config file:
```bash
ssh root@192.168.3.4
```
```bash
cat > /usr/local/etc/crowdsec/notifications/ntfy.yaml << 'EOF'
# ntfy notification plugin for CrowdSec
# CrowdSec uses its built-in HTTP plugin pointed at ntfy
type: http
name: ntfy_default
log_level: info
# ntfy accepts plain POST body as the notification message
# format is a Go template — .[]Alert is the list of alerts
format: |
  {{range .}}
  🚨 CrowdSec Decision
  Scenario: {{.Scenario}}
  Attacker IP: {{.Source.IP}}
  Country: {{.Source.Cn}}
  Action: {{.Decisions | len}} x {{(index .Decisions 0).Type}}
  Duration: {{(index .Decisions 0).Duration}}
  {{end}}
url: https://ntfy.netgrimoire.com/opnsense-alerts
method: POST
headers:
  Title: "CrowdSec Ban — OPNsense"
  Priority: "high"
  Tags: "rotating_light,shield"
  # Uncomment and set token if ntfy auth is enabled:
  # Authorization: "Bearer YOUR_NTFY_TOKEN"
# skip_tls_verify: false
EOF
```
> ⚠ Replace `https://ntfy.netgrimoire.com/opnsense-alerts` with your actual ntfy URL and topic. If ntfy is internal-only and OPNsense can reach it by hostname, the internal URL works fine.
### Step 2 — Register the plugin in profiles.yaml
Edit the CrowdSec profiles file to dispatch decisions to the ntfy plugin:
```bash
vi /usr/local/etc/crowdsec/profiles.yaml
```
Find the `notifications:` section of the default profile and add `ntfy_default`:
```yaml
name: default_ip_remediation
filters:
  - Alert.Remediation == true && Alert.GetScope() == "Ip"
decisions:
  - type: ban
    duration: 4h
notifications:
  - ntfy_default   # ← add this line
on_success: break
```
> ✓ The `ntfy_default` name must exactly match the `name:` field in the ntfy.yaml file you created above.
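Depending on your CrowdSec version, you can confirm the plugin is registered before restarting; `cscli notifications list` exists in recent releases, though its output format varies:
```bash
cscli notifications list
# ntfy_default should appear with type "http"
```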
### Step 3 — Set correct file ownership
CrowdSec rejects a notification plugin if its configuration file is not owned by root. On FreeBSD the root group is `wheel`, so set ownership and tighten permissions:
```bash
chown root:wheel /usr/local/etc/crowdsec/notifications/ntfy.yaml
chmod 600 /usr/local/etc/crowdsec/notifications/ntfy.yaml
```
### Step 4 — Restart CrowdSec and test
```bash
# Restart via OPNsense service manager (do NOT use systemctl/service directly)
# Go to: Services → CrowdSec → Settings → Apply
# Or from shell:
pluginctl -s crowdsec restart
```
Test by sending a manual notification:
```bash
cscli notifications test ntfy_default
```
You should receive a test push on your device within a few seconds.
Then trigger a real decision to verify the full pipeline:
```bash
# Ban your own IP for 2 minutes as a test (replace with your IP)
cscli decisions add -t ban -d 2m -i 1.2.3.4
# Watch for ntfy notification
# Remove the test ban:
cscli decisions delete -i 1.2.3.4
```
---
## Method 2 — Monit + curl Script
Monit is OPNsense's built-in service monitor. It can watch processes, files, system resources, and log patterns — and call a custom shell script when a condition is met. The script fires a curl POST to ntfy.
This covers things CrowdSec doesn't — service failures, high CPU, gateway down events, SSH login failures, disk usage, and Suricata EVE alerts.
### Step 2.1 — Create the ntfy alert script
```bash
cat > /usr/local/bin/ntfy-alert.sh << 'EOF'
#!/bin/sh
# ntfy-alert.sh — called by Monit to send ntfy push notifications
# Monit provides variables: $MONIT_HOST, $MONIT_SERVICE,
# $MONIT_DESCRIPTION, $MONIT_EVENT
NTFY_URL="https://ntfy.netgrimoire.com/opnsense-alerts"
# NTFY_TOKEN="Bearer YOUR_NTFY_TOKEN"   # uncomment if ntfy auth enabled
TITLE="${MONIT_HOST}: ${MONIT_SERVICE}"
MESSAGE="${MONIT_EVENT} — ${MONIT_DESCRIPTION}"
# Map Monit event types to ntfy priorities
case "$MONIT_EVENT" in
  *"does not exist"*|*"failed"*|*"error"*)
    PRIORITY="urgent"
    TAGS="rotating_light,red_circle"
    ;;
  *"changed"*|*"match"*)
    PRIORITY="high"
    TAGS="warning,yellow_circle"
    ;;
  *"recovered"*|*"succeeded"*)
    PRIORITY="default"
    TAGS="white_check_mark,green_circle"
    ;;
  *)
    PRIORITY="default"
    TAGS="bell"
    ;;
esac
curl -s \
  -H "Title: ${TITLE}" \
  -H "Priority: ${PRIORITY}" \
  -H "Tags: ${TAGS}" \
  -d "${MESSAGE}" \
  "${NTFY_URL}"
# Uncomment for auth (and remove the unauthenticated call above):
# curl -s \
#   -H "Authorization: ${NTFY_TOKEN}" \
#   -H "Title: ${TITLE}" \
#   -H "Priority: ${PRIORITY}" \
#   -H "Tags: ${TAGS}" \
#   -d "${MESSAGE}" \
#   "${NTFY_URL}"
EOF
chmod +x /usr/local/bin/ntfy-alert.sh
```
### Step 2.2 — Enable Monit
Navigate to **Services → Monit → Settings → General Settings**
| Setting | Value |
|---|---|
| Enabled | ✓ |
| Polling Interval | 30 seconds |
| Start Delay | 120 seconds |
| Mail Server | Leave blank (using script instead) |
Click **Save**.
### Step 2.3 — Add Service Tests
Navigate to **Services → Monit → Service Tests Settings** and add the following tests:
**Test 1 — Custom Alert via Script**
| Field | Value |
|---|---|
| Name | `ntfy_alert` |
| Condition | `failed` |
| Action | Execute |
| Path | `/usr/local/bin/ntfy-alert.sh` |
This is the reusable action that all other tests will invoke.
**Test 2 — Suricata EVE High Alert**
| Field | Value |
|---|---|
| Name | `SuricataHighAlert` |
| Condition | `content = "\"severity\":1"` |
| Action | Execute → `/usr/local/bin/ntfy-alert.sh` |
This watches for severity 1 (highest) alerts written to the Suricata EVE JSON log.
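To sanity-check the pattern (it assumes Suricata's compact JSON output with no space after the colon, just like the Monit condition above) and see whether matching records already exist, something along these lines works; the second command needs jq installed:
```bash
# Count severity-1 alert records already present in the EVE log
grep -c '"severity":1' /var/log/suricata/eve.json
# With jq installed, list the five most recent high-severity signatures
jq -r 'select(.alert.severity == 1) | [.timestamp, .src_ip, .alert.signature] | @tsv' \
    /var/log/suricata/eve.json | tail -5
```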
**Test 3 — Suricata Process Down**
| Field | Value |
|---|---|
| Name | `SuricataRunning` |
| Condition | `failed` |
| Action | Execute → `/usr/local/bin/ntfy-alert.sh` |
**Test 4 — CrowdSec Process Down**
| Field | Value |
|---|---|
| Name | `CrowdSecRunning` |
| Condition | `failed` |
| Action | Execute → `/usr/local/bin/ntfy-alert.sh` |
**Test 5 — SSH Login Failure**
| Field | Value |
|---|---|
| Name | `SSHFailedLogin` |
| Condition | `content = "Failed password"` |
| Action | Execute → `/usr/local/bin/ntfy-alert.sh` |
**Test 6 — OPNsense Web UI Login Failure**
| Field | Value |
|---|---|
| Name | `WebUILoginFail` |
| Condition | `content = "webgui"` |
| Action | Execute → `/usr/local/bin/ntfy-alert.sh` |
### Step 2.4 — Add Service Monitors
Navigate to **Services → Monit → Service Settings** and add:
**Monitor 1 — Suricata EVE Log (high alerts)**
| Field | Value |
|---|---|
| Name | `SuricataEVE` |
| Type | File |
| Path | `/var/log/suricata/eve.json` |
| Tests | `SuricataHighAlert` |
**Monitor 2 — Suricata Process**
| Field | Value |
|---|---|
| Name | `Suricata` |
| Type | Process |
| PID File | `/var/run/suricata.pid` |
| Tests | `SuricataRunning` |
| Restart Method | /usr/local/etc/rc.d/suricata restart |
**Monitor 3 — CrowdSec Process**
| Field | Value |
|---|---|
| Name | `CrowdSec` |
| Type | Process |
| Match | `crowdsec` |
| Tests | `CrowdSecRunning` |
**Monitor 4 — SSH Auth Log**
| Field | Value |
|---|---|
| Name | `SSHAuth` |
| Type | File |
| Path | `/var/log/auth.log` |
| Tests | `SSHFailedLogin` |
**Monitor 5 — System Resources (optional)**
| Field | Value |
|---|---|
| Name | `System` |
| Type | System |
| Tests | `ntfy_alert` (on resource threshold exceeded) |
Click **Apply** after adding all services.
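For reference, the GUI entries above correspond roughly to Monit stanzas like the following. This is only a sketch; OPNsense generates its own Monit control file and the exact directives it emits may differ:
```
check file SuricataEVE with path /var/log/suricata/eve.json
    if content = "\"severity\":1" then exec "/usr/local/bin/ntfy-alert.sh"

check process Suricata with pidfile /var/run/suricata.pid
    if does not exist then exec "/usr/local/bin/ntfy-alert.sh"
```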
### Step 2.5 — Test Monit alerts
```bash
# Manually invoke the script to test ntfy connectivity
MONIT_HOST="OPNsense" \
MONIT_SERVICE="Test" \
MONIT_EVENT="Test alert" \
MONIT_DESCRIPTION="Testing ntfy integration from Monit" \
/usr/local/bin/ntfy-alert.sh
```
You should receive a push notification immediately.
---
## Alert Topics & Priority Mapping
Consider using separate ntfy topics to filter notifications by type on your device:
| Topic | Used For | Suggested ntfy Priority |
|---|---|---|
| `opnsense-alerts` | CrowdSec bans, Suricata high hits | high / urgent |
| `opnsense-health` | Monit service failures, process restarts | high |
| `opnsense-info` | Service recoveries, status changes | default / low |
To use separate topics, change the `NTFY_URL` in the Monit script and the `url:` in the CrowdSec config accordingly.
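If you split topics, the Monit script can pick the topic from the event type instead of hard-coding one URL. A minimal sketch, reusing the case logic already in `ntfy-alert.sh` (variable names here are illustrative):
```bash
# Route recoveries to the quieter topic, everything else to the alert topic
NTFY_BASE="https://ntfy.netgrimoire.com"
case "$MONIT_EVENT" in
    *"recovered"*|*"succeeded"*) TOPIC="opnsense-info" ;;
    *)                           TOPIC="opnsense-alerts" ;;
esac
NTFY_URL="${NTFY_BASE}/${TOPIC}"
```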
---
## ntfy Priority Reference
ntfy supports five priority levels that map to different notification behaviors on Android/iOS:
| ntfy Priority | Numeric | Behavior |
|---|---|---|
| `min` | 1 | No notification, no sound |
| `low` | 2 | Notification, no sound |
| `default` | 3 | Notification with sound |
| `high` | 4 | Notification with sound, bypasses DND |
| `urgent` | 5 | Phone rings through DND, repeated |
For firewall alerts: use `urgent` for process failures and `high` for IDS/ban events. Reserve `urgent` sparingly to avoid alert fatigue.
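To preview how each level behaves on your own devices before settling on a mapping, a short loop posts one test message per priority (no auth assumed):
```bash
for p in default high urgent; do
    curl -s -H "Title: Priority test" -H "Priority: ${p}" \
        -d "Test message at priority ${p}" \
        https://ntfy.netgrimoire.com/opnsense-alerts
done
```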
---
## Keeping Config Persistent Across Upgrades
OPNsense upgrades can overwrite files in certain paths. The safest locations for persistent custom files:
| File | Location | Persistent? |
|---|---|---|
| ntfy-alert.sh | `/usr/local/bin/ntfy-alert.sh` | ✓ Yes — not touched by upgrades |
| CrowdSec ntfy.yaml | `/usr/local/etc/crowdsec/notifications/ntfy.yaml` | ✓ Yes — plugin config directory |
| CrowdSec profiles.yaml | `/usr/local/etc/crowdsec/profiles.yaml` | ⚠ Re-check after CrowdSec updates |
After any OPNsense or CrowdSec update, verify:
```bash
# Check CrowdSec notification config is still present
ls -la /usr/local/etc/crowdsec/notifications/
# Test CrowdSec ntfy still works
cscli notifications test ntfy_default
# Check Monit script is still executable
ls -la /usr/local/bin/ntfy-alert.sh
```
---
## Troubleshooting
**No notification received from CrowdSec test:**
```bash
# Check CrowdSec logs for plugin errors
tail -50 /var/log/crowdsec.log | grep -i ntfy
tail -50 /var/log/crowdsec.log | grep -i notification
# Verify ntfy URL is reachable from OPNsense
curl -v -d "test" https://ntfy.netgrimoire.com/opnsense-alerts
# Check profiles.yaml has ntfy_default in notifications section
grep -A5 "notifications:" /usr/local/etc/crowdsec/profiles.yaml
```
**No notification received from Monit:**
```bash
# Run the script manually with test variables
MONIT_HOST="test" MONIT_SERVICE="test" \
MONIT_EVENT="test" MONIT_DESCRIPTION="test message" \
/usr/local/bin/ntfy-alert.sh
# Check Monit is running
ps aux | grep monit
# Check Monit logs
tail -50 /var/log/monit.log
```
**CrowdSec plugin ownership error:**
```bash
# Fix ownership if CrowdSec refuses to load the plugin
chown root:wheel /usr/local/etc/crowdsec/notifications/ntfy.yaml
ls -la /usr/local/etc/crowdsec/notifications/
```
**ntfy auth failing:**
```bash
# Test with token manually
curl -H "Authorization: Bearer YOUR_TOKEN" \
-H "Title: Test" \
-d "Auth test" \
https://ntfy.netgrimoire.com/opnsense-alerts
```
---
## Related Documentation
- [OPNsense Firewall](./opnsense-firewall) — parent firewall documentation
- [CrowdSec](./crowdsec) — threat intelligence engine sending these alerts
- [Suricata IDS/IPS](./suricata-ids-ips) — source of EVE alerts watched by Monit
- [ntfy](./ntfy) — self-hosted notification server on Netgrimoire


@@ -2,287 +2,170 @@
---
title: ZFS-NFS-Exports
description: Exporting NFS shares from ZFS datasets
published: true
date: 2026-02-23T21:58:11.949Z
tags:
editor: markdown
dateCreated: 2026-02-01T20:45:40.210Z
---
# NFS Configuration
## Overview
ZNAS exports storage via NFSv4. All exports are ZFS datasets mounted directly to `/export/*` — no bind mounts. NFS is configured to wait for ZFS at boot via a systemd override.
ZNAS also mounts its own NFS exports back to itself at `/data/nfs/znas`. This is intentional: Docker Swarm containers scheduled to ZNAS need to access NAS storage at the same paths as containers running on other swarm members. The loopback mount provides a consistent NFS-backed path regardless of which node a container lands on.
All other clients are Linux systems using autofs.
---
## Server Configuration
### ZFS Mountpoints
ZFS datasets mount directly to `/export/*`. No bind mounts are used.
```
vault → /export
vault/Common → /export/Common
vault/Data → /export/Data
vault/Data/media_books → /export/Data/media/books
vault/Data/media_comics → /export/Data/media/comics
vault/Docker → /export/Docker
vault/Green → /export/Green
vault/Green/Pocket → /export/Green/Pocket
vault/Photos → /export/Photos
```
Verify at any time:
```bash
zfs list -r vault
mount | grep export
```
If a dataset was created with a different mountpoint, configure it (and any child datasets) to mount directly at the export path instead of using bind mounts:
```bash
sudo zfs set mountpoint=/export vault
sudo zfs set mountpoint=/export/Common vault/Common
sudo zfs set mountpoint=/export/Data vault/Data
sudo zfs set mountpoint=/export/Green vault/Green
sudo zfs set mountpoint=/export/Docker vault/Docker
# Child datasets need explicit mountpoints too
sudo zfs set mountpoint=/export/Data/media/books vault/Data/media_books
sudo zfs set mountpoint=/export/Data/media/comics vault/Data/media_comics
```
When re-pointing datasets that are already mounted, unmount children first, then parents, then remount all:
```bash
sudo zfs unmount vault/Data/media_books
sudo zfs unmount vault/Data/media_comics
sudo zfs unmount vault/Data
sudo zfs unmount vault/Green
sudo zfs unmount vault/Docker
sudo zfs unmount vault/Common
sudo zfs mount -a
```
### /etc/exports
```
# NFSv4 - pseudo filesystem root
/export                    *(ro,fsid=0,no_root_squash,no_subtree_check,crossmnt)
# Shares beneath the NFSv4 root
/export/Common             *(fsid=4,rw,no_subtree_check,insecure)
/export/Data               *(fsid=5,rw,no_subtree_check,insecure,crossmnt)
/export/Data/media/books   *(fsid=51,rw,no_subtree_check,insecure,nohide)
/export/Data/media/comics  *(fsid=52,rw,no_subtree_check,insecure,nohide)
/export/Docker             *(fsid=29,rw,no_root_squash,sync,no_subtree_check,insecure)
/export/Green              *(fsid=30,rw,no_root_squash,no_subtree_check,insecure)
/export/photos             *(fsid=31,rw,no_root_squash,no_subtree_check,insecure)
```
**Key options:**
- `fsid=0` on `/export` — required for the NFSv4 pseudo-root. Clients enumerate all exports from here.
- `crossmnt` — allows NFS to cross ZFS dataset boundaries when traversing the tree.
- `nohide` — required on `media/books` and `media/comics` because they are separate ZFS datasets mounted beneath the `vault/Data` export path. Without it clients see empty directories.
- `no_subtree_check` — disables subtree checking, which improves reliability and performance.
- `no_root_squash` — Docker and Green exports allow root writes. Required for container volume mounts.
- `insecure` — permits connections from unprivileged ports (>1024). Required for some Linux NFS clients and all macOS clients.
- `sync` on Docker — forces synchronous writes for container volume safety.
### systemd Boot Order Override
NFS is configured to wait for ZFS to fully mount before starting.
`/etc/systemd/system/nfs-server.service.d/override.conf`:
```ini
[Unit]
After=zfs-import.target zfs-mount.service local-fs.target
Requires=zfs-import.target zfs-mount.service
```
Apply after any changes:
```bash
sudo systemctl daemon-reload
sudo exportfs -ra
sudo systemctl restart nfs-server
```
### Autofs Disabled on Server
Autofs is disabled on ZNAS itself. It must only run on NFS clients. Running autofs on the server creates recursive mount loops.
On the server:
```bash
sudo systemctl stop autofs
sudo systemctl disable autofs
```
---
## Loopback Mount (Docker Swarm)
ZNAS mounts its own NFS exports back to itself at `/data/nfs/znas`. This ensures containers scheduled to ZNAS by Docker Swarm access storage at the same NFS-backed paths as containers running on any other swarm member — consistent regardless of which node a service lands on.
Swarm container volume mounts reference paths under `/data/nfs/znas/` rather than `/export/` directly.
### The Timing Problem
Getting this mount to survive reboots reliably was non-trivial. The loopback has a chicken-and-egg dependency chain:
1. ZFS must import and mount pools before NFS server can export anything
2. NFS server must be fully started before the loopback mount can succeed
3. The loopback mount must be established before Docker Swarm containers start
A plain `_netdev` fstab entry is not sufficient — `_netdev` only guarantees the network is up, not that the NFS server is ready. The mount would race against NFS startup and fail silently or hang.
### Solution — fstab with x-systemd.after
The loopback is established via `/etc/fstab` using the `x-systemd.after` option to explicitly declare the dependency on `nfs-server.service`:
```
localhost:/ /data/nfs/znas nfs4 defaults,_netdev,x-systemd.after=nfs-server.service 0 0
```
`x-systemd.after=nfs-server.service` causes systemd-fstab-generator to automatically create a mount unit (`data-nfs-znas.mount`) with `After=nfs-server.service` in its `[Unit]` block. This guarantees the full dependency chain:
```
zfs-import.target
→ zfs-mount.service
→ nfs-server.service (via nfs-server override.conf)
→ data-nfs-znas.mount (via x-systemd.after in fstab)
→ remote-fs.target
→ Docker Swarm containers
```
The generated unit (created automatically at runtime by systemd-fstab-generator — not a file on disk):
```ini
# /run/systemd/generator/data-nfs-znas.mount
[Unit]
Documentation=man:fstab(5) man:systemd-fstab-generator(8)
SourcePath=/etc/fstab
After=nfs-server.service
Before=remote-fs.target
[Mount]
What=localhost:/
Where=/data/nfs/znas
Type=nfs4
Options=defaults,_netdev,x-systemd.after=nfs-server.service
```
**Do not create a hand-written systemd mount unit for this.** systemd-fstab-generator handles it automatically from the fstab entry. A manual unit would conflict.
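To confirm the unit in use really is the generated one, `systemctl cat` shows where it came from; a stray hand-written copy under `/etc/systemd/system/` would show up here instead:
```bash
# Expect a /run/systemd/generator/ path and SourcePath=/etc/fstab
systemctl cat data-nfs-znas.mount | head -n 5
```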
### Verify Loopback is Active
```bash ```bash
mount | grep data/nfs/znas
# Should show: localhost:/ on /data/nfs/znas type nfs4 (...)
systemctl status data-nfs-znas.mount
# Should show: active (mounted)
```
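If the mount comes up late or not at all, systemd can also show the dependency chain it actually waited on at the last boot:
```bash
systemd-analyze critical-chain data-nfs-znas.mount
```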
---
## Client Configuration
All non-Swarm clients are Linux systems using autofs.
### Autofs Configuration
`/etc/auto.master` (relevant entry):
```
/data/nfs /etc/auto.nfs
```
`/etc/auto.nfs`:
```
znas -fstype=nfs4 192.168.5.10:/
```
This mounts the full NFSv4 tree from ZNAS at `/data/nfs/znas` on demand — the same path used by the loopback mount on ZNAS itself. All swarm nodes (including ZNAS) access NAS storage via `/data/nfs/znas/`.
**Note:** Autofs must be enabled on clients and disabled on the NFS server. Running autofs on the server creates recursive mount loops.
### Adding a New Client
```bash
# Install autofs if not present
sudo apt install autofs
# Add to /etc/auto.master if not already present
echo "/data/nfs /etc/auto.nfs" | sudo tee -a /etc/auto.master
# Create or update /etc/auto.nfs
echo "znas -fstype=nfs4 192.168.5.10:/" | sudo tee -a /etc/auto.nfs
# Reload autofs
sudo systemctl reload autofs
# Trigger mount by accessing the path
ls /data/nfs/znas/
```
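To confirm a client picked up the map without waiting for the first access, the Linux autofs `automount` binary can dump its active maps:
```bash
sudo automount --dumpmaps | grep -B2 -A2 znas
```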
### Manual Mount (testing only)
```bash
# Verify exports are visible from client
showmount -e 192.168.5.10
# Test manual mount
sudo mkdir -p /mnt/znas
sudo mount -t nfs4 192.168.5.10:/ /mnt/znas
# Verify tree is accessible
ls /mnt/znas/Data/media/books/
# Unmount after testing
sudo umount /mnt/znas
```
---
## Adding New Datasets
When creating a new ZFS dataset that needs to be NFS-accessible:
```bash
# Create with the correct mountpoint from the start
sudo zfs create -o mountpoint=/export/Data/new_folder vault/Data/new_folder
# Verify it's visible
ls -la /export/Data/new_folder/
```
The dataset is automatically visible via NFS due to `crossmnt` and `nohide` on the parent: no bind mounts in `/etc/fstab`, no NFS restart, and no changes to `/etc/exports` unless the new dataset requires different access controls.
If different permissions are required, add an explicit entry to `/etc/exports` and reload:
```bash
sudo exportfs -ra
```
---
## Current Export List
Verified via `showmount -e 127.0.0.1`:
```
/export/photos *
/export/Green *
/export/Docker *
/export/Data/media/comics *
/export/Data/media/books *
/export/Data *
/export/Common *
/export *
```
---
## Known Gotchas
**Loopback mount races NFS at boot** — This was the hardest problem to solve. A plain `_netdev` fstab entry only guarantees the network interface is up, not that the NFS server is ready to accept connections. The loopback mount would attempt before NFS finished starting and fail silently or hang. The fix is `x-systemd.after=nfs-server.service` in the fstab options, which causes systemd-fstab-generator to emit an `After=nfs-server.service` dependency in the generated mount unit. The full required boot chain is: `zfs-import.target` → `zfs-mount.service` → `nfs-server.service` → `data-nfs-znas.mount`. Each link must be explicit.
**Do not hand-write a systemd mount unit for the loopback** — systemd-fstab-generator creates `data-nfs-znas.mount` automatically from the fstab entry at runtime (in `/run/systemd/generator/`, not `/etc/systemd/system/`). Creating a manual unit in `/etc/systemd/system/` will conflict with the generated one.
**Autofs must be disabled on the server** — Running autofs on ZNAS itself creates a recursive mount loop. Autofs belongs on clients only. If autofs is accidentally re-enabled on ZNAS it will fight with the fstab loopback mount.
**NFSv4 pseudo-root is required** — The `/export` entry with `fsid=0` is mandatory for NFSv4 clients. Without it clients cannot enumerate the export tree. Do not remove it even though it looks redundant.
**`nohide` on sub-datasets** — `vault/Data/media_books` and `vault/Data/media_comics` are separate ZFS datasets mounted beneath the `vault/Data` export path. NFS does not cross filesystem boundaries by default. Without `nohide` clients see empty directories at those paths even though the data is present.
**Do not use bind mounts for ZFS datasets** — Configure ZFS mountpoints directly to `/export/*`. Bind mounts in fstab for ZFS datasets cause ordering problems and are unnecessary.
**Always set mountpoints when creating new datasets** — If a dataset is created without an explicit mountpoint it will inherit the parent's path and may not be visible or exportable correctly. Set `mountpoint=` at creation time.
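A quick way to audit the whole layout after creating or moving datasets (dataset names per the mountpoint table at the top of this page):
```bash
# Every dataset under vault with its effective mountpoint and where that value came from
zfs get -r -o name,value,source mountpoint vault
```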
---
## Troubleshooting
### Datasets not visible via NFS
```bash
# Verify dataset is mounted
zfs list | grep dataset_name
# Check NFS can read it
sudo -u nobody ls -la /export/path/to/dataset/
# Reload exports
sudo exportfs -ra
sudo systemctl restart nfs-server
```
### Client shows empty directories
```bash
# Clear NFS cache and remount
sudo umount -f /mnt/znas
sudo mount -t nfs4 192.168.5.10:/ /mnt/znas
# Test without caching to isolate the problem
sudo mount -t nfs4 -o noac,lookupcache=none 192.168.5.10:/ /mnt/znas
```
### After reboot, exports are empty
```bash
# Confirm ZFS mounted before NFS started
systemctl status zfs-mount.service
systemctl status nfs-server.service
# Confirm override is in place
systemctl cat nfs-server.service | grep -A5 "\[Unit\]"
```
### Loopback mount not working for Swarm containers
```bash
# Check mount unit status
systemctl status data-nfs-znas.mount
# Verify full dependency chain is satisfied
systemctl status zfs-mount.service
systemctl status nfs-server.service
systemctl status data-nfs-znas.mount
# Verify loopback is mounted
mount | grep data/nfs/znas
# If missing, mount manually to test
sudo mount -t nfs4 127.0.0.1:/ /data/nfs/znas
# Check a container can see the path
docker run --rm -v /data/nfs/znas/Data:/data alpine ls /data
```
If the unit fails at boot, confirm the fstab entry includes `x-systemd.after=nfs-server.service` — without it the mount races against NFS startup and loses. A plain `_netdev` entry is not sufficient.
---
## Configuration Files Reference
### /etc/exports
```
/export                    *(ro,fsid=0,no_root_squash,no_subtree_check,crossmnt)
/export/Common             *(fsid=4,rw,no_subtree_check,insecure)
/export/Data               *(fsid=5,rw,no_subtree_check,insecure,crossmnt)
/export/Data/media/books   *(fsid=51,rw,no_subtree_check,insecure,nohide)
/export/Data/media/comics  *(fsid=52,rw,no_subtree_check,insecure,nohide)
/export/Docker             *(fsid=29,rw,no_root_squash,sync,no_subtree_check,insecure)
/export/Green              *(fsid=30,rw,no_root_squash,no_subtree_check,insecure)
/export/photos             *(fsid=31,rw,no_root_squash,no_subtree_check,insecure)
```
### /etc/systemd/system/nfs-server.service.d/override.conf
```ini
[Unit]
After=zfs-import.target zfs-mount.service local-fs.target
Requires=zfs-import.target zfs-mount.service
```
### /etc/fstab (ZNAS system mounts only)
ZFS datasets are not listed here — ZFS handles its own mounting. Only system partitions appear:
```
# / - btrfs on nvme0n1p2
/dev/disk/by-uuid/40c60952-0340-4a78-81f9-5b2193da26c6 / btrfs defaults 0 1
# /boot - ext4 on nvme0n1p3
/dev/disk/by-uuid/4abb4efa-0b2b-4e4a-bcaf-78227db4628f /boot ext4 defaults 0 1
# swap
/dev/disk/by-uuid/d07437a0-3d0e-417a-a88e-438c603c2237 none swap sw 0 0
# /srv - btrfs on nvme0n1p5
/dev/disk/by-uuid/c66e81ff-436e-4d6f-980b-6f4875ea7c8e /srv btrfs defaults 0 1
```
---
## Command Reference
- Show active exports: `sudo exportfs -v`
- Reload exports: `sudo exportfs -ra`
- Show available exports (from any host): `showmount -e 192.168.5.10`
- Restart NFS: `sudo systemctl restart nfs-server`
- Check NFS status: `systemctl status nfs-server`
- Verify ZFS mounts: `mount | grep export`
- Verify loopback: `mount | grep data/nfs`