---
title: ZFS Common Commands
description: ZFS Commands
published: true
date: 2026-01-31T15:23:07.585Z
tags: zfs commands
editor: markdown
dateCreated: 2026-01-31T15:23:07.585Z
---

# ZFS Essential Commands Cheat Sheet

---

## Pool Health & Status

```bash
zpool status       # pool health, vdev layout, and error counters
zpool status -v    # verbose: also lists files affected by data errors
zpool list         # capacity, allocation, fragmentation, and health per pool
```

## Dataset Space & Usage

```bash
zfs list                                                        # all datasets: used, avail, refer, mountpoint
zfs list -r vault                                               # recurse through everything under the vault pool
zfs list -o name,used,avail,refer,logicalused,compressratio     # space usage plus compression effectiveness
zfs list -r -o name,used,avail,refer,quota,reservation vault    # quotas and reservations per dataset
```

## Dataset Properties & Settings

```bash
zfs get all vault/dataset                                        # every property of one dataset
zfs get -r compression,dedup,recordsize,atime,quota,reservation vault
zfs get -r compression,dedup,recordsize,encryption,keylocation,keyformat,snapdir vault
zfs get -s local -r all vault                                    # only properties set locally (not inherited or default)
zfs get quota,refquota,reservation,refreservation -r vault       # all space limits in one view
```

## Pool I/O & Performance Monitoring

```bash
zpool iostat -v 1                    # per-vdev bandwidth and IOPS, refreshed every second
arcstat 1                            # ARC hit/miss rates and size, refreshed every second
cat /proc/spl/kstat/zfs/arcstats     # raw ARC kernel statistics
```

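If `arcstat` is not installed, the raw kstats can be summarized directly. A minimal sketch, assuming the stock OpenZFS `arcstats` field names:

```bash
# print ARC size plus total hits and misses from the raw kstats
awk '/^(size|hits|misses) / {print $1, $3}' /proc/spl/kstat/zfs/arcstats
```
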
## Scrubs & Data Integrity

```bash
zpool scrub vault      # start a scrub (verifies every block against its checksum)
zpool scrub -s vault   # stop a scrub that is already running
zpool status           # shows scrub progress and results
```

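Scrubs are usually scheduled rather than run by hand. A hedged cron sketch (the binary path, schedule, and pool name are assumptions; many distros already ship their own scrub timer):

```bash
# /etc/cron.d/zfs-scrub -- scrub the vault pool at 03:00 on the first of each month
0 3 1 * * root /usr/sbin/zpool scrub vault
```
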
## Snapshots

```bash
zfs snapshot vault/dataset@snapname                      # create a point-in-time snapshot
zfs list -t snapshot                                     # list all snapshots
zfs rollback vault/dataset@snapname                      # revert the dataset to that snapshot
zfs clone vault/dataset@snapname vault/dataset-clone     # writable copy backed by the snapshot
```

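For whole-pool snapshots, the recursive flag is often more useful than snapshotting datasets one by one. A short example (dataset and snapshot names are placeholders):

```bash
# recursive, date-stamped snapshot of every dataset under vault
zfs snapshot -r "vault@$(date +%Y-%m-%d)"

# remove that snapshot from every dataset again
zfs destroy -r "vault@2026-01-31"
```
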
## Replication (Send / Receive)

```bash
zfs send vault/dataset@snap1 | zfs receive backup/dataset            # full initial copy
zfs send -i snap1 vault/dataset@snap2 | zfs receive backup/dataset   # incremental: only changes between snap1 and snap2
zfs send -nv vault/dataset@snap1                                     # dry run: estimate stream size without sending
```

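The same pattern works across machines by piping through SSH. A sketch, assuming a reachable host named `backuphost` with a `backup` pool (both are placeholders):

```bash
# initial full replication to a remote pool; -u leaves it unmounted on the target
zfs send vault/dataset@snap1 | ssh backuphost zfs receive -u backup/dataset

# follow-up incremental replication
zfs send -i snap1 vault/dataset@snap2 | ssh backuphost zfs receive backup/dataset
```
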
## Dataset Tuning (Live-Safe Changes)

```bash
zfs set compression=lz4 vault/dataset    # cheap, almost always worth enabling
zfs set recordsize=1M vault/dataset      # large records suit big sequential files; affects new writes only
zfs set atime=off vault/dataset          # skip access-time updates to reduce write churn
zfs set dedup=on vault/dataset           # use with care: dedup tables consume significant RAM
```

## Encryption Management

```bash
zfs get encryption,keylocation,keystatus vault/dataset   # is it encrypted, where is the key, is the key loaded
zfs unload-key vault/dataset                             # forget the key (dataset must be unmounted)
zfs load-key vault/dataset                               # load the key from its keylocation
```

## Disk Preparation & Cleanup

```bash
wipefs /dev/sdX                # list existing filesystem/RAID signatures (read-only)
wipefs -a /dev/sdX             # erase all signatures
zpool labelclear -f /dev/sdX   # remove stale ZFS labels
sgdisk --zap-all /dev/sdX      # destroy GPT and MBR partition structures
lsblk -f /dev/sdX              # verify the disk is now clean
```

## Pool Expansion (Add VDEV)

```bash
# adds a new raidz2 vdev to the pool; check the layout with `zpool add -n` (dry run)
# first, since top-level raidz vdevs cannot be removed later
zpool add vault raidz2 \
    /dev/disk/by-id/disk1 \
    /dev/disk/by-id/disk2 \
    /dev/disk/by-id/disk3 \
    /dev/disk/by-id/disk4 \
    /dev/disk/by-id/disk5
```

## Pool Import / Recovery

```bash
zpool import                        # scan for importable pools
zpool import vault                  # import the vault pool
zpool import -f vault               # force import (e.g. after use on another host)
zpool import -o readonly=on vault   # import read-only for safe inspection/recovery
```

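If a pool does not show up under the default device scan, point the import at a specific device directory (the path below is the usual Linux by-id location):

```bash
zpool import -d /dev/disk/by-id vault
```
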
## Locks, Holds & History

```bash
zfs holds -r vault/dataset@snapname                  # holds take a snapshot argument; -r also checks descendant datasets
zpool history                                        # every administrative command ever run on the pool
zfs diff vault/dataset@snap1 vault/dataset@snap2     # files changed between two snapshots
```

## Deduplication & Compression Stats

```bash
zpool list -v    # per-vdev capacity, plus the pool-wide dedup ratio
zdb -DD vault    # detailed dedup table statistics and histogram
```

## Inventory / Documentation Dumps

```bash
zpool status > zpool-status.txt           # pool topology snapshot
zfs list -r > zfs-layout.txt              # dataset layout snapshot
zfs get -r all vault > zfs-settings.txt   # full property dump
```

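To capture everything in one pass, a small wrapper script works well. A sketch, assuming the pool is named `vault` and `/root/zfs-docs` is an acceptable output location:

```bash
#!/usr/bin/env bash
# Collect a timestamped ZFS inventory for documentation.
set -euo pipefail

outdir="/root/zfs-docs/$(date +%Y-%m-%d)"
mkdir -p "$outdir"

zpool status -v      > "$outdir/zpool-status.txt"
zpool list -v        > "$outdir/zpool-list.txt"
zfs list -r -t all   > "$outdir/zfs-layout.txt"
zfs get -r all vault > "$outdir/zfs-settings.txt"

echo "Inventory written to $outdir"
```
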
## Top 10 Must-Know Commands

```bash
zpool status                       # pool health
zpool list                         # capacity overview
zpool iostat -v 1                  # live I/O
zpool scrub vault                  # data integrity check
zfs list                           # dataset usage
zfs get all vault/dataset          # dataset properties
zfs snapshot vault/dataset@snap    # point-in-time copy
zfs rollback vault/dataset@snap    # revert to a snapshot
zfs send | zfs receive             # replication pattern (see Replication above)
arcstat 1                          # cache performance
```

## Tying ZFS Encryption Keys to a YubiKey

Can ZFS keys be tied to a YubiKey? Short answer: yes, indirectly and safely. ZFS cannot talk to a YubiKey by itself, but you can bind ZFS encryption unlock to a YubiKey-backed secret using standard Linux plumbing.

There are three sane ways to do this, ranked below from best to worst for a typical homelab.

### 🥇 Best option: YubiKey → unlock file → ZFS `keylocation=file://`

This is the most common, reliable, and automation-friendly approach.

How it works (conceptually):

```text
YubiKey
  ↓ (touch / PIN)
Decrypts keyfile
  ↓
ZFS loads dataset key
```

ZFS never sees the YubiKey; it just reads a key file that only exists while the YubiKey is present.

#### Step-by-step (clean + robust)

**1️⃣ Generate a random ZFS keyfile**

```bash
mkdir -p /root/zfs-keys
chmod 700 /root/zfs-keys

# 32 random bytes = one raw ZFS wrapping key
dd if=/dev/urandom of=/root/zfs-keys/vault.key bs=32 count=1
chmod 600 /root/zfs-keys/vault.key
```

**2️⃣ Encrypt that keyfile to the YubiKey**

Using GPG (most flexible):

```bash
gpg --encrypt --recipient <your-yubikey-gpg-id> \
    /root/zfs-keys/vault.key
```

This creates `vault.key.gpg`. Then securely delete the plaintext key:

```bash
shred -u /root/zfs-keys/vault.key
```

**3️⃣ Set ZFS to expect a keyfile**

```bash
# keyformat is fixed when the dataset is created; on an existing encrypted
# dataset, switch it to a raw keyfile with `zfs change-key` (the current key
# must be loaded) rather than `zfs set`
zfs change-key -o keyformat=raw -o keylocation=file:///root/zfs-keys/vault.key vault/Secret

# if the dataset already uses keyformat=raw, only the location needs changing
zfs set keylocation=file:///root/zfs-keys/vault.key vault/Secret
```

**4️⃣ Unlock procedure (manual or scripted)**

Insert the YubiKey, decrypt the keyfile, then load the ZFS key:

```bash
gpg --decrypt /root/zfs-keys/vault.key.gpg > /root/zfs-keys/vault.key
zfs load-key vault/Secret
shred -u /root/zfs-keys/vault.key
```

- ✔️ Dataset unlocked
- ✔️ Keyfile wiped
- ✔️ YubiKey required every time

**5️⃣ Lock again (optional)**

```bash
# the dataset must be unmounted first (zfs unmount vault/Secret)
zfs unload-key vault/Secret
```

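The manual unlock steps above combine naturally into one command. A minimal sketch with light safety checks (the paths, dataset name, and script name are assumptions, not a hardened implementation):

```bash
#!/usr/bin/env bash
# unlock-vault-secret.sh -- decrypt the GPG-wrapped keyfile via the YubiKey,
# load the ZFS key, then wipe the plaintext copy again.
set -euo pipefail

KEYDIR=/root/zfs-keys
DATASET=vault/Secret

umask 077   # keep the temporary plaintext key root-only

# skip the whole dance if the key is already loaded
if [ "$(zfs get -H -o value keystatus "$DATASET")" = "available" ]; then
    echo "Key for $DATASET is already loaded"
    exit 0
fi

# make sure the plaintext key is wiped even if a later step fails
trap 'shred -u "$KEYDIR/vault.key" 2>/dev/null || true' EXIT

gpg --decrypt "$KEYDIR/vault.key.gpg" > "$KEYDIR/vault.key"
zfs load-key "$DATASET"
zfs mount "$DATASET"

echo "Unlocked and mounted $DATASET"
```
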
### 🥈 Alternative: YubiKey challenge–response → derived key

This uses the YubiKey's HMAC-SHA1 challenge/response slot.

Pros:

- No stored keyfile
- Strong hardware binding

Cons:

- More brittle
- Harder to recover
- Less transparent

Typical flow (conceptual):

```bash
ykchalresp -2 <challenge> | sha256sum | zfs load-key
```

⚠️ Only recommended if you are comfortable with irreversible crypto mistakes.

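Spelled out a little further, a hypothetical working version of that pipeline (it assumes the dataset was created with `keyformat=hex` and `keylocation=prompt`, that slot 2 is configured for HMAC-SHA1, and that the challenge string is recorded offline):

```bash
# derive a 32-byte hex key from the YubiKey's HMAC-SHA1 response
CHALLENGE="vault-secret-v1"   # placeholder challenge: choose once and never lose it

ykchalresp -2 "$CHALLENGE" | sha256sum | awk '{print $1}' | zfs load-key vault/Secret
```
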
### 🥉 Not recommended: YubiKey PIV directly with ZFS

ZFS natively:

- ❌ Cannot use PKCS#11
- ❌ Cannot read smartcards
- ❌ Cannot prompt for hardware

Anything claiming "direct ZFS + YubiKey" support is either misleading or is just wrapping option #1.

### 🚨 Critical recovery warnings (read this)

If you do this:

- 🔒 **Back up the encrypted keyfile**
- 🔑 Have a second YubiKey
- 🧾 Document the unlock process offline

Best practice:

- Primary YubiKey
- Spare YubiKey enrolled to the same GPG key
- Encrypted keyfile backup (offline)

If you lose the YubiKey and the encrypted keyfile, the data is permanently unrecoverable. ZFS encryption is not forgiving.

### ✅ Recommended homelab pattern

For a typical setup (servers, NFS, ZFS send, occasional offline use):

- Use file-based keys
- Protect the keyfile with YubiKey + GPG
- Encrypt only the datasets that truly need it
- Leave media and Docker datasets unencrypted for sanity

Example layout (see the creation sketch below):

```text
vault/Green      (unencrypted)
vault/Docker     (unencrypted)
vault/Backups    (encrypted, YubiKey)
vault/Secrets    (encrypted, YubiKey)
```

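Creating the encrypted datasets in that layout might look like this (dataset names are from the example above; `keyformat=raw` expects the 32-byte keyfile generated earlier):

```bash
zfs create -o encryption=aes-256-gcm -o keyformat=raw \
    -o keylocation=file:///root/zfs-keys/vault.key vault/Backups

zfs create -o encryption=aes-256-gcm -o keyformat=raw \
    -o keylocation=file:///root/zfs-keys/vault.key vault/Secrets
```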