prep for new grimoire
parent a72eb28f9e
commit 2aff30ab71
165 changed files with 0 additions and 0 deletions
26 False Grimoire/Netgrimoire/Audits/Calibre-web-2026-04-03.md Normal file
@@ -0,0 +1,26 @@
---
title: Audit - Calibre-web.yaml
description: Gremlin audit report 2026-04-03
published: true
date: 2026-04-03T02:30:36.844Z
tags: gremlin,audit
editor: markdown
dateCreated: 2026-04-03T02:30:36.844Z
---

# Audit Report — Calibre-web.yaml

**Date:** 2026-04-03
**File:** swarm/Calibre-web.yaml
**Type:** Docker Swarm
**Verdict:** FAIL

---

PASS: Homepage labels (homepage.group, homepage.name, homepage.icon, homepage.href, homepage.description) are all present and correctly configured.

FAIL: Caddy labels on exposed services are incorrect. The `caddy` label should be a single string containing all domains separated by commas, not an array. The correct format is `caddy=books.netgrimoire.com, books.pncharris.com`.

PASS: Placement constraints (node.hostname) are correctly specified as 'znas'.

PASS: Volumes use the /DockerVol/<service> path convention.

PASS: Network references the external netgrimoire overlay.

VERDICT: FAIL
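A minimal sketch of the corrected label block, assuming the comma-separated caddy-docker-proxy domain syntax described in the FAIL item (the port in the reverse_proxy line is an assumption, since the original compose file is not shown):

```yaml
services:
  calibre-web:
    deploy:
      labels:
        # one string listing every domain, not a YAML array
        - caddy=books.netgrimoire.com, books.pncharris.com
        - caddy.reverse_proxy={{upstreams 8083}}   # 8083 is an assumed service port
```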
47 False Grimoire/Netgrimoire/Audits/JellySeer-2026-04-03.md Normal file
@@ -0,0 +1,47 @@
---
title: Audit - JellySeer.yaml
description: Gremlin audit report 2026-04-03
published: true
date: 2026-04-03T02:31:31.742Z
tags: gremlin,audit
editor: markdown
dateCreated: 2026-04-03T02:31:31.742Z
---

# Audit Report — JellySeer.yaml

**Date:** 2026-04-03
**File:** swarm/JellySeer.yaml
**Type:** Docker Swarm
**Verdict:** FAIL

---

### SWARM AUDIT

1. **Homepage labels**:
   - `homepage.group`: "Media Search" — **PASS**
   - `homepage.name`: "JellySeer" — **PASS**
   - `homepage.icon`: "sh-jellyseerr.svg" — **PASS**
   - `homepage.href`: "https://requests.netgrimoire.com" — **PASS**
   - `homepage.description`: "Media Server" — **PASS**

2. **Uptime Kuma labels**:
   - `kuma.jellyseer.http.name`: "JellySeer" — **PASS**
   - `kuma.jellyseer.http.url`: "http://jellyseer:5055" — **PASS**

3. **Caddy labels on exposed services**:
   - `caddy: requests.netgrimoire.com` — **PASS**
   - `caddy.reverse_proxy: http://jellyseer:5055` — **PASS**

4. **Placement constraints**:
   - `node.hostname == docker5` — **PASS**

5. **Volumes use /DockerVol/<service> path convention**:
   - `/DockerVol/JellySeer/config:/app/config` — **PASS**
   - `/data/nfs/znas/Data/media:/data:shared` — **FAIL**: does not follow the `/DockerVol/<service>` path convention; move it under a conforming path for consistency.

6. **Network references external netgrimoire overlay**:
   - `netgrimoire` network — **PASS**

### VERDICT: FAIL
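A hedged sketch of a conforming media mount for the FAIL item above — the `/DockerVol/JellySeer/media` path is hypothetical, and the NFS share would still need to be mounted or linked there on the host:

```yaml
volumes:
  - /DockerVol/JellySeer/config:/app/config
  - /DockerVol/JellySeer/media:/data:shared   # hypothetical conforming path for the NFS media share
```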
50 False Grimoire/Netgrimoire/Audits/JellyStat-2026-04-03.md Normal file
@@ -0,0 +1,50 @@
---
title: Audit - JellyStat.yaml
description: Gremlin audit report 2026-04-03
published: true
date: 2026-04-03T02:32:31.251Z
tags: gremlin,audit
editor: markdown
dateCreated: 2026-04-03T02:32:31.251Z
---

# Audit Report — JellyStat.yaml

**Date:** 2026-04-03
**File:** swarm/JellyStat.yaml
**Type:** Docker Swarm
**Verdict:** FAIL

---

### Audit Results:

1. **Homepage labels**:
   - `homepage.group=Library` — **PASS**
   - `homepage.name=JellyStat` — **PASS**
   - `homepage.icon=jellystat.png` — **FAIL**: The icon should be a valid icon name, a path relative to the service's context, or an absolute URL.
     - **Fix**: Update the icon value to a valid location.
   - `homepage.href=http://jellystat.netgrimoire.com` — **PASS**
   - `homepage.description=Jelly Stats` — **PASS**

2. **Uptime Kuma labels**:
   - Uptime Kuma labels are not applicable to this service. **PASS**

3. **Caddy labels on exposed services**:
   - `caddy=jellystat.netgrimoire.com` — **PASS**
   - `caddy.reverse_proxy="{{upstreams 3000}}"` — **PASS**
   - **Note**: Ensure that the reverse proxy configuration is correct and functional within your Caddy setup.

4. **Placement constraints**:
   - `node.hostname == bruce` — **PASS**

5. **Volumes use /DockerVol/<service> path convention**:
   - `/DockerVol/jellystat/postgres-data` — **PASS**
   - `/DockerVol/jellystat/backup-data` — **PASS**

6. **Network references external netgrimoire overlay**:
   - `netgrimoire` — **PASS**

### VERDICT: FAIL

The audit identified one issue that needs to be addressed: the `homepage.icon` label should use a valid icon name, file path, or URL. Once this is resolved, the audit will pass.
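One hedged way to resolve the failing `homepage.icon` label, following the `sh-` prefixed icon naming that the JellySeer stack in this same commit uses (the exact icon name is an assumption):

```yaml
deploy:
  labels:
    - homepage.icon=sh-jellystat.svg   # assumed icon name; a full URL also works
```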
107 False Grimoire/Netgrimoire/Audits/SQL-mgmt-2026-04-03.md Normal file
@@ -0,0 +1,107 @@
---
title: Audit - SQL-mgmt.yaml
description: Gremlin audit report 2026-04-03
published: true
date: 2026-04-03T02:34:04.814Z
tags: gremlin,audit
editor: markdown
dateCreated: 2026-04-03T02:34:04.814Z
---

# Audit Report — SQL-mgmt.yaml

**Date:** 2026-04-03
**File:** swarm/SQL-mgmt.yaml
**Type:** Docker Swarm
**Verdict:** FAIL

---

### SWARM AUDIT REPORT

#### Homepage Labels
1. **PASS**: `phpmyadmin`
   - `homepage.group=Management`
   - `homepage.name=PHPMyadmin`
   - `homepage.icon=phpmyadmin.png`
   - `homepage.href=http://phpmyadmin.netgrimoire.com`
   - `homepage.description=MySQL Manager`

2. **PASS**: `phppgadmin`
   - `homepage.group=Management`
   - `homepage.name=PHPpgmyadmin`
   - `homepage.icon=phppgmyadmin.png`
   - `homepage.href=http://phppgmyadmin.netgrimoire.com`
   - `homepage.description=Postgres Manager`

#### Uptime Kuma Labels
1. **FAIL**: `phpmyadmin` and `phppgadmin`
   - Missing labels: `kuma.msql.http.name`, `kuma.mealie.http.url`.

2. **FIX**:
```yaml
phpmyadmin:
  deploy:
    labels:
      # ...
      - kuma.msql.http.name="PHPMyadmin"
      - kuma.msql.http.url=http://phpmyadmin:80
      # ...

phppgadmin:
  deploy:
    labels:
      # ...
      - kuma.mealie.http.url=http://phppgmyadmin:80
      # ...
```

#### Caddy Labels on Exposed Services
1. **PASS**: `phpmyadmin`
   - `caddy=phpmyadmin.netgrimoire.com`
   - `caddy.reverse_proxy="{{upstreams 80}}"`

2. **PASS**: `phppgadmin`
   - `caddy=phppgmyadmin.netgrimoire.com`
   - `caddy.reverse_proxy="{{upstreams 80}}"`

#### Placement Constraints
1. **FAIL**: Both services are missing placement constraints (`node.hostname`).

2. **FIX**:
```yaml
phpmyadmin:
  deploy:
    labels:
      # ...
    placement:
      constraints:
        - node.hostname==<desired-hostname>

phppgadmin:
  deploy:
    labels:
      # ...
    placement:
      constraints:
        - node.hostname==<desired-hostname>
```

#### Volumes Use /DockerVol/<service> Path Convention
1. **FAIL**: Both services are missing volume configurations.

2. **FIX**:
```yaml
phpmyadmin:
  volumes:
    - /DockerVol/phpmyadmin:/var/lib/mysql

phppgadmin:
  volumes:
    - /DockerVol/phppgadmin:/var/lib/postgresql/data
```

#### Network References External `netgrimoire` Overlay
1. **PASS**: Both services correctly reference the external network `netgrimoire`.

### VERDICT: FAIL
47 False Grimoire/Netgrimoire/Audits/authelia-2026-04-03.md Normal file
@@ -0,0 +1,47 @@
---
title: Audit - authelia.yaml
description: Gremlin audit report 2026-04-03
published: true
date: 2026-04-03T02:34:59.760Z
tags: gremlin,audit
editor: markdown
dateCreated: 2026-04-03T02:34:59.760Z
---

# Audit Report — authelia.yaml

**Date:** 2026-04-03
**File:** swarm/authelia.yaml
**Type:** Docker Swarm
**Verdict:** FAIL

---

**Homepage labels:**
- **PASS**: homepage.group=Management
- **PASS**: homepage.name=Authelia
- **PASS**: homepage.icon=authelia.png
- **PASS**: homepage.href=https://login.wasted-bandwidth.net
- **PASS**: homepage.description=SSO / Forward-Auth

**Uptime Kuma labels:**
- **PASS**: kuma.authelia.http.name="Authelia"
- **PASS**: kuma.authelia.http.url=http://authelia:9091

**Caddy labels on exposed services:**
- **PASS**: caddy=login.wasted-bandwidth.net
- **PASS**: caddy.reverse_proxy={{upstreams 9091}}

**Placement constraints:**
- **FAIL**: Both 'authelia' and 'redis' are constrained to run on the node 'nas', but there is no guarantee that 'nas' will always be available. Consider a more flexible constraint.
  - Fix: Replace `constraints: - node.hostname == nas` with a more general placement strategy, such as a node label shared by several eligible nodes.

**Volumes use /DockerVol/<service> path convention:**
- **PASS**: `/DockerVol/authelia/config:/config`
- **PASS**: `/DockerVol/authelia/secrets:/secrets`
- **PASS**: `/DockerVol/authelia/redis:/data`

**Network references external netgrimoire overlay:**
- **PASS**: `networks: - netgrimoire`

**VERDICT: FAIL**
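A hedged sketch of the more flexible placement the fix suggests, constraining on a node label instead of a fixed hostname — the `auth` label name is hypothetical and would need to be applied to the eligible nodes first:

```yaml
deploy:
  placement:
    constraints:
      # hypothetical label; apply with: docker node update --label-add auth=true <node>
      - node.labels.auth == true
```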
48 False Grimoire/Netgrimoire/Audits/authentik-2026-04-03.md Normal file
@@ -0,0 +1,48 @@
---
title: Audit - authentik.yaml
description: Gremlin audit report 2026-04-03
published: true
date: 2026-04-03T02:36:24.241Z
tags: gremlin,audit
editor: markdown
dateCreated: 2026-04-03T02:36:24.241Z
---

# Audit Report — authentik.yaml

**Date:** 2026-04-03
**File:** swarm/authentik.yaml
**Type:** Docker Swarm
**Verdict:** FAIL

---

**SWARM AUDIT REPORT**

1. **Homepage labels**
   - `homepage.group`: PASS
   - `homepage.name`: PASS
   - `homepage.icon`: PASS
   - `homepage.href`: PASS
   - `homepage.description`: PASS

2. **Uptime Kuma labels**
   - No Uptime Kuma service found, hence no labels to check.

3. **Caddy labels on exposed services**
   - `caddy=auth.netgrimoire.com` and `caddy.reverse_proxy="{{upstreams 9000}}"`: PASS

4. **Placement constraints**
   - `node.hostname == znas`: PASS for all services

5. **Volumes use /DockerVol/<service> path convention**
   - `/DockerVol/Authentik/Postgres`, `/DockerVol/Authentik/redis`, `/DockerVol/Authentik/media`, `/DockerVol/Authentik/custom-templates`: PASS
   - `/var/run/docker.sock` for `worker` service: FAIL

**Fixes Required**
- The `worker` service mounts `/var/run/docker.sock:/var/run/docker.sock`, which does not follow the `/DockerVol/<service>` convention. The Docker socket lives at a fixed host path and cannot simply be relocated, so either drop the mount if the worker does not need Docker API access, or document it as an intentional exception to the convention.

**VERDICT: FAIL**
44 False Grimoire/Netgrimoire/Audits/bazarr-2026-04-03.md Normal file
@@ -0,0 +1,44 @@
---
title: Audit - bazarr.yaml
description: Gremlin audit report 2026-04-03
published: true
date: 2026-04-03T02:37:15.344Z
tags: gremlin,audit
editor: markdown
dateCreated: 2026-04-03T02:37:15.344Z
---

# Audit Report — bazarr.yaml

**Date:** 2026-04-03
**File:** swarm/bazarr.yaml
**Type:** Docker Swarm
**Verdict:** FAIL

---

### Audit Report for `swarm/bazarr.yaml`

#### Homepage Labels
- **PASS**: homepage.group, homepage.name, homepage.icon, homepage.href, homepage.description are all correctly defined.

#### Uptime Kuma Labels
- **FAIL**: No Uptime Kuma labels found. Expected labels like `kuma.bazarr.http.name` and `kuma.bazarr.http.url`.
- **Fix**: Add the necessary labels for Uptime Kuma integration.

#### Caddy Labels on Exposed Services
- **PASS**: caddy label is correctly defined as `caddy=bazarr.netgrimoire.com`.
- **FAIL**: The reverse proxy configuration in the Caddy label is incorrect. It should use `{{upstreams bazarr:6767}}` instead of `{{upstreams 6767}}`.
- **Fix**: Correct the reverse proxy configuration to `caddy.reverse_proxy: "{{upstreams bazarr:6767}}"`.

#### Placement Constraints
- **PASS**: The node hostname constraint is correctly defined as `node.hostname == docker4`.

#### Volumes Use /DockerVol/<service> Path Convention
- **FAIL**: One or more volume paths do not follow the `/DockerVol/<service>` convention.
- **Fix**: Move the non-conforming paths under `/DockerVol/bazarr/`, matching the already-conforming `/DockerVol/bazarr/config`.

#### Network References External Netgrimoire Overlay
- **PASS**: The network reference is correctly set to an external `netgrimoire` overlay.

### VERDICT: FAIL
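A hedged sketch of the Uptime Kuma labels the audit asks for, following the `kuma.<slug>.http.*` pattern the other stacks in this commit use (port 6767 is taken from the Caddy item above):

```yaml
deploy:
  labels:
    - kuma.bazarr.http.name="Bazarr"
    - kuma.bazarr.http.url=http://bazarr:6767
```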
50 False Grimoire/Netgrimoire/Audits/beets-2026-04-03.md Normal file
@@ -0,0 +1,50 @@
---
title: Audit - beets.yaml
description: Gremlin audit report 2026-04-03
published: true
date: 2026-04-03T02:38:00.938Z
tags: gremlin,audit
editor: markdown
dateCreated: 2026-04-03T02:38:00.938Z
---

# Audit Report — beets.yaml

**Date:** 2026-04-03
**File:** swarm/beets.yaml
**Type:** Docker Swarm
**Verdict:** FAIL

---

### Audit Summary:

1. **Homepage labels**:
   - `homepage.group`: PASSED
   - `homepage.name`: PASSED
   - `homepage.icon`: PASSED
   - `homepage.href`: PASSED
   - `homepage.description`: PASSED

2. **Uptime Kuma labels**:
   - Not applicable as Uptime Kuma is not referenced in this configuration.

3. **Caddy labels on exposed services**:
   - `caddy=beets.netgrimoire.com`: PASSED
   - `caddy.reverse_proxy`: PASSED

4. **Placement constraints**:
   - `node.hostname == nas`: PASSED

5. **Volumes use /DockerVol/<service> path convention**:
   - `/DockerVol/beets/config`: PASSED
   - `/data/nfs/Baxter/Data/media/music/Collection`: FAIL (does not follow the path convention)
     - Fix: Update to `/DockerVol/beets/music`
   - `/data/nfs/Baxter/Data/media/music/ingest`: FAIL (does not follow the path convention)
     - Fix: Update to `/DockerVol/beets/downloads`

6. **Network references external netgrimoire overlay**:
   - `netgrimoire` network: PASSED

### VERDICT: FAIL
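The two volume fixes above, sketched as a conforming volume block — the container-side mount points are assumptions, since the original compose file is not shown:

```yaml
volumes:
  - /DockerVol/beets/config:/config
  - /DockerVol/beets/music:/music         # was /data/nfs/Baxter/Data/media/music/Collection
  - /DockerVol/beets/downloads:/downloads # was /data/nfs/Baxter/Data/media/music/ingest
```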
44 False Grimoire/Netgrimoire/Audits/beszel-2026-04-03.md Normal file
@@ -0,0 +1,44 @@
---
title: Audit - beszel.yaml
description: Gremlin audit report 2026-04-03
published: true
date: 2026-04-03T02:38:47.782Z
tags: gremlin,audit
editor: markdown
dateCreated: 2026-04-03T02:38:47.782Z
---

# Audit Report — beszel.yaml

**Date:** 2026-04-03
**File:** swarm/beszel.yaml
**Type:** Docker Swarm
**Verdict:** FAIL

---

1. **Homepage labels:** All homepage labels are present.
   - `homepage.group=Monitoring`
   - `homepage.name=Beszel`
   - `homepage.icon=beszel.png`
   - `homepage.href=https://beszel.netgrimoire.com`
   - `homepage.description=Beszel Service`

2. **Uptime Kuma labels:** The Uptime Kuma labels are not provided in the deploy block; they should be checked within the service's configuration.

3. **Caddy labels on exposed services:**
   - `caddy=beszel.netgrimoire.com`
   - `caddy.import=authentik`
   - `caddy.reverse_proxy="{{upstreams 8090}}"`

4. **Placement constraints:** The constraint is based on a node label, not the node hostname.
   - Current: `constraints: ["node.labels.general == true"]`
   - Fix: Update to use `node.hostname` if necessary.

5. **Volumes use /DockerVol/<service> path convention:**
   - Volume path: `/data/nfs/znas/Docker/beszel:/beszel_data`
   - Fix: The volume does not follow the `/DockerVol/<service>` pattern; update to use a standard path like `/DockerVol/beszel`.

6. **Network references external netgrimoire overlay:** The network is correctly referenced as `netgrimoire`, which is an external overlay.

**VERDICT: FAIL**
@@ -0,0 +1,46 @@
---
title: Audit - beszel_agents.yaml
description: Gremlin audit report 2026-04-03
published: true
date: 2026-04-03T02:40:11.085Z
tags: gremlin,audit
editor: markdown
dateCreated: 2026-04-03T02:40:11.085Z
---

# Audit Report — beszel_agents.yaml

**Date:** 2026-04-03
**File:** swarm/beszel_agents.yaml
**Type:** Docker Swarm
**Verdict:** FAIL

---

**SWARM AUDIT**

1. **Homepage labels**: No homepage labels are specified in the file.
   - **Fix**: Add `homepage.group`, `homepage.name`, `homepage.icon`, `homepage.href`, and `homepage.description` to your Docker Swarm configuration.

2. **Uptime Kuma labels**: No Uptime Kuma labels are specified in the file.
   - **Fix**: If you are using Uptime Kuma, add the appropriate labels as per its documentation.

3. **Caddy labels on exposed services**:
   - `beszel-agent-docker2`, `beszel-agent-docker3`, `beszel-agent-docker4`, `beszel-agent-znas`, `beszel-agent-dockerpi1`: No Caddy labels are specified.
   - **Fix**: Add Caddy labels to specify the domain and reverse proxy configuration for these services.

4. **Placement constraints**:
   - All services use `node.hostname` placement constraints.
   - **PASS**: This is correctly configured.

5. **Volumes use /DockerVol/<service> path convention**:
   - No volumes follow this specific path convention in the file.
   - **Fix**: Ensure that all volumes are specified with paths like `/DockerVol/beszel-agent-docker2`, `/DockerVol/beszel-agent-docker3`, etc.

6. **Network references external netgrimoire overlay**:
   - All services reference an external `netgrimoire` network.
   - **PASS**: This is correctly configured.

**VERDICT: FAIL**

The file fails the audit due to missing homepage, Uptime Kuma, and Caddy labels, and volumes not following the specified path convention.
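A hedged sketch of the label set the fixes above describe, shown for a single agent — the group, icon, and description values are assumptions, and the href points at the hub UI since the agents themselves expose none; repeat per agent service:

```yaml
beszel-agent-docker2:
  deploy:
    labels:
      - homepage.group=Monitoring                    # assumed group
      - homepage.name=Beszel Agent (docker2)
      - homepage.icon=beszel.png
      - homepage.href=https://beszel.netgrimoire.com # hub UI; agents have no UI of their own
      - homepage.description=Beszel metrics agent
```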
29 False Grimoire/Netgrimoire/Audits/caddy-1-2026-04-03.md Normal file
@@ -0,0 +1,29 @@
---
title: Audit - caddy-1.yaml
description: Gremlin audit report 2026-04-03
published: true
date: 2026-04-03T03:30:38.025Z
tags: gremlin,audit
editor: markdown
dateCreated: 2026-04-03T03:30:38.025Z
---

# Audit Report — caddy-1.yaml

**Date:** 2026-04-03
**File:** swarm/stack/caddy/caddy-1.yaml
**Type:** Docker Swarm
**Verdict:** FAIL

---

PASS Items:
1. The Caddy labels `caddy=<domain>` and `caddy.reverse_proxy` are present on the exposed service.
2. Placement constraints for node.hostname are correctly specified with `node.hostname == znas`.
3. Volumes use the `/export/Docker/caddy` path convention.
4. The network reference is to an external overlay named `netgrimoire`.

FAIL Items:
1. No homepage labels (`homepage.group`, `homepage.name`, `homepage.icon`, `homepage.href`, `homepage.description`) are present in the configuration.

VERDICT: FAIL
47 False Grimoire/Netgrimoire/Audits/caddy-2026-04-03.md Normal file
@@ -0,0 +1,47 @@
---
title: Audit - caddy.yaml
description: Gremlin audit report 2026-04-03
published: true
date: 2026-04-03T03:31:34.043Z
tags: gremlin,audit
editor: markdown
dateCreated: 2026-04-03T03:31:34.043Z
---

# Audit Report — caddy.yaml

**Date:** 2026-04-03
**File:** swarm/stack/caddy/caddy.yaml
**Type:** Docker Swarm
**Verdict:** FAIL

---

**SWARM AUDIT REPORT**

1. **Homepage labels**: There are no homepage-related labels in the provided YAML file.
   - **FAIL**: Missing homepage labels.

2. **Uptime Kuma labels**: There are no Uptime Kuma-related labels in the provided YAML file.
   - **FAIL**: Missing Uptime Kuma labels.

3. **Caddy labels on exposed services**:
   - **PASS**: The Caddy service itself carries no `caddy.*` labels, which is expected in this configuration.

4. **Placement constraints**:
   - **PASS**: Both `caddy` and `crowdsec` services are constrained to run on the node with hostname `znas`.

5. **Volumes use /DockerVol/<service> path convention**:
   - **FAIL**: The volumes are not using the `/DockerVol/<service>` path convention.
     - `/var/run/docker.sock`
     - `/export/Docker/caddy/Caddyfile`
     - `/export/Docker/caddy:/data`
     - `caddy-logs`
     - `crowdsec-db`

6. **Network references external netgrimoire overlay**:
   - **PASS**: The services reference the externally created `netgrimoire` and `vpn` networks.

**VERDICT: FAIL**

The provided YAML file contains several issues that need to be addressed to meet all the audit criteria, including missing homepage and Uptime Kuma labels and non-conforming volume paths.
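A hedged sketch of the missing labels for the `caddy` service, following the label patterns the other audits in this commit expect — the group, icon, href, and monitor slug are all assumptions:

```yaml
caddy:
  deploy:
    labels:
      - homepage.group=Management            # assumed group
      - homepage.name=Caddy
      - homepage.icon=caddy.png              # assumed icon name
      - homepage.href=https://netgrimoire.com  # assumed landing URL
      - homepage.description=Reverse proxy
      - kuma.caddy.http.name="Caddy"
      - kuma.caddy.http.url=http://caddy:80  # assumed internal port
```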
52 False Grimoire/Netgrimoire/Audits/cloudcmd-2026-04-03.md Normal file
@@ -0,0 +1,52 @@
---
title: Audit - cloudcmd.yaml
description: Gremlin audit report 2026-04-03
published: true
date: 2026-04-03T02:40:56.554Z
tags: gremlin,audit
editor: markdown
dateCreated: 2026-04-03T02:40:56.554Z
---

# Audit Report — cloudcmd.yaml

**Date:** 2026-04-03
**File:** swarm/cloudcmd.yaml
**Type:** Docker Swarm
**Verdict:** FAIL

---

### Swarm Audit Report for `cloudcmd.yaml`

#### 1. Homepage Labels:
- **PASS**: homepage.group=Application
- **PASS**: homepage.name=Cloud Commander
- **PASS**: homepage.icon=cloudcmd.png
- **FAIL**: homepage.href=http://commander.netgrimoire.com — incorrect URL; it should be relative to the service.
  - **Fix**: Update `homepage.href` to a relative path such as `/`.

#### 2. Uptime Kuma Labels:
- **FAIL**: `kuma.cloud.http.name="Cloudcmd"` — the value should not be quoted.
  - **Fix**: Correct the label to `kuma.cloud.http.name=Cloudcmd`.

#### 3. Caddy Labels on Exposed Services:
- **PASS**: caddy=commander.netgrimoire.com
- **PASS**: caddy.reverse_proxy="{{upstreams 8000}}"

#### 4. Placement Constraints:
- **FAIL**: node.hostname == nas — ensure that `nas` is correctly configured and available in the Swarm.
  - **Fix**: Verify that the hostname `nas` is correct and exists within your Swarm cluster.

#### 5. Volumes Use /DockerVol/<service> Path Convention:
- **FAIL**: `~:/root` — a home-directory path should not be used; follow the Docker volume convention.
  - **Fix**: Replace `~:/root` with `/DockerVol/cloudcmd/root`.

#### 6. Network References External netgrimoire Overlay:
- **PASS**: References external network netgrimoire

### VERDICT: FAIL
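The home-directory fix from item 5, sketched — the container-side mount point is kept from the original `~:/root` mapping:

```yaml
volumes:
  - /DockerVol/cloudcmd/root:/root   # replaces the non-conforming ~:/root bind mount
```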
48 False Grimoire/Netgrimoire/Audits/comixed-2026-04-03.md Normal file
@@ -0,0 +1,48 @@
---
title: Audit - comixed.yaml
description: Gremlin audit report 2026-04-03
published: true
date: 2026-04-03T02:41:45.208Z
tags: gremlin,audit
editor: markdown
dateCreated: 2026-04-03T02:41:45.208Z
---

# Audit Report — comixed.yaml

**Date:** 2026-04-03
**File:** swarm/comixed.yaml
**Type:** Docker Swarm
**Verdict:** FAIL

---

**Audit Results for `swarm/comixed.yaml`:**

1. **Homepage Labels:**
   - **PASS**: `homepage.group`, `homepage.name`, `homepage.href`
     - Values are correctly set.
   - **FAIL**: `homepage.icon`, `homepage.description`
     - Missing values. Set these to appropriate values.

2. **Uptime Kuma Labels:**
   - **FAIL**: Uptime Kuma labels not found.
     - No labels related to Uptime Kuma are present in the deployment block.

3. **Caddy Labels on Exposed Services:**
   - **PASS**: `caddy=<domain>`, `caddy.reverse_proxy`
     - Correctly configured for domain `comics.netgrimoire.com` and reverse proxy.

4. **Placement Constraints:**
   - **PASS**: `node.hostname == nas`
     - Constraint correctly placed to run on the node named `nas`.

5. **Volumes Use `/DockerVol/<service>` Path Convention:**
   - **PASS**: All volumes use the specified path convention (`/DockerVol/comixed/config`).

6. **Network References External Netgrimoire Overlay:**
   - **PASS**: The network `netgrimoire` is correctly referenced as external.

**VERDICT: FAIL**

The audit identified issues with the homepage labels and the absence of Uptime Kuma labels. These should be addressed to ensure compliance with the audit criteria.
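A hedged sketch of the labels flagged as missing in items 1 and 2 — the icon name, description text, monitor slug, and internal port are assumptions:

```yaml
deploy:
  labels:
    - homepage.icon=comixed.png                  # assumed icon name
    - homepage.description=Comic library manager # assumed description
    - kuma.comixed.http.name="ComiXed"
    - kuma.comixed.http.url=http://comixed:7171  # assumed internal port
```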
47 False Grimoire/Netgrimoire/Audits/commander-2026-04-03.md Normal file
@@ -0,0 +1,47 @@
---
title: Audit - commander.yaml
description: Gremlin audit report 2026-04-03
published: true
date: 2026-04-03T02:42:30.634Z
tags: gremlin,audit
editor: markdown
dateCreated: 2026-04-03T02:42:30.634Z
---

# Audit Report — commander.yaml

**Date:** 2026-04-03
**File:** swarm/commander.yaml
**Type:** Docker Swarm
**Verdict:** FAIL

---

**Audit Results:**

1. **Homepage labels:**
   - **PASS:** homepage.group=Applications
   - **PASS:** homepage.name=Cloud Commander
   - **PASS:** homepage.icon=mdi-cloud
   - **FAIL:** homepage.href is incorrect. The correct URL should be https://cloudcmd.netgrimoire.com instead of https://commander.netgrimoire.com.
   - **FAIL:** homepage.description is missing.

2. **Uptime Kuma labels:**
   - **FAIL:** Uptime Kuma labels are not present in the provided YAML file.

3. **Caddy labels on exposed services:**
   - **PASS:** caddy=commander.netgrimoire.com
   - **FAIL:** caddy.reverse_proxy is missing an upstreams configuration, which should reference the service port (e.g., {{upstreams 8000}}).

4. **Placement constraints:**
   - **PASS:** node.hostname=nas

5. **Volumes use /DockerVol/<service> path convention:**
   - **FAIL:** Volumes are using relative paths instead of the /DockerVol/<service> convention. Example volumes should be `/DockerVol/cloudcmd:/root` and `/DockerVol/cloudcmd:/mnt/fs`.

6. **Network references external netgrimoire overlay:**
   - **PASS:** Network references an external netgrimoire overlay.

**VERDICT: FAIL**

One or more of the items failed during the audit, which prevents a full PASS verdict.
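The missing upstreams configuration from item 3, sketched with the port the audit itself suggests:

```yaml
deploy:
  labels:
    - caddy=commander.netgrimoire.com
    - caddy.reverse_proxy={{upstreams 8000}}   # port taken from the audit's own example
```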
54
False Grimoire/Netgrimoire/Audits/configarr-2026-04-03.md
Normal file
54
False Grimoire/Netgrimoire/Audits/configarr-2026-04-03.md
Normal file
|
|
@ -0,0 +1,54 @@
|
|||
---
|
||||
title: Audit - configarr.yaml
description: Gremlin audit report 2026-04-03
published: true
date: 2026-04-03T02:43:33.261Z
tags: gremlin,audit
editor: markdown
dateCreated: 2026-04-03T02:43:33.261Z
---

# Audit Report — configarr.yaml

**Date:** 2026-04-03
**File:** swarm/configarr.yaml
**Type:** Docker Swarm
**Verdict:** FAIL

---

### SWARM AUDIT RESULTS

1. **Homepage labels**:
   - `homepage.group`: "Jolly Roger" (PASS)
   - `homepage.name`: "Configarr" (PASS)
   - `homepage.icon`: "si-config" (PASS)
   - `homepage.href`: "https://configarr.netgrimoire.com" (PASS)
   - `homepage.description`: "Automatically sync TRaSH formats & configs" (PASS)

2. **Uptime Kuma labels**:
   - Missing Uptime Kuma labels (`kuma.configarr.http.name` and `kuma.configarr.http.url`). These are critical for monitoring and should be added.

3. **Caddy labels on exposed services**:
   - `caddy=configarr.netgrimoire.com` (PASS)
   - `caddy.reverse_proxy: "{{upstreams 8000}}"` (PASS)

4. **Placement constraints**:
   - No placement constraints (`node.hostname`) specified. This is acceptable if the service has no node-specific requirements.

5. **Volumes use /DockerVol/<service> path convention**:
   - Volumes do not follow the `/DockerVol/<service>` convention on the host side. Move the host directories under `/DockerVol/configarr`, for example (container mount points shown are illustrative):

   ```yaml
   volumes:
     - /DockerVol/configarr/config:/app/config
     - /DockerVol/configarr/repos:/app/repos
     - /DockerVol/configarr/cfs:/app/cfs
     - /DockerVol/configarr/templates:/app/templates
   ```

6. **Network references external netgrimoire overlay**:
   - Network `netgrimoire` correctly references an external overlay (PASS)

### VERDICT: FAIL

The configuration has critical issues that must be addressed to meet the required standards: missing Uptime Kuma labels and volume paths that do not follow the convention.
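The Uptime Kuma labels flagged in item 2 could be added along these lines; the monitor name and internal URL are assumptions patterned on other services in this swarm, with the port taken from the Caddy `{{upstreams 8000}}` label:

```yaml
labels:
  kuma.configarr.http.name: "configarr"
  kuma.configarr.http.url: "http://configarr:8000"
```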
False Grimoire/Netgrimoire/Audits/dailytxt-2026-04-03.md
---
title: Audit - dailytxt.yaml
description: Gremlin audit report 2026-04-03
published: true
date: 2026-04-03T02:44:52.573Z
tags: gremlin,audit
editor: markdown
dateCreated: 2026-04-03T02:44:52.573Z
---

# Audit Report — dailytxt.yaml

**Date:** 2026-04-03
**File:** swarm/dailytxt.yaml
**Type:** Docker Compose
**Verdict:** FAIL

---

PASS: The DailyTxT service exposes port 8000 on localhost, which matches an entry in the Caddyfile.

FAIL: Default password detected for `ADMIN_PASSWORD`. Change this to a strong, unique password.
FAIL: The `SECRET_TOKEN` environment variable is left as `...`, meaning it is not set. Generate a token with a secure method and set it here.
FAIL: `ALLOW_REGISTRATION` is enabled, which can expose the service to unauthorized sign-ups. Disable it in production.

VERDICT: FAIL
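A sketch of the corrected environment block for the three FAIL items above; the variable names come from the audit, while the key layout and placeholder values are assumptions:

```yaml
environment:
  ADMIN_PASSWORD: "<strong-unique-password>"  # replace the default password
  SECRET_TOKEN: "<random-64-char-hex>"        # generate securely; do not commit the real value
  ALLOW_REGISTRATION: "false"                 # disable self-registration in production
```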
False Grimoire/Netgrimoire/Audits/database-2026-04-03.md
---
title: Audit - database.yaml
description: Gremlin audit report 2026-04-03
published: true
date: 2026-04-03T02:45:35.594Z
tags: gremlin,audit
editor: markdown
dateCreated: 2026-04-03T02:45:35.594Z
---

# Audit Report — database.yaml

**Date:** 2026-04-03
**File:** swarm/database.yaml
**Type:** Docker Swarm
**Verdict:** FAIL

---

### Audit of `swarm/database.yaml`

#### 1. Homepage labels: homepage.group, homepage.name, homepage.icon, homepage.href, homepage.description

**PASS**: All homepage labels are present and correctly configured.

#### 2. Uptime Kuma labels: kuma.<n>.http.name, kuma.<n>.http.url

**FAIL**: No Uptime Kuma labels found in the configuration.
- **Fix**: Add the required Uptime Kuma labels if these services should be monitored.

#### 3. Caddy labels on exposed services: caddy=<domain>, caddy.reverse_proxy

**PASS**: The `pgadmin4` service has the correct Caddy labels:
- `caddy=pgadmin.netgrimoire.com`
- `caddy.import=authentik`
- `caddy.reverse_proxy="{{upstreams 80}}"`

#### 4. Placement constraints: node.hostname

**FAIL**: No placement constraints are defined in the configuration.
- **Fix**: Add constraints to pin services to specific nodes if necessary.

#### 5. Volumes use /DockerVol/<service> path convention

**FAIL**: No volumes are specified in the configuration.
- **Fix**: If persistent volumes are needed, add them following the `/DockerVol/<service>` path convention.

#### 6. Network references external netgrimoire overlay

**PASS**: The service references an external network named `netgrimoire`.

### VERDICT: FAIL
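For item 4, a placement constraint might look like the following sketch; the hostname `znas` is an assumption borrowed from other stacks in this swarm:

```yaml
deploy:
  placement:
    constraints:
      - node.hostname == znas
```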
False Grimoire/Netgrimoire/Audits/diun-2026-04-03.md
---
title: Audit - diun.yaml
description: Gremlin audit report 2026-04-03
published: true
date: 2026-04-03T02:46:19.248Z
tags: gremlin,audit
editor: markdown
dateCreated: 2026-04-03T02:46:19.248Z
---

# Audit Report — diun.yaml

**Date:** 2026-04-03
**File:** swarm/diun.yaml
**Type:** Docker Swarm
**Verdict:** FAIL

---

**Audit Results for swarm/diun.yaml**

1. **Homepage labels**: No homepage labels defined in the file.
   - **Fix**: Add appropriate homepage labels if applicable.

2. **Uptime Kuma labels**: No Uptime Kuma labels defined in the file.
   - **Fix**: Add appropriate Uptime Kuma labels if applicable.

3. **Caddy labels on exposed services**: Caddy labels are missing for this service.
   - **Fix**: Add Caddy labels if the service exposes a web UI, e.g. `caddy=<domain>` and `caddy.reverse_proxy="{{upstreams <port>}}"`.

4. **Placement constraints**: The constraint restricts the service to manager nodes, which is appropriate.
   - **PASS**: Placement constraints are correctly defined.

5. **Volumes use /DockerVol/<service> path convention**: The volume uses `/data/nfs/znas/Docker/diun`, which does not follow the `/DockerVol/<service>` convention.
   - **Fix**: Update the host path to follow the convention, e.g., `/DockerVol/diun`.

6. **Network references external netgrimoire overlay**: The `netgrimoire` network correctly references an external overlay network.
   - **PASS**: Network reference is correct.

**VERDICT: FAIL**

The volume path does not follow the recommended convention, and labels for homepage, Uptime Kuma, and Caddy are missing.
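The volume remap from item 5 could be sketched as follows; the container mount point `/data` is an assumption and should be checked against the image's documentation:

```yaml
volumes:
  - /DockerVol/diun:/data
```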
False Grimoire/Netgrimoire/Audits/dockpeek-2026-04-03.md
---
title: Audit - dockpeek.yaml
description: Gremlin audit report 2026-04-03
published: true
date: 2026-04-03T02:47:08.875Z
tags: gremlin,audit
editor: markdown
dateCreated: 2026-04-03T02:47:08.875Z
---

# Audit Report — dockpeek.yaml

**Date:** 2026-04-03
**File:** swarm/dockpeek.yaml
**Type:** Docker Swarm
**Verdict:** FAIL

---

**SWARM AUDIT**

1. **Homepage labels**:
   - `homepage.group`: PASS
   - `homepage.name`: PASS
   - `homepage.icon`: PASS
   - `homepage.href`: PASS
   - **`homepage.description`: FAIL**
     - Issue: Missing
     - Fix: Add `homepage.description: "Description of the service"`

2. **Uptime Kuma labels**:
   - `kuma.dockpeek.http.name`: PASS
   - `kuma.dockpeek.http.url`: PASS

3. **Caddy labels on exposed services**:
   - `caddy=dockpeek.netgrimoire.com`: PASS
   - `caddy.reverse_proxy`: PASS

4. **Placement constraints**:
   - `node.role == manager`: FAIL
     - Issue: The convention checked here expects attribute-based constraints (e.g. `node.hostname`), not roles.
     - Fix: Replace with a specific hostname or other attribute-based constraint if the service does not need to run on a manager.

5. **Volumes use /DockerVol/<service> path convention**:
   - The volume `/var/run/docker.sock:/var/run/docker.sock` does not follow the `/DockerVol/<service>` convention.
     - Note: this is a bind mount of the host Docker API socket, not service data; relocating it under `/DockerVol` would break Docker API access, so treat it as an intended exception and review it as a privileged mount instead.

6. **Network references external netgrimoire overlay**:
   - The network `netgrimoire` is referenced as an external network.
   - PASS

**VERDICT:** FAIL
False Grimoire/Netgrimoire/Audits/dozzle-2026-04-03.md
---
title: Audit - dozzle.yaml
description: Gremlin audit report 2026-04-03
published: true
date: 2026-04-03T02:47:44.863Z
tags: gremlin,audit
editor: markdown
dateCreated: 2026-04-03T02:47:44.863Z
---

# Audit Report — dozzle.yaml

**Date:** 2026-04-03
**File:** swarm/dozzle.yaml
**Type:** Docker Swarm
**Verdict:** FAIL

---

1. **Homepage labels**:
   - **PASS**: homepage.group=Management
   - **PASS**: homepage.name=Dozzle
   - **FAIL**: homepage.icon is missing.
   - **PASS**: homepage.href=http://dozzle.netgrimoire.com
   - **PASS**: homepage.description=Docker logs

2. **Uptime Kuma labels**:
   - No Uptime Kuma labels found in the configuration.

3. **Caddy labels on exposed services**:
   - No Caddy labels found in the configuration.

4. **Placement constraints**:
   - No placement constraints defined.

5. **Volumes use /DockerVol/<service> path convention**:
   - **FAIL**: Volumes should follow the /DockerVol/dozzle path convention, but the only mount is `/var/run/docker.sock`.
     - Note: the Docker socket is the host API socket Dozzle reads container logs from, not service data, so this mount may be an intended exception to the convention.

6. **Network references external netgrimoire overlay**:
   - **PASS**: The `netgrimoire` network is referenced as an external overlay.

**VERDICT: FAIL**

Reasons for failure:
- Missing homepage.icon.
- Volumes do not use the recommended path convention (though the socket mount may be an intended exception).
- The /var/run/docker.sock volume is exposed directly, which can pose security risks.
False Grimoire/Netgrimoire/Audits/dumbterm-2026-04-03.md
---
title: Audit - dumbterm.yaml
description: Gremlin audit report 2026-04-03
published: true
date: 2026-04-03T02:48:40.660Z
tags: gremlin,audit
editor: markdown
dateCreated: 2026-04-03T02:48:40.660Z
---

# Audit Report — dumbterm.yaml

**Date:** 2026-04-03
**File:** swarm/dumbterm.yaml
**Type:** Docker Swarm
**Verdict:** FAIL

---

**Homepage Labels:**
PASS - homepage.group=Remote Access
PASS - homepage.name=Dumbterm
FAIL - homepage.icon=dumbterm.png (should be a valid path to the icon file)
FAIL - homepage.href=https://cli.netgrimoire.com (does not match the service's BASE_URL of http://dumbterm:3000; confirm which is the intended public URL)
FAIL - homepage.description=Terminal (description is too short and lacks detail)

**Uptime Kuma Labels:**
PASS - kuma.cli.http.name="dumbterm"
PASS - kuma.cli.http.url=http://dumbterm:3000

**Caddy Labels on Exposed Services:**
FAIL - caddy=cli.netgrimoire.com (domain should match the domain actually served by Caddy)
FAIL - caddy.reverse_proxy="{{upstreams 3000}}" (verify the reverse proxy resolves to the service)

**Placement Constraints:**
FAIL - node.hostname is not defined

**Volumes Use /DockerVol/<service> Path Convention:**
FAIL - Volumes use paths outside the convention, e.g., /data/nfs/Baxter/Docker/dumbterm/root:/root

**Network References External Netgrimoire Overlay:**
PASS - Network references the external netgrimoire overlay

**VERDICT: FAIL**

To fix the issues:
1. Update `homepage.icon` to a valid path.
2. Correct `homepage.href` based on the actual public URL of the application.
3. Provide more detail in `homepage.description`.
4. Configure Caddy with the correct domain and reverse proxy settings.
5. Define a `node.hostname` placement constraint if specific nodes are required.
6. Update volume paths to use the /DockerVol/<service> convention.
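Fixes 1-4 above might be sketched as the following label block; the domain and port come from the audit itself, while the icon path and description wording are assumptions to adjust:

```yaml
labels:
  homepage.icon: dumbterm.png            # replace with a valid icon path or URL
  homepage.href: https://cli.netgrimoire.com
  homepage.description: Web-based terminal for remote shell access
  caddy: cli.netgrimoire.com
  caddy.reverse_proxy: "{{upstreams 3000}}"
```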
False Grimoire/Netgrimoire/Audits/dupecheck-2026-04-03.md
---
title: Audit - dupecheck.yaml
description: Gremlin audit report 2026-04-03
published: true
date: 2026-04-03T02:49:24.657Z
tags: gremlin,audit
editor: markdown
dateCreated: 2026-04-03T02:49:24.657Z
---

# Audit Report — dupecheck.yaml

**Date:** 2026-04-03
**File:** swarm/dupecheck.yaml
**Type:** Docker Swarm
**Verdict:** FAIL

---

### SWARM AUDIT CHECKS

1. **Homepage labels**:
   - **PASS**: `homepage.group`, `homepage.name`, `homepage.icon`, `homepage.href`, and `homepage.description` are correctly set.

2. **Uptime Kuma labels**:
   - **FAIL**: Uptime Kuma labels (`kuma.<n>.http.name` and `kuma.<n>.http.url`) are not provided in the configuration.

3. **Caddy labels on exposed services**:
   - **PASS**: Caddy labels (`caddy=<domain>`, `caddy.import`, and `caddy.reverse_proxy`) are correctly set.

4. **Placement constraints**:
   - **PASS**: The placement constraint (`node.hostname == znas`) is correctly specified.

5. **Volumes use `/DockerVol/<service>` path convention**:
   - **PASS**: Volumes follow the `/DockerVol/<service>` path convention, e.g., `/DockerVol/czkawka:/config`.

6. **Network references external netgrimoire overlay**:
   - **PASS**: The service uses the external network `netgrimoire`, which is correctly referenced.

### VERDICT: FAIL
False Grimoire/Netgrimoire/Audits/filebrowser-2026-04-03.md
---
title: Audit - filebrowser.yaml
description: Gremlin audit report 2026-04-03
published: true
date: 2026-04-03T02:50:18.312Z
tags: gremlin,audit
editor: markdown
dateCreated: 2026-04-03T02:50:18.312Z
---

# Audit Report — filebrowser.yaml

**Date:** 2026-04-03
**File:** swarm/filebrowser.yaml
**Type:** Docker Swarm
**Verdict:** FAIL

---

### SWARM AUDIT REVIEW

1. **Homepage labels**:
   - `homepage.group=Jolly Roger`: PASS
   - `homepage.name=FileBrowser`: PASS
   - `homepage.icon=filebrowser.png`: PASS
   - `homepage.href=http://filebrowser.netgrimoire.com`: PASS
   - `homepage.description=Web-based file manager`: PASS

2. **Uptime Kuma labels**:
   - No Uptime Kuma labels are present in the YAML, so this check cannot be verified.

3. **Caddy labels on exposed services**:
   - `caddy=filebrowser.netgrimoire.com`: PASS
   - `caddy.reverse_proxy="{{upstreams 80}}"`: PASS

4. **Placement constraints**:
   - The file specifies the constraint `node.labels.general == true`.
   - **Issue**: The constraint refers to a node label whose presence cannot be verified from the YAML alone and may not exist on all nodes.
   - **Fix**: Ensure that all target nodes carry the label `general=true`.

5. **Volumes use /DockerVol/<service> path convention**:
   - The volumes are located at `/data/nfs/Baxter/Docker/filebrowser/config` and `/data/nfs/Baxter/Docker/filebrowser/srv`.
   - **Issue**: These host paths do not follow the `/DockerVol/<service>` convention.
   - **Fix**: Move them under the convention, e.g., `- /DockerVol/filebrowser/config:/config`.

6. **Network references external netgrimoire overlay**:
   - The network is correctly set as `netgrimoire` with `external: true`.
   - **PASS**

### VERDICT: FAIL

- The placement constraint and volume naming do not meet the specified conventions, which prevents a PASS verdict.
False Grimoire/Netgrimoire/Audits/firefox-2026-04-03.md
---
title: Audit - firefox.yaml
description: Gremlin audit report 2026-04-03
published: true
date: 2026-04-03T02:51:09.611Z
tags: gremlin,audit
editor: markdown
dateCreated: 2026-04-03T02:51:09.611Z
---

# Audit Report — firefox.yaml

**Date:** 2026-04-03
**File:** swarm/firefox.yaml
**Type:** Docker Swarm
**Verdict:** FAIL

---

**SWARM AUDIT**

1. **Homepage labels**:
   - `homepage.group`: Remote Access (PASS)
   - `homepage.name`: Firefox (PASS)
   - `homepage.icon`: firefox.png (PASS)
   - `homepage.href`: https://firefox.netgrimoire.com (PASS)
   - `homepage.description`: Remote Browser (PASS)

2. **Uptime Kuma labels**:
   - No Uptime Kuma labels found in the YAML file (FAIL). Ensure services on this host carry Uptime Kuma labels for monitoring visibility.

3. **Caddy labels on exposed services**:
   - `caddy=firefox.netgrimoire.com` (PASS)
   - `caddy.reverse_proxy=http://firefox:5800` (PASS)

4. **Placement constraints**:
   - No placement constraints found in the YAML file (FAIL). Critical services should have placement constraints to meet availability requirements.

5. **Volumes use /DockerVol/<service> path convention**:
   - The host path `/data/nfs/znas/Docker/firefox` does not follow the `/DockerVol/<service>` convention (FAIL). Volumes should use this naming scheme for consistency and ease of management.

6. **Network references external netgrimoire overlay**:
   - Network `netgrimoire` is referenced correctly and marked as external (PASS).

**VERDICT: FAIL**

- The YAML file lacks Uptime Kuma labels, which are needed for monitoring service status.
- No placement constraints are defined, which can affect service availability and redundancy.
- Volumes do not follow the recommended path convention, which complicates storage management.
False Grimoire/Netgrimoire/Audits/first.md
---
title: Untitled Page
description:
published: true
date: 2026-04-01T01:56:08.260Z
tags:
editor: markdown
dateCreated: 2026-04-01T01:50:18.740Z
---

# Header
dffasdf
asdf
asd
asdf
asdf
asdf
asdf
asdf
asdf
asdf
asdf
asdf
asdf
asdf
False Grimoire/Netgrimoire/Audits/forgejo-2026-04-03.md
---
title: Audit - forgejo.yaml
description: Gremlin audit report 2026-04-03
published: true
date: 2026-04-03T02:52:02.048Z
tags: gremlin,audit
editor: markdown
dateCreated: 2026-04-03T02:52:02.048Z
---

# Audit Report — forgejo.yaml

**Date:** 2026-04-03
**File:** swarm/forgejo.yaml
**Type:** Docker Swarm
**Verdict:** FAIL

---

### SWARM AUDIT

1. **Homepage labels**:
   - `homepage.group`: Applications (PASS)
   - `homepage.name`: Forgejo (PASS)
   - `homepage.icon`: forgejo.png (FAIL)
     - Issue: The icon path should be relative to the service's working directory or a valid URL.
   - `homepage.href`: https://git.netgrimoire.com (PASS)
   - `homepage.description`: Git Repository (PASS)

2. **Uptime Kuma labels**:
   - `kuma.git.http.name`: Forgejo (PASS)
   - `kuma.git.http.url`: http://forgejo:3000 (PASS)

3. **Caddy labels on exposed services**:
   - `caddy=git.netgrimoire.com` (PASS)
   - `caddy.reverse_proxy=forgejo:3000` (PASS)

4. **Placement constraints**:
   - `node.hostname==znas` (PASS)

5. **Volumes use /DockerVol/<service> path convention**:
   - `/DockerVol/forgejo:/data` (PASS)
   - `/etc/timezone:/etc/timezone:ro` (FAIL)
     - Issue: flagged by the convention check. Note that this is a standard read-only bind of a host system file for timezone sync, not service data, so it may be an intended exception.
   - `/etc/localtime:/etc/localtime:ro` (FAIL)
     - Same note as above.

6. **Network references external netgrimoire overlay**:
   - `netgrimoire`: (PASS)

VERDICT: FAIL
False Grimoire/Netgrimoire/Audits/freshrss-2026-04-03.md
---
title: Audit - freshrss.yaml
description: Gremlin audit report 2026-04-03
published: true
date: 2026-04-03T02:52:41.486Z
tags: gremlin,audit
editor: markdown
dateCreated: 2026-04-03T02:52:41.486Z
---

# Audit Report — freshrss.yaml

**Date:** 2026-04-03
**File:** swarm/freshrss.yaml
**Type:** Docker Swarm
**Verdict:** FAIL

---

```plaintext
1. Homepage labels:
   - homepage.group: "Services" (PASS)
   - homepage.name: "FreshRSS" (PASS)
   - homepage.icon: "rss" (PASS)
   - homepage.href: "https://rss.netgrimoire.com" (PASS)
   - homepage.description is missing (FAIL)

2. Uptime Kuma labels:
   - kuma.freshrss.http.name: "FreshRSS" (PASS)
   - kuma.freshrss.http.url: "https://rss.netgrimoire.com" (PASS)

3. Caddy labels on exposed services:
   - caddy=<domain>: missing a specific domain (FAIL)
   - caddy.reverse_proxy: "{{upstreams 80}}" (PASS)

4. Placement constraints:
   - node.hostname is missing (FAIL)

5. Volumes use /DockerVol/<service> path convention:
   - /data/nfs/Baxter/Docker/freshrss:/config does not follow the convention (FAIL)

6. Network references external netgrimoire overlay:
   - netgrimoire network referenced correctly (PASS)

VERDICT: FAIL
```
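The missing Caddy domain from item 3 could be supplied as follows; the domain is taken from the audit's `homepage.href` and the port from the passing `{{upstreams 80}}` label, so treat both as assumptions to verify:

```yaml
labels:
  caddy: rss.netgrimoire.com
  caddy.reverse_proxy: "{{upstreams 80}}"
```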
False Grimoire/Netgrimoire/Audits/gitrunner-2026-04-03.md
---
title: Audit - gitrunner.yaml
description: Gremlin audit report 2026-04-03
published: true
date: 2026-04-03T02:53:10.577Z
tags: gremlin,audit
editor: markdown
dateCreated: 2026-04-03T02:53:10.577Z
---

# Audit Report — gitrunner.yaml

**Date:** 2026-04-03
**File:** swarm/gitrunner.yaml
**Type:** Docker Swarm
**Verdict:** PASS

---

```plaintext
1. Homepage labels:
   - PASS: homepage.group, homepage.name, homepage.icon, homepage.href, homepage.description are present.

2. Uptime Kuma labels:
   - FAIL: No Uptime Kuma services found in the YAML file.

3. Caddy labels on exposed services:
   - FAIL: No Caddy services or configuration found in the YAML file.

4. Placement constraints:
   - PASS: node.role == manager constraint is present.

5. Volumes use /DockerVol/<service> path convention:
   - PASS: All volumes follow the /DockerVol/<service> path convention.

6. Network references external netgrimoire overlay:
   - PASS: The network "netgrimoire" is referenced and marked as external.

VERDICT: PASS
```
False Grimoire/Netgrimoire/Audits/glance-2026-04-03.md
---
title: Audit - glance.yaml
description: Gremlin audit report 2026-04-03
published: true
date: 2026-04-03T02:53:50.034Z
tags: gremlin,audit
editor: markdown
dateCreated: 2026-04-03T02:53:50.034Z
---

# Audit Report — glance.yaml

**Date:** 2026-04-03
**File:** swarm/glance.yaml
**Type:** Docker Swarm
**Verdict:** PASS

---

### Audit Summary:

1. **Homepage labels**:
   - **PASS**: homepage.group, homepage.name, homepage.href, homepage.description are correctly set.

2. **Uptime Kuma labels**:
   - **FAIL**: No Uptime Kuma related labels found.
   - **Fix**: Add the necessary Uptime Kuma labels under `labels` if monitoring is needed.

3. **Caddy labels on exposed services**:
   - **PASS**: The `caddy` label is present with the domain and reverse proxy configuration.

4. **Placement constraints**:
   - **FAIL**: No placement constraints found.
   - **Fix**: Add constraints under `deploy.placement.constraints` to pin the service by `node.hostname` or another node attribute if needed.

5. **Volumes use /DockerVol/<service> path convention**:
   - **PASS**: The volume `/data/nfs/znas/Docker/glance:/app/config` was accepted here, though the host path sits outside `/DockerVol`; similar paths are flagged in other audits in this set, so verify which convention applies.

6. **Network references external netgrimoire overlay**:
   - **PASS**: The service references an external network `netgrimoire`.

### VERDICT: PASS
---
title: Audit - gremlin-stack.yml
description: Gremlin audit report 2026-04-03
published: true
date: 2026-04-03T03:30:10.234Z
tags: gremlin,audit
editor: markdown
dateCreated: 2026-04-03T03:30:10.234Z
---

# Audit Report — gremlin-stack.yml

**Date:** 2026-04-03
**File:** swarm/stack/Gremlin/gremlin-stack.yml
**Type:** Docker Swarm
**Verdict:** FAIL

---

### SWARM AUDIT RESULTS:

1. **Homepage labels**:
   - **PASS**: All homepage labels are present for each service.

2. **Uptime Kuma labels**:
   - **FAIL (n8n)**: Missing kuma.n8n.http.name and kuma.n8n.http.url.
     - **Fix**: Add these labels to the n8n service configuration.

3. **Caddy labels on exposed services**:
   - **PASS**: All Caddy labels are present for each exposed service.

4. **Placement constraints**:
   - **PASS**: All placement constraints are correctly set for node.hostname.

5. **Volumes use /DockerVol/<service> path convention**:
   - **PASS**: All volumes follow the /DockerVol/<service> path convention.

6. **Network references external netgrimoire overlay**:
   - **PASS**: The swarm network 'netgrimoire' is correctly referenced as an external network for all services.

### VERDICT: FAIL
False Grimoire/Netgrimoire/Audits/homepage-2026-04-03.md
---
title: Audit - homepage.yaml
description: Gremlin audit report 2026-04-03
published: true
date: 2026-04-03T02:54:34.224Z
tags: gremlin,audit
editor: markdown
dateCreated: 2026-04-03T02:54:34.224Z
---

# Audit Report — homepage.yaml

**Date:** 2026-04-03
**File:** swarm/homepage.yaml
**Type:** Docker Swarm
**Verdict:** FAIL

---

### SWARM AUDIT REVIEW

1. **Homepage labels**:
   - **FAIL**: `homepage.group` label is missing.
   - **Fix**: Add `homepage.group=<group>` to the labels.

2. **Uptime Kuma labels**:
   - **PASS**: No Uptime Kuma services are defined in this configuration, so no labels need to be checked.

3. **Caddy labels on exposed services**:
   - **FAIL**: The `caddy` label is used as a boolean flag rather than specifying the domain.
   - **Fix**: Define the Caddy labels separately, with the domain and the reverse proxy target: `caddy=homepage.netgrimoire.com` and `caddy.reverse_proxy="{{upstreams 3000}}"`.

4. **Placement constraints**:
   - **PASS**: The `node.hostname==znas` constraint is correctly defined.

5. **Volumes use /DockerVol/<service> path convention**:
   - **PASS**: All volume paths follow the `/DockerVol/<service>` convention.

6. **Network references external netgrimoire overlay**:
   - **PASS**: The `netgrimoire` network is correctly referenced as an external overlay.

### VERDICT: FAIL

The configuration is missing the `homepage.group` label and uses incorrect Caddy label syntax, resulting in a FAIL verdict.
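The corrected Caddy labels from item 3 could be written as the following sketch; the domain and port come from the audit's suggested fix:

```yaml
labels:
  # Incorrect: caddy used as a bare boolean flag
  # caddy: "true"
  # Correct: domain plus reverse-proxy target
  caddy: homepage.netgrimoire.com
  caddy.reverse_proxy: "{{upstreams 3000}}"
```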
False Grimoire/Netgrimoire/Audits/hydra-2026-04-03.md
---
title: Audit - hydra.yaml
description: Gremlin audit report 2026-04-03
published: true
date: 2026-04-03T02:55:21.784Z
tags: gremlin,audit
editor: markdown
dateCreated: 2026-04-03T02:55:21.784Z
---

# Audit Report — hydra.yaml

**Date:** 2026-04-03
**File:** swarm/hydra.yaml
**Type:** Docker Swarm
**Verdict:** FAIL

---

1. **Homepage labels**:
   - `homepage.group`: PASS
   - `homepage.name`: PASS
   - `homepage.icon`: PASS
   - `homepage.href`: PASS
   - `homepage.description`: PASS

2. **Uptime Kuma labels**:
   - `kuma.hydra.http.name`: PASS
   - `kuma.hydra.http.url`: PASS

3. **Caddy labels on exposed services**:
   - `caddy=hydra.netgrimoire.com`: PASS
   - `caddy.reverse_proxy: hydra2:5076`: PASS

4. **Placement constraints**:
   - `node.labels.general == true`: PASS

5. **Volumes use /DockerVol/<service> path convention**:
   - `/data/nfs/znas/Docker/hydra2/config`: FAIL
     - Fix: Update the host path to follow the convention, e.g., `/DockerVol/hydra2/config`.
   - `/data/nfs/znas/Docker/hydra2/downloads`: FAIL
     - Fix: Update the host path to follow the convention, e.g., `/DockerVol/hydra2/downloads`.

6. **Network references external netgrimoire overlay**:
   - `netgrimoire`: PASS

VERDICT: FAIL
False Grimoire/Netgrimoire/Audits/joplin-2026-04-03.md
---
title: Audit - joplin.yaml
description: Gremlin audit report 2026-04-03
published: true
date: 2026-04-03T02:56:20.747Z
tags: gremlin,audit
editor: markdown
dateCreated: 2026-04-03T02:56:20.747Z
---

# Audit Report — joplin.yaml

**Date:** 2026-04-03
**File:** swarm/joplin.yaml
**Type:** Docker Swarm
**Verdict:** FAIL

---

**SWARM AUDIT**

1. **Homepage labels**:
   - **PASS**: `homepage.group=Services`
   - **PASS**: `homepage.name=Joplin`
   - **FAIL**: `homepage.icon=joplin.png` (should be a valid URL or path)
   - **PASS**: `homepage.href=https://joplin.netgrimoire.com`
   - **PASS**: `homepage.description=Note Server`

2. **Uptime Kuma labels**:
   - **FAIL**: No Uptime Kuma labels found.

3. **Caddy labels on exposed services**:
   - **PASS**: `caddy=joplin.netgrimoire.com`
   - **FAIL**: `caddy.reverse_proxy="{{upstreams 22300}}"` should be `caddy.reverse_proxy=["http://joplin:22300"]`

4. **Placement constraints**:
   - **PASS**: `node.hostname == docker3`

5. **Volumes use /DockerVol/<service> path convention**:
   - **PASS**: `/DockerVol/joplindb:/var/lib/postgresql/data`

6. **Network references external netgrimoire overlay**:
   - **PASS**: Uses the `netgrimoire` network, which is marked as `external: true`.

**VERDICT: FAIL**

Fixes required:
- Correct the icon URL in `homepage.icon`.
- Add Uptime Kuma labels.
- Correct the Caddy reverse proxy configuration.
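The three fixes listed for joplin can be sketched as a single label block. This is a sketch, not the audited file: the Kuma monitor slug `joplin` and the icon URL are assumptions; the Caddy line follows the correction stated in the audit.

```yaml
# Sketch of corrected deploy labels for the joplin service.
# Assumptions: the kuma slug "joplin" and the dashboard-icons URL.
deploy:
  labels:
    - homepage.icon=https://cdn.jsdelivr.net/gh/walkxcode/dashboard-icons/png/joplin.png
    - kuma.joplin.http.name=Joplin
    - kuma.joplin.http.url=http://joplin:22300
    - caddy=joplin.netgrimoire.com
    - caddy.reverse_proxy=["http://joplin:22300"]
```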
False Grimoire/Netgrimoire/Audits/journiv-2026-04-03.md
---
title: Audit - journiv.yaml
description: Gremlin audit report 2026-04-03
published: true
date: 2026-04-03T02:57:23.495Z
tags: gremlin,audit
editor: markdown
dateCreated: 2026-04-03T02:57:23.495Z
---

# Audit Report — journiv.yaml

**Date:** 2026-04-03
**File:** swarm/journiv.yaml
**Type:** Docker Compose
**Verdict:** FAIL

---

PASS: Caddyfile has a global block for Crowdsec configuration.
PASS: All services are reverse-proxied through Caddy, ensuring they do not expose ports directly.

FAIL:
- The service at `fish.pncharris.com` is missing a Caddyfile entry.
- No entries exist for the subdomains of `webmail.netgrimoire.com`.

VERDICT: FAIL
False Grimoire/Netgrimoire/Audits/kavita-2026-04-03.md
---
title: Audit - kavita.yaml
description: Gremlin audit report 2026-04-03
published: true
date: 2026-04-03T02:58:18.686Z
tags: gremlin,audit
editor: markdown
dateCreated: 2026-04-03T02:58:18.686Z
---

# Audit Report — kavita.yaml

**Date:** 2026-04-03
**File:** swarm/kavita.yaml
**Type:** Docker Swarm
**Verdict:** FAIL

---

**SWARM AUDIT REPORT**

1. **Homepage labels**:
   - `homepage.group`: PASS
   - `homepage.name`: PASS
   - `homepage.icon`: PASS
   - `homepage.href`: PASS
   - `homepage.description`: PASS

2. **Uptime Kuma labels**:
   - **FAIL**: Missing Uptime Kuma labels (e.g., `kuma.kavita.http.name` and `kuma.kavita.http.url`); these are not defined in the provided configuration. Add appropriate Uptime Kuma labels for monitoring.

3. **Caddy labels on exposed services**:
   - `caddy`: PASS
   - `caddy.reverse_proxy`: PASS

4. **Placement constraints**:
   - **FAIL**: No placement constraints (e.g., `node.hostname`) are specified. Add placement constraints if specific nodes are required for service placement.

5. **Volumes use /DockerVol/<service> path convention**:
   - `/data/nfs/Baxter/Data/media/comics`: FAIL (the path does not follow the `/DockerVol/<service>` convention)
     - **Fix**: Update volume paths to conform to the convention, e.g., `/DockerVol/kavita/media/comics`.

6. **Network references external netgrimoire overlay**:
   - `netgrimoire`: PASS

**VERDICT: FAIL**

Address the Uptime Kuma labels, placement constraints, and volume paths before the configuration can be considered compliant.
False Grimoire/Netgrimoire/Audits/kopia-2026-04-03.md
---
title: Audit - kopia.yaml
description: Gremlin audit report 2026-04-03
published: true
date: 2026-04-03T02:59:09.430Z
tags: gremlin,audit
editor: markdown
dateCreated: 2026-04-03T02:59:09.430Z
---

# Audit Report — kopia.yaml

**Date:** 2026-04-03
**File:** swarm/kopia.yaml
**Type:** Docker Swarm
**Verdict:** PASS

---

1. **Homepage labels**:
   - `homepage.group`: PASS (Backup)
   - `homepage.name`: PASS (Kopia)
   - `homepage.icon`: PASS (kopia.png)
   - `homepage.href`: PASS (https://kopia.netgrimoire.com)
   - `homepage.description`: PASS (Snapshot backup and deduplication)

2. **Uptime Kuma labels**:
   - Not applicable, as there are no Uptime Kuma labels.

3. **Caddy labels on exposed services**:
   - `caddy`: PASS (kopia.netgrimoire.com)
   - `caddy.reverse_proxy`: PASS (kopia.netgrimoire.com:51515)

4. **Placement constraints**:
   - `node.hostname == znas`: PASS

5. **Volumes use /DockerVol/<service> path convention**:
   - `/DockerVol/kopia/config`: PASS
   - `/DockerVol/kopia/cache`: PASS
   - `/DockerVol/kopia/cert`: PASS
   - `/DockerVol/kopia/logs`: PASS

6. **Network references external netgrimoire overlay**:
   - `netgrimoire`: PASS (external)

VERDICT: PASS
False Grimoire/Netgrimoire/Audits/kuma-2026-04-03.md
---
title: Audit - kuma.yaml
description: Gremlin audit report 2026-04-03
published: true
date: 2026-04-03T02:59:59.242Z
tags: gremlin,audit
editor: markdown
dateCreated: 2026-04-03T02:59:59.242Z
---

# Audit Report — kuma.yaml

**Date:** 2026-04-03
**File:** swarm/kuma.yaml
**Type:** Docker Swarm
**Verdict:** FAIL

---

**SWARM AUDIT**

1. **Homepage labels**:
   - **PASS**: homepage.group=Monitoring, homepage.name=Kuma Uptime, homepage.icon=kuma.png, homepage.href=https://kuma.netgrimoire.com, homepage.description=Services Monitor

2. **Uptime Kuma labels**:
   - **FAIL**: No labels found for the Uptime Kuma service.
   - **Fix**: Add appropriate labels to the Uptime Kuma service under the `labels` section.

3. **Caddy labels on exposed services**:
   - **PASS**: caddy=kuma.netgrimoire.com, caddy.reverse_proxy=kuma:3001

4. **Placement constraints**:
   - **FAIL**: The node.hostname constraint for the autokuma service does not match the provided fix.
   - **Fix**: Use `node.role == manager` instead of `node.hostname`.

5. **Volumes use /DockerVol/<service> path convention**:
   - **PASS**: All volumes follow the /DockerVol/<service> path convention.

6. **Network references external netgrimoire overlay**:
   - **PASS**: The swarm uses the external network netgrimoire.

**VERDICT: FAIL**

Missing labels for the Uptime Kuma service and an incorrect placement constraint for the autokuma service prevent a PASS.
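The placement fix for autokuma can be sketched as follows. The service name comes from the audit; the rest is a minimal assumed skeleton, not the audited file.

```yaml
# Sketch: pin the autokuma service to any manager node
# instead of a fixed hostname.
services:
  autokuma:
    deploy:
      placement:
        constraints:
          - node.role == manager
```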
False Grimoire/Netgrimoire/Audits/library-2026-04-03.md
---
title: Audit - library.yaml
description: Gremlin audit report 2026-04-03
published: true
date: 2026-04-03T03:00:59.147Z
tags: gremlin,audit
editor: markdown
dateCreated: 2026-04-03T03:00:59.147Z
---

# Audit Report — library.yaml

**Date:** 2026-04-03
**File:** swarm/library.yaml
**Type:** Docker Swarm
**Verdict:** FAIL

---

**SWARM AUDIT REPORT**

1. **Homepage labels:**
   - `homepage.group=Library`
   - `homepage.name=Netgrimoire Library`
   - `homepage.icon=calibre-web.png`
   - `homepage.href=http://books.netgrimoire.com`
   - `homepage.description=Curated Library`

   **PASS**: All homepage labels are correctly configured.

2. **Uptime Kuma labels:**
   - `kuma.calibre1.http.name="Calibre-Netgrimoire"`
   - `kuma.auth.http.url=http://calibre-netgrimoire:8083`

   **PASS**: Uptime Kuma labels are correctly configured for the Calibre service. (Note: the `name` and `url` labels use different monitor slugs, `calibre1` and `auth`; these should normally match.)

3. **Caddy labels on exposed services:**
   - `caddy=books.netgrimoire.com`
   - `caddy.reverse_proxy="{{upstreams 8083}}"`

   **PASS**: Caddy labels are correctly configured to reverse proxy to the Calibre service.

4. **Placement constraints:**
   - `node.labels.general == true`

   **FAIL**: The placement constraint should use `node.hostname` instead of `node.labels.general`.

5. **Volumes use /DockerVol/<service> path convention:**
   - `/data/nfs/Baxter/Docker/Calibre-netgrimoire/Config:/config`
   - `/data/nfs/Baxter/Data:/data:shared`

   **FAIL**: Volumes are not using the recommended `/DockerVol/<service>` path convention. They should be mounted under `/DockerVol/Calibre-Netgrimoire`.

6. **Network references external netgrimoire overlay:**
   - `networks:`
     - `- netgrimoire`

   **PASS**: The service is correctly using an external network.

**VERDICT: FAIL**

Fixes required:
1. Update the placement constraint to use `node.hostname`.
2. Update volume paths to follow the `/DockerVol/<service>` convention.
False Grimoire/Netgrimoire/Audits/linkding-2026-04-03.md
---
title: Audit - linkding.yaml
description: Gremlin audit report 2026-04-03
published: true
date: 2026-04-03T03:01:44.209Z
tags: gremlin,audit
editor: markdown
dateCreated: 2026-04-03T03:01:44.209Z
---

# Audit Report — linkding.yaml

**Date:** 2026-04-03
**File:** swarm/linkding.yaml
**Type:** Docker Swarm
**Verdict:** FAIL

---

**Audit Results**

1. **Homepage labels:**
   - `homepage.group`: PASS
   - `homepage.name`: PASS
   - `homepage.icon`: PASS
   - `homepage.href`: PASS
   - `homepage.description`: PASS

2. **Uptime Kuma labels:**
   - `kuma.linkding.http.name`: PASS
   - `kuma.linkding.http.url`: PASS

3. **Caddy labels on exposed services:**
   - `caddy=link.netgrimoire.com`: PASS
   - `caddy.reverse_proxy=linkding:9090`: PASS

4. **Placement constraints:**
   - No placement constraints specified, which is acceptable if not needed. **PASS**

5. **Volumes use /DockerVol/<service> path convention:**
   - The volume path is `/data/nfs/Baxter/Docker/linkding/data`, which does not follow the `/DockerVol/<service>` convention. **FAIL**

6. **Network references external netgrimoire overlay:**
   - The `netgrimoire` network is referenced and set as external, which is correct. **PASS**

**Fixes Needed:**
- Update the volume path to use the `/DockerVol/linkding` convention.

VERDICT: FAIL
False Grimoire/Netgrimoire/Audits/lldap-2026-04-03.md
---
title: Audit - lldap.yaml
description: Gremlin audit report 2026-04-03
published: true
date: 2026-04-03T03:02:52.353Z
tags: gremlin,audit
editor: markdown
dateCreated: 2026-04-03T03:02:52.353Z
---

# Audit Report — lldap.yaml

**Date:** 2026-04-03
**File:** swarm/lldap.yaml
**Type:** Docker Swarm
**Verdict:** FAIL

---

1. **Homepage Labels**:
   - **PASS**: All required labels (`homepage.group`, `homepage.name`, `homepage.icon`, `homepage.href`, `homepage.description`) are present and correctly formatted.

2. **Uptime Kuma Labels**:
   - **FAIL**: The Uptime Kuma labels are not specified in the provided YAML. The labels should be prefixed with `kuma.` and include details like `http.name` and `http.url`.
   - **Fix**: Add appropriate Kuma labels under the `deploy.labels` section.

3. **Caddy Labels on Exposed Services**:
   - **PASS**: All required Caddy labels (`caddy=<domain>`, `caddy.reverse_proxy`) are present and correctly formatted.

4. **Placement Constraints**:
   - **PASS**: The placement constraint `node.hostname == docker4` is applied to both services as specified.

5. **Volumes Use /DockerVol/<service> Path Convention**:
   - **PASS**: All volumes adhere to the `/DockerVol/<service>` path convention, with examples provided for both `lldap-db` and `lldap`.

6. **Network References External netgrimoire Overlay**:
   - **PASS**: The network reference `netgrimoire` is correctly set as external.

**VERDICT: FAIL**

The Uptime Kuma labels are missing from the service configuration, which is a critical issue affecting monitoring and management visibility of the LLDAP service.
False Grimoire/Netgrimoire/Audits/logging-2026-04-03.md
---
title: Audit - logging.yaml
description: Gremlin audit report 2026-04-03
published: true
date: 2026-04-03T03:03:52.043Z
tags: gremlin,audit
editor: markdown
dateCreated: 2026-04-03T03:03:52.043Z
---

# Audit Report — logging.yaml

**Date:** 2026-04-03
**File:** swarm/logging.yaml
**Type:** Docker Swarm
**Verdict:** PASS

---

### Audit Report

**1. Homepage labels:**
PASS - All services have `homepage.group`, `homepage.name`, `homepage.icon`, `homepage.href`, and `homepage.description` labels.

**2. Uptime Kuma labels:**
N/A - Uptime Kuma is not present in the provided YAML, so this check is not applicable.

**3. Caddy labels on exposed services:**
PASS - All services that expose ports (loki:3100, grafana:3000) have `caddy` and `caddy.reverse_proxy` labels correctly configured.

**4. Placement constraints:**
PASS - All services use placement constraints to ensure they run only on a node with the hostname `docker4`.

**5. Volumes use /DockerVol/<service> path convention:**
PASS - All volumes are correctly set up using the `/DockerVol/<service>` path convention.

**6. Network references external netgrimoire overlay:**
PASS - The network `netgrimoire` is referenced and marked as external, ensuring correct network isolation and configuration.

### VERDICT: PASS
False Grimoire/Netgrimoire/Audits/mealie-2026-04-03.md
---
title: Audit - mealie.yaml
description: Gremlin audit report 2026-04-03
published: true
date: 2026-04-03T03:04:46.697Z
tags: gremlin,audit
editor: markdown
dateCreated: 2026-04-03T03:04:46.697Z
---

# Audit Report — mealie.yaml

**Date:** 2026-04-03
**File:** swarm/mealie.yaml
**Type:** Docker Swarm
**Verdict:** FAIL

---

**Audit Results:**

1. **Homepage labels:**
   - PASS: homepage.group=PNCHarris Apps
   - PASS: homepage.name=Mealie
   - PASS: homepage.icon=mealie.png
   - PASS: homepage.href=https://recipe.netgrimoire.com
   - PASS: homepage.description=Recipe Manager

2. **Uptime Kuma labels:**
   - FAIL: Missing Uptime Kuma labels; no Kuma-related labels are present in the file.

3. **Caddy labels on exposed services:**
   - PASS: caddy=recipe.netgrimoire.com
   - PASS: caddy.reverse_proxy=recipe:9000

4. **Placement constraints:**
   - PASS: node.hostname == docker4

5. **Volumes use /DockerVol/<service> path convention:**
   - PASS: /DockerVol/mealie:/app/data

6. **Network references external netgrimoire overlay:**
   - PASS: netgrimoire network is referenced as external.

**VERDICT: FAIL**

The file does not include any Uptime Kuma labels, which are necessary for monitoring the service with Uptime Kuma.
False Grimoire/Netgrimoire/Audits/ntfy-2026-04-03.md
---
title: Audit - ntfy.yaml
description: Gremlin audit report 2026-04-03
published: true
date: 2026-04-03T03:05:29.837Z
tags: gremlin,audit
editor: markdown
dateCreated: 2026-04-03T03:05:29.837Z
---

# Audit Report — ntfy.yaml

**Date:** 2026-04-03
**File:** swarm/ntfy.yaml
**Type:** Docker Swarm
**Verdict:** FAIL

---

1. **Homepage labels**: NOT APPLICABLE - The configuration file does not include any homepage labels.

2. **Uptime Kuma labels**: NOT APPLICABLE - The configuration file does not include Uptime Kuma labels.

3. **Caddy labels on exposed services**:
   - PASS: `caddy=ntfy.netgrimoire.com`
   - PASS: `caddy.reverse_proxy`

4. **Placement constraints**: NOT APPLICABLE - The configuration file does not include any placement constraints.

5. **Volumes use /DockerVol/<service> path convention**:
   - FAIL: Volumes use `/data/nfs/znas/Docker/ntfy/cache` and `/data/nfs/znas/Docker/ntfy/etc`, which do not follow the `/DockerVol/<service>` path convention.
   - **Fix**: Update the volumes to use paths like `/DockerVol/ntfy/cache` and `/DockerVol/ntfy/etc`.

6. **Network references external netgrimoire overlay**:
   - PASS: The network `netgrimoire` is referenced as an external network.

**VERDICT: FAIL**

The volume paths do not conform to the specified convention, which could lead to management and organization issues. Ensure all volumes follow the `/DockerVol/<service>` path convention for consistency and ease of maintenance.
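The volume fix for ntfy can be sketched as below. The host paths come from the audit's suggestion; the container-side paths are assumptions based on ntfy's usual layout, not taken from the audited file.

```yaml
# Sketch: ntfy bind mounts moved under the /DockerVol/<service> convention.
services:
  ntfy:
    volumes:
      - /DockerVol/ntfy/cache:/var/cache/ntfy   # container path assumed
      - /DockerVol/ntfy/etc:/etc/ntfy           # container path assumed
```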
False Grimoire/Netgrimoire/Audits/nzbget-2026-04-03.md
---
title: Audit - nzbget.yaml
description: Gremlin audit report 2026-04-03
published: true
date: 2026-04-03T03:06:10.689Z
tags: gremlin,audit
editor: markdown
dateCreated: 2026-04-03T03:06:10.689Z
---

# Audit Report — nzbget.yaml

**Date:** 2026-04-03
**File:** swarm/nzbget.yaml
**Type:** Docker Swarm
**Verdict:** FAIL

---

### SWARM AUDIT

1. **Homepage labels**:
   - `homepage.group`: PASS
   - `homepage.name`: PASS
   - `homepage.icon`: PASS
   - `homepage.href`: PASS
   - `homepage.description`: PASS

2. **Uptime Kuma labels**:
   - `kuma.nzbget.http.name`: PASS
   - `kuma.nzbget.http.url`: PASS

3. **Caddy labels on exposed services**:
   - `caddy=nzbget.netgrimoire.com`: PASS
   - `caddy.reverse_proxy="{{upstreams 6789}}"`: PASS

4. **Placement constraints**:
   - `node.hostname=docker5`: PASS

5. **Volumes use `/DockerVol/<service>` path convention**:
   - `/DockerVol/nzbget/config:/config`: PASS
   - `/data/nfs/znas/Green/:/data:shared`: FAIL (volume paths should follow the `/DockerVol/<service>` convention)

6. **Network references external `netgrimoire` overlay**:
   - `networks`: PASS

### VERDICT: FAIL
False Grimoire/Netgrimoire/Audits/ollama-2026-04-03.md
---
title: Audit - ollama.yaml
description: Gremlin audit report 2026-04-03
published: true
date: 2026-04-03T03:07:35.106Z
tags: gremlin,audit
editor: markdown
dateCreated: 2026-04-03T03:07:35.106Z
---

# Audit Report — ollama.yaml

**Date:** 2026-04-03
**File:** swarm/ollama.yaml
**Type:** Docker Swarm
**Verdict:** PASS

---

1. **Homepage labels**:
   - `homepage.group`: PASS
   - `homepage.name`: PASS
   - `homepage.icon`: PASS
   - `homepage.href`: PASS
   - `homepage.description`: PASS

2. **Uptime Kuma labels**:
   - `kuma.ollama.http.name`: PASS
   - `kuma.ollama.http.url`: PASS
   - `kuma.openwebui.http.name`: PASS
   - `kuma.openwebui.http.url`: PASS
   - `kuma.qdrant.http.name`: PASS
   - `kuma.qdrant.http.url`: PASS
   - `kuma.n8n.http.name`: PASS
   - `kuma.n8n.http.url`: PASS

3. **Caddy labels on exposed services**:
   - `caddy=ai.netgrimoire.com` and `caddy.reverse_proxy={{upstreams 8080}}`: PASS
   - `caddy=n8n.netgrimoire.com` and `caddy.reverse_proxy={{upstreams 5678}}`: PASS

4. **Placement constraints**:
   - `node.hostname == docker4`: PASS for all services

5. **Volumes use /DockerVol/<service> path convention**:
   - `/DockerVol/ollama` for ollama: PASS
   - `/DockerVol/open-webui` for open-webui: PASS
   - `/DockerVol/qdrant` for qdrant: PASS
   - `/DockerVol/n8n` for n8n: PASS

6. **Network references external netgrimoire overlay**:
   - `netgrimoire`: PASS

**VERDICT: PASS**
False Grimoire/Netgrimoire/Audits/phpipam-2026-04-03.md
---
title: Audit - phpipam.yaml
description: Gremlin audit report 2026-04-03
published: true
date: 2026-04-03T03:08:37.768Z
tags: gremlin,audit
editor: markdown
dateCreated: 2026-04-03T03:08:37.768Z
---

# Audit Report — phpipam.yaml

**Date:** 2026-04-03
**File:** swarm/phpipam.yaml
**Type:** Docker Swarm
**Verdict:** PASS

---

1. **Homepage labels**:
   - `homepage.group`: Management
   - `homepage.name`: phpIPAM
   - `homepage.icon`: ipam.png
   - `homepage.href`: http://ipam.netgrimoire.com
   - `homepage.description`: IP Address Management

   **PASS**: All homepage labels are correctly set.

2. **Uptime Kuma labels**:
   - `kuma.<n>.http.name`
   - `kuma.<n>.http.url`

   **N/A**: No Uptime Kuma service is defined in the YAML file, so these labels do not apply.

3. **Caddy labels on exposed services**:
   - `caddy=ipam.netgrimoire.com`
   - `caddy.reverse_proxy="{{upstreams 80}}"`

   **PASS**: Caddy labels are correctly set for the phpIPAM-web service.

4. **Placement constraints**:
   - `node.hostname == docker3`

   **PASS** (with a note): The placement constraint is applied to all services; verify that the `docker3` node exists and is available, and consider a more dynamic constraint (e.g., based on resource availability) if possible.

5. **Volumes use /DockerVol/<service> path convention**:
   - `/DockerVol/phpipam/phpipam-logo:/phpipam/css/images/logo`
   - `/DockerVol/phpipam/mariadb:/var/lib/mysql`

   **PASS**: All volumes follow the specified path convention.

6. **Network references external netgrimoire overlay**:
   - The `netgrimoire` network is referenced by all services.

   **PASS**: The `netgrimoire` network is correctly referenced as an external overlay network.

**VERDICT: PASS**
False Grimoire/Netgrimoire/Audits/pinchflat-2026-04-03.md
---
title: Audit - pinchflat.yaml
description: Gremlin audit report 2026-04-03
published: true
date: 2026-04-03T03:09:34.505Z
tags: gremlin,audit
editor: markdown
dateCreated: 2026-04-03T03:09:34.505Z
---

# Audit Report — pinchflat.yaml

**Date:** 2026-04-03
**File:** swarm/pinchflat.yaml
**Type:** Docker Swarm
**Verdict:** FAIL

---

### SWARM AUDIT CHECKS:

1. **Homepage labels**:
   - `homepage.group`: "Downloaders" - PASS
   - `homepage.name`: "PinchFlat" - PASS
   - `homepage.icon`: "pinchflat.png" - FAIL (the icon path should be accessible)
   - `homepage.href`: "https://pinchflat.netgrimoire.com" - PASS
   - `homepage.description`: "YouTube Library" - PASS

2. **Uptime Kuma labels**:
   - `kuma.pf.http.name`: "PinchFlat" - PASS
   - `kuma.pf.http.url`: "http://pinchflat:8945" - PASS

3. **Caddy labels on exposed services**:
   - `caddy=pinchflat.netgrimoire.com` - PASS
   - `caddy.import=authentik` - PASS
   - `caddy.reverse_proxy=pinchflat:8945` - PASS

4. **Placement constraints**:
   - `node.hostname==nas` - PASS

5. **Volumes use /DockerVol/<service> path convention**:
   - `/DockerVol/pinchflat/config:/config` - PASS
   - `/data/nfs/Baxter/Data/media/other/pinchflat:/downloads` - FAIL (should follow the /DockerVol/<service> convention)

6. **Network references external netgrimoire overlay**:
   - The `netgrimoire` network is external - PASS

### FIXES:

- Update `homepage.icon` to a valid accessible path.
- Change `/data/nfs/Baxter/Data/media/other/pinchflat:/downloads` to follow the convention by placing it under `/DockerVol/pinchflat/downloads`.

### VERDICT: FAIL
---
title: Audit - portainer-agent-stack.yml
description: Gremlin audit report 2026-04-03
published: true
date: 2026-04-03T03:10:38.984Z
tags: gremlin,audit
editor: markdown
dateCreated: 2026-04-03T03:10:38.984Z
---

# Audit Report — portainer-agent-stack.yml

**Date:** 2026-04-03
**File:** swarm/portainer-agent-stack.yml
**Type:** Docker Swarm
**Verdict:** FAIL

---

### Audit Results:

1. **Homepage labels**:
   - `homepage.group`: Present
   - `homepage.name`: Present
   - `homepage.icon`: Present
   - `homepage.href`: Present
   - `homepage.description`: Present

2. **Uptime Kuma labels**:
   - `kuma.prt.http.name`: Missing (Expected: "Portainer")
   - `kuma.prt.http.url`: Missing (Expected: "http://portainer:9000")

3. **Caddy labels on exposed services**:
   - `caddy=docker.netgrimoire.com`: Present
   - `caddy.reverse_proxy="http://portainer:9000"`: Present

4. **Placement constraints**:
   - `node.hostname == znas`: Present (Note: this constraint might not be ideal for a global service, but it is specified as per the file.)

5. **Volumes use `/DockerVol/<service>` path convention**:
   - Volume `/DockerVol/portainer`: Present
   - Volumes `/var/run/docker.sock` and `/var/lib/docker/volumes` do not follow the `/DockerVol/<service>` convention, but these are host system paths (the Docker socket and volume store) that the Portainer agent must bind-mount as-is; they cannot be relocated and are reasonable exemptions.

6. **Network references external `netgrimoire` overlay**:
   - `networks: netgrimoire`: Present

### Issues and Fixes:

- **Uptime Kuma labels**: Add the missing labels with the correct values.
  ```yaml
  - kuma.prt.http.name="Portainer"
  - kuma.prt.http.url=http://portainer:9000
  ```

- **System bind mounts**: Keep the agent's host paths unchanged; relocating them under `/DockerVol` would break the agent.
  ```yaml
  volumes:
    - /var/run/docker.sock:/var/run/docker.sock          # host Docker socket; cannot be relocated
    - /var/lib/docker/volumes:/var/lib/docker/volumes    # host volume store; cannot be relocated
  ```

### Final Verdict:

VERDICT: FAIL

The failure is due to the missing Uptime Kuma labels; the agent's system bind mounts are exempt from the `/DockerVol/<service>` volume path convention.
False Grimoire/Netgrimoire/Audits/profilarr-2026-04-03.md
---
title: Audit - profilarr.yaml
description: Gremlin audit report 2026-04-03
published: true
date: 2026-04-03T03:11:28.457Z
tags: gremlin,audit
editor: markdown
dateCreated: 2026-04-03T03:11:28.457Z
---

# Audit Report — profilarr.yaml

**Date:** 2026-04-03
**File:** swarm/profilarr.yaml
**Type:** Docker Swarm
**Verdict:** PASS

---

1. **Homepage labels**:
   - `homepage.group`: "Jolly Roger" - PASS
   - `homepage.name`: "Profilarr" - PASS
   - `homepage.icon`: "profilarr" - PASS (Note: this should be a valid icon URL or path, not just the name of the service)
   - `homepage.href`: "https://profilarr.netgrimoire.com" - PASS
   - `homepage.description`: "Profilarr" - PASS

2. **Uptime Kuma labels**:
   - `kuma.profilarr.http.name`: "profilarr" - PASS
   - `kuma.profilarr.http.url`: "http://profilarr.netgrimoire.com" - PASS

3. **Caddy labels on exposed services**:
   - `caddy: "profilarr.netgrimoire.com"` - PASS
   - `caddy.reverse_proxy: "{{upstreams 6868}}"` - PASS

4. **Placement constraints**:
   - `node.hostname == docker4` - PASS

5. **Volumes use /DockerVol/<service> path convention**:
   - `/DockerVol/profilarr:/config` - PASS

6. **Network references external netgrimoire overlay**:
   - The `netgrimoire` network is marked as `external: true`, so it is referenced correctly in the compose file. - PASS

VERDICT: PASS
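The external-network pattern that this check (and the same check in the other audits) verifies looks like this in a stack file. Only the network name comes from the audited files; the fragment is a sketch.

```yaml
# Sketch: attach the stack to the pre-existing overlay network
# rather than letting the stack create its own.
networks:
  netgrimoire:
    external: true
```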
False Grimoire/Netgrimoire/Audits/radarr-2026-04-03.md
---
title: Audit - radarr.yaml
description: Gremlin audit report 2026-04-03
published: true
date: 2026-04-03T03:11:58.614Z
tags: gremlin,audit
editor: markdown
dateCreated: 2026-04-03T03:11:58.614Z
---

# Audit Report — radarr.yaml

**Date:** 2026-04-03
**File:** swarm/radarr.yaml
**Type:** Docker Swarm
**Verdict:** FAIL

---

- **Homepage labels**: PASS
- **Uptime Kuma labels**: PASS
- **Caddy labels on exposed services**: FAIL - The `caddy.reverse_proxy` label should use the service name, not just the port. Fix: change to `- caddy.reverse_proxy={{upstreams radarr}}`.
- **Placement constraints**: PASS
- **Volumes use /DockerVol/<service> path convention**: PASS
- **Network references external netgrimoire overlay**: PASS

**VERDICT: FAIL**
False Grimoire/Netgrimoire/Audits/readarr-2026-04-03.md
---
title: Audit - readarr.yaml
description: Gremlin audit report 2026-04-03
published: true
date: 2026-04-03T03:12:56.461Z
tags: gremlin,audit
editor: markdown
dateCreated: 2026-04-03T03:12:56.461Z
---

# Audit Report — readarr.yaml

**Date:** 2026-04-03
**File:** swarm/readarr.yaml
**Type:** Docker Swarm
**Verdict:** FAIL

---

### SWARM AUDIT REPORT for `swarm/readarr.yaml`

#### Homepage Labels:
1. **PASS**: homepage.group = Jolly Roger
2. **PASS**: homepage.name = Readarr
3. **PASS**: homepage.icon = readarr.png
4. **PASS**: homepage.href = http://readarr.netgrimoire.com
5. **PASS**: homepage.description = Ebook Library

#### Uptime Kuma Labels:
1. **FAIL**: Missing `kuma.readarr.http.name` and `kuma.readarr.http.url`. These labels are necessary for integration with Uptime Kuma.

#### Caddy Labels on Exposed Services:
1. **PASS**: caddy=readarr.netgrimoire.com
2. **PASS**: caddy.reverse_proxy="{{upstreams 8787}}"
   - **ISSUE**: The use of `{{upstreams 8787}}` may not work as expected in a multi-container environment. It should be replaced with the actual service name if it is part of the same stack.
   - **FIX**: Replace `{{upstreams 8787}}` with `http://readarr:8787`.

#### Placement Constraints:
1. **PASS**: node.hostname == docker4

#### Volumes Use `/DockerVol/<service>` Path Convention:
1. **PASS**: /DockerVol/readarr/config
2. **FAIL**: /data/nfs/Baxter/Data does not follow the `/DockerVol/<service>` convention. It should be placed under `/DockerVol/readarr/data`.

#### Network References External `netgrimoire` Overlay:
1. **PASS**: netgrimoire network is external

### VERDICT: FAIL

Several issues must be addressed before this file complies with the guidelines: define the missing Uptime Kuma labels, bring the volume paths into the naming convention, and review the Caddy configuration for proper service integration.
|
||||
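A minimal sketch of the Uptime Kuma labels the readarr audit reports as missing, following the `kuma.<slug>.http.*` pattern that the other audits in this set treat as the convention; the monitor name and internal port 8787 are taken from the report.

```yaml
deploy:
  labels:
    # Autokuma-style monitor definition (slug "readarr" assumed):
    - kuma.readarr.http.name=Readarr
    - kuma.readarr.http.url=http://readarr:8787
```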
31
False Grimoire/Netgrimoire/Audits/recyclarr-2026-04-03.md
Normal file
@@ -0,0 +1,31 @@

---
title: Audit - recyclarr.yaml
description: Gremlin audit report 2026-04-03
published: true
date: 2026-04-03T03:13:33.974Z
tags: gremlin,audit
editor: markdown
dateCreated: 2026-04-03T03:13:33.974Z
---

# Audit Report — recyclarr.yaml

**Date:** 2026-04-03
**File:** swarm/recyclarr.yaml
**Type:** Docker Swarm
**Verdict:** FAIL

---

1. **Homepage labels**: NOT APPLICABLE (No homepage labels specified in the provided YAML).
2. **Uptime Kuma labels**: NOT APPLICABLE (No Uptime Kuma service or labels specified in the provided YAML).
3. **Caddy labels on exposed services**: NOT APPLICABLE (No Caddy service or reverse proxy configuration specified in the provided YAML).
4. **Placement constraints**:
   - PASS: The `node.hostname == docker4` constraint is correctly applied.
5. **Volumes use /DockerVol/<service> path convention**:
   - FAIL: The volume path `/data/nfs/Baxter/Docker/recyclarr:/config` does not follow the `/DockerVol/recyclarr` path convention.
   - **Fix**: Change the volume path to `/DockerVol/recyclarr/config`.
6. **Network references external netgrimoire overlay**:
   - PASS: The network `netgrimoire` is correctly referenced as an external network.

**VERDICT: FAIL**
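The recyclarr volume fix, sketched below; the report's fix string is read here as the host-side path, with the container path `/config` unchanged.

```yaml
services:
  recyclarr:
    volumes:
      # Before (fails the /DockerVol/<service> convention):
      # - /data/nfs/Baxter/Docker/recyclarr:/config
      # After:
      - /DockerVol/recyclarr/config:/config
```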
47
False Grimoire/Netgrimoire/Audits/roundcube-2026-04-03.md
Normal file
@@ -0,0 +1,47 @@

---
title: Audit - roundcube.yaml
description: Gremlin audit report 2026-04-03
published: true
date: 2026-04-03T03:14:30.315Z
tags: gremlin,audit
editor: markdown
dateCreated: 2026-04-03T03:14:30.315Z
---

# Audit Report — roundcube.yaml

**Date:** 2026-04-03
**File:** swarm/roundcube.yaml
**Type:** Docker Swarm
**Verdict:** FAIL

---

**Audit Report for swarm/roundcube.yaml**

1. **Homepage Labels**:
   - `homepage.group`: Present (`E-Mail`)
   - `homepage.name`: Present (`Roundcube`)
   - `homepage.icon`: Present (`roundcube.png`)
   - `homepage.href`: Present (`http://webmail.netgrimoire.com`)
   - `homepage.description`: Present (`E-mail client`)

2. **Uptime Kuma Labels**:
   - `kuma.rc.http.name="Mealie"`: Incorrect — the monitor name `Mealie` belongs to a different service and should be `Roundcube`, and the key prefix `rc` does not match the `mrc` prefix used by the URL label.
   - `kuma.mrc.http.url=http://roundcube:80`: Correct URL, but the key prefix should be aligned with the name label.

3. **Caddy Labels on Exposed Services**:
   - `caddy`: Present (`webmail.netgrimoire.com`, `webmail.gnarlypandaproductions.com`, `webmail.pncharris.com`, `webmail.pncfishandmore.com`, `webmail.pncharrisenterprises.com`, `webmail.florosafd.org`)
   - `caddy.reverse_proxy`: Present (`{{upstreams 80}}`)

4. **Placement Constraints**:
   - `node.hostname`: Present and correct (`docker4`)

5. **Volumes Use /DockerVol/<service> Path Convention**:
   - `/DockerVol/roundcube/www:/var/www/html`: Correct

6. **Network References External netgrimoire Overlay**:
   - `netgrimoire` network: Present and external
   - `mailcow-network` network: Present and external

**VERDICT**: FAIL
48
False Grimoire/Netgrimoire/Audits/sabnzbd-2026-04-03.md
Normal file
@@ -0,0 +1,48 @@

---
title: Audit - sabnzbd.yaml
description: Gremlin audit report 2026-04-03
published: true
date: 2026-04-03T03:15:29.656Z
tags: gremlin,audit
editor: markdown
dateCreated: 2026-04-03T03:15:29.656Z
---

# Audit Report — sabnzbd.yaml

**Date:** 2026-04-03
**File:** swarm/sabnzbd.yaml
**Type:** Docker Swarm
**Verdict:** FAIL

---

1. **Homepage labels**: All homepage labels are present and correctly formatted.
   - `homepage.group=Jolly Roger`
   - `homepage.name=Sabnzbd`
   - `homepage.icon=sabnzbd.png`
   - `homepage.href=http://sabnzbd.netgrimoire.com`
   - `homepage.description=Usenet Downloader`

2. **Uptime Kuma labels**: Both `kuma.sab.http.name` and `kuma.sab.http.url` are present.
   - `kuma.sab.http.name="Sabnzbd"`
   - `kuma.sab.http.url=http://sabnzbd:8080`

3. **Caddy labels on exposed services**: Caddy labels include both the domain and reverse proxy settings.
   - `caddy=sabnzbd.netgrimoire.com`
   - `caddy.reverse_proxy="{{upstreams 8080}}"`

4. **Placement constraints**: The placement constraint references a custom node label (`node.labels.general == true`) rather than a hostname. This should be updated for clarity.
   - Current: `- node.labels.general == true`
   - Fix: Update to use `node.hostname` if appropriate, or keep the original if `general` is indeed a valid label.

5. **Volumes use /DockerVol/<service> path convention**: The volumes do not follow the `/DockerVol/<service>` path convention.
   - Current paths:
     - `/data/nfs/znas/Data/:/data:shared`
     - `/data/nfs/znas/Docker/Sabnzbd:/config`

6. **Network references external netgrimoire overlay**: The network reference is correctly set to the `netgrimoire` network, which is marked as external.

**VERDICT: FAIL**

The placement constraint should be updated for clarity, and the volumes should adhere to the specified path convention.
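The two constraint styles discussed in the sabnzbd audit, sketched side by side. Both are valid Swarm forms: `node.hostname` is a built-in node attribute, while `node.labels.general` only matches nodes where that label was added via `docker node update --label-add general=true <node>`. The hostname shown is a placeholder, not taken from the file.

```yaml
deploy:
  placement:
    constraints:
      # Pin to one node by hostname (the convention these audits prefer):
      - node.hostname == <hostname>
      # Or target any node carrying a custom label
      # (commented out, since multiple constraints are ANDed together):
      # - node.labels.general == true
```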
50
False Grimoire/Netgrimoire/Audits/scanopy-2026-04-03.md
Normal file
@@ -0,0 +1,50 @@

---
title: Audit - scanopy.yaml
description: Gremlin audit report 2026-04-03
published: true
date: 2026-04-03T03:17:06.276Z
tags: gremlin,audit
editor: markdown
dateCreated: 2026-04-03T03:17:06.276Z
---

# Audit Report — scanopy.yaml

**Date:** 2026-04-03
**File:** swarm/scanopy.yaml
**Type:** Docker Swarm
**Verdict:** FAIL

---

### SWARM AUDIT — check ALL of the following:

1. **Homepage labels**:
   - `homepage.group`: "Monitoring" (PASS)
   - `homepage.name`: "Scanopy" (PASS)
   - `homepage.icon`: "scanopy.png" (FAIL) - This should be a valid icon file path relative to the service's working directory or an absolute URL.
   - `homepage.href`: "https://scan.netgrimoire.com" (PASS)
   - `homepage.description`: "Network discovery & topology" (PASS)

2. **Uptime Kuma labels**:
   - No Uptime Kuma labels are explicitly defined in the provided YAML file; they are assumed to be set correctly elsewhere in other services or configurations.

3. **Caddy labels on exposed services**:
   - `caddy: "scn.netgrimoire.com"` (PASS)
   - `caddy.reverse_proxy`: "{{upstreams 60072}}" (PASS)

4. **Placement constraints**:
   - All services have placement constraints (`node.hostname == docker4`) which are correctly set (PASS).

5. **Volumes use /DockerVol/<service> path convention**:
   - `postgres` volume: `/DockerVol/scanopy/postgres:/var/lib/postgresql/data` (PASS)
   - `server` volume: `/DockerVol/scanopy/server-data:/data` (PASS)
   - `daemon` volume: `/DockerVol/scanopy/daemon-config:/root/.config/daemon` (PASS)

6. **Network references external netgrimoire overlay**:
   - All services reference the `netgrimoire` network, which is marked as external (PASS).

### Final Verdict
VERDICT: FAIL

The issue identified is that the `homepage.icon` label should be a valid icon file path or URL; it is currently set to `"scanopy.png"`, which may not be accessible or correct. Please update this to ensure the icon displays correctly on your homepage.
47
False Grimoire/Netgrimoire/Audits/sonarr-2026-04-03.md
Normal file
@@ -0,0 +1,47 @@

---
title: Audit - sonarr.yaml
description: Gremlin audit report 2026-04-03
published: true
date: 2026-04-03T03:17:56.262Z
tags: gremlin,audit
editor: markdown
dateCreated: 2026-04-03T03:17:56.262Z
---

# Audit Report — sonarr.yaml

**Date:** 2026-04-03
**File:** swarm/sonarr.yaml
**Type:** Docker Swarm
**Verdict:** FAIL

---

**SWARM AUDIT REPORT**

1. **Homepage labels**:
   - **PASS**: homepage.group=Jolly Roger
   - **PASS**: homepage.name=Sonarr
   - **PASS**: homepage.icon=sonarr.png
   - **FAIL**: homepage.href=http://sonarr.netgrimoire.com should be http://sonarr:8989 (Relative URL recommended for internal services).
   - **PASS**: homepage.description=Television Library

2. **Uptime Kuma labels**:
   - **FAIL**: No kuma labels found.

3. **Caddy labels on exposed services**:
   - **PASS**: caddy=sonarr.netgrimoire.com
   - **PASS**: caddy.reverse_proxy="sonarr:8989"

4. **Placement constraints**:
   - **PASS**: node.hostname==docker5

5. **Volumes use /DockerVol/<service> path convention**:
   - **FAIL**: /data/nfs/znas/Data/:/data should be /DockerVol/Sonarr/data to adhere to the specified convention.

6. **Network references external netgrimoire overlay**:
   - **PASS**: Network netgrimoire is referenced as external.

**VERDICT: FAIL**

The homepage and Uptime Kuma labels require specific fixes, while the volume path does not comply with the established convention.
46
False Grimoire/Netgrimoire/Audits/termix-2026-04-03.md
Normal file
@@ -0,0 +1,46 @@

---
title: Audit - termix.yaml
description: Gremlin audit report 2026-04-03
published: true
date: 2026-04-03T03:18:39.128Z
tags: gremlin,audit
editor: markdown
dateCreated: 2026-04-03T03:18:39.128Z
---

# Audit Report — termix.yaml

**Date:** 2026-04-03
**File:** swarm/termix.yaml
**Type:** Docker Swarm
**Verdict:** FAIL

---

### SWARM AUDIT RESULTS

#### Homepage Labels
- **PASS**: `homepage.group`: Remote Access
- **PASS**: `homepage.name`: Termix
- **PASS**: `homepage.icon`: terminal
- **PASS**: `homepage.href`: https://termix.netgrimoire.com
- **PASS**: `homepage.description`: Web-based terminal interface

#### Uptime Kuma Labels
- **PASS**: `kuma.termix.http.name`: Termix
- **PASS**: `kuma.termix.http.url`: https://termix.netgrimoire.com

#### Caddy Labels
- **FAIL**: Missing `caddy=<domain>` label. Add `caddy=termix.netgrimoire.com`.
- **FAIL**: Missing `caddy.reverse_proxy` label. Add `caddy.reverse_proxy=termix:8080`.

#### Placement Constraints
- **PASS**: `node.hostname == docker4`

#### Volumes Path Convention
- **PASS**: `/DockerVol/termix:/app/data` adheres to the convention

#### Network References
- **PASS**: References the external `netgrimoire` network

### VERDICT: FAIL
52
False Grimoire/Netgrimoire/Audits/tmm-2026-04-03.md
Normal file
@@ -0,0 +1,52 @@

---
title: Audit - tmm.yaml
description: Gremlin audit report 2026-04-03
published: true
date: 2026-04-03T03:19:40.493Z
tags: gremlin,audit
editor: markdown
dateCreated: 2026-04-03T03:19:40.493Z
---

# Audit Report — tmm.yaml

**Date:** 2026-04-03
**File:** swarm/tmm.yaml
**Type:** Docker Swarm
**Verdict:** FAIL

---

### SWARM AUDIT RESULTS:

1. **Homepage labels**:
   - `homepage.group`: "Media Management" - PASS
   - `homepage.name`: "Tiny Media Manager" - PASS
   - `homepage.icon`: "troddit.png" - PASS
   - `homepage.href`: "https://tmm.netgrimoire.com" - PASS
   - `homepage.description`: "Media Manager" - PASS

2. **Uptime Kuma labels**:
   - No Kuma labels found. Uptime Kuma is not included in this service's deployment configuration.

3. **Caddy labels on exposed services**:
   - `caddy=tmm.netgrimoire.com` - PASS
   - `caddy.reverse_proxy=tinymediamanager_service:4000` - PASS

4. **Placement constraints**:
   - No placement constraints defined (`node.hostname`). This is acceptable if there are no specific node requirements.

5. **Volumes use /DockerVol/<service> path convention**:
   - `/data/nfs/Baxter/Docker/tinymediamanager:/config:rw` - FAIL
     - Issue: The volume does not follow the `/DockerVol/<service>` path convention.
     - Fix: Update to `/DockerVol/tinymediamanager/config`.
   - `/data/nfs/Baxter:/media:rw` - PASS
     - Note: This is a shared media-library mount rather than per-service config, so the `/DockerVol/<service>` convention is not applied to it.

6. **Network references external netgrimoire overlay**:
   - `netgrimoire` network is referenced as `external: true`.

### VERDICT: FAIL

The audit failed due to a volume path that does not adhere to the specified convention. All other checks passed.
30
False Grimoire/Netgrimoire/Audits/tunarr-2026-04-03.md
Normal file
@@ -0,0 +1,30 @@

---
title: Audit - tunarr.yaml
description: Gremlin audit report 2026-04-03
published: true
date: 2026-04-03T03:20:23.850Z
tags: gremlin,audit
editor: markdown
dateCreated: 2026-04-03T03:20:23.850Z
---

# Audit Report — tunarr.yaml

**Date:** 2026-04-03
**File:** swarm/tunarr.yaml
**Type:** Docker Swarm
**Verdict:** FAIL

---

**SWARM AUDIT — check ALL of the following:**

1. **Homepage labels**: All homepage labels are present.
2. **Uptime Kuma labels**: The labels `kuma.tunarr.http.name` and `kuma.tunarr.http.url` are present. The `<n>` placeholder in the label convention refers to a monitor slug; for a single service, one slug (`tunarr`) is acceptable.
3. **Caddy labels on exposed services**: The Caddy labels `caddy=tunarr.netgrimoire.com` and `caddy.reverse_proxy="{{upstreams 8000}}"` are present.
4. **Placement constraints**: The placement constraint `node.labels.general == true` relies on a custom node label rather than the `node.hostname` convention used elsewhere. Verify that the `general` label exists on the target nodes, or switch to a `node.hostname` constraint.
5. **Volumes use /DockerVol/<service> path convention**: The volume path `/data/nfs/Baxter/Docker/tunarr/config` does not follow the convention; it should be `/DockerVol/tunarr/config`.
6. **Network references external netgrimoire overlay**: The network `netgrimoire` is referenced correctly and is an external network.

VERDICT: FAIL
62
False Grimoire/Netgrimoire/Audits/vault-2026-04-03.md
Normal file
@@ -0,0 +1,62 @@

---
title: Audit - vault.yaml
description: Gremlin audit report 2026-04-03
published: true
date: 2026-04-03T03:21:32.070Z
tags: gremlin,audit
editor: markdown
dateCreated: 2026-04-03T03:21:32.070Z
---

# Audit Report — vault.yaml

**Date:** 2026-04-03
**File:** swarm/vault.yaml
**Type:** Docker Swarm
**Verdict:** FAIL

---

1. **Homepage labels**:
   - `homepage.group`: "Backup"
   - `homepage.name`: "Vault"
   - `homepage.icon`: "kopia.png"
   - `homepage.href`: "https://vault.netgrimoire.com"
   - `homepage.description`: "Snapshot backup and deduplication"

   **PASS**: All homepage labels are correctly defined.

2. **Uptime Kuma labels**:
   - `kuma.kopia.http.name`: "Kopia Web"
   - `kuma.kopia.http.url`: "http://vault:51515"

   **PASS**: Uptime Kuma labels are correctly defined.

3. **Caddy labels on exposed services**:
   - `caddy: vault.netgrimoire.com`
   - `caddy.reverse_proxy: "https://kopia-server-vault:51516"`

   **FAIL**: The `caddy.reverse_proxy` label is incorrectly configured. It should point to the correct service, likely "vault" instead of "kopia-server-vault".

4. **Placement constraints**:
   - `node.hostname == znas`

   **PASS**: Placement constraint correctly targets a specific node.

5. **Volumes use /DockerVol/<service> path convention**:
   - `/DockerVol/vault/config:/app/config`
   - `/DockerVol/vault/cache:/app/cache`
   - `/DockerVol/vault/cert:/app/cert`
   - `/srv/vault/backup/repository:/vault`
   - `/DockerVol/vault/logs:/app/logs`

   **FAIL**: Not all volume paths follow the `/DockerVol/<service>` convention. The backup repository path `/srv/vault/backup/repository` should be `/DockerVol/vault/backup/repository`.

6. **Network references external netgrimoire overlay**:
   - `netgrimoire`: External

   **PASS**: Network reference is correctly set to an external network.

**VERDICT: FAIL**

The configuration contains issues that need to be addressed for the infrastructure to meet the specified standards and function correctly.
45
False Grimoire/Netgrimoire/Audits/vaultwarden-2026-04-03.md
Normal file
@@ -0,0 +1,45 @@

---
title: Audit - vaultwarden.yaml
description: Gremlin audit report 2026-04-03
published: true
date: 2026-04-03T03:22:15.425Z
tags: gremlin,audit
editor: markdown
dateCreated: 2026-04-03T03:22:15.425Z
---

# Audit Report — vaultwarden.yaml

**Date:** 2026-04-03
**File:** swarm/vaultwarden.yaml
**Type:** Docker Swarm
**Verdict:** FAIL

---

1. **Homepage labels:**
   - `homepage.group`: PASS
   - `homepage.name`: PASS
   - `homepage.icon`: PASS
   - `homepage.href`: PASS
   - `homepage.description`: PASS

2. **Uptime Kuma labels:**
   - No Uptime Kuma labels are defined in the file. This is a potential issue since they may be required for proper monitoring and management of the service.

3. **Caddy labels on exposed services:**
   - `caddy=pass.netgrimoire.com`: PASS
   - `caddy.reverse_proxy=bitwarden:80`: PASS

4. **Placement constraints:**
   - `node.hostname == docker3`: PASS

5. **Volumes use /DockerVol/<service> path convention:**
   - `/DockerVol/bitwarden:/data`: PASS

6. **Network references external netgrimoire overlay:**
   - `networks: netgrimoire` with `external: true`: PASS

**VERDICT: FAIL**

The main issue is the absence of Uptime Kuma labels, which could affect monitoring and management of the service. This should be addressed to ensure comprehensive infrastructure auditing.
44
False Grimoire/Netgrimoire/Audits/vikunja-2026-04-03.md
Normal file
@@ -0,0 +1,44 @@

---
title: Audit - vikunja.yaml
description: Gremlin audit report 2026-04-03
published: true
date: 2026-04-03T03:23:18.317Z
tags: gremlin,audit
editor: markdown
dateCreated: 2026-04-03T03:23:18.317Z
---

# Audit Report — vikunja.yaml

**Date:** 2026-04-03
**File:** swarm/vikunja.yaml
**Type:** Docker Swarm
**Verdict:** FAIL

---

### Audit Report for `swarm/vikunja.yaml`

#### Homepage Labels
**PASS**: The labels `homepage.group`, `homepage.name`, `homepage.icon`, `homepage.href`, and `homepage.description` are correctly defined.

#### Uptime Kuma Labels
**FAIL**:
- **Issue**: The configuration defines only the single pair `kuma.vik.http.name` and `kuma.vik.http.url`, with no numbered instances for any additional monitors.
- **Fix**: If multiple monitors are needed, define numbered instances, e.g. `kuma.0.http.name`, `kuma.0.http.url`.

#### Caddy Labels on Exposed Services
**PASS**: The labels `caddy=task.netgrimoire.com`, `caddy.import: authentik`, and `caddy.reverse_proxy: "{{upstreams 3456}}"` are correctly defined.

#### Placement Constraints
**PASS**: The constraint `node.hostname == docker4` is applied to both services.

#### Volumes Use `/DockerVol/<service>` Path Convention
**PASS**: Both services use the correct path convention for volumes, e.g., `/DockerVol/vikunja/files` and `/DockerVol/vikunja/db`.

#### Network References External `netgrimoire` Overlay
**PASS**: The network `netgrimoire` is correctly referenced as external.

### VERDICT: FAIL

The audit found one issue that needs to be addressed before the configuration can be considered fully compliant.
27
False Grimoire/Netgrimoire/Audits/vscode-2026-04-03.md
Normal file
@@ -0,0 +1,27 @@

---
title: Audit - vscode.yaml
description: Gremlin audit report 2026-04-03
published: true
date: 2026-04-03T03:23:58.891Z
tags: gremlin,audit
editor: markdown
dateCreated: 2026-04-03T03:23:58.891Z
---

# Audit Report — vscode.yaml

**Date:** 2026-04-03
**File:** swarm/vscode.yaml
**Type:** Docker Swarm
**Verdict:** FAIL

---

1. **Homepage labels**: PASS - All required labels are present and correctly formatted.
2. **Uptime Kuma labels**: PASS - `kuma.cs.http.name` and `kuma.cs.http.url` are correctly set.
3. **Caddy labels on exposed services**: PASS - `caddy=code.netgrimoire.com` and `caddy.reverse_proxy=code-server:8443` are correctly specified.
4. **Placement constraints**: PASS - The constraint `node.hostname == znas` is present.
5. **Volumes use /DockerVol/<service> path convention**: FAIL - The volume for `Projects` does not follow the `/DockerVol/<service>` path convention. It should be renamed to `/DockerVol/Code/projects`.
6. **Network references external netgrimoire overlay**: PASS - The network `netgrimoire` is correctly referenced as external.

**VERDICT: FAIL**
54
False Grimoire/Netgrimoire/Audits/wallo-2026-04-03.md
Normal file
@@ -0,0 +1,54 @@

---
title: Audit - wallo.yaml
description: Gremlin audit report 2026-04-03
published: true
date: 2026-04-03T03:24:58.197Z
tags: gremlin,audit
editor: markdown
dateCreated: 2026-04-03T03:24:58.197Z
---

# Audit Report — wallo.yaml

**Date:** 2026-04-03
**File:** swarm/wallo.yaml
**Type:** Docker Swarm
**Verdict:** FAIL

---

### Audit Report for `wallo.yaml`

1. **Homepage labels**:
   - `homepage.group`: "PNCHarris Apps" (PASS)
   - `homepage.name`: "Wallos" (PASS)
   - `homepage.icon`: (FAIL) - Value missing; should be set, e.g. `homepage.icon=wallos.png`.
   - `homepage.href`: "https://expense.netgrimoire.com" (PASS)
   - `homepage.description`: "Subscription Manager" (PASS)

2. **Uptime Kuma labels**:
   - `kuma.wallos.http.name`: "Wallos" (PASS)
   - `kuma.wallos.http.url`: "http://wallos:80" (PASS)

3. **Caddy labels on exposed services**:
   - `caddy=expense.netgrimoire.com` (PASS)
   - `caddy.reverse_proxy=wallos:80` (PASS)

4. **Placement constraints** (`node.hostname`):
   - No placement constraints specified in the `deploy` block (FAIL) - Add `placement.constraints: ["node.hostname == <hostname>"]` as needed.

5. **Volumes use /DockerVol/<service> path convention**:
   - `/data/nfs/Baxter/Docker/wallos/db:/var/www/html/db` (FAIL) - Volumes do not follow the `/DockerVol/<service>` convention.
   - `/data/nfs/Baxter/Docker/wallos/logos:/var/www/html/images/uploads/logos` (FAIL)

6. **Network references external netgrimoire overlay**:
   - Network `netgrimoire` is referenced as external (PASS)

### Fix Recommendations

- Set the missing homepage icon value.
- Add placement constraints if needed for specific node placement.
- Rename and relocate volumes to follow the `/DockerVol/<service>` convention, e.g., `/DockerVol/wallos/db:/var/www/html/db`.

### Final Verdict
VERDICT: FAIL
45
False Grimoire/Netgrimoire/Audits/web-2026-04-03.md
Normal file
@@ -0,0 +1,45 @@

---
title: Audit - web.yaml
description: Gremlin audit report 2026-04-03
published: true
date: 2026-04-03T03:25:48.458Z
tags: gremlin,audit
editor: markdown
dateCreated: 2026-04-03T03:25:48.458Z
---

# Audit Report — web.yaml

**Date:** 2026-04-03
**File:** swarm/web.yaml
**Type:** Docker Swarm
**Verdict:** FAIL

---

1. **Homepage labels**:
   - `homepage.name` is present.
   - `homepage.icon` is present.
   - `homepage.href` is missing. Add it with the appropriate value.
   - `homepage.description` is missing. Consider adding a description for clarity.

2. **Uptime Kuma labels**:
   - `kuma.web.http.name` is present.
   - `kuma.web.http.url` is present.

3. **Caddy labels on exposed services**:
   - `caddy=www.netgrimoire.com` is present.
   - `caddy.reverse_proxy="web:80"` is present.

4. **Placement constraints**:
   - `node.labels.cpu == amd` is present, but it should be `node.labels.cpu == "amd"` for better readability.

5. **Volumes use `/DockerVol/<service>` path convention**:
   - The volumes are not using the `/DockerVol/<service>` path convention. They are located at `/data/nfs/znas/Docker/web/pages` and `/data/nfs/znas/Docker/web/apache`. Consider creating a symbolic link or moving these volumes to follow the convention.

6. **Network references external `netgrimoire` overlay**:
   - The network is correctly referencing an external `netgrimoire` overlay.

**VERDICT: FAIL**

The homepage labels are missing `homepage.href` and `homepage.description`. Additionally, the volumes need to be moved or symlinked to follow the `/DockerVol/<service>` path convention.
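A sketch of the completed homepage label set the web audit asks for. `homepage.name` and `homepage.icon` exist in the file but their values are not shown in the report, so every value below — including the href and description — is a placeholder, not taken from the file.

```yaml
deploy:
  labels:
    - homepage.name=Web                            # present in the file; value assumed
    - homepage.icon=apache.png                     # present in the file; value assumed
    - homepage.href=https://www.netgrimoire.com    # missing; assumed from the caddy domain
    - homepage.description=Static web pages        # missing; placeholder description
```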
42
False Grimoire/Netgrimoire/Audits/whisparr-2026-04-03.md
Normal file
@@ -0,0 +1,42 @@

---
title: Audit - whisparr.yaml
description: Gremlin audit report 2026-04-03
published: true
date: 2026-04-03T03:27:24.537Z
tags: gremlin,audit
editor: markdown
dateCreated: 2026-04-03T03:27:24.537Z
---

# Audit Report — whisparr.yaml

**Date:** 2026-04-03
**File:** swarm/whisparr.yaml
**Type:** Docker Swarm
**Verdict:** FAIL

---

1. **Homepage labels**: All homepage labels are present in the configuration.
   - Present for both Whisparr and the commented-out Greenfin service.

2. **Uptime Kuma labels**:
   - **FAIL**: Uptime Kuma is not defined in the provided YAML file.

3. **Caddy labels on exposed services**:
   - **FAIL**: No Caddy labels are present in the configuration.

4. **Placement constraints**:
   - **PASS**: All placement constraints specify `node.hostname == docker1`.

5. **Volumes use /DockerVol/<service> path convention**:
   - **PASS**: Volumes follow the `/DockerVol/<service>` naming convention.
     - Whisparr: `/DockerVol/Whisparr:/config`
     - Greenfin (commented out): `/DockerVol/greenfin/config:/config`

6. **Network references external netgrimoire overlay**:
   - **PASS**: The network `netgrimoire` is referenced as an external network.

**VERDICT: FAIL**

The configuration lacks essential labels for Uptime Kuma and Caddy, which are critical for monitoring and routing services. Additionally, the Greenfin service is commented out but would still need to adhere to the volume and placement constraints guidelines if uncommented.
76
False Grimoire/Netgrimoire/Audits/wiki-2026-04-03.md
Normal file
76
False Grimoire/Netgrimoire/Audits/wiki-2026-04-03.md
Normal file
|
|
@ -0,0 +1,76 @@
---
title: Audit - wiki.yaml
description: Gremlin audit report 2026-04-03
published: true
date: 2026-04-03T03:28:56.635Z
tags: gremlin,audit
editor: markdown
dateCreated: 2026-04-03T03:28:56.635Z
---

# Audit Report — wiki.yaml

**Date:** 2026-04-03
**File:** swarm/wiki.yaml
**Type:** Docker Swarm
**Verdict:** FAIL

---

### SWARM AUDIT CHECKLIST:

1. **Homepage labels:**
   - **PASS**: `wikijs`, `drawio`
   - **FAIL**: No homepage labels defined for `wikijs-db`.
   - **Fix**: Add the following labels to `wikijs-db`:
     ```yaml
     labels:
       homepage.group: "Database"
       homepage.name: "PostgreSQL"
       homepage.icon: "postgres.png"
       homepage.href: "https://www.postgresql.org"
       homepage.description: "Relational Database"
       diun.enable: "true"
     ```

2. **Uptime Kuma labels:**
   - **FAIL**: `wikijs` and `drawio` are missing Kuma labels.
   - **Fix**: Add the following labels to both `wikijs` and `drawio`:
     ```yaml
     labels:
       kuma.<n>.http.name: "Wiki.js"
       kuma.<n>.http.url: "https://wiki.netgrimoire.com"
       # Replace <n> with a sequential number if multiple instances are needed.
     ```

3. **Caddy labels on exposed services:**
   - **PASS**: `wikijs` has `caddy=<domain>` labels.
   - **FAIL**: `drawio` is missing Caddy labels for the reverse proxy.
   - **Fix**: Add the following labels to `drawio`:
     ```yaml
     labels:
       caddy: draw.netgrimoire.com
       caddy.reverse_proxy: "{{upstreams 8080}}"
     ```

4. **Placement constraints:**
   - **FAIL**: No placement constraints for `drawio`.
   - **Fix**: Add the following constraints to `drawio`:
     ```yaml
     deploy:
       mode: replicated
       replicas: 1
       placement:
         constraints:
           - node.hostname == dockerpi1
           - node.labels.cpu == arm
     ```

5. **Volumes use /DockerVol/<service> path convention:**
   - **PASS**: All services follow this convention.

6. **Network references external netgrimoire overlay:**
   - **PASS**: `wikijs-db`, `wikijs`, and `drawio` all reference the external `netgrimoire` network.

### VERDICT:
FAIL
218
False Grimoire/Netgrimoire/Authentication/ldap-client-setup.md
Normal file

@ -0,0 +1,218 @@
---
title: LDAP Client Setup
description:
published: true
date: 2026-02-20T04:33:31.862Z
tags:
editor: markdown
dateCreated: 2026-01-21T13:21:40.588Z
---

# ✅ LLDAP + SSSD Node Join Checklist (FINAL)

## Assumptions

- LLDAP server: docker4
- LDAP URI: ldap://docker4:3890
- Base DN: dc=netgrimoire,dc=com
- Users/groups use lowercase attributes (uidnumber, gidnumber, homedirectory, unixshell, uniquemember)
- No TLS (lab only)
- Docker group GID = 1964 in LDAP
- This node is Ubuntu/Debian-based

## 0️⃣ Safety first (do this every time)

- Open two SSH sessions to the node
- Confirm you can sudo
- Do not edit nsswitch.conf until SSSD is confirmed working

## 1️⃣ Install required packages

```bash
sudo apt update
sudo apt install -y sssd sssd-ldap sssd-tools libpam-sss libnss-sss libsss-sudo ldap-utils oddjob oddjob-mkhomedir
```

Ensure the legacy LDAP NSS stack is NOT installed:

```bash
sudo apt purge -y libnss-ldap libpam-ldap nslcd libnss-ldapd libpam-ldapd || true
sudo apt autoremove -y
```

## 2️⃣ Verify LDAP connectivity (must pass)

```bash
getent hosts docker4
nc -vz docker4 3890
ldapwhoami -x -H ldap://docker4:3890 \
  -D 'uid=admin,ou=people,dc=netgrimoire,dc=com' -w 'F@lcon13'
```

❌ If any of these fail → stop and fix networking/DNS/firewall first.

## 3️⃣ Create /etc/sssd/sssd.conf (single file, no includes)

```bash
sudo vi /etc/sssd/sssd.conf
```

Paste exactly:

```ini
[sssd]
services = nss, pam, ssh
config_file_version = 2
domains = netgrimoire.com

[nss]
filter_users = root
filter_groups = root

[pam]
offline_failed_login_attempts = 3
offline_failed_login_delay = 5

[ssh]

[domain/netgrimoire.com]
id_provider = ldap
auth_provider = ldap
chpass_provider = ldap
access_provider = permit

enumerate = false
cache_credentials = true

ldap_uri = ldap://docker4:3890
ldap_schema = rfc2307bis
ldap_search_base = dc=netgrimoire,dc=com

ldap_auth_disable_tls_never_use_in_production = true
ldap_id_use_start_tls = false
ldap_tls_reqcert = never

ldap_default_bind_dn = uid=admin,ou=people,dc=netgrimoire,dc=com
ldap_default_authtok = F@lcon13

# USERS (lowercase attributes)
ldap_user_search_base = ou=people,dc=netgrimoire,dc=com
ldap_user_object_class = posixAccount
ldap_user_name = uid
ldap_user_gecos = cn
ldap_user_uid_number = uidnumber
ldap_user_gid_number = gidnumber
ldap_user_home_directory = homedirectory
ldap_user_shell = unixshell

# GROUPS (lowercase attributes)
ldap_group_search_base = ou=groups,dc=netgrimoire,dc=com
ldap_group_object_class = groupOfUniqueNames
ldap_group_name = cn
ldap_group_gid_number = gidnumber
ldap_group_member = uniquemember
```

## 4️⃣ Fix permissions (SSSD will NOT start without this)

```bash
sudo chown root:root /etc/sssd/sssd.conf
sudo chmod 600 /etc/sssd/sssd.conf
sudo chmod 700 /etc/sssd
```

Validate:

```bash
sudo sssctl config-check
```

## 5️⃣ Start SSSD cleanly

```bash
sudo systemctl enable sssd
sudo systemctl stop sssd
sudo rm -f /var/lib/sss/db/* /var/lib/sss/mc/*
sudo systemctl start sssd
```

Verify:

```bash
sudo systemctl status sssd --no-pager -l
sudo sssctl domain-status netgrimoire.com
```

Expected:

```
Online status: Online
LDAP: docker4
```

## 6️⃣ Enable NSS lookups via SSSD (LDAP-first)

Edit /etc/nsswitch.conf:

```
passwd: sss files systemd
group:  sss files systemd
shadow: sss files
```

Test:

```bash
getent passwd graymutt
getent group docker
id graymutt
```

## 7️⃣ 🔑 RE-INITIALIZE PAM (THIS IS THE STEP YOU REMEMBERED)

This step is mandatory on Debian/Ubuntu.

```bash
sudo pam-auth-update
```

In the menu, ENABLE:

- ✅ Unix authentication
- ✅ SSSD
- ✅ Create home directory on login

DISABLE:

- ❌ LDAP Authentication (legacy)
- ❌ Kerberos (unless you explicitly use it)

Press OK.

## 8️⃣ Verify PAM wiring

```bash
grep pam_sss.so /etc/pam.d/common-*
grep pam_mkhomedir /etc/pam.d/common-session
```

You should see:

```
session required pam_mkhomedir.so skel=/etc/skel umask=0022
```

## 9️⃣ Final login test (definitive)

```bash
ssh graymutt@localhost
```

Expected:

- Login succeeds
- /home/graymutt is auto-created
- Correct LDAP groups present

## 🔟 (Optional but recommended) Remove local docker group

If the node has a local docker group (gid 998):

```bash
sudo groupdel docker
```

Verify:

```bash
getent group docker
```

Expected:

```
docker:x:1964:graymutt,dockhand
```

## 🧪 Fast troubleshooting commands

```bash
sudo sssctl domain-status netgrimoire.com
sudo tail -n 200 /var/log/sssd/sssd_netgrimoire.com.log
sudo systemctl status sssd --no-pager -l
```
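If group membership still looks wrong after these checks, querying the group entry directly shows exactly what SSSD sees. A diagnostic sketch against this lab's server (it assumes the docker4 LLDAP instance is reachable, so it only runs on the lab network):

```bash
ldapsearch -x -H ldap://docker4:3890 \
  -D 'uid=admin,ou=people,dc=netgrimoire,dc=com' -w 'F@lcon13' \
  -b 'ou=groups,dc=netgrimoire,dc=com' '(cn=docker)' gidnumber uniquemember
```

The `gidnumber` should come back as 1964 and `uniquemember` should list the full DNs of the expected users.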
841
False Grimoire/Netgrimoire/Backup/Immich_Backup.md
Normal file

@ -0,0 +1,841 @@
---
title: Immich Backup and Restore
description: Immich backup with Kopia
published: true
date: 2026-02-20T04:11:52.181Z
tags:
editor: markdown
dateCreated: 2026-02-14T03:14:32.594Z
---

# Immich Backup and Recovery Guide

## Overview

This document provides comprehensive backup and recovery procedures for the Immich photo server. Because Immich's data is stored on standard filesystems (not ZFS or BTRFS), filesystem snapshots are not available, so we rely on Immich's native backup approach combined with Kopia for offsite storage in vaults.

## Quick Reference

### Common Backup Commands

```bash
# Run a manual backup (all components)
/opt/scripts/backup-immich.sh

# Backup just the database
docker exec -t immich_postgres pg_dump --clean --if-exists \
  --dbname=immich --username=postgres | gzip > "/opt/immich-backups/dump.sql.gz"

# List Kopia snapshots
kopia snapshot list --tags immich

# View backup logs
tail -f /var/log/immich-backup.log
```

### Common Restore Commands

```bash
# Restore database from backup
gunzip < /opt/immich-backups/immich-YYYYMMDD_HHMMSS/dump.sql.gz | \
  docker exec -i immich_postgres psql --username=postgres --dbname=immich

# Restore from Kopia to new server
kopia snapshot list --tags tier1-backup
kopia restore <snapshot-id> /opt/immich-backups/

# Check container status after restore
docker compose ps
docker compose logs -f
```
## Critical Components to Backup

### 1. Docker Compose File
- **Location**: `/opt/immich/docker-compose.yml` (or your installation path)
- **Purpose**: Defines all containers, networks, and volumes
- **Importance**: Critical for recreating the exact container configuration

### 2. Configuration Files
- **Primary Config**: `/opt/immich/.env`
- **Purpose**: Database credentials, upload locations, timezone settings
- **Importance**: Required for proper service initialization

### 3. Database
- **PostgreSQL Data**: Contains all metadata, user accounts, albums, sharing settings, face recognition data, timeline information
- **Container**: `immich_postgres`
- **Database Name**: `immich` (default)
- **User**: `postgres` (default)
- **Backup Method**: `pg_dump` (official Immich recommendation)

### 4. Photo/Video Library
- **Upload Storage**: All original photos and videos uploaded by users
- **Location**: `/srv/immich/library` (per your .env UPLOAD_LOCATION)
- **Size**: Typically the largest component
- **Critical**: This is your actual data - photos cannot be recreated

### 5. Additional Important Data
- **Model Cache**: Docker volume `immich_model-cache` (machine learning models, can be re-downloaded)
- **External Paths**: `/export/photos` and `/srv/NextCloud-AIO` (mounted as read-only in your setup)

## Backup Strategy

### Two-Tier Backup Approach

We use a **two-tier approach** combining Immich's native backup method with Kopia for offsite storage:

1. **Tier 1 (Local)**: Immich database dump + library backup creates consistent, component-level backups
2. **Tier 2 (Offsite)**: Kopia snapshots the local backups and syncs to vaults

#### Why This Approach?

- **Best of both worlds**: Native database dump ensures Immich-specific consistency, Kopia provides deduplication and offsite protection
- **Component-level restore**: Can restore individual components (just database, just library, etc.)
- **Disaster recovery**: Full system restore from Kopia backups on new server
- **Efficient storage**: Kopia's deduplication reduces storage needs for offsite copies

#### Backup Frequency
- **Daily**: Immich backup runs at 2 AM
- **Daily**: Kopia snapshot of backups runs at 3 AM
- **Retention (Local)**: 7 days of Immich backups (managed by script)
- **Retention (Kopia/Offsite)**: 30 daily, 12 weekly, 12 monthly
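The offsite retention above can be pinned on the backup path's Kopia policy so pruning happens repository-side. A configuration sketch, assuming the policy is set on the snapshot source used by the script:

```bash
kopia policy set /opt/immich-backups \
  --keep-daily 30 \
  --keep-weekly 12 \
  --keep-monthly 12
```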
### Immich Native Backup Method

Immich's official backup approach uses `pg_dump` for the database:
- Uses `pg_dump` with `--clean --if-exists` flags for consistent database dumps
- Hot backup without stopping PostgreSQL
- Produces compressed `.sql.gz` files
- Database remains available during backup

For the photo/video library, we use a **hybrid approach**:
- **Database**: Backed up locally as `dump.sql.gz` for fast component-level restore
- **Library**: Backed up directly by Kopia (no tar) for optimal deduplication and incremental backups

**Why not tar the library?**
- Kopia deduplicates at the file level - adding 1 photo shouldn't require backing up the entire library again
- Individual file access for selective restore
- Better compression and faster incremental backups
- Lower risk - corrupted tar loses everything, corrupted file only affects that file

**Key Features:**
- No downtime required
- Consistent point-in-time snapshot
- Standard PostgreSQL format (portable across systems)
- Efficient incremental backups of photo library

## Setting Up Immich Backups

### Prereq:
Make sure you are connected to the repository:

```bash
sudo kopia repository connect server \
  --url=https://192.168.5.10:51516 \
  --override-username=admin \
  --server-cert-fingerprint=696a4999f594b5273a174fd7cab677d8dd1628f9b9d27e557daa87103ee064b2
```

#### Step 1: Configure Backup Location

Set the backup destination:

```bash
# Create the backup directory
mkdir -p /opt/immich-backups
chown -R root:root /opt/immich-backups
chmod 755 /opt/immich-backups
```

#### Step 2: Manual Backup Commands

```bash
cd /opt/immich

# Backup database using Immich's recommended method
docker exec -t immich_postgres pg_dump \
  --clean \
  --if-exists \
  --dbname=immich \
  --username=postgres \
  | gzip > "/opt/immich-backups/dump.sql.gz"

# Backup configuration files
cp docker-compose.yml /opt/immich-backups/
cp .env /opt/immich-backups/

# Backup library with Kopia (no tar - better deduplication)
kopia snapshot create /srv/immich/library \
  --tags immich,library,photos \
  --description "Immich library manual backup"
```

**What gets created:**
- Local backup directory: `/opt/immich-backups/immich-YYYYMMDD_HHMMSS/`
- Contains: `dump.sql.gz` (database), config files
- Kopia snapshots:
  - `/opt/immich-backups` (database + config)
  - `/srv/immich/library` (photos/videos, no tar)
  - `/opt/immich` (installation directory)
#### Step 3: Automated Backup Script

Create `/opt/scripts/backup-immich.sh`:

```bash
#!/bin/bash

# Immich Automated Backup Script
# This creates Immich backups, then snapshots them with Kopia for offsite storage

set -e

BACKUP_DATE=$(date +%Y%m%d_%H%M%S)
LOG_FILE="/var/log/immich-backup.log"
IMMICH_DIR="/opt/immich"
BACKUP_DIR="/opt/immich-backups"
KEEP_DAYS=7

# Database credentials from .env
DB_USERNAME="postgres"
DB_DATABASE_NAME="immich"
POSTGRES_CONTAINER="immich_postgres"

echo "[${BACKUP_DATE}] ========================================" | tee -a "$LOG_FILE"
echo "[${BACKUP_DATE}] Starting Immich backup process" | tee -a "$LOG_FILE"

# Step 1: Run Immich database backup using official method
echo "[${BACKUP_DATE}] Running Immich database backup..." | tee -a "$LOG_FILE"

cd "$IMMICH_DIR"

# Create backup directory with timestamp
mkdir -p "${BACKUP_DIR}/immich-${BACKUP_DATE}"

# Backup database using Immich's recommended method
docker exec -t ${POSTGRES_CONTAINER} pg_dump \
  --clean \
  --if-exists \
  --dbname=${DB_DATABASE_NAME} \
  --username=${DB_USERNAME} \
  | gzip > "${BACKUP_DIR}/immich-${BACKUP_DATE}/dump.sql.gz"

BACKUP_EXIT=${PIPESTATUS[0]}

if [ $BACKUP_EXIT -ne 0 ]; then
    echo "[${BACKUP_DATE}] ERROR: Immich database backup failed with exit code ${BACKUP_EXIT}" | tee -a "$LOG_FILE"
    exit 1
fi

echo "[${BACKUP_DATE}] Immich database backup completed successfully" | tee -a "$LOG_FILE"

# Step 2: Verify library location exists (Kopia will backup directly, no tar needed)
echo "[${BACKUP_DATE}] Verifying library location..." | tee -a "$LOG_FILE"

# Get the upload location from docker-compose volumes
UPLOAD_LOCATION="/srv/immich/library"

if [ -d "${UPLOAD_LOCATION}" ]; then
    LIBRARY_SIZE=$(du -sh "${UPLOAD_LOCATION}" | cut -f1)
    echo "[${BACKUP_DATE}] Library location verified: ${UPLOAD_LOCATION} (${LIBRARY_SIZE})" | tee -a "$LOG_FILE"
    echo "[${BACKUP_DATE}] Kopia will backup library files directly (no tar, better deduplication)" | tee -a "$LOG_FILE"
else
    echo "[${BACKUP_DATE}] WARNING: Upload location not found at ${UPLOAD_LOCATION}" | tee -a "$LOG_FILE"
fi

# Step 3: Backup configuration files
echo "[${BACKUP_DATE}] Backing up configuration files..." | tee -a "$LOG_FILE"

cp "${IMMICH_DIR}/docker-compose.yml" "${BACKUP_DIR}/immich-${BACKUP_DATE}/"
cp "${IMMICH_DIR}/.env" "${BACKUP_DIR}/immich-${BACKUP_DATE}/"

echo "[${BACKUP_DATE}] Configuration backup completed" | tee -a "$LOG_FILE"

# Step 4: Clean up old backups
echo "[${BACKUP_DATE}] Cleaning up backups older than ${KEEP_DAYS} days..." | tee -a "$LOG_FILE"

find "${BACKUP_DIR}" -maxdepth 1 -type d -name "immich-*" -mtime +${KEEP_DAYS} -exec rm -rf {} \; 2>&1 | tee -a "$LOG_FILE"

echo "[${BACKUP_DATE}] Local backup cleanup completed" | tee -a "$LOG_FILE"

# Step 5: Create Kopia snapshot of backup directory
echo "[${BACKUP_DATE}] Creating Kopia snapshot..." | tee -a "$LOG_FILE"

kopia snapshot create "${BACKUP_DIR}" \
  --tags immich:tier1-backup \
  --description "Immich backup ${BACKUP_DATE}" \
  2>&1 | tee -a "$LOG_FILE"

KOPIA_EXIT=${PIPESTATUS[0]}

if [ $KOPIA_EXIT -ne 0 ]; then
    echo "[${BACKUP_DATE}] WARNING: Kopia snapshot failed with exit code ${KOPIA_EXIT}" | tee -a "$LOG_FILE"
    echo "[${BACKUP_DATE}] Local Immich backup exists but offsite copy may be incomplete" | tee -a "$LOG_FILE"
    exit 2
fi

echo "[${BACKUP_DATE}] Kopia snapshot completed successfully" | tee -a "$LOG_FILE"

# Step 6: Backup the library directly with Kopia (better deduplication than tar)
echo "[${BACKUP_DATE}] Creating Kopia snapshot of library..." | tee -a "$LOG_FILE"

if [ -d "${UPLOAD_LOCATION}" ]; then
    kopia snapshot create "${UPLOAD_LOCATION}" \
      --tags immich:library \
      --description "Immich library ${BACKUP_DATE}" \
      2>&1 | tee -a "$LOG_FILE"

    KOPIA_LIB_EXIT=${PIPESTATUS[0]}

    if [ $KOPIA_LIB_EXIT -ne 0 ]; then
        echo "[${BACKUP_DATE}] WARNING: Kopia library snapshot failed" | tee -a "$LOG_FILE"
    else
        echo "[${BACKUP_DATE}] Library snapshot completed successfully" | tee -a "$LOG_FILE"
    fi
fi

# Step 7: Also backup the Immich installation directory (configs, compose files)
#echo "[${BACKUP_DATE}] Backing up Immich installation directory..." | tee -a "$LOG_FILE"

#kopia snapshot create "${IMMICH_DIR}" \
#  --tags immich,config,docker-compose \
#  --description "Immich config ${BACKUP_DATE}" \
#  2>&1 | tee -a "$LOG_FILE"

echo "[${BACKUP_DATE}] Backup process completed successfully" | tee -a "$LOG_FILE"
echo "[${BACKUP_DATE}] ========================================" | tee -a "$LOG_FILE"

# Optional: Send notification on completion
# Add your notification method here (email, webhook, etc.)
```

Make it executable:
```bash
chmod +x /opt/scripts/backup-immich.sh
```
Add to crontab (daily at 2 AM):
```bash
# Edit root's crontab
crontab -e

# Add this line:
0 2 * * * /opt/scripts/backup-immich.sh 2>&1 | logger -t immich-backup
```

### Offsite Backup to Vaults

After local Kopia snapshots are created, they sync to your offsite vaults automatically through Kopia's repository configuration.
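To confirm the offsite copy is actually current, it helps to spot-check the repository connection and the latest snapshot times. A quick read-only sketch (safe to run anytime; output format is Kopia's, not guaranteed here):

```bash
# Confirm which repository this node is connected to
kopia repository status

# Show recent snapshots for the immich tag
kopia snapshot list --tags immich | tail -5
```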
## Recovery Procedures

### Understanding Two Recovery Methods

We have **two restore methods** depending on the scenario:

1. **Local Restore** (Preferred): For component-level or same-server recovery
2. **Kopia Full Restore**: For complete disaster recovery to a new server

### Method 1: Local Restore (Recommended)

Use this method when:
- Restoring on the same/similar server
- Restoring specific components (just database, just library, etc.)
- Recovering from local Immich backups

#### Full System Restore

```bash
cd /opt/immich

# Stop Immich
docker compose down

# List available backups
ls -lh /opt/immich-backups/

# Choose a database backup
BACKUP_PATH="/opt/immich-backups/immich-YYYYMMDD_HHMMSS"

# Restore database
gunzip < ${BACKUP_PATH}/dump.sql.gz | \
  docker compose exec -T database psql --username=postgres --dbname=immich

# Restore library from Kopia
kopia snapshot list --tags library
kopia restore <library-snapshot-id> /srv/immich/library

# Fix permissions
chown -R 1000:1000 /srv/immich/library

# Restore configuration (review changes first)
cp ${BACKUP_PATH}/.env .env.restored
cp ${BACKUP_PATH}/docker-compose.yml docker-compose.yml.restored

# Start Immich
docker compose up -d

# Monitor logs
docker compose logs -f
```
#### Example: Restore Only Database

```bash
cd /opt/immich

# Stop Immich
docker compose down

# Start only database
docker compose up -d database
sleep 10

# Restore database from backup
BACKUP_PATH="/opt/immich-backups/immich-YYYYMMDD_HHMMSS"
gunzip < ${BACKUP_PATH}/dump.sql.gz | \
  docker compose exec -T database psql --username=postgres --dbname=immich

# Start all services
docker compose down
docker compose up -d

# Verify
docker compose logs -f
```

#### Example: Restore Only Library

```bash
cd /opt/immich

# Stop Immich
docker compose down

# Restore library from Kopia
kopia snapshot list --tags library
kopia restore <library-snapshot-id> /srv/immich/library

# Fix permissions
chown -R 1000:1000 /srv/immich/library

# Start Immich
docker compose up -d
```

### Method 2: Complete Server Rebuild (Kopia Restore)

Use this when recovering to a completely new server or when local backups are unavailable.

#### Step 1: Prepare New Server

```bash
# Update system
apt update && apt upgrade -y

# Install Docker
curl -fsSL https://get.docker.com | sh
systemctl enable docker
systemctl start docker

# Install Docker Compose
apt install docker-compose-plugin -y

# Install Kopia
curl -s https://kopia.io/signing-key | sudo gpg --dearmor -o /usr/share/keyrings/kopia-keyring.gpg
echo "deb [signed-by=/usr/share/keyrings/kopia-keyring.gpg] https://packages.kopia.io/apt/ stable main" | sudo tee /etc/apt/sources.list.d/kopia.list
apt update
apt install kopia -y

# Create directory structure
mkdir -p /opt/immich
mkdir -p /opt/immich-backups
mkdir -p /srv/immich/library
mkdir -p /srv/immich/postgres
```

#### Step 2: Restore Kopia Repository

```bash
# Connect to your offsite vault
kopia repository connect server \
  --url=https://192.168.5.10:51516 \
  --override-username=admin \
  --server-cert-fingerprint=696a4999f594b5273a174fd7cab677d8dd1628f9b9d27e557daa87103ee064b2

# List available snapshots
kopia snapshot list --tags immich
```
#### Step 3: Restore Configuration

```bash
# Find and restore the config snapshot
kopia snapshot list --tags config

# Restore to the Immich directory
kopia restore <snapshot-id> /opt/immich/

# Verify critical files
ls -la /opt/immich/.env
ls -la /opt/immich/docker-compose.yml
```

#### Step 4: Restore Immich Backups Directory

```bash
# Restore the entire backup directory from Kopia
kopia snapshot list --tags tier1-backup

# Restore the most recent backup
kopia restore <snapshot-id> /opt/immich-backups/

# Verify backups were restored
ls -la /opt/immich-backups/
```

#### Step 5: Restore Database and Library

```bash
cd /opt/immich

# Find the most recent backup
LATEST_BACKUP=$(ls -td /opt/immich-backups/immich-* | head -1)
echo "Restoring from: $LATEST_BACKUP"

# Start database container
docker compose up -d database
sleep 30

# Restore database
gunzip < ${LATEST_BACKUP}/dump.sql.gz | \
  docker compose exec -T database psql --username=postgres --dbname=immich

# Restore library from Kopia
kopia snapshot list --tags library
kopia restore <library-snapshot-id> /srv/immich/library

# Fix permissions
chown -R 1000:1000 /srv/immich/library
```

#### Step 6: Start and Verify Immich

```bash
cd /opt/immich

# Pull latest images (or use versions from backup if preferred)
docker compose pull

# Start all services
docker compose up -d

# Monitor logs
docker compose logs -f
```

#### Step 7: Post-Restore Verification

```bash
# Check container status
docker compose ps

# Test web interface
curl -I http://localhost:2283

# Verify database
docker compose exec database psql -U postgres -d immich -c "SELECT COUNT(*) FROM users;"

# Check library storage
ls -lah /srv/immich/library/
```
### Scenario 2: Restore Individual User's Photos

To restore a single user's library without affecting others:

**Option A: Using Kopia Mount (Recommended)**

```bash
# Mount the Kopia snapshot
kopia snapshot list --tags library
mkdir -p /mnt/kopia-library
kopia mount <library-snapshot-id> /mnt/kopia-library &

# Find the user's directory (using user ID from database)
# User libraries are typically in: library/{user-uuid}/
USER_UUID="xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"

# Copy user's data back
rsync -av /mnt/kopia-library/${USER_UUID}/ \
  /srv/immich/library/${USER_UUID}/

# Fix permissions
chown -R 1000:1000 /srv/immich/library/${USER_UUID}/

# Unmount
kopia unmount /mnt/kopia-library

# Restart Immich to recognize changes
cd /opt/immich
docker compose restart immich-server
```

**Option B: Selective Kopia Restore**

```bash
cd /opt/immich
docker compose down

# Restore just the specific user's directory
kopia snapshot list --tags library
USER_UUID="xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"

# Restore only the user's subdirectory by appending its path to the snapshot ID
kopia restore <library-snapshot-id>/${USER_UUID} /srv/immich/library/${USER_UUID}

# Fix permissions
chown -R 1000:1000 /srv/immich/library/${USER_UUID}/

# Start Immich
docker compose up -d
```

### Scenario 3: Database Recovery Only

If only the database is corrupted but library data is intact:

```bash
cd /opt/immich

# Stop Immich
docker compose down

# Start only database
docker compose up -d database
sleep 30

# Restore from most recent backup
LATEST_BACKUP=$(ls -td /opt/immich-backups/immich-* | head -1)
gunzip < ${LATEST_BACKUP}/dump.sql.gz | \
  docker compose exec -T database psql --username=postgres --dbname=immich

# Start all services
docker compose down
docker compose up -d

# Verify
docker compose logs -f
```

### Scenario 4: Configuration Recovery Only

If you only need to restore configuration files:

```bash
cd /opt/immich

# Find the most recent backup
LATEST_BACKUP=$(ls -td /opt/immich-backups/immich-* | head -1)

# Stop Immich
docker compose down

# Backup current config (just in case)
cp .env .env.pre-restore
cp docker-compose.yml docker-compose.yml.pre-restore

# Restore config from backup
cp ${LATEST_BACKUP}/.env ./
cp ${LATEST_BACKUP}/docker-compose.yml ./

# Restart
docker compose up -d
```
## Verification and Testing

### Regular Backup Verification

Perform monthly restore tests to ensure backups are valid:

```bash
# Test restore to temporary location
mkdir -p /tmp/backup-test
kopia snapshot list --tags immich
kopia restore <snapshot-id> /tmp/backup-test/

# Verify files exist and are readable
ls -lah /tmp/backup-test/
gunzip < /tmp/backup-test/immich-*/dump.sql.gz | head -100

# Cleanup
rm -rf /tmp/backup-test/
```

### Backup Monitoring Script

Create `/opt/scripts/check-immich-backup.sh`:

```bash
#!/bin/bash

# Check last backup age
LAST_BACKUP=$(ls -td /opt/immich-backups/immich-* 2>/dev/null | head -1)

if [ -z "$LAST_BACKUP" ]; then
    echo "WARNING: No Immich backups found"
    exit 1
fi

BACKUP_DATE=$(basename "$LAST_BACKUP" | sed 's/immich-//')
BACKUP_EPOCH=$(date -d "${BACKUP_DATE:0:8} ${BACKUP_DATE:9:2}:${BACKUP_DATE:11:2}:${BACKUP_DATE:13:2}" +%s 2>/dev/null)

if [ -z "$BACKUP_EPOCH" ]; then
    echo "WARNING: Cannot parse backup date"
    exit 1
fi

NOW=$(date +%s)
AGE_HOURS=$(( ($NOW - $BACKUP_EPOCH) / 3600 ))

if [ $AGE_HOURS -gt 26 ]; then
    echo "WARNING: Last Immich backup is $AGE_HOURS hours old"
    # Send alert (email, Slack, etc.)
    exit 1
else
    echo "OK: Last backup $AGE_HOURS hours ago"
fi

# Check Kopia snapshots
KOPIA_LAST=$(kopia snapshot list --tags immich --json 2>/dev/null | jq -r '.[0].startTime' 2>/dev/null)

if [ -n "$KOPIA_LAST" ]; then
    echo "Last Kopia snapshot: $KOPIA_LAST"
else
    echo "WARNING: Cannot verify Kopia snapshots"
fi
```
|
||||
|
||||
## Disaster Recovery Checklist

When disaster strikes, follow this checklist:

- [ ] Confirm scope of failure (server, storage, specific component)
- [ ] Gather server information (hostname, IP, DNS records)
- [ ] Access offsite backup vault
- [ ] Provision new server (if needed)
- [ ] Install Docker and dependencies
- [ ] Connect to Kopia repository
- [ ] Restore configurations first
- [ ] Restore database
- [ ] Restore library data
- [ ] Start services and verify
- [ ] Test photo viewing and uploads
- [ ] Verify user accounts and albums
- [ ] Update DNS records if needed
- [ ] Document any issues encountered
- [ ] Update recovery procedures based on experience

## Important Notes

1. **External Mounts**: Your setup has `/export/photos` and `/srv/NextCloud-AIO` mounted as external read-only sources. These are not backed up by this script - ensure they have their own backup strategy.

2. **Database Password**: The default database password in your .env is `postgres`. Change this to a secure random password for production use.

3. **Permissions**: Library files should be owned by UID 1000:1000 for Immich to access them properly:
   ```bash
   chown -R 1000:1000 /srv/immich/library
   ```

4. **Testing**: Always test recovery procedures in a lab environment before trusting them in production.

5. **Documentation**: Keep this guide and server details in a separate location (printed copy, password manager, etc.).

6. **Retention Policy**: Review Kopia retention settings periodically to balance storage costs with recovery needs.

## Backup Architecture Notes

### Why Two Backup Layers?

**Immich Native Backups** (Tier 1):
- ✅ Uses official Immich backup method (`pg_dump`)
- ✅ Fast, component-aware backups
- ✅ Selective restore (can restore just database or just library)
- ✅ Standard PostgreSQL format (portable)
- ❌ No deduplication (full copies each time)
- ❌ Limited to local storage initially

**Kopia Snapshots** (Tier 2):
- ✅ Deduplication and compression
- ✅ Efficient offsite replication to vaults
- ✅ Point-in-time recovery across multiple versions
- ✅ Disaster recovery to completely new infrastructure
- ❌ Less component-aware (treats as files)
- ❌ Slower for granular component restore

### Storage Efficiency

Using this two-tier approach:
- **Local**: Database backups (~7 days retention, relatively small)
- **Kopia**: Database backups + library (efficient deduplication)

**Why library goes directly to Kopia without tar:**

Example with 500GB library, adding 10GB photos/month:

**With tar approach:**
- Month 1: Backup 500GB tar
- Month 2: Add 10GB photos → Entire 510GB tar changes → Backup 510GB
- Month 3: Add 10GB photos → Entire 520GB tar changes → Backup 520GB
- **Total storage needed**: 500 + 510 + 520 = 1,530GB

**Without tar (Kopia direct):**
- Month 1: Backup 500GB
- Month 2: Add 10GB photos → Kopia only backs up the 10GB new files
- Month 3: Add 10GB photos → Kopia only backs up the 10GB new files
- **Total storage needed**: 500 + 10 + 10 = 520GB

**Savings**: ~66% reduction in storage and backup time!

This is why we:
- Keep database dumps local (small, fast component restore)
- Let Kopia handle library directly (efficient, incremental, deduplicated)

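The arithmetic above can be checked directly (a trivial sketch; the 500GB library and 10GB/month figures are the example's assumptions, not measurements):

```shell
# tar approach: every month re-uploads the entire archive (GB)
tar_total=$((500 + 510 + 520))

# Kopia direct: the initial 500GB plus only the 10GB of new files each month
kopia_total=$((500 + 10 + 10))

# Integer percentage of storage saved
savings=$(( (tar_total - kopia_total) * 100 / tar_total ))
echo "tar: ${tar_total}GB, kopia direct: ${kopia_total}GB, saved: ${savings}%"
```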
### Compression and Deduplication

**Database backups** use `gzip` compression:
- Typically 80-90% compression ratio for SQL dumps
- Small enough to keep local copies

**Library backups** use Kopia's built-in compression and deduplication:
- Photos (JPEG/HEIC): Already compressed, Kopia skips re-compression
- Videos: Already compressed, minimal additional compression
- RAW files: Some compression possible
- **Deduplication**: If you upload the same photo twice, Kopia stores it once
- **Block-level dedup**: Even modified photos share unchanged blocks

This is far more efficient than tar + gzip, which would:
- Compress already-compressed photos (wasted CPU, minimal benefit)
- Store entire archive even if only 1 file changed
- Prevent deduplication across backups

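The deduplication idea can be illustrated with a toy content-addressed store (a hypothetical demo using `sha256sum`, not Kopia's actual on-disk format): objects are stored under their content hash, so a second identical photo adds nothing.

```shell
# Toy content-addressed store: identical files collapse to one stored object
mkdir -p /tmp/dedup-demo/store
printf 'JPEGDATA' > /tmp/dedup-demo/a.jpg
cp /tmp/dedup-demo/a.jpg /tmp/dedup-demo/b.jpg   # exact duplicate upload

for f in /tmp/dedup-demo/*.jpg; do
  h=$(sha256sum "$f" | cut -d' ' -f1)
  # store under the content hash; skip if an identical object already exists
  if [ ! -e "/tmp/dedup-demo/store/$h" ]; then
    cp "$f" "/tmp/dedup-demo/store/$h"
  fi
done

object_count=$(ls /tmp/dedup-demo/store | wc -l)
echo "files: 2, stored objects: $object_count"
```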
## Additional Resources

- [Immich Official Backup Documentation](https://immich.app/docs/administration/backup-and-restore)
- [Kopia Documentation](https://kopia.io/docs/)
- [Docker Volume Backup Best Practices](https://docs.docker.com/storage/volumes/#back-up-restore-or-migrate-data-volumes)
- [PostgreSQL pg_dump Documentation](https://www.postgresql.org/docs/current/app-pgdump.html)

## Revision History

| Date | Version | Changes |
|------|---------|---------|
| 2026-02-13 | 1.0 | Initial documentation - two-tier backup strategy using Immich's native backup method |

---

**Last Updated**: February 13, 2026
**Maintained By**: System Administrator
**Review Schedule**: Quarterly

879
False Grimoire/Netgrimoire/Backup/MailCow_Backup.md
Normal file

---
title: Mailcow Backup and Restore Strategy
description: Mailcow backup
published: true
date: 2026-02-20T04:15:25.924Z
tags:
editor: markdown
dateCreated: 2026-02-11T01:20:59.127Z
---

# Mailcow Backup and Recovery Guide

## Overview

This document provides comprehensive backup and recovery procedures for the Mailcow email server. Since Mailcow is **not running on ZFS or BTRFS**, snapshots are not available, so we rely on Mailcow's native backup script combined with Kopia for offsite storage in vaults.

## Quick Reference

### Common Backup Commands

```bash
# Run a manual backup (all components)
cd /opt/mailcow-dockerized
MAILCOW_BACKUP_LOCATION=/opt/mailcow-backups \
./helper-scripts/backup_and_restore.sh backup all --delete-days 7

# Backup with multithreading (faster)
THREADS=4 MAILCOW_BACKUP_LOCATION=/opt/mailcow-backups \
./helper-scripts/backup_and_restore.sh backup all --delete-days 7

# List Kopia snapshots of the backup directory
kopia snapshot list --tags mailcow:tier1-backup

# View backup logs
tail -f /var/log/mailcow-backup.log
```

### Common Restore Commands

```bash
# Restore using mailcow's native script (interactive)
cd /opt/mailcow-dockerized
./helper-scripts/backup_and_restore.sh restore

# Restore from Kopia to a new server
kopia snapshot list --tags mailcow:tier1-backup
kopia restore <snapshot-id> /opt/mailcow-backups/

# Check container status after restore
docker compose ps
docker compose logs -f
```

## Critical Components to Backup

### 1. Docker Compose File
- **Location**: `/opt/mailcow-dockerized/docker-compose.yml` (or your installation path)
- **Purpose**: Defines all containers, networks, and volumes
- **Importance**: Critical for recreating the exact container configuration

### 2. Configuration Files
- **Primary Config**: `/opt/mailcow-dockerized/mailcow.conf`
- **Additional Configs**:
  - `/opt/mailcow-dockerized/data/conf/` (all subdirectories)
  - Custom SSL certificates if not using Let's Encrypt
  - Any override files (e.g., `docker-compose.override.yml`)

### 3. Database
- **MySQL/MariaDB Data**: Contains all mailbox configurations, users, domains, aliases, settings
- **Docker Volume**: `mailcowdockerized_mysql-vol`
- **Container Path**: `/var/lib/mysql`

### 4. Email Data
- **Maildir Storage**: All actual email messages
- **Docker Volume**: `mailcowdockerized_vmail-vol`
- **Container Path**: `/var/vmail`
- **Size**: Typically the largest component

### 5. Additional Important Data
- **Redis Data**: `mailcowdockerized_redis-vol` (cache and sessions)
- **Rspamd Data**: `mailcowdockerized_rspamd-vol` (spam learning)
- **Crypt Data**: `mailcowdockerized_crypt-vol` (if using mailbox encryption)
- **Postfix Queue**: `mailcowdockerized_postfix-vol` (queued/deferred mail)

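To confirm these volumes exist on your host before relying on them, Docker's volume listing can be filtered by the compose project prefix (standard docker CLI flags; the `mailcowdockerized` prefix assumes a default install):

```shell
# List mailcow's named volumes (prefix depends on COMPOSE_PROJECT_NAME)
docker volume ls --filter name=mailcowdockerized

# Show where a specific volume lives on disk
docker volume inspect mailcowdockerized_vmail-vol --format '{{ .Mountpoint }}'
```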
## Backup Strategy

### Two-Tier Backup Approach

We use a **two-tier approach** combining Mailcow's native backup script with Kopia for offsite storage:

1. **Tier 1 (Local)**: Mailcow's `backup_and_restore.sh` script creates consistent, component-level backups
2. **Tier 2 (Offsite)**: Kopia snapshots the local backups and syncs to vaults

#### Why This Approach?

- **Best of both worlds**: Native script ensures mailcow-specific consistency, Kopia provides deduplication and offsite protection
- **Component-level restore**: Can restore individual components (just vmail, just mysql, etc.) using mailcow script
- **Disaster recovery**: Full system restore from Kopia backups on new server
- **Efficient storage**: Kopia's deduplication reduces storage needs for offsite copies

#### Backup Frequency
- **Daily**: Mailcow native backup runs at 2 AM
- **Daily**: Kopia snapshot of backups runs at 3 AM
- **Retention (Local)**: 7 days of mailcow backups (managed by script)
- **Retention (Kopia/Offsite)**: 30 daily, 12 weekly, 12 monthly

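The offsite retention above (30 daily / 12 weekly / 12 monthly) is not enforced by the snapshot commands alone; it could be expressed as a Kopia policy on the backup path, for example (a sketch using kopia's `policy set` retention flags; verify the flag names against your kopia version):

```shell
# Apply the 30 daily / 12 weekly / 12 monthly retention to the backup path
kopia policy set /opt/mailcow-backups \
  --keep-daily 30 \
  --keep-weekly 12 \
  --keep-monthly 12

# Confirm the effective policy
kopia policy show /opt/mailcow-backups
```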
### Mailcow Native Backup Script

Mailcow includes `/opt/mailcow-dockerized/helper-scripts/backup_and_restore.sh`, which handles:
- **vmail**: Email data (mailboxes)
- **mysql**: Database (using mariabackup for consistency)
- **redis**: Redis database
- **rspamd**: Spam filter learning data
- **crypt**: Encryption data
- **postfix**: Mail queue

**Key Features:**
- Uses `mariabackup` (hot backup without stopping MySQL)
- Supports multithreading for faster backups
- Architecture-aware (handles x86/ARM differences)
- Built-in cleanup with `--delete-days` parameter
- Creates compressed archives (`.tar.zst` or `.tar.gz`)

### Setting Up Mailcow Backups

#### Prerequisite

Make sure you are connected to the Kopia repository:

```bash
sudo kopia repository connect server --url=https://192.168.5.10:51516 --override-username=admin --server-cert-fingerprint=696a4999f594b5273a174fd7cab677d8dd1628f9b9d27e557daa87103ee064b2
```

#### Step 1: Configure Backup Location

Set the backup destination via environment variable or in mailcow.conf:

```bash
# Option 1: Set environment variable (preferred for automation)
export MAILCOW_BACKUP_LOCATION="/opt/mailcow-backups"

# Option 2: Add to cron job directly (shown in automated script below)
```

Create the backup directory:
```bash
mkdir -p /opt/mailcow-backups
chown -R root:root /opt/mailcow-backups
chmod 777 /opt/mailcow-backups
```

#### Step 2: Manual Backup Commands

```bash
cd /opt/mailcow-dockerized

# Backup all components, delete backups older than 7 days
MAILCOW_BACKUP_LOCATION=/opt/mailcow-backups \
./helper-scripts/backup_and_restore.sh backup all --delete-days 7

# Backup with multithreading (faster for large mailboxes)
THREADS=4 MAILCOW_BACKUP_LOCATION=/opt/mailcow-backups \
./helper-scripts/backup_and_restore.sh backup all --delete-days 7

# Backup specific components only
MAILCOW_BACKUP_LOCATION=/opt/mailcow-backups \
./helper-scripts/backup_and_restore.sh backup vmail mysql --delete-days 7
```

**What gets created:**
- Backup directory: `/opt/mailcow-backups/mailcow-YYYY-MM-DD-HH-MM-SS/`
- Contains: `.tar.zst` compressed archives for each component
- Plus: `mailcow.conf` copy for restore reference

#### Step 3: Automated Backup Script

Create `/opt/scripts/backup-mailcow.sh`:

```bash
#!/bin/bash

# Mailcow Automated Backup Script
# This creates mailcow native backups, then snapshots them with Kopia for offsite storage

set -e

BACKUP_DATE=$(date +%Y%m%d_%H%M%S)
LOG_FILE="/var/log/mailcow-backup.log"
MAILCOW_DIR="/opt/mailcow-dockerized"
BACKUP_DIR="/opt/mailcow-backups"
THREADS=4    # Adjust based on your CPU cores
KEEP_DAYS=7  # Keep local mailcow backups for 7 days

echo "[${BACKUP_DATE}] ========================================" | tee -a "$LOG_FILE"
echo "[${BACKUP_DATE}] Starting Mailcow backup process" | tee -a "$LOG_FILE"

# Step 1: Run mailcow's native backup script
echo "[${BACKUP_DATE}] Running mailcow native backup..." | tee -a "$LOG_FILE"

cd "$MAILCOW_DIR"

# Run the backup with multithreading
THREADS=${THREADS} MAILCOW_BACKUP_LOCATION=${BACKUP_DIR} \
./helper-scripts/backup_and_restore.sh backup all --delete-days ${KEEP_DAYS} \
  2>&1 | tee -a "$LOG_FILE"

BACKUP_EXIT=${PIPESTATUS[0]}

if [ $BACKUP_EXIT -ne 0 ]; then
    echo "[${BACKUP_DATE}] ERROR: Mailcow backup failed with exit code ${BACKUP_EXIT}" | tee -a "$LOG_FILE"
    exit 1
fi

echo "[${BACKUP_DATE}] Mailcow native backup completed successfully" | tee -a "$LOG_FILE"

# Step 2: Create Kopia snapshot of backup directory
echo "[${BACKUP_DATE}] Creating Kopia snapshot..." | tee -a "$LOG_FILE"

kopia snapshot create "${BACKUP_DIR}" \
  --tags mailcow:tier1-backup \
  --description "Mailcow backup ${BACKUP_DATE}" \
  2>&1 | tee -a "$LOG_FILE"

KOPIA_EXIT=${PIPESTATUS[0]}

if [ $KOPIA_EXIT -ne 0 ]; then
    echo "[${BACKUP_DATE}] WARNING: Kopia snapshot failed with exit code ${KOPIA_EXIT}" | tee -a "$LOG_FILE"
    echo "[${BACKUP_DATE}] Local mailcow backup exists but offsite copy may be incomplete" | tee -a "$LOG_FILE"
    exit 2
fi

echo "[${BACKUP_DATE}] Kopia snapshot completed successfully" | tee -a "$LOG_FILE"

# Step 3: Also backup the mailcow installation directory (configs, compose files)
echo "[${BACKUP_DATE}] Backing up mailcow installation directory..." | tee -a "$LOG_FILE"

# Kopia tags are key:value pairs
kopia snapshot create "${MAILCOW_DIR}" \
  --tags mailcow:config \
  --description "Mailcow config ${BACKUP_DATE}" \
  2>&1 | tee -a "$LOG_FILE"

echo "[${BACKUP_DATE}] Backup process completed successfully" | tee -a "$LOG_FILE"
echo "[${BACKUP_DATE}] ========================================" | tee -a "$LOG_FILE"

# Optional: Send notification on completion
# Add your notification method here (email, webhook, etc.)
```

Make it executable:
```bash
chmod +x /opt/scripts/backup-mailcow.sh
```

Add to crontab (daily at 2 AM):
```bash
# Edit root's crontab
crontab -e

# Add this line:
0 2 * * * /opt/scripts/backup-mailcow.sh 2>&1 | logger -t mailcow-backup
```

### Offsite Backup to Vaults

After local Kopia snapshots are created, sync to your offsite vaults:

```bash
# Option 1: Kopia repository sync (if using multiple Kopia repos)
kopia repository sync-to filesystem --path /mnt/vault/mailcow-backup

# Option 2: Rsync to vault
rsync -avz --delete /backup/kopia-repo/ /mnt/vault/mailcow-backup/

# Option 3: Rclone to remote vault
rclone sync /backup/kopia-repo/ vault:mailcow-backup/
```

## Recovery Procedures

### Understanding the Two Recovery Methods

We have **two restore methods** depending on the scenario:

1. **Mailcow Native Restore** (preferred): For component-level or same-server recovery
2. **Kopia Full Restore**: For complete disaster recovery to a new server

### Method 1: Mailcow Native Restore (Recommended)

Use this method when:
- Restoring on the same or a similar server
- Restoring specific components (just email, just the database, etc.)
- Recovering from local mailcow backups

#### Step 1: List Available Backups

```bash
cd /opt/mailcow-dockerized

# Run the restore script
./helper-scripts/backup_and_restore.sh restore
```

The script will prompt:
```
Backup location (absolute path, starting with /): /opt/mailcow-backups
```

#### Step 2: Select Backup

The script displays available backups:
```
Found project name mailcowdockerized
[ 1 ] - /opt/mailcow-backups/mailcow-2026-02-09-02-00-14/
[ 2 ] - /opt/mailcow-backups/mailcow-2026-02-10-02-00-08/
```

Enter the number of the backup to restore.

#### Step 3: Select Components

Choose what to restore:
```
[ 0 ] - all
[ 1 ] - Crypt data
[ 2 ] - Rspamd data
[ 3 ] - Mail directory (/var/vmail)
[ 4 ] - Redis DB
[ 5 ] - Postfix data
[ 6 ] - SQL DB
```

**Important**: The script will:
- Stop mailcow containers automatically
- Restore selected components
- Handle permissions correctly
- Restart containers when done

#### Example: Restore Only Email Data

```bash
cd /opt/mailcow-dockerized
./helper-scripts/backup_and_restore.sh restore

# When prompted:
# - Backup location: /opt/mailcow-backups
# - Select backup: 2 (most recent)
# - Select component: 3 (Mail directory)
```

#### Example: Restore Database Only

```bash
cd /opt/mailcow-dockerized
./helper-scripts/backup_and_restore.sh restore

# When prompted:
# - Backup location: /opt/mailcow-backups
# - Select backup: 2 (most recent)
# - Select component: 6 (SQL DB)
```

**Note**: For database restore, the script will modify `mailcow.conf` with the database credentials from the backup. Review the changes after restore.

### Method 2: Complete Server Rebuild (Kopia Restore)

Use this when recovering to a completely new server or when local backups are unavailable.

#### Step 1: Prepare New Server

```bash
# Update system
apt update && apt upgrade -y

# Install Docker
curl -fsSL https://get.docker.com | sh
systemctl enable docker
systemctl start docker

# Install Docker Compose
apt install docker-compose-plugin -y

# Install Kopia
curl -s https://kopia.io/signing-key | apt-key add -
echo "deb https://packages.kopia.io/apt/ stable main" | tee /etc/apt/sources.list.d/kopia.list
apt update
apt install kopia -y

# Create directory structure
mkdir -p /opt/mailcow-dockerized
mkdir -p /opt/mailcow-backups/database
```

#### Step 2: Restore Kopia Repository

```bash
# Connect to your offsite vault
# If vault is mounted:
kopia repository connect filesystem --path /mnt/vault/mailcow-backup

# If vault is remote:
kopia repository connect s3 --bucket=your-bucket --access-key=xxx --secret-access-key=xxx

# List available snapshots
kopia snapshot list --tags mailcow:tier1-backup
```

#### Step 3: Restore Configuration

```bash
# Find and restore the config snapshot
kopia snapshot list --tags mailcow:config

# Restore to the Mailcow directory
kopia restore <snapshot-id> /opt/mailcow-dockerized/

# Verify critical files
ls -la /opt/mailcow-dockerized/mailcow.conf
ls -la /opt/mailcow-dockerized/docker-compose.yml
```

#### Step 4: Restore Mailcow Backups Directory

```bash
# Restore the entire backup directory from Kopia
kopia snapshot list --tags mailcow:tier1-backup

# Restore the most recent backup
kopia restore <snapshot-id> /opt/mailcow-backups/

# Verify backups were restored
ls -la /opt/mailcow-backups/
```

#### Step 5: Run Mailcow Native Restore

Now use mailcow's built-in restore script:

```bash
cd /opt/mailcow-dockerized

# Run the restore script
./helper-scripts/backup_and_restore.sh restore

# When prompted:
# - Backup location: /opt/mailcow-backups
# - Select the most recent backup
# - Select [ 0 ] - all (to restore everything)
```

The script will:
1. Stop all mailcow containers
2. Restore all components (vmail, mysql, redis, rspamd, postfix, crypt)
3. Update mailcow.conf with restored database credentials
4. Restart all containers

**Alternative: Manual Restore** (if you prefer more control)

```bash
cd /opt/mailcow-dockerized

# Create containers and volumes without starting services
docker compose up --no-start
docker compose down

# Find the most recent backup directory
LATEST_BACKUP=$(ls -td /opt/mailcow-backups/mailcow-* | head -1)
echo "Restoring from: $LATEST_BACKUP"

# Extract each component manually
# NOTE: the images used must have zstd available (plain debian:bookworm-slim
# does not ship it) - prefer the native restore script above, or install zstd first
cd "$LATEST_BACKUP"

# Restore vmail (email data)
docker run --rm \
  -v mailcowdockerized_vmail-vol:/backup \
  -v "$PWD":/restore \
  debian:bookworm-slim \
  tar --use-compress-program='zstd -d' -xvf /restore/backup_vmail.tar.zst

# Restore MySQL
docker run --rm \
  -v mailcowdockerized_mysql-vol:/backup \
  -v "$PWD":/restore \
  mariadb:10.11 \
  tar --use-compress-program='zstd -d' -xvf /restore/backup_mysql.tar.zst

# Restore Redis
docker run --rm \
  -v mailcowdockerized_redis-vol:/backup \
  -v "$PWD":/restore \
  debian:bookworm-slim \
  tar --use-compress-program='zstd -d' -xvf /restore/backup_redis.tar.zst

# Restore other components similarly (rspamd, postfix, crypt)
# ...

# Copy mailcow.conf from backup
cp "$LATEST_BACKUP/mailcow.conf" /opt/mailcow-dockerized/mailcow.conf
```

#### Step 6: Start and Verify Mailcow

```bash
cd /opt/mailcow-dockerized

# Pull latest images (or use versions from backup if preferred)
docker compose pull

# Start all services
docker compose up -d

# Monitor logs
docker compose logs -f
```

#### Step 7: Post-Restore Verification

```bash
# Check container status
docker compose ps

# Test web interface
curl -I https://mail.yourdomain.com

# Check mail log
docker compose logs -f postfix-mailcow

# Verify database
docker compose exec mysql-mailcow mysql -u root -p$(grep DBROOT mailcow.conf | cut -d'=' -f2) -e "SHOW DATABASES;"

# Check email storage
docker compose exec dovecot-mailcow ls -lah /var/vmail/
```

### Scenario 2: Restore Individual Mailbox

To restore a single user's mailbox without affecting others:

#### Option A: Using Mailcow Backups (If Available)

```bash
cd /opt/mailcow-dockerized

# Point at the backup to restore from
BACKUP_DIR="/opt/mailcow-backups/mailcow-YYYY-MM-DD-HH-MM-SS"

# Extract just the vmail archive to a temporary location
mkdir -p /tmp/vmail-restore
cd "$BACKUP_DIR"
tar --use-compress-program='zstd -d' -xvf backup_vmail.tar.zst -C /tmp/vmail-restore

# Find the user's mailbox
# Structure: /tmp/vmail-restore/var/vmail/domain.com/user/
ls -la /tmp/vmail-restore/var/vmail/yourdomain.com/

# Copy specific mailbox
rsync -av /tmp/vmail-restore/var/vmail/yourdomain.com/user@domain.com/ \
  /var/lib/docker/volumes/mailcowdockerized_vmail-vol/_data/yourdomain.com/user@domain.com/

# Fix permissions
docker run --rm \
  -v mailcowdockerized_vmail-vol:/vmail \
  debian:bookworm-slim \
  chown -R 5000:5000 /vmail/yourdomain.com/user@domain.com/

# Cleanup
rm -rf /tmp/vmail-restore

# Restart Dovecot to recognize changes
docker compose restart dovecot-mailcow
```

#### Option B: Using Kopia Snapshot (If Local Backups Unavailable)

```bash
# Mount the vmail snapshot temporarily
mkdir -p /mnt/restore
kopia mount <vmail-snapshot-id> /mnt/restore

# Find the user's mailbox
# Structure: /mnt/restore/domain.com/user/
ls -la /mnt/restore/yourdomain.com/

# Copy specific mailbox
rsync -av /mnt/restore/yourdomain.com/user@domain.com/ \
  /var/lib/docker/volumes/mailcowdockerized_vmail-vol/_data/yourdomain.com/user@domain.com/

# Fix permissions
chown -R 5000:5000 /var/lib/docker/volumes/mailcowdockerized_vmail-vol/_data/yourdomain.com/user@domain.com/

# Unmount (kopia mounts are detached with the system umount command)
umount /mnt/restore

# Restart Dovecot to recognize changes
docker compose restart dovecot-mailcow
```

### Scenario 3: Database Recovery Only

If only the database is corrupted but email data is intact:

#### Option A: Using Mailcow Native Restore (Recommended)

```bash
cd /opt/mailcow-dockerized

# Run the restore script
./helper-scripts/backup_and_restore.sh restore

# When prompted:
# - Backup location: /opt/mailcow-backups
# - Select the most recent backup
# - Select [ 6 ] - SQL DB (database only)
```

The script will:
1. Stop mailcow
2. Restore the MySQL database from the mariabackup archive
3. Update mailcow.conf with the restored database credentials
4. Restart mailcow

#### Option B: Manual Database Restore from Kopia

If local backups are unavailable:

```bash
cd /opt/mailcow-dockerized

# Stop Mailcow
docker compose down

# Start only MySQL
docker compose up -d mysql-mailcow

# Wait for MySQL
sleep 30

# Restore from Kopia database dump
kopia snapshot list --tags database
kopia restore <snapshot-id> /tmp/db-restore/

# Import the dump
LATEST_DUMP=$(ls -t /tmp/db-restore/mailcow_*.sql | head -1)
docker compose exec -T mysql-mailcow mysql -u root -p$(grep DBROOT mailcow.conf | cut -d'=' -f2) < "$LATEST_DUMP"

# Start all services
docker compose down
docker compose up -d

# Verify
docker compose logs -f
```

### Scenario 4: Configuration Recovery Only

If you only need to restore configuration files:

#### Option A: From Mailcow Backup

```bash
# Find the most recent backup
LATEST_BACKUP=$(ls -td /opt/mailcow-backups/mailcow-* | head -1)

# Stop Mailcow
cd /opt/mailcow-dockerized
docker compose down

# Backup current config (just in case)
cp mailcow.conf mailcow.conf.pre-restore
cp docker-compose.yml docker-compose.yml.pre-restore

# Restore mailcow.conf from backup
cp "$LATEST_BACKUP/mailcow.conf" ./mailcow.conf

# If you also need other config files from data/conf/,
# you would need to extract them from the backup archives

# Restart
docker compose up -d
```

#### Option B: From Kopia Snapshot

```bash
# Restore config snapshot to temporary location
kopia restore <config-snapshot-id> /tmp/mailcow-restore/

# Stop Mailcow
cd /opt/mailcow-dockerized
docker compose down

# Backup current config (just in case)
cp mailcow.conf mailcow.conf.pre-restore
cp docker-compose.yml docker-compose.yml.pre-restore

# Restore specific files
cp /tmp/mailcow-restore/mailcow.conf ./
cp /tmp/mailcow-restore/docker-compose.yml ./
cp -r /tmp/mailcow-restore/data/conf/* ./data/conf/

# Restart
docker compose up -d
```

## Verification and Testing

### Regular Backup Verification

Perform monthly restore tests to ensure backups are valid:

```bash
# Test restore to temporary location
mkdir -p /tmp/backup-test
kopia snapshot list --tags mailcow:tier1-backup
kopia restore <snapshot-id> /tmp/backup-test/

# Verify files exist and are readable
ls -lah /tmp/backup-test/
cat /tmp/backup-test/mailcow.conf

# Cleanup
rm -rf /tmp/backup-test/
```

### Backup Monitoring Script
|
||||
|
||||
Create `/opt/scripts/check-mailcow-backup.sh`:
|
||||
|
||||
```bash
|
||||
#!/bin/bash
|
||||
|
||||
# Check last backup age
|
||||
LAST_BACKUP=$(kopia snapshot list --tags mailcow --json | jq -r '.[0].startTime')
|
||||
LAST_BACKUP_EPOCH=$(date -d "$LAST_BACKUP" +%s)
|
||||
NOW=$(date +%s)
|
||||
AGE_HOURS=$(( ($NOW - $LAST_BACKUP_EPOCH) / 3600 ))
|
||||
|
||||
if [ $AGE_HOURS -gt 26 ]; then
|
||||
echo "WARNING: Last Mailcow backup is $AGE_HOURS hours old"
|
||||
# Send alert (email, Slack, etc.)
|
||||
exit 1
|
||||
else
|
||||
echo "OK: Last backup $AGE_HOURS hours ago"
|
||||
fi
|
||||
```
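The age arithmetic in the script above can be sanity-checked with pinned timestamps (GNU `date` assumed):

```bash
# Same calculation as the check script, using fixed times for illustration
LAST_BACKUP="2026-02-10T02:00:00Z"
LAST_BACKUP_EPOCH=$(date -u -d "$LAST_BACKUP" +%s)
NOW=$(date -u -d "2026-02-11T06:00:00Z" +%s)   # pretend "now" is 28 hours later
AGE_HOURS=$(( (NOW - LAST_BACKUP_EPOCH) / 3600 ))
echo "$AGE_HOURS"   # → 28
```

The 26-hour threshold leaves headroom over a daily schedule; raise it if your timer runs less often.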
|
||||
|
||||
## Disaster Recovery Checklist
|
||||
|
||||
When disaster strikes, follow this checklist:
|
||||
|
||||
- [ ] Confirm scope of failure (server, storage, specific component)
|
||||
- [ ] Gather server information (hostname, IP, DNS records)
|
||||
- [ ] Access offsite backup vault
|
||||
- [ ] Provision new server (if needed)
|
||||
- [ ] Install Docker and dependencies
|
||||
- [ ] Connect to Kopia repository
|
||||
- [ ] Restore configurations first
|
||||
- [ ] Restore database
|
||||
- [ ] Restore email data
|
||||
- [ ] Start services and verify
|
||||
- [ ] Test email sending/receiving
|
||||
- [ ] Verify webmail access
|
||||
- [ ] Check DNS records and update if needed
|
||||
- [ ] Document any issues encountered
|
||||
- [ ] Update recovery procedures based on experience
|
||||
|
||||
## Important Notes
|
||||
|
||||
1. **DNS**: Keep DNS records documented separately. Recovery includes updating DNS if server IP changes.
|
||||
|
||||
2. **SSL Certificates**: Let's Encrypt certificates are in the backup but may need renewal. Mailcow will handle this automatically.
|
||||
|
||||
3. **Permissions**: Docker volumes have specific UID/GID requirements:
|
||||
- vmail: `5000:5000`
|
||||
- mysql: `999:999`
|
||||
|
||||
4. **Testing**: Always test recovery procedures in a lab environment before trusting them in production.
|
||||
|
||||
5. **Documentation**: Keep this guide and server details in a separate location (printed copy, password manager, etc.).
|
||||
|
||||
6. **Retention Policy**: Review Kopia retention settings periodically to balance storage costs with recovery needs.
|
||||
|
||||
## Backup Architecture Notes
|
||||
|
||||
### Why Two Backup Layers?
|
||||
|
||||
**Mailcow Native Backups** (Tier 1):
|
||||
- ✅ Component-aware (knows about mailcow's structure)
|
||||
- ✅ Uses mariabackup for consistent MySQL hot backups
|
||||
- ✅ Fast, selective restore (can restore just one component)
|
||||
- ✅ Architecture-aware (handles x86/ARM differences)
|
||||
- ❌ No deduplication (full copies each time)
|
||||
- ❌ Limited to local storage initially
|
||||
|
||||
**Kopia Snapshots** (Tier 2):
|
||||
- ✅ Deduplication and compression
|
||||
- ✅ Efficient offsite replication to vaults
|
||||
- ✅ Point-in-time recovery across multiple versions
|
||||
- ✅ Disaster recovery to completely new infrastructure
|
||||
- ❌ Less component-aware (treats as files)
|
||||
- ❌ Slower for granular component restore
|
||||
|
||||
### Storage Efficiency
|
||||
|
||||
Using this two-tier approach:
|
||||
- **Local**: Mailcow creates ~7 days of native backups (may be large, but short retention)
|
||||
- **Offsite**: Kopia deduplicates these backups for long-term vault storage (much smaller)
|
||||
|
||||
Example storage calculation (10GB mailbox):
|
||||
- Local: 7 days × 10GB = ~70GB (before compression)
|
||||
- Kopia (offsite): First backup ~10GB, subsequent backups only store changes (might be <1GB/day after dedup)
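The figures above are illustrative; the arithmetic behind them is a one-liner (the 1 GB/day delta is an assumption, not a measurement):

```bash
# Rough sizing: 10 GB mailbox, 7 daily local copies, ~1 GB/day of
# deduplicated change offsite after the initial upload (assumed values)
MAILBOX_GB=10; LOCAL_DAYS=7; DAILY_DELTA_GB=1; OFFSITE_DAYS=30
LOCAL_GB=$(( MAILBOX_GB * LOCAL_DAYS ))
OFFSITE_GB=$(( MAILBOX_GB + DAILY_DELTA_GB * OFFSITE_DAYS ))
echo "local: ${LOCAL_GB} GB, offsite after 30 days: ~${OFFSITE_GB} GB"
```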
|
||||
|
||||
### Compression Formats
|
||||
|
||||
Mailcow's script creates `.tar.zst` (Zstandard) or `.tar.gz` (gzip) files:
|
||||
- **Zstandard** (modern): Better compression ratio, faster (recommended)
|
||||
- **Gzip** (legacy): Wider compatibility with older systems
|
||||
|
||||
Verify your backup compression:
|
||||
```bash
|
||||
ls -lh /opt/mailcow-backups/mailcow-*/
|
||||
# Look for .tar.zst (preferred) or .tar.gz
|
||||
```
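To confirm an archive of either format is actually intact, list its contents. A hedged sketch using a throwaway file (the zstd step only runs when the binary is installed):

```bash
WORK=$(mktemp -d)
printf 'demo\n' > "$WORK/demo.txt"

# gzip variant
tar -czf "$WORK/demo.tar.gz" -C "$WORK" demo.txt
tar -tzf "$WORK/demo.tar.gz"                 # lists demo.txt

# zstd variant, only when zstd is available
if command -v zstd >/dev/null; then
  tar -I zstd -cf "$WORK/demo.tar.zst" -C "$WORK" demo.txt
  tar -I zstd -tf "$WORK/demo.tar.zst"       # lists demo.txt
fi
```

A listing that completes without errors is a quick smoke test; a full restore test (below) remains the real proof.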
|
||||
|
||||
### Cross-Architecture Considerations
|
||||
|
||||
**Important for ARM/x86 Migration**:
|
||||
|
||||
Mailcow's backup script is architecture-aware. When restoring:
|
||||
- **Rspamd data** cannot be restored across different architectures (x86 ↔ ARM)
|
||||
- **All other components** (vmail, mysql, redis, postfix, crypt) are architecture-independent
|
||||
|
||||
If migrating between architectures:
|
||||
```bash
|
||||
# Restore everything EXCEPT rspamd
|
||||
# Select components individually: vmail, mysql, redis, postfix, crypt
|
||||
# Skip rspamd - it will rebuild its learning database over time
|
||||
```
|
||||
|
||||
### Testing Your Backups
|
||||
|
||||
**Monthly Test Protocol**:
|
||||
|
||||
1. **Verify local backups exist**:
|
||||
```bash
|
||||
ls -lh /opt/mailcow-backups/
|
||||
# Should see recent dated directories
|
||||
```
|
||||
|
||||
2. **Verify Kopia snapshots**:
|
||||
```bash
|
||||
kopia snapshot list --tags mailcow
|
||||
# Should see recent snapshots
|
||||
```
|
||||
|
||||
3. **Test restore in lab** (recommended quarterly):
|
||||
- Spin up a test VM
|
||||
- Restore from Kopia
|
||||
- Run mailcow native restore
|
||||
- Verify email delivery and webmail access
|
||||
|
||||
## Additional Resources
|
||||
|
||||
- [Mailcow Official Backup Documentation](https://docs.mailcow.email/backup_restore/b_n_r-backup/)
|
||||
- [Kopia Documentation](https://kopia.io/docs/)
|
||||
- [Docker Volume Backup Best Practices](https://docs.docker.com/storage/volumes/#back-up-restore-or-migrate-data-volumes)
|
||||
|
||||
## Revision History
|
||||
|
||||
| Date | Version | Changes |
|
||||
|------|---------|---------|
|
||||
| 2026-02-10 | 1.1 | Integrated mailcow native backup_and_restore.sh script as primary backup method |
|
||||
| 2026-02-10 | 1.0 | Initial documentation |
|
||||
|
||||
---
|
||||
|
||||
**Last Updated**: February 10, 2026
|
||||
**Maintained By**: System Administrator
|
||||
**Review Schedule**: Quarterly
|
||||
1151
False Grimoire/Netgrimoire/Backup/Nextcloud_Backup.md
Normal file
File diff suppressed because it is too large
Load diff
19
False Grimoire/Netgrimoire/Backup/Services_Backup.md
Normal file
|
|
@ -0,0 +1,19 @@
|
|||
---
|
||||
title: Services Backup
|
||||
description:
|
||||
published: true
|
||||
date: 2026-02-20T04:08:15.923Z
|
||||
tags:
|
||||
editor: markdown
|
||||
dateCreated: 2026-02-05T21:28:23.152Z
|
||||
---
|
||||
|
||||
- [Mailcow](/backup-mailcow)
|
||||
- [Immich](/immich_backup)
|
||||
- [Nextcloud](/nextcloud_backup)
|
||||
- kopia
|
||||
- forgejo
|
||||
- bitwarden
|
||||
- wiki
|
||||
- journalv
|
||||
|
||||
567
False Grimoire/Netgrimoire/Backup/Wiki_Backup.md
Normal file
|
|
@ -0,0 +1,567 @@
|
|||
---
|
||||
title: Wikijs Backup
|
||||
description: Backup Wikijs
|
||||
published: true
|
||||
date: 2026-02-23T04:35:32.870Z
|
||||
tags:
|
||||
editor: markdown
|
||||
dateCreated: 2026-02-23T04:35:24.121Z
|
||||
---
|
||||
|
||||
# Wiki.js Backup & Recovery
|
||||
|
||||
**Service:** Wiki.js (Netgrimoire)
|
||||
**Stack:** Docker Compose — Wiki.js + PostgreSQL
|
||||
**Backup Targets:** PostgreSQL database dump, Git content repository, Docker Compose config
|
||||
**Backup Destinations:** Local vault path → Kopia → offsite vaults
|
||||
|
||||
---
|
||||
|
||||
## Overview
|
||||
|
||||
Wiki.js data lives in two separate places that must be backed up independently:
|
||||
|
||||
**PostgreSQL database** — stores page metadata, navigation, user accounts, permissions, page history, assets, and all configuration. This is the critical component for a portable restore. Without it, a new instance has no knowledge of your wiki structure.
|
||||
|
||||
**Git content repository** — stores the actual page content in markdown files, synced from Forgejo. This is already mirrored on the VAULT SSD at `/vault/repos/wiki/`. It is inherently redundant as long as Forgejo is healthy, but is included in backups for completeness and offline portability.
|
||||
|
||||
**Docker Compose config** — the `docker-compose.yml` and `.env` files needed to recreate the stack.
|
||||
|
||||
---
|
||||
|
||||
## What Gets Backed Up
|
||||
|
||||
| Component | Location | Method | Critical? |
|
||||
|---|---|---|---|
|
||||
| PostgreSQL database | Docker volume | `pg_dump` → SQL file | Yes — primary restore target |
|
||||
| Git content repo | `/vault/repos/wiki/` | Already on VAULT SSD | Yes — page content |
|
||||
| Docker Compose files | `/opt/stacks/wikijs/` | rsync copy | Yes — stack config |
|
||||
| Wiki.js data volume | Docker volume | Optional rsync | No — DB + Git covers this |
|
||||
|
||||
---
|
||||
|
||||
## Backup Strategy
|
||||
|
||||
### Tier 1 — Daily Dump to Vault Path
|
||||
|
||||
A script runs daily via systemd timer. It produces a portable `pg_dump` SQL file written to `/vault/backups/wiki/`. These local dumps are retained for 14 days.
|
||||
|
||||
**Key choices:**
|
||||
|
||||
- `--format=plain` — plain SQL, portable to any PostgreSQL version and any host
|
||||
- `--no-owner` — strips role ownership, so the dump restores cleanly on a new instance with a different postgres user (critical for Pocket Grimoire restores)
|
||||
- `--no-acl` — strips GRANT/REVOKE statements for the same reason
|
||||
- No application downtime required — PostgreSQL handles consistent dumps natively
|
||||
|
||||
### Tier 2 — Kopia Snapshot to Offsite Vaults
|
||||
|
||||
After the daily dump completes, Kopia snapshots the entire `/vault/backups/wiki/` directory and replicates to your offsite vaults. Kopia deduplication means only changed blocks are transferred after the first run.
|
||||
|
||||
---
|
||||
|
||||
## Setup
|
||||
|
||||
### Step 0 — Confirm Kopia Repository Exists
|
||||
|
||||
If Kopia is not yet initialized on this host, initialize it first. If you already initialized Kopia for Mailcow or another service, skip this step — all services share the same Kopia repository.
|
||||
|
||||
```bash
|
||||
# Check if repository already exists
|
||||
kopia repository status
|
||||
|
||||
# If not initialized, create it against your vault path
|
||||
kopia repository create filesystem --path=/vault/kopia
|
||||
|
||||
# Connect on subsequent logins if disconnected
|
||||
kopia repository connect filesystem --path=/vault/kopia
|
||||
```
|
||||
|
||||
### Step 1 — Create Backup Directories
|
||||
|
||||
```bash
|
||||
sudo mkdir -p /vault/backups/wiki
|
||||
sudo chown $(whoami):$(whoami) /vault/backups/wiki
|
||||
```
|
||||
|
||||
### Step 2 — Create the Backup Script
|
||||
|
||||
```bash
|
||||
sudo nano /usr/local/sbin/wikijs-backup.sh
|
||||
```
|
||||
|
||||
```bash
|
||||
#!/usr/bin/env bash
|
||||
# wikijs-backup.sh — Daily Wiki.js backup: pg_dump + git repo + config
|
||||
# Writes to /vault/backups/wiki/, then snapshots with Kopia
|
||||
|
||||
set -euo pipefail
|
||||
|
||||
# ── Configuration ─────────────────────────────────────────────────────────────
|
||||
BACKUP_DIR="/vault/backups/wiki"
|
||||
DATE=$(date +%Y%m%d_%H%M%S)
|
||||
CONTAINER_DB="wikijs_db" # Adjust to your actual container name
|
||||
PG_USER="wikijs"
|
||||
PG_DB="wikijs"
|
||||
WIKI_STACK_DIR="/opt/stacks/wikijs" # Location of docker-compose.yml and .env
|
||||
GIT_REPO_DIR="/vault/repos/wiki" # Git content mirror (already on vault SSD)
|
||||
RETAIN_DAYS=14 # Local dump retention
|
||||
|
||||
LOG="/var/log/wikijs-backup.log"
|
||||
touch "$LOG"
|
||||
|
||||
log() { echo "$(date -Is) $*" | tee -a "$LOG"; }
|
||||
|
||||
# ── Step 1: PostgreSQL dump ────────────────────────────────────────────────────
|
||||
log "Starting Wiki.js PostgreSQL dump..."
|
||||
|
||||
docker exec "$CONTAINER_DB" pg_dump \
|
||||
-U "$PG_USER" \
|
||||
"$PG_DB" \
|
||||
--format=plain \
|
||||
--no-owner \
|
||||
--no-acl \
|
||||
> "${BACKUP_DIR}/wikijs-db-${DATE}.sql"
|
||||
|
||||
gzip "${BACKUP_DIR}/wikijs-db-${DATE}.sql"
|
||||
|
||||
log "PostgreSQL dump complete: wikijs-db-${DATE}.sql.gz"
|
||||
|
||||
# ── Step 2: Docker Compose config backup ──────────────────────────────────────
|
||||
log "Backing up Docker Compose config..."
|
||||
|
||||
CONFIG_BACKUP="${BACKUP_DIR}/wikijs-config-${DATE}.tar.gz"
|
||||
|
||||
tar -czf "$CONFIG_BACKUP" \
|
||||
-C "$(dirname "$WIKI_STACK_DIR")" \
|
||||
"$(basename "$WIKI_STACK_DIR")"
|
||||
|
||||
log "Config backup complete: wikijs-config-${DATE}.tar.gz"
|
||||
|
||||
# ── Step 3: Git repo snapshot (content mirror) ────────────────────────────────
|
||||
# The git repo lives on the VAULT SSD and is already versioned.
|
||||
# We record the current HEAD commit for reference.
|
||||
|
||||
if [ -d "${GIT_REPO_DIR}/.git" ]; then
|
||||
GIT_HEAD=$(git -C "$GIT_REPO_DIR" rev-parse HEAD 2>/dev/null || echo "unknown")
|
||||
echo "Git HEAD at backup time: ${GIT_HEAD}" \
|
||||
> "${BACKUP_DIR}/wikijs-git-ref-${DATE}.txt"
|
||||
log "Git content repo HEAD: ${GIT_HEAD}"
|
||||
else
|
||||
log "WARNING: Git repo not found at ${GIT_REPO_DIR} — skipping git ref"
|
||||
fi
|
||||
|
||||
# ── Step 4: Cleanup old local dumps ───────────────────────────────────────────
|
||||
log "Cleaning up dumps older than ${RETAIN_DAYS} days..."
|
||||
|
||||
find "$BACKUP_DIR" -name "wikijs-db-*.sql.gz" -mtime +"$RETAIN_DAYS" -delete
|
||||
find "$BACKUP_DIR" -name "wikijs-config-*.tar.gz" -mtime +"$RETAIN_DAYS" -delete
|
||||
find "$BACKUP_DIR" -name "wikijs-git-ref-*.txt" -mtime +"$RETAIN_DAYS" -delete
|
||||
|
||||
# ── Step 5: Kopia snapshot ────────────────────────────────────────────────────
|
||||
log "Running Kopia snapshot of /vault/backups/wiki/..."
|
||||
|
||||
kopia snapshot create "$BACKUP_DIR" \
|
||||
--tags "service:wikijs,host:$(hostname -s)"
|
||||
|
||||
log "Kopia snapshot complete."
|
||||
|
||||
# ── Done ──────────────────────────────────────────────────────────────────────
|
||||
log "Wiki.js backup finished successfully."
|
||||
```
|
||||
|
||||
```bash
|
||||
sudo chmod +x /usr/local/sbin/wikijs-backup.sh
|
||||
```
|
||||
|
||||
### Step 3 — Create systemd Service and Timer
|
||||
|
||||
```bash
|
||||
sudo nano /etc/systemd/system/wikijs-backup.service
|
||||
```
|
||||
|
||||
```ini
|
||||
[Unit]
|
||||
Description=Wiki.js daily backup (pg_dump + config + Kopia snapshot)
|
||||
After=docker.service
|
||||
|
||||
[Service]
|
||||
Type=oneshot
|
||||
ExecStart=/usr/local/sbin/wikijs-backup.sh
|
||||
```
|
||||
|
||||
```bash
|
||||
sudo nano /etc/systemd/system/wikijs-backup.timer
|
||||
```
|
||||
|
||||
```ini
|
||||
[Unit]
|
||||
Description=Run Wiki.js backup daily at 02:00
|
||||
|
||||
[Timer]
|
||||
OnCalendar=*-*-* 02:00:00
|
||||
Persistent=true
|
||||
|
||||
[Install]
|
||||
WantedBy=timers.target
|
||||
```
|
||||
|
||||
```bash
|
||||
sudo systemctl daemon-reload
|
||||
sudo systemctl enable wikijs-backup.timer
|
||||
sudo systemctl start wikijs-backup.timer
|
||||
|
||||
# Verify
|
||||
systemctl list-timers | grep wikijs
|
||||
```
|
||||
|
||||
### Step 4 — Configure Kopia Retention Policy
|
||||
|
||||
```bash
|
||||
# Set retention policy for wiki backups
|
||||
kopia policy set /vault/backups/wiki \
|
||||
--keep-daily 14 \
|
||||
--keep-weekly 8 \
|
||||
--keep-monthly 12 \
|
||||
--compression zstd
|
||||
|
||||
# Verify policy
|
||||
kopia policy show /vault/backups/wiki
|
||||
```
|
||||
|
||||
### Step 5 — Test the Backup
|
||||
|
||||
```bash
|
||||
# Run manually first time
|
||||
sudo /usr/local/sbin/wikijs-backup.sh
|
||||
|
||||
# Verify output
|
||||
ls -lh /vault/backups/wiki/
|
||||
# Should show: wikijs-db-YYYYMMDD_HHMMSS.sql.gz
|
||||
# wikijs-config-YYYYMMDD_HHMMSS.tar.gz
|
||||
# wikijs-git-ref-YYYYMMDD_HHMMSS.txt
|
||||
|
||||
# Verify Kopia snapshot was created
|
||||
kopia snapshot list /vault/backups/wiki
|
||||
|
||||
# Check backup log
|
||||
tail -n 30 /var/log/wikijs-backup.log
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Verifying Backups
|
||||
|
||||
### Check dump is readable
|
||||
|
||||
```bash
|
||||
# Inspect the SQL dump without extracting
|
||||
zcat /vault/backups/wiki/wikijs-db-YYYYMMDD_HHMMSS.sql.gz | head -50
|
||||
|
||||
# Should show PostgreSQL header, version info, and CREATE TABLE statements
|
||||
```
|
||||
|
||||
### Verify Kopia snapshots
|
||||
|
||||
```bash
|
||||
# List recent snapshots
|
||||
kopia snapshot list /vault/backups/wiki
|
||||
|
||||
# Show snapshot details
|
||||
kopia snapshot list /vault/backups/wiki --all
|
||||
|
||||
# Verify snapshot integrity
|
||||
kopia snapshot verify
|
||||
```
|
||||
|
||||
### Test restore to a temporary database (non-destructive)
|
||||
|
||||
```bash
|
||||
# Start a temporary Postgres container
|
||||
docker run --rm -d \
|
||||
--name wikijs-restore-test \
|
||||
-e POSTGRES_USER=wikijs \
|
||||
-e POSTGRES_PASSWORD=testpassword \
|
||||
-e POSTGRES_DB=wikijs_test \
|
||||
postgres:16-alpine
|
||||
|
||||
# Wait for Postgres to be ready
|
||||
until docker exec wikijs-restore-test pg_isready -U wikijs >/dev/null 2>&1; do sleep 1; done
|
||||
|
||||
# Restore dump into test container
|
||||
zcat /vault/backups/wiki/wikijs-db-YYYYMMDD_HHMMSS.sql.gz | \
|
||||
docker exec -i wikijs-restore-test psql -U wikijs -d wikijs_test
|
||||
|
||||
# Verify tables exist
|
||||
docker exec wikijs-restore-test psql -U wikijs -d wikijs_test -c "\dt"
|
||||
|
||||
# Expected output: List of tables (pages, users, pageHistory, assets, etc.)
|
||||
|
||||
# Cleanup test container
|
||||
docker stop wikijs-restore-test
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Recovery Procedures
|
||||
|
||||
### Scenario A — Restore to a New Wiki.js Instance (Any Host)
|
||||
|
||||
This covers full disaster recovery to a fresh server, including Pocket Grimoire.
|
||||
|
||||
**Requirements on the destination host:**
|
||||
- Docker and Docker Compose installed
|
||||
- A `docker-compose.yml` and `.env` ready (from backup or Pocket Grimoire stack)
|
||||
- Sufficient disk space
|
||||
|
||||
**Step 1: Locate the backup**
|
||||
|
||||
```bash
|
||||
# On Netgrimoire, find the dump to restore
|
||||
ls -lh /vault/backups/wiki/
|
||||
|
||||
# Or restore from Kopia
|
||||
kopia snapshot list /vault/backups/wiki
|
||||
kopia restore SNAPSHOT_ID /tmp/wiki-restore/
|
||||
ls /tmp/wiki-restore/
|
||||
```
|
||||
|
||||
**Step 2: Copy dump to the destination host**
|
||||
|
||||
```bash
|
||||
# From Netgrimoire, copy to the destination server
|
||||
scp /vault/backups/wiki/wikijs-db-YYYYMMDD_HHMMSS.sql.gz \
|
||||
user@destination-host:/tmp/
|
||||
|
||||
# Or to Pocket Grimoire
|
||||
scp /vault/backups/wiki/wikijs-db-YYYYMMDD_HHMMSS.sql.gz \
|
||||
user@pocket-grimoire.local:/tmp/
|
||||
```
|
||||
|
||||
**Step 3: Start the database container only**
|
||||
|
||||
On the destination host, start just the database — do not start Wiki.js yet:
|
||||
|
||||
```bash
|
||||
cd /srv/pocket-grimoire/stacks/wikijs # Adjust path as needed
|
||||
|
||||
# Start only the database container
|
||||
docker compose up -d db
|
||||
|
||||
# Wait for healthy status
|
||||
docker compose ps
|
||||
# db should show: healthy
|
||||
```
|
||||
|
||||
**Step 4: Restore the dump**
|
||||
|
||||
```bash
|
||||
# Restore the dump into the running database container
|
||||
zcat /tmp/wikijs-db-YYYYMMDD_HHMMSS.sql.gz | \
|
||||
docker exec -i pocketgrimoire_db psql \
|
||||
-U wikijs \
|
||||
-d wikijs
|
||||
|
||||
# Verify tables restored
|
||||
docker exec pocketgrimoire_db psql -U wikijs -d wikijs -c "\dt"
|
||||
```
|
||||
|
||||
**Step 5: Start Wiki.js**
|
||||
|
||||
```bash
|
||||
docker compose up -d
|
||||
|
||||
# Watch startup logs
|
||||
docker logs -f pocketgrimoire_wikijs
|
||||
# Wait for: "HTTP Server started successfully"
|
||||
```
|
||||
|
||||
**Step 6: Verify**
|
||||
|
||||
Open `http://pocket-grimoire.local:3000` and confirm:
|
||||
- Pages load correctly
|
||||
- Navigation structure is intact
|
||||
- User accounts are present (if you had multiple users)
|
||||
|
||||
**Step 7: Re-sync Git content (if needed)**
|
||||
|
||||
The database knows the page structure, but if the Git content repo isn't present on the new host, import it:
|
||||
|
||||
```bash
|
||||
# In Wiki.js admin panel:
|
||||
# Administration → Storage → Git
|
||||
# Click "Force Sync" or "Import Content"
|
||||
|
||||
# Or copy the repo from VAULT SSD
|
||||
rsync -avP /vault/repos/wiki/ /srv/pocket-grimoire/repos/wiki/
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
### Scenario B — Restore on Existing Netgrimoire Instance
|
||||
|
||||
Use this when the Wiki.js database is corrupted but the host is otherwise healthy.
|
||||
|
||||
**Step 1: Stop Wiki.js (leave database running)**
|
||||
|
||||
```bash
|
||||
cd /opt/stacks/wikijs
|
||||
docker compose stop wikijs
|
||||
```
|
||||
|
||||
**Step 2: Drop and recreate the database**
|
||||
|
||||
```bash
|
||||
# If the drop fails with "database is being accessed by other users",
# terminate lingering sessions first:
# docker exec -it wikijs_db psql -U postgres -c "SELECT pg_terminate_backend(pid) FROM pg_stat_activity WHERE datname='wikijs';"
docker exec -it wikijs_db psql -U postgres -c "DROP DATABASE wikijs;"
|
||||
docker exec -it wikijs_db psql -U postgres -c "CREATE DATABASE wikijs OWNER wikijs;"
|
||||
```
|
||||
|
||||
**Step 3: Restore**
|
||||
|
||||
```bash
|
||||
zcat /vault/backups/wiki/wikijs-db-YYYYMMDD_HHMMSS.sql.gz | \
|
||||
docker exec -i wikijs_db psql -U wikijs -d wikijs
|
||||
```
|
||||
|
||||
**Step 4: Restart Wiki.js**
|
||||
|
||||
```bash
|
||||
docker compose start wikijs
|
||||
docker logs -f wikijs
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
### Scenario C — Restore Config Only
|
||||
|
||||
If the stack config was lost but the database volume is intact:
|
||||
|
||||
```bash
|
||||
# Extract config from backup
|
||||
tar -xzf /vault/backups/wiki/wikijs-config-YYYYMMDD_HHMMSS.tar.gz \
|
||||
-C /opt/stacks/
|
||||
|
||||
# Verify
|
||||
ls /opt/stacks/wikijs/
|
||||
# Should show: docker-compose.yml .env
|
||||
|
||||
# Restart stack
|
||||
cd /opt/stacks/wikijs
|
||||
docker compose up -d
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
### Restore from Kopia (Offsite)
|
||||
|
||||
When local vault files are unavailable, restore the backup directory from Kopia first:
|
||||
|
||||
```bash
|
||||
# List available snapshots
|
||||
kopia snapshot list /vault/backups/wiki
|
||||
|
||||
# Restore snapshot to temp directory
|
||||
kopia restore SNAPSHOT_ID /tmp/wiki-restore/
|
||||
|
||||
# Then proceed with the appropriate scenario above
|
||||
# using files from /tmp/wiki-restore/ instead of /vault/backups/wiki/
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Pocket Grimoire Specifics
|
||||
|
||||
When restoring to Pocket Grimoire, note the following differences from a full Netgrimoire instance:
|
||||
|
||||
**Container names** differ — use `pocketgrimoire_db` instead of `wikijs_db`.
|
||||
|
||||
**Stack path** is `/srv/pocket-grimoire/stacks/wikijs/` instead of `/opt/stacks/wikijs/`.
|
||||
|
||||
**The database is already initialized** when Pocket Grimoire is first set up. Restoring a Netgrimoire dump overwrites it entirely, which is the intended behavior — Pocket Grimoire becomes a mirror of Netgrimoire's wiki state.
|
||||
|
||||
**Git content repo** is located at `/srv/pocket-grimoire/repos/wiki/` and is populated via the sync script (`pocketgrimoire-sync.sh`). A database restore alone is sufficient if the Git repo is already in place.
|
||||
|
||||
**Recommended restore workflow for Pocket Grimoire:**
|
||||
|
||||
```bash
|
||||
# 1. Locate the dump on the VAULT SSD (already mounted on Pocket Grimoire)
|
||||
ls /srv/vaultpg/backups/wiki/
|
||||
|
||||
# 2. Start db container only
|
||||
cd /srv/pocket-grimoire/stacks/wikijs && docker compose up -d db
|
||||
|
||||
# 3. Restore
|
||||
zcat /srv/vaultpg/backups/wiki/wikijs-db-LATEST.sql.gz | \
|
||||
docker exec -i pocketgrimoire_db psql -U wikijs -d wikijs
|
||||
|
||||
# 4. Start full stack
|
||||
docker compose up -d
|
||||
```
|
||||
|
||||
Because the VAULT SSD is always connected to Pocket Grimoire, no file transfer is needed — the dumps are already there.
|
||||
|
||||
---
|
||||
|
||||
## Monitoring & Alerts
|
||||
|
||||
Add the following to your existing ntfy/monitoring setup to alert on backup failures. Wrap the backup script call in an error trap:
|
||||
|
||||
```bash
|
||||
# Add to wikijs-backup.sh after set -euo pipefail:
|
||||
|
||||
NTFY_URL="https://ntfy.YOUR_DOMAIN/wikijs-backup"
|
||||
|
||||
on_error() {
|
||||
curl -fsS -X POST "$NTFY_URL" \
|
||||
-H "Title: Wiki.js backup FAILED ($(hostname -s))" \
|
||||
-H "Priority: high" \
|
||||
-H "Tags: rotating_light" \
|
||||
-d "Backup failed at $(date -Is). Check /var/log/wikijs-backup.log"
|
||||
}
|
||||
trap on_error ERR
|
||||
```
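The trap pattern can be verified without ntfy by substituting an `echo` for the `curl` call:

```bash
# Demonstrates that the ERR trap fires before the script exits under set -e
OUT=$(bash -c '
set -euo pipefail
on_error() { echo "ALERT: backup failed"; }
trap on_error ERR
false                # simulate a failing backup step
' 2>&1 || true)
echo "$OUT"          # → ALERT: backup failed
```

Because `set -e` aborts on the first failing command, the trap is the only place a failure notification can be sent from.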
|
||||
|
||||
### Check backup age manually
|
||||
|
||||
```bash
|
||||
# Find most recent dump
|
||||
ls -lt /vault/backups/wiki/wikijs-db-*.sql.gz | head -3
|
||||
|
||||
# Check Kopia last snapshot time
|
||||
kopia snapshot list /vault/backups/wiki | tail -5
|
||||
```
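The manual check above can be turned into a scriptable freshness test. A self-contained sketch using a demo directory (point `BACKUP_DIR` at `/vault/backups/wiki` in production):

```bash
BACKUP_DIR=$(mktemp -d)                                # use /vault/backups/wiki for real
touch "$BACKUP_DIR/wikijs-db-20260101_000000.sql.gz"   # stand-in dump, freshly created
NEWEST=$(ls -t "$BACKUP_DIR"/wikijs-db-*.sql.gz 2>/dev/null | head -1)
if [ -z "$NEWEST" ] || [ -n "$(find "$NEWEST" -mmin +1560)" ]; then
  STATUS=STALE      # no dump, or newest is older than 26 hours
else
  STATUS=FRESH
fi
echo "$STATUS"       # → FRESH
```

Wire the `STALE` branch into the same ntfy trap used by the backup script to get alerts on silent failures.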
|
||||
|
||||
---
|
||||
|
||||
## Quick Reference
|
||||
|
||||
```bash
|
||||
# Run backup manually
|
||||
sudo /usr/local/sbin/wikijs-backup.sh
|
||||
|
||||
# Watch backup log
|
||||
tail -f /var/log/wikijs-backup.log
|
||||
|
||||
# Check timer status
|
||||
systemctl status wikijs-backup.timer
|
||||
|
||||
# List local dumps
|
||||
ls -lh /vault/backups/wiki/
|
||||
|
||||
# List Kopia snapshots
|
||||
kopia snapshot list /vault/backups/wiki
|
||||
|
||||
# Restore dump (generic)
|
||||
zcat /vault/backups/wiki/wikijs-db-YYYYMMDD_HHMMSS.sql.gz | \
|
||||
docker exec -i CONTAINER_NAME psql -U wikijs -d wikijs
|
||||
|
||||
# Test dump is readable
|
||||
zcat /vault/backups/wiki/wikijs-db-YYYYMMDD_HHMMSS.sql.gz | head -50
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Revision History
|
||||
|
||||
| Version | Date | Notes |
|
||||
|---|---|---|
|
||||
| 1.0 | 2026-02-22 | Initial release — pg_dump + Kopia + Pocket Grimoire restore procedures |
|
||||
276
False Grimoire/Netgrimoire/Documentation_Standards.md
Normal file
|
|
@ -0,0 +1,276 @@
|
|||
---
|
||||
title: Netgrimoire Documentation
|
||||
description: How to create and use Netgrimoire Docs
|
||||
published: true
|
||||
date: 2026-02-20T04:16:19.329Z
|
||||
tags:
|
||||
editor: markdown
|
||||
dateCreated: 2026-02-03T02:54:56.444Z
|
||||
---
|
||||
|
||||
# Homelab Documentation Structure & Diagram Standards
|
||||
|
||||
This document describes the **official documentation structure** for the homelab Wiki.js instance, including:
|
||||
- Folder and page layout
|
||||
- Naming conventions
|
||||
- How Git fits into the workflow
|
||||
- How to use draw.io (diagrams.net) for diagrams
|
||||
- How to ensure documentation is accessible when the lab is down
|
||||
|
||||
This page is intended to be a **reference and enforcement guide**.
|
||||
|
||||
---
|
||||
|
||||
## Core Principles
|
||||
|
||||
1. **Wiki.js is the editor, Git is the source of truth**
|
||||
2. **All documentation must be readable without Wiki.js**
|
||||
3. **Diagrams must be viewable without draw.io**
|
||||
4. **Folder structure must be predictable and consistent**
|
||||
5. **Emergency documentation must not depend on the lab being up**
|
||||
|
||||
---
|
||||
|
||||
## Repository Overview
|
||||
|
||||
All documentation lives in a single Git repository.
|
||||
|
||||
Wiki.js writes Markdown files into this repository automatically.
|
||||
The repository can be cloned to a laptop or other device for **offline access**.
|
||||
|
||||
Example:
|
||||
```bash
|
||||
git clone ssh://git@forgejo.example.com/homelab/docs.git
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Top-Level Folder Structure
|
||||
```
|
||||
homelab-docs/
|
||||
├── README.md
|
||||
├── emergency/
|
||||
├── infrastructure/
|
||||
├── storage/
|
||||
├── services/
|
||||
├── runbooks/
|
||||
├── diagrams/
|
||||
└── assets/
|
||||
```
|
||||
|
||||
### Folder Purpose
|
||||
|
||||
| Folder | Purpose |
|
||||
|--------|---------|
|
||||
| README.md | Entry point when the lab is down |
|
||||
| emergency/ | Recovery procedures and break-glass docs |
|
||||
| infrastructure/ | Core systems (identity, backups, networking) |
|
||||
| storage/ | Storage platforms and layouts |
|
||||
| services/ | Application-specific documentation |
|
||||
| runbooks/ | Step-by-step operational procedures |
|
||||
| diagrams/ | All draw.io diagrams and exports |
|
||||
| assets/ | Images or files used by documentation |
|
||||
|
||||
---
|
||||
|
||||
## Storage Documentation Structure
|
||||
```
|
||||
storage/
|
||||
└── core/
|
||||
├── zfs.md
|
||||
├── local-drives.md
|
||||
├── nas.md
|
||||
└── btrfs.md
|
||||
```
|
||||
|
||||
**Guidelines:**
|
||||
- Each storage technology gets its own page
|
||||
- Pages describe architecture, layout, and operational notes
|
||||
- Backup and snapshot policies belong elsewhere
|
||||
|
||||
---
|
||||
|
||||
## Infrastructure Documentation Structure
|
||||
```
|
||||
infrastructure/
|
||||
└── backups/
|
||||
├── zfs-snapshots.md
|
||||
└── application-backups.md
|
||||
```
|
||||
|
||||
**Guidelines:**
|
||||
- Infrastructure describes cross-cutting systems
|
||||
- Anything used by multiple hosts or services belongs here
|
||||
- Backup strategies are infrastructure, not storage
|
||||
|
||||
---
|
||||
|
||||
## Services Documentation Structure
|
||||
```
|
||||
services/
|
||||
└── mailcow.md
|
||||
```
|
||||
|
||||
**Guidelines:**
|
||||
- One page per service
|
||||
- Page should include:
|
||||
- Purpose
|
||||
- Architecture
|
||||
- Volumes
|
||||
- Backup considerations
|
||||
- Recovery notes
|
||||
|
||||
---
|
||||
|
||||
## Emergency Documentation
|
||||
```
|
||||
emergency/
|
||||
├── bring-up-order.md
|
||||
├── swarm-recovery.md
|
||||
├── zfs-import.md
|
||||
├── network-restore.md
|
||||
└── identity-break-glass.md
|
||||
```
|
||||
|
||||
**Rules:**
|
||||
|
||||
Emergency docs must be:
|
||||
- Text-first
|
||||
- Copy/paste friendly
|
||||
- Free of dependencies
|
||||
|
||||
These pages should be readable directly from Git.
|
||||
|
||||
---
|
||||
|
||||
## Naming Conventions (Mandatory)
|
||||
|
||||
**Folders:**
|
||||
- Lowercase
|
||||
- No spaces
|
||||
- Example: `infrastructure/backups`
|
||||
|
||||
**Page filenames:**
|
||||
- Lowercase
|
||||
- Hyphen-separated
|
||||
- Example: `zfs-snapshots.md`
|
||||
|
||||
**Page titles:**
|
||||
- Human readable
|
||||
- Proper case
|
||||
- Example: `# ZFS Snapshots`
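These rules are simple enough to lint mechanically. A sketch (extend the patterns as needed):

```bash
# Flags uppercase letters, spaces, underscores, or a missing .md extension
check_name() {
  case "$1" in
    *[A-Z]*|*' '*|*_*) echo "BAD: $1" ;;
    *.md)              echo "OK: $1" ;;
    *)                 echo "BAD: $1" ;;
  esac
}
check_name "zfs-snapshots.md"    # → OK: zfs-snapshots.md
check_name "ZFS Snapshots.md"    # → BAD: ZFS Snapshots.md
```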
|
||||
|
||||
---
|
||||
|
||||
## draw.io (diagrams.net) Usage
|
||||
|
||||
draw.io is used **only to create diagrams**, never as the sole storage location.
|
||||
|
||||
### Diagram Storage Layout
|
||||
```
|
||||
diagrams/
|
||||
├── network/
|
||||
│ ├── core.drawio
|
||||
│ ├── core.png
|
||||
│ └── core.svg
|
||||
├── docker/
|
||||
│ ├── swarm-architecture.drawio
|
||||
│ └── swarm-architecture.png
|
||||
└── storage/
|
||||
├── zfs-layout.drawio
|
||||
└── zfs-layout.png
|
||||
```
|
||||
|
||||
### File Types
|
||||
|
||||
| File | Purpose |
|
||||
|------|---------|
|
||||
| .drawio | Editable source |
|
||||
| .png | Offline viewing |
|
||||
| .svg | Zoomable / high quality (optional) |
|
||||
|
||||
**Every diagram MUST have a PNG export.**
|
||||
|
||||
---
|
||||
|
||||
## Adding a Diagram (Required Workflow)
|
||||
|
||||
1. Create or edit the diagram in draw.io
|
||||
2. Save the `.drawio` file into `diagrams/<category>/`
|
||||
3. Export a `.png` (and optional `.svg`)
|
||||
4. Commit all files to Git
|
||||
|
||||
If a diagram cannot be viewed without draw.io running, it is **not complete**.
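The PNG-export rule is easy to enforce with a loop over the diagram tree. Demo directory below; run the same loop against `diagrams/` in the real repo:

```bash
DIR=$(mktemp -d)                       # stand-in for diagrams/
mkdir -p "$DIR/network"
touch "$DIR/network/core.drawio" "$DIR/network/core.png"
touch "$DIR/network/edge.drawio"       # hypothetical diagram missing its export

MISSING=$(find "$DIR" -name '*.drawio' | while read -r f; do
  [ -f "${f%.drawio}.png" ] || echo "MISSING PNG: $f"
done)
echo "$MISSING"                        # flags only edge.drawio
```

Running this as a pre-commit hook or CI step keeps incomplete diagrams out of the repository.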

---

## Embedding Diagrams in Wiki.js Pages

Always embed PNG or SVG, never live editors.

Example:

```markdown
![Core Network Diagram](/diagrams/network/core.png)

_Source file: core.drawio_
```

This ensures:
- Fast rendering
- Offline viewing
- No service dependency

---

## Git Workflow Expectations

**Authoring:**
- All pages are created and edited in Wiki.js
- Wiki.js commits changes automatically

**Offline Access:**
- Documentation is read directly from the Git clone
- Markdown and images must be sufficient without Wiki.js

**What Not To Do:**
- Do not create wiki pages directly in Git
- Do not rename paths outside Wiki.js
- Do not store diagrams only inside draw.io

---

## Lab-Down Access Model

When the lab is unavailable:

1. Open the local Git clone
2. Read `README.md`
3. Navigate to `emergency/`
4. View diagrams via `.png` files
5. Execute recovery steps

**No services are required.**

---

## README.md (Recommended Content)

The root `README.md` should contain:
- Purpose of the documentation
- Where to start during an outage
- Link list to emergency procedures
- High-level architecture notes

---

## Final Notes

This structure is designed to:
- Scale cleanly
- Survive outages
- Remain readable for years
- Support automation and GitOps workflows

**If documentation cannot be read when the lab is down, it is incomplete.**

This structure makes that impossible.
205
False Grimoire/Netgrimoire/Infrastructure/Docker_Template.md
Normal file

@ -0,0 +1,205 @@
---
title: Docker Template
description: Swarm and Compose Template
published: true
date: 2026-04-10T19:53:21.433Z
tags:
editor: markdown
dateCreated: 2026-04-10T19:53:21.433Z
---

# Docker Swarm Template Standard — Netgrimoire

## Template

```yaml
# Run with: docker stack deploy -c <service>.yaml <service>
services:
  <servicename>:
    image: <image>:latest
    environment:
      TZ: America/Chicago
    volumes:
      - /DockerVol/<servicename>:/config   # use WITH placement constraint
      # - /data/nfs/znas/Docker/<servicename>:/data   # use for bulk/data or no constraint
    networks:
      - netgrimoire
    deploy:
      restart_policy:
        condition: any
        delay: 5s
        max_attempts: 3
        window: 120s
      placement:
        constraints:
          - node.platform.arch != aarch64
          - node.platform.arch != arm
      labels:
        # --- Caddy ---
        caddy: <servicename>.netgrimoire.com
        caddy.reverse_proxy: <servicename>:<PORT>
        caddy.import: crowdsec
        caddy.import_1: authentik

        # --- Uptime Kuma ---
        kuma.<servicename>.http.name: <Service Name>
        kuma.<servicename>.http.url: https://<servicename>.netgrimoire.com

        # --- Homepage ---
        homepage.group: <Group>
        homepage.name: <Service Name>
        homepage.icon: <service>.png
        homepage.href: https://<servicename>.netgrimoire.com
        homepage.description: <Description>

        # --- DIUN ---
        diun.enable: "true"

networks:
  netgrimoire:
    external: true
```

---

## Rules

### Forbidden Fields

Never use these at the service level:
- `version:` — deprecated in Compose v2+
- `container_name:` — incompatible with Swarm replicas
- `restart:` — use `deploy.restart_policy` instead
- `depends_on:` — not supported in Swarm mode

### Volume Path Rules

| Path | When to Use |
|------|------------|
| `/DockerVol/<service>` | Config/state files. **Only valid when a placement constraint pins the service to a specific host.** |
| `/data/nfs/znas/Docker/<service>` | Data/bulk volumes, or any service without a specific hostname constraint |

### Caddy Labels

```yaml
caddy: service.netgrimoire.com
caddy.reverse_proxy: servicename:PORT
caddy.import: crowdsec
caddy.import_1: authentik
```

- `caddy.import_1` is required for the second import — duplicate YAML keys cause deploy errors
- `caddy.reverse_proxy` uses `servicename:PORT` (internal Docker DNS), never `{{upstreams PORT}}`
- No `https://` prefix on the `caddy:` address line
- Services managed by the Caddyfile (static config) must **not** also have Caddy Docker labels — mixing causes caddy-docker-proxy to merge them into a malformed upstream pool

### Networking

- Default VIP mode for services that publish ports
- `endpoint_mode: dnsrr` for internal-only services (no published ports) — avoids Swarm VIP stale DNS
- **Never use `dnsrr` on services with published ports** — incompatible with ingress mesh routing
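A sketch of the internal-only case (service name and image are placeholders):

```yaml
services:
  worker:                      # internal-only: no ports: section
    image: example/worker:latest
    networks:
      - netgrimoire
    deploy:
      endpoint_mode: dnsrr     # DNS round-robin instead of a Swarm VIP
```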

### Placement Constraints

**Architecture exclusions** (always include unless ARM-specific):

```yaml
constraints:
  - node.platform.arch != aarch64   # Pi 4 reports aarch64
  - node.platform.arch != arm       # Pi 3 reports arm
```

**Note:** Docker Swarm uses the kernel's arch string. A Pi 4 reports `aarch64`, not `arm64`, so the constraint `!= arm64` does **not** exclude Pi 4s.

Verified node architectures in Netgrimoire:

```
DockerPi1: linux/aarch64
docker3:   linux/x86_64
docker4:   linux/x86_64
docker5:   linux/x86_64
znas:      linux/x86_64
```

### Label Sections

Labels are organized with comment dividers in this order:

```yaml
labels:
  # --- Caddy ---
  # --- Uptime Kuma ---
  # --- Homepage ---
  # --- DIUN ---
```

Use map syntax (not list syntax) for labels. List syntax (`- key=value`) and map syntax (`key: value`) both work, but map syntax is the standard.

---

## Services Without a UI

Some services have no web interface and need no Caddy, Homepage, or Kuma labels:

**Example: DIUN**
```yaml
deploy:
  labels:
    # --- DIUN ---
    diun.enable: "true"
```

Services in this category: DIUN, background workers, one-shot jobs.

---

## Kuma Label Format

```yaml
kuma.<unique-id>.<monitor-type>.<field>: <value>
```

- `unique-id` — must be unique across all services in the entire Swarm
- `monitor-type` — `http`, `tcp`, `ping`, `dns`
- Common fields: `name`, `url`, `interval`, `maxretries`

Example:
```yaml
kuma.forgejo.http.name: Forgejo
kuma.forgejo.http.url: https://git.netgrimoire.com
```

---

## Homepage Label Format

```yaml
homepage.group: Group Name
homepage.name: Display Name
homepage.icon: icon.png
homepage.href: https://service.netgrimoire.com
homepage.description: Short description
```

For services with widgets:
```yaml
homepage.widget.type: radarr
homepage.widget.url: http://radarr:7878
homepage.widget.key: apikey
```

**Important:** Every `homepage.group` value must have a matching entry in `settings.yaml` with `style: column`, or the group will render full-width.
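A sketch of the matching `settings.yaml` entries (the group names here are placeholders; use your own `homepage.group` values):

```yaml
layout:
  Media:
    style: column
  Infrastructure:
    style: column
```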

---

## Secrets Approach

Docker Swarm native secrets are preferred. For services that support `_FILE` env vars:

```yaml
environment:
  MY_PASSWORD_FILE: /run/secrets/my_password
secrets:
  - my_password
```

Plain credentials in environment vars are acceptable as an interim approach for services that don't support `_FILE`.

**Never use `env_file:` in Swarm stacks** — it's read by the Docker client at deploy time from the deploying machine, not injected into the container at runtime. Use `environment:` directly.
216
False Grimoire/Netgrimoire/Infrastructure/Monitor.md
Normal file

@ -0,0 +1,216 @@
---
title: Monitors and Alerts
description: DIUN/NTFY on Netgrimoire
published: true
date: 2026-04-10T19:35:18.743Z
tags:
editor: markdown
dateCreated: 2026-04-10T19:35:18.743Z
---

# Notifications — Netgrimoire

## Overview

All Netgrimoire notifications route through a self-hosted ntfy instance at `https://ntfy.netgrimoire.com`. Topics are organized by service category.

## ntfy Topic Structure

| Topic | Services | Purpose |
|-------|----------|---------|
| `netgrimoire-diun` | DIUN | Docker image update notifications |
| `netgrimoire-media` | Sonarr, Radarr, SABnzbd | Download and media management events |
| `netgrimoire-backup` | Kopia | Backup completion and errors |
| `netgrimoire-alerts` | Prometheus/Alertmanager | Infrastructure alerts (future) |

Subscribe to topics at `https://ntfy.netgrimoire.com/<topic>` or via the ntfy mobile app.

---

## DIUN — Image Update Notifications

DIUN watches all Docker services for image updates and posts to `netgrimoire-diun`.

**Configuration** (`swarm/diun.yaml`):

```yaml
environment:
  DIUN_NOTIF_NTFY_ENDPOINT: https://ntfy.netgrimoire.com
  DIUN_NOTIF_NTFY_TOPIC: netgrimoire-diun
  DIUN_NOTIF_NTFY_PRIORITY: "3"
```

**Notes:**
- `PRIORITY` must be an integer (1–5); the string `"default"` causes a startup crash
- DIUN has no UI — no Caddy, Homepage, or Kuma labels needed
- Runs on the manager node only (needs full Swarm API access)
- Watch schedule: every 6 hours (`0 */6 * * *`)
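For reference, that schedule corresponds to DIUN's standard cron-style `DIUN_WATCH_SCHEDULE` variable:

```yaml
environment:
  DIUN_WATCH_SCHEDULE: "0 */6 * * *"   # top of the hour, every 6 hours
```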

---

## Sonarr — TV Download Notifications

Sonarr sends notifications via webhook to `netgrimoire-media`.

**Setup** (done via UI — not compose):

1. Settings → Connect → + → **Webhook**
2. Name: `ntfy`
3. URL: `https://ntfy.netgrimoire.com/netgrimoire-media`
4. Method: `POST`
5. Triggers: On Grab, On Download, On Upgrade, On Health Issue
6. Test → Save

---

## Radarr — Movie Download Notifications

Identical setup to Sonarr.

**Setup** (done via UI):

1. Settings → Connect → + → **Webhook**
2. Name: `ntfy`
3. URL: `https://ntfy.netgrimoire.com/netgrimoire-media`
4. Method: `POST`
5. Triggers: On Grab, On Download, On Upgrade, On Health Issue
6. Test → Save

---

## SABnzbd — Usenet Download Notifications

SABnzbd does not have native ntfy support. Notifications are handled via a custom shell script.

### Script Location

```
/data/nfs/znas/Docker/Sabnzbd/scripts/ntfy-notify.sh
```

Mounted into the container at `/config/scripts/ntfy-notify.sh`.

### Script

```bash
#!/bin/bash
# SABnzbd ntfy notification script
# SABnzbd passes: $1=Job name, $2=Final dir, $3=NZB file,
#                 $4=Category, $5=Group, $6=Status, $7=Fail message

NTFY_URL="https://ntfy.netgrimoire.com/netgrimoire-media"

JOB_NAME="$1"
STATUS_CODE="$6"
FAIL_MSG="$7"

case "$STATUS_CODE" in
  0) TITLE="✅ SABnzbd — Download Complete"
     MSG="$JOB_NAME"; PRIORITY=3 ;;
  1) TITLE="⚠️ SABnzbd — Post-Processing Error"
     MSG="$JOB_NAME — $FAIL_MSG"; PRIORITY=4 ;;
  2) TITLE="❌ SABnzbd — Download Failed"
     MSG="$JOB_NAME — $FAIL_MSG"; PRIORITY=5 ;;
  *) TITLE="ℹ️ SABnzbd — Notification"
     MSG="$JOB_NAME (status: $STATUS_CODE)"; PRIORITY=3 ;;
esac

curl -s \
  -H "Title: $TITLE" \
  -H "Priority: $PRIORITY" \
  -H "Tags: floppy_disk" \
  -d "$MSG" \
  "$NTFY_URL"

exit 0
```

### SABnzbd UI Setup

1. Config → Folders → **Post-Processing Scripts Folder** → set to `/config/scripts`
2. Config → Notifications → Notification Script section
3. Check **Enable notification script**
4. Script dropdown → select `ntfy-notify.sh`
5. Check: Job finished, Job failed, Warning, Error, Disk full
6. Test → Save

**Note:** The scripts folder must be configured under Config → Folders first or the script won't appear in the dropdown.

---

## Kopia — Backup Notifications

Kopia has no native webhook support. Notifications are handled via a cron script on znas that uses the Kopia CLI inside the Docker container.

### Script Location

```
/usr/local/bin/kopia-notify.sh
```

### How It Works

- Runs hourly via cron on znas
- Uses `docker exec` to run `kopia snapshot list --json` inside the container
- Parses the JSON output with Python to find snapshots completed in the last hour
- Posts a success or error notification to `netgrimoire-backup`
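The "completed in the last hour" step can be sketched roughly as below. The JSON field names (`endTime`, `source`, `stats`) are assumptions about kopia's output shape; verify them against `kopia snapshot list --json` on your version before relying on this:

```python
import json
from datetime import datetime, timedelta, timezone

# Stand-in for `docker exec ... kopia snapshot list --json` output;
# field names are assumed, not verified against a specific kopia version.
sample = json.dumps([
    {"source": {"host": "znas", "path": "/export/Docker"},
     "endTime": datetime.now(timezone.utc).isoformat(),
     "stats": {"totalSize": 2_400_000_000, "fileCount": 1234, "errorCount": 0}}
])

def recent_snapshots(raw, window=timedelta(hours=1)):
    """Return (target, stats) pairs for snapshots finished within `window`."""
    cutoff = datetime.now(timezone.utc) - window
    out = []
    for snap in json.loads(raw):
        if datetime.fromisoformat(snap["endTime"]) >= cutoff:
            src = snap["source"]
            out.append((f"{src['host']}:{src['path']}", snap["stats"]))
    return out

for target, stats in recent_snapshots(sample):
    state = "✅" if stats["errorCount"] == 0 else "❌"
    print(f"{state} {target} — {stats['fileCount']} files • "
          f"{stats['totalSize'] / 1e9:.1f} GB")
```

The real script would feed the output of `docker exec` into `recent_snapshots()` and `curl` the formatted line to `netgrimoire-backup`.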

### Cron Entry (znas root crontab)

```
0 * * * * /usr/local/bin/kopia-notify.sh
```

### Notification Format

**Success:** `✅ Kopia — Backup Complete`
```
host:path
N files • X.X GB
```

**Error:** `❌ Kopia — Backup Errors`
```
host:path
N error(s) • N files • X.X GB
```

### Kopia API Access

The Kopia API is accessible inside the container only. Direct host access via port 51515 does not work due to network routing. Use `docker exec` instead:

```bash
docker exec $(docker ps -q -f name=kopia_kopia) \
  kopia snapshot list --json
```

---

## ntfy Compose Reference

```yaml
# swarm/ntfy.yaml
services:
  ntfy:
    image: binwiederhier/ntfy
    command: serve
    user: "1964:1964"
    environment:
      TZ: America/Chicago
    volumes:
      - /data/nfs/znas/Docker/ntfy/cache:/var/cache/ntfy
      - /data/nfs/znas/Docker/ntfy/etc:/etc/ntfy
    ports:
      - 81:80
    networks:
      - netgrimoire
    deploy:
      labels:
        caddy: ntfy.netgrimoire.com
        caddy.reverse_proxy: ntfy:80
        caddy.import: crowdsec
        # Note: no authentik — ntfy must be publicly reachable
        # for external services to post notifications

networks:
  netgrimoire:
    external: true
```

**Note:** ntfy intentionally has no `caddy.import_1: authentik` — it must remain publicly accessible so external services (OPNsense CrowdSec plugin, Monit, etc.) can post to it without authentication.
174
False Grimoire/Netgrimoire/Netgrimoire_Theme.md
Normal file

@ -0,0 +1,174 @@
---
title: Documentation Style Guide
description: Applying a theme
published: true
date: 2026-02-25T21:32:16.786Z
tags:
editor: markdown
dateCreated: 2026-02-24T14:03:00.791Z
---

# Netgrimoire Theme — Wiki.js Implementation Guide

## What You're Getting

Two files to transform your Wiki.js library into the Netgrimoire aesthetic:

| File | Purpose |
|------|---------|
| `netgrimoire-theme.css` | Global site theme — dark background, teal glow, Cinzel headers, animated sidebar |
| `netgrimoire-hero-block.html` | Animated constellation hero banner for your library landing page |

---

## Part 1 — Apply the Global Theme CSS

This is the main transformation. It reskins the entire Wiki.js UI.

### Step 1: Open the Wiki.js Admin Panel

Navigate to your Wiki.js instance and go to:

```
Administration (gear icon) → Theme
```

### Step 2: Locate "Custom CSS"

On the Theme page, scroll down until you see the **"Custom CSS"** text area. It may be labelled "CSS Override" depending on your Wiki.js version.

### Step 3: Paste the CSS

Open `netgrimoire-theme.css`, select all (`Ctrl+A`), copy, and paste the entire contents into the Custom CSS field.

### Step 4: Apply

Click **"Apply"** or **"Save"** at the top or bottom of the Theme page. Wiki.js applies the CSS live — you do not need to restart the container.

### Step 5: Verify

Open your wiki in a new browser tab. You should see:

- Dark `#0a0d12` background
- Teal/cyan navigation links and headers
- Cinzel serif font on headings
- Glowing active sidebar item
- Teal-bordered code blocks and tables

**If styles are not applying**, do a hard refresh (`Ctrl+Shift+R`) to clear cached CSS.

---

## Part 2 — Add the Animated Hero Banner to Your Library Page

This places a live constellation animation at the top of your document library index page.

### Step 1: Open the Library Page for Editing

Navigate to your document library landing page and click **Edit** (pencil icon, top right).

### Step 2: Switch to Source / HTML Mode

In the Wiki.js editor toolbar, look for one of the following depending on your editor:

- **Markdown editor**: Click the `<>` or "Insert HTML Block" button
- **Visual editor (WYSIWYG)**: Look for a `< >` Source button, or Insert → HTML Block

### Step 3: Paste the Hero HTML

Open `netgrimoire-hero-block.html`, copy the full contents, and paste into the HTML block at the very top of your page, before any other content.

### Step 4: Save the Page

Click **Save**. The constellation animation will render automatically when the page loads.

### Step 5: Customize (Optional)

To change the banner title text, find this line in the HTML:

```html
>DOCUMENT LIBRARY</div>
```

Replace `DOCUMENT LIBRARY` with whatever you want (e.g., `THE GRIMOIRE`, `KNOWLEDGE VAULT`).

To change the subtitle:

```html
>Netgrimoire Knowledge Vault</div>
```

---

## Part 3 — Google Fonts (Internet Access Required)

The theme imports three fonts automatically via Google Fonts:

| Font | Used For |
|------|---------|
| Cinzel | Headers, nav section labels, card titles |
| Share Tech Mono | Code blocks, inline code, footer |
| Raleway | Body text, nav items, descriptions |

These load via an `@import` at the top of the CSS and require your browser to have internet access when loading the page. Since Netgrimoire is a local server, this means:

- **If your browser machine has internet**: Fonts load automatically — no action needed.
- **If fully air-gapped**: The fonts will fall back to system serif/monospace. To self-host, download the font files and serve them from your Forgejo or a local nginx path, then replace the `@import` line with `@font-face` blocks pointing to your local URLs.
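For the air-gapped case, a sketch of one replacement block. The URL is an example for a self-hosted location, and you would need one `@font-face` per font and weight actually used:

```css
/* Replaces the Google Fonts @import; one block per self-hosted font file. */
@font-face {
  font-family: 'Cinzel';
  font-style: normal;
  font-weight: 400;
  src: url('https://git.netgrimoire.com/fonts/cinzel-regular.woff2') format('woff2');
}
```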

---

## Part 4 — Fine-Tuning

### Adjusting the Teal Color

All colors are defined as CSS variables at the top of the CSS file. To shift the color tone, change `--ng-teal`:

```css
:root {
  --ng-teal: #00e5cc;  /* default — cyan-teal */
  /* try: #00cfff for more blue */
  /* try: #39ff14 for neon green */
  /* try: #bf5fff for purple arcane */
}
```

### Making the Background Darker

Adjust `--ng-bg-base` and `--ng-bg-deep`:

```css
:root {
  --ng-bg-base: #070a0e;  /* even darker */
  --ng-bg-deep: #030507;
}
```

### Constellation Node Count

In `netgrimoire-hero-block.html`, find:

```javascript
var NODE_COUNT = 55;
```

Increase for a denser network, decrease for a sparser, more minimal look.

---

## Troubleshooting

| Symptom | Fix |
|---------|-----|
| CSS not applying | Hard refresh (`Ctrl+Shift+R`); check for syntax errors in the CSS field |
| Fonts showing as Times New Roman | Browser lacks internet access; see Part 3 above |
| Hero animation not rendering | Check the browser console for JS errors; ensure the page saved the HTML block |
| Sidebar colors still white | Some Wiki.js versions use different class names; inspect with browser DevTools to identify which element needs targeting |
| Dark mode toggle fighting the theme | Wiki.js's built-in dark mode toggle may conflict — set it to Dark in Administration → Theme before applying custom CSS |

---

## Notes

- Wiki.js stores custom CSS in the database, so it survives container restarts.
- After updating Wiki.js, re-check the Theme page — major version upgrades occasionally reset the CSS field.
- The hero block is per-page; you can add it to any page you want the constellation effect on.
60
False Grimoire/Netgrimoire/Network/Port_Assignments.md
Normal file

@ -0,0 +1,60 @@
---
title: Port Assignments
description:
published: true
date: 2026-02-20T04:21:52.996Z
tags:
editor: markdown
dateCreated: 2026-01-27T03:42:58.945Z
---

# Physical Paths

|Device|IP|Room|Home Infra|DLink|TPLink|Closet|Inter Rack|Rack|Ubiquiti|
|------|--|----|----------|-----|------|------|----------|----|--------|
|DLink |5.2 |Office | |1| | | | |1 |
|ZNAS |5.10 | | |2| | | | | |
|Docker3 | | | |3| | | | | |
|Docker5 | | | |4| | | | | |
|DockerPi1 | | | |5| | | | | |
|DNS |5.7 | | |6| | | | | |
|Docker4 | | | | | | |W:7 |19|4 |
|Docker2 | |Office | | | | |W:5 |17|11|
|Time Machine| | | | | | |W:6 |18|12|
|Deco Satt | |Room 1 |1 | | | | | |15|
|Deco AP | |Office(E)|10-24| | |24|W:9 |21|20|
|TP Link | | | | |1|22|W:10|22|23|
|OPNsense |3.4 | | | | |23|W:11|23|24|
|OPNsense-Cox| | | | | | | | | |
| | | | | | | | | | |
| | |Room 2 |2 | | | | |2 | |
| | |Room 3 |3 | | | | |3 | |
| | |Living(E)|4 | | | | |4 | |
| | |Living(W)|5 | | | | |5 | |
| | |Family |6 | | | | |6 | |
| | |Pantry |7 | | | | |7 | |
| | |Room 4 |8 | | | | |8 | |
| | |Gym |9 | | | | |9 | |
| | |Office(S)|11 | | | | |11| |
| | |Office(W)|12 | | | | |12| |
| | |Office(W)|13 | | | | |13| |
| | |Office(W)|14 | | | | |14| |
| | |Office(W)|15 | | | | |15| |
| | |Office(W)|16 | | | | |16| |
| | |Office(N)|17 | | | | |17| |
| | |Office(N)|18 | | | | |18| |
| | |Office(N)|19 | | | | |19| |
| | |Office(N)|20 | | | | |20| |

Notes: For rooms, N/E/S/W are compass directions. For Inter Rack, W = wall, H = hallway.
522
False Grimoire/Netgrimoire/Network/Security/Caddy.md
Normal file

@ -0,0 +1,522 @@
---
title: Caddy Reverse Proxy
description: Current and future config
published: true
date: 2026-02-25T01:50:20.558Z
tags:
editor: markdown
dateCreated: 2026-02-23T22:09:16.106Z
---

# Caddy Reverse Proxy

**Host:** znas (Docker Swarm node)
**Internal IP:** 192.168.5.10
**Data Path:** `/export/Docker/caddy/`
**Networks:** `netgrimoire` (service network), `vpn`
**Ports:** 80 (mapped to host 8900), 443

---

## Overview

Caddy serves as the primary reverse proxy for all public and internal web services. It uses the `caddy-docker-proxy` pattern, which allows services to register themselves with Caddy by adding Docker labels to their compose files — no manual Caddyfile edits required per service.

Configuration is **hybrid**: some services are defined entirely via Docker labels, others are defined statically in the Caddyfile, and most use both (labels for routing, Caddyfile for shared snippets). The `caddy-docker-proxy` container merges both sources at runtime.

---

## Current State

### Image

```yaml
image: lucaslorentz/caddy-docker-proxy:ci-alpine
```

This image provides the Docker Proxy module only. It has no CrowdSec, GeoIP, or rate limiting built in.

### Docker Compose (`/export/Docker/caddy/docker-compose.yml`)

```yaml
configs:
  caddy-basic-content:
    file: ./Caddyfile
    labels:
      caddy:

services:
  caddy:
    image: lucaslorentz/caddy-docker-proxy:ci-alpine
    ports:
      - 8900:80
      - 443:443
    environment:
      - CADDY_INGRESS_NETWORKS=netgrimoire
    networks:
      - netgrimoire
      - vpn
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - /export/Docker/caddy/Caddyfile:/etc/caddy/Caddyfile
      - /export/Docker/caddy:/data
      #- /export/Docker/caddy/logs:/var/log/caddy   # Placeholder for CrowdSec log mount
    deploy:
      placement:
        constraints:
          - node.hostname == znas

networks:
  netgrimoire:
    external: true
  vpn:
    external: true
```

### Caddyfile (`/export/Docker/caddy/Caddyfile`)

The Caddyfile defines shared authentication snippets and static site blocks. These snippets are available to all services — including label-defined ones — via `import`.

```caddyfile
# ─────────────────────────────────────────────────────────────────────────────
# AUTH SNIPPETS
# ─────────────────────────────────────────────────────────────────────────────

(authentik) {
    route /outpost.goauthentik.io/* {
        reverse_proxy http://authentik:9000
    }

    forward_auth http://authentik:9000 {
        uri /outpost.goauthentik.io/auth/caddy
        header_up X-Forwarded-URI {http.request.uri}
        copy_headers X-Authentik-Username X-Authentik-Groups X-Authentik-Email \
            X-Authentik-Name X-Authentik-Uid X-Authentik-Jwt \
            X-Authentik-Meta-Jwks X-Authentik-Meta-Outpost X-Authentik-Meta-Provider \
            X-Authentik-Meta-App X-Authentik-Meta-Version
    }
}

(authelia) {
    forward_auth http://authelia:9091 {
        uri /api/verify?rd=https://login.wasted-bandwidth.net/
        copy_headers Remote-User Remote-Groups Remote-Email Remote-Name
    }
}

# ─────────────────────────────────────────────────────────────────────────────
# MAIL SNIPPETS
# ─────────────────────────────────────────────────────────────────────────────

(email-proxy) {
    redir https://mail.netgrimoire.com/sogo 301
}

(mailcow-proxy) {
    reverse_proxy nginx-mailcow:80
}

# ─────────────────────────────────────────────────────────────────────────────
# STATIC SITE BLOCKS — NETGRIMOIRE.COM
# ─────────────────────────────────────────────────────────────────────────────

cloud.netgrimoire.com {
    reverse_proxy http://nextcloud-aio-apache:11000
}

log.netgrimoire.com {
    reverse_proxy http://graylog:9000
}

win.netgrimoire.com {
    reverse_proxy http://192.168.5.10:8006
}

docker.netgrimoire.com {
    reverse_proxy http://portainer:9000
}

immich.netgrimoire.com {
    reverse_proxy http://192.168.5.10:2283
}

npm.netgrimoire.com {
    reverse_proxy http://librenms:8000
}

#jellyfin.netgrimoire.com {
#    reverse_proxy http://jellyfin:8096
#}

# ─────────────────────────────────────────────────────────────────────────────
# AUTHENTICATED — NETGRIMOIRE.COM
# ─────────────────────────────────────────────────────────────────────────────

dozzle.netgrimoire.com {
    import authentik
    reverse_proxy http://192.168.4.72:8043
}

dns.netgrimoire.com {
    import authentik
    reverse_proxy http://192.168.5.7:5380
}

webtop.netgrimoire.com {
    import authentik
    reverse_proxy http://webtop:3000
}

jackett.netgrimoire.com {
    import authentik
    reverse_proxy http://gluetun:9117
}

transmission.netgrimoire.com {
    import authentik
    reverse_proxy http://gluetun:9091
}

scrutiny.netgrimoire.com {
    import authentik
    reverse_proxy http://192.168.5.10:8081
}

# ─────────────────────────────────────────────────────────────────────────────
# AUTHENTICATED — WASTED-BANDWIDTH.NET (Authelia)
# ─────────────────────────────────────────────────────────────────────────────

stash.wasted-bandwidth.net {
    import authelia
    reverse_proxy http://192.168.5.10:9999
}

namer.wasted-bandwidth.net {
    import authelia
    reverse_proxy http://192.168.5.10:6980
}

# ─────────────────────────────────────────────────────────────────────────────
# PUBLIC — PNCHARRIS.COM / WASTED-BANDWIDTH.NET
# ─────────────────────────────────────────────────────────────────────────────

fish.pncharris.com {
    reverse_proxy http://web
}

www.wasted-bandwidth.net {
    reverse_proxy http://web
}

# ─────────────────────────────────────────────────────────────────────────────
# MAILCOW — MULTI-DOMAIN
# ─────────────────────────────────────────────────────────────────────────────

mail.netgrimoire.com, autodiscover.netgrimoire.com, autoconfig.netgrimoire.com, \
mail.wasted-bandwidth.net, autodiscover.wasted-bandwidth.net, autoconfig.wasted-bandwidth.net, \
mail.gnarlypandaproductions.com, autodiscover.gnarlypandaproductions.com, autoconfig.gnarlypandaproductions.com, \
mail.pncfishandmore.com, autodiscover.pncfishandmore.com, autoconfig.pncfishandmore.com, \
mail.pncharrisenterprises.com, autodiscover.pncharrisenterprises.com, autoconfig.pncharrisenterprises.com, \
mail.pncharris.com, autodiscover.pncharris.com, autoconfig.pncharris.com, \
mail.florosafd.org, autodiscover.florosafd.org, autoconfig.florosafd.org {
    import mailcow-proxy
}
```
|
||||
|
||||
### Docker Label Pattern (label-defined services)
|
||||
|
||||
Services not in the Caddyfile are registered via labels on their own containers. The snippet defined in the Caddyfile is available to them via `caddy.import`:
|
||||
|
||||
```yaml
|
||||
labels:
|
||||
- caddy=homepage.netgrimoire.com
|
||||
- caddy.import=authentik
|
||||
- caddy.reverse_proxy={{upstreams 3000}}
|
||||
```
|
||||
|
||||
For services that need no auth:
|
||||
```yaml
|
||||
labels:
|
||||
- caddy=myservice.netgrimoire.com
|
||||
- caddy.reverse_proxy={{upstreams 8080}}
|
||||
```

---

## Authentication Layers

Two identity proxies are in use, each serving different domains and use cases:

| Provider | Domain Pattern | Snippet |
|----------|----------------|---------|
| Authentik | `*.netgrimoire.com` internal tools | `import authentik` |
| Authelia | `*.wasted-bandwidth.net` | `import authelia` |

Services without an auth import are either public (e.g. `fish.pncharris.com`) or carry their own authentication (e.g. Nextcloud, Graylog, Portainer).

---

## Current Security Posture

CrowdSec protection currently exists only at the **OPNsense firewall level** — IP reputation blocking before traffic reaches Caddy. CrowdSec does not inspect HTTP traffic at the application layer. This means:

- Known-bad IPs are blocked at the perimeter
- Application-layer attacks (SQLi in URLs, malicious paths, bad user agents, brute force against specific endpoints) are not blocked at the Caddy level
- Services behind Authentik/Authelia have an additional protection layer; unauthenticated public services do not

---

## Future State: CrowdSec + GeoIP + Rate Limiting

### Target Image

```yaml
image: ghcr.io/serfriz/caddy-crowdsec-geoip-ratelimit-security-dockerproxy:latest
```

This is a drop-in replacement for `lucaslorentz/caddy-docker-proxy`. All existing Docker labels and Caddyfile site blocks continue to work unchanged. The image is rebuilt automatically each month and on every Caddy release — no custom image maintenance required.

**Included modules:**
- `caddy-docker-proxy` — same label-based config as current
- `caddy-crowdsec-bouncer` — inline HTTP blocking based on CrowdSec decisions
- `caddy-geoip` — GeoIP filtering at the application layer
- `caddy-ratelimit` — per-endpoint rate limiting
- `caddy-security` — additional auth/security middleware

### Updated Compose

```yaml
configs:
  caddy-basic-content:
    file: ./Caddyfile
    labels:
      caddy:

services:
  caddy:
    image: ghcr.io/serfriz/caddy-crowdsec-geoip-ratelimit-security-dockerproxy:latest
    ports:
      - 8900:80
      - 443:443
    environment:
      - CADDY_INGRESS_NETWORKS=netgrimoire
      - CADDY_DOCKER_EVENT_THROTTLE_INTERVAL=2000 # Prevents non-deterministic reload with CrowdSec module
      - CROWDSEC_API_KEY=BYSLg/wKOa7wlHYzChJpBVJA06Ukc7G6fKJCvBwjyZg
    networks:
      - netgrimoire
      - vpn
      - crowdsec_net
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - /export/Docker/caddy/Caddyfile:/etc/caddy/Caddyfile
      - /export/Docker/caddy:/data
      - caddy-logs:/var/log/caddy
    deploy:
      placement:
        constraints:
          - node.hostname == znas

  crowdsec:
    image: crowdsecurity/crowdsec
    restart: unless-stopped
    environment:
      COLLECTIONS: "crowdsecurity/caddy crowdsecurity/http-cve crowdsecurity/whitelist-good-actors"
      BOUNCER_KEY_CADDY: BYSLg/wKOa7wlHYzChJpBVJA06Ukc7G6fKJCvBwjyZg # Pre-registers the Caddy bouncer automatically
    volumes:
      - crowdsec-db:/var/lib/crowdsec/data
      - ./crowdsec/acquis.yaml:/etc/crowdsec/acquis.yaml
      - caddy-logs:/var/log/caddy:ro
    networks:
      - crowdsec_net
    deploy:
      placement:
        constraints:
          - node.hostname == znas

volumes:
  caddy-logs:
  crowdsec-db:

networks:
  netgrimoire:
    external: true
  vpn:
    external: true
  crowdsec_net:
    driver: overlay # Swarm overlay network
```

### CrowdSec Log Acquisition (`./crowdsec/acquis.yaml`)

```yaml
filenames:
  - /var/log/caddy/access.log
labels:
  type: caddy
```

### Environment File (`.env`)

```env
CROWDSEC_API_KEY=<generate-with-cscli-or-set-before-first-boot>
```

The `BOUNCER_KEY_CADDY` env var in the CrowdSec container pre-registers the bouncer key at startup. Set the same value in `.env` as `CROWDSEC_API_KEY` and both sides will be in sync on first boot — no need to run `cscli bouncers add` manually.
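
A minimal sketch of that first-boot sync, assuming the compose file is changed to read the key from `.env` (e.g. `CROWDSEC_API_KEY=${CROWDSEC_API_KEY}`) rather than hardcoding it:

```shell
# Generate one random key and write it to .env so the Caddy bouncer and
# CrowdSec's BOUNCER_KEY_CADDY agree on first boot.
KEY=$(openssl rand -hex 32)
printf 'CROWDSEC_API_KEY=%s\n' "$KEY" > .env
chmod 600 .env   # the key grants decision-API access; keep it out of git too
```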

### Updated Caddyfile Additions

Add a global block at the top of the Caddyfile and a new `crowdsec` snippet. All other existing content remains unchanged.

```caddyfile
# ─────────────────────────────────────────────────────────────────────────────
# GLOBAL BLOCK — add this at the very top, before any snippets
# ─────────────────────────────────────────────────────────────────────────────

{
    crowdsec {
        api_url http://crowdsec:8080
        api_key {$CROWDSEC_API_KEY}
    }
    log {
        output file /var/log/caddy/access.log {
            roll_size 50mb
            roll_keep 5
        }
        format json
    }
}

# ─────────────────────────────────────────────────────────────────────────────
# CROWDSEC SNIPPET — add alongside the existing auth snippets
# ─────────────────────────────────────────────────────────────────────────────

(crowdsec) {
    route {
        crowdsec
    }
}
```
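
The bundled `caddy-ratelimit` module is not wired up anywhere yet. If per-endpoint rate limiting is wanted later, a snippet in the same style might look like the following. This is a sketch based on the upstream `mholt/caddy-ratelimit` Caddyfile syntax; the zone name and limits are illustrative placeholders, not tuned values:

```caddyfile
(ratelimit) {
    rate_limit {
        zone per_ip {
            key    {remote_host}
            events 100
            window 1m
        }
    }
}
```

Sites would then opt in with `import ratelimit`, exactly like the `crowdsec` and auth snippets.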

### Applying CrowdSec to Existing Services

Once the snippet exists, add `import crowdsec` to site blocks and container labels. This is a **gradual rollout** — services without it remain fully functional, just without Caddy-level CrowdSec inspection (they still have OPNsense perimeter protection).

**In the Caddyfile:**
```caddyfile
# Before
cloud.netgrimoire.com {
    reverse_proxy http://nextcloud-aio-apache:11000
}

# After
cloud.netgrimoire.com {
    import crowdsec
    reverse_proxy http://nextcloud-aio-apache:11000
}

# With auth
dozzle.netgrimoire.com {
    import crowdsec
    import authentik
    reverse_proxy http://192.168.4.72:8043
}
```

**In Docker labels:**
```yaml
labels:
  - caddy=homepage.netgrimoire.com
  # Docker labels are a key/value map, so two "caddy.import" entries would
  # collapse to one. caddy-docker-proxy strips a trailing "_<n>" suffix,
  # which lets the directive repeat:
  - caddy.import_0=crowdsec
  - caddy.import_1=authentik
  - caddy.reverse_proxy={{upstreams 3000}}
```

### CrowdSec Rollout Priority

Roll out `import crowdsec` in this order, based on risk exposure:

**High priority — do first (public, no auth):**
- `cloud.netgrimoire.com` (Nextcloud)
- `immich.netgrimoire.com`
- `docker.netgrimoire.com` (Portainer)
- `fish.pncharris.com`
- `www.wasted-bandwidth.net`

**Medium priority — high value behind auth:**
- `log.netgrimoire.com` (Graylog)
- `win.netgrimoire.com` (Proxmox)
- All `dozzle`, `dns`, `webtop`, `jackett`, `transmission`, `scrutiny`

**Lower priority — already protected by Authelia/Authentik:**
- `stash.wasted-bandwidth.net`
- `namer.wasted-bandwidth.net`
- All label-defined services behind auth

**Skip:**
- Mailcow block — handled by nginx-mailcow, different threat model

### Behavior if CrowdSec Container Goes Down

The bouncer **fails open** by default. If `crowdsec` is unreachable, Caddy continues serving traffic normally — enforcement is temporarily suspended but the site stays up. This is the safe default for a homelab. To change this behavior, set `enable_hard_fails true` in the global crowdsec block, which will return 500 errors while CrowdSec is down — not recommended for a homelab.

---

## Bootstrap Steps

When ready to migrate to the new image:

**Step 1 — Add the CrowdSec global block and snippet to the Caddyfile** before changing the image. This ensures the Caddyfile is valid for the new image on startup.

**Step 2 — Create `./crowdsec/acquis.yaml`** with the content above.

**Step 3 — Create `.env`** with a strong random value for `CROWDSEC_API_KEY`:
```bash
openssl rand -hex 32
```

**Step 4 — Update the image and add the CrowdSec service to the compose file**, then redeploy:
```bash
docker stack deploy -c docker-compose.yml caddy
```

**Step 5 — Verify CrowdSec is reading Caddy logs:**
```bash
docker exec <crowdsec_container> cscli metrics
```
Look for the `Acquisition Metrics` table showing hits from `/var/log/caddy/access.log`.

**Step 6 — Test a ban manually** (ban the public IP of a machine you can test from, then verify from that machine; `--resolve` in the original note would point curl *at* the banned IP rather than simulate traffic *from* it):
```bash
docker exec <crowdsec_container> cscli decisions add --ip <your_test_ip> --duration 5m
# From the machine behind <your_test_ip>, verify Caddy now returns 403
curl -I https://yoursite.com
# Lift the ban
docker exec <crowdsec_container> cscli decisions delete --ip <your_test_ip>
```

**Step 7 — Gradually add `import crowdsec`** to site blocks and labels per the priority order above.

---

## File Layout

```
/export/Docker/caddy/
├── Caddyfile            # Shared snippets and static site blocks
├── docker-compose.yml   # Caddy + CrowdSec services
├── .env                 # CROWDSEC_API_KEY (future)
├── data/                # Caddy data volume (TLS certs, etc.)
├── logs/                # caddy-logs volume mount point (future)
└── crowdsec/
    └── acquis.yaml      # Tells CrowdSec where to read Caddy logs (future)
```

---

## Known Issues / Notes

- Port 80 is mapped to host port 8900 — this is intentional for Swarm. OPNsense NAT handles the external 80→8900 translation.
- The `CADDY_DOCKER_EVENT_THROTTLE_INTERVAL=2000` setting is **required** with the CrowdSec module to prevent non-deterministic domain-matching behavior during container label reloads (see [issue #61](https://github.com/hslatman/caddy-crowdsec-bouncer/issues/61)).
- Jellyfin is commented out in the Caddyfile — likely served via a different path or disabled temporarily.
- The `web` upstream referenced by `fish.pncharris.com` and `www.wasted-bandwidth.net` resolves to a container named `web` on the `netgrimoire` network.
- The Authelia redirect URL is `https://login.wasted-bandwidth.net/` — update if this changes.
- The serfriz image is rebuilt on the **1st of each month** for module updates, and on every new Caddy release. Force a module update by recreating the container: `docker service update --force caddy_caddy`.
212
False Grimoire/Netgrimoire/Network/Security/OPnSense_IDS.md
Normal file

@ -0,0 +1,212 @@
---
title: OpnSense-IDS/IPS
description: IDS
published: true
date: 2026-02-23T21:51:49.920Z
tags:
editor: markdown
dateCreated: 2026-02-23T21:49:16.861Z
---

# Suricata IDS/IPS

**Service:** Suricata Intrusion Detection & Prevention System
**Host:** OPNsense firewall
**Interfaces:** ATT (opt1) — add WAN (igc0) while still active
**Mode:** IPS (inline blocking)
**Rulesets:** ET Open, Feodo Tracker, Abuse.ch SSL

---

## Overview

Suricata is OPNsense's built-in deep packet inspection engine. Unlike CrowdSec (which blocks based on IP reputation) and GeoIP (which blocks by country), Suricata inspects the **content** of traffic — detecting exploit patterns, malware C2 communication, vulnerability scans, and known CVE exploitation attempts in real time.

These layers complement each other and do not overlap:

| Layer | Tool | What It Stops |
|---|---|---|
| IP reputation | CrowdSec | Known bad IPs from community threat intel |
| Geography | GeoIP | Traffic from blocked countries |
| Content inspection | Suricata | Malicious payloads, exploit patterns, C2 traffic |

Suricata uses **Netmap** for high-performance inline packet processing with minimal CPU overhead.

> ⚠ **Before enabling IPS mode:** Disable hardware offloading on your interfaces or Netmap will not function correctly. This is done in **Interfaces → Settings**.

---

## Pre-requisite: Disable Hardware Offloading

1. Go to **Interfaces → Settings**
2. Disable the following options:
   - Hardware CRC
   - Hardware TSO
   - Hardware LRO
   - VLAN Hardware Filtering
3. Click **Save**
4. Reboot the firewall

> ✓ This is a one-time change. It has no meaningful performance impact for home/small-business use and is required for Suricata IPS mode to function.

---

## Installation

Suricata is built into OPNsense — no plugin install required. Navigate directly to:

**Services → Intrusion Detection → Administration**

---

## Configuration

### Step 1 — General Settings

Navigate to **Services → Intrusion Detection → Administration**

| Setting | Value | Notes |
|---|---|---|
| Enabled | ✓ | Turns on the IDS/IPS engine |
| IPS Mode | ✓ | Enables inline blocking (not just alerting) |
| Promiscuous Mode | Leave default | Only needed for mirrored-traffic setups |
| Default Packet Size | Leave default | Auto-detected |
| Interfaces | ATT, WAN | Add both while dual-WAN is active; remove WAN after migration |
| Home Networks | 192.168.3.0/24, 192.168.5.0/24, 192.168.32.0/24 | Your internal subnets — critical for rule accuracy |
| Log Level | Info | |
| Log Retention | 7 days | Adjust based on disk space |

> ⚠ **Home Networks is critical.** Suricata rules use `$HOME_NET` and `$EXTERNAL_NET` to determine direction. If your internal subnets are not listed here, many rules will fail to trigger correctly or will produce false positives.

Click **Apply** after setting these values.
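
For reference, those UI values map onto Suricata's own variable syntax roughly like this (illustrative only; OPNsense generates `suricata.yaml` from the UI, so the file should not be hand-edited):

```yaml
vars:
  address-groups:
    HOME_NET: "[192.168.3.0/24,192.168.5.0/24,192.168.32.0/24]"
    EXTERNAL_NET: "!$HOME_NET"
```

Rules written against `$HOME_NET`/`$EXTERNAL_NET` derive their direction from these groups, which is why an incomplete list breaks detection.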

### Step 2 — Download Rulesets

Navigate to **Services → Intrusion Detection → Download**

Enable the following rulesets:

| Ruleset | Provider | Priority | Notes |
|---|---|---|---|
| ET Open | Proofpoint Emerging Threats | 🔴 Essential | Comprehensive free ruleset — 40,000+ rules covering exploits, malware, scanning, C2 |
| Abuse.ch SSL Blacklist | Abuse.ch | 🔴 Essential | Blocks connections to malicious SSL certificates used by malware |
| Feodo Tracker Botnet | Abuse.ch | 🔴 Essential | Blocks botnet C2 IP communication |
| OSIF | OPNsense | 🟡 Recommended | OPNsense internal feed |
| PT Research | Positive Technologies | 🟡 Recommended | Additional threat intelligence |

To enable each ruleset:
1. Find it in the list
2. Toggle the **Enabled** switch
3. Click **Download & Update Rules** at the top of the page

> ✓ ET Open is the most important ruleset. It is maintained by Proofpoint, updated daily, and covers the vast majority of common attack patterns you will encounter.

### Step 3 — Configure Policies

Policies control what Suricata does when a rule matches — alert only, or drop the packet.

Navigate to **Services → Intrusion Detection → Policy**

**Recommended policy setup:**

Add the following policies in order:

**Policy 1 — Drop high-severity ET threats**

| Field | Value |
|---|---|
| Description | Drop ET High Severity |
| Priority | 1 |
| Rulesets | ET Open |
| Action | Drop |
| Severity | ≥ High |

**Policy 2 — Alert on medium severity (tuning period)**

| Field | Value |
|---|---|
| Description | Alert ET Medium |
| Priority | 2 |
| Rulesets | ET Open |
| Action | Alert |
| Severity | Medium |

**Policy 3 — Drop all Feodo/Abuse.ch matches**

| Field | Value |
|---|---|
| Description | Drop Botnet C2 and SSL Blacklist |
| Priority | 1 |
| Rulesets | Feodo Tracker, Abuse.ch SSL |
| Action | Drop |
| Severity | Any |

> ✓ Start with medium-severity rules in **alert** mode for the first 1–2 weeks. Review alerts in the log for false positives before switching to drop. High-severity rules and the Abuse.ch lists are safe to drop immediately.

### Step 4 — Apply and Verify

1. Click **Apply** on the Administration tab
2. Navigate to **Services → Intrusion Detection → Alerts**
3. Wait a few minutes — alerts should begin populating
4. Check **Services → Intrusion Detection → Stats** to confirm traffic is being processed

---

## Tuning & False Positives

After running in alert mode for a week, review the Alerts tab. Common false positives in home-lab environments include:

- **Nextcloud sync traffic** — may trigger file-transfer rules
- **Torrents/P2P** — will trigger multiple ET rules by design
- **Internal port-scanning tools** — Nmap from internal hosts triggers scan rules

To downgrade a false-positive rule without disabling it entirely:

1. Note the rule SID from the alert
2. Go to **Services → Intrusion Detection → Rules**
3. Search for the SID
4. Change the rule action to **Alert** (instead of Drop) for that specific rule

Alternatively, add a suppression in **Services → Intrusion Detection → Suppressions**:
- Enter the SID
- Set the direction (source or destination)
- Enter the IP to suppress for that rule
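
Under the hood a suppression is a standard Suricata threshold entry. A hand-written equivalent, with a placeholder SID and the internal DNS host as an example source, looks like:

```
suppress gen_id 1, sig_id <SID_FROM_ALERT>, track by_src, ip 192.168.5.7
```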

---

## Monitoring

### Alert Dashboard

**Services → Intrusion Detection → Alerts** — real-time view of matched rules.

Useful filters:
- Filter by `severity: high` to see the most critical events
- Filter by `action: drop` to see what is being actively blocked
- Filter by source IP to investigate a specific host

### Graylog Integration

Forward Suricata alerts to Graylog for centralized analysis:

1. Suricata logs to `/var/log/suricata/eve.json` in EVE JSON format
2. In Graylog, add a **Beats input** or **Syslog UDP input**
3. In OPNsense **System → Settings → Logging → Remote**, add Graylog as a syslog target
4. Create a Graylog stream filtering on `application_name: suricata`
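
Before wiring up Graylog, the EVE stream can be sanity-checked from the firewall shell. A quick filter over recent alert events (assumes `jq` is installed; the field names follow Suricata's EVE alert schema):

```shell
# One line per alert: timestamp, flow endpoints, and the matching signature.
tail -n 1000 /var/log/suricata/eve.json \
  | jq -r 'select(.event_type == "alert") | "\(.timestamp) \(.src_ip) -> \(.dest_ip) \(.alert.signature)"'
```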

---

## Key Files & Paths

| Path | Purpose |
|---|---|
| `/var/log/suricata/eve.json` | EVE JSON alert log — used by Graylog |
| `/var/log/suricata/stats.log` | Performance statistics |
| `/usr/local/etc/suricata/suricata.yaml` | Main config (managed by the OPNsense UI) |
| `/usr/local/share/suricata/rules/` | Downloaded rulesets |

---

## Related Documentation

- [OPNsense Firewall](./opnsense-firewall) — parent firewall documentation
- [CrowdSec](./crowdsec) — complementary IP reputation layer
- [Additional Blocklists](./opnsense-blocklists) — Feodo, Abuse.ch, ET IP blocklists at the firewall level
- [Graylog](./graylog) — centralized log target for Suricata alerts

@ -0,0 +1,159 @@
---
title: OpnSense - App Protection
description: App Inspection
published: true
date: 2026-02-23T21:52:43.630Z
tags:
editor: markdown
dateCreated: 2026-02-23T21:50:37.324Z
---

# Zenarmor (NGFW)

**Service:** Zenarmor Next-Generation Firewall
**Plugin:** os-sunnyvalley
**Tier:** Free Edition
**Host:** OPNsense firewall

---

## Overview

Zenarmor adds application-layer awareness and web filtering that the base OPNsense firewall does not provide. Where Suricata inspects packet content for known threat signatures, Zenarmor identifies **which application or service** is generating traffic and can block or allow on that basis — regardless of port.

| Feature | Free Tier | Paid Tier |
|---|---|---|
| Layer-7 app identification | ✓ | ✓ |
| Web category filtering | Default policy only | Custom policies |
| Malware/phishing blocking | ✓ | ✓ |
| Real-time network analytics | ✓ | ✓ |
| Device tracking & alerts | ✗ | ✓ |
| Multiple policies | ✗ | ✓ |
| TLS inspection | ✗ | ✓ |

The free tier is useful primarily for **visibility** (seeing what applications are running on your network) and **basic threat blocking** (malware, phishing, PUP domains). The analytics dashboard alone makes it worthwhile.

> ✓ Zenarmor and Suricata can run simultaneously. They operate at different layers and do not conflict. Zenarmor handles application identity; Suricata handles content signatures.

> ⚠ **MongoDB deprecation note:** As of September 2025, MongoDB is deprecated as the Zenarmor database backend. Use **SQLite** when prompted during setup — it is the supported path going forward.

---

## Installation

### Step 1 — Install the Plugin

1. Go to **System → Firmware → Plugins**
2. Search for `os-sunnyvalley`
3. Click the **+** install button
4. Wait for installation to complete
5. **Refresh the browser** — a new **Zenarmor** menu item will appear in the sidebar

### Step 2 — Initial Setup Wizard

Navigate to **Zenarmor → Dashboard** — this launches the setup wizard on first run.

**Deployment Mode:** Select **Routed Mode (L3)** for standard OPNsense setups. This is correct for your configuration.

**Database:** Select **SQLite** — do not select MongoDB (deprecated September 2025).

**Interface:** Select **ATT (opt1)** as the primary interface. Add **WAN (igc0)** while dual-WAN is still active.

> ⚠ Zenarmor should be applied to the **LAN-facing side** of the firewall for internal traffic inspection, or the **WAN-facing side** for inbound threat blocking. For your setup, applying it to both ATT and LAN gives the most coverage.

**Cloud Connectivity:** Leave enabled — Zenarmor uses cloud-based category lookups for web filtering. This can be disabled for fully offline operation, but web filtering accuracy degrades significantly.

Click **Complete** to finish the wizard.

---

## Configuration

### Step 3 — Security Policy

Navigate to **Zenarmor → Security**

Enable the following threat categories in the default policy:

| Category | Action | Notes |
|---|---|---|
| Malware | Block | Domains known to serve malware |
| Phishing | Block | Credential-harvesting sites |
| Botnet | Block | C2 communication |
| PUP/Adware | Block | Potentially unwanted programs |
| SPAM Sources | Block | Known spam infrastructure |
| Parked Domains | Block | Often used for malicious redirects |

Leave the following as **Alert** initially (review before blocking):
- Anonymizers / Proxies — may block legitimate VPN services
- Peer-to-peer — may affect legitimate use cases

### Step 4 — Application Control

Navigate to **Zenarmor → Policies → Application Control**

The free tier allows one default policy. Useful applications to consider blocking or monitoring:

| Application Category | Recommendation | Reason |
|---|---|---|
| Cryptocurrency mining | Block | Resource theft if unauthorized |
| Remote access tools (unknown) | Alert | Unexpected remote tools are a red flag |
| Tor | Alert | Monitor — may be legitimate or evasion |
| Anonymous proxies | Block | Bypass attempts |

### Step 5 — Web Filtering

Navigate to **Zenarmor → Policies → Web Controls**

In the free tier, the default policy controls all web filtering. Recommended categories to block:

| Category | Action |
|---|---|
| Malware sites | Block |
| Phishing | Block |
| Hacking / exploit sites | Block |
| Illegal content | Block |

Enable **Safe Search enforcement** if desired — it forces Google, Bing, and YouTube into safe-search mode network-wide.

---

## Dashboard & Analytics

Navigate to **Zenarmor → Dashboard**

The dashboard provides real-time visibility into:
- **Top talkers** — which internal hosts generate the most traffic
- **Top applications** — what services are being used
- **Blocked threats** — real-time feed of blocked requests
- **Bandwidth usage** — per host and per application

This is the primary value of the free tier — even without advanced policy control, the visibility into what is running on your network is significant.

Navigate to **Zenarmor → Reports** for historical analysis and trend data.

---

## Performance Notes

Zenarmor uses deep packet inspection, which adds some CPU overhead. On modern hardware (anything with i226-V NICs) this is negligible at home-lab traffic volumes. Monitor CPU usage in **Zenarmor → Dashboard → System** after enabling.

If performance degrades, limit Zenarmor to specific interfaces rather than all interfaces.

---

## Known Limitations (Free Tier)

- Only one web filtering policy — all devices get the same rules
- No per-device or per-group policies
- No TLS/SSL inspection — encrypted traffic is identified by SNI only
- No device inventory or unknown-device alerts
- Web category database is cloud-dependent

---

## Related Documentation

- [OPNsense Firewall](./opnsense-firewall) — parent firewall documentation
- [Suricata IDS/IPS](./suricata-ids-ips) — complementary content inspection layer
- [CrowdSec](./crowdsec) — IP reputation layer
508
False Grimoire/Netgrimoire/Network/Security/OpnSense_Firewall.md
Normal file

@ -0,0 +1,508 @@
---
title: OpnSense
description: Grimoire Firewall Configuration
published: true
date: 2026-02-23T21:31:26.008Z
tags:
editor: markdown
dateCreated: 2026-02-23T21:31:15.244Z
---

# OPNsense Firewall

**Host:** OPNsense.localdomain
**Timezone:** America/Chicago
**Documented:** February 23, 2026
**Status:** Active — AT&T migration in progress

---

## Overview

The network perimeter is protected by an OPNsense firewall running on dedicated hardware with four physical Intel i226-V NICs (igc0–igc3). The firewall operates in a dual-WAN configuration during the transition from the legacy ISP to AT&T fiber, with AT&T becoming the permanent primary WAN. CrowdSec threat intelligence, GeoIP blocking, and Spamhaus DROP/EDROP lists provide layered perimeter security.

---

## Hardware & System

| Parameter | Value |
|---|---|
| Hostname | OPNsense |
| Domain | localdomain |
| Timezone | America/Chicago |
| Language | en_US |
| NAT Outbound Mode | Hybrid |
| System DNS | 8.8.8.8 (Google) — see DNS notes |
| DNS Allow Override | Enabled |
| SSH | Enabled (port 22) |
| Console Menu | Disabled (hardened) |

> ⚠ **DNS Note:** The system upstream DNS is set to 8.8.8.8. If dnscrypt-proxy or Unbound is configured, this should be updated to point to localhost or the internal DNS resolver (192.168.5.7). Review before enabling encrypted DNS.

---

## Network Interfaces

| Interface | Label | Physical NIC | IP Address | Role |
|---|---|---|---|---|
| wan | WAN | igc0 | 24.249.193.114/28 | Legacy primary WAN — being retired |
| opt1 | ATT | igc1 | 107.133.34.145/28 | New primary WAN — AT&T fiber |
| lan | LAN | igc3 | 192.168.3.4/29 | Internal LAN management segment |
| opt3 | OPT3 | igc2 | DHCP | Unassigned — spare interface |
| opt2 / wg1 | WG1 | wg1 (virtual) | WireGuard tunnel | WireGuard VPN interface |
| openvpn | OpenVPN | virtual | Tunnel only | OpenVPN (server + client configured) |
| lo0 | Loopback | lo0 | 127.0.0.1/8 | System loopback |

> ⚠ **OPT3 (igc2)** is on DHCP and currently unassigned. Disable this interface or assign it a role to reduce unnecessary attack surface.

---

## Gateways & Routing

### Active Gateways

| Gateway Name | Interface | IP | Role |
|---|---|---|---|
| WAN_DefRoute | wan (igc0) | 24.249.193.114 | Legacy default route — being retired |
| ATT | opt1 (igc1) | 107.133.34.145 | AT&T — becoming primary |
| LAN_GWv4 | lan (igc3) | 192.168.3.4 | LAN gateway |

### NAT Outbound Rules

Outbound NAT runs in **Hybrid** mode — automatic rules supplemented by the manual overrides below.

| Interface | Source | NAT Target | Purpose |
|---|---|---|---|
| opt1 (ATT) | ATT_Out_1 group | opt1ip | Dad's Laptop + 192.168.5.128/25 out ATT |
| wan | MailCow_Ngnx (192.168.5.16) | 24.249.193.115 | Mail server — dedicated WAN IP |
| wan | PNCHarris_Internal | wanip | Internal subnets egress |
| wan | WireGuard (opt2) | — | WireGuard outbound NAT |

> ✓ The mail server already has a dedicated outbound IP (24.249.193.115) on WAN. This pattern should be replicated on ATT using a dedicated virtual IP from the static block.

---
## Firewall Aliases
|
||||
|
||||
### Host Aliases
|
||||
|
||||
| Alias | IP Address | Used For |
|
||||
|---|---|---|
|
||||
| caddy | 192.168.5.10 | Caddy reverse proxy |
|
||||
| MailCow_Ngnx | 192.168.5.16 | MailCow nginx container |
|
||||
| JellyFin_Host | 192.168.5.18 | Jellyfin media server |
|
||||
| ISPConfig_Host | 192.168.4.11 | ISPConfig control panel |
|
||||
| Dads_Laptop | 192.168.5.176 | Routed out ATT interface |
|
||||
|
||||
### Network Aliases
|
||||
|
||||
| Alias | Value | Used For |
|
||||
|---|---|---|
|
||||
| PNCHarris_Internal | 192.168.5.0/25, 192.168.3.0/24 | Primary internal subnets |
|
||||
| Subnet_5_128_Mask_25 | 192.168.5.128/25 | Upper half of 192.168.5.x |
|
||||
| ATT_Out_1 | Dads_Laptop + Subnet_5_128_Mask_25 | Traffic routed out ATT interface |
|
||||
| Family_Subnet | (empty) | Defined but unpopulated |
|
||||
|
||||
### Port Aliases
|
||||
|
||||
| Alias | Ports | Used For |
|
||||
|---|---|---|
|
||||
| Web_Services | 80, 443 | HTTP/HTTPS |
|
||||
| MailCow | 25, 110, 143, 465, 587, 993, 995, 4190 | Full MailCow mail protocol suite |
|
||||
| ISPConfig | 25, 53, 143, 465, 587, 993, 995, 8080 | ISPConfig mail + DNS + admin |
|
||||
| JellyFin_Port | 8096, 7096 | Jellyfin HTTP + HTTPS |
|
||||
| Plex_Port_2 | (empty) | Defined but unpopulated |
|
||||
|
||||

### Security & Threat Intelligence Aliases

| Alias | Type | Source | Status |
|---|---|---|---|
| SpamHaus_Drop | URL Table | https://www.spamhaus.org/drop/drop.txt | ⚠ Rule DISABLED |
| Spamhaus_edrop | URL Table | https://www.spamhaus.org/drop/edrop.txt | ⚠ Rule DISABLED |
| Blocked_Countries | GeoIP | 70 countries — see GeoIP section | ⚠ Rule DISABLED |
| crowdsec_blacklists | External | CrowdSec IPv4 decisions | ✓ Active |
| crowdsec6_blacklists | External | CrowdSec IPv6 decisions | ✓ Active |
| crowdsec_blocklists | External | CrowdSec IPv4 decisions (duplicate) | ✓ Active |
| crowdsec6_blocklists | External | CrowdSec IPv6 decisions (duplicate) | ✓ Active |

> ⚠ **Critical:** Spamhaus DROP, Spamhaus EDROP, and GeoIP country blocking are all defined and populated, but their firewall rules are **disabled**. None of them are currently being enforced. Re-enable these rules as an immediate priority.

> ⚠ There are duplicate CrowdSec alias pairs (`crowdsec_blacklists` and `crowdsec_blocklists` both handle IPv4). Review and consolidate to avoid confusion.
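
Before deleting one of the pairs, confirm they really are duplicates. A minimal sketch, assuming the pf tables carry the alias names (as OPNsense normally does):

```shell
# table_diff: compare two pf table exports; empty output means the sets match.
# On the firewall:
#   pfctl -t crowdsec_blacklists -T show > /tmp/blk.txt
#   pfctl -t crowdsec_blocklists -T show > /tmp/blo.txt
#   table_diff /tmp/blk.txt /tmp/blo.txt
table_diff() {
  sort "$1" > /tmp/td_a.$$ && sort "$2" > /tmp/td_b.$$
  comm -3 /tmp/td_a.$$ /tmp/td_b.$$
  rm -f /tmp/td_a.$$ /tmp/td_b.$$
}
```

If the output is empty, the tables hold identical entries and one alias pair can be retired safely.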

---

## Firewall Rules

### WAN Rules

| Action | Protocol | Source | Destination | Port(s) | Enabled | Description |
|---|---|---|---|---|---|---|
| BLOCK | Any | SpamHaus_Drop | Any | Any | ❌ No | Block Spamhaus DROP list |
| BLOCK | Any | Spamhaus_edrop | Any | Any | ❌ No | Block Spamhaus EDROP list |
| BLOCK | Any | Blocked_Countries | Any | Any | ❌ No | GeoIP country block |
| PASS | TCP | Any | MailCow_Ngnx | MailCow ports | ✓ Yes | Inbound mail |
| PASS | TCP | Any | JellyFin_Host | 8096, 7096 | ✓ Yes | Jellyfin access |
| PASS | UDP | Any | WAN IP | 51820 | ✓ Yes | WireGuard VPN ingress |
| PASS | TCP | Any | MailCow_Ngnx | 80, 443 | ✓ Yes | MailCow webmail |
| PASS | TCP | Any | caddy (192.168.5.10) | 80, 443 | ✓ Yes | Caddy reverse proxy |

> ⚠ All three block rules at the top of the WAN ruleset are disabled. The firewall is currently not enforcing Spamhaus or GeoIP blocking despite the aliases being populated.

### LAN Rules

| Action | Protocol | Source | Destination | Description |
|---|---|---|---|---|
| PASS | Any | ATT_Out_1 group | Any | Dad's Laptop + upper subnet out ATT |
| PASS | Any | LAN subnet | Any | Default allow LAN to any |
| PASS | Any | PNCHarris_Internal | Any | Internal subnets to any |
| PASS | Any | LAN subnet | Any | Default allow LAN IPv6 to any |
| PASS | TCP | PNCHarris_Internal | ISPConfig_Host:ISPConfig | LAN → ISPConfig redirect |
| PASS | TCP | PNCHarris_Internal | ISPConfig_Host:80/443 | LAN → ISPConfig web redirect |
| PASS | TCP | PNCHarris_Internal | caddy:80/443 | LAN → Caddy redirect |
| PASS | TCP | PNCHarris_Internal | MailCow_Ngnx:MailCow | LAN → MailCow redirect |

### WireGuard Interface Rules

| Action | Protocol | Source | Destination | Description |
|---|---|---|---|---|
| PASS | Any | Any | Any | Allow all from WireGuard peers — unrestricted |

> ⚠ The WireGuard interface allows all traffic from all peers with no restrictions. Consider scoping rules per peer as needs are better understood — some remote sites may only need access to specific services.

---

## NAT Port Forwards

### WAN Inbound

| Protocol | Public Port(s) | Internal Target | Internal Port(s) | Service |
|---|---|---|---|---|
| TCP | MailCow ports | 192.168.5.16 (MailCow_Ngnx) | MailCow ports | Mail (SMTP/IMAP/POP3/Sieve) |
| TCP | 80, 443 | 192.168.5.16 (MailCow_Ngnx) | 80, 443 | MailCow webmail |
| TCP | 8096, 7096 | 192.168.5.18 (JellyFin_Host) | 8096, 7096 | Jellyfin |
| TCP | 80, 443 | 192.168.5.10 (caddy) | 80, 443 | Caddy (all web services) |

### LAN Hairpin (Internal Redirect)

| Protocol | Port(s) | Internal Target | Description |
|---|---|---|---|
| TCP | MailCow ports | 192.168.5.16 | Internal mail access |
| TCP | 80, 443 | 192.168.5.10 (caddy) | Internal web via Caddy |
| TCP | ISPConfig ports | 192.168.4.11 | Internal ISPConfig access |
| TCP | 80, 443 | 192.168.4.11 | Internal ISPConfig web |

---

## VPN

### WireGuard

**Server: pncharris**

| Parameter | Value |
|---|---|
| Tunnel Address | 192.168.32.1/24 |
| Listen Port | 51820 (UDP) |
| DNS for Peers | 192.168.5.7 (internal DNS) |
| Interface | wg1 (OPT2) |
| Status | Enabled |

**Peers**

| Peer | Tunnel IP | Status | Notes |
|---|---|---|---|
| Obie | 192.168.32.2/32 | ✓ Enabled | |
| pncfishandmore | 192.168.32.3/32 | ✓ Enabled | Business location |
| GLNet (1) | 192.168.32.4/32 | ✓ Enabled | GL.iNet travel router |
| PortaPotty | 192.168.32.5/32 | ✓ Enabled | Remote site |
| GLNet (2) | 192.168.32.6/32 | ✓ Enabled | Second GL.iNet device |

> ✓ WireGuard peers use the internal DNS server (192.168.5.7) — internal hostnames resolve correctly over VPN.

### OpenVPN

An OpenVPN server and client are configured but details were not populated in the backup. Verify status in **VPN → OpenVPN** in the OPNsense UI.

---

## Security Features

### CrowdSec

CrowdSec is installed and fully operational at the firewall level.

| Parameter | Value |
|---|---|
| Agent | Enabled |
| Local API (LAPI) | Enabled — 127.0.0.1:8080 |
| Firewall Bouncer | Enabled |
| Rules | Enabled with logging |
| Firewall Bouncer Verbose | Disabled |
| Manual LAPI Config | Disabled (auto) |

CrowdSec decisions are fed into two alias pairs used in firewall rules:
- `crowdsec_blacklists` / `crowdsec6_blacklists` — IPv4 and IPv6 block lists
- `crowdsec_blocklists` / `crowdsec6_blocklists` — duplicate set (consolidate)

### GeoIP Blocking

GeoIP uses the MaxMind GeoLite2 database with a configured license key. **The blocking rule is currently disabled** — the alias is populated but not enforced.

**70 countries are blocked across four regions:**

| Region | Countries |
|---|---|
| Africa (49) | AO, BF, BI, BJ, BW, CD, CF, CG, CI, CM, DJ, DZ, EG, EH, ER, ET, GA, GH, GM, GN, GQ, GW, KE, LR, LS, LY, MA, ML, MR, MW, MZ, NA, NE, NG, RW, SD, SL, SN, SO, SS, ST, SZ, TD, TG, TN, TZ, UG, ZA, ZM, ZW |
| Middle East / Asia (12) | AF, BN, BT, CN, IQ, IR, KG, KP, KW, PH, QA, SA |
| Eastern Europe (4) | BG, RS, RU, RO |
| Latin America (4) | BR, EC, GT, HN |

### Spamhaus Blocklists

Both lists are configured as URL table aliases that auto-refresh, but **both blocking rules are currently disabled.**

| List | URL | Update |
|---|---|---|
| Spamhaus DROP | https://www.spamhaus.org/drop/drop.txt | Auto (URL table) |
| Spamhaus EDROP | https://www.spamhaus.org/drop/edrop.txt | Auto (URL table) |

---

## Internal Network Layout

### Known Subnets

| Subnet | Alias | Purpose |
|---|---|---|
| 192.168.3.0/24 | PNCHarris_Internal | LAN management segment |
| 192.168.5.0/25 | PNCHarris_Internal | Primary server subnet |
| 192.168.5.128/25 | Subnet_5_128_Mask_25 | Secondary server subnet / ATT routing |
| 192.168.32.0/24 | — | WireGuard tunnel network |

### Key Internal Hosts

| Hostname / Alias | IP | Role |
|---|---|---|
| caddy | 192.168.5.10 | Caddy reverse proxy (all web services) |
| MailCow_Ngnx | 192.168.5.16 | MailCow nginx container |
| JellyFin_Host | 192.168.5.18 | Jellyfin media server |
| ISPConfig_Host | 192.168.4.11 | ISPConfig control panel |
| Dads_Laptop | 192.168.5.176 | Routed via ATT interface |
| Internal DNS | 192.168.5.7 | DNS server (served to WireGuard peers) |

### DHCP

DHCP on the LAN interface (192.168.3.0/24) is currently **disabled**. No KEA or ISC DHCP ranges are active on the firewall. Devices likely use static IPs or a separate DHCP server downstream.

---

## Installed Plugins & Services

The following OPNsense components are present in the configuration:

| Plugin / Service | Status |
|---|---|
| WireGuard | ✓ Active — 1 server, 5 peers |
| CrowdSec | ✓ Active — agent + bouncer + LAPI |
| OpenVPN | Configured — verify in UI |
| IPsec / Swanctl | Present — verify in UI |
| Unbound Plus | Present — verify DNS configuration |
| Kea DHCP | Present — not active on LAN |
| DHCP Relay | Present |
| Netflow | Present |
| IDS/IPS (Suricata) | ❌ Not configured — see hardening plan |
| Proxy | Present — not actively used |
| Traffic Shaper | Present |
| Monit | Present |
| SNMP | Present |
| Syslog | Not configured — see hardening plan |
| Git Backup | Not installed — see hardening plan |

---

## AT&T Migration & Static IP Plan

### Current AT&T Interface

**Interface:** opt1 (igc1)
**Current IP:** 107.133.34.145/28
**Block:** /28 — up to 14 usable addresses, 5 static IPs allocated for use

### Recommended Static IP Allocation

| IP Slot | Dedicated To | Justification |
|---|---|---|
| IP 1 | **Mail (MailCow)** | Dedicated mail IP protects sender reputation. Never share with web services. Only ports 25/465/587/993/995/4190 NAT to 192.168.5.16. |
| IP 2 | **Web / Caddy** | All reverse-proxied services via Caddy. Keeps web and mail reputation independent. Replace current WAN NAT for ports 80/443 → 192.168.5.10. |
| IP 3 | **WireGuard VPN** | Dedicated IP for UDP/51820 only. Cleaner peer configs, stable endpoint, easy to firewall tightly — that IP accepts nothing else. |
| IP 4 | **Spare / Jellyfin** | Hold in reserve. Best candidate: dedicated Jellyfin IP (currently on WAN with ports 8096/7096). Media servers benefit from a clean IP separate from your main web presence. |
| IP 5 | **Admin / Out-of-band** | A locked-down IP for emergency remote OPNsense access. Firewall tightly — accept only from WireGuard peers or specific trusted source IPs. Never advertise publicly. |

### Implementation Steps

**Step 1 — Add Virtual IPs**

In OPNsense: **Firewall → Virtual IPs → Add**

For each additional static IP (IPs 1–5 excluding the interface IP):
- Type: `IP Alias`
- Interface: `ATT (opt1)`
- Address: `<static IP>/28`
- Description: e.g. `ATT_Mail`, `ATT_Web`, `ATT_WireGuard`
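
To sanity-check the plan before typing addresses into the UI, the /28 can be enumerated with a small helper. This is a sketch — it is pure network math, and which addresses are actually usable depends on where AT&T places the gateway:

```shell
# list_block: print all 16 addresses in a /28, given the first three octets
# and the network's last octet (e.g. 144 for 107.133.34.145/28).
list_block() {
  i=0
  while [ "$i" -lt 16 ]; do
    echo "$1.$(($2 + i))"
    i=$((i + 1))
  done
}

# list_block 107.133.34 144   # prints 107.133.34.144 through 107.133.34.159
```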

**Step 2 — Create NAT Rules Per Virtual IP**

In **Firewall → NAT → Port Forward**, create new rules on the ATT interface using the virtual IPs as the destination. Example for mail:

```
Interface: ATT (opt1)
Protocol: TCP
Destination: ATT_Mail virtual IP
Destination Port: MailCow alias
Redirect Target: 192.168.5.16 (MailCow_Ngnx)
Redirect Port: MailCow alias
```

Repeat for web (→ caddy 192.168.5.10) and WireGuard (UDP/51820).

**Step 3 — Update Outbound NAT**

Add manual outbound NAT rules so that each internal service exits through its dedicated virtual IP:

```
Interface: ATT (opt1)
Source: 192.168.5.16 (MailCow_Ngnx)
Target: ATT_Mail virtual IP

Interface: ATT (opt1)
Source: 192.168.5.10 (caddy)
Target: ATT_Web virtual IP
```

**Step 4 — Migrate WireGuard Endpoint**

Update peer configs to point to the ATT_WireGuard virtual IP on port 51820. Move the WAN WireGuard rule to the ATT interface. Update DNS records if you have a hostname for the WireGuard endpoint.

**Step 5 — Update Firewall Block Rules**

Re-enable the Spamhaus and GeoIP block rules and apply them to the ATT interface, mirroring the rules that currently sit disabled on WAN.

**Step 6 — DNS Updates**

Update all public DNS records to point to the new ATT static IPs:
- `mail.*` domains → ATT_Mail IP
- `*.netgrimoire.com`, `*.wasted-bandwidth.net`, etc. → ATT_Web IP
- WireGuard endpoint hostname → ATT_WireGuard IP
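
Once records are updated, resolution can be spot-checked from any shell with `drill` (shipped with OPNsense). The hostname and IP below are placeholders, not values from this network:

```shell
# check_dns: verify a record resolves to its expected new IP.
# Parses the first A-record answer out of drill's output.
check_dns() {
  got=$(drill "$1" A 2>/dev/null | awk '$3 == "IN" && $4 == "A" { print $5; exit }')
  if [ "$got" = "$2" ]; then
    echo "OK   $1 -> $got"
  else
    echo "FAIL $1 -> ${got:-none} (want $2)"
  fi
}

# check_dns mail.example.com 203.0.113.10
```

Run it once per record after each cutover; remember that TTLs on the old records delay what remote resolvers see.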

**Step 7 — Retire WAN (igc0)**

Once all services are verified on ATT, disable WAN NAT rules, remove port forward rules on WAN, and eventually disable the interface.

---

## Hardening Plan

The following items are recommended improvements, ordered by priority.

### Priority 1 — Re-enable Disabled Security Rules (Immediate)

All three security block rules on the WAN interface are currently disabled. These should be re-enabled immediately, as they represent threat intelligence you have already configured but are not using.

1. Navigate to **Firewall → Rules → WAN**
2. Find rules: `Block DROP`, `Block EDROP`, and the GeoIP block rule
3. Click the enable toggle on each rule
4. Click **Apply Changes**

Repeat on the ATT interface once migrated.

### Priority 2 — Suricata IDS/IPS

Suricata is built into OPNsense but not yet configured. This is the most significant security gap — without it, there is no deep packet inspection or content-based threat detection.

**Setup steps:**

1. Go to **Services → Intrusion Detection → Administration**
2. Enable IDS/IPS, set interface to **ATT** (and WAN while active)
3. Set mode to **IPS** (inline blocking, not just alerting)
4. Under **Download**, enable the following rulesets:
   - `ET Open` — Proofpoint Emerging Threats (free, comprehensive)
   - `Abuse.ch SSL Blacklist` — malicious SSL certificate detection
   - `Feodo Tracker` — botnet C2 blocking
5. Under **Policies**, set default action to `drop` for high-severity rules
6. Click **Download & Update Rules**, then **Apply**

> ✓ Suricata complements CrowdSec well. CrowdSec handles IP reputation; Suricata handles traffic content inspection. They do not overlap.
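
After the first rule download, it is worth confirming alerts are actually being written. A small helper, assuming the default OPNsense EVE log path:

```shell
# alert_count: count alert events in an EVE JSON log. The OPNsense default
# path is /var/log/suricata/eve.json; adjust if you relocated logging.
alert_count() {
  grep -c '"event_type":"alert"' "$1"
}

# alert_count /var/log/suricata/eve.json
```

A count of zero after a few days of inline operation usually means the interface or ruleset selection needs a second look.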

### Priority 3 — Additional Blocklists

Add these URL table aliases to supplement Spamhaus DROP/EDROP:

| List | URL | Purpose |
|---|---|---|
| Feodo Tracker | https://feodotracker.abuse.ch/downloads/ipblocklist.txt | Botnet C2 IPs |
| Abuse.ch SSLBL | https://sslbl.abuse.ch/blacklist/sslipblacklist.txt | Malicious SSL IPs |
| Emerging Threats | https://rules.emergingthreats.net/fwrules/emerging-Block-IPs.txt | ET block list |

For each: **Firewall → Aliases → Add**, type `URL Table`, set refresh to 1 day. Then add a WAN block rule using each alias as the source.
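
Before trusting a new alias, confirm the URL actually serves parseable data. A hedged sketch — the commented curl line assumes outbound HTTPS from wherever you run it:

```shell
# count_ips: strip comments and blank lines from a downloaded blocklist and
# count the remaining IPv4 entries.
count_ips() {
  grep -v '^#' | grep -Ec '^[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+'
}

# curl -s https://feodotracker.abuse.ch/downloads/ipblocklist.txt | count_ips
```

A zero (or an error) means the alias would silently stay empty and the block rule would match nothing.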

### Priority 4 — dnscrypt-proxy (Encrypted DNS)

Encrypts DNS queries leaving the firewall and adds DNS-level malware/tracking blocklists.

1. Go to **System → Firmware → Plugins**, install `os-dnscrypt-proxy`
2. Navigate to **Services → DNSCrypt-Proxy**
3. Enable, set listen port to `5353`
4. Select resolvers: `cloudflare`, `quad9-dnscrypt-ip4-nofilter-pri` (or similar)
5. Enable DNSSEC validation
6. Update **System → Settings → General** — set DNS server to `127.0.0.1:5353`
7. Disable `DNS Allow Override` so the ISP cannot push DNS changes

### Priority 5 — os-git-backup

Automatically commits every OPNsense config change to a Git repository. Invaluable for auditing changes after an incident and for rapid recovery.

1. Go to **System → Firmware → Plugins**, install `os-git-backup`
2. Navigate to **System → Configuration → Git Backup**
3. Configure a Forgejo repository on Netgrimoire as the remote
4. Set SSH key for authentication
5. Enable automatic backup on config change

### Priority 6 — Syslog to Graylog

Syslog is not currently configured. Sending firewall logs to Graylog (already running at `http://graylog:9000`) enables centralized log analysis and alerting.

1. Go to **System → Settings → Logging → Remote**
2. Add a syslog destination: `graylog:514` (UDP) or use a GELF input on Graylog
3. Enable logging for: Firewall, DHCP, VPN, Authentication, CrowdSec

---

## Known Issues & Action Items

| Item | Priority | Notes |
|---|---|---|
| Spamhaus DROP rule disabled | 🔴 High | Re-enable in Firewall → Rules → WAN |
| Spamhaus EDROP rule disabled | 🔴 High | Re-enable in Firewall → Rules → WAN |
| GeoIP block rule disabled | 🔴 High | Re-enable in Firewall → Rules → WAN |
| Suricata not configured | 🔴 High | Most significant security gap — configure with ET Open rules |
| Duplicate CrowdSec aliases | 🟡 Medium | crowdsec_blacklists and crowdsec_blocklists both do IPv4 — consolidate |
| WireGuard rule too permissive | 🟡 Medium | Allow-all from peers — scope per peer when needs are known |
| OPT3 interface unassigned | 🟡 Medium | Disable or assign a role |
| System DNS points to Google | 🟡 Medium | Should point to internal resolver or localhost after dnscrypt-proxy setup |
| No syslog configured | 🟡 Medium | Forward to Graylog for centralized logging |
| os-git-backup not installed | 🟡 Medium | Install for config change auditing |
| OpenVPN config unpopulated | 🟢 Low | Verify status — backup shows server+client but no details |
| ATT migration incomplete | 🟢 Low | In progress — see migration plan above |
| Family_Subnet alias empty | 🟢 Low | Populate or remove |
| Plex_Port_2 alias empty | 🟢 Low | Populate or remove |
| DHCP disabled on LAN | 🟢 Info | Intentional if using static IPs — verify |

---

## Related Documentation

- [Caddy Reverse Proxy](./caddy-reverse-proxy) — services exposed through the firewall
- [MailCow Mail Server](./mailcow) — mail server behind the firewall, dedicated WAN IP
- [WireGuard VPN](./wireguard) — peer configuration and access
- [Graylog](./graylog) — target for firewall syslog
- [CrowdSec](./crowdsec) — threat intelligence integration

False Grimoire/Netgrimoire/Network/Security/OpnSense_Git.md

---
title: OpnSense - GIT Integration
description: Git Integration
published: true
date: 2026-02-23T21:53:24.522Z
tags:
editor: markdown
dateCreated: 2026-02-23T21:48:01.779Z
---

# OPNsense Git Backup (os-git-backup)

**Service:** os-git-backup
**Plugin:** os-git-backup
**Host:** OPNsense firewall
**Remote:** Forgejo on Netgrimoire
**Trigger:** Automatic on every config change

---

## Overview

Every change made to OPNsense — adding a firewall rule, updating an alias, changing a VPN config — modifies the underlying XML configuration file. By default there is no history of these changes. If a misconfiguration causes an outage, or if you need to audit what changed after a security incident, you have no record to work from.

os-git-backup solves this by committing the OPNsense configuration to a Git repository automatically every time a change is saved. Each commit records exactly what changed, when, and (if configured) which user made the change.

**Benefits:**
- Full audit trail of every configuration change
- One-command rollback to any previous state
- Offsite backup of firewall config via the Forgejo → Kopia chain
- Diff view to understand exactly what a change did

---

## Pre-requisite: Create Forgejo Repository

Before installing the plugin, create a dedicated repository in Forgejo to receive the OPNsense config backups.

1. Log into your Forgejo instance on Netgrimoire
2. Create a new repository: `opnsense-config`
3. Set visibility to **Private** — firewall configs contain sensitive network topology
4. Do not initialize with a README (the plugin will push the first commit)
5. Note the SSH clone URL: `git@git.netgrimoire.com:youruser/opnsense-config.git`

---

## Installation

### Step 1 — Install the Plugin

1. Go to **System → Firmware → Plugins**
2. Search for `os-git-backup`
3. Click the **+** install button
4. Wait for installation to complete
5. Navigate to **System → Configuration → Backups** — a **Git** tab will appear

---

## Configuration

### Step 2 — Generate SSH Deploy Key

The OPNsense firewall needs an SSH key to authenticate to Forgejo without a password.

Navigate to **System → Configuration → Backups → Git**

1. Click **Generate SSH Key**
2. Copy the displayed **public key** — you will add this to Forgejo next

### Step 3 — Add Deploy Key to Forgejo

1. In Forgejo, go to your `opnsense-config` repository
2. Navigate to **Settings → Deploy Keys**
3. Click **Add Deploy Key**
4. Title: `OPNsense Firewall`
5. Key: paste the public key from Step 2
6. Enable **Allow Write Access** — the firewall needs to push commits
7. Click **Add Key**

### Step 4 — Configure the Plugin

Navigate to **System → Configuration → Backups → Git**

| Setting | Value | Notes |
|---|---|---|
| Enabled | ✓ | |
| URL | `git@git.netgrimoire.com:youruser/opnsense-config.git` | SSH URL from your Forgejo repo |
| Branch | `main` | |
| Name | `OPNsense Firewall` | Author name shown in commits |
| Email | `opnsense@netgrimoire.com` | Author email shown in commits |
| SSH Private Key | (auto-populated from Step 2) | |
| Backup Interval | On change | Commits every time config is saved |

Click **Save**.

### Step 5 — Test the Connection

Click **Backup Now** to trigger a manual backup. Then check your Forgejo repository — you should see an initial commit containing the OPNsense configuration XML.

If the push fails, check:
1. The deploy key has write access in Forgejo
2. The SSH URL is correct (use SSH, not HTTPS)
3. Forgejo is reachable from the firewall — test from the OPNsense shell:

```bash
ssh -T git@git.netgrimoire.com
# Expected: Hi youruser! You've successfully authenticated...
```

---

## What Gets Backed Up

The plugin commits the OPNsense configuration file:

`/conf/config.xml`

This single file contains **everything** — interfaces, firewall rules, NAT, VPN configs, aliases, users, certificates, DHCP, DNS settings, and all plugin configurations. A restore from this file fully recreates the firewall state.

> ⚠ The config.xml contains **hashed passwords**, **VPN private keys**, and **API credentials**. The Forgejo repository must remain private. Ensure your Forgejo instance is not publicly accessible or that this repository is explicitly private.

---

## Using the Backup

### Viewing History

In Forgejo, navigate to the `opnsense-config` repository. Each commit represents one configuration save, with:
- Timestamp of the change
- Diff showing exactly what XML changed
- Author (OPNsense Firewall)

### Rolling Back a Change

If a configuration change causes problems:

**Option 1 — Restore via OPNsense UI:**
1. In Forgejo, find the commit you want to restore
2. Download the `config.xml` from that commit
3. In OPNsense: **System → Configuration → Backups → Restore**
4. Upload the config.xml and restore

**Option 2 — Restore via shell (if UI is unreachable):**
```bash
# SSH into OPNsense
ssh root@192.168.3.4

# The git repo is cloned locally — find it
find /conf -name ".git" -type d

# Check out the previous config
cd /conf/backup   # or wherever the repo is cloned
git log --oneline -10
git checkout <commit-hash> -- config.xml

# Apply the restored config
/usr/local/sbin/opnsense-importer config.xml
```

### Diffing Changes

To see exactly what a specific change did:

```bash
# In Forgejo: click any commit → view the diff
# Alternatively, from the OPNsense shell:
cd <git repo path>
git diff HEAD~1 HEAD -- config.xml
```

---

## Integration with Kopia Backups

Since the git repository lives in Forgejo on Netgrimoire, it is automatically included in the Netgrimoire Kopia backup chain — no additional configuration needed. The OPNsense config history is backed up offsite along with everything else.

---

## Related Documentation

- [OPNsense Firewall](./opnsense-firewall) — parent firewall documentation
- [Forgejo](./forgejo) — Git repository host on Netgrimoire
- [Kopia Backups](./kopia) — offsite backup chain

False Grimoire/Netgrimoire/Network/Security/OpnSense_Ntfy.md

---
title: OpnSense - NTFY Integration
description: Security Notifications
published: true
date: 2026-02-23T22:00:46.462Z
tags:
editor: markdown
dateCreated: 2026-02-23T22:00:37.268Z
---

# OPNsense ntfy Alerts

**Service:** ntfy push notifications from OPNsense
**Host:** OPNsense firewall
**ntfy Server:** Your self-hosted ntfy instance on Netgrimoire
**Methods:** CrowdSec HTTP plugin · Monit custom script · Suricata EVE watcher

---

## Overview

OPNsense does not have a built-in ntfy notification channel, but there are three distinct integration points that together provide complete coverage:

| Method | What It Alerts On | Priority |
|---|---|---|
| **CrowdSec HTTP plugin** | Every IP ban decision CrowdSec makes | 🔴 Best for threat intel alerts |
| **Monit + curl script** | System health, service failures, Suricata EVE matches, login failures | 🔴 Best for operational alerts |
| **Suricata EVE watcher** | Suricata high-severity IDS hits (via Monit watching eve.json) | 🟡 Covered via Monit |

All three use your self-hosted ntfy instance. None require external services.

---

## Prerequisites

Before starting, confirm:
- ntfy is running and reachable at `https://ntfy.netgrimoire.com` (or your internal URL)
- ntfy topic created: e.g. `opnsense-alerts`
- If ntfy has auth enabled, have a token ready
- SSH access to OPNsense as root

---

## Method 1 — CrowdSec HTTP Notification Plugin

This is the cleanest integration for security alerts. CrowdSec has a built-in HTTP notification plugin. Every time it makes a ban decision — whether from community intel, a Suricata match passed through CrowdSec, or a brute-force detection — it POSTs to ntfy.

### Step 1 — Create the HTTP notification config

SSH into OPNsense and create the ntfy config file:

```bash
ssh root@192.168.3.4
```

```bash
cat > /usr/local/etc/crowdsec/notifications/ntfy.yaml << 'EOF'
# ntfy notification plugin for CrowdSec
# CrowdSec uses its built-in HTTP plugin pointed at ntfy
type: http
name: ntfy_default

log_level: info

# ntfy accepts a plain POST body as the notification message
# format is a Go template — the dot is the list of alerts
format: |
  {{range .}}
  🚨 CrowdSec Decision
  Scenario: {{.Scenario}}
  Attacker IP: {{.Source.IP}}
  Country: {{.Source.Cn}}
  Action: {{.Decisions | len}} x {{(index .Decisions 0).Type}}
  Duration: {{(index .Decisions 0).Duration}}
  {{end}}

url: https://ntfy.netgrimoire.com/opnsense-alerts

method: POST

headers:
  Title: "CrowdSec Ban — OPNsense"
  Priority: "high"
  Tags: "rotating_light,shield"
  # Uncomment and set token if ntfy auth is enabled:
  # Authorization: "Bearer YOUR_NTFY_TOKEN"

# skip_tls_verify: false
EOF
```

> ⚠ Replace `https://ntfy.netgrimoire.com/opnsense-alerts` with your actual ntfy URL and topic. If ntfy is internal-only and OPNsense can reach it by hostname, the internal URL works fine.

### Step 2 — Register the plugin in profiles.yaml

Edit the CrowdSec profiles file to dispatch decisions to the ntfy plugin:

```bash
vi /usr/local/etc/crowdsec/profiles.yaml
```

Find the `notifications:` section of the default profile and add `ntfy_default`:

```yaml
name: default_ip_remediation
filters:
  - Alert.Remediation == true && Alert.GetScope() == "Ip"
decisions:
  - type: ban
    duration: 4h
notifications:
  - ntfy_default   # ← add this line
on_success: break
```

> ✓ The `ntfy_default` name must exactly match the `name:` field in the yaml file you created above.

### Step 3 — Set correct file ownership

CrowdSec refuses to load a notification plugin whose configuration file has loose ownership or permissions. Make the file owned by root (group `wheel` on FreeBSD) and readable only by root:

```bash
chown root:wheel /usr/local/etc/crowdsec/notifications/ntfy.yaml
chmod 600 /usr/local/etc/crowdsec/notifications/ntfy.yaml
```

### Step 4 — Restart CrowdSec and test

```bash
# Restart via the OPNsense service manager (do NOT use systemctl/service directly)
# Go to: Services → CrowdSec → Settings → Apply
# Or from shell:
pluginctl -s crowdsec restart
```

Test by sending a manual notification:

```bash
cscli notifications test ntfy_default
```

You should receive a test push on your device within a few seconds.

Then trigger a real decision to verify the full pipeline:

```bash
# Ban a test IP for 2 minutes (replace with your own IP to see the block)
cscli decisions add -t ban -d 2m -i 1.2.3.4
# Watch for the ntfy notification
# Remove the test ban:
cscli decisions delete -i 1.2.3.4
```

---

## Method 2 — Monit + curl Script

Monit is OPNsense's built-in service monitor. It can watch processes, files, system resources, and log patterns — and call a custom shell script when a condition is met. The script fires a curl POST to ntfy.

This covers things CrowdSec doesn't — service failures, high CPU, gateway down events, SSH login failures, disk usage, and Suricata EVE alerts.

### Step 2.1 — Create the ntfy alert script
```bash
|
||||
cat > /usr/local/bin/ntfy-alert.sh << 'EOF'
|
||||
#!/usr/local/bin/bash
|
||||
# ntfy-alert.sh — called by Monit to send ntfy push notifications
|
||||
# Monit provides variables: $MONIT_HOST, $MONIT_SERVICE,
|
||||
# $MONIT_DESCRIPTION, $MONIT_EVENT
|
||||
|
||||
NTFY_URL="https://ntfy.netgrimoire.com/opnsense-alerts"
|
||||
# NTFY_TOKEN="Bearer YOUR_NTFY_TOKEN" # uncomment if ntfy auth enabled
|
||||
|
||||
TITLE="${MONIT_HOST}: ${MONIT_SERVICE}"
|
||||
MESSAGE="${MONIT_EVENT} — ${MONIT_DESCRIPTION}"
|
||||
|
||||
# Map Monit event types to ntfy priorities
|
||||
# Monit event strings are capitalized ("Does not exist", "Execution failed"),
# so match case-insensitively
shopt -s nocasematch
case "$MONIT_EVENT" in
|
||||
*"does not exist"*|*"failed"*|*"error"*)
|
||||
PRIORITY="urgent"
|
||||
TAGS="rotating_light,red_circle"
|
||||
;;
|
||||
*"changed"*|*"match"*)
|
||||
PRIORITY="high"
|
||||
TAGS="warning,yellow_circle"
|
||||
;;
|
||||
*"recovered"*|*"succeeded"*)
|
||||
PRIORITY="default"
|
||||
TAGS="white_check_mark,green_circle"
|
||||
;;
|
||||
*)
|
||||
PRIORITY="default"
|
||||
TAGS="bell"
|
||||
;;
|
||||
esac
|
||||
|
||||
curl -s \
|
||||
-H "Title: ${TITLE}" \
|
||||
-H "Priority: ${PRIORITY}" \
|
||||
-H "Tags: ${TAGS}" \
|
||||
-d "${MESSAGE}" \
|
||||
"${NTFY_URL}"
|
||||
|
||||
# Uncomment for auth:
|
||||
# curl -s \
|
||||
# -H "Authorization: ${NTFY_TOKEN}" \
|
||||
# -H "Title: ${TITLE}" \
|
||||
# -H "Priority: ${PRIORITY}" \
|
||||
# -H "Tags: ${TAGS}" \
|
||||
# -d "${MESSAGE}" \
|
||||
# "${NTFY_URL}"
|
||||
EOF
|
||||
|
||||
chmod +x /usr/local/bin/ntfy-alert.sh
|
||||
```
|
||||
|
||||
### Step 2.2 — Enable Monit
|
||||
|
||||
Navigate to **Services → Monit → Settings → General Settings**
|
||||
|
||||
| Setting | Value |
|
||||
|---|---|
|
||||
| Enabled | ✓ |
|
||||
| Polling Interval | 30 seconds |
|
||||
| Start Delay | 120 seconds |
|
||||
| Mail Server | Leave blank (using script instead) |
|
||||
|
||||
Click **Save**.
|
||||
|
||||
### Step 2.3 — Add Service Tests
|
||||
|
||||
Navigate to **Services → Monit → Service Tests Settings** and add the following tests:
|
||||
|
||||
**Test 1 — Custom Alert via Script**
|
||||
|
||||
| Field | Value |
|
||||
|---|---|
|
||||
| Name | `ntfy_alert` |
|
||||
| Condition | `failed` |
|
||||
| Action | Execute |
|
||||
| Path | `/usr/local/bin/ntfy-alert.sh` |
|
||||
|
||||
This is the reusable action that all other tests will invoke.
|
||||
|
||||
**Test 2 — Suricata EVE High Alert**
|
||||
|
||||
| Field | Value |
|
||||
|---|---|
|
||||
| Name | `SuricataHighAlert` |
|
||||
| Condition | `content = "\"severity\":1"` |
|
||||
| Action | Execute → `/usr/local/bin/ntfy-alert.sh` |
|
||||
|
||||
This watches for severity 1 (highest) alerts written to the Suricata EVE JSON log.
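To confirm such entries actually exist in your EVE log (so the test has something to match), grep for the same pattern Monit watches:

```shell
# Count severity-1 events currently in the EVE log (same pattern as the Monit test)
grep -c '"severity":1' /var/log/suricata/eve.json
```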
|
||||
|
||||
**Test 3 — Suricata Process Down**
|
||||
|
||||
| Field | Value |
|
||||
|---|---|
|
||||
| Name | `SuricataRunning` |
|
||||
| Condition | `failed` |
|
||||
| Action | Execute → `/usr/local/bin/ntfy-alert.sh` |
|
||||
|
||||
**Test 4 — CrowdSec Process Down**
|
||||
|
||||
| Field | Value |
|
||||
|---|---|
|
||||
| Name | `CrowdSecRunning` |
|
||||
| Condition | `failed` |
|
||||
| Action | Execute → `/usr/local/bin/ntfy-alert.sh` |
|
||||
|
||||
**Test 5 — SSH Login Failure**
|
||||
|
||||
| Field | Value |
|
||||
|---|---|
|
||||
| Name | `SSHFailedLogin` |
|
||||
| Condition | `content = "Failed password"` |
|
||||
| Action | Execute → `/usr/local/bin/ntfy-alert.sh` |
|
||||
|
||||
**Test 6 — OPNsense Web UI Login Failure**
|
||||
|
||||
| Field | Value |
|
||||
|---|---|
|
||||
| Name | `WebUILoginFail` |
|
||||
| Condition | `content = "webgui"` |
|
||||
| Action | Execute → `/usr/local/bin/ntfy-alert.sh` |
|
||||
|
||||
### Step 2.4 — Add Service Monitors
|
||||
|
||||
Navigate to **Services → Monit → Service Settings** and add:
|
||||
|
||||
**Monitor 1 — Suricata EVE Log (high alerts)**
|
||||
|
||||
| Field | Value |
|
||||
|---|---|
|
||||
| Name | `SuricataEVE` |
|
||||
| Type | File |
|
||||
| Path | `/var/log/suricata/eve.json` |
|
||||
| Tests | `SuricataHighAlert` |
|
||||
|
||||
**Monitor 2 — Suricata Process**
|
||||
|
||||
| Field | Value |
|
||||
|---|---|
|
||||
| Name | `Suricata` |
|
||||
| Type | Process |
|
||||
| PID File | `/var/run/suricata.pid` |
|
||||
| Tests | `SuricataRunning` |
|
||||
| Restart Method | `/usr/local/etc/rc.d/suricata restart` |
|
||||
|
||||
**Monitor 3 — CrowdSec Process**
|
||||
|
||||
| Field | Value |
|
||||
|---|---|
|
||||
| Name | `CrowdSec` |
|
||||
| Type | Process |
|
||||
| Match | `crowdsec` |
|
||||
| Tests | `CrowdSecRunning` |
|
||||
|
||||
**Monitor 4 — SSH Auth Log**
|
||||
|
||||
| Field | Value |
|
||||
|---|---|
|
||||
| Name | `SSHAuth` |
|
||||
| Type | File |
|
||||
| Path | `/var/log/auth.log` |
|
||||
| Tests | `SSHFailedLogin` |
|
||||
|
||||
**Monitor 5 — System Resources (optional)**
|
||||
|
||||
| Field | Value |
|
||||
|---|---|
|
||||
| Name | `System` |
|
||||
| Type | System |
|
||||
| Tests | `ntfy_alert` (on resource threshold exceeded) |
|
||||
|
||||
Click **Apply** after adding all services.
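After applying, Monit's own CLI can confirm the monitors registered. Run these from an OPNsense shell:

```shell
# List all monitored services and their current state
monit summary
# Detailed state for one monitor
monit status Suricata
```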
|
||||
|
||||
### Step 2.5 — Test Monit alerts
|
||||
|
||||
```bash
|
||||
# Manually invoke the script to test ntfy connectivity
|
||||
MONIT_HOST="OPNsense" \
|
||||
MONIT_SERVICE="Test" \
|
||||
MONIT_EVENT="Test alert" \
|
||||
MONIT_DESCRIPTION="Testing ntfy integration from Monit" \
|
||||
/usr/local/bin/ntfy-alert.sh
|
||||
```
|
||||
|
||||
You should receive a push notification immediately.
|
||||
|
||||
---
|
||||
|
||||
## Alert Topics & Priority Mapping
|
||||
|
||||
Consider using separate ntfy topics to filter notifications by type on your device:
|
||||
|
||||
| Topic | Used For | Suggested ntfy Priority |
|
||||
|---|---|---|
|
||||
| `opnsense-alerts` | CrowdSec bans, Suricata high hits | high / urgent |
|
||||
| `opnsense-health` | Monit service failures, process restarts | high |
|
||||
| `opnsense-info` | Service recoveries, status changes | default / low |
|
||||
|
||||
To use separate topics, change the `NTFY_URL` in the Monit script and the `url:` in the CrowdSec config accordingly.
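For example, publishing a low-priority recovery message to the info topic (the topic name is only a suggestion from the table above):

```shell
# Publish a low-priority message to the suggested info topic
curl -s \
  -H "Title: OPNsense: Suricata" \
  -H "Priority: low" \
  -H "Tags: white_check_mark" \
  -d "Service recovered" \
  "https://ntfy.netgrimoire.com/opnsense-info"
```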
|
||||
|
||||
---
|
||||
|
||||
## ntfy Priority Reference
|
||||
|
||||
ntfy supports five priority levels that map to different notification behaviors on Android/iOS:
|
||||
|
||||
| ntfy Priority | Numeric | Behavior |
|
||||
|---|---|---|
|
||||
| `min` | 1 | No notification, no sound |
|
||||
| `low` | 2 | Notification, no sound |
|
||||
| `default` | 3 | Notification with sound |
|
||||
| `high` | 4 | Notification with sound, bypasses DND |
|
||||
| `urgent` | 5 | Pop-over notification with long repeated vibration bursts; can break through DND if allowed on the device |
|
||||
|
||||
For firewall alerts: use `urgent` for process failures and `high` for IDS/ban events. Reserve `urgent` sparingly to avoid alert fatigue.
|
||||
|
||||
---
|
||||
|
||||
## Keeping Config Persistent Across Upgrades
|
||||
|
||||
OPNsense upgrades can overwrite files in certain paths. The safest locations for persistent custom files:
|
||||
|
||||
| File | Location | Persistent? |
|
||||
|---|---|---|
|
||||
| ntfy-alert.sh | `/usr/local/bin/ntfy-alert.sh` | ✓ Yes — not touched by upgrades |
|
||||
| CrowdSec ntfy.yaml | `/usr/local/etc/crowdsec/notifications/ntfy.yaml` | ✓ Yes — plugin config directory |
|
||||
| CrowdSec profiles.yaml | `/usr/local/etc/crowdsec/profiles.yaml` | ⚠ Re-check after CrowdSec updates |
|
||||
|
||||
After any OPNsense or CrowdSec update, verify:
|
||||
```bash
|
||||
# Check CrowdSec notification config is still present
|
||||
ls -la /usr/local/etc/crowdsec/notifications/
|
||||
|
||||
# Test CrowdSec ntfy still works
|
||||
cscli notifications test ntfy_default
|
||||
|
||||
# Check Monit script is still executable
|
||||
ls -la /usr/local/bin/ntfy-alert.sh
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Troubleshooting
|
||||
|
||||
**No notification received from CrowdSec test:**
|
||||
|
||||
```bash
|
||||
# Check CrowdSec logs for plugin errors
|
||||
tail -50 /var/log/crowdsec.log | grep -i ntfy
|
||||
tail -50 /var/log/crowdsec.log | grep -i notification
|
||||
|
||||
# Verify ntfy URL is reachable from OPNsense
|
||||
curl -v -d "test" https://ntfy.netgrimoire.com/opnsense-alerts
|
||||
|
||||
# Check profiles.yaml has ntfy_default in notifications section
|
||||
grep -A5 "notifications:" /usr/local/etc/crowdsec/profiles.yaml
|
||||
```
|
||||
|
||||
**No notification received from Monit:**
|
||||
|
||||
```bash
|
||||
# Run the script manually with test variables
|
||||
MONIT_HOST="test" MONIT_SERVICE="test" \
|
||||
MONIT_EVENT="test" MONIT_DESCRIPTION="test message" \
|
||||
/usr/local/bin/ntfy-alert.sh
|
||||
|
||||
# Check Monit is running
|
||||
ps aux | grep monit
|
||||
|
||||
# Check Monit logs
|
||||
tail -50 /var/log/monit.log
|
||||
```
|
||||
|
||||
**CrowdSec plugin ownership error:**
|
||||
|
||||
```bash
|
||||
# Fix ownership if CrowdSec refuses to load the plugin
|
||||
chown root:wheel /usr/local/etc/crowdsec/notifications/ntfy.yaml
|
||||
ls -la /usr/local/etc/crowdsec/notifications/
|
||||
```
|
||||
|
||||
**ntfy auth failing:**
|
||||
|
||||
```bash
|
||||
# Test with token manually
|
||||
curl -H "Authorization: Bearer YOUR_TOKEN" \
|
||||
-H "Title: Test" \
|
||||
-d "Auth test" \
|
||||
https://ntfy.netgrimoire.com/opnsense-alerts
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Related Documentation
|
||||
|
||||
- [OPNsense Firewall](./opnsense-firewall) — parent firewall documentation
|
||||
- [CrowdSec](./crowdsec) — threat intelligence engine sending these alerts
|
||||
- [Suricata IDS/IPS](./suricata-ids-ips) — source of EVE alerts watched by Monit
|
||||
- [ntfy](./ntfy) — self-hosted notification server on Netgrimoire
|
||||
|
|
|
|||
---
|
||||
title: Opnsense - Additional Blocklists
|
||||
description: Blocklists
|
||||
published: true
|
||||
date: 2026-02-23T21:54:13.019Z
|
||||
tags:
|
||||
editor: markdown
|
||||
dateCreated: 2026-02-23T21:46:39.562Z
|
||||
---
|
||||
|
||||
# OPNsense Additional Blocklists
|
||||
|
||||
**Service:** Firewall Aliases — URL Table blocklists
|
||||
**Host:** OPNsense firewall
|
||||
**Applies To:** WAN and ATT interfaces
|
||||
**Update Frequency:** Daily (automatic)
|
||||
|
||||
---
|
||||
|
||||
## Overview
|
||||
|
||||
Your firewall already uses Spamhaus DROP and EDROP as IP blocklists. These three additional lists fill specific gaps that Spamhaus does not cover:
|
||||
|
||||
| List | What It Blocks | Why It's Needed |
|
||||
|---|---|---|
|
||||
| Feodo Tracker | Botnet command & control IPs | Stops malware on your network phoning home |
|
||||
| Abuse.ch SSLBL | IPs with malicious SSL certificates | Catches malware that uses HTTPS to hide C2 traffic |
|
||||
| Emerging Threats | Confirmed active attack IPs | Broad coverage of IPs currently conducting scans and exploits |
|
||||
|
||||
These work at the **firewall alias level** — the same mechanism as your existing Spamhaus lists. Traffic from/to these IPs is blocked before it reaches any service.
|
||||
|
||||
> ✓ These lists are also used by Suricata internally. Adding them as firewall aliases provides a second, independent enforcement point at the packet filter level — meaning blocks happen even if Suricata is restarted or temporarily inactive.
|
||||
|
||||
---
|
||||
|
||||
## Current Blocklist State
|
||||
|
||||
From your configuration, these lists are already present and working:
|
||||
|
||||
| Alias | List | Status |
|
||||
|---|---|---|
|
||||
| SpamHaus_Drop | Spamhaus DROP | ⚠ Alias active, **rule disabled** |
|
||||
| Spamhaus_edrop | Spamhaus EDROP | ⚠ Alias active, **rule disabled** |
|
||||
| crowdsec_blacklists | CrowdSec IPv4 | ✓ Active |
|
||||
| crowdsec6_blacklists | CrowdSec IPv6 | ✓ Active |
|
||||
|
||||
> ⚠ **First priority:** Before adding new blocklists, re-enable the existing Spamhaus block rules. See the Re-enable Existing Rules section at the bottom of this document.
|
||||
|
||||
---
|
||||
|
||||
## Step 1 — Add Feodo Tracker Alias
|
||||
|
||||
Navigate to **Firewall → Aliases → Add**
|
||||
|
||||
| Field | Value |
|
||||
|---|---|
|
||||
| Name | `Feodo_Tracker` |
|
||||
| Type | `URL Table (IPs)` |
|
||||
| Description | `Abuse.ch Feodo Tracker — Botnet C2 IPs` |
|
||||
| URL | `https://feodotracker.abuse.ch/downloads/ipblocklist.txt` |
|
||||
| Refresh Frequency | `1` day |
|
||||
| Enabled | ✓ |
|
||||
|
||||
Click **Save**, then **Apply Changes**.
|
||||
|
||||
**Verify the list loaded:**
|
||||
Go to **Firewall → Diagnostics → Aliases**, select `Feodo_Tracker` — you should see a list of IP addresses populated.
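OPNsense backs each URL Table alias with a pf table named after the alias, so the same check should work from the shell:

```shell
# Show the first few entries and the total count of the alias's pf table
pfctl -t Feodo_Tracker -T show | head
pfctl -t Feodo_Tracker -T show | wc -l
```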
|
||||
|
||||
---
|
||||
|
||||
## Step 2 — Add Abuse.ch SSLBL Alias
|
||||
|
||||
Navigate to **Firewall → Aliases → Add**
|
||||
|
||||
| Field | Value |
|
||||
|---|---|
|
||||
| Name | `AbuseCH_SSLBL` |
|
||||
| Type | `URL Table (IPs)` |
|
||||
| Description | `Abuse.ch SSL Blacklist — Malicious SSL certificate IPs` |
|
||||
| URL | `https://sslbl.abuse.ch/blacklist/sslipblacklist.txt` |
|
||||
| Refresh Frequency | `1` day |
|
||||
| Enabled | ✓ |
|
||||
|
||||
Click **Save**, then **Apply Changes**.
|
||||
|
||||
> ✓ The SSL Blacklist specifically targets IPs that have been observed using SSL/TLS certificates associated with malware botnets. It catches C2 traffic that would otherwise be hidden inside HTTPS.
|
||||
|
||||
---
|
||||
|
||||
## Step 3 — Add Emerging Threats Alias
|
||||
|
||||
Navigate to **Firewall → Aliases → Add**
|
||||
|
||||
| Field | Value |
|
||||
|---|---|
|
||||
| Name | `ET_Block_IPs` |
|
||||
| Type | `URL Table (IPs)` |
|
||||
| Description | `Emerging Threats — Active attack and scanning IPs` |
|
||||
| URL | `https://rules.emergingthreats.net/fwrules/emerging-Block-IPs.txt` |
|
||||
| Refresh Frequency | `1` day |
|
||||
| Enabled | ✓ |
|
||||
|
||||
Click **Save**, then **Apply Changes**.
|
||||
|
||||
---
|
||||
|
||||
## Step 4 — Create Firewall Block Rules
|
||||
|
||||
One block rule per alias, applied to both WAN and ATT interfaces. Add these rules **above** your existing PASS rules on each interface.
|
||||
|
||||
Navigate to **Firewall → Rules → WAN**
|
||||
|
||||
### Rule 1 — Block Feodo Tracker (WAN)
|
||||
|
||||
Click **Add** (add to top of ruleset):
|
||||
|
||||
| Field | Value |
|
||||
|---|---|
|
||||
| Action | Block |
|
||||
| Interface | WAN |
|
||||
| Direction | in |
|
||||
| Protocol | any |
|
||||
| Source | `Feodo_Tracker` (single host or alias) |
|
||||
| Destination | any |
|
||||
| Description | `Block Feodo Tracker botnet C2` |
|
||||
| Log | ✓ Enable logging |
|
||||
|
||||
Click **Save**.
|
||||
|
||||
### Rule 2 — Block Abuse.ch SSLBL (WAN)
|
||||
|
||||
| Field | Value |
|
||||
|---|---|
|
||||
| Action | Block |
|
||||
| Interface | WAN |
|
||||
| Direction | in |
|
||||
| Protocol | any |
|
||||
| Source | `AbuseCH_SSLBL` |
|
||||
| Destination | any |
|
||||
| Description | `Block Abuse.ch SSL Blacklist` |
|
||||
| Log | ✓ Enable logging |
|
||||
|
||||
Click **Save**.
|
||||
|
||||
### Rule 3 — Block Emerging Threats (WAN)
|
||||
|
||||
| Field | Value |
|
||||
|---|---|
|
||||
| Action | Block |
|
||||
| Interface | WAN |
|
||||
| Direction | in |
|
||||
| Protocol | any |
|
||||
| Source | `ET_Block_IPs` |
|
||||
| Destination | any |
|
||||
| Description | `Block Emerging Threats IPs` |
|
||||
| Log | ✓ Enable logging |
|
||||
|
||||
Click **Save**.
|
||||
|
||||
Click **Apply Changes** on the WAN rules page.
|
||||
|
||||
### Repeat for ATT Interface
|
||||
|
||||
Navigate to **Firewall → Rules → ATT** and add the same three rules with `Interface: ATT`. This ensures blocking applies to both WANs during the transition period, and only ATT after WAN is retired.
|
||||
|
||||
---
|
||||
|
||||
## Step 5 — Also Block Outbound (Optional but Recommended)
|
||||
|
||||
Adding outbound blocks catches the case where an internal device is already compromised and attempting to contact C2 infrastructure. Apply to the LAN interface, direction **out**:
|
||||
|
||||
Navigate to **Firewall → Rules → LAN**, add rules with:
|
||||
- Direction: `out`
|
||||
- Source: `any`
|
||||
- Destination: the respective alias (`Feodo_Tracker`, `AbuseCH_SSLBL`, `ET_Block_IPs`)
|
||||
- Action: `Block`
|
||||
|
||||
This means even if malware bypasses inbound filtering, outbound connections to known C2 IPs are still blocked.
|
||||
|
||||
---
|
||||
|
||||
## Re-enable Existing Spamhaus Rules
|
||||
|
||||
While you are in the firewall rules, re-enable the three currently disabled rules:
|
||||
|
||||
Navigate to **Firewall → Rules → WAN**
|
||||
|
||||
Find these three rules (they appear greyed out):
|
||||
1. `Block DROP` — source: SpamHaus_Drop
|
||||
2. `Block EDROP` — source: Spamhaus_edrop
|
||||
3. GeoIP country block — source: Blocked_Countries
|
||||
|
||||
Click the **enable toggle** (grey circle icon) on each rule to enable them. Click **Apply Changes**.
|
||||
|
||||
> ✓ These aliases are already populated and refreshing automatically. The only reason they were not blocking is because the rules were disabled. Enabling them requires no other changes.
|
||||
|
||||
---
|
||||
|
||||
## Verifying Blocklists Are Working
|
||||
|
||||
### Check Alias Contents
|
||||
|
||||
**Firewall → Diagnostics → Aliases** — select each alias to see the current list of blocked IPs and confirm they are populated.
|
||||
|
||||
### Check Firewall Logs
|
||||
|
||||
**Firewall → Log Files → Live View** — filter by the rule description (e.g., `Feodo Tracker`) to see blocks in real time.
|
||||
|
||||
### Check Update Schedule
|
||||
|
||||
Aliases refresh on the schedule set during creation. To force an immediate refresh:
|
||||
**Firewall → Diagnostics → Aliases → select alias → Flush + Force Update**
|
||||
|
||||
---
|
||||
|
||||
## Complete Blocklist Summary
|
||||
|
||||
After implementing all of the above, your firewall enforces the following IP blocklists:
|
||||
|
||||
| Alias | List | Covers | Update |
|
||||
|---|---|---|---|
|
||||
| SpamHaus_Drop | Spamhaus DROP | Hijacked/compromised netblocks | Daily |
|
||||
| Spamhaus_edrop | Spamhaus EDROP | Extended DROP — sub-allocated compromised netblocks | Daily |
|
||||
| Feodo_Tracker | Feodo Tracker | Botnet C2 IPs | Daily |
|
||||
| AbuseCH_SSLBL | Abuse.ch SSLBL | Malicious SSL certificate IPs | Daily |
|
||||
| ET_Block_IPs | Emerging Threats | Active scanners & attack IPs | Daily |
|
||||
| crowdsec_blacklists | CrowdSec | Community-reported bad IPs (IPv4) | Real-time |
|
||||
| crowdsec6_blacklists | CrowdSec | Community-reported bad IPs (IPv6) | Real-time |
|
||||
| Blocked_Countries | MaxMind GeoIP | 70 blocked countries | Weekly |
|
||||
|
||||
Combined with Suricata (content inspection) and CrowdSec (IP reputation), this gives you a comprehensive multi-layer perimeter.
|
||||
|
||||
---
|
||||
|
||||
## Related Documentation
|
||||
|
||||
- [OPNsense Firewall](./opnsense-firewall) — parent firewall documentation, full alias list
|
||||
- [Suricata IDS/IPS](./suricata-ids-ips) — content inspection layer, also uses these feed sources
|
||||
- [CrowdSec](./crowdsec) — real-time IP reputation blocking
|
||||
|
|
|
|||
---
|
||||
title: Video Restoration Script
|
||||
description: Restore VHS Video Captures
|
||||
published: true
|
||||
date: 2026-03-06T03:48:12.713Z
|
||||
tags:
|
||||
editor: markdown
|
||||
dateCreated: 2026-03-06T03:48:05.841Z
|
||||
---
|
||||
|
||||
# VHS Video Restoration — User Guide
|
||||
|
||||
A pipeline script for cleaning up and upscaling old VHS captures on Ubuntu 24.04.
|
||||
Runs in two modes: a fast FFmpeg-only cleanup pass, and a full AI upscale using Real-ESRGAN.
|
||||
|
||||
---
|
||||
|
||||
## Requirements
|
||||
|
||||
- **Ubuntu 24.04**
|
||||
- **FFmpeg** — `sudo apt install ffmpeg`
|
||||
- **bc** — `sudo apt install bc`
|
||||
- **Real-ESRGAN** (optional, for AI upscaling — see setup below)
|
||||
|
||||
---
|
||||
|
||||
## File Setup
|
||||
|
||||
Place everything in a working folder with this structure:
|
||||
|
||||
```
|
||||
~/your-folder/
|
||||
├── vhs_restore.sh
|
||||
├── realesrgan-ncnn-vulkan ← AI upscaler binary (optional)
|
||||
├── models/ ← Real-ESRGAN model files
|
||||
├── input/ ← Put your source videos here
|
||||
├── output/ ← Restored videos appear here
|
||||
└── work/ ← Temporary scratch files (auto-created)
|
||||
```
|
||||
|
||||
Supported input formats: `.mpg`, `.mpeg`, `.mp4`, `.avi`, `.mov`, `.mkv`, `.wmv`, `.m4v`, `.ts`
|
||||
|
||||
---
|
||||
|
||||
## First-Time Setup
|
||||
|
||||
```bash
|
||||
# Make the script executable
|
||||
chmod +x vhs_restore.sh
|
||||
|
||||
# Create the input folder and add your videos
|
||||
mkdir input
|
||||
cp /path/to/your/videos/*.mpg input/
|
||||
```
|
||||
|
||||
### Installing Real-ESRGAN (one-time, for AI upscaling)
|
||||
|
||||
1. Download the latest Ubuntu release from:
|
||||
https://github.com/xinntao/Real-ESRGAN/releases
|
||||
→ look for `realesrgan-ncnn-vulkan-*-ubuntu.zip`
|
||||
2. Unzip into your working folder
|
||||
3. `chmod +x realesrgan-ncnn-vulkan`
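To confirm the binary and models are in place before the first run (`-h` prints the usage text and available options):

```shell
# Models shipped in the zip; the script expects realesr-animevideov3 here
ls models/
# Prints usage if the binary runs on this system
./realesrgan-ncnn-vulkan -h
```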
|
||||
|
||||
---
|
||||
|
||||
## Running the Script
|
||||
|
||||
### Quick cleanup only (recommended first pass)
|
||||
|
||||
Fast — processes in a few minutes per file. No AI upscaling.
|
||||
|
||||
```bash
|
||||
./vhs_restore.sh --no-ai
|
||||
```
|
||||
|
||||
### Full pipeline with AI upscaling
|
||||
|
||||
Slow on CPU (plan for several hours per hour of footage). Produces the best results.
|
||||
|
||||
```bash
|
||||
./vhs_restore.sh
|
||||
```
|
||||
|
||||
### All options
|
||||
|
||||
| Flag | Description | Default |
|
||||
|------|-------------|---------|
|
||||
| `-i DIR` | Input directory | `./input` |
|
||||
| `-o DIR` | Output directory | `./output` |
|
||||
| `-w DIR` | Scratch/work directory | `./work` |
|
||||
| `-b PATH` | Path to Real-ESRGAN binary | `./realesrgan-ncnn-vulkan` |
|
||||
| `-s 2` or `-s 4` | Upscale factor | `2` |
|
||||
| `-q 16` | Output quality (0–51, lower = better) | `16` |
|
||||
| `--no-ai` | Skip AI upscaling, FFmpeg only | off |
|
||||
| `--keep` | Keep extracted PNG frames after processing | off |
|
||||
| `-h` | Show help | |
|
||||
|
||||
**Examples:**
|
||||
|
||||
```bash
|
||||
# Process files from a custom folder
|
||||
./vhs_restore.sh -i ~/Videos/VHS -o ~/Videos/Restored
|
||||
|
||||
# 4x upscale with slightly smaller output file
|
||||
./vhs_restore.sh -s 4 -q 18
|
||||
|
||||
# FFmpeg cleanup only, custom folders
|
||||
./vhs_restore.sh -i ~/Videos/VHS -o ~/Videos/Restored --no-ai
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## What the Script Does
|
||||
|
||||
**Stage 1 — FFmpeg cleanup** (always runs):
|
||||
- Deinterlaces the video (`yadif`) — removes the horizontal combing artifacts common in VHS captures
|
||||
- Denoises (`hqdn3d=2:1:2:2`) — gentle noise reduction that avoids motion blocking
|
||||
- Sharpens edges (`unsharp`) — recovers detail softened by the denoise step
|
||||
- Colour corrects — boosts washed-out VHS colour, adjusts contrast and gamma, corrects the green/yellow cast common in aged tape
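Stage 1 can be reproduced as a single standalone FFmpeg command. The filter values below mirror the script's defaults (the script's full chain also appends a colorbalance step; the input filename here is just an example):

```shell
ffmpeg -i input/tape01.mpg \
  -vf "yadif=mode=1,hqdn3d=2:1:2:2,unsharp=3:3:0.5:3:3:0.3,eq=contrast=1.2:brightness=0.05:saturation=1.8:gamma=1.1" \
  -c:v libx264 -crf 16 -preset slow -c:a aac \
  cleaned.mp4
```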
|
||||
|
||||
**Stage 2 — Frame extraction** (AI mode only):
|
||||
- Extracts every frame as a PNG into a temporary folder
|
||||
|
||||
**Stage 3 — Real-ESRGAN upscaling** (AI mode only):
|
||||
- Runs the `realesr-animevideov3` model on each frame
|
||||
- Default: 2× upscale (e.g. 640×480 → 1280×960)
|
||||
|
||||
**Reassembly:**
|
||||
- Rebuilds the video from upscaled frames with the original audio
|
||||
|
||||
---
|
||||
|
||||
## Live Progress
|
||||
|
||||
The script shows live FFmpeg output. Watch for:
|
||||
|
||||
- `speed=3.5x` — processing at 3.5× realtime (good)
|
||||
- `speed=0.5x` — slow, likely a very heavy filter load
|
||||
- `corrupt decoded frame` — normal for damaged VHS files, FFmpeg will push through
|
||||
|
||||
---
|
||||
|
||||
## Troubleshooting
|
||||
|
||||
**Script hangs with no output**
|
||||
Run with `--no-ai` first to confirm FFmpeg is working, then check that your Real-ESRGAN binary is executable (`chmod +x realesrgan-ncnn-vulkan`).
|
||||
|
||||
**Output looks blocky during motion**
|
||||
The denoise values may still be too high for your footage. Edit the script and reduce `hqdn3d=2:1:2:2` to `hqdn3d=1:1:1:1`, or remove `hqdn3d` entirely — Real-ESRGAN handles noise well on its own.
|
||||
|
||||
**Colour looks over-saturated**
|
||||
Reduce `saturation=1.8` in the filter chain to `saturation=1.4` or `1.2`.
|
||||
|
||||
**Real-ESRGAN not found**
|
||||
Ensure the binary is in the same folder as the script and is executable. Or pass the path explicitly: `./vhs_restore.sh -b /path/to/realesrgan-ncnn-vulkan`
|
||||
|
||||
**Error logs**
|
||||
All FFmpeg and Real-ESRGAN logs are saved to `/tmp/` for diagnosis:
|
||||
- `/tmp/ffmpeg_stage1.log`
|
||||
- `/tmp/ffmpeg_extract.log`
|
||||
- `/tmp/realesrgan.log`
|
||||
- `/tmp/ffmpeg_reassemble.log`
|
||||
|
||||
---
|
||||
|
||||
## Workflow Recommendation
|
||||
|
||||
1. Run `--no-ai` first on one file to check the cleanup result
|
||||
2. If it looks good, run the full pipeline on all files overnight
|
||||
3. For heavily damaged footage, consider also running **CodeFormer** (face restoration) on top of the output — particularly effective if the video contains people
|
||||
|
||||
---
|
||||
|
||||
## Output
|
||||
|
||||
Restored files are saved to `./output/` as `<original_name>_restored.mp4` encoded as H.264 with AAC audio.
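A quick way to confirm a result encoded as expected (the filename is an example):

```shell
# Codec and dimensions of the restored file
ffprobe -v error -select_streams v:0 \
  -show_entries stream=codec_name,width,height \
  -of default=nw=1 output/tape01_restored.mp4
```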
|
||||
|
||||
|
||||
## vhs_restore.sh Script
|
||||
|
||||
```bash
#!/usr/bin/env bash
|
||||
# =============================================================================
|
||||
# vhs_restore.sh — Automated VHS Video Restoration Pipeline
|
||||
# Stages: Deinterlace → Denoise → Colour correct → AI Upscale → Reassemble
|
||||
#
|
||||
# Changes from v1:
|
||||
# - Gentle hqdn3d (2:1:2:2) to prevent motion blocking/pixelation
|
||||
# - Aggressive colour correction for washed-out VHS footage
|
||||
# - Live FFmpeg progress shown in terminal (no silent hanging)
|
||||
# - Logs still saved to /tmp/ for error diagnosis
|
||||
# =============================================================================
|
||||
set -euo pipefail
|
||||
|
||||
# ── Colour output helpers ────────────────────────────────────────────────────
|
||||
RED='\033[0;31m'; GREEN='\033[0;32m'; YELLOW='\033[1;33m'
|
||||
CYAN='\033[0;36m'; BOLD='\033[1m'; NC='\033[0m'
|
||||
info() { echo -e "${CYAN}[INFO]${NC} $*"; }
|
||||
success() { echo -e "${GREEN}[OK]${NC} $*"; }
|
||||
warn() { echo -e "${YELLOW}[WARN]${NC} $*"; }
|
||||
error() { echo -e "${RED}[ERROR]${NC} $*" >&2; }
|
||||
header() { echo -e "\n${BOLD}${CYAN}══ $* ══${NC}"; }
|
||||
|
||||
# ── Default configuration ────────────────────────────────────────────────────
|
||||
INPUT_DIR="./input" # Folder containing your source VHS videos
|
||||
OUTPUT_DIR="./output" # Final restored videos land here
|
||||
WORK_DIR="./work" # Scratch space (frames, temp files)
|
||||
REALESRGAN_BIN="./realesrgan-ncnn-vulkan" # Path to Real-ESRGAN binary
|
||||
REALESRGAN_MODEL="realesr-animevideov3" # Best model for home video
|
||||
UPSCALE_FACTOR=2 # 2x or 4x (4x is very slow on CPU)
|
||||
OUTPUT_WIDTH=1920 # Target width used in --no-ai mode
|
||||
OUTPUT_HEIGHT=1080 # Target height used in --no-ai mode
|
||||
CRF=16 # Output quality 0-51, lower = better
|
||||
PRESET="slow" # FFmpeg encode preset
|
||||
SKIP_UPSCALE=false # --no-ai flag sets this true
|
||||
KEEP_FRAMES=false # --keep flag sets this true
|
||||
|
||||
# ── Parse CLI flags ──────────────────────────────────────────────────────────
|
||||
usage() {
|
||||
cat <<EOF
|
||||
Usage: $(basename "$0") [options]
|
||||
|
||||
Options:
|
||||
-i DIR Input directory (default: ./input)
|
||||
-o DIR Output directory (default: ./output)
|
||||
-w DIR Work/scratch dir (default: ./work)
|
||||
-b PATH Path to realesrgan-ncnn-vulkan binary
|
||||
-s FACTOR Upscale factor: 2 or 4 (default: 2)
|
||||
-q CRF Output quality 0-51, lower=better (default: 16)
|
||||
--no-ai Skip Real-ESRGAN; FFmpeg cleanup only (fast)
|
||||
--keep Keep extracted frames after processing
|
||||
-h Show this help
|
||||
|
||||
Examples:
|
||||
$(basename "$0") -i ~/Videos/VHS -o ~/Videos/Restored
|
||||
$(basename "$0") -i ~/Videos/VHS --no-ai # Quick cleanup only
|
||||
$(basename "$0") -i ~/Videos/VHS -s 4 -q 18 # 4x upscale
|
||||
EOF
|
||||
exit 0
|
||||
}
|
||||
|
||||
while [[ $# -gt 0 ]]; do
|
||||
case "$1" in
|
||||
-i) INPUT_DIR="$2"; shift 2 ;;
|
||||
-o) OUTPUT_DIR="$2"; shift 2 ;;
|
||||
-w) WORK_DIR="$2"; shift 2 ;;
|
||||
-b) REALESRGAN_BIN="$2"; shift 2 ;;
|
||||
-s) UPSCALE_FACTOR="$2"; shift 2 ;;
|
||||
-q) CRF="$2"; shift 2 ;;
|
||||
--no-ai) SKIP_UPSCALE=true; shift ;;
|
||||
--keep) KEEP_FRAMES=true; shift ;;
|
||||
-h|--help) usage ;;
|
||||
*) error "Unknown option: $1"; usage ;;
|
||||
esac
|
||||
done
|
||||
|
||||
# ── Dependency checks ────────────────────────────────────────────────────────
|
||||
header "Checking dependencies"
|
||||
|
||||
check_cmd() {
|
||||
if command -v "$1" &>/dev/null; then
|
||||
success "$1 found"
|
||||
else
|
||||
error "$1 not found. Install with: $2"
|
||||
exit 1
|
||||
fi
|
||||
}
|
||||
|
||||
check_cmd ffmpeg "sudo apt install ffmpeg"
|
||||
check_cmd ffprobe "sudo apt install ffmpeg"
|
||||
check_cmd bc "sudo apt install bc"
|
||||
|
||||
if [[ "$SKIP_UPSCALE" == false ]]; then
|
||||
if [[ ! -x "$REALESRGAN_BIN" ]]; then
|
||||
warn "Real-ESRGAN binary not found at: $REALESRGAN_BIN"
|
||||
echo
|
||||
echo -e "${YELLOW}To install Real-ESRGAN:${NC}"
|
||||
echo " 1. Download: https://github.com/xinntao/Real-ESRGAN/releases"
|
||||
echo " -> realesrgan-ncnn-vulkan-*-ubuntu.zip"
|
||||
echo " 2. Unzip into this directory"
|
||||
echo " 3. chmod +x realesrgan-ncnn-vulkan"
|
||||
echo " 4. Re-run this script"
|
||||
echo
|
||||
echo "Or run with --no-ai for FFmpeg-only cleanup (no upscaling)."
|
||||
exit 1
|
||||
fi
|
||||
success "Real-ESRGAN found"
|
||||
fi
|
||||
|
||||
# ── Locate input files ───────────────────────────────────────────────────────
|
||||
header "Scanning input directory: $INPUT_DIR"

if [[ ! -d "$INPUT_DIR" ]]; then
    error "Input directory not found: $INPUT_DIR"
    exit 1
fi

mapfile -t VIDEO_FILES < <(find "$INPUT_DIR" -maxdepth 1 \
    -type f \( -iname "*.mp4" -o -iname "*.avi" -o -iname "*.mov" \
    -o -iname "*.mkv" -o -iname "*.mpg" -o -iname "*.mpeg" \
    -o -iname "*.wmv" -o -iname "*.m4v" -o -iname "*.ts" \) \
    | sort)

if [[ ${#VIDEO_FILES[@]} -eq 0 ]]; then
    error "No video files found in $INPUT_DIR"
    exit 1
fi

info "Found ${#VIDEO_FILES[@]} video file(s):"
for f in "${VIDEO_FILES[@]}"; do echo "  * $(basename "$f")"; done

# ── Helpers ──────────────────────────────────────────────────────────────────
probe() {
    ffprobe -v error -select_streams v:0 \
        -show_entries "stream=$2" -of csv=p=0 "$1" 2>/dev/null | head -1
}

human_time() {
    local s="${1%.*}"
    printf '%dh %dm %ds' $((s/3600)) $(( (s%3600)/60 )) $((s%60))
}

# ── Create directories ───────────────────────────────────────────────────────
mkdir -p "$OUTPUT_DIR" "$WORK_DIR"

# ── Overall stats ────────────────────────────────────────────────────────────
TOTAL_FILES=${#VIDEO_FILES[@]}
PROCESSED=0
FAILED=0
PIPELINE_START=$(date +%s)

# ════════════════════════════════════════════════════════════════════════════
# MAIN LOOP
# ════════════════════════════════════════════════════════════════════════════
for INPUT_FILE in "${VIDEO_FILES[@]}"; do

    BASENAME=$(basename "$INPUT_FILE")
    STEM="${BASENAME%.*}"
    CLEANED="$WORK_DIR/${STEM}_cleaned.mp4"
    FRAMES_IN="$WORK_DIR/${STEM}_frames_in"
    FRAMES_OUT="$WORK_DIR/${STEM}_frames_out"
    FINAL_OUTPUT="$OUTPUT_DIR/${STEM}_restored.mp4"

    header "Processing: $BASENAME ($((PROCESSED+1))/$TOTAL_FILES)"
    FILE_START=$(date +%s)

    # ── Probe source ──────────────────────────────────────────────────────────
    FPS=$(probe "$INPUT_FILE" "r_frame_rate")
    FPS_DEC=$(echo "scale=3; $FPS" | bc 2>/dev/null || echo "25")
    WIDTH=$(probe "$INPUT_FILE" "width")
    HEIGHT=$(probe "$INPUT_FILE" "height")
    FIELD_ORDER=$(probe "$INPUT_FILE" "field_order")
    DURATION=$(ffprobe -v error -show_entries format=duration \
        -of csv=p=0 "$INPUT_FILE" 2>/dev/null | head -1)

    info "Source: ${WIDTH}x${HEIGHT} ${FPS_DEC}fps $(human_time "${DURATION%.*}") field_order=${FIELD_ORDER:-unknown}"

    # Always deinterlace for VHS -- safe even if not flagged as interlaced
    if [[ "$FIELD_ORDER" =~ ^(tt|tb|bt|bb)$ ]]; then
        DEINTERLACE_FILTER="yadif=mode=1,"
        info "Interlacing detected — applying yadif deinterlacer"
    else
        DEINTERLACE_FILTER="yadif=mode=1,"
        warn "Interlacing not confirmed by probe — applying yadif anyway (safe for VHS)"
    fi

    # ── Stage 1: FFmpeg cleanup ───────────────────────────────────────────────
    header "Stage 1/3 — FFmpeg cleanup & colour correction"
    info "Watch fps= and speed= for live progress."
    info "Corrupt frame warnings are normal for old VHS captures."
    echo

    if [[ "$SKIP_UPSCALE" == true ]]; then
        SCALE_FILTER="scale=${OUTPUT_WIDTH}:${OUTPUT_HEIGHT}:flags=lanczos,"
    else
        SCALE_FILTER=""
    fi

    # Filter chain notes:
    #   hqdn3d=2:1:2:2 -- gentle denoise; low temporal values (3rd/4th)
    #                     prevent the motion blocking seen with higher values
    #   unsharp        -- moderate sharpening to recover edge detail
    #   eq             -- aggressive colour boost for washed-out VHS
    #   colorbalance   -- corrects the green/yellow cast common in aged VHS
    VFILTER="${DEINTERLACE_FILTER}\
hqdn3d=2:1:2:2,\
unsharp=3:3:0.5:3:3:0.3,\
eq=contrast=1.2:brightness=0.05:saturation=1.8:gamma=1.1,\
colorbalance=rs=0.1:gs=0.0:bs=-0.1,\
${SCALE_FILTER}\
format=yuv420p"

    if ! ffmpeg -y -i "$INPUT_FILE" \
        -vf "$VFILTER" \
        -c:v libx264 -crf 18 -preset medium \
        -c:a aac -b:a 192k -ac 2 \
        -stats \
        "$CLEANED" 2>&1 | tee /tmp/ffmpeg_stage1.log | \
        grep --line-buffered -E "(frame=|speed=|error|Error|Invalid)"; then
        error "FFmpeg stage 1 failed. Full log: /tmp/ffmpeg_stage1.log"
        FAILED=$((FAILED+1))
        continue
    fi

    echo
    success "Stage 1 complete -> $(du -sh "$CLEANED" | cut -f1)"

    if [[ "$SKIP_UPSCALE" == true ]]; then
        cp "$CLEANED" "$FINAL_OUTPUT"
        success "Output (no AI): $FINAL_OUTPUT"
        PROCESSED=$((PROCESSED+1))
        [[ "$KEEP_FRAMES" == false ]] && rm -f "$CLEANED"
        continue
    fi

    # ── Stage 2: Extract frames ───────────────────────────────────────────────
    header "Stage 2/3 — Extracting frames for AI upscaling"
    mkdir -p "$FRAMES_IN" "$FRAMES_OUT"

    FRAME_COUNT=$(ffprobe -v error -count_packets \
        -select_streams v:0 -show_entries stream=nb_read_packets \
        -of csv=p=0 "$CLEANED" 2>/dev/null | head -1)
    FRAME_COUNT=${FRAME_COUNT:-0}
    info "Extracting ~${FRAME_COUNT} frames..."

    if ! ffmpeg -y -i "$CLEANED" \
        -vsync 0 -stats \
        "$FRAMES_IN/frame%08d.png" 2>&1 | tee /tmp/ffmpeg_extract.log | \
        grep --line-buffered -E "(frame=|speed=|error|Error)"; then
        error "Frame extraction failed. Full log: /tmp/ffmpeg_extract.log"
        FAILED=$((FAILED+1))
        continue
    fi

    ACTUAL_FRAMES=$(find "$FRAMES_IN" -name "*.png" | wc -l)
    echo
    success "Extracted $ACTUAL_FRAMES frames"

    # ── Stage 3: Real-ESRGAN ──────────────────────────────────────────────────
    header "Stage 3/3 — Real-ESRGAN AI upscaling (${UPSCALE_FACTOR}x)"
    warn "Slow on CPU — est. $(echo "scale=0; $ACTUAL_FRAMES * 10 / 60" | bc)-$(echo "scale=0; $ACTUAL_FRAMES * 30 / 60" | bc) minutes"
    info "Upscaled frames will appear in: $FRAMES_OUT"
    echo

    UPSCALE_START=$(date +%s)
    if ! "$REALESRGAN_BIN" \
        -i "$FRAMES_IN" \
        -o "$FRAMES_OUT" \
        -n "$REALESRGAN_MODEL" \
        -s "$UPSCALE_FACTOR" \
        -f png 2>&1 | tee /tmp/realesrgan.log; then
        error "Real-ESRGAN failed. Full log: /tmp/realesrgan.log"
        FAILED=$((FAILED+1))
        continue
    fi

    UPSCALE_END=$(date +%s)
    UPSCALE_ELAPSED=$((UPSCALE_END - UPSCALE_START))
    success "AI upscaling complete in $(human_time $UPSCALE_ELAPSED)"

    # ── Reassemble ────────────────────────────────────────────────────────────
    REASSEMBLE_FPS=$(ffprobe -v error -select_streams v:0 \
        -show_entries stream=r_frame_rate \
        -of csv=p=0 "$CLEANED" 2>/dev/null | head -1)

    info "Reassembling video from upscaled frames..."
    echo

    if ! ffmpeg -y \
        -framerate "$REASSEMBLE_FPS" \
        -i "$FRAMES_OUT/frame%08d.png" \
        -i "$CLEANED" \
        -map 0:v -map 1:a \
        -c:v libx264 -crf "$CRF" -preset "$PRESET" \
        -c:a copy \
        -movflags +faststart \
        -stats \
        "$FINAL_OUTPUT" 2>&1 | tee /tmp/ffmpeg_reassemble.log | \
        grep --line-buffered -E "(frame=|speed=|error|Error)"; then
        error "Reassembly failed. Full log: /tmp/ffmpeg_reassemble.log"
        FAILED=$((FAILED+1))
        continue
    fi

    # ── Cleanup ───────────────────────────────────────────────────────────────
    if [[ "$KEEP_FRAMES" == false ]]; then
        rm -rf "$FRAMES_IN" "$FRAMES_OUT" "$CLEANED"
        info "Scratch files cleaned up"
    else
        info "Frames kept in: $FRAMES_IN / $FRAMES_OUT"
    fi

    FILE_END=$(date +%s)
    FILE_ELAPSED=$((FILE_END - FILE_START))
    PROCESSED=$((PROCESSED+1))

    OUT_SIZE=$(du -sh "$FINAL_OUTPUT" | cut -f1)
    echo
    success "Done: $FINAL_OUTPUT"
    info "  File size : $OUT_SIZE"
    info "  Time taken: $(human_time $FILE_ELAPSED)"

done

# ════════════════════════════════════════════════════════════════════════════
# Final summary
# ════════════════════════════════════════════════════════════════════════════
PIPELINE_END=$(date +%s)
PIPELINE_ELAPSED=$((PIPELINE_END - PIPELINE_START))

header "Pipeline Complete"
echo -e "  ${GREEN}Processed : $PROCESSED / $TOTAL_FILES${NC}"
[[ $FAILED -gt 0 ]] && echo -e "  ${RED}Failed : $FAILED${NC}"
echo -e "  Total time: $(human_time $PIPELINE_ELAPSED)"
echo -e "  Output dir: $OUTPUT_DIR"
echo

if [[ $PROCESSED -gt 0 ]]; then
    echo "Restored files:"
    find "$OUTPUT_DIR" -name "*_restored.mp4" | while read -r f; do
        SIZE=$(du -sh "$f" | cut -f1)
        echo "  * $(basename "$f") ($SIZE)"
    done
fi
```
@ -0,0 +1,453 @@
---
title: Stashapp Workflow
description:
published: true
date: 2026-02-20T04:25:56.467Z
tags:
editor: markdown
dateCreated: 2026-02-18T13:08:53.604Z
---

# StashApp: Automated Library Management with Community Scrapers

> **Goal:** Automatically identify, tag, rename, and organize your media library with minimal manual intervention using StashDB, ThePornDB, and the CommunityScrapers repository.

---

## Table of Contents

1. [Prerequisites](#1-prerequisites)
2. [Installing CommunityScrapers](#2-installing-communityscrapers)
3. [Configuring Metadata Providers](#3-configuring-metadata-providers)
   - [StashDB](#31-stashdb)
   - [ThePornDB (TPDB)](#32-theporndb-tpdb)
4. [Configuring Your Library](#4-configuring-your-library)
5. [Automated File Naming & Moving](#5-automated-file-naming--moving)
6. [The Core Workflow](#6-the-core-workflow)
7. [Handling ABMEA & Amateur Content](#7-handling-abmea--amateur-content)
8. [Automation with Scheduled Tasks](#8-automation-with-scheduled-tasks)
9. [Tips & Troubleshooting](#9-tips--troubleshooting)

---

## 1. Prerequisites

Before starting, make sure you have:

- **StashApp installed and running** — see the [official install docs](https://github.com/stashapp/stash/wiki/Installation)
- **Git installed** on your system (needed to clone the scrapers repo)
- **A ThePornDB account** — free tier available at [metadataapi.net](https://metadataapi.net)
- **A StashDB account** — requires a community invite; request one on [the Discord](https://discord.gg/2TsNFKt)
- Your Stash config directory noted — default locations:

| OS | Default Path |
|----|-------------|
| Windows | `%APPDATA%\stash` |
| macOS | `~/.stash` |
| Linux | `~/.stash` |
| Docker | `/root/.stash` |

---

## 2. Installing CommunityScrapers

The [CommunityScrapers](https://github.com/stashapp/CommunityScrapers) repository contains community-maintained scrapers for hundreds of sites. It is the primary source for site-specific scrapers, including ABMEA.

### Step 1 — Navigate to your Stash config directory

```bash
cd ~/.stash
```

### Step 2 — Create a scrapers directory if it doesn't exist

```bash
mkdir -p scrapers
cd scrapers
```

### Step 3 — Clone the CommunityScrapers repository

```bash
git clone https://github.com/stashapp/CommunityScrapers.git
```

This creates `~/.stash/scrapers/CommunityScrapers/` containing all available scrapers.

### Step 4 — Verify Stash detects the scrapers

1. Open Stash in your browser (default: `http://localhost:9999`)
2. Go to **Settings → Metadata Providers → Scrapers**
3. Click **Reload Scrapers**
4. You should now see a long list of scrapers, including entries for ABMEA, ManyVids, Clips4Sale, etc.
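
Before reloading in the UI, you can sanity-check the clone from a shell. The `count_scrapers` helper below is illustrative (not part of Stash), and the path assumes the default Linux config directory from the table in Section 1:

```shell
# Count scraper definitions in a scrapers directory; a healthy clone of
# CommunityScrapers should report well over a hundred .yml files.
count_scrapers() {
    find "$1" -name '*.yml' 2>/dev/null | wc -l | tr -d ' '
}

count_scrapers "$HOME/.stash/scrapers/CommunityScrapers"
```

If this prints `0`, the clone landed in the wrong directory and Reload Scrapers will find nothing.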

### Step 5 — Keep scrapers updated

Since community scrapers are actively maintained, update them periodically:

```bash
cd ~/.stash/scrapers/CommunityScrapers
git pull
```

> 💡 **Tip:** You can automate this with a cron job or scheduled task. See [Section 8](#8-automation-with-scheduled-tasks).

### Installing Python Dependencies (if prompted)

Some scrapers require Python packages. If you see scraper errors mentioning missing modules:

```bash
pip install requests cloudscraper py-cord lxml
```

---

## 3. Configuring Metadata Providers

Stash uses **metadata providers** to automatically match scenes by fingerprint (phash/oshash). This is what enables true automation — no filename matching required.

### 3.1 StashDB

StashDB is the official community-run fingerprint and metadata database. It is the most reliable source for mainstream and studio content.

1. Go to **Settings → Metadata Providers**
2. Under **Stash-Box Endpoints**, click **Add**
3. Fill in:
   - **Name:** `StashDB`
   - **Endpoint:** `https://stashdb.org/graphql`
   - **API Key:** *(generate this from your StashDB account → API Keys)*
4. Click **Confirm**

### 3.2 ThePornDB (TPDB)

TPDB aggregates metadata from a large number of sites and is especially useful for amateur, clip site, and ABMEA content that may not be on StashDB.

1. Log in at [metadataapi.net](https://metadataapi.net) and go to your **API Settings** to get your key
2. In Stash, go to **Settings → Metadata Providers**
3. Under **Stash-Box Endpoints**, click **Add**
4. Fill in:
   - **Name:** `ThePornDB`
   - **Endpoint:** `https://theporndb.net/graphql`
   - **API Key:** *(your TPDB API key)*
5. Click **Confirm**

### Provider Priority Order

Set your identify task to query providers in this order for best results:

1. **StashDB** — highest quality, community-verified
2. **ThePornDB** — broad coverage including amateur/clip sites
3. **CommunityScrapers** (site-specific) — for anything not matched above

---

## 4. Configuring Your Library

### Adding Library Paths

1. Go to **Settings → Library**
2. Under **Directories**, click **Add** and point to your media folders
3. You can add multiple directories (e.g., separate drives or folders)

> ⚠️ **Do not** set your organized output folder as a source directory. Keep source and destination separate until you are confident in your setup.

### Recommended Directory Structure

```
/media/
├── stash-incoming/     ← Source: where new files land
└── stash-library/      ← Destination: where Stash moves organized files
    ├── Studios/
    │   └── ABMEA/
    └── Amateur/
```

---

## 5. Automated File Naming & Moving

This is the section that does the heavy lifting. Stash will rename and move files **only when a scene is marked as Organized**, which gives you a review gate before anything is touched.

### Enable File Moving

1. Go to **Settings → Library**
2. Enable **"Move files to organized folder on organize"**
3. Set your **Organized folder path** (e.g., `/media/stash-library`)

### Configure the File Naming Template

Still in **Settings → Library**, set your **Filename template**. Templates use Go template syntax with Stash variables.

**Recommended template for mixed studio/amateur libraries:**

```
{studio}/{date} {title}
```

**For performer-centric amateur libraries:**

```
{performers}/{studio}/{date} {title}
```

**Full example with fallbacks:**

```
{{if .Studio}}{{.Studio.Name}}{{else}}Unknown{{end}}/{{if .Date}}{{.Date}}{{else}}0000-00-00{{end}} {{.Title}}
```

### Available Template Variables

| Variable | Example Output |
|----------|---------------|
| `{title}` | `Scene Title Here` |
| `{date}` | `2024-03-15` |
| `{studio}` | `ABMEA` |
| `{performers}` | `Jane Doe` |
| `{resolution}` | `1080p` |
| `{duration}` | `00-32-15` |
| `{rating}` | `5` |

> 💡 If a field is empty (e.g., no studio), Stash skips that path segment. Test with a few scenes before running on your whole library.
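
To preview what a template will produce before pointing it at real files, a throwaway shell function can stand in for the renamer. `render_path` is hypothetical, not Stash code; it only mimics the `{studio}/{date} {title}` template with the same `Unknown`/`0000-00-00` fallbacks as the full example above:

```shell
# Mimic the {studio}/{date} {title} template with fallback values.
render_path() {
    studio="$1"; date="$2"; title="$3"
    printf '%s/%s %s.mp4\n' "${studio:-Unknown}" "${date:-0000-00-00}" "$title"
}

render_path "ABMEA" "2024-03-15" "Scene Title Here"   # ABMEA/2024-03-15 Scene Title Here.mp4
render_path "" "" "Untitled Clip"                     # Unknown/0000-00-00 Untitled Clip.mp4
```

Running a handful of your own studio/date/title combinations through this is a cheap way to catch surprises (empty fields, odd characters) before bulk-organizing.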
---

## 6. The Core Workflow

Follow these steps **in order** every time you add new content. This is the automated pipeline.

```
New Files → Scan → Generate Fingerprints → Identify → Review → Organize (Move + Rename)
```

### Step 1 — Scan

**Tasks → Scan**

- Discovers new files and adds them to the database
- Does not move or rename anything yet
- Options to enable: **Generate covers on scan**

### Step 2 — Generate Fingerprints

**Tasks → Generate**

Select these options:

| Option | Purpose |
|--------|---------|
| ✅ **Phashes** | Used for fingerprint matching against StashDB/TPDB |
| ✅ **Checksums (MD5/SHA256)** | Used for duplicate detection |
| ✅ **Previews** | Thumbnail previews in the UI |
| ✅ **Sprites** | Timeline scrubber images |

> ⏳ This step is CPU/GPU intensive. Let it complete before proceeding. On a large library, this may take hours.

### Step 3 — Identify (Auto-Scrape by Fingerprint)

**Tasks → Identify**

This is the magic step. Stash sends your file fingerprints to StashDB and TPDB and pulls back metadata automatically.

Configure the task:
1. Click **Add Source** and add **StashDB** first
2. Click **Add Source** again and add **ThePornDB**
3. Under **Options**, enable:
   - ✅ Set cover image
   - ✅ Set performers
   - ✅ Set studio
   - ✅ Set tags
   - ✅ Set date
4. Click **Identify**

Stash will now automatically match and populate metadata for any scene it recognizes by fingerprint.

### Step 4 — Auto Tag (Filename-Based Fallback)

For scenes that didn't match by fingerprint (common with amateur content), use Auto Tag to extract metadata from filenames.

**Tasks → Auto Tag**

- Matches **Performers**, **Studios**, and **Tags** from filenames against your existing database entries
- Works best when filenames contain names (e.g., `JaneDoe_SceneTitle_1080p.mp4`)

### Step 5 — Review Unmatched Scenes

Filter to find scenes that still need attention:

1. Go to **Scenes**
2. Filter by: **Organized = false** and **Studio = none** (or **Performers = none**)
3. Use the **Tagger view** (icon in top right of Scenes) for rapid URL-based scraping

In Tagger view:
- Paste the original source URL into the scrape field
- Click **Scrape** — Stash fills in all metadata from that URL
- Review and click **Save**

### Step 6 — Organize (Move & Rename)

Once you're satisfied with a scene's metadata:

1. Open the scene
2. Click the **Organize** button (checkmark icon), OR
3. Use **bulk organize**: select multiple scenes → Edit → Mark as Organized

When a scene is marked Organized, Stash will:
- ✅ Rename the file according to your template
- ✅ Move it to your organized folder
- ✅ Update the database path

> ⚠️ **This action cannot be easily undone at scale.** Always verify metadata on a small batch first.

---

## 7. Handling ABMEA & Amateur Content

ABMEA and amateur clips often lack fingerprint matches. Use these additional strategies:

### ABMEA-Specific Scraper

The CommunityScrapers repo includes an ABMEA scraper. To use it manually:

1. Open a scene in Stash
2. Click **Edit → Scrape with → ABMEA**
3. If the scene URL is known, enter it; otherwise the scraper will search by title

### Batch URL Scraping Workflow for ABMEA

If you have many files sourced from ABMEA:

1. Before ingesting files, **rename them to include the ABMEA scene ID** in the filename if possible (e.g., `ABMEA-0123_title.mp4`)
2. After scanning, go to **Tagger View**
3. Filter to unmatched scenes and paste ABMEA URLs one by one

### Amateur Content Without a Source Site

For truly anonymous amateur clips:

1. Create a **Studio** entry called `Amateur` (or more specific names like `Amateur - Reddit`)
2. Create **Performer** entries for recurring people you can identify
3. Use **Auto Tag** to match these once entries exist
4. Use tags liberally to compensate for missing structured metadata: `amateur`, `homemade`, `POV`, etc.

### Tag Hierarchy Recommendation

Set up tag parents in **Settings → Tags** to create a browsable hierarchy:

```
Content Type
├── Amateur
├── Professional
└── Compilation

Source
├── ABMEA
├── Clip Site
└── Unknown

Quality
├── 4K
├── 1080p
└── SD
```

---

## 8. Automation with Scheduled Tasks

Minimize manual steps by scheduling recurring tasks.

### Setting Up Scheduled Tasks in Stash

Go to **Settings → Tasks → Scheduled Tasks** and create:

| Task | Schedule | Purpose |
|------|----------|---------|
| Scan | Every 6 hours | Pick up new files automatically |
| Generate (Phashes only) | Every 6 hours | Fingerprint new files |
| Identify | Daily at 2am | Match new fingerprinted files |
| Auto Tag | Daily at 3am | Filename-based fallback tagging |
| Clean | Weekly | Remove missing files from database |

### Auto-Update CommunityScrapers (Linux/macOS)

Add to your crontab (`crontab -e`):

```bash
# Update CommunityScrapers every Sunday at midnight
0 0 * * 0 cd ~/.stash/scrapers/CommunityScrapers && git pull
```
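
The one-liner above fails silently if the pull errors out (detached HEAD, merge conflict, network down). A slightly hardened wrapper, sketched below with an arbitrary function name and log path, records each run so you can spot failures:

```shell
# Pull CommunityScrapers updates and record the outcome in a log file.
update_scrapers() {
    dir="${1:-$HOME/.stash/scrapers/CommunityScrapers}"
    log="${2:-/tmp/scraper-update.log}"
    if git -C "$dir" pull --ff-only >>"$log" 2>&1; then
        echo "$(date) scrapers updated" >>"$log"
    else
        echo "$(date) scraper update FAILED" >>"$log"
        return 1
    fi
}
```

Point the cron entry at a script containing this instead of the raw `git pull`, and check the log whenever scrapers look stale.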

### Auto-Update CommunityScrapers (Windows)

Create a scheduled task in Task Scheduler running:

```powershell
cd C:\Users\YourUser\.stash\scrapers\CommunityScrapers; git pull
```

---

## 9. Tips & Troubleshooting

### Scraper not appearing in Stash

- Go to **Settings → Metadata Providers → Scrapers** and click **Reload Scrapers**
- Check that the `.yml` scraper file is in a subdirectory of your scrapers folder
- Check Stash logs (**Settings → Logs**) for scraper loading errors

### Identify finds no matches

- Confirm phashes were generated (check scene details — phash should be populated)
- Confirm your StashDB/TPDB API keys are correctly entered and not expired
- The file may simply not be in either database — proceed to manual URL scraping

### Files not moving after marking as Organized

- Confirm **"Move files to organized folder"** is enabled in Settings → Library
- Confirm the organized folder path is set and the folder exists
- Check that Stash has write permissions to both source and destination

### Duplicate files

Run **Tasks → Clean → Find Duplicates** before organizing to avoid moving duplicates into your library. Stash uses phash to find visual duplicates even if filenames differ.

### Metadata keeps getting overwritten

In **Settings → Scraping**, set the **Scrape behavior** to `If not set` instead of `Always` to prevent already-populated fields from being overwritten during re-scrapes.

### Useful Stash Plugins

Install via **Settings → Plugins → Browse Available Plugins**:

| Plugin | Purpose |
|--------|---------|
| **Performer Image Cleanup** | Remove duplicate performer images |
| **Tag Graph** | Visualize tag relationships |
| **Duplicate Finder** | Advanced duplicate management |
| **Stats** | Library analytics dashboard |

---

## Quick Reference Checklist

Use this checklist every time you add new content:

```
[ ] Drop files into stash-incoming directory
[ ] Tasks → Scan
[ ] Tasks → Generate → Phashes + Checksums
[ ] Tasks → Identify (StashDB → TPDB)
[ ] Tasks → Auto Tag
[ ] Review unmatched scenes in Tagger View
[ ] Manually scrape remaining unmatched scenes by URL
[ ] Spot-check metadata on a sample of scenes
[ ] Bulk select reviewed scenes → Mark as Organized
[ ] Verify a few files moved and renamed correctly
[ ] Done ✓
```

---

*Last updated: February 2026 | Stash version compatibility: 0.25+*
*Community resources: [Stash Discord](https://discord.gg/2TsNFKt) | [GitHub](https://github.com/stashapp/stash) | [Wiki](https://github.com/stashapp/stash/wiki)*
3703
False Grimoire/Netgrimoire/Pocket/Deployment_Guide.md
Normal file
File diff suppressed because it is too large
54
False Grimoire/Netgrimoire/Pocket/Hardware.md
Normal file
@ -0,0 +1,54 @@
---
title: Pocket Grimoire - Hardware
description: Hardware for Pocket Grimoire
published: true
date: 2026-02-20T04:29:06.922Z
tags:
editor: markdown
dateCreated: 2026-01-28T23:07:03.685Z
---

# Hardware Inventory

## Core Compute
- Raspberry Pi 4 (8GB preferred)
- Passive or low-noise cooling case
- Retro Pi

## Storage
- SSD #1 – Vault (ZFS, encrypted)
- SSD #2 – Media (ZFS, encrypted)
- SSD #3 – Family Movies/TV Shows (ZFS, not encrypted)
- USB – For ISO images and emergency rebuilds
- USB – For emergency data transfer

## Networking
- GL.iNet Beryl AX (GL-MT3000)
- Short CAT5/6 Ethernet cable (router ↔ Pi)

## Power
- Anker Prime Charging Station, 200W 6-Port GaN Desktop Charger
- Short USB-C cables (router)
- Short USB-A to USB-C cable (Pi)
- 2x Short USB 3.0 cables (SSDs)
- Longer USB-C to USB-C (laptop power)
- Longer USB-C to USB-C (phone/tablet charger)

## Media Players
- 2x Onn 4K stream boxes w/power
- FireTV Stick w/power

## Cables
- 2x HDMI cables
- 1x Mini-HDMI to HDMI
- 1x HDMI extender

## Optional
- Speaker
- Portable retro game
- Universal TV remote
- Carry case for travel
- GoPro + extra batteries and desktop tripod
- Mini wireless keyboard
- 2x wireless controllers
863
False Grimoire/Netgrimoire/Pocket/ONN_Media_Streamer.md
Normal file
@ -0,0 +1,863 @@
---
title: Stream Box
description: Configure ONN Media Box
published: true
date: 2026-02-20T04:50:44.701Z
tags:
editor: markdown
dateCreated: 2026-02-20T04:50:34.384Z
---

# Onn 4K Streaming Box Setup Guide

**Complete configuration guide for Onn 4K streaming boxes used with Pocket Grimoire**

---

## Overview

This guide covers the complete setup of your Onn 4K streaming boxes for use with Pocket Grimoire, including:
- Initial device setup
- WiFi configuration (portapotty network)
- Required app installations (Jellyfin, StashApp, Netflix, YouTube TV)
- Connection to Pocket Grimoire services
- Troubleshooting common issues

**Network Configuration:**
- **WiFi SSID:** `portapotty` (GL.iNet Beryl AX travel router)
- **Connection:** All devices connect wirelessly to portapotty
- **Exception:** Raspberry Pi connects to the router via CAT5 Ethernet

---

## Hardware Information

### Onn 4K Streaming Box Specifications
- **Model:** Onn 4K Streaming Box (Walmart exclusive)
- **OS:** Android TV (Google TV interface)
- **CPU:** Amlogic S905Y4 quad-core
- **RAM:** 2GB
- **Storage:** 8GB internal
- **Video:** 4K HDR, Dolby Vision, Dolby Atmos
- **WiFi:** 802.11ac (WiFi 5) dual-band
- **Bluetooth:** 5.0
- **Ports:** HDMI 2.1, Micro-USB (power)
- **Remote:** Voice remote with Google Assistant

### What's in the Box
- Onn 4K streaming box
- Voice remote with batteries
- USB power adapter
- HDMI cable (short)
- Quick start guide

---

## Initial Setup

### First Power-On

1. **Connect to TV:**
   - Plug HDMI cable into Onn box
   - Connect other end to hotel TV HDMI port
   - Plug Micro-USB power into Onn box
   - Connect USB power adapter to wall or Anker Prime

2. **Power On:**
   - TV should auto-detect HDMI input
   - If not, use TV remote to select correct HDMI input
   - Onn box LED will light up (solid white when ready)
   - Wait for Google TV home screen

3. **Select Language:**
   - Use remote to select language (English)
   - Click OK

4. **Accessibility Options:**
   - Skip unless needed (click "Skip")

### WiFi Configuration

**Critical: Connect to portapotty network**

1. **WiFi Setup Screen:**
   - List of available networks will appear
   - Scroll to find `portapotty`
   - Select `portapotty`
   - Click "Connect"

2. **Enter Password:**
   - Enter WiFi password for portapotty network
   - Use on-screen keyboard
   - Click "Connect"
   - Wait for connection (should take 5-10 seconds)
   - "Connected" message will appear

3. **Verify Connection:**
   - Should show "portapotty" with signal strength
   - Should show "Connected" status

**Troubleshooting WiFi:**
- If portapotty doesn't appear: Ensure Beryl AX router is powered on
- If password fails: Double-check portapotty WiFi password
- If connection drops: Move closer to router
- Signal strength: Should be "Excellent" or "Good" in hotel room

### Google Account Setup

**Option A: Sign in with Google Account**
1. Select "Sign in"
2. Use phone to scan QR code or enter code
3. Follow prompts on phone
4. Account will sync to Onn box

**Option B: Set up without Google Account (Limited)**
1. Select "Skip"
2. Click "Skip" again to confirm
3. Some features will be limited (Play Store, purchases)
4. **Recommendation:** Use Option A for full functionality

**For Pocket Grimoire:**
- Need Google account for: Play Store (to install apps)
- StashApp requires sideloading (see separate section)

### Complete Initial Setup

1. **Google Services:**
   - Accept terms (or skip)
   - Location services: Your choice
   - Device name: Name it (e.g., "Onn Box 1", "Onn Box 2")

2. **Voice Match:**
   - Set up "Hey Google" voice commands (optional)
   - Can skip and set up later

3. **Apps to Install:**
   - Google will suggest popular apps
   - Skip for now (we'll install specific apps later)
   - Click "Next" or "Skip"

4. **Complete:**
   - Should arrive at Google TV home screen
   - Remote should control interface
   - Ready to install apps

---

## App Installations

### 1. Jellyfin for Android TV

**Install from Google Play Store:**

1. **Open Play Store:**
   - Press Home button on remote
   - Navigate to "Apps" tab at top
   - Select "Play Store"

2. **Search for Jellyfin:**
   - Click search icon (magnifying glass)
   - Type "Jellyfin" using on-screen keyboard
   - Select "Jellyfin for Android TV" from results
   - **Developer:** Jellyfin
   - **Note:** Choose "Jellyfin for Android TV", not regular Jellyfin

3. **Install:**
   - Click "Install"
   - Wait for download and installation (~30 seconds)
   - Click "Open" when complete

4. **Configure Jellyfin:**
   - Click "Connect to Server"
   - **Method 1 - Manual Entry:**
     - Click "Add server manually"
     - Host: `pocket-grimoire.local` or `10.0.0.10` (Pi's IP)
     - Port: `8096`
     - Click "Connect"
   - **Method 2 - Auto-Discovery (if available):**
     - Wait for Jellyfin to discover Pocket Grimoire
     - Select "Pocket Grimoire" from list
     - Click "Connect"

5. **Login:**
   - Enter username and password
   - Or select "Quick Connect" if configured
   - Click "Sign In"

6. **Verify:**
   - Should see Jellyfin home screen
   - Libraries (Movies, TV Shows) should appear
   - Test playing a video (should be direct play, no buffering)

**Jellyfin Settings (Optional but Recommended):**
- Settings → Playback
  - Video quality: Maximum
  - Allow direct play: ON
  - Allow direct stream: ON
  - Allow video transcoding: OFF (should be disabled on server already)
|
||||
|
||||
### 2. StashApp for Android TV

**Installation: Requires Sideloading (GitHub Release)**

StashApp is not available in the Play Store; it must be installed manually from an APK file.

#### Prerequisites

- USB drive (for APK transfer)
- Computer with internet access
- OR Android phone with file transfer capability

#### Method 1: USB Drive Installation (Recommended)

**On Your Computer:**

1. **Download StashApp APK:**
   - Open browser: https://github.com/damontecres/StashAppAndroidTV/releases
   - Find latest release (e.g., v1.x.x)
   - Download file: `stashapp-tv-release-vX.X.X.apk`
   - Save to USB drive

2. **Prepare USB Drive:**
   - Format as FAT32 or exFAT (if not already)
   - Copy APK to root of USB drive
   - Safely eject USB drive

**On Onn Box:**

3. **Enable Unknown Sources:**
   - Press Home button
   - Navigate to Settings (gear icon)
   - Select "Device Preferences"
   - Select "Security & Restrictions"
   - Enable "Unknown Sources"
   - Confirm warning (accept risk)

4. **Install File Manager (if needed):**
   - Open Play Store
   - Search "File Commander" or "X-plore File Manager"
   - Install one of these apps
   - Open the file manager app

5. **Connect USB Drive:**
   - Plug USB drive into Onn box USB port
   - **Note:** The Onn box only has Micro-USB (for power), so you need:
     - USB OTG adapter (Micro-USB to USB-A female)
     - OR transfer the APK via network/Bluetooth

**Alternative: Network Transfer**

Since the Onn box doesn't have easy USB access:

1. **Use Send Files to TV App:**
   - On Onn box: Install "Send Files to TV" from Play Store
   - On phone/computer: Install companion app
   - Transfer APK wirelessly
   - Open with package installer

2. **Or Use Cloud Storage:**
   - Upload APK to Google Drive
   - On Onn box: Install Google Drive app
   - Download APK from Drive
   - Open with package installer

#### Method 2: Direct Download on Onn Box (Easiest)

**On Onn Box:**

1. **Install Downloader App:**
   - Open Play Store
   - Search "Downloader" (by AFTVnews)
   - Install and open

2. **Download StashApp APK:**
   - In Downloader app, click URL field
   - Enter: `https://github.com/damontecres/StashAppAndroidTV/releases`
   - Navigate to latest release
   - Click APK download link
   - Save APK

3. **Install APK:**
   - Downloader will prompt to install after download
   - Click "Install"
   - Click "Done" when complete
   - APK will be installed
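
If you have a laptop with `adb` on the same network, a wireless install is a third option. This is a sketch, not part of the official StashApp instructions; it assumes Developer Options and network ADB debugging are enabled on the box, and the IP and filename below are illustrative:

```shell
# Hypothetical helper: sideload an APK over the network with adb.
# Assumes Developer Options + ADB (network) debugging are enabled on the
# Onn box and you know its IP (Settings -> Network & Internet).
install_apk_over_network() {
  box_ip=$1   # e.g. 192.168.8.101 (DHCP address on portapotty)
  apk=$2      # e.g. stashapp-tv-release.apk from the GitHub releases page
  # -r reinstalls over an existing copy, keeping app data
  adb connect "${box_ip}:5555" &&
    adb install -r "$apk" &&
    adb disconnect "${box_ip}:5555"
}

# Usage (from the same LAN as the box):
# install_apk_over_network 192.168.8.101 stashapp-tv-release.apk
```

This also makes future StashApp updates a one-line operation instead of repeating the Downloader flow.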

**Configure StashApp:**

1. **Open StashApp:**
   - Find in Apps list (may be under "See all apps")
   - Or search "Stash" in search bar

2. **Connect to Server:**
   - Enter server URL: `http://pocket-grimoire.local:9999`
   - Or use IP: `http://10.0.0.10:9999`
   - Enter API key (if required)
   - Click "Connect"

3. **Test Connection:**
   - Should load Stash interface
   - Browse library
   - Test playing a preview
   - Verify scene markers work

**StashApp Settings:**

- Video quality: Original (for direct play)
- Hardware acceleration: ON
- Cache previews: ON (if storage available)
### 3. Netflix

**Install from Google Play Store:**

1. **Open Play Store:**
   - Press Home button
   - Navigate to "Apps"
   - Select "Play Store"

2. **Search Netflix:**
   - Search bar → type "Netflix"
   - Select "Netflix" (official app)
   - Click "Install"
   - Wait for installation

3. **Open Netflix:**
   - Click "Open" after installation
   - Or find in Apps list

4. **Sign In:**
   - Enter Netflix email and password
   - Or scan QR code with phone
   - Select profile

5. **Test:**
   - Browse content
   - Play a video to verify streaming works
   - Check video quality (should be HD/4K)

**Netflix Settings:**

- Profile: Select your profile
- Video quality: High (auto)
- Subtitles/audio: Configure as preferred

### 4. YouTube TV

**Install from Google Play Store:**

1. **Open Play Store:**
   - Navigate to Play Store
   - Search "YouTube TV"

2. **Install:**
   - Select "YouTube TV" (official app)
   - Click "Install"
   - Wait for installation

3. **Sign In:**
   - Open YouTube TV
   - Sign in with Google account (YouTube TV subscription)
   - Or use TV code activation:
     - Visit tv.youtube.com/start on computer/phone
     - Enter code shown on TV
     - Sign in and authorize

4. **Test:**
   - Browse live TV channels
   - Test DVR recordings
   - Verify streaming quality

**YouTube TV Settings:**

- Live guide: Configure preferences
- DVR: Verify recordings accessible
- Picture quality: Auto or 4K (if available)

---

## Network Configuration Details

### portapotty WiFi Network (GL.iNet Beryl AX)

**Network Details:**

- **SSID:** `portapotty`
- **Frequency:** 2.4GHz + 5GHz (dual-band)
- **Security:** WPA2/WPA3
- **DHCP:** Enabled (automatic IP assignment)
- **Subnet:** 192.168.8.0/24 (default GL.iNet)
- **Router IP:** 192.168.8.1 (Beryl AX admin panel)
- **DNS:** Handled by Beryl AX (AdGuard Home)

**Devices on portapotty Network:**

- Raspberry Pi 4: Ethernet (CAT5) → 10.0.0.10 (static, or check DHCP)
- Onn Box 1: WiFi → 192.168.8.x (DHCP assigned)
- Onn Box 2: WiFi → 192.168.8.x (DHCP assigned)
- Laptop: WiFi → 192.168.8.x (DHCP assigned)
- Phone/tablet: WiFi → 192.168.8.x (DHCP assigned)

### Pocket Grimoire Service Addresses

**When connected to portapotty network:**

```
Jellyfin:      http://pocket-grimoire.local:8096
            or http://10.0.0.10:8096

Stash:         http://pocket-grimoire.local:9999
            or http://10.0.0.10:9999

Wiki.js:       http://pocket-grimoire.local:3000
            or http://10.0.0.10:3000

File Browser:  http://pocket-grimoire.local:8080
            or http://10.0.0.10:8080

Router Admin:  http://192.168.8.1
```

**If `.local` names don't resolve:**

- Use IP addresses directly (10.0.0.10)
- Check Beryl AX DNS settings
- Restart Onn box
---

## Configuration Checklist

### Pre-Deployment (At Home)

**Before traveling, complete these tasks:**

- [ ] Both Onn boxes powered on and tested
- [ ] Both connected to test WiFi network
- [ ] Google accounts signed in on both boxes
- [ ] All 4 apps installed on both boxes:
  - [ ] Jellyfin for Android TV
  - [ ] StashApp for Android TV (sideloaded)
  - [ ] Netflix
  - [ ] YouTube TV
- [ ] Jellyfin configured and tested (play test video)
- [ ] StashApp configured and tested (browse library)
- [ ] Netflix signed in (test streaming)
- [ ] YouTube TV signed in (test live TV)
- [ ] Both remotes have fresh batteries
- [ ] Both boxes labeled (Box 1, Box 2) or distinguishable

### Hotel Deployment

**Setup sequence at hotel:**

1. **Set Up the Beryl AX Router:**
   - Power on the Beryl AX
   - Connect to hotel WiFi (via the Beryl AX admin panel or phone app)
   - Verify internet connection
   - portapotty WiFi should be active

2. **Set Up Pocket Grimoire:**
   - Power on the Raspberry Pi
   - Connect via CAT5 to the Beryl AX
   - Wait 2-3 minutes for boot
   - SSH in and unlock ZFS (if needed)
   - Verify Docker containers are running

3. **Set Up Onn Box 1:**
   - Connect to TV HDMI port
   - Power on
   - Wait for boot (30 seconds)
   - Should auto-connect to portapotty
   - If not: Settings → Network → portapotty → Connect
   - Test Jellyfin (should connect automatically)
   - Test StashApp (should connect automatically)

4. **Set Up Onn Box 2 (if using):**
   - Connect to second TV or different HDMI port
   - Repeat setup steps above
   - Verify connection to portapotty

5. **Verify All Services:**
   - Open Jellyfin → Browse library → Play test video
   - Open StashApp → Browse library → Test preview
   - Open Netflix → Test streaming
   - Open YouTube TV → Test live channel

**Total setup time: 10-15 minutes**
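
From a laptop on portapotty, the service checks above can be scripted instead of clicked through. A minimal sketch, using the hostnames and ports from this guide (`curl` assumed available; adjust `HOST` if your Pi's address differs):

```shell
# Quick reachability check for the Pocket Grimoire services.
# Ports: 8096 Jellyfin, 9999 Stash, 3000 Wiki.js, 8080 File Browser.
HOST=${HOST:-10.0.0.10}

service_url() {              # service_url <port> -> full URL
  printf 'http://%s:%s' "$HOST" "$1"
}

check_services() {
  for port in 8096 9999 3000 8080; do
    # -m 3: give up after 3 seconds so a down service doesn't hang the loop
    if curl -fsS -m 3 -o /dev/null "$(service_url "$port")"; then
      echo "port $port: OK"
    else
      echo "port $port: UNREACHABLE"
    fi
  done
}

# Run after the Pi has booted and the containers are up:
# check_services
```

Any `UNREACHABLE` line points you straight at the relevant troubleshooting section below.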

---

## Troubleshooting

### WiFi Connection Issues

**Onn box won't connect to portapotty:**

1. **Verify Router is Online:**
   - Check Beryl AX power LED (should be solid)
   - Check Beryl AX WiFi LED (should be blinking/solid)
   - Use phone to verify portapotty network is visible

2. **Forget and Reconnect:**
   - Settings → Network & Internet
   - Select portapotty
   - Click "Forget network"
   - Scan again
   - Reconnect with password

3. **Check Router Settings:**
   - Access Beryl AX admin: http://192.168.8.1
   - Verify WiFi is enabled
   - Check if DHCP is active
   - Verify no MAC filtering enabled

4. **Restart Devices:**
   - Power cycle Onn box (unplug, wait 10 seconds, plug back in)
   - Restart Beryl AX router
   - Try connecting again

**Weak WiFi Signal:**

- Move Beryl AX closer to TV/Onn box
- Reduce obstacles between router and box
- Use 2.4GHz band instead of 5GHz (better range, slower speed)
- Check for interference (hotel WiFi channels)

### Jellyfin Connection Issues

**Can't connect to Jellyfin server:**

1. **Verify Server is Running:**
   - SSH into Pocket Grimoire
   - Run: `docker ps | grep jellyfin`
   - Should show `pocketgrimoire_jellyfin` running

2. **Check Network Connectivity:**
   - On Onn box, open browser app
   - Navigate to: `http://pocket-grimoire.local:8096`
   - Or try IP: `http://10.0.0.10:8096`
   - Should load Jellyfin web interface

3. **Reconnect Jellyfin App:**
   - Open Jellyfin app
   - Settings → Server
   - Delete existing server
   - Add server manually:
     - Host: `pocket-grimoire.local` or `10.0.0.10`
     - Port: `8096`
   - Connect and login

4. **Check Firewall:**
   - SSH into Pi
   - Verify port 8096 is open: `sudo netstat -tlnp | grep 8096`
   - Should show jellyfin listening
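
Jellyfin also exposes a plain health endpoint, which is a quick way to separate "server is down" from "app is misconfigured". A sketch, using the address from this guide:

```shell
# Prints "Healthy" when the Jellyfin server is up and answering.
# A timeout or "connection refused" instead points at the network
# or the container, not at the Android TV app's configuration.
jellyfin_health() {
  host=${1:-10.0.0.10}
  curl -fsS -m 3 "http://${host}:8096/health"
}

# jellyfin_health                       # default IP from this guide
# jellyfin_health pocket-grimoire.local # also tests .local resolution
```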

**Jellyfin Playback Issues:**

**Video won't play:**

- Check media is H.264/AAC (see encoding guide)
- Verify network bandwidth (should be strong WiFi)
- Try different video file
- Check Jellyfin logs: `docker logs pocketgrimoire_jellyfin`

**Video buffers/stutters:**

- Check WiFi signal strength (move router closer)
- Verify direct play (check playback info, should NOT say "transcoding")
- If transcoding occurs: Media is not properly encoded
- Check network activity: `ssh user@pocket-grimoire.local` then `iftop`

**Subtitles don't work:**

- Ensure subtitles are SRT format (not PGS/VobSub)
- External .srt files work best
- Embedded SRT in MKV also works

### StashApp Connection Issues

**Can't connect to Stash server:**

1. **Verify Stash is Running:**
   - SSH into Pocket Grimoire
   - Run: `docker ps | grep stash`
   - Should show `pocketgrimoire_stash` running

2. **Test Server Connection:**
   - Open browser on Onn box
   - Navigate to: `http://pocket-grimoire.local:9999`
   - Or try: `http://10.0.0.10:9999`
   - Should load Stash web interface

3. **Reconfigure StashApp:**
   - Open StashApp
   - Settings → Server
   - Remove existing server
   - Add server:
     - URL: `http://pocket-grimoire.local:9999`
     - Or: `http://10.0.0.10:9999`
   - Enter API key (if required)
   - Connect

4. **Check API Key:**
   - If StashApp requires API key
   - SSH into Pi: `cat /srv/vaultpg/stash/config/config.yml | grep api_key`
   - Or access Stash web UI → Settings → Security → API Key
   - Copy key into StashApp

**StashApp Crashes or Freezes:**

- Clear app cache: Settings → Apps → StashApp → Clear cache
- Restart Onn box
- Reinstall StashApp (download latest APK)
- Check Stash server logs: `docker logs pocketgrimoire_stash`

**Previews won't play:**

- Verify previews synced from Netgrimoire
- Check: `ssh user@pocket-grimoire.local`
- Run: `ls /srv/vaultpg/stash/generated/` (should show preview files)
- If empty: Sync hasn't completed, or previews not generated on Netgrimoire

### Netflix/YouTube TV Issues

**Netflix won't sign in:**

- Verify Netflix subscription is active
- Try signing in on phone/computer first
- Use "Sign in with code" option (visit netflix.com/tv8 on another device)
- Check internet connection (portapotty → hotel WiFi)

**YouTube TV won't play:**

- Verify YouTube TV subscription is active
- Check location restrictions (some content blocked outside home area)
- Try signing out and back in
- Verify internet connection speed

**Streaming quality poor:**

- Check WiFi signal strength
- Verify hotel internet speed (not throttled)
- Switch to lower quality in app settings temporarily
- Move router closer to TV

### General Onn Box Issues

**Box won't turn on:**

- Check power adapter is plugged in
- Check Micro-USB cable is secure
- Try different power source
- LED should light up (white when on)

**Remote not working:**

- Check batteries (replace if needed)
- Re-pair remote: Hold Back + Home for 5 seconds
- Check for obstructions between remote and box
- Try using Google Home app as remote backup

**Box is slow/laggy:**

- Clear cache: Settings → Storage → Cached data → Clear
- Uninstall unused apps
- Restart box: Settings → Device Preferences → About → Restart
- Factory reset (last resort)

**Apps keep crashing:**

- Clear app cache and data
- Uninstall and reinstall app
- Check for OS updates: Settings → Device Preferences → About → System update
- Factory reset if persistent

**No sound:**

- Check TV volume (not muted)
- Check HDMI connection (reseat cable)
- Settings → Display & Sound → Audio output → Test
- Try different HDMI port on TV
- Check if audio is set to "Auto" or "Stereo"

### DNS Resolution Issues

**`.local` addresses don't work (pocket-grimoire.local fails):**

1. **Use IP Address Instead:**
   - Replace `pocket-grimoire.local` with `10.0.0.10`
   - Example: `http://10.0.0.10:8096` for Jellyfin

2. **Check Pi's IP Address:**
   - SSH into Pi: `ip addr show eth0`
   - Look for inet address (e.g., 192.168.8.50)
   - Use this IP in apps instead of .local

3. **Check Beryl AX DNS:**
   - Access http://192.168.8.1
   - Check DNS settings
   - Verify AdGuard Home is running
   - Ensure mDNS/Bonjour reflection is enabled (if option available)

4. **Add Static DNS Entry:**
   - In Beryl AX admin panel
   - Add static DNS entry: pocket-grimoire → 10.0.0.10
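
You can test name resolution from the Pi itself (or any Linux machine on the network) before blaming the apps. A sketch, assuming the `avahi-utils` package for the mDNS path:

```shell
# Check whether .local (mDNS) resolution works on this network.
check_mdns() {
  name=${1:-pocket-grimoire.local}
  if command -v avahi-resolve >/dev/null 2>&1; then
    # Queries mDNS directly; prints "name <TAB> address" on success.
    avahi-resolve -n "$name"
  else
    # Fallback: getent goes through the system resolver (NSS), which
    # covers .local names when libnss-mdns is configured.
    getent hosts "$name"
  fi
}

# check_mdns     # defaults to pocket-grimoire.local
```

If this succeeds on a laptop but the Onn box still fails, the problem is on the box (Android TV's mDNS support varies), and the IP-address workaround above is the practical fix.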

---

## Advanced Configuration

### Setting Static IP for Raspberry Pi

**On Beryl AX router:**

1. Access admin panel: http://192.168.8.1
2. Navigate to Network → DHCP Server
3. Find Raspberry Pi in client list
4. Assign static IP: 10.0.0.10
5. Save and apply

**Or on Raspberry Pi directly:**

```bash
# Edit network config
sudo nano /etc/dhcpcd.conf

# Add at end:
interface eth0
static ip_address=10.0.0.10/24
static routers=192.168.8.1
static domain_name_servers=192.168.8.1
```
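
Note that `dhcpcd.conf` applies to older Raspberry Pi OS releases; newer images (Bookworm and later) manage the wired interface with NetworkManager instead. A rough equivalent of the settings above, assuming the default profile name (verify yours with `nmcli connection show`):

```shell
# NetworkManager equivalent of the dhcpcd.conf static assignment above.
# "Wired connection 1" is the usual default profile name; it may differ.
set_static_ip() {
  con=${1:-"Wired connection 1"}
  sudo nmcli connection modify "$con" \
    ipv4.addresses 10.0.0.10/24 \
    ipv4.gateway 192.168.8.1 \
    ipv4.dns 192.168.8.1 \
    ipv4.method manual
  # Re-activate the profile so the new address takes effect
  sudo nmcli connection up "$con"
}

# set_static_ip
```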

### Optimizing Video Playback

**Jellyfin Video Settings (on Onn box):**

- Settings → Playback
- Max streaming bitrate: Maximum (Auto)
- Video quality: Maximum
- Allow video playback that may require conversion: OFF
- Skip intro: ON (if desired)

**StashApp Video Settings:**

- Settings → Playback
- Video quality: Original
- Hardware acceleration: ON
- Buffer size: Large

### Remote Control Tips

**Voice Commands:**

- "Hey Google, open Jellyfin"
- "Hey Google, play [movie name] on Jellyfin"
- "Hey Google, pause"
- "Hey Google, turn off TV"

**Useful Remote Shortcuts:**

- Home button (twice): Recent apps
- Back button (hold): Return to home
- Play/Pause: Works in most video apps
- Voice button: Google Assistant

---

## App Locations

**After installation, find apps here:**

**Home Screen:**

- Netflix, YouTube TV usually appear automatically

**Apps Tab:**

- All installed apps listed alphabetically
- Jellyfin, StashApp will be here

**Quick Access:**

- Long-press Home → Add to Favorites
- Apps appear on home screen for quick access

---

## Maintenance

### Weekly (While Using)

- Check for app updates (Play Store → Updates)
- Clear cache if apps feel slow
- Verify WiFi connection strength

### Before Each Trip

- Test all apps at home
- Update apps if updates available
- Check remote batteries
- Verify all logins still active

### After Each Trip

- Check for OS updates
- Review installed apps (remove if unused)
- Clear cache to free storage

---

## Factory Reset (If Needed)

**When to factory reset:**

- Box is extremely slow
- Apps constantly crash
- Persistent connection issues
- Selling/giving away box

**How to factory reset:**

1. **Via Settings:**
   - Settings → Device Preferences
   - About → Factory Reset
   - Confirm reset
   - Wait for reboot (3-5 minutes)

2. **Via Recovery Mode:**
   - Power off box
   - Hold reset button (if present)
   - Power on while holding
   - Navigate with remote to "Factory Reset"
   - Confirm

**After reset:**

- Complete initial setup again (see beginning of guide)
- Reinstall all apps
- Reconfigure WiFi and services

---

## Quick Reference Card

**Essential Information:**

```
WiFi Network: portapotty
Router Admin: http://192.168.8.1

Pocket Grimoire Services:
- Jellyfin: http://pocket-grimoire.local:8096
- Stash:    http://pocket-grimoire.local:9999
- Wiki:     http://pocket-grimoire.local:3000

If .local fails, use IP: http://10.0.0.10:[PORT]

Apps Required:
✓ Jellyfin for Android TV (Play Store)
✓ StashApp for Android TV (Sideload APK)
✓ Netflix (Play Store)
✓ YouTube TV (Play Store)

Troubleshooting:
1. Restart Onn box
2. Check portapotty WiFi connection
3. Verify Pocket Grimoire is running (SSH check)
4. Use IP addresses instead of .local names
```

---

## Appendix: StashApp APK Sources

**Official GitHub Repository:**

- https://github.com/damontecres/StashAppAndroidTV
- Releases: https://github.com/damontecres/StashAppAndroidTV/releases
- Latest version: Check releases page

**Verification:**

- Download only from official GitHub releases
- Verify file integrity (check file size, release notes)
- Watch for malware warnings (false positives common with sideloaded APKs)

**Update Process:**

- Check GitHub for new releases periodically
- Download new APK
- Install over existing app (data preserved)
- Or uninstall and reinstall clean

---

*This guide was created for Onn 4K streaming box configuration with Pocket Grimoire. Keep updated as apps and configurations change.*
31
False Grimoire/Netgrimoire/Pocket/Software.md
Normal file
31
False Grimoire/Netgrimoire/Pocket/Software.md
Normal file
@@ -0,0 +1,31 @@
---
title: Pocket Grimoire Software
description:
published: true
date: 2026-02-20T04:30:28.681Z
tags:
editor: markdown
dateCreated: 2026-01-29T04:37:33.794Z
---

# Software Overview

Pocket Grimoire runs a minimal software stack focused on reliability and offline access.

## Host Services

- Linux (Raspberry Pi OS Lite or Ubuntu Server)
- ZFS
- NFS
- Cockpit
- systemd timers

## Containerized Services

- Wiki.js (read-only documentation mirror)
- PostgreSQL (Wiki.js backend)
- Optional utility containers (file browser, status page)
- Beszel
- Retro Web page

## External Services

- DNS + Ad blocking via Beryl AX
- VPN via WireGuard to Netgrimoire
1927
False Grimoire/Netgrimoire/Pocket/Stash_Integration.md
Normal file
1927
False Grimoire/Netgrimoire/Pocket/Stash_Integration.md
Normal file
File diff suppressed because it is too large
Load diff
122
False Grimoire/Netgrimoire/Service_Document_Template.md
Normal file
122
False Grimoire/Netgrimoire/Service_Document_Template.md
Normal file
@@ -0,0 +1,122 @@
---
title: Service Documentation Template
description: Describe the service
published: true
date: 2026-04-10T13:23:01.021Z
tags:
editor: markdown
dateCreated: 2026-02-03T02:57:07.462Z
---

# Service Documentation Template - 1

Use this template for **every new service** documented under `services/`.

Copy this file, rename it, and fill in all sections.

---

# Service Name

## Overview

Brief description of what this service does and why it exists.

---

## Architecture

Describe how the service is deployed.

Include:
- Host(s)
- Containers
- External dependencies
- Network exposure

---

## Volumes & Data

List all persistent data locations.

```
/path/on/host → purpose
```

Include:
- What data is stored
- Whether it is critical
- Where backups occur

---

## Configuration

Document:
- Environment variables (non-secret)
- Configuration files
- Important defaults

**Secrets must not be stored here.** Reference where they live instead.

---

## Authentication & Access

Describe:
- Authentication method
- Local access
- Break-glass access (if applicable)

---

## Backups

Explain:
- What is backed up
- How often
- Using what tool
- Where backups are stored

Link to infrastructure backup docs if applicable.

---

## Restore Procedure

Step-by-step recovery instructions.

```bash
# example commands
```

This section must be usable when the service is broken.

---

## Monitoring & Health

Describe:
- How service health is checked
- Logs of interest
- Alerting (if any)

---

## Common Failures

List known failure modes and fixes.

---

## Diagrams

Embed architecture diagrams here.

```markdown
![diagram](/path/to/diagram.png)
```

---

## Notes

Anything that does not fit elsewhere.
503
False Grimoire/Netgrimoire/Services/Gremlin/Netgrimoire_Agent.md
Normal file
503
False Grimoire/Netgrimoire/Services/Gremlin/Netgrimoire_Agent.md
Normal file
@@ -0,0 +1,503 @@
---
title: Ollama with agent
description: The smart home reference
published: true
date: 2026-04-02T21:11:09.564Z
tags:
editor: markdown
dateCreated: 2026-02-18T22:14:41.533Z
---

# AI Automation Stack - Ollama + n8n + Open WebUI

## Overview

This stack provides a complete self-hosted AI automation solution for homelab infrastructure management, documentation generation, and intelligent monitoring. The system consists of four core components that work together to provide AI-powered workflows and knowledge management.

## Architecture

```
┌─────────────────────────────────────────────┐
│            AI Automation Stack              │
│                                             │
│  Open WebUI ────────┐                       │
│  (Chat Interface)   │                       │
│       │             │                       │
│       ▼             ▼                       │
│    Ollama ◄──── Qdrant                      │
│  (LLM Runtime)  (Vector DB)                 │
│       ▲                                     │
│       │                                     │
│      n8n                                    │
│  (Workflow Engine)                          │
│       │                                     │
│       ▼                                     │
│  Forgejo │ Wiki.js │ Monitoring             │
└─────────────────────────────────────────────┘
```

## Components

### Ollama

- **Purpose**: Local LLM runtime engine
- **Port**: 11434
- **Resource Usage**: 4-6GB RAM (depending on model)
- **Recommended Models**:
  - `qwen2.5-coder:7b` - Code analysis and documentation
  - `llama3.2:3b` - General queries and chat
  - `phi3:mini` - Lightweight alternative

### Open WebUI

- **Purpose**: User-friendly chat interface with built-in RAG (Retrieval Augmented Generation)
- **Port**: 3000
- **Features**:
  - Document ingestion from Wiki.js
  - Conversational interface for querying documentation
  - RAG pipeline for context-aware responses
  - Multi-model support
- **Access**: `http://your-server-ip:3000`

### Qdrant

- **Purpose**: Vector database for semantic search and RAG
- **Ports**: 6333 (HTTP), 6334 (gRPC)
- **Resource Usage**: ~1GB RAM
- **Function**: Stores embeddings of your documentation, code, and markdown files

### n8n

- **Purpose**: Workflow automation and orchestration
- **Port**: 5678
- **Default Credentials**:
  - Username: `admin`
  - Password: `change-this-password` (⚠️ **Change this immediately**)
- **Access**: `http://your-server-ip:5678`

## Installation

### Prerequisites

- Docker and Docker Compose installed
- 16GB RAM minimum (8GB available for the stack)
- 50GB disk space for models and data

### Deployment Steps

1. **Create directory structure**:
   ```bash
   mkdir -p ~/ai-stack/{n8n/workflows}
   cd ~/ai-stack
   ```

2. **Download the compose file**:
   ```bash
   # Place the ai-stack-compose.yml in this directory
   wget [your-internal-url]/ai-stack-compose.yml
   ```

3. **Configure environment variables**:
   ```bash
   # Edit the compose file and change:
   # - WEBUI_SECRET_KEY
   # - N8N_BASIC_AUTH_PASSWORD
   # - WEBHOOK_URL (use your server's IP)
   # - GENERIC_TIMEZONE
   nano ai-stack-compose.yml
   ```

4. **Start the stack**:
   ```bash
   docker-compose -f ai-stack-compose.yml up -d
   ```

5. **Pull Ollama models**:
   ```bash
   docker exec -it ollama ollama pull qwen2.5-coder:7b
   docker exec -it ollama ollama pull llama3.2:3b
   ```

6. **Verify services**:
   ```bash
   docker-compose -f ai-stack-compose.yml ps
   ```
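
Beyond `ps`, Ollama's HTTP API gives a quick functional smoke test on port 11434 (as configured above). A sketch using two small helpers:

```shell
# List the models Ollama has pulled; an empty "models" array means the
# pull step above hasn't completed yet.
ollama_models() {
  curl -s http://localhost:11434/api/tags
}

# Minimal one-shot generation request against a pulled model.
# $1 = model name (must match one you pulled), $2 = prompt text.
ollama_ask() {
  curl -s http://localhost:11434/api/generate \
    -d "{\"model\": \"$1\", \"prompt\": \"$2\", \"stream\": false}"
}

# ollama_models
# ollama_ask llama3.2:3b "Say hello"
```

If `ollama_models` answers but `ollama_ask` hangs, the model is still loading into RAM; the first request after a pull is always the slowest.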
|
||||
|
||||
## Configuration
|
||||
|
||||
### Open WebUI Setup
|
||||
|
||||
1. Navigate to `http://your-server-ip:3000`
|
||||
2. Create your admin account (first user becomes admin)
|
||||
3. Go to **Settings → Connections** and verify Ollama connection
|
||||
4. Configure Qdrant:
|
||||
- Host: `qdrant`
|
||||
- Port: `6333`
|
||||
|

### Setting Up RAG for Wiki.js

1. In Open WebUI, go to **Workspace → Knowledge**
2. Create a new collection: "Homelab Documentation"
3. Add sources:
   - **URL Crawl**: Enter your Wiki.js base URL
   - **File Upload**: Upload markdown files from repositories
4. Process and index the documents

### n8n Initial Configuration

1. Navigate to `http://your-server-ip:5678`
2. Log in with credentials from the docker-compose file
3. Import starter workflows from the `/n8n/workflows/` directory

## Use Cases

### 1. Automated Documentation Generation

**Workflow**: Forgejo webhook → n8n → Ollama → Wiki.js

When code is pushed to Forgejo:
1. n8n receives the webhook from Forgejo
2. Extracts changed files and repo context
3. Sends to Ollama with the prompt: "Generate documentation for this code"
4. Posts generated docs to Wiki.js via API

**Example n8n Workflow**:
```
Webhook Trigger
  → HTTP Request (Forgejo API - get file contents)
  → Ollama LLM Node (generate docs)
  → HTTP Request (Wiki.js API - create/update page)
  → Send notification (completion)
```

### 2. Docker-Compose Standardization

**Workflow**: Repository scan → compliance check → issue creation

1. n8n runs on schedule (daily/weekly)
2. Queries Forgejo API for all repositories
3. Scans for `docker-compose.yml` files
4. Compares against template standards stored in Qdrant
5. Generates compliance report with Ollama
6. Creates Forgejo issues for non-compliant repos

### 3. Intelligent Alert Processing

**Workflow**: Monitoring alert → AI analysis → smart routing

1. Beszel/Uptime Kuma sends webhook to n8n
2. n8n queries historical data and context
3. Ollama analyzes:
   - Is this expected? (scheduled backup, known maintenance)
   - Severity level
   - Recommended action
4. Routes appropriately:
   - Critical: Immediate notification (Telegram/email)
   - Warning: Log and monitor
   - Info: Suppress (expected behavior)

### 4. Email Monitoring & Triage

**Workflow**: IMAP polling → AI classification → action routing

1. n8n polls email inbox every 5 minutes
2. Filters for keywords: "alert", "critical", "down", "failed"
3. Ollama classifies urgency and determines if actionable
4. Routes based on classification:
   - Urgent: Forward to you immediately
   - Informational: Daily digest
   - Spam: Archive

## Common Workflows

### Example: Repository Documentation Generator

```javascript
// n8n workflow nodes:

1. Schedule Trigger (daily at 2 AM)
   ↓
2. HTTP Request - Forgejo API
   URL: http://forgejo:3000/api/v1/repos/search
   Method: GET
   ↓
3. Loop Over Items (each repo)
   ↓
4. HTTP Request - Get repo files
   URL: {{$node["Forgejo API"].json["clone_url"]}}/contents
   ↓
5. Filter - Find docker-compose.yml and README.md
   ↓
6. Ollama Node
   Model: qwen2.5-coder:7b
   Prompt: "Analyze this docker-compose file and generate comprehensive
            documentation including: purpose, services, ports, volumes,
            environment variables, and setup instructions."
   ↓
7. HTTP Request - Wiki.js API
   URL: http://wikijs:3000/graphql
   Method: POST
   Body: {mutation: createPage(...)}
   ↓
8. Send Notification
   Service: Telegram/Email
   Message: "Documentation updated for {{repo_name}}"
```

### Example: Alert Intelligence Workflow

```javascript
// n8n workflow nodes:

1. Webhook Trigger
   Path: /webhook/monitoring-alert
   ↓
2. Function Node - Parse Alert Data
   JavaScript: Extract service, metric, value, timestamp
   ↓
3. HTTP Request - Query Historical Data
   URL: http://beszel:8090/api/metrics/history
   ↓
4. Ollama Node
   Model: llama3.2:3b
   Context: Your knowledge base in Qdrant
   Prompt: "Alert: {{alert_message}}
            Historical context: {{historical_data}}
            Is this expected behavior?
            What's the severity?
            What action should be taken?"
   ↓
5. Switch Node - Route by Severity
   Conditions:
   - Critical: Route to immediate notification
   - Warning: Route to monitoring channel
   - Info: Route to log only
   ↓
6a. Send Telegram (Critical path)
6b. Post to Slack (Warning path)
6c. Write to Log (Info path)
```

## Maintenance

### Model Management

```bash
# List installed models
docker exec -it ollama ollama list

# Update a model
docker exec -it ollama ollama pull qwen2.5-coder:7b

# Remove unused models
docker exec -it ollama ollama rm old-model:tag
```

### Backup Important Data

```bash
# Backup Qdrant vector database
docker-compose -f ai-stack-compose.yml stop qdrant
tar -czf qdrant-backup-$(date +%Y%m%d).tar.gz ./qdrant_data/
docker-compose -f ai-stack-compose.yml start qdrant

# Backup n8n workflows (automatic to ./n8n/workflows)
tar -czf n8n-backup-$(date +%Y%m%d).tar.gz ./n8n_data/

# Backup Open WebUI data
tar -czf openwebui-backup-$(date +%Y%m%d).tar.gz ./open_webui_data/
```

### Log Monitoring

```bash
# View all stack logs
docker-compose -f ai-stack-compose.yml logs -f

# View specific service
docker logs -f ollama
docker logs -f n8n
docker logs -f open-webui
```

### Resource Monitoring

```bash
# Check resource usage
docker stats

# Expected usage:
# - ollama: 4-6GB RAM (with model loaded)
# - open-webui: ~500MB RAM
# - qdrant: ~1GB RAM
# - n8n: ~200MB RAM
```

## Troubleshooting

### Ollama Not Responding

```bash
# Check if Ollama is running
docker logs ollama

# Restart Ollama
docker restart ollama

# Test Ollama API
curl http://localhost:11434/api/tags
```

### Open WebUI Can't Connect to Ollama

1. Check network connectivity:
```bash
docker exec -it open-webui ping ollama
```
2. Verify the Ollama URL in Open WebUI settings
3. Restart both containers:
```bash
docker restart ollama open-webui
```

### n8n Workflows Failing

1. Check n8n logs:
```bash
docker logs n8n
```
2. Verify webhook URLs are accessible
3. Test the Ollama connection from n8n:
   - Create a test workflow
   - Add an Ollama node
   - Run an execution

### Qdrant Connection Issues

```bash
# Check Qdrant health
curl http://localhost:6333/health

# View Qdrant logs
docker logs qdrant

# Restart if needed
docker restart qdrant
```

## Performance Optimization

### Model Selection by Use Case

- **Quick queries, chat**: `llama3.2:3b` or `phi3:mini` (fastest)
- **Code analysis**: `qwen2.5-coder:7b` or `deepseek-coder:6.7b`
- **Complex reasoning**: `mistral:7b` or `llama3.1:8b`

### n8n Workflow Optimization

- Use **Wait** nodes to batch operations
- Enable **Execute Once** for loops to reduce memory
- Store large data in temporary files instead of node output
- Use **Split In Batches** for processing large datasets

### Qdrant Performance

- Default settings are optimized for homelab use
- Increase `collection_shards` if indexing >100,000 documents
- Enable quantization for large collections

## Security Considerations

### Change Default Credentials

```bash
# Generate secure password
openssl rand -base64 32

# Update in docker-compose.yml:
# - WEBUI_SECRET_KEY
# - N8N_BASIC_AUTH_PASSWORD
```

### Network Isolation

Consider using a reverse proxy (Traefik, Nginx Proxy Manager) with authentication:
- Limit external access to Open WebUI only
- Keep n8n, Ollama, Qdrant on the internal network
- Use VPN for remote access

### API Security

- Use strong API tokens for Wiki.js and Forgejo integrations
- Rotate credentials periodically
- Audit n8n workflow permissions

## Integration Points

### Connecting to Existing Services

**Uptime Kuma**:
- Configure webhook alerts → n8n webhook URL
- Path: `http://your-server-ip:5678/webhook/uptime-kuma`

**Beszel**:
- Use Shoutrrr webhook format
- URL: `http://your-server-ip:5678/webhook/beszel`

**Forgejo**:
- Repository webhooks for push events
- URL: `http://your-server-ip:5678/webhook/forgejo-push`
- Enable in repo settings → Webhooks

**Wiki.js**:
- GraphQL API endpoint: `http://wikijs:3000/graphql`
- Create API key in Wiki.js admin panel
- Store in n8n credentials
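
The webhook endpoints listed above can be smoke-tested from any host before pointing the real services at them. A sketch using the paths named in this section (the JSON payload is an illustrative placeholder; n8n only answers on a path once a matching Webhook-node workflow is active):

```shell
# Send a fake event to each n8n webhook path listed above and print
# the HTTP status code (404 usually means no active workflow on that path).
N8N="http://your-server-ip:5678"

for path in uptime-kuma beszel forgejo-push; do
  echo "POST $N8N/webhook/$path"
  curl -sS -o /dev/null -w '%{http_code}\n' \
    -X POST "$N8N/webhook/$path" \
    -H "Content-Type: application/json" \
    -d '{"test": true, "source": "manual-smoke-test"}' \
    || echo "unreachable"
done
```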

## Advanced Features

### Creating Custom n8n Nodes

For frequently used Ollama prompts, create custom nodes:

1. Go to n8n → Settings → Community Nodes
2. Install `n8n-nodes-ollama-advanced` if available
3. Or create Function nodes with reusable code

### Training Custom Models

While Ollama doesn't support fine-tuning directly, you can:
1. Use RAG with your specific documentation
2. Create detailed system prompts in n8n
3. Store organization-specific context in Qdrant

### Multi-Agent Workflows

Chain multiple Ollama calls for complex tasks:
```
Planning Agent → Execution Agent → Review Agent → Output
```

Example: Code refactoring
1. Planning: Analyze code and create refactoring plan
2. Execution: Generate refactored code
3. Review: Check for errors and improvements
4. Output: Create pull request with changes
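
The agent chain above can be sketched with plain curl against Ollama's `/api/generate` endpoint, feeding one agent's output into the next prompt. The model name and prompts here are illustrative:

```shell
#!/usr/bin/env bash
# Two-stage agent chain against Ollama's /api/generate API (sketch).
set -euo pipefail

OLLAMA="http://localhost:11434"
MODEL="llama3.2:3b"   # any installed model

ask() {
  # Send one non-streaming generate request and print the response text.
  curl -sS "$OLLAMA/api/generate" -d "$(python3 -c '
import json, sys
print(json.dumps({"model": sys.argv[1], "prompt": sys.argv[2], "stream": False}))
' "$MODEL" "$1")" | python3 -c 'import json,sys; print(json.load(sys.stdin)["response"])'
}

# Only run the live chain if Ollama is actually reachable.
if ! curl -fsS --max-time 2 "$OLLAMA/api/tags" >/dev/null; then
  echo "Ollama not reachable at $OLLAMA; skipping live run" >&2
else
  CODE='def add(a,b): return a+b'
  plan=$(ask "Create a short refactoring plan for this code: $CODE")
  result=$(ask "Apply this plan to the code and output only the new code.
Plan: $plan
Code: $CODE")
  echo "$result"
fi
```

A real review stage would add a third `ask` call that critiques `$result` before it is committed anywhere.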
||||
## Resources
|
||||
|
||||
- **Ollama Documentation**: https://ollama.ai/docs
|
||||
- **Open WebUI Docs**: https://docs.openwebui.com
|
||||
- **n8n Documentation**: https://docs.n8n.io
|
||||
- **Qdrant Docs**: https://qdrant.tech/documentation
|
||||
|
||||
## Support
|
||||
|
||||
For issues or questions:
|
||||
1. Check container logs first
|
||||
2. Review this documentation
|
||||
3. Search n8n community forums
|
||||
4. Check Ollama Discord/GitHub issues
|
||||
|
||||
---
|
||||
|
||||
**Last Updated**: {{current_date}}
|
||||
**Maintained By**: Homelab Admin
|
||||
**Status**: Production
|
||||
False Grimoire/Netgrimoire/Services/Gremlin/Readme.md (new file, 361 lines)
---
title: Readme
description: Readme file generated by AI
published: true
date: 2026-04-02T21:09:39.376Z
tags:
editor: markdown
dateCreated: 2026-03-05T02:27:57.522Z
---

# Homelab AI & Monitoring Stack - Deployment Guide

This repository contains everything you need to deploy a complete AI-powered homelab monitoring and automation stack.

## What's Included

### 📦 Docker Compose Files
1. **ai-stack-compose.yml** - Main AI automation stack (Ollama, Open WebUI, n8n, Qdrant)
2. **librenms-compose.yml** - Network monitoring system (LibreNMS + MariaDB + Redis)

### 📚 Wiki.js Documentation
1. **wiki-ai-stack.md** - Complete documentation for the AI stack
2. **wiki-librenms.md** - Complete documentation for LibreNMS

## Quick Start

### Prerequisites
- Docker and Docker Compose installed
- 16GB RAM minimum (8GB+ available)
- 70GB disk space (50GB for AI stack + 20GB for LibreNMS)
- Network devices with SNMP enabled (for LibreNMS)

### Step 1: Deploy AI Stack

```bash
# Create directory
mkdir -p ~/homelab/ai-stack
cd ~/homelab/ai-stack

# Copy ai-stack-compose.yml to this directory

# Edit environment variables
nano ai-stack-compose.yml
# Change:
# - WEBUI_SECRET_KEY (generate random string)
# - N8N_BASIC_AUTH_PASSWORD (use strong password)
# - WEBHOOK_URL (your server IP)
# - GENERIC_TIMEZONE (your timezone)

# Start the stack
docker-compose -f ai-stack-compose.yml up -d

# Pull AI models
docker exec -it ollama ollama pull qwen2.5-coder:7b
docker exec -it ollama ollama pull llama3.2:3b

# Verify all services are running
docker-compose -f ai-stack-compose.yml ps
```

**Access points:**
- Open WebUI: http://your-server-ip:3000
- n8n: http://your-server-ip:5678
- Ollama API: http://your-server-ip:11434

### Step 2: Deploy LibreNMS

```bash
# Create directory
mkdir -p ~/homelab/librenms
cd ~/homelab/librenms

# Copy librenms-compose.yml to this directory

# Edit environment variables
nano librenms-compose.yml
# Change:
# - DB_PASSWORD (use strong password)
# - MYSQL_ROOT_PASSWORD (use strong password)
# - BASE_URL (your server IP)
# - TZ (your timezone)

# Start LibreNMS
docker-compose -f librenms-compose.yml up -d

# Wait for initialization (2-3 minutes)
docker logs -f librenms

# Access web interface
# http://your-server-ip:8000
# Default login: librenms/librenms
# CHANGE PASSWORD IMMEDIATELY!
```

### Step 3: Import Documentation to Wiki.js

```bash
# Option 1: Via Wiki.js Web Interface
# 1. Login to Wiki.js
# 2. Create new page: "AI Stack Documentation"
# 3. Copy contents of wiki-ai-stack.md
# 4. Create new page: "LibreNMS Documentation"
# 5. Copy contents of wiki-librenms.md

# Option 2: Via Wiki.js API (if configured)
# Use the provided markdown files with the Wiki.js GraphQL API
```
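
Option 2 can be sketched with curl against the Wiki.js GraphQL endpoint. The mutation below follows Wiki.js 2.x's `pages.create` schema, but verify the field names against your instance's GraphQL playground; the API key, endpoint, and page path are placeholders you must replace:

```shell
#!/usr/bin/env bash
# Create a Wiki.js page via GraphQL (sketch - verify the schema on your instance).
set -euo pipefail

WIKI_URL="http://wikijs:3000/graphql"   # assumed endpoint from this guide
API_KEY="REPLACE_WITH_WIKIJS_API_KEY"   # created in the Wiki.js admin panel

# Heredoc read returns nonzero at EOF; "|| true" keeps set -e happy.
read -r -d '' QUERY <<'EOF' || true
mutation {
  pages {
    create(
      content: "Imported by script"
      description: "Auto-imported page"
      editor: "markdown"
      isPublished: true
      isPrivate: false
      locale: "en"
      path: "homelab/import-test"
      tags: ["automation"]
      title: "Import Test"
    ) {
      responseResult { succeeded message }
    }
  }
}
EOF

# Wrap the raw query string in the {"query": ...} envelope GraphQL expects.
payload=$(python3 -c 'import json,sys; print(json.dumps({"query": sys.stdin.read()}))' <<<"$QUERY")

curl -sS "$WIKI_URL" \
  -H "Authorization: Bearer $API_KEY" \
  -H "Content-Type: application/json" \
  -d "$payload" \
  || echo "request failed (is Wiki.js reachable?)"
```

To import the two markdown files from this repo, loop over them and substitute each file's contents into `content:` (escaping via the same JSON wrapper).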

## Initial Configuration

### Open WebUI Setup
1. Navigate to http://your-server-ip:3000
2. Create admin account (first user becomes admin)
3. Verify Ollama connection in Settings
4. Configure Qdrant connection (host: qdrant, port: 6333)
5. Import your Wiki.js documentation for RAG

### n8n Setup
1. Navigate to http://your-server-ip:5678
2. Login with credentials from the compose file
3. Create first workflow (see documentation for examples)
4. Configure Ollama node connection

### LibreNMS Setup
1. Navigate to http://your-server-ip:8000
2. Login and CHANGE PASSWORD
3. Add your first network device
4. Configure alert transport (webhook to n8n)
5. Generate API token for n8n integration

## Integrations

### Connect Existing Services

**Uptime Kuma → n8n:**
- Configure webhook in Uptime Kuma notification settings
- URL: http://your-server-ip:5678/webhook/uptime-kuma

**Beszel → n8n:**
- Use Shoutrrr webhook format
- URL: http://your-server-ip:5678/webhook/beszel

**Forgejo → n8n:**
- Add webhook in repository settings
- URL: http://your-server-ip:5678/webhook/forgejo-push
- Events: Push, Pull Request

**LibreNMS → n8n:**
- Alerts → Alert Transports → Add Webhook
- URL: http://your-server-ip:5678/webhook/librenms-alert

## Resource Usage

Expected memory usage with all services running:

| Service | Memory |
|---------|--------|
| Ollama (with model loaded) | 4-6GB |
| Open WebUI | 500MB |
| Qdrant | 1GB |
| n8n | 200MB |
| LibreNMS | 300-500MB |
| MariaDB | 500MB-1GB |
| Redis | 50-100MB |
| **Total** | **~7-10GB** |

Remaining ~6-9GB for other services and the system.

## Example Workflows

### 1. Intelligent Alert Processing
```
Monitoring Alert → n8n webhook
  → Query historical data
  → Ollama analysis (Is this expected? Severity? Action needed?)
  → Route based on AI decision
    → Critical: Immediate notification
    → Warning: Log and monitor
    → Info: Suppress
```

### 2. Automated Documentation
```
Code Push to Forgejo → n8n webhook
  → Get changed files
  → Ollama generates documentation
  → Post to Wiki.js via API
  → Notify team
```

### 3. Docker-Compose Standardization
```
n8n scheduled workflow (daily)
  → Scan all Forgejo repos
  → Find docker-compose.yml files
  → Compare against template (stored in Qdrant)
  → Ollama generates compliance report
  → Create Forgejo issues for non-compliant repos
```

## Backup Strategy

### AI Stack Backup
```bash
# Weekly backup
cd ~/homelab/ai-stack
docker-compose -f ai-stack-compose.yml stop qdrant
tar -czf ai-stack-backup-$(date +%Y%m%d).tar.gz \
  qdrant_data/ n8n_data/ open_webui_data/
docker-compose -f ai-stack-compose.yml start qdrant
```

### LibreNMS Backup
```bash
# Weekly backup
cd ~/homelab/librenms
docker exec librenms_db mysqldump -u root -p librenms > \
  librenms-db-backup-$(date +%Y%m%d).sql
tar -czf librenms-data-backup-$(date +%Y%m%d).tar.gz librenms_data/
```

### Automated Backup via n8n
Create a scheduled workflow that:
1. Runs weekly (Sunday 2 AM)
2. Executes backup commands
3. Uploads to external storage (optional)
4. Verifies backup integrity
5. Sends notification with results
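
The steps above can be wrapped in one script that an n8n Execute Command node (or cron) runs directly. A sketch, assuming the directory layout from Step 1 and an assumed `~/backups` destination:

```shell
#!/usr/bin/env bash
# Weekly AI stack backup, suitable for cron or an n8n Execute Command node.
set -euo pipefail

STACK_DIR="$HOME/homelab/ai-stack"
DEST="$HOME/backups"
STAMP=$(date +%Y%m%d)
ARCHIVE="$DEST/ai-stack-backup-$STAMP.tar.gz"

# Bail out gracefully if run on a host without the stack.
[ -d "$STACK_DIR" ] || { echo "stack dir not found: $STACK_DIR"; exit 0; }

mkdir -p "$DEST"
cd "$STACK_DIR"

docker-compose -f ai-stack-compose.yml stop qdrant
tar -czf "$ARCHIVE" qdrant_data/ n8n_data/ open_webui_data/
docker-compose -f ai-stack-compose.yml start qdrant

# Integrity check: a corrupt archive makes gzip -t fail and aborts the script.
gzip -t "$ARCHIVE"
echo "Backup OK: $ARCHIVE ($(du -h "$ARCHIVE" | cut -f1))"
```

The final `echo` line is what the n8n workflow can forward as its success notification.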

## Troubleshooting

### Services Won't Start
```bash
# Check logs
docker-compose -f ai-stack-compose.yml logs [service-name]

# Common issues:
# - Port conflicts (check with: netstat -tulpn)
# - Insufficient memory (check with: free -h)
# - Permissions on volume directories
```

### Ollama Not Responding
```bash
# Restart Ollama
docker restart ollama

# Test API
curl http://localhost:11434/api/tags

# If still failing, check if model is loaded
docker exec -it ollama ollama list
```

### Can't Connect to Services
```bash
# Check if services are running
docker ps

# Check network connectivity
docker network ls
docker network inspect [network-name]

# Verify firewall isn't blocking ports
sudo ufw status
```

## Security Recommendations

1. **Change all default passwords immediately**
2. **Use strong, unique passwords for:**
   - n8n basic auth
   - LibreNMS admin user
   - Database passwords
   - Open WebUI admin account
3. **Network security:**
   - Use reverse proxy (Traefik, Nginx Proxy Manager)
   - Enable SSL/TLS certificates
   - Restrict access to trusted networks
   - Consider VPN for remote access
4. **API security:**
   - Generate strong API tokens
   - Rotate credentials periodically
   - Use read-only tokens when possible

## Maintenance Schedule

**Daily (automated):**
- Service polling and monitoring
- Alert processing
- Automatic discovery

**Weekly:**
- Review alerts and adjust thresholds
- Check service logs for errors
- Verify backups completed successfully

**Monthly:**
- Database optimization
- Review disk space usage
- Update containers (test in dev first)
- Audit user accounts and permissions

**Quarterly:**
- Full backup verification and restoration test
- Security audit
- Review and update documentation
- Clean up old data

## Getting Help

### Documentation
- Check the Wiki.js pages for detailed information
- Review container logs for error messages
- Search community forums for similar issues

### Useful Commands
```bash
# View all logs
docker-compose logs -f

# View specific service
docker logs -f [container-name]

# Restart single service
docker restart [container-name]

# Restart entire stack
docker-compose -f [compose-file] restart

# Update containers
docker-compose -f [compose-file] pull
docker-compose -f [compose-file] up -d
```

## Next Steps

1. ✅ Deploy AI stack
2. ✅ Deploy LibreNMS
3. ✅ Import documentation to Wiki.js
4. ⬜ Configure integrations with existing services
5. ⬜ Create first n8n workflow
6. ⬜ Add network devices to LibreNMS
7. ⬜ Set up automated backups
8. ⬜ Create custom dashboards

## Support

For issues specific to:
- **Ollama**: https://github.com/ollama/ollama/issues
- **Open WebUI**: https://github.com/open-webui/open-webui/issues
- **n8n**: https://community.n8n.io
- **LibreNMS**: https://community.librenms.org

---

**Last Updated:** February 2025
**Maintained By:** Homelab Admin
**License:** MIT (for custom configurations)
File diff suppressed because one or more lines are too long

False Grimoire/Netgrimoire/Services/Immich/Convert_Immich.md (new file, 128 lines)
---
title: Immich on ZFS
description: Moving Immich to its own ZFS dataset
published: true
date: 2026-02-20T04:13:02.502Z
tags: service zfs immich dataset
editor: markdown
dateCreated: 2026-02-06T15:57:04.261Z
---

# Moving Immich to a ZFS Dataset

## Overview
This guide covers moving an existing Immich installation to its own ZFS dataset to enable `zfs send` backups.

## Prerequisites
- ZFS pool mounted at `/srv/vault`
- Existing Immich installation at `/srv/vault/immich`
- Immich running via Docker Compose

## Steps

### 1. Stop Immich Services
```bash
cd /srv/vault/immich  # or wherever your docker-compose.yml is
docker compose down
```

### 2. Create the New Dataset
```bash
sudo zfs create vault/immich
```

### 3. Move Existing Data Temporarily
```bash
sudo mv /srv/vault/immich /srv/vault/immich_old
```

### 4. Set Mountpoint and Mount Dataset
```bash
sudo zfs set mountpoint=/srv/vault/immich vault/immich
sudo zfs mount vault/immich
```

### 5. Copy Data to New Dataset
```bash
sudo rsync -avxHAX /srv/vault/immich_old/ /srv/vault/immich/
```

Flags preserve permissions, ownership, and special attributes.

### 6. Verify Data Copy
```bash
sudo du -sh /srv/vault/immich_old
sudo du -sh /srv/vault/immich
```

Sizes should match closely.

### 7. Start Immich
```bash
cd /srv/vault/immich
docker compose up -d
```

### 8. Test and Clean Up
Verify everything works, then remove old data:
```bash
sudo rm -rf /srv/vault/immich_old
```

## ZFS Dataset Properties

### Recommended Settings
```bash
# Compression - helps with photos and database
sudo zfs set compression=lz4 vault/immich

# Record size - balance for mixed workload
sudo zfs set recordsize=128K vault/immich

# Better database performance
sudo zfs set primarycache=all vault/immich
sudo zfs set atime=off vault/immich
```

### Property Explanations
- **compression=lz4**: Fast, low CPU overhead, works well for both photos and database
- **recordsize=128K**: Good compromise between database (8K blocks) and photos (larger files)
- **atime=off**: Disables access time updates, reduces unnecessary writes
- **primarycache=all**: Keeps both metadata and data in ARC cache (default)

## Backup with ZFS Send/Receive

### Create Snapshot
```bash
zfs snapshot vault/immich@backup-$(date +%Y%m%d)
```

### Send to Remote Server
```bash
zfs send vault/immich@backup-$(date +%Y%m%d) | ssh backup-server zfs receive tank/backups/immich
```

### Incremental Backups
```bash
# After first full backup
zfs snapshot vault/immich@backup-$(date +%Y%m%d)
zfs send -i vault/immich@previous-snapshot vault/immich@backup-$(date +%Y%m%d) | \
  ssh backup-server zfs receive tank/backups/immich
```
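
The full/incremental logic above can be wrapped in a small script that records the last sent snapshot, so cron can run it unattended. A sketch; the `backup-server` host and `tank/backups/immich` target follow the examples above, while the state-file location is an assumption:

```shell
#!/usr/bin/env bash
# Incremental zfs send wrapper (sketch). Sends a full stream the first
# time, then -i increments from the last successfully sent snapshot.
set -euo pipefail

# Bail out gracefully on hosts without ZFS tooling.
command -v zfs >/dev/null || { echo "zfs not available"; exit 0; }

DATASET="vault/immich"
TARGET="backup-server"               # ssh host from the examples above
REMOTE_DS="tank/backups/immich"
STATE="/var/lib/immich-backup.last"  # assumed location for the state file

NEW="${DATASET}@backup-$(date +%Y%m%d)"
zfs snapshot "$NEW"

if [ -s "$STATE" ]; then
  PREV=$(cat "$STATE")
  zfs send -i "$PREV" "$NEW" | ssh "$TARGET" zfs receive "$REMOTE_DS"
else
  zfs send "$NEW" | ssh "$TARGET" zfs receive "$REMOTE_DS"
fi

# Only record the snapshot name after the send succeeded (set -e guards this).
echo "$NEW" > "$STATE"
```

Old local snapshots still need pruning separately (`zfs destroy`), since the receiver needs the snapshot named in `$STATE` to exist on both sides for the next increment.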

## Optional: Separate Datasets for Database and Photos

For optimal performance, split into separate datasets:
```bash
sudo zfs create vault/immich/database
sudo zfs create vault/immich/photos

# Database optimized
sudo zfs set recordsize=16K vault/immich/database
sudo zfs set logbias=latency vault/immich/database

# Photos optimized
sudo zfs set recordsize=1M vault/immich/photos
```

Then update your Docker Compose volume mounts accordingly.
---
title: Integrating MXRoute with MailCow
description:
published: true
date: 2026-02-25T21:04:37.135Z
tags:
editor: markdown
dateCreated: 2026-02-25T19:22:31.514Z
---

# MXRoute — Master Configuration Reference

## Overview

MXRoute serves two roles in Netgrimoire mail infrastructure:

- **Inbound gateway** — MX records for all domains point to MXRoute's commercial IPs, solving residential AT&T IP filtering by banks and financial institutions. MXRoute receives mail and forwards to Mailcow via per-address forwarders.
- **Outbound relay** — Mailcow sends all outbound mail through MXRoute via sender-dependent transports for improved deliverability.

**Mail flow:**

```
Inbound:  Internet → MXRoute (commercial IP) → Mailcow (192.168.5.16)
Outbound: Mailcow (192.168.5.16) → MXRoute SMTP relay → Internet
```

**Mailcow host:** 192.168.5.16
**MXRoute control panel:** confirm server hostname from MXRoute welcome email (e.g. `arrow.mxrouting.net`)
**MXRoute SMTP relay:** confirm from welcome email (e.g. `smtp.mxroute.com:587`)

---

## Architecture — Why Two Domains Per Hosted Domain

MXRoute forwarders require a valid destination email address. Forwarding `user@domain.com` back to `user@domain.com` creates a mail loop because MXRoute would look up the MX for `domain.com` and find itself. The solution is a `mail.domain.com` subdomain with its own MX record pointing directly to Mailcow. MXRoute forwards to `user@mail.domain.com`, Mailcow accepts and delivers, and an alias domain maps `@domain.com` back so users only ever see `@domain.com`.

```
domain.com       MX → MXRoute       (public-facing, receives from internet)
mail.domain.com  MX → 192.168.5.16  (internal, MXRoute forwards here)
```

---

## MXRoute Control Panel

**Login:** confirm URL from MXRoute welcome email
**Interface:** MXRoute 4.0 (new UI — not old DirectAdmin)

### Creating a Forwarder

1. Go to **Forwarders**
2. Click **Create New Forwarder**
3. Set **Forwarder Name:** `username` (domain shown automatically)
4. Set **Destination Type:** `Forward to Email(s)`
5. Set **Recipients:** `username@mail.domain.com`
6. Click **Create Forwarder**

> The Recipients field accepts multiple addresses, comma- or newline-separated.

---

## Mailcow Configuration

### Adding a New Domain (One-Time Per Domain)

1. **Mail Setup → Domains → Add domain**
   - Domain: `mail.domain.com` (the subdomain Mailcow owns)
   - Leave relay settings as default

2. **Mail Setup → Alias Domains → Add alias domain**
   - Alias Domain: `domain.com`
   - Target Domain: `mail.domain.com`
   - This makes Mailcow accept and deliver mail for `@domain.com` to `@mail.domain.com` mailboxes

3. **Configuration → ARC/DKIM Keys**
   - Select domain `mail.domain.com`
   - Selector: `mailcow`
   - Key length: 2048
   - Generate and copy TXT record for DNS

4. **Configuration → Extra Postfix configuration → extra.cf**

```
# Trust MXRoute forwarding IPs — prevents SPF scoring on forwarded mail
mynetworks = 127.0.0.1/8 [::1]/128 192.168.5.0/24 69.167.160.0/19 198.54.120.0/22
```

Restart affected containers after saving.

### Adding a New Mailbox

1. **Mail Setup → Mailboxes → Add mailbox**
   - Username: `user`
   - Domain: `mail.domain.com`

2. **MXRoute control panel → Forwarders → Create New Forwarder**
   - Forwarder: `user@domain.com`
   - Destination: `user@mail.domain.com`

### Outbound Relay — Sender-Dependent Transports

One transport entry per domain. **Configuration → Routing → Sender-Dependent Transports**

| Domain | Relay Host | Username | Password |
|--------|-----------|----------|----------|
| pncharris.com | `[smtp.mxroute.com]:587` | relay@pncharris.com | H@rv3yD)G123 |
| wasted-bandwidth.net | `[smtp.mxroute.com]:587` | relay@wasted-bandwidth.net | dZ4yLYznVvgSJtqWZJFA |
| netgrimoire.com | `[smtp.mxroute.com]:587` | relay@netgrimoire.com | TVGCnJp9SxRbWU8EhkMw |
| florosafd.org | `[smtp.mxroute.com]:587` | relay@florosafd.org | 2Fe8XMyaeh6Z5dvdHYdq |
| gnarlypandaproductions.com | `[smtp.mxroute.com]:587` | relay@gnarlypandaproductions.com | vG5ZsUQhRWD2UyzLPsqA |

> Confirm SMTP relay hostname from MXRoute welcome email — substitute the actual hostname for `smtp.mxroute.com` if different.
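
Each relay account can be smoke-tested from the Mailcow host with `swaks`, a standard SMTP testing tool. A sketch — the relay hostname is the assumed one from above, and the recipient address and password are placeholders:

```shell
# Test authenticated submission through the MXRoute relay with swaks.
# Substitute the real relay hostname, password, and recipient before running.
command -v swaks >/dev/null || { echo "install swaks first (e.g. apt install swaks)"; exit 0; }

swaks --server smtp.mxroute.com --port 587 \
      --tls \
      --auth LOGIN \
      --auth-user relay@pncharris.com \
      --auth-password 'REPLACE_ME' \
      --from relay@pncharris.com \
      --to someone@example.com \
      --header "Subject: MXRoute relay test" \
      --body "Relay test from Mailcow host"
```

A clean `250` at the end of the transcript confirms the transport credentials in the table above are accepted.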
### Email Client Settings (All Domains)
|
||||
|
||||
| Setting | Value |
|
||||
|---------|-------|
|
||||
| IMAP server | `mail.domain.com` |
|
||||
| IMAP port | `993` (SSL/TLS) |
|
||||
| SMTP server | `mail.domain.com` |
|
||||
| SMTP port | `465` (SSL/TLS) |
|
||||
| Username | `user@domain.com` |
|
||||
|
||||
> Users log in with `@domain.com`. Mailcow resolves to the internal `@mail.domain.com` mailbox via alias domain — transparent to the user.
|
||||
|
||||
---
|
||||
|
||||
## DNS Reference — All Domains
|
||||
|
||||
### DNS Pattern (Apply to Every Domain)
|
||||
|
||||
Two sets of MX records are required — one for the public domain (pointing to MXRoute) and one for the mail subdomain (pointing directly to Mailcow).

| Type | Host | Value | Notes |
|------|------|-------|-------|
| A | `mail` | `YOUR_ATT_MAIL_IP` | Mailcow server — MXRoute forwards here |
| MX | `@` | MXRoute primary (priority 10) | From MXRoute welcome email |
| MX | `@` | MXRoute secondary (priority 20) | From MXRoute welcome email |
| MX | `mail` | `mail.domain.com` (priority 10) | Mailcow handles subdomain directly |
| CNAME | `imap` | `mail.domain.com` | Client autoconfiguration |
| CNAME | `smtp` | `mail.domain.com` | Client autoconfiguration |
| CNAME | `webmail` | `mail.domain.com` | Roundcube access |
| CNAME | `autodiscover` | `mail.domain.com` | Outlook autodiscover |
| CNAME | `autoconfig` | `mail.domain.com` | Thunderbird autoconfig |
| TXT | `@` | `v=spf1 ip4:YOUR_ATT_MAIL_IP include:mxroute.com -all` | SPF — both Mailcow direct and MXRoute relay |
| TXT | `mail` | `v=spf1 ip4:YOUR_ATT_MAIL_IP -all` | SPF for subdomain — Mailcow direct only |
| TXT | `_dmarc` | `v=DMARC1; p=reject; rua=mailto:admin@netgrimoire.com` | DMARC enforcement |
| TXT | `mailcow._domainkey.mail` | *(generated in Mailcow ARC/DKIM Keys)* | Mailcow DKIM selector |
| TXT | `x._domainkey` | *(from MXRoute control panel)* | MXRoute DKIM selector — confirm actual selector name |

---

### pncharris.com

| Type | Host | Value |
|------|------|-------|
| A | `mail` | YOUR_ATT_MAIL_IP |
| MX | `@` | MXRoute primary (priority 10) |
| MX | `@` | MXRoute secondary (priority 20) |
| MX | `mail` | `mail.pncharris.com` (priority 10) |
| CNAME | `imap` | `mail.pncharris.com` |
| CNAME | `smtp` | `mail.pncharris.com` |
| CNAME | `webmail` | `mail.pncharris.com` |
| CNAME | `autodiscover` | `mail.pncharris.com` |
| CNAME | `autoconfig` | `mail.pncharris.com` |
| TXT | `@` | `v=spf1 ip4:YOUR_ATT_MAIL_IP include:mxroute.com -all` |
| TXT | `mail` | `v=spf1 ip4:YOUR_ATT_MAIL_IP -all` |
| TXT | `_dmarc` | `v=DMARC1; p=reject; rua=mailto:admin@netgrimoire.com` |
| TXT | `mailcow._domainkey.mail` | *(from Mailcow ARC/DKIM Keys for mail.pncharris.com)* |
| TXT | `x._domainkey` | *(from MXRoute control panel)* |

**Mailcow domains:** `mail.pncharris.com` (primary), `pncharris.com` (alias domain → mail.pncharris.com)

**Relay credentials:**

| Account | Password | Notes |
|---------|----------|-------|
| relay@pncharris.com | H@rv3yD)G123 | Current relay account |
| forwarder@pncharris.com | *(see password history below)* | Legacy account |
| passer@pncharris.com | bBJtPhrGkHvvhxhukkae | Current |
| kylr pncharris | -,68,incTeR | |
| G4@rlyf1ng3r | *(Feb 14)* | |

**passer@pncharris.com password history** (most recent last):

- !5!,_\*zDyLEhhR4
- sh7dXWnTPqbkDGsTcwtn
- MY3V8p69b2HYksygxhXX
- RS6U2GU6rcYe3THKKgYx
- yzqNysrd73yzWptVEZ5H (current)

---

### wasted-bandwidth.net

| Type | Host | Value |
|------|------|-------|
| A | `mail` | YOUR_ATT_MAIL_IP |
| MX | `@` | MXRoute primary (priority 10) |
| MX | `@` | MXRoute secondary (priority 20) |
| MX | `mail` | `mail.wasted-bandwidth.net` (priority 10) |
| CNAME | `imap` | `mail.wasted-bandwidth.net` |
| CNAME | `smtp` | `mail.wasted-bandwidth.net` |
| CNAME | `webmail` | `mail.wasted-bandwidth.net` |
| CNAME | `autodiscover` | `mail.wasted-bandwidth.net` |
| CNAME | `autoconfig` | `mail.wasted-bandwidth.net` |
| TXT | `@` | `v=spf1 ip4:YOUR_ATT_MAIL_IP include:mxroute.com -all` |
| TXT | `mail` | `v=spf1 ip4:YOUR_ATT_MAIL_IP -all` |
| TXT | `_dmarc` | `v=DMARC1; p=reject; rua=mailto:admin@netgrimoire.com` |
| TXT | `mailcow._domainkey.mail` | *(from Mailcow ARC/DKIM Keys for mail.wasted-bandwidth.net)* |
| TXT | `x._domainkey` | *(from MXRoute control panel)* |

**Mailcow domains:** `mail.wasted-bandwidth.net` (primary), `wasted-bandwidth.net` (alias domain)

**Relay credentials:**

| Account | Password |
|---------|----------|
| relay@wasted-bandwidth.net | dZ4yLYznVvgSJtqWZJFA |

---

### netgrimoire.com

| Type | Host | Value |
|------|------|-------|
| A | `mail` | YOUR_ATT_MAIL_IP |
| MX | `@` | MXRoute primary (priority 10) |
| MX | `@` | MXRoute secondary (priority 20) |
| MX | `mail` | `mail.netgrimoire.com` (priority 10) |
| CNAME | `imap` | `mail.netgrimoire.com` |
| CNAME | `smtp` | `mail.netgrimoire.com` |
| CNAME | `webmail` | `mail.netgrimoire.com` |
| CNAME | `autodiscover` | `mail.netgrimoire.com` |
| CNAME | `autoconfig` | `mail.netgrimoire.com` |
| TXT | `@` | `v=spf1 ip4:YOUR_ATT_MAIL_IP include:mxroute.com -all` |
| TXT | `mail` | `v=spf1 ip4:YOUR_ATT_MAIL_IP -all` |
| TXT | `_dmarc` | `v=DMARC1; p=reject; rua=mailto:admin@netgrimoire.com` |
| TXT | `mailcow._domainkey.mail` | *(from Mailcow ARC/DKIM Keys for mail.netgrimoire.com)* |
| TXT | `x._domainkey` | *(from MXRoute control panel)* |

**Mailcow domains:** `mail.netgrimoire.com` (primary), `netgrimoire.com` (alias domain)

**Relay credentials:**

| Account | Password |
|---------|----------|
| relay@netgrimoire.com | TVGCnJp9SxRbWU8EhkMw |

---

### florosafd.org

| Type | Host | Value |
|------|------|-------|
| A | `mail` | YOUR_ATT_MAIL_IP |
| MX | `@` | MXRoute primary (priority 10) |
| MX | `@` | MXRoute secondary (priority 20) |
| MX | `mail` | `mail.florosafd.org` (priority 10) |
| CNAME | `imap` | `mail.florosafd.org` |
| CNAME | `smtp` | `mail.florosafd.org` |
| CNAME | `webmail` | `mail.florosafd.org` |
| CNAME | `autodiscover` | `mail.florosafd.org` |
| CNAME | `autoconfig` | `mail.florosafd.org` |
| TXT | `@` | `v=spf1 ip4:YOUR_ATT_MAIL_IP include:mxroute.com -all` |
| TXT | `mail` | `v=spf1 ip4:YOUR_ATT_MAIL_IP -all` |
| TXT | `_dmarc` | `v=DMARC1; p=reject; rua=mailto:admin@netgrimoire.com` |
| TXT | `mailcow._domainkey.mail` | *(from Mailcow ARC/DKIM Keys for mail.florosafd.org)* |
| TXT | `x._domainkey` | *(from MXRoute control panel)* |

**Mailcow domains:** `mail.florosafd.org` (primary), `florosafd.org` (alias domain)

**Relay credentials:**

| Account | Password |
|---------|----------|
| relay@florosafd.org | 2Fe8XMyaeh6Z5dvdHYdq |

---

### gnarlypandaproductions.com

| Type | Host | Value |
|------|------|-------|
| A | `mail` | YOUR_ATT_MAIL_IP |
| MX | `@` | MXRoute primary (priority 10) |
| MX | `@` | MXRoute secondary (priority 20) |
| MX | `mail` | `mail.gnarlypandaproductions.com` (priority 10) |
| CNAME | `imap` | `mail.gnarlypandaproductions.com` |
| CNAME | `smtp` | `mail.gnarlypandaproductions.com` |
| CNAME | `webmail` | `mail.gnarlypandaproductions.com` |
| CNAME | `roundcube` | `roundcube.netgrimoire.com` |
| CNAME | `autodiscover` | `mail.gnarlypandaproductions.com` |
| CNAME | `autoconfig` | `mail.gnarlypandaproductions.com` |
| TXT | `@` | `v=spf1 ip4:YOUR_ATT_MAIL_IP include:mxroute.com -all` |
| TXT | `mail` | `v=spf1 ip4:YOUR_ATT_MAIL_IP -all` |
| TXT | `_dmarc` | `v=DMARC1; p=reject; rua=mailto:admin@gnarlypandaproductions.com` |
| TXT | `mailcow._domainkey.mail` | *(from Mailcow ARC/DKIM Keys for mail.gnarlypandaproductions.com)* |
| TXT | `default._domainkey` | `v=DKIM1; t=s; p=MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA3D3vyPoBHB4eMSMq8HygVWHzYbketRX4yjk9wV4bdaar0/c89dK230FMOW6zVXEsY1sXKFk1kBxerHVw0wY8qnQyooHgINEQcEXrtB/x93Sl/cqBQXk+PHOIOymQwgni8WCUhCSnvunxXK8qX5f9J56qzd0/wpY2WSEHho+XrnQjc+c7HMvkcC3+nKJe59ZNgvQW/Y9B/L6zFDjAp+QOUYp9wwX4L+j1T4fQSygYxAJZ0aIoR8FsbOuXc38pht99HyUnYwH08HoK7xv3DL2BrVo3KVZ7xMe2S4YMxd1HkJz2evbV/ziNsJcKW/le3fFS7mza09yJXDLDcLOKLXbYUQIDAQAB` |
| TXT | `x._domainkey` | *(from MXRoute control panel — confirm actual selector)* |

**Mailcow domains:** `mail.gnarlypandaproductions.com` (primary), `gnarlypandaproductions.com` (alias domain)

**Relay credentials:**

| Account | Password |
|---------|----------|
| relay@gnarlypandaproductions.com | vG5ZsUQhRWD2UyzLPsqA |

||||
---
|
||||
|
||||
### nucking-futz.com
|
||||
|
||||
New domain — see [Mail Setup — nucking-futz.com](./mail-setup-nucking-futz) for full setup guide.
|
||||
|
||||
| Type | Host | Value |
|
||||
|------|------|-------|
|
||||
| A | `mail` | YOUR_ATT_MAIL_IP |
|
||||
| MX | `@` | MXRoute primary (priority 10) |
|
||||
| MX | `@` | MXRoute secondary (priority 20) |
|
||||
| MX | `mail` | `mail.nucking-futz.com` (priority 10) |
|
||||
| CNAME | `imap` | `mail.nucking-futz.com` |
|
||||
| CNAME | `smtp` | `mail.nucking-futz.com` |
|
||||
| CNAME | `webmail` | `mail.nucking-futz.com` |
|
||||
| CNAME | `autodiscover` | `mail.nucking-futz.com` |
|
||||
| CNAME | `autoconfig` | `mail.nucking-futz.com` |
|
||||
| TXT | `@` | `v=spf1 ip4:YOUR_ATT_MAIL_IP include:mxroute.com -all` |
|
||||
| TXT | `mail` | `v=spf1 ip4:YOUR_ATT_MAIL_IP -all` |
|
||||
| TXT | `_dmarc` | `v=DMARC1; p=reject; rua=mailto:admin@netgrimoire.com` |
|
||||
| TXT | `mailcow._domainkey.mail` | *(from Mailcow ARC/DKIM Keys for mail.nucking-futz.com)* |
|
||||
| TXT | `x._domainkey` | *(from MXRoute control panel)* |
|
||||
|
||||
**Mailcow domains:** `mail.nucking-futz.com` (primary), `nucking-futz.com` (alias domain)
|
||||
|
||||
**Relay credentials:**
|
||||
|
||||
| Account | Password |
|
||||
|---------|----------|
|
||||
| relay@nucking-futz.com | *(set during MXRoute domain creation)* |
|
||||
|
||||
---
|
||||
|
||||
## Adding a New Domain — Checklist
|
||||
|
||||
Use this checklist every time a new domain is added to the stack.
|
||||
|
||||
**DNS (at registrar):**
|
||||
- [ ] A record: `mail.newdomain.com` → YOUR_ATT_MAIL_IP
|
||||
- [ ] MX records: `@` → MXRoute servers
|
||||
- [ ] MX record: `mail` → `mail.newdomain.com`
|
||||
- [ ] CNAME records: imap, smtp, webmail, autodiscover, autoconfig
|
||||
- [ ] SPF TXT: `@` — includes both ATT IP and `include:mxroute.com`
|
||||
- [ ] SPF TXT: `mail` — ATT IP only
|
||||
- [ ] DMARC TXT: `_dmarc`
|
||||
- [ ] DKIM TXT: `mailcow._domainkey.mail` — after generating in Mailcow
|
||||
- [ ] DKIM TXT: `x._domainkey` — after retrieving from MXRoute
|
||||
|
||||
**Mailcow:**
|
||||
- [ ] Add domain: `mail.newdomain.com`
|
||||
- [ ] Add alias domain: `newdomain.com` → `mail.newdomain.com`
|
||||
- [ ] Generate DKIM key (selector: `mailcow`) for `mail.newdomain.com`
|
||||
- [ ] Add sender-dependent transport for `newdomain.com`
|
||||
- [ ] Add sender-dependent transport for `mail.newdomain.com`
|
||||
- [ ] Create mailboxes as `user@mail.newdomain.com`
|
||||
|
||||
**MXRoute:**
|
||||
- [ ] Add domain in control panel
|
||||
- [ ] Create forwarder for each mailbox: `user@newdomain.com` → `user@mail.newdomain.com`
|
||||
- [ ] Retrieve DKIM key for DNS
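
The mailbox/forwarder pairing can be sketched as a quick mapping — a hypothetical helper, not an MXRoute API (forwarders are still created by hand in the control panel):

```python
def forwarder_map(users: list[str], domain: str) -> dict[str, str]:
    """For each user, pair the public address (the MXRoute forwarder
    source) with the internal Mailcow mailbox it must forward to.
    Illustrative only -- use it to double-check the two lists match."""
    return {f"{u}@{domain}": f"{u}@mail.{domain}" for u in users}
```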

---

## Troubleshooting

### Mail not delivering inbound (not reaching Mailcow)

- Check that the MX records for `@` point to the MXRoute servers: `dig MX domain.com +short`
- Check that the MX record for the `mail` subdomain points to Mailcow: `dig MX mail.domain.com +short`
- Verify an MXRoute forwarder exists for the address in the control panel
- Check Mailcow logs: **Logs → Postfix** — look for the delivery attempt and any rejection reason
- Verify the MXRoute IP ranges are in the Mailcow `extra.cf` trusted networks

### Mail not delivering inbound (banks / financial institutions)

- This is the residential AT&T IP problem — confirm the MX records point to MXRoute, not directly to your IP
- Run `dig MX domain.com +short` — it should show the MXRoute servers, not your IP
- If MX still points to your ATT IP, update DNS and wait for propagation

### Outbound mail rejected or going to spam

- Verify the sender-dependent transport is configured for the domain in Mailcow
- Check that the relay credentials are current in the transport entry
- Run an SPF check: `dig TXT domain.com +short` — confirm `include:mxroute.com` is present
- Send a test to check-auth@verifier.port25.com for a full SPF/DKIM/DMARC report
- Run through https://mail-tester.com for a deliverability score
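
The SPF string itself can be sanity-checked offline with simple token matching (this is not a full RFC 7208 evaluator — it only confirms the two things outbound-via-relay delivery depends on):

```python
def spf_has_mxroute(spf_record: str) -> bool:
    """Return True if the record declares v=spf1 and includes the
    MXRoute relay. Token matching only; no DNS lookups, no macro or
    redirect handling."""
    tokens = spf_record.split()
    return bool(tokens) and tokens[0] == "v=spf1" and "include:mxroute.com" in tokens
```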

### DKIM verification failing

- Confirm both selectors are published in DNS:
  - `dig TXT mailcow._domainkey.mail.domain.com +short`
  - `dig TXT x._domainkey.domain.com +short` (substitute actual MXRoute selector)
- Allow up to 48 hours for DNS propagation after adding records
- Verify selector names match exactly what Mailcow and MXRoute are using to sign
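
Once the TXT record comes back from `dig`, its shape can be sanity-checked like this (structure check only — it does not validate the key or any signature):

```python
def dkim_record_ok(txt: str) -> bool:
    """Sanity-check a DKIM TXT record: semicolon-separated tag=value
    pairs, with v=DKIM1 and a non-empty p= (public key) tag."""
    tags = {}
    for part in txt.split(";"):
        part = part.strip()
        if "=" in part:
            # partition on the first '=' so base64 padding in the
            # key value is preserved
            k, _, v = part.partition("=")
            tags[k.strip()] = v.strip()
    return tags.get("v") == "DKIM1" and bool(tags.get("p"))
```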

### DMARC failures

- SPF and DKIM must both pass and align with the From: domain
- Check DMARC reports sent to `admin@netgrimoire.com` — use [Postmark DMARC](https://dmarc.postmarkapp.com/) or [dmarcian.com](https://dmarcian.com) to parse raw XML reports
- Common cause: outbound mail going through MXRoute but `include:mxroute.com` missing from SPF

### Forwarded mail getting spam-scored

- Confirm MXRoute IP ranges are in Mailcow `extra.cf` mynetworks
- Check that Mailcow trusted networks were saved and containers restarted
- Verify SRS is working: in Roundcube open a forwarded message → More → View Source → `Return-Path` should begin with `SRS0=`
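
The same SRS check can be scripted against a saved raw message using the standard library (the sample header below is illustrative, not a captured message):

```python
import email


def has_srs_return_path(raw_message: str) -> bool:
    """Return True if the message's Return-Path shows an SRS-rewritten
    envelope sender (SRS0= or SRS1=), as expected for mail forwarded
    through a relay with SRS enabled."""
    msg = email.message_from_string(raw_message)
    rp = (msg.get("Return-Path") or "").strip().strip("<>")
    return rp.startswith(("SRS0=", "SRS1="))


# Illustrative raw header only -- not a real captured message:
sample = (
    "Return-Path: <SRS0=abcd=XY=example.org=alice@mail.pncharris.com>\n"
    "From: alice@example.org\n"
    "Subject: test\n\n"
    "body\n"
)
```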

### New mailbox not receiving mail

- Two steps are required — confirm both were done:
  1. Mailbox created in Mailcow as `user@mail.domain.com`
  2. Forwarder created in MXRoute as `user@domain.com` → `user@mail.domain.com`
- If the MXRoute forwarder is missing, inbound mail silently goes nowhere

---

## Related Documentation

- [MailCow Configuration](./mailcow)
- [MailCow Security Hardening](./mailcow-security-hardening)
- [Mail Setup — nucking-futz.com](./mail-setup-nucking-futz)
- [OPNsense Firewall](./opnsense-firewall) — ATT_Mail static IP allocation