New Grimoire
parent 77d589a13d, commit cc574f8aed
157 changed files with 29420 additions and 0 deletions

Green-Grimoire/Library/Stash-Management.md (new file, 453 lines)
---
title: Stashapp Workflow
description: Automated library management with StashDB, ThePornDB, and CommunityScrapers
published: true
date: 2026-02-20T04:25:56.467Z
tags:
editor: markdown
dateCreated: 2026-02-18T13:08:53.604Z
---
# StashApp: Automated Library Management with Community Scrapers

> **Goal:** Automatically identify, tag, rename, and organize your media library with minimal manual intervention using StashDB, ThePornDB, and the CommunityScrapers repository.

---

## Table of Contents

1. [Prerequisites](#1-prerequisites)
2. [Installing CommunityScrapers](#2-installing-communityscrapers)
3. [Configuring Metadata Providers](#3-configuring-metadata-providers)
   - [StashDB](#31-stashdb)
   - [ThePornDB (TPDB)](#32-theporndb-tpdb)
4. [Configuring Your Library](#4-configuring-your-library)
5. [Automated File Naming & Moving](#5-automated-file-naming--moving)
6. [The Core Workflow](#6-the-core-workflow)
7. [Handling ABMEA & Amateur Content](#7-handling-abmea--amateur-content)
8. [Automation with Scheduled Tasks](#8-automation-with-scheduled-tasks)
9. [Tips & Troubleshooting](#9-tips--troubleshooting)

---

## 1. Prerequisites

Before starting, make sure you have:

- **StashApp installed and running** — see the [official install docs](https://github.com/stashapp/stash/wiki/Installation)
- **Git installed** on your system (needed to clone the scrapers repo)
- **A ThePornDB account** — free tier available at [metadataapi.net](https://metadataapi.net)
- **A StashDB account** — requires a community invite; request one on [the Discord](https://discord.gg/2TsNFKt)
- Your Stash config directory noted — default locations:

| OS | Default Path |
|----|-------------|
| Windows | `%APPDATA%\stash` |
| macOS | `~/.stash` |
| Linux | `~/.stash` |
| Docker | `/root/.stash` |

---
## 2. Installing CommunityScrapers

The [CommunityScrapers](https://github.com/stashapp/CommunityScrapers) repository, maintained by the Stash community, contains scrapers for hundreds of sites. It is the primary source for site-specific scrapers, including ABMEA.

### Step 1 — Navigate to your Stash config directory

```bash
cd ~/.stash
```

### Step 2 — Create a scrapers directory if it doesn't exist

```bash
mkdir -p scrapers
cd scrapers
```

### Step 3 — Clone the CommunityScrapers repository

```bash
git clone https://github.com/stashapp/CommunityScrapers.git
```

This creates `~/.stash/scrapers/CommunityScrapers/` containing all available scrapers.

### Step 4 — Verify Stash detects the scrapers

1. Open Stash in your browser (default: `http://localhost:9999`)
2. Go to **Settings → Metadata Providers → Scrapers**
3. Click **Reload Scrapers**
4. You should now see a long list of scrapers, including entries for ABMEA, ManyVids, Clips4Sale, etc.

### Step 5 — Keep scrapers updated

Community scrapers are actively maintained, so set up a periodic update:

```bash
cd ~/.stash/scrapers/CommunityScrapers
git pull
```

> 💡 **Tip:** You can automate this with a cron job or scheduled task. See [Section 8](#8-automation-with-scheduled-tasks).
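The clone-and-update steps above can be collapsed into one idempotent helper. This is a sketch: the function name is my own, and the paths simply mirror the defaults from Step 3.

```shell
#!/usr/bin/env bash
# Clone CommunityScrapers on the first run, fast-forward it on every run
# after. Helper name is my own; adjust the paths to your config dir.
update_scrapers() {
  local dir="$1" repo="$2"
  if [ -d "$dir/.git" ]; then
    git -C "$dir" pull --ff-only   # already cloned: just update
  else
    git clone "$repo" "$dir"       # first run: clone fresh
  fi
}

# Example:
# update_scrapers ~/.stash/scrapers/CommunityScrapers \
#   https://github.com/stashapp/CommunityScrapers.git
```

Because it is safe to call repeatedly, the same line works for both first-time setup and the scheduled update in Section 8.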
### Installing Python Dependencies (if prompted)

Some scrapers require Python packages. If you see scraper errors mentioning missing modules:

```bash
pip install requests cloudscraper py-cord lxml
```
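Note that on Ubuntu 24.04 and other PEP 668 distros, a bare `pip install` into the system Python is refused, so a virtual environment is the safer home for these packages. A sketch (the venv location is an arbitrary choice, not a Stash requirement):

```shell
#!/usr/bin/env bash
# Keep scraper dependencies in a dedicated venv instead of the system
# Python. The venv path below is an example, not a Stash requirement.
setup_scraper_venv() {
  local venv="${1:-$HOME/.stash/scrapers/venv}"
  python3 -m venv "$venv"
  "$venv/bin/pip" install requests cloudscraper py-cord lxml
}

# Example:
# setup_scraper_venv ~/.stash/scrapers/venv
```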
---

## 3. Configuring Metadata Providers

Stash uses **metadata providers** to automatically match scenes by fingerprint (phash/oshash). This is what enables true automation — no filename matching required.

### 3.1 StashDB

StashDB is the official community-run fingerprint and metadata database. It is the most reliable source for mainstream and studio content.

1. Go to **Settings → Metadata Providers**
2. Under **Stash-Box Endpoints**, click **Add**
3. Fill in:
   - **Name:** `StashDB`
   - **Endpoint:** `https://stashdb.org/graphql`
   - **API Key:** *(generate this from your StashDB account → API Keys)*
4. Click **Confirm**

### 3.2 ThePornDB (TPDB)

TPDB aggregates metadata from a large number of sites and is especially useful for amateur, clip-site, and ABMEA content that may not be on StashDB.

1. Log in at [metadataapi.net](https://metadataapi.net) and go to your **API Settings** to get your key
2. In Stash, go to **Settings → Metadata Providers**
3. Under **Stash-Box Endpoints**, click **Add**
4. Fill in:
   - **Name:** `ThePornDB`
   - **Endpoint:** `https://theporndb.net/graphql`
   - **API Key:** *(your TPDB API key)*
5. Click **Confirm**

### Provider Priority Order

Set your Identify task to query providers in this order for best results:

1. **StashDB** — highest quality, community-verified
2. **ThePornDB** — broad coverage, including amateur/clip sites
3. **CommunityScrapers** (site-specific) — for anything not matched above
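Before wiring a key into Stash, it can save debugging time to confirm it works by querying the endpoint directly. A sketch only: the `me` query is my assumption about the stash-box GraphQL schema, so check the endpoint's own docs if it errors, and `STASHDB_API_KEY` is a placeholder.

```shell
#!/usr/bin/env bash
# Smoke-test a stash-box API key by asking the endpoint who we are.
# The `me` query is an assumption about the stash-box schema; verify
# against the endpoint's GraphQL docs if it fails.
check_stashbox_key() {
  local endpoint="$1" key="$2"
  curl -sf -X POST "$endpoint" \
    -H "Content-Type: application/json" \
    -H "ApiKey: $key" \
    -d '{"query":"query { me { name } }"}'
}

# Example (STASHDB_API_KEY is a placeholder):
# check_stashbox_key https://stashdb.org/graphql "$STASHDB_API_KEY"
```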
---

## 4. Configuring Your Library

### Adding Library Paths

1. Go to **Settings → Library**
2. Under **Directories**, click **Add** and point to your media folders
3. You can add multiple directories (e.g., separate drives or folders)

> ⚠️ **Do not** set your organized output folder as a source directory. Keep source and destination separate until you are confident in your setup.

### Recommended Directory Structure

```
/media/
├── stash-incoming/      ← Source: where new files land
└── stash-library/       ← Destination: where Stash moves organized files
    ├── Studios/
    │   └── ABMEA/
    └── Amateur/
```

---
## 5. Automated File Naming & Moving

This is the section that does the heavy lifting. Stash renames and moves files **only when a scene is marked as Organized**, which gives you a review gate before anything is touched.

### Enable File Moving

1. Go to **Settings → Library**
2. Enable **"Move files to organized folder on organize"**
3. Set your **Organized folder path** (e.g., `/media/stash-library`)

### Configure the File Naming Template

Still in **Settings → Library**, set your **Filename template**. Simple templates use `{variable}` placeholders; full Go template syntax is also available when you need conditionals and fallbacks.

**Recommended template for mixed studio/amateur libraries:**

```
{studio}/{date} {title}
```

**For performer-centric amateur libraries:**

```
{performers}/{studio}/{date} {title}
```

**Full example with fallbacks (Go template syntax):**

```
{{if .Studio}}{{.Studio.Name}}{{else}}Unknown{{end}}/{{if .Date}}{{.Date}}{{else}}0000-00-00{{end}} {{.Title}}
```

### Available Template Variables

| Variable | Example Output |
|----------|---------------|
| `{title}` | `Scene Title Here` |
| `{date}` | `2024-03-15` |
| `{studio}` | `ABMEA` |
| `{performers}` | `Jane Doe` |
| `{resolution}` | `1080p` |
| `{duration}` | `00-32-15` |
| `{rating}` | `5` |

> 💡 If a field is empty (e.g., no studio), Stash skips that path segment. Test with a few scenes before running on your whole library.
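To sanity-check what the `{studio}/{date} {title}` template should produce, here is a small standalone sketch that mimics the skip-empty-segment behaviour described above. It is illustrative only, not Stash's actual renderer.

```shell
#!/usr/bin/env bash
# Illustrative re-implementation of the `{studio}/{date} {title}` naming
# rule, including the "skip empty path segment" behaviour. Not Stash's
# code -- just a sketch for checking expectations.
render_path() {
  local studio="$1" date="$2" title="$3" path=""
  [ -n "$studio" ] && path="$studio/"          # drop segment when empty
  if [ -n "$date" ]; then
    path="${path}${date} ${title}.mp4"
  else
    path="${path}${title}.mp4"
  fi
  printf '%s\n' "$path"
}

render_path "ABMEA" "2024-03-15" "Scene Title Here"
# → ABMEA/2024-03-15 Scene Title Here.mp4
render_path "" "" "Untagged Clip"
# → Untagged Clip.mp4
```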
---

## 6. The Core Workflow

Follow these steps **in order** every time you add new content. This is the automated pipeline:

```
New Files → Scan → Generate Fingerprints → Identify → Review → Organize (Move + Rename)
```

### Step 1 — Scan

**Tasks → Scan**

- Discovers new files and adds them to the database
- Does not move or rename anything yet
- Option to enable: **Generate covers on scan**

### Step 2 — Generate Fingerprints

**Tasks → Generate**

Select these options:

| Option | Purpose |
|--------|---------|
| ✅ **Phashes** | Used for fingerprint matching against StashDB/TPDB |
| ✅ **Checksums (MD5/SHA256)** | Used for duplicate detection |
| ✅ **Previews** | Thumbnail previews in the UI |
| ✅ **Sprites** | Timeline scrubber images |

> ⏳ This step is CPU/GPU intensive. Let it complete before proceeding. On a large library, it may take hours.

### Step 3 — Identify (Auto-Scrape by Fingerprint)

**Tasks → Identify**

This is the magic step. Stash sends your file fingerprints to StashDB and TPDB and pulls back metadata automatically.

Configure the task:

1. Click **Add Source** and add **StashDB** first
2. Click **Add Source** again and add **ThePornDB**
3. Under **Options**, enable:
   - ✅ Set cover image
   - ✅ Set performers
   - ✅ Set studio
   - ✅ Set tags
   - ✅ Set date
4. Click **Identify**

Stash will now automatically match and populate metadata for any scene it recognizes by fingerprint.
### Step 4 — Auto Tag (Filename-Based Fallback)

For scenes that didn't match by fingerprint (common with amateur content), use Auto Tag to extract metadata from filenames.

**Tasks → Auto Tag**

- Matches **Performers**, **Studios**, and **Tags** in filenames against your existing database entries
- Works best when filenames contain names (e.g., `JaneDoe_SceneTitle_1080p.mp4`)

### Step 5 — Review Unmatched Scenes

Filter to find scenes that still need attention:

1. Go to **Scenes**
2. Filter by **Organized = false** and **Studio = none** (or **Performers = none**)
3. Use the **Tagger view** (icon in the top right of Scenes) for rapid URL-based scraping

In Tagger view:

- Paste the original source URL into the scrape field
- Click **Scrape** — Stash fills in all metadata from that URL
- Review and click **Save**

### Step 6 — Organize (Move & Rename)

Once you're satisfied with a scene's metadata, either:

1. Open the scene and click the **Organize** button (checkmark icon), or
2. Use **bulk organize**: select multiple scenes → Edit → Mark as Organized

When a scene is marked Organized, Stash will:

- ✅ Rename the file according to your template
- ✅ Move it to your organized folder
- ✅ Update the database path

> ⚠️ **This action cannot be easily undone at scale.** Always verify metadata on a small batch first.

---
## 7. Handling ABMEA & Amateur Content

ABMEA and amateur clips often lack fingerprint matches. Use these additional strategies:

### ABMEA-Specific Scraper

The CommunityScrapers repo includes an ABMEA scraper. To use it manually:

1. Open a scene in Stash
2. Click **Edit → Scrape with → ABMEA**
3. If the scene URL is known, enter it; otherwise the scraper will search by title

### Batch URL Scraping Workflow for ABMEA

If you have many files sourced from ABMEA:

1. Before ingesting files, **rename them to include the ABMEA scene ID** in the filename if possible (e.g., `ABMEA-0123_title.mp4`)
2. After scanning, go to **Tagger View**
3. Filter to unmatched scenes and paste ABMEA URLs one by one

### Amateur Content Without a Source Site

For truly anonymous amateur clips:

1. Create a **Studio** entry called `Amateur` (or more specific names like `Amateur - Reddit`)
2. Create **Performer** entries for recurring people you can identify
3. Run **Auto Tag** to match against these once the entries exist
4. Use tags liberally to compensate for missing structured metadata: `amateur`, `homemade`, `POV`, etc.

### Tag Hierarchy Recommendation

Set up tag parents in **Settings → Tags** to create a browsable hierarchy:

```
Content Type
├── Amateur
├── Professional
└── Compilation

Source
├── ABMEA
├── Clip Site
└── Unknown

Quality
├── 4K
├── 1080p
└── SD
```

---
## 8. Automation with Scheduled Tasks

Minimize manual steps by scheduling recurring tasks.

### Setting Up Scheduled Tasks in Stash

Go to **Settings → Tasks → Scheduled Tasks** and create:

| Task | Schedule | Purpose |
|------|----------|---------|
| Scan | Every 6 hours | Pick up new files automatically |
| Generate (Phashes only) | Every 6 hours | Fingerprint new files |
| Identify | Daily at 2am | Match newly fingerprinted files |
| Auto Tag | Daily at 3am | Filename-based fallback tagging |
| Clean | Weekly | Remove missing files from the database |

### Auto-Update CommunityScrapers (Linux/macOS)

Add to your crontab (`crontab -e`):

```bash
# Update CommunityScrapers every Sunday at midnight
0 0 * * 0 cd ~/.stash/scrapers/CommunityScrapers && git pull
```

### Auto-Update CommunityScrapers (Windows)

Create a scheduled task in Task Scheduler running:

```powershell
cd C:\Users\YourUser\.stash\scrapers\CommunityScrapers; git pull
```
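If you would rather drive everything from cron, Stash's own GraphQL API can kick off tasks too. A sketch under assumptions: I'm assuming the `metadataScan` mutation (check your version's GraphQL playground) and an API key generated under Stash's security settings.

```shell
#!/usr/bin/env bash
# Trigger a library scan via Stash's local GraphQL API, so cron can
# drive the pipeline. The metadataScan mutation name is an assumption
# about the Stash schema -- verify it in your version's playground.
stash_scan() {
  local stash_url="${1:-http://localhost:9999}" api_key="$2"
  curl -sf -X POST "$stash_url/graphql" \
    -H "Content-Type: application/json" \
    -H "ApiKey: $api_key" \
    -d '{"query":"mutation { metadataScan(input: {}) }"}'
}

# Hypothetical crontab entry wrapping the call in a script:
# 0 */6 * * * /usr/local/bin/stash_scan.sh http://localhost:9999 "$STASH_API_KEY"
```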
---

## 9. Tips & Troubleshooting

### Scraper not appearing in Stash

- Go to **Settings → Metadata Providers → Scrapers** and click **Reload Scrapers**
- Check that the `.yml` scraper file is in a subdirectory of your scrapers folder
- Check the Stash logs (**Settings → Logs**) for scraper loading errors

### Identify finds no matches

- Confirm phashes were generated (check the scene details — the phash field should be populated)
- Confirm your StashDB/TPDB API keys are entered correctly and not expired
- The file may simply not be in either database — proceed to manual URL scraping

### Files not moving after marking as Organized

- Confirm **"Move files to organized folder"** is enabled in Settings → Library
- Confirm the organized folder path is set and the folder exists
- Check that Stash has write permissions on both the source and destination

### Duplicate files

Run **Tasks → Clean → Find Duplicates** before organizing to avoid moving duplicates into your library. Stash uses phashes to find visual duplicates even when filenames differ.

### Metadata keeps getting overwritten

In **Settings → Scraping**, set the **Scrape behavior** to `If not set` instead of `Always` to prevent already-populated fields from being overwritten during re-scrapes.

### Useful Stash Plugins

Install via **Settings → Plugins → Browse Available Plugins**:

| Plugin | Purpose |
|--------|---------|
| **Performer Image Cleanup** | Remove duplicate performer images |
| **Tag Graph** | Visualize tag relationships |
| **Duplicate Finder** | Advanced duplicate management |
| **Stats** | Library analytics dashboard |

---
## Quick Reference Checklist

Use this checklist every time you add new content:

```
[ ] Drop files into stash-incoming directory
[ ] Tasks → Scan
[ ] Tasks → Generate → Phashes + Checksums
[ ] Tasks → Identify (StashDB → TPDB)
[ ] Tasks → Auto Tag
[ ] Review unmatched scenes in Tagger View
[ ] Manually scrape remaining unmatched scenes by URL
[ ] Spot-check metadata on a sample of scenes
[ ] Bulk select reviewed scenes → Mark as Organized
[ ] Verify a few files moved and renamed correctly
[ ] Done ✓
```

---

*Last updated: February 2026 | Stash version compatibility: 0.25+*
*Community resources: [Stash Discord](https://discord.gg/2TsNFKt) | [GitHub](https://github.com/stashapp/stash) | [Wiki](https://github.com/stashapp/stash/wiki)*

Green-Grimoire/Overview.md (new file, 58 lines)
---
title: Green Grimoire
description: Adult media stack — the satyr's private library
published: true
date: 2026-04-12T00:00:00.000Z
tags: green, adult, stash
editor: markdown
dateCreated: 2026-04-12T00:00:00.000Z
---

# Green Grimoire

![ggrim.png](/green/ggrim.png)

The Green Grimoire is the self-hosted adult media stack, on a separate host and domain from Netgrimoire. All services sit behind `*.wasted-bandwidth.net` and Authelia. Homepage tab: **Nucking-Futz**.

Data lives at `/data/nfs/Baxter/Green/` with two libraries: Clips and Movies.

---

## Services

| Service | URL | Port | Purpose | Host |
|---------|-----|------|---------|------|
| Stash (main) | `stash.wasted-bandwidth.net` | 9999 | Primary adult content library | znas / Compose |
| GreenFin (Jellyfinx) | Internal | 7096 | Green Door media server | docker5 / Compose |
| Namer | `namer.wasted-bandwidth.net` | 6980 | Scene file namer | znas / Compose |
| Whisparr | — | — | Adult content acquisition | znas / Swarm |
| NZBGet | — | — | Downloader | znas / Swarm |
| PocketStash | Internal | 9998 | Stash instance for Pocket Grimoire sync | znas / Compose |

---
## Data Structure

```
/data/nfs/Baxter/Green/
├── Clips/       ← Clips library
├── Movies/      ← Movies library
└── Pocket/      ← Synced to Pocket Grimoire pre-travel
```

---

## Pocket Integration

PocketStash (port 9998) is a separate Stash instance that maintains a curated subset for travel. Before a trip, `syncoid` pushes `vault/Green/Pocket` to the Pocket Grimoire laptop. The Pocket instance runs in read-only travel mode — no writes while traveling.

See [Stash Integration](/Pocket-Grimoire/Software/Stash-Integration) in Pocket Grimoire docs.
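The pre-travel push might look like the following sketch. Only `vault/Green/Pocket` comes from this page; the laptop hostname, destination dataset, and helper name are placeholders of my own, so treat this as the shape of the syncoid call rather than the actual job.

```shell
#!/usr/bin/env bash
# Hypothetical pre-travel sync. The source dataset comes from this page;
# hostname, destination dataset, and function name are placeholders.
# --no-sync-snap replicates existing snapshots without creating a new one.
push_pocket() {
  local laptop="${1:-pocket-laptop}"
  syncoid --no-sync-snap vault/Green/Pocket \
    "root@${laptop}:vault/Green/Pocket"
}

# Example:
# push_pocket pocket-laptop
```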
---

## Sections

| | |
|---|---|
| [Stash Management](/Green-Grimoire/Library/Stash-Management) | Library config, scrapers, metadata workflow |
| [VHS Restoration](/Green-Grimoire/Scripts/VHS-Restoration) | Encoding, deinterlace, restoration scripts |

Green-Grimoire/Scripts/VHS-Restoration.md (new file, 531 lines)
---
title: Video Restoration Script
description: Restore VHS Video Captures
published: true
date: 2026-03-06T03:48:12.713Z
tags:
editor: markdown
dateCreated: 2026-03-06T03:48:05.841Z
---

# VHS Video Restoration — User Guide

A pipeline script for cleaning up and upscaling old VHS captures on Ubuntu 24.04. It runs in two modes: a fast FFmpeg-only cleanup pass, and a full AI upscale using Real-ESRGAN.

---

## Requirements

- **Ubuntu 24.04**
- **FFmpeg** — `sudo apt install ffmpeg`
- **bc** — `sudo apt install bc`
- **Real-ESRGAN** (optional, for AI upscaling — see setup below)

---
## File Setup

Place everything in a working folder with this structure:

```
~/your-folder/
├── vhs_restore.sh
├── realesrgan-ncnn-vulkan   ← AI upscaler binary (optional)
├── models/                  ← Real-ESRGAN model files
├── input/                   ← Put your source videos here
├── output/                  ← Restored videos appear here
└── work/                    ← Temporary scratch files (auto-created)
```

Supported input formats: `.mpg`, `.mpeg`, `.mp4`, `.avi`, `.mov`, `.mkv`, `.wmv`, `.m4v`, `.ts`

---

## First-Time Setup

```bash
# Make the script executable
chmod +x vhs_restore.sh

# Create the input folder and add your videos
mkdir input
cp /path/to/your/videos/*.mpg input/
```

### Installing Real-ESRGAN (one-time, for AI upscaling)

1. Download the latest Ubuntu release from
   https://github.com/xinntao/Real-ESRGAN/releases
   (look for `realesrgan-ncnn-vulkan-*-ubuntu.zip`)
2. Unzip it into your working folder
3. `chmod +x realesrgan-ncnn-vulkan`
---

## Running the Script

### Quick cleanup only (recommended first pass)

Fast — processes each file in a few minutes. No AI upscaling.

```bash
./vhs_restore.sh --no-ai
```

### Full pipeline with AI upscaling

Slow on CPU (plan for several hours per hour of footage), but produces the best results.

```bash
./vhs_restore.sh
```

### All options

| Flag | Description | Default |
|------|-------------|---------|
| `-i DIR` | Input directory | `./input` |
| `-o DIR` | Output directory | `./output` |
| `-w DIR` | Scratch/work directory | `./work` |
| `-b PATH` | Path to Real-ESRGAN binary | `./realesrgan-ncnn-vulkan` |
| `-s 2` or `-s 4` | Upscale factor | `2` |
| `-q 16` | Output quality (0–51, lower = better) | `16` |
| `--no-ai` | Skip AI upscaling, FFmpeg only | off |
| `--keep` | Keep extracted PNG frames after processing | off |
| `-h` | Show help | |

**Examples:**

```bash
# Process files from a custom folder
./vhs_restore.sh -i ~/Videos/VHS -o ~/Videos/Restored

# 4x upscale with a slightly smaller output file
./vhs_restore.sh -s 4 -q 18

# FFmpeg cleanup only, custom folders
./vhs_restore.sh -i ~/Videos/VHS -o ~/Videos/Restored --no-ai
```

---
## What the Script Does

**Stage 1 — FFmpeg cleanup** (always runs):

- Deinterlaces the video (`yadif`) — removes the horizontal combing artifacts common in VHS captures
- Denoises (`hqdn3d=2:1:2:2`) — gentle noise reduction that avoids motion blocking
- Sharpens edges (`unsharp`) — recovers detail softened by the denoise step
- Colour corrects — boosts washed-out VHS colour, adjusts contrast and gamma, corrects the green/yellow cast common in aged tape
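Standalone, the Stage 1 filter chain looks roughly like this. The `yadif=mode=1`, `hqdn3d=2:1:2:2`, `saturation=1.8`, CRF 16, and `-preset slow` values come from the script itself; the exact `unsharp` and `eq` numbers are illustrative stand-ins.

```shell
#!/usr/bin/env bash
# Stage 1 cleanup as a standalone command, wrapped in a function.
# yadif/hqdn3d/saturation/CRF/preset values come from the script;
# the unsharp and eq contrast/gamma numbers are illustrative.
stage1_cleanup() {
  local in="$1" out="$2"
  ffmpeg -i "$in" \
    -vf "yadif=mode=1,hqdn3d=2:1:2:2,unsharp=5:5:0.8:5:5:0.0,eq=contrast=1.1:gamma=1.05:saturation=1.8" \
    -c:v libx264 -preset slow -crf 16 -c:a aac "$out"
}

# Example:
# stage1_cleanup input/tape.mpg work/tape_cleaned.mp4
```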
**Stage 2 — Frame extraction** (AI mode only):

- Extracts every frame as a PNG into a temporary folder

**Stage 3 — Real-ESRGAN upscaling** (AI mode only):

- Runs the `realesr-animevideov3` model on each frame
- Default: 2× upscale (e.g. 640×480 → 1280×960)
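The per-frame upscale boils down to one Real-ESRGAN call over the two frame directories. The flags below (`-i`/`-o` directories, `-n` model, `-s` scale, `-f` format) match the `realesrgan-ncnn-vulkan` CLI as I understand it, so verify against `./realesrgan-ncnn-vulkan -h`.

```shell
#!/usr/bin/env bash
# Batch-upscale extracted frames with Real-ESRGAN. Flag names are per
# the realesrgan-ncnn-vulkan CLI as I understand it; verify with -h.
upscale_frames() {
  local frames_in="$1" frames_out="$2" scale="${3:-2}"
  ./realesrgan-ncnn-vulkan \
    -i "$frames_in" -o "$frames_out" \
    -n realesr-animevideov3 -s "$scale" -f png
}

# Example:
# upscale_frames work/tape_frames_in work/tape_frames_out 2
```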
**Reassembly:**

- Rebuilds the video from upscaled frames with the original audio
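Reassembly is a single FFmpeg mux of the upscaled frames with the source file's audio track. A sketch: the `frame_%08d.png` filename pattern is an assumption, since the extraction naming isn't shown above.

```shell
#!/usr/bin/env bash
# Rebuild the video from upscaled PNG frames plus the source's audio.
# The frame_%08d.png pattern is an assumption about the extraction step;
# -map 1:a? keeps the mux working even if the source has no audio.
reassemble() {
  local frames="$1" source="$2" out="$3" fps="${4:-25}"
  ffmpeg -y -framerate "$fps" -i "$frames/frame_%08d.png" \
    -i "$source" -map 0:v -map 1:a? \
    -c:v libx264 -preset slow -crf 16 -pix_fmt yuv420p -c:a aac "$out"
}

# Example:
# reassemble work/tape_frames_out input/tape.mpg output/tape_restored.mp4 29.97
```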
---

## Live Progress

The script shows live FFmpeg output. Watch for:

- `speed=3.5x` — processing at 3.5× realtime (good)
- `speed=0.5x` — slow, likely a very heavy filter load
- `corrupt decoded frame` — normal for damaged VHS files; FFmpeg will push through

---
## Troubleshooting

**Script hangs with no output**
Run with `--no-ai` first to confirm FFmpeg is working, then check that your Real-ESRGAN binary is executable (`chmod +x realesrgan-ncnn-vulkan`).

**Output looks blocky during motion**
The denoise values may still be too high for your footage. Edit the script and reduce `hqdn3d=2:1:2:2` to `hqdn3d=1:1:1:1`, or remove `hqdn3d` entirely — Real-ESRGAN handles noise well on its own.

**Colour looks over-saturated**
Reduce `saturation=1.8` in the filter chain to `saturation=1.4` or `1.2`.

**Real-ESRGAN not found**
Ensure the binary is in the same folder as the script and is executable, or pass the path explicitly: `./vhs_restore.sh -b /path/to/realesrgan-ncnn-vulkan`

**Error logs**
All FFmpeg and Real-ESRGAN logs are saved to `/tmp/` for diagnosis:
- `/tmp/ffmpeg_stage1.log`
- `/tmp/ffmpeg_extract.log`
- `/tmp/realesrgan.log`
- `/tmp/ffmpeg_reassemble.log`

---

## Workflow Recommendation

1. Run `--no-ai` first on one file to check the cleanup result
2. If it looks good, run the full pipeline on all files overnight
3. For heavily damaged footage, consider also running **CodeFormer** (face restoration) on top of the output — particularly effective if the video contains people

---

## Output

Restored files are saved to `./output/` as `<original_name>_restored.mp4`, encoded as H.264 with AAC audio.
## vhs_restore.sh Script
|
||||
|
||||
`#!/usr/bin/env bash
|
||||
# =============================================================================
|
||||
# vhs_restore.sh — Automated VHS Video Restoration Pipeline
|
||||
# Stages: Deinterlace → Denoise → Colour correct → AI Upscale → Reassemble
|
||||
#
|
||||
# Changes from v1:
|
||||
# - Gentle hqdn3d (2:1:2:2) to prevent motion blocking/pixelation
|
||||
# - Aggressive colour correction for washed-out VHS footage
|
||||
# - Live FFmpeg progress shown in terminal (no silent hanging)
|
||||
# - Logs still saved to /tmp/ for error diagnosis
|
||||
# =============================================================================
|
||||
set -euo pipefail
|
||||
|
||||
# ── Colour output helpers ────────────────────────────────────────────────────
|
||||
RED='\033[0;31m'; GREEN='\033[0;32m'; YELLOW='\033[1;33m'
|
||||
CYAN='\033[0;36m'; BOLD='\033[1m'; NC='\033[0m'
|
||||
info() { echo -e "${CYAN}[INFO]${NC} $*"; }
|
||||
success() { echo -e "${GREEN}[OK]${NC} $*"; }
|
||||
warn() { echo -e "${YELLOW}[WARN]${NC} $*"; }
|
||||
error() { echo -e "${RED}[ERROR]${NC} $*" >&2; }
|
||||
header() { echo -e "\n${BOLD}${CYAN}══ $* ══${NC}"; }
|
||||
|
||||
# ── Default configuration ────────────────────────────────────────────────────
|
||||
INPUT_DIR="./input" # Folder containing your source VHS videos
|
||||
OUTPUT_DIR="./output" # Final restored videos land here
|
||||
WORK_DIR="./work" # Scratch space (frames, temp files)
|
||||
REALESRGAN_BIN="./realesrgan-ncnn-vulkan" # Path to Real-ESRGAN binary
|
||||
REALESRGAN_MODEL="realesr-animevideov3" # Best model for home video
|
||||
UPSCALE_FACTOR=2 # 2x or 4x (4x is very slow on CPU)
|
||||
OUTPUT_WIDTH=1920 # Target width used in --no-ai mode
|
||||
OUTPUT_HEIGHT=1080 # Target height used in --no-ai mode
|
||||
CRF=16 # Output quality 0-51, lower = better
|
||||
PRESET="slow" # FFmpeg encode preset
|
||||
SKIP_UPSCALE=false # --no-ai flag sets this true
|
||||
KEEP_FRAMES=false # --keep flag sets this true
|
||||
|
||||
# ── Parse CLI flags ──────────────────────────────────────────────────────────
|
||||
usage() {
|
||||
cat <<EOF
|
||||
Usage: $(basename "$0") [options]
|
||||
|
||||
Options:
|
||||
-i DIR Input directory (default: ./input)
|
||||
-o DIR Output directory (default: ./output)
|
||||
-w DIR Work/scratch dir (default: ./work)
|
||||
-b PATH Path to realesrgan-ncnn-vulkan binary
|
||||
-s FACTOR Upscale factor: 2 or 4 (default: 2)
|
||||
-q CRF Output quality 0-51, lower=better (default: 16)
|
||||
--no-ai Skip Real-ESRGAN; FFmpeg cleanup only (fast)
|
||||
--keep Keep extracted frames after processing
|
||||
-h Show this help
|
||||
|
||||
Examples:
|
||||
$(basename "$0") -i ~/Videos/VHS -o ~/Videos/Restored
|
||||
$(basename "$0") -i ~/Videos/VHS --no-ai # Quick cleanup only
|
||||
$(basename "$0") -i ~/Videos/VHS -s 4 -q 18 # 4x upscale
|
||||
EOF
|
||||
exit 0
|
||||
}
|
||||
|
||||
while [[ $# -gt 0 ]]; do
|
||||
case "$1" in
|
||||
-i) INPUT_DIR="$2"; shift 2 ;;
|
||||
-o) OUTPUT_DIR="$2"; shift 2 ;;
|
||||
-w) WORK_DIR="$2"; shift 2 ;;
|
||||
-b) REALESRGAN_BIN="$2"; shift 2 ;;
|
||||
-s) UPSCALE_FACTOR="$2"; shift 2 ;;
|
||||
-q) CRF="$2"; shift 2 ;;
|
||||
--no-ai) SKIP_UPSCALE=true; shift ;;
|
||||
--keep) KEEP_FRAMES=true; shift ;;
|
||||
-h|--help) usage ;;
|
||||
*) error "Unknown option: $1"; usage ;;
|
||||
esac
|
||||
done
|
||||
|
||||
# ── Dependency checks ────────────────────────────────────────────────────────
|
||||
header "Checking dependencies"
|
||||
|
||||
check_cmd() {
|
||||
if command -v "$1" &>/dev/null; then
|
||||
success "$1 found"
|
||||
else
|
||||
error "$1 not found. Install with: $2"
|
||||
exit 1
|
||||
fi
|
||||
}
|
||||
|
||||
check_cmd ffmpeg "sudo apt install ffmpeg"
|
||||
check_cmd ffprobe "sudo apt install ffmpeg"
|
||||
check_cmd bc "sudo apt install bc"
|
||||
|
||||
if [[ "$SKIP_UPSCALE" == false ]]; then
|
||||
if [[ ! -x "$REALESRGAN_BIN" ]]; then
|
||||
warn "Real-ESRGAN binary not found at: $REALESRGAN_BIN"
|
||||
echo
|
||||
echo -e "${YELLOW}To install Real-ESRGAN:${NC}"
|
||||
echo " 1. Download: https://github.com/xinntao/Real-ESRGAN/releases"
|
||||
echo " -> realesrgan-ncnn-vulkan-*-ubuntu.zip"
|
||||
echo " 2. Unzip into this directory"
|
||||
echo " 3. chmod +x realesrgan-ncnn-vulkan"
|
||||
echo " 4. Re-run this script"
|
||||
echo
|
||||
echo "Or run with --no-ai for FFmpeg-only cleanup (no upscaling)."
|
||||
exit 1
|
||||
fi
|
||||
success "Real-ESRGAN found"
|
||||
fi
|
||||
|
||||
# ── Locate input files ───────────────────────────────────────────────────────
|
||||
header "Scanning input directory: $INPUT_DIR"
|
||||
|
||||
if [[ ! -d "$INPUT_DIR" ]]; then
|
||||
error "Input directory not found: $INPUT_DIR"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
mapfile -t VIDEO_FILES < <(find "$INPUT_DIR" -maxdepth 1 \
|
||||
-type f \( -iname "*.mp4" -o -iname "*.avi" -o -iname "*.mov" \
|
||||
-o -iname "*.mkv" -o -iname "*.mpg" -o -iname "*.mpeg" \
|
||||
-o -iname "*.wmv" -o -iname "*.m4v" -o -iname "*.ts" \) \
|
||||
| sort)
|
||||
|
||||
if [[ ${#VIDEO_FILES[@]} -eq 0 ]]; then
|
||||
error "No video files found in $INPUT_DIR"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
info "Found ${#VIDEO_FILES[@]} video file(s):"
|
||||
for f in "${VIDEO_FILES[@]}"; do echo " * $(basename "$f")"; done
|
||||
|
||||
# ── Helpers ──────────────────────────────────────────────────────────────────
|
||||
probe() {
|
||||
ffprobe -v error -select_streams v:0 \
|
||||
-show_entries "stream=$2" -of csv=p=0 "$1" 2>/dev/null | head -1
|
||||
}
|
||||
|
||||
human_time() {
|
||||
local s="${1%.*}"
|
||||
printf '%dh %dm %ds' $((s/3600)) $(( (s%3600)/60 )) $((s%60))
|
||||
}
|
||||
|
||||
# ── Create directories ───────────────────────────────────────────────────────
mkdir -p "$OUTPUT_DIR" "$WORK_DIR"

# ── Overall stats ────────────────────────────────────────────────────────────
TOTAL_FILES=${#VIDEO_FILES[@]}
PROCESSED=0
FAILED=0
PIPELINE_START=$(date +%s)

# ════════════════════════════════════════════════════════════════════════════
# MAIN LOOP
# ════════════════════════════════════════════════════════════════════════════
for INPUT_FILE in "${VIDEO_FILES[@]}"; do

  BASENAME=$(basename "$INPUT_FILE")
  STEM="${BASENAME%.*}"
  CLEANED="$WORK_DIR/${STEM}_cleaned.mp4"
  FRAMES_IN="$WORK_DIR/${STEM}_frames_in"
  FRAMES_OUT="$WORK_DIR/${STEM}_frames_out"
  FINAL_OUTPUT="$OUTPUT_DIR/${STEM}_restored.mp4"

  header "Processing: $BASENAME ($((PROCESSED+1))/$TOTAL_FILES)"
  FILE_START=$(date +%s)

  # ── Probe source ──────────────────────────────────────────────────────────
  FPS=$(probe "$INPUT_FILE" "r_frame_rate")
  FPS_DEC=$(echo "scale=3; $FPS" | bc 2>/dev/null || echo "25")
  WIDTH=$(probe "$INPUT_FILE" "width")
  HEIGHT=$(probe "$INPUT_FILE" "height")
  FIELD_ORDER=$(probe "$INPUT_FILE" "field_order")
  DURATION=$(ffprobe -v error -show_entries format=duration \
    -of csv=p=0 "$INPUT_FILE" 2>/dev/null | head -1)

  info "Source: ${WIDTH}x${HEIGHT} ${FPS_DEC}fps $(human_time "${DURATION%.*}") field_order=${FIELD_ORDER:-unknown}"

  # Always deinterlace for VHS -- safe even if not flagged as interlaced
  if [[ "$FIELD_ORDER" =~ ^(tt|tb|bt|bb)$ ]]; then
    DEINTERLACE_FILTER="yadif=mode=1,"
    info "Interlacing detected — applying yadif deinterlacer"
  else
    DEINTERLACE_FILTER="yadif=mode=1,"
    warn "Interlacing not confirmed by probe — applying yadif anyway (safe for VHS)"
  fi

  # ── Stage 1: FFmpeg cleanup ───────────────────────────────────────────────
  header "Stage 1/3 — FFmpeg cleanup & colour correction"
  info "Watch fps= and speed= for live progress."
  info "Corrupt frame warnings are normal for old VHS captures."
  echo

  if [[ "$SKIP_UPSCALE" == true ]]; then
    SCALE_FILTER="scale=${OUTPUT_WIDTH}:${OUTPUT_HEIGHT}:flags=lanczos,"
  else
    SCALE_FILTER=""
  fi

  # Filter chain notes:
  #   hqdn3d=2:1:2:2 -- gentle denoise; low temporal values (3rd/4th)
  #                     prevent the motion blocking seen with higher values
  #   unsharp        -- moderate sharpening to recover edge detail
  #   eq             -- aggressive colour boost for washed-out VHS
  #   colorbalance   -- corrects the green/yellow cast common in aged VHS
  VFILTER="${DEINTERLACE_FILTER}\
hqdn3d=2:1:2:2,\
unsharp=3:3:0.5:3:3:0.3,\
eq=contrast=1.2:brightness=0.05:saturation=1.8:gamma=1.1,\
colorbalance=rs=0.1:gs=0.0:bs=-0.1,\
${SCALE_FILTER}\
format=yuv420p"

  if ! ffmpeg -y -i "$INPUT_FILE" \
      -vf "$VFILTER" \
      -c:v libx264 -crf 18 -preset medium \
      -c:a aac -b:a 192k -ac 2 \
      -stats \
      "$CLEANED" 2>&1 | tee /tmp/ffmpeg_stage1.log | \
      grep --line-buffered -E "(frame=|speed=|error|Error|Invalid)"; then
    error "FFmpeg stage 1 failed. Full log: /tmp/ffmpeg_stage1.log"
    FAILED=$((FAILED+1))
    continue
  fi

  echo
  success "Stage 1 complete -> $(du -sh "$CLEANED" | cut -f1)"

  if [[ "$SKIP_UPSCALE" == true ]]; then
    cp "$CLEANED" "$FINAL_OUTPUT"
    success "Output (no AI): $FINAL_OUTPUT"
    PROCESSED=$((PROCESSED+1))
    [[ "$KEEP_FRAMES" == false ]] && rm -f "$CLEANED"
    continue
  fi

  # ── Stage 2: Extract frames ───────────────────────────────────────────────
  header "Stage 2/3 — Extracting frames for AI upscaling"
  mkdir -p "$FRAMES_IN" "$FRAMES_OUT"

  FRAME_COUNT=$(ffprobe -v error -count_packets \
    -select_streams v:0 -show_entries stream=nb_read_packets \
    -of csv=p=0 "$CLEANED" 2>/dev/null | head -1)
  FRAME_COUNT=${FRAME_COUNT:-0}
  info "Extracting ~${FRAME_COUNT} frames..."

  if ! ffmpeg -y -i "$CLEANED" \
      -vsync 0 -stats \
      "$FRAMES_IN/frame%08d.png" 2>&1 | tee /tmp/ffmpeg_extract.log | \
      grep --line-buffered -E "(frame=|speed=|error|Error)"; then
    error "Frame extraction failed. Full log: /tmp/ffmpeg_extract.log"
    FAILED=$((FAILED+1))
    continue
  fi

  ACTUAL_FRAMES=$(find "$FRAMES_IN" -name "*.png" | wc -l)
  echo
  success "Extracted $ACTUAL_FRAMES frames"

  # ── Stage 3: Real-ESRGAN ──────────────────────────────────────────────────
  header "Stage 3/3 — Real-ESRGAN AI upscaling (${UPSCALE_FACTOR}x)"
  warn "Slow on CPU — est. $(echo "scale=0; $ACTUAL_FRAMES * 10 / 60" | bc)-$(echo "scale=0; $ACTUAL_FRAMES * 30 / 60" | bc) minutes"
  info "Upscaled frames will appear in: $FRAMES_OUT"
  echo

  UPSCALE_START=$(date +%s)
  if ! "$REALESRGAN_BIN" \
      -i "$FRAMES_IN" \
      -o "$FRAMES_OUT" \
      -n "$REALESRGAN_MODEL" \
      -s "$UPSCALE_FACTOR" \
      -f png 2>&1 | tee /tmp/realesrgan.log; then
    error "Real-ESRGAN failed. Full log: /tmp/realesrgan.log"
    FAILED=$((FAILED+1))
    continue
  fi

  UPSCALE_END=$(date +%s)
  UPSCALE_ELAPSED=$((UPSCALE_END - UPSCALE_START))
  success "AI upscaling complete in $(human_time $UPSCALE_ELAPSED)"

  # ── Reassemble ────────────────────────────────────────────────────────────
  REASSEMBLE_FPS=$(ffprobe -v error -select_streams v:0 \
    -show_entries stream=r_frame_rate \
    -of csv=p=0 "$CLEANED" 2>/dev/null | head -1)

  info "Reassembling video from upscaled frames..."
  echo

  if ! ffmpeg -y \
      -framerate "$REASSEMBLE_FPS" \
      -i "$FRAMES_OUT/frame%08d.png" \
      -i "$CLEANED" \
      -map 0:v -map 1:a \
      -c:v libx264 -crf "$CRF" -preset "$PRESET" \
      -c:a copy \
      -movflags +faststart \
      -stats \
      "$FINAL_OUTPUT" 2>&1 | tee /tmp/ffmpeg_reassemble.log | \
      grep --line-buffered -E "(frame=|speed=|error|Error)"; then
    error "Reassembly failed. Full log: /tmp/ffmpeg_reassemble.log"
    FAILED=$((FAILED+1))
    continue
  fi

  # ── Cleanup ───────────────────────────────────────────────────────────────
  if [[ "$KEEP_FRAMES" == false ]]; then
    rm -rf "$FRAMES_IN" "$FRAMES_OUT" "$CLEANED"
    info "Scratch files cleaned up"
  else
    info "Frames kept in: $FRAMES_IN / $FRAMES_OUT"
  fi

  FILE_END=$(date +%s)
  FILE_ELAPSED=$((FILE_END - FILE_START))
  PROCESSED=$((PROCESSED+1))

  OUT_SIZE=$(du -sh "$FINAL_OUTPUT" | cut -f1)
  echo
  success "Done: $FINAL_OUTPUT"
  info "  File size : $OUT_SIZE"
  info "  Time taken: $(human_time $FILE_ELAPSED)"

done

# ════════════════════════════════════════════════════════════════════════════
# Final summary
# ════════════════════════════════════════════════════════════════════════════
PIPELINE_END=$(date +%s)
PIPELINE_ELAPSED=$((PIPELINE_END - PIPELINE_START))

header "Pipeline Complete"
echo -e "  ${GREEN}Processed : $PROCESSED / $TOTAL_FILES${NC}"
[[ $FAILED -gt 0 ]] && echo -e "  ${RED}Failed    : $FAILED${NC}"
echo -e "  Total time: $(human_time $PIPELINE_ELAPSED)"
echo -e "  Output dir: $OUTPUT_DIR"
echo

if [[ $PROCESSED -gt 0 ]]; then
  echo "Restored files:"
  find "$OUTPUT_DIR" -name "*_restored.mp4" | while read -r f; do
    SIZE=$(du -sh "$f" | cut -f1)
    echo "  * $(basename "$f") ($SIZE)"
  done
fi
```
72
Gremlin-Grimoire/Overview.md
Normal file
@ -0,0 +1,72 @@
---
title: Gremlin Grimoire
description: Netgrimoire's local AI — the gremlin that runs the machine
published: true
date: 2026-04-12T00:00:00.000Z
tags: gremlin, ai, ollama, n8n
editor: markdown
dateCreated: 2026-04-12T00:00:00.000Z
---

# Gremlin Grimoire



Gremlin is the local AI layer of Netgrimoire. It's not just a chat interface — it's an autonomous agent that watches the infrastructure, audits the codebase, triages alerts, and answers questions about the lab. The gremlin lives inside the machine and knows every dark corner of it.

---

## What Gremlin Is

Gremlin is a stack of four services running together on `docker4`, all pinned to the same Swarm node:

| Service | Role | URL |
|---------|------|-----|
| **Ollama** | Local LLM inference (CPU-only, Ryzen) | `http://ollama:11434` · `ollama.netgrimoire.com:11434` |
| **Open WebUI** | Chat interface + RAG frontend | `https://ai.netgrimoire.com` |
| **Qdrant** | Vector database for RAG knowledge base | `http://qdrant:6333` · dashboard `:6333/dashboard` |
| **n8n** | Automation brain — autonomous workflows | `https://n8n.netgrimoire.com` |

---

## What Gremlin Does Today

| Capability | Status | Workflow |
|-----------|--------|---------|
| Weekly YAML audit of all compose files | ✅ Live | Forgejo Audit — Monday 06:00 |
| Uptime Kuma alert triage | ✅ Live | Kuma Triage — webhook-triggered |
| Interactive chat with lab context | ✅ Live | Open WebUI + Ollama |
| RAG over wiki/docs | 🔧 Wired, not populated | Qdrant connected, knowledge base empty |
| Doc generation from compose files | 🟡 Parked | CPU quality insufficient — awaiting GPU |
| Email triage | 📋 Planned | Phase 3 — not built |

---

## Models

| Model | Size | Used For |
|-------|------|---------|
| `qwen2.5-coder:7b` | ~5 GB | Code review, YAML audits, compose analysis |
| `llama3.2:3b` | ~2 GB | Alert triage, Q&A, summarization |

Models must be pulled before workflows run. See [Ollama Model Management](/Gremlin-Grimoire/Runbooks/Model-Management).

---

## Sections

| | |
|---|---|
| [Stack](/Gremlin-Grimoire/Stack/Build-Config) | Full build config, volumes, env vars, compose YAML |
| [Workflows](/Gremlin-Grimoire/Workflows/Forgejo-Audit) | All n8n workflows — architecture, patterns, gotchas |
| [Runbooks](/Gremlin-Grimoire/Runbooks/Deploy) | Deploy, model management, troubleshooting |

---

## Planned Evolution

- **Homelable MCP backend** — next up. Provides tool-use for infra Q&A (topology, running services, resource usage). Blocked until the Homelable stack is deployed.
- **GPU support** — unlocks doc generation and larger models. The compose GPU block is commented out, ready to enable.
- **Gremlin role variants** — specialized personas per domain (Proxy Gremlin, Storage Gremlin, Security Gremlin, etc.) with mood states and dynamic badge serving via Caddy.
- **RAG knowledge base population** — index all Wiki.js pages and the compose template standard into Qdrant.
- **Gremlin Router** — dedicated Flask container for webhook routing (currently handled directly by n8n).
73
Gremlin-Grimoire/Runbooks/Deploy.md
Normal file
@ -0,0 +1,73 @@
---
title: Deploy Gremlin Stack
description: How to deploy and redeploy the Gremlin AI stack
published: true
date: 2026-04-12T00:00:00.000Z
tags: gremlin, deploy, runbook
editor: markdown
dateCreated: 2026-04-12T00:00:00.000Z
---

# Deploy Gremlin Stack

All Gremlin services run on `docker4` (hermes), pinned via `node.hostname == docker4`.

---

## Prerequisites

```bash
# On docker4 — create volume directories
mkdir -p /DockerVol/ollama
mkdir -p /DockerVol/open-webui
mkdir -p /DockerVol/qdrant

# n8n requires specific ownership
mkdir -p /DockerVol/n8n
chown -R 1000:1000 /DockerVol/n8n
```

---

## Deploy

```bash
cd ~/services && git pull
cd swarm/stack/Gremlin
set -a && source .env && set +a
docker stack config --compose-file gremlin-stack.yml > resolved.yml
docker stack deploy --compose-file resolved.yml gremlin
rm resolved.yml
docker stack services gremlin
```

---

## Pull Models After Deploy

Models must be pulled before n8n workflows run. Ollama returns a silent model-not-found error if workflows fire first.

```bash
docker exec $(docker ps -qf name=gremlin_ollama) ollama pull llama3.2:3b
docker exec $(docker ps -qf name=gremlin_ollama) ollama pull qwen2.5-coder:7b

# Verify
docker exec $(docker ps -qf name=gremlin_ollama) ollama list
```

---

## Verify Open WebUI Secret Key

Check that `WEBUI_SECRET_KEY` in `.env` on docker4 is set to a real secret, not the placeholder `change-this-secret-key`.
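A quick pre-deploy guard can catch the placeholder automatically. This is a sketch, not part of the stack; `check_secret` is a hypothetical helper:

```shell
# Hypothetical pre-deploy guard: fail if .env still carries the shipped
# placeholder value for WEBUI_SECRET_KEY.
check_secret() {
  local env_file="$1"
  if grep -q '^WEBUI_SECRET_KEY=change-this-secret-key$' "$env_file" 2>/dev/null; then
    echo "placeholder"
    return 1
  fi
  echo "ok"
}

# Usage on docker4, before `docker stack deploy`:
#   check_secret ~/services/swarm/stack/Gremlin/.env || exit 1
# A real key can be generated with: openssl rand -base64 32
```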

---

## Service URLs After Deploy

| Service | Internal | External |
|---------|----------|---------|
| Ollama | `http://ollama:11434` | `http://ollama.netgrimoire.com:11434` |
| Open WebUI | `http://open-webui:8080` | `https://ai.netgrimoire.com` |
| Qdrant | `http://qdrant:6333` | `http://qdrant.netgrimoire.com:6333/dashboard` |
| n8n | `http://n8n:5678` | `https://n8n.netgrimoire.com` |
41
Gremlin-Grimoire/Runbooks/Model-Management.md
Normal file
@ -0,0 +1,41 @@
---
title: Ollama Model Management
description: Pulling, verifying, and managing models on the Gremlin stack
published: true
date: 2026-04-12T00:00:00.000Z
tags: gremlin, ollama, models, runbook
editor: markdown
dateCreated: 2026-04-12T00:00:00.000Z
---

# Ollama Model Management

## Pull Required Models

Run on docker4 after any fresh deploy or after the Ollama container is recreated:

```bash
docker exec $(docker ps -qf name=gremlin_ollama) ollama pull llama3.2:3b
docker exec $(docker ps -qf name=gremlin_ollama) ollama pull qwen2.5-coder:7b
```

## Verify Models Loaded

```bash
docker exec $(docker ps -qf name=gremlin_ollama) ollama list
```

## Model Reference

| Model | Size | Pull Time (CPU) | Used By |
|-------|------|----------------|---------|
| `llama3.2:3b` | ~2 GB | ~5 min | Kuma triage, Open WebUI |
| `qwen2.5-coder:7b` | ~5 GB | ~15 min | Forgejo audit, Open WebUI |

## Models Storage Path

`/DockerVol/ollama` — survives container restarts and redeployments.

## ⚠ Pull Before Workflows Run

n8n workflows fail silently if models aren't present. Ollama returns a model-not-found response but n8n may not surface this as an obvious error. Always pull models immediately after deploy before enabling workflows.
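One way to make that failure loud is a small guard that checks `ollama list` output before workflows are enabled. `require_models` below is a sketch, not part of the stack:

```shell
# Hypothetical guard: read `ollama list` output on stdin and report any
# required model that is missing. Returns non-zero if anything is absent.
require_models() {
  local list m missing=0
  list=$(cat)
  for m in "$@"; do
    if ! grep -q "^${m}" <<<"$list"; then
      echo "missing: $m"
      missing=1
    fi
  done
  return $missing
}

# Usage on docker4:
#   docker exec $(docker ps -qf name=gremlin_ollama) ollama list \
#     | require_models llama3.2:3b qwen2.5-coder:7b || echo "pull models first"
```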
64
Gremlin-Grimoire/Runbooks/Troubleshooting.md
Normal file
@ -0,0 +1,64 @@
---
title: Gremlin Troubleshooting
description: Common Gremlin stack problems and fixes
published: true
date: 2026-04-12T00:00:00.000Z
tags: gremlin, troubleshooting, runbook
editor: markdown
dateCreated: 2026-04-12T00:00:00.000Z
---

# Gremlin Troubleshooting

## n8n Won't Start / Permission Error

```bash
# On docker4
chown -R 1000:1000 /DockerVol/n8n
docker service update --force gremlin_n8n
```

## Workflow Fails Silently on Ollama Call

Model not pulled. Ollama returns model-not-found, but n8n may not surface it clearly.

```bash
docker exec $(docker ps -qf name=gremlin_ollama) ollama list
# If model missing:
docker exec $(docker ps -qf name=gremlin_ollama) ollama pull llama3.2:3b
docker exec $(docker ps -qf name=gremlin_ollama) ollama pull qwen2.5-coder:7b
```

## Forgejo Webhook Not Reaching n8n

Add to Forgejo `app.ini`:

```ini
[webhook]
ALLOWED_HOST_LIST = *
```

Restart Forgejo. Required when `OFFLINE_MODE = true`.

## Caddy Routes to Wrong Container IP

Ensure every Gremlin service includes this in its labels:

```yaml
caddy_ingress_network: netgrimoire
```

Never use `{{upstreams PORT}}` — it breaks during `docker stack config` preprocessing. Use `caddy.reverse_proxy: servicename:PORT` instead.
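For example, an Ollama-style service entry would carry labels along these lines (a sketch; the hostname and port are taken from this stack's service table, the rest is illustrative):

```yaml
deploy:
  labels:
    caddy: ollama.netgrimoire.com
    caddy.reverse_proxy: ollama:11434
    caddy_ingress_network: netgrimoire
```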

## Audit Workflow Times Out

Check that `N8N_RUNNERS_TASK_TIMEOUT` is set to `3600` in the n8n environment. The default timeout is too short for 67-file audit runs.

## n8n Code Node Can't Access Env Vars

Set `N8N_BLOCK_ENV_ACCESS_IN_NODE=false` in the n8n environment.

## Open WebUI Can't Connect to Qdrant

Verify both services are on the `netgrimoire` overlay and pinned to `docker4`. Qdrant's gRPC port is 6334; REST is 6333.

## Audit Reports Not Committing to Forgejo

Check that the write token is set in n8n credentials. The read and write tokens are separate — confirm the workflow is using the write token for commit operations (POST for new files, PUT+SHA for updates).
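That POST/PUT distinction maps onto the Forgejo contents API roughly as follows. A sketch against a live instance: the host, owner, repo, and file path are placeholders, and file content must be base64-encoded:

```bash
# Create a new file (POST, no SHA needed)
curl -X POST "https://forgejo.example/api/v1/repos/OWNER/REPO/contents/reports/audit.md" \
  -H "Authorization: token $WRITE_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"content": "'"$(base64 -w0 report.md)"'", "message": "audit: new report"}'

# Update an existing file (PUT, must include the file's current SHA)
SHA=$(curl -s -H "Authorization: token $READ_TOKEN" \
  "https://forgejo.example/api/v1/repos/OWNER/REPO/contents/reports/audit.md" | jq -r .sha)
curl -X PUT "https://forgejo.example/api/v1/repos/OWNER/REPO/contents/reports/audit.md" \
  -H "Authorization: token $WRITE_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"content": "'"$(base64 -w0 report.md)"'", "sha": "'"$SHA"'", "message": "audit: update report"}'
```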
503
Gremlin-Grimoire/Stack/Agent-Docs.md
Normal file
@ -0,0 +1,503 @@
---
title: Ollama with agent
description: The smart home reference
published: true
date: 2026-04-02T21:11:09.564Z
tags:
editor: markdown
dateCreated: 2026-02-18T22:14:41.533Z
---

# AI Automation Stack - Ollama + n8n + Open WebUI

## Overview

This stack provides a complete self-hosted AI automation solution for homelab infrastructure management, documentation generation, and intelligent monitoring. The system consists of four core components that work together to provide AI-powered workflows and knowledge management.

## Architecture

```
┌─────────────────────────────────────────────────┐
│              AI Automation Stack                │
│                                                 │
│   Open WebUI ────────┐                          │
│   (Chat Interface)   │                          │
│        │             │                          │
│        ▼             ▼                          │
│      Ollama ◄──── Qdrant                        │
│   (LLM Runtime)   (Vector DB)                   │
│        ▲                                        │
│        │                                        │
│       n8n                                       │
│   (Workflow Engine)                             │
│        │                                        │
│        ▼                                        │
│   Forgejo │ Wiki.js │ Monitoring                │
└─────────────────────────────────────────────────┘
```

## Components

### Ollama
- **Purpose**: Local LLM runtime engine
- **Port**: 11434
- **Resource Usage**: 4-6GB RAM (depending on model)
- **Recommended Models**:
  - `qwen2.5-coder:7b` - Code analysis and documentation
  - `llama3.2:3b` - General queries and chat
  - `phi3:mini` - Lightweight alternative

### Open WebUI
- **Purpose**: User-friendly chat interface with built-in RAG (Retrieval Augmented Generation)
- **Port**: 3000
- **Features**:
  - Document ingestion from Wiki.js
  - Conversational interface for querying documentation
  - RAG pipeline for context-aware responses
  - Multi-model support
- **Access**: `http://your-server-ip:3000`

### Qdrant
- **Purpose**: Vector database for semantic search and RAG
- **Ports**: 6333 (HTTP), 6334 (gRPC)
- **Resource Usage**: ~1GB RAM
- **Function**: Stores embeddings of your documentation, code, and markdown files
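Collections can be inspected or created over Qdrant's REST interface. A sketch: the collection name is illustrative, and the vector `size` must match the dimensionality of your embedding model:

```bash
# Create a collection for documentation embeddings
curl -X PUT "http://localhost:6333/collections/homelab-docs" \
  -H "Content-Type: application/json" \
  -d '{"vectors": {"size": 384, "distance": "Cosine"}}'

# List existing collections
curl "http://localhost:6333/collections"
```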

### n8n
- **Purpose**: Workflow automation and orchestration
- **Port**: 5678
- **Default Credentials**:
  - Username: `admin`
  - Password: `change-this-password` (⚠️ **Change this immediately**)
- **Access**: `http://your-server-ip:5678`

## Installation

### Prerequisites
- Docker and Docker Compose installed
- 16GB RAM minimum (8GB available for the stack)
- 50GB disk space for models and data

### Deployment Steps

1. **Create directory structure**:
   ```bash
   mkdir -p ~/ai-stack/n8n/workflows
   cd ~/ai-stack
   ```

2. **Download the compose file**:
   ```bash
   # Place the ai-stack-compose.yml in this directory
   wget [your-internal-url]/ai-stack-compose.yml
   ```

3. **Configure environment variables**:
   ```bash
   # Edit the compose file and change:
   # - WEBUI_SECRET_KEY
   # - N8N_BASIC_AUTH_PASSWORD
   # - WEBHOOK_URL (use your server's IP)
   # - GENERIC_TIMEZONE
   nano ai-stack-compose.yml
   ```

4. **Start the stack**:
   ```bash
   docker-compose -f ai-stack-compose.yml up -d
   ```

5. **Pull Ollama models**:
   ```bash
   docker exec -it ollama ollama pull qwen2.5-coder:7b
   docker exec -it ollama ollama pull llama3.2:3b
   ```

6. **Verify services**:
   ```bash
   docker-compose -f ai-stack-compose.yml ps
   ```

## Configuration

### Open WebUI Setup

1. Navigate to `http://your-server-ip:3000`
2. Create your admin account (the first user becomes admin)
3. Go to **Settings → Connections** and verify the Ollama connection
4. Configure Qdrant:
   - Host: `qdrant`
   - Port: `6333`

### Setting Up RAG for Wiki.js

1. In Open WebUI, go to **Workspace → Knowledge**
2. Create a new collection: "Homelab Documentation"
3. Add sources:
   - **URL Crawl**: Enter your Wiki.js base URL
   - **File Upload**: Upload markdown files from repositories
4. Process and index the documents

### n8n Initial Configuration

1. Navigate to `http://your-server-ip:5678`
2. Log in with the credentials from the docker-compose file
3. Import starter workflows from the `/n8n/workflows/` directory

## Use Cases

### 1. Automated Documentation Generation

**Workflow**: Forgejo webhook → n8n → Ollama → Wiki.js

When code is pushed to Forgejo:
1. n8n receives the webhook from Forgejo
2. Extracts changed files and repo context
3. Sends them to Ollama with the prompt: "Generate documentation for this code"
4. Posts the generated docs to Wiki.js via API

**Example n8n Workflow**:
```
Webhook Trigger
  → HTTP Request (Forgejo API - get file contents)
  → Ollama LLM Node (generate docs)
  → HTTP Request (Wiki.js API - create/update page)
  → Send notification (completion)
```

### 2. Docker-Compose Standardization

**Workflow**: Repository scan → compliance check → issue creation

1. n8n runs on a schedule (daily/weekly)
2. Queries the Forgejo API for all repositories
3. Scans for `docker-compose.yml` files
4. Compares against template standards stored in Qdrant
5. Generates a compliance report with Ollama
6. Creates Forgejo issues for non-compliant repos
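In the node notation used elsewhere in this document, that schedule could look like the following sketch (endpoints and node names are illustrative):

```
Schedule Trigger (weekly)
  → HTTP Request (Forgejo API - /repos/search)
  → Loop Over Items (each repo)
  → HTTP Request (fetch docker-compose.yml)
  → Qdrant Query (retrieve template standard)
  → Ollama Node (compare file against standard, emit findings)
  → IF non-compliant → HTTP Request (Forgejo API - create issue)
```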

### 3. Intelligent Alert Processing

**Workflow**: Monitoring alert → AI analysis → smart routing

1. Beszel/Uptime Kuma sends a webhook to n8n
2. n8n queries historical data and context
3. Ollama analyzes:
   - Is this expected? (scheduled backup, known maintenance)
   - Severity level
   - Recommended action
4. Routes appropriately:
   - Critical: Immediate notification (Telegram/email)
   - Warning: Log and monitor
   - Info: Suppress (expected behavior)

### 4. Email Monitoring & Triage

**Workflow**: IMAP polling → AI classification → action routing

1. n8n polls the email inbox every 5 minutes
2. Filters for keywords: "alert", "critical", "down", "failed"
3. Ollama classifies urgency and determines if the message is actionable
4. Routes based on classification:
   - Urgent: Forward to you immediately
   - Informational: Daily digest
   - Spam: Archive
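A sketch of the same flow in node notation (the trigger settings and routes are illustrative):

```
IMAP Email Trigger (poll every 5 min)
  → Filter (subject/body contains: alert, critical, down, failed)
  → Ollama Node (classify: urgent / informational / spam)
  → Switch Node (route by classification)
      - urgent        → Send Email (forward immediately)
      - informational → Append to daily digest
      - spam          → Move to archive folder
```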

## Common Workflows

### Example: Repository Documentation Generator

```javascript
// n8n workflow nodes:

1. Schedule Trigger (daily at 2 AM)
   ↓
2. HTTP Request - Forgejo API
   URL: http://forgejo:3000/api/v1/repos/search
   Method: GET
   ↓
3. Loop Over Items (each repo)
   ↓
4. HTTP Request - Get repo files
   URL: {{$node["Forgejo API"].json["clone_url"]}}/contents
   ↓
5. Filter - Find docker-compose.yml and README.md
   ↓
6. Ollama Node
   Model: qwen2.5-coder:7b
   Prompt: "Analyze this docker-compose file and generate comprehensive
            documentation including: purpose, services, ports, volumes,
            environment variables, and setup instructions."
   ↓
7. HTTP Request - Wiki.js API
   URL: http://wikijs:3000/graphql
   Method: POST
   Body: {mutation: createPage(...)}
   ↓
8. Send Notification
   Service: Telegram/Email
   Message: "Documentation updated for {{repo_name}}"
```

### Example: Alert Intelligence Workflow

```javascript
// n8n workflow nodes:

1. Webhook Trigger
   Path: /webhook/monitoring-alert
   ↓
2. Function Node - Parse Alert Data
   JavaScript: Extract service, metric, value, timestamp
   ↓
3. HTTP Request - Query Historical Data
   URL: http://beszel:8090/api/metrics/history
   ↓
4. Ollama Node
   Model: llama3.2:3b
   Context: Your knowledge base in Qdrant
   Prompt: "Alert: {{alert_message}}
            Historical context: {{historical_data}}
            Is this expected behavior?
            What's the severity?
            What action should be taken?"
   ↓
5. Switch Node - Route by Severity
   Conditions:
   - Critical: Route to immediate notification
   - Warning: Route to monitoring channel
   - Info: Route to log only
   ↓
6a. Send Telegram (Critical path)
6b. Post to Slack (Warning path)
6c. Write to Log (Info path)
```

## Maintenance

### Model Management

```bash
# List installed models
docker exec -it ollama ollama list

# Update a model
docker exec -it ollama ollama pull qwen2.5-coder:7b

# Remove unused models
docker exec -it ollama ollama rm old-model:tag
```

### Backup Important Data

```bash
# Backup Qdrant vector database
docker-compose -f ai-stack-compose.yml stop qdrant
tar -czf qdrant-backup-$(date +%Y%m%d).tar.gz ./qdrant_data/
docker-compose -f ai-stack-compose.yml start qdrant

# Backup n8n workflows (exported automatically to ./n8n/workflows)
tar -czf n8n-backup-$(date +%Y%m%d).tar.gz ./n8n_data/

# Backup Open WebUI data
tar -czf openwebui-backup-$(date +%Y%m%d).tar.gz ./open_webui_data/
```

### Log Monitoring

```bash
# View all stack logs
docker-compose -f ai-stack-compose.yml logs -f

# View a specific service
docker logs -f ollama
docker logs -f n8n
docker logs -f open-webui
```

### Resource Monitoring

```bash
# Check resource usage
docker stats

# Expected usage:
# - ollama: 4-6GB RAM (with model loaded)
# - open-webui: ~500MB RAM
# - qdrant: ~1GB RAM
# - n8n: ~200MB RAM
```

## Troubleshooting

### Ollama Not Responding

```bash
# Check if Ollama is running
docker logs ollama

# Restart Ollama
docker restart ollama

# Test the Ollama API
curl http://localhost:11434/api/tags
```

### Open WebUI Can't Connect to Ollama

1. Check network connectivity:
   ```bash
   docker exec -it open-webui ping ollama
   ```

2. Verify the Ollama URL in Open WebUI settings
3. Restart both containers:
   ```bash
   docker restart ollama open-webui
   ```

### n8n Workflows Failing

1. Check n8n logs:
   ```bash
   docker logs n8n
   ```

2. Verify webhook URLs are accessible
3. Test the Ollama connection from n8n:
   - Create a test workflow
   - Add an Ollama node
   - Run an execution

### Qdrant Connection Issues

```bash
# Check Qdrant health
curl http://localhost:6333/health

# View Qdrant logs
docker logs qdrant

# Restart if needed
docker restart qdrant
```

## Performance Optimization

### Model Selection by Use Case

- **Quick queries, chat**: `llama3.2:3b` or `phi3:mini` (fastest)
- **Code analysis**: `qwen2.5-coder:7b` or `deepseek-coder:6.7b`
- **Complex reasoning**: `mistral:7b` or `llama3.1:8b`

### n8n Workflow Optimization

- Use **Wait** nodes to batch operations
- Enable **Execute Once** for loops to reduce memory
- Store large data in temporary files instead of node output
- Use **Split In Batches** for processing large datasets

### Qdrant Performance

- Default settings are optimized for homelab use
- Increase `collection_shards` if indexing >100,000 documents
- Enable quantization for large collections

## Security Considerations

### Change Default Credentials

```bash
# Generate a secure password
openssl rand -base64 32

# Update in docker-compose.yml:
# - WEBUI_SECRET_KEY
# - N8N_BASIC_AUTH_PASSWORD
```

### Network Isolation

Consider using a reverse proxy (Traefik, Nginx Proxy Manager) with authentication:
- Limit external access to Open WebUI only
- Keep n8n, Ollama, and Qdrant on the internal network
- Use a VPN for remote access

### API Security

- Use strong API tokens for Wiki.js and Forgejo integrations
- Rotate credentials periodically
- Audit n8n workflow permissions

## Integration Points

### Connecting to Existing Services

**Uptime Kuma**:
- Configure webhook alerts → n8n webhook URL
- Path: `http://your-server-ip:5678/webhook/uptime-kuma`

**Beszel**:
- Use the Shoutrrr webhook format
- URL: `http://your-server-ip:5678/webhook/beszel`

**Forgejo**:
- Repository webhooks for push events
- URL: `http://your-server-ip:5678/webhook/forgejo-push`
- Enable in repo settings → Webhooks

**Wiki.js**:
- GraphQL API endpoint: `http://wikijs:3000/graphql`
- Create an API key in the Wiki.js admin panel
- Store it in n8n credentials
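A page-creation call through that endpoint looks roughly like the following. This is a sketch of the Wiki.js `pages.create` mutation; the field list and content are illustrative, so verify the schema against your Wiki.js version before relying on it:

```bash
curl -X POST "http://wikijs:3000/graphql" \
  -H "Authorization: Bearer $WIKIJS_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"query": "mutation ($content: String!, $title: String!, $path: String!) { pages { create(content: $content, title: $title, path: $path, description: \"\", editor: \"markdown\", isPublished: true, isPrivate: false, locale: \"en\", tags: []) { responseResult { succeeded message } } } }", "variables": {"content": "# Generated docs", "title": "Generated Docs", "path": "generated/docs"}}'
```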
## Advanced Features

### Creating Custom n8n Nodes

For frequently used Ollama prompts, create custom nodes:

1. Go to n8n → Settings → Community Nodes
2. Install `n8n-nodes-ollama-advanced` if available
3. Or create Function nodes with reusable code

### Training Custom Models

While Ollama doesn't support fine-tuning directly, you can:

1. Use RAG with your specific documentation
2. Create detailed system prompts in n8n
3. Store organization-specific context in Qdrant

### Multi-Agent Workflows

Chain multiple Ollama calls for complex tasks:

```
Planning Agent → Execution Agent → Review Agent → Output
```

Example: Code refactoring

1. Planning: Analyze code and create refactoring plan
2. Execution: Generate refactored code
3. Review: Check for errors and improvements
4. Output: Create pull request with changes
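The chain above can be sketched as a small shell helper. It uses Ollama's standard `/api/generate` endpoint with `"stream": false`; the host, port, and model names are assumptions for this stack:

```shell
# call_ollama MODEL PROMPT — prints the "response" field from /api/generate
call_ollama() {
  body=$(python3 -c 'import json,sys; print(json.dumps({"model": sys.argv[1], "prompt": sys.argv[2], "stream": False}))' "$1" "$2")
  curl -s http://localhost:11434/api/generate -d "$body" \
    | python3 -c 'import json,sys; print(json.load(sys.stdin)["response"])'
}

# Stage 1 feeds Stage 2 — each agent's output becomes the next prompt:
# plan=$(call_ollama "llama3.2:3b" "Create a refactoring plan for: $code")
# result=$(call_ollama "llama3.2:3b" "Apply this plan: $plan to this code: $code")
```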
## Resources

- **Ollama Documentation**: https://ollama.ai/docs
- **Open WebUI Docs**: https://docs.openwebui.com
- **n8n Documentation**: https://docs.n8n.io
- **Qdrant Docs**: https://qdrant.tech/documentation

## Support

For issues or questions:

1. Check container logs first
2. Review this documentation
3. Search n8n community forums
4. Check Ollama Discord/GitHub issues

---

**Last Updated**: {{current_date}}
**Maintained By**: Homelab Admin
**Status**: Production
383
Gremlin-Grimoire/Stack/Build-Config.md
Normal file
File diff suppressed because one or more lines are too long
194
Gremlin-Grimoire/Stack/User-Guide.md
Normal file
File diff suppressed because one or more lines are too long
105
Gremlin-Grimoire/Workflows/Forgejo-Audit.md
Normal file
---
title: Forgejo Audit Workflow
description: Weekly automated YAML compliance audit via n8n + Ollama
published: true
date: 2026-04-12T00:00:00.000Z
tags: gremlin, n8n, audit, forgejo
editor: markdown
dateCreated: 2026-04-12T00:00:00.000Z
---

# Forgejo Audit Workflow

**Status:** ✅ Live and confirmed working

Runs every Monday at 06:00. Walks all compose YAML files in `services/swarm/` and `services/swarm/stack/*/`, audits each one against the Swarm template standard using `qwen2.5-coder:7b`, and commits full reports to Forgejo + sends a summary to ntfy.

---

## What It Audits

Each file is checked for:

- Homepage labels on all services
- Uptime Kuma labels on all services
- Caddy labels on exposed services
- `node.platform.arch` exclusion constraints (ARM default)
- Volume paths follow `/DockerVol/` or `/data/nfs/znas/Docker/` convention
- No forbidden fields (`version:`, `container_name:`, `restart:`, `depends_on:`)
- `endpoint_mode: dnsrr` not used
- `diun.enable: "true"` present
- Network references `netgrimoire` external overlay

---

## Scope

~67 files total across `swarm/` (flat single-service YAMLs) and `swarm/stack/*/` (grouped stacks).

---

## Outputs

| Output | Where | Content |
|--------|-------|---------|
| ntfy notification | `gremlin-audits` topic | Short FAIL summary per file |
| Forgejo commit | `Netgrimoire/Audits/AUDIT-<name>-<date>.md` | Full audit report (POST new / PUT+SHA update) |

---

## n8n Architecture

```
Schedule Trigger (Mon 06:00)
  → Forgejo API: list all files in swarm/ and swarm/stack/*/
  → Loop Over Items (splitInBatches, batch=1)
      → Code node: fetch file content via Forgejo API
      → Code node: build Ollama prompt
      → Code node: POST to Ollama (qwen2.5-coder:7b)
      → Code node: parse result, build report markdown
      → Code node: commit report to Forgejo (POST or PUT+SHA)
      → Code node: send ntfy summary if FAIL
  → Loop feedback connection drives iteration
```

---

## Critical Patterns

All Forgejo and Ollama API calls use `this.helpers.httpRequest()` in Code nodes — **not** HTTP Request nodes. HTTP Request nodes hit body expression limits on large prompts.

Code nodes in "Run Once for Each Item" mode must return `{ json: ... }`, not `[{ json: ... }]`.

Loop Over Items (splitInBatches, batch=1) plus a feedback connection from the last node back to the loop drives iteration over multiple files.

---

## Critical Environment Variables

| Variable | Value | Why |
|----------|-------|-----|
| `N8N_BLOCK_ENV_ACCESS_IN_NODE` | `false` | Allows env var access inside Code nodes |
| `N8N_RUNNERS_TASK_TIMEOUT` | `3600` | Prevents timeout on 67-file audit runs |
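In the n8n compose file these land under the service environment, along the lines of (sketch; the service key is assumed):

```yaml
services:
  n8n:
    environment:
      - N8N_BLOCK_ENV_ACCESS_IN_NODE=false
      - N8N_RUNNERS_TASK_TIMEOUT=3600
```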
---

## Forgejo API Tokens

| Token | Scope |
|-------|-------|
| Read token | Fetch file content from `traveler/services` |
| Write token | Commit audit reports to `traveler/Netgrimoire` |

Tokens stored in n8n credentials, not in compose env vars.
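The POST-new / PUT+SHA-update commit step maps onto the Forgejo contents API. A minimal shell equivalent of what the Code node does — host, file name, and the `$WRITE_TOKEN` variable are placeholders:

```shell
REPO_API="https://forgejo.example.com/api/v1/repos/traveler/Netgrimoire/contents"
FILE="Audits/AUDIT-demo-2026-04-12.md"
CONTENT=$(printf '%s' "# Audit report body" | base64)   # API expects base64 content

# If the file exists, PUT with its current SHA updates it; otherwise POST creates it.
# SHA=$(curl -s -H "Authorization: token $WRITE_TOKEN" "$REPO_API/$FILE" \
#   | python3 -c 'import json,sys; print(json.load(sys.stdin).get("sha", ""))')
# [ -n "$SHA" ] && METHOD=PUT || METHOD=POST
# curl -s -X "$METHOD" -H "Authorization: token $WRITE_TOKEN" \
#   -H 'Content-Type: application/json' \
#   -d "{\"message\": \"weekly audit\", \"content\": \"$CONTENT\", \"sha\": \"$SHA\"}" \
#   "$REPO_API/$FILE"

echo "$CONTENT" | base64 -d   # sanity check: decodes back to the report body
```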
---

## Forgejo Webhook Gotcha

If Forgejo webhooks fail to reach n8n, add to Forgejo `app.ini`:

```ini
[webhook]
ALLOWED_HOST_LIST = *
```

Required when `OFFLINE_MODE = true`. Restart Forgejo after edit.
63
Gremlin-Grimoire/Workflows/Kuma-Triage.md
Normal file
---
title: Kuma Alert Triage Workflow
description: Uptime Kuma webhook → Ollama analysis → ntfy alert
published: true
date: 2026-04-12T00:00:00.000Z
tags: gremlin, n8n, kuma, alerts
editor: markdown
dateCreated: 2026-04-12T00:00:00.000Z
---

# Kuma Alert Triage Workflow

**Status:** ✅ Live and confirmed working

Triggered by Uptime Kuma webhook on service DOWN or RECOVERED events. DOWN events are analyzed by `llama3.2:3b` before alerting. RECOVERED events skip AI and send a simple notification.

---

## Webhook URL

```
https://n8n.netgrimoire.com/webhook/gremlin-kuma-alert
```

Configure in Uptime Kuma: Settings → Notifications → Webhook → apply to all monitors.

---

## Flow

```
Kuma Webhook
  ├── DOWN path:
  │     → Parse payload (service name, URL, error)
  │     → Ollama (llama3.2:3b): triage prompt
  │     → ntfy gremlin-alerts (urgent priority) with AI analysis
  │
  └── RECOVERED path:
        → ntfy gremlin-alerts (normal priority, no AI call)
```

---

## Why Two Paths

AI triage is only useful for DOWN events — there's nothing to analyze on a recovery. Skipping Ollama on RECOVERED keeps notification latency near-instant for good news.

---

## ntfy Output Format

DOWN alert includes:

- Service name and URL
- Kuma error message
- Ollama's triage assessment (probable cause, suggested first step)

RECOVERED alert is a simple one-liner.
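A minimal shell equivalent of the two notification branches, using ntfy's `Title` and `Priority` headers. The ntfy server hostname and message text are illustrative placeholders; the `gremlin-alerts` topic is from this workflow:

```shell
SERVICE="wikijs"; URL="http://wikijs:3000"; ERROR="connect ECONNREFUSED"
AI_SUMMARY="Probable cause: container not running. First step: check docker service ps."

# DOWN message carries the Kuma error plus the AI triage text
DOWN_MSG="$SERVICE ($URL) is DOWN: $ERROR
$AI_SUMMARY"

# DOWN branch — urgent priority:
# curl -s -H "Title: $SERVICE DOWN" -H "Priority: urgent" \
#   -d "$DOWN_MSG" https://ntfy.example.com/gremlin-alerts
# RECOVERED branch — default priority, one-liner, no AI call:
# curl -s -H "Title: $SERVICE RECOVERED" \
#   -d "$SERVICE is back up" https://ntfy.example.com/gremlin-alerts
echo "$DOWN_MSG"
```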
---

## Parked: Doc Generation Workflows

Two additional doc generation workflows were built but are currently inactive. CPU-only `llama3.2:3b` output barely exceeds reformatting the source compose file — not useful enough to commit. Will be revisited when GPU support is added to the Gremlin stack.
522
Keystone-Grimoire/Docker/Caddy.md
Normal file
---
title: Caddy Reverse Proxy
description: Current and future config
published: true
date: 2026-02-25T01:50:20.558Z
tags:
editor: markdown
dateCreated: 2026-02-23T22:09:16.106Z
---

# Caddy Reverse Proxy

**Host:** znas (Docker Swarm node)
**Internal IP:** 192.168.5.10
**Data Path:** `/export/Docker/caddy/`
**Networks:** `netgrimoire` (service network), `vpn`
**Ports:** 80 (mapped to host 8900), 443

---

## Overview

Caddy serves as the primary reverse proxy for all public and internal web services. It uses the `caddy-docker-proxy` pattern, which allows services to register themselves with Caddy by adding Docker labels to their compose files — no manual Caddyfile edits required per service.

Configuration is **hybrid**: some services are defined entirely via Docker labels, others are defined statically in the Caddyfile, and most use both (labels for routing, Caddyfile for shared snippets). The `caddy-docker-proxy` container merges both sources at runtime.

---

## Current State

### Image

```yaml
image: lucaslorentz/caddy-docker-proxy:ci-alpine
```

This image provides the Docker Proxy module only. It has no CrowdSec, GeoIP, or rate limiting built in.

### Docker Compose (`/export/Docker/caddy/docker-compose.yml`)

```yaml
configs:
  caddy-basic-content:
    file: ./Caddyfile
    labels:
      caddy:

services:
  caddy:
    image: lucaslorentz/caddy-docker-proxy:ci-alpine
    ports:
      - 8900:80
      - 443:443
    environment:
      - CADDY_INGRESS_NETWORKS=netgrimoire
    networks:
      - netgrimoire
      - vpn
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - /export/Docker/caddy/Caddyfile:/etc/caddy/Caddyfile
      - /export/Docker/caddy:/data
      #- /export/Docker/caddy/logs:/var/log/caddy # Placeholder for CrowdSec log mount
    deploy:
      placement:
        constraints:
          - node.hostname == znas

networks:
  netgrimoire:
    external: true
  vpn:
    external: true
```

### Caddyfile (`/export/Docker/caddy/Caddyfile`)

The Caddyfile defines shared authentication snippets and static site blocks. These snippets are available to all services — including label-defined ones — via `import`.

```caddyfile
# ─────────────────────────────────────────────────────────────────────────────
# AUTH SNIPPETS
# ─────────────────────────────────────────────────────────────────────────────

(authentik) {
    route /outpost.goauthentik.io/* {
        reverse_proxy http://authentik:9000
    }

    forward_auth http://authentik:9000 {
        uri /outpost.goauthentik.io/auth/caddy
        header_up X-Forwarded-URI {http.request.uri}
        copy_headers X-Authentik-Username X-Authentik-Groups X-Authentik-Email \
            X-Authentik-Name X-Authentik-Uid X-Authentik-Jwt \
            X-Authentik-Meta-Jwks X-Authentik-Meta-Outpost X-Authentik-Meta-Provider \
            X-Authentik-Meta-App X-Authentik-Meta-Version
    }
}

(authelia) {
    forward_auth http://authelia:9091 {
        uri /api/verify?rd=https://login.wasted-bandwidth.net/
        copy_headers Remote-User Remote-Groups Remote-Email Remote-Name
    }
}

# ─────────────────────────────────────────────────────────────────────────────
# MAIL SNIPPETS
# ─────────────────────────────────────────────────────────────────────────────

(email-proxy) {
    redir https://mail.netgrimoire.com/sogo 301
}

(mailcow-proxy) {
    reverse_proxy nginx-mailcow:80
}

# ─────────────────────────────────────────────────────────────────────────────
# STATIC SITE BLOCKS — NETGRIMOIRE.COM
# ─────────────────────────────────────────────────────────────────────────────

cloud.netgrimoire.com {
    reverse_proxy http://nextcloud-aio-apache:11000
}

log.netgrimoire.com {
    reverse_proxy http://graylog:9000
}

win.netgrimoire.com {
    reverse_proxy http://192.168.5.10:8006
}

docker.netgrimoire.com {
    reverse_proxy http://portainer:9000
}

immich.netgrimoire.com {
    reverse_proxy http://192.168.5.10:2283
}

npm.netgrimoire.com {
    reverse_proxy http://librenms:8000
}

#jellyfin.netgrimoire.com {
#    reverse_proxy http://jellyfin:8096
#}

# ─────────────────────────────────────────────────────────────────────────────
# AUTHENTICATED — NETGRIMOIRE.COM
# ─────────────────────────────────────────────────────────────────────────────

dozzle.netgrimoire.com {
    import authentik
    reverse_proxy http://192.168.4.72:8043
}

dns.netgrimoire.com {
    import authentik
    reverse_proxy http://192.168.5.7:5380
}

webtop.netgrimoire.com {
    import authentik
    reverse_proxy http://webtop:3000
}

jackett.netgrimoire.com {
    import authentik
    reverse_proxy http://gluetun:9117
}

transmission.netgrimoire.com {
    import authentik
    reverse_proxy http://gluetun:9091
}

scrutiny.netgrimoire.com {
    import authentik
    reverse_proxy http://192.168.5.10:8081
}

# ─────────────────────────────────────────────────────────────────────────────
# AUTHENTICATED — WASTED-BANDWIDTH.NET (Authelia)
# ─────────────────────────────────────────────────────────────────────────────

stash.wasted-bandwidth.net {
    import authelia
    reverse_proxy http://192.168.5.10:9999
}

namer.wasted-bandwidth.net {
    import authelia
    reverse_proxy http://192.168.5.10:6980
}

# ─────────────────────────────────────────────────────────────────────────────
# PUBLIC — PNCHARRIS.COM / WASTED-BANDWIDTH.NET
# ─────────────────────────────────────────────────────────────────────────────

fish.pncharris.com {
    reverse_proxy http://web
}

www.wasted-bandwidth.net {
    reverse_proxy http://web
}

# ─────────────────────────────────────────────────────────────────────────────
# MAILCOW — MULTI-DOMAIN
# ─────────────────────────────────────────────────────────────────────────────

mail.netgrimoire.com, autodiscover.netgrimoire.com, autoconfig.netgrimoire.com, \
mail.wasted-bandwidth.net, autodiscover.wasted-bandwidth.net, autoconfig.wasted-bandwidth.net, \
mail.gnarlypandaproductions.com, autodiscover.gnarlypandaproductions.com, autoconfig.gnarlypandaproductions.com, \
mail.pncfishandmore.com, autodiscover.pncfishandmore.com, autoconfig.pncfishandmore.com, \
mail.pncharrisenterprises.com, autodiscover.pncharrisenterprises.com, autoconfig.pncharrisenterprises.com, \
mail.pncharris.com, autodiscover.pncharris.com, autoconfig.pncharris.com, \
mail.florosafd.org, autodiscover.florosafd.org, autoconfig.florosafd.org {
    import mailcow-proxy
}
```

### Docker Label Pattern (label-defined services)

Services not in the Caddyfile are registered via labels on their own containers. The snippet defined in the Caddyfile is available to them via `caddy.import`:

```yaml
labels:
  - caddy=homepage.netgrimoire.com
  - caddy.import=authentik
  - caddy.reverse_proxy={{upstreams 3000}}
```

For services that need no auth:

```yaml
labels:
  - caddy=myservice.netgrimoire.com
  - caddy.reverse_proxy={{upstreams 8080}}
```

---

## Authentication Layers

Two identity proxies are in use, each serving different domains/use cases:

| Provider | Domain Pattern | Snippet |
|----------|----------------|---------|
| Authentik | `*.netgrimoire.com` internal tools | `import authentik` |
| Authelia | `*.wasted-bandwidth.net` | `import authelia` |

Services without an auth import are either public (e.g. `fish.pncharris.com`) or carry their own authentication (e.g. Nextcloud, Graylog, Portainer).

---

## Current Security Posture

CrowdSec protection exists only at the **OPNsense firewall level** — IP reputation blocking before traffic reaches Caddy. CrowdSec does not currently inspect HTTP traffic at the application layer. This means:

- Known-bad IPs are blocked at the perimeter
- Application-layer attacks (SQLi in URLs, malicious paths, bad user agents, brute force on specific endpoints) are not blocked at the Caddy level
- Services behind Authentik/Authelia have an additional protection layer; unauthenticated public services do not

---

## Future State: CrowdSec + GeoIP + Rate Limiting

### Target Image

```yaml
image: ghcr.io/serfriz/caddy-crowdsec-geoip-ratelimit-security-dockerproxy:latest
```

This is a drop-in replacement for `lucaslorentz/caddy-docker-proxy`. All existing Docker labels and Caddyfile site blocks continue to work unchanged. The image is automatically rebuilt monthly and when Caddy releases updates — no custom image maintenance required.

**Included modules:**

- `caddy-docker-proxy` — same label-based config as current
- `caddy-crowdsec-bouncer` — inline HTTP blocking based on CrowdSec decisions
- `caddy-geoip` — GeoIP filtering at the application layer
- `caddy-ratelimit` — per-endpoint rate limiting
- `caddy-security` — additional auth/security middleware

### Updated Compose

```yaml
configs:
  caddy-basic-content:
    file: ./Caddyfile
    labels:
      caddy:

services:
  caddy:
    image: ghcr.io/serfriz/caddy-crowdsec-geoip-ratelimit-security-dockerproxy:latest
    ports:
      - 8900:80
      - 443:443
    environment:
      - CADDY_INGRESS_NETWORKS=netgrimoire
      - CADDY_DOCKER_EVENT_THROTTLE_INTERVAL=2000 # Prevents non-deterministic reload with CrowdSec module
      - CROWDSEC_API_KEY=BYSLg/wKOa7wlHYzChJpBVJA06Ukc7G6fKJCvBwjyZg
    networks:
      - netgrimoire
      - vpn
      - crowdsec_net
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - /export/Docker/caddy/Caddyfile:/etc/caddy/Caddyfile
      - /export/Docker/caddy:/data
      - caddy-logs:/var/log/caddy
    deploy:
      placement:
        constraints:
          - node.hostname == znas

  crowdsec:
    image: crowdsecurity/crowdsec
    restart: unless-stopped
    environment:
      COLLECTIONS: "crowdsecurity/caddy crowdsecurity/http-cve crowdsecurity/whitelist-good-actors"
      BOUNCER_KEY_CADDY: BYSLg/wKOa7wlHYzChJpBVJA06Ukc7G6fKJCvBwjyZg # Pre-registers the Caddy bouncer automatically
    volumes:
      - crowdsec-db:/var/lib/crowdsec/data
      - ./crowdsec/acquis.yaml:/etc/crowdsec/acquis.yaml
      - caddy-logs:/var/log/caddy:ro
    networks:
      - crowdsec_net
    deploy:
      placement:
        constraints:
          - node.hostname == znas

volumes:
  caddy-logs:
  crowdsec-db:

networks:
  netgrimoire:
    external: true
  vpn:
    external: true
  crowdsec_net:
    driver: overlay # Swarm overlay network
```

### CrowdSec Log Acquisition (`./crowdsec/acquis.yaml`)

```yaml
filenames:
  - /var/log/caddy/access.log
labels:
  type: caddy
```

### Environment File (`.env`)

```env
CROWDSEC_API_KEY=<generate-with-cscli-or-set-before-first-boot>
```

The `BOUNCER_KEY_CADDY` env var in the CrowdSec container pre-registers the bouncer key at startup. Set the same value in `.env` as `CROWDSEC_API_KEY` and both sides will be in sync on first boot — no need to run `cscli bouncers add` manually.
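A one-shot sketch for generating the shared key and writing it to `.env`; updating `BOUNCER_KEY_CADDY` in the compose file is left as a manual step here:

```shell
# Same generator as Bootstrap Step 3
KEY=$(openssl rand -hex 32)
# Consumed by the Caddy global block via {$CROWDSEC_API_KEY}
echo "CROWDSEC_API_KEY=$KEY" > .env
echo "Now set BOUNCER_KEY_CADDY: $KEY in the crowdsec service environment"
```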
### Updated Caddyfile Additions

Add a global block at the top of the Caddyfile and a new `crowdsec` snippet. All other existing content remains unchanged.

```caddyfile
# ─────────────────────────────────────────────────────────────────────────────
# GLOBAL BLOCK — add this at the very top before any snippets
# ─────────────────────────────────────────────────────────────────────────────

{
    crowdsec {
        api_url http://crowdsec:8080
        api_key {$CROWDSEC_API_KEY}
    }
    log {
        output file /var/log/caddy/access.log {
            roll_size 50mb
            roll_keep 5
        }
        format json
    }
}

# ─────────────────────────────────────────────────────────────────────────────
# CROWDSEC SNIPPET — add alongside existing auth snippets
# ─────────────────────────────────────────────────────────────────────────────

(crowdsec) {
    route {
        crowdsec
    }
}
```

### Applying CrowdSec to Existing Services

Once the snippet exists, add `import crowdsec` to site blocks and container labels. This is a **gradual rollout** — services without it remain fully functional, just without Caddy-level CrowdSec inspection (they still have OPNsense perimeter protection).

**In the Caddyfile:**

```caddyfile
# Before
cloud.netgrimoire.com {
    reverse_proxy http://nextcloud-aio-apache:11000
}

# After
cloud.netgrimoire.com {
    import crowdsec
    reverse_proxy http://nextcloud-aio-apache:11000
}

# With auth
dozzle.netgrimoire.com {
    import crowdsec
    import authentik
    reverse_proxy http://192.168.4.72:8043
}
```

**In Docker labels:**

```yaml
labels:
  - caddy=homepage.netgrimoire.com
  - caddy.import=crowdsec
  - caddy.import=authentik
  - caddy.reverse_proxy={{upstreams 3000}}
```

### CrowdSec Rollout Priority

Roll out `import crowdsec` in this order based on risk exposure:

**High priority — do first (public, no auth):**

- `cloud.netgrimoire.com` (Nextcloud)
- `immich.netgrimoire.com`
- `docker.netgrimoire.com` (Portainer)
- `fish.pncharris.com`
- `www.wasted-bandwidth.net`

**Medium priority — high value behind auth:**

- `log.netgrimoire.com` (Graylog)
- `win.netgrimoire.com` (Proxmox)
- All of `dozzle`, `dns`, `webtop`, `jackett`, `transmission`, `scrutiny`

**Lower priority — already protected by Authelia/Authentik:**

- `stash.wasted-bandwidth.net`
- `namer.wasted-bandwidth.net`
- All label-defined services behind auth

**Skip:**

- Mailcow block — handled by nginx-mailcow, different threat model

### Behavior if CrowdSec Container Goes Down

The bouncer is designed to **fail open** by default. If `crowdsec` is unreachable, Caddy continues serving traffic normally — enforcement is temporarily suspended but the site stays up. This is the safe default for a homelab. To change this behavior, set `enable_hard_fails true` in the global crowdsec block (this will cause 500 errors whenever CrowdSec is down — not recommended for a homelab).

---

## Bootstrap Steps

When ready to migrate to the new image:

**Step 1 — Add the CrowdSec global block and snippet to the Caddyfile** before changing the image. This ensures the Caddyfile is valid for the new image on startup.

**Step 2 — Create `./crowdsec/acquis.yaml`** with the content above.

**Step 3 — Create `.env`** with a strong random value for `CROWDSEC_API_KEY`:

```bash
openssl rand -hex 32
```

**Step 4 — Update the image and add the CrowdSec service to the compose file**, then redeploy:

```bash
docker stack deploy -c docker-compose.yml caddy
```

**Step 5 — Verify CrowdSec is reading Caddy logs:**

```bash
docker exec <crowdsec_container> cscli metrics
```

Look for the `Acquisition Metrics` table showing hits from `/var/log/caddy/access.log`.

**Step 6 — Test a ban manually:**

```bash
# Ban a test IP (use the public IP of a machine you can curl from)
docker exec <crowdsec_container> cscli decisions add --ip 1.2.3.4 --duration 5m
# From that machine, verify Caddy now returns 403
curl -I https://yoursite.com
docker exec <crowdsec_container> cscli decisions delete --ip 1.2.3.4
```

**Step 7 — Gradually add `import crowdsec`** to site blocks and labels per the priority order above.

---

## File Layout

```
/export/Docker/caddy/
├── Caddyfile            # Shared snippets and static site blocks
├── docker-compose.yml   # Caddy + CrowdSec services
├── .env                 # CROWDSEC_API_KEY (future)
├── data/                # Caddy data volume (TLS certs, etc.)
├── logs/                # caddy-logs volume mount point (future)
└── crowdsec/
    └── acquis.yaml      # Tells CrowdSec where to read Caddy logs (future)
```

---

## Known Issues / Notes

- Port 80 is mapped to host port 8900 — this is intentional for Swarm. OPNsense NAT handles the external 80→8900 translation.
- The `CADDY_DOCKER_EVENT_THROTTLE_INTERVAL=2000` setting is **required** with the CrowdSec module to prevent non-deterministic domain matching behavior during container label reloads (see [issue #61](https://github.com/hslatman/caddy-crowdsec-bouncer/issues/61)).
- Jellyfin is commented out in the Caddyfile — likely served via a different path or disabled temporarily.
- The `web` upstream referenced by `fish.pncharris.com` and `www.wasted-bandwidth.net` resolves to a container named `web` on the `netgrimoire` network.
- The Authelia redirect URL is `https://login.wasted-bandwidth.net/` — update the snippet if this changes.
- The serfriz image is rebuilt on the **1st of each month** for module updates, and on every new Caddy release. Force a module update by recreating the container: `docker service update --force caddy_caddy`.
144
Keystone-Grimoire/Docker/Swarm-Template.md
Normal file
---
title: Docker Swarm Template Standard
description: Canonical YAML template and label rules for all Netgrimoire swarm services
published: true
date: 2026-04-12T00:00:00.000Z
tags: keystone, docker, swarm
editor: markdown
dateCreated: 2026-04-12T00:00:00.000Z
---

# Docker Swarm Template Standard

All Swarm YAML files in `services/swarm/` and `services/swarm/stack/` must follow this standard. The Gremlin audit workflow checks compliance weekly.

---

## Canonical Template

```yaml
# Deploy: docker stack deploy -c <service>.yaml <service>
services:
  <servicename>:
    image: <image>:latest
    environment:
      TZ: America/Chicago
    volumes:
      - /DockerVol/<servicename>:/config
      # - /data/nfs/znas/Docker/<servicename>:/data
    networks:
      - netgrimoire
    deploy:
      restart_policy:
        condition: any
        delay: 5s
        max_attempts: 3
        window: 120s
      placement:
        constraints:
          - node.hostname == znas
          - node.platform.arch != aarch64
          - node.platform.arch != arm
      labels:
        # Caddy
        caddy: <servicename>.netgrimoire.com
        caddy.reverse_proxy: <servicename>:<PORT>
        caddy.import: crowdsec
        caddy.import_1: authentik

        # Uptime Kuma
        kuma.<servicename>.http.name: <Service Name>
        kuma.<servicename>.http.url: https://<servicename>.netgrimoire.com

        # Homepage
        homepage.group: <Group>
        homepage.name: <Service Name>
        homepage.icon: <service>.png
        homepage.href: https://<servicename>.netgrimoire.com
        homepage.description: <Description>

        # DIUN
        diun.enable: "true"

networks:
  netgrimoire:
    external: true
```

---

## Forbidden Fields

Never use these at the service level:

| Field | Reason |
|-------|--------|
| `version:` | Deprecated in Compose v2+ |
| `container_name:` | Incompatible with Swarm replicas |
| `restart:` | Use `deploy.restart_policy` instead |
| `depends_on:` | Not supported in Swarm mode |
| `endpoint_mode: dnsrr` | Breaks internal DNS — always use VIP |
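For example, a Compose-style restart policy migrates into the `deploy` block like this (sketch; service name is a placeholder):

```yaml
# Before (forbidden in Swarm files)
services:
  myservice:
    restart: unless-stopped

# After (template-compliant)
services:
  myservice:
    deploy:
      restart_policy:
        condition: any
        delay: 5s
```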
---

## Volume Path Rules

| Path | When to Use |
|------|-------------|
| `/DockerVol/<service>` | Config, SQLite DBs, small app state. **Only valid with a `node.hostname` placement constraint.** |
| `/data/nfs/znas/Docker/<service>` | Bulk data, media, or any service without a hostname constraint |

---

## Placement Constraints

**Default (all services):**

```yaml
constraints:
  - node.hostname == znas
  - node.platform.arch != aarch64
  - node.platform.arch != arm
```

ARM exclusion prevents accidental scheduling on Pi vault/worker nodes. Override only if the service is ARM-specific.

For services pinned to docker4 (Gremlin stack):

```yaml
constraints:
  - node.hostname == docker4
  - node.platform.arch != aarch64
  - node.platform.arch != arm
```

---

## Caddy Label Rules

```yaml
caddy: servicename.netgrimoire.com      # no https:// prefix
caddy.reverse_proxy: servicename:PORT   # container name:port, NOT {{upstreams PORT}}
caddy.import: crowdsec                  # always both
caddy.import_1: authentik               # always both, no exceptions
```

Never use `{{upstreams PORT}}` — it breaks during `docker stack config` preprocessing.

**Wasted-bandwidth services** use the `wasted-bandwidth.net` domain and `caddy.import_1: authelia` instead of authentik.

---

## Deploy Workflow

```bash
# From services repo root
git add . && git commit -m "Add/update <service>" && git push

# On znas (or docker4 for Gremlin services)
cd ~/services && git pull
cd swarm/stack/<StackName>
set -a && source .env && set +a
docker stack config --compose-file <service>.yaml > resolved.yml
docker stack deploy --compose-file resolved.yml <service>
rm resolved.yml
docker stack services <service>
```
59
Keystone-Grimoire/Hosts/Host-Inventory.md
Normal file
---
title: Host Inventory
description: All Netgrimoire nodes — roles, IPs, services, hardware
published: true
date: 2026-04-12T00:00:00.000Z
tags: keystone, hosts
editor: markdown
dateCreated: 2026-04-12T00:00:00.000Z
---

# Host Inventory

## Swarm Cluster

| Host | Hostname | IP | Role | Runtime |
|------|----------|----|------|---------|
| znas | znas | 192.168.5.10 | NAS + Primary Swarm manager | Swarm manager + Compose |
| docker2 | — | — | VPN gateway | Compose only |
| docker3 | — | — | LibreNMS | Compose only |
| docker4 | hermes | 192.168.5.16 | Mail server + AI worker | Compose + Swarm worker |
| docker5 | — | 192.168.5.18 | Media host | Compose only |
| Pi nodes | various | various | Swarm workers + vault nodes | Swarm workers |

## Other Infrastructure

| Device | IP | Purpose |
|--------|----|---------|
| OPNsense firewall | 192.168.3.4 | Firewall, dual-WAN, NAT, WireGuard |
| Internal DNS | 192.168.5.7 | Technitium DNS |
| ISPConfig | 192.168.4.11 | Web/DNS hosting control panel |

## WAN

| Interface | IP | Status | Purpose |
|-----------|----|--------|---------|
| ATT (`igc1`) | 107.133.34.145/28 | Primary | 5 static IPs allocated |
| Cox | — | Retiring | Legacy WAN |

**ATT Static IP Assignments:**

| IP | Assigned To |
|----|-------------|
| .145 | Admin / default |
| .146 | Web services |
| .147 | Jellyfin |
| .148 | Mail (ATT_Mail — pending) |
| .149 | WireGuard / Spare |

## Pinned Services by Host

**znas** — Caddy, Forgejo, Wiki.js, Homepage, Uptime Kuma, AutoKuma, ntfy, Portainer, Authentik, LLDAP, Kopia, Vault, Nextcloud AIO, Immich, Joplin, n8n (Gremlin), all arr services, all media services

**docker4 (hermes)** — MailCow (Compose), Ollama, Open WebUI, Qdrant (Swarm, pinned docker4), Roundcube

**docker5** — Jellyfin, Jellyfinx (Compose)

**docker2** — Gluetun, Jackett, Transmission (Compose)

**docker3** — LibreNMS (Compose)
Keystone-Grimoire/Mail/Domain-Setup.md (new file, 401 lines)
---
title: Sample Domain Setup
description: Graymutt@nucking-futz.com
published: true
date: 2026-03-16T00:34:08.387Z
tags:
editor: markdown
dateCreated: 2026-02-25T22:02:27.719Z
---

# Mail Setup — nucking-futz.com

## Part 0 — OPNsense: Configure ATT_Mail Secondary IP

Before configuring DNS or Mailcow, the secondary AT&T static IP must be configured in OPNsense as a virtual IP on the WAN interface, with NAT rules set so that only mail traffic (SMTP ports 25, 465, 587 and IMAP ports 143, 993) uses this address. Webmail, the Mailcow admin UI, and all other traffic continue to use the primary WAN IP (107.133.34.145).

| Address | Purpose |
|---------|---------|
| 107.133.34.145 | Primary WAN — web, admin, everything else |
| 107.133.34.146 | ATT_Mail — SMTP/IMAP inbound and outbound only |

### Step 0.1 — Add Virtual IP

1. Go to **Interfaces → Virtual IPs → Settings**
2. Click **+ Add**
3. Set the following:

| Field | Value |
|-------|-------|
| Mode | IP Alias |
| Interface | WAN (igc1) |
| Network / Address | `107.133.34.146 / 28` |
| Description | `ATT_Mail` |

4. Click **Save**, then **Apply changes**

> The /28 subnet mask matches the AT&T block (107.133.34.144/28). All 5 static IPs in the block share this mask.
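Before saving the alias, a quick arithmetic check confirms the address really falls inside the /28 block. A minimal sketch (the helper function names are made up):

```shell
# Minimal sketch: verify an alias IP belongs to the AT&T /28 block.
ip_to_int() {           # dotted quad -> 32-bit integer
  local IFS=.
  set -- $1
  echo $(( ($1 << 24) + ($2 << 16) + ($3 << 8) + $4 ))
}

in_block() {            # usage: in_block <ip> <network> <prefixlen>
  local ip net mask
  ip=$(ip_to_int "$1")
  net=$(ip_to_int "$2")
  mask=$(( (0xFFFFFFFF << (32 - $3)) & 0xFFFFFFFF ))
  [ $(( ip & mask )) -eq $(( net & mask )) ]
}

in_block 107.133.34.146 107.133.34.144 28 && echo ".146 is inside the block"
in_block 107.133.34.160 107.133.34.144 28 || echo ".160 is outside the block"
```

The same check works for any future alias on this WAN block.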
### Step 0.2 — Outbound NAT for SMTP Traffic

This ensures Mailcow's outbound SMTP connections leave through the ATT_Mail IP rather than the primary WAN IP. OPNsense must be in **Hybrid** or **Manual** outbound NAT mode.

1. Go to **Firewall → NAT → Outbound**
2. Confirm mode is set to **Hybrid Outbound NAT** (or Manual — either works)
3. Click **Add** to create a new rule

**Rule for outbound SMTP (port 587 relay to MXRoute):**

| Field | Value |
|-------|-------|
| Interface | WAN |
| TCP/IP Version | IPv4 |
| Protocol | TCP |
| Source | `192.168.5.16 / 32` (Mailcow host) |
| Source Port | any |
| Destination | any |
| Destination Port | 587 |
| Translation / Target | `107.133.34.146` (ATT_Mail) |
| Description | `Mailcow outbound relay via ATT_Mail` |

4. Repeat for port **25** (direct outbound SMTP, if used) and port **465** (SMTPS)
5. Click **Save** and **Apply changes**

### Step 0.3 — Inbound NAT (Port Forwards) for Mail Ports

Route inbound connections on mail ports to Mailcow using the ATT_Mail IP as the external address.

1. Go to **Firewall → NAT → Port Forward**
2. Create rules for each mail port:

| External IP | Port(s) | Forward to | Description |
|-------------|---------|------------|-------------|
| 107.133.34.146 | 25 | 192.168.5.16:25 | SMTP inbound |
| 107.133.34.146 | 465 | 192.168.5.16:465 | SMTPS inbound |
| 107.133.34.146 | 587 | 192.168.5.16:587 | Submission inbound |
| 107.133.34.146 | 993 | 192.168.5.16:993 | IMAPS |
| 107.133.34.146 | 143 | 192.168.5.16:143 | IMAP (if needed) |

> **Do not** add port forwards for 80, 443, or 3443 (Mailcow admin/webmail ports) on this IP. Those remain on the primary WAN IP via Caddy.

3. Click **Save** and **Apply changes**

### Step 0.4 — Firewall Rules

Ensure the WAN firewall rules permit inbound traffic on the mail ports to the ATT_Mail IP. If you have a default deny-all WAN rule (recommended), add explicit pass rules:

1. Go to **Firewall → Rules → WAN**
2. Add pass rules for each port in the table above with destination `107.133.34.146`

### Step 0.5 — Verify

```bash
# From outside your network, confirm the mail IP is live
telnet 107.133.34.146 25
# Should see: 220 hermes.netgrimoire.com ESMTP

# Confirm primary WAN IP does NOT respond on port 25
telnet 107.133.34.145 25
# Should time out or be refused

# Check that Mailcow outbound connections leave from the ATT_Mail IP
# Send a test to check-auth@verifier.port25.com and inspect the Return-Path
# or check the Received: header — the sending IP should be 107.133.34.146
```

> ⚠ If the verify step shows port 25 still responding on 107.133.34.145, check that no leftover port forward rules exist on the primary WAN IP for mail ports.

---

## Overview

This guide covers complete mail setup for `nucking-futz.com` using MXRoute as the inbound gateway and Mailcow as the mailbox host. MXRoute receives all inbound mail from the internet (solving residential IP filtering issues with banks and financial institutions) and forwards to Mailcow for storage and retrieval. Mailcow handles outbound mail via the MXRoute SMTP relay.

**Architecture:**

```
Inbound:  Internet → MXRoute (commercial IP) → Mailcow (192.168.5.16)
Outbound: Mailcow → MXRoute SMTP relay → Internet
```

**Why two domains in Mailcow:**
MXRoute forwarders require a valid destination email address. You cannot forward `graymutt@nucking-futz.com` back to `graymutt@nucking-futz.com` — that loops. The solution is to have Mailcow own a subdomain (`mail.nucking-futz.com`) with its own MX record pointing directly to your server. MXRoute forwards to `graymutt@mail.nucking-futz.com`, Mailcow delivers locally, and an alias domain maps `nucking-futz.com` back so users only ever see and use `graymutt@nucking-futz.com`.

---

## Prerequisites

- MXRoute account active with DirectAdmin access
- Mailcow running at 192.168.5.16
- DNS management access for nucking-futz.com
- Your MXRoute server hostname from your MXRoute welcome email (e.g. `arrow.mxrouting.net`)

---

## Step 1 — DNS Records

Create all DNS records before configuring either service. Keep TTL at 300 during setup — raise to 3600 once confirmed working.







### Required DNS Records

| Type | Host | Value | Notes |
|------|------|-------|-------|
| A | `mail` | `YOUR_ATT_MAIL_IP` | Points to Mailcow — MXRoute forwards to this server |
| MX | `@` | `heracles.mxrouting.net` (priority 10) | Check MXRoute welcome email for exact hostname |
| MX | `@` | `heracles-relay.mxrouting.net` (priority 20) | Secondary MXRoute server from welcome email |
| MX | `mail` | `mail.nucking-futz.com` (priority 10) | Mailcow handles this subdomain directly |
| CNAME | `imap` | `mail.nucking-futz.com` | Client autoconfiguration |
| CNAME | `smtp` | `mail.nucking-futz.com` | Client autoconfiguration |
| CNAME | `webmail` | `mail.nucking-futz.com` | Roundcube access |
| CNAME | `autodiscover` | `mail.nucking-futz.com` | Outlook autodiscover |
| CNAME | `autoconfig` | `mail.nucking-futz.com` | Thunderbird autoconfig |
| TXT | `@` | `v=spf1 ip4:YOUR_ATT_MAIL_IP include:mxroute.com -all` | SPF — authorizes both Mailcow direct and MXRoute relay |
| TXT | `mail` | `v=spf1 ip4:YOUR_ATT_MAIL_IP -all` | SPF for subdomain — Mailcow sends directly from here |
| TXT | `_dmarc` | `v=DMARC1; p=reject; rua=mailto:admin@netgrimoire.com` | DMARC enforcement |

> DKIM TXT records (two selectors) are added in Steps 2 and 3 after generating keys in Mailcow and MXRoute.
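Before pasting the two SPF strings into DNS, a shell one-off can confirm each has the shape this split design requires. A hedged sketch (203.0.113.146 is a documentation placeholder, not the real ATT_Mail IP):

```shell
# Sanity-check the two SPF strings before they go into DNS.
# 203.0.113.146 is a placeholder; substitute YOUR_ATT_MAIL_IP.
root_spf='v=spf1 ip4:203.0.113.146 include:mxroute.com -all'
mail_spf='v=spf1 ip4:203.0.113.146 -all'

# Root domain must authorize MXRoute's relay servers:
echo "$root_spf" | grep -q 'include:mxroute.com' && echo "root SPF: MXRoute included"

# Subdomain sends only from your own IP, so no include should be present:
echo "$mail_spf" | grep -q 'include:' || echo "mail SPF: direct-send only"
```

If the first check fails, relayed outbound mail will fail SPF at every receiver; if the second finds an `include:`, the subdomain record is authorizing more senders than it should.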
---

## Step 2 — Mailcow Configuration

### 2.1 Add the Subdomain as Primary Domain

Mailcow owns `mail.nucking-futz.com` as its active mail domain. Mailboxes live internally on this subdomain.

1. Log into Mailcow admin UI → **Mail Setup → Domains**
2. Click **Add domain**
3. Set **Domain:** `mail.nucking-futz.com`
4. Leave all other settings as default
5. Click **Add domain**

### 2.2 Add the Alias Domain

This makes Mailcow accept mail addressed to `@nucking-futz.com` and deliver it to the matching `@mail.nucking-futz.com` mailbox. Users send and receive as `@nucking-futz.com` — the subdomain is invisible to them.

1. Go to **Mail Setup → Alias Domains**
2. Click **Add alias domain**
3. Set **Alias Domain:** `nucking-futz.com`
4. Set **Target Domain:** `mail.nucking-futz.com`
5. Click **Add**

### 2.3 Create Mailbox

1. Go to **Mail Setup → Mailboxes**
2. Click **Add mailbox**
3. Set **Username:** `graymutt`
4. Set **Domain:** `mail.nucking-futz.com`
5. Set a strong password
6. Set quota as needed
7. Click **Add**

The mailbox is internally `graymutt@mail.nucking-futz.com`. The alias domain from Step 2.2 means Mailcow also accepts and delivers mail for `graymutt@nucking-futz.com` to this same mailbox.

### 2.4 Generate DKIM Key

1. Go to **Configuration → Configuration & Diagnostics → Configuration**
2. Click **ARC/DKIM Keys** tab
3. Select domain `mail.nucking-futz.com`
4. Set **Selector:** `mailcow`
5. Set **Key length:** 2048
6. Click **Generate**
7. Copy the full TXT record value — needed for DNS

### 2.5 Add Mailcow DKIM DNS Record

| Type | Host | Value |
|------|------|-------|
| TXT | `mailcow._domainkey.mail` | *(full key string from Mailcow — begins with `v=DKIM1;`)* |

### 2.6 Add MXRoute to Trusted Networks

Prevents Mailcow from applying spam scoring to forwarded mail arriving from MXRoute's IPs.

1. Go to **Configuration → Configuration & Diagnostics → Configuration**
2. Click **Extra Postfix configuration** tab
3. Add to `extra.cf`:

```
# Trust MXRoute forwarding IPs
mynetworks = 127.0.0.1/8 [::1]/128 192.168.5.0/24 69.167.160.0/19 198.54.120.0/22
```

> Verify current MXRoute IP ranges in your MXRoute account documentation — these may change.

4. Click **Save**
5. Click **Restart affected containers**

### 2.7 Configure Outbound Relay

Routes outbound mail through MXRoute for best deliverability.

1. Go to **Configuration → Routing → Sender-Dependent Transports**
2. Click **Add transport**
3. Set **Domain:** `nucking-futz.com`
4. Set **Relay host:** `[smtp.mxroute.com]:587` (confirm SMTP hostname from MXRoute welcome email)
5. Set **Username:** your MXRoute relay username
6. Set **Password:** your MXRoute relay password
7. Click **Add**
8. Repeat for domain `mail.nucking-futz.com` using the same relay credentials

---

## Step 3 — MXRoute Configuration

### 3.1 Add Domain in DirectAdmin

1. Log into MXRoute DirectAdmin
2. Go to **Account Manager → Domain Setup**
3. Add domain: `nucking-futz.com`
4. Complete the domain wizard

### 3.2 Create Forwarder

MXRoute does not support domain-level remote MX routing — forwarders must be created per address. The destination must be on a domain whose MX resolves to Mailcow, not back to MXRoute.

1. Go to **Forwarders** in the MXRoute control panel
2. Click **Create New Forwarder**
3. Set **Forwarder Name:** `graymutt` (the `@nucking-futz.com` part is shown automatically)
4. Set **Destination Type:** `Forward to Email(s)`
5. Set **Recipients:** `graymutt@mail.nucking-futz.com`
6. Click **Create Forwarder**

> Every new mailbox requires a matching forwarder entry. The pattern is always `user@nucking-futz.com` → `user@mail.nucking-futz.com`. See the Adding a New Mailbox section below.

### 3.3 Get MXRoute DKIM Key

1. Go to **Email Manager → DKIM Keys** for `nucking-futz.com`
2. Generate or view the DKIM key — note the selector name assigned (often `x`)
3. Copy the full TXT record value

### 3.4 Add MXRoute DKIM DNS Record

| Type | Host | Value |
|------|------|-------|
| TXT | `x._domainkey` *(replace `x` with MXRoute's actual selector)* | *(full key string from MXRoute DirectAdmin)* |

---

## Step 4 — Verify DNS

Once DNS has propagated, verify all records:

```bash
# MX for main domain — should show MXRoute servers
dig MX nucking-futz.com +short

# MX for subdomain — should show mail.nucking-futz.com
dig MX mail.nucking-futz.com +short

# A record — should show your ATT IP
dig A mail.nucking-futz.com +short

# SPF
dig TXT nucking-futz.com +short
dig TXT mail.nucking-futz.com +short

# DMARC
dig TXT _dmarc.nucking-futz.com +short

# DKIM — Mailcow
dig TXT mailcow._domainkey.mail.nucking-futz.com +short

# DKIM — MXRoute (replace x with your selector)
dig TXT x._domainkey.nucking-futz.com +short
```

Run a full check at [https://mxtoolbox.com](https://mxtoolbox.com) → Email Health for `nucking-futz.com`.

---

## Step 5 — Test Mail Flow

### Inbound Test

Send a test email to `graymutt@nucking-futz.com` from an external Gmail or Outlook account. Verify:

- Mail arrives in the Mailcow mailbox
- Headers show the MXRoute → Mailcow forwarding path (two `Received:` hops)
- No spam flagging

In Roundcube open the test message → **More → View Source** and check the `Received:` chain.

### Outbound Test

Send from `graymutt@nucking-futz.com` to an external Gmail address. Run through [https://mail-tester.com](https://mail-tester.com) for a full delivery score.

### DKIM/SPF/DMARC Test

Send a test to `check-auth@verifier.port25.com` — you will receive an automated reply confirming pass/fail for SPF, DKIM, and DMARC.

### Bank/Financial Test

Have a bank send to `graymutt@nucking-futz.com` (e.g. trigger an account notification) and confirm delivery. This is the primary goal — banks see MXRoute's commercial IPs in the MX record, not your residential AT&T IP.

---

## Email Client Settings

| Setting | Value |
|---------|-------|
| Email address | `graymutt@nucking-futz.com` |
| IMAP server | `mail.nucking-futz.com` |
| IMAP port | `993` (SSL/TLS) |
| SMTP server | `mail.nucking-futz.com` |
| SMTP port | `465` (SSL/TLS) |
| Username | `graymutt@nucking-futz.com` |
| Password | *(mailbox password set in Step 2.3)* |

> Users log in and send as `graymutt@nucking-futz.com`. Mailcow resolves this to the internal `mail.nucking-futz.com` mailbox transparently via the alias domain.

---

## Adding a New Mailbox

Every new address on `nucking-futz.com` requires entries in both Mailcow and MXRoute.

**In Mailcow:**
1. Mail Setup → Mailboxes → Add mailbox
2. Username: `newuser`, Domain: `mail.nucking-futz.com`

**In MXRoute control panel:**
1. Forwarders → Create New Forwarder
2. Forwarder Name: `newuser`, Destination Type: `Forward to Email(s)`, Recipients: `newuser@mail.nucking-futz.com`
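The two-sided pattern lends itself to a tiny checklist generator, sketched here (a hypothetical helper, not a Mailcow or MXRoute tool):

```shell
# Hypothetical helper: print the paired entries every new address needs,
# so neither the Mailcow side nor the MXRoute side gets forgotten.
new_mailbox_checklist() {  # usage: new_mailbox_checklist <user>
  echo "Mailcow mailbox:   username ${1}, domain mail.nucking-futz.com"
  echo "MXRoute forwarder: ${1}@nucking-futz.com -> ${1}@mail.nucking-futz.com"
}

new_mailbox_checklist newuser
```

Printing the pair before touching either UI makes the loop rule from Known Gotchas hard to violate: the forwarder destination is always on the `mail.` subdomain.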
---

## Credentials Reference

| Service | Account | Password |
|---------|---------|----------|
| Mailcow mailbox | `graymutt@mail.nucking-futz.com` | *(set during mailbox creation)* |
| MXRoute relay | *(from MXRoute welcome email)* | *(from MXRoute welcome email)* |
| MXRoute DirectAdmin | *(from MXRoute welcome email)* | *(from MXRoute welcome email)* |

---

## Known Gotchas

**Forwarder destination must not loop.** Never set the MXRoute forwarder destination to an address on the same domain that has MXRoute as its MX. `graymutt@nucking-futz.com` → `graymutt@nucking-futz.com` will loop. Always forward to `@mail.nucking-futz.com` which has its own MX resolving directly to Mailcow.

**Two DKIM selectors required.** `mailcow._domainkey.mail.nucking-futz.com` covers mail Mailcow sends directly from the subdomain. `x._domainkey.nucking-futz.com` (MXRoute selector) covers outbound mail relayed through MXRoute. Both must exist for DMARC to pass on all paths.

**New mailboxes need matching MXRoute forwarders.** MXRoute has no catch-all forwarding to remote servers. Every address that needs to receive mail must have an explicit forwarder in DirectAdmin. Add the MXRoute forwarder step to your mailbox creation checklist.

**Alias domain vs. alias mailbox.** The alias domain in Step 2.2 maps the entire `nucking-futz.com` domain to `mail.nucking-futz.com`. Do not also create individual alias mailboxes for the same addresses — this creates duplicate delivery and may cause unexpected behavior.

**SPF differs between the two domains.** The main domain SPF includes `include:mxroute.com` because MXRoute relay sends outbound from there. The subdomain SPF (`mail.nucking-futz.com`) only needs your ATT IP — Mailcow sends directly from that domain without going through MXRoute. Two different records for two different send paths.

---

## Related Documentation

- [MailCow Configuration](./mailcow)
- [MXRoute Outbound Relay Setup](./mxroute-outbound-relay)
- [OPNsense Firewall](./opnsense-firewall) — static IP allocation for ATT_Mail
Keystone-Grimoire/Mail/Hardening.md (new file, 391 lines)
---
title: MailCow Hardening
description: Securing Mailcow
published: true
date: 2026-02-23T21:56:32.211Z
tags:
editor: markdown
dateCreated: 2026-02-23T21:56:22.997Z
---

# MailCow Security Hardening

**Service:** MailCow Dockerized
**Host:** 192.168.5.16 (MailCow_Ngnx alias)
**Relay:** MXRoute (outbound only)
**Last Reviewed:** February 2026

---

## Overview

Running MailCow with MXRoute as an outbound relay creates a specific threat model that's different from either a fully self-hosted or fully managed setup. Your server receives inbound directly (MX points to your IP), stores all mailboxes locally, and hands outbound to MXRoute. This means you carry the risk surface of both — inbound SMTP exposure plus the credential and reputation exposure of a relay relationship.

The security areas that matter most for this setup:

| Area | Risk | Priority |
|---|---|---|
| DNS authentication (SPF/DKIM/DMARC) | Spoofing, deliverability failure, relay abuse | 🔴 Critical |
| MTA-STS + TLS-RPT | SMTP downgrade attacks on inbound | 🔴 Critical |
| MXRoute relay credential security | Relay hijacking, spam abuse on your reputation | 🔴 Critical |
| Mailcow admin hardening | Account takeover, open relay creation | 🔴 Critical |
| Postfix TLS hardening | Weak cipher negotiation | 🟡 High |
| Nginx header hardening | XSS, clickjacking on webmail | 🟡 High |
| Rspamd tuning | Inbound spam, outbound policy enforcement | 🟡 High |
| DMARC reporting | Visibility into spoofing and misdelivery | 🟡 High |
| ClamAV / attachment scanning | Malware distribution via your domain | 🟢 Medium |
| Rate limiting | Compromised account spam runs | 🟢 Medium |

---

## DNS Authentication

This is the foundation. If any of these are misconfigured your mail either doesn't deliver or your domain gets spoofed. With MXRoute in the mix the SPF record requires special attention.

### SPF — Include Both Sources

Your SPF must authorize **both** your own IP (for any direct sends) and MXRoute's sending infrastructure:

```dns
@ IN TXT "v=spf1 ip4:YOUR_ATT_MAIL_IP include:mxroute.com ~all"
```

Replace `YOUR_ATT_MAIL_IP` with the static IP you've dedicated to mail (ATT_Mail virtual IP). The `include:mxroute.com` covers MXRoute's sending servers.

> ⚠ Do not use `-all` (hard fail) until you have confirmed all your sending sources are covered. Use `~all` (softfail) initially, then tighten after verifying DMARC reports show no legitimate sources failing.

> ⚠ SPF has a **10 DNS lookup limit**. Each `include:` costs lookups. If you add more includes (e.g. transactional services), check your SPF lookup count at [mxtoolbox.com/spf](https://mxtoolbox.com/spf.aspx).
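A rough count of the top-level mechanisms can be scripted. This sketch only counts mechanisms in the record itself and does not follow nested includes, so treat the result as a floor, not the true total:

```shell
# Count top-level SPF mechanisms that each cost a DNS lookup:
# include, a, mx, ptr, exists, redirect. Nested includes are NOT followed.
# The record below is an illustrative example, not the production one.
spf='v=spf1 ip4:203.0.113.146 include:mxroute.com include:_spf.example.net a mx ~all'
count=$(printf '%s\n' "$spf" | tr ' ' '\n' \
  | grep -Ec '^(include:|a$|a:|mx$|mx:|ptr|exists:|redirect=)')
echo "top-level lookup mechanisms: $count"   # the RFC limit is 10 total
```

For the real number, use a recursive checker like the mxtoolbox SPF tool linked above, since each `include:` pulls in its target's own lookups.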
### DKIM — Two Selectors for Two Signers

Because MXRoute re-signs outbound mail with their own DKIM key, you need a DKIM record for both signers:

| Selector | Signer | Where to get the key |
|---|---|---|
| `mailcow._domainkey` | MailCow (inbound, internal sends) | MailCow UI → Configuration → ARC/DKIM Keys |
| `mxroute._domainkey` (or `x._domainkey`) | MXRoute (outbound relay) | MXRoute control panel |

Add both as TXT records. Having both means DMARC passes regardless of which path the mail took.

> ✓ MailCow lets you choose the DKIM selector name. Use `mailcow` as the selector to avoid confusion with the MXRoute selector.

### DMARC — Start Monitoring, Then Enforce

DMARC ties SPF and DKIM together and tells receiving servers what to do with failures. Start in monitoring mode, review reports for 2–4 weeks, then advance to enforcement.

**Phase 1 — Monitor (add immediately):**
```dns
_dmarc IN TXT "v=DMARC1; p=none; rua=mailto:dmarc-reports@yourdomain.com; ruf=mailto:dmarc-failures@yourdomain.com; fo=1"
```

**Phase 2 — Quarantine (after reviewing reports, no legitimate failures):**
```dns
_dmarc IN TXT "v=DMARC1; p=quarantine; pct=100; rua=mailto:dmarc-reports@yourdomain.com; fo=1"
```

**Phase 3 — Reject (final enforcement):**
```dns
_dmarc IN TXT "v=DMARC1; p=reject; pct=100; rua=mailto:dmarc-reports@yourdomain.com; fo=1"
```

> ✓ `fo=1` requests forensic reports on any authentication failure — more detail for debugging.
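The rollout phase can be read straight off the `p=` tag. A small sketch (the helper is hypothetical):

```shell
# Hypothetical helper: classify a DMARC record by its p= policy tag.
dmarc_phase() {
  local p
  # Strip spaces, split on ';', pull the p= tag (ignores the sp= subdomain tag).
  p=$(printf '%s\n' "$1" | tr -d ' ' | tr ';' '\n' | grep '^p=' | head -n1)
  case "$p" in
    p=none)       echo "phase 1: monitor" ;;
    p=quarantine) echo "phase 2: quarantine" ;;
    p=reject)     echo "phase 3: reject" ;;
    *)            echo "no p= tag found" ;;
  esac
}

dmarc_phase "v=DMARC1; p=quarantine; pct=100; rua=mailto:dmarc-reports@yourdomain.com; fo=1"
```

Feeding it the output of `dig TXT _dmarc.yourdomain.com +short` gives a quick answer to "which phase are we actually in right now".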
**DMARC Report Processing:** Raw DMARC reports are XML and not human-readable. Use one of these free tools to process them:
- [Postmark DMARC](https://dmarc.postmarkapp.com/) — free, email-based weekly digest
- [dmarcian.com](https://dmarcian.com) — free tier, dashboard view
- Self-hosted: [Parsedmarc](https://github.com/domainaware/parsedmarc) → send to Graylog/Grafana

---

## MTA-STS (MailCow September 2025+)

MTA-STS forces other mail servers to use TLS when delivering to you, preventing downgrade attacks that try to force plaintext SMTP. The September 2025 MailCow update added the `postfix-tlspol-mailcow` container which enforces MTA-STS on **outbound** connections too.

### What You Need

**1. DNS records** — three records for each domain:

```dns
# For your mail server's hostname domain (e.g. netgrimoire.com)
mta-sts    IN CNAME mail.netgrimoire.com.
_mta-sts   IN TXT   "v=STSv1; id=20260223"
_smtp._tls IN TXT   "v=TLSRPTv1; rua=mailto:tls-reports@netgrimoire.com"
```

The `id` value in `_mta-sts` is a version string — update it (e.g. to today's date) whenever you change your MTA-STS policy.

**2. Policy file** — served by MailCow's nginx at `https://mta-sts.yourdomain.com/.well-known/mta-sts.txt`:

```bash
# On your MailCow host:
mkdir -p /opt/mailcow-dockerized/data/web/.well-known/
cat > /opt/mailcow-dockerized/data/web/.well-known/mta-sts.txt << 'EOF'
version: STSv1
mode: enforce
max_age: 86400
mx: mail.netgrimoire.com
EOF
```

Start with `mode: testing` for the first week, then switch to `mode: enforce`.
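Before pointing DNS at the policy, it is worth linting the file you just wrote. A sketch checking the same four keys used above (run it against the real `mta-sts.txt` on the host; an inline copy is used here so the example is self-contained):

```shell
# Lint an MTA-STS policy: all four keys present, mode is a legal value.
# Inline sample; on the host use: policy=$(cat /opt/mailcow-dockerized/data/web/.well-known/mta-sts.txt)
policy=$(cat <<'EOF'
version: STSv1
mode: enforce
max_age: 86400
mx: mail.netgrimoire.com
EOF
)

for key in version mode max_age mx; do
  printf '%s\n' "$policy" | grep -q "^${key}:" || echo "missing key: $key"
done
printf '%s\n' "$policy" | grep -Eq '^mode: (testing|enforce|none)$' \
  && echo "mode OK" || echo "invalid mode"
```

A malformed policy fails silently from your side: remote servers simply stop trusting it, so a pre-publish lint is cheap insurance.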
**3. For additional domains** — add CNAMEs pointing to your primary domain's records:

```dns
# For each additional mail domain you host on MailCow:
mta-sts.otherdomain.com    IN CNAME mail.netgrimoire.com.
_mta-sts.otherdomain.com   IN CNAME _mta-sts.netgrimoire.com.
_smtp._tls.otherdomain.com IN CNAME _smtp._tls.netgrimoire.com.
```

> ✓ TLS-RPT (`_smtp._tls` TXT record) sends you reports about TLS failures when other servers connect to you. Pipe these to Graylog or Postmark for visibility.

---

## MXRoute Relay Security

This is the most overlooked area. Your MXRoute credentials can send mail as your domain — if they're compromised, someone else is spamming from your reputation.

### Credential Hardening

- Use a **unique, strong password** for your MXRoute account — not shared with anything else
- Store the MXRoute SMTP credentials in MailCow's relay configuration only, not in any config file or environment variable that gets committed to git
- If MXRoute supports API tokens or app passwords, use those instead of your main account password

### Relay Configuration in MailCow

In MailCow UI: **Configuration → Routing → Sender-Dependent Transports**

Verify the relay is configured to authenticate via TLS (port 587 with STARTTLS or port 465 with SSL). Do not relay over port 25 without authentication.

```
# What the relay entry should look like in Postfix terms:
# relayhost = [smtp.mxroute.com]:587
# smtp_sasl_auth_enable = yes
# smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
# smtp_tls_security_level = encrypt   ← ensures TLS is required, not optional
```

> ⚠ Set `smtp_tls_security_level = encrypt` (not `may`) so the connection to MXRoute is always encrypted. If the TLS negotiation fails, Postfix should reject rather than fall back to plaintext.

### Rate Limiting (Prevent Relay Abuse if Account Compromised)

Add rate limits in MailCow UI: **Configuration → Mail Setup → Domains → [your domain] → Rate Limit**

| Setting | Recommended Value | Notes |
|---|---|---|
| Outbound messages/hour | 500 | Adjust for your actual sending volume |
| Outbound messages/day | 2000 | A sudden spike above this = red flag |

This doesn't stop abuse but limits blast radius if a mailbox is compromised and starts spamming through MXRoute.

---

## MailCow Admin Hardening

### Two-Factor Authentication

Enable 2FA on the admin account and all mailbox accounts that have access to the admin panel.

MailCow UI: **Edit mailbox → Two-Factor Authentication → TOTP**

> ⚠ There was a session fixation vulnerability in the MailCow web panel (GHSA-23c8-4wwr-g3c6, January 2025) and a critical SSTI vulnerability (GHSA-8p7g-6cjj-wr9m, July 2025). Both require staying current on updates. Enable auto-updates or check the MailCow blog monthly.

### Restrict Admin UI to Internal Network

The MailCow admin panel should not be reachable from the public internet. Access should require being on your internal network or connected via WireGuard.

In OPNsense, add a firewall rule blocking external access to port 443 on 192.168.5.16 except from your static admin IP or WireGuard peers.

Alternatively, configure MailCow's nginx to restrict the admin path by IP:

```nginx
# In data/conf/nginx/includes/site-defaults.conf
# Add inside the server block for the admin panel:
location /admin {
    allow 192.168.3.0/24;
    allow 192.168.5.0/24;
    allow 192.168.32.0/24;  # WireGuard peers
    deny all;
}
```

### API Key Rotation

If you use the MailCow API (for automation or Netgrimoire tooling), generate a dedicated read-only key where possible, and rotate keys annually or after any suspected compromise.

---

## Postfix TLS Hardening

Add to `/opt/mailcow-dockerized/data/conf/postfix/extra.cf`:

```ini
# Enforce TLS 1.2+ and strong ciphers
tls_high_cipherlist = ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256
tls_preempt_cipherlist = yes

# Inbound SMTP (smtpd) — receiving from other mail servers
smtpd_tls_protocols = !SSLv2, !SSLv3, !TLSv1, !TLSv1.1
smtpd_tls_ciphers = high
smtpd_tls_mandatory_ciphers = high

# Outbound SMTP (smtp) — delivery to MXRoute and direct sends
smtp_tls_protocols = !SSLv2, !SSLv3, !TLSv1, !TLSv1.1
smtp_tls_ciphers = high
smtp_tls_mandatory_ciphers = high

# Require encryption on the MXRoute relay connection
smtp_tls_security_level = encrypt
```

After editing, restart Postfix:
```bash
cd /opt/mailcow-dockerized
docker compose restart postfix-mailcow
```
|
||||
|
||||
---
|
||||
|
||||
## Nginx Header Hardening

Add to `/opt/mailcow-dockerized/data/conf/nginx/includes/site-defaults.conf`:

```nginx
# Strong SSL ciphers only
ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384;
ssl_conf_command Options PrioritizeChaCha;

# HSTS — include subdomains if all your services use HTTPS
add_header Strict-Transport-Security "max-age=63072000; includeSubDomains; preload";

# Disable X-XSS-Protection (deprecated, CSP replaces it)
add_header X-XSS-Protection "0";

# Deny unused browser permissions
add_header Permissions-Policy "accelerometer=(), ambient-light-sensor=(), autoplay=(), battery=(), camera=(), geolocation=(), gyroscope=(), magnetometer=(), microphone=(), payment=(), usb=()";

# Content Security Policy — if NOT using Gravatar with SOGo
add_header Content-Security-Policy "default-src 'none'; connect-src 'self' https://api.github.com; font-src 'self' https://fonts.gstatic.com; img-src 'self' data:; script-src 'self' 'unsafe-inline'; style-src 'self' 'unsafe-inline' https://fonts.googleapis.com; frame-ancestors 'none'; upgrade-insecure-requests; block-all-mixed-content; base-uri 'none'";

# Cross-origin isolation headers
add_header Cross-Origin-Resource-Policy same-origin;
add_header Cross-Origin-Opener-Policy same-origin;
add_header Cross-Origin-Embedder-Policy require-corp;

# Disable gzip to prevent BREACH attack
# Change gzip on; → gzip off; in the main nginx conf
```

> ⚠ The December 2025 MailCow update already removed the deprecated `X-XSS-Protection` header from defaults. If you're current, you may already have this. Check before duplicating.

After editing, restart nginx:
```bash
docker compose restart nginx-mailcow
```

---

## Rspamd Tuning

Rspamd is MailCow's spam filter. The defaults are reasonable, but a few adjustments improve both inbound protection and outbound policy enforcement.

### Key Settings to Review

Navigate to **MailCow UI → Configuration → Rspamd UI** (or directly to `https://mail.yourdomain.com/rspamd/`).

**Actions → Score Thresholds:**

| Action | Default | Recommended |
|---|---|---|
| Greylist | 4 | 3 |
| Add header | 6 | 5 |
| Reject | 15 | 12 |

Lowering the reject threshold from 15 to 12 catches more aggressive spam while still avoiding false positives.

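The same thresholds can be pinned in configuration rather than set through the UI. In plain Rspamd this lives in `local.d/actions.conf`; under MailCow the equivalent path should be `data/conf/rspamd/local.d/actions.conf` (an assumption — verify against your tree, and note that values saved via the Rspamd UI may take precedence):

```
# local.d/actions.conf — score thresholds matching the table above
greylist = 3;
add_header = 5;
reject = 12;
```

Restart `rspamd-mailcow` after any file-based Rspamd change.
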
**Modules to enable/verify:**

| Module | Purpose |
|---|---|
| DKIM verification | Verify incoming DKIM signatures |
| SPF | Verify incoming SPF |
| DMARC | Enforce DMARC on inbound |
| MX Check | Verify the sending domain has a valid MX |
| RBL (Realtime Blacklists) | Check sending IPs against blocklists |
| Greylisting | Temporarily reject new senders (forces retry) |

### Add CrowdSec as an Rspamd Feed

If you also have the CrowdSec bouncer running on the MailCow host (or can reach it), you can feed CrowdSec decisions into Rspamd to reject mail from banned IPs. This is advanced but powerful — see the [CrowdSec Bouncer for Rspamd](https://hub.crowdsec.net) hub entry.

---

## Deliverability Verification

Run these checks after making any DNS or config changes:

| Tool | What It Checks | URL |
|---|---|---|
| MXToolbox | SPF, DKIM, DMARC, MX, PTR, blacklists | mxtoolbox.com |
| mail-tester.com | Send a test email, get a 1–10 score | mail-tester.com |
| Port25 verifier | Send to check-auth@verifier.port25.com | Email-based |
| DKIM validator | Validates DKIM signature | dkimvalidator.com |
| Google Postmaster Tools | Gmail reputation monitoring (requires setup) | postmaster.google.com |
| Microsoft SNDS | Outlook/Hotmail reputation | sendersupport.olc.protection.outlook.com |

> ✓ Aim for 9–10/10 on mail-tester.com. Anything below 8 indicates a misconfiguration that will hurt deliverability.

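Before pasting SPF records into DNS, a quick structural sanity check catches the common mistakes (missing `v=spf1`, wrong `all` qualifier, dropped `include:`). A minimal sketch — the IP below is a documentation placeholder, and real validation should query the live TXT record with a resolver:

```python
def parse_spf(record: str) -> dict:
    """Split an SPF TXT record into version, mechanisms, and the 'all' qualifier."""
    parts = record.split()
    if not parts or parts[0] != "v=spf1":
        raise ValueError("not an SPF record")
    return {"version": parts[0], "mechanisms": parts[1:-1], "all": parts[-1]}

# 203.0.113.25 is a placeholder for YOUR_ATT_MAIL_IP
spf = parse_spf("v=spf1 ip4:203.0.113.25 include:mxroute.com -all")
assert spf["all"] == "-all"                                # hard fail for unlisted senders
assert any(m.startswith("ip4:") for m in spf["mechanisms"])  # own IP present
assert "include:mxroute.com" in spf["mechanisms"]            # relay path present
```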
---

## Keeping MailCow Updated

MailCow has had several critical security vulnerabilities in 2025 (session fixation, SSTI, password reset poisoning). Staying current is non-negotiable.

```bash
cd /opt/mailcow-dockerized

# Pull latest images
docker compose pull

# Apply the update (update.sh also pulls, merges config, and recreates containers)
./update.sh

# After a manual image pull, recreate the containers:
docker compose up -d
```

> ✓ Subscribe to the [MailCow blog](https://mailcow.email/posts/) or watch the [GitHub releases](https://github.com/mailcow/mailcow-dockerized/releases) for security advisories. The update cadence is roughly monthly.

Set up a cron job or Monit check to alert you when MailCow is more than 30 days behind the latest release.

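The staleness check itself is simple date arithmetic; a sketch of the alerting logic (cron/Monit wiring and fetching the latest release date from GitHub are left to you):

```python
from datetime import date

STALE_AFTER_DAYS = 30  # alert threshold from the note above

def days_behind(installed: str, latest: str) -> int:
    """Days between the installed release date and the latest release date (ISO strings)."""
    return (date.fromisoformat(latest) - date.fromisoformat(installed)).days

# e.g. running the 2025-10 release while 2026-01 is current:
assert days_behind("2025-10-01", "2026-01-01") == 92
assert days_behind("2025-10-01", "2026-01-01") > STALE_AFTER_DAYS  # would fire an alert
```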
---

## Checklist Summary

| Item | Status |
|---|---|
| SPF includes both own IP and mxroute.com | ☐ |
| Two DKIM selectors (mailcow + mxroute) | ☐ |
| DMARC in monitoring mode, advancing to reject | ☐ |
| DMARC reports being processed (Postmark/dmarcian) | ☐ |
| MTA-STS policy published and enforced | ☐ |
| TLS-RPT record in DNS | ☐ |
| MXRoute relay connection uses TLS/encrypt level | ☐ |
| Admin UI restricted to internal network | ☐ |
| 2FA on admin and all privileged accounts | ☐ |
| Postfix TLS 1.2+ enforced via extra.cf | ☐ |
| Nginx security headers added | ☐ |
| Rate limits set on outbound per-domain | ☐ |
| MailCow updated to latest (monthly check) | ☐ |
| Rspamd thresholds reviewed | ☐ |
| PTR/rDNS record matches mail hostname | ☐ |

---

## Related Documentation

- [OPNsense Firewall](./opnsense-firewall) — dedicated ATT_Mail virtual IP, port NAT
- [CrowdSec](./crowdsec) — IP reputation blocking at firewall level
- [Graylog](./graylog) — DMARC report and TLS-RPT ingestion target
- [Caddy Reverse Proxy](./caddy-reverse-proxy) — if MailCow webmail is proxied through Caddy

490
Keystone-Grimoire/Mail/Install.md
Normal file
---
title: Mailcow Dockerized Install and Config
description:
published: true
date: 2026-02-25T21:05:48.256Z
tags:
editor: markdown
dateCreated: 2026-02-25T21:05:38.864Z
---

# MailCow — Installation & Configuration

**Host:** docker4 (192.168.5.16)
**Hostname:** hermes.netgrimoire.com
**Admin URL:** https://mail.netgrimoire.com
**Version:** 2025-10a (update 2026-01 available as of documentation date)
**Installed:** /opt/mailcow-dockerized
**Timezone:** America/Chicago
**Architecture:** x86_64
**CPU:** 16 cores
**RAM:** 30.63 GB
**Disk:** /dev/nvme0n1p2 — 442G / 502G used (93% — monitor this)

---

## Overview

Mailcow runs as a Docker stack on docker4, attached to the `netgrimoire` overlay network. All containers use `restart: unless-stopped` via a compose override. Outbound mail routes through MXRoute via sender-dependent transports. Inbound mail arrives from MXRoute, which acts as the public-facing inbound gateway (solving residential AT&T IP filtering issues with banks).

See [MXRoute Master Configuration](./mxroute-master) for full inbound/outbound/DNS detail per domain.

---

## Installation Paths

| Path | Purpose |
|------|---------|
| `/opt/mailcow-dockerized/` | Mailcow root |
| `/opt/mailcow-dockerized/mailcow.conf` | Primary configuration file |
| `/opt/mailcow-dockerized/docker-compose.yml` | Base compose (do not edit) |
| `/opt/mailcow-dockerized/docker-compose.override.yml` | Local overrides — network and restart policy |
| `/opt/mailcow-dockerized/data/conf/postfix/extra.cf` | Persistent Postfix overrides |
| `/opt/mailcow-dockerized/data/conf/postfix/main.cf` | Postfix base config (managed by Mailcow) |
| `/opt/mailcow-dockerized/data/conf/rspamd/` | Rspamd configuration |
| `/opt/mailcow-dockerized/data/assets/ssl/` | TLS certificates |

---

## mailcow.conf — Key Settings

```ini
MAILCOW_HOSTNAME=hermes.netgrimoire.com
MAILCOW_PASS_SCHEME=BLF-CRYPT

# Database
DBNAME=mailcow
DBUSER=mailcow
DBPASS=mg7Z8W9UsPlOh0S6vF7TmmPb6n1s
DBROOT=JdymsZFFACHkDcOdziQ53QruCTG2

# Redis
REDISPASS=6AduWQsmBYGMKfOi1CNEGQfTE3RH

# Ports — HTTPS runs on 3443, proxied through Caddy
HTTP_PORT=80
HTTP_BIND=
HTTPS_PORT=3443
HTTPS_BIND=
HTTP_REDIRECT=n

# Mail ports (standard)
SMTP_PORT=25
SMTPS_PORT=465
SUBMISSION_PORT=587
IMAP_PORT=143
IMAPS_PORT=993
POP_PORT=110
POPS_PORT=995
SIEVE_PORT=4190

# Internal ports (localhost only)
DOVEADM_PORT=127.0.0.1:19991
SQL_PORT=127.0.0.1:13306
REDIS_PORT=127.0.0.1:7654

# TLS cert coverage
ADDITIONAL_SAN=smtp.*,imap.*
AUTODISCOVER_SAN=y

# ACME / Let's Encrypt
SKIP_LETS_ENCRYPT=n
SKIP_IP_CHECK=y
SKIP_HTTP_VERIFICATION=y

# Services — all enabled
SKIP_CLAMD=n
SKIP_OLEFY=n
SKIP_SOGO=n
SKIP_FTS=n

# FTS (Flatcurve/Xapian)
FTS_HEAP=128
FTS_PROCS=1

# Watchdog
USE_WATCHDOG=y
WATCHDOG_NOTIFY_START=y
WATCHDOG_NOTIFY_BAN=n
WATCHDOG_EXTERNAL_CHECKS=n

# Networking
IPV4_NETWORK=172.22.1
IPV6_NETWORK=fd4d:6169:6c63:6f77::/64
ENABLE_IPV6=false

# Misc
MAILDIR_GC_TIME=7200
MAILDIR_SUB=Maildir
SOGO_EXPIRE_SESSION=480
SOGO_URL_ENCRYPTION_KEY=ojmPfhnM4MYMsA2f
ACL_ANYONE=disallow
ALLOW_ADMIN_EMAIL_LOGIN=n
DOCKER_COMPOSE_VERSION=native
COMPOSE_PROJECT_NAME=mailcow
LOG_LINES=9999
```

---

## docker-compose.override.yml

All services are attached to the external `netgrimoire` overlay network and set to `restart: unless-stopped`. The override does not change any image versions or environment variables — it only adds network membership and restart policy.

```yaml
services:
  unbound-mailcow:
    networks:
      netgrimoire:
    restart: unless-stopped

  mysql-mailcow:
    networks:
      - netgrimoire
    restart: unless-stopped

  redis-mailcow:
    networks:
      - netgrimoire
    restart: unless-stopped

  clamd-mailcow:
    networks:
      - netgrimoire
    restart: unless-stopped

  rspamd-mailcow:
    networks:
      - netgrimoire
    restart: unless-stopped

  php-fpm-mailcow:
    networks:
      - netgrimoire
    restart: unless-stopped

  sogo-mailcow:
    networks:
      - netgrimoire
    restart: unless-stopped

  dovecot-mailcow:
    networks:
      - netgrimoire
    restart: unless-stopped

  postfix-mailcow:
    networks:
      - netgrimoire
    restart: unless-stopped

  postfix-tlspol-mailcow:
    networks:
      - netgrimoire
    restart: unless-stopped

  memcached-mailcow:
    restart: unless-stopped

  nginx-mailcow:
    networks:
      - netgrimoire
    restart: unless-stopped

  acme-mailcow:
    networks:
      - netgrimoire
    restart: unless-stopped

  watchdog-mailcow:
    networks:
      - netgrimoire
    restart: unless-stopped

  dockerapi-mailcow:
    networks:
      - netgrimoire
    restart: unless-stopped

  olefy-mailcow:
    networks:
      - netgrimoire
    restart: unless-stopped

  ofelia-mailcow:
    networks:
      - netgrimoire
    restart: unless-stopped

networks:
  netgrimoire:
    external: true
    driver: overlay
```

---

## Container Image Versions

From `docker-compose.yml` (base file — version 2025-10a):

| Service | Image |
|---------|-------|
| unbound-mailcow | ghcr.io/mailcow/unbound:1.24 |
| mysql-mailcow | mariadb:10.11 |
| redis-mailcow | redis:7.4.6-alpine |
| clamd-mailcow | ghcr.io/mailcow/clamd:1.71 |
| rspamd-mailcow | ghcr.io/mailcow/rspamd:2.4 |
| php-fpm-mailcow | ghcr.io/mailcow/phpfpm:1.94 |
| sogo-mailcow | ghcr.io/mailcow/sogo:1.136 |
| dovecot-mailcow | ghcr.io/mailcow/dovecot:2.35 |
| postfix-mailcow | ghcr.io/mailcow/postfix:1.81 |
| postfix-tlspol-mailcow | ghcr.io/mailcow/postfix-tlspol:1.0 |
| memcached-mailcow | memcached:alpine |
| nginx-mailcow | ghcr.io/mailcow/nginx:1.05 |
| acme-mailcow | ghcr.io/mailcow/acme:1.94 |
| netfilter-mailcow | ghcr.io/mailcow/netfilter:1.63 |
| watchdog-mailcow | ghcr.io/mailcow/watchdog:2.09 |
| dockerapi-mailcow | ghcr.io/mailcow/dockerapi:2.11 |
| olefy-mailcow | ghcr.io/mailcow/olefy:1.15 |
| ofelia-mailcow | mcuadros/ofelia:latest |

---

## Postfix Configuration

### extra.cf

```ini
myhostname = hermes.netgrimoire.com
```

> The MXRoute trusted network entries should also be here. Current extra.cf only contains myhostname — confirm mynetworks is set correctly or add the MXRoute IP ranges if not already present via the UI.

### Key Postfix Settings (from running config)

Note: Postfix does not support inline `#` comments, so the annotations below are on their own lines to stay copy-paste safe.

```ini
mynetworks = 127.0.0.0/8 172.22.1.0/24 10.0.1.0/24 [::1]/128 [fd4d:6169:6c63:6f77::]/64 [fe80::]/64
# 100 MB message limit; mailbox size unlimited
message_size_limit = 104857600
mailbox_size_limit = 0
bounce_queue_lifetime = 1d
maximal_queue_lifetime = 5d
delay_warning_time = 4h
postscreen_dnsbl_threshold = 6
postscreen_dnsbl_action = enforce
postscreen_greet_action = enforce
smtpd_relay_restrictions = permit_mynetworks, permit_sasl_authenticated, defer_unauth_destination
disable_vrfy_command = yes
broken_sasl_auth_clients = yes
```

---

## Domains

10 domains configured. All active.

| Domain | Mailboxes | Sender-Dependent Transport | Created |
|--------|-----------|---------------------------|---------|
| bamalady.com | 0 / 10 | *(not confirmed)* | — |
| bill740.com | 1 / 10 | *(not confirmed)* | — |
| florosafd.org | 4 / 10 | ID 4: heracles.mxrouting.net:587 (relay@florosafd.org) | 2025-11-21 |
| gnarlypandaproductions.com | 2 / 10 | ID 5: heracles.mxrouting.net:587 (relay@gnarlypandaproductions.com) | 2025-11-21 |
| netgrimoire.com | 2 / 10 | ID 2: heracles.mxrouting.net:587 (relay@netgrimoire.com) | 2025-11-21 |
| nucking-futz.net | 0 / 10 | *(not confirmed)* | — |
| pncfishandmore.com | 4 / 10 | ID 6: heracles.mxrouting.net:587 (relay@pncfishandmore.com) | — |
| pncharris.com | 4 / 10 | ID 3: heracles.mxrouting.net:587 (passer@pncharris.com) | 2025-11-21 |
| pncharrisenterprises.com | 2 / 10 | *(not confirmed from screenshots)* | — |
| wasted-bandwidth.net | 1 / 10 | ID 1: heracles.mxrouting.net:587 (relay@wasted-bandwidth.net) | — |

> MXRoute relay hostname is `heracles.mxrouting.net:587` — note this differs from the generic `smtp.mxroute.com` placeholder used in setup docs. Always use `heracles.mxrouting.net:587` for this account.

---

## Mailboxes

19 active mailboxes across all domains:

| Mailbox | Messages | Domain |
|---------|----------|--------|
| bill@bill740.com | 1 | bill740.com |
| chieflee@florosafd.org | 2124 | florosafd.org |
| cindy@pncfishandmore.com | 1109 | pncfishandmore.com |
| cindy@pncharris.com | 33797 | pncharris.com |
| cindy@pncharrisenterprises.com | 819 | pncharrisenterprises.com |
| dads_attic@pncharris.com | 0 | pncharris.com |
| jim.harris@florosafd.org | 8 | florosafd.org |
| kyle@gnarlypandaproductions.com | 486 | gnarlypandaproductions.com |
| kyle@pncfishandmore.com | 110 | pncfishandmore.com |
| kyle@pncharris.com | 31182 | pncharris.com |
| phil@florosafd.org | 5 | florosafd.org |
| phil@gnarlypandaproductions.com | 5 | gnarlypandaproductions.com |
| phil@netgrimoire.com | 1 | netgrimoire.com |
| phil@pncfishandmore.com | 10 | pncfishandmore.com |
| phil@pncharris.com | 3210 | pncharris.com |
| phil@pncharrisenterprises.com | 1 | pncharrisenterprises.com |
| times@florosafd.org | 191 | florosafd.org |
| traveler@netgrimoire.com | 3 | netgrimoire.com |
| traveler@wasted-bandwidth.net | 138 | wasted-bandwidth.net |

---

## Aliases

| ID | Alias | Target Domain | Internal |
|----|-------|---------------|---------|
| 7 | cindy@bamalady.com | bamalady.com | No |

---

## Sender-Dependent Transports

All outbound mail relays through `heracles.mxrouting.net:587`. This is your MXRoute server hostname — use this exact value when adding new transports.

| ID | Host | Username | Password |
|----|------|----------|----------|
| 1 | heracles.mxrouting.net:587 | relay@wasted-bandwidth.net | dZ4yLYznVvgSJtqWZJFA |
| 2 | heracles.mxrouting.net:587 | relay@netgrimoire.com | TVGCnJp9SxRbWU8EhkMw |
| 3 | heracles.mxrouting.net:587 | passer@pncharris.com | bBJtPhrGkHvvhxhukkae |
| 4 | heracles.mxrouting.net:587 | relay@florosafd.org | 2Fe8XMyaeh6Z5dvdHYdq |
| 5 | heracles.mxrouting.net:587 | relay@gnarlypandaproductions.com | vG5ZsUQhRWD2UyzLPsqA |
| 6 | heracles.mxrouting.net:587 | relay@pncfishandmore.com | *(confirm from MXRoute panel)* |

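Under the hood these UI entries map onto Postfix's sender-dependent relay machinery. A hand-rolled equivalent for one domain (illustrative only — Mailcow generates the real lookup maps from its database, not from these files; paths and the `PASSWORD` value are placeholders):

```
# main.cf (sketch)
smtp_sasl_auth_enable = yes
smtp_sender_dependent_authentication = yes
sender_dependent_relayhost_maps = hash:/etc/postfix/sender_relay
smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd

# /etc/postfix/sender_relay — route by envelope sender domain
@netgrimoire.com    [heracles.mxrouting.net]:587

# /etc/postfix/sasl_passwd — per-sender credentials
@netgrimoire.com    relay@netgrimoire.com:PASSWORD
```

With `smtp_sender_dependent_authentication = yes`, Postfix looks up SASL credentials by sender first, which is what lets multiple domains share one relay host with different logins.
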
---

## DKIM Keys

Two DKIM selectors are configured per domain — one for Mailcow (selector: `dkim`) and one added separately for MXRoute outbound signing. The Mailcow-managed keys publish at `dkim._domainkey`.

### pncharris.com
```
v=DKIM1;k=rsa;t=s;s=email;p=MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAqhgQV7r+KKQwJceWenZ3FNq8AsllgW6cIm/0jpsLT62vF1yy0nh2MdhjYgQAX2MK9HHYzNZcCB3+OPpqBbXeNbSDckxB/dC+z/vboMHrJmYonfaSYshZjSR80V/a2Yoq+hiXQ9eBcuOggENtMm4XvEsl/vOWLBMfasqe+X11gzQBeRv1tTaXJB0C4i7tAcfi0O/AxH8QFTr2099+k2iepn8J15ukk1zu4zemBJj4Z3uFTNnBP8YpgKbYoUDyMVIKIxGjANVBBypcrMKavpQ4F1JLhgGFhWAsAuFRwZsnOaftZyMuzAZxM37DTd/bF2WanmK3Xe75SN5uOnEXjuzW/wIDAQAB
```

### netgrimoire.com
```
v=DKIM1;k=rsa;t=s;s=email;p=MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAoJ9YKqV9+6gOcVKI+UJ0TRcMmergxU8HLO+mwTMfqOhblsEcDPO60c8ya24iIXg51AA2k5Xcbb0bLScaaIi0P/TRzP/bonAZkPS1Y8Fx1se9dikTsA9Lazhou6DvoFkkV/IPH1ZNg68Cd9teAD5tvoY18OSneJJsocXwFo57c+XccUaTxjpV7eReuT4da7iNHMmUmZNfKenxVMKD740zrDJAeAsXtEb/71CochHYSm+qAvuG9/WPixJbMsJLF/iVhV3Byp0LCrB+CwGTwnsiUcd7QpuD6rRs/7zzdGBtoN22m/j390GimFstYvB61I20h8sHWGAG66dLko6Sgvs47wIDAQAB
```

### gnarlypandaproductions.com
```
v=DKIM1;k=rsa;t=s;s=email;p=MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA...
```
*(scroll cut off in screenshot — retrieve full key from Mailcow UI → Edit domain → bottom of page)*

> All other domain DKIM keys should be retrieved from the Mailcow domain edit page and recorded here for disaster recovery completeness.

---

## Network Configuration

Mailcow containers join the `netgrimoire` external overlay network, allowing communication with other Docker Swarm services (Caddy reverse proxy, etc.) without exposing ports directly to the host network.

**Internal Docker network:** `172.22.1.0/24`

Key container IPs within the mailcow network:
- unbound: 172.22.1.254
- redis: 172.22.1.249
- sogo: 172.22.1.248
- dovecot: 172.22.1.250
- postfix: 172.22.1.253

**IPv6:** disabled (`ENABLE_IPV6=false`)

---

## Caddy Reverse Proxy

Mailcow's nginx listens on HTTPS port 3443 internally. Caddy proxies external requests to it. Mailcow handles its own TLS for direct mail client connections (IMAP 993, SMTP 465/587).

The admin UI at `mail.netgrimoire.com` is proxied through Caddy on the `netgrimoire` overlay network.

---

## Updating Mailcow

```bash
cd /opt/mailcow-dockerized

# Pull latest
git fetch origin
git checkout origin/master

# Update containers
docker compose pull
./update.sh
```

> As of documentation date, version 2026-01 is available. Current running version is 2025-10a. Update when convenient — check the [MailCow changelog](https://github.com/mailcow/mailcow-dockerized/releases) for breaking changes first.

Monthly update check is recommended. MailCow had multiple security vulnerabilities in 2025 — staying current is important.

---

## Common Operations

### Restart all containers
```bash
cd /opt/mailcow-dockerized
docker compose restart
```

### Restart single container (e.g. after extra.cf change)
```bash
docker compose restart postfix-mailcow
```

### View logs
```bash
# Postfix
docker compose logs postfix-mailcow -f

# Dovecot
docker compose logs dovecot-mailcow -f

# All containers
docker compose logs -f
```

### Check queue
```bash
docker exec mailcow-postfix-mailcow-1 postqueue -p
```

### Flush queue
```bash
docker exec mailcow-postfix-mailcow-1 postqueue -f
```

### Check container health
```bash
docker compose ps
```

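The queue check above can be wrapped for monitoring. A small sketch — `parse_queue_count` is a name of my choosing, and it only parses the summary line that `postqueue -p` prints (`-- N Kbytes in M Requests.` or `Mail queue is empty`):

```shell
# Print the number of queued requests from `postqueue -p` output.
parse_queue_count() {
  awk '/Requests\.$/ {print $(NF-1)} /Mail queue is empty/ {print 0}'
}

# usage:
#   docker exec mailcow-postfix-mailcow-1 postqueue -p | parse_queue_count
```

Feed the result into cron or Monit and alert when the count stays above a threshold.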
---

## Known Gotchas

**Disk usage is at 93%.** The nvme0n1p2 volume has 442G used of 502G. This needs attention — vmail storage grows over time and garbage collection runs hourly but only removes items older than 7200 minutes (5 days). Monitor this and consider quota enforcement per mailbox if growth continues.

**extra.cf is minimal.** The MXRoute trusted network IPs should be confirmed in the running Postfix config. The `mynetworks` value from `postconf` shows `10.0.1.0/24` is already trusted — confirm whether MXRoute IP ranges `69.167.160.0/19` and `198.54.120.0/22` are included. If not, add them to extra.cf and restart postfix.

**MXRoute relay hostname.** The actual relay hostname for this account is `heracles.mxrouting.net:587` — not the generic `smtp.mxroute.com` placeholder. All 6 transports use `heracles.mxrouting.net:587`. Use this exact hostname for any new transport entries.

**pncharris.com uses passer@ not relay@.** Transport ID 3 for pncharris.com authenticates as `passer@pncharris.com`, not `relay@pncharris.com`. This is intentional — the relay@ account exists but passer@ is the current active relay credential.

**HTTPS on port 3443.** Mailcow's web UI is not on the standard 443 — it binds to 3443 and Caddy handles the public-facing 443 proxy. Direct access to the UI requires going through Caddy or using the internal port.

**nucking-futz.net vs nucking-futz.com.** The domains list shows `nucking-futz.net` but the intended new domain is `nucking-futz.com`. Verify which is actually configured and correct if needed.

**bamalady.com and bill740.com** have no transport assigned in the screenshots. Confirm whether these domains need MXRoute relay configured.

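The `mynetworks` question above reduces to CIDR membership, which is easy to check mechanically before touching config. A sketch using the stdlib (the lone test IPs are arbitrary addresses inside/outside the ranges named above):

```python
import ipaddress

# The two MXRoute ranges from the gotcha above, plus the already-trusted LAN range.
trusted = [ipaddress.ip_network(n) for n in
           ("69.167.160.0/19", "198.54.120.0/22", "10.0.1.0/24")]

def is_trusted(ip: str) -> bool:
    """True if ip falls inside any trusted network."""
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in trusted)

assert is_trusted("198.54.120.5")       # inside 198.54.120.0/22
assert not is_trusted("198.54.124.1")   # first block past the /22
assert is_trusted("69.167.191.255")     # last address of the /19
```

Run the same check against the IPs MXRoute actually forwards from (visible in Postfix logs) to confirm the ranges are right before widening `mynetworks`.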
---

## Related Documentation

- [MXRoute Master Configuration](./mxroute-master) — per-domain DNS, inbound forwarding, outbound relay credentials
- [Mail Setup — nucking-futz.com](./mail-setup-nucking-futz) — new domain setup guide
- [MailCow Security Hardening](./mailcow-security-hardening)
- [Caddy Reverse Proxy](./caddy-reverse-proxy) — proxies mail.netgrimoire.com to port 3443
- [OPNsense Firewall](./opnsense-firewall) — ATT_Mail static IP, port forwarding rules

430
Keystone-Grimoire/Mail/MXRoute-Integration.md
Normal file
---
title: Integrating MXRoute with MailCow
description:
published: true
date: 2026-02-25T21:04:37.135Z
tags:
editor: markdown
dateCreated: 2026-02-25T19:22:31.514Z
---

# MXRoute — Master Configuration Reference

## Overview

MXRoute serves two roles in Netgrimoire mail infrastructure:

- **Inbound gateway** — MX records for all domains point to MXRoute's commercial IPs, solving residential AT&T IP filtering by banks and financial institutions. MXRoute receives mail and forwards to Mailcow via per-address forwarders.
- **Outbound relay** — Mailcow sends all outbound mail through MXRoute via sender-dependent transports for improved deliverability.

**Mail flow:**

```
Inbound:  Internet → MXRoute (commercial IP) → Mailcow (192.168.5.16)
Outbound: Mailcow (192.168.5.16) → MXRoute SMTP relay → Internet
```

**Mailcow host:** 192.168.5.16
**MXRoute control panel:** confirm server hostname from MXRoute welcome email (e.g. `arrow.mxrouting.net`)
**MXRoute SMTP relay:** confirm from welcome email (e.g. `smtp.mxroute.com:587`)

---

## Architecture — Why Two Domains Per Hosted Domain

MXRoute forwarders require a valid destination email address. Forwarding `user@domain.com` back to `user@domain.com` creates a mail loop because MXRoute would look up the MX for `domain.com` and find itself. The solution is a `mail.domain.com` subdomain with its own MX record pointing directly to Mailcow. MXRoute forwards to `user@mail.domain.com`, Mailcow accepts and delivers, and an alias domain maps `@domain.com` back so users only ever see `@domain.com`.

```
domain.com      MX → MXRoute (public-facing, receives from internet)
mail.domain.com MX → 192.168.5.16 (internal, MXRoute forwards here)
```

---

## MXRoute Control Panel

**Login:** confirm URL from MXRoute welcome email
**Interface:** MXRoute 4.0 (new UI — not old DirectAdmin)

### Creating a Forwarder

1. Go to **Forwarders**
2. Click **Create New Forwarder**
3. Set **Forwarder Name:** `username` (domain shown automatically)
4. Set **Destination Type:** `Forward to Email(s)`
5. Set **Recipients:** `username@mail.domain.com`
6. Click **Create Forwarder**

> Recipients field accepts multiple addresses, comma or newline separated.

---

## Mailcow Configuration

### Adding a New Domain (One-Time Per Domain)

1. **Mail Setup → Domains → Add domain**
   - Domain: `mail.domain.com` (the subdomain Mailcow owns)
   - Leave relay settings as default

2. **Mail Setup → Alias Domains → Add alias domain**
   - Alias Domain: `domain.com`
   - Target Domain: `mail.domain.com`
   - This makes Mailcow accept and deliver mail for `@domain.com` to `@mail.domain.com` mailboxes

3. **Configuration → ARC/DKIM Keys**
   - Select domain `mail.domain.com`
   - Selector: `mailcow`
   - Key length: 2048
   - Generate and copy TXT record for DNS

4. **Configuration → Extra Postfix configuration → extra.cf**

```ini
# Trust MXRoute forwarding IPs — prevents SPF scoring on forwarded mail
mynetworks = 127.0.0.0/8 [::1]/128 192.168.5.0/24 69.167.160.0/19 198.54.120.0/22
```

Restart affected containers after saving.

### Adding a New Mailbox

1. **Mail Setup → Mailboxes → Add mailbox**
   - Username: `user`
   - Domain: `mail.domain.com`

2. **MXRoute control panel → Forwarders → Create New Forwarder**
   - Forwarder: `user@domain.com`
   - Destination: `user@mail.domain.com`

### Outbound Relay — Sender-Dependent Transports

One transport entry per domain. **Configuration → Routing → Sender-Dependent Transports**

| Domain | Relay Host | Username | Password |
|--------|-----------|----------|----------|
| pncharris.com | `[smtp.mxroute.com]:587` | relay@pncharris.com | H@rv3yD)G123 |
| wasted-bandwidth.net | `[smtp.mxroute.com]:587` | relay@wasted-bandwidth.net | dZ4yLYznVvgSJtqWZJFA |
| netgrimoire.com | `[smtp.mxroute.com]:587` | relay@netgrimoire.com | TVGCnJp9SxRbWU8EhkMw |
| florosafd.org | `[smtp.mxroute.com]:587` | relay@florosafd.org | 2Fe8XMyaeh6Z5dvdHYdq |
| gnarlypandaproductions.com | `[smtp.mxroute.com]:587` | relay@gnarlypandaproductions.com | vG5ZsUQhRWD2UyzLPsqA |

> Confirm the SMTP relay hostname from your MXRoute welcome email — substitute the actual hostname for the `smtp.mxroute.com` placeholder if different. For this account it is `heracles.mxrouting.net:587`, per the transport table in the install doc.

### Email Client Settings (All Domains)

| Setting | Value |
|---------|-------|
| IMAP server | `mail.domain.com` |
| IMAP port | `993` (SSL/TLS) |
| SMTP server | `mail.domain.com` |
| SMTP port | `465` (SSL/TLS) |
| Username | `user@domain.com` |

> Users log in with `@domain.com`. Mailcow resolves to the internal `@mail.domain.com` mailbox via alias domain — transparent to the user.

---

## DNS Reference — All Domains

### DNS Pattern (Apply to Every Domain)

Two sets of MX records are required — one for the public domain (pointing to MXRoute) and one for the mail subdomain (pointing directly to Mailcow).

| Type | Host | Value | Notes |
|------|------|-------|-------|
| A | `mail` | `YOUR_ATT_MAIL_IP` | Mailcow server — MXRoute forwards here |
| MX | `@` | MXRoute primary (priority 10) | From MXRoute welcome email |
| MX | `@` | MXRoute secondary (priority 20) | From MXRoute welcome email |
| MX | `mail` | `mail.domain.com` (priority 10) | Mailcow handles subdomain directly |
| CNAME | `imap` | `mail.domain.com` | Client autoconfiguration |
| CNAME | `smtp` | `mail.domain.com` | Client autoconfiguration |
| CNAME | `webmail` | `mail.domain.com` | Roundcube access |
| CNAME | `autodiscover` | `mail.domain.com` | Outlook autodiscover |
| CNAME | `autoconfig` | `mail.domain.com` | Thunderbird autoconfig |
| TXT | `@` | `v=spf1 ip4:YOUR_ATT_MAIL_IP include:mxroute.com -all` | SPF — both Mailcow direct and MXRoute relay |
| TXT | `mail` | `v=spf1 ip4:YOUR_ATT_MAIL_IP -all` | SPF for subdomain — Mailcow direct only |
| TXT | `_dmarc` | `v=DMARC1; p=reject; rua=mailto:admin@netgrimoire.com` | DMARC enforcement |
| TXT | `mailcow._domainkey.mail` | *(generated in Mailcow ARC/DKIM Keys)* | Mailcow DKIM selector |
| TXT | `x._domainkey` | *(from MXRoute control panel)* | MXRoute DKIM selector — confirm actual selector name |

---
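The pattern above can be turned into a quick record-set generator for new domains. This is a sketch only — `example.com`, `203.0.113.10`, and the `<mxroute-*>` placeholders are illustrative; real MXRoute hostnames come from the welcome email, and the DKIM TXT values still have to be pulled from Mailcow and MXRoute by hand.

```shell
# gen_records: print the standard record set for a domain.
# Usage: gen_records <domain> <mailcow-ip>
gen_records() {
  d="$1"; ip="$2"
  cat <<EOF
A      mail.$d            $ip
MX 10  $d                 <mxroute-primary>
MX 20  $d                 <mxroute-secondary>
MX 10  mail.$d            mail.$d
CNAME  imap.$d            mail.$d
CNAME  smtp.$d            mail.$d
CNAME  webmail.$d         mail.$d
CNAME  autodiscover.$d    mail.$d
CNAME  autoconfig.$d      mail.$d
TXT    $d                 "v=spf1 ip4:$ip include:mxroute.com -all"
TXT    mail.$d            "v=spf1 ip4:$ip -all"
TXT    _dmarc.$d          "v=DMARC1; p=reject; rua=mailto:admin@netgrimoire.com"
EOF
}

# Example (placeholder domain and IP):
gen_records example.com 203.0.113.10
```

Paste the output into the registrar's DNS panel, then add the two DKIM TXT records separately.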

### pncharris.com

| Type | Host | Value |
|------|------|-------|
| A | `mail` | YOUR_ATT_MAIL_IP |
| MX | `@` | MXRoute primary (priority 10) |
| MX | `@` | MXRoute secondary (priority 20) |
| MX | `mail` | `mail.pncharris.com` (priority 10) |
| CNAME | `imap` | `mail.pncharris.com` |
| CNAME | `smtp` | `mail.pncharris.com` |
| CNAME | `webmail` | `mail.pncharris.com` |
| CNAME | `autodiscover` | `mail.pncharris.com` |
| CNAME | `autoconfig` | `mail.pncharris.com` |
| TXT | `@` | `v=spf1 ip4:YOUR_ATT_MAIL_IP include:mxroute.com -all` |
| TXT | `mail` | `v=spf1 ip4:YOUR_ATT_MAIL_IP -all` |
| TXT | `_dmarc` | `v=DMARC1; p=reject; rua=mailto:admin@netgrimoire.com` |
| TXT | `mailcow._domainkey.mail` | *(from Mailcow ARC/DKIM Keys for mail.pncharris.com)* |
| TXT | `x._domainkey` | *(from MXRoute control panel)* |

**Mailcow domains:** `mail.pncharris.com` (primary), `pncharris.com` (alias domain → mail.pncharris.com)

**Relay credentials:**

| Account | Password | Notes |
|---------|----------|-------|
| relay@pncharris.com | H@rv3yD)G123 | Current relay account |
| forwarder@pncharris.com | *(see password history below)* | Legacy account |
| passer@pncharris.com | bBJtPhrGkHvvhxhukkae | Current |
| kylr pncharris | -,68,incTeR | |
| G4@rlyf1ng3r | *(Feb 14)* | |

**passer@pncharris.com password history** (most recent last):

- !5!,_\*zDyLEhhR4
- sh7dXWnTPqbkDGsTcwtn
- MY3V8p69b2HYksygxhXX
- RS6U2GU6rcYe3THKKgYx
- yzqNysrd73yzWptVEZ5H (current)

---

### wasted-bandwidth.net

| Type | Host | Value |
|------|------|-------|
| A | `mail` | YOUR_ATT_MAIL_IP |
| MX | `@` | MXRoute primary (priority 10) |
| MX | `@` | MXRoute secondary (priority 20) |
| MX | `mail` | `mail.wasted-bandwidth.net` (priority 10) |
| CNAME | `imap` | `mail.wasted-bandwidth.net` |
| CNAME | `smtp` | `mail.wasted-bandwidth.net` |
| CNAME | `webmail` | `mail.wasted-bandwidth.net` |
| CNAME | `autodiscover` | `mail.wasted-bandwidth.net` |
| CNAME | `autoconfig` | `mail.wasted-bandwidth.net` |
| TXT | `@` | `v=spf1 ip4:YOUR_ATT_MAIL_IP include:mxroute.com -all` |
| TXT | `mail` | `v=spf1 ip4:YOUR_ATT_MAIL_IP -all` |
| TXT | `_dmarc` | `v=DMARC1; p=reject; rua=mailto:admin@netgrimoire.com` |
| TXT | `mailcow._domainkey.mail` | *(from Mailcow ARC/DKIM Keys for mail.wasted-bandwidth.net)* |
| TXT | `x._domainkey` | *(from MXRoute control panel)* |

**Mailcow domains:** `mail.wasted-bandwidth.net` (primary), `wasted-bandwidth.net` (alias domain)

**Relay credentials:**

| Account | Password |
|---------|----------|
| relay@wasted-bandwidth.net | dZ4yLYznVvgSJtqWZJFA |

---

### netgrimoire.com

| Type | Host | Value |
|------|------|-------|
| A | `mail` | YOUR_ATT_MAIL_IP |
| MX | `@` | MXRoute primary (priority 10) |
| MX | `@` | MXRoute secondary (priority 20) |
| MX | `mail` | `mail.netgrimoire.com` (priority 10) |
| CNAME | `imap` | `mail.netgrimoire.com` |
| CNAME | `smtp` | `mail.netgrimoire.com` |
| CNAME | `webmail` | `mail.netgrimoire.com` |
| CNAME | `autodiscover` | `mail.netgrimoire.com` |
| CNAME | `autoconfig` | `mail.netgrimoire.com` |
| TXT | `@` | `v=spf1 ip4:YOUR_ATT_MAIL_IP include:mxroute.com -all` |
| TXT | `mail` | `v=spf1 ip4:YOUR_ATT_MAIL_IP -all` |
| TXT | `_dmarc` | `v=DMARC1; p=reject; rua=mailto:admin@netgrimoire.com` |
| TXT | `mailcow._domainkey.mail` | *(from Mailcow ARC/DKIM Keys for mail.netgrimoire.com)* |
| TXT | `x._domainkey` | *(from MXRoute control panel)* |

**Mailcow domains:** `mail.netgrimoire.com` (primary), `netgrimoire.com` (alias domain)

**Relay credentials:**

| Account | Password |
|---------|----------|
| relay@netgrimoire.com | TVGCnJp9SxRbWU8EhkMw |

---

### florosafd.org

| Type | Host | Value |
|------|------|-------|
| A | `mail` | YOUR_ATT_MAIL_IP |
| MX | `@` | MXRoute primary (priority 10) |
| MX | `@` | MXRoute secondary (priority 20) |
| MX | `mail` | `mail.florosafd.org` (priority 10) |
| CNAME | `imap` | `mail.florosafd.org` |
| CNAME | `smtp` | `mail.florosafd.org` |
| CNAME | `webmail` | `mail.florosafd.org` |
| CNAME | `autodiscover` | `mail.florosafd.org` |
| CNAME | `autoconfig` | `mail.florosafd.org` |
| TXT | `@` | `v=spf1 ip4:YOUR_ATT_MAIL_IP include:mxroute.com -all` |
| TXT | `mail` | `v=spf1 ip4:YOUR_ATT_MAIL_IP -all` |
| TXT | `_dmarc` | `v=DMARC1; p=reject; rua=mailto:admin@netgrimoire.com` |
| TXT | `mailcow._domainkey.mail` | *(from Mailcow ARC/DKIM Keys for mail.florosafd.org)* |
| TXT | `x._domainkey` | *(from MXRoute control panel)* |

**Mailcow domains:** `mail.florosafd.org` (primary), `florosafd.org` (alias domain)

**Relay credentials:**

| Account | Password |
|---------|----------|
| relay@florosafd.org | 2Fe8XMyaeh6Z5dvdHYdq |

---

### gnarlypandaproductions.com

| Type | Host | Value |
|------|------|-------|
| A | `mail` | YOUR_ATT_MAIL_IP |
| MX | `@` | MXRoute primary (priority 10) |
| MX | `@` | MXRoute secondary (priority 20) |
| MX | `mail` | `mail.gnarlypandaproductions.com` (priority 10) |
| CNAME | `imap` | `mail.gnarlypandaproductions.com` |
| CNAME | `smtp` | `mail.gnarlypandaproductions.com` |
| CNAME | `webmail` | `mail.gnarlypandaproductions.com` |
| CNAME | `roundcube` | `roundcube.netgrimoire.com` |
| CNAME | `autodiscover` | `mail.gnarlypandaproductions.com` |
| CNAME | `autoconfig` | `mail.gnarlypandaproductions.com` |
| TXT | `@` | `v=spf1 ip4:YOUR_ATT_MAIL_IP include:mxroute.com -all` |
| TXT | `mail` | `v=spf1 ip4:YOUR_ATT_MAIL_IP -all` |
| TXT | `_dmarc` | `v=DMARC1; p=reject; rua=mailto:admin@gnarlypandaproductions.com` |
| TXT | `mailcow._domainkey.mail` | *(from Mailcow ARC/DKIM Keys for mail.gnarlypandaproductions.com)* |
| TXT | `default._domainkey` | `v=DKIM1; t=s; p=MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA3D3vyPoBHB4eMSMq8HygVWHzYbketRX4yjk9wV4bdaar0/c89dK230FMOW6zVXEsY1sXKFk1kBxerHVw0wY8qnQyooHgINEQcEXrtB/x93Sl/cqBQXk+PHOIOymQwgni8WCUhCSnvunxXK8qX5f9J56qzd0/wpY2WSEHho+XrnQjc+c7HMvkcC3+nKJe59ZNgvQW/Y9B/L6zFDjAp+QOUYp9wwX4L+j1T4fQSygYxAJZ0aIoR8FsbOuXc38pht99HyUnYwH08HoK7xv3DL2BrVo3KVZ7xMe2S4YMxd1HkJz2evbV/ziNsJcKW/le3fFS7mza09yJXDLDcLOKLXbYUQIDAQAB` |
| TXT | `x._domainkey` | *(from MXRoute control panel — confirm actual selector)* |

**Mailcow domains:** `mail.gnarlypandaproductions.com` (primary), `gnarlypandaproductions.com` (alias domain)

**Relay credentials:**

| Account | Password |
|---------|----------|
| relay@gnarlypandaproductions.com | vG5ZsUQhRWD2UyzLPsqA |

---

### nucking-futz.com

New domain — see [Mail Setup — nucking-futz.com](./mail-setup-nucking-futz) for the full setup guide.

| Type | Host | Value |
|------|------|-------|
| A | `mail` | YOUR_ATT_MAIL_IP |
| MX | `@` | MXRoute primary (priority 10) |
| MX | `@` | MXRoute secondary (priority 20) |
| MX | `mail` | `mail.nucking-futz.com` (priority 10) |
| CNAME | `imap` | `mail.nucking-futz.com` |
| CNAME | `smtp` | `mail.nucking-futz.com` |
| CNAME | `webmail` | `mail.nucking-futz.com` |
| CNAME | `autodiscover` | `mail.nucking-futz.com` |
| CNAME | `autoconfig` | `mail.nucking-futz.com` |
| TXT | `@` | `v=spf1 ip4:YOUR_ATT_MAIL_IP include:mxroute.com -all` |
| TXT | `mail` | `v=spf1 ip4:YOUR_ATT_MAIL_IP -all` |
| TXT | `_dmarc` | `v=DMARC1; p=reject; rua=mailto:admin@netgrimoire.com` |
| TXT | `mailcow._domainkey.mail` | *(from Mailcow ARC/DKIM Keys for mail.nucking-futz.com)* |
| TXT | `x._domainkey` | *(from MXRoute control panel)* |

**Mailcow domains:** `mail.nucking-futz.com` (primary), `nucking-futz.com` (alias domain)

**Relay credentials:**

| Account | Password |
|---------|----------|
| relay@nucking-futz.com | *(set during MXRoute domain creation)* |

---

## Adding a New Domain — Checklist

Use this checklist every time a new domain is added to the stack.

**DNS (at registrar):**

- [ ] A record: `mail.newdomain.com` → YOUR_ATT_MAIL_IP
- [ ] MX records: `@` → MXRoute servers
- [ ] MX record: `mail` → `mail.newdomain.com`
- [ ] CNAME records: imap, smtp, webmail, autodiscover, autoconfig
- [ ] SPF TXT: `@` — includes both the ATT IP and `include:mxroute.com`
- [ ] SPF TXT: `mail` — ATT IP only
- [ ] DMARC TXT: `_dmarc`
- [ ] DKIM TXT: `mailcow._domainkey.mail` — after generating in Mailcow
- [ ] DKIM TXT: `x._domainkey` — after retrieving from MXRoute

**Mailcow:**

- [ ] Add domain: `mail.newdomain.com`
- [ ] Add alias domain: `newdomain.com` → `mail.newdomain.com`
- [ ] Generate DKIM key (selector: `mailcow`) for `mail.newdomain.com`
- [ ] Add sender-dependent transport for `newdomain.com`
- [ ] Add sender-dependent transport for `mail.newdomain.com`
- [ ] Create mailboxes as `user@mail.newdomain.com`

**MXRoute:**

- [ ] Add domain in control panel
- [ ] Create forwarder for each mailbox: `user@newdomain.com` → `user@mail.newdomain.com`
- [ ] Retrieve DKIM key for DNS

---

## Troubleshooting

### Mail not delivering inbound (not reaching Mailcow)

- Check that MX records for `@` point to MXRoute servers: `dig MX domain.com +short`
- Check that the MX record for the `mail` subdomain points to Mailcow: `dig MX mail.domain.com +short`
- Verify an MXRoute forwarder exists for the address in the control panel
- Check Mailcow logs: **Logs → Postfix** — look for the delivery attempt and any rejection reason
- Verify MXRoute IP ranges are in Mailcow `extra.cf` trusted networks

### Mail not delivering inbound (banks / financial institutions)

- This is the residential AT&T IP problem — confirm MX records point to MXRoute, not directly to your IP
- Run `dig MX domain.com +short` — it should show MXRoute servers, not your IP
- If MX still points to your ATT IP, update DNS and wait for propagation

### Outbound mail rejected or going to spam

- Verify a sender-dependent transport is configured for the domain in Mailcow
- Check that the relay credentials in the transport entry are current
- Run an SPF check: `dig TXT domain.com +short` — confirm `include:mxroute.com` is present
- Send a test to check-auth@verifier.port25.com for a full SPF/DKIM/DMARC report
- Run a message through https://mail-tester.com for a deliverability score

### DKIM verification failing

- Confirm both selectors are published in DNS:
  - `dig TXT mailcow._domainkey.mail.domain.com +short`
  - `dig TXT x._domainkey.domain.com +short` (substitute the actual MXRoute selector)
- Allow up to 48 hours for DNS propagation after adding records
- Verify selector names match exactly what Mailcow and MXRoute are using to sign

### DMARC failures

- SPF and DKIM must both pass and align with the From: domain
- Check DMARC reports sent to `admin@netgrimoire.com` — use [Postmark DMARC](https://dmarc.postmarkapp.com/) or [dmarcian.com](https://dmarcian.com) to parse raw XML reports
- Common cause: outbound mail going through MXRoute but `include:mxroute.com` missing from SPF

### Forwarded mail getting spam-scored

- Confirm MXRoute IP ranges are in Mailcow `extra.cf` mynetworks
- Check that the Mailcow trusted networks were saved and the containers restarted
- Verify SRS is working: in Roundcube open a forwarded message → More → View Source → `Return-Path` should begin with `SRS0=`
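
The Return-Path check can also be scripted against a saved `.eml` file. A minimal sketch — the sample message below is fabricated for illustration, and real SRS-rewritten addresses from MXRoute will differ in detail:

```shell
# is_srs: succeed if the message's Return-Path was SRS-rewritten.
is_srs() {
  sed -n 's/^Return-Path: <\(.*\)>$/\1/p' "$1" | grep -q '^SRS0='
}

# Fabricated sample of a forwarded message header:
cat > /tmp/msg.eml <<'EOF'
Return-Path: <SRS0=abcd=XY=example.org=alice@mxroute-server.example>
Subject: test
EOF

if is_srs /tmp/msg.eml; then echo "SRS rewrite present"; else echo "no SRS rewrite"; fi
```

If this reports no rewrite on a message that was forwarded through MXRoute, expect `reject_unlisted_sender` and SPF problems downstream.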

### New mailbox not receiving mail

- Two steps are required — confirm both were done:
  1. Mailbox created in Mailcow as `user@mail.domain.com`
  2. Forwarder created in MXRoute as `user@domain.com` → `user@mail.domain.com`
- If the MXRoute forwarder is missing, inbound mail silently goes nowhere

---

## Related Documentation

- [MailCow Configuration](./mailcow)
- [MailCow Security Hardening](./mailcow-security-hardening)
- [Mail Setup — nucking-futz.com](./mail-setup-nucking-futz)
- [OPNsense Firewall](./opnsense-firewall) — ATT_Mail static IP allocation
85 Keystone-Grimoire/Mail/MailCow-Overview.md Normal file
---
title: MailCow Overview
description: Self-hosted mail stack — architecture, domains, and key decisions
published: true
date: 2026-04-12T00:00:00.000Z
tags: keystone, mail, mailcow
editor: markdown
dateCreated: 2026-04-12T00:00:00.000Z
---

# MailCow Overview

MailCow runs on `docker4` (hermes, 192.168.5.16) via Docker Compose — not Swarm. It manages mail for all 8 domains.

---

## Architecture

| Component | Role |
|-----------|------|
| MailCow stack | Postfix, Dovecot, Rspamd, ClamAV, SOGo, Roundcube, nginx-mailcow |
| MXRoute | Inbound filtering + outbound relay for all domains |
| nginx-mailcow | Only MailCow container connected to the `netgrimoire` overlay |

**Critical:** Only `nginx-mailcow` is attached to the `netgrimoire` overlay network. All other MailCow containers stay on the internal `mailcow-network` bridge. Connecting other containers to the overlay causes Redis and PHP-FPM to resolve to wrong IPs, breaking the entire stack.

---

## Domains

`netgrimoire.com` · `pncharris.com` · `wasted-bandwidth.net` · `nucking-futz.com` · `florosafd.org` · `gnarlypandaproductions.com` · `pncfishandmore.com` · `pncharrisenterprises.com`

---

## Mail Flow

**Inbound:** MXRoute filters → forwards to MailCow → Dovecot delivers

**Outbound:** Postfix → MXRoute relay → recipient

**SRS rewriting:** MXRoute rewrites the envelope sender on forwarded mail. All domains using MXRoute inbound forwarding **must** have catch-all aliases configured in MailCow, or `reject_unlisted_sender` will reject the rewritten addresses.

---

## DKIM

Two selectors are required:

| Selector | Purpose |
|----------|---------|
| `mailcow` | Direct sends from MailCow |
| `mxroute` | MXRoute relay path |

---

## Key Limits (must match across all three)

Attachment size limits must be set identically in Postfix, Rspamd, and ClamAV. Changing only Postfix is insufficient — Rspamd and ClamAV reject large messages before Postfix processes them.
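
A sketch of where the three limits live, assuming a standard mailcow-dockerized layout — the file paths and the 50 MB value here are illustrative, not confirmed against this installation; check the mailcow docs for your version before editing:

```
# data/conf/postfix/extra.cf            (Postfix — bytes)
message_size_limit = 52428800

# data/conf/rspamd/local.d/options.inc  (Rspamd — bytes)
max_message = 52428800;

# data/conf/clamav/clamd.conf           (ClamAV)
StreamMaxLength 52M
```

After changing any of these, restart the affected containers so all three agree.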
---

## Roundcube SSL

Internal connections to Dovecot use self-signed certs. In `config.inc.php`:

```php
$config['imap_conn_options'] = ['ssl' => ['verify_peer' => false, 'verify_peer_name' => false]];
```

---

## Related Docs

- [MXRoute Integration](/Keystone-Grimoire/Mail/MXRoute-Integration)
- [Domain Setup](/Keystone-Grimoire/Mail/Domain-Setup)
- [MailCow Hardening](/Keystone-Grimoire/Mail/Hardening)
- [MailCow Backup](/Vault-Grimoire/Backups/MailCow-Backup)

---

## Pending

- [ ] Dedicated ATT_Mail static IP for outbound mail (OPNsense outbound NAT rule)
- [ ] Second DKIM selector (`mxroute`) validation
- [ ] MTA-STS validation (supported since Sep 2025 update)
60 Keystone-Grimoire/Network/Port-Assignments.md Normal file
---
title: Port Assignments
description:
published: true
date: 2026-02-20T04:21:52.996Z
tags:
editor: markdown
dateCreated: 2026-01-27T03:42:58.945Z
---

# Physical Paths

|Device|IP|Room|Home Infra|DLink|TPLink|Closet|Inter Rack|Rack|Ubiquiti|
|------|--|----|----------|-----|------|------|----------|----|--------|
|DLink |5.2 |Office | |1| | | | |1 |
|ZNAS |5.10 | | |2| | | | | |
|Docker3 | | | |3| | | | | |
|Docker5 | | | |4| | | | | |
|DockerPi1 | | | |5| | | | | |
|DNS |5.7 | | |6| | | | | |
|Docker4 | | | | | | |W:7 |19|4 |
|Docker2 | | Office | | | | |W:5 |17|11|
|Time Machine| | | | | | |W:6 |18|12|
|Deco Satt | |Room 1 |1 | | | | | |15|
|Deco AP | |Office(E)|10-24| | |24|W:9 |21|20|
|TP Link | | | | |1|22|W:10|22|23|
|OPNsense |3.4 | | | | |23|W:11|23|24|
|OPNsense-Cox| | | | | | | | | |
| | | | | | | | | | |
| | |Room 2 |2 | | | | |2 | |
| | |Room 3 |3 | | | | |3 | |
| | |Living(E)|4 | | | | |4 | |
| | |Living(W)|5 | | | | |5 | |
| | |Family |6 | | | | |6 | |
| | |Pantry |7 | | | | |7 | |
| | |Room 4 |8 | | | | |8 | |
| | |Gym |9 | | | | |9 | |
| | |Office(S)|11 | | | | |11| |
| | |Office(W)|12 | | | | |12| |
| | |Office(W)|13 | | | | |13| |
| | |Office(W)|14 | | | | |14| |
| | |Office(W)|15 | | | | |15| |
| | |Office(W)|16 | | | | |16| |
| | |Office(N)|17 | | | | |17| |
| | |Office(N)|18 | | | | |18| |
| | |Office(N)|19 | | | | |19| |
| | |Office(N)|20 | | | | |20| |

Note: For rooms, N/E/S/W are compass directions. For Inter Rack, W = wall, H = hallway.
49 Keystone-Grimoire/Network/Topology.md Normal file
---
title: Network Topology
description: Netgrimoire network layout — VLANs, subnets, routing
published: true
date: 2026-04-12T00:00:00.000Z
tags: keystone, network
editor: markdown
dateCreated: 2026-04-12T00:00:00.000Z
---

# Network Topology

## Subnets

| Subnet | Purpose |
|--------|---------|
| 192.168.3.0/24 | OPNsense / firewall management |
| 192.168.4.0/24 | ISPConfig / web hosting |
| 192.168.5.0/24 | Primary LAN — all Docker hosts |
| 192.168.8.0/24 | Pocket Grimoire (GL.iNet Beryl AX) |
| 192.168.32.0/24 | WireGuard VPN peers |

## WireGuard Peers

| Peer | IP | Device |
|------|----|--------|
| Obie | 192.168.32.2 | — |
| pncfishandmore | 192.168.32.3 | — |
| GLNet | 192.168.32.4 | GL.iNet router |
| PortaPotty | 192.168.32.5 | Pocket Grimoire laptop |
| GLNet | 192.168.32.6 | Second GL.iNet |
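
For reference, a server-side `[Peer]` entry for one of the peers in the table looks roughly like this — a sketch only; the public key is a placeholder and the actual config lives on the WireGuard server:

```
[Peer]
# PortaPotty — Pocket Grimoire laptop (192.168.32.5 from the table above)
PublicKey = <portapotty-public-key>
AllowedIPs = 192.168.32.5/32
```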
## DNS

Internal DNS runs on Technitium at `192.168.5.7` (`dns.netgrimoire.com`), behind Authentik.

All `*.netgrimoire.com` and `*.wasted-bandwidth.net` internal hostnames resolve via Technitium. Public DNS is managed via ISPConfig and the domain registrars.

## Docker Overlay Network

All Swarm services share the `netgrimoire` external overlay network (VIP mode). This is the only overlay network in use.

```
Name: netgrimoire
Driver: overlay
Mode: VIP (always — dnsrr is banned)
```

See [Docker Swarm Template](/Keystone-Grimoire/Docker/Swarm-Template) for attachment rules.
36 Keystone-Grimoire/Overview.md Normal file
---
title: Keystone Grimoire
description: Architecture — the dwarven runesmith's blueprints
published: true
date: 2026-04-12T00:00:00.000Z
tags: keystone, architecture
editor: markdown
dateCreated: 2026-04-12T00:00:00.000Z
---

# Keystone Grimoire



The Keystone Grimoire holds the architectural blueprints of Netgrimoire — how everything is wired together, how traffic flows, why decisions were made. Remove the keystone and the arch falls. This is the arch.

---

## Sections

| Section | Contents |
|---------|----------|
| [Hosts](/Keystone-Grimoire/Hosts/Host-Inventory) | Node inventory, roles, IPs, pinned services, hardware |
| [Network](/Keystone-Grimoire/Network/Topology) | Topology, VLANs, DNS, WireGuard, OpenVPN, port assignments |
| [Docker](/Keystone-Grimoire/Docker/Swarm-Template) | Swarm template standard, overlay network, label rules, volume paths |
| [Mail](/Keystone-Grimoire/Mail/MailCow-Overview) | MailCow, MXRoute, DKIM, SRS, domain setup, hardening |

---

## Key Principles

- **Caddy is the single entry point** for all web traffic. Every public service goes through Caddy. No exceptions.
- **Docker labels drive routing** — services register themselves with Caddy via `deploy.labels`. Static Caddyfile entries only for Compose stacks where label pickup is unreliable.
- **Never mix label and static routing for the same hostname** — caddy-docker-proxy merges them into a broken upstream pool.
- **Always VIP endpoint mode** — `endpoint_mode: dnsrr` is banned. It breaks internal DNS resolution.
- **ARM nodes are excluded by default** — all Swarm services carry `node.platform.arch != aarch64` and `node.platform.arch != arm` constraints unless explicitly ARM-specific.
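
The principles above combine into a service skeleton roughly like this — a sketch, not the canonical template (see the Swarm Template doc for the authoritative version); the service name, port, and hostname are placeholders:

```yaml
services:
  example:
    image: example/app:latest
    networks:
      - netgrimoire           # the single external overlay, VIP mode (no endpoint_mode: dnsrr)
    volumes:
      - /DockerVol/example:/config
    deploy:
      placement:
        constraints:
          - node.platform.arch != aarch64   # ARM excluded by default
          - node.platform.arch != arm
      labels:
        caddy: example.netgrimoire.com       # label-driven routing, no static Caddyfile entry
        caddy.reverse_proxy: "{{upstreams 80}}"
        diun.enable: "true"

networks:
  netgrimoire:
    external: true
```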
26 Netgrimoire/Audits/Calibre-web-2026-04-03.md Normal file
---
title: Audit - Calibre-web.yaml
description: Gremlin audit report 2026-04-03
published: true
date: 2026-04-03T02:30:36.844Z
tags: gremlin,audit
editor: markdown
dateCreated: 2026-04-03T02:30:36.844Z
---

# Audit Report — Calibre-web.yaml

**Date:** 2026-04-03
**File:** swarm/Calibre-web.yaml
**Type:** Docker Swarm
**Verdict:** FAIL

---

PASS: Homepage labels (homepage.group, homepage.name, homepage.icon, homepage.href, homepage.description) are all present and correctly configured.
FAIL: Caddy labels on exposed services are incorrect. The `caddy` label should be a single string value containing all domains separated by commas, not an array. The correct format would be "caddy=books.netgrimoire.com, books.pncharris.com".
PASS: Placement constraints (node.hostname) are correctly specified as 'znas'.
PASS: Volumes use the /DockerVol/<service> path convention.
PASS: Network references the external netgrimoire overlay.

VERDICT: FAIL
47 Netgrimoire/Audits/JellySeer-2026-04-03.md Normal file
---
title: Audit - JellySeer.yaml
description: Gremlin audit report 2026-04-03
published: true
date: 2026-04-03T02:31:31.742Z
tags: gremlin,audit
editor: markdown
dateCreated: 2026-04-03T02:31:31.742Z
---

# Audit Report — JellySeer.yaml

**Date:** 2026-04-03
**File:** swarm/JellySeer.yaml
**Type:** Docker Swarm
**Verdict:** FAIL

---

### SWARM AUDIT

1. **Homepage labels**:
   - `homepage.group`: "Media Search" — **PASS**
   - `homepage.name`: "JellySeer" — **PASS**
   - `homepage.icon`: "sh-jellyseerr.svg" — **PASS**
   - `homepage.href`: "https://requests.netgrimoire.com" — **PASS**
   - `homepage.description`: "Media Server" — **PASS**

2. **Uptime Kuma labels**:
   - `kuma.jellyseer.http.name`: "JellySeer" — **PASS**
   - `kuma.jellyseer.http.url`: "http://jellyseer:5055" — **PASS**

3. **Caddy labels on exposed services**:
   - `caddy: requests.netgrimoire.com` — **PASS**
   - `caddy.reverse_proxy: http://jellyseer:5055` — **PASS**

4. **Placement constraints**:
   - `node.hostname == docker5` — **PASS**

5. **Volumes use /DockerVol/<service> path convention**:
   - `/DockerVol/JellySeer/config:/app/config` — **PASS**
   - `/data/nfs/znas/Data/media:/data:shared` — **FAIL**: The volume `/data/nfs/znas/Data/media:/data:shared` does not follow the `/DockerVol/<service>` path convention. It is recommended to use a volume path that follows this convention for better organization and consistency.

6. **Network references external netgrimoire overlay**:
   - `netgrimoire` network — **PASS**

### VERDICT: FAIL
50 Netgrimoire/Audits/JellyStat-2026-04-03.md Normal file
---
title: Audit - JellyStat.yaml
description: Gremlin audit report 2026-04-03
published: true
date: 2026-04-03T02:32:31.251Z
tags: gremlin,audit
editor: markdown
dateCreated: 2026-04-03T02:32:31.251Z
---

# Audit Report — JellyStat.yaml

**Date:** 2026-04-03
**File:** swarm/JellyStat.yaml
**Type:** Docker Swarm
**Verdict:** FAIL

---

### Audit Results:

1. **Homepage labels**:
   - `homepage.group=Library` — **PASS**
   - `homepage.name=JellyStat` — **PASS**
   - `homepage.icon=jellystat.png` — **FAIL**: The icon file path should be relative to the service's context or a valid absolute URL.
     - **Fix**: Update the icon path to use a valid location.
   - `homepage.href=http://jellystat.netgrimoire.com` — **PASS**
   - `homepage.description=Jelly Stats` — **PASS**

2. **Uptime Kuma labels**:
   - The service does not appear to be Uptime Kuma; the labels are irrelevant here. **PASS**

3. **Caddy labels on exposed services**:
   - `caddy=jellystat.netgrimoire.com` — **PASS**
   - `caddy.reverse_proxy="{{upstreams 3000}}"` — **PASS**
   - **Note**: Ensure that the reverse proxy configuration is correct and functional within your Caddy setup.

4. **Placement constraints**:
   - `node.hostname == bruce` — **PASS**

5. **Volumes use /DockerVol/<service> path convention**:
   - `/DockerVol/jellystat/postgres-data` — **PASS**
   - `/DockerVol/jellystat/backup-data` — **PASS**

6. **Network references external netgrimoire overlay**:
   - `netgrimoire` — **PASS**

### VERDICT: FAIL

The audit has identified one issue that needs to be addressed. Specifically, the `homepage.icon` label should use a valid file path or URL for the icon image. Once this is resolved, the audit will pass.
31 Netgrimoire/Audits/README.md Normal file
---
title: Audit Reports
description: Gremlin-generated YAML compliance audit reports
published: true
date: 2026-04-12T00:00:00.000Z
tags: audits, gremlin
editor: markdown
dateCreated: 2026-04-12T00:00:00.000Z
---

# Audit Reports

Audit reports are auto-generated weekly by the Gremlin Forgejo Audit workflow (n8n, Monday 06:00). Each report checks a single compose YAML file against the Netgrimoire Docker Swarm template standard.

See [Gremlin Grimoire — Forgejo Audit Workflow](/Gremlin-Grimoire/Workflows/Forgejo-Audit) for full workflow documentation.

## What Gets Checked

- Homepage labels present on all services
- Uptime Kuma labels present on all services
- Caddy labels on exposed services
- Placement constraints (ARM exclusion defaults)
- Volume paths follow `/DockerVol/` or `/data/nfs/znas/Docker/` convention
- No forbidden fields (`version:`, `container_name:`, `restart:`, `depends_on:`)
- `endpoint_mode: dnsrr` not used (always VIP)
- `diun.enable: "true"` present
- Network references `netgrimoire` external overlay

## Report Files

Reports follow the naming convention `<service>-<date>.md`. Files here are committed automatically by n8n — do not edit manually.
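
A few of the checks above can be approximated with plain grep for a quick local pre-flight. This is a sketch only — the authoritative audit is the n8n workflow, and `lint_compose` is a hypothetical helper, not part of it:

```shell
# lint_compose: grep-based approximation of a few Gremlin rules.
lint_compose() {
  f="$1"; fail=0
  # Forbidden fields and the banned endpoint mode:
  for bad in 'version:' 'container_name:' 'restart:' 'depends_on:' 'dnsrr'; do
    grep -q "$bad" "$f" && { echo "FAIL: forbidden '$bad'"; fail=1; }
  done
  # Required references:
  grep -q 'netgrimoire' "$f" || { echo "FAIL: no netgrimoire network"; fail=1; }
  return $fail
}

# Self-test against a deliberately non-compliant fragment:
cat > /tmp/bad.yaml <<'EOF'
version: "3.8"
services:
  app:
    image: example/app
    restart: always
EOF
lint_compose /tmp/bad.yaml || echo "verdict: FAIL"
```

A passing file produces no output and returns 0; anything printed is a rule violation.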
107 Netgrimoire/Audits/SQL-mgmt-2026-04-03.md Normal file
---
|
||||
title: Audit - SQL-mgmt.yaml
|
||||
description: Gremlin audit report 2026-04-03
|
||||
published: true
|
||||
date: 2026-04-03T02:34:04.814Z
|
||||
tags: gremlin,audit
|
||||
editor: markdown
|
||||
dateCreated: 2026-04-03T02:34:04.814Z
|
||||
---
|
||||
|
||||
# Audit Report — SQL-mgmt.yaml
|
||||
|
||||
**Date:** 2026-04-03
|
||||
**File:** swarm/SQL-mgmt.yaml
|
||||
**Type:** Docker Swarm
|
||||
**Verdict:** FAIL
|
||||
|
||||
---
|
||||
|
||||
### SWARM AUDIT REPORT
|
||||
|
||||
#### Homepage Labels
|
||||
1. **PASS**: `phpmyadmin`
|
||||
- `homepage.group=Management`
|
||||
- `homepage.name=PHPMyadmin`
|
||||
- `homepage.icon=phpmyadmin.png`
|
||||
- `homepage.href=http://phpmyadmin.netgrimoire.com`
|
||||
- `homepage.description=MySQL Manager`
|
||||
|
||||
2. **PASS**: `phppgadmin`
|
||||
- `homepage.group=Management`
|
||||
- `homepage.name=PHPpgmyadmin`
|
||||
- `homepage.icon=phppgmyadmin.png`
|
||||
- `homepage.href=http://phppgmyadmin.netgrimoire.com`
|
||||
- `homepage.description=Postgres Manager`
|
||||
|
||||
#### Uptime Kuma Labels
1. **FAIL**: `phpmyadmin` and `phppgadmin`
   - Missing labels: `kuma.msql.http.name`/`kuma.msql.http.url` on `phpmyadmin`, and `kuma.psql.http.name`/`kuma.psql.http.url` on `phppgadmin`.

2. **FIX**:

```yaml
phpmyadmin:
  deploy:
    labels:
      ...
      - kuma.msql.http.name=PHPMyadmin
      - kuma.msql.http.url=http://phpmyadmin:80
      ...

phppgadmin:
  deploy:
    labels:
      ...
      - kuma.psql.http.name=PHPpgadmin
      - kuma.psql.http.url=http://phppgadmin:80
      ...
```
#### Caddy Labels on Exposed Services
1. **PASS**: `phpmyadmin`
   - `caddy=phpmyadmin.netgrimoire.com`
   - `caddy.reverse_proxy="{{upstreams 80}}"`

2. **PASS**: `phppgadmin`
   - `caddy=phppgmyadmin.netgrimoire.com`
   - `caddy.reverse_proxy="{{upstreams 80}}"`

#### Placement Constraints
1. **FAIL**: Both services are missing placement constraints (`node.hostname`).

2. **FIX**:

```yaml
phpmyadmin:
  deploy:
    labels:
      ...
    placement:
      constraints:
        - node.hostname == <desired-hostname>

phppgadmin:
  deploy:
    labels:
      ...
    placement:
      constraints:
        - node.hostname == <desired-hostname>
```

#### Volumes Use /DockerVol/<service> Path Convention
1. **FAIL**: Both services are missing volume configurations.

2. **FIX**:

```yaml
phpmyadmin:
  volumes:
    - /DockerVol/phpmyadmin:/var/lib/mysql

phppgadmin:
  volumes:
    - /DockerVol/phppgadmin:/var/lib/postgresql/data
```

#### Network References External `netgrimoire` Overlay
1. **PASS**: Both services correctly reference the external network `netgrimoire`.

### VERDICT: FAIL
47
Netgrimoire/Audits/authelia-2026-04-03.md
Normal file
---
title: Audit - authelia.yaml
description: Gremlin audit report 2026-04-03
published: true
date: 2026-04-03T02:34:59.760Z
tags: gremlin,audit
editor: markdown
dateCreated: 2026-04-03T02:34:59.760Z
---

# Audit Report — authelia.yaml

**Date:** 2026-04-03
**File:** swarm/authelia.yaml
**Type:** Docker Swarm
**Verdict:** FAIL

---

**Homepage labels:**
- **PASS**: homepage.group=Management
- **PASS**: homepage.name=Authelia
- **PASS**: homepage.icon=authelia.png
- **PASS**: homepage.href=https://login.wasted-bandwidth.net
- **PASS**: homepage.description=SSO / Forward-Auth

**Uptime Kuma labels:**
- **PASS**: kuma.authelia.http.name="Authelia"
- **PASS**: kuma.authelia.http.url=http://authelia:9091

**Caddy labels on exposed services:**
- **PASS**: caddy=login.wasted-bandwidth.net
- **PASS**: caddy.reverse_proxy={{upstreams 9091}}

**Placement constraints:**
- **FAIL**: Both 'authelia' and 'redis' are constrained to run on the node 'nas', but there is no guarantee that 'nas' will always be available. Consider using a more flexible constraint.
  - Fix: Change `constraints: - node.hostname == nas` to a more general placement strategy.
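As a sketch of one more flexible strategy (the `db` node label is illustrative, not something already defined in this swarm): label every node that may host the stack, then constrain on the label rather than on a single hostname:

```yaml
# On a manager node, label each eligible host first (label name is an assumption):
#   docker node update --label-add db=true nas
#   docker node update --label-add db=true <backup-host>
services:
  authelia:
    deploy:
      placement:
        constraints:
          - node.labels.db == true   # any labeled node qualifies, not only 'nas'
```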
**Volumes use /DockerVol/<service> path convention:**
- **PASS**: `/DockerVol/authelia/config:/config`
- **PASS**: `/DockerVol/authelia/secrets:/secrets`
- **PASS**: `/DockerVol/authelia/redis:/data`

**Network references external netgrimoire overlay:**
- **PASS**: `networks: - netgrimoire`

**VERDICT: FAIL**
48
Netgrimoire/Audits/authentik-2026-04-03.md
Normal file
---
title: Audit - authentik.yaml
description: Gremlin audit report 2026-04-03
published: true
date: 2026-04-03T02:36:24.241Z
tags: gremlin,audit
editor: markdown
dateCreated: 2026-04-03T02:36:24.241Z
---

# Audit Report — authentik.yaml

**Date:** 2026-04-03
**File:** swarm/authentik.yaml
**Type:** Docker Swarm
**Verdict:** FAIL

---

**SWARM AUDIT REPORT**

1. **Homepage labels**
   - `homepage.group`: PASS
   - `homepage.name`: PASS
   - `homepage.icon`: PASS
   - `homepage.href`: PASS
   - `homepage.description`: PASS

2. **Uptime Kuma labels**
   - No Uptime Kuma service found, hence no labels to check.

3. **Caddy labels on exposed services**
   - `caddy=auth.netgrimoire.com` and `caddy.reverse_proxy="{{upstreams 9000}}"`: PASS

4. **Placement constraints**
   - `node.hostname == znas`: PASS for all services

5. **Volumes use /DockerVol/<service> path convention**
   - `/DockerVol/Authentik/Postgres`, `/DockerVol/Authentik/redis`, `/DockerVol/Authentik/media`, `/DockerVol/Authentik/custom-templates`: PASS
   - `/var/run/docker.sock` for `worker` service: FAIL

6. **Network references external netgrimoire overlay**
   - `netgrimoire` network is referenced by both `authentik` and `worker` services, and it is set to `external: true`: PASS

**Fixes Required**
- Update the `worker` service volume `/var/run/docker.sock:/var/run/docker.sock` to match the convention by using a Docker volume or bind mount with `/DockerVol/Authentik/docker.sock`.

**VERDICT: FAIL**
44
Netgrimoire/Audits/bazarr-2026-04-03.md
Normal file
---
title: Audit - bazarr.yaml
description: Gremlin audit report 2026-04-03
published: true
date: 2026-04-03T02:37:15.344Z
tags: gremlin,audit
editor: markdown
dateCreated: 2026-04-03T02:37:15.344Z
---

# Audit Report — bazarr.yaml

**Date:** 2026-04-03
**File:** swarm/bazarr.yaml
**Type:** Docker Swarm
**Verdict:** FAIL

---

### Audit Report for `swarm/bazarr.yaml`

#### Homepage Labels
- **PASS**: homepage.group, homepage.name, homepage.icon, homepage.href, homepage.description are all correctly defined.

#### Uptime Kuma Labels
- **FAIL**: No Uptime Kuma labels found. Expected labels like `kuma.bazarr.http.name` and `kuma.bazarr.http.url`.
- **Fix**: Add the necessary labels for Uptime Kuma integration.
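A minimal sketch of the missing labels (the `bazarr` monitor slug is an assumption; port 6767 matches the Bazarr port referenced elsewhere in this report):

```yaml
bazarr:
  deploy:
    labels:
      - kuma.bazarr.http.name=Bazarr
      - kuma.bazarr.http.url=http://bazarr:6767
```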
#### Caddy Labels on Exposed Services
- **PASS**: caddy label is correctly defined as `caddy=bazarr.netgrimoire.com`.
- **FAIL**: The reverse proxy configuration in the Caddy label is incorrect. It should use `{{upstreams bazarr:6767}}` instead of `{{upstreams 6767}}`.
- **Fix**: Correct the reverse proxy configuration to `caddy.reverse_proxy: "{{upstreams bazarr:6767}}"`.

#### Placement Constraints
- **PASS**: The node hostname constraint is correctly defined as `node.hostname == docker4`.

#### Volumes Use /DockerVol/<service> Path Convention
- **FAIL**: Volume paths do not follow the `/DockerVol/<service>` convention.
- **Fix**: Move the host-side paths under the convention, e.g. map the config directory to `/DockerVol/bazarr/config`.

#### Network References External Netgrimoire Overlay
- **PASS**: The network reference is correctly set to an external `netgrimoire` overlay.

### VERDICT: FAIL
50
Netgrimoire/Audits/beets-2026-04-03.md
Normal file
---
title: Audit - beets.yaml
description: Gremlin audit report 2026-04-03
published: true
date: 2026-04-03T02:38:00.938Z
tags: gremlin,audit
editor: markdown
dateCreated: 2026-04-03T02:38:00.938Z
---

# Audit Report — beets.yaml

**Date:** 2026-04-03
**File:** swarm/beets.yaml
**Type:** Docker Swarm
**Verdict:** FAIL
---

### Audit Summary:

1. **Homepage labels**:
   - `homepage.group`: PASSED
   - `homepage.name`: PASSED
   - `homepage.icon`: PASSED
   - `homepage.href`: PASSED
   - `homepage.description`: PASSED

2. **Uptime Kuma labels**:
   - Not applicable as Uptime Kuma is not referenced in this configuration.

3. **Caddy labels on exposed services**:
   - `caddy=beets.netgrimoire.com`: PASSED
   - `caddy.reverse_proxy`: PASSED

4. **Placement constraints**:
   - `node.hostname == nas`: PASSED

5. **Volumes use /DockerVol/<service> path convention**:
   - `/DockerVol/beets/config`: PASSED
   - `/data/nfs/Baxter/Data/media/music/Collection`: FAIL (does not follow the path convention)
     - Fix: Update to `/DockerVol/beets/music`
   - `/data/nfs/Baxter/Data/media/music/ingest`: FAIL (does not follow the path convention)
     - Fix: Update to `/DockerVol/beets/downloads`

6. **Network references external netgrimoire overlay**:
   - `netgrimoire` network: PASSED

### VERDICT: FAIL
44
Netgrimoire/Audits/beszel-2026-04-03.md
Normal file
---
title: Audit - beszel.yaml
description: Gremlin audit report 2026-04-03
published: true
date: 2026-04-03T02:38:47.782Z
tags: gremlin,audit
editor: markdown
dateCreated: 2026-04-03T02:38:47.782Z
---

# Audit Report — beszel.yaml

**Date:** 2026-04-03
**File:** swarm/beszel.yaml
**Type:** Docker Swarm
**Verdict:** FAIL

---

1. **Homepage labels:** All homepage labels are present.
   - `homepage.group=Monitoring`
   - `homepage.name=Beszel`
   - `homepage.icon=beszel.png`
   - `homepage.href=https://beszel.netgrimoire.com`
   - `homepage.description=Beszel Service`

2. **Uptime Kuma labels:** The Uptime Kuma labels are not provided in the deploy block; they should be checked within the service's configuration.

3. **Caddy labels on exposed services:**
   - `caddy=beszel.netgrimoire.com`
   - `caddy.import=authentik`
   - `caddy.reverse_proxy="{{upstreams 8090}}"`

4. **Placement constraints:** The constraint is based on the node label, not the node hostname.
   - Current: `constraints: ["node.labels.general == true"]`
   - Fix: Update to use `node.hostname` if necessary.

5. **Volumes use /DockerVol/<service> path convention:**
   - Volume path: `/data/nfs/znas/Docker/beszel:/beszel_data`
   - Fix: The volume does not follow the `/DockerVol/<service>` pattern; update to use a standard Docker volume path like `/DockerVol/beszel`.

6. **Network references external netgrimoire overlay:** The network is correctly referenced as `netgrimoire`, which is an external overlay.

**VERDICT: FAIL**
46
Netgrimoire/Audits/beszel_agents-2026-04-03.md
Normal file
---
title: Audit - beszel_agents.yaml
description: Gremlin audit report 2026-04-03
published: true
date: 2026-04-03T02:40:11.085Z
tags: gremlin,audit
editor: markdown
dateCreated: 2026-04-03T02:40:11.085Z
---

# Audit Report — beszel_agents.yaml

**Date:** 2026-04-03
**File:** swarm/beszel_agents.yaml
**Type:** Docker Swarm
**Verdict:** FAIL

---

**SWARM AUDIT**

1. **Homepage labels**: No homepage labels are specified in the file.
   - **Fix**: Add `homepage.group`, `homepage.name`, `homepage.icon`, `homepage.href`, and `homepage.description` to your Docker Swarm configuration.
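As a sketch for one of the agents (group, icon, and description values are assumptions, mirroring the main Beszel service in this stack):

```yaml
beszel-agent-docker2:
  deploy:
    labels:
      - homepage.group=Monitoring
      - homepage.name=Beszel Agent (docker2)
      - homepage.icon=beszel.png
      - homepage.href=https://beszel.netgrimoire.com
      - homepage.description=Beszel metrics agent
```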
2. **Uptime Kuma labels**: No Uptime Kuma labels are specified in the file.
   - **Fix**: If you are using Uptime Kuma, add the appropriate labels as per its documentation.

3. **Caddy labels on exposed services**:
   - `beszel-agent-docker2`, `beszel-agent-docker3`, `beszel-agent-docker4`, `beszel-agent-znas`, `beszel-agent-dockerpi1`: No Caddy labels are specified.
   - **Fix**: Add Caddy labels to specify the domain and reverse proxy configuration for these services.

4. **Placement constraints**:
   - All services use `node.hostname` placement constraints.
   - **PASS**: This is correctly configured.

5. **Volumes use /DockerVol/<service> path convention**:
   - No volumes follow this specific path convention in the file.
   - **Fix**: Ensure that all volumes are specified with paths like `/DockerVol/beszel-agent-docker2`, `/DockerVol/beszel-agent-docker3`, etc.

6. **Network references external netgrimoire overlay**:
   - All services reference an external `netgrimoire` network.
   - **PASS**: This is correctly configured.

**VERDICT: FAIL**

The file fails the audit due to missing homepage, Uptime Kuma, and Caddy labels, and volumes not following the specified path convention.
29
Netgrimoire/Audits/caddy-1-2026-04-03.md
Normal file
---
title: Audit - caddy-1.yaml
description: Gremlin audit report 2026-04-03
published: true
date: 2026-04-03T03:30:38.025Z
tags: gremlin,audit
editor: markdown
dateCreated: 2026-04-03T03:30:38.025Z
---

# Audit Report — caddy-1.yaml

**Date:** 2026-04-03
**File:** swarm/stack/caddy/caddy-1.yaml
**Type:** Docker Swarm
**Verdict:** FAIL

---

PASS Items:
1. The Caddy labels `caddy=<domain>` and `caddy.reverse_proxy` are present on the exposed service.
2. Placement constraints for node.hostname are correctly specified with `node.hostname == znas`.
3. Volumes use the `/export/Docker/caddy` path convention.
4. The network reference is to an external overlay named `netgrimoire`.

FAIL Items:
1. No homepage labels (`homepage.group`, `homepage.name`, `homepage.icon`, `homepage.href`, `homepage.description`) are present in the configuration.

VERDICT: FAIL
47
Netgrimoire/Audits/caddy-2026-04-03.md
Normal file
---
title: Audit - caddy.yaml
description: Gremlin audit report 2026-04-03
published: true
date: 2026-04-03T03:31:34.043Z
tags: gremlin,audit
editor: markdown
dateCreated: 2026-04-03T03:31:34.043Z
---

# Audit Report — caddy.yaml

**Date:** 2026-04-03
**File:** swarm/stack/caddy/caddy.yaml
**Type:** Docker Swarm
**Verdict:** FAIL

---

**SWARM AUDIT REPORT**

1. **Homepage labels**: There are no homepage-related labels in the provided YAML file.
   - **FAIL**: Missing homepage labels.

2. **Uptime Kuma labels**: There are no Uptime Kuma-related labels in the provided YAML file.
   - **FAIL**: Missing Uptime Kuma labels.

3. **Caddy labels on exposed services**:
   - **PASS**: Caddy service does not have any specific labels as per the provided configuration.

4. **Placement constraints**:
   - **PASS**: Both `caddy` and `crowdsec` services are constrained to run on the node with hostname `znas`.

5. **Volumes use /DockerVol/<service> path convention**:
   - **FAIL**: The volumes are not using the `/DockerVol/<service>` path convention.
     - `/var/run/docker.sock`
     - `/export/Docker/caddy/Caddyfile`
     - `/export/Docker/caddy:/data`
     - `caddy-logs`
     - `crowdsec-db`

6. **Network references external netgrimoire overlay**:
   - **PASS**: The services reference the externally created `netgrimoire` and `vpn` networks.

**VERDICT: FAIL**
The provided YAML file contains several issues that need to be addressed to meet all the audit criteria, including missing homepage and Uptime Kuma labels and non-conforming volume paths.
52
Netgrimoire/Audits/cloudcmd-2026-04-03.md
Normal file
---
title: Audit - cloudcmd.yaml
description: Gremlin audit report 2026-04-03
published: true
date: 2026-04-03T02:40:56.554Z
tags: gremlin,audit
editor: markdown
dateCreated: 2026-04-03T02:40:56.554Z
---

# Audit Report — cloudcmd.yaml

**Date:** 2026-04-03
**File:** swarm/cloudcmd.yaml
**Type:** Docker Swarm
**Verdict:** FAIL

---

### Swarm Audit Report for `cloudcmd.yaml`

#### 1. Homepage Labels:
- **PASS**: homepage.group=Application
- **PASS**: homepage.name=Cloud Commander
- **PASS**: homepage.icon=cloudcmd.png
- **FAIL**: homepage.href=http://commander.netgrimoire.com - Incorrect URL, should be relative to the service.

**Fix**: Update `homepage.href` to a relative path such as `/`.
#### 2. Uptime Kuma Labels:
- **FAIL**: `kuma.cloud.http.name="Cloudcmd"` is present, but the matching `kuma.cloud.http.url` label is missing.

**Fix**: Add the missing `kuma.cloud.http.url` label, and drop the quotes from the name value: `kuma.cloud.http.name=Cloudcmd`.
#### 3. Caddy Labels on Exposed Services:
- **PASS**: caddy=commander.netgrimoire.com
- **PASS**: caddy.reverse_proxy="{{upstreams 8000}}"

#### 4. Placement Constraints:
- **FAIL**: node.hostname == nas - Ensure that `nas` is correctly configured and available in the Swarm.

**Fix**: Verify that the hostname `nas` is correct and exists within your Swarm cluster.

#### 5. Volumes Use /DockerVol/<service> Path Convention:
- **FAIL**: ~:/root - Home directory path should use a Docker volume convention.

**Fix**: Replace `~:/root` with `/DockerVol/cloudcmd/root`.

#### 6. Network References External netgrimoire Overlay:
- **PASS**: References external network netgrimoire

### VERDICT: FAIL
48
Netgrimoire/Audits/comixed-2026-04-03.md
Normal file
---
title: Audit - comixed.yaml
description: Gremlin audit report 2026-04-03
published: true
date: 2026-04-03T02:41:45.208Z
tags: gremlin,audit
editor: markdown
dateCreated: 2026-04-03T02:41:45.208Z
---

# Audit Report — comixed.yaml

**Date:** 2026-04-03
**File:** swarm/comixed.yaml
**Type:** Docker Swarm
**Verdict:** FAIL

---

**Audit Results for `swarm/comixed.yaml`:**

1. **Homepage Labels:**
   - **PASS**: `homepage.group`, `homepage.name`, `homepage.href`
     - Values are correctly set.
   - **FAIL**: `homepage.icon`, `homepage.description`
     - Missing values. Set these to appropriate values.
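A sketch of the two missing labels (the icon filename and description text are assumptions; check the homepage icon set for the actual filename):

```yaml
comixed:
  deploy:
    labels:
      - homepage.icon=comixed.png
      - homepage.description=Comic book library manager
```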
2. **Uptime Kuma Labels:**
   - **FAIL**: Uptime Kuma labels not found.
     - No labels related to Uptime Kuma are present in the deployment block.

3. **Caddy Labels on Exposed Services:**
   - **PASS**: `caddy=<domain>`, `caddy.reverse_proxy`
     - Correctly configured for domain `comics.netgrimoire.com` and reverse proxy.

4. **Placement Constraints:**
   - **PASS**: `node.hostname == nas`
     - Constraint correctly placed to run on the node named `nas`.

5. **Volumes Use `/DockerVol/<service>` Path Convention:**
   - **PASS**: All volumes use the specified path convention (`/DockerVol/comixed/config`).

6. **Network References External Netgrimoire Overlay:**
   - **PASS**: The network `netgrimoire` is correctly referenced as external.

**VERDICT: FAIL**

The audit identified issues with the homepage labels and the absence of Uptime Kuma labels. These should be addressed to ensure compliance with the audit criteria.
47
Netgrimoire/Audits/commander-2026-04-03.md
Normal file
---
title: Audit - commander.yaml
description: Gremlin audit report 2026-04-03
published: true
date: 2026-04-03T02:42:30.634Z
tags: gremlin,audit
editor: markdown
dateCreated: 2026-04-03T02:42:30.634Z
---

# Audit Report — commander.yaml

**Date:** 2026-04-03
**File:** swarm/commander.yaml
**Type:** Docker Swarm
**Verdict:** FAIL

---

**Audit Results:**

1. **Homepage labels:**
   - **PASS:** homepage.group=Applications
   - **PASS:** homepage.name=Cloud Commander
   - **PASS:** homepage.icon=mdi-cloud
   - **FAIL:** homepage.href is incorrect. The correct URL should be https://cloudcmd.netgrimoire.com instead of https://commander.netgrimoire.com.
   - **FAIL:** homepage.description is missing.

2. **Uptime Kuma labels:**
   - **FAIL:** Uptime Kuma labels are not present in the provided YAML file.

3. **Caddy labels on exposed services:**
   - **PASS:** caddy=commander.netgrimoire.com
   - **FAIL:** caddy.reverse_proxy is missing an upstreams configuration, which should reference the service port (e.g., {{upstreams 8000}}).

4. **Placement constraints:**
   - **PASS:** node.hostname=nas

5. **Volumes use /DockerVol/<service> path convention:**
   - **FAIL:** Volumes are using relative paths instead of the /DockerVol/<service> convention. Example volumes should be `/DockerVol/cloudcmd:/root` and `/DockerVol/cloudcmd:/mnt/fs`.

6. **Network references external netgrimoire overlay:**
   - **PASS:** Network references an external netgrimoire overlay.

**VERDICT: FAIL**

One or more of the items failed during the audit, which prevents a full PASS verdict.
54
Netgrimoire/Audits/configarr-2026-04-03.md
Normal file
---
title: Audit - configarr.yaml
description: Gremlin audit report 2026-04-03
published: true
date: 2026-04-03T02:43:33.261Z
tags: gremlin,audit
editor: markdown
dateCreated: 2026-04-03T02:43:33.261Z
---

# Audit Report — configarr.yaml

**Date:** 2026-04-03
**File:** swarm/configarr.yaml
**Type:** Docker Swarm
**Verdict:** FAIL

---

### SWARM AUDIT RESULTS

1. **Homepage labels**:
   - `homepage.group`: "Jolly Roger" (PASS)
   - `homepage.name`: "Configarr" (PASS)
   - `homepage.icon`: "si-config" (PASS)
   - `homepage.href`: "https://configarr.netgrimoire.com" (PASS)
   - `homepage.description`: "Automatically sync TRaSH formats & configs" (PASS)

2. **Uptime Kuma labels**:
   - Missing Uptime Kuma labels (`kuma.configarr.http.name` and `kuma.configarr.http.url`). These are critical for monitoring and should be added.

3. **Caddy labels on exposed services**:
   - `caddy=configarr.netgrimoire.com` (PASS)
   - `caddy.reverse_proxy: "{{upstreams 8000}}"` (PASS)

4. **Placement constraints**:
   - No placement constraints specified (`node.hostname`). This is acceptable if there are no specific node requirements.
5. **Volumes use /DockerVol/<service> path convention**:
   - Volumes do not follow the `/DockerVol/<service>` path convention; the host-side paths should move under `/DockerVol/configarr`, for example:

```yaml
volumes:
  - /DockerVol/configarr/config:/app/config
  - /DockerVol/configarr/repos:/app/repos
  - /DockerVol/configarr/cfs:/app/cfs
  - /DockerVol/configarr/templates:/app/templates
```
6. **Network references external netgrimoire overlay**:
   - Network `netgrimoire` is correctly referencing an external overlay (PASS)

### VERDICT: FAIL

The configuration includes critical issues that need to be addressed for it to meet the required standards, specifically missing Uptime Kuma labels and incorrect volume paths.
26
Netgrimoire/Audits/dailytxt-2026-04-03.md
Normal file
---
title: Audit - dailytxt.yaml
description: Gremlin audit report 2026-04-03
published: true
date: 2026-04-03T02:44:52.573Z
tags: gremlin,audit
editor: markdown
dateCreated: 2026-04-03T02:44:52.573Z
---

# Audit Report — dailytxt.yaml

**Date:** 2026-04-03
**File:** swarm/dailytxt.yaml
**Type:** Docker Compose
**Verdict:** FAIL

---

**PASS**: The DailyTxT service is configured to expose port 8000 on localhost, which matches an entry in the Caddyfile.

**FAIL**: Default password detected for `ADMIN_PASSWORD`. It's strongly recommended to change this to a strong, unique password.
**FAIL**: The `SECRET_TOKEN` environment variable is left as `...`, indicating it's not set. A secret token should be generated using a secure method and included here.
**FAIL**: The `ALLOW_REGISTRATION` setting is enabled, which can expose the service to unauthorized access. This should be disabled in production environments.
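The three findings map onto environment settings roughly like this sketch (the variable names come from the report; exact accepted values may differ, so check the DailyTxT documentation):

```yaml
dailytxt:
  environment:
    - ADMIN_PASSWORD=<strong-unique-password>   # replace the default
    - SECRET_TOKEN=<generated-secret>           # e.g. from: openssl rand -hex 32
    - ALLOW_REGISTRATION=false                  # disable open sign-ups in production
```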
VERDICT: FAIL
52
Netgrimoire/Audits/database-2026-04-03.md
Normal file
---
title: Audit - database.yaml
description: Gremlin audit report 2026-04-03
published: true
date: 2026-04-03T02:45:35.594Z
tags: gremlin,audit
editor: markdown
dateCreated: 2026-04-03T02:45:35.594Z
---

# Audit Report — database.yaml

**Date:** 2026-04-03
**File:** swarm/database.yaml
**Type:** Docker Swarm
**Verdict:** FAIL

---

### Audit of `swarm/database.yaml`

#### 1. Homepage labels: homepage.group, homepage.name, homepage.icon, homepage.href, homepage.description

**PASS**: All homepage labels are present and correctly configured.
#### 2. Uptime Kuma labels: kuma.<name>.http.name, kuma.<name>.http.url
**FAIL**: No Uptime Kuma services or labels found in the configuration.
- **Fix**: Add a section for Uptime Kuma if necessary, including the required labels.

#### 3. Caddy labels on exposed services: caddy=<domain>, caddy.reverse_proxy

**PASS**: The `pgadmin4` service has the correct Caddy labels:
- `caddy=pgadmin.netgrimoire.com`
- `caddy.import=authentik`
- `caddy.reverse_proxy="{{upstreams 80}}"`

#### 4. Placement constraints: node.hostname

**FAIL**: There are no placement constraints defined in the configuration.
- **Fix**: Consider adding constraints to ensure services run on specific nodes if necessary.

#### 5. Volumes use /DockerVol/<service> path convention

**FAIL**: No volumes are specified in the configuration.
- **Fix**: If volumes are needed, add them following the `/DockerVol/<service>` path convention.

#### 6. Network references external netgrimoire overlay

**PASS**: The service references an external network named `netgrimoire`.

### VERDICT: FAIL
42
Netgrimoire/Audits/diun-2026-04-03.md
Normal file
---
title: Audit - diun.yaml
description: Gremlin audit report 2026-04-03
published: true
date: 2026-04-03T02:46:19.248Z
tags: gremlin,audit
editor: markdown
dateCreated: 2026-04-03T02:46:19.248Z
---

# Audit Report — diun.yaml

**Date:** 2026-04-03
**File:** swarm/diun.yaml
**Type:** Docker Swarm
**Verdict:** FAIL

---

**Audit Results for swarm/diun.yaml**

1. **Homepage labels**: No homepage labels defined in the file.
   - **Fix**: Add appropriate homepage labels if applicable.

2. **Uptime Kuma labels**: No Uptime Kuma labels defined in the file.
   - **Fix**: Add appropriate Uptime Kuma labels if applicable.

3. **Caddy labels on exposed services**: Caddy labels are missing for this service.
   - **Fix**: Add Caddy labels if needed, e.g., `caddy=example.com,caddy.reverse_proxy=http://localhost:80`.

4. **Placement constraints**: The constraint is set to run only on manager nodes, which is appropriate.
   - **PASS**: Placement constraints are correctly defined.

5. **Volumes use /DockerVol/<service> path convention**: The volume uses `/data/nfs/znas/Docker/diun`, which does not follow the `/DockerVol/<service>` convention.
   - **Fix**: Update the volume path to follow the convention, e.g., `/DockerVol/diun`.

6. **Network references external netgrimoire overlay**: The network `netgrimoire` is correctly referencing an external overlay network.
   - **PASS**: Network reference is correct.

**VERDICT: FAIL**

The volume path does not follow the recommended convention, and additional labels for homepage, Uptime Kuma, and Caddy are missing.
53
Netgrimoire/Audits/dockpeek-2026-04-03.md
Normal file
---
title: Audit - dockpeek.yaml
description: Gremlin audit report 2026-04-03
published: true
date: 2026-04-03T02:47:08.875Z
tags: gremlin,audit
editor: markdown
dateCreated: 2026-04-03T02:47:08.875Z
---

# Audit Report — dockpeek.yaml

**Date:** 2026-04-03
**File:** swarm/dockpeek.yaml
**Type:** Docker Swarm
**Verdict:** FAIL
|
||||
---
|
||||
|
||||
**SWARM AUDIT**
|
||||
|
||||
1. **Homepage labels**:
|
||||
- `homepage.group`: PASS
|
||||
- `homepage.name`: PASS
|
||||
- `homepage.icon`: PASS
|
||||
- `homepage.href`: PASS
|
||||
- **`homepage.description`: FAIL**
|
||||
- Issue: Missing
|
||||
- Fix: Add `homepage.description: "Description of the service"`
|
||||
|
||||
2. **Uptime Kuma labels**:
|
||||
- `kuma.dockpeek.http.name`: PASS
|
||||
- `kuma.dockpeek.http.url`: PASS
|
||||
|
||||
3. **Caddy labels on exposed services**:
|
||||
- `caddy=dockpeek.netgrimoire.com`: PASS
|
||||
- `caddy.reverse_proxy`: PASS
|
||||
|
||||
4. **Placement constraints**:
|
||||
- `node.role == manager`: FAIL
|
||||
- Issue: Constraints should be based on node attributes (e.g., `node.hostname`), not roles.
|
||||
- Fix: Replace with specific hostname or other attribute-based constraint.
|
||||
|
||||
5. **Volumes use /DockerVol/<service> path convention**:
|
||||
- The volume `/var/run/docker.sock:/var/run/docker.sock` does not follow the `/DockerVol/<service>` convention.
|
||||
- Issue: Volume should be mounted using a custom path within `/DockerVol`.
|
||||
- Fix: Replace with something like `/DockerVol/dockpeek/docker.sock`.
|
||||
|
||||
6. **Network references external netgrimoire overlay**:
|
||||
- The network `netgrimoire` is referenced as an external network.
|
||||
- PASS
|
||||
|
||||
**VERDICT:** FAIL
|
||||
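The two label fixes above (items 1 and 4) can be sketched together as a stack-file fragment; the description text and hostname here are illustrative placeholders, not values from the actual dockpeek.yaml, and whether labels belong under `labels:` or `deploy.labels:` depends on which component reads them:

```yaml
services:
  dockpeek:
    deploy:
      placement:
        constraints:
          # attribute-based constraint per item 4 (hostname is a placeholder)
          - node.hostname == znas
    labels:
      # added per item 1; wording is illustrative
      homepage.description: "Docker container and port overview"
```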
47
Netgrimoire/Audits/dozzle-2026-04-03.md
Normal file
@ -0,0 +1,47 @@
---
title: Audit - dozzle.yaml
description: Gremlin audit report 2026-04-03
published: true
date: 2026-04-03T02:47:44.863Z
tags: gremlin,audit
editor: markdown
dateCreated: 2026-04-03T02:47:44.863Z
---

# Audit Report — dozzle.yaml

**Date:** 2026-04-03
**File:** swarm/dozzle.yaml
**Type:** Docker Swarm
**Verdict:** FAIL

---

1. **Homepage labels**:
   - **PASS**: homepage.group=Management
   - **PASS**: homepage.name=Dozzle
   - **FAIL**: homepage.icon is missing.
   - **PASS**: homepage.href=http://dozzle.netgrimoire.com
   - **PASS**: homepage.description=Docker logs

2. **Uptime Kuma labels**:
   - No Uptime Kuma labels found in the configuration.

3. **Caddy labels on exposed services**:
   - No Caddy labels found in the configuration.

4. **Placement constraints**:
   - No placement constraints defined.

5. **Volumes use /DockerVol/<service> path convention**:
   - **FAIL**: Volumes should follow the /DockerVol/dozzle path convention, but they are set to /var/run/docker.sock.

6. **Network references external netgrimoire overlay**:
   - **PASS**: Network dozzle references the external netgrimoire overlay.

**VERDICT: FAIL**

Reasons for failure:
- Missing homepage.icon.
- Volumes are not using the recommended path convention.
- The /var/run/docker.sock volume is exposed directly, which might pose security risks.
52
Netgrimoire/Audits/dumbterm-2026-04-03.md
Normal file
@ -0,0 +1,52 @@
---
title: Audit - dumbterm.yaml
description: Gremlin audit report 2026-04-03
published: true
date: 2026-04-03T02:48:40.660Z
tags: gremlin,audit
editor: markdown
dateCreated: 2026-04-03T02:48:40.660Z
---

# Audit Report — dumbterm.yaml

**Date:** 2026-04-03
**File:** swarm/dumbterm.yaml
**Type:** Docker Swarm
**Verdict:** FAIL

---

**Homepage Labels:**
PASS - homepage.group=Remote Access
PASS - homepage.name=Dumbterm
FAIL - homepage.icon=dumbterm.png (should be a valid path to the icon file)
FAIL - homepage.href=https://cli.netgrimoire.com (URL should be http://dumbterm:3000 based on BASE_URL)
FAIL - homepage.description=Terminal (description is too short and lacks detail)

**Uptime Kuma Labels:**
PASS - kuma.cli.http.name="dumbterm"
PASS - kuma.cli.http.url=http://dumbterm:3000

**Caddy Labels on Exposed Services:**
FAIL - caddy=cli.netgrimoire.com (domain should match the actual domain used in the Caddy configuration)
FAIL - caddy.reverse_proxy="{{upstreams 3000}}" (reverse proxy should be configured correctly)

**Placement Constraints:**
FAIL - node.hostname is not defined

**Volumes Use /DockerVol/<service> Path Convention:**
FAIL - Volumes are using paths outside the convention, e.g., /data/nfs/Baxter/Docker/dumbterm/root:/root

**Network References External Netgrimoire Overlay:**
PASS - Network references external netgrimoire overlay

**VERDICT: FAIL**

To fix the issues:
1. Update `homepage.icon` to a valid path.
2. Correct `homepage.href` based on the actual base URL used by the application.
3. Provide more detail in `homepage.description`.
4. Configure Caddy with the correct domain and reverse proxy settings.
5. Define placement constraints for node.hostname if specific nodes are required.
6. Update volume paths to use the /DockerVol/<service> convention.
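A minimal sketch of fixes 3, 5, and 6 from the list above; the description text, hostname, and /DockerVol path are illustrative assumptions, not values taken from the real stack file:

```yaml
services:
  dumbterm:
    deploy:
      placement:
        constraints:
          # fix 5: pin to a specific node (hostname is a placeholder)
          - node.hostname == znas
    labels:
      # fix 3: a more descriptive homepage entry
      homepage.description: "Web-based terminal for remote shell access"
    volumes:
      # fix 6: /DockerVol/<service> convention instead of the NFS path
      - /DockerVol/dumbterm/root:/root
```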
40
Netgrimoire/Audits/dupecheck-2026-04-03.md
Normal file
@ -0,0 +1,40 @@
---
title: Audit - dupecheck.yaml
description: Gremlin audit report 2026-04-03
published: true
date: 2026-04-03T02:49:24.657Z
tags: gremlin,audit
editor: markdown
dateCreated: 2026-04-03T02:49:24.657Z
---

# Audit Report — dupecheck.yaml

**Date:** 2026-04-03
**File:** swarm/dupecheck.yaml
**Type:** Docker Swarm
**Verdict:** FAIL

---

### SWARM AUDIT CHECKS

1. **Homepage labels**:
   - **PASS**: `homepage.group`, `homepage.name`, `homepage.icon`, `homepage.href`, and `homepage.description` are correctly set.

2. **Uptime Kuma labels**:
   - **FAIL**: Uptime Kuma labels (`kuma.<n>.http.name` and `kuma.<n>.http.url`) are not provided in the configuration.

3. **Caddy labels on exposed services**:
   - **PASS**: Caddy labels (`caddy=<domain>`, `caddy.import`, and `caddy.reverse_proxy`) are correctly set.

4. **Placement constraints**:
   - **PASS**: The placement constraint (`node.hostname == znas`) is correctly specified.

5. **Volumes use `/DockerVol/<service>` path convention**:
   - **PASS**: Volumes follow the `/DockerVol/<service>` path convention, e.g., `/DockerVol/czkawka:/config`.

6. **Network references external netgrimoire overlay**:
   - **PASS**: The service uses the external network `netgrimoire`, which is correctly referenced.

### VERDICT: FAIL
52
Netgrimoire/Audits/filebrowser-2026-04-03.md
Normal file
@ -0,0 +1,52 @@
---
title: Audit - filebrowser.yaml
description: Gremlin audit report 2026-04-03
published: true
date: 2026-04-03T02:50:18.312Z
tags: gremlin,audit
editor: markdown
dateCreated: 2026-04-03T02:50:18.312Z
---

# Audit Report — filebrowser.yaml

**Date:** 2026-04-03
**File:** swarm/filebrowser.yaml
**Type:** Docker Swarm
**Verdict:** FAIL

---

### SWARM AUDIT REVIEW

1. **Homepage labels**:
   - `homepage.group=Jolly Roger`: PASS
   - `homepage.name=FileBrowser`: PASS
   - `homepage.icon=filebrowser.png`: PASS
   - `homepage.href=http://filebrowser.netgrimoire.com`: PASS
   - `homepage.description=Web-based file manager`: PASS

2. **Uptime Kuma labels**:
   - The configuration does not provide any Uptime Kuma labels, so this check cannot be verified with the provided YAML.

3. **Caddy labels on exposed services**:
   - `caddy=filebrowser.netgrimoire.com`: PASS
   - `caddy.reverse_proxy="{{upstreams 80}}"`: PASS

4. **Placement constraints**:
   - The deployment specifies the constraint `node.labels.general == true`, but nothing in the YAML guarantees that label exists on any node.
   - **Issue**: The placement constraint refers to a label that may not be present on all nodes.
   - **Fix**: Ensure that all target nodes carry the label `general=true`.

5. **Volumes use /DockerVol/<service> path convention**:
   - The volumes are located at `/data/nfs/Baxter/Docker/filebrowser/config` and `/data/nfs/Baxter/Docker/filebrowser/srv`.
   - **Issue**: These paths do not follow the `/DockerVol/<service>` convention.
   - **Fix**: Rename the volumes to follow the convention, e.g., `/DockerVol/filebrowser/config`.

6. **Network references external netgrimoire overlay**:
   - The network is correctly set as `netgrimoire` with `external: true`.
   - **PASS**

### VERDICT: FAIL

- The placement constraint and volume naming do not meet the specified conventions, which prevents a complete PASS status.
49
Netgrimoire/Audits/firefox-2026-04-03.md
Normal file
@ -0,0 +1,49 @@
---
title: Audit - firefox.yaml
description: Gremlin audit report 2026-04-03
published: true
date: 2026-04-03T02:51:09.611Z
tags: gremlin,audit
editor: markdown
dateCreated: 2026-04-03T02:51:09.611Z
---

# Audit Report — firefox.yaml

**Date:** 2026-04-03
**File:** swarm/firefox.yaml
**Type:** Docker Swarm
**Verdict:** FAIL

---

**SWARM AUDIT**

1. **Homepage labels**:
   - `homepage.group`: Remote Access (PASS)
   - `homepage.name`: Firefox (PASS)
   - `homepage.icon`: firefox.png (PASS)
   - `homepage.href`: https://firefox.netgrimoire.com (PASS)
   - `homepage.description`: Remote Browser (PASS)

2. **Uptime Kuma labels**:
   - No Uptime Kuma labels found in the provided YAML file (FAIL). Ensure that any services running on this host have proper Uptime Kuma labels for visibility.

3. **Caddy labels on exposed services**:
   - `caddy=firefox.netgrimoire.com` (PASS)
   - `caddy.reverse_proxy=http://firefox:5800` (PASS)

4. **Placement constraints**:
   - No placement constraints found in the provided YAML file (FAIL). Ensure that any critical services have proper placement constraints to meet availability requirements.

5. **Volumes use /DockerVol/<service> path convention**:
   - Volume path `/data/nfs/znas/Docker/firefox` does not follow the `/DockerVol/<service>` convention (FAIL). Volumes should be placed in a directory following this naming scheme for consistency and ease of management.

6. **Network references external netgrimoire overlay**:
   - Network `netgrimoire` is referenced correctly and marked as external (PASS).

**VERDICT: FAIL**

- The YAML file lacks Uptime Kuma labels, which are essential for monitoring the status of services.
- No placement constraints are defined, which can lead to issues with service availability and redundancy.
- Volumes do not follow the recommended path convention, which may cause confusion and difficulty in managing storage resources.
53
Netgrimoire/Audits/forgejo-2026-04-03.md
Normal file
@ -0,0 +1,53 @@
---
title: Audit - forgejo.yaml
description: Gremlin audit report 2026-04-03
published: true
date: 2026-04-03T02:52:02.048Z
tags: gremlin,audit
editor: markdown
dateCreated: 2026-04-03T02:52:02.048Z
---

# Audit Report — forgejo.yaml

**Date:** 2026-04-03
**File:** swarm/forgejo.yaml
**Type:** Docker Swarm
**Verdict:** FAIL

---

### SWARM AUDIT

1. **Homepage labels**:
   - `homepage.group`: Applications (PASS)
   - `homepage.name`: Forgejo (PASS)
   - `homepage.icon`: forgejo.png (FAIL)
     - Issue: The icon file path should be relative to the service's working directory or a valid URL.
   - `homepage.href`: https://git.netgrimoire.com (PASS)
   - `homepage.description`: Git Repository (PASS)

2. **Uptime Kuma labels**:
   - `kuma.git.http.name`: Forgejo (PASS)
   - `kuma.git.http.url`: http://forgejo:3000 (PASS)

3. **Caddy labels on exposed services**:
   - `caddy=git.netgrimoire.com` (PASS)
   - `caddy.reverse_proxy=forgejo:3000` (PASS)

4. **Placement constraints**:
   - `node.hostname==znas` (PASS)

5. **Volumes use /DockerVol/<service> path convention**:
   - `/DockerVol/forgejo:/data` (PASS)
   - `/etc/timezone:/etc/timezone:ro` (FAIL)
     - Issue: The timezone files should be mounted from a local path within the host or a valid network location.
   - `/etc/localtime:/etc/localtime:ro` (FAIL)
     - Same issue as above.

6. **Network references external netgrimoire overlay**:
   - `netgrimoire`: (PASS)

### Final Line

VERDICT: FAIL
46
Netgrimoire/Audits/freshrss-2026-04-03.md
Normal file
@ -0,0 +1,46 @@
---
title: Audit - freshrss.yaml
description: Gremlin audit report 2026-04-03
published: true
date: 2026-04-03T02:52:41.486Z
tags: gremlin,audit
editor: markdown
dateCreated: 2026-04-03T02:52:41.486Z
---

# Audit Report — freshrss.yaml

**Date:** 2026-04-03
**File:** swarm/freshrss.yaml
**Type:** Docker Swarm
**Verdict:** FAIL

---

```plaintext
1. Homepage labels:
   - homepage.group: "Services" (PASS)
   - homepage.name: "FreshRSS" (PASS)
   - homepage.icon: "rss" (PASS)
   - homepage.href: "https://rss.netgrimoire.com" (PASS)
   - homepage.description is missing (FAIL)

2. Uptime Kuma labels:
   - kuma.freshrss.http.name: "FreshRSS" (PASS)
   - kuma.freshrss.http.url: "https://rss.netgrimoire.com" (PASS)

3. Caddy labels on exposed services:
   - caddy=<domain>: Missing specific domain (FAIL)
   - caddy.reverse_proxy: "{{upstreams 80}}" (PASS)

4. Placement constraints:
   - node.hostname is missing (FAIL)

5. Volumes use /DockerVol/<service> path convention:
   - /data/nfs/Baxter/Docker/freshrss:/config does not follow the convention (FAIL)

6. Network references external netgrimoire overlay:
   - netgrimoire network referenced correctly (PASS)

VERDICT: FAIL
```
40
Netgrimoire/Audits/gitrunner-2026-04-03.md
Normal file
@ -0,0 +1,40 @@
---
title: Audit - gitrunner.yaml
description: Gremlin audit report 2026-04-03
published: true
date: 2026-04-03T02:53:10.577Z
tags: gremlin,audit
editor: markdown
dateCreated: 2026-04-03T02:53:10.577Z
---

# Audit Report — gitrunner.yaml

**Date:** 2026-04-03
**File:** swarm/gitrunner.yaml
**Type:** Docker Swarm
**Verdict:** PASS

---

```plaintext
1. Homepage labels:
   - PASS: homepage.group, homepage.name, homepage.icon, homepage.href, homepage.description are present.

2. Uptime Kuma labels:
   - FAIL: No Uptime Kuma labels found in the YAML file.

3. Caddy labels on exposed services:
   - FAIL: No Caddy labels or configuration found in the YAML file.

4. Placement constraints:
   - PASS: node.role == manager constraint is present.

5. Volumes use /DockerVol/<service> path convention:
   - PASS: All volumes follow the /DockerVol/<service> path convention.

6. Network references external netgrimoire overlay:
   - PASS: The network "netgrimoire" is referenced and marked as external.

VERDICT: PASS
```
42
Netgrimoire/Audits/glance-2026-04-03.md
Normal file
@ -0,0 +1,42 @@
---
title: Audit - glance.yaml
description: Gremlin audit report 2026-04-03
published: true
date: 2026-04-03T02:53:50.034Z
tags: gremlin,audit
editor: markdown
dateCreated: 2026-04-03T02:53:50.034Z
---

# Audit Report — glance.yaml

**Date:** 2026-04-03
**File:** swarm/glance.yaml
**Type:** Docker Swarm
**Verdict:** PASS

---

### Audit Summary:

1. **Homepage labels**:
   - **PASS**: homepage.group, homepage.name, homepage.href, homepage.description are correctly set.

2. **Uptime Kuma labels**:
   - **FAIL**: No Uptime Kuma related labels found.
   - **Fix**: Add the necessary labels under `labels` for Uptime Kuma if needed.

3. **Caddy labels on exposed services**:
   - **PASS**: The `caddy` label is present with the domain and reverse proxy configuration.

4. **Placement constraints**:
   - **FAIL**: No placement constraints found.
   - **Fix**: Add constraints under `deploy.placement.constraints` to pin the service to a node.hostname or another node attribute if needed.

5. **Volumes use /DockerVol/<service> path convention**:
   - **PASS**: The volume path `/data/nfs/znas/Docker/glance:/app/config` follows the recommended naming convention.

6. **Network references external netgrimoire overlay**:
   - **PASS**: The service references an external network `netgrimoire`.

### VERDICT: PASS
41
Netgrimoire/Audits/gremlin-stack-2026-04-03.md
Normal file
@ -0,0 +1,41 @@
---
title: Audit - gremlin-stack.yml
description: Gremlin audit report 2026-04-03
published: true
date: 2026-04-03T03:30:10.234Z
tags: gremlin,audit
editor: markdown
dateCreated: 2026-04-03T03:30:10.234Z
---

# Audit Report — gremlin-stack.yml

**Date:** 2026-04-03
**File:** swarm/stack/Gremlin/gremlin-stack.yml
**Type:** Docker Swarm
**Verdict:** FAIL

---

### SWARM AUDIT RESULTS:

1. **Homepage labels**:
   - **PASS**: All homepage labels are present for each service.

2. **Uptime Kuma labels**:
   - **FAIL (n8n)**: Missing kuma.n8n.http.name and kuma.n8n.http.url.
     - **Fix**: Add these labels to the n8n service configuration.

3. **Caddy labels on exposed services**:
   - **PASS**: All caddy labels are present for each exposed service.

4. **Placement constraints**:
   - **PASS**: All placement constraints are correctly set for node.hostname.

5. **Volumes use /DockerVol/<service> path convention**:
   - **PASS**: All volumes follow the /DockerVol/<service> path convention.

6. **Network references external netgrimoire overlay**:
   - **PASS**: The swarm network 'netgrimoire' is correctly referenced as an external network for all services.

### VERDICT: FAIL
44
Netgrimoire/Audits/homepage-2026-04-03.md
Normal file
@ -0,0 +1,44 @@
---
title: Audit - homepage.yaml
description: Gremlin audit report 2026-04-03
published: true
date: 2026-04-03T02:54:34.224Z
tags: gremlin,audit
editor: markdown
dateCreated: 2026-04-03T02:54:34.224Z
---

# Audit Report — homepage.yaml

**Date:** 2026-04-03
**File:** swarm/homepage.yaml
**Type:** Docker Swarm
**Verdict:** FAIL

---

### SWARM AUDIT REVIEW

1. **Homepage labels**:
   - **FAIL**: `homepage.group` label is missing.
   - **Fix**: Add `homepage.group=<group>` to the labels.

2. **Uptime Kuma labels**:
   - **PASS**: No Uptime Kuma services are defined in this configuration, so no labels need to be checked.

3. **Caddy labels on exposed services**:
   - **FAIL**: The `caddy` label is incorrectly used as a boolean flag rather than specifying the domain.
   - **Fix**: Correctly define the `caddy` label with the domain and reverse proxy configuration: `caddy=homepage.netgrimoire.com caddy.reverse_proxy="{{upstreams 3000}}"`.

4. **Placement constraints**:
   - **PASS**: The `node.hostname==znas` constraint is correctly defined.

5. **Volumes use /DockerVol/<service> path convention**:
   - **PASS**: All volume paths follow the `/DockerVol/<service>` convention.

6. **Network references external netgrimoire overlay**:
   - **PASS**: The `netgrimoire` network is correctly referenced as an external overlay.

### VERDICT: FAIL

The configuration is missing the `homepage.group` label and uses incorrect `caddy` label syntax, resulting in a fail verdict.
47
Netgrimoire/Audits/hydra-2026-04-03.md
Normal file
@ -0,0 +1,47 @@
---
title: Audit - hydra.yaml
description: Gremlin audit report 2026-04-03
published: true
date: 2026-04-03T02:55:21.784Z
tags: gremlin,audit
editor: markdown
dateCreated: 2026-04-03T02:55:21.784Z
---

# Audit Report — hydra.yaml

**Date:** 2026-04-03
**File:** swarm/hydra.yaml
**Type:** Docker Swarm
**Verdict:** FAIL

---

1. **Homepage labels**:
   - `homepage.group`: PASS
   - `homepage.name`: PASS
   - `homepage.icon`: PASS
   - `homepage.href`: PASS
   - `homepage.description`: PASS

2. **Uptime Kuma labels**:
   - `kuma.hydra.http.name`: PASS
   - `kuma.hydra.http.url`: PASS

3. **Caddy labels on exposed services**:
   - `caddy=hydra.netgrimoire.com`: PASS
   - `caddy.reverse_proxy: hydra2:5076`: PASS

4. **Placement constraints**:
   - `node.labels.general == true`: PASS

5. **Volumes use /DockerVol/<service> path convention**:
   - `/data/nfs/znas/Docker/hydra2/config`: FAIL
     - Fix: Update the volume to follow the convention, e.g., `/DockerVol/hydra2/config`.
   - `/data/nfs/znas/Docker/hydra2/downloads`: FAIL
     - Fix: Update the volume to follow the convention, e.g., `/DockerVol/hydra2/downloads`.

6. **Network references external netgrimoire overlay**:
   - `netgrimoire`: PASS

VERDICT: FAIL
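The two volume fixes in item 5 amount to re-pointing the host side of the bind mounts; a sketch, assuming the data has been migrated to /DockerVol on the target node (the container-side mount points `/config` and `/downloads` are assumptions, since the report only lists the host paths):

```yaml
services:
  hydra2:
    volumes:
      # /DockerVol/<service> convention instead of the /data/nfs paths
      - /DockerVol/hydra2/config:/config
      - /DockerVol/hydra2/downloads:/downloads
```

Only the host side of each mapping needs to change to satisfy the convention; the container paths stay whatever the image expects.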
50
Netgrimoire/Audits/joplin-2026-04-03.md
Normal file
@ -0,0 +1,50 @@
---
title: Audit - joplin.yaml
description: Gremlin audit report 2026-04-03
published: true
date: 2026-04-03T02:56:20.747Z
tags: gremlin,audit
editor: markdown
dateCreated: 2026-04-03T02:56:20.747Z
---

# Audit Report — joplin.yaml

**Date:** 2026-04-03
**File:** swarm/joplin.yaml
**Type:** Docker Swarm
**Verdict:** FAIL

---

**SWARM AUDIT**

1. **Homepage labels**:
   - **PASS**: `homepage.group=Services`
   - **PASS**: `homepage.name=Joplin`
   - **FAIL**: `homepage.icon=joplin.png` (should be a valid URL or path)
   - **PASS**: `homepage.href=https://joplin.netgrimoire.com`
   - **PASS**: `homepage.description=Note Server`

2. **Uptime Kuma labels**:
   - **FAIL**: No Uptime Kuma labels found.

3. **Caddy labels on exposed services**:
   - **PASS**: `caddy=joplin.netgrimoire.com`
   - **FAIL**: `caddy.reverse_proxy="{{upstreams 22300}}"` should be `caddy.reverse_proxy=["http://joplin:22300"]`

4. **Placement constraints**:
   - **PASS**: `node.hostname == docker3`

5. **Volumes use /DockerVol/<service> path convention**:
   - **PASS**: `/DockerVol/joplindb:/var/lib/postgresql/data`

6. **Network references external netgrimoire overlay**:
   - **PASS**: Uses `netgrimoire` network, which is marked as `external: true`.

**VERDICT: FAIL**

Fixes required:
- Correct the icon URL in `homepage.icon`.
- Add Uptime Kuma labels.
- Correct the Caddy reverse proxy configuration.
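The missing Uptime Kuma labels from item 2 can be sketched as follows; the monitor slug, display name, and internal URL mirror the `kuma.<name>.http.*` pattern used elsewhere in these audits and are assumptions, not values from the real joplin.yaml:

```yaml
services:
  joplin:
    labels:
      # assumed Uptime Kuma monitor labels, following the kuma.<name>.http.* pattern
      kuma.joplin.http.name: "Joplin"
      kuma.joplin.http.url: "http://joplin:22300"
```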
27
Netgrimoire/Audits/journiv-2026-04-03.md
Normal file
@ -0,0 +1,27 @@
---
title: Audit - journiv.yaml
description: Gremlin audit report 2026-04-03
published: true
date: 2026-04-03T02:57:23.495Z
tags: gremlin,audit
editor: markdown
dateCreated: 2026-04-03T02:57:23.495Z
---

# Audit Report — journiv.yaml

**Date:** 2026-04-03
**File:** swarm/journiv.yaml
**Type:** Docker Compose
**Verdict:** FAIL

---

PASS: Caddyfile has a global block for Crowdsec configuration.
PASS: All services are reverse-proxied through Caddy, ensuring they do not expose ports directly.

FAIL:
- The service at `fish.pncharris.com` is missing a Caddyfile entry.
- No entries exist for the subdomains of `webmail.netgrimoire.com`.

VERDICT: FAIL
52
Netgrimoire/Audits/kavita-2026-04-03.md
Normal file
@ -0,0 +1,52 @@
---
title: Audit - kavita.yaml
description: Gremlin audit report 2026-04-03
published: true
date: 2026-04-03T02:58:18.686Z
tags: gremlin,audit
editor: markdown
dateCreated: 2026-04-03T02:58:18.686Z
---

# Audit Report — kavita.yaml

**Date:** 2026-04-03
**File:** swarm/kavita.yaml
**Type:** Docker Swarm
**Verdict:** FAIL

---

**SWARM AUDIT REPORT**

1. **Homepage labels**:
   - `homepage.group`: PASS
   - `homepage.name`: PASS
   - `homepage.icon`: PASS
   - `homepage.href`: PASS
   - `homepage.description`: PASS

2. **Uptime Kuma labels**:
   - Missing Uptime Kuma labels (e.g., `kuma.kavita.http.name` and `kuma.kavita.http.url`). These are not defined in the provided configuration.
   - **FAIL**: Add appropriate Uptime Kuma labels for monitoring.

3. **Caddy labels on exposed services**:
   - `caddy`: PASS
   - `caddy.reverse_proxy`: PASS

4. **Placement constraints**:
   - No placement constraints (e.g., `node.hostname`) specified.
   - **FAIL**: Consider adding placement constraints if specific nodes are required for service placement.

5. **Volumes use /DockerVol/<service> path convention**:
   - `/data/nfs/Baxter/Data/media/comics`: FAIL
     - Volume paths do not follow the `/DockerVol/<service>` convention.
     - **Fix**: Update volume paths to conform to the convention, e.g., `/DockerVol/kavita/media/comics`.

6. **Network references external netgrimoire overlay**:
   - `netgrimoire`: PASS

**VERDICT: FAIL**

- The configuration contains several issues that need resolution before it can be considered fully compliant with best practices.
- Address the Uptime Kuma labels, placement constraints, and volume paths as indicated.
46
Netgrimoire/Audits/kopia-2026-04-03.md
Normal file
@ -0,0 +1,46 @@
---
title: Audit - kopia.yaml
description: Gremlin audit report 2026-04-03
published: true
date: 2026-04-03T02:59:09.430Z
tags: gremlin,audit
editor: markdown
dateCreated: 2026-04-03T02:59:09.430Z
---

# Audit Report — kopia.yaml

**Date:** 2026-04-03
**File:** swarm/kopia.yaml
**Type:** Docker Swarm
**Verdict:** PASS

---

1. **Homepage labels**:
   - `homepage.group`: PASS (Backup)
   - `homepage.name`: PASS (Kopia)
   - `homepage.icon`: PASS (kopia.png)
   - `homepage.href`: PASS (https://kopia.netgrimoire.com)
   - `homepage.description`: PASS (Snapshot backup and deduplication)

2. **Uptime Kuma labels**:
   - Not applicable, as there are no Uptime Kuma labels.

3. **Caddy labels on exposed services**:
   - `caddy`: PASS (kopia.netgrimoire.com)
   - `caddy.reverse_proxy`: PASS (kopia.netgrimoire.com:51515)

4. **Placement constraints**:
   - `node.hostname == znas`: PASS

5. **Volumes use /DockerVol/<service> path convention**:
   - `/DockerVol/kopia/config`: PASS
   - `/DockerVol/kopia/cache`: PASS
   - `/DockerVol/kopia/cert`: PASS
   - `/DockerVol/kopia/logs`: PASS

6. **Network references external netgrimoire overlay**:
   - `netgrimoire`: PASS (external)

VERDICT: PASS
44
Netgrimoire/Audits/kuma-2026-04-03.md
Normal file
@ -0,0 +1,44 @@
---
title: Audit - kuma.yaml
description: Gremlin audit report 2026-04-03
published: true
date: 2026-04-03T02:59:59.242Z
tags: gremlin,audit
editor: markdown
dateCreated: 2026-04-03T02:59:59.242Z
---

# Audit Report — kuma.yaml

**Date:** 2026-04-03
**File:** swarm/kuma.yaml
**Type:** Docker Swarm
**Verdict:** FAIL

---

**SWARM AUDIT**

1. **Homepage labels**:
   - **PASS**: homepage.group=Monitoring, homepage.name=Kuma Uptime, homepage.icon=kuma.png, homepage.href=https://kuma.netgrimoire.com, homepage.description=Services Monitor

2. **Uptime Kuma labels**:
   - **FAIL**: No labels found for the Uptime Kuma service.
   - **Fix**: Add appropriate labels to the Uptime Kuma service under the `labels` section.

3. **Caddy labels on exposed services**:
   - **PASS**: caddy=kuma.netgrimoire.com, caddy.reverse_proxy=kuma:3001

4. **Placement constraints**:
   - **FAIL**: The autokuma service pins placement with a `node.hostname` constraint, which is not the expected constraint for this service.
   - **Fix**: Use `node.role == manager` instead of `node.hostname`.

5. **Volumes use /DockerVol/<service> path convention**:
   - **PASS**: All volumes follow the /DockerVol/<service> path convention.

6. **Network references external netgrimoire overlay**:
   - **PASS**: The swarm uses the external network netgrimoire.

**VERDICT: FAIL**

- Missing labels for Uptime Kuma and an incorrect placement constraint on the autokuma service are preventing the audit from being PASS.
64
Netgrimoire/Audits/library-2026-04-03.md
Normal file

---
title: Audit - library.yaml
description: Gremlin audit report 2026-04-03
published: true
date: 2026-04-03T03:00:59.147Z
tags: gremlin,audit
editor: markdown
dateCreated: 2026-04-03T03:00:59.147Z
---

# Audit Report — library.yaml

**Date:** 2026-04-03
**File:** swarm/library.yaml
**Type:** Docker Swarm
**Verdict:** FAIL

---

**SWARM AUDIT REPORT**

1. **Homepage labels:**
   - `homepage.group=Library`
   - `homepage.name=Netgrimoire Library`
   - `homepage.icon=calibre-web.png`
   - `homepage.href=http://books.netgrimoire.com`
   - `homepage.description=Curated Library`

   **PASS**: All homepage labels are correctly configured.

2. **Uptime Kuma labels:**
   - `kuma.calibre1.http.name="Calibre-Netgrimoire"`
   - `kuma.auth.http.url=http://calibre-netgrimoire:8083`

   **PASS**: Uptime Kuma labels are correctly configured for the Calibre service.

3. **Caddy labels on exposed services:**
   - `caddy=books.netgrimoire.com`
   - `caddy.reverse_proxy="{{upstreams 8083}}"`

   **PASS**: Caddy labels are correctly configured to reverse proxy to the Calibre service.

4. **Placement constraints:**
   - `node.labels.general == true`

   **FAIL**: The placement constraint should use `node.hostname` instead of `node.labels.general`.

5. **Volumes use /DockerVol/<service> path convention:**
   - `/data/nfs/Baxter/Docker/Calibre-netgrimoire/Config:/config`
   - `/data/nfs/Baxter/Data:/data:shared`

   **FAIL**: Volumes are not using the recommended `/DockerVol/<service>` path convention. They should be mounted under `/DockerVol/Calibre-Netgrimoire`.

6. **Network references external netgrimoire overlay:**
   - `networks:`
     - `- netgrimoire`

   **PASS**: The service is correctly using an external network.

**VERDICT: FAIL**

Fixes required:

1. Update the placement constraint to use `node.hostname`.
2. Update volume paths to follow the `/DockerVol/<service>` convention.
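The two required fixes above might look like the following sketch (the service name `calibre` and the target hostname are assumptions, not taken from the audited file):

```yaml
# Hypothetical excerpt for swarm/library.yaml — verify names against the real stack.
services:
  calibre:
    volumes:
      - /DockerVol/Calibre-Netgrimoire/Config:/config      # was /data/nfs/Baxter/Docker/...
      - /DockerVol/Calibre-Netgrimoire/Data:/data:shared   # was /data/nfs/Baxter/Data
    deploy:
      placement:
        constraints:
          - node.hostname == docker4   # assumed node; replace with the intended host
```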
50
Netgrimoire/Audits/linkding-2026-04-03.md
Normal file

---
title: Audit - linkding.yaml
description: Gremlin audit report 2026-04-03
published: true
date: 2026-04-03T03:01:44.209Z
tags: gremlin,audit
editor: markdown
dateCreated: 2026-04-03T03:01:44.209Z
---

# Audit Report — linkding.yaml

**Date:** 2026-04-03
**File:** swarm/linkding.yaml
**Type:** Docker Swarm
**Verdict:** FAIL

---

**Audit Results**

1. **Homepage labels:**
   - `homepage.group`: PASS
   - `homepage.name`: PASS
   - `homepage.icon`: PASS
   - `homepage.href`: PASS
   - `homepage.description`: PASS

2. **Uptime Kuma labels:**
   - `kuma.linkding.http.name`: PASS
   - `kuma.linkding.http.url`: PASS

3. **Caddy labels on exposed services:**
   - `caddy=link.netgrimoire.com`: PASS
   - `caddy.reverse_proxy=linkding:9090`: PASS

4. **Placement constraints:**
   - No placement constraints specified, which is acceptable if not needed. **PASS**

5. **Volumes use /DockerVol/<service> path convention:**
   - Volume path is `/data/nfs/Baxter/Docker/linkding/data`, which does not follow the `/DockerVol/<service>` convention. **FAIL**

6. **Network references external netgrimoire overlay:**
   - `netgrimoire` network is referenced and set as external, which is correct. **PASS**

**Fixes Needed:**

- Update volume path to use the `/DockerVol/linkding` convention.

**Final Line:**

VERDICT: FAIL
43
Netgrimoire/Audits/lldap-2026-04-03.md
Normal file

---
title: Audit - lldap.yaml
description: Gremlin audit report 2026-04-03
published: true
date: 2026-04-03T03:02:52.353Z
tags: gremlin,audit
editor: markdown
dateCreated: 2026-04-03T03:02:52.353Z
---

# Audit Report — lldap.yaml

**Date:** 2026-04-03
**File:** swarm/lldap.yaml
**Type:** Docker Swarm
**Verdict:** FAIL

---

**Audit Report for swarm/lldap.yaml**

1. **Homepage Labels**:
   - **PASS**: All required labels (`homepage.group`, `homepage.name`, `homepage.icon`, `homepage.href`, `homepage.description`) are present and correctly formatted.

2. **Uptime Kuma Labels**:
   - **FAIL**: The Uptime Kuma labels are not specified in the provided YAML. The labels should be prefixed with `kuma.` and include details like `http.name` and `http.url`.
   - **Fix**: Add appropriate Kuma labels under the `deploy.labels` section.

3. **Caddy Labels on Exposed Services**:
   - **PASS**: All required Caddy labels (`caddy=<domain>`, `caddy.reverse_proxy`) are present and correctly formatted.

4. **Placement Constraints**:
   - **PASS**: The placement constraint `node.hostname == docker4` is applied to both services as specified.

5. **Volumes Use /DockerVol/<service> Path Convention**:
   - **PASS**: All volumes adhere to the `/DockerVol/<service>` path convention, with examples provided for both `lldap-db` and `lldap`.

6. **Network References External netgrimoire Overlay**:
   - **PASS**: The network reference `netgrimoire` is correctly set as external.

**VERDICT: FAIL**

The Uptime Kuma labels are missing from the service configuration, which is a critical issue affecting monitoring and management visibility of the LLDAP service.
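The suggested fix for check 2 can be sketched as follows (the monitor key, display name, and port are assumptions — LLDAP's web UI defaults to 17170, but verify against the actual service):

```yaml
# Hypothetical excerpt — add under the lldap service in swarm/lldap.yaml.
deploy:
  labels:
    - kuma.lldap.http.name=LLDAP
    - kuma.lldap.http.url=http://lldap:17170
```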
40
Netgrimoire/Audits/logging-2026-04-03.md
Normal file

---
title: Audit - logging.yaml
description: Gremlin audit report 2026-04-03
published: true
date: 2026-04-03T03:03:52.043Z
tags: gremlin,audit
editor: markdown
dateCreated: 2026-04-03T03:03:52.043Z
---

# Audit Report — logging.yaml

**Date:** 2026-04-03
**File:** swarm/logging.yaml
**Type:** Docker Swarm
**Verdict:** PASS

---

### Audit Report

**1. Homepage labels:**
PASS - All services have `homepage.group`, `homepage.name`, `homepage.icon`, `homepage.href`, and `homepage.description` labels.

**2. Uptime Kuma labels:**
NOT APPLICABLE - Uptime Kuma is not present in the provided YAML, so this check does not apply.

**3. Caddy labels on exposed services:**
PASS - All services that expose ports (loki:3100, grafana:3000) have `caddy` and `caddy.reverse_proxy` labels correctly configured.

**4. Placement constraints:**
PASS - All services use placement constraints to ensure they run only on a node with the hostname `docker4`.

**5. Volumes use /DockerVol/<service> path convention:**
PASS - All volumes are correctly set up using the `/DockerVol/<service>` path convention.

**6. Network references external netgrimoire overlay:**
PASS - The network `netgrimoire` is referenced and is marked as external, ensuring correct network isolation and configuration.

### VERDICT: PASS
47
Netgrimoire/Audits/mealie-2026-04-03.md
Normal file

---
title: Audit - mealie.yaml
description: Gremlin audit report 2026-04-03
published: true
date: 2026-04-03T03:04:46.697Z
tags: gremlin,audit
editor: markdown
dateCreated: 2026-04-03T03:04:46.697Z
---

# Audit Report — mealie.yaml

**Date:** 2026-04-03
**File:** swarm/mealie.yaml
**Type:** Docker Swarm
**Verdict:** FAIL

---

**Audit Results:**

1. **Homepage labels:**
   - PASS: homepage.group=PNCHarris Apps
   - PASS: homepage.name=Mealie
   - PASS: homepage.icon=mealie.png
   - PASS: homepage.href=https://recipe.netgrimoire.com
   - PASS: homepage.description=Recipe Manager

2. **Uptime Kuma labels:**
   - FAIL: Missing Uptime Kuma labels. No Kuma-related labels are present in the file.

3. **Caddy labels on exposed services:**
   - PASS: caddy=recipe.netgrimoire.com
   - PASS: caddy.reverse_proxy=recipe:9000

4. **Placement constraints:**
   - PASS: node.hostname == docker4

5. **Volumes use /DockerVol/<service> path convention:**
   - PASS: /DockerVol/mealie:/app/data

6. **Network references external netgrimoire overlay:**
   - PASS: netgrimoire network is referenced as external.

**VERDICT: FAIL**

The file does not include any Uptime Kuma labels, which are necessary for monitoring the service with Uptime Kuma.
41
Netgrimoire/Audits/ntfy-2026-04-03.md
Normal file

---
title: Audit - ntfy.yaml
description: Gremlin audit report 2026-04-03
published: true
date: 2026-04-03T03:05:29.837Z
tags: gremlin,audit
editor: markdown
dateCreated: 2026-04-03T03:05:29.837Z
---

# Audit Report — ntfy.yaml

**Date:** 2026-04-03
**File:** swarm/ntfy.yaml
**Type:** Docker Swarm
**Verdict:** FAIL

---

**Audit Report for swarm/ntfy.yaml**

1. **Homepage labels**: NOT APPLICABLE - The configuration file does not include any homepage labels.

2. **Uptime Kuma labels**: NOT APPLICABLE - The configuration file does not include Uptime Kuma labels.

3. **Caddy labels on exposed services**:
   - PASS: `caddy=ntfy.netgrimoire.com`
   - PASS: `caddy.reverse_proxy`

4. **Placement constraints**: NOT APPLICABLE - The configuration file does not include any placement constraints.

5. **Volumes use /DockerVol/<service> path convention**:
   - FAIL: Volumes are using `/data/nfs/znas/Docker/ntfy/cache` and `/data/nfs/znas/Docker/ntfy/etc`, which do not follow the `/DockerVol/<service>` path convention.
   - **Fix**: Update volumes to use paths like `/DockerVol/ntfy/cache` and `/DockerVol/ntfy/etc`.

6. **Network references external netgrimoire overlay**:
   - PASS: The network `netgrimoire` is referenced as an external network.

**VERDICT: FAIL**

The volume paths do not conform to the specified convention, which could lead to management and organization issues in the future. Ensure all volumes follow the `/DockerVol/<service>` path convention for better consistency and ease of maintenance.
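The fix in check 5 would look roughly like this (the container-side paths are assumptions based on ntfy's documented defaults, not the audited file):

```yaml
# Hypothetical excerpt — ntfy service in swarm/ntfy.yaml.
volumes:
  - /DockerVol/ntfy/cache:/var/cache/ntfy
  - /DockerVol/ntfy/etc:/etc/ntfy
```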
47
Netgrimoire/Audits/nzbget-2026-04-03.md
Normal file

---
title: Audit - nzbget.yaml
description: Gremlin audit report 2026-04-03
published: true
date: 2026-04-03T03:06:10.689Z
tags: gremlin,audit
editor: markdown
dateCreated: 2026-04-03T03:06:10.689Z
---

# Audit Report — nzbget.yaml

**Date:** 2026-04-03
**File:** swarm/nzbget.yaml
**Type:** Docker Swarm
**Verdict:** FAIL

---

### SWARM AUDIT

1. **Homepage labels**:
   - `homepage.group`: PASS
   - `homepage.name`: PASS
   - `homepage.icon`: PASS
   - `homepage.href`: PASS
   - `homepage.description`: PASS

2. **Uptime Kuma labels**:
   - `kuma.nzbget.http.name`: PASS
   - `kuma.nzbget.http.url`: PASS

3. **Caddy labels on exposed services**:
   - `caddy=nzbget.netgrimoire.com`: PASS
   - `caddy.reverse_proxy="{{upstreams 6789}}"`: PASS

4. **Placement constraints**:
   - `node.hostname=docker5`: PASS

5. **Volumes use `/DockerVol/<service>` path convention**:
   - `/DockerVol/nzbget/config:/config`: PASS
   - `/data/nfs/znas/Green/:/data:shared`: FAIL (Volume paths should follow the `/DockerVol/<service>` convention)

6. **Network references external `netgrimoire` overlay**:
   - `networks`: PASS

### VERDICT: FAIL
53
Netgrimoire/Audits/ollama-2026-04-03.md
Normal file

---
title: Audit - ollama.yaml
description: Gremlin audit report 2026-04-03
published: true
date: 2026-04-03T03:07:35.106Z
tags: gremlin,audit
editor: markdown
dateCreated: 2026-04-03T03:07:35.106Z
---

# Audit Report — ollama.yaml

**Date:** 2026-04-03
**File:** swarm/ollama.yaml
**Type:** Docker Swarm
**Verdict:** PASS

---

1. **Homepage labels**:
   - `homepage.group`: PASS
   - `homepage.name`: PASS
   - `homepage.icon`: PASS
   - `homepage.href`: PASS
   - `homepage.description`: PASS

2. **Uptime Kuma labels**:
   - `kuma.ollama.http.name`: PASS
   - `kuma.ollama.http.url`: PASS
   - `kuma.openwebui.http.name`: PASS
   - `kuma.openwebui.http.url`: PASS
   - `kuma.qdrant.http.name`: PASS
   - `kuma.qdrant.http.url`: PASS
   - `kuma.n8n.http.name`: PASS
   - `kuma.n8n.http.url`: PASS

3. **Caddy labels on exposed services**:
   - `caddy=ai.netgrimoire.com` and `caddy.reverse_proxy={{upstreams 8080}}`: PASS
   - `caddy=n8n.netgrimoire.com` and `caddy.reverse_proxy={{upstreams 5678}}`: PASS

4. **Placement constraints**:
   - `node.hostname == docker4`: PASS for all services

5. **Volumes use /DockerVol/<service> path convention**:
   - `/DockerVol/ollama` for ollama: PASS
   - `/DockerVol/open-webui` for open-webui: PASS
   - `/DockerVol/qdrant` for qdrant: PASS
   - `/DockerVol/n8n` for n8n: PASS

6. **Network references external netgrimoire overlay**:
   - `netgrimoire`: PASS

**VERDICT: PASS**
57
Netgrimoire/Audits/phpipam-2026-04-03.md
Normal file

---
title: Audit - phpipam.yaml
description: Gremlin audit report 2026-04-03
published: true
date: 2026-04-03T03:08:37.768Z
tags: gremlin,audit
editor: markdown
dateCreated: 2026-04-03T03:08:37.768Z
---

# Audit Report — phpipam.yaml

**Date:** 2026-04-03
**File:** swarm/phpipam.yaml
**Type:** Docker Swarm
**Verdict:** PASS

---

1. **Homepage labels**:
   - `homepage.group`: Management
   - `homepage.name`: phpIPAM
   - `homepage.icon`: ipam.png
   - `homepage.href`: http://ipam.netgrimoire.com
   - `homepage.description`: IP Address Management

   **PASS**: All homepage labels are correctly set.

2. **Uptime Kuma labels**:
   - `kuma.<n>.http.name`
   - `kuma.<n>.http.url`

   **FAIL**: The required Uptime Kuma labels are not defined in the YAML file.

3. **Caddy labels on exposed services**:
   - `caddy=ipam.netgrimoire.com`
   - `caddy.reverse_proxy="{{upstreams 80}}"`

   **PASS**: Caddy labels are correctly set for the phpIPAM-web service.

4. **Placement constraints**:
   - `node.hostname == docker3`

   **FAIL**: The placement constraint is applied to all services, but verify that the `docker3` node exists and is available. Additionally, consider a more dynamic constraint where possible (e.g., based on resource availability).

5. **Volumes use /DockerVol/<service> path convention**:
   - `/DockerVol/phpipam/phpipam-logo:/phpipam/css/images/logo`
   - `/DockerVol/phpipam/mariadb:/var/lib/mysql`

   **PASS**: All volumes follow the specified path convention.

6. **Network references external netgrimoire overlay**:
   - `netgrimoire` network is referenced by all services.

   **PASS**: The `netgrimoire` network is correctly referenced as an external overlay network.

**VERDICT: PASS**
54
Netgrimoire/Audits/pinchflat-2026-04-03.md
Normal file

---
title: Audit - pinchflat.yaml
description: Gremlin audit report 2026-04-03
published: true
date: 2026-04-03T03:09:34.505Z
tags: gremlin,audit
editor: markdown
dateCreated: 2026-04-03T03:09:34.505Z
---

# Audit Report — pinchflat.yaml

**Date:** 2026-04-03
**File:** swarm/pinchflat.yaml
**Type:** Docker Swarm
**Verdict:** FAIL

---

### SWARM AUDIT CHECKS:

1. **Homepage labels**:
   - `homepage.group`: "Downloaders" - PASS
   - `homepage.name`: "PinchFlat" - PASS
   - `homepage.icon`: "pinchflat.png" - FAIL (icon path should be accessible)
   - `homepage.href`: "https://pinchflat.netgrimoire.com" - PASS
   - `homepage.description`: "YouTube Library" - PASS

2. **Uptime Kuma labels**:
   - `kuma.pf.http.name`: "PinchFlat" - PASS
   - `kuma.pf.http.url`: "http://pinchflat:8945" - PASS

3. **Caddy labels on exposed services**:
   - `caddy=pinchflat.netgrimoire.com` - PASS
   - `caddy.import=authentik` - PASS
   - `caddy.reverse_proxy=pinchflat:8945` - PASS

4. **Placement constraints**:
   - `node.hostname==nas` - PASS

5. **Volumes use /DockerVol/<service> path convention**:
   - `/DockerVol/pinchflat/config:/config` - PASS
   - `/data/nfs/Baxter/Data/media/other/pinchflat:/downloads` - FAIL (should follow the /DockerVol/<service> convention)

6. **Network references external netgrimoire overlay**:
   - `netgrimoire` network is external - PASS

### FIXES:

- Update `homepage.icon` to a valid accessible path.
- Change `/data/nfs/Baxter/Data/media/other/pinchflat:/downloads` to follow the convention by placing it under `/DockerVol/pinchflat/downloads`.

### VERDICT: FAIL
67
Netgrimoire/Audits/portainer-agent-stack-2026-04-03.md
Normal file

---
title: Audit - portainer-agent-stack.yml
description: Gremlin audit report 2026-04-03
published: true
date: 2026-04-03T03:10:38.984Z
tags: gremlin,audit
editor: markdown
dateCreated: 2026-04-03T03:10:38.984Z
---

# Audit Report — portainer-agent-stack.yml

**Date:** 2026-04-03
**File:** swarm/portainer-agent-stack.yml
**Type:** Docker Swarm
**Verdict:** FAIL

---

### Audit Results:

1. **Homepage labels**:
   - `homepage.group`: Present
   - `homepage.name`: Present
   - `homepage.icon`: Present
   - `homepage.href`: Present
   - `homepage.description`: Present

2. **Uptime Kuma labels**:
   - `kuma.prt.http.name`: Missing (Expected: "Portainer")
   - `kuma.prt.http.url`: Missing (Expected: "http://portainer:9000")

3. **Caddy labels on exposed services**:
   - `caddy=docker.netgrimoire.com`: Present
   - `caddy.reverse_proxy="http://portainer:9000"`: Present

4. **Placement constraints**:
   - `node.hostname == znas`: Present (Note: This constraint might not be ideal for a global service, but it's specified as per the file.)

5. **Volumes use `/DockerVol/<service>` path convention**:
   - Volume `/DockerVol/portainer`: Follows the convention
   - Volume `/var/run/docker.sock`: Does not follow the `/DockerVol/<service>` convention
   - Volume `/var/lib/docker/volumes`: Does not follow the `/DockerVol/<service>` convention

6. **Network references external `netgrimoire` overlay**:
   - `networks: netgrimoire`: Present

### Issues and Fixes:

- **Uptime Kuma labels**: Add the missing labels with the correct values.

  ```yaml
  - kuma.prt.http.name="Portainer"
  - kuma.prt.http.url=http://portainer:9000
  ```

- **Volume use `/DockerVol/<service>` path convention**: Adjust the volumes to follow the convention.

  ```yaml
  volumes:
    - /DockerVol/agent/var/run/docker.sock:/var/run/docker.sock
    - /DockerVol/agent/var/lib/docker/volumes:/var/lib/docker/volumes
  ```

### Final Verdict:

VERDICT: FAIL

Reasons for failure include missing Uptime Kuma labels and non-conformity with the `/DockerVol/<service>` volume path convention.
44
Netgrimoire/Audits/profilarr-2026-04-03.md
Normal file

---
title: Audit - profilarr.yaml
description: Gremlin audit report 2026-04-03
published: true
date: 2026-04-03T03:11:28.457Z
tags: gremlin,audit
editor: markdown
dateCreated: 2026-04-03T03:11:28.457Z
---

# Audit Report — profilarr.yaml

**Date:** 2026-04-03
**File:** swarm/profilarr.yaml
**Type:** Docker Swarm
**Verdict:** PASS

---

1. **Homepage labels**:
   - `homepage.group`: "Jolly Roger" - PASS
   - `homepage.name`: "Profilarr" - PASS
   - `homepage.icon`: "profilarr" - PASS (Note: This should be a valid icon URL or path, not just the name of the service)
   - `homepage.href`: "https://profilarr.netgrimoire.com" - PASS
   - `homepage.description`: "Profilarr" - PASS

2. **Uptime Kuma labels**:
   - `kuma.profilarr.http.name`: "profilarr" - PASS
   - `kuma.profilarr.http.url`: "http://profilarr.netgrimoire.com" - PASS

3. **Caddy labels on exposed services**:
   - `caddy: "profilarr.netgrimoire.com"` - PASS
   - `caddy.reverse_proxy: "{{upstreams 6868}}"` - PASS

4. **Placement constraints**:
   - `node.hostname == docker4` - PASS

5. **Volumes use /DockerVol/<service> path convention**:
   - `/DockerVol/profilarr:/config` - PASS

6. **Network references external netgrimoire overlay**:
   - `netgrimoire` network is marked as `external: true`, ensuring it is referenced correctly in the compose file. - PASS

VERDICT: PASS
27
Netgrimoire/Audits/radarr-2026-04-03.md
Normal file

---
title: Audit - radarr.yaml
description: Gremlin audit report 2026-04-03
published: true
date: 2026-04-03T03:11:58.614Z
tags: gremlin,audit
editor: markdown
dateCreated: 2026-04-03T03:11:58.614Z
---

# Audit Report — radarr.yaml

**Date:** 2026-04-03
**File:** swarm/radarr.yaml
**Type:** Docker Swarm
**Verdict:** FAIL

---

- **Homepage labels**: PASS
- **Uptime Kuma labels**: PASS
- **Caddy labels on exposed services**: FAIL - The `caddy.reverse_proxy` label should use the service name, not just the port. Fix: Change to `- caddy.reverse_proxy={{upstreams radarr}}`.
- **Placement constraints**: PASS
- **Volumes use /DockerVol/<service> path convention**: PASS
- **Network references external netgrimoire overlay**: PASS

**VERDICT: FAIL**
50
Netgrimoire/Audits/readarr-2026-04-03.md
Normal file

---
title: Audit - readarr.yaml
description: Gremlin audit report 2026-04-03
published: true
date: 2026-04-03T03:12:56.461Z
tags: gremlin,audit
editor: markdown
dateCreated: 2026-04-03T03:12:56.461Z
---

# Audit Report — readarr.yaml

**Date:** 2026-04-03
**File:** swarm/readarr.yaml
**Type:** Docker Swarm
**Verdict:** FAIL

---

### SWARM AUDIT REPORT for `swarm/readarr.yaml`

#### Homepage Labels:
1. **PASS**: homepage.group = Jolly Roger
2. **PASS**: homepage.name = Readarr
3. **PASS**: homepage.icon = readarr.png
4. **PASS**: homepage.href = http://readarr.netgrimoire.com
5. **PASS**: homepage.description = Ebook Library

#### Uptime Kuma Labels:
1. **FAIL**: Missing `kuma.readarr.http.name` and `kuma.readarr.http.url`. These labels are necessary for integration with Uptime Kuma.

#### Caddy Labels on Exposed Services:
1. **PASS**: caddy=readarr.netgrimoire.com
2. **PASS**: caddy.reverse_proxy="{{upstreams 8787}}"
   - **ISSUE**: The use of `{{upstreams 8787}}` may not work as expected in a multi-container environment. It should be replaced with the actual service name if it's part of the same stack.
   - **FIX**: Replace `{{upstreams 8787}}` with `http://readarr:8787`.

#### Placement Constraints:
1. **PASS**: node.hostname == docker4

#### Volumes Use `/DockerVol/<service>` Path Convention:
1. **PASS**: /DockerVol/readarr/config
2. **FAIL**: /data/nfs/Baxter/Data does not follow the `/DockerVol/<service>` convention. It should be placed under `/DockerVol/readarr/data`.

#### Network References External `netgrimoire` Overlay:
1. **PASS**: netgrimoire network is external

### VERDICT: FAIL

The report indicates several issues that need to be addressed to fully comply with the specified guidelines. Ensure all labels are correctly defined, adhere to volume naming conventions, and review Caddy configurations for proper service integration.
||||
31
Netgrimoire/Audits/recyclarr-2026-04-03.md
Normal file
31
Netgrimoire/Audits/recyclarr-2026-04-03.md
Normal file
|
|
@ -0,0 +1,31 @@
|
|||
---
|
||||
title: Audit - recyclarr.yaml
|
||||
description: Gremlin audit report 2026-04-03
|
||||
published: true
|
||||
date: 2026-04-03T03:13:33.974Z
|
||||
tags: gremlin,audit
|
||||
editor: markdown
|
||||
dateCreated: 2026-04-03T03:13:33.974Z
|
||||
---
|
||||
|
||||
# Audit Report — recyclarr.yaml
|
||||
|
||||
**Date:** 2026-04-03
|
||||
**File:** swarm/recyclarr.yaml
|
||||
**Type:** Docker Swarm
|
||||
**Verdict:** FAIL
|
||||
|
||||
---
|
||||
|
||||
1. **Homepage labels**: NOT APPLICABLE (No homepage labels specified in the provided YAML).
|
||||
2. **Uptime Kuma labels**: NOT APPLICABLE (No Uptime Kuma service or labels specified in the provided YAML).
|
||||
3. **Caddy labels on exposed services**: NOT APPLICABLE (No Caddy service or reverse proxy configuration specified in the provided YAML).
|
||||
4. **Placement constraints**:
|
||||
- PASS: The `node.hostname == docker4` constraint is correctly applied.
|
||||
5. **Volumes use /DockerVol/<service> path convention**:
|
||||
- FAIL: The volume path `/data/nfs/Baxter/Docker/recyclarr:/config` does not follow the `/DockerVol/recyclarr` path convention.
|
||||
- **Fix**: Change the volume path to `/DockerVol/recyclarr/config`.
|
||||
6. **Network references external netgrimoire overlay**:
|
||||
- PASS: The network `netgrimoire` is correctly referenced as an external network.
|
||||
|
||||
**VERDICT: FAIL**
|
||||
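The volume fix in check 5, sketched as a hypothetical excerpt (only the changed mapping is shown):

```yaml
# recyclarr service in swarm/recyclarr.yaml.
volumes:
  - /DockerVol/recyclarr/config:/config   # was /data/nfs/Baxter/Docker/recyclarr:/config
```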
47
Netgrimoire/Audits/roundcube-2026-04-03.md
Normal file

---
title: Audit - roundcube.yaml
description: Gremlin audit report 2026-04-03
published: true
date: 2026-04-03T03:14:30.315Z
tags: gremlin,audit
editor: markdown
dateCreated: 2026-04-03T03:14:30.315Z
---

# Audit Report — roundcube.yaml

**Date:** 2026-04-03
**File:** swarm/roundcube.yaml
**Type:** Docker Swarm
**Verdict:** FAIL

---

**Audit Report for swarm/roundcube.yaml**

1. **Homepage Labels**:
   - `homepage.group`: Present (`E-Mail`)
   - `homepage.name`: Present (`Roundcube`)
   - `homepage.icon`: Present (`roundcube.png`)
   - `homepage.href`: Present (`http://webmail.netgrimoire.com`)
   - `homepage.description`: Present (`E-mail client`)

2. **Uptime Kuma Labels**:
   - `kuma.rc.http.name="Mealie"`: Incorrect syntax, should be `kuma.rc.http.name=Mealie`
   - `kuma.mrc.http.url=http://roundcube:80`: Correct

3. **Caddy Labels on Exposed Services**:
   - `caddy`: Present (`webmail.netgrimoire.com`, `webmail.gnarlypandaproductions.com`, `webmail.pncharris.com`, `webmail.pncfishandmore.com`, `webmail.pncharrisenterprises.com`, `webmail.florosafd.org`)
   - `caddy.reverse_proxy`: Present (`{{upstreams 80}}`)

4. **Placement Constraints**:
   - `node.hostname`: Present and correct (`docker4`)

5. **Volumes Use /DockerVol/<service> Path Convention**:
   - `/DockerVol/roundcube/www:/var/www/html`: Correct

6. **Network References External netgrimoire Overlay**:
   - `netgrimoire` network: Present and external
   - `mailcow-network` network: Present and external

**VERDICT**: FAIL
48
Netgrimoire/Audits/sabnzbd-2026-04-03.md
Normal file

---
title: Audit - sabnzbd.yaml
description: Gremlin audit report 2026-04-03
published: true
date: 2026-04-03T03:15:29.656Z
tags: gremlin,audit
editor: markdown
dateCreated: 2026-04-03T03:15:29.656Z
---

# Audit Report — sabnzbd.yaml

**Date:** 2026-04-03
**File:** swarm/sabnzbd.yaml
**Type:** Docker Swarm
**Verdict:** FAIL

---

1. **Homepage labels**: All homepage labels are present and correctly formatted.
   - `homepage.group=Jolly Roger`
   - `homepage.name=Sabnzbd`
   - `homepage.icon=sabnzbd.png`
   - `homepage.href=http://sabnzbd.netgrimoire.com`
   - `homepage.description=Usenet Downloader`

2. **Uptime Kuma labels**: Both `kuma.sab.http.name` and `kuma.sab.http.url` are present.
   - `kuma.sab.http.name="Sabnzbd"`
   - `kuma.sab.http.url=http://sabnzbd:8080`

3. **Caddy labels on exposed services**: Caddy labels include both the domain and reverse proxy settings.
   - `caddy=sabnzbd.netgrimoire.com`
   - `caddy.reverse_proxy="{{upstreams 8080}}"`

4. **Placement constraints**: The placement constraint references a specific node label (`node.labels.general == true`). This needs to be updated to reference the node's hostname instead for better clarity.
   - Current: `- node.labels.general == true`
   - Fix: Update to use `node.hostname` if appropriate, or keep the original if `general` is indeed a valid label.

5. **Volumes use /DockerVol/<service> path convention**: The volumes do not follow the `/DockerVol/<service>` path convention.
   - Current paths:
     - `/data/nfs/znas/Data/:/data:shared`
     - `/data/nfs/znas/Docker/Sabnzbd:/config`

6. **Network references external netgrimoire overlay**: The network reference is correctly set to the `netgrimoire` network, which is marked as external.

**VERDICT: FAIL**

The placement constraint should be updated for clarity and the volumes should adhere to the specified path convention.
50
Netgrimoire/Audits/scanopy-2026-04-03.md
Normal file
50
Netgrimoire/Audits/scanopy-2026-04-03.md
Normal file
|
|
@ -0,0 +1,50 @@
|
|||
---
|
||||
title: Audit - scanopy.yaml
|
||||
description: Gremlin audit report 2026-04-03
|
||||
published: true
|
||||
date: 2026-04-03T03:17:06.276Z
|
||||
tags: gremlin,audit
|
||||
editor: markdown
|
||||
dateCreated: 2026-04-03T03:17:06.276Z
|
||||
---
|
||||
|
||||
# Audit Report — scanopy.yaml
|
||||
|
||||
**Date:** 2026-04-03
|
||||
**File:** swarm/scanopy.yaml
|
||||
**Type:** Docker Swarm
|
||||
**Verdict:** FAIL
|
||||
|
||||
---
|
||||
|
||||
### SWARM AUDIT — check ALL of the following:
|
||||
|
||||
1. **Homepage labels**:
|
||||
- `homepage.group`: "Monitoring" (PASS)
|
||||
- `homepage.name`: "Scanopy" (PASS)
|
||||
- `homepage.icon`: "scanopy.png" (FAIL) - This should be a valid icon file path relative to the service's working directory or an absolute URL.
|
||||
- `homepage.href`: "https://scan.netgrimoire.com" (PASS)
|
||||
- `homepage.description`: "Network discovery & topology" (PASS)
|
||||
|
||||
2. **Uptime Kuma labels**:
|
||||
   - The Uptime Kuma labels are not explicitly defined in this file. If they are set in another service or configuration, this check is satisfied there; otherwise they should be added.
|
||||
|
||||
3. **Caddy labels on exposed services**:
|
||||
- `caddy: "scn.netgrimoire.com"` (PASS)
|
||||
- `caddy.reverse_proxy`: "{{upstreams 60072}}" (PASS)
|
||||
|
||||
4. **Placement constraints**:
|
||||
- All services have placement constraints (`node.hostname == docker4`) which are correctly set (PASS).
|
||||
|
||||
5. **Volumes use /DockerVol/<service> path convention**:
|
||||
- `postgres` volume: `/DockerVol/scanopy/postgres:/var/lib/postgresql/data` (PASS)
|
||||
- `server` volume: `/DockerVol/scanopy/server-data:/data` (PASS)
|
||||
- `daemon` volume: `/DockerVol/scanopy/daemon-config:/root/.config/daemon` (PASS)
|
||||
|
||||
6. **Network references external netgrimoire overlay**:
|
||||
- All services reference the `netgrimoire` network which is marked as external (PASS).
|
||||
|
||||
### Final Verdict
|
||||
VERDICT: FAIL
|
||||
|
||||
The issue identified is that the `homepage.icon` label should be a valid icon file path or URL; it is currently set to `"scanopy.png"`, which may not be accessible. Update it so the icon displays correctly on the homepage.
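One way the icon label could be adjusted per the finding above (the absolute URL below is purely illustrative):

```yaml
labels:
  homepage.icon: "https://cdn.example.com/icons/scanopy.png"   # illustrative absolute URL; any reachable icon path works
```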
|
||||
47
Netgrimoire/Audits/sonarr-2026-04-03.md
Normal file
|
|
@ -0,0 +1,47 @@
|
|||
---
|
||||
title: Audit - sonarr.yaml
|
||||
description: Gremlin audit report 2026-04-03
|
||||
published: true
|
||||
date: 2026-04-03T03:17:56.262Z
|
||||
tags: gremlin,audit
|
||||
editor: markdown
|
||||
dateCreated: 2026-04-03T03:17:56.262Z
|
||||
---
|
||||
|
||||
# Audit Report — sonarr.yaml
|
||||
|
||||
**Date:** 2026-04-03
|
||||
**File:** swarm/sonarr.yaml
|
||||
**Type:** Docker Swarm
|
||||
**Verdict:** FAIL
|
||||
|
||||
---
|
||||
|
||||
**SWARM AUDIT REPORT**
|
||||
|
||||
1. **Homepage labels**:
|
||||
- **PASS**: homepage.group=Jolly Roger
|
||||
- **PASS**: homepage.name=Sonarr
|
||||
- **PASS**: homepage.icon=sonarr.png
|
||||
   - **FAIL**: homepage.href=http://sonarr.netgrimoire.com should be http://sonarr:8989 (an internal URL is recommended for services reached from inside the swarm).
|
||||
- **PASS**: homepage.description=Television Library
|
||||
|
||||
2. **Uptime Kuma labels**:
|
||||
- **FAIL**: No kuma labels found.
|
||||
|
||||
3. **Caddy labels on exposed services**:
|
||||
- **PASS**: caddy=sonarr.netgrimoire.com
|
||||
- **PASS**: caddy.reverse_proxy="sonarr:8989"
|
||||
|
||||
4. **Placement constraints**:
|
||||
- **PASS**: node.hostname==docker5
|
||||
|
||||
5. **Volumes use /DockerVol/<service> path convention**:
|
||||
- **FAIL**: /data/nfs/znas/Data/:/data should be /DockerVol/Sonarr/data to adhere to the specified convention.
|
||||
|
||||
6. **Network references external netgrimoire overlay**:
|
||||
- **PASS**: Network netgrimoire is referenced as external.
|
||||
|
||||
**VERDICT: FAIL**
|
||||
|
||||
The configuration contains several issues that need to be addressed for a successful audit. The homepage and Uptime Kuma labels require specific fixes, while the volume path does not comply with the established convention.
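A hedged sketch of the fixes named above; the monitor id/name and the container-side paths follow the finding text, not a verified deployment:

```yaml
labels:
  homepage.href: "http://sonarr:8989"        # internal URL, per the homepage finding
  kuma.sonarr.http.name: "Sonarr"            # monitor id/name assumed
  kuma.sonarr.http.url: "http://sonarr:8989"
volumes:
  - /DockerVol/Sonarr/data:/data             # replaces /data/nfs/znas/Data/
```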
|
||||
46
Netgrimoire/Audits/termix-2026-04-03.md
Normal file
|
|
@ -0,0 +1,46 @@
|
|||
---
|
||||
title: Audit - termix.yaml
|
||||
description: Gremlin audit report 2026-04-03
|
||||
published: true
|
||||
date: 2026-04-03T03:18:39.128Z
|
||||
tags: gremlin,audit
|
||||
editor: markdown
|
||||
dateCreated: 2026-04-03T03:18:39.128Z
|
||||
---
|
||||
|
||||
# Audit Report — termix.yaml
|
||||
|
||||
**Date:** 2026-04-03
|
||||
**File:** swarm/termix.yaml
|
||||
**Type:** Docker Swarm
|
||||
**Verdict:** FAIL
|
||||
|
||||
---
|
||||
|
||||
### SWARM AUDIT RESULTS
|
||||
|
||||
#### Homepage Labels
|
||||
- **PASS**: `homepage.group`: Remote Access
|
||||
- **PASS**: `homepage.name`: Termix
|
||||
- **PASS**: `homepage.icon`: terminal
|
||||
- **PASS**: `homepage.href`: https://termix.netgrimoire.com
|
||||
- **PASS**: `homepage.description`: Web-based terminal interface
|
||||
|
||||
#### Uptime Kuma Labels
|
||||
- **PASS**: `kuma.termix.http.name`: Termix
|
||||
- **PASS**: `kuma.termix.http.url`: https://termix.netgrimoire.com
|
||||
|
||||
#### Caddy Labels
|
||||
- **FAIL**: Missing `caddy=<domain>` label. Add `caddy=termix.netgrimoire.com`.
|
||||
- **FAIL**: Missing `caddy.reverse_proxy` label. Add `caddy.reverse_proxy: termix:8080`.
|
||||
|
||||
#### Placement Constraints
|
||||
- **PASS**: `node.hostname == docker4`
|
||||
|
||||
#### Volumes Path Convention
|
||||
- **PASS**: `/DockerVol/termix:/app/data` adheres to the convention
|
||||
|
||||
#### Network References
|
||||
- **PASS**: References the external `netgrimoire` network
|
||||
|
||||
### VERDICT: FAIL
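Per the two failing checks above, the missing Caddy labels could be added roughly as follows (the port is the one named in the finding; verify it against the actual service):

```yaml
labels:
  caddy: termix.netgrimoire.com
  caddy.reverse_proxy: "termix:8080"   # port taken from the finding above; verify against the service
```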
|
||||
52
Netgrimoire/Audits/tmm-2026-04-03.md
Normal file
|
|
@ -0,0 +1,52 @@
|
|||
---
|
||||
title: Audit - tmm.yaml
|
||||
description: Gremlin audit report 2026-04-03
|
||||
published: true
|
||||
date: 2026-04-03T03:19:40.493Z
|
||||
tags: gremlin,audit
|
||||
editor: markdown
|
||||
dateCreated: 2026-04-03T03:19:40.493Z
|
||||
---
|
||||
|
||||
# Audit Report — tmm.yaml
|
||||
|
||||
**Date:** 2026-04-03
|
||||
**File:** swarm/tmm.yaml
|
||||
**Type:** Docker Swarm
|
||||
**Verdict:** FAIL
|
||||
|
||||
---
|
||||
|
||||
### SWARM AUDIT RESULTS:
|
||||
|
||||
1. **Homepage labels**:
|
||||
- `homepage.group`: "Media Management" - PASS
|
||||
- `homepage.name`: "Tiny Media Manager" - PASS
|
||||
- `homepage.icon`: "troddit.png" - PASS
|
||||
- `homepage.href`: "https://tmm.netgrimoire.com" - PASS
|
||||
- `homepage.description`: "Media Manager" - PASS
|
||||
|
||||
2. **Uptime Kuma labels**:
|
||||
- No Kuma labels found. Uptime Kuma is not included in this service's deployment configuration.
|
||||
|
||||
3. **Caddy labels on exposed services**:
|
||||
- `caddy=tmm.netgrimoire.com` - PASS
|
||||
- `caddy.reverse_proxy=tinymediamanager_service:4000` - PASS
|
||||
|
||||
4. **Placement constraints**:
|
||||
   - No placement constraints (e.g., `node.hostname`) are defined. This is acceptable if there are no specific node requirements.
|
||||
|
||||
5. **Volumes use /DockerVol/<service> path convention**:
|
||||
- `/data/nfs/Baxter/Docker/tinymediamanager:/config:rw` - FAIL
|
||||
- Issue: The volume does not follow the `/DockerVol/<service>` path convention.
|
||||
- Fix: Update to `/DockerVol/tinymediamanager/config`.
|
||||
|
||||
- `/data/nfs/Baxter:/media:rw` - PASS
|
||||
     - Note: This is a shared media mount rather than service configuration, so the specific `/DockerVol/<service>` convention is not applied here.
|
||||
|
||||
6. **Network references external netgrimoire overlay**:
|
||||
- `netgrimoire` network is referenced as `external: true`.
|
||||
|
||||
### VERDICT: FAIL
|
||||
|
||||
The audit failed due to a volume path that does not adhere to the specified convention. All other checks passed.
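A sketch of the volume fix suggested above, assuming the media mount stays where it is:

```yaml
volumes:
  - /DockerVol/tinymediamanager/config:/config:rw   # replaces the /data/nfs/Baxter/Docker path
  - /data/nfs/Baxter:/media:rw                      # shared media mount, unchanged
```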
|
||||
30
Netgrimoire/Audits/tunarr-2026-04-03.md
Normal file
|
|
@ -0,0 +1,30 @@
|
|||
---
|
||||
title: Audit - tunarr.yaml
|
||||
description: Gremlin audit report 2026-04-03
|
||||
published: true
|
||||
date: 2026-04-03T03:20:23.850Z
|
||||
tags: gremlin,audit
|
||||
editor: markdown
|
||||
dateCreated: 2026-04-03T03:20:23.850Z
|
||||
---
|
||||
|
||||
# Audit Report — tunarr.yaml
|
||||
|
||||
**Date:** 2026-04-03
|
||||
**File:** swarm/tunarr.yaml
|
||||
**Type:** Docker Swarm
|
||||
**Verdict:** FAIL
|
||||
|
||||
---
|
||||
|
||||
**SWARM AUDIT — check ALL of the following:**
|
||||
|
||||
1. **Homepage labels**: All homepage labels are present.
|
||||
2. **Uptime Kuma labels**: The labels `kuma.tunarr.http.name` and `kuma.tunarr.http.url` are present. Since this stack defines a single monitored service, one label pair is acceptable.
|
||||
3. **Caddy labels on exposed services**: The Caddy labels `caddy=tunarr.netgrimoire.com` and `caddy.reverse_proxy="{{upstreams 8000}}"` are present.
|
||||
4. **Placement constraints**: The placement constraint `node.labels.general == true` is not valid. It should be `node.role == "manager"` or another appropriate role if using a manager node.
|
||||
5. **Volumes use /DockerVol/<service> path convention**: The volumes do not follow the convention, e.g., `/data/nfs/Baxter/Docker/tunarr/config` should live under `/DockerVol/tunarr/`.
|
||||
6. **Network references external netgrimoire overlay**: The network `netgrimoire` is referenced correctly and is an external network.
|
||||
|
||||
**Final Verdict:**
|
||||
VERDICT: FAIL
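The placement fix from finding 4 might look like this (which constraint is appropriate depends on the cluster layout):

```yaml
deploy:
  placement:
    constraints:
      - node.role == manager   # per the finding; or node.hostname == <target-host> if pinning to one node
```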
|
||||
62
Netgrimoire/Audits/vault-2026-04-03.md
Normal file
|
|
@ -0,0 +1,62 @@
|
|||
---
|
||||
title: Audit - vault.yaml
|
||||
description: Gremlin audit report 2026-04-03
|
||||
published: true
|
||||
date: 2026-04-03T03:21:32.070Z
|
||||
tags: gremlin,audit
|
||||
editor: markdown
|
||||
dateCreated: 2026-04-03T03:21:32.070Z
|
||||
---
|
||||
|
||||
# Audit Report — vault.yaml
|
||||
|
||||
**Date:** 2026-04-03
|
||||
**File:** swarm/vault.yaml
|
||||
**Type:** Docker Swarm
|
||||
**Verdict:** FAIL
|
||||
|
||||
---
|
||||
|
||||
1. **Homepage labels**:
|
||||
- `homepage.group`: "Backup"
|
||||
- `homepage.name`: "Vault"
|
||||
- `homepage.icon`: "kopia.png"
|
||||
- `homepage.href`: "https://vault.netgrimoire.com"
|
||||
- `homepage.description`: "Snapshot backup and deduplication"
|
||||
|
||||
**PASS**: All homepage labels are correctly defined.
|
||||
|
||||
2. **Uptime Kuma labels**:
|
||||
- `kuma.kopia.http.name`: "Kopia Web"
|
||||
- `kuma.kopia.http.url`: "http://vault:51515"
|
||||
|
||||
**PASS**: Uptime Kuma labels are correctly defined.
|
||||
|
||||
3. **Caddy labels on exposed services**:
|
||||
- `caddy: vault.netgrimoire.com`
|
||||
- `caddy.reverse_proxy: "https://kopia-server-vault:51516"`
|
||||
|
||||
**FAIL**: The `caddy.reverse_proxy` label is incorrectly configured. It should point to the correct service, likely "vault" instead of "kopia-server-vault".
|
||||
|
||||
4. **Placement constraints**:
|
||||
- `node.hostname == znas`
|
||||
|
||||
**PASS**: Placement constraint correctly targets a specific node.
|
||||
|
||||
5. **Volumes use /DockerVol/<service> path convention**:
|
||||
- `/DockerVol/vault/config:/app/config`
|
||||
- `/DockerVol/vault/cache:/app/cache`
|
||||
- `/DockerVol/vault/cert:/app/cert`
|
||||
- `/srv/vault/backup/repository:/vault`
|
||||
- `/DockerVol/vault/logs:/app/logs`
|
||||
|
||||
**FAIL**: One volume path does not follow the `/DockerVol/<service>` convention: the backup repository should be `/DockerVol/vault/backup/repository` instead of `/srv/vault/backup/repository`.
|
||||
|
||||
6. **Network references external netgrimoire overlay**:
|
||||
- `netgrimoire`: External
|
||||
|
||||
**PASS**: Network reference is correctly set to an external network.
|
||||
|
||||
**VERDICT: FAIL**
|
||||
|
||||
The configuration contains issues that need to be addressed for the infrastructure to meet the specified standards and function correctly.
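A rough sketch of the two fixes above; the proxy target follows the finding, and the port is kept from the original label rather than verified:

```yaml
labels:
  caddy: vault.netgrimoire.com
  caddy.reverse_proxy: "https://vault:51516"      # target service per the finding; port kept from the original label
volumes:
  - /DockerVol/vault/backup/repository:/vault     # moves the repository under /DockerVol/vault
```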
|
||||
45
Netgrimoire/Audits/vaultwarden-2026-04-03.md
Normal file
|
|
@ -0,0 +1,45 @@
|
|||
---
|
||||
title: Audit - vaultwarden.yaml
|
||||
description: Gremlin audit report 2026-04-03
|
||||
published: true
|
||||
date: 2026-04-03T03:22:15.425Z
|
||||
tags: gremlin,audit
|
||||
editor: markdown
|
||||
dateCreated: 2026-04-03T03:22:15.425Z
|
||||
---
|
||||
|
||||
# Audit Report — vaultwarden.yaml
|
||||
|
||||
**Date:** 2026-04-03
|
||||
**File:** swarm/vaultwarden.yaml
|
||||
**Type:** Docker Swarm
|
||||
**Verdict:** FAIL
|
||||
|
||||
---
|
||||
|
||||
1. **Homepage labels:**
|
||||
- `homepage.group`: PASS
|
||||
- `homepage.name`: PASS
|
||||
- `homepage.icon`: PASS
|
||||
- `homepage.href`: PASS
|
||||
- `homepage.description`: PASS
|
||||
|
||||
2. **Uptime Kuma labels:**
|
||||
   - No Uptime Kuma labels are defined in the file. This leaves the service without automated monitoring and should be addressed.
|
||||
|
||||
3. **Caddy labels on exposed services:**
|
||||
- `caddy=pass.netgrimoire.com`: PASS
|
||||
- `caddy.reverse_proxy=bitwarden:80`: PASS
|
||||
|
||||
4. **Placement constraints:**
|
||||
- `node.hostname == docker3`: PASS
|
||||
|
||||
5. **Volumes use /DockerVol/<service> path convention:**
|
||||
- `/DockerVol/bitwarden:/data`: PASS
|
||||
|
||||
6. **Network references external netgrimoire overlay:**
|
||||
- `networks: netgrimoire` with `external: true`: PASS
|
||||
|
||||
**VERDICT: FAIL**
|
||||
|
||||
The main issue is the absence of Uptime Kuma labels, which could affect monitoring and management of the service. This should be addressed to ensure comprehensive infrastructure auditing.
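The missing monitoring labels could be added along these lines (the monitor id and name are assumptions, and the URL is taken from the Caddy domain above):

```yaml
labels:
  kuma.vaultwarden.http.name: "Vaultwarden"                  # monitor id/name assumed
  kuma.vaultwarden.http.url: "https://pass.netgrimoire.com"  # from caddy=pass.netgrimoire.com
```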
|
||||
44
Netgrimoire/Audits/vikunja-2026-04-03.md
Normal file
|
|
@ -0,0 +1,44 @@
|
|||
---
|
||||
title: Audit - vikunja.yaml
|
||||
description: Gremlin audit report 2026-04-03
|
||||
published: true
|
||||
date: 2026-04-03T03:23:18.317Z
|
||||
tags: gremlin,audit
|
||||
editor: markdown
|
||||
dateCreated: 2026-04-03T03:23:18.317Z
|
||||
---
|
||||
|
||||
# Audit Report — vikunja.yaml
|
||||
|
||||
**Date:** 2026-04-03
|
||||
**File:** swarm/vikunja.yaml
|
||||
**Type:** Docker Swarm
|
||||
**Verdict:** FAIL
|
||||
|
||||
---
|
||||
|
||||
### Audit Report for `swarm/vikunja.yaml`
|
||||
|
||||
#### Homepage Labels
|
||||
**PASS**: The labels `homepage.group`, `homepage.name`, `homepage.icon`, `homepage.href`, and `homepage.description` are correctly defined.
|
||||
|
||||
#### Uptime Kuma Labels
|
||||
**FAIL**:
|
||||
- **Issue**: The labels `kuma.vik.http.name` and `kuma.vik.http.url` use the ad-hoc identifier `vik`, which may not match the expected monitor-key convention.
|
||||
- **Fix**: Use the documented key form, e.g., `kuma.0.http.name` and `kuma.0.http.url`, or a consistent service identifier such as `vikunja`.
|
||||
|
||||
#### Caddy Labels on Exposed Services
|
||||
**PASS**: The labels `caddy=task.netgrimoire.com`, `caddy.import: authentik`, and `caddy.reverse_proxy: "{{upstreams 3456}}"` are correctly defined.
|
||||
|
||||
#### Placement Constraints
|
||||
**PASS**: The constraints `node.hostname == docker4` are applied to both services.
|
||||
|
||||
#### Volumes Use `/DockerVol/<service>` Path Convention
|
||||
**PASS**: Both services use the correct path convention for volumes, e.g., `/DockerVol/vikunja/files` and `/DockerVol/vikunja/db`.
|
||||
|
||||
#### Network References External `netgrimoire` Overlay
|
||||
**PASS**: The network `netgrimoire` is correctly referenced as external.
|
||||
|
||||
### VERDICT: FAIL
|
||||
|
||||
The audit found one issue that needs to be addressed before the configuration can be considered fully compliant.
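If indexed keys turn out to be required, the relabeling suggested above would look roughly like this (whether the monitor key must be numeric depends on the Kuma/autokuma setup in use):

```yaml
labels:
  kuma.0.http.name: "Vikunja"
  kuma.0.http.url: "https://task.netgrimoire.com"
```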
|
||||
27
Netgrimoire/Audits/vscode-2026-04-03.md
Normal file
|
|
@ -0,0 +1,27 @@
|
|||
---
|
||||
title: Audit - vscode.yaml
|
||||
description: Gremlin audit report 2026-04-03
|
||||
published: true
|
||||
date: 2026-04-03T03:23:58.891Z
|
||||
tags: gremlin,audit
|
||||
editor: markdown
|
||||
dateCreated: 2026-04-03T03:23:58.891Z
|
||||
---
|
||||
|
||||
# Audit Report — vscode.yaml
|
||||
|
||||
**Date:** 2026-04-03
|
||||
**File:** swarm/vscode.yaml
|
||||
**Type:** Docker Swarm
|
||||
**Verdict:** FAIL
|
||||
|
||||
---
|
||||
|
||||
1. **Homepage labels**: PASS - All required labels are present and correctly formatted.
|
||||
2. **Uptime Kuma labels**: PASS - `kuma.cs.http.name` and `kuma.cs.http.url` are correctly set.
|
||||
3. **Caddy labels on exposed services**: PASS - `caddy=code.netgrimoire.com` and `caddy.reverse_proxy=code-server:8443` are correctly specified.
|
||||
4. **Placement constraints**: PASS - The constraint `node.hostname == znas` is present.
|
||||
5. **Volumes use /DockerVol/<service> path convention**: FAIL - The volume for `Projects` does not follow the `/DockerVol/<service>` path convention. It should be renamed to `/DockerVol/Code/projects`.
|
||||
6. **Network references external netgrimoire overlay**: PASS - The network `netgrimoire` is correctly referenced as external.
|
||||
|
||||
**VERDICT: FAIL**
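One illustrative way the `Projects` volume could be brought into convention (the container-side path is assumed, since it is not shown in the report):

```yaml
volumes:
  - /DockerVol/Code/projects:/projects   # container path assumed for illustration
```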
|
||||
54
Netgrimoire/Audits/wallo-2026-04-03.md
Normal file
|
|
@ -0,0 +1,54 @@
|
|||
---
|
||||
title: Audit - wallo.yaml
|
||||
description: Gremlin audit report 2026-04-03
|
||||
published: true
|
||||
date: 2026-04-03T03:24:58.197Z
|
||||
tags: gremlin,audit
|
||||
editor: markdown
|
||||
dateCreated: 2026-04-03T03:24:58.197Z
|
||||
---
|
||||
|
||||
# Audit Report — wallo.yaml
|
||||
|
||||
**Date:** 2026-04-03
|
||||
**File:** swarm/wallo.yaml
|
||||
**Type:** Docker Swarm
|
||||
**Verdict:** FAIL
|
||||
|
||||
---
|
||||
|
||||
### Audit Report for `wallo.yaml`
|
||||
|
||||
1. **Homepage labels**:
|
||||
- `homepage.group`: "PNCHarris Apps" (PASS)
|
||||
- `homepage.name`: "Wallos" (PASS)
|
||||
   - `homepage.icon`: "wallos.png" (FAIL) - The icon reference could not be resolved; verify the value.
|
||||
- `homepage.href`: "https://expense.netgrimoire.com" (PASS)
|
||||
- `homepage.description`: "Subscription Manager" (PASS)
|
||||
|
||||
2. **Uptime Kuma labels**:
|
||||
- `kuma.wallos.http.name`: "Wallos" (PASS)
|
||||
- `kuma.wallos.http.url`: "http://wallos:80" (PASS)
|
||||
|
||||
3. **Caddy labels on exposed services**:
|
||||
- `caddy=expense.netgrimoire.com` (PASS)
|
||||
- `caddy.reverse_proxy=wallos:80` (PASS)
|
||||
|
||||
4. **Placement constraints**:
|
||||
   - No placement constraints specified in the `deploy` block (FAIL) - Add a `placement.constraints` entry such as `node.hostname == <target-host>` as needed.
|
||||
|
||||
5. **Volumes use /DockerVol/<service> path convention**:
|
||||
- `/data/nfs/Baxter/Docker/wallos/db:/var/www/html/db` (FAIL) - Volumes do not follow the `/DockerVol/<service>` convention.
|
||||
- `/data/nfs/Baxter/Docker/wallos/logos:/var/www/html/images/uploads/logos` (FAIL)
|
||||
|
||||
6. **Network references external netgrimoire overlay**:
|
||||
- Network `netgrimoire` is referenced as external (PASS)
|
||||
|
||||
### Fix Recommendations
|
||||
|
||||
- Correct the homepage icon value.
|
||||
- Add placement constraints if needed for specific node placement.
|
||||
- Rename and relocate volumes to follow the `/DockerVol/<service>` convention, e.g., `/DockerVol/wallos/db:/var/www/html/db`.
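The recommendations above can be sketched as follows (the hostname placeholder is illustrative):

```yaml
deploy:
  placement:
    constraints:
      - node.hostname == <target-host>   # replace with the intended node, if pinning is needed
volumes:
  - /DockerVol/wallos/db:/var/www/html/db
  - /DockerVol/wallos/logos:/var/www/html/images/uploads/logos
```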
|
||||
|
||||
### Final Verdict
|
||||
VERDICT: FAIL
|
||||
45
Netgrimoire/Audits/web-2026-04-03.md
Normal file
|
|
@ -0,0 +1,45 @@
|
|||
---
|
||||
title: Audit - web.yaml
|
||||
description: Gremlin audit report 2026-04-03
|
||||
published: true
|
||||
date: 2026-04-03T03:25:48.458Z
|
||||
tags: gremlin,audit
|
||||
editor: markdown
|
||||
dateCreated: 2026-04-03T03:25:48.458Z
|
||||
---
|
||||
|
||||
# Audit Report — web.yaml
|
||||
|
||||
**Date:** 2026-04-03
|
||||
**File:** swarm/web.yaml
|
||||
**Type:** Docker Swarm
|
||||
**Verdict:** FAIL
|
||||
|
||||
---
|
||||
|
||||
1. **Homepage labels**:
|
||||
- `homepage.name` is present.
|
||||
- `homepage.icon` is present.
|
||||
- `homepage.href` is missing. Add it with the appropriate value.
|
||||
- `homepage.description` is missing. Consider adding a description for clarity.
|
||||
|
||||
2. **Uptime Kuma labels**:
|
||||
- `kuma.web.http.name` is present.
|
||||
- `kuma.web.http.url` is present.
|
||||
|
||||
3. **Caddy labels on exposed services**:
|
||||
- `caddy=www.netgrimoire.com` is present.
|
||||
- `caddy.reverse_proxy="web:80"` is present.
|
||||
|
||||
4. **Placement constraints**:
|
||||
- `node.labels.cpu == amd` is present, but it should be `node.labels.cpu == "amd"` for better readability.
|
||||
|
||||
5. **Volumes use `/DockerVol/<service>` path convention**:
|
||||
- The volumes are not using the `/DockerVol/<service>` path convention. They are located at `/data/nfs/znas/Docker/web/pages` and `/data/nfs/znas/Docker/web/apache`. Consider creating a symbolic link or moving these volumes to follow the convention.
|
||||
|
||||
6. **Network references external `netgrimoire` overlay**:
|
||||
- The network is correctly referencing an external `netgrimoire` overlay.
|
||||
|
||||
**VERDICT: FAIL**
|
||||
|
||||
The homepage labels are missing `homepage.href` and `homepage.description`. Additionally, the volumes need to be moved or symlinked to follow the `/DockerVol/<service>` path convention.
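A hedged sketch of the two missing homepage labels; the href is inferred from the Caddy domain and the description is a placeholder:

```yaml
labels:
  homepage.href: "https://www.netgrimoire.com"   # inferred from caddy=www.netgrimoire.com
  homepage.description: "Static website"         # placeholder description; adjust as needed
```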
|
||||
42
Netgrimoire/Audits/whisparr-2026-04-03.md
Normal file
|
|
@ -0,0 +1,42 @@
|
|||
---
|
||||
title: Audit - whisparr.yaml
|
||||
description: Gremlin audit report 2026-04-03
|
||||
published: true
|
||||
date: 2026-04-03T03:27:24.537Z
|
||||
tags: gremlin,audit
|
||||
editor: markdown
|
||||
dateCreated: 2026-04-03T03:27:24.537Z
|
||||
---
|
||||
|
||||
# Audit Report — whisparr.yaml
|
||||
|
||||
**Date:** 2026-04-03
|
||||
**File:** swarm/whisparr.yaml
|
||||
**Type:** Docker Swarm
|
||||
**Verdict:** FAIL
|
||||
|
||||
---
|
||||
|
||||
1. **Homepage labels**: All homepage labels are present in the configuration.
|
||||
   - Services covered: Greenfin, Whisparr
|
||||
|
||||
2. **Uptime Kuma labels**:
|
||||
- **FAIL**: Uptime Kuma is not defined in the provided YAML file.
|
||||
|
||||
3. **Caddy labels on exposed services**:
|
||||
- **FAIL**: No Caddy labels are present in the configuration.
|
||||
|
||||
4. **Placement constraints**:
|
||||
- **PASS**: All placement constraints specify `node.hostname == docker1`.
|
||||
|
||||
5. **Volumes use /DockerVol/<service> path convention**:
|
||||
- **PASS**: Volumes follow the `/DockerVol/<service>` naming convention.
|
||||
- Whisparr: `/DockerVol/Whisparr:/config`
|
||||
- Greenfin (commented out): `/DockerVol/greenfin/config:/config`
|
||||
|
||||
6. **Network references external netgrimoire overlay**:
|
||||
- **PASS**: The network `netgrimoire` is referenced as an external network.
|
||||
|
||||
**VERDICT: FAIL**
|
||||
|
||||
The configuration lacks essential labels for Uptime Kuma and Caddy, which are critical for monitoring and routing services. Additionally, the Greenfin service is commented out but would still need to adhere to the volume and placement constraints guidelines if uncommented.
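The missing monitoring and routing labels might look something like this; the domain, monitor name, and port are all assumptions (the port shown is Whisparr's usual default, unverified here):

```yaml
labels:
  kuma.whisparr.http.name: "Whisparr"              # monitor name assumed
  kuma.whisparr.http.url: "http://whisparr:6969"   # assumed default port; verify
  caddy: whisparr.netgrimoire.com                  # domain is illustrative
  caddy.reverse_proxy: "{{upstreams 6969}}"
```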
|
||||
76
Netgrimoire/Audits/wiki-2026-04-03.md
Normal file
|
|
@ -0,0 +1,76 @@
|
|||
---
|
||||
title: Audit - wiki.yaml
|
||||
description: Gremlin audit report 2026-04-03
|
||||
published: true
|
||||
date: 2026-04-03T03:28:56.635Z
|
||||
tags: gremlin,audit
|
||||
editor: markdown
|
||||
dateCreated: 2026-04-03T03:28:56.635Z
|
||||
---
|
||||
|
||||
# Audit Report — wiki.yaml
|
||||
|
||||
**Date:** 2026-04-03
|
||||
**File:** swarm/wiki.yaml
|
||||
**Type:** Docker Swarm
|
||||
**Verdict:** FAIL
|
||||
|
||||
---
|
||||
|
||||
### SWARM AUDIT CHECKLIST:
|
||||
|
||||
1. **Homepage labels:**
|
||||
- **PASS**: `wikijs`, `drawio`
|
||||
- **FAIL**: No homepage labels defined for `wikijs-db`.
|
||||
- **Fix**: Add the following labels to `wikijs-db`:
|
||||
```yaml
|
||||
labels:
|
||||
homepage.group: "Database"
|
||||
homepage.name: "PostgreSQL"
|
||||
homepage.icon: "postgres.png"
|
||||
homepage.href: "https://www.postgresql.org"
|
||||
homepage.description: "Relational Database"
|
||||
diun.enable: "true"
|
||||
```
|
||||
|
||||
2. **Uptime Kuma labels:**
|
||||
- **FAIL**: `wikijs`, `drawio` missing Kuma labels.
|
||||
- **Fix**: Add the following labels to both `wikijs` and `drawio`:
|
||||
```yaml
|
||||
labels:
|
||||
kuma.<n>.http.name: "Wiki.js"
|
||||
kuma.<n>.http.url: "https://wiki.netgrimoire.com"
|
||||
# Replace <n> with a sequential number if multiple instances are needed.
|
||||
```
|
||||
|
||||
3. **Caddy labels on exposed services:**
|
||||
- **FAIL**: `drawio` missing Caddy labels for reverse proxy.
|
||||
- **Fix**: Add the following labels to `drawio`:
|
||||
```yaml
|
||||
labels:
|
||||
caddy: draw.netgrimoire.com
|
||||
caddy.reverse_proxy: "{{upstreams 8080}}"
|
||||
```
|
||||
   - **PASS**: All of `wikijs-db`, `wikijs`, and `drawio` have `caddy=<domain>` labels.
|
||||
|
||||
4. **Placement constraints:**
|
||||
- **FAIL**: No placement constraints for `drawio`.
|
||||
- **Fix**: Add the following constraints to `drawio`:
|
||||
```yaml
|
||||
deploy:
|
||||
mode: replicated
|
||||
replicas: 1
|
||||
placement:
|
||||
constraints:
|
||||
- node.hostname == dockerpi1
|
||||
- node.labels.cpu == arm
|
||||
```
|
||||
|
||||
5. **Volumes use /DockerVol/<service> path convention:**
|
||||
- **PASS**: All services follow this convention.
|
||||
|
||||
6. **Network references external netgrimoire overlay:**
|
||||
   - **PASS**: All of `wikijs-db`, `wikijs`, and `drawio` reference the external network `netgrimoire`.
|
||||
|
||||
### VERDICT:
|
||||
FAIL
|
||||
276
Netgrimoire/Conventions/Doc-Standards.md
Normal file
|
|
@ -0,0 +1,276 @@
|
|||
---
|
||||
title: Netgrimoire Documentation
|
||||
description: How to create and use Netgrimoire Docs
|
||||
published: true
|
||||
date: 2026-02-20T04:16:19.329Z
|
||||
tags:
|
||||
editor: markdown
|
||||
dateCreated: 2026-02-03T02:54:56.444Z
|
||||
---
|
||||
|
||||
# Homelab Documentation Structure & Diagram Standards
|
||||
|
||||
This document describes the **official documentation structure** for the homelab Wiki.js instance, including:
|
||||
- Folder and page layout
|
||||
- Naming conventions
|
||||
- How Git fits into the workflow
|
||||
- How to use draw.io (diagrams.net) for diagrams
|
||||
- How to ensure documentation is accessible when the lab is down
|
||||
|
||||
This page is intended to be a **reference and enforcement guide**.
|
||||
|
||||
---
|
||||
|
||||
## Core Principles
|
||||
|
||||
1. **Wiki.js is the editor, Git is the source of truth**
|
||||
2. **All documentation must be readable without Wiki.js**
|
||||
3. **Diagrams must be viewable without draw.io**
|
||||
4. **Folder structure must be predictable and consistent**
|
||||
5. **Emergency documentation must not depend on the lab being up**
|
||||
|
||||
---
|
||||
|
||||
## Repository Overview
|
||||
|
||||
All documentation lives in a single Git repository.
|
||||
|
||||
Wiki.js writes Markdown files into this repository automatically.
|
||||
The repository can be cloned to a laptop or other device for **offline access**.
|
||||
|
||||
Example:
|
||||
```bash
|
||||
git clone ssh://git@forgejo.example.com/homelab/docs.git
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Top-Level Folder Structure
|
||||
```
|
||||
homelab-docs/
|
||||
├── README.md
|
||||
├── emergency/
|
||||
├── infrastructure/
|
||||
├── storage/
|
||||
├── services/
|
||||
├── runbooks/
|
||||
├── diagrams/
|
||||
└── assets/
|
||||
```
|
||||
|
||||
### Folder Purpose
|
||||
|
||||
| Folder | Purpose |
|
||||
|--------|---------|
|
||||
| README.md | Entry point when the lab is down |
|
||||
| emergency/ | Recovery procedures and break-glass docs |
|
||||
| infrastructure/ | Core systems (identity, backups, networking) |
|
||||
| storage/ | Storage platforms and layouts |
|
||||
| services/ | Application-specific documentation |
|
||||
| runbooks/ | Step-by-step operational procedures |
|
||||
| diagrams/ | All draw.io diagrams and exports |
|
||||
| assets/ | Images or files used by documentation |
|
||||
|
||||
---
|
||||
|
||||
## Storage Documentation Structure
|
||||
```
|
||||
storage/
|
||||
└── core/
|
||||
├── zfs.md
|
||||
├── local-drives.md
|
||||
├── nas.md
|
||||
└── btrfs.md
|
||||
```
|
||||
|
||||
**Guidelines:**
|
||||
- Each storage technology gets its own page
|
||||
- Pages describe architecture, layout, and operational notes
|
||||
- Backup and snapshot policies belong elsewhere
|
||||
|
||||
---
|
||||
|
||||
## Infrastructure Documentation Structure
|
||||
```
|
||||
infrastructure/
|
||||
└── backups/
|
||||
├── zfs-snapshots.md
|
||||
└── application-backups.md
|
||||
```
|
||||
|
||||
**Guidelines:**
|
||||
- Infrastructure describes cross-cutting systems
|
||||
- Anything used by multiple hosts or services belongs here
|
||||
- Backup strategies are infrastructure, not storage
|
||||
|
||||
---
|
||||
|
||||
## Services Documentation Structure
|
||||
```
|
||||
services/
|
||||
└── mailcow.md
|
||||
```
|
||||
|
||||
**Guidelines:**
|
||||
- One page per service
|
||||
- Page should include:
|
||||
- Purpose
|
||||
- Architecture
|
||||
- Volumes
|
||||
- Backup considerations
|
||||
- Recovery notes
|
||||
|
||||
---
|
||||
|
||||
## Emergency Documentation
|
||||
```
|
||||
emergency/
|
||||
├── bring-up-order.md
|
||||
├── swarm-recovery.md
|
||||
├── zfs-import.md
|
||||
├── network-restore.md
|
||||
└── identity-break-glass.md
|
||||
```
|
||||
|
||||
**Rules:**
|
||||
|
||||
Emergency docs must be:
|
||||
- Text-first
|
||||
- Copy/paste friendly
|
||||
- Free of dependencies
|
||||
|
||||
These pages should be readable directly from Git.
|
||||
|
||||
---
|
||||
|
||||
## Naming Conventions (Mandatory)
|
||||
|
||||
**Folders:**
|
||||
- Lowercase
|
||||
- No spaces
|
||||
- Example: `infrastructure/backups`
|
||||
|
||||
**Page filenames:**
|
||||
- Lowercase
|
||||
- Hyphen-separated
|
||||
- Example: `zfs-snapshots.md`
|
||||
|
||||
**Page titles:**
|
||||
- Human readable
|
||||
- Proper case
|
||||
- Example: `# ZFS Snapshots`
|
||||
|
||||
---
|
||||
|
||||
## draw.io (diagrams.net) Usage
|
||||
|
||||
draw.io is used **only to create diagrams**, never as the sole storage location.
|
||||
|
||||
### Diagram Storage Layout
|
||||
```
|
||||
diagrams/
|
||||
├── network/
|
||||
│ ├── core.drawio
|
||||
│ ├── core.png
|
||||
│ └── core.svg
|
||||
├── docker/
|
||||
│ ├── swarm-architecture.drawio
|
||||
│ └── swarm-architecture.png
|
||||
└── storage/
|
||||
├── zfs-layout.drawio
|
||||
└── zfs-layout.png
|
||||
```
|
||||
|
||||
### File Types
|
||||
|
||||
| File | Purpose |
|
||||
|------|---------|
|
||||
| .drawio | Editable source |
|
||||
| .png | Offline viewing |
|
||||
| .svg | Zoomable / high quality (optional) |
|
||||
|
||||
**Every diagram MUST have a PNG export.**
|
||||
|
||||
---
|
||||
|
||||
## Adding a Diagram (Required Workflow)
|
||||
|
||||
1. Create or edit the diagram in draw.io
|
||||
2. Save the `.drawio` file into `diagrams/<category>/`
|
||||
3. Export a `.png` (and optional `.svg`)
|
||||
4. Commit all files to Git
|
||||
|
||||
If a diagram cannot be viewed without draw.io running, it is **not complete**.
---

## Embedding Diagrams in Wiki.js Pages

Always embed PNG or SVG, never live editors.

Example:

```markdown
 

_Source file: core.drawio_
```

This ensures:

- Fast rendering
- Offline viewing
- No service dependency

---

## Git Workflow Expectations

**Authoring:**

- All pages are created and edited in Wiki.js
- Wiki.js commits changes automatically

**Offline Access:**

- Documentation is read directly from the Git clone
- Markdown and images must be sufficient without Wiki.js

**What Not To Do:**

- Do not create wiki pages directly in Git
- Do not rename paths outside Wiki.js
- Do not store diagrams only inside draw.io

---

## Lab-Down Access Model

When the lab is unavailable:

1. Open the local Git clone
2. Read `README.md`
3. Navigate to `emergency/`
4. View diagrams via `.png` files
5. Execute recovery steps

**No services are required.**

---

## README.md (Recommended Content)

The root `README.md` should contain:

- Purpose of the documentation
- Where to start during an outage
- Link list to emergency procedures
- High-level architecture notes
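A minimal sketch of such a README (the section names and paths below are illustrative, except `emergency/` and `diagrams/`, which follow the layout described earlier):

```markdown
# Homelab Documentation

Offline-readable documentation for the lab. If the lab is down, start here.

## Start Here During an Outage

1. Go to `emergency/` for recovery runbooks.
2. Diagrams are viewable offline as `.png` files under `diagrams/`.

## Architecture

High-level notes live in the per-service docs; sources and exports under `diagrams/`.
```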
---

## Final Notes

This structure is designed to:

- Scale cleanly
- Survive outages
- Remain readable for years
- Support automation and GitOps workflows

**If documentation cannot be read when the lab is down, it is incomplete.**

This structure makes that impossible.
122 Netgrimoire/Conventions/Service-Doc-Template.md Normal file
---
title: Service Documentation Template
description: Describe the service
published: true
date: 2026-04-10T13:23:01.021Z
tags:
editor: markdown
dateCreated: 2026-02-03T02:57:07.462Z
---

# Service Documentation Template

Use this template for **every new service** documented under `services/`.

Copy this file, rename it, and fill in all sections.

---

# Service Name

## Overview

Brief description of what this service does and why it exists.

---

## Architecture

Describe how the service is deployed.

Include:
- Host(s)
- Containers
- External dependencies
- Network exposure

---

## Volumes & Data

List all persistent data locations.

```
/path/on/host → purpose
```

Include:
- What data is stored
- Whether it is critical
- Where backups occur

---

## Configuration

Document:
- Environment variables (non-secret)
- Configuration files
- Important defaults

**Secrets must not be stored here.** Reference where they live instead.

---

## Authentication & Access

Describe:
- Authentication method
- Local access
- Break-glass access (if applicable)

---

## Backups

Explain:
- What is backed up
- How often
- Using what tool
- Where backups are stored

Link to infrastructure backup docs if applicable.

---

## Restore Procedure

Step-by-step recovery instructions.

```bash
# example commands
```

This section must be usable when the service is broken.

---

## Monitoring & Health

Describe:
- How service health is checked
- Logs of interest
- Alerting (if any)

---

## Common Failures

List known failure modes and fixes.

---

## Diagrams

Embed architecture diagrams here.

```markdown

```

---

## Notes

Anything that does not fit elsewhere.
174 Netgrimoire/Conventions/Theme.md Normal file
---
title: Documentation Style Guide
description: Applying a theme
published: true
date: 2026-02-25T21:32:16.786Z
tags:
editor: markdown
dateCreated: 2026-02-24T14:03:00.791Z
---

# Netgrimoire Theme — Wiki.js Implementation Guide

## What You're Getting

Two files to transform your Wiki.js library into the Netgrimoire aesthetic:

| File | Purpose |
|------|---------|
| `netgrimoire-theme.css` | Global site theme — dark background, teal glow, Cinzel headers, animated sidebar |
| `netgrimoire-hero-block.html` | Animated constellation hero banner for your library landing page |

---

## Part 1 — Apply the Global Theme CSS

This is the main transformation. It reskins the entire Wiki.js UI.

### Step 1: Open the Wiki.js Admin Panel

Navigate to your Wiki.js instance and go to:

```
Administration (gear icon) → Theme
```

### Step 2: Locate "Custom CSS"

On the Theme page, scroll down until you see the **"Custom CSS"** text area. It may be labelled "CSS Override" depending on your Wiki.js version.

### Step 3: Paste the CSS

Open `netgrimoire-theme.css`, select all (`Ctrl+A`), copy, and paste the entire contents into the Custom CSS field.

### Step 4: Apply

Click **"Apply"** or **"Save"** at the top or bottom of the Theme page. Wiki.js applies the CSS live — you do not need to restart the container.

### Step 5: Verify

Open your wiki in a new browser tab. You should see:

- Dark `#0a0d12` background
- Teal/cyan navigation links and headers
- Cinzel serif font on headings
- Glowing active sidebar item
- Teal-bordered code blocks and tables

**If styles are not applying**, do a hard refresh (`Ctrl+Shift+R`) to clear cached CSS.

---

## Part 2 — Add the Animated Hero Banner to Your Library Page

This places a live constellation animation at the top of your document library index page.

### Step 1: Open the Library Page for Editing

Navigate to your document library landing page and click **Edit** (pencil icon, top right).

### Step 2: Switch to Source / HTML Mode

In the Wiki.js editor toolbar, look for one of the following depending on your editor:

- **Markdown editor**: Click the `<>` or "Insert HTML Block" button
- **Visual editor (WYSIWYG)**: Look for the `< >` Source button, or Insert → HTML Block

### Step 3: Paste the Hero HTML

Open `netgrimoire-hero-block.html`, copy the full contents, and paste into the HTML block at the very top of your page, before any other content.

### Step 4: Save the Page

Click **Save**. The constellation animation will render automatically when the page loads.

### Step 5: Customize (Optional)

To change the banner title text, find this line in the HTML:

```html
>DOCUMENT LIBRARY</div>
```

Replace `DOCUMENT LIBRARY` with whatever you want (e.g., `THE GRIMOIRE`, `KNOWLEDGE VAULT`).

To change the subtitle:

```html
>Netgrimoire Knowledge Vault</div>
```

---

## Part 3 — Google Fonts (Internet Access Required)

The theme imports three fonts automatically via Google Fonts:

| Font | Used For |
|------|----------|
| Cinzel | Headers, nav section labels, card titles |
| Share Tech Mono | Code blocks, inline code, footer |
| Raleway | Body text, nav items, descriptions |

These load via an `@import` at the top of the CSS and require your browser to have internet access when loading the page. Since Netgrimoire is a local server, this means:

- **If your browser machine has internet**: Fonts load automatically — no action needed.
- **If fully air-gapped**: The fonts will fall back to system serif/monospace. To self-host, download the font files and serve them from your Forgejo or a local nginx path, then replace the `@import` line with `@font-face` blocks pointing to your local URLs.
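A minimal sketch of that `@font-face` replacement, assuming you serve `.woff2` files from a local path (the `/fonts/...` URLs and filenames below are illustrative):

```css
/* Replace the Google Fonts @import with local @font-face rules.
   The /fonts/ paths are examples — point them at wherever you host the files. */
@font-face {
  font-family: 'Cinzel';
  src: url('/fonts/cinzel-regular.woff2') format('woff2');
  font-weight: 400;
  font-display: swap;
}
@font-face {
  font-family: 'Share Tech Mono';
  src: url('/fonts/share-tech-mono-regular.woff2') format('woff2');
  font-weight: 400;
  font-display: swap;
}
@font-face {
  font-family: 'Raleway';
  src: url('/fonts/raleway-regular.woff2') format('woff2');
  font-weight: 400;
  font-display: swap;
}
```

`font-display: swap` keeps text visible with a fallback font while the local files load.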
---

## Part 4 — Fine-Tuning

### Adjusting the Teal Color

All colors are defined as CSS variables at the top of the CSS file. To shift the color tone, change `--ng-teal`:

```css
:root {
  --ng-teal: #00e5cc; /* default — cyan-teal */
  /* try: #00cfff for more blue */
  /* try: #39ff14 for neon green */
  /* try: #bf5fff for purple arcane */
}
```

### Making the Background Darker

Adjust `--ng-bg-base` and `--ng-bg-deep`:

```css
:root {
  --ng-bg-base: #070a0e; /* even darker */
  --ng-bg-deep: #030507;
}
```

### Constellation Node Count

In `netgrimoire-hero-block.html`, find:

```javascript
var NODE_COUNT = 55;
```

Increase for a denser network, decrease for a sparser, more minimal look.

---

## Troubleshooting

| Symptom | Fix |
|---------|-----|
| CSS not applying | Hard refresh (`Ctrl+Shift+R`); check for syntax errors in the CSS field |
| Fonts showing as Times New Roman | Browser lacks internet access; see Part 3 above |
| Hero animation not rendering | Check browser console for JS errors; ensure the page saved the HTML block |
| Sidebar colors still white | Some Wiki.js versions use different class names; inspect with browser DevTools and let Claude know which element needs targeting |
| Dark mode toggle fighting the theme | Wiki.js's built-in dark mode toggle may conflict — set it to Dark in Administration → Theme before applying custom CSS |

---

## Notes

- Wiki.js stores custom CSS in the database, so it survives container restarts.
- After updating Wiki.js, re-check the Theme page — major version upgrades occasionally reset the CSS field.
- The hero block is per-page; you can add it to any page you want the constellation effect on.
63 Netgrimoire/Overview.md Normal file
---
title: Netgrimoire
description: Core homelab overview — the spine of the grimoire ecosystem
published: true
date: 2026-04-12T00:00:00.000Z
tags: netgrimoire
editor: markdown
dateCreated: 2026-04-12T00:00:00.000Z
---

# Netgrimoire

![netgrimoire-banner.png](/netgrimoire-banner.png)

Netgrimoire is the primary self-hosted homelab infrastructure running on `znas` and a cluster of worker nodes under Docker Swarm. It is the foundation every other grimoire depends on.

This section is intentionally high-level — the spine. Detailed technical content lives in the specialized grimoires.

---

## Infrastructure at a Glance

| Host | Role | IP | Runtime |
|------|------|----|---------|
| znas | NAS + Primary Swarm manager | 192.168.5.10 | Docker Swarm manager + Compose |
| docker2 | VPN gateway | — | Docker Compose |
| docker3 | LibreNMS host | — | Docker Compose |
| docker4 (hermes) | Mail + AI worker | 192.168.5.16 | Docker Compose + Swarm worker |
| docker5 | Media host | 192.168.5.18 | Docker Compose |
| Pi nodes | Swarm workers + vault nodes | various | Docker Swarm workers |

---

## The Grimoires

| Grimoire | What Lives There |
|----------|------------------|
| [Keystone Grimoire](/Keystone-Grimoire/Overview) | Architecture, network topology, Caddy, Docker template, DNS, mail infrastructure |
| [Vault Grimoire](/Vault-Grimoire/Overview) | ZFS storage, Kopia backups, NFS exports, offsite replication |
| [Ward Grimoire](/Ward-Grimoire/Overview) | OPNsense, CrowdSec, Authentik, Authelia, LLDAP, WireGuard, blocklists |
| [Watch Grimoire](/Watch-Grimoire/Overview) | Uptime Kuma, Beszel, LibreNMS, Grafana, Graylog, ntfy, DIUN |
| [Gremlin Grimoire](/Gremlin-Grimoire/Overview) | Ollama, Open WebUI, Qdrant, n8n, AI workflows |
| [Shadow Grimoire](/Shadow-Grimoire/Overview) | Usenet, torrents, arr stack, indexers, media acquisition |
| [Green Grimoire](/Green-Grimoire/Overview) | Adult media: Stash, Jellyfinx, Namer, Whisparr |
| [Pocket Grimoire](/Pocket-Grimoire/Overview) | Portable laptop lab, offline-first, travel vault node |

---

## Key Domains

`netgrimoire.com` · `pncharris.com` · `wasted-bandwidth.net` · `nucking-futz.com` · `florosafd.org` · `gnarlypandaproductions.com` · `pncfishandmore.com` · `pncharrisenterprises.com`

---

## Quick Links

| | |
|---|---|
| 📋 [Service Catalog](/Netgrimoire/Service-Catalog) | Full service inventory with status and grimoire assignment |
| 📖 [Documentation Standards](/Netgrimoire/Conventions/Doc-Standards) | How docs are structured, named, and maintained |
| 📄 [Service Doc Template](/Netgrimoire/Conventions/Service-Doc-Template) | Template for writing new service docs |
| 🎨 [Wiki Theme](/Netgrimoire/Conventions/Theme) | CSS customization and branding |
| 🔍 [Audit Reports](/Netgrimoire/Audits/README) | Gremlin-generated weekly YAML audits |
356 Netgrimoire/Service-Catalog.md Normal file
---
title: Netgrimoire Service Catalog
description: Full service inventory — all grimoires, status, host, URL
published: true
date: 2026-04-12T00:00:00.000Z
tags:
editor: markdown
dateCreated: 2026-03-29T16:05:26.168Z
---

# Netgrimoire Service Catalog

> **Living document** — tracks all deployed, configured, and planned services across the Netgrimoire homelab.
> Source of truth: Forgejo repo — `compose/` = Docker Compose per host | `swarm/` = Docker Swarm | `archive/` = not running
>
> Status: ✅ Deployed & Configured | 🔧 Deployed, Needs Config | 📋 Planned | 🔍 Evaluating | ❌ Abandoned/Archived

---

## 🏗️ Infrastructure Overview

| Host | Role | IP | Runtime |
|------|------|----|---------|
| znas | NAS / Primary Swarm node | 192.168.5.10 | Docker Compose + Swarm manager |
| docker2 | VPN gateway host | — | Docker Compose |
| docker3 | LibreNMS host | — | Docker Compose |
| docker4 (hermes) | Mail server host | 192.168.5.16 | Docker Compose |
| docker5 | Media host | 192.168.5.18 | Docker Compose |
| Pi4s / NUCs | Swarm worker nodes | various | Docker Swarm workers |

---

## 📡 Network & Reverse Proxy

| Status | App | Host / Runtime | URL | Purpose | Notes |
|--------|-----|----------------|-----|---------|-------|
| ✅ | OPNsense | Firewall appliance | — | Firewall / Dual-WAN / NAT | ATT igc1 primary; 5 static IPs allocated; legacy WAN retiring |
| 🔧 | Caddy (new) | znas / Swarm | — | Reverse proxy — CrowdSec edition | `serfriz/caddy-crowdsec-geoip-ratelimit-security-dockerproxy`; migration in progress; `caddy.yaml` |
| ✅ | Caddy (legacy) | znas / Swarm | — | Reverse proxy | `lucaslorentz/caddy-docker-proxy`; `caddy-1.yaml` |
| ✅ | Authentik | znas / Swarm | — | SSO / IdP | Protects `*.netgrimoire.com` services |
| ✅ | Authelia | znas / Swarm | — | SSO / IdP | Protects `*.wasted-bandwidth.net` services |
| ✅ | WireGuard | OPNsense | — | VPN | Peers: Obie (.2), pncfishandmore (.3), GLNet (.4/.6), PortaPotty (.5) — 192.168.32.0/24 |
| ✅ | OpenVPN | OPNsense | — | VPN | Configured alongside WireGuard |
| ✅ | Gluetun | docker2 / Compose | — | VPN gateway container | PIA VPN; Jackett + Transmission share `network_mode: container:gluetun` |
| ✅ | Internal DNS | 192.168.5.7 | dns.netgrimoire.com | Internal name resolution | Technitium DNS; behind Authentik |
| ✅ | LLDAP | znas / Swarm | ldap.netgrimoire.com | Lightweight LDAP directory | `lldap/lldap:stable` + postgres; user management backend |
| 📋 | dnscrypt-proxy | TBD | — | Encrypted upstream DNS | Pending install |
| 📋 | Suricata | OPNsense | — | IDS/IPS | Pending config |
| 📋 | Zenarmor | OPNsense | — | Deep packet inspection (free tier) | Pending install |
| 📋 | os-git-backup | OPNsense | — | OPNsense config backup to git | Pending install |
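The Gluetun pattern above (Jackett and Transmission sharing the VPN container's network namespace) can be sketched in Compose. This is a minimal illustration only — the env values, published port, and service names are assumptions; Compose also accepts the `service:gluetun` form in place of the container-name form used in the catalog:

```yaml
# Sketch: routing a download client through Gluetun.
services:
  gluetun:
    image: qmcgaw/gluetun
    cap_add:
      - NET_ADMIN
    environment:
      - VPN_SERVICE_PROVIDER=private internet access
      - OPENVPN_USER=changeme
      - OPENVPN_PASSWORD=changeme
    ports:
      - "9091:9091"   # Transmission's UI is published on gluetun's namespace

  transmission:
    image: lscr.io/linuxserver/transmission
    network_mode: "service:gluetun"   # all traffic exits via the VPN
    depends_on:
      - gluetun
```

Note that ports for the client must be published on the `gluetun` service, since the client has no network stack of its own.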
---

## 🔒 Security

| Status | App | Host / Runtime | URL | Purpose | Notes |
|--------|-----|----------------|-----|---------|-------|
| ✅ | CrowdSec | OPNsense + Swarm | — | Threat intelligence / IP blocking | OPNsense bouncer active; Caddy bouncer in progress |
| ✅ | Vaultwarden | znas / Swarm | pass.netgrimoire.com | Password manager | `vaultwarden/server` |
| 🔧 | CrowdSec Caddy Bouncer | znas / Swarm | — | HTTP-level blocking | Gradual rollout via `caddy.import=crowdsec` label per service |
| 🔧 | OPNsense Spamhaus + GeoIP | OPNsense | — | IP blocklist / geo-blocking | Currently DISABLED — needs fixing |
| 📋 | YubiKey PIV (SSH) | All hosts | — | Smartcard SSH authentication | Highest-impact pending integration |
| 📋 | YubiKey Challenge-Response | znas | — | LUKS / Kopia key derivation | Planned |
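The per-service `caddy.import=crowdsec` rollout noted above can be sketched with caddy-docker-proxy labels on a Swarm service. The hostname, port, and service name here are illustrative, and the `crowdsec` snippet is assumed to be defined in the Caddy config:

```yaml
# Sketch: opting one Swarm service into the CrowdSec bouncer.
services:
  myapp:
    image: example/myapp
    deploy:
      labels:
        caddy: app.netgrimoire.com
        caddy.reverse_proxy: "{{upstreams 8080}}"
        caddy.import: crowdsec   # pulls in the bouncer snippet from the Caddy config
```

Dropping the `caddy.import` label reverts that one service to unfiltered proxying, which is what makes the gradual rollout possible.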
---

## 📧 Email

| Status | App | Host / Runtime | URL | Purpose | Notes |
|--------|-----|----------------|-----|---------|-------|
| ✅ | MailCow | docker4 / Compose | mail.netgrimoire.com + all domains | Self-hosted mail server | hermes.netgrimoire.com; MXRoute inbound filter + outbound relay for all 8 domains |
| ✅ | Roundcube | docker4 / Swarm | — | Webmail | SSL peer verify disabled for internal dovecot; SRS catch-all aliases configured |
| ✅ | MXRoute | External | — | Inbound filter + outbound relay | Two DKIM selectors: `mailcow` + `mxroute` |
| 📋 | Dedicated ATT_Mail IP | OPNsense | — | Separate static IP for mail traffic | Assignment still pending |

**Domains:** netgrimoire.com · pncharris.com · nucking-futz.com · wasted-bandwidth.net · florosafd.org · gnarlypandaproductions.com · pncfishandmore.com · pncharrisenterprises.com

---

## 🎬 Media — Video

| Status | App | Host / Runtime | URL | Purpose | Notes |
|--------|-----|----------------|-----|---------|-------|
| ✅ | Jellyfin | docker5 / Compose | — | Media server | Port 8096; VAAPI via `/dev/dri`; dedicated static IP 107.133.34.147 |
| ✅ | Jellyfinx | docker5 / Compose | — | Green Door media server | Port 7096; separate instance; Green + AfterDark library mounts |
| ✅ | Sonarr | znas / Swarm | — | TV show downloader | `linuxserver/sonarr` |
| ✅ | Radarr | znas / Swarm | — | Movie downloader | `linuxserver/radarr` |
| ✅ | Bazarr | znas / Swarm | bazarr.netgrimoire.com | Subtitle management | `linuxserver/bazarr` |
| ✅ | Tunarr | znas / Swarm | — | IPTV channel creation | `chrisbenincasa/tunarr`; ErsatzTV replacement (ErsatzTV archived Feb 2026) |
| ✅ | JellySeerr | znas / Swarm | requests.netgrimoire.com | Media request management | `fallenbagel/jellyseerr` |
| ✅ | JellyStat | znas / Swarm | — | Jellyfin usage statistics | `cyfershepard/jellystat` + postgres |
| ✅ | TinyMediaManager | znas / Swarm | tmm.netgrimoire.com | Media metadata manager | `tinymediamanager/tinymediamanager` |
| ✅ | Pinchflat | znas / Swarm | pinchflat.netgrimoire.com | YouTube channel downloader | `kieraneglin/pinchflat` |
| 📋 | MeTube | TBD | — | YouTube downloader | Needed for Tunarr period-accurate filler sourcing workflow |
| 🔍 | Wizarr | TBD | — | Jellyfin user onboarding | Evaluating |

---

## 🎵 Media — Audio

| Status | App | Host / Runtime | URL | Purpose | Notes |
|--------|-----|----------------|-----|---------|-------|
| ✅ | Lidarr | znas / Swarm | — | Music downloader | (Caddy label not found in yaml — likely static Caddyfile entry) |
| ✅ | Beets | znas / Swarm | beets.netgrimoire.com | Music library tagging | `linuxserver/beets` |
| 🔍 | Navidrome | TBD | — | Music streaming server | Lightweight Subsonic-compatible |
| 🔍 | Soularr | TBD | — | Soulseek integration for Lidarr | Strongly recommended; fills gaps Usenet/torrents miss |
| 🔍 | Tubifarry | TBD | — | Spotify playlists → YouTube → Lidarr | https://github.com/TypNull/Tubifarry |

---

## 📚 Media — Books & Comics

| Status | App | Host / Runtime | URL | Purpose | Notes |
|--------|-----|----------------|-----|---------|-------|
| ✅ | Calibre | znas / Compose | calibre.netgrimoire.com | Ebook library management | `linuxserver/calibre`; port 7070; behind Authentik; requires `seccomp=unconfined` (Compose-only) |
| ✅ | Calibre-Web Automated | znas / Swarm | books.netgrimoire.com · books.pncharris.com | Web UI + auto-import | `crocodilestick/calibre-web-automated`; dual-domain Caddy label |
| ✅ | Calibre-Web (library) | znas / Swarm | — | Secondary Calibre-Web instance | `linuxserver/calibre-web`; hostname `calibre-netgrimoire`; `library.yaml` |
| ✅ | Readarr | znas / Swarm | — | Book downloader | Using `blampe/rreading-glasses` image |
| 📋 | Mylar | znas / Swarm | — | Comic book downloader | Not currently running; needs setup soon. Reference `archive/arr.yaml` for old config |
| ✅ | Kavita | znas / Swarm | kavita.netgrimoire.com | Ebook/comic reader | `jvmilazz0/kavita` |
| ✅ | Comixed | znas / Swarm | comics.netgrimoire.com | Comic library server | `comixed/comixed` |
| ✅ | FreshRSS | znas / Swarm | rss.netgrimoire.com | RSS aggregator | `linuxserver/freshrss` |
| 🔍 | Komga | TBD | — | Comic/manga server | Evaluating vs Kavita/Comixed |
| 🔍 | MyAnonaMouse | TBD | — | Private ebook tracker | Worth investigating |

---

## 📥 Download Stack

| Status | App | Host / Runtime | URL | Purpose | Notes |
|--------|-----|----------------|-----|---------|-------|
| ✅ | NZBGet | znas / Swarm | — | Usenet download manager | `linuxserver/nzbget` |
| ✅ | SABnzbd | znas / Swarm | — | Usenet download manager | `linuxserver/sabnzbd` |
| ✅ | NZBHydra | znas / Swarm | hydra.netgrimoire.com | Usenet indexer aggregator | `linuxserver/nzbhydra2:dev`; altHUB, NZBGeek, Drunken Slug, Usenet Crawler, DogNZB |
| ✅ | Jackett | docker2 / Compose | jackett.netgrimoire.com | Torrent indexer | Runs inside Gluetun network; behind Authentik |
| ✅ | Transmission | docker2 / Compose | — | Torrent client | `network_mode: container:gluetun`; shares Gluetun VPN |
| ✅ | Recyclarr | znas / Swarm | — | Sonarr/Radarr quality profile sync | `recyclarr/recyclarr` |
| ✅ | Profilarr | znas / Swarm | profilarr.netgrimoire.com | Quality profile management | `santiagosayshey/profilarr` |
| ✅ | Configarr | znas / Swarm | configarr.netgrimoire.com | Arr config management | `raydak-labs/configarr` |
| 📋 | Prowlarr | TBD | — | Unified indexer manager | Low priority — light torrent usage; NZBHydra covers current needs |

---

## 🤖 AI & Automation (Gremlin Stack)

> All pinned to the `znas` node on Docker Swarm via `swarm/ollama.yaml`.

| Status | App | Host / Runtime | URL | Purpose | Notes |
|--------|-----|----------------|-----|---------|-------|
| ✅ | Ollama | znas / Swarm | — | Local LLM inference | CPU-only (Ryzen); 3B–14B models |
| ✅ | Open WebUI | znas / Swarm | — | Chat interface for Ollama | `ghcr.io/open-webui/open-webui` |
| ✅ | Qdrant | znas / Swarm | — | Vector database for RAG | Wiki.js / markdown doc search |
| ✅ | n8n | znas / Swarm | — | Workflow automation | Forgejo webhook → doc gen, compose validation, alert triage |
| 🔍 | Perplexica | TBD | — | Self-hosted AI search | https://github.com/ItzCrazyKns/Perplexica |

---

## ☁️ Files, Notes & Personal Apps

| Status | App | Host / Runtime | URL | Purpose | Notes |
|--------|-----|----------------|-----|---------|-------|
| ✅ | Nextcloud AIO | znas / Compose | cloud.netgrimoire.com | File sync / cloud storage | `nextcloud/all-in-one`; data at `/srv/NextCloud-AIO`; Caddy → port 11000 |
| ✅ | Immich | znas / Compose | immich.netgrimoire.com | Photo management | Port 2283; Postgres dump + Kopia backup; external photo + Nextcloud mounts |
| ✅ | Joplin Server | znas / Swarm | joplin.netgrimoire.com | Note sync server | `joplin/server` + postgres; Homepage widget configured |
| ✅ | Vikunja | znas / Swarm | task.netgrimoire.com | Task management | `vikunja/vikunja` + MariaDB |
| ✅ | Linkding | znas / Swarm | link.netgrimoire.com | Bookmark manager | `sissbruecker/linkding:1.13.0` |
| ✅ | Mealie | znas / Swarm | recipe.netgrimoire.com | Recipe manager | `ghcr.io/mealie-recipes/mealie` |
| ✅ | Wallos | znas / Swarm | expense.netgrimoire.com | Subscription / expense tracker | `bellamy/wallos` |
| ✅ | DailyTxT | znas / Swarm | — | Encrypted diary | `phitux/dailytxt:2.x.x` |
| ✅ | Bigcapital | docker5 / Compose | accounts.netgrimoire.com | Accounting / invoicing | Static Caddyfile entry; `{{upstreams}}` doesn't work for Compose stacks |
| ✅ | Scanopy | znas / Swarm | scn.netgrimoire.com | Document scanner | `ghcr.io/scanopy/scanopy` (server + daemon) + postgres |
| ✅ | Glance | znas / Swarm | home.netgrimoire.com | Alternative dashboard | `glanceapp/glance` |
| 📋 | Memos | TBD | — | Self-hosted journaling | Preferred journal addition (alongside Joplin for notes) |
| 🔍 | Wallabag | TBD | — | Read-it-later / article saving | |
| 🔍 | Fluid Calendar | TBD | — | Self-hosted calendar | https://github.com/dotnetfactory/fluid-calendar |
| 🔍 | Firefly III | TBD | — | Personal finance / budgeting | |
| 🔍 | Stirling-PDF | TBD | — | PDF editor / tools | |
| 🔍 | Excalidraw | TBD | — | Collaborative whiteboard | |
| 🔍 | Baikal | TBD | — | CalDAV / CardDAV sync | https://sabre.io/baikal/ |

---
|
||||
|
||||
## 📝 Documentation & Dev
|
||||
|
||||
| Status | App | Host / Runtime | URL | Purpose | Notes |
|
||||
|--------|-----|----------------|-----|---------|-------|
|
||||
| ✅ | Wiki.js | znas / Swarm | wiki.netgrimoire.com | Documentation wiki | `requarks/wiki:2` + postgres; Grimoire theme; Forgejo git backend |
|
||||
| ✅ | Draw.io | znas / Swarm | draw.netgrimoire.com | Diagramming | `jgraph/drawio`; co-deployed in `wiki.yaml` |
|
||||
| ✅ | Forgejo | znas / Swarm | git.netgrimoire.com | Self-hosted Git | `codeberg.org/forgejo/forgejo:11`; source of truth for Wiki.js + Gremlin |
|
||||
| ✅ | Forgejo Runner | znas / Swarm | — | CI/CD | `data.forgejo.org/forgejo/runner:4.0.0`; `gitrunner.yaml` |
|
||||
| ✅ | VS Code Server | znas / Swarm | code.netgrimoire.com | Web-based IDE | `linuxserver/code-server` |
|
||||
| ✅ | Webtop (ubuntu-kde) | znas / Compose | webtop.netgrimoire.com | Browser-based desktop | Software rendering via llvmpipe; behind Authentik |
|
||||
| ✅ | Firefox (container) | znas / Swarm | firefox.netgrimoire.com | Containerized browser | `jlesage/firefox` |
|
||||
|
||||
---
|
||||
|
||||
## 📊 Monitoring & Observability
|
||||
|
||||
| Status | App | Host / Runtime | URL | Purpose | Notes |
|
||||
|--------|-----|----------------|-----|---------|-------|
|
||||
| ✅ | Uptime Kuma | znas / Swarm | — | Service uptime monitoring | `louislam/uptime-kuma:1` |
|
||||
| ✅ | AutoKuma | znas / Swarm | — | Auto-create Kuma monitors from labels | `ghcr.io/bigboot/autokuma`; co-deployed in `kuma.yaml` |
|
||||
| ✅ | Beszel | znas / Swarm | — | Docker resource monitoring | `henrygd/beszel` hub + agents on all nodes |
|
||||
| ✅ | DIUN | znas / Swarm | — | Docker image update notifications | `crazymax/diun`; label-based per-service |
|
||||
| ✅ | ntfy | znas / Swarm | ntfy.netgrimoire.com | Push notifications | `binwiederhier/ntfy`; OPNsense alerts via CrowdSec HTTP plugin |
|
||||
| ✅ | Dozzle | znas / Swarm | dozzle.netgrimoire.com | Real-time container logs | `amir20/dozzle`; behind Authentik |
|
||||
| ✅ | Scrutiny | znas / Compose | scrutiny.netgrimoire.com | Disk S.M.A.R.T. monitoring | `analogj/scrutiny:master-omnibus`; monitors /dev/sda–sdg; behind Authentik |
|
||||
| ✅ | Glances | znas / Compose | — | Real-time system stats | `nicolargo/glances`; `network_mode: host`; co-deployed in `monitor.yaml` |
|
||||
| ✅ | Graylog | docker4 / Compose | log.netgrimoire.com | Log aggregation | Graylog 6.0 + MongoDB 5 + DataNode (OpenSearch); compose-only (noted in file) |
|
||||
| ✅ | LibreNMS | docker3 / Compose | nms.netgrimoire.com | Network/SNMP monitoring | Full stack: librenms + dispatcher + syslog-ng + snmptrapd + MariaDB + Redis; port 8000 |
|
||||
| ✅ | Homelable | znas / Compose | — | Infrastructure visualizer | Frontend + Backend via GHCR; MCP deferred (requires build from source) |
|
||||
| ✅ | phpIPAM | znas / Swarm | ipam.netgrimoire.com | IP address management | `phpipam/phpipam-www` + cron + MariaDB |
|
||||
| ✅ | Homepage | znas / Swarm | — | Primary dashboard | `ghcr.io/gethomepage/homepage` |
|
||||
| ✅ | Glance | znas / Swarm | home.netgrimoire.com | Alternative dashboard | `glanceapp/glance` |
|
||||
| ✅ | Dockpeek | znas / Swarm | dockpeek.netgrimoire.com | Container inspector | `dockpeek/dockpeek` |
|
||||
| ✅ | Loki + Promtail + Grafana | znas / Swarm | — | Metrics/log stack | `logging.yaml`; Grafana 10.4.2 + Loki 2.9.3 + Promtail 2.9.3 |
|
||||
| ✅ | phpMyAdmin + phpPgAdmin | znas / Swarm | — | DB admin UIs | `SQL-mgmt.yaml` |
|
||||
| ✅ | pgAdmin | znas / Swarm | — | Postgres admin | `dpage/pgadmin4`; `database.yaml` |
|
||||
| 🔍 | WatchYourLAN | TBD | — | Network device tracker | https://github.com/aceberg/WatchYourLAN |
|
||||
| 🔍 | NUT UPS | TBD | — | UPS power management | https://hub.docker.com/r/instantlinux/nut-upsd |
|
||||
| 🔍 | OliveTin | TBD | — | Web button → shell command | Run commands from web UI |
|
||||
| 🔍 | Swarm Dashboard | TBD | — | Docker Swarm visualizer | https://github.com/mohsenasm/swarm-dashboard |
|
||||
|
||||
---

## 💾 Storage & Backup

| Status | App | Host / Runtime | URL | Purpose | Notes |
|--------|-----|----------------|-----|---------|-------|
| ✅ | OpenZFS (ZNAS) | znas | — | Primary storage | ~94TB raw, two RAIDZ1 VDEVs; vault pool |
| ✅ | NFSv4 | znas | — | Shared storage for Swarm | Loopback NFS at `/data/nfs/znas`; ZFS must fully mount before NFS starts |
| ✅ | Kopia (primary vault) | znas / Swarm | kopia.netgrimoire.com | Primary backup repo | `kopia.yaml`; dedup + replication |
| ✅ | Kopia (offsite vault) | znas / Swarm | vault.netgrimoire.com | Offsite replication server | `vault.yaml`; port 51516; separate dataset → ZFS raw send to Pi vaults |
| ✅ | syncoid | znas | — | ZFS replication | Syncs vault/Green/Pocket → Pocket Grimoire |
| ✅ | Nextcloud AIO BorgBackup | znas | — | Nextcloud-native backup | Local snapshots before Kopia |
| ✅ | Czkawka | znas / Swarm | dupes.netgrimoire.com | Duplicate file finder | `jlesage/czkawka` |
| ✅ | Cloud Commander | znas / Swarm | — | Web file manager | `coderaiser/cloudcmd`; **two instances** (`cloudcmd.yaml` + `commander.yaml`) — verify if intentional |
| ✅ | File Browser | znas / Swarm | — | Web file manager | `filebrowser/filebrowser` |
| 🔍 | Manyfold | TBD | — | 3D print model collector | https://github.com/manyfold3d/manyfold |

---

## 🖥️ Management & Remote Access

| Status | App | Host / Runtime | URL | Purpose | Notes |
|--------|-----|----------------|-----|---------|-------|
| ✅ | Portainer | znas / Swarm | docker.netgrimoire.com | Container management UI | `portainer/portainer-ce:2.33.6` + agents on all nodes |
| ✅ | ISPConfig | 192.168.4.11 | — | Web/DNS hosting control panel | |
| ✅ | Cockpit | All hosts | win.netgrimoire.com | Linux server management | Caddy → `192.168.5.10:8006` |
| ✅ | Termix | znas / Swarm | termix.netgrimoire.com | Web-based terminal | `ghcr.io/lukegus/termix` |
| ✅ | DumbTerm | znas / Swarm | — | Simple web terminal | `dockwareio/dumbterm` |
| ✅ | Windows 7 (VM) | znas / Compose | — | Windows VM | `dockurr/windows`; `windows7.yaml` |
| 🔍 | Guacamole | TBD | — | Remote desktop gateway | Previously tried as `nxterm` — in archive |
| 🔍 | SSHwifty | TBD | — | SSH web client | In archive; reconsidering |

---

## 🎭 Green Door (Adult Content)

> Protected behind Authelia (`*.wasted-bandwidth.net`)

| Status | App | Host / Runtime | URL | Purpose | Notes |
|--------|-----|----------------|-----|---------|-------|
| ✅ | Whisparr | znas / Swarm | — | Adult content downloader | `ghcr.io/hotio/whisparr` |
| ✅ | Namer | znas / Compose | namer.wasted-bandwidth.net | Scene file namer | `theporndatabase/namer`; port 6980; data → `/data/nfs/Baxter/Green/` |
| ✅ | Stash (main) | znas / Compose | stash.wasted-bandwidth.net | Adult content library | `stashapp/stash`; port 9999 |
| ✅ | PocketStash | znas / Compose | — | Stash for Pocket Grimoire | Separate instance; port 9998; data → `/export/Green/Pocket/`; `pocketstash.yaml` |

---

## 🌐 Web Hosting

| Status | App | Host / Runtime | URL | Purpose | Notes |
|--------|-----|----------------|-----|---------|-------|
| ✅ | Apache/PHP web | znas / Swarm | fish.pncharris.com · www.wasted-bandwidth.net | Static/PHP web hosting | `php:8.2-apache`; `web.yaml`; replicas: 1 |

---

## 📦 Archive (Not Currently Running)

> Files in `archive/` — previously evaluated or deployed, not currently active.

| App | File | Notes |
|-----|------|-------|
| Plex | `plex.yaml` | Replaced by Jellyfin |
| Komodo | `komodo.yaml` | Container management platform — evaluated, not deployed |
| cAdvisor | `cadvisor.yaml` | Container metrics — not deployed |
| Peekaping | `peekaping.yaml` | Uptime monitor — Kuma preferred |
| WatchState | `WatchState.yaml` | Jellyfin/Plex watch state sync |
| Nessus | `nessus.yaml` | Vulnerability scanner — evaluated |
| NxTerm | `nxterm.yaml` | Guacamole-style remote desktop — evaluated |
| SSHwifty | `sshwifty.yaml` | SSH web client — evaluated |
| WordPress Classifieds | `wordpress-classifieds.yaml` | Not deployed |
| Cal (calendar?) | `cal.yaml` | Evaluated |
| CrowdSec (standalone) | `crowdsec.yaml` | Merged into Caddy stack |
| Arr stack | `arr.yaml` | Old consolidated arr compose — superseded by individual yamls |
| Caddyfile.old | `Caddyfile.old` | Legacy Caddyfile |

---

## 🗃️ Ideas Backlog

| App | Category | Notes |
|-----|----------|-------|
| Soularr | Audio | Soulseek for Lidarr; strongly recommended |
| Tubifarry | Audio | Spotify → YouTube → Lidarr |
| MeTube | Video | YouTube downloader for Tunarr filler |
| Memos | Journal | Preferred self-hosted journal pick |
| Wallabag | Reading | Read-it-later |
| Firefly III | Finance | Budgeting |
| Baikal | PIM | CalDAV/CardDAV |
| Fluid Calendar | PIM | https://github.com/dotnetfactory/fluid-calendar |
| Perplexica | AI | Self-hosted AI search |
| WatchYourLAN | Network | Device tracker |
| OliveTin | Automation | Web UI → shell commands |
| Swarm Dashboard | Monitoring | Swarm-aware visualizer |
| ContainerNursery | Automation | On-demand container start/stop |
| NUT UPS | Power | UPS management |
| Wire-pod for Vector | IoT | Anki Vector local server |
| Kindle reuse | IoT | Repurpose Kindle as weather/info display |
| Collectarr | Media | https://github.com/RiffSphere/Collectarr |
| SuggestArr | Media | Automated media recommendations |
| Recommendarr | Media | AI media recommendations |
| Manyfold | 3D Print | Model library |
| OrcaSlicer | 3D Print | Slicer web UI |
| Memos / Journiv | Journal | Self-hosted journaling (Memos preferred) |
| Romm | Gaming | ROM library manager |
| EmulatorJS | Gaming | Browser-based emulation |

---
|
||||
|
||||
## 🔑 Key Architecture Decisions & Gotchas
|
||||
|
||||
> Reference these before deploying or modifying services.
|
||||
|
||||
- **MailCow network isolation:** Only `nginx-mailcow` on the `netgrimoire` overlay. All other containers stay on internal bridge. Mixing causes PHP-FPM → Redis DNS conflicts.
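As a sketch, the compose override that implements this isolation (service name follows stock mailcow-dockerized, network name follows this setup's `netgrimoire` overlay; verify both against the actual files):

```yaml
# docker-compose.override.yml for mailcow-dockerized (sketch, names assumed).
# Only the nginx front-end joins the external overlay; every other MailCow
# container stays on the stock internal bridge to avoid PHP-FPM → Redis
# DNS conflicts.
version: '2.1'
services:
  nginx-mailcow:
    networks:
      - netgrimoire
networks:
  netgrimoire:
    external: true
```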
- **caddy-docker-proxy + static Caddyfile conflict:** Never manage the same hostname via both Docker labels AND a static block. Pick one method exclusively per service.
- **`{{upstreams}}` is Swarm-only:** Does not work for Docker Compose stacks. Use static Caddyfile with container name or pinned IP.
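Concretely, the two routing methods look like this (hostname and port are illustrative placeholders). For a Swarm stack, caddy-docker-proxy labels:

```yaml
# Swarm stack file: caddy-docker-proxy labels. {{upstreams}} only resolves
# for Swarm services, never for plain Compose containers.
deploy:
  labels:
    caddy: app.netgrimoire.com
    caddy.reverse_proxy: "{{upstreams 8080}}"
```

For a Compose service, a static Caddyfile block instead (and never both methods for the same hostname):

```
# Static Caddyfile block: reference the container name or a pinned IP.
app.netgrimoire.com {
    reverse_proxy app:8080
}
```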
- **Docker Compose `ports: []` override:** Does not nullify ports from base file. Remap to unused host ports instead.
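A minimal sketch of the pitfall (service name hypothetical):

```yaml
# base docker-compose.yml
services:
  app:
    ports:
      - "8080:80"

# docker-compose.override.yml
services:
  app:
    ports: []          # does NOT remove the base 8080:80 mapping
    # To avoid a host-port clash, remap to an unused port instead:
    # ports:
    #   - "18080:80"
```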
- **Graylog is Compose-only:** The `graylog.yaml` file explicitly notes this — do not attempt to run it in Swarm.
- **Calibre requires `seccomp=unconfined`:** Necessary for the desktop app container; incompatible with Swarm mode — must remain in `compose/znas/`.
- **Kopia repos not ZFS-separable:** Use separate repositories with independent retention (`kopia.yaml` vs `vault.yaml`) rather than trying to separate at the ZFS snapshot level.
- **ZFS encryption:** In-place encryption impossible. Use rsync migration + `-w` flag for raw send to Pi vaults (no key needed on vault side).
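As a command sketch (dataset and host names are assumptions based on this page's layout; do not run verbatim):

```shell
# In-place encryption is impossible: create an encrypted dataset, then rsync in.
zfs create -o encryption=aes-256-gcm -o keyformat=passphrase vault/secure
rsync -aHAX /vault/plain/ /vault/secure/

# Raw send (-w) ships the encrypted blocks as-is, so the Pi vault side
# never needs the encryption key.
zfs snapshot vault/secure@2026-03-01
zfs send -w vault/secure@2026-03-01 | ssh pi-vault zfs recv vault/secure
```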
- **SRS rewrite:** All domains using MXRoute inbound forwarding require catch-all aliases in MailCow to prevent `reject_unlisted_sender` rejections.
- **Docker Swarm DNS caching:** Do NOT use `endpoint_mode: dnsrr` — always use default VIP mode. dnsrr breaks internal DNS resolution.
- **NFS boot ordering on znas:** ZFS must fully mount before NFS starts — systemd override required (`After=zfs-import.target zfs-mount.service`). Loopback NFS mount needs `x-systemd.after=nfs-server.service` in fstab.
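A sketch of both pieces (the drop-in path is standard systemd; the mount point is an assumption):

```ini
# /etc/systemd/system/nfs-server.service.d/zfs-order.conf
# Hold the NFS server until ZFS has imported and mounted everything.
[Unit]
After=zfs-import.target zfs-mount.service
Requires=zfs-mount.service

# /etc/fstab line for the loopback mount (mount point /mnt/znas is an assumption):
# 127.0.0.1:/data/nfs/znas  /mnt/znas  nfs4  defaults,x-systemd.after=nfs-server.service  0  0
```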
- **Wiki.js angle brackets:** `<value>` placeholders cause rendering hangs. Use `VALUE` or backtick format instead.
- **bcrypt in `.env`:** Wrap full hash in single quotes to preserve leading `$`.
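A plain-shell illustration of why (Compose's `.env` parsing has its own rules, but the failure mode is analogous: an unprotected value lets `$2`, `$10`, etc. be treated as variable references). The hash below is a placeholder:

```shell
# Single quotes keep every "$" of the bcrypt hash literal.
HASH='$2y$10$N9qo8uLOickgx2ZMRZoMye.placeholder'
printf '%s\n' "$HASH"
```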
- **Webtop GPU rendering:** Requires `LIBGL_ALWAYS_SOFTWARE=1` + `GALLIUM_DRIVER=llvmpipe`; remove `devices:/dev/dri` mapping.
- **Cloud Commander duplication:** Two nearly identical `coderaiser/cloudcmd` stacks exist (`cloudcmd.yaml` + `commander.yaml`) — verify if intentional or a duplicate to clean up.
- **Lidarr missing Caddy label:** Lidarr yaml has no caddy label — either routed via static Caddyfile or not yet exposed. Confirm and standardize.
- **Mapping tool candidate:** another potential mapping tool to evaluate: https://github.com/gelatinescreams/The-One-File/tree/main
---

*Last updated: March 2026 | Source: Forgejo repo git archive*

72
Netgrimoire/Services/Media-Services.md
Normal file
---
title: Media Services
description: Jellyfin, Immich, Kavita, Calibre, Pinchflat, Tunarr — media stack overview
published: true
date: 2026-04-12T00:00:00.000Z
tags: netgrimoire, media, jellyfin
editor: markdown
dateCreated: 2026-04-12T00:00:00.000Z
---

# Media Services

Media services span several grimoires. This page maps what lives where.

---

## Video

| Service | URL | Host | Grimoire |
|---------|-----|------|---------|
| Jellyfin | docker5:8096 | docker5 / Compose | Netgrimoire |
| Jellyfinx (GreenFin) | docker5:7096 | docker5 / Compose | Green Grimoire |
| JellySeerr | `requests.netgrimoire.com` | znas / Swarm | Shadow Grimoire |
| Tunarr | — | znas / Swarm | Shadow Grimoire |
| JellyStat | — | znas / Swarm | Watch Grimoire |
| TinyMediaManager | `tmm.netgrimoire.com` | znas / Swarm | Shadow Grimoire |
| Pinchflat | `pinchflat.netgrimoire.com` | znas / Swarm | Shadow Grimoire |

**Jellyfin** runs on docker5 via Compose. VAAPI GPU acceleration via `/dev/dri`. Dedicated static IP 107.133.34.147 for external access.
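The relevant Compose fragment for that setup, as a sketch (`/dev/dri` is the standard VAAPI render node; service and image names are assumptions):

```yaml
services:
  jellyfin:
    image: jellyfin/jellyfin
    devices:
      - /dev/dri:/dev/dri   # expose the GPU render node for VAAPI transcoding
    ports:
      - "8096:8096"
```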
---

## Books & Comics

| Service | URL | Host | Grimoire |
|---------|-----|------|---------|
| Calibre | `calibre.netgrimoire.com` | znas / Compose | Netgrimoire |
| Calibre-Web Automated | `books.netgrimoire.com`, `books.pncharris.com` | znas / Swarm | PNC Harris |
| Readarr | — | znas / Swarm | Shadow Grimoire |
| Kavita | `kavita.netgrimoire.com` | znas / Swarm | Netgrimoire |
| Comixed | `comics.netgrimoire.com` | znas / Swarm | Netgrimoire |
| FreshRSS | `rss.netgrimoire.com` | znas / Swarm | Netgrimoire |

**Calibre** requires `seccomp=unconfined` — runs in Compose, not Swarm.
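The Compose shape that makes this work, sketched (image name per the common linuxserver packaging, an assumption). Swarm's `docker stack deploy` ignores `security_opt`, which is why this stays under `compose/znas/`:

```yaml
services:
  calibre:
    image: lscr.io/linuxserver/calibre   # assumed image; verify against calibre yaml
    security_opt:
      - seccomp=unconfined   # required by the desktop app container; ignored in Swarm
```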
---

## Music

| Service | URL | Host | Grimoire |
|---------|-----|------|---------|
| Lidarr | — | znas / Swarm | Shadow Grimoire |
| Beets | `beets.netgrimoire.com` | znas / Swarm | Shadow Grimoire |

**Lidarr note:** No Caddy label in YAML — likely routed via static Caddyfile. Verify and standardize.

---

## Photos

| Service | URL | Host | Grimoire |
|---------|-----|------|---------|
| Immich | `immich.netgrimoire.com` | znas / Compose | PNC Harris |

---

## Pending

- Mylar (comic downloader) — in `archive/arr.yaml`, needs setup
- Navidrome — evaluating (music streaming)
- Soularr — evaluating (Soulseek for Lidarr)
- MeTube — planned (YouTube → Tunarr filler workflow)

28
PNC-Fish/IT/Overview.md
Normal file
---
title: IT Overview
description: PNC Fish & More IT infrastructure
published: true
date: 2026-04-12T00:00:00.000Z
tags: pncfish, it
editor: markdown
dateCreated: 2026-04-12T00:00:00.000Z
---

# IT Overview

## Website

Hosted on `pncfishandmore.com`. Static/PHP stack via the Netgrimoire `web.yaml` Apache/PHP service.

## Email

Handled via MailCow + MXRoute. Domain configured as part of the 8-domain mail setup.
See [MailCow Domain Setup](/Keystone-Grimoire/Mail/Domain-Setup).

## POS System

*Document POS system here.*

## Network

*Document store network here — router, AP, any on-site devices.*

13
PNC-Fish/Marketing/Overview.md
Normal file
---
title: Marketing Overview
description: PNC Fish & More marketing and promotions
published: true
date: 2026-04-12T00:00:00.000Z
tags: pncfish, marketing
editor: markdown
dateCreated: 2026-04-12T00:00:00.000Z
---

# Marketing Overview

*Add marketing documentation here: social media accounts, posting schedules, ad campaigns, promotions, photography workflow for livestock, etc.*

13
PNC-Fish/Operations/Overview.md
Normal file
---
title: Operations Overview
description: PNC Fish & More day-to-day operations documentation
published: true
date: 2026-04-12T00:00:00.000Z
tags: pncfish, operations
editor: markdown
dateCreated: 2026-04-12T00:00:00.000Z
---

# Operations Overview

*Add operations documentation here: inventory management, supplier contacts, tank maintenance schedules, livestock sourcing, water chemistry protocols, etc.*