New Grimoire

parent: 77d589a13d
commit: cc574f8aed
157 changed files with 29420 additions and 0 deletions

Watch-Grimoire/Monitoring/DIUN.md (new file, 129 lines)
@@ -0,0 +1,129 @@

# diun

## Overview

The diun stack is a Docker Swarm configuration that runs the crazymax/diun:latest image, providing image-update monitoring and notifications for NetGrimoire. The stack consists of one service: diun.

---

## Architecture

| Service | Image | Port | Role |
|---------|-------|------|------|
| diun | crazymax/diun:latest | n/a | Image update monitoring |

Exposed via: `caddy.DiunNotify.com`

Homepage group:

---

## Build & Configuration

### Prerequisites

To deploy diun, ensure you have the following prerequisites:

- Docker Swarm manager and worker nodes set up
- Uptime Kuma monitoring installed
- Caddy reverse proxy configured with caddy-docker-proxy labels
- Docker Swarm stack configuration file (diun-stack.yml)

### Volume Setup

```bash
mkdir -p /DockerVol/diun
chown -R 1964:1964 /DockerVol/diun
```

### Environment Variables

```bash
DIUN_WATCH_WORKERS=20
DIUN_WATCH_SCHEDULE="0 */6 * * *"
DIUN_PROVIDERS_DOCKER=true
DIUN_PROVIDERS_DOCKER_WATCHBYDEFAULT=true
DIUN_NOTIF_NTFY_ENDPOINT=https://ntfy.netgrimoire.com
DIUN_NOTIF_NTFY_TOPIC=netgrimoire-diun
DIUN_NOTIF_NTFY_PRIORITY=3
TZ=America/Chicago
```

The cron schedule is quoted so that `source .env` does not glob-expand the asterisks.
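Before deploying, the notification path can be sanity-checked by posting directly to the topic above. ntfy accepts a plain HTTP POST; the URL here simply combines the endpoint and topic from the variables:

```bash
# Post a test message to the DIUN topic; a quick failure signal if
# ntfy is unreachable from this host.
NTFY_URL="https://ntfy.netgrimoire.com/netgrimoire-diun"
echo "posting test notification to $NTFY_URL"
curl -s -H "Title: diun test" -d "diun notification test" "$NTFY_URL" \
  || echo "post failed (is ntfy reachable?)"
```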

### Deploy

```bash
cd services/swarm/stack/diun
set -a && source .env && set +a
docker stack config --compose-file diun-stack.yml > resolved.yml
docker stack deploy --compose-file resolved.yml diun
rm resolved.yml
docker stack services diun
```

### First Run

The first run creates the necessary configuration for diun; wait until the service is ready.

- Wait a few seconds, then verify diun is running with `docker stack services diun`
- Verify Caddy is configured to serve DiunNotify.com

---

## User Guide

### Accessing diun

| Service | URL | Purpose |
|---------|-----|---------|
| diun | <CADDY_DOMAIN> | Image update notifications |

### Primary Use Cases

diun watches running images for updates and sends notifications; for uptime and availability monitoring, use Uptime Kuma.

### NetGrimoire Integrations

NetGrimoire uses diun to watch Swarm services for image updates, with notifications delivered to ntfy.

---

## Operations

### Monitoring

Uptime Kuma monitors are created from `kuma.*` labels.

```bash
docker stack services diun
docker service logs diun_diun -f
```

### Backups

Critical data is stored in /DockerVol/diun; back this directory up regularly.
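A minimal backup sketch, assuming the volume path from Volume Setup above and a /tmp destination (both adjustable via SRC/DEST):

```bash
# Archive the diun state directory; SRC and DEST are assumptions
# for this sketch, not part of the deployed tooling.
SRC="${SRC:-/DockerVol/diun}"
DEST="${DEST:-/tmp}"
ARCHIVE="$DEST/diun-$(date +%Y%m%d).tar.gz"
if [ -d "$SRC" ]; then
  tar -czf "$ARCHIVE" -C "$(dirname "$SRC")" "$(basename "$SRC")"
  echo "wrote $ARCHIVE"
else
  echo "source $SRC not found; set SRC to the diun volume path"
fi
```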

### Restore

```bash
cd services/swarm/stack/diun
./deploy.sh
```

---

## Common Failures

* Symptoms: diun does not deploy.
* Cause: Docker Swarm manager and worker nodes are not configured correctly, or the stack deployment failed.
* Fix: Review the stack configuration file (diun-stack.yml) and ensure all required settings are correct.

* Symptoms: Caddy fails to serve DiunNotify.com.
* Cause: The caddy-docker-proxy labels do not contain the required domain for DiunNotify.com.
* Fix: Update the caddy-docker-proxy labels with the correct CADDY_DOMAIN value.

---

## Changelog

| Date | Commit | Summary |
|------|--------|---------|
| 2026-04-07 | 247956f0 | Updated Docker Swarm stack configuration for diun. Fixed incorrect service port and updated environment variables. |
| 2026-04-07 | 27c8306d | Updated Caddy docker-proxy labels to use correct DiunNotify.com domain. |
| 2026-04-07 | 4376b722 | Added initial deploy script for diun stack. |
| 2026-02-01 | c4605c36 | Set default environment variables for diun. |
| 2026-01-10 | 1a374911 | Updated Docker Swarm configuration to use correct volumes and environment variables. |

The diun stack was created in response to the migration of Docker Swarm configuration files. The stack now uses a standardized configuration file (diun-stack.yml) and includes environment variables for DiunNotify.com monitoring.

---

## Notes

- Generated by Gremlin on 2026-04-07T19:09:55.694Z
- Source: swarm/diun.yaml
- Review User Guide and Changelog sections

Watch-Grimoire/Monitoring/Monitoring-Config.md (new file, 143 lines)
@@ -0,0 +1,143 @@

---
title: monitoring Stack
description: NetGrimoire Monitoring Stack Documentation
published: true
date: 2026-04-12T01:10:17.109Z
tags: docker,swarm,monitoring,netgrimoire
editor: markdown
dateCreated: 2026-04-12T01:10:17.109Z
---

# monitoring

## Overview

This stack provides a comprehensive monitoring solution for NetGrimoire. It consists of Prometheus (metrics collection and storage), Grafana (dashboards), Alertmanager (alert routing), Blackbox Exporter (HTTP/TCP/ICMP probing), and cAdvisor (per-container host metrics).

---

## Architecture

| Service | Image | Port | Role |
|---------|-------|------|------|
| Prometheus | prom/prometheus:latest | 9090 | Metrics collection |
| Grafana | grafana/grafana:latest | 3000 | Dashboards |
| Alertmanager | prom/alertmanager:latest | 9093 | Alert routing |
| Blackbox Exporter | prom/blackbox-exporter:latest | 9115 | HTTP/TCP/ICMP probing |
| cAdvisor | gcr.io/cadvisor/cadvisor:latest | global | Multi-arch host metrics |

Exposed via: `caddy.netgrimoire.com`, internal only

Homepage group: Monitoring

---

## Build & Configuration

### Prerequisites

Ensure you have Docker Swarm installed and configured on the manager node (`znas`).

### Volume Setup

```bash
mkdir -p /DockerVol/prometheus/data
mkdir -p /DockerVol/grafana/data
mkdir -p /DockerVol/alertmanager/data
mkdir -p /DockerVol/blackbox/config
chown -R 1964:1964 /DockerVol/prometheus/data
chown -R 1964:1964 /DockerVol/grafana/data
chown -R 1964:1964 /DockerVol/alertmanager/data
chown -R 1964:1964 /DockerVol/blackbox/config
```

### Environment Variables

```bash
# generate: openssl rand -hex 32
GF_SECURITY_ADMIN_PASSWORD=F@lcon13
GF_SECURITY_ADMIN_USER=admin
GF_USERS_DEFAULT_THEME=dark
GF_SERVER_ROOT_URL=https://grafana.netgrimoire.com
GF_FEATURE_TOGGLES_ENABLE=publicDashboards
```

### Deploy

```bash
cd services/swarm/stack/monitoring
set -a && source .env && set +a
docker stack config --compose-file monitoring-stack.yml > resolved.yml
docker stack deploy --compose-file resolved.yml monitoring
rm resolved.yml
docker stack services monitoring
```

### First Run

The services are started by the Swarm stack itself; once it settles, verify each one responds (these are the standard upstream health endpoints):

```bash
curl -fsS http://prometheus.netgrimoire.com:9090/-/healthy
curl -fsS https://alertmanager.netgrimoire.com:9093/-/healthy
curl -fsS https://grafana.netgrimoire.com/api/health
```

---

## User Guide

### Accessing monitoring

| Service | URL | Purpose |
|---------|-----|---------|
| Prometheus | http://prometheus.netgrimoire.com:9090 | Metrics and queries |
| Grafana | https://grafana.netgrimoire.com:3000 | Dashboards |
| Alertmanager | https://alertmanager.netgrimoire.com:9093 | Alert routing |

### Primary Use Cases

Configure Prometheus, Grafana, and Alertmanager to collect metrics from services in NetGrimoire.
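As a hedged sketch of that wiring (job names and probe targets here are assumptions, not the deployed config), a `prometheus.yml` scrape section for the services in this stack might look like:

```yaml
scrape_configs:
  - job_name: cadvisor
    static_configs:
      - targets: ["cadvisor:8080"]
  - job_name: blackbox-http
    metrics_path: /probe
    params:
      module: [http_2xx]
    static_configs:
      - targets: ["https://grafana.netgrimoire.com"]
    relabel_configs:
      # Standard blackbox-exporter relabeling: probe each listed target
      # through the exporter at blackbox:9115
      - source_labels: [__address__]
        target_label: __param_target
      - source_labels: [__param_target]
        target_label: instance
      - target_label: __address__
        replacement: blackbox:9115
```

Prometheus picks up edits to this file via a POST to `/-/reload` when started with `--web.enable-lifecycle`, which the changelog notes is enabled here.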

### NetGrimoire Integrations

Integrate this monitoring stack with other NetGrimoire components using environment variables such as `GF_SERVER_ROOT_URL`.

---

## Operations

### Monitoring

```bash
docker stack services monitoring
# Follow Prometheus for errors and performance issues
# (service name assumes the stack service is named "prometheus")
docker service logs -f monitoring_prometheus
```

### Backups

Critical: back up the Prometheus, Grafana, and Alertmanager data volumes and the Blackbox config under /DockerVol. Reconstructable: cAdvisor holds no state and can simply be redeployed.

### Restore

```bash
cd services/swarm/stack/monitoring
./deploy.sh
```

---

## Common Failures

| Failure | Symptoms | Cause | Fix |
|---------|----------|-------|-----|
| Prometheus not collecting metrics | Prometheus UI displays error messages | Insufficient disk space or permissions to read metrics files | Increase Prometheus's disk space and ensure proper file-system permissions |
| Grafana not displaying dashboards | Dashboards are not visible in the Grafana UI | Grafana cannot reach its data source, or the root URL is wrong | Verify the Prometheus data source and the `GF_SERVER_ROOT_URL` value |

---

## Changelog

| Date | Commit | Summary |
|------|--------|---------|
| 2026-04-11 | ce875510 | Initial documentation for the monitoring stack in NetGrimoire. |
| 2026-04-11 | 3456a528 | Updated Prometheus configuration to use `--web.enable-lifecycle`. |
| 2026-04-09 | 8ca119ab | Added support for Cadvisor services. |
| 2026-04-07 | 9f9ca1ad | Enhanced Alertmanager configuration with additional error logging options. |
| 2026-04-07 | 71e3177f | Updated Grafana to version 10.0.1 for improved performance and stability. |

Per the diffs above, the stack evolved from a Grafana upgrade to 10.0.1 and extra Alertmanager error logging, through cAdvisor support and Prometheus lifecycle reloads, to the initial consolidated documentation on 2026-04-11.

---

## Notes

- Generated by Gremlin on 2026-04-12T01:10:17.109Z
- Source: swarm/monitoring.yaml
- Review User Guide and Changelog sections

Watch-Grimoire/Monitoring/Services.md (new file, 216 lines)
@@ -0,0 +1,216 @@

---
title: Monitors and Alerts
description: DIUN/NTFY on Netgrimoire
published: true
date: 2026-04-10T19:35:18.743Z
tags:
editor: markdown
dateCreated: 2026-04-10T19:35:18.743Z
---

# Notifications — Netgrimoire

## Overview

All Netgrimoire notifications route through a self-hosted ntfy instance at `https://ntfy.netgrimoire.com`. Topics are organized by service category.

## ntfy Topic Structure

| Topic | Services | Purpose |
|-------|----------|---------|
| `netgrimoire-diun` | DIUN | Docker image update notifications |
| `netgrimoire-media` | Sonarr, Radarr, SABnzbd | Download and media management events |
| `netgrimoire-backup` | Kopia | Backup completion and errors |
| `netgrimoire-alerts` | Prometheus/Alertmanager | Infrastructure alerts (future) |

Subscribe to topics at `https://ntfy.netgrimoire.com/<topic>` or via the ntfy mobile app.
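For a quick check without the app, the JSON API can poll a topic's cached messages, and a plain POST publishes to it (both are standard ntfy endpoints):

```bash
# Poll recent messages on the DIUN topic, then publish a test message.
TOPIC_URL="https://ntfy.netgrimoire.com/netgrimoire-diun"
curl -s "$TOPIC_URL/json?poll=1" || echo "poll failed (is ntfy reachable?)"
curl -s -H "Title: test" -d "hello from curl" "$TOPIC_URL" \
  || echo "post failed"
```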

---

## DIUN — Image Update Notifications

DIUN watches all Docker services for image updates and posts to `netgrimoire-diun`.

**Configuration** (`swarm/diun.yaml`):

```yaml
environment:
  DIUN_NOTIF_NTFY_ENDPOINT: https://ntfy.netgrimoire.com
  DIUN_NOTIF_NTFY_TOPIC: netgrimoire-diun
  DIUN_NOTIF_NTFY_PRIORITY: "3"
```

**Notes:**
- `PRIORITY` must be an integer (1–5), not the string `"default"` — this causes a startup crash
- DIUN has no UI — no Caddy, Homepage, or Kuma labels needed
- Runs on manager node only (needs full Swarm API access)
- Watch schedule: every 6 hours (`0 */6 * * *`)

---

## Sonarr — TV Download Notifications

Sonarr sends notifications via webhook to `netgrimoire-media`.

**Setup** (done via UI — not compose):

1. Settings → Connect → + → **Webhook**
2. Name: `ntfy`
3. URL: `https://ntfy.netgrimoire.com/netgrimoire-media`
4. Method: `POST`
5. Triggers: On Grab, On Download, On Upgrade, On Health Issue
6. Test → Save

---

## Radarr — Movie Download Notifications

Identical setup to Sonarr.

**Setup** (done via UI):

1. Settings → Connect → + → **Webhook**
2. Name: `ntfy`
3. URL: `https://ntfy.netgrimoire.com/netgrimoire-media`
4. Method: `POST`
5. Triggers: On Grab, On Download, On Upgrade, On Health Issue
6. Test → Save

---

## SABnzbd — Usenet Download Notifications

SABnzbd does not have native ntfy support. Notifications are handled via a custom shell script.

### Script Location

```
/data/nfs/znas/Docker/Sabnzbd/scripts/ntfy-notify.sh
```

Mounted into the container at `/config/scripts/ntfy-notify.sh`.

### Script

```bash
#!/bin/bash
# SABnzbd ntfy notification script
# SABnzbd passes: $1=Job name, $2=Final dir, $3=NZB file,
# $4=Category, $5=Group, $6=Status, $7=Fail message

NTFY_URL="https://ntfy.netgrimoire.com/netgrimoire-media"

JOB_NAME="$1"
STATUS_CODE="$6"
FAIL_MSG="$7"

case "$STATUS_CODE" in
  0) TITLE="✅ SABnzbd — Download Complete"
     MSG="$JOB_NAME"; PRIORITY=3 ;;
  1) TITLE="⚠️ SABnzbd — Post-Processing Error"
     MSG="$JOB_NAME — $FAIL_MSG"; PRIORITY=4 ;;
  2) TITLE="❌ SABnzbd — Download Failed"
     MSG="$JOB_NAME — $FAIL_MSG"; PRIORITY=5 ;;
  *) TITLE="ℹ️ SABnzbd — Notification"
     MSG="$JOB_NAME (status: $STATUS_CODE)"; PRIORITY=3 ;;
esac

curl -s \
  -H "Title: $TITLE" \
  -H "Priority: $PRIORITY" \
  -H "Tags: floppy_disk" \
  -d "$MSG" \
  "$NTFY_URL"

exit 0
```

### SABnzbd UI Setup

1. Config → Folders → **Post-Processing Scripts Folder** → set to `/config/scripts`
2. Config → Notifications → Notification Script section
3. Check **Enable notification script**
4. Script dropdown → select `ntfy-notify.sh`
5. Check: Job finished, Job failed, Warning, Error, Disk full
6. Test → Save

**Note:** The scripts folder must be configured under Config → Folders first or the script won't appear in the dropdown.

---

## Kopia — Backup Notifications

Kopia has no native webhook support. Notifications are handled via a cron script on znas that uses the Kopia CLI inside the Docker container.

### Script Location

```
/usr/local/bin/kopia-notify.sh
```


### How It Works

- Runs hourly via cron on znas
- Uses `docker exec` to run `kopia snapshot list --json` inside the container
- Parses the JSON output with Python to find snapshots completed in the last hour
- Posts a success or error notification to `netgrimoire-backup`
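The parsing step can be sketched as below. The JSON field names (`endTime`, `stats.errorCount`, `source.path`) are assumptions about kopia's output and should be verified against real `kopia snapshot list --json` results; the heredoc sample stands in for the `docker exec` pipe.

```bash
# Hedged sketch of the last-hour filter in kopia-notify.sh.
# In production, feed it:
#   docker exec $(docker ps -q -f name=kopia_kopia) kopia snapshot list --json
RESULT=$(cat <<'EOF' | python3 -c '
import json, sys
from datetime import datetime, timedelta, timezone
cutoff = datetime.now(timezone.utc) - timedelta(hours=1)
for s in json.load(sys.stdin):
    end = datetime.fromisoformat(s["endTime"].replace("Z", "+00:00"))
    if end < cutoff:
        continue
    errs = s.get("stats", {}).get("errorCount", 0)
    print("ERROR" if errs else "OK", s["source"]["path"], str(errs) + " error(s)")
'
[{"endTime": "2999-01-01T00:00:00Z",
  "stats": {"errorCount": 0},
  "source": {"path": "/DockerVol"}}]
EOF
)
echo "$RESULT"
```

Each printed line maps directly onto the success/error notification formats below.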

### Cron Entry (znas root crontab)

```
0 * * * * /usr/local/bin/kopia-notify.sh
```

### Notification Format

**Success:** `✅ Kopia — Backup Complete`
```
host:path
N files • X.X GB
```

**Error:** `❌ Kopia — Backup Errors`
```
host:path
N error(s) • N files • X.X GB
```

### Kopia API Access

The Kopia API is accessible inside the container only. Direct host access via port 51515 does not work due to network routing. Use `docker exec` instead:

```bash
docker exec $(docker ps -q -f name=kopia_kopia) \
  kopia snapshot list --json
```

---

## ntfy Compose Reference

```yaml
# swarm/ntfy.yaml
services:
  ntfy:
    image: binwiederhier/ntfy
    command: serve
    user: "1964:1964"
    environment:
      TZ: America/Chicago
    volumes:
      - /data/nfs/znas/Docker/ntfy/cache:/var/cache/ntfy
      - /data/nfs/znas/Docker/ntfy/etc:/etc/ntfy
    ports:
      - 81:80
    networks:
      - netgrimoire
    deploy:
      labels:
        caddy: ntfy.netgrimoire.com
        caddy.reverse_proxy: ntfy:80
        caddy.import: crowdsec
        # Note: no authentik — ntfy must be publicly reachable
        # for external services to post notifications
```

**Note:** ntfy intentionally has no `caddy.import_1: authentik` — it must remain publicly accessible so external services (OPNsense CrowdSec plugin, Monit, etc.) can post to it without authentication.

Watch-Grimoire/Monitoring/Uptime-Kuma.md (new file, 115 lines)
@@ -0,0 +1,115 @@

---
title: kuma Stack
description: Kuma Uptime Monitor for NetGrimoire
---

# kuma

## Overview

The kuma stack is a service in NetGrimoire that monitors the status of services running on the swarm. It consists of two main components: kuma and autokuma. The stack provides real-time monitoring and alerts for service issues, supporting the overall health and availability of the system.

---

## Architecture

- **Host:** docker4
- **Network:** netgrimoire
- **Exposed via:** kuma:3001 (Caddy reverse proxy), internal only
- **Homepage group:** Monitoring

---

## Build & Configuration

### Prerequisites

To deploy this stack, ensure you have Docker Swarm installed and running on your manager node.

### Volume Setup

```bash
mkdir -p /DockerVol/kuma
chown -R kuma:kuma /DockerVol/kuma
```

### Environment Variables

```bash
AUTOKUMA__KUMA__URL=http://kuma:3001
AUTOKUMA__KUMA__USERNAME=traveler
AUTOKUMA__KUMA__PASSWORD=F@lcon12
```

These use `KEY=value` form so the file can be loaded with `source .env` during deploy.

### Deploy

```bash
cd services/swarm/stack/kuma
set -a && source .env && set +a
docker stack config --compose-file kuma-stack.yml > resolved.yml
docker stack deploy --compose-file resolved.yml kuma
rm resolved.yml
docker stack services kuma
```

### First Run

Perform the following steps after deploying the stack:

```bash
./deploy.sh
```

This will initialize the autokuma service and start monitoring.

---

## User Guide

### Accessing kuma

| Service | URL | Purpose |
|---------|-----|---------|
| kuma | https://kuma.netgrimoire.com | Uptime dashboard (Caddy reverse proxy) |

### Primary Use Cases

The primary use case for this stack is to monitor the health and availability of services in NetGrimoire. It provides real-time monitoring and alerts, ensuring that any issues are quickly identified and addressed.

### NetGrimoire Integrations

This service integrates with other NetGrimoire services by surfacing their health on Uptime Kuma's monitoring dashboard. autokuma connects to the kuma instance using the `AUTOKUMA__KUMA__URL` environment variable and registers health checks on its behalf.
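As a hedged illustration (the exact label schema is an assumption; check the AutoKuma documentation), a Swarm service opts into an HTTP monitor with `kuma.*` deploy labels along these lines:

```yaml
# Hypothetical service snippet: AutoKuma would create an HTTP monitor
# named "My Service" from these labels.
deploy:
  labels:
    kuma.myservice.http.name: "My Service"
    kuma.myservice.http.url: "https://myservice.netgrimoire.com"
```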

---

## Operations

### Monitoring

kuma monitors services running on the swarm and provides real-time alerts for any issues.

```bash
docker stack services kuma
docker service logs -f kuma_kuma
```

### Backups

The `/DockerVol/kuma` volume holds the Kuma data and should be backed up regularly; it is required to restore monitors and history after a failure.

### Restore

Perform the following steps to restore from a backup:

```bash
cd services/swarm/stack/kuma
./deploy.sh
```

This will redeploy the kuma stack and initialize autokuma.

---

## Common Failures

| Symptom | Cause | Fix |
|---------|-------|-----|
| No monitoring data | Insufficient permissions or incorrect labels | Check labels and permissions; ensure correct configuration |
| Autokuma fails to start | Incorrect environment variables or missing required services | Review configuration; update environment variables as needed |

---

## Changelog

| Date | Commit | Summary |
|------|--------|---------|
| 2026-04-07 | 5ea60b18 | Initial deployment of kuma stack |
| 2026-04-07 | d6fffdfb | Fixed autokuma configuration |
| 2026-04-06 | 42982c9a | Updated Docker Swarm version |
| 2026-04-06 | 9d8b36be | Improved security patches |
| 2026-04-06 | 3f791e83 | Updated documentation for autokuma |

---

## Notes

- Generated by Gremlin on 2026-04-07T05:32:30.439Z
- Source: swarm/kuma.yaml
- Review User Guide and Changelog sections