
---
title: Readme
description: Readme file generated by AI
published: true
date: 2026-04-02T21:09:39.376Z
tags: 
editor: markdown
dateCreated: 2026-03-05T02:27:57.522Z
---

# Homelab AI & Monitoring Stack - Deployment Guide

This repository contains everything you need to deploy a complete AI-powered homelab monitoring and automation stack.

## What's Included

### 📦 Docker Compose Files

1. `ai-stack-compose.yml` - Main AI automation stack (Ollama, Open WebUI, n8n, Qdrant)
2. `librenms-compose.yml` - Network monitoring system (LibreNMS + MariaDB + Redis)

### 📚 Wiki.js Documentation

1. `wiki-ai-stack.md` - Complete documentation for the AI stack
2. `wiki-librenms.md` - Complete documentation for LibreNMS

## Quick Start

### Prerequisites

- Docker and Docker Compose installed
- 16GB RAM minimum (8GB+ available)
- 70GB disk space (50GB for AI stack + 20GB for LibreNMS)
- Network devices with SNMP enabled (for LibreNMS)

### Step 1: Deploy AI Stack

```bash
# Create directory
mkdir -p ~/homelab/ai-stack
cd ~/homelab/ai-stack

# Copy ai-stack-compose.yml to this directory

# Edit environment variables
nano ai-stack-compose.yml
# Change:
# - WEBUI_SECRET_KEY (generate random string)
# - N8N_BASIC_AUTH_PASSWORD (use strong password)
# - WEBHOOK_URL (your server IP)
# - GENERIC_TIMEZONE (your timezone)

# Start the stack
docker-compose -f ai-stack-compose.yml up -d

# Pull AI models
docker exec -it ollama ollama pull qwen2.5-coder:7b
docker exec -it ollama ollama pull llama3.2:3b

# Verify all services are running
docker-compose -f ai-stack-compose.yml ps
```
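The two secret values mentioned above are better generated than invented; a minimal sketch, assuming `openssl` is available on the host:

```bash
# Generate a random secret key and a strong password, then paste the
# output into ai-stack-compose.yml.
WEBUI_SECRET_KEY=$(openssl rand -hex 32)            # 64 hex characters
N8N_BASIC_AUTH_PASSWORD=$(openssl rand -base64 24)  # 32 base64 characters
echo "WEBUI_SECRET_KEY=$WEBUI_SECRET_KEY"
echo "N8N_BASIC_AUTH_PASSWORD=$N8N_BASIC_AUTH_PASSWORD"
```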

Access points:

- Open WebUI: http://your-server-ip:3000
- n8n: http://your-server-ip:5678
- Qdrant API: http://your-server-ip:6333
- Ollama API: http://your-server-ip:11434

### Step 2: Deploy LibreNMS

```bash
# Create directory
mkdir -p ~/homelab/librenms
cd ~/homelab/librenms

# Copy librenms-compose.yml to this directory

# Edit environment variables
nano librenms-compose.yml
# Change:
# - DB_PASSWORD (use strong password)
# - MYSQL_ROOT_PASSWORD (use strong password)
# - BASE_URL (your server IP)
# - TZ (your timezone)

# Start LibreNMS
docker-compose -f librenms-compose.yml up -d

# Wait for initialization (2-3 minutes)
docker logs -f librenms

# Access web interface
# http://your-server-ip:8000
# Default login: librenms/librenms
# CHANGE PASSWORD IMMEDIATELY!
```
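Instead of watching the logs by hand, the 2-3 minute initialization can be detected with a small polling helper; a sketch (`wait_for_url` is a name introduced here, not part of LibreNMS):

```bash
# Poll a URL until it answers, instead of tailing logs by hand.
# Usage: wait_for_url <url> <attempts> <delay_seconds>
wait_for_url() {
  local url=$1 attempts=$2 delay=$3 i=0
  until curl -fsS -o /dev/null "$url" 2>/dev/null; do
    i=$((i + 1))
    [ "$i" -ge "$attempts" ] && return 1
    sleep "$delay"
  done
}

# Example: wait up to ~3 minutes for the LibreNMS web UI
# wait_for_url http://localhost:8000 36 5 && echo "LibreNMS is up"
```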

### Step 3: Import Documentation to Wiki.js

Option 1: via the Wiki.js web interface

1. Log in to Wiki.js
2. Create a new page: "AI Stack Documentation"
3. Copy in the contents of `wiki-ai-stack.md`
4. Create a new page: "LibreNMS Documentation"
5. Copy in the contents of `wiki-librenms.md`

Option 2: via the Wiki.js API (if configured)

Use the provided markdown files with the Wiki.js GraphQL API.
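The GraphQL route can be sketched with `curl` against a Wiki.js v2 endpoint. The URL, token, and page path below are placeholders, the page content is shortened, and the exact mutation fields should be checked against your Wiki.js version:

```bash
WIKI_URL="http://your-wiki-host"   # placeholder
WIKI_TOKEN="changeme"              # API key from the Wiki.js admin area

# Build the GraphQL payload; in practice, load the content from
# wiki-ai-stack.md instead of the short string used here.
cat > payload.json <<'EOF'
{
  "query": "mutation ($content: String!) { pages { create(content: $content, description: \"AI stack documentation\", editor: \"markdown\", isPublished: true, isPrivate: false, locale: \"en\", path: \"ai-stack\", tags: [], title: \"AI Stack Documentation\") { responseResult { succeeded message } } } }",
  "variables": { "content": "Imported from wiki-ai-stack.md" }
}
EOF

curl -s -X POST "$WIKI_URL/graphql" \
  -H "Authorization: Bearer $WIKI_TOKEN" \
  -H "Content-Type: application/json" \
  --data @payload.json || echo "request failed (expected with the placeholder WIKI_URL)"
```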

## Initial Configuration

### Open WebUI Setup

  1. Navigate to http://your-server-ip:3000
  2. Create admin account (first user becomes admin)
  3. Verify Ollama connection in Settings
  4. Configure Qdrant connection (host: qdrant, port: 6333)
  5. Import your Wiki.js documentation for RAG

### n8n Setup

  1. Navigate to http://your-server-ip:5678
  2. Login with credentials from compose file
  3. Create first workflow (see documentation for examples)
  4. Configure Ollama node connection

### LibreNMS Setup

  1. Navigate to http://your-server-ip:8000
  2. Login and CHANGE PASSWORD
  3. Add your first network device
  4. Configure alert transport (webhook to n8n)
  5. Generate API token for n8n integration

## Integrations

### Connect Existing Services

Uptime Kuma → n8n:

Beszel → n8n:

Forgejo → n8n:

LibreNMS → n8n:

## Resource Usage

Expected memory usage with all services running:

| Service | Memory |
|---------|--------|
| Ollama (with model loaded) | 4-6GB |
| Open WebUI | 500MB |
| Qdrant | 1GB |
| n8n | 200MB |
| LibreNMS | 300-500MB |
| MariaDB | 500MB-1GB |
| Redis | 50-100MB |
| Total | ~7-10GB |

This leaves roughly 6-9GB for other services and the system.

## Example Workflows

### 1. Intelligent Alert Processing

```
Monitoring Alert → n8n webhook
  → Query historical data
  → Ollama analysis (Is this expected? Severity? Action needed?)
  → Route based on AI decision
    → Critical: Immediate notification
    → Warning: Log and monitor
    → Info: Suppress
```

### 2. Automated Documentation

```
Code Push to Forgejo → n8n webhook
  → Get changed files
  → Ollama generates documentation
  → Post to Wiki.js via API
  → Notify team
```

### 3. Docker-Compose Standardization

```
n8n scheduled workflow (daily)
  → Scan all Forgejo repos
  → Find docker-compose.yml files
  → Compare against template (stored in Qdrant)
  → Ollama generates compliance report
  → Create Forgejo issues for non-compliant repos
```
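The "compare against template" step can be approximated locally. This sketch assumes the repos are already checked out under a directory passed as the first argument, and the required keys are illustrative; the real workflow would fetch files via the Forgejo API and pull the template from Qdrant:

```bash
# Flag compose files that are missing keys the template requires.
# Usage: scan_compose <directory-of-checked-out-repos>
scan_compose() {
  find "$1" \( -name 'docker-compose.yml' -o -name 'compose.yml' \) |
  while read -r f; do
    for key in restart networks; do
      grep -q "^ *$key:" "$f" || echo "NON-COMPLIANT: $f (missing '$key:')"
    done
  done
}

# Example: scan_compose ~/repos
```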

## Backup Strategy

### AI Stack Backup

```bash
# Weekly backup
cd ~/homelab/ai-stack
docker-compose -f ai-stack-compose.yml stop qdrant
tar -czf ai-stack-backup-$(date +%Y%m%d).tar.gz \
  qdrant_data/ n8n_data/ open_webui_data/
docker-compose -f ai-stack-compose.yml start qdrant
```
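The archiving step can be wrapped in a small reusable function that also verifies the archive is readable before reporting success (`backup_dirs` is a helper name introduced here):

```bash
# Create a dated archive of the given directories and verify it with
# `tar -tzf` before trusting it.
backup_dirs() {
  local out="backup-$(date +%Y%m%d).tar.gz"
  tar -czf "$out" "$@" && tar -tzf "$out" >/dev/null && echo "$out OK"
}

# Example (run from ~/homelab/ai-stack with qdrant stopped):
# backup_dirs qdrant_data/ n8n_data/ open_webui_data/
```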

### LibreNMS Backup

```bash
# Weekly backup
cd ~/homelab/librenms
# mysqldump prompts for the MYSQL_ROOT_PASSWORD from the compose file
docker exec -i librenms_db mysqldump -u root -p librenms > \
  librenms-db-backup-$(date +%Y%m%d).sql
tar -czf librenms-data-backup-$(date +%Y%m%d).tar.gz librenms_data/
```

### Automated Backup via n8n

Create a scheduled workflow that:

  1. Runs weekly (Sunday 2 AM)
  2. Executes backup commands
  3. Uploads to external storage (optional)
  4. Verifies backup integrity
  5. Sends notification with results
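Step 4 ("verifies backup integrity") can be as simple as a checksum recorded at backup time and re-checked before the notification goes out; a sketch with illustrative file names (`verify_backup` is a helper introduced here):

```bash
# At backup time, record a checksum next to the archive, e.g.:
# sha256sum ai-stack-backup-$(date +%Y%m%d).tar.gz > \
#   ai-stack-backup-$(date +%Y%m%d).tar.gz.sha256

# Later, re-check the archive against its recorded checksum.
# Usage: verify_backup <archive-path>
verify_backup() {
  sha256sum -c "$1.sha256" >/dev/null 2>&1 \
    && echo "backup verified" || echo "backup CORRUPT"
}
```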

## Troubleshooting

### Services Won't Start

```bash
# Check logs
docker-compose -f ai-stack-compose.yml logs [service-name]

# Common issues:
# - Port conflicts (check with: netstat -tulpn)
# - Insufficient memory (check with: free -h)
# - Permissions on volume directories
```
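The port-conflict case can also be checked without `netstat`; a bash-only sketch using the `/dev/tcp` pseudo-device (`port_in_use` is a name introduced here, and the port list matches the services in this guide):

```bash
# Succeeds if something is already listening on the given local port.
port_in_use() {
  (exec 3<>"/dev/tcp/127.0.0.1/$1") 2>/dev/null
}

for port in 3000 5678 6333 8000 11434; do
  if port_in_use "$port"; then
    echo "port $port is already in use"
  fi
done
```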

### Ollama Not Responding

```bash
# Restart Ollama
docker restart ollama

# Test API
curl http://localhost:11434/api/tags

# If still failing, check if model is loaded
docker exec -it ollama ollama list
```

### Can't Connect to Services

```bash
# Check if services are running
docker ps

# Check network connectivity
docker network ls
docker network inspect [network-name]

# Verify firewall isn't blocking ports
sudo ufw status
```

## Security Recommendations

1. Change all default passwords immediately
2. Use strong, unique passwords for:
   - n8n basic auth
   - LibreNMS admin user
   - Database passwords
   - Open WebUI admin account
3. Network security:
   - Use a reverse proxy (Traefik, Nginx Proxy Manager)
   - Enable SSL/TLS certificates
   - Restrict access to trusted networks
   - Consider a VPN for remote access
4. API security:
   - Generate strong API tokens
   - Rotate credentials periodically
   - Use read-only tokens when possible

## Maintenance Schedule

Daily (automated):

- Service polling and monitoring
- Alert processing
- Automatic discovery

Weekly:

- Review alerts and adjust thresholds
- Check service logs for errors
- Verify backups completed successfully

Monthly:

- Database optimization
- Review disk space usage
- Update containers (test in dev first)
- Audit user accounts and permissions

Quarterly:

- Full backup verification and restoration test
- Security audit
- Review and update documentation
- Clean up old data

## Getting Help

### Documentation

- Check the Wiki.js pages for detailed information
- Review container logs for error messages
- Search community forums for similar issues

### Useful Commands

```bash
# View all logs
docker-compose logs -f

# View a specific service
docker logs -f [container-name]

# Restart a single service
docker restart [container-name]

# Restart the entire stack
docker-compose -f [compose-file] restart

# Update containers
docker-compose -f [compose-file] pull
docker-compose -f [compose-file] up -d
```

## Next Steps

  1. Deploy AI stack
  2. Deploy LibreNMS
  3. Import documentation to Wiki.js
  4. Configure integrations with existing services
  5. Create first n8n workflow
  6. Add network devices to LibreNMS
  7. Set up automated backups
  8. Create custom dashboards

## Support

For issues specific to:


Last Updated: February 2025
Maintained By: Homelab Admin
License: MIT (for custom configurations)