| title | description | published | date | tags | editor | dateCreated |
|---|---|---|---|---|---|---|
| Readme | Readme file generated by AI | true | 2026-04-02T21:09:39.376Z | | markdown | 2026-03-05T02:27:57.522Z |
# Homelab AI & Monitoring Stack - Deployment Guide
This repository contains everything you need to deploy a complete AI-powered homelab monitoring and automation stack.
## What's Included

### 📦 Docker Compose Files

- ai-stack-compose.yml - Main AI automation stack (Ollama, Open WebUI, n8n, Qdrant)
- librenms-compose.yml - Network monitoring system (LibreNMS + MariaDB + Redis)

### 📚 Wiki.js Documentation

- wiki-ai-stack.md - Complete documentation for the AI stack
- wiki-librenms.md - Complete documentation for LibreNMS
## Quick Start

### Prerequisites

- Docker and Docker Compose installed
- 16GB RAM minimum (at least 8GB free for this stack)
- 70GB disk space (50GB for the AI stack + 20GB for LibreNMS)
- Network devices with SNMP enabled (for LibreNMS)
### Step 1: Deploy AI Stack

```bash
# Create directory
mkdir -p ~/homelab/ai-stack
cd ~/homelab/ai-stack

# Copy ai-stack-compose.yml to this directory

# Edit environment variables
nano ai-stack-compose.yml
# Change:
# - WEBUI_SECRET_KEY (generate a random string)
# - N8N_BASIC_AUTH_PASSWORD (use a strong password)
# - WEBHOOK_URL (your server IP)
# - GENERIC_TIMEZONE (your timezone)

# Start the stack
docker-compose -f ai-stack-compose.yml up -d

# Pull AI models
docker exec -it ollama ollama pull qwen2.5-coder:7b
docker exec -it ollama ollama pull llama3.2:3b

# Verify all services are running
docker-compose -f ai-stack-compose.yml ps
```
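The random values mentioned above can be generated rather than invented. A minimal sketch, assuming `openssl` is installed; paste the printed values into ai-stack-compose.yml:

```shell
# Generate candidate secrets for the compose file.
# Variable names mirror the environment variables referenced above.
WEBUI_SECRET_KEY=$(openssl rand -hex 32)           # 64 hex characters
N8N_BASIC_AUTH_PASSWORD=$(openssl rand -base64 24) # strong random password
echo "WEBUI_SECRET_KEY=$WEBUI_SECRET_KEY"
echo "N8N_BASIC_AUTH_PASSWORD=$N8N_BASIC_AUTH_PASSWORD"
```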
Access points:
- Open WebUI: http://your-server-ip:3000
- n8n: http://your-server-ip:5678
- Ollama API: http://your-server-ip:11434
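A quick smoke test for the three access points. The ports come from the defaults above; `localhost` assumes you are running this on the server itself:

```shell
# Probe each service endpoint; prints OK or DOWN per URL.
services=(
  "http://localhost:3000"    # Open WebUI
  "http://localhost:5678"    # n8n
  "http://localhost:11434"   # Ollama API
)
for url in "${services[@]}"; do
  if curl -fsS --max-time 3 "$url" >/dev/null 2>&1; then
    echo "OK   $url"
  else
    echo "DOWN $url"
  fi
done
```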
### Step 2: Deploy LibreNMS

```bash
# Create directory
mkdir -p ~/homelab/librenms
cd ~/homelab/librenms

# Copy librenms-compose.yml to this directory

# Edit environment variables
nano librenms-compose.yml
# Change:
# - DB_PASSWORD (use a strong password)
# - MYSQL_ROOT_PASSWORD (use a strong password)
# - BASE_URL (your server IP)
# - TZ (your timezone)

# Start LibreNMS
docker-compose -f librenms-compose.yml up -d

# Wait for initialization (2-3 minutes)
docker logs -f librenms

# Access the web interface at http://your-server-ip:8000
# Default login: librenms/librenms
# CHANGE THE PASSWORD IMMEDIATELY!
```
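Rather than watching the logs, the "wait for initialization" step can be scripted. This is a sketch: the URL matches the default above, and the retry values in the example call are deliberately small so it returns quickly:

```shell
# Poll a URL until it responds, or give up after a number of attempts.
wait_for() {
  local url=$1 attempts=${2:-60} delay=${3:-5}
  for ((i = 1; i <= attempts; i++)); do
    if curl -fsS --max-time 2 "$url" >/dev/null 2>&1; then
      echo "up after $i checks: $url"
      return 0
    fi
    sleep "$delay"
  done
  echo "gave up waiting for $url"
  return 1
}

# Example (use the defaults, 60 attempts x 5s, for a real deployment):
wait_for "http://localhost:8000" 2 1 || true
```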
### Step 3: Import Documentation to Wiki.js

Option 1: Via the Wiki.js web interface

1. Log in to Wiki.js
2. Create a new page: "AI Stack Documentation"
3. Copy in the contents of wiki-ai-stack.md
4. Create a new page: "LibreNMS Documentation"
5. Copy in the contents of wiki-librenms.md

Option 2: Via the Wiki.js API (if configured)

Use the provided markdown files with the Wiki.js GraphQL API.
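Option 2 might look like the following sketch. The URL and token are placeholders, and the `pages.create` mutation arguments are assumptions based on the Wiki.js GraphQL API; verify them against your Wiki.js version before relying on this:

```shell
# Hypothetical sketch of creating a Wiki.js page over GraphQL.
WIKI_URL="http://localhost:3000"
WIKI_TOKEN="your-api-token"   # placeholder - generate one in the Wiki.js admin UI

# Assumed mutation shape; field names may differ between Wiki.js versions.
cat > /tmp/wiki-payload.json <<'EOF'
{"query": "mutation { pages { create(content: \"Imported content\", description: \"AI stack docs\", editor: \"markdown\", isPublished: true, isPrivate: false, locale: \"en\", path: \"ai-stack\", tags: [], title: \"AI Stack Documentation\") { responseResult { succeeded message } } } }"}
EOF

curl -fsS --max-time 5 -X POST "$WIKI_URL/graphql" \
  -H "Authorization: Bearer $WIKI_TOKEN" \
  -H "Content-Type: application/json" \
  --data @/tmp/wiki-payload.json || echo "request failed (is Wiki.js reachable?)"
```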
## Initial Configuration

### Open WebUI Setup

1. Navigate to http://your-server-ip:3000
2. Create an admin account (the first user becomes admin)
3. Verify the Ollama connection in Settings
4. Configure the Qdrant connection (host: qdrant, port: 6333)
5. Import your Wiki.js documentation for RAG
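Before wiring Qdrant into Open WebUI, you can confirm it answers on its REST port. The host and port are the defaults above; from inside another container, use the hostname `qdrant` instead of `localhost`:

```shell
# List Qdrant collections; a fresh install returns an empty list.
QDRANT_URL="http://localhost:6333"
curl -fsS --max-time 3 "$QDRANT_URL/collections" \
  || echo "Qdrant not reachable at $QDRANT_URL"
```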
### n8n Setup

1. Navigate to http://your-server-ip:5678
2. Log in with the credentials from the compose file
3. Create your first workflow (see the documentation for examples)
4. Configure the Ollama node connection

### LibreNMS Setup

1. Navigate to http://your-server-ip:8000
2. Log in and CHANGE THE PASSWORD
3. Add your first network device
4. Configure an alert transport (webhook to n8n)
5. Generate an API token for the n8n integration
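Once a token exists, n8n (or a quick curl) can talk to the LibreNMS REST API. The token value here is a placeholder:

```shell
# Smoke-test a LibreNMS API token by listing devices.
LIBRENMS_URL="http://localhost:8000"
API_TOKEN="your-token-here"   # placeholder - create under Settings -> API

curl -fsS --max-time 3 \
  -H "X-Auth-Token: $API_TOKEN" \
  "$LIBRENMS_URL/api/v0/devices" \
  || echo "API not reachable or token invalid"
```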
## Integrations

### Connect Existing Services

Uptime Kuma → n8n:

- Configure a webhook in Uptime Kuma's notification settings
- URL: http://your-server-ip:5678/webhook/uptime-kuma

Beszel → n8n:

- Use the Shoutrrr webhook format
- URL: http://your-server-ip:5678/webhook/beszel

Forgejo → n8n:

- Add a webhook in the repository settings
- URL: http://your-server-ip:5678/webhook/forgejo-push
- Events: Push, Pull Request

LibreNMS → n8n:

- Alerts → Alert Transports → Add Webhook
- URL: http://your-server-ip:5678/webhook/librenms-alert
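Each of these webhooks can be smoke-tested with a hand-made POST before touching the real service. The payload below is an invented example, not any service's real alert format:

```shell
# Send a fake alert to an n8n webhook to confirm the workflow fires.
N8N_WEBHOOK="http://localhost:5678/webhook/librenms-alert"
PAYLOAD='{"host": "test-device", "severity": "warning", "msg": "manual smoke test"}'

curl -fsS --max-time 3 -X POST "$N8N_WEBHOOK" \
  -H "Content-Type: application/json" \
  -d "$PAYLOAD" || echo "webhook not reachable (is the workflow active?)"
```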
## Resource Usage

Expected memory usage with all services running:
| Service | Memory |
|---|---|
| Ollama (with model loaded) | 4-6GB |
| Open WebUI | 500MB |
| Qdrant | 1GB |
| n8n | 200MB |
| LibreNMS | 300-500MB |
| MariaDB | 500MB-1GB |
| Redis | 50-100MB |
| Total | ~7-10GB |
On a 16GB host, that leaves roughly 6-9GB for other services and the operating system.
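Actual usage can be compared against this table at any time with `docker stats`:

```shell
# Show live memory usage per container, mirroring the table above.
FMT='table {{.Name}}\t{{.MemUsage}}'
docker stats --no-stream --format "$FMT" || echo "docker not available"
```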
## Example Workflows

### 1. Intelligent Alert Processing

```
Monitoring Alert → n8n webhook
  → Query historical data
  → Ollama analysis (Is this expected? Severity? Action needed?)
  → Route based on AI decision
    → Critical: Immediate notification
    → Warning: Log and monitor
    → Info: Suppress
```
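The "Ollama analysis" step can be prototyped directly against the Ollama API before building it in n8n. The prompt and model choice below are illustrative:

```shell
# Ask a local model to triage a fake alert (stream disabled for one JSON reply).
OLLAMA_URL="http://localhost:11434"
cat > /tmp/alert-prompt.json <<'EOF'
{
  "model": "llama3.2:3b",
  "prompt": "Alert: CPU at 95% on host db01 for 10 minutes. Classify severity as critical, warning, or info, and explain in one sentence.",
  "stream": false
}
EOF

curl -fsS --max-time 30 "$OLLAMA_URL/api/generate" \
  -d @/tmp/alert-prompt.json || echo "Ollama not reachable at $OLLAMA_URL"
```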
### 2. Automated Documentation

```
Code push to Forgejo → n8n webhook
  → Get changed files
  → Ollama generates documentation
  → Post to Wiki.js via API
  → Notify team
```
### 3. Docker Compose Standardization

```
n8n scheduled workflow (daily)
  → Scan all Forgejo repos
  → Find docker-compose.yml files
  → Compare against template (stored in Qdrant)
  → Ollama generates compliance report
  → Create Forgejo issues for non-compliant repos
```
## Backup Strategy

### AI Stack Backup

```bash
# Weekly backup
cd ~/homelab/ai-stack
docker-compose -f ai-stack-compose.yml stop qdrant
tar -czf ai-stack-backup-$(date +%Y%m%d).tar.gz \
  qdrant_data/ n8n_data/ open_webui_data/
docker-compose -f ai-stack-compose.yml start qdrant
```
### LibreNMS Backup

```bash
# Weekly backup
cd ~/homelab/librenms
docker exec librenms_db mysqldump -u root -p librenms > \
  librenms-db-backup-$(date +%Y%m%d).sql
tar -czf librenms-data-backup-$(date +%Y%m%d).tar.gz librenms_data/
```
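Both backups fit naturally into cron if you wrap them in scripts. The script paths below are hypothetical; adjust them to wherever you keep your wrappers:

```
# m  h  dom mon dow  command
0    2  *   *   0    /home/youruser/homelab/ai-stack/backup.sh
30   2  *   *   0    /home/youruser/homelab/librenms/backup.sh
```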
### Automated Backup via n8n

Create a scheduled workflow that:

- Runs weekly (Sunday 2 AM)
- Executes the backup commands
- Uploads to external storage (optional)
- Verifies backup integrity
- Sends a notification with the results
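The "verifies backup integrity" step can be as simple as listing the archive. This self-contained sketch builds a throwaway archive purely to demonstrate the check:

```shell
# Create a dummy backup, then verify it the way a real weekly archive would be.
workdir=$(mktemp -d)
mkdir -p "$workdir/data"
echo "sample" > "$workdir/data/file.txt"
tar -czf "$workdir/backup.tar.gz" -C "$workdir" data

verify_backup() {
  # tar -t exits non-zero if the archive is corrupt or unreadable.
  if [ -s "$1" ] && tar -tzf "$1" >/dev/null 2>&1; then
    echo "OK: $1"
  else
    echo "CORRUPT: $1"
    return 1
  fi
}

result=$(verify_backup "$workdir/backup.tar.gz")
echo "$result"
rm -rf "$workdir"
```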
## Troubleshooting

### Services Won't Start

```bash
# Check logs
docker-compose -f ai-stack-compose.yml logs [service-name]

# Common issues:
# - Port conflicts (check with: netstat -tulpn)
# - Insufficient memory (check with: free -h)
# - Permissions on volume directories
```

### Ollama Not Responding

```bash
# Restart Ollama
docker restart ollama

# Test the API
curl http://localhost:11434/api/tags

# If it still fails, check whether a model is loaded
docker exec -it ollama ollama list
```

### Can't Connect to Services

```bash
# Check if services are running
docker ps

# Check network connectivity
docker network ls
docker network inspect [network-name]

# Verify the firewall isn't blocking ports
sudo ufw status
```
## Security Recommendations

- Change all default passwords immediately
- Use strong, unique passwords for:
  - n8n basic auth
  - LibreNMS admin user
  - Database passwords
  - Open WebUI admin account
- Network security:
  - Use a reverse proxy (Traefik, Nginx Proxy Manager)
  - Enable SSL/TLS certificates
  - Restrict access to trusted networks
  - Consider a VPN for remote access
- API security:
  - Generate strong API tokens
  - Rotate credentials periodically
  - Use read-only tokens when possible
## Maintenance Schedule

Daily (automated):

- Service polling and monitoring
- Alert processing
- Automatic discovery

Weekly:

- Review alerts and adjust thresholds
- Check service logs for errors
- Verify backups completed successfully

Monthly:

- Database optimization
- Review disk space usage
- Update containers (test in dev first)
- Audit user accounts and permissions

Quarterly:

- Full backup verification and restoration test
- Security audit
- Review and update documentation
- Clean up old data
## Getting Help

### Documentation

- Check the Wiki.js pages for detailed information
- Review container logs for error messages
- Search community forums for similar issues

### Useful Commands

```bash
# View all logs
docker-compose logs -f

# View a specific service
docker logs -f [container-name]

# Restart a single service
docker restart [container-name]

# Restart the entire stack
docker-compose -f [compose-file] restart

# Update containers
docker-compose -f [compose-file] pull
docker-compose -f [compose-file] up -d
```
## Next Steps

- ✅ Deploy AI stack
- ✅ Deploy LibreNMS
- ✅ Import documentation to Wiki.js
- ⬜ Configure integrations with existing services
- ⬜ Create first n8n workflow
- ⬜ Add network devices to LibreNMS
- ⬜ Set up automated backups
- ⬜ Create custom dashboards
## Support

For issues specific to:

- Ollama: https://github.com/ollama/ollama/issues
- Open WebUI: https://github.com/open-webui/open-webui/issues
- n8n: https://community.n8n.io
- LibreNMS: https://community.librenms.org

Last Updated: February 2025
Maintained By: Homelab Admin
License: MIT (for custom configurations)