My Distributed Mini Homelab: Architecture Overview
Introduction
This post documents my current home lab setup: a distributed system built from repurposed hardware, running entirely on my local network. No cloud dependencies, no subscriptions, full control.
Hardware Inventory
Primary Server: Raspberry Pi 5
- CPU: Broadcom BCM2712 (Quad-core Cortex-A76)
- RAM: 8GB LPDDR4X
- Storage: 128GB NVMe SSD (via PCIe HAT)
- Power: ~5-12W idle
- Role: Main application server, OpenClaw host
Secondary Server: Lenovo K20
- CPU: Intel Core i3-5010U @ 2.1GHz (2C/4T)
- RAM: 8GB DDR3
- Storage: 60GB SSD
- Power: ~15-25W idle
- Role: Media server (Jellyfin + qBittorrent)
- Origin: Broken laptop (cracked screen) → headless server
Network Infrastructure
- Main Router: ChinaNet (ISP-provided)
- Secondary Router: Redmi Router (room router)
- Proxy Gateway: Phicomm N1 (OpenWrt + Clash)
- Network Monitoring: Uptime Kuma, custom scripts
Network Topology
Service Distribution
Raspberry Pi 5 Services
- OpenClaw Gateway - AI assistant backend (port 8080)
- Control Center UI - Dashboard (port 3456)
- ClawLibrary - Web interface (port 5173)
- Proxy Monitor - Auto-failover script (every 2 min)
- Gateway Self-Heal - Auto-recovery system
- Browser Automation - Chromium for web tasks
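The proxy monitor's core check can be sketched as a small script. This is a hypothetical reconstruction, not the actual `proxy-failover-env.sh`: the proxy address, test URL, and fallback action are all assumptions.

```shell
#!/bin/sh
# Hypothetical sketch of a proxy auto-failover check: probe the proxy
# with a tiny request and log the result. The proxy address and test
# URL below are assumptions, not the actual configuration.
PROXY="http://192.168.51.1:7890"              # assumed Clash HTTP port on the N1
TEST_URL="http://www.gstatic.com/generate_204"
LOG="$HOME/workspace/proxy-failover.log"
mkdir -p "$(dirname "$LOG")"

if curl -s --max-time 5 -x "$PROXY" -o /dev/null "$TEST_URL"; then
    echo "$(date -Is) proxy OK" >> "$LOG"
else
    echo "$(date -Is) proxy DOWN, falling back to direct" >> "$LOG"
    # e.g. clear proxy env vars or call OpenClash's API here
fi
```

The two-minute schedule mentioned above would come from cron, e.g. a crontab line like `*/2 * * * * /path/to/proxy-failover-env.sh`.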
K20 Server Services
- Jellyfin - Media server (port 8096)
- qBittorrent - Download client (port 8080)
- Debian 13 - Base OS (headless)
N1 (Phicomm) Services
- OpenWrt - Custom firmware
- OpenClash - Proxy management
- Clash Dashboard - Web UI (port 9090)
Key Design Decisions
1. Distributed Architecture
Instead of putting everything on one machine, I split services based on hardware strengths:
- Pi5: Low-power, always-on application server
- K20: x86_64 compatibility for media transcoding
- N1: Dedicated network gateway (isolated from services)
2. No Single Point of Failure
If one server goes down, others continue operating:
- Pi5 down → Media still available on K20
- K20 down → AI assistant still works on Pi5
- N1 down → Local network still functional (direct router access)
3. Local-First, Cloud-Optional
All core services run locally. Cloud is only used for:
- External access (via VPN, not port forwarding)
- Software updates
- Backup storage (encrypted)
4. Automation Over Manual Management
Self-healing systems reduce maintenance burden:
- Proxy auto-failover (every 2 minutes)
- Gateway auto-recovery (detects and fixes common errors)
- Browser cleanup (kills stuck processes)
- Health monitoring with auto-restart
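The health-monitoring-with-auto-restart pattern can be sketched in a few lines. This is a minimal illustration, assuming an HTTP health endpoint and a user-level systemd unit; the endpoint path and service name are hypothetical.

```shell
#!/bin/sh
# Hypothetical health-check-with-auto-restart loop: the /health
# endpoint and the "openclaw-gateway" unit name are assumptions.
URL="http://127.0.0.1:8080/health"
LOG="$HOME/workspace/openclaw-self-heal.log"
mkdir -p "$(dirname "$LOG")"

if curl -sf --max-time 5 "$URL" > /dev/null 2>&1; then
    echo "$(date -Is) gateway healthy" >> "$LOG"
else
    echo "$(date -Is) gateway unhealthy, restarting" >> "$LOG"
    # Restart however the service is managed; ignore errors if absent.
    systemctl --user restart openclaw-gateway 2>/dev/null || true
fi
```

Logging both outcomes (healthy and unhealthy) makes the monthly log reviews mentioned later much easier, since gaps in the log then indicate the monitor itself stopped running.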
Monitoring Stack
Uptime Kuma
- Location: Pi5
- Access: http://192.168.51.74:3001
- Monitors: 10+ services across all servers
Custom Scripts
- proxy-failover-env.sh - Tests and switches proxies
- openclaw-self-heal.sh - Gateway recovery
- cloudflared-health.sh - Tunnel monitoring
- browser-cleanup.sh - Browser lifecycle management
Logging
# Centralized log locations
~/workspace/proxy-failover.log
~/workspace/openclaw-self-heal.log
~/workspace/cloudflared-health.log
~/workspace/browser-cleanup.log
~/workspace/heartbeat.log
Power Consumption
Total idle power: ~30-47W (sum of the components below)
- Pi5: 5-12W
- K20: 15-25W
- N1: ~5W
- Router: ~5W
Monthly electricity cost (at local rates): approximately ¥30-50/month for 24/7 operation.
Security Considerations
Network Isolation
- All servers on a dedicated subnet (192.168.51.0/24)
- SSH restricted to local network only
- No port forwarding to internal services
Access Control
- SSH key-only authentication
- Service-specific user accounts
- Firewall rules (UFW) on each server
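The UFW policy described above might look like the following on a Debian-based host. This is a sketch based on the subnet and ports mentioned in this post (Jellyfin shown as the example service); run as root.

```shell
# Default-deny inbound, allow all outbound.
ufw default deny incoming
ufw default allow outgoing

# SSH only from the local subnet (no key = no login anyway).
ufw allow from 192.168.51.0/24 to any port 22 proto tcp

# Service ports, also restricted to the LAN (Jellyfin as an example).
ufw allow from 192.168.51.0/24 to any port 8096 proto tcp

ufw enable
```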
Regular Audits
- Weekly security update checks
- Monthly log reviews
- Quarterly password rotations
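The weekly update check can be reduced to a small script that records how many packages have pending upgrades. This is a hypothetical helper (the log filename is my invention, mirroring the others above); it assumes a Debian-based host.

```shell
#!/bin/sh
# Hypothetical weekly audit helper: count packages with pending
# upgrades and append the result to a log for the weekly review.
LOG="$HOME/workspace/update-check.log"
mkdir -p "$(dirname "$LOG")"

if command -v apt-get >/dev/null 2>&1; then
    apt-get update -qq 2>/dev/null
    # Simulated upgrade; count the "Inst" lines (one per pending package).
    PENDING=$(apt-get -s upgrade 2>/dev/null | grep -c '^Inst')
else
    PENDING="n/a"
fi
echo "$(date -Is) pending updates: $PENDING" >> "$LOG"
```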
Lessons Learned
What Worked Well
- Repurposed hardware is cost-effective - The K20 laptop-to-server conversion saved ¥2000+ vs buying new.
- Distribution improves reliability - When one service crashes, others keep running.
- Automation is essential - Self-healing scripts catch issues before they become problems.
- Documentation matters - Writing these posts helped me remember why I made certain decisions.
What I'd Do Differently
- Start with VLANs - Network segmentation should have been day-one, not an afterthought.
- Centralized logging - Currently logs are scattered across servers. A proper ELK stack would help.
- Containerize everything - Docker would make migrations and backups easier.
- Better backup strategy - Currently ad-hoc. Need automated, versioned, offsite backups.
Future Plans
- Add NAS storage for centralized file serving
- Implement proper backup rotation (3-2-1 rule)
- Set up Home Assistant for smart home integration
- Experiment with Kubernetes (k3s) for service orchestration
- Add solar power monitoring (eventually go off-grid)
Conclusion
This mini homelab has been a learning journey: part infrastructure, part experimentation, part practical utility. It's not perfect, but it's mine, and it works.
The best part? Everything here can be rebuilt from scratch with the knowledge documented in these posts. That's real ownership.