Mirror of https://github.com/RWejlgaard/pez-docs.git, synced 2026-05-06 03:34:44 +00:00
# Monitoring

## What
Prometheus and Grafana for metrics collection and visualisation across the fleet.
## Where

Runs on london-a (FreeBSD 14.3), a dedicated monitoring host that is very lightly loaded (disk usage at 6%).
| Service | URL | Port |
|---|---|---|
| Grafana | grafana.pez.sh | 3000 |
| Prometheus | prometheus.pez.sh | 9090 |
Both are behind Authelia.
## Scrape Targets
Prometheus scrapes metrics from exporters running across the fleet. All connections are made over Tailscale.
Exporters in use:
- smartctl exporter — disk health metrics (london-b)
- prom-plex-exporter — Plex metrics (london-b)
- node_exporter — baseline host metrics (CPU, memory, disk, network) on various hosts
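As a sketch, the scrape setup above might look like the following `prometheus.yml` fragment. The job names, Tailscale hostnames, and the prom-plex-exporter port are assumptions for illustration, not the actual configuration:

```yaml
# Hypothetical prometheus.yml fragment; hostnames resolve via
# Tailscale MagicDNS. Ports for node_exporter (9100) and
# smartctl_exporter (9633) are the upstream defaults; the
# prom-plex-exporter port is assumed.
scrape_configs:
  - job_name: node
    static_configs:
      - targets:
          - london-a:9100
          - london-b:9100
          - copenhagen-a:9100
  - job_name: smartctl
    static_configs:
      - targets: ['london-b:9633']
  - job_name: plex
    static_configs:
      - targets: ['london-b:9594']   # port assumed
```

Because everything is scraped over the tailnet, no exporter ports need to be exposed publicly.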
## Dashboards
Grafana dashboards cover:
- Server health (CPU, memory, disk, network)
- ZFS pool status
- Disk SMART data
- Plex activity
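Dashboards like these are typically driven by queries along these lines; the expressions below are generic node_exporter / smartctl_exporter examples, not taken from the actual dashboards:

```promql
# CPU busy fraction per host, averaged over 5m
1 - avg by (instance) (rate(node_cpu_seconds_total{mode="idle"}[5m]))

# Root filesystem usage fraction
1 - node_filesystem_avail_bytes{mountpoint="/"}
  / node_filesystem_size_bytes{mountpoint="/"}

# Disks whose SMART status is not "passed"
smartctl_device_smart_status == 0
```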
## Status Page

status.pez.sh is a separate, lightweight status page that pulls availability data from Prometheus and shows 90 days of uptime history. Source: RWejlgaard/pez-status.
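A 90-day uptime figure is essentially the fraction of Prometheus `up` samples that were 1 over the window (Prometheus itself can compute this as `avg_over_time(up[90d]) * 100`). A minimal sketch of that reduction — hypothetical, not how pez-status is necessarily implemented:

```python
def uptime_percent(samples: list[int]) -> float:
    """Return uptime as a percentage, given a series of Prometheus
    `up` samples (1 = target scraped successfully, 0 = down)."""
    if not samples:
        return 0.0
    return 100.0 * sum(samples) / len(samples)

# e.g. 9 successful scrapes out of 10
print(uptime_percent([1, 1, 1, 1, 0, 1, 1, 1, 1, 1]))  # 90.0
```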