Mirror of https://github.com/RWejlgaard/pez-docs.git (synced 2026-05-06 03:34:44 +00:00)
Update docs to reflect current setup (March 2026)
- Add Hetzner Cloud location (helsinki-a, nuremberg-a)
- Update london-a to FreeBSD, london-b ZFS layout to 3x raidz1
- Note offline servers (london-c, copenhagen-b)
- Update Plex docs with accurate ZFS and exporter behaviour
- Add workload docs: Nextcloud AIO, Navidrome, slskd, Monitoring, Auth (Authelia/LLDAP/Bitwarden), Mail (poste.io), Gaming (Minecraft/MaNGOS)
- Update README/intro with current service and location index
Parent: 96d3d41f4c
Commit: 8e7269611d
11 changed files with 267 additions and 81 deletions

README.md
@@ -1,18 +1,28 @@
# Welcome to the Pez Docs Repository

This repository contains documentation for my homelab setup, which spans numerous servers around the world.

This repository contains documentation for my homelab setup, which spans servers across London, Copenhagen, and two Hetzner cloud locations (Helsinki and Nuremberg).

## Principles

My thought process and how different components of my homelab is setup are documented in the subfolder "principles". Each component has its own folder which contains why I've chosen to use it and how I'm using it.

My thought process and how different components of the homelab are set up are documented in the `principles` subfolder. Each component has its own folder covering why I chose it and how I use it.

## Workloads

All this thought and hardware has to run something for it to make sense. It would be ridiculous to only use it only to host a static webpage.

Details on what's actually running and what it's used for are in the `workloads` subfolder.

Details on what I'm actually running and what I use it for can be found under the subfolder "workloads".

Current workloads:

- [Plex](workloads/plex) — media server, \*arr stack, Transmission, Jellyseer
- [Nextcloud](workloads/nextcloud) — file sync, calendar, contacts, document editing
- [Navidrome](workloads/navidrome) — music streaming (+ slskd for Soulseek)
- [Monitoring](workloads/monitoring) — Prometheus, Grafana, exporters
- [Auth](workloads/auth) — Authelia, LLDAP, Bitwarden
- [Mail](workloads/mail) — poste.io self-hosted mail
- [Gaming](workloads/gaming) — Minecraft (PaperMC), WoW vanilla (MaNGOS Zero)

## Locations

My homelab setup spans two locations: London and Copenhagen. Each location has its own set of servers, which are documented in the "locations" folder.

The homelab spans four locations:

- [London](locations/london.md) — primary location, on-prem rack
- [Copenhagen](locations/copenhagen.md) — off-site secondary, on-prem at family
- [Hetzner Cloud](locations/hetzner.md) — cloud servers in Helsinki and Nuremberg

locations/copenhagen.md
@@ -1,18 +1,14 @@
# Copenhagen

This location is self-hosted and serves as my secondary location. The main firepower is located in London.

This location serves as my secondary off-site setup, hosted at my dad's place. The main firepower is in London; Copenhagen fills the gap for services that don't work well behind Cloudflare tunnels and benefit from a static IP.

I've set up a stack of servers at my Dad's place to work as an off-site location. These servers are not super powerful but they're excellent for hosting services that doesn't work with Cloudflare tunnels and require a static IP since my ISP in London likes to charge for that privilege.

At this location I have 3 servers setup, A, B and C. A and B are Lenovo "tiny" desktop computers. I really like these boxes. They're very compact at about the size of a lunchbox and power is provided via a normal ThinkPad charging brick. The last server is a Raspberry Pi 4.

I have some more thoughts about the hostname detailed in my principal page for [hostnames](../principles/hostnames)

Three servers: A, B, and C. Copenhagen-A and B are Lenovo ThinkCentre Tiny desktops — very compact, about the size of a lunchbox, powered by standard ThinkPad bricks. Copenhagen-C is a Raspberry Pi 4. Copenhagen-B is currently offline pending reinstall.

## Networking

There's not much to talk about here. Since it's not my house I'm not really at liberty to install a balls-to-the-wall networking setup. So I'm using the ISP provided router with each of the 3 servers connected directly to the built in switch in the router.

Nothing exotic here. Since it's not my house, I'm not at liberty to go all-out on networking. Each server plugs directly into the ISP-provided router via its built-in switch.

The connection is a symmetrical 500 Mbit. Plenty for hosting a few websites or other smaller services.

The connection is a symmetrical 500 Mbit. Plenty for hosting services.

## Hardware Specs

@@ -23,9 +19,10 @@
| CPU | Intel i5 4570T |
| vCPUs | 4 |
| Memory | 6 GB |
| GPU | On-board CPU |
| OS | Ubuntu |
| Boot Storage | 500 GB |
| Extra Storage | N/A |

Runs a Minecraft server (Docker) and a World of Warcraft vanilla server (MaNGOS Zero, running as systemd services). Mostly a fun/personal gaming server.

### copenhagen-b

@@ -34,9 +31,10 @@
| CPU | Intel i5 4570T |
| vCPUs | 4 |
| Memory | 6 GB |
| GPU | On-board CPU |
| OS | — (pending reinstall) |
| Boot Storage | 500 GB |
| Extra Storage | N/A |

Currently offline. Pending OS reinstall.

### copenhagen-c

@@ -45,6 +43,7 @@
| CPU | ARM Cortex-A72 |
| vCPUs | 4 |
| Memory | 8 GB |
| GPU | On-board CPU |
| OS | Linux |
| Boot Storage | 128 GB |
| Extra Storage | N/A |

Very lightly utilised at the moment.
locations/hetzner.md (new file, 28 lines)
@@ -0,0 +1,28 @@
# Hetzner Cloud

In contrast to the rest of the homelab which is entirely on-prem, I run two cloud servers on Hetzner for services that need a clean, reliable public IP — particularly the mail server (which really doesn't work well from residential addresses) and the traffic gateway.

Hetzner is my cloud provider of choice. Good prices, solid reliability, and datacenters in Germany and Finland.

Both servers are connected to the rest of the homelab via Tailscale, same as everything else.

## Servers

### helsinki-a

The main traffic gateway. All inbound HTTP traffic hits this server first and gets proxied where it needs to go via Caddy. Also runs the auth stack — Authelia, LLDAP, and Bitwarden.

Having the gateway on a cloud server with a clean IP keeps my home IP off DNS records and gives me flexibility to route traffic regardless of what's happening on-prem.

**Running services:**

- Caddy (reverse proxy)
- Authelia (SSO / authentication middleware)
- LLDAP (lightweight LDAP, used by Authelia as the user directory)
- Bitwarden (self-hosted password manager)

### nuremberg-a

Dedicated mail server. Running [poste.io](https://poste.io) in Docker, which bundles Postfix, Dovecot, and a web admin interface into a single container. Having mail on a Hetzner server with a proper PTR record and no residential IP baggage makes deliverability significantly easier.

**Running services:**

- poste.io (full mail stack: SMTP, IMAP, webmail, spam filtering)

locations/london.md
@@ -1,24 +1,18 @@
# London

This location is my primary one. It's hosted at my address in northwest London in my rack cabinet in my bedroom (doubles as a white noise machine for falling asleep).

This location is my primary one. It's hosted at my address in northwest London in a rack cabinet in my bedroom (doubles as a white noise machine for falling asleep).

It consists of 3 servers, A,B and C.

A and B are my old personal computers but since I don't play that many games anymore the servers are now retired into the coal mine that is their duty as servers.

The "C" server is a raspberry pi 4. I plan to replace this once raspberry pi 5's become generally available. As well as increase the amount to fill out the 1U of rack space that can contain up to 4 Pi's.

It consists of three servers — A, B, and C. A and B are repurposed personal gaming rigs. London-C is currently offline pending a reinstall.

## Networking

Networking is fairly overkill I have to admit. I'm using a Ubiquiti Dream Machine Special Edition as my router which gives my excellent routing performance compared to a normal ISP-provided router.

Networking is fairly overkill, I have to admit. I'm using a Ubiquiti Dream Machine Special Edition as my router, which gives me excellent routing performance compared to a normal ISP-provided router.

My ISP (BT) are charging me about £90 for a 1Gbit/300Mbit connection. This connection is fine, latency is a tad high at times for a supposedly "fiber" link.

My ISP (BT) charges around £90 for a 1 Gbit/300 Mbit connection.

I've been pretty lucky that my flat is equipped with Cat 5 cabling in the walls which is accessible via a patch panel in my utility closet. All cables in the wall are connected to a Ubiquiti switch which is mounted onto the wall of the closet.

My flat is equipped with Cat 5 cabling in the walls, accessible via a patch panel in the utility closet. All wall cables connect to a Ubiquiti switch mounted in that closet.

## Other bits

I was getting fed up having to load up grafana on my devices just to have a quick glance to check if all is doing okay so I invested in a refurbished tablet that I've stuck onto the side of my fridge in my living room. This way, I can sit in the couch and glance at various dashboards.

I've got a refurbished tablet stuck to the side of my fridge in the living room showing various Grafana dashboards — a much nicer way to check on things than loading up a browser.

## Hardware Specs

@@ -29,9 +23,10 @@
| CPU | Intel i7 4790K |
| vCPUs | 8 |
| Memory | 32 GB |
| GPU | On-board CPU |
| OS | FreeBSD |
| Boot Storage | 1 TB |
| Extra Storage | N/A |

london-a runs the monitoring stack — Prometheus and Grafana — on FreeBSD. The BSD base keeps it lean and stable for a machine that mostly just scrapes metrics and serves dashboards.

### london-b

@@ -41,8 +36,13 @@
| vCPUs | 64 |
| Memory | 64 GB |
| GPU | Nvidia GTX 980 |
| OS | Ubuntu |
| Boot Storage | 500 GB |
| Extra Storage | 96 TB |
| ZFS Pool | 87 TB raw (3× RAIDZ1, 4 disks each) |

london-b is the workhorse. It runs Plex Media Server natively, plus a stack of Docker services: the `*arr` suite, Nextcloud AIO, Navidrome, Jellyseer, slskd, and several Prometheus exporters.
Storage is a ZFS pool named `hdd` — three RAIDZ1 vdevs of four disks each (12 disks total), giving roughly 62 TB usable. Each vdev can tolerate one disk failure. Weekly scrubs run on Sundays.
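
As a rough sketch, a pool with that shape is created along these lines (device names are placeholders; the real pool would reference stable `/dev/disk/by-id` paths):

```sh
# Hypothetical layout sketch: 12 disks, three 4-disk RAIDZ1 vdevs.
zpool create hdd \
  raidz1 disk0 disk1 disk2  disk3  \
  raidz1 disk4 disk5 disk6  disk7  \
  raidz1 disk8 disk9 disk10 disk11
```
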

### london-c

@@ -51,6 +51,7 @@
| CPU | ARM Cortex-A72 |
| vCPUs | 4 |
| Memory | 8 GB |
| GPU | On-board CPU |
| OS | — (pending reinstall) |
| Boot Storage | 128 GB |
| Extra Storage | N/A |

Currently offline. Pending OS reinstall.
workloads/auth/README.md (new file, 34 lines)
@@ -0,0 +1,34 @@
# Authentication

## Overview

All web-facing services are protected by a unified auth stack running on `helsinki-a`. This gives SSO across everything without having to configure per-service authentication.

## Stack

### Authelia

Authelia is the authentication and authorization gateway. It sits in front of services proxied by Caddy and handles:

- Username/password login
- Two-factor authentication (TOTP)
- Per-service access control rules

### LLDAP

LLDAP (Lightweight LDAP) is the user directory Authelia uses for authentication. It's simpler and easier to manage than a full OpenLDAP install, while still being compatible with anything that speaks LDAP.

All user management goes through LLDAP's web interface.

### Bitwarden (Vaultwarden)

Self-hosted Bitwarden running on `helsinki-a`. It stores all passwords and is accessed via the official Bitwarden clients across devices.

## Flow

1. User hits a subdomain (e.g. `grafana.pez.sh`)
2. Cloudflare routes traffic to `helsinki-a`
3. Caddy receives the request and forwards it to Authelia middleware
4. Authelia checks if the user has a valid session
5. If not, redirect to the Authelia login portal (which authenticates against LLDAP)
6. Once authenticated, Caddy proxies the request to the actual backend service (which may be on any server in the homelab)
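
As a sketch, the per-service wiring in the Caddyfile looks roughly like this (hostnames, ports, and the Authelia verification endpoint are illustrative; the exact `uri` depends on the Authelia version):

```caddyfile
grafana.pez.sh {
    # Ask Authelia whether the request carries a valid session.
    forward_auth localhost:9091 {
        uri /api/verify?rd=https://auth.pez.sh/
        copy_headers Remote-User Remote-Groups Remote-Name Remote-Email
    }
    # Only reached once Authelia approves; the backend can sit on any
    # Tailscale host in the homelab.
    reverse_proxy london-a:3000
}
```
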
workloads/gaming/README.md (new file, 18 lines)
@@ -0,0 +1,18 @@
# Gaming Servers

Both gaming servers run on `copenhagen-a` — the off-site location in Copenhagen. It's a good fit: these services don't need the raw storage or horsepower of the London machines, and having them in Copenhagen gives them a separate network and power source.

## Minecraft

A PaperMC server running in Docker (`marctv/minecraft-papermc-server`).

PaperMC is a performance-focused Minecraft server fork. It's well-maintained and handles the plugin ecosystem cleanly.
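
A minimal sketch of how that container might be launched (memory size and data path are assumptions, not the actual values):

```sh
# Illustrative invocation; volume path and memory are placeholders.
docker run -d --name minecraft \
  --restart unless-stopped \
  -p 25565:25565 \
  -e MEMORYSIZE=4G \
  -v /opt/minecraft:/data \
  marctv/minecraft-papermc-server:latest
```
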

## World of Warcraft — MaNGOS Zero

A vanilla WoW server running as two systemd services:

- `mangos-realmd` — the realm/authentication server
- `mangos-world` — the world/game server

MaNGOS Zero implements the original 1.12 vanilla WoW server. Running it as native systemd services rather than Docker keeps things simple — the MaNGOS project provides good Linux packaging support and systemd integration.
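
A unit file for the realm daemon would look roughly like this (paths, user, and flags are assumptions; adapt to the actual install):

```ini
# /etc/systemd/system/mangos-realmd.service (illustrative sketch)
[Unit]
Description=MaNGOS Zero realm/auth daemon
After=network-online.target

[Service]
User=mangos
WorkingDirectory=/opt/mangos
ExecStart=/opt/mangos/bin/realmd -c /opt/mangos/etc/realmd.conf
Restart=on-failure

[Install]
WantedBy=multi-user.target
```
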
workloads/mail/README.md (new file, 24 lines)
@@ -0,0 +1,24 @@
# Mail

## Setup

Self-hosted mail running on `nuremberg-a` (Hetzner, Germany) via [poste.io](https://poste.io) in Docker.

poste.io is an all-in-one mail server that bundles:

- Postfix (SMTP)
- Dovecot (IMAP)
- Rspamd (spam filtering)
- ClamAV (virus scanning)
- A web admin interface
- Webmail (Roundcube)
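
Deployment boils down to a single container along these lines (data path and hostname are placeholders; the port list covers SMTP, submission, IMAP, POP3, and the web UI):

```sh
# Illustrative sketch; mount point and hostname are assumptions.
docker run -d --name mailserver \
  --restart unless-stopped \
  -h mail.pez.sh \
  -p 25:25 -p 465:465 -p 587:587 \
  -p 143:143 -p 993:993 -p 110:110 -p 995:995 \
  -p 80:80 -p 443:443 \
  -v /opt/poste:/data \
  analogic/poste.io
```
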

## Why a Cloud Server for Mail?

Running mail from a residential IP is a recipe for deliverability problems. Most major mail providers will either reject or silently drop mail from residential addresses. Hetzner gives a clean datacenter IP with a proper PTR record, which makes a significant difference.

`nuremberg-a` exists almost entirely to host the mail server. It's a low-resource machine for the purpose.

## Domain

Mail is set up for `pez.sh`. SPF, DKIM, and DMARC records are managed in Cloudflare DNS through Terraform.
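
The records involved look roughly like this (values are illustrative; the DKIM key pair is generated in poste.io's admin UI, and the DMARC policy shown is a plausible choice rather than the live one):

```txt
pez.sh.         IN  MX   10 mail.pez.sh.
pez.sh.         IN  TXT  "v=spf1 mx ~all"
_dmarc.pez.sh.  IN  TXT  "v=DMARC1; p=quarantine; rua=mailto:postmaster@pez.sh"
```
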
workloads/monitoring/README.md (new file, 23 lines)
@@ -0,0 +1,23 @@
# Monitoring

## Stack

The monitoring stack runs on `london-a` — a FreeBSD machine dedicated to observability. The choice of FreeBSD here is deliberate: it's lightweight, stable, and well-suited for a machine whose job is to just sit there and watch things.

- **Prometheus** — scrapes metrics from all servers and services
- **Grafana** — dashboards and visualisation
- **node_exporter** — system metrics on each Linux/FreeBSD server
- **smartctl_exporter** — disk health metrics from `london-b` (Docker)
- **prom-plex-exporter** — Plex session and library metrics from `london-b` (Docker)

## What Gets Scraped

All servers in the homelab run `node_exporter` and are reachable by Prometheus via Tailscale. Prometheus scrapes each target over the Tailscale network, so nothing needs a public port.
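
A scrape job in `prometheus.yml` would look something like this (the target names are illustrative Tailscale MagicDNS hostnames):

```yaml
scrape_configs:
  - job_name: node
    static_configs:
      # Targets resolve over the tailnet, so no public ports are needed.
      - targets:
          - london-a:9100
          - london-b:9100
          - copenhagen-a:9100
          - helsinki-a:9100
          - nuremberg-a:9100
```
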

## Dashboards

Grafana is accessible via Cloudflare tunnel + Authelia for SSO. There's also a refurbished tablet mounted on the fridge in the living room showing a few key dashboards — a quick way to see if everything is healthy without opening a browser.

## Alerting

Not yet configured. This is a gap worth filling.
workloads/navidrome/README.md (new file, 17 lines)
@@ -0,0 +1,17 @@
# Navidrome

## What

Navidrome is a self-hosted music streaming server. It's compatible with the Subsonic API, so it works with most existing Subsonic clients across all platforms.

## Setup

Running on `london-b` via Docker. Music library lives on the ZFS pool (`hdd`), same storage as everything else on that machine.
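
A minimal sketch of the container (`deluan/navidrome` is the upstream image; the host paths under the pool are assumptions):

```sh
# Illustrative sketch; host paths are placeholders.
docker run -d --name navidrome \
  --restart unless-stopped \
  -p 4533:4533 \
  -v /hdd/music:/music:ro \
  -v /hdd/navidrome/data:/data \
  deluan/navidrome:latest
```
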

## Why Not Plex for Music?

Plex handles music reasonably but it's primarily built for video. Navidrome is purpose-built for music, has a nicer web UI for audio browsing, and the Subsonic API compatibility gives a lot of client flexibility.

## Companion: slskd

**slskd** also runs on `london-b` — it's a .NET-based Soulseek client that runs as a Docker container with a web UI. Used for finding and downloading music that isn't easily available through other means. The downloaded music feeds directly into the Navidrome library.
workloads/nextcloud/README.md (new file, 28 lines)
@@ -0,0 +1,28 @@
# Nextcloud

## Why

Nextcloud is my self-hosted alternative to Google Drive / iCloud. It handles file sync across devices, calendar, contacts, and a few other things.

## Setup

Running **Nextcloud AIO** (All-In-One) on `london-b` via Docker. AIO bundles Nextcloud itself with all the supporting services into a managed stack:

- Nextcloud (PHP-FPM)
- PostgreSQL
- Redis
- Elasticsearch (full-text search)
- Collabora Office (online document editing)
- Imaginary (image processing)
- Notify Push (real-time push notifications)
- Whiteboard

The AIO master container manages updates and health for all of these, which keeps maintenance overhead low.
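
Bootstrapping AIO amounts to starting the master container and letting it pull up the rest. A sketch based on the upstream instructions (`APACHE_PORT` is the usual knob when Nextcloud sits behind an external reverse proxy, as it does here behind Caddy):

```sh
# Illustrative sketch; consult the AIO docs for the exact flags in use.
docker run -d \
  --name nextcloud-aio-mastercontainer \
  --restart always \
  -p 8080:8080 \
  -e APACHE_PORT=11000 \
  -v nextcloud_aio_mastercontainer:/mnt/docker-aio-config \
  -v /var/run/docker.sock:/var/run/docker.sock:ro \
  nextcloud/all-in-one:latest
```
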

## Storage

Files live on `london-b`'s ZFS pool (`hdd`), giving plenty of room to grow. ZFS gives checksumming and integrity verification essentially for free, which is nice to have under a file sync service.

## Access

Exposed via Caddy on `helsinki-a` through a Cloudflare tunnel, with Authelia SSO in front.
workloads/plex/README.md
@@ -2,28 +2,32 @@
## History

My Plex server is what started my homelab. I had two 3 TB drives laying around so I thought I should try out setting up a plex server.

My Plex server is what started my homelab. I had two 3 TB drives laying around so I set up a Plex server on a Proxmox host — the hardware that would later become `london-b`. I thought I was being clever by using separate VMs for each function with NFS for central storage, but NFS I/O limitations made themselves known pretty fast whenever I was downloading and streaming simultaneously.

This was hosted on a single Proxmox server on the hardware that would later turn into `london-a`. I thought I was being smart about the way I had it set up, using seperate VMs for each function of the setup with a VM serving the central storage over NFS.

I hadn't thought about the limitations of NFS when I set it up and I would often find that if I was downloading media while streaming I would reach the limits of NFS I/O.

Once I got hold of 3 new hard drives of 8 TBs (24 TB striped capacity). I bit the bullet and installed the OS on the bare metal which leads us to the current setup.

Once I picked up three new 8 TB drives, I scrapped Proxmox and installed directly on bare metal.

## Current Setup

My current plex setup is running on my `london-b` server. The server is rediculously overpowered as a media server, it's equipped with a Threadripper CPU and an Nvidia GTX 980.

Plex runs natively (not in Docker) on `london-b`. The machine is absurdly overpowered for a media server — Threadripper CPU, 64 GB RAM, and an Nvidia GTX 980 for GPU transcoding. The CPU can transcode plenty fast on its own; the GPU is just there.

The GPU helps a bit with transcoding while streaming but the CPU can easily transcode plenty fast by itself.

### Storage

The storage is directly attached to the motherboard and my three 8 TB drives are striped to maximize the usable storage. I don't really care if I loose a disk, since it's only movies and TV shows anyway. Although, it would suck having to re-download everything.

Storage is a ZFS pool named `hdd` — three RAIDZ1 vdevs with four disks each (12 disks total). Each vdev can tolerate one disk failure, so the pool can survive up to three concurrent disk failures as long as they're spread across different vdevs. Total usable capacity is around 62 TB; raw capacity is 87 TB.

I use the so-called `*arr` stack. Radarr, Sonarr & Prowlarr for movies, TV shows and trackers respectively.

Weekly scrubs run on Sundays to catch any silent corruption.
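
Schedule-wise that is just a weekly cron entry (or the distro's scrub timer) along the lines of:

```sh
# Hypothetical crontab line: scrub the pool at 04:00 every Sunday.
0 4 * * 0  /usr/sbin/zpool scrub hdd
```
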

For my download client I first went with Deluge which I **not** like. It was slow and sluggish, constantly corrupting downloads and not cleaning up after itself.

### The \*arr Stack

So I'm now using Transmission, which is brilliant. It's so good I'm able to have 100 active torrents at once!

Media management runs on the standard `*arr` stack:

## Future upgrades

- **Radarr** — movie management
- **Sonarr** — TV show management
- **Prowlarr** — tracker/indexer aggregation

I'm planning a rather large purchase to expand my raid array with 21 additional disks which would bring my total capacity to 192 TB (this will not be striped).

For downloads, I use **Transmission**. It's reliable, fast, and handles high concurrency well — easily sustaining 100+ active torrents without complaint. Previously I used Deluge, which was slow, corrupted downloads regularly, and didn't clean up after itself.
**Jellyseer** provides a clean request interface for adding new movies and shows.
### Prometheus Exporter
`prom-plex-exporter` runs as a Docker container and exposes Plex metrics to Prometheus. It connects to Plex via websocket and exits cleanly when the connection closes (expected behavior); Docker's `unless-stopped` restart policy handles reconnection automatically.
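
The relevant bit is just the restart policy; something like the following (the image name and required environment are placeholders for whatever the exporter actually expects):

```sh
# Illustrative sketch: the exporter exits when the Plex websocket closes,
# and Docker immediately restarts it, re-establishing the connection.
docker run -d --name prom-plex-exporter \
  --restart unless-stopped \
  -e PLEX_TOKEN=<token> \
  prom-plex-exporter
```
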