Compare commits


2 commits

8d4e218455
Merge pull request #2 from RWejlgaard/samantha/document-setup
Update documentation for current setup
2026-03-07 15:19:22 +00:00
4508f740e5 Update documentation for current setup
- Update locations/london.md with current server state (london-a FreeBSD monitoring, london-b Ubuntu/ZFS storage, london-c offline)
- Update locations/copenhagen.md with current servers (copenhagen-a Minecraft+WoW, copenhagen-b offline, copenhagen-c general)
- Add locations/helsinki.md documenting helsinki-a (Caddy gateway, Authelia, Bitwarden, LDAP) and nuremberg-a (mail/poste.io)
- Add workloads/jellyfin, navidrome, nextcloud, arr-stack, minecraft, wow, mail, monitoring, bitwarden, authelia
- Add principles/zfs documenting the london-b ZFS pool setup
- Add principles/caddy documenting the reverse proxy setup on helsinki-a
- Update introduction.md to mention all locations
2026-03-04 09:41:57 +00:00
16 changed files with 444 additions and 65 deletions

introduction.md

@@ -14,5 +14,5 @@ Details on what I'm actually running and what I use it for can be found under th
 ## Locations
-My homelab setup spans two locations: London and Copenhagen. Each location has its own set of servers, which are documented in the "locations" folder.
+My homelab setup spans physical locations in London and Copenhagen, plus Hetzner Cloud servers in Helsinki and Nuremberg. All servers are connected via Tailscale. Each location is documented in the "locations" folder.

locations/copenhagen.md

@@ -2,49 +2,49 @@
 This location is self-hosted and serves as my secondary location. The main firepower is located in London.
-I've set up a stack of servers at my Dad's place to work as an off-site location. These servers are not super powerful but they're excellent for hosting services that doesn't work with Cloudflare tunnels and require a static IP since my ISP in London likes to charge for that privilege.
+I've set up a stack of servers at my Dad's place to work as an off-site location. These servers are not super powerful but they're good for hosting services that need a static IP — my ISP in London charges for that privilege.
-At this location I have 3 servers setup, A, B and C. A and B are Lenovo "tiny" desktop computers. I really like these boxes. They're very compact at about the size of a lunchbox and power is provided via a normal ThinkPad charging brick. The last server is a Raspberry Pi 4.
+At this location I have 3 servers: A, B, and C. A and B are Lenovo "tiny" desktop computers — compact, about the size of a lunchbox, powered by a standard ThinkPad charging brick. C is a Debian general-purpose box.
 I have some more thoughts about the hostname detailed in my principal page for [hostnames](../principles/hostnames)
 ## Networking
-There's not much to talk about here. Since it's not my house I'm not really at liberty to install a balls-to-the-wall networking setup. So I'm using the ISP provided router with each of the 3 servers connected directly to the built in switch in the router.
-The connection is a symmetrical 500 Mbit. Plenty for hosting a few websites or other smaller services.
+Since it's not my house I'm not really at liberty to install a serious networking setup. Using the ISP-provided router with each server connected directly to its built-in switch. The connection is a symmetrical 500 Mbit — plenty for what's running here.
-## Hardware Specs
+## Servers
 ### copenhagen-a
-|Component|Value|
-|---|---|
-|CPU|Intel i5 4570T|
-|vCPUs|4|
-|Memory|6 GB|
-|GPU|On-board CPU|
-|Boot Storage|500 GB|
-|Extra Storage|N/A|
+Game servers host. Runs Ubuntu 22.04. Tailscale IP: 100.89.206.60. Disk at 26%.
+Docker workloads:
+- Minecraft (marctv/minecraft-papermc-server)
+Native (systemd) workloads:
+- MaNGOS Zero WoW server (`mangos-realmd` + `mangos-world` services)
+  - Running as the `mangos` user from `/home/mangos/mangos/zero/`
+- MariaDB for WoW databases
+**Hardware:**
+| Component | Value |
+|---|---|
+| CPU | Intel i5 4570T |
+| vCPUs | 4 |
+| Memory | 16 GB |
+| Boot Storage | 500 GB |
 ### copenhagen-b
-|Component|Value|
-|---|---|
-|CPU|Intel i5 4570T|
-|vCPUs|4|
-|Memory|6 GB|
-|GPU|On-board CPU|
-|Boot Storage|500 GB|
-|Extra Storage|N/A|
+Offline. Pending reinstall.
 ### copenhagen-c
-|Component|Value|
-|---|---|
-|CPU|ARM Cortex-A72|
-|vCPUs|4|
-|Memory|8 GB|
-|GPU|On-board CPU|
-|Boot Storage|128 GB|
-|Extra Storage|N/A|
+General purpose, lightly used. Runs Debian 12. Tailscale IP: 100.115.45.53. Disk at 15%.
+No active workloads at this time.
+**Hardware:**
+| Component | Value |
+|---|---|
+| Boot Storage | 117 GB |

locations/helsinki.md (new file)

@@ -0,0 +1,51 @@
# Helsinki / Nuremberg
These are my Hetzner Cloud servers — the public-facing edge of the infrastructure.
## Servers
### helsinki-a
Primary public-facing server. Runs Ubuntu/Debian on Hetzner Cloud. Tailscale IP: 100.67.6.27. Uptime: 182+ days. Disk at ~50%.
This is the traffic gateway for everything exposed to the internet. All public subdomains terminate here via Caddy, which proxies traffic back to the appropriate server over Tailscale.
Runs:
- Caddy (reverse proxy — see [principles/caddy](../principles/caddy))
- Authelia (SSO — see [workloads/authelia](../workloads/authelia))
- Bitwarden (self-hosted — see [workloads/bitwarden](../workloads/bitwarden))
- LDAP (user directory, used by Authelia)
### nuremberg-a
Dedicated mail server. Runs Debian on Hetzner Cloud. Tailscale IP: 100.117.235.28. Disk at ~25%.
Runs:
- poste.io (full mail stack in Docker)
Handles inbound and outbound mail for pez.sh. DNS records (MX, SPF, DKIM, DMARC) managed via Cloudflare.
## Public Services
All subdomains are DNS-proxied through Cloudflare and terminate at helsinki-a. Traffic is forwarded over Tailscale to the appropriate backend server.
| Subdomain | Backend | Auth |
|---|---|---|
| auth.pez.sh | helsinki-a:9091 | — |
| bitwarden.pez.sh | helsinki-a:8443 | — |
| status.pez.sh | helsinki-a:/srv/status | — |
| apps.pez.sh | helsinki-a:/srv/apps | Authelia |
| grafana.pez.sh | london-a:3000 | Authelia |
| prometheus.pez.sh | london-a:9090 | Authelia |
| jellyfin.pez.sh | london-b:8096 | — |
| plex.pez.sh | london-b:32400 | — |
| request.pez.sh | london-b:5055 | — |
| cloud.pez.sh | london-b:11000 | — |
| music.pez.sh | london-b:4533 | — |
| radarr.pez.sh | london-b:7878 | Authelia |
| sonarr.pez.sh | london-b:8989 | Authelia |
| lidarr.pez.sh | london-b:8686 | Authelia |
| readarr.pez.sh | london-b:8787 | Authelia |
| prowlarr.pez.sh | london-b:9696 | Authelia |
| soulseek.pez.sh | london-b:5030 | Authelia |
| download.pez.sh | london-b:9091 | Authelia |

locations/london.md

@@ -2,55 +2,70 @@
 This location is my primary one. It's hosted at my address in northwest London in my rack cabinet in my bedroom (doubles as a white noise machine for falling asleep).
-It consists of 3 servers, A,B and C.
+It consists of 3 servers: A, B, and C.
-A and B are my old personal computers but since I don't play that many games anymore the servers are now retired into the coal mine that is their duty as servers.
-The "C" server is a raspberry pi 4. I plan to replace this once raspberry pi 5's become generally available. As well as increase the amount to fill out the 1U of rack space that can contain up to 4 Pi's.
+A and B are my old personal computers retired into server duty. london-a is now running FreeBSD as a dedicated monitoring host. london-b is the workhorse — primary storage and media server.
+london-c is currently offline and pending reinstall.
 ## Networking
-Networking is fairly overkill I have to admit. I'm using a Ubiquiti Dream Machine Special Edition as my router which gives my excellent routing performance compared to a normal ISP-provided router.
+Networking is fairly overkill I have to admit. I'm using a Ubiquiti Dream Machine Special Edition as my router which gives me excellent routing performance compared to a normal ISP-provided router.
-My ISP (BT) are charging me about £90 for a 1Gbit/300Mbit connection. This connection is fine, latency is a tad high at times for a supposedly "fiber" link.
-I've been pretty lucky that my flat is equipped with Cat 5 cabling in the walls which is accessible via a patch panel in my utility closet. All cables in the wall are connected to a Ubiquiti switch which is mounted onto the wall of the closet.
+My ISP (BT) charges me about £90 for a 1Gbit/300Mbit connection. All servers are connected via Cat 5 cabling in the walls, accessible through a patch panel in the utility closet, connected to a Ubiquiti switch.
-## Other bits
-I was getting fed up having to load up grafana on my devices just to have a quick glance to check if all is doing okay so I invested in a refurbished tablet that I've stuck onto the side of my fridge in my living room. This way, I can sit in the couch and glance at various dashboards.
-## Hardware Specs
+## Servers
 ### london-a
-|Component|Value|
-|---|---|
-|CPU|Intel i7 4790K|
-|vCPUs|8|
-|Memory|32 GB|
-|GPU|On-board CPU|
-|Boot Storage|1 TB|
-|Extra Storage|N/A|
+Monitoring server. Runs FreeBSD 14.3. Very lightly loaded — disk at 6%.
+Runs:
+- Prometheus (metrics collection)
+- Grafana (dashboards)
+Accessible at:
+- grafana.pez.sh (behind Authelia)
+- prometheus.pez.sh (behind Authelia)
+**Hardware:**
+| Component | Value |
+|---|---|
+| CPU | Intel i7 4790K |
+| vCPUs | 8 |
+| Memory | 32 GB |
+| Boot Storage | 1 TB |
 ### london-b
-|Component|Value|
-|---|---|
-|CPU|Threadripper 3970X|
-|vCPUs|64|
-|Memory|64 GB|
-|GPU|Nvidia GTX 980|
-|Boot Storage|500 GB|
-|Extra Storage|96 TB|
+Primary storage and media server. Runs Ubuntu 24.04. Tailscale IP: 100.84.65.101.
+ZFS pool `hdd`: 3× RAIDZ1 arrays (8 drives total). 46T used / 18T free / 64T total. Weekly scrub on Sundays. Disk at 72%.
+Docker workloads:
+- Nextcloud AIO (cloud.pez.sh)
+- Jellyfin (jellyfin.pez.sh)
+- Plex (plex.pez.sh)
+- Radarr, Sonarr, Lidarr, Readarr, Prowlarr (arr stack)
+- Transmission (download.pez.sh)
+- Navidrome (music.pez.sh)
+- Jellyseerr / Overseerr (request.pez.sh)
+- slskd / Soulseek (soulseek.pez.sh)
+- smartctl exporter
+- prom-plex-exporter
+**Hardware:**
+| Component | Value |
+|---|---|
+| CPU | Threadripper 3970X |
+| vCPUs | 64 |
+| Memory | 64 GB |
+| GPU | Nvidia GTX 980 |
+| Boot Storage | 500 GB |
+| ZFS Storage | ~64 TB usable |
 ### london-c
-|Component|Value|
-|---|---|
-|CPU|ARM Cortex-A72|
-|vCPUs|4|
-|Memory|8 GB|
-|GPU|On-board CPU|
-|Boot Storage|128 GB|
-|Extra Storage|N/A|
+Offline. Pending reinstall.

principles/caddy/README.md (new file)

@@ -0,0 +1,37 @@
# Caddy
## Why
Caddy is my reverse proxy of choice. It handles TLS termination automatically via Let's Encrypt — no manual certificate management, no certbot cron jobs, no renewals to think about. You write a Caddyfile, point it at a subdomain, and TLS just works.
Compared to Nginx, the config is far less verbose. A reverse proxy block that takes 20 lines in Nginx takes 4 in Caddy.
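For a sense of scale, a complete Caddyfile site block for an unauthenticated service is just a hostname and a reverse_proxy directive. The subdomain and backend port below are made-up placeholders, not taken from the real config:
```
example.pez.sh {
    # Caddy fetches and renews the TLS certificate for this hostname automatically
    reverse_proxy london-b:8080
}
```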
## Where
Runs on **helsinki-a**, which is the public-facing edge server. All traffic from the internet hits helsinki-a first, then Caddy forwards it over Tailscale to the appropriate backend.
## How It Works
All public subdomains (pez.sh, pez.solutions) are DNS-proxied through Cloudflare. Cloudflare terminates the external TLS and forwards traffic to helsinki-a. Caddy then handles routing to the correct backend.
Backends are addressed by Tailscale IP or hostname — no need to open ports between servers on the public internet.
## Authelia Integration
For protected services, Caddy uses a `forward_auth` directive that calls Authelia before proxying the request. If the user isn't authenticated, Caddy redirects them to auth.pez.sh.
Example Caddyfile block:
```
radarr.pez.sh {
    forward_auth helsinki-a:9091 {
        uri /api/verify?rd=https://auth.pez.sh
        copy_headers Remote-User Remote-Groups Remote-Name Remote-Email
    }
    reverse_proxy london-b:7878
}
```
## TLS
Caddy obtains and renews certificates automatically via ACME (Let's Encrypt). No manual intervention required.

principles/zfs/README.md (new file)

@@ -0,0 +1,34 @@
# ZFS
## Why
london-b is my primary storage server with 8 spinning disks. ZFS was the obvious choice — it gives me data integrity via checksumming, flexible RAID configurations, and built-in snapshot support, all without needing a separate RAID controller.
The alternative (mdraid + ext4 or XFS) works fine but gives up checksumming, which means silent data corruption is possible. With 8 drives and 46+ TB of data, I'd rather know about corruption than discover it years later.
## Current Setup
Pool name: `hdd`
Configuration: 3× RAIDZ1 vdevs (each with 2-3 drives, 8 drives total)
| Metric | Value |
|---|---|
| Used | 46 TB |
| Free | 18 TB |
| Total | ~64 TB |
| Health | Scrub weekly (Sundays) |
RAIDZ1 tolerates one drive failure per vdev. Given the drive count and age, this is the right trade-off between capacity and redundancy.
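For illustration, a pool with this shape could be created with something like the following — the device names and the 3+3+2 vdev grouping are assumptions, not the actual disks or commands used:
```
# illustrative only: 3 RAIDZ1 vdevs across 8 drives
# (in practice /dev/disk/by-id paths are preferable to sdX names)
zpool create hdd \
  raidz1 /dev/sda /dev/sdb /dev/sdc \
  raidz1 /dev/sdd /dev/sde /dev/sdf \
  raidz1 /dev/sdg /dev/sdh
```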
## Scrubbing
A weekly scrub runs every Sunday. This reads all data and verifies checksums, catching any silent corruption or drive errors early. Scrub results are visible in Grafana via the smartctl exporter.
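The scheduling mechanism isn't documented here; a simple way to get a weekly Sunday scrub is a cron entry along these lines (the time of day is a placeholder):
```
# /etc/cron.d/zfs-scrub — illustrative, not the actual config
0 3 * * 0   root   /usr/sbin/zpool scrub hdd
```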
## Snapshots
ZFS snapshots are used for point-in-time recovery. Fast and space-efficient — snapshots only consume space for data that's changed since the snapshot was taken.
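A minimal sketch of the snapshot workflow — the dataset name is a placeholder, since the actual dataset layout isn't documented here:
```
# take a point-in-time snapshot of a dataset
zfs snapshot hdd/media@2026-03-04
# list snapshots under the pool
zfs list -t snapshot -r hdd
# roll the dataset back to that snapshot if needed (discards later changes)
zfs rollback hdd/media@2026-03-04
```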
## Notes
ZFS is somewhat memory-hungry (ARC cache). london-b has 64 GB of RAM which gives ZFS plenty of headroom.

workloads/arr-stack/README.md (new file)

@@ -0,0 +1,36 @@
# Arr Stack
## What
The `*arr` stack is my media automation pipeline. It handles finding, downloading, and organising movies, TV shows, music, and books.
## Components
| Service | Purpose | URL | Port |
|---|---|---|---|
| Radarr | Movie management | radarr.pez.sh | 7878 |
| Sonarr | TV show management | sonarr.pez.sh | 8989 |
| Lidarr | Music management | lidarr.pez.sh | 8686 |
| Readarr | Book management | readarr.pez.sh | 8787 |
| Prowlarr | Indexer aggregator | prowlarr.pez.sh | 9696 |
| Transmission | Download client | download.pez.sh | 9091 |
| Jellyseerr | Media request portal | request.pez.sh | 5055 |
All services are behind Authelia except Jellyseerr, which has its own user auth for request submissions.
## Where
All containers run on **london-b** in Docker. Media is stored on the ZFS pool `hdd`.
## How It Works
1. Jellyseerr accepts media requests
2. Radarr/Sonarr/Lidarr/Readarr pick up the request and search for a release via Prowlarr
3. Prowlarr aggregates torrent/usenet indexers and returns results
4. The chosen release is sent to Transmission for download
5. Once complete, the arr app moves the file to the appropriate library folder
6. Plex and Jellyfin pick up the new content automatically
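The deployment itself isn't spelled out in these docs; as a rough sketch, two of the containers wired to a shared download folder might look like this in Compose (image choices, paths, and ports here are assumptions, not copied from the real stack):
```
# illustrative excerpt only — not the actual compose file
services:
  radarr:
    image: lscr.io/linuxserver/radarr
    ports:
      - "7878:7878"
    volumes:
      - /hdd/media/movies:/movies
      - /hdd/downloads:/downloads
  transmission:
    image: lscr.io/linuxserver/transmission
    ports:
      - "9091:9091"
    volumes:
      - /hdd/downloads:/downloads
```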
## Download Client
Using Transmission. It's fast, reliable, and handles 100+ simultaneous torrents without issue.

workloads/authelia/README.md (new file)

@@ -0,0 +1,28 @@
# Authelia
## What
Authelia is my SSO (Single Sign-On) and 2FA provider. It sits in front of services that don't have their own auth or that I want under a unified login.
## Where
Runs on **helsinki-a** as a Docker container.
- URL: [auth.pez.sh](https://auth.pez.sh)
- Backend port: 9091
- Integrated with LDAP (also on helsinki-a) for user management
## How It Works
Caddy is configured with a forward auth middleware that calls Authelia before passing traffic to the backend. If the user isn't authenticated, they're redirected to auth.pez.sh to log in.
Services protected by Authelia:
- Grafana, Prometheus
- Radarr, Sonarr, Lidarr, Readarr, Prowlarr
- Transmission (download.pez.sh)
- Soulseek (soulseek.pez.sh)
- apps.pez.sh
## LDAP
User accounts are managed in LDAP on helsinki-a. Authelia authenticates against LDAP. This centralises user management — one place to add/remove users rather than configuring each service individually.

workloads/bitwarden/README.md (new file)

@@ -0,0 +1,17 @@
# Bitwarden
## What
Self-hosted Bitwarden (Vaultwarden) for password management. Running my own instance means my vault never leaves infrastructure I control.
## Where
Runs on **helsinki-a** as a Docker container.
- URL: [bitwarden.pez.sh](https://bitwarden.pez.sh)
- Backend port: 8443
- No Authelia — access is controlled by Bitwarden's own auth
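A minimal sketch of how such an instance is typically run — the Vaultwarden container serves plain HTTP on port 80 internally, so the 8443 mapping, data path, and TLS handling here are assumptions rather than the documented setup:
```
# illustrative only — actual deployment/TLS termination isn't documented here
docker run -d --name vaultwarden \
  -p 8443:80 \
  -v /srv/vaultwarden:/data \
  --restart unless-stopped \
  vaultwarden/server:latest
```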
## Notes
Bitwarden is on helsinki-a rather than london-b because it needs to be highly available. helsinki-a is the public-facing server and has better uptime characteristics than the physical london servers.

workloads/jellyfin/README.md (new file)

@@ -0,0 +1,17 @@
# Jellyfin
## What
Jellyfin is my open-source media server. It's the free alternative to Plex, and I run both so I can compare them and because some clients work better with one than the other.
## Where
Runs on **london-b** in Docker. Media is served directly from the ZFS pool `hdd`.
- URL: [jellyfin.pez.sh](https://jellyfin.pez.sh)
- Port: 8096
- No Authelia — handled by Jellyfin's own auth
## Storage
Media lives on the ZFS pool `hdd` on london-b. Jellyfin reads from the same library paths as Plex.

workloads/mail/README.md (new file)

@@ -0,0 +1,25 @@
# Mail
## What
Self-hosted email for pez.sh using poste.io, a batteries-included mail server Docker image that handles SMTP, IMAP, spam filtering, and a webmail interface.
## Where
Runs on **nuremberg-a** in Docker.
- Host: nuremberg-a (100.117.235.28)
- Disk: ~25% used
nuremberg-a is a dedicated Hetzner Cloud VPS for mail. Keeping mail on its own server isolates its IP reputation from everything else.
## DNS
DNS records managed via Cloudflare:
- MX record pointing to nuremberg-a
- SPF, DKIM, and DMARC configured for deliverability
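The general shape of those records, with placeholder values — the real mail hostname, DKIM selector, and policies live in Cloudflare and aren't reproduced here:
```
; illustrative record shapes only
pez.sh.                  MX    10 mail.pez.sh.
pez.sh.                  TXT   "v=spf1 mx ~all"
mail._domainkey.pez.sh.  TXT   "v=DKIM1; k=rsa; p=<public key>"
_dmarc.pez.sh.           TXT   "v=DMARC1; p=quarantine; rua=mailto:postmaster@pez.sh"
```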
## Notes
poste.io bundles everything needed for a mail server into a single container — no separate containers for postfix, dovecot, rspamd, etc. Makes updates straightforward.

workloads/minecraft/README.md (new file)

@@ -0,0 +1,17 @@
# Minecraft
## What
A Minecraft Java Edition server running PaperMC for improved performance over vanilla.
## Where
Runs on **copenhagen-a** in Docker.
- Image: `marctv/minecraft-papermc-server`
- Host: copenhagen-a (100.89.206.60)
- Not publicly exposed via Caddy — accessed directly or via Tailscale
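A minimal sketch of running that image — the volume path and memory value are assumptions, and the image's exact environment variables should be checked against its own docs:
```
# illustrative only — 25565 is the default Minecraft port
docker run -d --name minecraft \
  --restart unless-stopped \
  -p 25565:25565 \
  -v /srv/minecraft:/data \
  -e MEMORYSIZE=4G \
  marctv/minecraft-papermc-server
```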
## Notes
copenhagen-a has a static IP, which makes it suitable for game servers that need direct connections without Cloudflare proxying.

workloads/monitoring/README.md (new file)

@@ -0,0 +1,37 @@
# Monitoring
## What
Prometheus and Grafana for metrics collection and visualisation across the fleet.
## Where
Runs on **london-a** (FreeBSD 14.3). london-a is a dedicated monitoring host — very lightly loaded, disk at 6%.
| Service | URL | Port |
|---|---|---|
| Grafana | grafana.pez.sh | 3000 |
| Prometheus | prometheus.pez.sh | 9090 |
Both are behind Authelia.
## Scrape Targets
Prometheus scrapes metrics from exporters running across the fleet. All connections are made over Tailscale.
Exporters in use:
- **smartctl exporter** — disk health metrics (london-b)
- **prom-plex-exporter** — Plex metrics (london-b)
- Node exporter on various hosts
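The shape of the scrape configuration, with illustrative job names and targets — the exporter ports are assumptions, not copied from the real prometheus.yml:
```
# illustrative prometheus.yml excerpt
scrape_configs:
  - job_name: node
    static_configs:
      - targets:
          - london-b:9100
          - copenhagen-a:9100
  - job_name: smartctl
    static_configs:
      - targets:
          - london-b:9633
```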
## Dashboards
Grafana dashboards cover:
- Server health (CPU, memory, disk, network)
- ZFS pool status
- Disk SMART data
- Plex activity
## Status Page
status.pez.sh is a separate lightweight status page that pulls availability data from Prometheus. 90-day uptime history. Source: [RWejlgaard/pez-status](https://github.com/RWejlgaard/pez-status).

workloads/navidrome/README.md (new file)

@@ -0,0 +1,17 @@
# Navidrome
## What
Navidrome is a self-hosted music streaming server. It's compatible with the Subsonic API, which means I can use any Subsonic-compatible client (DSub, Symfonium, etc.) on my phone.
## Where
Runs on **london-b** in Docker. Music library is stored on the ZFS pool `hdd`.
- URL: [music.pez.sh](https://music.pez.sh)
- Port: 4533
- No Authelia — handled by Navidrome's own auth
## Storage
Music library lives on the ZFS pool `hdd` on london-b, alongside the rest of the media.

workloads/nextcloud/README.md (new file)

@@ -0,0 +1,17 @@
# Nextcloud
## What
Nextcloud is my self-hosted cloud storage and collaboration platform. I use it as a replacement for Google Drive / iCloud — files, calendar, contacts, and photos.
## Where
Runs on **london-b** via Nextcloud AIO (All-in-One) Docker image. AIO bundles Nextcloud, the database, Redis, and a reverse proxy into a single managed stack.
- URL: [cloud.pez.sh](https://cloud.pez.sh)
- Port: 11000 (AIO proxy)
- No Authelia — handled by Nextcloud's own auth
## Storage
Data stored on the ZFS pool `hdd` on london-b. The ZFS RAIDZ1 arrays provide redundancy for the file data.

workloads/wow/README.md (new file)

@@ -0,0 +1,31 @@
# World of Warcraft (MaNGOS Zero)
## What
A private WoW 1.12 (Vanilla) server running MaNGOS Zero. This is the open-source WoW emulator for the original game version.
## Where
Runs natively on **copenhagen-a** (not in Docker) via systemd services.
- Services: `mangos-realmd` and `mangos-world`
- Run as: `mangos` user
- Install path: `/home/mangos/mangos/zero/`
- Database: MariaDB (also on copenhagen-a)
## Architecture
MaNGOS Zero uses two processes:
- **mangos-realmd** — the authentication/realm server, handles login
- **mangos-world** — the game world server, handles gameplay
Both are managed by systemd and start automatically on boot.
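The unit files themselves aren't included in these docs; a plausible shape for the world-server unit, with the binary path as an assumption, is:
```
# illustrative sketch of /etc/systemd/system/mangos-world.service
[Unit]
Description=MaNGOS Zero world server
After=network.target mariadb.service

[Service]
User=mangos
WorkingDirectory=/home/mangos/mangos/zero
ExecStart=/home/mangos/mangos/zero/bin/mangosd
Restart=on-failure

[Install]
WantedBy=multi-user.target
```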
## Database
MariaDB hosts the WoW databases (characters, world data, auth). Running locally on copenhagen-a.
## Notes
copenhagen-a was chosen for this because it has a static IP, which is required for the realm list entry that clients connect to.