# ZFS
## Why
london-b is my primary storage server with 8 spinning disks. ZFS was the obvious choice — it gives me data integrity via checksumming, flexible RAID configurations, and built-in snapshot support, all without needing a separate RAID controller.
The alternative (mdraid + ext4 or XFS) works fine but gives up checksumming, so silent data corruption can go undetected. With 8 drives and 46+ TB of data, I'd rather detect corruption promptly than discover it years later.
## Current Setup
- **Pool name:** `hdd`
- **Configuration:** 3× RAIDZ1 vdevs (2-3 drives each, 8 drives total)
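For reference, a pool with this shape could be created roughly as follows. This is a sketch, not the actual command used on london-b; the device names and the 3+3+2 split are placeholders.

```sh
# Hypothetical 3x RAIDZ1 layout over 8 drives (3 + 3 + 2).
# Device names are illustrative; in practice, stable
# /dev/disk/by-id paths are preferable. zpool may require -f
# here because the vdev widths differ.
zpool create hdd \
  raidz1 /dev/sda /dev/sdb /dev/sdc \
  raidz1 /dev/sdd /dev/sde /dev/sdf \
  raidz1 /dev/sdg /dev/sdh
```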
| Metric | Value |
|---|---|
| Used | 46 TB |
| Free | 18 TB |
| Total | ~64 TB |
| Scrub schedule | Weekly (Sundays) |
RAIDZ1 tolerates one drive failure per vdev. Given the drive count and age, this is the right trade-off between capacity and redundancy.
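The capacity and redundancy state come straight from the standard ZFS tooling, so the table above can be re-checked at any time:

```sh
# Pool-wide capacity and health summary
zpool list hdd

# Per-vdev layout, error counters, and the result of the last scrub
zpool status hdd
```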
## Scrubbing
A scrub runs every Sunday. It reads all data and verifies checksums, catching silent corruption or drive errors early. Scrub results are visible in Grafana via the smartctl exporter.
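A minimal way to schedule this on Linux is a cron entry; the sketch below assumes a Sunday 03:00 start, which is an arbitrary choice (recent OpenZFS packages also ship systemd timers for the same purpose):

```sh
# /etc/cron.d/zfs-scrub -- scrub the hdd pool every Sunday (time is illustrative)
0 3 * * 0 root /usr/sbin/zpool scrub hdd
```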
## Snapshots
ZFS snapshots are used for point-in-time recovery. They are fast and space-efficient: a snapshot only consumes space for data that has changed since it was taken.
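As a sketch, taking and inspecting a recursive, date-stamped snapshot looks like this; the naming scheme and the dataset in the rollback line are assumptions, not necessarily what london-b uses:

```sh
# Recursive snapshot of the pool and all child datasets, named by date
zfs snapshot -r hdd@$(date +%Y-%m-%d)

# List snapshots with the space each one holds exclusively
zfs list -t snapshot -o name,used,creation -r hdd

# Roll a dataset back to a snapshot (discards changes made after it);
# "hdd/media" is a hypothetical dataset name
# zfs rollback hdd/media@2024-01-01
```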
## Notes
ZFS is somewhat memory-hungry (the ARC cache). london-b has 64 GB of RAM, which gives the ARC plenty of headroom.
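If the ARC ever needs reining in, it can be capped with a module parameter on Linux. A sketch, where the 32 GiB figure is an arbitrary example rather than london-b's actual setting:

```sh
# /etc/modprobe.d/zfs.conf -- cap the ARC at 32 GiB (value in bytes)
options zfs zfs_arc_max=34359738368

# Apply immediately without a reboot (as root)
echo 34359738368 > /sys/module/zfs/parameters/zfs_arc_max
```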