Capture london-b media stack and systemd services

Add the full media automation stack (sonarr, radarr, prowlarr, lidarr,
readarr, whisparr), media servers (jellyfin, plex), and supporting
services (transmission, samba, ollama, promtail, cloudflared, vsftpd)
to the repo as a media_stack Ansible role.

Includes:
- Custom systemd unit files for non-package-managed services
- Config files for promtail, samba, transmission, vsftpd
- Cron jobs for movie-rename-fix, sonarr/radarr midnight restarts
- Updated deploy.yml to wire the role into london-b's stage
- Updated london-b docs with full service inventory

Backup script (backup.sh) already covered by the existing backup role.
Node/systemd exporters already covered by existing monitoring roles.

Closes PESO-92
Rasmus Wejlgaard 2026-03-29 15:39:05 +00:00
parent b0acdb72e3
commit 522f0b2b84
17 changed files with 512 additions and 3 deletions


@@ -54,12 +54,13 @@
       - role: caddy
       - role: status_page
 
-# london-b: Docker services (storage, apps) + backups
-- name: "Stage 4b: Docker services (london-b)"
+# london-b: Docker services (storage, apps) + media stack + backups
+- name: "Stage 4b: Services (london-b)"
   hosts: london-b
   tags: [services, london-b]
   roles:
     - role: docker_services
+    - role: media_stack
     - role: backup
 
 # nuremberg-a: Mail (poste.io via Docker)


@@ -0,0 +1,24 @@
---
- name: Reload systemd daemon
ansible.builtin.systemd:
daemon_reload: true
- name: Restart promtail
ansible.builtin.systemd:
name: promtail
state: restarted
- name: Restart smbd
ansible.builtin.systemd:
name: smbd
state: restarted
- name: Restart transmission
ansible.builtin.systemd:
name: transmission-daemon
state: restarted
- name: Restart vsftpd
ansible.builtin.systemd:
name: vsftpd
state: restarted


@@ -0,0 +1,128 @@
---
# media_stack role — deploys the full media stack on london-b
# Manages: *arr suite, jellyfin, plex, transmission, samba,
# ollama, promtail, cloudflared, vsftpd, and cron jobs.
# ── Systemd service units (custom, not package-managed) ──
- name: Deploy custom systemd unit files
ansible.builtin.copy:
src: "{{ playbook_dir }}/services/{{ item }}/{{ item }}.service"
dest: "/etc/systemd/system/{{ item }}.service"
mode: '0644'
loop:
- radarr
- prowlarr
- lidarr
- readarr
- whisparr
- ollama
- promtail
notify: Reload systemd daemon
- name: Enable and start custom systemd services
ansible.builtin.systemd:
name: "{{ item }}"
state: started
enabled: true
loop:
- radarr
- prowlarr
- lidarr
- readarr
- ollama
- promtail
# Whisparr is installed but disabled (kept as-is)
- name: Ensure whisparr unit is present but disabled
ansible.builtin.systemd:
name: whisparr
enabled: false
# ── Package-managed services (ensure enabled) ──
- name: Ensure package-managed services are enabled
ansible.builtin.systemd:
name: "{{ item }}"
state: started
enabled: true
loop:
- sonarr
- jellyfin
- plexmediaserver
- transmission-daemon
- smbd
- vsftpd
- cloudflared
# ── Configuration files ──
- name: Deploy promtail config
ansible.builtin.copy:
src: "{{ playbook_dir }}/services/promtail/config/london-b.yml"
dest: /etc/promtail/config.yml
mode: '0644'
notify: Restart promtail
- name: Deploy samba config
ansible.builtin.copy:
src: "{{ playbook_dir }}/services/samba/config/london-b.conf"
dest: /etc/samba/smb.conf
mode: '0644'
backup: true
notify: Restart smbd
- name: Deploy transmission settings
ansible.builtin.copy:
src: "{{ playbook_dir }}/services/transmission/config/settings.json"
dest: /etc/transmission-daemon/settings.json
owner: debian-transmission
group: debian-transmission
mode: '0600'
notify: Restart transmission
- name: Deploy vsftpd config
ansible.builtin.copy:
src: "{{ playbook_dir }}/services/vsftpd/config/london-b.conf"
dest: /etc/vsftpd.conf
mode: '0644'
notify: Restart vsftpd
# ── Scripts ──
- name: Ensure scripts directory exists
ansible.builtin.file:
path: /root/scripts
state: directory
mode: '0755'
- name: Deploy movie-rename-fix script
ansible.builtin.copy:
src: "{{ playbook_dir }}/scripts/movie-rename-fix.fish"
dest: /root/scripts/movie-rename-fix.fish
mode: '0755'
# ── Cron jobs ──
- name: Movie rename fix (hourly)
ansible.builtin.cron:
name: "Movie rename fix"
minute: "0"
job: "/root/scripts/movie-rename-fix.fish"
user: root
- name: Restart radarr at midnight
ansible.builtin.cron:
name: "Restart radarr"
minute: "0"
hour: "0"
job: "systemctl restart radarr"
user: root
- name: Restart sonarr at midnight
ansible.builtin.cron:
name: "Restart sonarr"
minute: "0"
hour: "0"
job: "systemctl restart sonarr"
user: root
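For reference, the three cron tasks above render roughly these root crontab entries (Ansible's cron module adds the `#Ansible:` name markers it uses to track each job):

```
#Ansible: Movie rename fix
0 * * * * /root/scripts/movie-rename-fix.fish
#Ansible: Restart radarr
0 0 * * * systemctl restart radarr
#Ansible: Restart sonarr
0 0 * * * systemctl restart sonarr
```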


@@ -0,0 +1,7 @@
#!/usr/bin/env fish
cd /hdd/movies
for i in (find . -name "*www.UIndex.org - *")
mv $i (echo $i | sed 's/www.UIndex.org - //')
end
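The rename above is a plain substring removal; note that the `sed` pattern leaves its dots unescaped, so `.` there matches any character (harmless in practice for this marker). A minimal Python sketch of the same substitution (the function name is illustrative, not part of the repo):

```python
def strip_uindex(name: str) -> str:
    """Remove the first 'www.UIndex.org - ' marker, mirroring the sed call."""
    return name.replace("www.UIndex.org - ", "", 1)

print(strip_uindex("./www.UIndex.org - Heat (1995) 1080p"))  # → ./Heat (1995) 1080p
```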


@@ -0,0 +1,16 @@
[Unit]
Description=Lidarr Daemon
After=syslog.target network.target
[Service]
User=root
Group=root
UMask=0002
Type=simple
ExecStart=/opt/Lidarr/Lidarr -nobrowser -data=/var/lib/lidarr/
TimeoutStopSec=20
KillMode=process
Restart=on-failure
[Install]
WantedBy=multi-user.target
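The five *arr units in this commit differ only in the binary path and data directory, so they could be collapsed into a single Jinja2 template (a sketch, not part of this commit; `arr_name` is a hypothetical loop variable):

```
[Unit]
Description={{ arr_name | capitalize }} Daemon
After=syslog.target network.target

[Service]
User=root
Group=root
UMask=0002
Type=simple
ExecStart=/opt/{{ arr_name | capitalize }}/{{ arr_name | capitalize }} -nobrowser -data=/var/lib/{{ arr_name }}/
TimeoutStopSec=20
KillMode=process
Restart=on-failure

[Install]
WantedBy=multi-user.target
```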


@@ -0,0 +1,14 @@
[Unit]
Description=Ollama Service
After=network-online.target
[Service]
ExecStart=/usr/local/bin/ollama serve
User=ollama
Group=ollama
Restart=always
RestartSec=3
Environment="PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin"
[Install]
WantedBy=default.target


@@ -0,0 +1,31 @@
server:
http_listen_port: 9080
grpc_listen_port: 0
positions:
filename: /tmp/positions.yaml
clients:
- url: http://192.168.1.254:3100/loki/api/v1/push
scrape_configs:
- job_name: london-b
static_configs:
- targets:
- localhost
labels:
job: varlogs
instance: london-b
__path__: /var/log/*log
- targets:
- localhost
labels:
job: plex
instance: london-b
__path__: /var/lib/plexmediaserver/Library/Application Support/Plex Media Server/Logs/*log
- targets:
- localhost
labels:
job: jellyfin
instance: london-b
__path__: /var/log/jellyfin/*log
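Promtail's `__path__` values are glob patterns expanded on the agent, where `*` does not cross directory separators. A quick sanity check of what `/var/log/*log` picks up (illustrative; Python's `pathlib` matching behaves the same way for this simple pattern):

```python
from pathlib import PurePosixPath

pattern = "/var/log/*log"  # from the varlogs scrape target above
candidates = [
    "/var/log/syslog",            # matches
    "/var/log/auth.log",          # matches
    "/var/log/nginx/access.log",  # does not match: * stops at '/'
]
matched = [p for p in candidates if PurePosixPath(p).match(pattern)]
print(matched)  # → ['/var/log/syslog', '/var/log/auth.log']
```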


@@ -0,0 +1,14 @@
[Unit]
Description=Promtail service
After=network.target
[Service]
Type=simple
User=root
ExecStart=/usr/bin/promtail -config.file /etc/promtail/config.yml
TimeoutSec=60
Restart=on-failure
RestartSec=2
[Install]
WantedBy=multi-user.target


@@ -0,0 +1,16 @@
[Unit]
Description=Prowlarr Daemon
After=syslog.target network.target
[Service]
User=root
Group=root
UMask=0002
Type=simple
ExecStart=/opt/Prowlarr/Prowlarr -nobrowser -data=/var/lib/prowlarr/
TimeoutStopSec=20
KillMode=process
Restart=on-failure
[Install]
WantedBy=multi-user.target


@@ -0,0 +1,16 @@
[Unit]
Description=Radarr Daemon
After=syslog.target network.target
[Service]
User=root
Group=root
UMask=0002
Type=simple
ExecStart=/opt/Radarr/Radarr -nobrowser -data=/var/lib/radarr/
TimeoutStopSec=20
KillMode=process
Restart=on-failure
[Install]
WantedBy=multi-user.target


@@ -0,0 +1,16 @@
[Unit]
Description=Readarr Daemon
After=syslog.target network.target
[Service]
User=root
Group=root
UMask=0002
Type=simple
ExecStart=/opt/Readarr/Readarr -nobrowser -data=/var/lib/readarr/
TimeoutStopSec=20
KillMode=process
Restart=on-failure
[Install]
WantedBy=multi-user.target


@@ -0,0 +1,53 @@
[global]
workgroup = WORKGROUP
server string = %h server (Samba, Ubuntu)
log file = /var/log/samba/log.%m
max log size = 1000
logging = file
panic action = /usr/share/samba/panic-action %d
server role = standalone server
obey pam restrictions = yes
unix password sync = yes
passwd program = /usr/bin/passwd %u
passwd chat = *Enter\snew\s*\spassword:* %n\n *Retype\snew\s*\spassword:* %n\n *password\supdated\ssuccessfully* .
pam password change = yes
map to guest = bad user
# COCKPIT ZFS MANAGER
# WARNING: DO NOT EDIT, AUTO-GENERATED CONFIGURATION
include = /etc/cockpit/zfs/shares.conf
[HDD]
comment = HDD
path = /hdd
valid users = pez root
public = no
writable = yes
[Movies]
comment = Movies
path = /hdd/movies
public = yes
writable = no
[TV Shows]
comment = TV Shows
path = /hdd/tv
public = yes
writable = no
[printers]
comment = All Printers
browseable = no
path = /var/tmp
printable = yes
guest ok = no
read only = yes
create mask = 0700
[print$]
comment = Printer Drivers
path = /var/lib/samba/printers
browseable = yes
read only = yes
guest ok = no


@@ -0,0 +1,13 @@
[Unit]
Description=cloudflared
After=network.target
[Service]
TimeoutStartSec=0
Type=notify
ExecStart=/usr/bin/cloudflared --no-autoupdate tunnel run
Restart=on-failure
RestartSec=5s
[Install]
WantedBy=multi-user.target


@@ -0,0 +1,84 @@
{
"alt-speed-down": 0,
"alt-speed-enabled": false,
"alt-speed-time-begin": 540,
"alt-speed-time-day": 127,
"alt-speed-time-enabled": false,
"alt-speed-time-end": 1020,
"alt-speed-up": 0,
"announce-ip": "",
"announce-ip-enabled": false,
"anti-brute-force-enabled": false,
"anti-brute-force-threshold": 100,
"bind-address-ipv4": "0.0.0.0",
"bind-address-ipv6": "::",
"blocklist-enabled": false,
"blocklist-url": "http://www.example.com/blocklist",
"cache-size-mb": 4,
"default-trackers": "",
"dht-enabled": true,
"download-dir": "/hdd/downloads",
"download-limit": 100,
"download-limit-enabled": 0,
"download-queue-enabled": true,
"download-queue-size": 100,
"encryption": 1,
"idle-seeding-limit": 30,
"idle-seeding-limit-enabled": false,
"incomplete-dir": "/var/lib/transmission-daemon/Downloads",
"incomplete-dir-enabled": false,
"lpd-enabled": false,
"max-peers-global": 200,
"message-level": 4,
"peer-congestion-algorithm": "",
"peer-id-ttl-hours": 6,
"peer-limit-global": 200,
"peer-limit-per-torrent": 50,
"peer-port": 6881,
"peer-port-random-high": 65535,
"peer-port-random-low": 49152,
"peer-port-random-on-start": false,
"peer-socket-tos": "le",
"pex-enabled": true,
"port-forwarding-enabled": false,
"preallocation": 1,
"prefetch-enabled": true,
"queue-stalled-enabled": true,
"queue-stalled-minutes": 30,
"ratio-limit": 0,
"ratio-limit-enabled": true,
"rename-partial-files": true,
"rpc-authentication-required": false,
"rpc-bind-address": "0.0.0.0",
"rpc-enabled": true,
"rpc-host-whitelist": "127.0.0.1,localhost,london-b,download.pez.sh,download.pez.solutions",
"rpc-host-whitelist-enabled": false,
"rpc-port": 9091,
"rpc-socket-mode": "0750",
"rpc-url": "/transmission/",
"rpc-username": "transmission",
"rpc-whitelist": "127.0.0.1,localhost,london-b,download.pez.sh,download.pez.solutions",
"rpc-whitelist-enabled": false,
"scrape-paused-torrents-enabled": true,
"script-torrent-added-enabled": false,
"script-torrent-added-filename": "",
"script-torrent-done-enabled": false,
"script-torrent-done-filename": "",
"script-torrent-done-seeding-enabled": false,
"script-torrent-done-seeding-filename": "",
"seed-queue-enabled": false,
"seed-queue-size": 10,
"speed-limit-down": 50,
"speed-limit-down-enabled": false,
"speed-limit-up": 1000,
"speed-limit-up-enabled": true,
"start-added-torrents": true,
"tcp-enabled": true,
"torrent-added-verify-mode": "fast",
"trash-original-torrent-files": false,
"umask": "022",
"upload-limit": 100,
"upload-limit-enabled": 0,
"upload-slots-per-torrent": 14,
"utp-enabled": true
}
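Transmission stores the alt-speed schedule as minutes past midnight (the schedule is disabled in this config, but the encoding is worth knowing). A quick conversion of the values above (excerpt hard-coded for illustration):

```python
import json

# excerpt of the settings.json above
excerpt = json.loads('{"alt-speed-time-begin": 540, "alt-speed-time-end": 1020}')

def minutes_to_hhmm(m: int) -> str:
    """Convert minutes-past-midnight to HH:MM."""
    return f"{m // 60:02d}:{m % 60:02d}"

print(minutes_to_hhmm(excerpt["alt-speed-time-begin"]))  # → 09:00
print(minutes_to_hhmm(excerpt["alt-speed-time-end"]))    # → 17:00
```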


@@ -0,0 +1,18 @@
listen=NO
listen_ipv6=YES
anonymous_enable=YES
local_enable=YES
write_enable=NO
anon_upload_enable=NO
dirmessage_enable=YES
use_localtime=YES
xferlog_enable=YES
connect_from_port_20=YES
ftpd_banner=Welcome to the Pez Dispenser
secure_chroot_dir=/var/run/vsftpd/empty
pam_service_name=vsftpd
rsa_cert_file=/etc/ssl/certs/ssl-cert-snakeoil.pem
rsa_private_key_file=/etc/ssl/private/ssl-cert-snakeoil.key
ssl_enable=NO
anon_root=/hdd/ftp
allow_writeable_chroot=YES
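vsftpd's config is plain `key=value` lines, so a tiny parser (illustrative, not part of the repo) is handy for asserting invariants such as anonymous access staying read-only:

```python
def parse_vsftpd(text: str) -> dict:
    """Parse vsftpd's key=value config format, skipping blanks and comments."""
    cfg = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition("=")
        cfg[key] = value
    return cfg

excerpt = "anonymous_enable=YES\nwrite_enable=NO\nanon_upload_enable=NO"
cfg = parse_vsftpd(excerpt)
assert cfg["write_enable"] == "NO" and cfg["anon_upload_enable"] == "NO"
```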


@@ -0,0 +1,16 @@
[Unit]
Description=Whisparr Daemon
After=syslog.target network.target
[Service]
User=root
Group=root
UMask=0002
Type=simple
ExecStart=/opt/Whisparr/Whisparr -nobrowser -data=/var/lib/whisparr/
TimeoutStopSec=20
KillMode=process
Restart=on-failure
[Install]
WantedBy=multi-user.target


@@ -68,7 +68,49 @@ RAIDZ1 tolerates one drive failure per vdev. With this many drives and this much
 | smartctl_exporter | 9633 | (Prometheus scrape) |
 | prom-plex-exporter | — | (Prometheus scrape) |
 
-All services run in Docker. Media is served directly from the ZFS pool.
+### Systemd Services (non-Docker)
+
+The media automation suite and several supporting services run as native systemd units, not in Docker:
+
+| Service | Unit Name | Notes |
+|---------|-----------|-------|
+| Sonarr | sonarr | Package-managed (mono) |
+| Radarr | radarr | /opt/Radarr, custom unit |
+| Prowlarr | prowlarr | /opt/Prowlarr, custom unit |
+| Lidarr | lidarr | /opt/Lidarr, custom unit |
+| Readarr | readarr | /opt/Readarr, custom unit |
+| Whisparr | whisparr | /opt/Whisparr, custom unit (disabled) |
+| Plex | plexmediaserver | Package-managed |
+| Jellyfin | jellyfin | Package-managed |
+| Transmission | transmission-daemon | Package-managed |
+| Samba | smbd | Package-managed |
+| Ollama | ollama | /usr/local/bin, custom unit |
+| Promtail | promtail | Custom unit, ships logs to Loki |
+| Cloudflared | cloudflared | Tunnel to Cloudflare |
+| vsftpd | vsftpd | FTP server for /hdd/ftp |
+| systemd_exporter | systemd_exporter | Ansible-managed |
+| node_exporter | node_exporter | Ansible-managed |
+
+Docker services: Nextcloud AIO, Jellyseerr, Navidrome, slskd, Miniflux, smartctl-exporter, plex-exporter.
+
+### Cron Jobs
+
+| Schedule | Job |
+|----------|-----|
+| Every hour | `/root/scripts/movie-rename-fix.fish` |
+| Midnight daily | `systemctl restart radarr` |
+| Midnight daily | `systemctl restart sonarr` |
+| 22:00 daily | `/root/scripts/backup.sh` (rclone to B2) |
+
+### Samba Shares
+
+| Share | Path | Access |
+|-------|------|--------|
+| HDD | /hdd | pez, root (rw) |
+| Movies | /hdd/movies | public (ro) |
+| TV Shows | /hdd/tv | public (ro) |
+
+Media is served directly from the ZFS pool.
 
 ## Networking