- Input chain now accepts WAN traffic for every port in ports.toml so
external access (SSH, HTTP, HTTPS, game ports) works through the eero's
upstream port forwards during phase 1, and via our own DNAT in phase 2.
- Add AdGuard DNS rewrite nordhammer.it → 192.168.4.25 so LAN clients
hit the mediaserver directly instead of relying on eero hairpin NAT.
Target changes to 10.0.0.1 at phase 2 cutover.
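A minimal sketch of how the input-chain accepts could be generated from ports.toml — the TOML schema, attribute names, and interface name here are assumptions for illustration, not the repo's actual layout:

```nix
{ lib, ... }:
let
  # Assume each entry looks like { port = "443"; proto = "tcp"; }.
  ports = builtins.fromTOML (builtins.readFile ../ports.toml);
  ruleFor = p: "iifname \"eno1\" ${p.proto} dport ${toString p.port} accept";
in {
  networking.nftables.ruleset = ''
    table inet filter {
      chain input {
        type filter hook input priority 0; policy drop;
        ct state established,related accept
        iifname "lo" accept
        ${lib.concatMapStringsSep "\n        " ruleFor ports.forwards}
      }
    }
  '';
}
```

Keeping the accepts derived from the same TOML as the DNAT rules means phase 2 only has to change where the forwards point, not which ports are open.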
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
Without this, the default-drop input policy blocked SSH and AdGuard DNS
from existing 192.168.4.x clients because they arrive on eno1 (still
acting as a client on the eero network until phase 2 cutover).
The trustedLegacyCidrs list is meant to be emptied in phase 2 when
eno1 becomes the ISP-facing WAN.
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
Adds services/router.nix with systemd-networkd (eno1=WAN via DHCP,
eth0=LAN 10.0.0.1/24), nftables (NAT + firewall, default drop on WAN
in), dnsmasq (DHCP only — AdGuard Home keeps :53 for DNS), and sysctl
IP forwarding. NetworkManager is forced off on this host.
Port forwards live in ports.toml at the repo root and are imported via
builtins.fromTOML. Supports single ports, ranges ("26901-26902"), and
"both" protocol. Initial forwards: 22, 80, 443, 26900, 26901-26902.
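An illustrative sketch of the parsing described above; the per-forward attribute names (port, proto, dest) are assumptions, not the repo's actual schema:

```nix
let
  forwards = (builtins.fromTOML (builtins.readFile ./ports.toml)).forwards;
  # A range string like "26901-26902" passes through unchanged, since
  # nftables accepts dport ranges directly; "both" expands to tcp + udp.
  protosOf = f: if f.proto == "both" then [ "tcp" "udp" ] else [ f.proto ];
  dnatRules = builtins.concatMap
    (f: map
      (proto: "iifname \"eno1\" ${proto} dport ${toString f.port} dnat ip to ${f.dest}")
      (protosOf f))
    forwards;
in builtins.concatStringsSep "\n" dnatRules
```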
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
The quality-floor helper uses awk to compare floats (since jq output
can be 10 vs 10.0 depending on type). Without gawk on PATH, the check
failed silently and every run issued PUTs even when values already
matched.
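One way to make the dependency explicit is to build the helper with gawk pinned via runtimeInputs rather than relying on the ambient PATH; the script name and wrapper shape here are illustrative:

```nix
{ pkgs, ... }:
pkgs.writeShellApplication {
  name = "float-eq";
  runtimeInputs = [ pkgs.gawk ];  # gawk guaranteed on PATH inside the script
  text = ''
    # Numeric (not string) equality: awk treats "10" and "10.0" as equal,
    # which string comparison in the shell would not.
    if awk -v a="$1" -v b="$2" 'BEGIN { exit !(a + 0 == b + 0) }'; then
      echo same
    else
      echo differs
    fi
  '';
}
```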
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
Sonarr/Radarr default minSize=0 let through tiny sub-bitrate releases
(e.g. 163 MiB for a 40-min episode ≈ 0.6 Mbps, unwatchable). Set min to
10 MB/min (~1.3 Mbps) across HDTV/WEBDL/WEBRip/Bluray 1080p so anything
below that is rejected on grab. Idempotent: only PUTs when value differs.
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
- services/adguard.nix: mutableSettings = false so Nix config overrides
UI-made changes on rebuild (settings are the source of truth)
- common.nix: add busybox for its collection of handy utilities
- common.nix: remove networking.nameservers — DNS now comes purely from
per-host NetworkManager config (AdGuard as the only resolver, no leaks)
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
- adguard.nordhammer.it now routes through Authelia forward auth
(AdGuard Home itself has no login, so this becomes the single gate)
- Added an Authelia ACL rule for the subdomain so requests get a 401
  (which nginx can turn into a login redirect) instead of the 403 that
  default_policy=deny would return
- Added AdGuard Home widget to Homepage under Infrastructure
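The ACL addition could look like this, assuming the usual services.authelia.instances layout (the instance name "main" and the one_factor policy are illustrative):

```nix
services.authelia.instances.main.settings.access_control = {
  default_policy = "deny";
  rules = [
    {
      # Matched rules return 401 (redirectable), unmatched fall through
      # to the 403 from default_policy = "deny".
      domain = "adguard.nordhammer.it";
      policy = "one_factor";
    }
  ];
};
```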
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
DoH-only sequential upstreams made first-time lookups slow. Add plain
UDP 1.1.1.1/9.9.9.9 alongside DoH and set upstream_mode=parallel so
AdGuard queries all four simultaneously and uses the fastest response.
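Sketched in the AdGuard Home settings tree (upstream URLs are the standard Cloudflare/Quad9 endpoints; the exact list in the repo may differ):

```nix
services.adguardhome.settings.dns = {
  # "parallel" races all upstreams and takes the fastest answer,
  # instead of trying them in order.
  upstream_mode = "parallel";
  upstream_dns = [
    "https://cloudflare-dns.com/dns-query"
    "https://dns.quad9.net/dns-query"
    "1.1.1.1"
    "9.9.9.9"
  ];
};
```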
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
New services/adguard.nix runs AdGuard Home on the mediaserver with DoH
upstreams (Cloudflare + Quad9) and three default blocklists. DNS listens
on :53; web UI on 127.0.0.1:3000, reverse-proxied at adguard.nordhammer.it.
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
Without this rule the subdomain falls under default_policy=deny,
which returns 403 instead of the 401 that nginx needs to redirect
to the Authelia login page.
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
Proton-based clients (e.g. CachyOS native install hitting 7DTD via
the Proton runtime) fail EAC handshake against a Linux dedicated
server. Disabling server-side lets Proton clients join via the
"Play without EasyAntiCheat" splash option.
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
Publishes the container's web dashboard port only on host loopback
(127.0.0.1:8090) so nginx can reverse-proxy it with Authelia
forward-auth, matching the Homepage/camera vhost pattern. Also flips
WebDashboardEnabled to true in the XML patcher so the server actually
starts the web server.
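The loopback-only publish, assuming the container's internal dashboard port is 8080 (container name and internal port are illustrative):

```nix
virtualisation.oci-containers.containers."7dtd".ports = [
  # host loopback only — reachable by nginx, not by the LAN directly
  "127.0.0.1:8090:8080"
];
```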
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
Enables the previously-disabled game-servers module with a new 7DTD
container (vinanrra/7dtd-server) on ports 26900 TCP + 26900-26902 UDP.
A oneshot systemd service waits for LGSM's first install to drop
sdtdserver.xml, then patches in the server name, password, and
random-gen world before restarting the container. V-Rising is removed
— the module hadn't been imported, so this just drops dead code.
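The container definition might be shaped roughly like this; the volume mount point and port mapping details are assumptions about the vinanrra/7dtd-server image, not verified against it:

```nix
virtualisation.oci-containers.containers."7dtd" = {
  image = "vinanrra/7dtd-server";
  ports = [
    "26900:26900/tcp"
    "26900-26902:26900-26902/udp"
  ];
  volumes = [
    # persist LGSM install + saves across container recreation
    "/var/lib/7dtd:/home/sdtdserver"
  ];
};
```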
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
Sonarr was silently removing torrents from qBittorrent once imports
completed, killing seeding. Set removeCompletedDownloads to false for
both clients so torrents stick around and keep seeding post-import.
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
record-update parses nvd diff output after each rebuild switch and
writes latest.json;
Homepage polls a local-only nginx listener and renders date/changes/
closure/kernel via a customapi widget.
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
New nixpkgs defaults for the *arr services set UMask=0022, which
conflicts with the media-group-writable overrides. Wrap with
lib.mkForce alongside the existing Jellyfin fix.
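The override pattern, sketched for two of the services; 0002 keeps newly created files group-writable for the shared media group:

```nix
# mkForce wins over the UMask the upstream nixpkgs module now sets.
systemd.services.sonarr.serviceConfig.UMask = lib.mkForce "0002";
systemd.services.radarr.serviceConfig.UMask = lib.mkForce "0002";
```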
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
nixpkgs now sets UMask=0077 on the Jellyfin service, conflicting with
our override that ensures media-group writes. Wrapping with lib.mkForce
restores the intended permission bits.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Share the wallpaper symlink across all hosts by moving it from gnome.nix
into home-manager/fred.nix, and add matugen templates for btop and the
Homepage dashboard.
The Homepage NixOS module writes custom.css into /etc (read-only), so
bind-mount /var/lib/homepage-custom-css/custom.css over it. A systemd
path unit restarts homepage-dashboard whenever matugen rewrites the
file, so regeneration works without sudo.
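The path-unit wiring could be sketched as below (unit names are illustrative; a path unit with no explicit trigger starts the service of the same name):

```nix
systemd.paths.homepage-css-reload = {
  wantedBy = [ "multi-user.target" ];
  # fires whenever matugen rewrites the file
  pathConfig.PathChanged = "/var/lib/homepage-custom-css/custom.css";
};
systemd.services.homepage-css-reload = {
  serviceConfig = {
    Type = "oneshot";
    # pkgs assumed in scope via the module arguments
    ExecStart = "${pkgs.systemd}/bin/systemctl restart homepage-dashboard.service";
  };
};
```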
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Sonarr, Radarr, qBittorrent, Jellyfin, and Bazarr all need to create
files that are writable by the media group. Without this, Jellyfin
can't write thumbnails/artwork to media directories and services
can't collaborate on shared files. Also fixes radarr movies directory
to use setgid (2775) consistently.
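One declarative way to express the setgid bit (the path here is illustrative; mode 2775 makes new files inherit the media group):

```nix
systemd.tmpfiles.rules = [
  # type path      mode  user   group age
  "d /srv/media/movies 2775 radarr media -"
];
```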
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Runs Tdarr server with internal node on the mediaserver for managing
library-wide re-encoding to save disk space. Web UI at tdarr.nordhammer.it.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- Add NVIDIA proprietary driver config to FredOS-Mediaserver hardware
(Maxwell/GM206, open=false, modesetting enabled, headless)
- Enable hardware.graphics for DRM/KMS infrastructure
- Add jellyfin user to video and render groups for device access
After deploying, enable NVENC in Jellyfin: Dashboard → Playback →
Transcoding → Hardware acceleration: Nvidia NVENC.
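Condensed into module form, the hardware changes listed above look roughly like this (the videoDrivers line is the usual way to load the kernel module even on a headless host):

```nix
services.xserver.videoDrivers = [ "nvidia" ];  # loads the driver; no X needed
hardware.graphics.enable = true;
hardware.nvidia = {
  open = false;                 # proprietary build; Maxwell predates the open module
  modesetting.enable = true;
};
users.users.jellyfin.extraGroups = [ "video" "render" ];  # /dev/nvidia*, /dev/dri/*
```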
https://claude.ai/code/session_016jJU8ZtWLSnJQBdbMr5pxK
Cloudflare's authoritative nameservers take longer than the
default 2-minute timeout to propagate TXT records created via
API. Set CLOUDFLARE_PROPAGATION_TIMEOUT=600 to give enough
time for DNS-01 challenge validation.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
--dns.resolvers is a global lego flag, not a run/renew subcommand
flag. Use extraLegoFlags instead of extraLegoRunFlags.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Route DNS propagation checks through 1.1.1.1 only, bypassing
the local resolver that caches stale responses and causes
wildcard cert DNS-01 challenges to time out.
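Pinning the resolver could be expressed via extraLegoFlags, which passes global flags to lego (the cert attribute name matches the domain used elsewhere in these commits):

```nix
security.acme.certs."nordhammer.it".extraLegoFlags = [
  # check TXT propagation against Cloudflare directly, not the local resolver
  "--dns.resolvers=1.1.1.1:53"
];
```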
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Disabling the propagation check caused lego to submit to Let's
Encrypt before Cloudflare's authoritative nameservers had the
TXT record. A 30s wait gives Cloudflare time to propagate.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Local DNS resolver caches stale responses causing the wildcard
cert DNS-01 challenge to time out before propagation is confirmed.
Cloudflare's authoritative servers propagate fast enough for
Let's Encrypt to validate without the client-side check.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Consolidates V-Rising into the existing game-servers module instead of
a separate file. Also uncomments the game-servers import in common.nix
and adds UDP 9876/9877 to the shared firewall rules.
https://claude.ai/code/session_01Ays1x4CUUJE1jPLkeNMojV
Uses NixOS virtualisation.oci-containers (Docker backend) with the
trueosiris/vrising image. Persists server files and save data under
/var/lib/v-rising/. Opens UDP 9876/9877 in the firewall.
https://claude.ai/code/session_01Ays1x4CUUJE1jPLkeNMojV
DynamicUser can't write to /run directly. RuntimeDirectory lets systemd
create and manage the directory.
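The fix in module form (the service name is taken from the neighboring go2rtc commits and is illustrative here):

```nix
systemd.services.go2rtc.serviceConfig = {
  DynamicUser = true;
  # systemd creates /run/go2rtc owned by the dynamic user,
  # so the service never needs write access to /run itself.
  RuntimeDirectory = "go2rtc";
};
```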
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- go2rtc.nix: template config at runtime from /var/secrets/go2rtc-rtsp-url
instead of embedding credentials in the nix store
- readme.md: add Mediaserver secrets section documenting all secrets
needed for a fresh deploy (Cloudflare, go2rtc, Authelia)
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- Use /api/verify endpoint instead of /api/authz/forward-auth
- Add proxy_pass_request_body off to auth location
- Put redirect URL inline in error_page instead of using a variable
- Use X-Forwarded-Uri (matching old config) instead of X-Forwarded-URI
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
auth_request_set reads variables from the auth subrequest context where
$scheme/$http_host/$request_uri are empty, causing a 500 instead of a
302 redirect. Using set captures from the main request context.
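Inside a Nix-managed vhost, the working capture looks roughly like this (vhost name and auth portal URL are illustrative):

```nix
services.nginx.virtualHosts."homepage.nordhammer.it".extraConfig = ''
  # set runs in the main request context, where these variables are
  # populated; auth_request_set would read them from the subrequest,
  # where they are empty.
  set $target_url $scheme://$http_host$request_uri;
  error_page 401 =302 https://auth.nordhammer.it/?rd=$target_url;
'';
```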
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
The CNAME interference is resolved so the default lego propagation check
(querying Cloudflare authoritative NS) should work correctly now.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
The previous dnsPropagationCheck=false caused lego to ask Let's Encrypt
to validate
before the TXT record was globally visible. Adding --dns.propagation-wait
gives Cloudflare time to serve the record from all edge locations.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Cloudflare is the authoritative NS so API-created TXT records are
immediately visible — the propagation poll was timing out unnecessarily.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
- Native nginx with ACME wildcard cert (*.nordhammer.it) via Cloudflare DNS-01
- Native Authelia SSO with forward auth protecting homepage + camera
- Native go2rtc camera streaming (no more Docker)
- Auto-migration script for Authelia secrets and user database from Docker
- Homepage hrefs updated to use HTTPS domain names
- Fail2ban updated for native nginx log paths + new Authelia jail
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Adds a systemd oneshot that runs before homepage-dashboard and:
- Reads *arr API keys from their config.xml files
- Reads Bazarr key from config.ini
- Creates a Jellyfin API key in the DB if one named "Homepage" doesn't exist
- Uses localhost for qBittorrent (LocalHostAuth=false, no creds needed)
- Writes everything to /etc/homepage-secrets
Zero manual setup — all keys are extracted or generated automatically.
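The oneshot's overall shape might look like the sketch below, reduced to a single key for brevity; unit name, config path, and the sed extraction are illustrative, and the HOMEPAGE_VAR_ prefix is Homepage's convention for env-sourced secrets:

```nix
systemd.services.homepage-secrets = {
  before = [ "homepage-dashboard.service" ];
  requiredBy = [ "homepage-dashboard.service" ];
  serviceConfig.Type = "oneshot";
  script = ''
    umask 077
    # pull the API key out of Sonarr's config.xml (pkgs assumed in scope)
    key=$(${pkgs.gnused}/bin/sed -n 's:.*<ApiKey>\(.*\)</ApiKey>.*:\1:p' \
      /var/lib/sonarr/.config/NzbDrone/config.xml)
    echo "HOMEPAGE_VAR_SONARR_KEY=$key" > /etc/homepage-secrets
  '';
};
```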
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Covers all running services: Jellyfin, Sonarr, Radarr, Bazarr, Prowlarr,
qBittorrent, Nginx Proxy Manager, Authelia, go2rtc. Live widgets for
*arr apps, Jellyfin now-playing, and qBittorrent speed use API keys
loaded from /etc/homepage-secrets (outside the Nix store).
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Adds authorised keys for FredOS-Gaming and phone. Disables SSH password
authentication on FredOS-Mediaserver — key auth only going forward.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
ES 8.x enables security and enrollment by default. Adding
xpack.security.enrollment.enabled=false to Elasticsearch and
xpack.security.enabled=false to Kibana suppresses the enrollment
token screen and lets Kibana connect directly over HTTP.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Elasticsearch + Kibana + Filebeat in Docker, bridged via an elk network.
Filebeat uses the Suricata module to parse eve.json and auto-installs
Kibana dashboards on first run. ES heap capped at 1g; Kibana Node heap
at 512m — total stack ~2-2.5 GB RAM.
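The heap caps translate to container environment roughly as follows (container names and the single-node setting are illustrative; only the env vars come from the message above):

```nix
virtualisation.oci-containers.containers = {
  elasticsearch.environment = {
    ES_JAVA_OPTS = "-Xms1g -Xmx1g";       # cap ES heap at 1g
    "discovery.type" = "single-node";
  };
  kibana.environment = {
    NODE_OPTIONS = "--max-old-space-size=512";  # cap Kibana's Node heap
  };
};
```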
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>