Containers connecting to host services on 10.0.0.1 (e.g. Profilarr → Radarr
at 10.0.0.1:7878) hit the input chain, not forward, because the destination
is a local IP. The forward chain already trusts docker0 for outbound; this
adds the matching input rule so the return path stops getting dropped.
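A minimal sketch of the added rule, assuming the stock nftables-backed firewall module; if the rules live in a hand-rolled table instead, the rule text itself is the same:

```nix
{
  # Accept container -> host traffic on the input chain, mirroring the
  # existing docker0 trust in the forward chain.
  networking.firewall.extraInputRules = ''
    iifname "docker0" accept comment "containers -> host services (e.g. Radarr on 7878)"
  '';
}
```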
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
The ghcr.io/dictionarry-hub/profilarr path mentioned in some docs isn't
publicly pullable — anonymous token requests get 403. Canonical image is
santiagosayshey/profilarr:latest on Docker Hub per the upstream README.
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
Profilarr replaces the recyclarr/TRaSH-Guides flow with a stateful web
service that owns *arr profiles end-to-end via its own UI. Runs as an
oci-container on 127.0.0.1:6868, fronted by nginx at
profilarr.nordhammer.it behind Authelia (one_factor).
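Roughly, in Nix terms (everything beyond the image, port, and domain is an assumption about this repo's conventions, e.g. the state dir and the Authelia helper):

```nix
{
  virtualisation.oci-containers.containers.profilarr = {
    image = "santiagosayshey/profilarr:latest";
    ports = [ "127.0.0.1:6868:6868" ];
    volumes = [ "/var/lib/profilarr:/config" ]; # state dir is an assumption
  };

  services.nginx.virtualHosts."profilarr.nordhammer.it".locations."/" = {
    proxyPass = "http://127.0.0.1:6868";
    # plus whatever forward-auth snippet the other Authelia-gated vhosts share
  };
}
```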
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
Override TRaSH's -10000 ban on x265 (HD) to +500 on Sonarr WEB-1080p
and Radarr HD Bluray + WEB. The Scene/No-RlsGroup/Retags/Obfuscated
custom formats (each at -10000) still filter the truly low-bitrate
x265 trash, so we get smaller files without inviting slop.
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
The flag was removed in matugen 3.x; the call now exits with an arg
parse error on every update (caught by '|| true' but noisy). matugen
picks a sensible source color by default, so we just drop the flag.
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
AdGuard's recent config schema added an enabled flag on each rewrite
that defaults to false. Without it, the *.nordhammer.it -> 10.0.0.1
rules were silently disabled, so LAN clients resolved their own
domains to the public DDNS IP and tripped over NAT loopback.
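The fix in Nix terms, with `enabled = true` set explicitly on each rewrite (settings path per the services.adguardhome module; AdGuard's schema names do drift between releases):

```nix
{
  services.adguardhome.settings.filtering.rewrites = [
    { domain = "*.nordhammer.it"; answer = "10.0.0.1"; enabled = true; }
  ];
}
```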
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
The Sonarr 4K profile is sonarr-v4-quality-profile-web-2160p in TRaSH's
recyclarr templates — uhd-bluray-web exists for German content only.
The English UHD profile is WEB-only and named "WEB-2160p", so update
the include list and the AV1-ban score assignment to match.
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
Two failing services after the channel switch.
automatic-timezoned has been polkit-blocked since well before the
switch — replace with a static Europe/London timezone. Hosts that
travel can override locally if needed.
The vendored crowdsec module's setup unit chowns its config dir to
the (DynamicUser-allocated) crowdsec user via an `ExecStartPre=+` (run-as-root) hack.
On stable's systemd the dynamic user isn't visible to chown via NSS
at that point, so it fails with 'invalid user'. Declaring crowdsec
as a static system user makes systemd use it (DynamicUser becomes a
no-op) and the chown resolves cleanly.
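Both fixes sketched together (the crowdsec group name is an assumption):

```nix
{
  time.timeZone = "Europe/London"; # replaces automatic-timezoned

  users.users.crowdsec = {
    isSystemUser = true;
    group = "crowdsec";
  };
  users.groups.crowdsec = { };
}
```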
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
NVIDIA 535's kernel module won't compile against newer kernel branches — the
DMA mapping API changed in a way the 535 source doesn't handle.
6.12 LTS is the highest kernel branch that's a well-tested combo
with the 535 driver, which we need on stable's nixpkgs to retain
Maxwell support for Jellyfin NVENC.
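The pin itself is one line:

```nix
{ pkgs, ... }: {
  # Pin the kernel branch that still builds the 535 module.
  boot.kernelPackages = pkgs.linuxPackages_6_12;
}
```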
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
legacy_580 only exists on unstable nixpkgs and isn't backported to
25.11. The Maxwell GM206 (Quadro M2000) is supported through the
535.x branch — last production driver to ship Maxwell support — so
this is a clean swap with no expected impact on Jellyfin's NVENC.
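The swap, sketched; the `legacy_535` attribute name is an assumption about the nixpkgs revision in use:

```nix
{ config, ... }: {
  hardware.nvidia.package = config.boot.kernelPackages.nvidiaPackages.legacy_535;
}
```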
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
The mediaserver kept hard-freezing on local builds (gnupg, openldap,
deno/rusty-v8) whenever a fresh unstable revision outran Hydra's
binary cache. It doesn't need bleeding-edge packages — every service
it runs is mature enough that 6-month-old versions are fine — so move
it onto the stable channel where the cache is essentially always
warm. Gaming and Macbook stay on unstable for fresh GPU/kernel work.
Implementation: add nixpkgs-stable + home-manager-stable inputs,
parameterise mkHost to accept a (nixpkgs, home-manager) pair.
Drive-by:
- Switch homepage.nix from environmentFiles (plural, unstable-only)
to environmentFile (singular, present on both channels).
- Gate the openldap-skip-tests overlay to non-mediaserver hosts so
it doesn't force a local rebuild on stable, where openldap is
always cached.
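A hypothetical shape of the parameterised mkHost; the names mirror the flake inputs described above, not the repo's actual code:

```nix
let
  mkHost = name: { nixpkgs, home-manager }:
    nixpkgs.lib.nixosSystem {
      modules = [
        ./hosts/${name}
        home-manager.nixosModules.home-manager
      ];
    };
in {
  mediaserver = mkHost "mediaserver" {
    nixpkgs = nixpkgs-stable;
    home-manager = home-manager-stable;
  };
  gaming = mkHost "gaming" { inherit nixpkgs home-manager; };
}
```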
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
bazaar is a Flathub GUI app store — it has no business on the headless
mediaserver, where it was also pulling flatpak in transitively and
inflating local builds.
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
The mediaserver (56 cores, 31 GiB RAM, no swap) was hard-freezing on
local builds of gnupg/openldap because Nix defaulted max-jobs=auto and
launched ~56 parallel gcc compilations, blowing past available memory
and OOM-stalling AdGuard.
Cap parallelism (max-jobs=4, cores=8 per build) and add zramSwap as a
compressed in-memory safety net so a build storm can't take services
with it.
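The cap and the safety net together:

```nix
{
  # 4 concurrent derivations x 8 build cores each, instead of ~56 at once.
  nix.settings = {
    max-jobs = 4;
    cores = 8;
  };
  zramSwap.enable = true; # compressed in-RAM swap as an OOM safety net
}
```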
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
Heavy local builds (gnupg/openldap checkPhase under a freshly-bumped
nixpkgs lock) were saturating CPU and starving AdGuard on the
mediaserver, making DNS effectively unresponsive until the build
finished or got cancelled.
Halving the daemon's CPU share leaves headroom for latency-sensitive
services without meaningfully slowing builds on an otherwise idle box.
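One way to express "half the share", via cgroup weight (default 100); this is a sketch of the mechanism, not necessarily the exact knob used. Weight only bites under contention, which is why an otherwise idle box still builds at full speed:

```nix
{
  systemd.services.nix-daemon.serviceConfig.CPUWeight = 50;
}
```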
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
Recyclarr now manages quality definitions via TRaSH templates, so the
hand-rolled minSize=10 floor is redundant — every sync would overwrite
it anyway.
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
Score-based release filtering replaces the brittle "minimum size" approach
— good HEVC encodes from reputable groups now win regardless of file
size, while obfuscated/no-group/lazy-x265 garbage gets banned.
Profiles installed:
Sonarr: WEB-1080p (default), UHD Bluray + WEB (per-show opt-in)
Radarr: HD Bluray + WEB (default), UHD Bluray + WEB (per-movie opt-in)
AV1 is banned across all four profiles since the GPU lacks hardware
decode. API keys are extracted at runtime from each *arr's config.xml,
matching the arr-interconnect pattern.
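How the runtime extraction might look; the path is Sonarr's NixOS default and the unit hook is illustrative, per the arr-interconnect pattern noted above:

```nix
{ pkgs, ... }: {
  systemd.services.recyclarr.preStart = ''
    sonarr_key=$(${pkgs.gnused}/bin/sed -n \
      's:.*<ApiKey>\(.*\)</ApiKey>.*:\1:p' \
      /var/lib/sonarr/.config/NzbDrone/config.xml)
  '';
}
```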
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
test017-syncreplication-refresh is timing-flaky and fails reliably on
local builds when Hydra's binary cache hasn't yet served the upstream
artifact. Overlay sets doCheck=false so the build can proceed. Remove
once the substituter catches up to the pinned nixpkgs revision.
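The overlay in full, skipping openldap's test suite:

```nix
final: prev: {
  openldap = prev.openldap.overrideAttrs (_: { doCheck = false; });
}
```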
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
LAN has no v6 route, so AAAA lookups succeed but connect fails. NM's
connectivity probe was reporting "limited" at boot (GNOME's "?" icon)
until the next 5-min repoll cleared it.
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
NixOS's nftables module rebuilds the tables it owns on every activation,
which previously wiped Docker's DOCKER/PREROUTING chains in ip nat
(both Docker and the router were defining 'ip nat'). Renaming our
table sidesteps the collision — kernel hooks across separate tables
at the same priority all run, so functionality is unchanged.
Eliminates the need to run 'systemctl restart docker' after every
nixos-rebuild to restore container port-forwards.
The forward rule only accepted iifname=eno1 oifname=eth0 ct status=dnat,
which worked when port-forwards always landed on a LAN host. Docker
DNAT routes to docker0, so external traffic to 26900 was being DNAT'd
correctly but then dropped at the forward filter. Drop the oifname
constraint — the prerouting DNAT rule already controls what gets
forwarded; the filter doesn't need to second-guess it.
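The relaxed rule as it might read; the table/chain framing here is illustrative, the real names are whatever this config's nftables table already uses:

```nix
{
  networking.nftables.ruleset = ''
    table inet filter {
      chain forward {
        type filter hook forward priority filter; policy drop;
        iifname "eno1" ct status dnat accept comment "DNAT'd WAN traffic, any egress"
      }
    }
  '';
}
```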
CrowdSec reads the ntfy topic URL from /var/secrets/ntfy-url at eval
time via builtins.readFile. Pure flake mode forbids reading paths
outside the source tree, so without --impure the read silently falls
through to the placeholder URL on every rebuild. Adding --impure to
both build and switch keeps the secret-file pattern working.
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
CrowdSec covers the same surface (sshd, authelia, nginx, *arr apps,
qBit) with the addition of community-sourced threat intel and ntfy
push alerts. Keeping both was redundant. State at /var/lib/fail2ban
will sit unused until cleaned up by hand.
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
A DynamicUser service can read only its own journald entries by default, so the
sshd + authelia journalctl acquisitions were dying with "insufficient
permissions" and exit status 1 from the spawned journalctl process.
Adding systemd-journal grants the read access journald gates on group
membership, restoring the ssh-bf / authelia-bf detection chain.
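The raw systemd equivalent of the module's extraGroups knob:

```nix
{
  systemd.services.crowdsec.serviceConfig.SupplementaryGroups = [ "systemd-journal" ];
}
```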
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
The vinanrra image's mode numbers are: 0=Install+STOP, 1=Start,
2=Update+STOP, 3=Update+Start, 4=Backup+STOP. I picked 2 thinking
it meant "Only Start", which is why the container kept exiting
cleanly after each update check. Mode 1 just starts the server,
which matches what the main 7dtd container uses.
SteamCMD anonymous install fails with "Missing configuration" on a
fresh coop dir. The main 7dtd works because its binaries were
installed long ago and LinuxGSM skips the SteamCMD step. Same trick
for coop: rsync the binaries over and start-only, no update path.
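A sketch of the start-only coop container; the START_MODE values are from the vinanrra README, and the mount point is an assumption:

```nix
{
  virtualisation.oci-containers.containers."7dtd-coop" = {
    image = "vinanrra/7dtd-server";
    environment.START_MODE = "1"; # 1 = Start, not 2 = Update+STOP
    volumes = [ "/var/lib/7dtd-coop:/home/sdtdserver" ];
  };
}
```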
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
Container outbound (image pulls, LinuxGSM bootstrap fetches) was
dropped by the inet filter forward chain — only eth0 and DNAT'd
WAN traffic were whitelisted. Add iifname "docker0" accept so
containers can reach the internet.
Also add the coop server's 26910/26911-26912 forwards to ports.toml
so WAN players can connect.
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
Second container 7dtd-coop with its own /var/lib/7dtd-coop state dir
and a configure unit that patches the server as unlisted, 2-player,
distinct world seed.
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
The agent runs as a systemd DynamicUser and was failing the nginx
acquisition with "No matching files for pattern /var/log/nginx/access.log"
because access.log is nginx:nginx 640 — readOnlyPaths handles sandbox
visibility but not Unix perms. extraGroups = [ "nginx" ] gets it past
the group bit.
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
Upstream nixpkgs builds only cmd/crowdsec and cmd/crowdsec-cli; the
PR #446307 module's setup script expects notification plugins at
$package/libexec/crowdsec/plugins/notification-*, causing first-start
failure (cannot stat notification-dummy). Add the cmd/notification-*
subpackages and move the resulting binaries into the libexec layout the
module expects.
Drop this override along with the vendored modules once the PR lands —
nixpkgs will need a matching package update for the rewrite to work.
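A sketch of the package override; plugin names beyond notification-dummy are illustrative:

```nix
final: prev: {
  crowdsec = prev.crowdsec.overrideAttrs (old: {
    subPackages = (old.subPackages or [ ]) ++ [
      "cmd/notification-dummy"
      "cmd/notification-http"
    ];
    postInstall = (old.postInstall or "") + ''
      mkdir -p $out/libexec/crowdsec/plugins
      mv $out/bin/notification-* $out/libexec/crowdsec/plugins/
    '';
  });
}
```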
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
The upstream NixOS crowdsec module fails on first deploy ("no API client
section in configuration") because it doesn't auto-register LAPI
credentials. The rewrite in NixOS/nixpkgs#446307 (TornaxO7's branch) adds
a setup oneshot that runs `cscli machines add --auto` if the credentials
file is missing, and handles DynamicUser StateDirectory permissions
explicitly. The bouncer rewrite gets matching auto-registration.
Vendor both module files locally and disable the upstream copies. Drop
modules/crowdsec/ and the disabledModules+imports lines once the PR
merges into nixpkgs unstable.
Config moves to the new unified `settings` API (no more separate
`localConfig`); LAPI moved to 127.0.0.1:8081 to dodge the qBit collision.
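The vendoring wiring, sketched (the upstream module path is an assumption):

```nix
{
  disabledModules = [ "services/security/crowdsec.nix" ];
  imports = [ ./modules/crowdsec ];
}
```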
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
Enables the CrowdSec agent with sshd/nginx/http-cve hub collections,
acquires logs from nginx, sshd, and Authelia journald, and wires the
firewall bouncer to enforce bans via nftables. Alerts are POSTed to a
self-chosen ntfy.sh topic (URL read from /var/secrets/ntfy-url, falls
back to a placeholder so the repo stays eval-clean without the secret).
Module is self-contained — remove the file + import to uninstall; state
lives under /var/lib/crowdsec.
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
LAN is 10.0.0.0/24 since the router cutover; the 192.168 range was
a leftover from the eero-bridge era and no longer matches any host.
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
- nginx: strip Referer on torrent.nordhammer.it so qBit's origin check
doesn't reject the post-Authelia redirect (Referer was auth.nordhammer.it,
Host was torrent.nordhammer.it → 401 loop).
- tmpfiles: collapse the nested qbittorrent `d` rules into a single
`d` + recursive `Z` so systemd re-enforces ownership/perms on every
boot. Caught Docker-migration UID drift that silently broke state
persistence and file logging.
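The collapsed rules: one create rule plus one recursive re-enforce rule (`Z` re-applies ownership and mode on every boot):

```nix
{
  systemd.tmpfiles.rules = [
    "d /var/lib/qbittorrent 0750 qbittorrent qbittorrent -"
    "Z /var/lib/qbittorrent 0750 qbittorrent qbittorrent -"
  ];
}
```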
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
qBittorrent's auth logic is "no SID cookie → bypass for localhost; SID
cookie present → validate it." If the browser has a stale SID from an
earlier session, qBit fails validation and returns 401 even though the
connection is from 127.0.0.1 and bypass is enabled.
Strip both directions: drop the client's Cookie header on the way in so
qBit never sees an SID, and hide Set-Cookie on the way back so the
browser never accumulates one in the first place.
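Both directions of the strip, as location-level nginx config:

```nix
{
  services.nginx.virtualHosts."torrent.nordhammer.it".locations."/".extraConfig = ''
    proxy_set_header Cookie "";   # qBit never sees a stale SID
    proxy_hide_header Set-Cookie; # browser never stores a new one
  '';
}
```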
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
Sonarr/Radarr/Bazarr default to DisabledForLocalAddresses so that requests
coming via the nginx reverse proxy (from 127.0.0.1) skip the app's own
login, leaving Authelia as the single gate. Prowlarr defaults to Enabled,
which produces a 401 behind Authelia.
Idempotent: only rewrites config.xml + restarts prowlarr when it finds
the "Enabled" value; logs a no-op otherwise. Added pkgs.systemd to PATH
so the restart call works.
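A sketch of the idempotent fixer; the unit name and script structure are assumptions, the config path is Prowlarr's NixOS default:

```nix
{ pkgs, ... }: {
  systemd.services.prowlarr-auth-fix.script = ''
    cfg=/var/lib/prowlarr/config.xml
    if grep -q '<AuthenticationRequired>Enabled<' "$cfg"; then
      ${pkgs.gnused}/bin/sed -i \
        's:<AuthenticationRequired>Enabled<:<AuthenticationRequired>DisabledForLocalAddresses<:' "$cfg"
      ${pkgs.systemd}/bin/systemctl restart prowlarr.service
    else
      echo "AuthenticationRequired already relaxed; nothing to do"
    fi
  '';
}
```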
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
Only Jellyfin and the Authelia portal itself stay unprotected externally
(Jellyfin because it's streamed to remote clients; Authelia because it
is the login gate). Everything else (sonarr, radarr, bazarr, prowlarr,
torrent/qBittorrent, games, search) now goes through Authelia forward auth.
Internal integrations (Homepage widgets, Prowlarr → Sonarr/Radarr,
Bazarr → Sonarr/Radarr, transcode-hevc qBit queries) use 127.0.0.1:PORT
directly, so they are unaffected.
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>