Compare commits

..

54 commits

Author SHA1 Message Date
b725bf1773 chore: ruff format
All checks were successful
Build ISO / build-iso (push) Successful in 18m3s
CI / lint (push) Successful in 27s
CI / test (push) Successful in 1m22s
CI / validate-json (push) Successful in 25s
CI / markdown-links (push) Successful in 13s
Release / release (push) Successful in 12m13s
Whitespace-only — `ruff check` was green when 26.17-alpha shipped but
I forgot to run `ruff format`, so the CI format-check job went red on
the release commit. Runtime artifacts are unaffected (release.yml
doesn't gate on lint); this just re-greens the main baseline going
forward.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-11 20:10:04 +02:00
8e1f817d85 feat(apps): app-to-app dependencies with install + start hooks
Some checks failed
Build ISO / build-iso (push) Successful in 21m39s
CI / lint (push) Failing after 28s
CI / test (push) Successful in 1m29s
CI / validate-json (push) Successful in 24s
CI / markdown-links (push) Successful in 14s
Release / release (push) Successful in 12m2s
Manifests gain an optional `requires` array. Each entry points at
another app and may declare `on_install` + `on_start` hook scripts
that live in the *provider's* folder and run inside its container
via `docker compose exec`. Hook stdout (KEY=VALUE + optional
FURTKA_JSON: sentinel) gets merged into the consumer's .env; the
placeholder-secret check re-runs over the merged file. Provider apps
that aren't installed get auto-installed first (topo order, cycle
detection, explicit UI confirm). Removing an app is blocked while
other installed apps require it. Reconcile now visits apps in
dependency order so consumers' on_start hooks fire against already-up
providers; per-app error isolation skips just the offending consumer's
compose_up.
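
The ordering pass described above can be sketched as a depth-first topological sort with cycle detection. This is an illustrative sketch, not the real furtka code: the helper name and the exact manifest shape (`requires` as a list of `{"app": name}` entries) are assumptions.

```python
def install_order(app, manifests):
    """Return apps in dependency order (providers first), raising on cycles.

    `manifests` maps app name -> manifest dict with an optional
    "requires" list of {"app": <name>, ...} entries (illustrative shape).
    """
    order, visiting, done = [], set(), []

    def visit(name):
        if name in done:
            return
        if name in visiting:
            raise ValueError(f"dependency cycle involving {name!r}")
        visiting.add(name)
        for dep in manifests[name].get("requires", []):
            visit(dep["app"])  # providers are visited (and listed) first
        visiting.remove(name)
        done.append(name)
        order.append(name)

    visit(app)
    return order
```

With providers first in the returned list, auto-install and reconcile can simply walk it front to back.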

Release 26.17-alpha.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-11 19:39:10 +02:00
863ffa9737 chore: release 26.16-alpha
All checks were successful
Build ISO / build-iso (push) Successful in 18m12s
Deploy site / deploy (push) Successful in 3s
CI / lint (push) Successful in 28s
CI / test (push) Successful in 1m21s
CI / validate-json (push) Successful in 24s
CI / markdown-links (push) Successful in 13s
Release / release (push) Successful in 12m13s
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-10 12:59:27 +02:00
ee132712be docs: sync READMEs with 26.15 HTTPS opt-in + boot-USB filter
All checks were successful
Build ISO / build-iso (push) Successful in 24m38s
CI / lint (push) Successful in 1m1s
CI / test (push) Successful in 2m42s
CI / validate-json (push) Successful in 58s
CI / markdown-links (push) Successful in 28s
- README roadmap: Local HTTPS Phase 1 entry now reflects the 26.15
  opt-in model (default off, toggle in /settings) instead of the
  26.4 auto-trust story.
- README + iso/README: boot-USB filtering is no longer a TODO; both
  files now describe the implemented `findmnt`/`PKNAME` behaviour.
- iso/README rough edges: drop the boot-USB bullet (closed) and
  re-word the wizard-still-HTTP-only bullet to match the 26.15 toggle
  flow (it was a stale dup of the same line under it).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-28 12:09:33 +02:00
1193504a1e perf(site): gzip CSS, JS, SVG and fonts on the furtka.org nginx
Default nginx only gzips text/html, so the homepage HTML was the only
asset coming back compressed. The ~600 KB three.min.js bundle (and the
hashed CSS) were being shipped uncompressed across the public openresty
proxy. `gzip_types` now covers css/js/json/xml/svg/woff2.
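
The directive shape implied here would look roughly like the fragment below; the exact MIME type names and surrounding `http`/`server` context are assumptions (the message only lists the covered asset classes). Note nginx always gzips `text/html` regardless of `gzip_types`.

```nginx
gzip on;
gzip_types
    text/css
    application/javascript
    application/json
    application/xml
    image/svg+xml
    font/woff2;
```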

Needs `sudo ops/nginx/setup-vm.sh` on forge-runner-01 to take effect —
the site-deploy workflow only rebuilds Hugo, it doesn't touch the
nginx config.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-28 12:09:26 +02:00
65d48c92f8 feat(installer): filter the boot USB out of the install drive picker
On bare-metal installs, `lsblk` reports the USB stick the live ISO
booted from as TYPE=disk, so it showed up in the drive picker
alongside the real install target — a user could in theory pick the
USB they had just booted from. `findmnt /run/archiso/bootmnt` resolves
the boot partition and `lsblk -no PKNAME` walks it up to the parent
disk; that disk is dropped before scoring. On a normal box neither
file nor mountpoint exist and the picker is unchanged.
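
A minimal sketch of that resolution chain, wrapping the two commands quoted above (the helper name is made up; the real logic lives in the installer):

```python
import subprocess

def boot_usb_parent_disk():
    """Return the parent disk (e.g. 'sda') of the live-ISO boot USB, or
    None on a normal box where /run/archiso/bootmnt isn't mounted.
    Sketch of the behaviour described above, not the shipped code.
    """
    try:
        # Resolve the boot partition the archiso live session mounted.
        src = subprocess.run(
            ["findmnt", "-no", "SOURCE", "/run/archiso/bootmnt"],
            capture_output=True, text=True, check=True,
        ).stdout.strip()
        if not src:
            return None
        # Walk the partition up to its parent disk (PKNAME).
        return subprocess.run(
            ["lsblk", "-no", "PKNAME", src],
            capture_output=True, text=True, check=True,
        ).stdout.strip() or None
    except (OSError, subprocess.CalledProcessError):
        # No archiso mount (or no findmnt/lsblk): picker is unchanged.
        return None
```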

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-28 12:09:19 +02:00
aa7dea0528 feat(site): pimp homepage with animated 3D background and scroll reveals
Some checks failed
CI / lint (push) Successful in 1m24s
CI / test (push) Successful in 2m24s
CI / validate-json (push) Successful in 57s
CI / markdown-links (push) Successful in 29s
Deploy site / deploy (push) Successful in 7s
Build ISO / build-iso (push) Failing after 14m59s
Adopts the visual feel of Pascal's prototype while keeping Furtka's
voice, brand palette, and bilingual structure intact.

What changed
- Three.js wireframe torus-knot behind the hero, color/opacity tied
  to the existing --accent / --scene-opacity CSS vars so light and
  dark modes both work without a scene re-init.
- Scroll-driven camera zoom + core scale + tilt; canvas opacity fades
  past hero so feature cards stay readable.
- GSAP + ScrollTrigger reveal hero on load and stagger feature cards
  in as they enter the viewport. Lenis smooths scroll.
- "What works today" / "What's coming next" lists move from markdown
  bullets into front-matter arrays and render as scroll-reveal cards
  (7 + 4 cards, EN/DE parallel; copy is 1:1 from the original lists).
- Hero scaled up: gradient text on the wordmark (fg → accent),
  drop-shadow glow in dark mode, brighter lede color.
- Primary CTA → /releases listing on Forgejo (Forgejo has no
  /releases/latest), with a pulsing glow + arrow slide on hover.
- Version bump 26.8-alpha → 26.15-alpha to match the actual release.

Performance / a11y
- Vendor JS (Three.js r128, GSAP 3.12.2 + ScrollTrigger, Lenis 1.0.33)
  vendored locally under assets/js/vendor/ - no third-party CDN at
  runtime. ~728 KB total, fingerprinted via Hugo's pipeline with SRI.
- Canvas + scripts gated to homepage only ({{ if .IsHome }}); the
  Impressum/Datenschutz pages stay plain.
- prefers-reduced-motion: scene + GSAP early-return, CSS forces cards
  to their resting state. No-JS users see all content.
- All scripts deferred so first paint isn't blocked.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-27 16:14:21 +02:00
1cff22658b feat(auth): rate-limit failed logins with per-(user, IP) lockout
All checks were successful
CI / lint (push) Successful in 1m59s
CI / test (push) Successful in 3m27s
CI / validate-json (push) Successful in 1m56s
CI / markdown-links (push) Successful in 1m24s
Build ISO / build-iso (push) Successful in 26m58s
Ten wrong passwords from the same (username, client-IP) tuple within
15 minutes now return 429 with Retry-After for the next 15 minutes;
authenticate() isn't even called while locked, so the 429 response is
identical whether the password would have been correct — no oracle.

Tuple keying prevents an attacker on one IP from locking the real
admin out of their own box: a different IP (or an ISP reconnect) keeps
them in. The client IP comes from the rightmost X-Forwarded-For entry,
which is what Caddy appends and is thus trustworthy (no upstream proxy in
front of Caddy). First-run setup bypasses the lockout — otherwise a
clumsy operator could lock themselves out before an admin exists.

State is in-memory (parallel to SessionStore), so `systemctl restart
furtka` clears a stuck lockout.
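
The lockout logic above can be sketched as an in-memory sliding window keyed on the (username, client-IP) tuple. Names and shapes here are assumptions for illustration; the injectable clock just makes the sketch testable.

```python
import time

WINDOW = 15 * 60   # seconds: both the failure window and the lockout
MAX_FAILURES = 10

class LoginLimiter:
    """Per-(username, client_ip) failure tracker — a sketch of the
    lockout described above, not the shipped implementation."""

    def __init__(self, now=time.monotonic):
        self._now = now
        self._failures = {}   # (user, ip) -> [monotonic timestamps]

    def locked_for(self, user, ip):
        """Seconds to report in Retry-After, or 0 if the tuple may try."""
        now = self._now()
        ts = [t for t in self._failures.get((user, ip), []) if t > now - WINDOW]
        self._failures[(user, ip)] = ts
        if len(ts) >= MAX_FAILURES:
            return int(ts[-1] + WINDOW - now) + 1
        return 0

    def record_failure(self, user, ip):
        self._failures.setdefault((user, ip), []).append(self._now())
```

The handler would check `locked_for()` before ever calling `authenticate()`, which is what keeps the 429 identical for right and wrong passwords.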

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-22 17:27:14 +02:00
e68ed279cc fix(https): make HTTPS opt-in to stop the BAD_SIGNATURE trap on fresh installs
All checks were successful
Build ISO / build-iso (push) Successful in 17m23s
CI / lint (push) Successful in 27s
CI / test (push) Successful in 1m2s
CI / validate-json (push) Successful in 24s
CI / markdown-links (push) Successful in 15s
Release / release (push) Successful in 11m34s
Every Furtka since 26.5 shipped a Caddyfile with a
`__FURTKA_HOSTNAME__.local { tls internal }` site block, so every
first boot auto-generated a fresh self-signed CA + intermediate +
leaf. That worked for the first-ever Furtka user, but every reinstall
(or second box on the same LAN) produced a new CA whose intermediate
shared the fixed CN `Caddy Local Authority - ECC Intermediate` with
the previous one. Firefox caches intermediates by CN across profiles
— even private windows share cert9.db — so any visitor who had
trusted an older Furtka's CA got a cached intermediate with
mismatched keys when they hit the new box, producing
`SEC_ERROR_BAD_SIGNATURE`. Unlike UNKNOWN_ISSUER, Firefox has NO
"Advanced → Accept Risk" bypass for BAD_SIGNATURE, so fresh-install
boxes were effectively unreachable over HTTPS in any browser that
had ever seen a previous Furtka.

Validated live on the .46 test VM: fresh 26.14 ISO install → Firefox
hits BAD_SIGNATURE on https://furtka.local/ (even in private mode).
Chromium bypasses it via mDNS failure but the issue is the same.
openssl verify on the box confirms the chain is internally valid —
this is purely client-side cache pollution across boxes.

Fix:
- assets/Caddyfile: removed the hostname site block. Default install
  serves :80 only — https://furtka.local connection-refuses, which is
  a normal error every browser handles instead of the unbypassable
  crypto fault. Added top-level import of
  /etc/caddy/furtka-https.d/*.caddyfile so the /settings HTTPS toggle
  can drop a listener snippet there when a user explicitly opts in.
- furtka/https.py: set_force_https now writes TWO snippets atomically
  — the top-level hostname + tls internal block (enables :443) and
  the :80-scoped redirect (forces HTTP→HTTPS). Disable removes both.
  Reload failure rolls both back. Added _read_hostname + _https_snippet_content
  helpers with `/etc/hostname` → 'furtka' fallback so a missing
  hostname file doesn't produce an empty site block Caddy rejects.
- furtka/https.py::status: force_https now reads the listener
  snippet (was reading the redirect snippet). A redirect without a
  listener isn't actually HTTPS being served, so the listener is the
  authoritative "HTTPS is on" signal.
- furtka/updater.py: new _maybe_migrate_preserve_https hook runs
  inside _refresh_caddyfile on the 26.14 → 26.15 transition. If the
  box had the redirect snippet on disk (user had opted into HTTPS
  under the old regime), it writes the new listener snippet too so
  HTTPS keeps working after the Caddyfile swap removes the hostname
  block.
- webinstaller/app.py: post-install creates /etc/caddy/furtka-https.d/
  alongside /etc/caddy/furtka.d/ so the glob import can't trip an
  older Caddy on a missing path during the first reload.

Live-tested on .46: set_force_https(True) writes both snippets, Caddy
reloads, :443 listener comes up with fresh CA, curl -k returns 302,
HTTP 301-redirects. set_force_https(False) removes both snippets
atomically, :443 goes back to connection-refused.

Tests: test_https.py expanded from 13 to 15 cases. Toggle-on asserts
both snippets written + hostname substituted. Toggle-off asserts
both removed. Rollback cases verify BOTH snippets restore on reload
failure. New test_https_snippet_content_has_tls_internal_and_routes
locks the exact shape of the listener block.
test_webinstaller_assets.py: updated two old asserts that assumed
hostname block was in Caddyfile; new test_post_install_creates_https_snippet_dir
guards the new directory.

276 tests pass, ruff check + format clean.

Known remaining wart (documented in CHANGELOG): a browser that
trusted a prior Furtka CA still hits BAD_SIGNATURE on this box's
HTTPS after enabling it, because the fixed intermediate CN is a
Caddy-side limitation. Workaround: clear cert9.db or visit in a
fresh profile. Won't affect end users with one Furtka box ever.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-21 19:30:04 +02:00
26f0424ae3 fix: auth-guard / and /settings, add Logout link to static navs
All checks were successful
Build ISO / build-iso (push) Successful in 17m14s
CI / lint (push) Successful in 26s
CI / test (push) Successful in 1m2s
CI / validate-json (push) Successful in 24s
CI / markdown-links (push) Successful in 15s
Release / release (push) Successful in 11m26s
Since 26.11 shipped login, two of the three nav pages were secretly
unauthenticated. The Caddyfile only reverse-proxied /api/*, /apps*,
/login*, /logout* to the Python auth-gated handler. Everything else —
including / (landing page) and /settings/ — fell through to Caddy's
catch-all file_server straight out of assets/www/, skipping the
session check entirely.

LAN visitor effect: they could read the box's hostname, IP, Furtka
version, uptime, and see all the Update-now / Reboot / HTTPS-toggle
buttons on /settings/. The API calls those buttons fired were
themselves 401-gated so nothing actually happened — but the info leak
plus "looks open" UX was real. Caught in the 26.13 SSH test session
when the user noticed Logout only appeared in the nav on /apps, and
not on / or /settings/.

Fix:
- Caddyfile: new `handle /settings*` and `handle /` blocks in the
  shared `(furtka_routes)` snippet reverse-proxy to localhost:7000,
  so both hit the Python auth-guard before the HTML goes out.
- api.py: new `_serve_static_www(relative_path)` helper reads
  assets/www/{index.html, settings/index.html} with a path-traversal
  clamp (resolved path must stay under static_www_dir). `do_GET`
  routes `/` and `/settings[/]` to it. Removed the `/` branch from
  the old combined-with-/apps line — those are different pages now.
- paths.py: new `static_www_dir()` helper with `FURTKA_STATIC_WWW`
  env override for tests.
- assets/www/*.html: both nav bars get the Logout link + a shared
  `doLogout()` inline script matching the _HTML pattern. Users never
  see the link unauthed (the Python handler 302s them before the
  page renders), but authed users get consistent navigation across
  all three pages.

Tests: 5 new cases in test_api.py — unauth / redirects, unauth
/settings redirects (both trailing-slash and not), authed / serves
index.html, authed /settings serves settings/index.html,
regression guard that / and /apps serve different content.
Existing test updated (the one that used / as a proxy for /apps).

Static /style.css, /rootCA.crt, /status.json, /furtka.json,
/update-state.json stay served by Caddy's catch-all — those are
public by design (login page needs style.css, fresh users need the
CA to trust HTTPS, runtime JSON is metadata not creds).

272 tests pass, ruff check + format clean.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-21 18:16:42 +02:00
8c1fd1da2b fix: unbreak upgrade path + install-lock race
All checks were successful
Build ISO / build-iso (push) Successful in 17m28s
CI / lint (push) Successful in 27s
CI / test (push) Successful in 59s
CI / validate-json (push) Successful in 23s
CI / markdown-links (push) Successful in 15s
Release / release (push) Successful in 11m38s
Three interlocking issues that made 26.11/26.12 effectively
un-upgradable from pre-auth versions without manual pacman +
symlink surgery. Caught while SSH-testing the .196 VM which landed
on a rollback loop after every Update-now click.

1. auth.py imported werkzeug.security, but the target system runs
   core as bare system Python — neither flask nor werkzeug are
   pip-installed. Fresh 26.11+ boxes died on import. Replaced with
   a 50-line stdlib `furtka/passwd.py` using hashlib.pbkdf2_hmac
   for new hashes and parsing werkzeug's `scrypt:N:r:p$salt$hex`
   format for backward-read so existing users.json survives.

2. updater._health_check pinged /api/apps expecting 200. Post-
   auth, /api/apps returns 401 for unauth requests → HTTPError
   caught as URLError → retry loop → 30s timeout → rollback. Now
   any 2xx-4xx counts as "server alive"; only 5xx / connection
   errors fail. Server responding at all is proof it came back up.

3. _do_install released the fcntl lock between sync pre-validation
   and the systemd-run dispatch. A second POST could slip in,
   pass the lock check, return 202, and leave its install-bg child
   to die silently on the in-child lock. Now the API also reads
   install-state.json and refuses 409 on non-terminal stages —
   the state file is the reliable signal, the fcntl lock is
   defence in depth.
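
The stdlib hashing half of point 1 can be sketched with `hashlib.pbkdf2_hmac`; the serialized shape below is modelled on werkzeug's `pbkdf2:sha256:<iter>$<salt>$<hex>` format so both libraries can read it. The iteration count is an assumption, and the scrypt backward-read path the commit mentions is omitted here.

```python
import hashlib
import hmac
import secrets

ITERATIONS = 600_000  # assumption; roughly werkzeug's current default

def hash_password(password):
    """Produce a pbkdf2:sha256:<iter>$<salt>$<hex> string (sketch)."""
    salt = secrets.token_hex(16)
    dk = hashlib.pbkdf2_hmac("sha256", password.encode(), salt.encode(), ITERATIONS)
    return f"pbkdf2:sha256:{ITERATIONS}${salt}${dk.hex()}"

def check_password(stored, password):
    """Verify a password against the stored string, constant-time."""
    method, salt, hexdigest = stored.split("$", 2)
    _, algo, iters = method.split(":")
    dk = hashlib.pbkdf2_hmac(algo, password.encode(), salt.encode(), int(iters))
    return hmac.compare_digest(dk.hex(), hexdigest)
```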

Test coverage:
- tests/test_passwd.py (new, 6 cases): roundtrip, salt uniqueness,
  format shape, werkzeug scrypt backward-compat against a real
  hash captured from the .196 box, malformed + non-string
  rejection.
- tests/test_updater.py: +3 cases for _health_check — 4xx=healthy,
  5xx=unhealthy, URLError retry loop.
- tests/test_api.py: +2 cases for install 409 on non-terminal
  state + 202 after terminal.

All 267 tests green, ruff check + format clean.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-21 17:03:28 +02:00
f3cd9e963c feat(install): async background install with progress polling
All checks were successful
Build ISO / build-iso (push) Successful in 17m24s
CI / lint (push) Successful in 26s
CI / test (push) Successful in 43s
CI / validate-json (push) Successful in 24s
CI / markdown-links (push) Successful in 16s
Release / release (push) Successful in 11m34s
POST /api/apps/install now returns 202 Accepted after the synchronous
pre-validation (resolve source, copy files, write .env, check for
placeholder secrets, validate path-type settings). The docker-facing
phases (compose pull → ensure volumes → compose up) are dispatched as
a background systemd-run unit (furtka-install-<app>) that writes stage
transitions to /var/lib/furtka/install-state.json. The UI polls
GET /api/apps/install/status every 1.5s and re-labels the modal
submit button — "Image wird heruntergeladen…" →
"Speicherbereiche werden erstellt…" → "Container wird gestartet…" —
instead of sitting dead on "Installing…" for 30+ seconds on large
images like Jellyfin.

Mirrors the exact shape of /api/catalog/sync/apply and
/api/furtka/update/apply: same fcntl lock, same atomic state-file
writes, same terminal-state poll loop ("done" | "error"). New CLI
subcommand `furtka app install-bg <name>` is what systemd-run invokes;
it's hidden from --help because regular CLI users still want the
synchronous `furtka app install <name>`.
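
The atomic state-file writes plus terminal-state poll can be sketched with the usual tmp+rename pattern (field names are illustrative; the real writer lives in the install runner):

```python
import json
import os
import tempfile

TERMINAL = {"done", "error"}

def write_stage(state_path, stage, **extra):
    """Atomically replace the state file with the new stage — the
    tmp+rename pattern described above (illustrative field names)."""
    state = {"stage": stage, **extra}
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(state_path) or ".")
    with os.fdopen(fd, "w") as f:
        json.dump(state, f)
    os.replace(tmp, state_path)   # atomic on POSIX; pollers never see a torn file
    return state

def is_terminal(state_path):
    """What the 1.5s UI poll loop checks to stop re-labelling the button."""
    with open(state_path) as f:
        return json.load(f).get("stage") in TERMINAL
```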

Reinstall button on the app list polls too — after dispatch, its text
reflects the background stage until terminal, matching the modal
flow.

Tests:
- tests/test_install_runner.py (new, 9 cases): state roundtrip, lock
  contention, happy-path phase ordering, error writes on pull/up
  failure, lock release on both terminal outcomes.
- tests/test_api.py: new no_systemd_run fixture stubs subprocess.run;
  existing install tests adapted to 202 response; new tests for 409
  lock contention and the status endpoint.
- tests/test_cli.py: install-bg dispatches correctly and returns 1
  on failure with journald-friendly stderr.

256 tests pass, ruff check + format clean.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-21 15:50:49 +02:00
470823b347 feat(auth): login-guard the Furtka UI with a cookie session
All checks were successful
Build ISO / build-iso (push) Successful in 17m30s
CI / lint (push) Successful in 27s
CI / test (push) Successful in 43s
CI / validate-json (push) Successful in 31s
CI / markdown-links (push) Successful in 15s
Release / release (push) Successful in 11m38s
One-admin, one-password model — all of /apps, /api/*, /, and
/settings/ now require a signed-in session. Passwords are werkzeug
PBKDF2-hashed in /var/lib/furtka/users.json (mode 0600, atomic write
via the same .tmp+chmod+rename dance installer.write_env uses).
Sessions are secrets.token_urlsafe(32) tokens held in a module-level
SessionStore dict (thread-safe lock included for when we swap to
ThreadingHTTPServer). Cookies are HttpOnly, SameSite=Strict, and
Path=/, with Secure set when X-Forwarded-Proto from Caddy says HTTPS.
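
A minimal sketch of that module-level store, assuming a session TTL (the TTL value and method names here are assumptions; the source only specifies token generation and the lock):

```python
import secrets
import threading
import time

class SessionStore:
    """Token -> expiry map guarded by a lock — a sketch of the in-memory
    store described above; the TTL is an illustrative assumption."""

    def __init__(self, ttl=7 * 24 * 3600):
        self._ttl = ttl
        self._lock = threading.Lock()   # safe under ThreadingHTTPServer
        self._sessions = {}

    def create(self):
        token = secrets.token_urlsafe(32)
        with self._lock:
            self._sessions[token] = time.monotonic() + self._ttl
        return token

    def valid(self, token):
        with self._lock:
            expiry = self._sessions.get(token)
            if expiry is None or expiry < time.monotonic():
                self._sessions.pop(token, None)   # lazily drop expired
                return False
            return True

    def revoke(self, token):
        """What POST /logout calls before clearing the cookie."""
        with self._lock:
            self._sessions.pop(token, None)
```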

Two bootstrap paths:
  * Fresh install — webinstaller step-1 collects Linux user + password,
    the chroot post-install step hashes the password and writes
    users.json on the target partition. First browser visit lands on
    /login with the account already present.
  * Upgrade from 26.10-alpha — no users.json yet, so /login detects
    setup_needed() and renders a first-run setup form. POST creates
    the admin and immediately logs in.

POST /logout revokes the server session and clears the cookie.
Unauthenticated HTML requests 302 to /login; unauthenticated API
requests 401 JSON so fetch() callers see a clean error. A sleep(0.5)
on failed logins is the brute-force speed bump on top of werkzeug's
~600k-iter PBKDF2.

Caddyfile gains /login* and /logout* handle blocks in the shared
furtka_routes snippet so both :80 and the HTTPS hostname block
forward the auth endpoints to localhost:7000. Without this Caddy
would 404 from the static file server.

Test surface:
  * tests/test_auth.py (new, 19 cases): hash roundtrip, users.json
    I/O, session create/lookup/expire/revoke.
  * tests/test_api.py: new admin_session fixture; existing HTTP
    tests updated to send the cookie; new tests cover login setup,
    login success, wrong-password 401, logout revocation, and the
    guard's 302/401 split.
  * tests/test_webinstaller_assets.py: new case that unpacks the
    users.json _write_file_cmd body and verifies the werkzeug hash
    round-trips against the step-1 password.

Bumped version to 26.11-alpha and rolled CHANGELOG. Also folded in
the ruff-format fix that was pending from 26.10-alpha's lint red.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-21 13:01:17 +02:00
577c2469f7 style(tests): reflow OPTIONAL_PATH_MANIFEST to match ruff format
All checks were successful
Build ISO / build-iso (push) Successful in 20m27s
CI / lint (push) Successful in 29s
CI / test (push) Successful in 1m3s
CI / validate-json (push) Successful in 46s
CI / markdown-links (push) Successful in 23s
Fixes the lint failure on the 26.10-alpha commit — ruff format wanted
the single-item settings list on one line rather than spread over
three. Pure formatting, no behaviour change.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-21 11:56:52 +02:00
e8c5317660 chore: release 26.10-alpha
Some checks failed
CI / lint (push) Failing after 50s
CI / test (push) Successful in 1m6s
CI / validate-json (push) Successful in 42s
CI / markdown-links (push) Successful in 22s
Release / release (push) Successful in 13m27s
Ships the new path-type setting (the schema extension that unlocks
host bind mounts for Jellyfin / Paperless / Nextcloud / Immich-class
apps), server-side path validation, app-author docs for the new type,
and the remove-USB-stick hint on the installer's reboot screen.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-21 11:48:07 +02:00
474af8fb2d feat(installer): remove-USB-stick hint on the reboot screen
Some checks failed
CI / lint (push) Waiting to run
CI / test (push) Waiting to run
CI / validate-json (push) Waiting to run
CI / markdown-links (push) Waiting to run
Build ISO / build-iso (push) Failing after 4m15s
Adds a bold "Remove the USB stick now" line before the reboot, plus a
muted fallback paragraph pointing at the BIOS one-time boot menu keys
(F11/F12/Esc) for when removal isn't enough. Caught on the 2026-04-21
Medion bare-metal test: the box didn't boot the installed system on
first reboot and required manual BIOS boot-order changes, which
non-technical users won't know how to do.

Template-only change. No new CSS, no new code paths — <kbd> uses
browser defaults, <strong> keeps the hierarchy readable.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-21 11:46:38 +02:00
7c6da3d051 docs(apps): document the new path setting type
Some checks failed
CI / lint (push) Failing after 38s
CI / test (push) Successful in 54s
CI / validate-json (push) Successful in 34s
CI / markdown-links (push) Successful in 19s
Covers the path-type declaration in manifest.json, the companion
compose bind-mount pattern (${MEDIA_PATH}:/media:ro), and the full
server-side validation rules the installer applies (absolute, exists,
is-directory, resolve-then-deny-list, traversal caught).

Clarifies the mental split between manifest.volumes (internal state
the app owns) and path settings (user data the container mounts and
usually reads without owning), and recommends :ro as the default for
consumer-only mounts.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-21 11:43:09 +02:00
04762f5dd1 feat(manifest): add 'path' setting type with server-side validation
Some checks failed
CI / lint (push) Waiting to run
CI / test (push) Waiting to run
CI / validate-json (push) Waiting to run
CI / markdown-links (push) Waiting to run
Build ISO / build-iso (push) Failing after 4m34s
Apps can now declare a setting with "type": "path" whose value is an
absolute host filesystem path. Compose bind-mounts it via standard .env
substitution (${MEDIA_PATH}:/media) — no reconciler changes needed.
Unlocks media/data-heavy apps (Jellyfin, later Paperless, Nextcloud,
Immich) that point at existing user data instead of copying it into a
Docker volume.

Install/update refuses values that aren't absolute, don't exist, aren't
directories, or resolve into a system-path deny-list (/, /etc, /root,
/boot, /proc, /sys, /dev, /bin, /sbin, /usr/bin, /usr/sbin,
/var/lib/furtka). Path.resolve() is applied before the deny-list check
so /mnt/../etc traversal is caught too. Error messages surface in the
existing install/edit modal.
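
The validation rules above can be sketched as one function (the function name is made up; the deny-list entries are the ones quoted in this message):

```python
from pathlib import Path

DENY = {"/", "/etc", "/root", "/boot", "/proc", "/sys", "/dev",
        "/bin", "/sbin", "/usr/bin", "/usr/sbin", "/var/lib/furtka"}

def validate_path_setting(value):
    """Return the resolved Path or raise ValueError with a message the
    install/edit modal can surface. Sketch of the rules listed above.
    """
    p = Path(value)
    if not p.is_absolute():
        raise ValueError(f"{value!r} is not an absolute path")
    resolved = p.resolve()   # resolve BEFORE the deny-list check,
    #                          so /mnt/../etc traversal is caught too
    if str(resolved) in DENY:
        raise ValueError(f"{resolved} is a protected system path")
    if not resolved.is_dir():
        raise ValueError(f"{resolved} does not exist or is not a directory")
    return resolved
```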

UI: path settings render as a text input with a /mnt/… placeholder.
The manifest's `description` field carries the actual hint ("Absoluter
Pfad zu deinem Filme-Ordner, z.B. /mnt/media"). No new form
components, no new API routes.

Tests: 9 new cases for install + update path validation; 1 new case
for manifest schema accepting the path type. 211 total passing.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-21 11:39:15 +02:00
c7e7c8b1e5 chore: release 26.9-alpha
All checks were successful
Build ISO / build-iso (push) Successful in 20m49s
CI / lint (push) Successful in 1m13s
CI / test (push) Successful in 48s
CI / validate-json (push) Successful in 44s
CI / markdown-links (push) Successful in 16s
Release / release (push) Successful in 13m31s
Three small fixes surfaced by the 26.8 QA pass on fresh VM .161:

- Landing-page app tiles now open external `open_url` links in a new
  tab, matching /apps Open-button behaviour. Without this a Kuma click
  on the home screen replaced Furtka itself.
- `scripts/publish-release.sh` treats the ISO upload as best-effort;
  a Forgejo-proxy 504 no longer kills the whole release after tarball
  + sha + release.json are already uploaded.
- `furtka app list --json` now mirrors /api/apps — includes
  `description_long`, `open_url`, and `settings` that the previous
  slim projection dropped.
2026-04-20 18:51:30 +02:00
cf93ef44cb chore: release 26.8-alpha (power actions, supersedes orphan 26.7 tag)
Some checks failed
Build ISO / build-iso (push) Successful in 26m56s
Deploy site / deploy (push) Successful in 23s
CI / lint (push) Successful in 34s
CI / test (push) Successful in 1m4s
CI / validate-json (push) Successful in 51s
CI / markdown-links (push) Successful in 28s
Release / release (push) Failing after 7m38s
Adds Reboot + Shut down buttons on /settings, backed by a new
POST /api/furtka/power endpoint that kicks a delayed `systemd-run
--on-active=3s systemctl {reboot|poweroff}` so the HTTP response
flushes before the kernel loses network. Both buttons open a native
confirm dialog; after reboot, the page polls /furtka.json until the
box is back and reloads itself.
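
The delayed dispatch is just a subprocess call around the quoted `systemd-run` invocation; a sketch of the endpoint's backing helper (name and validation are illustrative):

```python
import subprocess

def power_action(action):
    """Kick a delayed reboot/poweroff so the HTTP response flushes
    before the kernel drops the network — the invocation quoted above.
    """
    if action not in ("reboot", "poweroff"):
        raise ValueError(f"unknown power action: {action!r}")
    # --on-active=3s schedules a transient timer 3s out, detached from
    # the HTTP handler's lifetime.
    subprocess.run(
        ["systemd-run", "--on-active=3s", "systemctl", action],
        check=True,
    )
```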

26.7-alpha was tagged on 5d8ac63 but release.yml never fired for that
tag (Forgejo race with the concurrent main push; re-push of the deleted
tag didn't wake the workflow either). 26.8 supersedes it and carries
the same open_url + Open-button content plus the power actions.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-20 16:00:19 +02:00
5d8ac63d9f chore: release 26.7-alpha
Some checks failed
Deploy site / deploy (push) Waiting to run
Build ISO / build-iso (push) Has been cancelled
CI / lint (push) Successful in 1m26s
CI / test (push) Successful in 1m18s
CI / validate-json (push) Successful in 52s
CI / markdown-links (push) Successful in 27s
Release / release (push) Has been cancelled
Ships the open_url manifest field + the Open button in /apps and on
the landing page, replacing the fileshare-only hardcoded deep-link
with a generalised {host}-templated URL. Fileshare seed manifest
bumps to 0.1.2; the furtka-apps catalog release that goes with this
adds matching open_url values for fileshare + uptime-kuma.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-20 15:44:01 +02:00
018f2e20b0 chore: release 26.6-alpha
All checks were successful
Build ISO / build-iso (push) Successful in 21m23s
CI / lint (push) Successful in 1m31s
CI / test (push) Successful in 1m20s
CI / validate-json (push) Successful in 48s
CI / markdown-links (push) Successful in 27s
Deploy site / deploy (push) Successful in 8s
Release / release (push) Successful in 24s
Rolls the apps-catalog split, the /settings CSS wrap fix, and the version
bump to 26.6-alpha across pyproject + website copy. Core release tarball
still carries apps/fileshare as the offline first-boot seed; the new
daniel/furtka-apps catalog (tagged 26.6-alpha today) is the authoritative
source on boxes that have synced at least once.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-20 14:49:31 +02:00
3a8fad5185 feat(catalog): on-box apps catalog synced independently of core version
New `furtka catalog sync` pulls the latest daniel/furtka-apps release,
verifies its sha256, extracts under /var/lib/furtka/catalog/, and
atomically swaps into place — so apps can ship without cutting a new
Furtka core release. A daily timer (furtka-catalog-sync.timer, 10 min
post-boot + 24 h with ±6 h jitter) drives the sync; /apps gets a
manual "Sync apps catalog" button that kicks the same code path via a
detached systemd-run unit.

Layout of the new on-box tree:

  /var/lib/furtka/catalog/            synced catalog (survives self-updates)
    ├── VERSION
    └── apps/<name>/ ...
  /var/lib/furtka/catalog-state.json  sync stage + last version, UI-polled
  /run/furtka/catalog.lock            flock so timer + manual click can't race

Resolver precedence (furtka/sources.py): catalog wins over the bundled
seed (/opt/furtka/current/apps/, carried by the core release for offline
first-boot). Installed apps under /var/lib/furtka/apps/ are never auto-
swapped — user clicks Reinstall to move an existing install onto a
newer catalog version; settings merge-preserved via the existing
installer.install_from path.

New files:
- furtka/_release_common.py — shared Forgejo/tarball primitives lifted
  from furtka/updater.py. Both modules now import from here; updater's
  behaviour and public API unchanged.
- furtka/catalog.py — check_catalog(), sync_catalog() with staging +
  manifest validation + atomic rename. Refuses bad sha256 / broken
  manifests and leaves the live catalog intact on any failure path.
- furtka/sources.py — resolve_app_name() / list_available() abstraction
  used by installer.resolve_source and api._list_available.
- assets/systemd/furtka-catalog-sync.{service,timer} — oneshot service
  + daily timer. Timer auto-enables on self-update via a one-line
  addition to _link_new_units (fresh installs get enabled via the
  webinstaller's _FURTKA_UNITS list).

API + UI:
- /api/bundled renamed internally to _list_available; endpoint stays as
  a backcompat alias; /api/apps/available is the new canonical name.
  Each list entry carries a `source` field ("catalog" | "bundled").
- POST /api/catalog/sync/check + /apply + GET /api/catalog/status.
- /apps page grows a catalog-status row + Sync button; poll loop
  mirrors the Furtka self-update flow.

CLI: `furtka catalog sync [--check]` + `furtka catalog status` (both
support --json). Old `furtka app install` / `reconcile` / `update` /
`rollback` surfaces are unchanged.

Test gate: 194 tests passing (170 baseline + 24 new) covering catalog
sync (happy path, sha256 mismatch, invalid manifest, lock contention,
preserves-on-failure) plus resolver precedence and the api renames.
ruff check + format clean.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-20 14:16:02 +02:00
e7ee1698bd fix(ui): stop SHA-256 fingerprint overflowing the Local HTTPS card
The /settings "CA fingerprint (SHA-256)" value is a 95-char colon-
separated hex string with no whitespace, so CSS had no valid break
points and the value pushed past the card's right edge — visible on
the 192.168.178.23 fresh-install test.

.kv is a two-column grid (max-content 1fr); grid items default to
min-width: auto (= content width), which overrides the 1fr track's
width constraint. min-width: 0 lets the track shrink, and
overflow-wrap: anywhere gives the fingerprint valid break points at
any character. The styling stays scoped to .kv dd so card prose isn't
affected.

Verified live on .23 via hot-patch into /opt/furtka/current/assets/
www/style.css + caddy reload.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-20 13:41:33 +02:00
54357aa2a3 style: ruff format — collapse two-line hostname file path + version loop
All checks were successful
Build ISO / build-iso (push) Successful in 21m29s
CI / lint (push) Successful in 37s
CI / test (push) Successful in 58s
CI / validate-json (push) Successful in 42s
CI / markdown-links (push) Successful in 23s
Format-only diff from `ruff format`. The 26.5-alpha push's CI run failed
on `ruff format --check`; these three files had two-line constructs that
fit on one line at ruff's default line length. No behaviour change.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-20 12:41:58 +02:00
fec962e3d2 chore: release 26.5-alpha
Some checks failed
Build ISO / build-iso (push) Successful in 20m10s
Deploy site / deploy (push) Successful in 13s
CI / lint (push) Failing after 26s
CI / test (push) Successful in 33s
CI / validate-json (push) Successful in 24s
CI / markdown-links (push) Successful in 14s
Release / release (push) Successful in 6s
Rolls the HTTPS handshake fix (#10) and the README realignment into a
tagged release. Also closes the 26.4 follow-up that the wizard footer
version was hand-pinned: webinstaller/app.py now resolves the version
via a Flask context processor (reads /opt/furtka/VERSION on the live
ISO, written by iso/build.sh from pyproject.toml at build time; falls
back to pyproject.toml in dev runs, then to "dev"). pyproject.toml and
the website version strings bumped in the same commit so every surface
reports 26.5-alpha consistently.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-20 11:52:36 +02:00
8fbe67ffb9 fix(https): restore TLS handshake — name hostname + correct PKI path
Some checks failed
Build ISO / build-iso (push) Waiting to run
CI / lint (push) Failing after 2m11s
CI / test (push) Successful in 2m8s
CI / validate-json (push) Successful in 55s
CI / markdown-links (push) Successful in 25s
Deploy site / deploy (push) Successful in 8s
Closes #10. Two linked bugs in 26.4-alpha's Phase 1 HTTPS made the
force-HTTPS toggle fatal: every SNI handshake on :443 died with
SSL_ERROR_INTERNAL_ERROR_ALERT, so the toggle redirected users from
working HTTP to broken HTTPS.

Root cause 1: bare `:443 { tls internal }` gives Caddy no hostname to
issue a leaf cert for, so /var/lib/caddy/certificates/ stayed empty and
Caddy sent TLS `internal_error` on every handshake. Fix: the :443 block
is now `__FURTKA_HOSTNAME__.local, __FURTKA_HOSTNAME__ { tls internal }`,
with the marker substituted by webinstaller/app.py at install time and
by furtka.updater._refresh_caddyfile on self-update (reads /etc/hostname,
falls back to "furtka"). `auto_https disable_redirects` keeps Caddy's
built-in redirect out of the way of the /settings toggle.
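The substitution itself is small; a sketch, assuming a plain string replace (the real webinstaller/updater code may differ):

```python
from pathlib import Path

MARKER = "__FURTKA_HOSTNAME__"

def substitute_hostname(caddyfile_text: str,
                        hostname_file: Path = Path("/etc/hostname")) -> str:
    try:
        hostname = hostname_file.read_text().strip() or "furtka"
    except OSError:
        hostname = "furtka"  # fallback named in this commit
    return caddyfile_text.replace(MARKER, hostname)
```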

Root cause 2: furtka/https.py and the /rootCA.crt handler both referenced
/var/lib/caddy/.local/share/caddy/pki/authorities/local/ — a path that
doesn't exist. caddy.service sets XDG_DATA_HOME=/var/lib, so Caddy's
storage is /var/lib/caddy/ directly. Fix: both paths corrected.

Verified on the 192.168.178.110 smoke VM by swapping the Caddyfile in,
reloading, handshaking, restoring: TLS 1.3 handshake succeeds, leaf cert
issued under /var/lib/caddy/certificates/local/, /rootCA.crt returns 200.

Tests: new cases assert the Caddyfile ships the hostname placeholder,
the webinstaller substitutes it, _refresh_caddyfile re-substitutes from
/etc/hostname on update, and the asset sets auto_https disable_redirects.
Unit tests still stub the Caddy reload — the real handshake regression
needs a smoke-VM integration test (follow-up, separate from this fix).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-20 11:39:48 +02:00
9ae14f4108 docs: add apps/ authoring guide + realign READMEs with 26.4-alpha
Closes #9. New apps/README.md walks through the four-file contract
(manifest.json, docker-compose.yaml, .env.example, icon.svg) with
the rules enforced by furtka/manifest.py and the SVG sanitiser, using
apps/fileshare as the reference.

Root README: release list now covers 26.1/26.3/26.4 (26.2 stalled on
the jq apt hang). Local HTTPS Phase 1 and the post-build smoke VM on
pollux both flip to [x]; the old proksi.local HTTPS TODO becomes a
Phase 2 entry (dedicated local CA + HTTPS on the live-installer wizard).

iso/README: mDNS is wired — live ISO advertises proksi.local, installed
box defaults to furtka.local (the form's default hostname, not proksi).
HTTPS section notes Caddy tls internal on :443 shipped in 26.4 while
the wizard itself is still HTTP. Overlay table picks up etc/hostname,
etc/issue, furtka-update-issue, and furtka-issue.service.

website/README: auto-deploy via .forgejo/workflows/deploy-site.yml is
the default path now; website/deploy.sh stays as the SSH-hop fallback
for off-CI pushes, and deploy-ci.sh is called out in the structure map.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-20 11:39:48 +02:00
850d656169 Merge pull request 'fix(smoke): capture arp-scan output instead of piping into awk' (#8) from fix-smoke-pipefail into main
All checks were successful
Build ISO / build-iso (push) Successful in 17m6s
CI / lint (push) Successful in 26s
CI / test (push) Successful in 33s
CI / validate-json (push) Successful in 28s
CI / markdown-links (push) Successful in 13s
Reviewed-on: #8
2026-04-18 15:43:50 +02:00
93c6b838a7 fix(smoke): capture arp-scan output instead of piping into awk
All checks were successful
CI / lint (pull_request) Successful in 26s
CI / test (pull_request) Successful in 34s
CI / validate-json (pull_request) Successful in 23s
CI / markdown-links (pull_request) Successful in 14s
When host-networking finally gave arp-scan a real LAN to scan, the
first MAC-match emitted a line, awk hit its `exit` clause, closed the
pipe, and arp-scan died from SIGPIPE (exit 141). With `set -o pipefail`
active, that killed the whole smoke-vm.sh run immediately after
"==> starting VM" — no IP discovery, no curl, no prune.

Fix: capture arp-scan's output into a variable first, then let awk
parse a here-string. Same treatment for the `ip neigh show` fallback.
No pipe, no pipefail cascade.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-18 15:26:10 +02:00
caa8609908 Merge pull request 'release-26.4-alpha' (#7) from release-26.4-alpha into main
Some checks failed
Build ISO / build-iso (push) Successful in 26m22s
Deploy site / deploy (push) Successful in 3s
CI / lint (push) Successful in 26s
CI / test (push) Successful in 1m37s
CI / markdown-links (push) Successful in 33s
Release / release (push) Successful in 6s
CI / validate-json (push) Failing after 14m0s
Reviewed-on: #7
2026-04-18 14:29:19 +02:00
522ea06cd0 fix(smoke): bump smoke-VM RAM to 8 GiB + make cores/memory configurable
All checks were successful
CI / lint (pull_request) Successful in 1m10s
CI / test (pull_request) Successful in 2m17s
CI / validate-json (pull_request) Successful in 1m5s
CI / markdown-links (pull_request) Successful in 41s
pollux (192.168.178.165) wedged at the network level during an
end-to-end install test today — mkinitcpio on a 4 GiB smoke VM +
the cached 1.5 GB ISO + a busy runner container pushed the host into
OOM, taking pveproxy and the SSH path down with it. Recovered by
physical reset.

Smoke VM now defaults to 8192 MiB / 2 vCPU, configurable via
PVE_TEST_VM_MEMORY / PVE_TEST_VM_CORES. Host has 64 GiB, so one
smoke VM at 8 GiB is well within headroom.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-18 14:28:29 +02:00
d567317538 chore: release 26.4-alpha
Bumps version everywhere user-facing that had drifted from the tag:

- pyproject.toml 26.0 → 26.4
- website/hugo.toml 26.0 → 26.4 (driving furtka.org landing + footer)
- website/content/_index{.md,.de.md} status string
- webinstaller/templates/base.html footer (was hardcoded — noted as
  follow-up to read dynamically from pyproject.toml)

Promotes the Unreleased section to 26.4-alpha and folds in today's
additions:

- Local HTTPS via Caddy tls internal + opt-in redirect toggle
- Two self-update UX fixes (Installed-field refresh + 45s reload
  fallback)
- Impressum + Datenschutzerklärung on furtka.org
- deploy-site.yml auto-deploy of the Hugo site on push-to-main
- Smoke VM pipeline on .165 Proxmox (build-iso inline smoke step +
  workflow_dispatch Smoke latest ISO for cheap re-tests)

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-18 14:21:43 +02:00
931d62149f Merge pull request 'chore(smoke): surface PVE response body on API failure' (#6) from debug-smoke-errors into main
Some checks failed
CI / validate-json (push) Waiting to run
CI / markdown-links (push) Waiting to run
Build ISO / build-iso (push) Has been cancelled
CI / lint (push) Has been cancelled
CI / test (push) Has been cancelled
Reviewed-on: #6
2026-04-18 14:06:47 +02:00
f4f7d853ba chore(smoke): surface PVE response body on API failure
Some checks failed
CI / lint (pull_request) Successful in 1m3s
CI / test (pull_request) Successful in 1m23s
CI / markdown-links (pull_request) Has been cancelled
CI / validate-json (pull_request) Has been cancelled
api() was swallowing Proxmox's error body because callers pipe its
output to /dev/null. With a bare "curl: (22) 403" in the log we can't
tell which permission is missing. Now we capture the response body,
print it to stderr on failure, and only emit it to stdout on success.

No behaviour change on the happy path.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-18 14:06:09 +02:00
cb6e92aa92 Merge pull request 'fix(smoke): reuse existing PVE-side ISO instead of delete+re-upload' (#5) from fix-smoke-reuse-iso into main
Some checks failed
CI / validate-json (push) Waiting to run
CI / markdown-links (push) Waiting to run
Build ISO / build-iso (push) Has been cancelled
CI / lint (push) Has been cancelled
CI / test (push) Has been cancelled
Reviewed-on: #5
2026-04-18 14:00:40 +02:00
afbb8d59f9 fix(smoke): reuse existing PVE-side ISO instead of delete+re-upload
Some checks failed
CI / markdown-links (pull_request) Waiting to run
CI / lint (pull_request) Successful in 1m5s
CI / validate-json (pull_request) Has been cancelled
CI / test (pull_request) Has been cancelled
The delete branch required Datastore.Allocate (or was hitting a
privilege-separated token ACL edge case) and produced 403s on re-runs
against the same commit SHA. Since the ISO bytes are reproducible for
a given SHA — furtka-<sha>.iso is content-addressed — we can just
reuse whatever is already in PVE storage instead of cycling it.

Fixes the "runs-on-same-sha" re-dispatch case without needing any extra
PVE permission, and shaves ~2 min off repeated smoke runs.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-18 13:59:42 +02:00
2cfe54e03a Merge pull request 'fix(ci): apk-install smoke prerequisites before running smoke-vm.sh' (#4) from fix-smoke-deps into main
All checks were successful
Build ISO / build-iso (push) Successful in 22m40s
CI / lint (push) Successful in 1m4s
CI / test (push) Successful in 1m24s
CI / validate-json (push) Successful in 55s
CI / markdown-links (push) Successful in 26s
Reviewed-on: #4
2026-04-18 13:20:52 +02:00
1d75a165c4 fix(ci): apk-install smoke prerequisites before running smoke-vm.sh
All checks were successful
CI / lint (pull_request) Successful in 2m2s
CI / test (pull_request) Successful in 1m23s
CI / validate-json (pull_request) Successful in 58s
CI / markdown-links (pull_request) Successful in 26s
The Forgejo runner container is Alpine with a near-empty base — no
curl, python3, arp-scan, or sudo out of the box. scripts/smoke-vm.sh
needs all four:
  - curl: every PVE API call
  - python3: JSON parsing of PVE responses
  - arp-scan: MAC→IP discovery on the LAN (live ISO has no guest agent)
  - sudo: so the same script also works from a dev laptop as non-root

Without this step the smoke job fails immediately on "curl: not found",
regardless of whether the PVE secrets are correctly set.

Added to both build-iso.yml (inline smoke after ISO build) and
smoke-latest.yml (workflow_dispatch retest path).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-18 13:17:51 +02:00
2cc3fab027 Merge pull request 'feat(ci): workflow_dispatch smoke-latest + cache ISO for fast retests' (#3) from feat-smoke-latest into main
Some checks failed
Build ISO / build-iso (push) Has been cancelled
CI / test (push) Has been cancelled
CI / validate-json (push) Has been cancelled
CI / markdown-links (push) Has been cancelled
CI / lint (push) Has been cancelled
Reviewed-on: #3
2026-04-18 13:11:41 +02:00
41d0e7a398 feat(ci): workflow_dispatch smoke-latest + cache ISO for fast retests
Some checks failed
CI / lint (pull_request) Successful in 2m6s
CI / test (pull_request) Successful in 3m23s
CI / validate-json (pull_request) Has been cancelled
CI / markdown-links (pull_request) Has been cancelled
When smoke-vm.sh / PVE setup / secrets change, we want to verify the
fix without waiting for a full 25-min build-iso rebuild (most of which
is the upload-artifact step for a 1.5 GB file).

Adds two things:

1. build-iso.yml grows a "Cache ISO for smoke-latest" step that copies
   the freshly built ISO to /data/smoke-cache/latest.iso. /data is
   already bind-mounted into the runner container at a matching host
   path, so no compose.yml change or runner restart needed.

2. smoke-latest.yml is a workflow_dispatch-only workflow that reads
   /data/smoke-cache/latest.iso and runs scripts/smoke-vm.sh against
   it. ~2 min end-to-end. Errors cleanly if the cache is empty (build-
   iso.yml hasn't populated it yet).

First build-iso run after this merges will populate the cache; from
then on smoke-latest is available for on-demand re-tests.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-18 13:04:22 +02:00
a511f5418d Merge pull request 'feat(website): legal pages (Impressum/Datenschutz) + auto-deploy on push-to-main' (#1) from website-legal into main
Some checks are pending
CI / test (push) Waiting to run
Build ISO / build-iso (push) Successful in 24m57s
CI / lint (push) Successful in 2m17s
CI / validate-json (push) Successful in 2m0s
CI / markdown-links (push) Successful in 1m52s
Deploy site / deploy (push) Successful in 15s
Reviewed-on: #1
2026-04-18 12:32:15 +02:00
cf85217c0d Merge pull request 'fix(ci): inline smoke-vm as a step instead of a downstream job' (#2) from fix-smoke-inline into main
Some checks are pending
Build ISO / build-iso (push) Waiting to run
CI / lint (push) Waiting to run
CI / test (push) Waiting to run
CI / validate-json (push) Waiting to run
CI / markdown-links (push) Waiting to run
Reviewed-on: #2
2026-04-18 12:31:50 +02:00
7b894f096f fix(ci): inline smoke-vm as a step instead of a downstream job
All checks were successful
CI / lint (pull_request) Successful in 1m6s
CI / test (pull_request) Successful in 1m23s
CI / validate-json (pull_request) Successful in 56s
CI / markdown-links (pull_request) Successful in 24s
The separate smoke-vm job with `needs: build-iso` required round-tripping
the 1.5 GB ISO through actions/upload-artifact + download-artifact. v3
on Forgejo has a known issue where large artifacts stall at 0.0% in the
download step — the smoke run hung today with endless "Total file count:
1 ---- Processed file #0 (0.0%)" output.

Since both jobs run on the same self-hosted runner (host mode, same
workspace available), there was never a real need for the artifact
indirection. Inlining as a step after the artifact upload reuses the
ISO already in iso/out/ and skips the download entirely.

Step-level continue-on-error preserves the original guarantee that a
VM-side flake doesn't mark the ISO build red.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-18 12:20:58 +02:00
b77ef80b56 feat(website): legal pages (Impressum/Datenschutz) + auto-deploy on push-to-main
All checks were successful
CI / lint (pull_request) Successful in 1m2s
CI / test (pull_request) Successful in 1m19s
CI / validate-json (pull_request) Successful in 55s
CI / markdown-links (pull_request) Successful in 27s
Two coupled changes that make sense to land together:

1. Legal pages required under German law
   - /imprint/ + /de/impressum/ — §5 DDG disclosure (contact is email
     plus Forgejo-Issues as the second quick-contact channel, per ECJ
     C-298/07 no phone number required)
   - /privacy/ + /de/datenschutz/ — Art. 13 GDPR minimum: server-log
     processing (IP, UA, URL, retention ≤30 days), no cookies, no
     tracking, no third-party embeds. RLP Landesbeauftragter as the
     competent supervisory authority.
   - Footer partial linked from every page, localized per language.
   - DE versions are legally binding; EN versions are courtesy
     translations noting that.

2. Auto-deploy wired up
   - New workflow .forgejo/workflows/deploy-site.yml fires on
     push-to-main with paths under website/**. Runs on the self-hosted
     runner, which *is* forge-runner-01 — so "deploy" is just a local
     rsync into /srv/furtka-site and a hugo build into
     /var/www/furtka.org. No SSH, no secrets.
   - website/deploy-ci.sh is the SSH-free counterpart of deploy.sh,
     invoked by the workflow.
   - compose.yml bind-mounts /srv/furtka-site and /var/www/furtka.org
     into the runner container at matching paths so the workflow can
     reach them. Requires a one-time `docker compose up -d` on the
     runner host to pick the mounts up.
   - deploy.sh is kept for out-of-band manual deploys (testing from a
     local branch, CI outage) but gets a header comment pointing at
     the CI path as the normal flow.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-18 12:10:06 +02:00
d499907613 feat(ci): auto-boot every main-ISO in smoke VM on .165 Proxmox
Some checks failed
Build ISO / smoke-vm (push) Blocked by required conditions
Build ISO / build-iso (push) Successful in 24m28s
CI / test (push) Successful in 3m1s
CI / validate-json (push) Successful in 55s
CI / markdown-links (push) Successful in 37s
CI / lint (push) Failing after 13m19s
After build-iso, a new smoke-vm job uploads the freshly built ISO to
the test Proxmox at 192.168.178.165 via PVE API token, boots it in a
fresh VM (VMID range 9000-9099, MAC derived from commit SHA so the
runner can find the DHCP IP by scanning the LAN), and curls :5000 to
confirm the webinstaller answers HTTP 200. Last 5 smoke VMs + their
ISOs are kept for post-mortem; older ones are purged. continue-on-error
on the smoke job so a VM-side flake doesn't mark the ISO build red.
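The commit doesn't show the derivation scheme, so this is a hypothetical sketch of a SHA-derived, locally-administered MAC of the kind described:

```python
def smoke_mac(commit_sha: str) -> str:
    # 52:54:00 is the conventional QEMU locally-administered prefix;
    # the first three SHA bytes give a stable, LAN-scannable suffix.
    # Hypothetical: smoke-vm.sh's actual scheme may differ.
    tail = commit_sha[:6]
    return "52:54:00:" + ":".join(tail[i:i + 2] for i in (0, 2, 4))
```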

Shortens the feedback loop on ISO regressions from "next manual VM
test session" (days) to "next push" (minutes) — the 2026-04-15/16 VM
sessions each found real boot-time bugs that unit tests missed.

Docs at docs/smoke-vm.md. Requires Forgejo secrets PVE_TEST_HOST and
PVE_TEST_TOKEN (dedicated smoke@pve!ci PVE token, privilege-separated).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-18 11:41:44 +02:00
3f7b97c8c7 style: ruff format two files the pre-commit hook caught
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-18 11:41:28 +02:00
663bd74572 feat(https): local HTTPS via Caddy tls internal + opt-in redirect toggle
Some checks failed
Build ISO / build-iso (push) Successful in 20m57s
CI / lint (push) Failing after 31s
CI / test (push) Successful in 36s
CI / validate-json (push) Successful in 23s
CI / markdown-links (push) Successful in 14s
Caddy now serves both :80 (plain HTTP, unchanged default) and :443 with
tls internal — it generates its own per-box root CA on first start,
stored under /var/lib/caddy/.local/share/caddy/pki/authorities/local/.
Users can download rootCA.crt at /rootCA.crt (served on both listeners)
and install it per-OS via the new /https-install/ guide.

Settings page grows a Local HTTPS card with CA fingerprint, download
button, reachability probe, and an opt-in "force HTTPS" toggle. The
toggle only unhides itself once the current browser already trusts the
cert, so enabling it can't lock the user out of the settings page.

Backend: GET /api/furtka/https/status and POST /api/furtka/https/force
in furtka.https. The force toggle drops a Caddy import snippet into
/etc/caddy/furtka.d/redirect.caddyfile and reloads Caddy; reload
failure rolls the snippet state back so a bad config can't wedge the
next service start.
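The write-reload-rollback pattern can be sketched like this (snippet path from above; the function shape, snippet body, and reload command are assumptions):

```python
import subprocess
from pathlib import Path

SNIPPET = Path("/etc/caddy/furtka.d/redirect.caddyfile")

def set_force_https(enabled: bool, snippet: Path = SNIPPET, reload=None) -> None:
    reload = reload or (lambda: subprocess.run(["caddy", "reload"], check=True))
    was_there = snippet.exists()
    old_text = snippet.read_text() if was_there else None
    try:
        if enabled:
            snippet.parent.mkdir(parents=True, exist_ok=True)
            snippet.write_text("redir https://{host}{uri} 308\n")  # illustrative body
        elif was_there:
            snippet.unlink()
        reload()
    except Exception:
        # reload failed: restore the previous snippet state so a bad
        # config can't wedge the next Caddy service start
        if old_text is not None:
            snippet.parent.mkdir(parents=True, exist_ok=True)
            snippet.write_text(old_text)
        elif snippet.exists():
            snippet.unlink()
        raise
```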

updater._refresh_caddyfile() ensures /etc/caddy/furtka.d exists before
every reload so 26.3-alpha → 26.4-alpha self-updates don't trip on the
new glob import directive.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-17 12:19:06 +02:00
a5de3d7622 fix(settings): close the two self-update UX gaps from 2026-04-16 VM test
Drive upd-current from the /api/furtka/update/check response so a
post-update Check reflects the new installed version without Ctrl+F5,
and arm a 45s fallback location.reload on apply-click so the page still
comes up on the new version when the mid-apply API restart drops the
/update-state.json poll before stage=done is observed.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-17 09:22:34 +02:00
bf86ffaf4c docs(website): ship the two update bullets — validated on VM today
All checks were successful
CI / lint (push) Successful in 26s
CI / test (push) Successful in 33s
CI / validate-json (push) Successful in 23s
CI / markdown-links (push) Successful in 12s
End-to-end validation of the per-app container update and the Furtka
self-update ran green on VM 192.168.178.128 this afternoon (26.0-alpha
→ 26.3-alpha → rollback → reboot). Both flows are real — promote the
drafted HTML-comment bullets from _index.md and _index.de.md into the
visible "What works today" list.

The "plain-English Wi-Fi story" is now the only bullet the copy still
lacks a truthful on-box outcome for (the update bullets were briefly in
the same state a few days ago, but the update story has moved past that).

Matches the commitment in feedback_no_invented_content.md — we only
publish after confirmation.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-16 17:44:51 +02:00
8de8f3fd87 docs(readme): roadmap through 2026-04-16 — resource mgr, UI, self-update
All checks were successful
CI / lint (push) Successful in 36s
CI / test (push) Successful in 34s
CI / validate-json (push) Successful in 24s
CI / markdown-links (push) Successful in 13s
Roadmap section drifted far enough that "re-tag 26.0-alpha" was still
listed as open while 26.1-alpha and 26.3-alpha are live releases.

Updated:
- Replaced the stale "re-tag 26.0-alpha" line with the actual state:
  tag-driven release pipeline is wired, two pre-releases published,
  all assets downloadable anonymously.
- Added five new checked items for the work that landed this month:
  resource manager + fileshare (validated), on-box UI uplevel (shared
  CSS / settings page / icons), versioned layout + per-app container
  updates, Phase 2 Furtka self-update (tag → release.yml → /settings
  Update now → atomic swap + auto-rollback), plus the broader Forgejo
  release pipeline that underpins the update story.
- Kept open items (wizard S3-S7, managed gateway, Authentik, local CA,
  Nextcloud first service, UI mockups) as the remaining TODO surface.

No code or test changes; pytest + ruff still green from the last push.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-16 17:40:25 +02:00
25bef628c2 docs(changelog): note two /settings update-flow UX gaps for next release
All checks were successful
CI / lint (push) Successful in 26s
CI / test (push) Successful in 34s
CI / validate-json (push) Successful in 23s
CI / markdown-links (push) Successful in 12s
End-to-end validated the Phase-2 self-update today on a fresh install
(192.168.178.128 → 26.0-alpha → 26.3-alpha): the symlink flip, the
tarball verify, the stage-by-stage progress, and the rollback slots
all work. But two browser-side UX bits are rough:

1. The "Installed" version displayed on /settings doesn't refresh
   right after the update; a hard reload shows the new value.
2. The auto-reload that should fire 5s after stage=done missed on
   the test — the polling connection likely dropped during the
   mid-update API restart.

Neither affects the integrity of the update itself. Landed the notes
in [Unreleased] so the next release cycle picks them up.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-16 17:31:41 +02:00
b4c65f46bf fix(release): drop jq dependency, use python3 for JSON assembly
All checks were successful
Build ISO / build-iso (push) Successful in 17m30s
CI / lint (push) Successful in 25s
CI / test (push) Successful in 33s
CI / validate-json (push) Successful in 24s
CI / markdown-links (push) Successful in 12s
Release / release (push) Successful in 6s
The 26.2-alpha release workflow hung for 15+ minutes on
"apt-get install -y jq" — the runner's apt mirror was unreachable
(or very slow), and the whole publish stalled.

jq was only used for two tiny things: building the release-create
POST body and reading the release id from the response. Both are
one-liners in Python, which is guaranteed-present on the Forgejo
Actions ubuntu-latest runner image. Replaced both uses; removed
the apt-get step from release.yml entirely. Slow mirrors no
longer block tagged releases.
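What those two jq jobs reduce to in Python, roughly (payload fields are assumed, not copied from release.yml):

```python
import json

def build_release_body(tag: str, notes: str, prerelease: bool) -> str:
    # was: jq -n --arg ... '{tag_name: $tag, ...}'
    return json.dumps({"tag_name": tag, "name": tag,
                       "body": notes, "prerelease": prerelease})

def read_release_id(response_text: str) -> int:
    # was: jq .id over the release-create response
    return json.loads(response_text)["id"]
```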

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-16 17:05:21 +02:00
b96f225c3c fix(updater): /releases?limit=1 instead of /releases/latest
Some checks failed
Build ISO / build-iso (push) Successful in 17m5s
CI / lint (push) Successful in 25s
CI / test (push) Successful in 33s
CI / validate-json (push) Successful in 23s
CI / markdown-links (push) Successful in 12s
Release / release (push) Has been cancelled
Forgejo's /releases/latest silently skips pre-releases (any release
with a -alpha / -beta / -rc suffix) and 404s when there's no stable
release. During Furtka's alpha stage every tag is a pre-release, so
the Check-for-updates button always 404'd against a perfectly-valid
releases page.

Switch check_update() to GET /releases?limit=1 and take the first
entry. Forgejo returns releases newest-first regardless of kind, so
this works whether the top of the list is pre-release or stable.
Empty list (no releases published yet) now returns a clean
"no releases" UpdateError instead of a raw 404.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-16 16:29:11 +02:00
85 changed files with 9198 additions and 372 deletions


@@ -50,3 +50,37 @@ jobs:
           path: iso/out/*.iso
           retention-days: 14
           if-no-files-found: error
+      - name: Cache ISO for smoke-latest
+        # Persist the ISO to /data/smoke-cache/latest.iso so the
+        # smoke-latest.yml workflow_dispatch job can re-test without
+        # rebuilding. /data is already mounted into the runner container
+        # at a matching host path.
+        run: |
+          mkdir -p /data/smoke-cache
+          iso=$(ls iso/out/*.iso | head -1)
+          cp -f "$iso" /data/smoke-cache/latest.iso
+          ls -lh /data/smoke-cache/latest.iso
+      - name: Install smoke prerequisites
+        # Runner container is Alpine with a near-empty base; smoke-vm.sh
+        # needs curl, python3, arp-scan, and sudo (kept so the script
+        # also works when invoked from a dev laptop as a non-root user).
+        # apk cache survives across jobs so subsequent runs are ~1 s.
+        run: apk add --no-cache curl python3 arp-scan sudo
+      - name: Smoke-test ISO on Proxmox test host
+        # Inlined as a step (rather than a separate job with `needs:`) so
+        # we can reuse the ISO that's already in the workspace — Forgejo's
+        # actions/download-artifact@v3 hangs on 1.5 GB files.
+        # step-level continue-on-error: a VM-side flake doesn't mark the
+        # ISO build red, the ISO itself is still valid and uploaded.
+        continue-on-error: true
+        env:
+          PVE_TEST_HOST: ${{ secrets.PVE_TEST_HOST }}
+          PVE_TEST_TOKEN: ${{ secrets.PVE_TEST_TOKEN }}
+          SMOKE_SHA: ${{ github.sha }}
+        run: |
+          iso=$(ls iso/out/*.iso | head -1)
+          echo "Smoking $iso"
+          ./scripts/smoke-vm.sh "$iso"


@@ -0,0 +1,39 @@
+name: Deploy site
+# Auto-deploy the Hugo site to /var/www/furtka.org on push-to-main.
+# Only fires when content under website/ changes — everything else
+# (Python code, ISO build, runbook docs) is unaffected.
+#
+# Runs on the self-hosted runner, which is forge-runner-01 — the same
+# host that serves furtka.org. So the "deploy" is just a local rsync
+# of the Hugo source into /srv/furtka-site and a `hugo` build into
+# /var/www/furtka.org. No SSH, no secrets, no cross-host anything.
+#
+# Requires two bind-mounts on the runner container (/srv/furtka-site
+# and /var/www/furtka.org → same paths inside). See compose.yml.
+on:
+  push:
+    branches: [main]
+    paths:
+      - 'website/**'
+concurrency:
+  group: deploy-site
+  cancel-in-progress: true
+jobs:
+  deploy:
+    runs-on: self-hosted
+    timeout-minutes: 5
+    steps:
+      - uses: actions/checkout@v4
+      - name: Install hugo + rsync
+        # Runner image is alpine-based; apk is fast and cached.
+        # Pinning is intentionally skipped — alpine:latest moves hugo
+        # forward in lockstep with upstream, and the site only uses
+        # baseline features.
+        run: apk add --no-cache hugo rsync
+      - name: Deploy
+        run: ./website/deploy-ci.sh


@@ -1,32 +1,58 @@
 name: Release
 # Tag-triggered: when `git push origin <version>` lands, this builds the
-# release tarball and publishes it + the sha256 + release.json to the
-# Forgejo releases page for that tag. Boxes then POST /api/furtka/update
-# to pull from here.
+# release tarball + the live-installer ISO, and publishes them both to
+# the Forgejo releases page. Boxes POST /api/furtka/update to pull the
+# tarball; fresh-install users download the ISO from the release page.
 #
-# Version tags only (pattern matches CalVer like 26.0-alpha, 26.1, 27.0-beta).
-# Documentation / random tags are ignored by the [0-9]* prefix.
+# Runs on the self-hosted runner because iso/build.sh needs privileged
+# docker access (mkarchiso wants root + loop mounts), and because the
+# ubuntu-latest Forgejo hosted runner doesn't carry the docker socket
+# bind-mount the build needs. Self-hosted adds ~5-7 min to the release
+# (ISO build) but keeps the release page self-contained.
+#
+# Version tags only (CalVer like 26.0-alpha, 26.1, 27.0-beta). Random
+# tags are ignored by the [0-9]* prefix.
 on:
   push:
     tags: ['[0-9]*']
 jobs:
   release:
-    runs-on: ubuntu-latest
+    runs-on: self-hosted
+    timeout-minutes: 45
     steps:
       - uses: actions/checkout@v4
         with:
           fetch-depth: 0  # changelog section extraction needs history
-      - name: Install jq
-        run: |
-          apt-get update -qq
-          apt-get install -y jq
+      - name: Install prerequisites
+        # Alpine runner is near-empty: we need curl + python3 for the
+        # publish script, bash for the build scripts.
+        run: apk add --no-cache curl python3 bash
       - name: Build release tarball
         run: ./scripts/build-release-tarball.sh "${GITHUB_REF_NAME}"
+      - name: Build live-installer ISO
+        # Same script build-iso.yml uses on every main push. Re-running
+        # here is intentional: guarantees the ISO matches the exact
+        # tagged commit without coordinating across workflows. Step-level
+        # continue-on-error so an ISO build flake doesn't block the
+        # core tarball (which is what boxes need for self-update) from
+        # publishing.
+        continue-on-error: true
+        id: build_iso
+        run: ./iso/build.sh
+      - name: Move ISO into dist/
+        # publish-release.sh attaches dist/furtka-<ver>.iso if present.
+        # Skipped gracefully when the build step above failed.
+        if: steps.build_iso.outcome == 'success'
+        run: |
+          iso=$(ls iso/out/*.iso | head -1)
+          cp "$iso" "dist/furtka-${GITHUB_REF_NAME}.iso"
       - name: Publish to Forgejo releases
         env:
           FORGEJO_TOKEN: ${{ secrets.FORGEJO_RELEASE_TOKEN }}


@@ -0,0 +1,47 @@
name: Smoke latest ISO
# Manual-trigger smoke test against the last ISO `build-iso.yml` produced.
# Use this when you've changed something that only affects smoke-vm.sh,
# the PVE setup, or the secrets — skips the 25-min ISO rebuild and only
# runs the ~2-min VM boot + /:5000 check.
#
# The ISO lives at /data/smoke-cache/latest.iso on the runner, populated
# by build-iso.yml's "Cache ISO for smoke-latest" step. That path is
# inside the runner's already-mounted /data volume, so no extra bind
# mounts needed.
on:
  workflow_dispatch:
concurrency:
  group: smoke-latest
  cancel-in-progress: false
jobs:
  smoke:
    runs-on: self-hosted
    timeout-minutes: 10
    steps:
      - uses: actions/checkout@v4
      - name: Check cached ISO exists
        run: |
          iso=/data/smoke-cache/latest.iso
          if [ ! -f "$iso" ]; then
            echo "::error::$iso not found — trigger build-iso.yml first to populate the cache."
            exit 1
          fi
          echo "Will smoke: $iso"
          ls -lh "$iso"
      - name: Install smoke prerequisites
        # Runner container is Alpine with a near-empty base; smoke-vm.sh
        # needs curl, python3, arp-scan, and sudo (kept so the script
        # also works when invoked from a dev laptop as a non-root user).
        run: apk add --no-cache curl python3 arp-scan sudo
      - name: Smoke-test ISO on Proxmox test host
        env:
          PVE_TEST_HOST: ${{ secrets.PVE_TEST_HOST }}
          PVE_TEST_TOKEN: ${{ secrets.PVE_TEST_TOKEN }}
          SMOKE_SHA: ${{ github.sha }}
        run: ./scripts/smoke-vm.sh /data/smoke-cache/latest.iso

.gitignore

@@ -13,3 +13,4 @@ iso/out/
website/public/
website/resources/
website/.hugo_build.lock
website/hugo_stats.json

CHANGELOG.md

@@ -7,6 +7,421 @@ This project uses calendar versioning: `YY.N-stage` (e.g. `26.0-alpha` = 2026, r
## [Unreleased]
## [26.17-alpha] - 2026-05-11
### Added
- **App-to-app dependencies.** Manifests gain an optional `requires`
array; each entry names a provider app plus two optional hook scripts
that live in the *provider's* folder. `on_install` runs once via
`docker compose exec` against the provider's running container while
the consumer is being installed (use case: `mosquitto_passwd` a new
MQTT user for the consumer). `on_start` runs every boot during
reconcile, before the consumer's container starts (use case: make
sure the user still exists after a Mosquitto wipe). Hook stdout
parses as `KEY=VALUE` lines and optional `FURTKA_JSON: {…}` sentinel
lines, both validated against the existing `SETTING_NAME` regex; the
values get merged into the consumer's `.env` (hook wins on conflict)
and the placeholder-secret check runs again over the merged file so
a hook returning `MQTT_PASS=changeme` is refused the same way an
unedited `.env.example` is.
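The hook-stdout contract above can be sketched in stdlib Python. `SETTING_NAME` here is an assumed stand-in for the existing regex (which this diff doesn't show), and the function names are illustrative:

```python
import json
import re

# Assumed shape of the existing SETTING_NAME regex (not shown in this diff).
SETTING_NAME = re.compile(r"^[A-Z][A-Z0-9_]*$")


def parse_hook_stdout(stdout: str) -> dict[str, str]:
    """Collect KEY=VALUE lines plus optional 'FURTKA_JSON: {...}' sentinel lines."""
    values: dict[str, str] = {}
    for line in stdout.splitlines():
        line = line.strip()
        if line.startswith("FURTKA_JSON:"):
            payload = json.loads(line[len("FURTKA_JSON:"):])
            for key, val in payload.items():
                if SETTING_NAME.match(key):
                    values[key] = str(val)
        elif "=" in line:
            key, _, val = line.partition("=")
            if SETTING_NAME.match(key):
                values[key] = val
        # Anything else (log noise) is ignored.
    return values


def merge_env(consumer_env: dict[str, str], hook_values: dict[str, str]) -> dict[str, str]:
    # Hook wins on conflict, per the entry above.
    return {**consumer_env, **hook_values}
```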
- **`POST /api/apps/install/plan`.** New read-only endpoint that
returns the topo-sorted install order for a target app plus per-app
summaries (display_name, version, has_settings, installed flag). The
catalog UI calls this before opening the settings dialog so it can
show a confirm modal — "Installing zigbee2mqtt also installs
Mosquitto" — before anything mutates. Circular dependencies surface
as `400 {error: "circular dependency: A -> B -> A"}`; missing
providers as `400 {error: "required app 'X' not found …"}`.
- **`/var/lib/furtka/install-plan.json`** (overridable via
`FURTKA_INSTALL_PLAN`). The HTTP install endpoint writes this before
it spawns the systemd-run background job so the runner knows the
full chain to pull → create volumes → fire hooks → `compose up` in
plan order. The runner deletes the file after reading it so a stale
plan from a previous install can't accidentally steer the next one.
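The consume-after-read contract can be sketched as (function name illustrative):

```python
import json
from pathlib import Path


def read_install_plan(path: str = "/var/lib/furtka/install-plan.json") -> list[dict]:
    """Read the plan, then delete it, so a stale plan from a previous
    install can never steer the next run."""
    plan_file = Path(path)
    plan = json.loads(plan_file.read_text())
    plan_file.unlink()  # consume: one plan, one run
    return plan
```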
### Changed
- **`furtka reconcile` now visits apps in dependency order, not
alphabetical.** Topo-sort over `requires` puts providers before
consumers so a consumer's `on_start` hook can talk to an already-up
provider. Within a tier, ties stay alphabetical so boot logs are
still deterministic across reboots. Apps with unresolvable `requires`
(missing provider) are visited last; the per-app error-isolation in
reconcile then catches them without aborting the whole sweep.
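A minimal sketch of the ordering rules described above (providers first, deterministic alphabetical tie-breaks, unresolvable apps last) — illustrative, not the real reconcile code:

```python
import heapq


def reconcile_order(requires: dict[str, list[str]]) -> list[str]:
    """Kahn's algorithm over the `requires` edges; a heap of ready apps
    pops alphabetically, so boot logs stay deterministic across reboots."""
    apps = set(requires)
    # Apps whose provider chain names a missing app are visited last.
    bad = {a for a in apps if any(p not in apps for p in requires[a])}
    while True:
        more = {a for a in apps - bad if any(p in bad for p in requires[a])}
        if not more:
            break
        bad |= more
    good = apps - bad
    indeg = {a: len(requires[a]) for a in good}
    consumers: dict[str, list[str]] = {a: [] for a in good}
    for app in good:
        for provider in requires[app]:
            consumers[provider].append(app)
    ready = [a for a in good if indeg[a] == 0]
    heapq.heapify(ready)
    order: list[str] = []
    while ready:
        app = heapq.heappop(ready)
        order.append(app)
        for consumer in consumers[app]:
            indeg[consumer] -= 1
            if indeg[consumer] == 0:
                heapq.heappush(ready, consumer)
    if len(order) != len(good):
        raise ValueError("circular dependency")
    return order + sorted(bad)
```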
- **`POST /api/apps/install` requires `confirm_dependencies: true`**
when installing a named app would pull in transitive providers.
Without the flag, the endpoint returns `409` plus the full plan body
so the UI can render the confirm dialog without a second round-trip.
Lone-target installs (no transitive deps) keep the existing
one-click flow — no UX change for `fileshare`-style standalone apps.
- **`furtka app install <name>` and the web UI now install transitive
dependencies automatically.** `furtka app install /path/to/dir`
stays as today (single-app, dev/test workflow).
- **`compose_exec` and `compose_exec_script` helpers** in
`furtka/dockerops.py`. Both pass `-T` (no TTY) so they work from the
install runner and from reconcile; both raise `DockerError` on
non-zero exit or timeout. `compose_exec_script` streams the script
body via stdin to `sh -s` so hooks don't need to be baked into the
provider's container image.
### Notes
- Hook target service: v1 auto-picks the *first* service in the
provider's compose config. Works for Mosquitto, Postgres, Redis.
Multi-service providers (Authentik server+worker) will need an
optional `service` field on the requirement entry; deferred until a
real case lands.
- Hook timeouts: `on_install` 60 s, `on_start` 30 s. Hardcoded for
v1 — revisit if a DB seed legitimately needs longer.
- Removing an app is now blocked (`409 {dependents: […]}` from the
API, exit 2 from the CLI) when other installed apps require it.
## [26.16-alpha] - 2026-05-10
### Added
- **Failed-login rate limit on `/login`.** A new in-memory
`LoginAttempts` store in `furtka/auth.py` blocks brute-force attempts
after 10 failures in 15 minutes from the same (username, IP) pair,
with a 15-minute lockout. Successful logins clear the counter; a
`systemctl restart furtka` clears any stuck lockout — fine for an
alpha single-user box. Tuple-keying means a flood from one source IP
can't lock the admin out from elsewhere; an attacker can rotate IPs
to keep probing forever, but each attempt still eats the PBKDF2 cost.
Locked attempts get a `Retry-After` header so the UI can render the
cooldown.
- **Live-ISO boot USB is filtered out of the install drive picker.** On
bare-metal installs, `lsblk` reports the USB stick the live ISO
booted from as `TYPE=disk`, so it showed up in the picker alongside
the real install target — a user could in theory pick the USB they
had just booted from. `webinstaller/drives.py` now resolves
`/run/archiso/bootmnt` via `findmnt`, walks it up to its parent disk
via `lsblk -no PKNAME`, and drops that disk before scoring. On a
normal (non-live) box `/run/archiso/bootmnt` does not exist and the
picker is unchanged.
### Changed
- **furtka.org homepage rebuild.** Adopted the visual feel of Pascal's
prototype while keeping Furtka's voice, brand palette, and bilingual
structure: Three.js wireframe torus-knot behind the hero (color +
opacity tied to the existing `--accent` CSS var so light and dark
modes share one scene), scroll-driven camera zoom + tilt, GSAP +
ScrollTrigger card reveals, Lenis smooth scroll, gradient wordmark,
drop-shadow glow in dark mode, and a pulsing CTA pointing at
`/releases`. "What works today" / "What's coming next" lists moved
from markdown bullets into front-matter arrays and now render as
scroll-reveal cards. All vendor JS (Three.js r128, GSAP 3.12.2 +
ScrollTrigger, Lenis 1.0.33) is vendored locally under
`website/assets/js/vendor/`, fingerprinted with SRI, gated to the
homepage only, deferred so first paint isn't blocked, and
early-returned on `prefers-reduced-motion`.
- **Static-asset gzip on the furtka.org nginx (config only — needs a
deploy on forge-runner-01).** Default nginx only gzips `text/html`,
so the homepage HTML was the only asset coming back compressed. The
~600 KB `three.min.js` bundle (and the hashed CSS) were being shipped
uncompressed across the public openresty proxy. `gzip_types` in
`ops/nginx/furtka.org.conf` now covers css/js/json/xml/svg/woff2.
Needs `sudo ops/nginx/setup-vm.sh` on forge-runner-01 to take effect
— the site-deploy workflow only rebuilds Hugo, it doesn't touch the
nginx config.
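  The gzip change might look roughly like this in `ops/nginx/furtka.org.conf` (the exact MIME list is assumed from the entry above; `text/html` is always gzipped by nginx and must not be listed):

  ```nginx
  gzip on;
  gzip_types
      text/css
      application/javascript
      application/json
      application/xml
      image/svg+xml
      font/woff2;
  gzip_min_length 1024;  # tiny responses aren't worth the CPU
  ```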
## [26.15-alpha] - 2026-04-21
### Fixed
- **HTTPS is now opt-in; fresh installs no longer hit unbypassable
SEC_ERROR_BAD_SIGNATURE.** Every version since 26.5 shipped a
Caddyfile with a `__FURTKA_HOSTNAME__.local { tls internal }` site
block, so Caddy auto-generated a self-signed root CA + intermediate
+ leaf on first boot. That worked for first-time-ever users, but
every reinstall (or second Furtka box on the same LAN) produced a
new CA with the **same intermediate CN** (`Caddy Local Authority -
ECC Intermediate` — Caddy hardcodes it). Any browser that had ever
trusted an earlier Furtka CA got a cached intermediate with
mismatched keys, then Firefox's cert lookup substituted the cached
intermediate when validating the new box's leaf → the signature
check failed → `SEC_ERROR_BAD_SIGNATURE`, which Firefox has no
"Advanced → Accept Risk" bypass for.
- Removed the hostname site block from the default Caddyfile.
Fresh installs serve `:80` only; visiting `https://furtka.local`
now yields a clean connection-refused instead of the crypto
fault.
- Added top-level `import /etc/caddy/furtka-https.d/*.caddyfile`.
The `/settings` HTTPS toggle (via `furtka.https.set_force_https`)
now writes TWO snippets atomically — the top-level hostname +
`tls internal` block (enables `:443`) and the `:80`-scoped
redirect (forces HTTP → HTTPS) — and removes both on disable.
Caddy reloads after the pair-swap; failure rolls both back.
- Webinstaller creates `/etc/caddy/furtka-https.d/` during
post-install alongside the existing `furtka.d/`.
- `updater._refresh_caddyfile` runs a 26.14 → 26.15 migration: if
the box already had the redirect snippet on disk (user had
explicitly enabled "Force HTTPS" under the old regime), the
migration also writes the new listener snippet so HTTPS keeps
working across the upgrade.
- **`status.force_https` now reads the listener snippet, not the
redirect snippet.** A lone redirect without a `:443` listener
wouldn't actually serve HTTPS, so the listener file is the
authoritative "HTTPS is on" signal. The UI on `/settings` sees the
correct state as a result.
Known remaining UX wart: a browser that trusted a previous Furtka box
still sees `BAD_SIGNATURE` when visiting this box's `https://` after
enabling HTTPS here — the fixed intermediate CN is a Caddy-side
limitation we can't fix from Furtka. Fresh installs on a browser that
never visited another Furtka box work correctly. Workaround:
`about:networking#sts` → Forget → clear `cert9.db`.
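Assuming a hostname of `furtka`, the snippet pair the toggle writes might look like this (file names and the `furtka_routes` import are illustrative):

```caddyfile
# /etc/caddy/furtka-https.d/listener.caddyfile (hypothetical name)
# Top-level site block: enables the :443 listener with the local CA.
furtka.local, furtka {
	tls internal
	import furtka_routes
}

# /etc/caddy/furtka.d/redirect.caddyfile (imported inside the :80 block)
# Forces HTTP -> HTTPS once the listener above exists.
redir https://{host}{uri} 308
```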
## [26.14-alpha] - 2026-04-21
### Fixed
- **Landing page and `/settings/` were silently bypassing the auth
guard.** Since 26.11 shipped login, the Caddyfile only
reverse-proxied `/api/*`, `/apps*`, `/login*`, and `/logout*` to
Python. Everything else — including `/` and `/settings/` — fell
through to Caddy's catch-all `file_server` and was served straight
from `assets/www/` without ever hitting the session check. The
effect: a LAN visitor saw the box's hostname, IP, Furtka version,
and the buttons for Update-now / Reboot / HTTPS-toggle. The API
calls those buttons fired were all 401-auth-gated so actions didn't
land, but the information leak and the "looks open" UX was a real
bug. Caught in the 26.13 SSH test session when the user noticed
Logout only showed up on `/apps`. Now Caddy routes `/` and
`/settings*` through Python; a new `_serve_static_www` handler
checks the session cookie, redirects to `/login` if unauthed, and
reads the HTML from `assets/www/` otherwise. Catch-all still
serves `/style.css`, `/rootCA.crt`, and the runtime JSON files
publicly — those don't need auth.
- **Logout link now shows on every authed page, not just `/apps`.**
The static HTML for `/` and `/settings/` maintained their own nav
separate from `_HTML` in `api.py`, so they never got the Logout
entry when it was added in 26.11. Both nav bars now include it
plus an inline `doLogout()` that POSTs `/logout` and bounces to
`/login`, matching the pattern in `_HTML`.
## [26.13-alpha] - 2026-04-21
### Fixed
- **Upgrade path from pre-auth releases actually works.** 26.11-alpha
introduced `from werkzeug.security import ...` in `furtka/auth.py`,
but werkzeug isn't installed on the target system — core runs as
system Python with stdlib only, and `flask>=3.0` in `pyproject.toml`
is never pip-installed on the box. Fresh boxes from the 26.11/26.12
ISO without a manually-installed werkzeug crashed on import; boxes
upgrading from pre-26.11 got double-broken by that plus the health
check below. Replaced the werkzeug dependency with a stdlib-only
`furtka/passwd.py` that uses `hashlib.pbkdf2_hmac` for new hashes
and parses werkzeug's `scrypt:N:r:p$salt$hex` format for backward
compatibility — existing `users.json` files created on the rare
boxes that did have werkzeug keep working after this upgrade, no
re-setup needed. `from werkzeug.security import ...` is gone from
the import chain entirely; `pyproject.toml`'s flask dep stays only
for the live-ISO webinstaller.
- **Self-update no longer auto-rolls-back when crossing the auth
boundary.** `updater._health_check` pinged `/api/apps` and demanded
a 200, which meant every 26.10 → 26.11+ upgrade hit the post-restart
check, got a 401 (auth guard), and treated that as "server dead"
→ rollback. Now any 2xx-4xx response counts as "server alive"; only
connection-level failures or 5xx fail the check. 5xx still triggers
rollback because that means the new process is up but broken.
- **Install lock closes its race window.** `POST /api/apps/install`
used to release the fcntl lock immediately after the sync
pre-validation so the systemd-run child could re-acquire it —
leaving a tiny gap where a second POST could slip in, pass the lock
check, and return 202. Both child processes would start, one would
win the in-child lock, the other would die silently. Now the API
also reads `install-state.json` and refuses with 409 if the stage
is non-terminal (`pulling_image`, `creating_volumes`,
`starting_container`). The fcntl lock stays as belt-and-suspenders.
## [26.12-alpha] - 2026-04-21
### Changed
- **App install goes async with live progress.** `POST /api/apps/install`
now returns `202 Accepted` after the synchronous pre-validation
(resolving the source, copying files, writing `.env`, placeholder and
path checks). The handler dispatches the actual Docker part
(`compose pull` → volumes → `compose up`) as a `systemd-run
--unit=furtka-install-<app>` background job that writes its phase to
`/var/lib/furtka/install-state.json`. New `GET /api/apps/install/status`
endpoint for UI polling. The install modal now shows a live
"Downloading image…" → "Creating volumes…" → "Starting container…"
sequence instead of ~30 seconds of dead "Installing…". The pattern
mirrors `/api/catalog/sync/apply` and `/api/furtka/update/apply` 1:1.
New CLI subcommand `furtka app install-bg <name>` (internal, invoked
by the API); `furtka app install` stays synchronous for terminal
users. The Reinstall button in the app list also polls the install
status and mirrors the phase in its button text.
## [26.11-alpha] - 2026-04-21
### Added
- **Login-auth for the Furtka web UI.** Every `/apps`, `/api/*`, `/`,
and `/settings/` route now requires a signed-in session. New
`/login` page serves a username/password form; `POST /login`
validates against `/var/lib/furtka/users.json` (werkzeug
PBKDF2-hashed), sets a `furtka_session` cookie (`HttpOnly`,
`SameSite=Strict`, 7-day TTL), and redirects to `/apps`. `POST /logout`
revokes the server-side session and clears the cookie.
Unauthenticated HTML requests get a 302 to `/login`; unauthenticated
API requests get 401 JSON. The old "No authentication on this UI
yet" banner is gone; the `/apps` header picks up a `Logout` link
instead.
- **First-run setup fallback for upgrade-path boxes.** Boxes
upgrading from 26.10-alpha have no `users.json` yet — on the first
visit `/login` renders a setup form (username + password +
password-confirm) that creates the admin record on submit. Fresh
installs skip this: the webinstaller writes `users.json` during
the chroot post-install step using the step-1 password, so the
first browser visit after boot goes straight to the login form.
- **Caddy proxy routes `/login` and `/logout`.** `assets/Caddyfile`
gets two new `handle` blocks in the shared `(furtka_routes)`
snippet so both the `:80` block and the `hostname.local, hostname`
HTTPS block forward the auth endpoints to the stdlib server on
`127.0.0.1:7000`. Without this Caddy would serve a 404 from the
static file server.
### Fixed
- `tests/test_installer.py` ruff-format nit — the 26.10-alpha
release commit had a misformatted list literal that failed
`ruff format --check`. Caught when the Release page on Forgejo
showed a red CI badge for the tag.
- `pyproject.toml` version string bumped from the stale 26.8-alpha
to 26.11-alpha. Release pipeline uses `GITHUB_REF_NAME` as source
of truth for the artefact name, but having the two agree matters
for local dev runs that read `pyproject.toml`.
## [26.10-alpha] - 2026-04-21
### Added
- **Remove-USB-stick hint on the installer's post-install screen.**
`webinstaller/templates/install/rebooting.html` now shows a bold
"Remove the USB stick now" line before the reboot, plus a muted
fallback explaining the BIOS boot-menu keys (F11/F12/Esc) if the
machine boots back into the installer anyway. Caught on the first
bare-metal test (Medion i5-4gen, 2026-04-21) where the box didn't
boot the installed system without manual BIOS-order changes.
- **New `path` setting type for app manifests.** Apps can now declare a
setting with `"type": "path"` whose value is an absolute filesystem
path on the host; docker-compose bind-mounts it via the usual `.env`
substitution (`${MEDIA_PATH}:/media`). Unlocks media/data-heavy apps
(Jellyfin, later Paperless/Nextcloud/Immich) where the user points at
an existing folder instead of copying everything into a Docker
volume. The install form renders path settings as a plain text input
with a `/mnt/…` placeholder hint.
- **Server-side path validation.** Both `install_from()` and
`update_env()` refuse values that aren't absolute, don't exist,
aren't directories, or resolve (after `Path.resolve()`) into a
system-path deny-list (`/`, `/etc`, `/root`, `/boot`, `/proc`,
`/sys`, `/dev`, `/bin`, `/sbin`, `/usr/bin`, `/usr/sbin`,
`/var/lib/furtka`). Catches `/mnt/../etc`-style traversal too. Error
messages surface in the existing install/edit modal error line.
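The validation rules can be sketched as follows; the deny-list is copied from the entry above (with `/` handled as an exact-match special case), and `validate_path_setting` is a hypothetical name:

```python
from pathlib import Path

DENY = [Path(p) for p in ("/etc", "/root", "/boot", "/proc", "/sys", "/dev",
                          "/bin", "/sbin", "/usr/bin", "/usr/sbin",
                          "/var/lib/furtka")]


def validate_path_setting(value: str) -> Path:
    """Refuse non-absolute, missing, non-directory, or system paths."""
    raw = Path(value)
    if not raw.is_absolute():
        raise ValueError(f"{value!r} is not an absolute path")
    resolved = raw.resolve()  # collapses /mnt/../etc -> /etc
    if resolved == Path("/"):
        raise ValueError("refusing the filesystem root")
    for deny in DENY:
        if resolved == deny or deny in resolved.parents:
            raise ValueError(f"{value!r} resolves into protected {deny}")
    if not resolved.is_dir():
        raise ValueError(f"{value!r} is not an existing directory")
    return resolved
```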
## [26.9-alpha] - 2026-04-21
### Fixed
- Landing-page app tiles with an `open_url` now open in a new tab
(`target="_blank" rel="noopener"`), matching the Open button
behaviour on `/apps`. Without this, clicking "Uptime Kuma" on the
home screen replaced Furtka itself with the Kuma admin page.
Internal links (the `Manage →` fallback for apps without an
`open_url`) still open in the same tab.
- `scripts/publish-release.sh` no longer fails the whole release when
the ISO upload hits a Forgejo proxy 504. The core tarball + sha256 +
release.json (which running boxes need for self-update) are uploaded
first and the ISO is attempted last as a best-effort; a 504 now logs
a warning and exits 0 so the release page still publishes. Surfaced
by the 26.8-alpha cut: the tarball landed but the ~1 GB ISO upload
timed out at the Forgejo reverse proxy.
### Changed
- `furtka app list --json` now mirrors `/api/apps` field-for-field —
previously the CLI emitted a slim projection missing
`description_long`, `open_url`, and `settings`. Anyone piping the
CLI output into jq for automation was seeing an incomplete view.
## [26.8-alpha] - 2026-04-20
### Added
- **Live-installer ISO attached to the Forgejo release page.** `.forgejo/workflows/release.yml` moves to the self-hosted runner, builds both the self-update tarball and the ISO, and `scripts/publish-release.sh` uploads the ISO as a fourth release asset (`furtka-<version>.iso`) alongside the existing tarball + sha256 + release.json. Fresh-install users can now grab the ISO from the release page instead of hunting through `build-iso.yml` artifact retention windows. ISO build step is `continue-on-error` so an ISO flake doesn't hold back the core tarball that running boxes need for self-update.
- **Reboot + Shut down buttons on `/settings`.** Replaces the two "Coming next" placeholders with real actions backed by `POST /api/furtka/power` (`{"action": "reboot" | "poweroff"}`). Handler kicks a delayed `systemd-run --on-active=3s systemctl {reboot|poweroff}` so the HTTP response reaches the browser before the kernel loses network. Each button opens a native confirm dialog first (reboot: "back in ~30 s", shut down: "need to press the physical power button"), then the UI swaps to a status line and — after a reboot — polls `/furtka.json` until the box is back, reloading the page automatically. No auth (same posture as install/remove).
- **Manifest `open_url` field + Open button in `/apps` and on the landing page.** Apps declare a URL template (e.g. `smb://{host}/files` for fileshare, `http://{host}:3001/` for Uptime Kuma); the UI substitutes `{host}` with the current browser's hostname at render time so the link follows however the user reached Furtka (furtka.local, raw IP, a future reverse-proxy hostname). The landing page's hardcoded `if app.name === 'fileshare'` special-case is gone — any app with an `open_url` in its manifest now gets a proper "Open" link. The core seed `apps/fileshare/manifest.json` bumps to v0.1.2 to carry it.
### Changed
- `.btn` CSS class introduced so an `<a>` rendered-as-button lines up with its `<button>` siblings in `.buttons`. Needed because "Open" is a real link (middle-click, copy URL, screen readers) and HTML doesn't let `<button>` carry `href`.
### Notes
- `26.7-alpha` was tagged but never published — the tag push didn't trigger `release.yml` (Forgejo race with the concurrent main push). `26.8-alpha` supersedes it and carries the same content plus power actions.
## [26.6-alpha] - 2026-04-20
### Added
- **Apps catalog synced independently of core.** A new `daniel/furtka-apps` Forgejo repo carries the bundled app catalog; running boxes pull the latest release via `furtka-catalog-sync.timer` (10 min post-boot + daily, ±6 h jitter) and extract atomically into `/var/lib/furtka/catalog/`. The resolver now prefers catalog apps over the seed `/opt/furtka/current/apps/` tree that ships inside the core release tarball, so apps can update without cutting a Furtka core release. Manual trigger: "Sync apps catalog" button on `/apps`, or `sudo furtka catalog sync` at the console. Fresh boxes with no network fall back to the seed, so offline first-boot still shows installable apps. Installed apps are never auto-swapped — users click Reinstall in `/apps` to move an existing install onto a newer catalog version (settings merge-preserved via the existing `installer.install_from` path).
- **Catalog CLI**: `furtka catalog sync [--check] [--json]` + `furtka catalog status [--json]`. Same shape as the core `furtka update` commands.
- **Catalog API endpoints**: `POST /api/catalog/sync/check`, `POST /api/catalog/sync/apply` (detached via `systemd-run` for symmetry with `/api/furtka/update/apply`), `GET /api/catalog/status`. The existing `/api/bundled` endpoint keeps working as a backwards-compat alias for `/api/apps/available`, which now returns the union of catalog + seed apps with a new `"source"` field on each entry (`"catalog"` | `"bundled"`).
### Changed
- **`furtka._release_common`** extracted from `furtka.updater`. Both `updater` and the new `catalog` module now share one implementation of the Forgejo-releases-API call, SHA256 verification, path-traversal-guarded tarball extraction, and CalVer comparison. Public updater surface unchanged.
- **`_link_new_units` now auto-enables newly-linked `.timer` units.** On self-update, a fresh timer file (e.g. `furtka-catalog-sync.timer` added in this release) needs `systemctl enable` to actually start firing — linking alone isn't enough. Fresh installs get their enable via the webinstaller's `_FURTKA_UNITS` list as before.
### Fixed
- **SHA-256 CA fingerprint no longer overflows the `/settings` Local HTTPS card** on narrow viewports. `.kv dd` grid items now set `min-width: 0` + `overflow-wrap: anywhere` so the colon-separated hex string breaks within the card's right edge instead of pushing past it.
## [26.5-alpha] - 2026-04-20
### Fixed
- **HTTPS handshake regression on the installed box (#10).** Phase 1 shipped two linked bugs: the `:443 { tls internal }` site block had no hostname, so Caddy never issued a leaf cert and every SNI handshake died with `SSL_ERROR_INTERNAL_ERROR_ALERT`; and both `furtka.https` and the Caddyfile's `/rootCA.crt` handler referenced `/var/lib/caddy/.local/share/caddy/pki/…`, a path that doesn't exist because our systemd unit sets `XDG_DATA_HOME=/var/lib`. Force-HTTPS toggle made the brokenness user-visible by redirecting working HTTP to dead HTTPS. Fixed: the Caddyfile now ships a `__FURTKA_HOSTNAME__.local, __FURTKA_HOSTNAME__ { tls internal }` block with the placeholder substituted at install time (`webinstaller/app.py`) and on every self-update (`furtka.updater._refresh_caddyfile` reads `/etc/hostname`). `auto_https disable_redirects` keeps Caddy's built-in redirect out of the way of the `/settings` toggle. PKI paths corrected in both `furtka/https.py` and `assets/Caddyfile`. Verified end-to-end on the 192.168.178.110 test VM: TLS 1.3 handshake completes, leaf cert issued, `/rootCA.crt` returns 200.
### Changed
- **Wizard footer version is now dynamic.** `webinstaller/app.py` resolves the Furtka version at startup via a Flask context processor — reads `/opt/furtka/VERSION` on the live ISO (written by `iso/build.sh` from `pyproject.toml` at build time), falls back to `pyproject.toml` in dev runs, then to literal `"dev"`. The 26.4 footer was hand-pinned and drifted within hours of release; that follow-up item is now closed.
- **Docs realigned with 26.4-alpha reality.** `apps/README.md` added (manifest schema, volume namespacing, `.env.example` guardrails, SVG sanitiser limits, install/test flow). Root `README.md` roadmap updated with Phase 1 HTTPS + smoke-VM pipeline as shipped items and 26.4-alpha in the release list. `iso/README.md` corrected: mDNS is wired (not "later milestone"), post-install default URL is `http://furtka.local` (not `proksi.local`), HTTPS is available via `tls internal` since 26.4. `website/README.md` now documents the auto-deploy on push-to-main as the default path, manual `deploy.sh` as the SSH-hop fallback.
## [26.4-alpha] - 2026-04-18
### Added
- **Local HTTPS via Caddy `tls internal`** on port 443. Caddy generates a per-box local root CA on first start; the Caddyfile now serves both `:80` and `:443` from the same routes. HTTP stays on by default — no regression for users who haven't trusted the CA yet. New "Local HTTPS" section in `/settings` shows the CA's SHA-256 fingerprint, offers a one-click download of `rootCA.crt`, links to the per-OS install guide at `/https-install/`, and exposes an opt-in "force HTTPS" toggle that only unhides itself once the current browser has already trusted the cert (so enabling it can't lock the user out of the settings page). Backend: `GET /api/furtka/https/status` and `POST /api/furtka/https/force` in `furtka.https`. The force toggle drops a Caddy import snippet into `/etc/caddy/furtka.d/redirect.caddyfile` and reloads Caddy; reload failure automatically rolls the snippet state back so a bad config can't wedge the next service start.
- **Impressum + Datenschutzerklärung on furtka.org** (both DE and EN) covering §5 DDG and Art. 13 GDPR. Linked from the site footer on every page; bilingual with DE as the legally binding version.
- **Auto-deploy of furtka.org on push-to-main.** New `.forgejo/workflows/deploy-site.yml` runs on the self-hosted runner (which *is* forge-runner-01 — the webserver host), so the deploy is just a local rsync + `hugo --minify` into `/var/www/furtka.org/`. No SSH, no secrets. Manual `website/deploy.sh` remains for out-of-band deploys.
- **Post-build smoke VM on Proxmox test host 192.168.178.165.** Every `build-iso` run boots the freshly built ISO in a throwaway VM on pollux (8 GiB RAM / 2 vCPU — the 4 GB default OOM-ed the host during mkinitcpio), then curls `:5000` to confirm the webinstaller is alive. VMs in VMID range 9000-9099 tagged with the commit SHA; last 5 kept for post-mortem debugging. Optional `workflow_dispatch` "Smoke latest ISO" re-tests the cached ISO in ~2 min without rebuilding. Step-level `continue-on-error` means a VM-side flake doesn't mark the ISO build red.
### Fixed
- **Settings page "Installed" field now refreshes after a self-update.** The `/api/furtka/update/check` response already carries `current` — the settings JS now drives `upd-current` from it the same way it drives `upd-latest`, so clicking "Check for updates" after a successful update reflects the new installed version without a force-reload.
- **Auto-reload on update completion is now reliable.** Clicking "Update now" arms a 45 s fallback `setTimeout(location.reload)` in addition to the existing `/update-state.json` polling loop. If the mid-apply API restart drops the poll connection before `stage: done` is ever observed (as seen on the 2026-04-16 VM test), the fallback still brings the page up on the new version. The fallback is cleared on `done` (5 s reload wins) or `rolled_back` (user needs the error visible).
- **Version string in the webinstaller footer** was pinned at `26.0-alpha` and didn't track releases. Bumped to `26.4-alpha` for this release; follow-up will make it render from `pyproject.toml` dynamically.
## [26.3-alpha] - 2026-04-16
### Fixed
- **Release workflow no longer depends on `jq`.** The previous `apt-get install -y jq` step hung on a slow mirror for 15+ minutes and stalled the 26.2-alpha publish. `publish-release.sh` now assembles the release-create payload via a tiny `python3 -c` block — Python is always available on the Forgejo Actions runner. `apt-get` path removed entirely.
## [26.2-alpha] - 2026-04-16
### Fixed
- **Updater "Check for updates" no longer 404s when every release is a pre-release.** `check_update()` queried Forgejo's `/releases/latest`, which silently excludes pre-releases (anything tagged `-alpha`/`-beta`/`-rc`) and returns 404 when there is no stable release. Switched to `/releases?limit=1`, which Forgejo sorts newest-first across all release kinds. During the alpha stage where every tag is a pre-release this is the only thing that works; once we tag a stable release, the same query still picks it up.
## [26.1-alpha] - 2026-04-16
### Added
@ -59,6 +474,20 @@ First tagged snapshot. Pre-alpha — the installer does not yet boot, but the de
- **Containers:** Docker + Compose
- **License:** AGPL-3.0
[Unreleased]: https://forgejo.sourcegate.online/daniel/furtka/compare/26.16-alpha...HEAD
[26.16-alpha]: https://forgejo.sourcegate.online/daniel/furtka/releases/tag/26.16-alpha
[26.15-alpha]: https://forgejo.sourcegate.online/daniel/furtka/releases/tag/26.15-alpha
[26.14-alpha]: https://forgejo.sourcegate.online/daniel/furtka/releases/tag/26.14-alpha
[26.13-alpha]: https://forgejo.sourcegate.online/daniel/furtka/releases/tag/26.13-alpha
[26.12-alpha]: https://forgejo.sourcegate.online/daniel/furtka/releases/tag/26.12-alpha
[26.11-alpha]: https://forgejo.sourcegate.online/daniel/furtka/releases/tag/26.11-alpha
[26.10-alpha]: https://forgejo.sourcegate.online/daniel/furtka/releases/tag/26.10-alpha
[26.9-alpha]: https://forgejo.sourcegate.online/daniel/furtka/releases/tag/26.9-alpha
[26.8-alpha]: https://forgejo.sourcegate.online/daniel/furtka/releases/tag/26.8-alpha
[26.6-alpha]: https://forgejo.sourcegate.online/daniel/furtka/releases/tag/26.6-alpha
[26.5-alpha]: https://forgejo.sourcegate.online/daniel/furtka/releases/tag/26.5-alpha
[26.4-alpha]: https://forgejo.sourcegate.online/daniel/furtka/releases/tag/26.4-alpha
[26.3-alpha]: https://forgejo.sourcegate.online/daniel/furtka/releases/tag/26.3-alpha
[26.2-alpha]: https://forgejo.sourcegate.online/daniel/furtka/releases/tag/26.2-alpha
[26.1-alpha]: https://forgejo.sourcegate.online/daniel/furtka/releases/tag/26.1-alpha
[26.0-alpha]: https://forgejo.sourcegate.online/daniel/furtka/releases/tag/26.0-alpha

View file

@ -106,15 +106,21 @@ None of these nail the "your dad can set this up" experience. The installer wiza
- [x] Release process + CI — CalVer tags, conventional commits, Forgejo Actions (ruff, pytest, JSON, link checks), `26.0-alpha` tagged
- [x] Forgejo runner live on Proxmox VM (`forge-runner-01`, Ubuntu 24.04) — docker-outside-of-docker with host-mode jobs for ISO builds, setup captured in [docs/runner-setup.md](docs/runner-setup.md) + [ops/forgejo-runner/](ops/forgejo-runner/)
- [x] **ISO-build in CI** — `.forgejo/workflows/build-iso.yml` runs `iso/build.sh` on every push to `main` and publishes the resulting `.iso` as the `furtka-iso` artifact (14 d retention). Push → green run → download → test.
- [x] **Forgejo Releases + tag-driven release pipeline** — `.forgejo/workflows/release.yml` fires on `[0-9]*` tags, `scripts/build-release-tarball.sh` packages `furtka/` + `apps/` + `assets/` + a root VERSION, `scripts/publish-release.sh` uploads tarball + sha256 + release.json to the Forgejo releases page. Releases `26.1-alpha`, `26.3-alpha`, and `26.4-alpha` live at [releases](https://forgejo.sourcegate.online/daniel/furtka/releases) (26.2 stalled on a `jq` apt hang, fixed in 26.3). Needs one repo secret (`FORGEJO_RELEASE_TOKEN`).
- [x] **Walking-skeleton live ISO — end to end** — `iso/build.sh` produces a hybrid BIOS/UEFI Arch-based ISO. It boots in a Proxmox VM, DHCPs onto the LAN, shows a console welcome with `http://proksi.local:5000` (+ IP fallback), serves the Flask webinstaller, runs `archinstall --silent`, reboots the VM via a Reboot-now button, and the installed system logs in and runs `docker ps` without sudo. Build infra in [`iso/`](iso/).
- [x] **Drop loop/rom devices from drive list** — `webinstaller/drives.py` filters by `lsblk` `TYPE=disk`, so the live squashfs and CD-ROM no longer appear as install targets. The boot USB itself is also filtered: on the live ISO, `findmnt /run/archiso/bootmnt` resolves the boot partition and its parent disk is dropped from the picker.
- [x] **Rebrand GRUB menu** — `iso/build.sh` rewrites "Arch Linux install medium" → "Furtka Live Installer" across GRUB, syslinux, and systemd-boot configs; default entry marked `(Recommended)`.
- [x] **Wizard: account form → drive picker → overview → archinstall** — S1 collects hostname/user/password/language with validation, S2 picks boot drive, overview confirms, `/install/run` writes `user_configuration.json` + `user_credentials.json` (0600) and execs `archinstall --silent` against its 4.x schema (`default_layout` disk_config + `!root-password` / `!password` sentinel keys + `custom_commands` for post-install group joins). Install log page polls a JSON endpoint and renders a phase-based progress bar with a collapsible raw log. `FURTKA_DRY_RUN=1` skips the real exec for testing.
- [x] **mDNS `proksi.local`** — hostname baked into the live ISO, avahi + nss-mdns in the package list, advertised as soon as network-online fires. The HTTPS + local-CA half of this milestone is still open below.
- [x] **Base OS post-install (demo level)** — after reboot the installed system comes up with Caddy on `:80` serving a Furtka landing page (welcome + live uptime/Docker/disk tiles), the console shows a banner pointing at `http://<hostname>.local`, and `nss-mdns` makes that URL resolve on the LAN. Written by `webinstaller/app.py`'s `_post_install_commands` via archinstall's `custom_commands`.
- [x] **Resource manager + first bundled app (`fileshare`/SMB)**`furtka/` Python package handles scan / install / remove / reinstall of apps shipped under `apps/`. Manifest schema with settings fields drives an in-browser config form (no SSH needed). First app is a `dperson/samba` share mountable from Mac/Win/Linux. Validated end-to-end on VM 2026-04-16.
- [x] **On-box web UI uplevel** — shared `/style.css` served by Caddy, persistent top nav, landing page with an "Your apps" tile grid + live status, `/apps` with real per-app icons (inlined SVG from each manifest), new `/settings` page (hostname, IP, version, kernel, RAM, Docker, uptime + Furtka-updates card). `prefers-color-scheme` light/dark.
- [x] **Versioned on-box layout + Phase 1 per-app updates**`/opt/furtka/versions/<ver>/` + `current` symlink; `/var/lib/furtka/` for runtime state. `POST /api/apps/<name>/update` runs `docker compose pull` + compares digests + conditional `up -d`.
- [x] **Phase 2 Furtka self-update**`/settings` → Check → Update now. Downloads signed tarball (SHA256), stages, atomic symlink flip, reloads Caddy, daemon-reload, restarts services, health-checks the new api with auto-rollback on failure. CLI: `furtka update [--check]` + `furtka rollback`. Validated end-to-end on VM 2026-04-16 (`26.0-alpha``26.3-alpha` → rollback → reboot).
- [x] **Local HTTPS Phase 1** — Caddy `tls internal` on `:443` is fully opt-in via the `/settings` toggle (26.15-alpha); fresh installs stay HTTP-only so a half-trusted cert chain can't lock the user out. Per-box root CA generated on first enable, `rootCA.crt` downloadable from `/settings`, per-OS install guide at `/https-install/`. The "force HTTPS" sub-toggle still only appears once the current browser already trusts the cert.
- [x] **Post-build smoke VM on Proxmox** — `.forgejo/workflows/build-iso.yml` hands the freshly built ISO to `scripts/smoke-vm.sh`, which boots it in a throwaway VM on `pollux` (192.168.178.165) and curls the webinstaller on `:5000`. VMID range 9000-9099, last 5 kept. Green end-to-end since 26.4-alpha.
- [ ] Installer wizard screens S3-S7 — per-device purpose, network, domain, SSL, diagnostic. S5/S6 blocked on managed-gateway DNS infra not yet built.
- [ ] Local HTTPS Phase 2 — dedicated local CA (not Caddy's `tls internal`), streamlined one-click install across Win/Mac/Linux/Android, and HTTPS on the live-installer wizard (`https://proksi.local:5000`).
- [ ] Caddy + Authentik wired into first-boot bootstrap
- [ ] Managed gateway infrastructure — `ns1/ns2.furtka.org` + DNS-01 wildcard automation
- [ ] First containerized service (Nextcloud?) with auto-SSO + auto-subdomain

View file

@ -45,7 +45,7 @@ Tag per meaningful milestone, not on a calendar. A milestone is: ISO boots, a wi
git push origin 26.1-alpha
```
5. **The release workflow does the rest.** `.forgejo/workflows/release.yml` fires on the tag push and runs on the self-hosted runner: `scripts/build-release-tarball.sh` builds the self-update payload (tarball + sha256 + release.json under `dist/`), `iso/build.sh` builds the live-installer ISO, `scripts/publish-release.sh` uploads tarball + sha256 + release.json + ISO to the Forgejo release page. Pre-release is flagged automatically based on the suffix (`-alpha`/`-beta`/`-rc`). The ISO build is `continue-on-error`: a flaky ISO step doesn't block the core tarball (the thing boxes need for self-update).
The release workflow needs one secret set at repo **Settings → Secrets → Actions**:
- `FORGEJO_RELEASE_TOKEN` — a PAT with `write:repository` scope.

apps/README.md Normal file
View file

@ -0,0 +1,145 @@
# Building a Furtka app from a Docker image
A Furtka app is a folder with four files. The reconciler walks `/var/lib/furtka/apps/*` at boot, validates each manifest, ensures the declared volumes exist, and runs `docker compose up -d` per app. Filesystem is the only source of truth — no database.
Use `apps/fileshare/` as the reference implementation.
## Folder layout
```
apps/<name>/
  manifest.json        # required — app metadata and user-facing settings
  docker-compose.yaml  # required — filename is .yaml, not .yml
  .env.example         # required — keys consumed by docker-compose, with safe defaults
  icon.svg             # required — referenced by manifest.icon
```
The folder name must equal `manifest.name`. The scanner rejects mismatches.
## `manifest.json`
All top-level fields except `description_long` and `settings` are required.
```json
{
  "name": "myapp",
  "display_name": "My App",
  "version": "0.1.0",
  "description": "One-line summary shown in the app list.",
  "description_long": "Longer German prose shown on the app page. Optional.",
  "volumes": ["data"],
  "ports": [8080],
  "icon": "icon.svg",
  "settings": [
    {
      "name": "ADMIN_PASSWORD",
      "label": "Passwort",
      "description": "Wird beim ersten Start gesetzt.",
      "type": "password",
      "required": true
    }
  ]
}
```
Rules enforced by `furtka/manifest.py`:
- `volumes` — short names, strings. Namespaced to `furtka_<app>_<short>` at runtime.
- `ports` — integers. Informational only; compose owns the actual port binding.
- `settings[].name` — must match `^[A-Z_][A-Z0-9_]*$`. This name becomes both the env-var key and the form-field ID.
- `settings[].type` — one of `text`, `password`, `number`, `path`.
- `settings[].required` — if true, the install refuses when the value is empty.
- `settings[].default` — optional string. Used to pre-fill the form and the bootstrapped `.env`.
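Read as code, the rules above amount to a small pre-flight check. A minimal Python sketch — the function name, error strings, and exact shape of the checks are illustrative; the authoritative versions live in `furtka/manifest.py`:

```python
import re

SETTING_TYPES = {"text", "password", "number", "path"}
SETTING_NAME_RE = re.compile(r"^[A-Z_][A-Z0-9_]*$")

def validate_manifest(folder_name: str, m: dict) -> list[str]:
    """Return human-readable problems; an empty list means the manifest passes."""
    errs = []
    # Folder name must equal manifest.name -- the scanner rejects mismatches.
    if m.get("name") != folder_name:
        errs.append(f"folder {folder_name!r} != manifest.name {m.get('name')!r}")
    if not all(isinstance(v, str) for v in m.get("volumes", [])):
        errs.append("volumes must be short string names")
    if not all(isinstance(p, int) for p in m.get("ports", [])):
        errs.append("ports must be integers")
    for s in m.get("settings", []):
        if not SETTING_NAME_RE.match(s.get("name", "")):
            errs.append(f"bad setting name {s.get('name')!r}")
        if s.get("type") not in SETTING_TYPES:
            errs.append(f"bad setting type {s.get('type')!r}")
    return errs
```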
### Path-type settings (host bind mounts)
Use `"type": "path"` when the app should point at an existing folder on the host — media libraries, document archives, photo backups. The value is written to `.env` like any other setting, and compose consumes it via `${VAR}` substitution as a bind mount.
```json
{
  "name": "MEDIA_PATH",
  "label": "Medienordner",
  "description": "Absoluter Pfad zu deinem Medien-Ordner, z.B. /mnt/media.",
  "type": "path",
  "required": true
}
```
```yaml
services:
  app:
    volumes:
      - ${MEDIA_PATH}:/media:ro
```
The installer (`install_from` and `update_env`) refuses values that:
- aren't absolute (must start with `/`),
- don't exist on the host,
- aren't directories,
- resolve (after `Path.resolve()`) into a system-path deny-list: `/`, `/etc`, `/root`, `/boot`, `/proc`, `/sys`, `/dev`, `/bin`, `/sbin`, `/usr/bin`, `/usr/sbin`, `/var/lib/furtka`.
Traversal like `/mnt/../etc` is caught too — the deny-list check runs on the resolved path.
Path settings sit alongside manifest-declared volumes. Use `manifest.volumes` for internal state the app owns (databases, caches, config), and path settings for user data the container should mount and — usually — read without owning. Mounting read-only (`:ro`) is a good default for data the app only consumes.
## `docker-compose.yaml`
- File extension is `.yaml`. The compose runner hardcodes this — `.yml` will not be found.
- Reference manifest volumes as `furtka_<app>_<short>` with `external: true`. The reconciler creates the volume *before* `compose up`, so compose must not try to manage its lifecycle.
- Values from `.env` are substituted by compose in the usual `${VAR}` form.
- If the upstream image ships a HEALTHCHECK that misbehaves on Furtka's setup, disable it — a permanently-unhealthy container scares users reading `docker ps`.
- Pin images to a digest or stable tag when you can. `:latest` is acceptable for an MVP but noisy.
Minimal example:
```yaml
services:
  app:
    image: ghcr.io/example/myapp:1.2.3
    restart: unless-stopped
    environment:
      - ADMIN_PASSWORD=${ADMIN_PASSWORD}
    ports:
      - "8080:8080"
    volumes:
      - furtka_myapp_data:/var/lib/myapp
volumes:
  furtka_myapp_data:
    external: true
```
## `.env.example`
One `KEY=VALUE` per line. Every key declared in `manifest.settings` should have a line here so the compose file resolves cleanly on first install even before the user opens the form.
Do not use `changeme` (or any value listed in `furtka.installer.PLACEHOLDER_SECRETS`) as the default for a required secret. The install step scans the final `.env` and refuses to finish if a placeholder survives — this is the guardrail that stops us shipping an app with a known password.
For non-secret values (usernames, paths), sensible defaults are fine and go straight into `.env` on first install.
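A minimal sketch of that guardrail — the placeholder set shown here is invented apart from `changeme`; the real list is `furtka.installer.PLACEHOLDER_SECRETS`, and the real scan may differ in detail:

```python
# Hypothetical placeholder values for illustration only.
PLACEHOLDER_SECRETS = {"changeme", "password", "secret", "example"}

def surviving_placeholders(env_text: str, required_keys: set[str]) -> set[str]:
    """Required setting keys whose final .env value is still a known
    placeholder -- the install refuses to finish if this is non-empty."""
    bad = set()
    for line in env_text.splitlines():
        if "=" not in line or line.lstrip().startswith("#"):
            continue
        key, _, value = line.partition("=")
        if key.strip() in required_keys and value.strip().lower() in PLACEHOLDER_SECRETS:
            bad.add(key.strip())
    return bad
```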
## `icon.svg`
- 64×64 viewBox, no width/height attributes so the UI can scale it.
- Use `fill="currentColor"` (and `stroke="currentColor"`) so the icon picks up the current theme instead of baking in a color.
- Keep it single-path-ish. These render small in the app grid.
- The icon is inlined into the `/apps` page by the defensive SVG sanitiser, which strips `<script>`, `on*` attributes, and `javascript:` refs and enforces a 16 KB cap. Anything fancier than static paths and shapes will be rejected.
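As a rough sketch of those constraints — this is not the actual sanitiser (which may parse the XML rather than string-match), just the rules restated as a reject-check:

```python
import re

MAX_SVG_BYTES = 16 * 1024  # the 16 KB cap mentioned above

def svg_ok(svg: str) -> bool:
    """True if the SVG passes the documented checks: size cap, no <script>,
    no on* event-handler attributes, no javascript: references."""
    if len(svg.encode()) > MAX_SVG_BYTES:
        return False
    lowered = svg.lower()
    if "<script" in lowered or "javascript:" in lowered:
        return False
    if re.search(r"\son[a-z]+\s*=", lowered):  # onload=, onclick=, ...
        return False
    return True
```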
## Install and test
From the repo root on a dev box with Furtka installed:
```
sudo furtka app install ./apps/myapp
```
`furtka app install` runs a reconcile as its last step, so the container is up once the command returns. Open the Web UI (`http://furtka.local/`), fill in the settings form, and confirm the app starts. `docker ps` should show one container per compose service; `docker volume ls` should show `furtka_myapp_*`.
To bundle the app into the ISO, drop the folder into `apps/` before `iso/build.sh` runs — the build tarballs the whole `apps/` tree into the image.
## Out of scope (for now)
- Sharing volumes between apps. v1 keeps them isolated.
- Auth on the Web UI. The UI itself has a banner about this.
- Automatic updates. User-triggered per-app update is `POST /api/apps/<name>/update`.
- A network catalog. `furtka app install <name>` only resolves bundled apps in `/opt/furtka/apps/`.

View file

@ -1,12 +1,13 @@
{
  "name": "fileshare",
  "display_name": "Network Files",
  "version": "0.1.2",
  "description": "SMB share for Mac, Windows, Linux and Android devices on the LAN.",
  "description_long": "Alle Geräte im WLAN sehen einen gemeinsamen Ordner. Funktioniert mit Windows, Mac, Linux und Android. Verbinden zu smb://furtka.local — Anmeldung mit dem hier gesetzten Benutzernamen und Passwort.",
  "volumes": ["files"],
  "ports": [445, 139],
  "icon": "icon.svg",
  "open_url": "smb://{host}/files",
  "settings": [
    {
      "name": "SMB_USER",
View file

@ -1,17 +1,62 @@
# Serves the Furtka landing page + live JSON on :80 (plain HTTP). HTTPS
# is **opt-in** — Caddy doesn't serve :443 until the user clicks the
# "Enable HTTPS" toggle on /settings, which drops an import snippet into
# /etc/caddy/furtka-https.d/. Default install has NO tls site block →
# Caddy never generates a self-signed CA / leaf cert → no
# SEC_ERROR_BAD_SIGNATURE when a user visits https://furtka.local before
# they've trusted anything. That was the 26.14-era regression this file
# exists to cure: the old Caddyfile always served :443 with a freshly-
# generated cert, and a browser that had ever trusted an older Furtka
# box's CA would reject the new one with an unbypassable bad-sig error.
#
# /apps, /api, /login, /logout, / (home), /settings are reverse-proxied
# to the resource-manager API (furtka serve, bound to 127.0.0.1:7000).
# Static pages are read from /opt/furtka/current/ — updates flip the
# symlink and everything picks up the new content without a Caddy
# restart (a `systemctl reload caddy` is still triggered post-swap to
# flush the file-server's handle cache).
#
# Two snippet dirs, both silently no-op when empty:
# - /etc/caddy/furtka.d/*.caddyfile → imported inside the :80 block.
# The HTTPS toggle's "force HTTP→HTTPS redirect" snippet lands here.
# - /etc/caddy/furtka-https.d/*.caddyfile → imported at TOP LEVEL, so
# the HTTPS hostname+tls-internal site block can drop in here when
# the toggle is on. Hostname is substituted at toggle-time.
{
# Named-hostname :443 blocks would otherwise make Caddy add its own
# HTTP→HTTPS redirect — but we already serve our own `:80` block and
# the opt-in /settings toggle owns the redirect. Disable the built-in
# to keep a single source of truth.
auto_https disable_redirects
}
(furtka_routes) {
handle /api/* {
reverse_proxy localhost:7000
}
handle /apps* {
reverse_proxy localhost:7000
}
handle /login* {
reverse_proxy localhost:7000
}
handle /logout* {
reverse_proxy localhost:7000
}
# /settings and / — these previously served as static HTML straight
# from the catch-all file_server, which meant the auth-guard was
# bypassed: a LAN visitor could see the box's version, IP, and
# reach the Update-now / Reboot buttons (the API calls behind them
# are auth-gated, but the page itself rendered without a redirect
# to /login). Route them through the Python handler which checks
# the session cookie and either serves the static HTML from
# assets/www/ or redirects to /login.
handle /settings* {
reverse_proxy localhost:7000
}
handle / {
reverse_proxy localhost:7000
}
# Runtime JSON lives under /var/lib/furtka/ so it survives self-updates
# (which only swap /opt/furtka/current).
handle /status.json {
@ -26,6 +71,16 @@
root * /var/lib/furtka
file_server
}
# Download the local root CA cert Caddy generated for `tls internal`.
# Public because users need to grab it before they've trusted it.
# The private key next to it stays 0600 / caddy-owned.
handle /rootCA.crt {
root * /var/lib/caddy/pki/authorities/local
rewrite * /root.crt
file_server
header Content-Type "application/x-x509-ca-cert"
header Content-Disposition "attachment; filename=furtka-local-rootCA.crt"
}
handle {
root * /opt/furtka/current/assets/www
file_server
@ -35,3 +90,13 @@
output stdout
}
}
# HTTPS opt-in: when /settings toggles HTTPS on, a snippet gets written
# into /etc/caddy/furtka-https.d/ that adds the hostname+tls-internal
# site block. Empty directory = HTTP-only (default fresh install).
import /etc/caddy/furtka-https.d/*.caddyfile
:80 {
import /etc/caddy/furtka.d/*.caddyfile
import furtka_routes
}
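For reference, a snippet the toggle might drop into `furtka-https.d/` could look like the following — the filename, the hostname, and the exact contents are assumptions (the toggle substitutes the box's real hostname at enable time), not the actual generated file:

```
# /etc/caddy/furtka-https.d/https.caddyfile — hypothetical example
furtka.local {
	tls internal
	import furtka_routes
}
```

Because the site block only exists while the toggle is on, deleting the snippet and reloading Caddy returns the box to HTTP-only.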

View file

@ -0,0 +1,12 @@
[Unit]
Description=Furtka apps catalog sync
Requires=network-online.target
After=network-online.target
[Service]
Type=oneshot
ExecStart=/usr/local/bin/furtka catalog sync
TimeoutStartSec=5min
[Install]
WantedBy=multi-user.target

View file

@ -0,0 +1,14 @@
[Unit]
Description=Furtka apps catalog daily sync
[Timer]
# First sync 10 min after boot, then once per day with up to 6 h jitter so
# a fleet of boxes doesn't all hit Forgejo at the same second. Persistent
# = catch up if the box was off when the timer should have fired.
OnBootSec=10min
OnUnitActiveSec=24h
RandomizedDelaySec=6h
Persistent=true
[Install]
WantedBy=timers.target

View file

@ -0,0 +1,159 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Install local HTTPS · Furtka</title>
<meta name="viewport" content="width=device-width,initial-scale=1">
<link rel="stylesheet" href="/style.css">
</head>
<body>
<main class="wrap">
<nav class="nav">
<a class="brand" href="/">Furtka</a>
<div class="nav-links">
<a href="/">Home</a>
<a href="/apps">Apps</a>
<a href="/settings/" aria-current="page">Settings</a>
</div>
</nav>
<h1>Install local HTTPS</h1>
<p class="lede">
Trust the Furtka root CA on your device, then reach this box at
<code>https://<span id="hostname"></span>/</code> with a green padlock.
HTTP stays available until you enable the redirect in
<a class="inline-link" href="/settings/">Settings</a>.
</p>
<section>
<h2>Download the CA</h2>
<div class="card">
<dl class="kv">
<dt>Fingerprint (SHA-256)</dt><dd id="fingerprint"></dd>
</dl>
<p class="hint">
Check this fingerprint matches what <code>/settings</code> shows before
trusting it on another device. The root CA is unique to this box.
</p>
<div class="update-actions">
<button id="download-btn" class="secondary">Download rootCA.crt</button>
</div>
</div>
</section>
<section>
<h2>Linux (system-wide)</h2>
<div class="card">
      <p class="hint">Arch:</p>
      <pre>sudo cp rootCA.crt /etc/ca-certificates/trust-source/anchors/furtka-local.crt
sudo update-ca-trust</pre>
      <p class="hint">Fedora / RHEL:</p>
      <pre>sudo cp rootCA.crt /etc/pki/ca-trust/source/anchors/furtka-local.crt
sudo update-ca-trust</pre>
<p class="hint">Debian / Ubuntu:</p>
<pre>sudo cp rootCA.crt /usr/local/share/ca-certificates/furtka-local.crt
sudo update-ca-certificates</pre>
<p class="hint">
Firefox keeps its own certificate store. After the above, open
<code>about:preferences#privacy</code><em>View Certificates</em>
<em>Authorities</em><em>Import</em>, pick <code>rootCA.crt</code>,
tick <em>Trust this CA to identify websites</em>.
</p>
</div>
</section>
<section>
<h2>macOS</h2>
<div class="card">
<ol>
<li>Double-click <code>rootCA.crt</code>. Keychain Access opens.</li>
<li>When prompted, add it to the <strong>System</strong> keychain.</li>
<li>Find the <em>Furtka</em> entry, double-click, expand <em>Trust</em>,
set <em>When using this certificate</em> to <strong>Always Trust</strong>.</li>
<li>Close the window — you will be asked for your password.</li>
</ol>
</div>
</section>
<section>
<h2>Windows</h2>
<div class="card">
<ol>
<li>Double-click <code>rootCA.crt</code>.</li>
<li>Click <strong>Install Certificate</strong>.</li>
<li>Choose <strong>Local Machine</strong> (requires admin) and click <em>Next</em>.</li>
<li>Select <strong>Place all certificates in the following store</strong>
<em>Browse</em><strong>Trusted Root Certification Authorities</strong>.</li>
<li>Finish. Chrome and Edge pick this up immediately. Firefox keeps its
own store — import the same file via Firefox settings.</li>
</ol>
</div>
</section>
<section>
<h2>Android</h2>
<div class="card">
<ol>
          <li>Transfer <code>rootCA.crt</code> to the device (email, USB,
              Quick Share — whatever is handy).</li>
<li>Settings → <em>Security</em> (or <em>Security &amp; privacy</em>)
<em>More security settings</em><em>Encryption &amp; credentials</em>
<em>Install a certificate</em><strong>CA certificate</strong>.</li>
<li>Confirm the warning, then pick the file.</li>
</ol>
<p class="hint">
Android 11+ only trusts user-installed CAs for browsers by default.
Some apps (banking, Play services) ignore them. Not a Furtka bug —
an Android policy choice.
</p>
</div>
</section>
<section>
<h2>iOS &amp; iPadOS</h2>
<div class="card">
<p class="hint">
Honest warning: iOS needs a signed configuration profile for a
properly trusted CA. What works today:
</p>
<ol>
<li>Email <code>rootCA.crt</code> to yourself and open the attachment
in Mail. iOS prompts to install a profile.</li>
<li>Settings → <em>General</em><em>VPN &amp; Device Management</em>
→ tap the Furtka profile → <strong>Install</strong>.</li>
<li>Settings → <em>General</em><em>About</em><em>Certificate
Trust Settings</em> → toggle <strong>Furtka</strong> on.</li>
</ol>
<p class="hint">
A packaged <code>.mobileconfig</code> makes this smoother; it's on
the roadmap but not in this release.
</p>
</div>
</section>
<footer>
<p>Furtka · <a href="https://furtka.org">furtka.org</a></p>
</footer>
</main>
<script>
document.getElementById('hostname').textContent = location.hostname;
document.getElementById('download-btn').addEventListener('click', () => {
const a = document.createElement('a');
a.href = '/rootCA.crt';
a.download = 'furtka-local-rootCA.crt';
document.body.appendChild(a);
a.click();
a.remove();
});
(async () => {
try {
const r = await fetch('/api/furtka/https/status', { cache: 'no-store' });
if (!r.ok) return;
const s = await r.json();
document.getElementById('fingerprint').textContent =
s.fingerprint_sha256 || 'waiting for Caddy…';
} catch (e) { /* keep the placeholder */ }
})();
</script>
</body>
</html>

View file

@ -14,6 +14,7 @@
<a href="/" aria-current="page">Home</a>
<a href="/apps">Apps</a>
<a href="/settings/">Settings</a>
<a href="#" id="logout-link" onclick="return doLogout(event)">Logout</a>
</div>
</nav>
<header>
@ -67,6 +68,17 @@
</main>
<script>
// Revoke the cookie server-side and bounce to /login. Shared
// shape with the _HTML in furtka/api.py so the two logout
// links behave identically.
async function doLogout(ev) {
ev.preventDefault();
try { await fetch('/logout', { method: 'POST', credentials: 'same-origin' }); }
catch (e) { /* server may already be down */ }
window.location.href = '/login';
return false;
}
// Hostname + install metadata — written once at install time to
// /var/lib/furtka/furtka.json (see _furtka_json_cmd in the installer).
// Separate from status.json because these facts don't change between
@ -92,13 +104,17 @@
}
function primaryAction(app) {
  // open_url is a manifest-declared template with a `{host}`
  // placeholder — substituted against the current browser's
  // hostname so smb://host/files and http://host:3001/ both
  // follow however the user reached Furtka (furtka.local, raw
  // IP, a future reverse-proxy hostname). Apps without a
  // frontend fall back to /apps for management.
  if (app.open_url) {
    const host = HOSTNAME || location.hostname;
    return { href: app.open_url.replace('{host}', host), label: 'Open', external: true };
  }
  return { href: '/apps', label: 'Manage →', external: false };
}
async function renderApps() { async function renderApps() {
@@ -115,8 +131,9 @@
}
target.innerHTML = apps.map(a => {
const icon = a.icon_svg || FALLBACK_ICON;
const { href, label, external } = primaryAction(a);
const tgt = external ? ' target="_blank" rel="noopener"' : '';
return `<a class="app-tile" href="${esc(href)}"${tgt}>
<div class="icon">${icon}</div>
<span class="name">${esc(a.display_name || a.name)}</span>
<span class="cta">${esc(label)}</span>


@@ -14,6 +14,7 @@
<a href="/">Home</a>
<a href="/apps">Apps</a>
<a href="/settings/" aria-current="page">Settings</a>
<a href="#" id="logout-link" onclick="return doLogout(event)">Logout</a>
</div>
</nav>
@@ -50,6 +51,35 @@
</div>
</section>
<section>
<h2>Local HTTPS</h2>
<div class="card">
<p class="lede">
Serve this box over <code>https://<span id="https-host"></span>/</code>
with a green padlock. Install the Furtka root CA once per device, then
optionally force every HTTP request to redirect.
</p>
<dl class="kv">
<dt>CA fingerprint (SHA-256)</dt><dd id="https-fingerprint"></dd>
<dt>Reachable from this browser</dt><dd id="https-reachable">checking…</dd>
</dl>
<div class="update-actions">
<button id="https-download-btn" class="secondary">Download CA (.crt)</button>
<a href="/https-install/" class="inline-link">Per-OS install guide</a>
</div>
<label class="https-toggle" hidden id="https-force-wrap">
<input type="checkbox" id="https-force">
<span>Force HTTPS (redirect plain HTTP to HTTPS)</span>
</label>
<p class="hint" id="https-force-hint" hidden>
Enable this only after you've installed the CA and confirmed
<code>https://</code> works in this browser — otherwise the redirect
will leave you with a scary certificate warning.
</p>
<p id="https-status" class="hint"></p>
</div>
</section>
<section>
<h2>Appearance</h2>
<div class="card">
@@ -60,12 +90,25 @@
</div>
</section>
<section>
<h2>Power</h2>
<div class="card">
<p class="lede">
Reboot or shut down the whole Furtka box. Takes a few seconds to
finish; the UI will reconnect itself after a reboot.
</p>
<div class="power-actions">
<button type="button" id="power-reboot" class="secondary">Reboot</button>
<button type="button" id="power-poweroff" class="danger">Shut down</button>
</div>
<p id="power-status" class="hint"></p>
</div>
</section>
<section>
<h2>Coming next</h2>
<div class="coming">
<p class="hint">Controls we're building — follow progress on <a href="https://furtka.org">furtka.org</a>.</p>
<a href="https://furtka.org/#planned">Change hostname</a>
<a href="https://furtka.org/#planned">Backup</a>
<a href="https://furtka.org/#planned">User accounts</a>
@@ -79,6 +122,15 @@
</main>
<script>
// Logout button in the nav — same shape as /apps and / pages.
async function doLogout(ev) {
ev.preventDefault();
try { await fetch('/logout', { method: 'POST', credentials: 'same-origin' }); }
catch (e) { /* server may already be down */ }
window.location.href = '/login';
return false;
}
async function refresh() {
try {
const r = await fetch('/status.json', { cache: 'no-store' });
@@ -113,6 +165,7 @@
};
let pollHandle = null;
let fallbackReloadHandle = null;
const statusEl = document.getElementById('update-status');
const checkBtn = document.getElementById('check-updates-btn');
const applyBtn = document.getElementById('apply-update-btn');
@@ -135,6 +188,7 @@
return;
}
document.getElementById('upd-latest').textContent = data.latest || '—';
document.getElementById('upd-current').textContent = data.current || '—';
if (data.update_available) {
applyBtn.hidden = false;
applyBtn.textContent = `Update to ${data.latest}`;
@@ -169,6 +223,10 @@
// Poll /update-state.json (served by Caddy, unaffected by the
// API restart the updater is about to trigger) every 2s.
pollHandle = setInterval(pollUpdateState, 2000);
// Fallback: reload regardless of whether polling observes 'done'.
// The mid-apply API restart can drop the poll connection before
// the terminal state is ever seen by this page.
fallbackReloadHandle = setTimeout(() => location.reload(), 45000);
} catch (e) {
setStatus(`Network error: ${e.message}`, true);
applyBtn.disabled = false;
@@ -176,6 +234,111 @@
}
});
// --- Local HTTPS --------------------------------------------------
const httpsFingerprintEl = document.getElementById('https-fingerprint');
const httpsReachableEl = document.getElementById('https-reachable');
const httpsHostEl = document.getElementById('https-host');
const httpsDownloadBtn = document.getElementById('https-download-btn');
const httpsForceWrap = document.getElementById('https-force-wrap');
const httpsForceHint = document.getElementById('https-force-hint');
const httpsForce = document.getElementById('https-force');
const httpsStatusEl = document.getElementById('https-status');
httpsHostEl.textContent = location.hostname;
httpsDownloadBtn.addEventListener('click', () => {
// Use an anchor with the download attr so the browser treats
// the cert as a download rather than rendering it.
const a = document.createElement('a');
a.href = '/rootCA.crt';
a.download = 'furtka-local-rootCA.crt';
document.body.appendChild(a);
a.click();
a.remove();
});
async function refreshHttpsStatus() {
try {
const r = await fetch('/api/furtka/https/status', { cache: 'no-store' });
if (!r.ok) return;
const s = await r.json();
httpsFingerprintEl.textContent = s.fingerprint_sha256 || 'waiting for Caddy…';
httpsDownloadBtn.disabled = !s.ca_available;
httpsForce.checked = !!s.force_https;
updateForceToggleVisibility(s);
} catch (e) {
/* next refresh will retry */
}
}
async function probeHttpsReachable() {
if (location.protocol === 'https:') {
httpsReachableEl.textContent = 'yes — you are on HTTPS now';
return true;
}
try {
// no-cors: we don't need the response body, just whether the
// TLS handshake + fetch succeed. Browsers reject on untrusted
// cert with a TypeError, which is exactly the signal we want.
await fetch('https://' + location.hostname + '/furtka.json',
{ cache: 'no-store', mode: 'no-cors' });
httpsReachableEl.textContent = 'yes — CA already trusted';
return true;
} catch (e) {
httpsReachableEl.textContent = 'no — install the CA first';
return false;
}
}
let httpsReachableCache = false;
function updateForceToggleVisibility(status) {
// Show the force-redirect toggle only when both:
// - Caddy's CA exists (otherwise there's no HTTPS to redirect to)
// - the current browser already trusts the cert (otherwise the
// user would lock themselves out of this very page)
const show = status.ca_available && httpsReachableCache;
httpsForceWrap.hidden = !show;
httpsForceHint.hidden = !show;
}
httpsForce.addEventListener('change', async () => {
httpsForce.disabled = true;
const desired = httpsForce.checked;
httpsStatusEl.textContent = desired
? 'Enabling HTTP→HTTPS redirect…'
: 'Disabling HTTP→HTTPS redirect…';
httpsStatusEl.style.color = 'var(--muted)';
try {
const r = await fetch('/api/furtka/https/force', {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({ enabled: desired }),
});
const data = await r.json();
if (!r.ok) {
httpsStatusEl.textContent = data.error || `HTTP ${r.status}`;
httpsStatusEl.style.color = 'var(--danger)';
httpsForce.checked = !desired;
} else {
httpsStatusEl.textContent = data.force_https
? 'Redirect on — new HTTP requests will jump to HTTPS.'
: 'Redirect off — HTTP serves the content directly.';
}
} catch (e) {
httpsStatusEl.textContent = `Network error: ${e.message}`;
httpsStatusEl.style.color = 'var(--danger)';
httpsForce.checked = !desired;
} finally {
httpsForce.disabled = false;
}
});
(async () => {
httpsReachableCache = await probeHttpsReachable();
await refreshHttpsStatus();
})();
async function pollUpdateState() {
try {
const r = await fetch('/update-state.json', { cache: 'no-store' });
@@ -185,9 +348,11 @@
setStatus(label, s.stage === 'rolled_back');
if (s.stage === 'done') {
clearInterval(pollHandle);
clearTimeout(fallbackReloadHandle);
setTimeout(() => location.reload(), 5000);
} else if (s.stage === 'rolled_back') {
clearInterval(pollHandle);
clearTimeout(fallbackReloadHandle);
if (s.reason) {
setStatus(`${label} — ${s.reason}`, true);
}
@@ -198,6 +363,85 @@
/* keep polling; restart blip expected */
}
}
// Power buttons: confirm, POST, then swap the whole card into a
// "going down" state so the user doesn't keep clicking. After a
// reboot we start probing for reconnect after ~30 s; for shutdown
// we just tell the user the box is off — no auto-reconnect attempt.
const powerStatusEl = document.getElementById('power-status');
const rebootBtn = document.getElementById('power-reboot');
const poweroffBtn = document.getElementById('power-poweroff');
function setPowerStatus(msg, tone = 'muted') {
powerStatusEl.textContent = msg;
powerStatusEl.style.color =
tone === 'error' ? 'var(--danger)' : 'var(--muted)';
}
async function triggerPower(action, confirmMsg, inflightLabel) {
if (!confirm(confirmMsg)) return;
rebootBtn.disabled = true;
poweroffBtn.disabled = true;
setPowerStatus(inflightLabel);
try {
const r = await fetch('/api/furtka/power', {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({ action }),
});
if (!r.ok) {
const data = await r.json().catch(() => ({}));
setPowerStatus(data.error || `HTTP ${r.status}`, 'error');
rebootBtn.disabled = false;
poweroffBtn.disabled = false;
return;
}
if (action === 'reboot') {
setPowerStatus('Rebooting… this page will reload when the box is back.');
// Try reconnecting after a generous delay. archinstall
// + boot + services typically takes 30-45 s; give it 30 s
// before the first poke so we don't just spin against
// a down kernel.
setTimeout(pollForReconnect, 30000);
} else {
setPowerStatus(
'Shutdown scheduled. Press the physical power button to turn it back on.'
);
}
} catch (e) {
setPowerStatus(`Network error: ${e.message}`, 'error');
rebootBtn.disabled = false;
poweroffBtn.disabled = false;
}
}
async function pollForReconnect() {
// Fetch a tiny static file; when it comes back 200 the box is up.
try {
const r = await fetch('/furtka.json', { cache: 'no-store' });
if (r.ok) {
setPowerStatus('Back up — reloading…');
setTimeout(() => location.reload(), 1500);
return;
}
} catch (e) { /* still down */ }
setTimeout(pollForReconnect, 3000);
}
rebootBtn.addEventListener('click', () =>
triggerPower(
'reboot',
"Really reboot? The box will be unreachable for ~30 seconds.",
'Rebooting…'
)
);
poweroffBtn.addEventListener('click', () =>
triggerPower(
'poweroff',
"Really shut down? You can only turn the box back on by pressing the physical power button.",
'Shutting down…'
)
);
</script>
</body>
</html>


@@ -198,7 +198,7 @@ h2 {
flex-wrap: wrap;
justify-content: flex-end;
}
button, .btn {
background: var(--accent);
border: none;
color: var(--bg);
@@ -209,16 +209,39 @@ button {
white-space: nowrap;
font-size: 0.9rem;
font-family: inherit;
/* Anchor rendered-as-button: strip underline + keep the button's
rectangular hit area. `display: inline-flex` so an <a class="btn">
lines up vertically with its <button> siblings in .buttons. */
text-decoration: none;
display: inline-flex;
align-items: center;
}
button.secondary, .btn.secondary {
background: var(--card);
color: var(--fg);
border: 1px solid var(--border);
}
button.danger { background: var(--danger); color: #fff; }
button:disabled { opacity: 0.5; cursor: wait; }
button:focus-visible, .btn:focus-visible { outline: none; box-shadow: var(--ring); }
.empty { color: var(--muted); font-style: italic; padding: 0.5rem 0; }
.catalog-row {
display: flex;
justify-content: space-between;
align-items: center;
flex-wrap: wrap;
gap: 0.75rem;
padding: 0.5rem 0 0.75rem;
}
.catalog-state {
margin: 0;
color: var(--muted);
font-size: 0.9rem;
}
.catalog-stage.pending {
color: var(--fg);
font-style: italic;
}
pre {
background: var(--card);
padding: 1rem;
@@ -287,7 +310,8 @@ details.log-details[open] > summary { color: var(--fg); }
}
.field input:focus { outline: 2px solid var(--accent); outline-offset: -1px; }
.field .req { color: var(--danger); margin-left: 0.25rem; }
.modal .error,
.login-wrap .error {
background: var(--warn);
color: var(--warn-fg);
padding: 0.5rem 0.75rem;
@@ -296,7 +320,25 @@
font-size: 0.9rem;
display: none;
}
.modal .error.show,
.login-wrap .error.show { display: block; }
.modal .dep-list {
margin: 0 0 1rem;
padding: 0.75rem 1rem 0.75rem 1.75rem;
background: var(--bg);
border: 1px solid var(--border);
border-radius: var(--r-sm);
font-size: 0.9rem;
line-height: 1.4;
}
.modal .dep-list li { margin: 0.15rem 0; }
/* Login + first-run setup page. Shares .wrap's max-width so the form
sits in the same column the rest of the app uses, just without the
Home/Apps/Settings nav. A bit of top padding so the H1 isn't glued
to the viewport edge. */
.login-wrap { padding-top: 3rem; }
.login-wrap .actions { margin-top: 0.5rem; }
.modal-actions {
display: flex;
justify-content: flex-end;
@@ -306,13 +348,37 @@ details.log-details[open] > summary { color: var(--fg); }
/* Row of buttons beneath a card used by the Furtka updates card on
/settings. Left-aligned, wraps on narrow screens. */
.update-actions,
.power-actions {
display: flex;
gap: 0.5rem;
flex-wrap: wrap;
margin-top: 1rem;
align-items: center;
}
/* Inline link rendered alongside a button (e.g. next to "Download CA"
on /settings). No button chrome; just accent colour + underline on
hover so the distinction between primary action and secondary
resource stays visually clear. */
.inline-link {
color: var(--accent);
text-decoration: none;
font-size: 0.9rem;
}
.inline-link:hover { text-decoration: underline; }
/* Checkbox + label row for the /settings HTTPS-force toggle. */
.https-toggle {
display: flex;
align-items: center;
gap: 0.55rem;
margin-top: 1rem;
font-size: 0.95rem;
cursor: pointer;
}
.https-toggle input { cursor: pointer; }
/* -- Shared primitives for later slices ------------------------ */
.chip {
display: inline-block;
@@ -342,7 +408,18 @@ details.log-details[open] > summary { color: var(--fg); }
font-size: 0.95rem;
}
.kv dt { color: var(--muted); }
.kv dd {
margin: 0;
color: var(--fg);
font-family: ui-monospace, SFMono-Regular, Menlo, monospace;
/* Grid items default to min-width: auto (= content width), so a long
unbreakable value like a SHA-256 fingerprint would push past the
card. min-width: 0 lets the 1fr track enforce the column width, and
overflow-wrap: anywhere gives the colon-separated hex string valid
break opportunities. */
min-width: 0;
overflow-wrap: anywhere;
}
.coming {
display: flex;

docs/smoke-vm.md Normal file

@@ -0,0 +1,106 @@
# Smoke VM on Proxmox Test Host
Every push to `main` builds a fresh ISO (`build-iso.yml`) and then boots
it in a throwaway VM on the Proxmox test host — currently
`192.168.178.165` — to confirm the live ISO boots and the webinstaller
responds on `:5000`. If the smoke step fails, the ISO artifact is still
uploaded and the VM is left running for post-mortem.
The heavy lifting lives in [`scripts/smoke-vm.sh`](../scripts/smoke-vm.sh);
the workflow just downloads the artifact and shells out.
## Where smoke VMs live
- Node: whatever the test host reports as its node name (auto-detected)
- VMID range: `9000-9099` (`PVE_TEST_VMID_MIN` / `PVE_TEST_VMID_MAX`)
- Name: `furtka-smoke-<12-char-sha>`
- Tags: `furtka`, `smoke`, `sha-<12-char-sha>`
- MAC: `BC:24:11:<first-6-hex-of-sha>` (Proxmox's OUI; lets the runner
find the VM by scanning the LAN — the live ISO has no guest agent)
- ISO on test host: `local:iso/furtka-<short-sha>.iso`
Five most recent VMs (and their ISOs) are kept; anything older is stopped
and purged (`destroy-unreferenced-disks=1`) on the next run. Tune via
`PVE_TEST_KEEP`.
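The naming scheme above can be sketched in a few lines. This is a hypothetical helper, not part of `scripts/smoke-vm.sh` (the authoritative logic lives there); the ISO's `<short-sha>` length is assumed here to be the same 12 characters as the VM name:

```python
def smoke_identity(sha: str) -> dict:
    """Derive the smoke-VM name, MAC and ISO volid from a commit sha,
    following the conventions listed above. Illustrative only; see
    scripts/smoke-vm.sh for the real derivation."""
    short = sha[:12]
    # BC:24:11 is Proxmox's OUI; the host part is the first 6 hex
    # chars of the sha split into three octets.
    mac = "BC:24:11:" + ":".join(sha[i:i + 2].upper() for i in (0, 2, 4))
    return {
        "name": f"furtka-smoke-{short}",
        "mac": mac,
        "iso": f"local:iso/furtka-{short}.iso",  # short-sha length assumed
    }
```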
## Poking a failed smoke VM
1. Find it in the Proxmox WebUI — look for `furtka-smoke-<sha>` in the
9000-range. The VM is still running.
2. Console: **Console** tab in the WebUI (SPICE or noVNC). The webinstaller
logs to `journalctl -u furtka-webinstaller.service` on the live ISO.
3. SSH: the live Arch ISO ships `sshd` enabled with no root password.
Normally SSH as a LAN-reachable user is not possible without creds —
use the WebUI console instead. (The **installed** system, post-wizard,
has the `server` user with the password the wizard set.)
4. Fetch the short-sha from the VM name → cross-reference against
`git log` to see exactly which commit built the failing ISO.
## Running a smoke test locally
Needs LAN access to the test Proxmox and an API token with VM perms.
```bash
PVE_TEST_HOST=192.168.178.165 \
PVE_TEST_TOKEN='user@pve!smoke=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx' \
./scripts/smoke-vm.sh iso/out/furtka-*.iso
```
The script exits 0 on success, non-zero if the VM never served
`http://<ip>:5000`. Pruning runs either way.
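The success condition ("the VM served `http://<ip>:5000`") can be probed the same way from Python. A minimal sketch (function name and signature are mine, not from the script) that treats any HTTP response at all as success:

```python
import urllib.error
import urllib.request

def webinstaller_up(ip: str, port: int = 5000, timeout: float = 5.0) -> bool:
    """True once http://<ip>:<port>/ answers at all; any HTTP status
    (even an error page) proves something is serving on the port."""
    try:
        urllib.request.urlopen(f"http://{ip}:{port}/", timeout=timeout)
        return True
    except urllib.error.HTTPError:
        return True   # server responded, just not with a 2xx
    except OSError:
        return False  # refused, timed out, or unreachable
```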
## Clearing the 9000-range by hand
If smoke tests wedge or you want a clean slate:
```bash
# List smoke VMs
curl -sSk -H "Authorization: PVEAPIToken=${PVE_TEST_TOKEN}" \
https://192.168.178.165:8006/api2/json/nodes/<node>/qemu \
| python3 -c 'import json,sys; [print(v["vmid"],v["name"]) for v in json.load(sys.stdin)["data"] if 9000<=int(v["vmid"])<=9099]'
# Destroy one
curl -sSk -X POST -H "Authorization: PVEAPIToken=${PVE_TEST_TOKEN}" \
https://192.168.178.165:8006/api2/json/nodes/<node>/qemu/<vmid>/status/stop
curl -sSk -X DELETE -H "Authorization: PVEAPIToken=${PVE_TEST_TOKEN}" \
"https://192.168.178.165:8006/api2/json/nodes/<node>/qemu/<vmid>?purge=1&destroy-unreferenced-disks=1"
```
Or just run `scripts/smoke-vm.sh` with `PVE_TEST_KEEP=0` and any ISO —
the prune step will sweep everything in the range except the one it
just created.
## Proxmox API token setup (one-time)
1. WebUI → **Datacenter → Permissions → API Tokens → Add**
2. User: `root@pam` (or a dedicated `smoke@pve` user — see below)
3. Token ID: `smoke`
4. Uncheck **Privilege Separation** for the quick path, or keep it
separated and grant explicit perms below
5. Save the displayed secret once — it's shown only here
Minimum perms on `/` (if privilege-separated):
`VM.Allocate`, `VM.Config.Disk`, `VM.Config.CPU`, `VM.Config.Memory`,
`VM.Config.Network`, `VM.Config.Options`, `VM.Config.HWType`,
`VM.Config.CDROM`, `VM.PowerMgmt`, `VM.Audit`, `Datastore.AllocateTemplate`
(for ISO upload/delete on the `local` content store).
Set the result as Forgejo secret `PVE_TEST_TOKEN` in the format:
```
user@realm!tokenid=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
```
…and `PVE_TEST_HOST` as `192.168.178.165`. That's all the workflow needs.
## Assumptions
- Runner has L2 reachability to `192.168.178.0/24` (MAC→IP discovery
uses `arp-scan` from the runner).
- Test host uses default storage names: `local` for ISOs, `local-lvm` for
disks. Override via `PVE_TEST_ISO_STORAGE` / `PVE_TEST_DISK_STORAGE`.
- Bridge `vmbr0` carries LAN DHCP. Override via `PVE_TEST_BRIDGE`.
If any of those don't match, set the corresponding env var in
`build-iso.yml` (via `env:` on the smoke step) or override on the CLI
when running locally.

furtka/_release_common.py Normal file

@@ -0,0 +1,115 @@
"""Shared primitives for release-tarball flows.
Both ``furtka.updater`` (core self-update) and ``furtka.catalog`` (apps
catalog sync) pull a tarball from a Forgejo Releases page, verify its
SHA256 against the ``.sha256`` sidecar, and extract it with a path-
traversal guard. The helpers here are the single implementation of
that dance.
Each error-raising helper accepts an ``error_cls`` kwarg so callers can
keep their domain-specific exception type (``UpdateError``,
``CatalogError``) at call sites; the helper itself defaults to a
neutral ``ReleaseError`` for use in tests or standalone scripts.
"""
from __future__ import annotations
import hashlib
import json
import shutil
import tarfile
import urllib.error
import urllib.request
from pathlib import Path
class ReleaseError(RuntimeError):
"""Neutral failure for release-tarball operations."""
def forgejo_api(host: str, repo: str, path: str, *, error_cls: type = ReleaseError) -> dict | list:
url = f"https://{host}/api/v1/repos/{repo}{path}"
req = urllib.request.Request(url, headers={"Accept": "application/json"})
try:
with urllib.request.urlopen(req, timeout=15) as resp:
return json.loads(resp.read())
except (urllib.error.URLError, json.JSONDecodeError) as e:
raise error_cls(f"forgejo api {url}: {e}") from e
def download(url: str, dest: Path, *, error_cls: type = ReleaseError) -> None:
dest.parent.mkdir(parents=True, exist_ok=True)
req = urllib.request.Request(url)
try:
with urllib.request.urlopen(req, timeout=60) as resp, dest.open("wb") as f:
shutil.copyfileobj(resp, f)
except urllib.error.URLError as e:
raise error_cls(f"download {url}: {e}") from e
def sha256_of(path: Path) -> str:
h = hashlib.sha256()
with path.open("rb") as f:
for chunk in iter(lambda: f.read(1024 * 1024), b""):
h.update(chunk)
return h.hexdigest()
def verify_tarball(tarball: Path, expected_sha: str, *, error_cls: type = ReleaseError) -> None:
actual = sha256_of(tarball)
if actual != expected_sha:
raise error_cls(f"sha256 mismatch: expected {expected_sha}, got {actual}")
def parse_sha256_sidecar(text: str, *, error_cls: type = ReleaseError) -> str:
"""Extract the hash from a standard `sha256sum` sidecar line."""
line = text.strip().split("\n", 1)[0].strip()
if not line:
raise error_cls("empty sha256 sidecar")
return line.split()[0]
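For reference, the sidecar these helpers consume is plain `sha256sum` output; the parsing rule (first whitespace-separated token of the first non-empty line) round-trips like this. A standalone restatement of the rule, not an import of the module:

```python
import hashlib

# A sha256sum-style sidecar line: "<hex digest>  <filename>".
sidecar = (
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08"
    "  demo.tar.gz\n"
)
# Same rule as parse_sha256_sidecar above.
digest = sidecar.strip().split("\n", 1)[0].strip().split()[0]
assert digest == hashlib.sha256(b"test").hexdigest()
```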
def extract_tarball(tarball: Path, dest: Path, *, error_cls: type = ReleaseError) -> str:
"""Extract the tarball and return the VERSION read from its root.
Refuses entries that could escape ``dest`` via absolute paths or ``..``
segments. On Python 3.12+ the stricter ``data`` filter is additionally
enabled to catch symlink-escape / device-node / setuid tricks that the
regex check can't see.
"""
dest.mkdir(parents=True, exist_ok=True)
with tarfile.open(tarball, "r:gz") as tf:
for member in tf.getmembers():
if member.name.startswith(("/", "..")) or ".." in Path(member.name).parts:
raise error_cls(f"refusing tarball entry {member.name!r}")
try:
tf.extractall(dest, filter="data")
except TypeError:
tf.extractall(dest)
version_file = dest / "VERSION"
if not version_file.is_file():
raise error_cls("tarball has no VERSION file at root")
return version_file.read_text().strip()
def version_tuple(v: str) -> tuple:
"""CalVer comparator: 26.1-alpha < 26.1-beta < 26.1-rc < 26.1 < 26.2-alpha.
Pre-release stages sort before the corresponding stable (no-suffix)
release. Unknown suffixes sort below everything except the malformed
fallback. Returns a tuple of (year, release, stage_rank, suffix).
"""
stage_rank = {"alpha": 0, "beta": 1, "rc": 2}
head, _, suffix = v.partition("-")
try:
year_str, release_str = head.split(".", 1)
year = int(year_str)
release = int(release_str)
except (ValueError, IndexError):
return (-1, -1, -1, v)
if not suffix:
return (year, release, 3, "")
for name, rank in stage_rank.items():
if suffix.startswith(name):
return (year, release, rank, suffix)
return (year, release, -1, suffix)
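The ordering the docstring claims can be checked directly. This snippet restates `version_tuple` (mirroring the definition above, minus the ``IndexError`` arm that never fires) so it runs standalone:

```python
def version_tuple(v: str) -> tuple:
    # Mirror of furtka._release_common.version_tuple above.
    stage_rank = {"alpha": 0, "beta": 1, "rc": 2}
    head, _, suffix = v.partition("-")
    try:
        year_str, release_str = head.split(".", 1)
        year, release = int(year_str), int(release_str)
    except ValueError:
        return (-1, -1, -1, v)
    if not suffix:
        return (year, release, 3, "")
    for name, rank in stage_rank.items():
        if suffix.startswith(name):
            return (year, release, rank, suffix)
    return (year, release, -1, suffix)

versions = ["26.2-alpha", "26.1", "26.1-rc", "26.1-beta", "26.1-alpha"]
assert sorted(versions, key=version_tuple) == [
    "26.1-alpha", "26.1-beta", "26.1-rc", "26.1", "26.2-alpha",
]
```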

File diff suppressed because it is too large

furtka/auth.py Normal file

@@ -0,0 +1,260 @@
"""Login-guard primitives for the Furtka UI.
One admin, one password. Passwords are PBKDF2-SHA256 hashed via
``furtka.passwd`` (stdlib-only hashlib.pbkdf2_hmac / hashlib.scrypt),
stored in /var/lib/furtka/users.json with mode 0600. Sessions live in
memory a systemctl restart logs everyone out again, which is fine
for an alpha single-user box. The ``LoginAttempts`` store in this
module rate-limits failed logins per (username, IP) and is also
in-memory; a restart clears a stuck lockout.
On upgrade from pre-auth Furtka the users.json file does not exist
yet; the api's GET /login detects this via ``setup_needed()`` and
renders a first-run form that POSTs to /login as if it were a setup
submit. Fresh installs get the file pre-populated by the webinstaller
so the setup step is skipped.
Hash format is compatible with werkzeug.security; 26.11 / 26.12 boxes
that happened to have werkzeug installed can carry their users.json
forward without re-setup; see ``furtka.passwd`` for the scrypt reader.
"""
from __future__ import annotations
import json
import math
import secrets
import threading
from dataclasses import dataclass
from datetime import UTC, datetime, timedelta
from furtka.passwd import hash_password as _hash_password
from furtka.passwd import verify_password as _verify_password
from furtka.paths import users_file
COOKIE_NAME = "furtka_session"
COOKIE_TTL_SECONDS = 7 * 24 * 3600 # one week
def hash_password(plain: str) -> str:
"""PBKDF2-SHA256 via stdlib. 600k iterations (OWASP 2023)."""
return _hash_password(plain)
def verify_password(plain: str, hashed: str) -> bool:
"""Constant-time compare. Accepts stdlib + legacy werkzeug formats."""
return _verify_password(plain, hashed)
def load_users() -> dict:
"""Return the users dict, or {} if the file is missing or empty.
Missing-file is the expected state on first boot and on upgrades from
pre-auth versions; callers treat an empty dict as "setup required".
"""
path = users_file()
if not path.exists():
return {}
try:
raw = path.read_text()
except OSError:
return {}
if not raw.strip():
return {}
try:
data = json.loads(raw)
except json.JSONDecodeError:
return {}
if not isinstance(data, dict):
return {}
return data
def save_users(users: dict) -> None:
"""Atomically write users.json with mode 0600.
Same pattern as installer.write_env: write to .tmp, chmod, rename,
so a crash between open() and close() can't leave a world-readable
partial file.
"""
path = users_file()
path.parent.mkdir(parents=True, exist_ok=True)
tmp = path.with_suffix(path.suffix + ".tmp")
tmp.write_text(json.dumps(users, indent=2) + "\n")
tmp.chmod(0o600)
tmp.replace(path)
def setup_needed() -> bool:
"""True when no admin is registered yet — initial setup is required."""
users = load_users()
return not users or "admin" not in users
def create_admin(username: str, password: str) -> None:
"""Overwrite users.json with a single admin account.
The webinstaller calls this post-install (with the step-1 password) so
the installed system is login-guarded from first boot. The /login
route calls it on first setup for upgrade-path boxes that don't
already have a users.json.
"""
users = {
"admin": {
"username": username,
"hash": hash_password(password),
"created_at": datetime.now(UTC).isoformat(timespec="seconds"),
}
}
save_users(users)
def authenticate(username: str, password: str) -> bool:
"""Return True iff the supplied credentials match the admin record."""
users = load_users()
admin = users.get("admin")
if not admin:
return False
if admin.get("username") != username:
return False
hashed = admin.get("hash")
if not isinstance(hashed, str) or not hashed:
return False
return verify_password(password, hashed)
@dataclass(frozen=True)
class Session:
token: str
username: str
expires_at: datetime
class SessionStore:
"""In-memory session table. Thread-safe: the stdlib HTTPServer that
api.py uses is single-threaded and handles one request at a time,
but we keep the lock so swapping to ThreadingHTTPServer later
doesn't require revisiting this.
"""
def __init__(self, ttl_seconds: int = COOKIE_TTL_SECONDS) -> None:
self._ttl = timedelta(seconds=ttl_seconds)
self._by_token: dict[str, Session] = {}
self._lock = threading.Lock()
def create(self, username: str) -> Session:
token = secrets.token_urlsafe(32)
session = Session(
token=token,
username=username,
expires_at=datetime.now(UTC) + self._ttl,
)
with self._lock:
self._by_token[token] = session
return session
def lookup(self, token: str | None) -> Session | None:
if not token:
return None
with self._lock:
session = self._by_token.get(token)
if session is None:
return None
if datetime.now(UTC) >= session.expires_at:
# Expired — drop it on the floor so repeat lookups stay fast.
self._by_token.pop(token, None)
return None
return session
def revoke(self, token: str | None) -> None:
if not token:
return
with self._lock:
self._by_token.pop(token, None)
def clear(self) -> None:
"""Test helper — wipe all sessions."""
with self._lock:
self._by_token.clear()
class LoginAttempts:
"""In-memory rate-limiter for failed logins, keyed by (username, ip).
Parallels SessionStore: thread-safe, uses ``datetime.now(UTC)`` so the
same ``_FakeDatetime`` test shim works, and lives only in memory so a
``systemctl restart furtka`` wipes a stuck lockout. Tuple keying means
a flood from one source IP can't lock the admin out from elsewhere
(different IP → different key); the trade-off is that an attacker
can keep probing forever by rotating IPs, but they still eat the
PBKDF2 cost per attempt.
Stored data is a dict[key → list[datetime]] of recent failure
timestamps. Every call prunes entries older than ``WINDOW_SECONDS``,
so memory per active key is bounded by ``MAX_FAILURES``.
"""
MAX_FAILURES = 10
WINDOW_SECONDS = 15 * 60
LOCKOUT_SECONDS = 15 * 60
def __init__(
self,
max_failures: int = MAX_FAILURES,
window_seconds: int = WINDOW_SECONDS,
lockout_seconds: int = LOCKOUT_SECONDS,
) -> None:
self._max = max_failures
self._window = timedelta(seconds=window_seconds)
self._lockout = timedelta(seconds=lockout_seconds)
self._fails: dict[tuple[str, str], list[datetime]] = {}
self._lock = threading.Lock()
def _prune_locked(self, key: tuple[str, str], now: datetime) -> list[datetime]:
"""Drop timestamps older than the window; caller holds self._lock."""
cutoff = now - self._window
kept = [ts for ts in self._fails.get(key, ()) if ts >= cutoff]
if kept:
self._fails[key] = kept
else:
self._fails.pop(key, None)
return kept
def register_failure(self, key: tuple[str, str]) -> None:
now = datetime.now(UTC)
with self._lock:
self._prune_locked(key, now)
self._fails.setdefault(key, []).append(now)
def is_locked(self, key: tuple[str, str]) -> bool:
return self.retry_after_seconds(key) > 0
def retry_after_seconds(self, key: tuple[str, str]) -> int:
"""Seconds remaining on an active lockout, or 0 if not locked."""
now = datetime.now(UTC)
with self._lock:
kept = self._prune_locked(key, now)
if len(kept) < self._max:
return 0
# Lockout runs from the oldest retained failure; once it
# falls off the window the key is effectively released.
unlock_at = kept[0] + self._lockout
remaining = (unlock_at - now).total_seconds()
if remaining <= 0:
return 0
return max(1, math.ceil(remaining))
def clear(self, key: tuple[str, str]) -> None:
with self._lock:
self._fails.pop(key, None)
def clear_all(self) -> None:
"""Test helper — wipe all failure state."""
with self._lock:
self._fails.clear()
# Module-level singleton used by the HTTP handler.
SESSIONS = SessionStore()
LOCKOUT = LoginAttempts()
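The sliding-window lockout math above can be sanity-checked in isolation. A minimal standalone sketch (a trimmed-down limiter with an explicit clock parameter instead of ``datetime.now(UTC)``; the real class is ``LoginAttempts`` in this module):

```python
import math
from datetime import datetime, timedelta, timezone

class MiniLimiter:
    """Trimmed-down sketch of the sliding-window + lockout logic."""

    def __init__(self, max_failures=3, window_s=60, lockout_s=60):
        self.max = max_failures
        self.window = timedelta(seconds=window_s)
        self.lockout = timedelta(seconds=lockout_s)
        self.fails: list[datetime] = []

    def _prune(self, now: datetime) -> None:
        # Drop failures that have aged out of the window.
        cutoff = now - self.window
        self.fails = [t for t in self.fails if t >= cutoff]

    def fail(self, now: datetime) -> None:
        self._prune(now)
        self.fails.append(now)

    def retry_after(self, now: datetime) -> int:
        self._prune(now)
        if len(self.fails) < self.max:
            return 0
        # Lockout runs from the oldest retained failure, as above.
        remaining = (self.fails[0] + self.lockout - now).total_seconds()
        return 0 if remaining <= 0 else max(1, math.ceil(remaining))

t0 = datetime(2026, 1, 1, tzinfo=timezone.utc)
lim = MiniLimiter()
for i in range(3):
    lim.fail(t0 + timedelta(seconds=i))
assert lim.retry_after(t0 + timedelta(seconds=3)) == 57   # 60s from the oldest failure
assert lim.retry_after(t0 + timedelta(seconds=61)) == 0   # oldest failure aged out
```

Once the oldest failure falls off the window the key is released, matching the comment in ``retry_after_seconds``.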

furtka/catalog.py (new file, +253)
@@ -0,0 +1,253 @@
"""Furtka apps catalog sync.
Mirrors the shape of ``furtka.updater`` but targets a separate Forgejo
repo (``daniel/furtka-apps`` by default) whose releases carry a single
``furtka-apps-<ver>.tar.gz`` with ``VERSION`` at the root and an
``apps/<name>/`` tree underneath. Pulling the catalog keeps the on-box
app ecosystem fresh without requiring a Furtka core release; core
ships a seed ``apps/`` under ``/opt/furtka/current/apps/`` that the
resolver falls back to when the catalog is empty or stale.
Flow of ``sync_catalog()``:
1. flock on ``/run/furtka/catalog.lock`` so two triggers (timer + manual
UI click) can't race.
2. ``check_catalog()`` asks Forgejo for the latest release and picks out
the tarball + sidecar URLs.
3. Download tarball + sidecar to ``/var/lib/furtka/catalog/_downloads/``.
4. Verify the sha256 sidecar against the tarball.
5. Extract into ``/var/lib/furtka/catalog/_staging/``.
6. Validate every ``apps/<name>/manifest.json`` via ``furtka.manifest.
load_manifest``. A broken catalog release is refused here, not half-
applied.
7. Atomic rename: existing live catalog → ``catalog.prev/``, staging
→ ``catalog/``, then rmtree the prev. Any failure before this step
leaves the live catalog untouched.
8. Write ``/var/lib/furtka/catalog-state.json`` for the UI.
Paths can be overridden via env vars so tests can redirect everything to
a tmp dir.
"""
from __future__ import annotations
import fcntl
import json
import os
import shutil
import time
from dataclasses import dataclass
from pathlib import Path
from furtka import _release_common as _rc
from furtka.manifest import ManifestError, load_manifest
from furtka.paths import catalog_dir
FORGEJO_HOST = os.environ.get("FURTKA_FORGEJO_HOST", "forgejo.sourcegate.online")
CATALOG_REPO = os.environ.get("FURTKA_CATALOG_REPO", "daniel/furtka-apps")
_CATALOG_STATE = Path(os.environ.get("FURTKA_CATALOG_STATE", "/var/lib/furtka/catalog-state.json"))
_LOCK_PATH = Path(os.environ.get("FURTKA_CATALOG_LOCK", "/run/furtka/catalog.lock"))
_STAGING_NAME = "_staging"
_DOWNLOADS_NAME = "_downloads"
_PREV_SUFFIX = ".prev"
_VERSION_FILE = "VERSION"
class CatalogError(RuntimeError):
"""Any failure in the catalog sync flow that should surface to the caller."""
@dataclass(frozen=True)
class CatalogCheck:
current: str | None
latest: str
update_available: bool
tarball_url: str | None
sha256_url: str | None
def state_path() -> Path:
return _CATALOG_STATE
def lock_path() -> Path:
return _LOCK_PATH
def read_current_catalog_version() -> str | None:
"""Return the string in <catalog_dir>/VERSION, or None if absent / unreadable."""
try:
value = (catalog_dir() / _VERSION_FILE).read_text().strip()
except (FileNotFoundError, NotADirectoryError, OSError):
return None
return value or None
def check_catalog() -> CatalogCheck:
"""Query Forgejo for the latest catalog release.
Uses ``/releases?limit=1`` (not ``/releases/latest``) for the same
reason the core updater does: Forgejo's ``latest`` endpoint skips
pre-releases and 404s when every tag carries a suffix.
"""
current = read_current_catalog_version()
releases = _rc.forgejo_api(
FORGEJO_HOST, CATALOG_REPO, "/releases?limit=1", error_cls=CatalogError
)
if not isinstance(releases, list) or not releases:
raise CatalogError("no catalog releases published yet")
release = releases[0]
latest = str(release.get("tag_name") or "").strip()
if not latest:
raise CatalogError("latest catalog release has empty tag_name")
tarball_url = None
sha256_url = None
for asset in release.get("assets") or []:
name = asset.get("name") or ""
url = asset.get("browser_download_url") or ""
if name.endswith(".tar.gz") and "furtka-apps-" in name:
tarball_url = url
elif name.endswith(".tar.gz.sha256"):
sha256_url = url
available = latest != current and (
current is None or _rc.version_tuple(latest) > _rc.version_tuple(current)
)
return CatalogCheck(
current=current,
latest=latest,
update_available=available,
tarball_url=tarball_url,
sha256_url=sha256_url,
)
def write_state(stage: str, **extra) -> None:
"""Atomic JSON state write — same shape as updater's update-state.json."""
state_path().parent.mkdir(parents=True, exist_ok=True)
tmp = state_path().with_suffix(".tmp")
payload = {"stage": stage, "updated_at": time.strftime("%Y-%m-%dT%H:%M:%S%z"), **extra}
tmp.write_text(json.dumps(payload, indent=2))
tmp.replace(state_path())
def read_state() -> dict:
try:
return json.loads(state_path().read_text())
except (FileNotFoundError, json.JSONDecodeError):
return {}
def acquire_lock():
path = lock_path()
path.parent.mkdir(parents=True, exist_ok=True)
fh = path.open("w")
try:
fcntl.flock(fh, fcntl.LOCK_EX | fcntl.LOCK_NB)
except BlockingIOError as e:
fh.close()
raise CatalogError("another catalog sync is already in progress") from e
return fh
def _validate_staging(staging: Path, expected_version: str) -> None:
"""Fail hard if the staging tree isn't a well-formed catalog release."""
version_file = staging / _VERSION_FILE
if not version_file.is_file():
raise CatalogError("catalog tarball has no VERSION file at root")
actual = version_file.read_text().strip()
if actual != expected_version:
raise CatalogError(
f"catalog tarball VERSION ({actual!r}) doesn't match expected ({expected_version!r})"
)
apps_root = staging / "apps"
if not apps_root.is_dir():
raise CatalogError("catalog tarball has no apps/ directory")
for entry in sorted(apps_root.iterdir()):
if not entry.is_dir():
continue
manifest_path = entry / "manifest.json"
if not manifest_path.exists():
raise CatalogError(f"catalog app {entry.name!r} has no manifest.json")
try:
load_manifest(manifest_path, expected_name=entry.name)
except ManifestError as e:
raise CatalogError(f"catalog app {entry.name!r}: invalid manifest: {e}") from e
def _atomic_swap(staging: Path) -> None:
"""Move staging → live catalog, keeping the previous tree as .prev until
the rename succeeds so we never leave a half-written catalog on disk."""
live = catalog_dir()
live.parent.mkdir(parents=True, exist_ok=True)
prev = live.with_name(live.name + _PREV_SUFFIX)
if prev.exists():
shutil.rmtree(prev)
if live.exists():
live.rename(prev)
try:
staging.rename(live)
except OSError as e:
if prev.exists():
# try to restore the previous tree; if that also fails the box
# has no catalog at all until the next sync — still better than
# a partially-extracted tree.
try:
prev.rename(live)
except OSError:
pass
raise CatalogError(f"atomic catalog swap failed: {e}") from e
if prev.exists():
shutil.rmtree(prev, ignore_errors=True)
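The prev-then-rename dance can be exercised against throwaway directories; a standalone sketch of the same sequence, without the CatalogError wrapping:

```python
import shutil
import tempfile
from pathlib import Path

def atomic_swap(staging: Path, live: Path) -> None:
    """Move staging into place, parking the old live tree as .prev first."""
    prev = live.with_name(live.name + ".prev")
    if prev.exists():
        shutil.rmtree(prev)
    if live.exists():
        live.rename(prev)
    try:
        staging.rename(live)
    except OSError:
        if prev.exists():
            prev.rename(live)   # best-effort restore of the old tree
        raise
    if prev.exists():
        shutil.rmtree(prev, ignore_errors=True)

root = Path(tempfile.mkdtemp())
(root / "staging").mkdir()
(root / "staging" / "VERSION").write_text("2\n")
(root / "catalog").mkdir()
(root / "catalog" / "VERSION").write_text("1\n")
atomic_swap(root / "staging", root / "catalog")
assert (root / "catalog" / "VERSION").read_text() == "2\n"
assert not (root / "catalog.prev").exists()
shutil.rmtree(root)
```

At no point does a reader of ``live`` see a half-written tree: it is either the old directory, absent for the duration of one rename, or the fully staged new one.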
def sync_catalog() -> CatalogCheck:
"""End-to-end sync. Acquires the lock, writes state at each stage, and
leaves the live catalog untouched on any failure before the rename step.
"""
with acquire_lock():
write_state("checking")
check = check_catalog()
if not check.update_available:
write_state("done", version=check.current or check.latest, note="already up to date")
return check
if not check.tarball_url or not check.sha256_url:
raise CatalogError("catalog release is missing tarball or sha256 asset")
# Downloads land in a sibling of the live catalog so half-finished
# artefacts never pollute the live tree, and stay under /var/lib/
# furtka/ so a sync interrupted by reboot can resume instead of
# starting over from /tmp (which clears).
dl_dir = catalog_dir().with_name(catalog_dir().name + _DOWNLOADS_NAME)
dl_dir.mkdir(parents=True, exist_ok=True)
tarball = dl_dir / f"furtka-apps-{check.latest}.tar.gz"
sha_file = dl_dir / f"furtka-apps-{check.latest}.tar.gz.sha256"
write_state("downloading", latest=check.latest)
_rc.download(check.tarball_url, tarball, error_cls=CatalogError)
_rc.download(check.sha256_url, sha_file, error_cls=CatalogError)
write_state("verifying", latest=check.latest)
expected = _rc.parse_sha256_sidecar(sha_file.read_text(), error_cls=CatalogError)
_rc.verify_tarball(tarball, expected, error_cls=CatalogError)
write_state("extracting", latest=check.latest)
staging = catalog_dir().with_name(catalog_dir().name + _STAGING_NAME)
if staging.exists():
shutil.rmtree(staging)
try:
_rc.extract_tarball(tarball, staging, error_cls=CatalogError)
_validate_staging(staging, check.latest)
except CatalogError:
shutil.rmtree(staging, ignore_errors=True)
raise
write_state("swapping", latest=check.latest)
try:
_atomic_swap(staging)
except CatalogError:
shutil.rmtree(staging, ignore_errors=True)
raise
write_state("done", version=check.latest, previous=check.current)
return check
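The sidecar check in ``sync_catalog`` (download, then verify before extract) lives in ``furtka._release_common``, which isn't part of this diff. A standalone sketch of what that step amounts to, assuming the usual ``sha256sum`` sidecar format (``<hex>  <filename>``):

```python
import hashlib

def parse_sidecar(text: str) -> str:
    # First whitespace-separated token is the hex digest.
    token = text.split()[0].lower()
    if len(token) != 64 or any(c not in "0123456789abcdef" for c in token):
        raise ValueError(f"not a sha256 digest: {token!r}")
    return token

def verify(data: bytes, sidecar: str) -> None:
    actual = hashlib.sha256(data).hexdigest()
    expected = parse_sidecar(sidecar)
    if actual != expected:
        raise ValueError(f"sha256 mismatch: got {actual}, expected {expected}")

blob = b"furtka-apps tarball bytes"
good = hashlib.sha256(blob).hexdigest() + "  furtka-apps-26.17-alpha.tar.gz\n"
verify(blob, good)          # matching digest: passes silently
try:
    verify(blob + b"x", good)
except ValueError:
    pass                    # tampered bytes are rejected before extraction
```

Because verification happens before extraction, a corrupted or truncated download never reaches the staging tree.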

@@ -1,8 +1,9 @@
import argparse
import json
import sys
from pathlib import Path
from furtka import deps, dockerops, installer, reconciler
from furtka.paths import apps_dir
from furtka.scanner import scan
@@ -21,9 +22,30 @@ def _cmd_app_list(args: argparse.Namespace) -> int:
"display_name": r.manifest.display_name, "display_name": r.manifest.display_name,
"version": r.manifest.version, "version": r.manifest.version,
"description": r.manifest.description, "description": r.manifest.description,
"description_long": r.manifest.description_long,
"volumes": list(r.manifest.volumes), "volumes": list(r.manifest.volumes),
"ports": list(r.manifest.ports), "ports": list(r.manifest.ports),
"icon": r.manifest.icon, "icon": r.manifest.icon,
"open_url": r.manifest.open_url,
"settings": [
{
"name": s.name,
"label": s.label,
"description": s.description,
"type": s.type,
"required": s.required,
"default": s.default,
}
for s in r.manifest.settings
],
"requires": [
{
"app": req.app,
"on_install": req.on_install,
"on_start": req.on_start,
}
for req in r.manifest.requires
],
}
if r.manifest
else None,
@@ -45,24 +67,71 @@ def _cmd_app_list(args: argparse.Namespace) -> int:
def _cmd_app_install(args: argparse.Namespace) -> int:
# If the user passed a path (or a path-ish thing), bypass dep resolution —
# local paths are dev/test workflows where the caller knows what they want.
# Catalog/bundled name installs go through plan_install() so transitive
# `requires` are pulled in.
src_path = Path(args.source)
is_path = src_path.is_dir() or "/" in args.source or args.source.startswith(".")
try:
if is_path:
src = installer.resolve_source(args.source)
target = installer.install_from(src)
print(f"installed {target.name} to {target}")
else:
try:
plan = deps.plan_install(args.source)
except deps.DependencyError as e:
print(f"error: {e}", file=sys.stderr)
return 2
if not plan.to_install:
# Target is already installed — re-run as a single-app install
# to refresh files (matches reinstall semantics).
target_path = installer.install_from(installer.resolve_source(args.source))
print(f"reinstalled {target_path.name} to {target_path}")
else:
targets = installer.install_plan(plan)
for t in targets:
print(f"installed {t.name} to {t}")
except installer.InstallError as e:
print(f"error: {e}", file=sys.stderr)
return 2
actions = reconciler.reconcile(apps_dir())
for a in actions:
print(f" {a.describe()}")
return 1 if reconciler.has_errors(actions) else 0
def _cmd_app_install_bg(args: argparse.Namespace) -> int:
"""Docker-facing phases of an install — called by the API via systemd-run.
Internal subcommand; normal CLI users want `app install` (synchronous).
This exists to separate the slow docker pull/up from the synchronous
validation the API does inline, so the UI can poll a state file.
"""
from furtka import install_runner
try:
install_runner.run_install(args.name)
except Exception as e:
# run_install already wrote state="error"; echo for journald.
print(f"install-bg failed: {e}", file=sys.stderr)
return 1
return 0
def _cmd_app_remove(args: argparse.Namespace) -> int:
target = apps_dir() / args.name
if not target.exists():
print(f"error: {args.name!r} is not installed", file=sys.stderr)
return 1
dependents = deps.dependents_of(args.name)
if dependents:
print(
f"error: {args.name!r} is required by: {', '.join(dependents)}. Remove those first.",
file=sys.stderr,
)
return 2
try:
dockerops.compose_down(target, args.name)
except dockerops.DockerError as e:
@@ -149,6 +218,60 @@ def _cmd_rollback(args: argparse.Namespace) -> int:
return 0
def _cmd_catalog_sync(args: argparse.Namespace) -> int:
from furtka import catalog
if args.check:
try:
check = catalog.check_catalog()
except catalog.CatalogError as e:
print(f"error: {e}", file=sys.stderr)
return 2
if args.json:
print(
json.dumps(
{
"current": check.current,
"latest": check.latest,
"update_available": check.update_available,
},
indent=2,
)
)
elif check.update_available:
print(f"Catalog update available: {check.current or '(none)'}{check.latest}")
else:
print(f"Catalog already up to date ({check.current or check.latest})")
return 0
try:
check = catalog.sync_catalog()
except catalog.CatalogError as e:
print(f"error: {e}", file=sys.stderr)
return 2
if not check.update_available:
print(f"Catalog already up to date ({check.current or check.latest})")
else:
print(f"Synced catalog {check.current or '(none)'}{check.latest}")
return 0
def _cmd_catalog_status(args: argparse.Namespace) -> int:
from furtka import catalog
current = catalog.read_current_catalog_version()
state = catalog.read_state()
if args.json:
print(json.dumps({"current": current, "state": state}, indent=2))
return 0
print(f"Catalog version: {current or '(none — run `furtka catalog sync`)'}")
if state:
print(f"Last sync stage: {state.get('stage', '?')} at {state.get('updated_at', '?')}")
else:
print("Last sync stage: (never)")
return 0
def build_parser() -> argparse.ArgumentParser:
p = argparse.ArgumentParser(prog="furtka", description="Furtka resource manager")
sub = p.add_subparsers(dest="command", required=True)
@@ -170,6 +293,15 @@ def build_parser() -> argparse.ArgumentParser:
)
app_install.set_defaults(func=_cmd_app_install)
# Internal — called by the HTTP API via systemd-run. Deliberately omitted
# from the help listing; regular CLI users want `app install` above.
app_install_bg = app_sub.add_parser(
"install-bg",
help=argparse.SUPPRESS,
)
app_install_bg.add_argument("name", help="Installed app folder name")
app_install_bg.set_defaults(func=_cmd_app_install_bg)
app_remove = app_sub.add_parser("remove", help="Stop and uninstall an app (keeps volumes)") app_remove = app_sub.add_parser("remove", help="Stop and uninstall an app (keeps volumes)")
app_remove.add_argument("name", help="App name (folder name under /var/lib/furtka/apps/)") app_remove.add_argument("name", help="App name (folder name under /var/lib/furtka/apps/)")
app_remove.set_defaults(func=_cmd_app_remove) app_remove.set_defaults(func=_cmd_app_remove)
@@ -212,6 +344,36 @@ def build_parser() -> argparse.ArgumentParser:
)
rollback.set_defaults(func=_cmd_rollback)
catalog = sub.add_parser("catalog", help="Manage the apps catalog (daniel/furtka-apps)")
catalog_sub = catalog.add_subparsers(dest="subcommand", required=True)
catalog_sync = catalog_sub.add_parser(
"sync",
help="Download and install the latest apps catalog from Forgejo",
)
catalog_sync.add_argument(
"--check",
action="store_true",
help="Only check whether a catalog update is available; don't apply",
)
catalog_sync.add_argument(
"--json",
action="store_true",
help="Emit machine-readable JSON (only honoured with --check)",
)
catalog_sync.set_defaults(func=_cmd_catalog_sync)
catalog_status = catalog_sub.add_parser(
"status",
help="Print the currently-installed catalog version and last-sync stage",
)
catalog_status.add_argument(
"--json",
action="store_true",
help="Emit machine-readable JSON",
)
catalog_status.set_defaults(func=_cmd_catalog_status)
return p

furtka/deps.py (new file, +237)
@@ -0,0 +1,237 @@
"""App-to-app dependency planning.
A manifest may declare ``requires: [{"app": "<name>", "on_install": ...,
"on_start": ...}]``. This module turns that graph into:
- ``plan_install(name)`` topo-sorted install order so providers come up
before consumers, with cycle detection. Read-only over the catalog +
installed tree; the installer is the one that mutates.
- ``dependents_of(name)`` installed apps that name ``<name>`` in their
``requires``. Used by the remove guard to block "rip out mosquitto"
while zigbee2mqtt is still installed.
- ``installed_topo_order(scan_results)`` re-order a list of installed
apps so reconcile's per-boot sweep visits providers before consumers
(so a consumer's ``on_start`` hook runs against an already-up provider).
- ``provider_exec_service(provider_dir, project)`` pick the compose
service to ``docker compose exec`` into when firing a hook. v1: first
service in the provider's compose config.
"""
from __future__ import annotations
from dataclasses import dataclass
from pathlib import Path
from furtka import dockerops, sources
from furtka.manifest import Manifest, ManifestError, load_manifest
from furtka.paths import apps_dir
from furtka.scanner import ScanResult, scan
class DependencyError(RuntimeError):
pass
@dataclass(frozen=True)
class DepPlan:
target: str
install_order: tuple[str, ...] # topo: providers first, target last
already_installed: frozenset[str]
to_install: tuple[str, ...] # install_order minus already_installed
def _load_any(name: str) -> Manifest | None:
"""Load `<name>`'s manifest — prefer installed, fall back to catalog/bundled.
Returns None if the app exists nowhere we can see. Caller decides how
loud to be about that: `plan_install` raises, `dependents_of` just
skips entries it can't parse so reconcile keeps working.
"""
installed = apps_dir() / name / "manifest.json"
if installed.is_file():
try:
return load_manifest(installed, expected_name=name)
except ManifestError:
return None
src = sources.resolve_app_name(name)
if src is None:
return None
try:
return load_manifest(src.path / "manifest.json")
except ManifestError:
return None
def _installed_names() -> frozenset[str]:
return frozenset(r.manifest.name for r in scan(apps_dir()) if r.ok)
def plan_install(name: str) -> DepPlan:
"""Build a topo-sorted install plan for `name`.
Walks the dependency graph via the catalog/bundled+installed manifests,
detects cycles, and returns the order plus which entries are already
installed (those get skipped at install time but stay in `install_order`
for sequencing the `on_install` hooks correctly).
"""
WHITE, GRAY, BLACK = 0, 1, 2
color: dict[str, int] = {}
order: list[str] = []
stack_chain: list[str] = []
# Iterative post-order DFS with a per-frame iterator over children.
# Cycle detection uses GRAY-on-GRAY (Tarjan-style) so a chain through
# several apps still surfaces the full path in the error message.
def visit(start: str) -> None:
if color.get(start, WHITE) == BLACK:
return
# Each frame: (name, manifest, iterator over sorted requires)
m = _load_any(start)
if m is None:
raise DependencyError(
f"required app {start!r} not found in installed apps, catalog, or bundled apps"
)
# Sort requires alphabetically for deterministic install order.
children = iter(sorted(r.app for r in m.requires))
stack: list[tuple[str, Manifest, "object"]] = [(start, m, children)] # noqa: UP037
color[start] = GRAY
stack_chain.append(start)
while stack:
cur_name, cur_m, it = stack[-1]
child = next(it, None)
if child is None:
# All children processed — emit and pop.
color[cur_name] = BLACK
order.append(cur_name)
stack.pop()
stack_chain.pop()
continue
c = color.get(child, WHITE)
if c == BLACK:
continue
if c == GRAY:
# Cycle — find the back-edge target in the chain and report.
idx = stack_chain.index(child)
cycle = " -> ".join(stack_chain[idx:] + [child])
raise DependencyError(f"circular dependency: {cycle}")
# WHITE — descend.
child_m = _load_any(child)
if child_m is None:
raise DependencyError(
f"required app {child!r} (needed by {cur_name!r}) "
"not found in installed apps, catalog, or bundled apps"
)
color[child] = GRAY
stack_chain.append(child)
stack.append((child, child_m, iter(sorted(r.app for r in child_m.requires))))
visit(name)
installed = _installed_names()
to_install = tuple(n for n in order if n not in installed)
return DepPlan(
target=name,
install_order=tuple(order),
already_installed=frozenset(n for n in order if n in installed),
to_install=to_install,
)
def dependents_of(name: str) -> tuple[str, ...]:
"""Names of installed apps that declare `<name>` in their `requires`.
Used by the remove guard. Result is sorted alphabetically so error
messages read in a stable order.
"""
out: list[str] = []
for r in scan(apps_dir()):
if not r.ok:
continue
if any(req.app == name for req in r.manifest.requires):
out.append(r.manifest.name)
out.sort()
return tuple(out)
def installed_topo_order(results: list[ScanResult]) -> list[ScanResult]:
"""Re-order installed apps so providers come before consumers.
Apps whose `requires` point at uninstalled providers (or that contain
cycles) are emitted at the tail in their original order; reconcile
already isolates per-app failure so we don't want to abort the whole
sweep on a misconfigured manifest. Ties within a tier stay alphabetical
(the scanner already returns alphabetical), matching the deterministic
boot order users rely on.
"""
ok = [r for r in results if r.ok]
bad = [r for r in results if not r.ok]
by_name = {r.manifest.name: r for r in ok}
# Kahn's algorithm against the installed subgraph only. Edges from
# consumer -> provider; we want providers first, so build the indegree
# over consumers ("how many of MY providers are still pending").
pending_providers: dict[str, set[str]] = {}
consumers_of: dict[str, list[str]] = {n: [] for n in by_name}
for r in ok:
deps = {req.app for req in r.manifest.requires if req.app in by_name}
pending_providers[r.manifest.name] = deps
for dep in deps:
consumers_of[dep].append(r.manifest.name)
# Seed with anything that has no installed providers, alphabetical.
ready = sorted(n for n, deps in pending_providers.items() if not deps)
ordered: list[str] = []
while ready:
# Pop the alphabetically-smallest so ties stay deterministic.
n = ready.pop(0)
ordered.append(n)
for consumer in consumers_of[n]:
pending_providers[consumer].discard(n)
if not pending_providers[consumer]:
# Insert in sorted position.
_insort(ready, consumer)
# Anything left has unresolved providers (missing or cyclic) — append
# in scanner order so reconcile still tries them and gets a clean
# per-app error.
leftover = [n for n in by_name if n not in set(ordered)]
leftover_set = set(leftover)
leftover_in_scan_order = [r.manifest.name for r in ok if r.manifest.name in leftover_set]
out = [by_name[n] for n in ordered]
out.extend(by_name[n] for n in leftover_in_scan_order)
out.extend(bad) # broken manifests already had their place in `results`; append last
return out
def _insort(seq: list[str], value: str) -> None:
"""Insert `value` into the sorted list `seq` (keeping it sorted)."""
lo, hi = 0, len(seq)
while lo < hi:
mid = (lo + hi) // 2
if seq[mid] < value:
lo = mid + 1
else:
hi = mid
seq.insert(lo, value)
def provider_exec_service(provider_dir: Path, project: str) -> str:
"""Pick the compose service name to `docker compose exec` into for a hook.
v1: first service in the provider's compose file. Works for the apps we
actually have (Mosquitto, Postgres, Redis all single-service). When a
multi-service provider (Authentik etc.) lands, the deferred follow-up is
to add an explicit `service` field on the Requirement entry.
Falls back to the project name if compose config can't be read — that's
a desperate guess but better than crashing, and the resulting exec error
will be surfaced cleanly as a DockerError to the caller.
"""
try:
cfg = dockerops.compose_image_tags(provider_dir, project)
except dockerops.DockerError:
return project
if not cfg:
return project
return next(iter(cfg.keys()))
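The ordering contract ``plan_install`` promises (providers first, cycles reported with their full chain) can be checked against a plain dict graph; a recursive sketch of the same post-order DFS, without the manifest loading:

```python
def topo(graph: dict[str, list[str]], start: str) -> list[str]:
    """Post-order DFS: dependencies (providers) emitted before consumers."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color: dict[str, int] = {}
    chain: list[str] = []
    order: list[str] = []

    def visit(n: str) -> None:
        c = color.get(n, WHITE)
        if c == BLACK:
            return
        if c == GRAY:
            # GRAY-on-GRAY back edge: report the whole cycle path.
            raise ValueError("circular dependency: " + " -> ".join(chain + [n]))
        color[n] = GRAY
        chain.append(n)
        for child in sorted(graph.get(n, [])):   # sorted for determinism
            visit(child)
        chain.pop()
        color[n] = BLACK
        order.append(n)

    visit(start)
    return order

# Provider lands before consumer, matching DepPlan.install_order.
assert topo({"zigbee2mqtt": ["mosquitto"], "mosquitto": []}, "zigbee2mqtt") == [
    "mosquitto",
    "zigbee2mqtt",
]
try:
    topo({"a": ["b"], "b": ["a"]}, "a")
except ValueError as e:
    assert "a -> b -> a" in str(e)
```

The module above uses an iterative stack for the same traversal; the recursive form here is only for compactness.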


@@ -60,6 +60,88 @@ def compose_pull(app_dir: Path, project: str) -> None:
_run([*_compose_args(app_dir, project), "pull"], cwd=app_dir)
def compose_exec(
app_dir: Path,
project: str,
service: str,
argv: list[str],
*,
env: dict[str, str] | None = None,
timeout: float | None = None,
) -> str:
"""`docker compose exec -T <service> <argv...>`. Returns captured stdout.
`-T` disables TTY allocation, which is needed when called from a non-interactive
parent (the install background job, the reconcile service). Without it,
docker exits with "the input device is not a TTY".
"""
cmd = [*_compose_args(app_dir, project), "exec", "-T"]
for k, v in (env or {}).items():
cmd.extend(["--env", f"{k}={v}"])
cmd.append(service)
cmd.extend(argv)
try:
proc = subprocess.run(
cmd,
cwd=app_dir,
check=False,
capture_output=True,
text=True,
timeout=timeout,
)
except subprocess.TimeoutExpired as e:
raise DockerError(f"compose exec {service}: timed out after {timeout}s") from e
if proc.returncode != 0:
msg = proc.stderr.strip() or proc.stdout.strip()
raise DockerError(f"compose exec {service} exited {proc.returncode}: {msg}")
return proc.stdout
def compose_exec_script(
app_dir: Path,
project: str,
service: str,
script_path: Path,
*,
env: dict[str, str] | None = None,
timeout: float | None = None,
) -> str:
"""Run a host-side script inside the compose container via `sh -s`.
The script's bytes are streamed on stdin, so it doesn't need to be
copied into the image. Used by the app-dependency feature to run a
provider's hook scripts (e.g. "create an MQTT user for the consumer")
when a consumer is being installed or every time it starts.
Returns the script's stdout as text (UTF-8, replace-on-error). Raises
DockerError on non-zero exit or timeout, mirroring `compose_exec`.
"""
body = Path(script_path).read_bytes()
cmd = [*_compose_args(app_dir, project), "exec", "-T"]
for k, v in (env or {}).items():
cmd.extend(["--env", f"{k}={v}"])
cmd.extend([service, "sh", "-s"])
try:
proc = subprocess.run(
cmd,
cwd=app_dir,
check=False,
input=body,
capture_output=True,
timeout=timeout,
)
except subprocess.TimeoutExpired as e:
raise DockerError(
f"compose exec {service}: hook {script_path.name} timed out after {timeout}s"
) from e
if proc.returncode != 0:
err = (proc.stderr or proc.stdout or b"").decode("utf-8", "replace").strip()
raise DockerError(
f"compose exec {service} hook {script_path.name} exited {proc.returncode}: {err}"
)
return proc.stdout.decode("utf-8", "replace")
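The merge of hook stdout into the consumer's ``.env`` (described in the 26.17-alpha release notes) is not part of this file; a hypothetical sketch of the KEY=VALUE half of that parsing, ignoring the ``FURTKA_JSON:`` sentinel:

```python
def parse_hook_env(stdout: str) -> dict[str, str]:
    """Collect KEY=VALUE lines from a hook's stdout; skip everything else."""
    out: dict[str, str] = {}
    for line in stdout.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue   # diagnostics, blanks, comments: not env material
        key, _, value = line.partition("=")
        key = key.strip()
        # Conservative key shape: letters, digits, underscores only.
        if key and key.replace("_", "").isalnum():
            out[key] = value.strip()
    return out

sample = "created mqtt user\nMQTT_USER=zigbee2mqtt\nMQTT_PASS=s3cret\n"
assert parse_hook_env(sample) == {"MQTT_USER": "zigbee2mqtt", "MQTT_PASS": "s3cret"}
```

A hook can therefore freely log progress lines to stdout; only lines that look like assignments reach the consumer's ``.env``.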
def compose_image_tags(app_dir: Path, project: str) -> dict[str, str]:
"""Return {service_name: image_tag} as declared in the compose file.

furtka/https.py (new file, +183)
@@ -0,0 +1,183 @@
"""Local-CA HTTPS helpers for the `tls internal` setup.
Caddy generates the local root CA lazily on first start and keeps it under
$XDG_DATA_HOME/caddy/pki/authorities/local/; our packaged caddy.service
sets `XDG_DATA_HOME=/var/lib`, so on the target that resolves to
/var/lib/caddy/pki/authorities/local/. The private key stays 0600 /
caddy-owned; we only ever read the public root.crt next to it.
HTTPS is **opt-in** since 26.15-alpha. Default Caddyfile has no `:443`
site block, so `tls internal` never triggers cert issuance. The
/settings toggle drops a snippet file into /etc/caddy/furtka-https.d/
that adds the hostname+tls-internal block (plus the redirect snippet
inside /etc/caddy/furtka.d/ for HTTP → HTTPS). Disabling the toggle
removes both snippets and reloads Caddy; the box falls back to HTTP-only.
Why opt-in: fresh-install boxes used to always serve a self-signed
cert on :443. Any browser that had ever trusted a previous Furtka
box's local CA rejected the new cert with an unbypassable
SEC_ERROR_BAD_SIGNATURE; Firefox in particular has no "Advanced →
Accept" for that case. Making HTTPS explicit means fresh installs
never hit that trap; users who want HTTPS download the rootCA.crt
first and then click the toggle.
This module exposes:
- status(): CA fingerprint + current toggle state
- set_force_https(enabled): write/remove BOTH snippets atomically,
reload Caddy, roll back on failure.
"""
import base64
import hashlib
import re
import subprocess
from pathlib import Path
CA_CERT_PATH = Path("/var/lib/caddy/pki/authorities/local/root.crt")
SNIPPET_DIR = Path("/etc/caddy/furtka.d")
REDIRECT_SNIPPET = SNIPPET_DIR / "redirect.caddyfile"
REDIRECT_CONTENT = "redir https://{host}{uri} permanent\n"
HTTPS_SNIPPET_DIR = Path("/etc/caddy/furtka-https.d")
HTTPS_SNIPPET = HTTPS_SNIPPET_DIR / "https.caddyfile"
HOSTNAME_FILE = Path("/etc/hostname")
_PEM_RE = re.compile(
r"-----BEGIN CERTIFICATE-----\s*(.+?)\s*-----END CERTIFICATE-----",
re.DOTALL,
)
class HttpsError(Exception):
"""Recoverable failure from set_force_https — the caller should 5xx."""
def _read_hostname(hostname_file: Path = HOSTNAME_FILE) -> str:
"""Return the box's hostname, stripped. Falls back to 'furtka' so a
missing /etc/hostname doesn't produce an empty site block that Caddy
would reject at parse time."""
try:
value = hostname_file.read_text().strip()
except (FileNotFoundError, PermissionError, OSError):
return "furtka"
return value or "furtka"
def _https_snippet_content(hostname: str) -> str:
"""Caddy site block the HTTPS toggle installs at opt-in.
Serves <hostname>.local and <hostname> on :443 with Caddy's
`tls internal` (local CA auto-issuance), and imports the shared
furtka_routes snippet so the :443 listener exposes the same
routes as :80. Must be written at top-level (not inside another
site block); that's why the Caddyfile imports furtka-https.d at
top-level rather than inside :80.
"""
return f"{hostname}.local, {hostname} {{\n\ttls internal\n\timport furtka_routes\n}}\n"
def _ca_fingerprint(ca_path: Path) -> str | None:
try:
pem = ca_path.read_text()
except (FileNotFoundError, PermissionError, IsADirectoryError):
return None
match = _PEM_RE.search(pem)
if not match:
return None
try:
der = base64.b64decode("".join(match.group(1).split()))
except (ValueError, base64.binascii.Error):
return None
return hashlib.sha256(der).hexdigest().upper()
def _format_fingerprint(hex_upper: str) -> str:
return ":".join(hex_upper[i : i + 2] for i in range(0, len(hex_upper), 2))
def status(
ca_path: Path = CA_CERT_PATH,
https_snippet: Path = HTTPS_SNIPPET,
) -> dict:
"""force_https is True iff the HTTPS listener snippet exists.
Before 26.15-alpha this checked the redirect snippet instead, but
the redirect alone without a :443 listener wouldn't actually serve
HTTPS, so the listener snippet is the authoritative "HTTPS is on"
signal.
"""
fp = _ca_fingerprint(ca_path)
return {
"ca_available": fp is not None,
"fingerprint_sha256": _format_fingerprint(fp) if fp else None,
"force_https": https_snippet.is_file(),
"ca_download_url": "/rootCA.crt",
}
def _default_reload() -> None:
subprocess.run(
["systemctl", "reload", "caddy"],
check=True,
capture_output=True,
text=True,
)
def set_force_https(
enabled: bool,
snippet_dir: Path = SNIPPET_DIR,
snippet: Path = REDIRECT_SNIPPET,
https_snippet_dir: Path = HTTPS_SNIPPET_DIR,
https_snippet: Path = HTTPS_SNIPPET,
hostname_file: Path = HOSTNAME_FILE,
reload_caddy=_default_reload,
) -> bool:
"""Toggle HTTPS by writing or removing two snippets atomically:
1. The top-level HTTPS hostname+tls-internal block (enables :443
listener + Caddy's `tls internal` cert issuance)
2. The :80-scoped redirect snippet (forces HTTP→HTTPS)
Reload Caddy after the snippet swap. On reload failure both
snippets are reverted to their pre-call state so a bad config
can't leave Caddy wedged.
"""
snippet_dir.mkdir(mode=0o755, parents=True, exist_ok=True)
https_snippet_dir.mkdir(mode=0o755, parents=True, exist_ok=True)
had_redirect = snippet.is_file()
previous_redirect = snippet.read_text() if had_redirect else None
had_https = https_snippet.is_file()
previous_https = https_snippet.read_text() if had_https else None
if enabled:
snippet.write_text(REDIRECT_CONTENT)
https_snippet.write_text(_https_snippet_content(_read_hostname(hostname_file)))
else:
if had_redirect:
snippet.unlink()
if had_https:
https_snippet.unlink()
try:
reload_caddy()
except subprocess.CalledProcessError as e:
_revert(snippet, previous_redirect)
_revert(https_snippet, previous_https)
msg = (e.stderr or e.stdout or "").strip() or f"exit {e.returncode}"
raise HttpsError(f"caddy reload failed: {msg}") from e
except FileNotFoundError as e:
_revert(snippet, previous_redirect)
_revert(https_snippet, previous_https)
raise HttpsError(f"systemctl not available: {e}") from e
return enabled
def _revert(snippet: Path, previous: str | None) -> None:
if previous is None:
try:
snippet.unlink()
except FileNotFoundError:
pass
else:
snippet.write_text(previous)

299
furtka/install_runner.py Normal file
View file

@ -0,0 +1,299 @@
"""Background job for app installs — progress-visible via state file.
The slow part of installing an app is `docker compose pull` on a large
image (Jellyfin ~500 MB); without progress feedback, the UI modal sits
dead on "Installing…" for 30+ seconds and the user wonders if it hung.
This module mirrors the exact same shape as ``furtka.catalog`` and
``furtka.updater`` so the UI can poll an install just like it polls a
catalog sync or a self-update. The split is:
- ``furtka.api._do_install`` runs synchronously: resolve source(s), copy
the app folder(s), write .env. Those are fast, and their failures
deserve an immediate 4xx so the install modal can surface them in-line.
- After that the API writes an initial state file (stage
"pulling_image") and dispatches ``systemd-run --unit=furtka-install-
<name>`` to run ``furtka app install-bg <name>`` in the background.
That CLI subcommand is what calls ``run_install()`` here; it does the
docker-facing phases and writes state transitions as it goes.
If the API also wrote a plan file at ``/var/lib/furtka/install-plan.json``
(because the target had transitive dependencies), the runner iterates
through every app in ``to_install``: pulling, creating volumes, firing
``on_install`` hooks against already-up providers, then ``compose up``
so providers are ready before consumers' hooks try to talk to them. The
state file's ``target`` field carries the original user-chosen app name
so the UI can show "Installing mosquitto (required by zigbee2mqtt)".
State file schema (``/var/lib/furtka/install-state.json``):
{
"stage": "pulling_image" | "creating_volumes"
| "running_hooks" | "starting_container" | "done" | "error",
"updated_at": "2026-04-21T17:30:45+0200",
"app": "mosquitto", // app currently being processed
"target": "zigbee2mqtt", // original target (== app for single-app installs)
"version": "1.0.0", // added at "done"
"error": "details..." // added at "error"
}
Lock: ``/run/furtka/install.lock`` (tmpfs, reboot-safe). Global, not
per-app: two parallel installs are not a v1 use-case and the lock
keeps the state-file representation simple (one in-flight install at
a time).
"""
from __future__ import annotations
import fcntl
import json
import os
import re
import time
from pathlib import Path
from furtka import deps, dockerops, installer
from furtka.manifest import SETTING_NAME_RE, Manifest, load_manifest
from furtka.paths import apps_dir
_INSTALL_STATE = Path(os.environ.get("FURTKA_INSTALL_STATE", "/var/lib/furtka/install-state.json"))
_INSTALL_PLAN = Path(os.environ.get("FURTKA_INSTALL_PLAN", "/var/lib/furtka/install-plan.json"))
_LOCK_PATH = Path(os.environ.get("FURTKA_INSTALL_LOCK", "/run/furtka/install.lock"))
_ON_INSTALL_TIMEOUT_SECONDS = 60.0
_FURTKA_JSON_RE = re.compile(r"^FURTKA_JSON:\s*(.*)$")
class InstallRunnerError(RuntimeError):
"""Any failure in the background install flow that should surface to the caller."""
def state_path() -> Path:
return _INSTALL_STATE
def plan_path() -> Path:
return _INSTALL_PLAN
def lock_path() -> Path:
return _LOCK_PATH
def write_state(stage: str, **extra) -> None:
"""Atomic JSON state write — same shape as catalog/update-state."""
state_path().parent.mkdir(parents=True, exist_ok=True)
tmp = state_path().with_suffix(".tmp")
payload = {"stage": stage, "updated_at": time.strftime("%Y-%m-%dT%H:%M:%S%z"), **extra}
tmp.write_text(json.dumps(payload, indent=2))
tmp.replace(state_path())
def read_state() -> dict:
try:
return json.loads(state_path().read_text())
except (FileNotFoundError, json.JSONDecodeError):
return {}
def _read_plan(target: str) -> dict:
"""Load the install plan if the API wrote one; otherwise the single-app fallback.
The plan file is consumed once (removed after read) so a stale plan from
a previous install can't accidentally steer this run. If the file is
missing or unparseable, we synthesize a one-element plan from the target arg
so the old single-app behaviour still works (CLI invocations, smoke tests).
"""
try:
raw = plan_path().read_text()
except (FileNotFoundError, OSError):
return {"target": target, "to_install": [target]}
try:
data = json.loads(raw)
except json.JSONDecodeError:
return {"target": target, "to_install": [target]}
finally:
try:
plan_path().unlink()
except OSError:
pass
if not isinstance(data, dict):
return {"target": target, "to_install": [target]}
return {
"target": data.get("target", target),
"to_install": data.get("to_install") or [target],
}
def acquire_lock():
path = lock_path()
path.parent.mkdir(parents=True, exist_ok=True)
fh = path.open("w")
try:
fcntl.flock(fh, fcntl.LOCK_EX | fcntl.LOCK_NB)
except BlockingIOError as e:
fh.close()
raise InstallRunnerError("another install is already in progress") from e
return fh
def _parse_hook_output(text: str) -> dict[str, str]:
"""Extract KEY=VALUE pairs from hook stdout plus any FURTKA_JSON: {...} line.
KEY=VALUE keys must match the manifest's SETTING_NAME regex (UPPER_SNAKE_CASE)
so a misbehaving hook can't inject e.g. `PATH=` and clobber the container's
runtime environment.
The FURTKA_JSON sentinel is opt-in for hooks that need to return structured
data later (e.g. a list of generated certificates). Only string values are
accepted; non-string values raise so a hook can't smuggle non-env content
into the .env file. JSON values overlay KEY=VALUE values.
"""
out: dict[str, str] = {}
# First pass: skip FURTKA_JSON lines for KEY=VALUE extraction.
kv_lines = [line for line in text.splitlines() if not _FURTKA_JSON_RE.match(line.strip())]
kv = installer.parse_env_text("\n".join(kv_lines))
for key, value in kv.items():
if not SETTING_NAME_RE.match(key):
raise InstallRunnerError(
f"hook returned invalid env-var name {key!r} "
"(must be UPPER_SNAKE_CASE, e.g. MQTT_USER)"
)
out[key] = value
# Second pass: pick up FURTKA_JSON sentinels.
for raw in text.splitlines():
m = _FURTKA_JSON_RE.match(raw.strip())
if not m:
continue
try:
payload = json.loads(m.group(1))
except json.JSONDecodeError as e:
raise InstallRunnerError(f"hook returned invalid FURTKA_JSON payload: {e}") from e
if not isinstance(payload, dict):
raise InstallRunnerError(
"hook FURTKA_JSON payload must be an object of KEY=VALUE strings"
)
for key, value in payload.items():
if not isinstance(key, str) or not SETTING_NAME_RE.match(key):
raise InstallRunnerError(f"hook FURTKA_JSON key {key!r} must be UPPER_SNAKE_CASE")
if not isinstance(value, str):
raise InstallRunnerError(f"hook FURTKA_JSON value for {key!r} must be a string")
out[key] = value
return out
def _merge_hook_output_into_env(env_path: Path, hook_stdout: str) -> None:
"""Overlay hook-returned keys onto an app's `.env`. Hook wins on conflict.
Re-runs the placeholder-secret check so a hook returning literal "changeme"
is refused the same way an unedited .env.example is. Re-chmods to 0600 so
even an interrupted run leaves the file root-only.
"""
overlay = _parse_hook_output(hook_stdout)
if not overlay:
return
existing = installer.read_env_values(env_path)
merged: dict[str, str] = {}
merged.update(existing)
merged.update(overlay) # hook wins
installer.write_env(env_path, merged)
env_path.chmod(0o600)
bad = installer._placeholder_keys(env_path)
if bad:
raise InstallRunnerError(
f"{env_path}: hook returned placeholder values for {', '.join(bad)}"
)
def _fire_install_hooks(consumer: Manifest, consumer_dir: Path) -> None:
"""Run each `on_install` hook against the corresponding provider's container.
The provider must already be running (its `compose up` ran earlier in the
same plan). Hook stdout is parsed via `_parse_hook_output` and merged into
the consumer's `.env` before its own `compose up` fires.
"""
for req in consumer.requires:
if not req.on_install:
continue
provider_dir = apps_dir() / req.app
provider_manifest_path = provider_dir / "manifest.json"
if not provider_manifest_path.is_file():
raise InstallRunnerError(f"{consumer.name}: required app {req.app!r} is not installed")
# Validate provider manifest loads (matches the contract the rest of
# the system relies on — never trust a provider folder with a busted
# manifest).
load_manifest(provider_manifest_path, expected_name=req.app)
hook_abs = provider_dir / req.on_install
if not hook_abs.is_file():
raise InstallRunnerError(
f"{consumer.name}: on_install hook {req.on_install!r} missing in provider {req.app}"
)
service = deps.provider_exec_service(provider_dir, req.app)
stdout = dockerops.compose_exec_script(
provider_dir,
req.app,
service,
hook_abs,
env={
"FURTKA_CONSUMER_APP": consumer.name,
"FURTKA_CONSUMER_VERSION": consumer.version,
},
timeout=_ON_INSTALL_TIMEOUT_SECONDS,
)
_merge_hook_output_into_env(consumer_dir / ".env", stdout)
def run_install(name: str) -> None:
"""Docker-facing phases of the install: pull → volumes → hooks → compose up.
Called by the ``furtka app install-bg <name>`` CLI subcommand from the
systemd-run spawned by the API. Assumes the API has already run
``installer.install_from()`` for every app in the plan, so each app folder,
`.env`, and manifest are on disk under ``apps_dir() / <name>``.
If ``/var/lib/furtka/install-plan.json`` exists, every app in its
``to_install`` is processed in order (providers before consumers). Each
provider is fully up before the consumer's ``on_install`` hooks fire,
so a hook can ``mosquitto_passwd``/`createuser` against a live broker/DB.
Every phase transition is written to the state file for the UI to poll.
On exception the state flips to ``"error"`` with the message, then the
exception is re-raised so the CLI exits non-zero and journald gets a
traceback. Per-app failure aborts the rest of the plan: a half-installed
consumer whose provider is fine is recoverable by retrying.
"""
with acquire_lock():
plan = _read_plan(name)
target = plan["target"]
to_install = list(plan["to_install"])
try:
last_manifest = None
for app_name in to_install:
target_dir = apps_dir() / app_name
m = load_manifest(target_dir / "manifest.json", expected_name=app_name)
last_manifest = m
write_state("pulling_image", app=app_name, target=target)
dockerops.compose_pull(target_dir, app_name)
write_state("creating_volumes", app=app_name, target=target)
for short in m.volumes:
dockerops.ensure_volume(m.volume_name(short))
if m.requires:
write_state("running_hooks", app=app_name, target=target)
_fire_install_hooks(m, target_dir)
write_state("starting_container", app=app_name, target=target)
dockerops.compose_up(target_dir, app_name)
# Terminal state carries the original target's name + version so
# the UI's poll loop ("is install of <target> done yet?") still
# works unchanged.
if last_manifest is not None and last_manifest.name == target:
write_state("done", app=target, target=target, version=last_manifest.version)
else:
# Fallback: target wasn't last in the plan (shouldn't happen for
# a well-formed plan, but don't crash on the terminal write).
write_state("done", app=target, target=target)
except Exception as e:
current = read_state().get("app", target)
write_state("error", app=current, target=target, error=str(e))
raise
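The hook-stdout contract above (plain KEY=VALUE lines, with an optional `FURTKA_JSON:` sentinel whose object overlays them) can be sketched without the rest of the module. A simplified stand-in for `_parse_hook_output` — the real code delegates unquoting to `installer.parse_env_text` and raises on invalid names rather than skipping them:

```python
import json
import re

SETTING_NAME_RE = re.compile(r"^[A-Z_][A-Z0-9_]*$")
_FURTKA_JSON_RE = re.compile(r"^FURTKA_JSON:\s*(.*)$")

def parse_hook_output(text: str) -> dict[str, str]:
    out: dict[str, str] = {}
    # First pass: KEY=VALUE lines, skipping comments and the JSON sentinel.
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or _FURTKA_JSON_RE.match(line):
            continue
        if "=" not in line:
            continue
        key, _, value = line.partition("=")
        if SETTING_NAME_RE.match(key):  # real code raises instead of skipping
            out[key] = value
    # Second pass: FURTKA_JSON objects overlay the KEY=VALUE values.
    for line in text.splitlines():
        m = _FURTKA_JSON_RE.match(line.strip())
        if m:
            payload = json.loads(m.group(1))
            out.update({k: v for k, v in payload.items() if isinstance(v, str)})
    return out

stdout = 'MQTT_USER=zigbee2mqtt\nMQTT_PASS=s3cret\nFURTKA_JSON: {"MQTT_PASS": "overlaid"}'
print(parse_hook_output(stdout))  # → {'MQTT_USER': 'zigbee2mqtt', 'MQTT_PASS': 'overlaid'}
```

The overlay order matters: a hook that emits both forms gets its JSON value, matching the "JSON values overlay KEY=VALUE values" rule documented above.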

View file

@ -1,8 +1,9 @@
import shutil
from pathlib import Path
from furtka import sources
from furtka.manifest import Manifest, ManifestError, load_manifest
from furtka.paths import apps_dir
# Values that an app's .env.example may use as obvious "fill me in" markers.
# If any of these reach the live .env, install refuses — otherwise we'd ship
@ -10,6 +11,25 @@ from furtka.paths import apps_dir, bundled_apps_dir
# default that ends up screenshotted on Hacker News.
PLACEHOLDER_SECRETS: frozenset[str] = frozenset({"changeme"})
# System paths that must never be accepted as a user-supplied `path`-type
# setting. The user is root on their own box, so this is about preventing
# accidental footguns (typing `/etc` when they meant `/mnt/etc`), not
# defending against an attacker. Matches exact paths and their subtrees
# after `Path.resolve()` — so `/mnt/../etc` also lands here.
DENIED_PATH_PREFIXES: tuple[str, ...] = (
"/etc",
"/root",
"/boot",
"/proc",
"/sys",
"/dev",
"/bin",
"/sbin",
"/usr/bin",
"/usr/sbin",
"/var/lib/furtka",
)
class InstallError(RuntimeError):
pass
@ -30,6 +50,53 @@ def _placeholder_keys(env_path: Path) -> list[str]:
return bad
def _is_denied_system_path(resolved: str) -> bool:
if resolved == "/":
return True
for bad in DENIED_PATH_PREFIXES:
if resolved == bad or resolved.startswith(bad + "/"):
return True
return False
def _path_setting_errors(m: Manifest, env_path: Path) -> list[str]:
"""Validate the filesystem paths named by `path`-type settings.
Returns one human-readable message per offending setting. Empty values
on non-required settings are allowed the required-field check in the
caller already refuses blanks on required fields before write.
"""
if not env_path.exists():
return []
values = _read_env(env_path)
errors: list[str] = []
for s in m.settings:
if s.type != "path":
continue
value = values.get(s.name, "")
if not value:
continue
p = Path(value)
if not p.is_absolute():
errors.append(f"{s.name}={value!r} must be an absolute path (start with /)")
continue
try:
resolved = p.resolve(strict=False)
except (OSError, RuntimeError) as e:
errors.append(f"{s.name}={value!r} cannot be resolved: {e}")
continue
if _is_denied_system_path(str(resolved)):
errors.append(f"{s.name}={value!r} resolves into a system path and is not allowed")
continue
if not resolved.exists():
errors.append(f"{s.name}={value!r} does not exist on this box")
continue
if not resolved.is_dir():
errors.append(f"{s.name}={value!r} is not a directory")
continue
return errors
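The deny-list check above leans on `Path.resolve()` to collapse `..` and symlinks before comparison. A self-contained sketch of that normalization step (POSIX paths assumed; `/mnt/media` is an illustrative value):

```python
from pathlib import Path

DENIED_PATH_PREFIXES = (
    "/etc", "/root", "/boot", "/proc", "/sys", "/dev",
    "/bin", "/sbin", "/usr/bin", "/usr/sbin", "/var/lib/furtka",
)

def is_denied_system_path(value: str) -> bool:
    # strict=False: don't require the path to exist, just normalize it.
    resolved = str(Path(value).resolve(strict=False))
    if resolved == "/":
        return True
    return any(
        resolved == bad or resolved.startswith(bad + "/")
        for bad in DENIED_PATH_PREFIXES
    )

print(is_denied_system_path("/mnt/../etc"))  # resolve() collapses this to /etc
print(is_denied_system_path("/mnt/media"))
```

This is why the docstring's `/mnt/../etc` example lands in the deny list: the comparison runs against the resolved path, not the raw user input.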
def _format_env_value(v: str) -> str:
# Quote values that contain whitespace, quotes, or shell metacharacters so
# docker-compose's env substitution reads them back intact. Simple values
@ -58,17 +125,18 @@ def resolve_source(source: str) -> Path:
"""Resolve a `furtka app install <source>` arg to a real source folder. """Resolve a `furtka app install <source>` arg to a real source folder.
If `source` looks like a path (or exists on disk), use it. Otherwise treat If `source` looks like a path (or exists on disk), use it. Otherwise treat
it as a bundled app name and look up under /opt/furtka/apps/<name>. it as an app name and look it up via `furtka.sources.resolve_app_name`
which checks the synced catalog first and falls back to the bundled seed.
""" """
p = Path(source) p = Path(source)
if p.is_dir(): if p.is_dir():
return p return p
if "/" in source or source.startswith("."): if "/" in source or source.startswith("."):
raise InstallError(f"{source!r} is not a directory") raise InstallError(f"{source!r} is not a directory")
bundled = bundled_apps_dir() / source resolved = sources.resolve_app_name(source)
if bundled.is_dir(): if resolved is None:
return bundled raise InstallError(f"{source!r} not found as a path, catalog app, or bundled app")
raise InstallError(f"{source!r} not found as a path or bundled app") return resolved.path
def install_from(src: Path, settings: dict[str, str] | None = None) -> Path:
@ -158,13 +226,22 @@ def install_from(src: Path, settings: dict[str, str] | None = None) -> Path:
f"file and re-run `furtka app install {m.name}`." f"file and re-run `furtka app install {m.name}`."
) )
path_errors = _path_setting_errors(m, env)
if path_errors:
raise InstallError(f"{m.name}: {'; '.join(path_errors)}")
return target
def parse_env_text(text: str) -> dict[str, str]:
"""Parse KEY=VALUE lines from a string into a dict. Unquotes quoted values.
Reusable by anything that needs the same lenient .env parsing logic
without reading a file, e.g. hook script stdout merged into an app's
.env during install (see install_runner._fire_install_hooks).
"""
out: dict[str, str] = {}
for raw in text.splitlines():
line = raw.strip()
if not line or line.startswith("#") or "=" not in line:
continue
@ -178,6 +255,11 @@ def _read_env(env_path: Path) -> dict[str, str]:
return out
def _read_env(env_path: Path) -> dict[str, str]:
"""Parse a simple KEY=VALUE .env into a dict. Unquotes quoted values."""
return parse_env_text(env_path.read_text())
def read_env_values(env_path: Path) -> dict[str, str]:
"""Public wrapper — returns {} if the file doesn't exist."""
if not env_path.exists():
@ -229,9 +311,34 @@ def update_env(name: str, settings: dict[str, str]) -> Path:
bad = _placeholder_keys(env)
if bad:
raise InstallError(f"{m.name}: {env} still has placeholder values for {', '.join(bad)}.")
path_errors = _path_setting_errors(m, env)
if path_errors:
raise InstallError(f"{m.name}: {'; '.join(path_errors)}")
return target
def install_plan(plan, settings_target: dict[str, str] | None = None) -> list[Path]:
"""Run the synchronous install phase for every app in `plan.to_install`.
Each name is resolved via `resolve_source()` and copied via `install_from`
in plan order, so providers land before consumers. Only the target app
receives user-supplied settings; transitive providers install from their
catalog/bundled `.env.example` and rely on the placeholder-secret check
to refuse if anyone shipped a "changeme" default.
No rollback on partial failure. Re-running install is the recovery path;
stopping providers a user may already rely on for other apps is more
destructive than a partial state. Returns the list of target folders in
install order.
"""
targets: list[Path] = []
for name in plan.to_install:
src = resolve_source(name)
settings = settings_target if name == plan.target else None
targets.append(install_from(src, settings=settings))
return targets
def remove(name: str) -> Path:
"""Delete /var/lib/furtka/apps/<name>/. Volumes are NOT touched.

View file

@ -13,8 +13,9 @@ REQUIRED_FIELDS = (
"icon", "icon",
) )
VALID_SETTING_TYPES = frozenset({"text", "password", "number"}) VALID_SETTING_TYPES = frozenset({"text", "password", "number", "path"})
SETTING_NAME_RE = re.compile(r"^[A-Z_][A-Z0-9_]*$") SETTING_NAME_RE = re.compile(r"^[A-Z_][A-Z0-9_]*$")
APP_NAME_RE = re.compile(r"^[a-z][a-z0-9_-]*$")
class ManifestError(Exception):
@ -31,6 +32,18 @@ class Setting:
default: str | None default: str | None
@dataclass(frozen=True)
class Requirement:
app: str # name of the required app — must resolve in installed/catalog/bundled
# Hook paths are relative to the PROVIDER's app folder (not the consumer's).
# Resolved at hook-fire time, not manifest-load time — the provider may not
# be installed yet when this manifest is parsed.
# on_install: script run via `docker compose exec` on the provider during install.
on_install: str | None
# on_start: script run on every boot before the consumer starts (must be idempotent).
on_start: str | None
@dataclass(frozen=True)
class Manifest:
name: str
@ -42,6 +55,13 @@ class Manifest:
icon: str
description_long: str = ""
settings: tuple[Setting, ...] = field(default_factory=tuple)
# Optional "Open" link for the landing page + installed-app row.
# `{host}` is substituted with the current browser hostname at render
# time so the URL follows whatever the user typed to reach Furtka —
# furtka.local, a raw IP, a future reverse-proxy hostname. Apps with
# no frontend (CLI-only, background workers) leave this empty.
open_url: str = ""
requires: tuple[Requirement, ...] = field(default_factory=tuple)
def volume_name(self, short: str) -> str:
# Namespace volume names so two apps can each declare e.g. "data"
@ -92,6 +112,49 @@ def _parse_settings(raw: object, manifest_path: Path) -> tuple[Setting, ...]:
return tuple(out)
def _validate_hook_path(value: object, manifest_path: Path, where: str) -> str | None:
if value is None:
return None
if not isinstance(value, str) or not value:
raise ManifestError(f"{manifest_path}: {where} must be a non-empty string if set")
if value.startswith("/"):
raise ManifestError(f"{manifest_path}: {where} must be relative (no leading /)")
parts = value.replace("\\", "/").split("/")
if any(p == ".." for p in parts):
raise ManifestError(f"{manifest_path}: {where} must not contain '..'")
return value
def _parse_requires(raw: object, manifest_path: Path, self_name: str) -> tuple[Requirement, ...]:
if raw is None:
return ()
if not isinstance(raw, list):
raise ManifestError(f"{manifest_path}: requires must be a list")
out: list[Requirement] = []
seen: set[str] = set()
for i, item in enumerate(raw):
if not isinstance(item, dict):
raise ManifestError(f"{manifest_path}: requires[{i}] must be an object")
app = item.get("app")
if not isinstance(app, str) or not app or not APP_NAME_RE.match(app):
raise ManifestError(
f"{manifest_path}: requires[{i}].app must be a non-empty lowercase app name"
)
if app == self_name:
raise ManifestError(f"{manifest_path}: requires[{i}].app {app!r} is a self-reference")
if app in seen:
raise ManifestError(f"{manifest_path}: requires has duplicate app {app!r}")
seen.add(app)
on_install = _validate_hook_path(
item.get("on_install"), manifest_path, f"requires[{app}].on_install"
)
on_start = _validate_hook_path(
item.get("on_start"), manifest_path, f"requires[{app}].on_start"
)
out.append(Requirement(app=app, on_install=on_install, on_start=on_start))
return tuple(out)
def load_manifest(path: Path, expected_name: str | None = None) -> Manifest:
"""Parse and validate a manifest.json.
@ -126,6 +189,11 @@ def load_manifest(path: Path, expected_name: str | None = None) -> Manifest:
raise ManifestError(f"{path}: ports must be a list of integers") raise ManifestError(f"{path}: ports must be a list of integers")
settings = _parse_settings(raw.get("settings"), path) settings = _parse_settings(raw.get("settings"), path)
requires = _parse_requires(raw.get("requires"), path, name)
open_url_raw = raw.get("open_url", "")
if not isinstance(open_url_raw, str):
raise ManifestError(f"{path}: open_url must be a string if set")
return Manifest(
name=name,
@ -137,4 +205,6 @@ def load_manifest(path: Path, expected_name: str | None = None) -> Manifest:
icon=str(raw["icon"]), icon=str(raw["icon"]),
description_long=str(raw.get("description_long", "")), description_long=str(raw.get("description_long", "")),
settings=settings, settings=settings,
open_url=open_url_raw,
requires=requires,
)

95
furtka/passwd.py Normal file
View file

@ -0,0 +1,95 @@
"""Stdlib-only password hashing, compatible with werkzeug's hash format.
Why this exists: 26.11-alpha introduced auth via ``werkzeug.security``,
but the target system doesn't have ``werkzeug`` installed (Core runs as
system Python with only the stdlib; pyproject.toml's ``flask>=3.0``
dep is never pip-installed on the box). Fresh installs from a 26.11 /
26.12 ISO crashed on import; upgrades from pre-auth versions were
double-broken by that plus a too-strict updater health check.
Fix: replace werkzeug with stdlib equivalents using the same hash
**format** so existing ``users.json`` files created by 26.11 / 26.12 on
the rare boxes that happened to have werkzeug installed (Medion, .196
after manual pacman) still verify.
Format: ``<method>$<salt>$<hex digest>``
- ``pbkdf2:<hash>:<iterations>``: what we generate by default here
- ``scrypt:<N>:<r>:<p>``: what werkzeug's default produces
Both are implemented via ``hashlib`` which has been stdlib since 3.6.
"""
from __future__ import annotations
import hashlib
import hmac
import secrets
_PBKDF2_HASH = "sha256"
_PBKDF2_ITERATIONS = 600_000
_SALT_LEN = 16
def hash_password(password: str) -> str:
"""Return a ``pbkdf2:sha256:<iter>$<salt>$<hex>`` hash of *password*.
PBKDF2-SHA256 over UTF-8. 600k iterations, the same as werkzeug's
default in the 3.x series and roughly OWASP 2023's recommendation.
"""
if not isinstance(password, str):
raise TypeError("password must be str")
salt = secrets.token_urlsafe(_SALT_LEN)[:_SALT_LEN]
dk = hashlib.pbkdf2_hmac(
_PBKDF2_HASH, password.encode("utf-8"), salt.encode("utf-8"), _PBKDF2_ITERATIONS
)
return f"pbkdf2:{_PBKDF2_HASH}:{_PBKDF2_ITERATIONS}${salt}${dk.hex()}"
def verify_password(password: str, hashed: str) -> bool:
"""Constant-time verify *password* against a stored *hashed* value.
Accepts both our own pbkdf2 hashes and legacy werkzeug scrypt
hashes in ``scrypt:N:r:p$salt$hex`` form so users.json files
written by 26.11 / 26.12 keep working after upgrade.
"""
if not isinstance(password, str) or not isinstance(hashed, str):
return False
try:
method, salt, expected = hashed.split("$", 2)
except ValueError:
return False
parts = method.split(":")
if not parts:
return False
algo = parts[0]
pw_bytes = password.encode("utf-8")
salt_bytes = salt.encode("utf-8")
try:
if algo == "pbkdf2":
if len(parts) < 3:
return False
inner_hash = parts[1]
iterations = int(parts[2])
dk = hashlib.pbkdf2_hmac(inner_hash, pw_bytes, salt_bytes, iterations)
elif algo == "scrypt":
# werkzeug: scrypt:N:r:p, dklen=64, maxmem=132 MiB. Without
# the explicit maxmem we'd hit OpenSSL's default memory cap
# and throw ValueError on N >= 32768.
if len(parts) < 4:
return False
n = int(parts[1])
r = int(parts[2])
p = int(parts[3])
dk = hashlib.scrypt(
pw_bytes,
salt=salt_bytes,
n=n,
r=r,
p=p,
dklen=64,
maxmem=132 * 1024 * 1024,
)
else:
return False
except (ValueError, TypeError, OverflowError):
return False
return hmac.compare_digest(dk.hex(), expected)
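A round-trip through the pbkdf2 branch of the format shows how the iteration count travels inside the hash string itself, so verification never needs out-of-band parameters. A condensed sketch of the same scheme (error handling and the scrypt branch omitted):

```python
import hashlib
import hmac
import secrets

def hash_password(password: str) -> str:
    """Produce pbkdf2:sha256:<iterations>$<salt>$<hex> like furtka.passwd."""
    salt = secrets.token_urlsafe(16)[:16]
    dk = hashlib.pbkdf2_hmac("sha256", password.encode(), salt.encode(), 600_000)
    return f"pbkdf2:sha256:600000${salt}${dk.hex()}"

def verify_password(password: str, hashed: str) -> bool:
    method, salt, expected = hashed.split("$", 2)
    _, inner_hash, iterations = method.split(":")
    # Re-derive with the parameters embedded in the stored hash.
    dk = hashlib.pbkdf2_hmac(inner_hash, password.encode(), salt.encode(), int(iterations))
    return hmac.compare_digest(dk.hex(), expected)

h = hash_password("hunter2")
print(verify_password("hunter2", h), verify_password("wrong", h))  # → True False
```

Because the salt is fresh per call, two hashes of the same password differ, yet both verify — the property that makes the `users.json` format upgrade-safe.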

View file

@ -7,6 +7,19 @@ DEFAULT_APPS_DIR = Path("/var/lib/furtka/apps")
# symlink. A flat /opt/furtka/apps path would break the Phase-2 self-update
# flow (symlink swap wouldn't move the bundled-app tree along with the code).
DEFAULT_BUNDLED_APPS_DIR = Path("/opt/furtka/current/apps")
# Catalog apps come from `furtka catalog sync` pulling the daniel/furtka-apps
# release tarball. Lives under /var/lib/furtka/ so it survives core self-
# updates — the resolver (furtka.sources) prefers it over the bundled seed.
DEFAULT_CATALOG_DIR = Path("/var/lib/furtka/catalog")
# Users / auth state. One JSON file keyed by role — today only "admin" exists.
# Lives under /var/lib/furtka/ so self-updates don't stomp it. Mode 0600 is
# enforced by furtka.auth.save_users (same atomic-write pattern as the app
# .env files).
DEFAULT_USERS_FILE = Path("/var/lib/furtka/users.json")
# Static-web asset dir served by the Python handler for / and
# /settings* so those pages pick up the auth-guard. Caddy also serves
# /style.css and other assets directly from here for the login page.
DEFAULT_STATIC_WWW = Path("/opt/furtka/current/assets/www")
def apps_dir() -> Path:
@@ -15,3 +28,19 @@ def apps_dir() -> Path:
def bundled_apps_dir() -> Path:
    return Path(os.environ.get("FURTKA_BUNDLED_APPS_DIR", DEFAULT_BUNDLED_APPS_DIR))


def catalog_dir() -> Path:
    return Path(os.environ.get("FURTKA_CATALOG_DIR", DEFAULT_CATALOG_DIR))


def catalog_apps_dir() -> Path:
    return catalog_dir() / "apps"


def users_file() -> Path:
    return Path(os.environ.get("FURTKA_USERS_FILE", DEFAULT_USERS_FILE))


def static_www_dir() -> Path:
    return Path(os.environ.get("FURTKA_STATIC_WWW", DEFAULT_STATIC_WWW))
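Every default above follows the same env-override pattern, which is what lets tests point the code at a throwaway tree instead of the live `/var/lib/furtka/`. A tiny sketch (variable name and default copied from above):

```python
import os
from pathlib import Path

DEFAULT_USERS_FILE = Path("/var/lib/furtka/users.json")

def users_file() -> Path:
    # Env var wins when set; otherwise the baked-in system path.
    return Path(os.environ.get("FURTKA_USERS_FILE", DEFAULT_USERS_FILE))

os.environ.pop("FURTKA_USERS_FILE", None)
print(users_file())                                   # /var/lib/furtka/users.json
os.environ["FURTKA_USERS_FILE"] = "/tmp/test-users.json"
print(users_file())                                   # /tmp/test-users.json
```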

View file

@@ -1,13 +1,16 @@
from dataclasses import dataclass
from pathlib import Path

-from furtka import dockerops
+from furtka import deps, dockerops
+from furtka.manifest import ManifestError, load_manifest
from furtka.scanner import scan

+_ON_START_TIMEOUT_SECONDS = 30.0


@dataclass(frozen=True)
class Action:
-    kind: str  # "ensure_volume" | "compose_up" | "skip"
+    kind: str  # "ensure_volume" | "compose_up" | "hook" | "skip" | "error"
    target: str
    detail: str = ""
@@ -20,13 +23,20 @@ class Action:
def reconcile(apps_root: Path, dry_run: bool = False) -> list[Action]:
    """Walk the apps tree and bring docker into the desired state.

+    Apps are visited in dependency order (providers before consumers) so a
+    consumer's `on_start` hook runs against an already-up provider. Within a
+    tier, order stays alphabetical for deterministic boot logs. Apps with
+    unresolvable `requires` (missing provider, broken manifest cycle) are
+    visited last; reconcile's per-app isolation still kicks in if they fail.
+
    Failures during one app's reconcile (Docker errors, missing binary, …) are
    captured as Action(kind='error', …) and do NOT abort the whole sweep; the
    other apps still get reconciled. Callers inspect the returned actions to
    decide overall success.
    """
    actions: list[Action] = []
-    for result in scan(apps_root):
+    results = scan(apps_root)
+    for result in deps.installed_topo_order(results):
        if not result.ok:
            actions.append(Action("skip", result.path.name, result.error or ""))
            continue
@@ -37,6 +47,31 @@ def reconcile(apps_root: Path, dry_run: bool = False) -> list[Action]:
            actions.append(Action("ensure_volume", full))
            if not dry_run:
                dockerops.ensure_volume(full)
        hook_failed = False
        for req in m.requires:
            if not req.on_start:
                continue
            hook_label = f"{m.name}:{req.app}:on_start"
            actions.append(Action("hook", hook_label, req.on_start))
            if dry_run:
                continue
            try:
                _fire_on_start_hook(m, req, apps_root)
            except (
                dockerops.DockerError,
                FileNotFoundError,
                OSError,
                ManifestError,
            ) as e:
                actions.append(Action("error", m.name, f"on_start({req.app}): {e}"))
                hook_failed = True
                break
        if hook_failed:
            # Skip compose_up: a consumer whose provider's contract didn't
            # get re-established (e.g. missing MQTT user) starting up
            # blindly is worse than not starting it. The provider stays up
            # and other apps in the sweep keep going.
            continue
        actions.append(Action("compose_up", m.name))
        if not dry_run:
            dockerops.compose_up(result.path, m.name)
@@ -48,5 +83,36 @@ def reconcile(apps_root: Path, dry_run: bool = False) -> list[Action]:
    return actions
def _fire_on_start_hook(consumer, req, apps_root: Path) -> None:
    """Run a single `on_start` hook against the provider's running container.

    Reconciler-local helper kept narrow on purpose so reconcile's main loop
    stays scannable. Errors propagate; the caller decorates with the per-app
    Action("error", ...) and skips compose_up for this consumer.
    """
    provider_dir = apps_root / req.app
    provider_manifest_path = provider_dir / "manifest.json"
    if not provider_manifest_path.is_file():
        raise FileNotFoundError(f"required app {req.app!r} is not installed")
    # Validate provider manifest loads (otherwise scanner would have skipped
    # it and we'd still try to exec — fail loud here instead).
    load_manifest(provider_manifest_path, expected_name=req.app)
    hook_abs = provider_dir / req.on_start
    if not hook_abs.is_file():
        raise FileNotFoundError(f"on_start hook {req.on_start!r} missing in provider {req.app}")
    service = deps.provider_exec_service(provider_dir, req.app)
    dockerops.compose_exec_script(
        provider_dir,
        req.app,
        service,
        hook_abs,
        env={
            "FURTKA_CONSUMER_APP": consumer.name,
            "FURTKA_CONSUMER_VERSION": consumer.version,
        },
        timeout=_ON_START_TIMEOUT_SECONDS,
    )
def has_errors(actions: list[Action]) -> bool:
    return any(a.kind == "error" for a in actions)
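The ordering contract in the docstring (providers first, alphabetical within a tier, cycles visited last) maps onto a Kahn-style topological sort. `deps.installed_topo_order` itself isn't shown in this diff, so this is only an illustrative sketch of the described behaviour, not its actual code:

```python
def topo_order(requires: dict[str, list[str]]) -> list[str]:
    # requires maps app name -> list of provider app names.
    pending = {name: {d for d in deps if d in requires}
               for name, deps in requires.items()}
    ordered: list[str] = []
    while pending:
        # Every app whose providers are all scheduled is "ready";
        # sort for deterministic boot logs within a tier.
        ready = sorted(n for n, deps in pending.items() if not deps)
        if not ready:
            # Only cycles remain: visit them last, still deterministically.
            ordered.extend(sorted(pending))
            break
        for n in ready:
            ordered.append(n)
            del pending[n]
        for deps in pending.values():
            deps.difference_update(ready)
    return ordered

print(topo_order({"zigbee2mqtt": ["mosquitto"], "mosquitto": [], "ui": ["zigbee2mqtt"]}))
# ['mosquitto', 'zigbee2mqtt', 'ui']
```

The app names in the example are hypothetical; the point is that a consumer (`zigbee2mqtt`) always lands after its provider (`mosquitto`) in the sweep.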

furtka/sources.py Normal file
View file

@@ -0,0 +1,75 @@
"""Single lookup layer for "where does app <name> live right now?".

Three origins an app folder can come from:

- ``catalog``: the daily-synced ``/var/lib/furtka/catalog/apps/`` tree
  that ``furtka.catalog.sync_catalog`` maintains.
- ``bundled``: the seed ``/opt/furtka/current/apps/`` tree shipped
  inside the core release tarball. Used for first-boot before any
  catalog sync has run, and as the fallback when the catalog is stale,
  missing, or doesn't know about this app.
- ``local``: an explicit directory path passed to ``furtka app install
  /path/to/src``; bypasses this module entirely.

Catalog wins on collision. The precedence is deliberate: when the user
pressed "Sync apps catalog" they want what they synced, not whatever the
core tarball happened to carry.
"""

from __future__ import annotations

from dataclasses import dataclass
from pathlib import Path

from furtka.paths import bundled_apps_dir, catalog_apps_dir


@dataclass(frozen=True)
class AppSource:
    path: Path
    origin: str  # "catalog" | "bundled" | "local"


def resolve_app_name(name: str) -> AppSource | None:
    """Return the source folder for a bundled/catalog app name.

    Checks catalog first, then bundled seed. Presence is tested by
    ``manifest.json`` existing; an empty folder or a stray ``.env``
    won't register. Returns ``None`` if the name isn't known anywhere.
    """
    cat = catalog_apps_dir() / name
    if (cat / "manifest.json").is_file():
        return AppSource(cat, "catalog")
    bundled = bundled_apps_dir() / name
    if (bundled / "manifest.json").is_file():
        return AppSource(bundled, "bundled")
    return None


def list_available() -> list[AppSource]:
    """Catalog plus bundled, catalog wins on name collision.

    Each entry is a folder containing a manifest.json. Ordering is
    alphabetical by folder name, which matches how the scanner sorts so
    the UI list stays stable across sync/reboot.
    """
    seen: dict[str, AppSource] = {}
    cat_root = catalog_apps_dir()
    if cat_root.is_dir():
        for entry in sorted(cat_root.iterdir()):
            if not entry.is_dir():
                continue
            if not (entry / "manifest.json").is_file():
                continue
            seen[entry.name] = AppSource(entry, "catalog")
    bundled_root = bundled_apps_dir()
    if bundled_root.is_dir():
        for entry in sorted(bundled_root.iterdir()):
            if not entry.is_dir():
                continue
            if entry.name in seen:
                continue
            if not (entry / "manifest.json").is_file():
                continue
            seen[entry.name] = AppSource(entry, "bundled")
    return [seen[name] for name in sorted(seen)]
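The catalog-beats-bundled precedence is easy to exercise against throwaway trees. A sketch of the same probe logic, with the two roots passed explicitly instead of read from `furtka.paths` (function name and layout are illustrative):

```python
import tempfile
from pathlib import Path

def resolve(name: str, catalog_root: Path, bundled_root: Path):
    # Same probe as resolve_app_name: a folder only counts if it
    # holds a manifest.json; catalog is consulted first.
    for root, origin in ((catalog_root, "catalog"), (bundled_root, "bundled")):
        if (root / name / "manifest.json").is_file():
            return root / name, origin
    return None

tmp = Path(tempfile.mkdtemp())
cat, bun = tmp / "catalog", tmp / "bundled"
for root in (cat, bun):
    (root / "demo").mkdir(parents=True)
    (root / "demo" / "manifest.json").write_text("{}")
(bun / "seed-only").mkdir(parents=True)
(bun / "seed-only" / "manifest.json").write_text("{}")

print(resolve("demo", cat, bun)[1])       # catalog  (collision: catalog wins)
print(resolve("seed-only", cat, bun)[1])  # bundled
print(resolve("ghost", cat, bun))         # None
```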

View file

@@ -29,24 +29,32 @@ the updater at a tmpdir.
from __future__ import annotations

import fcntl
-import hashlib
import json
import os
import shutil
import subprocess
-import tarfile
import time
import urllib.error
import urllib.request
from dataclasses import dataclass
from pathlib import Path

+from furtka import _release_common as _rc
+
FORGEJO_HOST = os.environ.get("FURTKA_FORGEJO_HOST", "forgejo.sourcegate.online")
FORGEJO_REPO = os.environ.get("FURTKA_FORGEJO_REPO", "daniel/furtka")
_FURTKA_ROOT = Path(os.environ.get("FURTKA_ROOT", "/opt/furtka"))
_STATE_DIR = Path(os.environ.get("FURTKA_STATE_DIR", "/var/lib/furtka"))
_CADDYFILE_LIVE = Path(os.environ.get("FURTKA_CADDYFILE_PATH", "/etc/caddy/Caddyfile"))
+_CADDY_SNIPPET_DIR = Path(
+    os.environ.get("FURTKA_CADDY_SNIPPET_DIR", str(_CADDYFILE_LIVE.parent / "furtka.d"))
+)
+_CADDY_HTTPS_SNIPPET_DIR = Path(
+    os.environ.get("FURTKA_CADDY_HTTPS_SNIPPET_DIR", str(_CADDYFILE_LIVE.parent / "furtka-https.d"))
+)
_SYSTEMD_DIR = Path(os.environ.get("FURTKA_SYSTEMD_DIR", "/etc/systemd/system"))
+_HOSTNAME_FILE = Path(os.environ.get("FURTKA_HOSTNAME_FILE", "/etc/hostname"))
+_CADDYFILE_HOSTNAME_MARKER = "__FURTKA_HOSTNAME__"


class UpdateError(RuntimeError):
@@ -90,46 +98,30 @@ def read_current_version():
    return "dev"


-def _forgejo_api(path: str) -> dict:
-    url = f"https://{FORGEJO_HOST}/api/v1/repos/{FORGEJO_REPO}{path}"
-    req = urllib.request.Request(url, headers={"Accept": "application/json"})
-    try:
-        with urllib.request.urlopen(req, timeout=15) as resp:
-            return json.loads(resp.read())
-    except (urllib.error.URLError, json.JSONDecodeError) as e:
-        raise UpdateError(f"forgejo api {url}: {e}") from e
+def _forgejo_api(path: str) -> dict | list:
+    return _rc.forgejo_api(FORGEJO_HOST, FORGEJO_REPO, path, error_cls=UpdateError)
-def _version_tuple(v: str) -> tuple:
-    """Compare CalVer tags like 26.1-alpha < 26.1-beta < 26.1 < 26.2-alpha.
-
-    The "stable" release (no suffix) sorts after its own pre-releases. Uses a
-    tuple of (year, release, stage-rank, stage-tag). Stage rank: alpha=0,
-    beta=1, rc=2, stable=3, unknown=-1.
-    """
-    stage_rank = {"alpha": 0, "beta": 1, "rc": 2}
-    head, _, suffix = v.partition("-")
-    try:
-        year_str, release_str = head.split(".", 1)
-        year = int(year_str)
-        release = int(release_str)
-    except (ValueError, IndexError):
-        return (-1, -1, -1, v)
-    if not suffix:
-        return (year, release, 3, "")
-    for name, rank in stage_rank.items():
-        if suffix.startswith(name):
-            return (year, release, rank, suffix)
-    return (year, release, -1, suffix)
+_version_tuple = _rc.version_tuple
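The removed helper's CalVer ranking now lives in `_release_common`; reproducing it from the docstring makes the ordering concrete. This mirrors the deleted body, not the `_rc` implementation, which this diff doesn't show:

```python
def version_tuple(v: str) -> tuple:
    # alpha=0 < beta=1 < rc=2 < stable=3; unparseable tags sort first.
    stage_rank = {"alpha": 0, "beta": 1, "rc": 2}
    head, _, suffix = v.partition("-")
    try:
        year, release = (int(x) for x in head.split(".", 1))
    except ValueError:
        return (-1, -1, -1, v)
    if not suffix:
        # Stable (no suffix) outranks its own pre-releases.
        return (year, release, 3, "")
    for name, rank in stage_rank.items():
        if suffix.startswith(name):
            return (year, release, rank, suffix)
    return (year, release, -1, suffix)

tags = ["26.2-alpha", "26.1", "26.1-beta", "26.1-alpha"]
print(sorted(tags, key=version_tuple))
# ['26.1-alpha', '26.1-beta', '26.1', '26.2-alpha']
```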
def check_update() -> UpdateCheck:
-    """Return current + latest versions and whether an update is available."""
+    """Return current + latest versions and whether an update is available.
+
+    Forgejo's /releases/latest endpoint skips anything marked as a
+    pre-release, so during the CalVer alpha/beta stage where every tag
+    carries a suffix, that endpoint always 404s. Query the paginated
+    /releases list instead and take the first entry; Forgejo returns
+    them newest-first, including pre-releases.
+    """
    current = read_current_version()
-    release = _forgejo_api("/releases/latest")
+    releases = _forgejo_api("/releases?limit=1")
+    if not isinstance(releases, list) or not releases:
+        raise UpdateError("no releases published yet")
+    release = releases[0]
    latest = str(release.get("tag_name") or "").strip()
    if not latest:
-        raise UpdateError("no latest release (empty tag_name)")
+        raise UpdateError("latest release has empty tag_name")
    tarball_url = None
    sha256_url = None
    for asset in release.get("assets") or []:
@@ -150,74 +142,97 @@ def check_update():


def _download(url: str, dest: Path) -> None:
-    dest.parent.mkdir(parents=True, exist_ok=True)
-    req = urllib.request.Request(url)
-    try:
-        with urllib.request.urlopen(req, timeout=60) as resp, dest.open("wb") as f:
-            shutil.copyfileobj(resp, f)
-    except urllib.error.URLError as e:
-        raise UpdateError(f"download {url}: {e}") from e
+    _rc.download(url, dest, error_cls=UpdateError)


-def _sha256_of(path: Path) -> str:
-    h = hashlib.sha256()
-    with path.open("rb") as f:
-        for chunk in iter(lambda: f.read(1024 * 1024), b""):
-            h.update(chunk)
-    return h.hexdigest()
+_sha256_of = _rc.sha256_of


def verify_tarball(tarball: Path, expected_sha: str) -> None:
-    actual = _sha256_of(tarball)
-    if actual != expected_sha:
-        raise UpdateError(f"sha256 mismatch: expected {expected_sha}, got {actual}")
+    _rc.verify_tarball(tarball, expected_sha, error_cls=UpdateError)


def _parse_sha256_sidecar(text: str) -> str:
-    """Extract the hash from a standard `sha256sum` sidecar line."""
-    line = text.strip().split("\n", 1)[0].strip()
-    if not line:
-        raise UpdateError("empty sha256 sidecar")
-    return line.split()[0]
+    return _rc.parse_sha256_sidecar(text, error_cls=UpdateError)
def _extract_tarball(tarball: Path, dest: Path) -> str:
-    """Extract the tarball and return the VERSION read from its root."""
-    dest.mkdir(parents=True, exist_ok=True)
-    with tarfile.open(tarball, "r:gz") as tf:
-        # defensive: refuse entries that would escape dest
-        for member in tf.getmembers():
-            if member.name.startswith(("/", "..")) or ".." in Path(member.name).parts:
-                raise UpdateError(f"refusing tarball entry {member.name!r}")
-        # Python 3.12+ grew a stricter default filter; opt into it where
-        # available to catch symlink-escape / device-node / setuid tricks
-        # that our regex check can't see. Older Pythons fall back to the
-        # historical permissive behaviour.
-        try:
-            tf.extractall(dest, filter="data")
-        except TypeError:
-            tf.extractall(dest)
-    version_file = dest / "VERSION"
-    if not version_file.is_file():
-        raise UpdateError("tarball has no VERSION file at root")
-    return version_file.read_text().strip()
+    return _rc.extract_tarball(tarball, dest, error_cls=UpdateError)
+
+
+def _current_hostname() -> str:
+    """Read the box's hostname from /etc/hostname, falling back to 'furtka'.
+
+    Used to substitute the __FURTKA_HOSTNAME__ marker in the shipped Caddyfile
+    so Caddy's `tls internal` sees a real name to issue a leaf cert for.
+    """
+    try:
+        name = _HOSTNAME_FILE.read_text().strip()
+    except (FileNotFoundError, PermissionError, OSError):
+        return "furtka"
+    return name or "furtka"
+
+
+def _maybe_migrate_preserve_https() -> None:
+    """26.14 → 26.15 migration: if the box already had the force-HTTPS
+    redirect snippet on disk, that means the user explicitly opted
+    into HTTPS under the old regime. Under the new opt-in regime,
+    HTTPS also requires a separate listener snippet; write it here so
+    the user's HTTPS doesn't silently break when the Caddyfile refresh
+    removes the default hostname block.
+    """
+    redirect_snippet = _CADDY_SNIPPET_DIR / "redirect.caddyfile"
+    https_snippet = _CADDY_HTTPS_SNIPPET_DIR / "https.caddyfile"
+    if not redirect_snippet.is_file() or https_snippet.is_file():
+        return
+    hostname = _current_hostname()
+    https_snippet.write_text(
+        f"{hostname}.local, {hostname} {{\n\ttls internal\n\timport furtka_routes\n}}\n"
+    )
def _refresh_caddyfile(source: Path) -> bool:
    """Copy the shipped Caddyfile to /etc/caddy/ iff it differs. Returns True
-    if the file changed (so caddy needs more than a bare reload)."""
+    if the file changed (so caddy needs more than a bare reload).
+
+    Substitutes __FURTKA_HOSTNAME__ with the current hostname before comparing
+    and writing (the same rendering the webinstaller applies at install time),
+    so a self-update lands byte-identical content when nothing else changed.
+    """
    if not source.is_file():
        return False
-    if _CADDYFILE_LIVE.is_file() and source.read_bytes() == _CADDYFILE_LIVE.read_bytes():
+    # Snippet dirs for the /api/furtka/https/force toggle. Pre-HTTPS
+    # installs don't have them; ensure both so the Caddyfile's glob
+    # imports can't trip an older Caddy on missing paths during the
+    # first reload. furtka-https.d is new in 26.15-alpha — older boxes
+    # upgrading across this version line won't have it on disk yet.
+    _CADDY_SNIPPET_DIR.mkdir(mode=0o755, parents=True, exist_ok=True)
+    _CADDY_HTTPS_SNIPPET_DIR.mkdir(mode=0o755, parents=True, exist_ok=True)
+    # Migration: pre-26.15 Caddyfile always served :443 via tls internal,
+    # so a box that had the "force HTTPS" redirect toggle ON relied on
+    # HTTPS being there implicitly. After this Caddyfile refresh the
+    # hostname block is gone, so the redirect would 301 to a dead :443.
+    # Preserve intent by writing the HTTPS listener snippet too.
+    _maybe_migrate_preserve_https()
+    rendered = source.read_text().replace(_CADDYFILE_HOSTNAME_MARKER, _current_hostname())
+    if _CADDYFILE_LIVE.is_file() and rendered == _CADDYFILE_LIVE.read_text():
        return False
    _CADDYFILE_LIVE.parent.mkdir(parents=True, exist_ok=True)
-    shutil.copy(source, _CADDYFILE_LIVE)
+    _CADDYFILE_LIVE.write_text(rendered)
    return True
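The render-then-compare dance generalises: substitute the marker, write only on change, and report whether a reload is needed. A stripped-down sketch with the paths passed in (helper name is ours; the marker string is the real one):

```python
import tempfile
from pathlib import Path

MARKER = "__FURTKA_HOSTNAME__"

def refresh(source: Path, live: Path, hostname: str) -> bool:
    rendered = source.read_text().replace(MARKER, hostname)
    if live.is_file() and live.read_text() == rendered:
        return False  # byte-identical: no reload needed
    live.write_text(rendered)
    return True

tmp = Path(tempfile.mkdtemp())
src, live = tmp / "Caddyfile.in", tmp / "Caddyfile"
src.write_text(MARKER + ".local {\n\treverse_proxy :8080\n}\n")
print(refresh(src, live, "furtka"))  # True  (first write)
print(refresh(src, live, "furtka"))  # False (idempotent rerun)
```

Because the same rendering happens at install time and at update time, an unchanged release produces `False` and the updater can skip the Caddy restart.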
def _link_new_units(unit_dir: Path) -> list[str]:
    """`systemctl link` any unit file in unit_dir that isn't already symlinked
-    into /etc/systemd/system/. Returns the list of newly-linked unit names."""
+    into /etc/systemd/system/. Returns the list of newly-linked unit names.
+
+    Newly-linked `.timer` units are additionally `systemctl enable`d so that
+    a self-update introducing a timer (e.g. 26.5 → 26.6 adding
+    furtka-catalog-sync.timer) activates it automatically; the installer's
+    enable list only applies to fresh installs. A linked-but-disabled timer
+    never fires on its own, so without this step catalog sync would never
+    happen on upgraded boxes.
+    """
    if not unit_dir.is_dir():
        return []
    linked = []
@@ -228,6 +243,8 @@ def _link_new_units(unit_dir: Path) -> list[str]:
        if target.exists() or target.is_symlink():
            continue
        _run(["systemctl", "link", str(unit_file)])
+        if unit_file.suffix == ".timer":
+            _run(["systemctl", "enable", unit_file.name])
        linked.append(unit_file.name)
    return linked
@@ -268,13 +285,35 @@ def _run(cmd: list[str]) -> None:
def _health_check(url: str, deadline_s: float = 30.0) -> bool:
+    """Poll *url* until we get *any* response from the Python server.
+
+    Treats any 2xx-4xx response as "server is up". A 401 on
+    /api/apps after the 26.11-alpha auth-guard shipped is a perfectly
+    valid signal that the new code imported + the socket is listening;
+    rejecting the request is still "alive". Only 5xx or connection-
+    level failures count as unhealthy.
+
+    Rationale: pre-26.13 this function hit /api/apps and expected 200,
+    which silently broke every upgrade across the auth boundary (26.10
+    → 26.11+) and auto-rolled back. Now we just need proof the new
+    process came up.
+    """
    end = time.time() + deadline_s
    while time.time() < end:
        try:
            with urllib.request.urlopen(url, timeout=3) as resp:
-                if resp.status == 200:
+                # Any 2xx/3xx → alive. urllib follows redirects by
+                # default, so a 302 → /login resolves to /login's 200.
+                if resp.status < 500:
                    return True
+        except urllib.error.HTTPError as e:
+            # 4xx → server is up, just refused us (auth, bad request,
+            # whatever). Counts as healthy for the "did it come back"
+            # check. 5xx → genuinely broken, don't accept.
+            if 400 <= e.code < 500:
+                return True
        except urllib.error.URLError:
+            # Connection refused / DNS / timeout → not up yet, retry.
            pass
        time.sleep(1)
    return False
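The urllib detail that makes the split above necessary: any non-2xx/3xx status raises `HTTPError` (a `URLError` subclass) instead of returning a response, so the 4xx-is-alive logic has to live in an `except` clause. A self-contained sketch against a throwaway local server (handler and paths are illustrative):

```python
import threading
import urllib.error
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        # /healthy answers 401 (auth-guarded but alive); anything else 500.
        self.send_response(401 if self.path == "/healthy" else 500)
        self.end_headers()
    def log_message(self, *args):  # keep output quiet
        pass

srv = HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=srv.serve_forever, daemon=True).start()
port = srv.server_address[1]

def alive(path: str) -> bool:
    try:
        with urllib.request.urlopen(f"http://127.0.0.1:{port}{path}", timeout=3) as r:
            return r.status < 500        # 2xx/3xx: definitely up
    except urllib.error.HTTPError as e:
        return 400 <= e.code < 500       # refused-but-up counts as alive
    except urllib.error.URLError:
        return False                     # connection-level failure

print(alive("/healthy"), alive("/broken"))  # True False
```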

View file

@@ -20,16 +20,19 @@ The script re-execs itself inside a privileged `archlinux:latest` container. Tha
The build starts from Arch's stock `releng` profile (the same one used to build the official Arch ISO), then overlays our customizations from `overlay/`:

| Overlay file | Effect |
|----------------------------------------------|----------------------------------------------------------------------------------|
| `overlay/packages.extra` | Appended to the package list. Adds `python`, `python-flask`, `avahi`, `nss-mdns` |
| `overlay/profiledef.sh` | Appended to `profiledef.sh`. Renames the ISO to `furtka-*` with a dated version |
| `overlay/airootfs/opt/furtka/` | Directory where `webinstaller/` is copied at build time |
-| `overlay/airootfs/etc/systemd/system/` | Contains `furtka-webinstaller.service` + a symlink into `multi-user.target.wants/` so it auto-starts on boot |
+| `overlay/airootfs/etc/hostname` | Live-ISO hostname (`proksi`) so mDNS advertises the installer as `proksi.local` |
+| `overlay/airootfs/etc/issue` | Welcome banner on the TTY pointing users at `http://proksi.local:5000` |
+| `overlay/airootfs/usr/local/bin/furtka-update-issue` | Rewrites `/etc/issue` at runtime so the banner also shows the DHCP-assigned IP as a fallback URL |
+| `overlay/airootfs/etc/systemd/system/` | `furtka-webinstaller.service` (Flask on :5000) + `furtka-issue.service` (runs the banner-updater on network-online), each symlinked into `multi-user.target.wants/` to auto-start on boot |

The systemd service runs `flask --app app run --host 0.0.0.0 --port 5000` under `/opt/furtka`. The `0.0.0.0` binding is important — the Flask default is localhost-only, which wouldn't be reachable from another machine on the LAN.

-mDNS (`proksi.local`) via avahi is installed but not yet wired. First milestone is just "boot → browser → wizard at raw IP". Naming comes next.
+mDNS is wired: `avahi-daemon` + `nss-mdns` come from `packages.extra`, the live ISO's hostname is `proksi`, and as soon as `systemd-networkd-wait-online` fires the installer is reachable at `http://proksi.local:5000`. The raw IP still shows on the console for fallback — some Windows clients need the Bonjour service for `.local` to resolve at all.
## Test flow

@@ -51,7 +54,7 @@ mDNS (`proksi.local`) via avahi is installed but not yet wired. First milestone

Once `archinstall` finishes and you click **Reboot now**, the VM comes up into the installed system. No more port `:5000` — the wizard ISO is gone. Instead:

- **Console**: agetty shows `Furtka is ready. Open http://<hostname>.local …` with the IP fallback underneath.
-- **Browser** at `http://<hostname>.local` (default `http://proksi.local`): Caddy-served landing page with three live status tiles (uptime, Docker version, free disk) refreshed every 30 s by `furtka-status.timer`.
+- **Browser** at `http://<hostname>.local` (default `http://furtka.local` — the form's default hostname is `furtka`; only the live-installer ISO uses `proksi`): Caddy-served landing page with three live status tiles (uptime, Docker version, free disk) refreshed every 30 s by `furtka-status.timer`. HTTPS is opt-in (26.15-alpha) — flip the toggle in `/settings` to switch on Caddy's `tls internal` on `:443`, then trust `rootCA.crt` from `/settings` to clear browser warnings.
- **SSH**: `ssh <user>@<hostname>.local` works; `docker ps` works without `sudo` because the user is in the `docker` group.

This is a demo shell — no Authentik, no app store yet. The landing page lives at `/srv/furtka/www/`, served by Caddy on `:80` per `/etc/caddy/Caddyfile`. All of this is written into the target by `webinstaller/app.py`'s `_post_install_commands` via archinstall's `custom_commands`.
@@ -59,5 +62,4 @@ This is a demo shell — no Authentik, no app store yet. The landing page lives

## Known rough edges

- **Disk space**: the first time you build on a fresh host, the squashfs/xorriso steps need ~15 GB free. If the host's LVM-root is smaller, `xorriso` silently dies at the very end with "Image size exceeds free space on media".
-- **No HTTPS yet**. The Furtka plan is "local CA + green padlock on `https://proksi.local`" — that's a later milestone. For now, plain HTTP.
+- **Live-installer wizard is still HTTP-only**. `http://proksi.local:5000` during install has no TLS; once the box reboots, Caddy can serve `tls internal` on `:443` if the user opts in via `/settings` (26.15-alpha), but bringing TLS to the wizard itself is a later milestone.
- **Boot USB could appear as an install target on bare metal**. On a VM the ISO is a CD-ROM (filtered) and SATA is the only disk, so the picker only shows the install target. On bare metal with a USB stick, the USB is `TYPE=disk` and shows up alongside the real install drive; a user could in theory pick the USB they just booted from. Mitigating this needs detecting the boot media (via `findmnt /run/archiso/bootmnt` or similar) and filtering it out in `webinstaller/drives.py`.

View file

@@ -78,6 +78,11 @@ cp -a "$REPO_ROOT/webinstaller/." "$PROFILE_WORK/airootfs/opt/furtka/"
# next to webinstaller/app.py so _resolve_assets_dir() finds it at runtime.
cp -a "$REPO_ROOT/assets" "$PROFILE_WORK/airootfs/opt/furtka/assets"
rm -rf "$PROFILE_WORK/airootfs/opt/furtka/__pycache__"
# VERSION next to the webinstaller so the wizard footer can render the
# release string at runtime instead of carrying a hardcoded one. Matches
# what the resource-manager payload ships in its own VERSION file below.
ISO_VERSION=$(grep -E '^version = ' "$REPO_ROOT/pyproject.toml" | head -1 | sed 's/.*= "\(.*\)"/\1/')
echo "$ISO_VERSION" > "$PROFILE_WORK/airootfs/opt/furtka/VERSION"
# Pack the resource manager (furtka/ Python package + bundled apps/) as a
# tarball that webinstaller hands to archinstall via custom_commands. Lives at
@@ -94,9 +99,9 @@ cp -a "$REPO_ROOT/apps" "$PAYLOAD_STAGE/"
cp -a "$REPO_ROOT/assets" "$PAYLOAD_STAGE/"
find "$PAYLOAD_STAGE" -type d -name __pycache__ -exec rm -rf {} +
# VERSION at tarball root: the installer reads it to choose the versions/<ver>/
-# directory name and /opt/furtka/current/VERSION reports it at runtime.
-grep -E '^version = ' "$REPO_ROOT/pyproject.toml" | head -1 \
-    | sed 's/.*= "\(.*\)"/\1/' > "$PAYLOAD_STAGE/VERSION"
+# directory name and /opt/furtka/current/VERSION reports it at runtime. Same
+# value we wrote into /opt/furtka/VERSION for the live wizard footer above.
+echo "$ISO_VERSION" > "$PAYLOAD_STAGE/VERSION"
tar -czf "$PROFILE_WORK/airootfs/opt/furtka-resource-manager.tar.gz" \
    -C "$PAYLOAD_STAGE" .
rm -rf "$PAYLOAD_STAGE"

View file

@@ -12,7 +12,10 @@ fi

echo "==> Updating apt and installing prerequisites"
sudo apt-get update -y
-sudo apt-get install -y ca-certificates curl gnupg
+# arp-scan + iputils: needed by scripts/smoke-vm.sh for MAC→IP discovery
+# of the test VM on the Proxmox test host (live ISO has no guest agent,
+# so we scan the LAN and match on the MAC we assigned at VM creation).
+sudo apt-get install -y ca-certificates curl gnupg arp-scan iputils-arping

echo "==> Adding Docker's official GPG key"
sudo install -m 0755 -d /etc/apt/keyrings

View file

@@ -19,6 +19,13 @@ services:
    volumes:
      - ./data:/data
      - /var/run/docker.sock:/var/run/docker.sock
+      # Auto-deploy of furtka.org runs inside this container — the
+      # runner host *is* the web server. Bind these at matching paths
+      # so rsync/hugo just see plain local filesystem. Without these
+      # mounts, .forgejo/workflows/deploy-site.yml can't reach the
+      # source tree or the webroot.
+      - /srv/furtka-site:/srv/furtka-site
+      - /var/www/furtka.org:/var/www/furtka.org
    command: >-
      /bin/sh -c "apk add --no-cache nodejs docker-cli && sleep 5 &&
      forgejo-runner daemon --config /data/config.yml"

View file

@@ -8,6 +8,23 @@ server {
    charset utf-8;

+    gzip on;
+    gzip_vary on;
+    gzip_min_length 1024;
+    gzip_comp_level 6;
+    gzip_types
+        text/css
+        text/plain
+        text/xml
+        application/javascript
+        application/json
+        application/xml
+        application/rss+xml
+        application/atom+xml
+        image/svg+xml
+        font/woff
+        font/woff2;
+
    location / {
        try_files $uri $uri/ $uri.html =404;
    }

View file

@@ -1,6 +1,6 @@
[project]
name = "furtka"
-version = "26.0-alpha"
+version = "26.17-alpha"
description = "Open-source home server OS — simple enough for everyone."
requires-python = ">=3.11"
readme = "README.md"


@@ -57,20 +57,27 @@ api() {
 base="https://$HOST/api/v1/repos/$REPO"
 
-# 1. Create the release.
-release_body_json="$(jq -n \
-  --arg tag "$VERSION" \
-  --arg name "$VERSION" \
-  --arg body "$BODY" \
-  --argjson pre "$PRERELEASE" \
-  '{tag_name: $tag, name: $name, body: $body, prerelease: $pre}')"
+# 1. Create the release. Python for JSON assembly so we don't depend on jq
+# on the runner — the previous `apt-get install -y jq` step in release.yml
+# hung for 15+ minutes on a slow mirror and stalled the whole publish.
+release_body_json="$(
+  VERSION="$VERSION" BODY="$BODY" PRE="$PRERELEASE" python3 -c '
+import json, os
+print(json.dumps({
+    "tag_name": os.environ["VERSION"],
+    "name": os.environ["VERSION"],
+    "body": os.environ["BODY"],
+    "prerelease": os.environ["PRE"] == "true",
+}))
+'
+)"
 
 echo "==> Creating release $VERSION"
 release_response="$(api --request POST "$base/releases" \
   --header "Content-Type: application/json" \
   --data "$release_body_json")"
-release_id="$(echo "$release_response" | jq -r '.id')"
+release_id="$(echo "$release_response" | python3 -c 'import json, sys; print(json.load(sys.stdin).get("id", ""))')"
 if [ -z "$release_id" ] || [ "$release_id" = "null" ]; then
   echo "error: couldn't parse release id from response:"
   echo "$release_response"
@@ -92,4 +99,20 @@ upload_asset "$TARBALL"
 upload_asset "$SHA_FILE"
 upload_asset "$RELEASE_JSON"
 
+# Optional: attach the live-installer ISO when dist/furtka-<version>.iso
+# exists. Release workflows that want this build the ISO via iso/build.sh
+# and move the output here before calling publish-release. Local runs
+# that skip the ISO step still publish the core release successfully.
+#
+# Soft-fail: the ISO is ~1 GB and Forgejo's reverse proxy has returned
+# 504 on the upload even when the write eventually succeeds. The core
+# tarball (which boxes need for self-update) is already uploaded above,
+# so don't let an ISO transport hiccup fail the whole release.
+ISO="$DIST_DIR/furtka-$VERSION.iso"
+if [ -f "$ISO" ]; then
+  if ! upload_asset "$ISO"; then
+    echo "warning: ISO upload failed — release published without ISO asset" >&2
+  fi
+fi
+
 echo "Release $VERSION published: https://$HOST/$REPO/releases/tag/$VERSION"

scripts/smoke-vm.sh (new executable file, 238 lines)

@ -0,0 +1,238 @@
#!/usr/bin/env bash
# Smoke-test a freshly built Furtka live ISO by booting it in a VM on the
# Proxmox test host (defaults to $PVE_TEST_HOST) and checking that the
# webinstaller answers HTTP 200 on :5000.
#
# Usage: ./scripts/smoke-vm.sh <iso-path>
#
# Required env:
# PVE_TEST_HOST IP/hostname of the test node (e.g. 192.168.178.165)
# PVE_TEST_TOKEN "user@realm!tokenid=secret" single string
#
# Optional env:
# PVE_TEST_NODE PVE node name; auto-detected from /nodes if empty
# PVE_TEST_ISO_STORAGE default "local"
# PVE_TEST_DISK_STORAGE default "local-lvm"
# PVE_TEST_BRIDGE default "vmbr0"
# PVE_TEST_VMID_MIN default 9000
# PVE_TEST_VMID_MAX default 9099
# PVE_TEST_KEEP how many past smoke VMs to retain (default 5)
# PVE_TEST_BOOT_TIMEOUT seconds to wait for :5000 (default 180)
# PVE_TEST_VM_MEMORY MiB of RAM for the smoke VM (default 8192). Bumped
# from 4096 on 2026-04-18 — mkinitcpio on 4 GB VMs
# OOM-ed the pollux host mid-install, pulling pveproxy
# + the runner connection down with it.
# PVE_TEST_VM_CORES vCPU count for the smoke VM (default 2)
# SMOKE_SHA commit SHA used in name/tag/MAC; defaults to git HEAD
#
# Exits 0 iff the ISO booted and :5000 returned 200. Prunes old VMs + ISOs
# after the test regardless of outcome so a failed build's VM stays behind
# for post-mortem (at the cost of the run before it).
set -euo pipefail
ISO_PATH="${1:?usage: $0 <iso-path>}"
[[ -f "$ISO_PATH" ]] || { echo "iso not found: $ISO_PATH" >&2; exit 1; }
: "${PVE_TEST_HOST:?PVE_TEST_HOST must be set}"
: "${PVE_TEST_TOKEN:?PVE_TEST_TOKEN must be set}"
ISO_STORAGE="${PVE_TEST_ISO_STORAGE:-local}"
DISK_STORAGE="${PVE_TEST_DISK_STORAGE:-local-lvm}"
BRIDGE="${PVE_TEST_BRIDGE:-vmbr0}"
VMID_MIN="${PVE_TEST_VMID_MIN:-9000}"
VMID_MAX="${PVE_TEST_VMID_MAX:-9099}"
KEEP="${PVE_TEST_KEEP:-5}"
BOOT_TIMEOUT="${PVE_TEST_BOOT_TIMEOUT:-180}"
VM_MEMORY="${PVE_TEST_VM_MEMORY:-8192}"
VM_CORES="${PVE_TEST_VM_CORES:-2}"
SHA="${SMOKE_SHA:-$(git rev-parse HEAD 2>/dev/null || echo unknownunknown)}"
SHORT_SHA="${SHA:0:12}"
API="https://${PVE_TEST_HOST}:8006/api2/json"
api() {
# Wrapper so that on non-2xx we print the PVE response body to stderr
# before bubbling the failure — otherwise `--fail-with-body` output
# gets swallowed by callers that pipe to /dev/null, and you're left
# staring at "curl: (22)" with no idea which permission is missing.
local body rc=0
# `|| rc=$?` keeps `set -e` from aborting inside this assignment before
# we get the chance to report the failure details below.
body=$(curl --silent --show-error --fail-with-body -k \
  --header "Authorization: PVEAPIToken=${PVE_TEST_TOKEN}" \
  "$@" 2>&1) || rc=$?
if [[ $rc -ne 0 ]]; then
echo "!! PVE API call failed (rc=$rc)" >&2
echo "!! request: $*" >&2
[[ -n "$body" ]] && echo "!! response: $body" >&2
return $rc
fi
printf '%s' "$body"
}
# PVE wraps every payload as {"data": <payload>}; jget unwraps .data via a
# short python one-liner.
jget() {
python3 -c 'import json,sys; print(json.load(sys.stdin)["data"])'
}
# Auto-detect node name if not given: first entry from /nodes.
NODE="${PVE_TEST_NODE:-}"
if [[ -z "$NODE" ]]; then
NODE="$(api "$API/nodes" | python3 -c '
import json, sys
nodes = json.load(sys.stdin)["data"]
if not nodes:
sys.exit("no nodes returned from PVE")
print(nodes[0]["node"])
')"
fi
echo "==> node=$NODE sha=$SHORT_SHA iso=$(basename "$ISO_PATH")"
ISO_NAME="furtka-${SHORT_SHA}.iso"
VOLID="${ISO_STORAGE}:iso/${ISO_NAME}"
# --- Step 1: upload ISO (or reuse if same SHA already on PVE) ---------------
# For a given commit SHA the ISO bytes are reproducible, so if furtka-<sha>.iso
# is already in PVE storage from a prior smoke run we reuse it and skip the
# upload. Avoids DELETE-permission friction and shaves ~2 min off re-runs.
if api "$API/nodes/$NODE/storage/$ISO_STORAGE/content/$VOLID" \
--output /dev/null 2>/dev/null; then
echo "==> reusing existing ISO $VOLID"
else
echo "==> uploading ISO as $ISO_NAME"
api --request POST "$API/nodes/$NODE/storage/$ISO_STORAGE/upload" \
--form "content=iso" \
--form "filename=@${ISO_PATH};filename=${ISO_NAME}" \
> /dev/null
fi
# --- Step 2: pick a free VMID in the reserved range ------------------------
# List VMs on the node, filter by range, pick the lowest integer not in use.
USED="$(api "$API/nodes/$NODE/qemu" | python3 -c '
import json, sys
data = json.load(sys.stdin)["data"]
print(" ".join(str(v["vmid"]) for v in data))
')"
VMID=""
for ((id = VMID_MIN; id <= VMID_MAX; id++)); do
if ! [[ " $USED " == *" $id "* ]]; then
VMID="$id"
break
fi
done
[[ -n "$VMID" ]] || { echo "no free VMID in ${VMID_MIN}-${VMID_MAX}" >&2; exit 1; }
# Derive a stable MAC from the SHA. BC:24:11 is Proxmox's assigned OUI.
MAC_TAIL="$(echo "$SHORT_SHA" | tr 'a-z' 'A-Z' | cut -c1-6)"
MAC="BC:24:11:${MAC_TAIL:0:2}:${MAC_TAIL:2:2}:${MAC_TAIL:4:2}"
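The MAC derivation is deterministic per commit, which is what makes the later LAN scan able to find the VM. A Python sketch of the same transformation (`smoke_mac` is a hypothetical helper name):

```python
def smoke_mac(short_sha: str) -> str:
    """Derive a stable MAC in Proxmox's BC:24:11 OUI from a commit SHA.

    Assumes the first 6 characters of the short SHA are hex digits — true
    for real git SHAs; a non-hex fallback value would yield an invalid MAC.
    """
    tail = short_sha.upper()[:6]
    return "BC:24:11:" + ":".join(tail[i:i + 2] for i in (0, 2, 4))
```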
echo "==> creating VM $VMID name=furtka-smoke-${SHORT_SHA} mac=$MAC"
api --request POST "$API/nodes/$NODE/qemu" \
--data-urlencode "vmid=$VMID" \
--data-urlencode "name=furtka-smoke-${SHORT_SHA}" \
--data-urlencode "tags=furtka;smoke;sha-${SHORT_SHA}" \
--data-urlencode "cores=${VM_CORES}" \
--data-urlencode "memory=${VM_MEMORY}" \
--data-urlencode "bios=ovmf" \
--data-urlencode "machine=q35" \
--data-urlencode "ostype=l26" \
--data-urlencode "scsihw=virtio-scsi-single" \
--data-urlencode "efidisk0=${DISK_STORAGE}:1,efitype=4m,pre-enrolled-keys=0" \
--data-urlencode "scsi0=${DISK_STORAGE}:20,discard=on,ssd=1" \
--data-urlencode "ide2=${VOLID},media=cdrom" \
--data-urlencode "boot=order=ide2;scsi0" \
--data-urlencode "net0=virtio=${MAC},bridge=${BRIDGE},firewall=0" \
> /dev/null
echo "==> starting VM $VMID"
api --request POST "$API/nodes/$NODE/qemu/$VMID/status/start" > /dev/null
# --- Step 3: discover the VM's IP by MAC -----------------------------------
# The live ISO has no qemu-guest-agent, so PVE can't tell us the IP.
# We scan the LAN from the runner and match on our derived MAC.
MAC_LOWER="$(echo "$MAC" | tr 'A-Z' 'a-z')"
IP=""
deadline=$((SECONDS + 150))
while (( SECONDS < deadline )); do
# Capture-then-parse instead of piping directly into awk. `awk '... exit'`
# exits on first match, which SIGPIPEs the upstream arp-scan (exit 141).
# With `set -o pipefail` active that kills the whole script — exactly what
# happened the first time host-networking gave arp-scan real matches.
SCAN=""
if command -v arp-scan >/dev/null 2>&1; then
SCAN="$(sudo arp-scan --localnet --quiet --ignoredups 2>/dev/null || true)"
IP="$(awk -v m="$MAC_LOWER" 'tolower($2) == m { print $1; exit }' <<<"$SCAN")"
fi
if [[ -z "$IP" ]] && command -v nmap >/dev/null 2>&1; then
sudo nmap -sn -T4 192.168.178.0/24 >/dev/null 2>&1 || true
NEIGH="$(ip neigh show)"
IP="$(awk -v m="$MAC_LOWER" 'tolower($5) == m && $1 ~ /^[0-9]/ { print $1; exit }' <<<"$NEIGH")"
fi
[[ -n "$IP" ]] && break
sleep 5
done
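The capture-then-parse pattern the comment describes can be shown in isolation: the producer writes into a variable and runs to completion, so awk's early `exit` has no live pipe to SIGPIPE. The two-column tab-separated sample below mimics arp-scan's IP/MAC output shape; values are made up:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Producer finishes fully into a variable — no pipe to break.
SCAN="$(printf '10.0.0.7\tbc:24:11:b7:25:bf\n10.0.0.8\taa:bb:cc:dd:ee:ff\n')"
# awk may now exit on first match without killing an upstream process.
IP="$(awk -v m="bc:24:11:b7:25:bf" 'tolower($2) == m { print $1; exit }' <<<"$SCAN")"
echo "$IP"
```

With a direct pipe (`producer | awk '... exit'`) under `set -o pipefail`, the producer can die with 141 (128+SIGPIPE) and take the whole script down, which is the failure mode the loop above works around.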
if [[ -z "$IP" ]]; then
echo "!! never saw $MAC on the LAN within 150s" >&2
SMOKE_RC=1
else
echo "==> VM $VMID is at $IP (mac $MAC)"
fi
# --- Step 4: smoke the webinstaller ----------------------------------------
SMOKE_RC="${SMOKE_RC:-0}"
if [[ "$SMOKE_RC" -eq 0 ]]; then
echo "==> polling http://${IP}:5000 (timeout ${BOOT_TIMEOUT}s)"
end=$((SECONDS + BOOT_TIMEOUT))
while (( SECONDS < end )); do
if curl --silent --fail --max-time 5 --output /dev/null "http://${IP}:5000/"; then
echo "==> :5000 answered 200 — smoke passed"
SMOKE_RC=0
break
fi
SMOKE_RC=1
sleep 5
done
if [[ "$SMOKE_RC" -ne 0 ]]; then
echo "!! :5000 never returned 200 on ${IP}" >&2
fi
fi
# --- Step 5: prune old smoke VMs + ISOs ------------------------------------
echo "==> pruning smoke VMs, keeping last $KEEP"
# List VMs in the reserved range sorted by vmid desc; drop the first KEEP.
TO_DROP="$(api "$API/nodes/$NODE/qemu" | python3 -c "
import json, sys
lo, hi, keep = ${VMID_MIN}, ${VMID_MAX}, ${KEEP}
vms = [v for v in json.load(sys.stdin)['data']
if lo <= int(v['vmid']) <= hi]
vms.sort(key=lambda v: int(v['vmid']), reverse=True)
for v in vms[keep:]:
print(v['vmid'])
")"
for old in $TO_DROP; do
echo " dropping VM $old"
# Find the ISO the VM was booted from so we can delete it after.
OLD_ISO="$(api "$API/nodes/$NODE/qemu/$old/config" | python3 -c '
import json, sys, re
cfg = json.load(sys.stdin)["data"]
for k in ("ide0","ide1","ide2","ide3","sata0","sata1","sata2","sata3"):
v = cfg.get(k,"")
m = re.match(r"([^,]+),.*media=cdrom", v)
if m and m.group(1).endswith(".iso"):
print(m.group(1)); break
' || true)"
# Stop (ignore errors if already stopped), then purge.
api --request POST "$API/nodes/$NODE/qemu/$old/status/stop" \
--output /dev/null 2>/dev/null || true
# /qemu/<id> DELETE is async; the call returns a UPID but for our purposes
# "fire and forget" is fine — next prune will retry if it didn't land.
api --request DELETE "$API/nodes/$NODE/qemu/$old?purge=1&destroy-unreferenced-disks=1" \
--output /dev/null || echo " (delete of $old failed; skipping)"
if [[ -n "$OLD_ISO" && "$OLD_ISO" != "$VOLID" ]]; then
echo " dropping ISO $OLD_ISO"
api --request DELETE "$API/nodes/$NODE/storage/$ISO_STORAGE/content/$OLD_ISO" \
--output /dev/null 2>/dev/null || true
fi
done
exit "$SMOKE_RC"

(file diff suppressed because it is too large)

tests/test_auth.py (new file, 230 lines)

@ -0,0 +1,230 @@
import json
from datetime import UTC, datetime, timedelta
import pytest
from furtka import auth
@pytest.fixture
def tmp_users_file(tmp_path, monkeypatch):
path = tmp_path / "users.json"
monkeypatch.setenv("FURTKA_USERS_FILE", str(path))
# Sessions and lockout state are module-level; wipe between tests so
# one doesn't leak a valid token (or a stale failure counter) into
# the next.
auth.SESSIONS.clear()
auth.LOCKOUT.clear_all()
return path
def test_hash_password_roundtrip():
h = auth.hash_password("hunter2")
assert h != "hunter2" # Not plain text.
assert auth.verify_password("hunter2", h) is True
assert auth.verify_password("hunter3", h) is False
def test_hash_password_is_salted():
# Two calls with the same password must produce different hashes.
a = auth.hash_password("same")
b = auth.hash_password("same")
assert a != b
# But both verify against the original.
assert auth.verify_password("same", a)
assert auth.verify_password("same", b)
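A minimal sketch of what these two tests pin down — a per-call random salt stored alongside the digest. This assumes PBKDF2 via the stdlib for illustration; the repo's actual `furtka.auth` implementation may use a different KDF:

```python
import hashlib
import hmac
import os

def hash_password(password: str) -> str:
    # Fresh random salt per call => hashing "same" twice gives different strings.
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return salt.hex() + "$" + digest.hex()

def verify_password(password: str, stored: str) -> bool:
    # Re-derive with the stored salt; constant-time compare of the digests.
    salt_hex, digest_hex = stored.split("$", 1)
    digest = hashlib.pbkdf2_hmac(
        "sha256", password.encode(), bytes.fromhex(salt_hex), 200_000
    )
    return hmac.compare_digest(digest.hex(), digest_hex)
```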
def test_load_users_returns_empty_when_missing(tmp_users_file):
assert not tmp_users_file.exists()
assert auth.load_users() == {}
def test_load_users_returns_empty_on_junk(tmp_users_file):
tmp_users_file.write_text("{not json")
assert auth.load_users() == {}
def test_load_users_returns_empty_on_non_dict(tmp_users_file):
tmp_users_file.write_text("[]")
assert auth.load_users() == {}
def test_save_users_atomic_and_0600(tmp_users_file):
auth.save_users({"admin": {"hash": "x", "username": "daniel"}})
assert tmp_users_file.exists()
mode = tmp_users_file.stat().st_mode & 0o777
assert mode == 0o600, f"expected 0o600, got {oct(mode)}"
loaded = json.loads(tmp_users_file.read_text())
assert loaded["admin"]["username"] == "daniel"
def test_setup_needed_true_on_missing_file(tmp_users_file):
assert auth.setup_needed() is True
def test_setup_needed_true_on_empty_dict(tmp_users_file):
tmp_users_file.write_text("{}")
assert auth.setup_needed() is True
def test_setup_needed_false_when_admin_exists(tmp_users_file):
auth.create_admin("daniel", "secret-pw")
assert auth.setup_needed() is False
def test_create_admin_overwrites_file(tmp_users_file):
auth.create_admin("daniel", "secret-pw")
auth.create_admin("robert", "new-pw")
users = auth.load_users()
assert users["admin"]["username"] == "robert"
def test_authenticate_happy(tmp_users_file):
auth.create_admin("daniel", "secret-pw")
assert auth.authenticate("daniel", "secret-pw") is True
def test_authenticate_wrong_username(tmp_users_file):
auth.create_admin("daniel", "secret-pw")
assert auth.authenticate("robert", "secret-pw") is False
def test_authenticate_wrong_password(tmp_users_file):
auth.create_admin("daniel", "secret-pw")
assert auth.authenticate("daniel", "wrong") is False
def test_authenticate_no_admin(tmp_users_file):
assert auth.authenticate("daniel", "anything") is False
# ---- Session store ---------------------------------------------------------
def test_session_create_and_lookup(tmp_users_file):
s = auth.SESSIONS.create("daniel")
assert s.username == "daniel"
assert s.token
looked_up = auth.SESSIONS.lookup(s.token)
assert looked_up is not None
assert looked_up.username == "daniel"
def test_session_lookup_unknown_token(tmp_users_file):
assert auth.SESSIONS.lookup("not-a-real-token") is None
def test_session_lookup_none_token(tmp_users_file):
assert auth.SESSIONS.lookup(None) is None
assert auth.SESSIONS.lookup("") is None
def test_session_revoke(tmp_users_file):
s = auth.SESSIONS.create("daniel")
auth.SESSIONS.revoke(s.token)
assert auth.SESSIONS.lookup(s.token) is None
def test_session_expires(tmp_users_file, monkeypatch):
# Build a session store with a 0-second TTL so lookup immediately
# treats new sessions as expired.
store = auth.SessionStore(ttl_seconds=0)
s = store.create("daniel")
# Force the clock forward a hair so the > check fires.
monkeypatch.setattr(
auth,
"datetime",
_FakeDatetime(datetime.now(UTC) + timedelta(seconds=1)),
)
# The module-local datetime reference inside SessionStore.lookup
# resolves at call time. Verify that an expired session is dropped.
assert store.lookup(s.token) is None
class _FakeDatetime:
"""Tiny shim — only `.now(tz)` is used from SessionStore."""
def __init__(self, fixed_utc):
self._fixed = fixed_utc
def now(self, tz=None):
if tz is None:
return self._fixed.replace(tzinfo=None)
return self._fixed.astimezone(tz)
# ---- Login attempts / lockout ----------------------------------------------
def test_lockout_under_threshold_still_allowed(tmp_users_file):
store = auth.LoginAttempts(max_failures=3, window_seconds=60, lockout_seconds=60)
key = ("daniel", "10.0.0.1")
for _ in range(2):
store.register_failure(key)
assert store.is_locked(key) is False
assert store.retry_after_seconds(key) == 0
def test_lockout_triggers_at_threshold(tmp_users_file):
store = auth.LoginAttempts(max_failures=3, window_seconds=60, lockout_seconds=60)
key = ("daniel", "10.0.0.1")
for _ in range(3):
store.register_failure(key)
assert store.is_locked(key) is True
assert store.retry_after_seconds(key) > 0
assert store.retry_after_seconds(key) <= 60
def test_lockout_window_decay(tmp_users_file, monkeypatch):
store = auth.LoginAttempts(max_failures=3, window_seconds=60, lockout_seconds=60)
key = ("daniel", "10.0.0.1")
for _ in range(3):
store.register_failure(key)
assert store.is_locked(key) is True
# Jump 2 minutes ahead — all failures are older than the window
# and should be pruned on the next check.
monkeypatch.setattr(
auth,
"datetime",
_FakeDatetime(datetime.now(UTC) + timedelta(seconds=121)),
)
assert store.is_locked(key) is False
assert store.retry_after_seconds(key) == 0
def test_lockout_clear_resets(tmp_users_file):
store = auth.LoginAttempts(max_failures=2, window_seconds=60, lockout_seconds=60)
key = ("daniel", "10.0.0.1")
store.register_failure(key)
store.register_failure(key)
assert store.is_locked(key) is True
store.clear(key)
assert store.is_locked(key) is False
assert store.retry_after_seconds(key) == 0
def test_lockout_keys_are_independent(tmp_users_file):
store = auth.LoginAttempts(max_failures=2, window_seconds=60, lockout_seconds=60)
locked = ("daniel", "1.1.1.1")
other_ip = ("daniel", "2.2.2.2")
other_user = ("robert", "1.1.1.1")
store.register_failure(locked)
store.register_failure(locked)
assert store.is_locked(locked) is True
assert store.is_locked(other_ip) is False
assert store.is_locked(other_user) is False
def test_lockout_clear_all_wipes_every_key(tmp_users_file):
store = auth.LoginAttempts(max_failures=2, window_seconds=60, lockout_seconds=60)
a = ("daniel", "1.1.1.1")
b = ("robert", "2.2.2.2")
store.register_failure(a)
store.register_failure(a)
store.register_failure(b)
store.register_failure(b)
assert store.is_locked(a) and store.is_locked(b)
store.clear_all()
assert not store.is_locked(a)
assert not store.is_locked(b)

tests/test_catalog.py (new file, 333 lines)

@ -0,0 +1,333 @@
"""Tests for the apps-catalog sync flow.
Same shape as ``tests/test_updater.py``: fixture reloads the module with
env-overridden paths, fake tarballs land in tmp_path, Forgejo API is
stubbed via ``urllib.request.urlopen`` monkeypatching so nothing talks
to the network.
Asserts end-to-end atomicity: on any failure path (bad sha256, broken
tarball, invalid manifest) the live catalog dir is either left
untouched (if one existed) or absent (if it didn't).
"""
from __future__ import annotations
import io
import json
import tarfile
from pathlib import Path
import pytest
@pytest.fixture
def catalog(tmp_path, monkeypatch):
monkeypatch.setenv("FURTKA_CATALOG_DIR", str(tmp_path / "var_lib_furtka_catalog"))
monkeypatch.setenv("FURTKA_CATALOG_STATE", str(tmp_path / "var_lib_furtka_catalog-state.json"))
monkeypatch.setenv("FURTKA_CATALOG_LOCK", str(tmp_path / "catalog.lock"))
monkeypatch.setenv("FURTKA_FORGEJO_HOST", "forgejo.test.local")
monkeypatch.setenv("FURTKA_CATALOG_REPO", "daniel/furtka-apps")
import importlib
from furtka import catalog as c
from furtka import paths as p
importlib.reload(p)
importlib.reload(c)
return c
def _manifest(name: str = "fileshare") -> dict:
return {
"name": name,
"display_name": "Fileshare",
"version": "0.1.0",
"description": "Test fixture app",
"volumes": ["files"],
"ports": [445],
"icon": "icon.svg",
}
def _make_catalog_tarball(
path: Path,
version: str,
*,
apps: list[tuple[str, dict]] | None = None,
extra_entries: list[tuple[str, bytes]] | None = None,
) -> None:
"""Build a minimal valid catalog tarball.
`apps` is a list of (folder_name, manifest_dict). Each app folder gets
a `manifest.json` + a stub `docker-compose.yaml` + `icon.svg`.
`extra_entries` lets tests inject malformed content (path-traversal,
missing VERSION, ...) without rebuilding the helper.
"""
apps = apps if apps is not None else [("fileshare", _manifest())]
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w:gz") as tf:
entries: list[tuple[str, bytes]] = [("VERSION", f"{version}\n".encode())]
for folder, m in apps:
entries.append((f"apps/{folder}/manifest.json", json.dumps(m).encode()))
entries.append(
(f"apps/{folder}/docker-compose.yaml", b"services:\n app:\n image: scratch\n")
)
entries.append((f"apps/{folder}/icon.svg", b"<svg/>"))
if extra_entries:
entries.extend(extra_entries)
for name, data in entries:
info = tarfile.TarInfo(name=name)
info.size = len(data)
tf.addfile(info, io.BytesIO(data))
path.write_bytes(buf.getvalue())
def _stub_forgejo_release(
monkeypatch,
catalog,
*,
tag: str,
tarball_url: str = "https://forgejo.test.local/t.tar.gz",
sha_url: str = "https://forgejo.test.local/t.tar.gz.sha256",
releases: list | None = None,
):
"""Patch ``_rc.forgejo_api`` so check_catalog sees a canned release list."""
if releases is None:
releases = [
{
"tag_name": tag,
"assets": [
{"name": f"furtka-apps-{tag}.tar.gz", "browser_download_url": tarball_url},
{
"name": f"furtka-apps-{tag}.tar.gz.sha256",
"browser_download_url": sha_url,
},
],
}
]
def fake_api(host, repo, path, *, error_cls=RuntimeError):
return releases
from furtka import _release_common as _rc
monkeypatch.setattr(_rc, "forgejo_api", fake_api)
def _stub_download(monkeypatch, catalog, mapping: dict[str, bytes]):
"""Patch ``_rc.download`` so sync_catalog pulls from an in-memory map."""
from furtka import _release_common as _rc
def fake_download(url, dest, *, error_cls=RuntimeError):
if url not in mapping:
raise error_cls(f"test: no fake content for {url}")
dest.parent.mkdir(parents=True, exist_ok=True)
dest.write_bytes(mapping[url])
monkeypatch.setattr(_rc, "download", fake_download)
# --------------------------------------------------------------------------- #
# check_catalog
# --------------------------------------------------------------------------- #
def test_check_catalog_reports_update_when_versions_differ(catalog, monkeypatch, tmp_path):
# Pretend we already have catalog version 26.5 on disk; Forgejo reports 26.6.
catalog.catalog_dir().mkdir(parents=True)
(catalog.catalog_dir() / "VERSION").write_text("26.5\n")
_stub_forgejo_release(monkeypatch, catalog, tag="26.6")
check = catalog.check_catalog()
assert check.current == "26.5"
assert check.latest == "26.6"
assert check.update_available is True
assert check.tarball_url.endswith(".tar.gz")
assert check.sha256_url.endswith(".sha256")
def test_check_catalog_reports_up_to_date_when_same_version(catalog, monkeypatch):
catalog.catalog_dir().mkdir(parents=True)
(catalog.catalog_dir() / "VERSION").write_text("26.5\n")
_stub_forgejo_release(monkeypatch, catalog, tag="26.5")
check = catalog.check_catalog()
assert check.current == "26.5"
assert check.latest == "26.5"
assert check.update_available is False
def test_check_catalog_treats_missing_current_as_installable(catalog, monkeypatch):
# Fresh box, no catalog ever synced — any release is an update.
_stub_forgejo_release(monkeypatch, catalog, tag="26.5")
check = catalog.check_catalog()
assert check.current is None
assert check.update_available is True
def test_check_catalog_raises_when_no_releases_published(catalog, monkeypatch):
_stub_forgejo_release(monkeypatch, catalog, tag="x", releases=[])
with pytest.raises(catalog.CatalogError, match="no catalog releases"):
catalog.check_catalog()
# --------------------------------------------------------------------------- #
# sync_catalog — happy + error paths
# --------------------------------------------------------------------------- #
def test_sync_catalog_happy_path(catalog, monkeypatch, tmp_path):
import hashlib
tarball_path = tmp_path / "tarball.tar.gz"
_make_catalog_tarball(tarball_path, "26.6")
tarball_bytes = tarball_path.read_bytes()
sha = hashlib.sha256(tarball_bytes).hexdigest()
_stub_forgejo_release(monkeypatch, catalog, tag="26.6")
_stub_download(
monkeypatch,
catalog,
{
"https://forgejo.test.local/t.tar.gz": tarball_bytes,
"https://forgejo.test.local/t.tar.gz.sha256": (
f"{sha} furtka-apps-26.6.tar.gz\n".encode()
),
},
)
check = catalog.sync_catalog()
assert check.latest == "26.6"
assert (catalog.catalog_dir() / "VERSION").read_text().strip() == "26.6"
assert (catalog.catalog_dir() / "apps" / "fileshare" / "manifest.json").is_file()
state = catalog.read_state()
assert state["stage"] == "done"
assert state["version"] == "26.6"
def test_sync_catalog_noop_when_already_current(catalog, monkeypatch, tmp_path):
catalog.catalog_dir().mkdir(parents=True)
(catalog.catalog_dir() / "VERSION").write_text("26.5\n")
_stub_forgejo_release(monkeypatch, catalog, tag="26.5")
check = catalog.sync_catalog()
assert check.update_available is False
assert catalog.read_state()["stage"] == "done"
def test_sync_catalog_refuses_sha256_mismatch(catalog, monkeypatch, tmp_path):
tarball_path = tmp_path / "tarball.tar.gz"
_make_catalog_tarball(tarball_path, "26.6")
_stub_forgejo_release(monkeypatch, catalog, tag="26.6")
_stub_download(
monkeypatch,
catalog,
{
"https://forgejo.test.local/t.tar.gz": tarball_path.read_bytes(),
# Hash for some OTHER content — will mismatch.
"https://forgejo.test.local/t.tar.gz.sha256": (b"0" * 64 + b" wrong.tar.gz\n"),
},
)
with pytest.raises(catalog.CatalogError, match="sha256 mismatch"):
catalog.sync_catalog()
# Live catalog never existed, must still not exist after the failed sync.
assert not catalog.catalog_dir().exists()
def test_sync_catalog_refuses_tarball_with_invalid_manifest(catalog, monkeypatch, tmp_path):
import hashlib
bad_manifest = {"name": "broken"} # missing required fields
tarball_path = tmp_path / "tarball.tar.gz"
_make_catalog_tarball(tarball_path, "26.6", apps=[("broken", bad_manifest)])
tarball_bytes = tarball_path.read_bytes()
sha = hashlib.sha256(tarball_bytes).hexdigest()
_stub_forgejo_release(monkeypatch, catalog, tag="26.6")
_stub_download(
monkeypatch,
catalog,
{
"https://forgejo.test.local/t.tar.gz": tarball_bytes,
"https://forgejo.test.local/t.tar.gz.sha256": (
f"{sha} furtka-apps-26.6.tar.gz\n".encode()
),
},
)
with pytest.raises(catalog.CatalogError, match="invalid manifest"):
catalog.sync_catalog()
# Staging was cleaned; live catalog never materialised.
assert not catalog.catalog_dir().exists()
def test_sync_catalog_preserves_existing_catalog_on_failure(catalog, monkeypatch, tmp_path):
"""A failed sync must leave the previous live catalog intact so boxes
keep working until the next successful sync."""
import hashlib
# Seed a live catalog that represents a previous successful sync.
live = catalog.catalog_dir()
live.mkdir(parents=True)
(live / "VERSION").write_text("26.5\n")
(live / "apps").mkdir()
bad_manifest = {"name": "broken"} # invalid
tarball_path = tmp_path / "tarball.tar.gz"
_make_catalog_tarball(tarball_path, "26.6", apps=[("broken", bad_manifest)])
sha = hashlib.sha256(tarball_path.read_bytes()).hexdigest()
_stub_forgejo_release(monkeypatch, catalog, tag="26.6")
_stub_download(
monkeypatch,
catalog,
{
"https://forgejo.test.local/t.tar.gz": tarball_path.read_bytes(),
"https://forgejo.test.local/t.tar.gz.sha256": f"{sha} x\n".encode(),
},
)
with pytest.raises(catalog.CatalogError):
catalog.sync_catalog()
# The 26.5 live catalog survives the failed 26.6 sync.
assert (live / "VERSION").read_text().strip() == "26.5"
def test_sync_catalog_lock_contention(catalog, monkeypatch):
_stub_forgejo_release(monkeypatch, catalog, tag="26.6")
# Hold the lock from outside; the real sync_catalog call must refuse.
first = catalog.acquire_lock()
try:
with pytest.raises(catalog.CatalogError, match="already in progress"):
catalog.sync_catalog()
finally:
first.close()
# --------------------------------------------------------------------------- #
# state + current-version helpers
# --------------------------------------------------------------------------- #
def test_read_current_catalog_version_absent(catalog):
assert catalog.read_current_catalog_version() is None
def test_read_current_catalog_version_empty_file(catalog):
catalog.catalog_dir().mkdir(parents=True)
(catalog.catalog_dir() / "VERSION").write_text("\n")
assert catalog.read_current_catalog_version() is None
def test_write_and_read_state_round_trip(catalog):
catalog.write_state("downloading", latest="26.6")
s = catalog.read_state()
assert s["stage"] == "downloading"
assert s["latest"] == "26.6"
assert "updated_at" in s


@@ -32,9 +32,21 @@ def test_app_list_json_with_one_app(tmp_path, monkeypatch, capsys):
                 "display_name": "Network Files",
                 "version": "0.1.0",
                 "description": "SMB",
+                "description_long": "Long description here.",
                 "volumes": ["files"],
                 "ports": [445],
                 "icon": "icon.svg",
+                "open_url": "smb://{host}/files",
+                "settings": [
+                    {
+                        "name": "SMB_USER",
+                        "label": "User",
+                        "description": "SMB user",
+                        "type": "text",
+                        "default": "furtka",
+                        "required": True,
+                    }
+                ],
             }
         )
     )
@@ -43,7 +55,14 @@ def test_app_list_json_with_one_app(tmp_path, monkeypatch, capsys):
     data = json.loads(capsys.readouterr().out)
     assert len(data) == 1
     assert data[0]["ok"] is True
-    assert data[0]["manifest"]["name"] == "fileshare"
+    m = data[0]["manifest"]
+    assert m["name"] == "fileshare"
+    assert m["description_long"] == "Long description here."
+    assert m["open_url"] == "smb://{host}/files"
+    assert len(m["settings"]) == 1
+    assert m["settings"][0]["name"] == "SMB_USER"
+    assert m["settings"][0]["required"] is True
+    assert m["settings"][0]["default"] == "furtka"
 
 
 def test_reconcile_dry_run_empty(tmp_path, monkeypatch, capsys):
@@ -52,3 +71,120 @@ def test_reconcile_dry_run_empty(tmp_path, monkeypatch, capsys):
     assert rc == 0
     out = capsys.readouterr().out
     assert "0 actions" in out
def test_app_install_bg_dispatches_to_runner(tmp_path, monkeypatch):
"""CLI `app install-bg <name>` must call install_runner.run_install(name).
This is the entry point the HTTP API fires via systemd-run; regression
here would leave the UI hanging at "pulling_image…" forever because
the background job never transitions the install state.
"""
_set_env(monkeypatch, tmp_path)
from furtka import install_runner
called = []
monkeypatch.setattr(install_runner, "run_install", lambda name: called.append(name))
rc = main(["app", "install-bg", "fileshare"])
assert rc == 0
assert called == ["fileshare"]
def test_app_install_bg_returns_1_on_failure(tmp_path, monkeypatch, capsys):
_set_env(monkeypatch, tmp_path)
from furtka import install_runner
def boom(name):
raise RuntimeError("compose pull failed")
monkeypatch.setattr(install_runner, "run_install", boom)
rc = main(["app", "install-bg", "fileshare"])
assert rc == 1
err = capsys.readouterr().err
assert "install-bg failed" in err
assert "compose pull failed" in err
# --- Dependency-aware install + remove ---------------------------------------
def _write_manifest(root, name, **overrides):
app = root / name
app.mkdir(parents=True, exist_ok=True)
payload = {
"name": name,
"display_name": name,
"version": "0.1.0",
"description": "x",
"volumes": [],
"ports": [],
"icon": "icon.svg",
**overrides,
}
(app / "manifest.json").write_text(json.dumps(payload))
(app / "docker-compose.yaml").write_text("services: {}\n")
return app
def test_app_remove_blocked_by_dependent(tmp_path, monkeypatch, capsys):
_set_env(monkeypatch, tmp_path)
_write_manifest(tmp_path, "mosquitto")
_write_manifest(tmp_path, "zigbee2mqtt", requires=[{"app": "mosquitto"}])
rc = main(["app", "remove", "mosquitto"])
assert rc == 2
err = capsys.readouterr().err
assert "required by: zigbee2mqtt" in err
def test_app_remove_unblocked_when_no_dependents(tmp_path, monkeypatch):
_set_env(monkeypatch, tmp_path)
_write_manifest(tmp_path, "mosquitto")
from furtka import dockerops
monkeypatch.setattr(dockerops, "compose_down", lambda *a, **k: None)
rc = main(["app", "remove", "mosquitto"])
assert rc == 0
assert not (tmp_path / "mosquitto").exists()
def test_app_install_uses_plan_for_named_install(tmp_path, monkeypatch, capsys):
"""Named install pulls in dependencies via plan_install."""
_set_env(monkeypatch, tmp_path)
bundled = tmp_path / "bundled"
bundled.mkdir()
monkeypatch.setenv("FURTKA_BUNDLED_APPS_DIR", str(bundled))
# No catalog dir — bundled-only.
monkeypatch.setenv("FURTKA_CATALOG_DIR", str(tmp_path / "catalog"))
_write_manifest(bundled, "mosquitto")
_write_manifest(bundled, "zigbee2mqtt", requires=[{"app": "mosquitto"}])
from furtka import installer, reconciler
# Stub install_from so we don't actually copy files / mess with placeholders.
install_calls: list[str] = []
def fake_install_from(src, settings=None):
install_calls.append(src.name)
return tmp_path / src.name
monkeypatch.setattr(installer, "install_from", fake_install_from)
monkeypatch.setattr(reconciler, "reconcile", lambda *a, **k: [])
rc = main(["app", "install", "zigbee2mqtt"])
assert rc == 0
# Provider installed before consumer.
assert install_calls == ["mosquitto", "zigbee2mqtt"]
def test_app_install_named_with_cycle_exits_2(tmp_path, monkeypatch, capsys):
_set_env(monkeypatch, tmp_path)
bundled = tmp_path / "bundled"
bundled.mkdir()
monkeypatch.setenv("FURTKA_BUNDLED_APPS_DIR", str(bundled))
monkeypatch.setenv("FURTKA_CATALOG_DIR", str(tmp_path / "catalog"))
_write_manifest(bundled, "a", requires=[{"app": "b"}])
_write_manifest(bundled, "b", requires=[{"app": "a"}])
rc = main(["app", "install", "a"])
assert rc == 2
err = capsys.readouterr().err
assert "circular" in err.lower()

tests/test_deps.py Normal file

@@ -0,0 +1,183 @@
import json
import pytest
from furtka import deps
BASE_MANIFEST = {
"display_name": "X",
"version": "0.1.0",
"description": "x",
"volumes": [],
"ports": [],
"icon": "icon.svg",
}
@pytest.fixture
def apps_root(tmp_path, monkeypatch):
"""Three roots: installed, catalog, bundled. Each starts empty."""
installed = tmp_path / "installed"
catalog = tmp_path / "catalog" / "apps"
bundled = tmp_path / "bundled"
for p in (installed, catalog, bundled):
p.mkdir(parents=True)
monkeypatch.setenv("FURTKA_APPS_DIR", str(installed))
monkeypatch.setenv("FURTKA_CATALOG_DIR", str(tmp_path / "catalog"))
monkeypatch.setenv("FURTKA_BUNDLED_APPS_DIR", str(bundled))
return {"installed": installed, "catalog": catalog, "bundled": bundled}
def _write_manifest(root, name, **overrides):
app = root / name
app.mkdir(parents=True, exist_ok=True)
payload = dict(BASE_MANIFEST, name=name, **overrides)
(app / "manifest.json").write_text(json.dumps(payload))
return app
def test_plan_install_no_deps(apps_root):
_write_manifest(apps_root["catalog"], "alone")
plan = deps.plan_install("alone")
assert plan.target == "alone"
assert plan.install_order == ("alone",)
assert plan.to_install == ("alone",)
assert plan.already_installed == frozenset()
def test_plan_install_linear_chain(apps_root):
# A requires B, B requires C — all in catalog, none installed yet.
_write_manifest(apps_root["catalog"], "c")
_write_manifest(apps_root["catalog"], "b", requires=[{"app": "c"}])
_write_manifest(apps_root["catalog"], "a", requires=[{"app": "b"}])
plan = deps.plan_install("a")
assert plan.install_order == ("c", "b", "a")
assert plan.to_install == ("c", "b", "a")
def test_plan_install_diamond(apps_root):
# A requires B and C; B requires D; C requires D. D must appear once,
# before B and C, which come before A.
_write_manifest(apps_root["catalog"], "d")
_write_manifest(apps_root["catalog"], "b", requires=[{"app": "d"}])
_write_manifest(apps_root["catalog"], "c", requires=[{"app": "d"}])
_write_manifest(apps_root["catalog"], "a", requires=[{"app": "b"}, {"app": "c"}])
plan = deps.plan_install("a")
order = plan.install_order
# D first, A last, B and C in between (deterministically alphabetical).
assert order[0] == "d"
assert order[-1] == "a"
assert set(order[1:-1]) == {"b", "c"}
assert order.count("d") == 1
def test_plan_install_already_installed_provider(apps_root):
_write_manifest(apps_root["installed"], "b") # provider already installed
_write_manifest(apps_root["catalog"], "b")
_write_manifest(apps_root["catalog"], "a", requires=[{"app": "b"}])
plan = deps.plan_install("a")
assert plan.install_order == ("b", "a")
assert plan.to_install == ("a",)
assert plan.already_installed == frozenset({"b"})
def test_plan_install_cycle_two_node(apps_root):
# Manifest validator already rejects self-reference at load time, but
# mutual references (A -> B -> A) only show up at plan time.
_write_manifest(apps_root["catalog"], "b", requires=[{"app": "a"}])
_write_manifest(apps_root["catalog"], "a", requires=[{"app": "b"}])
with pytest.raises(deps.DependencyError, match="circular"):
deps.plan_install("a")
def test_plan_install_cycle_three_node(apps_root):
_write_manifest(apps_root["catalog"], "c", requires=[{"app": "a"}])
_write_manifest(apps_root["catalog"], "b", requires=[{"app": "c"}])
_write_manifest(apps_root["catalog"], "a", requires=[{"app": "b"}])
with pytest.raises(deps.DependencyError, match="a -> b -> c -> a"):
deps.plan_install("a")
def test_plan_install_missing_provider(apps_root):
_write_manifest(apps_root["catalog"], "a", requires=[{"app": "ghost"}])
with pytest.raises(deps.DependencyError, match="ghost"):
deps.plan_install("a")
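The cycle tests above expect the error to spell out the cycle path ("a -> b -> c -> a"). A minimal sketch of DFS cycle detection that produces that message shape, assuming a plain name-to-providers mapping (not the real `furtka.deps` resolution, which also consults the installed/catalog/bundled roots):

```python
class DependencyError(Exception):
    pass

def check_cycles(target, requires):
    """requires: dict name -> list of provider names. Raises on a cycle."""
    path = []       # current DFS stack, in visit order
    on_path = set() # fast membership check for the stack

    def visit(node):
        if node in on_path:
            # Slice the stack from the first occurrence to render the loop.
            cycle = path[path.index(node):] + [node]
            raise DependencyError("circular dependency: " + " -> ".join(cycle))
        path.append(node)
        on_path.add(node)
        for dep in requires.get(node, []):
            visit(dep)
        on_path.discard(node)
        path.pop()

    visit(target)
```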
def test_plan_install_prefers_installed_over_catalog(apps_root):
# If a provider exists in both installed and catalog, we resolve via
# installed (so we read the actual on-disk manifest the user has).
_write_manifest(apps_root["installed"], "b")
_write_manifest(apps_root["catalog"], "b", requires=[{"app": "extra"}])
_write_manifest(apps_root["catalog"], "a", requires=[{"app": "b"}])
plan = deps.plan_install("a")
# The installed manifest has no requires, so "extra" is NOT pulled in.
assert plan.install_order == ("b", "a")
def test_dependents_of_empty(apps_root):
assert deps.dependents_of("anything") == ()
def test_dependents_of_finds_consumers(apps_root):
_write_manifest(apps_root["installed"], "x")
_write_manifest(apps_root["installed"], "a", requires=[{"app": "x"}])
_write_manifest(apps_root["installed"], "b", requires=[{"app": "x"}])
_write_manifest(apps_root["installed"], "unrelated")
assert deps.dependents_of("x") == ("a", "b")
assert deps.dependents_of("unrelated") == ()
def test_installed_topo_order_preserves_alpha_when_independent(apps_root):
from furtka.scanner import scan
_write_manifest(apps_root["installed"], "alpha")
_write_manifest(apps_root["installed"], "bravo")
_write_manifest(apps_root["installed"], "charlie")
ordered = deps.installed_topo_order(scan(apps_root["installed"]))
assert [r.manifest.name for r in ordered] == ["alpha", "bravo", "charlie"]
def test_installed_topo_order_puts_providers_first(apps_root):
from furtka.scanner import scan
# Force the dependency edge to beat alphabetical order: consumer
# "alpha" requires provider "zulu", so a naive alphabetical sort
# would put alpha first. The topo order must flip them.
_write_manifest(apps_root["installed"], "zulu")
_write_manifest(apps_root["installed"], "alpha", requires=[{"app": "zulu"}])
ordered = deps.installed_topo_order(scan(apps_root["installed"]))
names = [r.manifest.name for r in ordered]
assert names == ["zulu", "alpha"]
def test_installed_topo_order_missing_provider_tails_app(apps_root):
from furtka.scanner import scan
_write_manifest(apps_root["installed"], "good")
_write_manifest(apps_root["installed"], "needy", requires=[{"app": "ghost"}])
ordered = deps.installed_topo_order(scan(apps_root["installed"]))
names = [r.manifest.name for r in ordered]
# `good` first (no deps), `needy` last (unresolved).
assert names == ["good", "needy"]
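The three ordering tests above pin a stable topological sort: independent apps keep alphabetical order, providers come before consumers, and apps with an unresolvable provider fall to the tail. A Kahn-style sketch over sorted names (assumed shape, not the real `installed_topo_order`):

```python
def stable_topo_order(requires):
    """requires: dict name -> list of provider names; returns ordered names."""
    names = sorted(requires)
    # Apps naming a provider that isn't present can never be satisfied;
    # park them for the tail, preserving alphabetical order among them.
    unresolved = [n for n in names if any(d not in requires for d in requires[n])]
    pending = [n for n in names if n not in unresolved]
    resolved = []
    while pending:
        progressed = False
        for name in list(pending):  # alphabetical scan keeps ties stable
            if all(d in resolved for d in requires[name]):
                resolved.append(name)
                pending.remove(name)
                progressed = True
        if not progressed:
            break  # cycle among the remainder; emit it as-is
    return resolved + pending + unresolved
```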
def test_provider_exec_service_picks_first_service(apps_root, monkeypatch):
from furtka import dockerops
monkeypatch.setattr(
dockerops,
"compose_image_tags",
lambda app_dir, project: {"server": "img:1", "worker": "img:2"},
)
assert deps.provider_exec_service(apps_root["installed"] / "x", "x") == "server"
def test_provider_exec_service_falls_back_to_project_on_docker_error(apps_root, monkeypatch):
from furtka import dockerops
def boom(app_dir, project):
raise dockerops.DockerError("docker not running")
monkeypatch.setattr(dockerops, "compose_image_tags", boom)
assert deps.provider_exec_service(apps_root["installed"] / "x", "myproj") == "myproj"

tests/test_dockerops.py Normal file

@@ -0,0 +1,116 @@
import subprocess
import pytest
from furtka import dockerops
class FakeProc:
def __init__(self, stdout=b"", stderr=b"", returncode=0):
self.stdout = stdout
self.stderr = stderr
self.returncode = returncode
def test_compose_exec_builds_command(tmp_path, monkeypatch):
recorded = {}
def fake_run(cmd, **kwargs):
recorded["cmd"] = cmd
recorded["kwargs"] = kwargs
return FakeProc(stdout="ok\n", returncode=0)
monkeypatch.setattr(subprocess, "run", fake_run)
out = dockerops.compose_exec(tmp_path, "myproj", "svc", ["echo", "hi"])
assert out == "ok\n"
cmd = recorded["cmd"]
# docker compose --project-name myproj --file <path>/docker-compose.yaml exec -T svc echo hi
assert cmd[0] == "docker"
assert cmd[1] == "compose"
assert "--project-name" in cmd and "myproj" in cmd
assert "exec" in cmd
assert "-T" in cmd
# -T must come before the service name
assert cmd.index("-T") < cmd.index("svc")
# argv appended after service
assert cmd[-2:] == ["echo", "hi"]
def test_compose_exec_propagates_env(tmp_path, monkeypatch):
recorded = {}
def fake_run(cmd, **kwargs):
recorded["cmd"] = cmd
return FakeProc()
monkeypatch.setattr(subprocess, "run", fake_run)
dockerops.compose_exec(tmp_path, "p", "s", ["true"], env={"A": "1", "B": "two"})
cmd = recorded["cmd"]
# `--env A=1 --env B=two` should appear before the service name.
s_idx = cmd.index("s")
env_args = cmd[:s_idx]
assert env_args.count("--env") == 2
assert "A=1" in env_args
assert "B=two" in env_args
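The two command-shape tests above fully determine the argv layout. An illustrative builder that satisfies those assertions (a sketch, not the real `dockerops.compose_exec` internals):

```python
from pathlib import Path

def build_exec_cmd(app_dir, project, service, argv, env=None):
    """Assemble `docker compose ... exec -T [--env K=V ...] <service> <argv>`."""
    cmd = [
        "docker", "compose",
        "--project-name", project,
        "--file", str(Path(app_dir) / "docker-compose.yaml"),
        "exec", "-T",  # -T: no TTY, required before the service name
    ]
    for key, value in (env or {}).items():
        cmd += ["--env", f"{key}={value}"]  # env flags precede the service
    return cmd + [service, *argv]          # command argv trails the service
```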
def test_compose_exec_raises_on_nonzero(tmp_path, monkeypatch):
def fake_run(cmd, **kwargs):
return FakeProc(stdout="", stderr="boom", returncode=2)
monkeypatch.setattr(subprocess, "run", fake_run)
with pytest.raises(dockerops.DockerError, match="exited 2"):
dockerops.compose_exec(tmp_path, "p", "s", ["fail"])
def test_compose_exec_raises_on_timeout(tmp_path, monkeypatch):
def fake_run(cmd, **kwargs):
raise subprocess.TimeoutExpired(cmd, timeout=kwargs.get("timeout"))
monkeypatch.setattr(subprocess, "run", fake_run)
with pytest.raises(dockerops.DockerError, match="timed out"):
dockerops.compose_exec(tmp_path, "p", "s", ["sleep", "9999"], timeout=1)
def test_compose_exec_script_streams_via_stdin(tmp_path, monkeypatch):
script = tmp_path / "hook.sh"
body = b"#!/bin/sh\necho hello\n"
script.write_bytes(body)
recorded = {}
def fake_run(cmd, **kwargs):
recorded["cmd"] = cmd
recorded["input"] = kwargs["input"]
return FakeProc(stdout=b"hello\n", returncode=0)
monkeypatch.setattr(subprocess, "run", fake_run)
out = dockerops.compose_exec_script(tmp_path, "p", "s", script)
assert out == "hello\n"
# exec ... s sh -s (script body comes in on stdin)
cmd = recorded["cmd"]
assert cmd[-3:] == ["s", "sh", "-s"]
assert recorded["input"] == body
def test_compose_exec_script_raises_on_nonzero(tmp_path, monkeypatch):
script = tmp_path / "fail.sh"
script.write_bytes(b"exit 1\n")
def fake_run(cmd, **kwargs):
return FakeProc(stdout=b"", stderr=b"hook says no", returncode=1)
monkeypatch.setattr(subprocess, "run", fake_run)
with pytest.raises(dockerops.DockerError, match="hook fail.sh exited 1"):
dockerops.compose_exec_script(tmp_path, "p", "s", script)
def test_compose_exec_script_raises_on_timeout(tmp_path, monkeypatch):
script = tmp_path / "slow.sh"
script.write_bytes(b"sleep 10\n")
def fake_run(cmd, **kwargs):
raise subprocess.TimeoutExpired(cmd, timeout=kwargs.get("timeout"))
monkeypatch.setattr(subprocess, "run", fake_run)
with pytest.raises(dockerops.DockerError, match="hook slow.sh timed out"):
dockerops.compose_exec_script(tmp_path, "p", "s", script, timeout=1)
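The script tests above rely on a stdin-streaming trick: rather than copying the hook into the container, run `sh -s` in the service and pipe the script body in on stdin. A self-contained sketch of that pattern (the wrapper and error type are illustrative; `subprocess.run` and its `input`/`timeout` parameters are real):

```python
import subprocess

class DockerError(Exception):
    pass

def exec_script(cmd_prefix, script_path, timeout=60.0):
    """Run `<cmd_prefix> sh -s` with the script body on stdin.

    cmd_prefix would be the `docker compose ... exec -T <service>` argv;
    an empty prefix runs the script on the host, which is how the
    doctest below exercises it.
    """
    body = script_path.read_bytes()
    try:
        proc = subprocess.run(
            cmd_prefix + ["sh", "-s"],
            input=body, capture_output=True, timeout=timeout,
        )
    except subprocess.TimeoutExpired:
        raise DockerError(f"hook {script_path.name} timed out")
    if proc.returncode != 0:
        raise DockerError(f"hook {script_path.name} exited {proc.returncode}")
    return proc.stdout.decode()
```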


@@ -95,3 +95,23 @@ def test_drive_type_label_nvme_ssd_hdd():
def test_parse_lsblk_handles_empty_output():
assert parse_lsblk_output("") == []
def test_parse_lsblk_drops_boot_usb(monkeypatch):
import drives
monkeypatch.setattr(drives, "_smart_status", lambda _: "passed")
output = "sda 500G disk\nsdb 16G disk\nnvme0n1 1T disk\n"
devices = parse_lsblk_output(output, boot_disk="sdb")
names = [d["name"] for d in devices]
assert "/dev/sdb" not in names
assert names == ["/dev/nvme0n1", "/dev/sda"]
def test_parse_lsblk_no_boot_disk_keeps_all(monkeypatch):
import drives
monkeypatch.setattr(drives, "_smart_status", lambda _: "passed")
output = "sda 500G disk\nsdb 16G disk\n"
names = [d["name"] for d in parse_lsblk_output(output, boot_disk=None)]
assert set(names) == {"/dev/sda", "/dev/sdb"}
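The boot-USB tests above imply a filter over short `lsblk` output: drop the device the system booted from and return the rest sorted by device path. A hedged sketch inferred from those assertions (field layout and sorting are assumptions, not the real `parse_lsblk_output`):

```python
def parse_lsblk(output, boot_disk=None):
    """Parse `NAME SIZE TYPE` lines, skipping non-disks and the boot disk."""
    devices = []
    for line in output.splitlines():
        parts = line.split()
        if len(parts) != 3 or parts[2] != "disk":
            continue  # ignore partitions, loop devices, malformed lines
        name, size, _ = parts
        if boot_disk is not None and name == boot_disk:
            continue  # never offer the boot USB for formatting
        devices.append({"name": f"/dev/{name}", "size": size})
    return sorted(devices, key=lambda d: d["name"])
```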

tests/test_https.py Normal file

@@ -0,0 +1,216 @@
"""Tests for furtka.https — fingerprint extraction + HTTPS toggle.
Since 26.15-alpha the toggle writes/removes TWO snippets atomically:
- The top-level HTTPS listener snippet (enables :443 + tls internal)
- The :80-scoped redirect snippet (forces HTTP to HTTPS)
The fingerprint case uses a throwaway self-signed EC cert with a known
reference fingerprint (computed once via `openssl x509 -fingerprint
-sha256 -noout`) so we verify the PEM-to-DER SHA-256 path without a
runtime subprocess dependency. The toggle cases stub the caddy reload
so we assert both snippet files are written / removed together and that
reload failures roll BOTH snippets back.
"""
import subprocess
import pytest
from furtka import https
# Self-signed test-only cert. Don't trust it anywhere; it's here because
# we need a real PEM whose fingerprint we can pre-compute.
_TEST_CERT_PEM = """-----BEGIN CERTIFICATE-----
MIIBjjCCATOgAwIBAgIUGIKx2BGMvNQwAcZvjwJiaJO1GvEwCgYIKoZIzj0EAwIw
HDEaMBgGA1UEAwwRRnVydGthIFRlc3QgTG9jYWwwHhcNMjYwNDE3MTAxNTMxWhcN
MzYwNDE0MTAxNTMxWjAcMRowGAYDVQQDDBFGdXJ0a2EgVGVzdCBMb2NhbDBZMBMG
ByqGSM49AgEGCCqGSM49AwEHA0IABIfWX2oVXrw+iv4lCcIIceoX24bvRdlEECB5
QoMYphmlOoI492tRCGHxA8eaIwIYqFn1DzBKBRSL0H3xcu+4Pg6jUzBRMB0GA1Ud
DgQWBBSMizCL5Kh+SLE5n12oKV05L9bJXjAfBgNVHSMEGDAWgBSMizCL5Kh+SLE5
n12oKV05L9bJXjAPBgNVHRMBAf8EBTADAQH/MAoGCCqGSM49BAMCA0kAMEYCIQDp
6etGEuj7AGD5zzyzDSpmRiMEgBp1k6fVoLYW7N2K3AIhAK8khUp3gKPo4UqtWNK9
Cs/B0mzRy2MUPGdZ5QU6LoDz
-----END CERTIFICATE-----
"""
_TEST_CERT_FP_SHA256 = (
"40:A7:98:2E:8D:1F:4C:0D:9B:E6:87:ED:91:FA:6F:B1:"
"3D:8A:10:06:79:7C:08:A9:8F:AD:71:0C:B8:29:87:28"
)
def _paths(tmp_path):
"""Return the five paths the toggle touches, in a dict for kwargs
spreading. Keeps each test's fixture boilerplate small."""
return {
"snippet_dir": tmp_path / "furtka.d",
"snippet": tmp_path / "furtka.d" / "redirect.caddyfile",
"https_snippet_dir": tmp_path / "furtka-https.d",
"https_snippet": tmp_path / "furtka-https.d" / "https.caddyfile",
"hostname_file": tmp_path / "etc_hostname",
}
def _prepare_hostname(tmp_path, value="testbox"):
(tmp_path / "etc_hostname").write_text(f"{value}\n")
def test_ca_fingerprint_matches_openssl(tmp_path):
cert = tmp_path / "root.crt"
cert.write_text(_TEST_CERT_PEM)
fp_hex = https._ca_fingerprint(cert)
assert fp_hex is not None
assert https._format_fingerprint(fp_hex) == _TEST_CERT_FP_SHA256
def test_ca_fingerprint_missing_file(tmp_path):
assert https._ca_fingerprint(tmp_path / "nope.crt") is None
def test_ca_fingerprint_no_pem_block(tmp_path):
garbage = tmp_path / "root.crt"
garbage.write_text("not a certificate")
assert https._ca_fingerprint(garbage) is None
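The fingerprint tests above exercise a PEM-to-DER SHA-256 path plus colon-separated formatting. A stdlib-only sketch of that pipeline (`ssl.PEM_cert_to_DER_cert` and `hashlib` are real; the helper names mirror but are not the actual `furtka.https` internals):

```python
import hashlib
import ssl

def ca_fingerprint(pem_text):
    """Return the lowercase hex SHA-256 of the cert's DER bytes, or None."""
    if "BEGIN CERTIFICATE" not in pem_text:
        return None  # no PEM block present
    der = ssl.PEM_cert_to_DER_cert(pem_text)
    return hashlib.sha256(der).hexdigest()

def format_fingerprint(fp_hex):
    """Render hex as the familiar AA:BB:... colon-separated uppercase form."""
    return ":".join(fp_hex[i:i + 2] for i in range(0, len(fp_hex), 2)).upper()
```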
def test_status_no_ca_no_snippet(tmp_path):
s = https.status(ca_path=tmp_path / "root.crt", https_snippet=tmp_path / "https.caddyfile")
assert s == {
"ca_available": False,
"fingerprint_sha256": None,
"force_https": False,
"ca_download_url": "/rootCA.crt",
}
def test_status_with_ca_and_https_snippet(tmp_path):
ca = tmp_path / "root.crt"
ca.write_text(_TEST_CERT_PEM)
https_snip = tmp_path / "https.caddyfile"
https_snip.write_text("furtka.local, furtka {\n\ttls internal\n\timport furtka_routes\n}\n")
s = https.status(ca_path=ca, https_snippet=https_snip)
assert s["ca_available"] is True
assert s["fingerprint_sha256"] == _TEST_CERT_FP_SHA256
assert s["force_https"] is True
def test_status_force_reflects_https_snippet_not_redirect(tmp_path):
"""Authoritative signal for "HTTPS is on" is the listener snippet —
a lone redirect without a :443 listener wouldn't actually serve
HTTPS, so the status must NOT report it as on. Locks the 26.15-alpha semantics."""
ca = tmp_path / "root.crt"
ca.write_text(_TEST_CERT_PEM)
s = https.status(ca_path=ca, https_snippet=tmp_path / "does-not-exist.caddyfile")
assert s["force_https"] is False
def test_set_force_enable_writes_both_snippets_and_reloads(tmp_path):
_prepare_hostname(tmp_path)
p = _paths(tmp_path)
calls = []
def fake_reload():
calls.append("reload")
result = https.set_force_https(True, reload_caddy=fake_reload, **p)
assert result is True
assert p["snippet"].read_text() == https.REDIRECT_CONTENT
written = p["https_snippet"].read_text()
assert "testbox.local, testbox" in written
assert "tls internal" in written
assert "import furtka_routes" in written
assert calls == ["reload"]
def test_set_force_uses_fallback_hostname_when_file_missing(tmp_path):
# No /etc/hostname → fall back to 'furtka' so Caddy gets a parseable
# block instead of an empty hostname that would fail config load.
p = _paths(tmp_path)
result = https.set_force_https(True, reload_caddy=lambda: None, **p)
assert result is True
assert "furtka.local, furtka" in p["https_snippet"].read_text()
def test_set_force_disable_removes_both_snippets(tmp_path):
_prepare_hostname(tmp_path)
p = _paths(tmp_path)
p["snippet_dir"].mkdir()
p["https_snippet_dir"].mkdir()
p["snippet"].write_text(https.REDIRECT_CONTENT)
p["https_snippet"].write_text("furtka.local { tls internal }\n")
result = https.set_force_https(False, reload_caddy=lambda: None, **p)
assert result is False
assert not p["snippet"].exists()
assert not p["https_snippet"].exists()
def test_set_force_disable_is_idempotent_when_already_off(tmp_path):
p = _paths(tmp_path)
result = https.set_force_https(False, reload_caddy=lambda: None, **p)
assert result is False
assert not p["snippet"].exists()
assert not p["https_snippet"].exists()
def test_reload_failure_rolls_back_enable(tmp_path):
_prepare_hostname(tmp_path)
p = _paths(tmp_path)
def failing_reload():
raise subprocess.CalledProcessError(1, ["systemctl"], stderr="bad config")
with pytest.raises(https.HttpsError, match="caddy reload failed: bad config"):
https.set_force_https(True, reload_caddy=failing_reload, **p)
# Rollback: since neither snippet existed before, neither exists after.
assert not p["snippet"].exists()
assert not p["https_snippet"].exists()
def test_reload_failure_rolls_back_disable(tmp_path):
_prepare_hostname(tmp_path)
p = _paths(tmp_path)
p["snippet_dir"].mkdir()
p["https_snippet_dir"].mkdir()
original_redirect = "redir https://{host}{uri} permanent\n# marker\n"
original_https = "# old https block\nfurtka.local { tls internal }\n"
p["snippet"].write_text(original_redirect)
p["https_snippet"].write_text(original_https)
def failing_reload():
raise subprocess.CalledProcessError(1, ["systemctl"], stderr="bad config")
with pytest.raises(https.HttpsError):
https.set_force_https(False, reload_caddy=failing_reload, **p)
# Rollback: both snippets are restored to their exact prior contents.
assert p["snippet"].read_text() == original_redirect
assert p["https_snippet"].read_text() == original_https
def test_systemctl_missing_raises_and_rolls_back(tmp_path):
_prepare_hostname(tmp_path)
p = _paths(tmp_path)
def missing_systemctl():
raise FileNotFoundError(2, "No such file", "systemctl")
with pytest.raises(https.HttpsError, match="systemctl not available"):
https.set_force_https(True, reload_caddy=missing_systemctl, **p)
assert not p["snippet"].exists()
assert not p["https_snippet"].exists()
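The rollback tests above all enforce one pattern: snapshot each snippet's prior contents (or absence), apply the new state, and on a failed reload restore every file exactly as it was. A generic sketch of that snapshot/restore swap (names are illustrative, not the real `furtka.https` internals):

```python
from pathlib import Path

def swap_with_rollback(files, new_contents, reload_fn):
    """files: list[Path]; new_contents: parallel list[str | None] (None = remove)."""
    # Snapshot BEFORE touching anything: (path, prior text or None-if-absent).
    before = [(p, p.read_text() if p.exists() else None) for p in files]
    try:
        for path, content in zip(files, new_contents):
            if content is None:
                path.unlink(missing_ok=True)
            else:
                path.parent.mkdir(parents=True, exist_ok=True)
                path.write_text(content)
        reload_fn()  # e.g. `systemctl reload caddy`; may raise
    except Exception:
        # Restore every file to its exact prior state, then re-raise.
        for path, old in before:
            if old is None:
                path.unlink(missing_ok=True)
            else:
                path.write_text(old)
        raise
```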
def test_redirect_snippet_content_is_caddy_redir_directive():
# Lock the exact directive. A regression here silently stops the
# redirect from taking effect even though the file-swap looks fine.
assert https.REDIRECT_CONTENT.strip() == "redir https://{host}{uri} permanent"
def test_https_snippet_content_has_tls_internal_and_routes(tmp_path):
# Lock the shape of the opt-in HTTPS listener block. Caddy parses
# this verbatim — changing the shape without updating the test
# risks shipping a silently-broken Caddyfile import.
s = https._https_snippet_content("mybox")
assert "mybox.local, mybox {" in s
assert "\ttls internal" in s
assert "\timport furtka_routes" in s
assert s.endswith("}\n")


@@ -0,0 +1,480 @@
"""Tests for the background app-install runner.
Same shape as test_catalog.py / test_updater.py: fixture reloads the
module with env-overridden paths, dockerops calls are stubbed so nothing
touches a real daemon. Asserts that state transitions happen in the
right order and that exceptions flip the state to "error" with the
message before re-raising.
"""
from __future__ import annotations
import json
from pathlib import Path
import pytest
@pytest.fixture
def runner(tmp_path, monkeypatch):
apps = tmp_path / "apps"
apps.mkdir()
monkeypatch.setenv("FURTKA_APPS_DIR", str(apps))
monkeypatch.setenv("FURTKA_INSTALL_STATE", str(tmp_path / "install-state.json"))
monkeypatch.setenv("FURTKA_INSTALL_PLAN", str(tmp_path / "install-plan.json"))
monkeypatch.setenv("FURTKA_INSTALL_LOCK", str(tmp_path / "install.lock"))
import importlib
from furtka import install_runner as r
from furtka import paths as p
importlib.reload(p)
importlib.reload(r)
return r
def _write_installed_app(apps_dir: Path, name: str = "fileshare", **overrides):
app = apps_dir / name
app.mkdir()
manifest = {
"name": name,
"display_name": "Fileshare",
"version": "0.1.0",
"description": "Test fixture",
"volumes": ["files"],
"ports": [445],
"icon": "icon.svg",
**overrides,
}
(app / "manifest.json").write_text(json.dumps(manifest))
(app / "docker-compose.yaml").write_text("services: {}\n")
return app
def test_write_and_read_state_round_trip(runner):
runner.write_state("pulling_image", app="jellyfin")
s = runner.read_state()
assert s["stage"] == "pulling_image"
assert s["app"] == "jellyfin"
assert "updated_at" in s
def test_read_state_returns_empty_when_missing(runner):
assert runner.read_state() == {}
def test_read_state_returns_empty_on_junk(runner):
runner.state_path().parent.mkdir(parents=True, exist_ok=True)
runner.state_path().write_text("{not json")
assert runner.read_state() == {}
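The three state tests above describe a tolerant JSON state file: writes include a timestamp, reads swallow both a missing file and junk contents by returning `{}`. A sketch of that contract (field names mirror the tests; the implementation is assumed):

```python
import json
import time
from pathlib import Path

def write_state(path, stage, **fields):
    """Persist the current install stage plus arbitrary extra fields."""
    payload = {"stage": stage, "updated_at": time.time(), **fields}
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(json.dumps(payload))

def read_state(path):
    """Best-effort read: missing or corrupt state is treated as empty."""
    try:
        return json.loads(Path(path).read_text())
    except (FileNotFoundError, json.JSONDecodeError):
        return {}
```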
def test_acquire_lock_prevents_concurrent_runs(runner):
held = runner.acquire_lock()
try:
with pytest.raises(runner.InstallRunnerError, match="in progress"):
runner.acquire_lock()
finally:
held.close()
def test_run_install_happy_path(runner, monkeypatch):
import furtka.dockerops as dockerops
from furtka.paths import apps_dir
_write_installed_app(apps_dir(), "fileshare")
calls = []
monkeypatch.setattr(dockerops, "compose_pull", lambda *a, **k: calls.append(("pull", a)))
monkeypatch.setattr(dockerops, "ensure_volume", lambda name: calls.append(("vol", name)))
monkeypatch.setattr(dockerops, "compose_up", lambda *a, **k: calls.append(("up", a)))
runner.run_install("fileshare")
# Ordering: pull first, then volumes, then up.
assert [c[0] for c in calls] == ["pull", "vol", "up"]
# Exactly the namespaced volume name got created.
assert calls[1] == ("vol", "furtka_fileshare_files")
# Final state is "done" with the manifest version.
s = runner.read_state()
assert s["stage"] == "done"
assert s["app"] == "fileshare"
assert s["version"] == "0.1.0"
def test_run_install_writes_error_on_pull_failure(runner, monkeypatch):
import furtka.dockerops as dockerops
from furtka.paths import apps_dir
_write_installed_app(apps_dir(), "fileshare")
def boom(*a, **k):
raise dockerops.DockerError("pull failed: registry unreachable")
monkeypatch.setattr(dockerops, "compose_pull", boom)
monkeypatch.setattr(dockerops, "ensure_volume", lambda name: None)
monkeypatch.setattr(dockerops, "compose_up", lambda *a, **k: None)
with pytest.raises(dockerops.DockerError):
runner.run_install("fileshare")
s = runner.read_state()
assert s["stage"] == "error"
assert s["app"] == "fileshare"
assert "registry unreachable" in s["error"]
def test_run_install_writes_error_on_up_failure(runner, monkeypatch):
import furtka.dockerops as dockerops
from furtka.paths import apps_dir
_write_installed_app(apps_dir(), "fileshare")
monkeypatch.setattr(dockerops, "compose_pull", lambda *a, **k: None)
monkeypatch.setattr(dockerops, "ensure_volume", lambda name: None)
def boom(*a, **k):
raise dockerops.DockerError("compose up: container refused to start")
monkeypatch.setattr(dockerops, "compose_up", boom)
with pytest.raises(dockerops.DockerError):
runner.run_install("fileshare")
s = runner.read_state()
assert s["stage"] == "error"
assert "refused to start" in s["error"]
def test_run_install_releases_lock_after_done(runner, monkeypatch):
import furtka.dockerops as dockerops
from furtka.paths import apps_dir
_write_installed_app(apps_dir(), "fileshare")
monkeypatch.setattr(dockerops, "compose_pull", lambda *a, **k: None)
monkeypatch.setattr(dockerops, "ensure_volume", lambda name: None)
monkeypatch.setattr(dockerops, "compose_up", lambda *a, **k: None)
runner.run_install("fileshare")
# Lock released — a fresh acquire must succeed.
fh = runner.acquire_lock()
fh.close()
def test_run_install_releases_lock_after_error(runner, monkeypatch):
import furtka.dockerops as dockerops
from furtka.paths import apps_dir
_write_installed_app(apps_dir(), "fileshare")
monkeypatch.setattr(
dockerops, "compose_pull", lambda *a, **k: (_ for _ in ()).throw(dockerops.DockerError("x"))
)
with pytest.raises(dockerops.DockerError):
runner.run_install("fileshare")
fh = runner.acquire_lock()
fh.close()
# --- plan-aware multi-app installs -------------------------------------------
def _write_plan(plan_path: Path, target: str, to_install: list[str]) -> None:
plan_path.write_text(json.dumps({"target": target, "to_install": to_install}))
def _stub_docker_ops(monkeypatch, calls: list):
import furtka.dockerops as dockerops
def _pull(app_dir, project):
calls.append(("pull", project))
def _vol(name):
calls.append(("vol", name))
def _up(app_dir, project):
calls.append(("up", project))
monkeypatch.setattr(dockerops, "compose_pull", _pull)
monkeypatch.setattr(dockerops, "ensure_volume", _vol)
monkeypatch.setattr(dockerops, "compose_up", _up)
def test_run_install_iterates_plan_order(runner, monkeypatch):
from furtka.paths import apps_dir
_write_installed_app(apps_dir(), "mosquitto")
_write_installed_app(
apps_dir(),
"zigbee2mqtt",
requires=[{"app": "mosquitto"}],
)
_write_plan(runner.plan_path(), "zigbee2mqtt", ["mosquitto", "zigbee2mqtt"])
calls: list = []
_stub_docker_ops(monkeypatch, calls)
runner.run_install("zigbee2mqtt")
# mosquitto fully reconciled before zigbee2mqtt starts.
assert [c for c in calls if c[0] == "pull"] == [("pull", "mosquitto"), ("pull", "zigbee2mqtt")]
assert [c for c in calls if c[0] == "up"] == [("up", "mosquitto"), ("up", "zigbee2mqtt")]
s = runner.read_state()
assert s["stage"] == "done"
assert s["target"] == "zigbee2mqtt"
assert s["app"] == "zigbee2mqtt"
def test_run_install_fires_on_install_hook_against_provider(runner, monkeypatch):
import furtka.dockerops as dockerops
from furtka.paths import apps_dir
mosq = _write_installed_app(apps_dir(), "mosquitto")
# Provider ships a hook script.
(mosq / "hooks").mkdir()
hook = mosq / "hooks" / "create-user.sh"
hook.write_bytes(b"#!/bin/sh\necho MQTT_USER=z2m\necho MQTT_PASS=hunter2\n")
consumer = _write_installed_app(
apps_dir(),
"zigbee2mqtt",
requires=[{"app": "mosquitto", "on_install": "hooks/create-user.sh"}],
)
# Consumer's .env starts empty.
(consumer / ".env").write_text("")
_write_plan(runner.plan_path(), "zigbee2mqtt", ["mosquitto", "zigbee2mqtt"])
calls: list = []
_stub_docker_ops(monkeypatch, calls)
captured = {}
def fake_exec_script(app_dir, project, service, script_path, *, env, timeout):
captured["app_dir"] = app_dir
captured["project"] = project
captured["service"] = service
captured["script_path"] = script_path
captured["env"] = env
captured["timeout"] = timeout
return "MQTT_USER=z2m\nMQTT_PASS=hunter2\n"
# Tell the provider_exec_service helper to pick a deterministic service.
monkeypatch.setattr(
dockerops, "compose_image_tags", lambda a, p: {"mosquitto": "eclipse-mosquitto:2"}
)
monkeypatch.setattr(dockerops, "compose_exec_script", fake_exec_script)
runner.run_install("zigbee2mqtt")
# Hook was called against the provider, with the consumer's name + version
# in env, and the timeout we expect.
assert captured["project"] == "mosquitto"
assert captured["service"] == "mosquitto"
assert captured["script_path"] == hook
assert captured["env"] == {
"FURTKA_CONSUMER_APP": "zigbee2mqtt",
"FURTKA_CONSUMER_VERSION": "0.1.0",
}
assert captured["timeout"] == 60.0
# Consumer's .env now has the hook output.
env_text = (consumer / ".env").read_text()
assert "MQTT_USER=z2m" in env_text
assert "MQTT_PASS=hunter2" in env_text
# Mode 0600.
assert (consumer / ".env").stat().st_mode & 0o777 == 0o600
def test_run_install_hook_furtka_json_sentinel(runner, monkeypatch):
import furtka.dockerops as dockerops
from furtka.paths import apps_dir
_write_installed_app(apps_dir(), "mosquitto")
consumer = _write_installed_app(
apps_dir(),
"z2m",
requires=[{"app": "mosquitto", "on_install": "hooks/x.sh"}],
)
(apps_dir() / "mosquitto" / "hooks").mkdir()
(apps_dir() / "mosquitto" / "hooks" / "x.sh").write_bytes(b"")
(consumer / ".env").write_text("")
_write_plan(runner.plan_path(), "z2m", ["mosquitto", "z2m"])
calls: list = []
_stub_docker_ops(monkeypatch, calls)
monkeypatch.setattr(dockerops, "compose_image_tags", lambda a, p: {"mosquitto": "img"})
# Hook output mixes plain KEY=VALUE and a FURTKA_JSON sentinel. JSON
# wins on conflict (overlays plain).
monkeypatch.setattr(
dockerops,
"compose_exec_script",
lambda *a, **k: 'MQTT_USER=oldval\nFURTKA_JSON: {"MQTT_USER": "newval", "TOKEN": "abc"}\n',
)
runner.run_install("z2m")
env_text = (consumer / ".env").read_text()
assert "MQTT_USER=newval" in env_text # JSON overlay wins
assert "TOKEN=abc" in env_text
def test_run_install_hook_rejects_bad_key_name(runner, monkeypatch):
import furtka.dockerops as dockerops
from furtka.paths import apps_dir
_write_installed_app(apps_dir(), "mosquitto")
consumer = _write_installed_app(
apps_dir(),
"z2m",
requires=[{"app": "mosquitto", "on_install": "hooks/x.sh"}],
)
(apps_dir() / "mosquitto" / "hooks").mkdir()
(apps_dir() / "mosquitto" / "hooks" / "x.sh").write_bytes(b"")
(consumer / ".env").write_text("")
_write_plan(runner.plan_path(), "z2m", ["mosquitto", "z2m"])
calls: list = []
_stub_docker_ops(monkeypatch, calls)
monkeypatch.setattr(dockerops, "compose_image_tags", lambda a, p: {"mosquitto": "img"})
monkeypatch.setattr(dockerops, "compose_exec_script", lambda *a, **k: "lowercase_key=oops\n")
with pytest.raises(runner.InstallRunnerError, match="UPPER_SNAKE_CASE"):
runner.run_install("z2m")
s = runner.read_state()
assert s["stage"] == "error"
# Consumer's compose_up was never called because the hook failed.
assert not any(c[0] == "up" and c[1] == "z2m" for c in calls)
def test_run_install_hook_rejects_placeholder_value(runner, monkeypatch):
import furtka.dockerops as dockerops
from furtka.paths import apps_dir
_write_installed_app(apps_dir(), "mosquitto")
consumer = _write_installed_app(
apps_dir(),
"z2m",
requires=[{"app": "mosquitto", "on_install": "hooks/x.sh"}],
)
(apps_dir() / "mosquitto" / "hooks").mkdir()
(apps_dir() / "mosquitto" / "hooks" / "x.sh").write_bytes(b"")
(consumer / ".env").write_text("")
_write_plan(runner.plan_path(), "z2m", ["mosquitto", "z2m"])
calls: list = []
_stub_docker_ops(monkeypatch, calls)
monkeypatch.setattr(dockerops, "compose_image_tags", lambda a, p: {"mosquitto": "img"})
monkeypatch.setattr(dockerops, "compose_exec_script", lambda *a, **k: "MQTT_PASS=changeme\n")
with pytest.raises(runner.InstallRunnerError, match="placeholder"):
runner.run_install("z2m")
def test_run_install_hook_failure_skips_consumer_compose_up(runner, monkeypatch):
import furtka.dockerops as dockerops
from furtka.paths import apps_dir
_write_installed_app(apps_dir(), "mosquitto")
consumer = _write_installed_app(
apps_dir(),
"z2m",
requires=[{"app": "mosquitto", "on_install": "hooks/x.sh"}],
)
(apps_dir() / "mosquitto" / "hooks").mkdir()
(apps_dir() / "mosquitto" / "hooks" / "x.sh").write_bytes(b"")
(consumer / ".env").write_text("")
_write_plan(runner.plan_path(), "z2m", ["mosquitto", "z2m"])
calls: list = []
_stub_docker_ops(monkeypatch, calls)
monkeypatch.setattr(dockerops, "compose_image_tags", lambda a, p: {"mosquitto": "img"})
def boom(*a, **k):
raise dockerops.DockerError("hook returned 1: connection refused")
monkeypatch.setattr(dockerops, "compose_exec_script", boom)
with pytest.raises(dockerops.DockerError):
runner.run_install("z2m")
s = runner.read_state()
assert s["stage"] == "error"
assert s["target"] == "z2m"
# The provider's compose_up DID run earlier in the plan.
assert ("up", "mosquitto") in calls
# But the consumer's never did.
assert ("up", "z2m") not in calls
def test_run_install_missing_provider_hook_file_raises(runner, monkeypatch):
import furtka.dockerops as dockerops
from furtka.paths import apps_dir
_write_installed_app(apps_dir(), "mosquitto")
consumer = _write_installed_app(
apps_dir(),
"z2m",
requires=[{"app": "mosquitto", "on_install": "hooks/missing.sh"}],
)
(consumer / ".env").write_text("")
_write_plan(runner.plan_path(), "z2m", ["mosquitto", "z2m"])
calls: list = []
_stub_docker_ops(monkeypatch, calls)
monkeypatch.setattr(dockerops, "compose_image_tags", lambda a, p: {"mosquitto": "img"})
with pytest.raises(runner.InstallRunnerError, match="missing in provider"):
runner.run_install("z2m")
def test_run_install_plan_file_is_consumed_after_read(runner, monkeypatch):
"""After a run, the plan file is removed so a stale plan can't steer the next run."""
from furtka.paths import apps_dir
_write_installed_app(apps_dir(), "fileshare")
_write_plan(runner.plan_path(), "fileshare", ["fileshare"])
calls: list = []
_stub_docker_ops(monkeypatch, calls)
runner.run_install("fileshare")
assert not runner.plan_path().exists()
# --- _parse_hook_output (unit) -----------------------------------------------
def test_parse_hook_output_kv_only(runner):
out = runner._parse_hook_output("MQTT_USER=z2m\nMQTT_PASS=hunter2\n")
assert out == {"MQTT_USER": "z2m", "MQTT_PASS": "hunter2"}
def test_parse_hook_output_rejects_lowercase_key(runner):
with pytest.raises(runner.InstallRunnerError, match="UPPER_SNAKE_CASE"):
runner._parse_hook_output("lowercase=oops\n")
def test_parse_hook_output_furtka_json(runner):
out = runner._parse_hook_output('FURTKA_JSON: {"FOO": "bar", "BAZ": "qux"}\n')
assert out == {"FOO": "bar", "BAZ": "qux"}
def test_parse_hook_output_furtka_json_rejects_non_string(runner):
with pytest.raises(runner.InstallRunnerError, match="must be a string"):
runner._parse_hook_output('FURTKA_JSON: {"FOO": 42}\n')
def test_parse_hook_output_furtka_json_rejects_bad_payload(runner):
with pytest.raises(runner.InstallRunnerError, match="must be an object"):
runner._parse_hook_output('FURTKA_JSON: ["not", "a", "dict"]\n')
def test_parse_hook_output_furtka_json_invalid_json(runner):
with pytest.raises(runner.InstallRunnerError, match="invalid FURTKA_JSON"):
runner._parse_hook_output("FURTKA_JSON: {not json}\n")


@@ -267,3 +267,173 @@ def test_read_env_values_roundtrip(tmp_path, fake_dirs):
write_env(p, {"A": "plain", "B": "has space", "C": 'has "quote"', "D": ""})
values = read_env_values(p)
assert values == {"A": "plain", "B": "has space", "C": 'has "quote"', "D": ""}
# --- path-type settings ------------------------------------------------------
PATH_MANIFEST = dict(
VALID_MANIFEST,
name="jellyfin",
settings=[
{
"name": "MEDIA_PATH",
"label": "Medienordner",
"type": "path",
"required": True,
}
],
)
OPTIONAL_PATH_MANIFEST = dict(
VALID_MANIFEST,
name="jellyfin",
settings=[{"name": "OPTIONAL_PATH", "label": "Optional", "type": "path", "required": False}],
)
def test_install_with_valid_path_succeeds(tmp_path, fake_dirs):
media = tmp_path / "media"
media.mkdir()
src = _write_app_source(tmp_path, "jellyfin", PATH_MANIFEST)
installer.install_from(src, settings={"MEDIA_PATH": str(media)})
target = apps_dir() / "jellyfin"
assert f"MEDIA_PATH={media}" in (target / ".env").read_text()
def test_install_rejects_nonexistent_path(tmp_path, fake_dirs):
src = _write_app_source(tmp_path, "jellyfin", PATH_MANIFEST)
with pytest.raises(installer.InstallError, match="does not exist"):
installer.install_from(src, settings={"MEDIA_PATH": str(tmp_path / "ghost")})
def test_install_rejects_path_that_is_a_file(tmp_path, fake_dirs):
f = tmp_path / "not-a-dir"
f.write_text("hi")
src = _write_app_source(tmp_path, "jellyfin", PATH_MANIFEST)
with pytest.raises(installer.InstallError, match="is not a directory"):
installer.install_from(src, settings={"MEDIA_PATH": str(f)})
def test_install_rejects_relative_path(tmp_path, fake_dirs):
src = _write_app_source(tmp_path, "jellyfin", PATH_MANIFEST)
with pytest.raises(installer.InstallError, match="absolute path"):
installer.install_from(src, settings={"MEDIA_PATH": "media"})
def test_install_rejects_system_path(tmp_path, fake_dirs):
src = _write_app_source(tmp_path, "jellyfin", PATH_MANIFEST)
with pytest.raises(installer.InstallError, match="system path"):
installer.install_from(src, settings={"MEDIA_PATH": "/etc"})
def test_install_rejects_root_filesystem(tmp_path, fake_dirs):
src = _write_app_source(tmp_path, "jellyfin", PATH_MANIFEST)
with pytest.raises(installer.InstallError, match="system path"):
installer.install_from(src, settings={"MEDIA_PATH": "/"})
def test_install_rejects_deny_list_via_traversal(tmp_path, fake_dirs):
# /mnt/../etc resolves to /etc — must be caught after Path.resolve().
src = _write_app_source(tmp_path, "jellyfin", PATH_MANIFEST)
with pytest.raises(installer.InstallError, match="system path"):
installer.install_from(src, settings={"MEDIA_PATH": "/mnt/../etc"})
def test_install_accepts_empty_optional_path(tmp_path, fake_dirs):
src = _write_app_source(tmp_path, "jellyfin", OPTIONAL_PATH_MANIFEST)
installer.install_from(src, settings={"OPTIONAL_PATH": ""})
target = apps_dir() / "jellyfin"
assert (target / ".env").exists()
def test_update_env_rejects_invalid_path(tmp_path, fake_dirs):
# First install with a valid path.
media = tmp_path / "media"
media.mkdir()
src = _write_app_source(tmp_path, "jellyfin", PATH_MANIFEST)
installer.install_from(src, settings={"MEDIA_PATH": str(media)})
# Then try to update to a bad path.
with pytest.raises(installer.InstallError, match="does not exist"):
installer.update_env("jellyfin", {"MEDIA_PATH": str(tmp_path / "ghost")})
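These tests encode the path-setting policy: the value must be an absolute path to an existing directory, must be resolved before the deny-list check so `/mnt/../etc` can't sneak past, and may be empty only when the setting is optional. A sketch of a validator matching that policy (the `validate_path_setting` / `InstallError` names and the deny-list prefixes are illustrative assumptions, not furtka's real list):

```python
from pathlib import Path

class InstallError(Exception):
    pass

# Illustrative deny-list; checked only AFTER Path.resolve() so traversal
# like /mnt/../etc is caught as /etc.
_DENY = ("/etc", "/boot", "/proc", "/sys", "/dev", "/usr", "/bin", "/sbin")

def validate_path_setting(raw: str, *, required: bool) -> None:
    if not raw:
        if required:
            raise InstallError("required path is empty")
        return  # optional settings may stay blank
    p = Path(raw)
    if not p.is_absolute():
        raise InstallError(f"{raw!r} must be an absolute path")
    p = p.resolve()
    if p == Path("/") or any(p == Path(d) or Path(d) in p.parents for d in _DENY):
        raise InstallError(f"{p} is a protected system path")
    if not p.exists():
        raise InstallError(f"{p} does not exist")
    if not p.is_dir():
        raise InstallError(f"{p} is not a directory")
```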
# --- parse_env_text ----------------------------------------------------------
def test_parse_env_text_basic():
from furtka.installer import parse_env_text
out = parse_env_text("A=1\nB=two\n#comment\n\nC=three=four\n")
assert out == {"A": "1", "B": "two", "C": "three=four"}
def test_parse_env_text_handles_quoted_values():
from furtka.installer import parse_env_text
out = parse_env_text('A="has space"\nB=\'plain\'\nC="quote \\"inside\\""\n')
assert out == {"A": "has space", "B": "plain", "C": 'quote "inside"'}
def test_parse_env_text_ignores_malformed_lines():
from furtka.installer import parse_env_text
out = parse_env_text("no-equals-sign\n=missing-key\nGOOD=ok\n")
assert out == {"GOOD": "ok", "": "missing-key"}
# --- install_plan driver -----------------------------------------------------
def test_install_plan_calls_install_from_in_order(tmp_path, fake_dirs, monkeypatch):
from furtka.deps import DepPlan
calls: list[tuple[str, dict | None]] = []
def fake_resolve(name):
return tmp_path / "src" / name
def fake_install_from(src, settings=None):
calls.append((src.name, settings))
return apps_dir() / src.name
monkeypatch.setattr(installer, "resolve_source", fake_resolve)
monkeypatch.setattr(installer, "install_from", fake_install_from)
plan = DepPlan(
target="a",
install_order=("c", "b", "a"),
already_installed=frozenset(),
to_install=("c", "b", "a"),
)
out = installer.install_plan(plan, settings_target={"K": "v"})
assert [name for name, _ in calls] == ["c", "b", "a"]
# Only the target receives settings.
assert calls[0] == ("c", None)
assert calls[1] == ("b", None)
assert calls[2] == ("a", {"K": "v"})
assert [p.name for p in out] == ["c", "b", "a"]
def test_install_plan_skips_already_installed(tmp_path, fake_dirs, monkeypatch):
from furtka.deps import DepPlan
calls: list[str] = []
def fake_resolve(name):
return tmp_path / "src" / name
def fake_install_from(src, settings=None):
calls.append(src.name)
return apps_dir() / src.name
monkeypatch.setattr(installer, "resolve_source", fake_resolve)
monkeypatch.setattr(installer, "install_from", fake_install_from)
plan = DepPlan(
target="a",
install_order=("b", "a"),
already_installed=frozenset({"b"}),
to_install=("a",),
)
installer.install_plan(plan)
assert calls == ["a"]


@@ -95,6 +95,21 @@ def test_settings_optional_default_empty(tmp_path):
m = load_manifest(path)
assert m.settings == ()
assert m.description_long == ""
assert m.open_url == ""
def test_open_url_stored_when_present(tmp_path):
payload = dict(VALID_MANIFEST, open_url="smb://{host}/files")
path = _write_app(tmp_path, "fileshare", payload)
m = load_manifest(path)
assert m.open_url == "smb://{host}/files"
def test_open_url_non_string_rejected(tmp_path):
payload = dict(VALID_MANIFEST, open_url=42)
path = _write_app(tmp_path, "fileshare", payload)
with pytest.raises(ManifestError, match="open_url"):
load_manifest(path)
def test_settings_parsed(tmp_path):
@@ -140,6 +155,27 @@ def test_settings_reject_unknown_type(tmp_path):
load_manifest(path)
def test_settings_accept_path_type(tmp_path):
payload = dict(
VALID_MANIFEST,
settings=[
{
"name": "MEDIA_PATH",
"label": "Medienordner",
"description": "Absoluter Pfad zu deinen Medien",
"type": "path",
"required": True,
}
],
)
path = _write_app(tmp_path, "fileshare", payload)
m = load_manifest(path)
assert len(m.settings) == 1
assert m.settings[0].name == "MEDIA_PATH"
assert m.settings[0].type == "path"
assert m.settings[0].required is True
def test_settings_reject_duplicate_name(tmp_path):
bad = dict(
VALID_MANIFEST,
@@ -155,3 +191,104 @@ def test_settings_non_list_rejected(tmp_path):
path = _write_app(tmp_path, "fileshare", bad)
with pytest.raises(ManifestError, match="settings must be a list"):
load_manifest(path)
def test_requires_optional_default_empty(tmp_path):
path = _write_app(tmp_path, "fileshare", VALID_MANIFEST)
m = load_manifest(path)
assert m.requires == ()
def test_requires_parsed_full_entry(tmp_path):
payload = dict(
VALID_MANIFEST,
name="zigbee2mqtt",
requires=[
{
"app": "mosquitto",
"on_install": "hooks/create-user.sh",
"on_start": "hooks/ensure-user.sh",
}
],
)
path = _write_app(tmp_path, "zigbee2mqtt", payload)
m = load_manifest(path)
assert len(m.requires) == 1
r = m.requires[0]
assert r.app == "mosquitto"
assert r.on_install == "hooks/create-user.sh"
assert r.on_start == "hooks/ensure-user.sh"
def test_requires_app_only_no_hooks(tmp_path):
payload = dict(VALID_MANIFEST, name="z2m", requires=[{"app": "mosquitto"}])
path = _write_app(tmp_path, "z2m", payload)
m = load_manifest(path)
assert m.requires[0].app == "mosquitto"
assert m.requires[0].on_install is None
assert m.requires[0].on_start is None
def test_requires_rejects_self_reference(tmp_path):
payload = dict(VALID_MANIFEST, requires=[{"app": "fileshare"}])
path = _write_app(tmp_path, "fileshare", payload)
with pytest.raises(ManifestError, match="self-reference"):
load_manifest(path)
def test_requires_rejects_duplicate_app(tmp_path):
payload = dict(
VALID_MANIFEST,
name="z2m",
requires=[{"app": "mosquitto"}, {"app": "mosquitto"}],
)
path = _write_app(tmp_path, "z2m", payload)
with pytest.raises(ManifestError, match="duplicate"):
load_manifest(path)
def test_requires_rejects_traversal_hook_path(tmp_path):
payload = dict(
VALID_MANIFEST,
name="z2m",
requires=[{"app": "mosquitto", "on_install": "../../etc/passwd"}],
)
path = _write_app(tmp_path, "z2m", payload)
with pytest.raises(ManifestError, match=r"must not contain '\.\.'"):
load_manifest(path)
def test_requires_rejects_absolute_hook_path(tmp_path):
payload = dict(
VALID_MANIFEST,
name="z2m",
requires=[{"app": "mosquitto", "on_start": "/tmp/hook.sh"}],
)
path = _write_app(tmp_path, "z2m", payload)
with pytest.raises(ManifestError, match="must be relative"):
load_manifest(path)
def test_requires_non_list_rejected(tmp_path):
payload = dict(VALID_MANIFEST, requires={"app": "mosquitto"})
path = _write_app(tmp_path, "fileshare", payload)
with pytest.raises(ManifestError, match="requires must be a list"):
load_manifest(path)
def test_requires_rejects_invalid_app_name(tmp_path):
payload = dict(VALID_MANIFEST, requires=[{"app": "Bad-Name!"}])
path = _write_app(tmp_path, "fileshare", payload)
with pytest.raises(ManifestError, match="lowercase app name"):
load_manifest(path)
def test_requires_rejects_empty_hook_string(tmp_path):
payload = dict(
VALID_MANIFEST,
name="z2m",
requires=[{"app": "mosquitto", "on_install": ""}],
)
path = _write_app(tmp_path, "z2m", payload)
with pytest.raises(ManifestError, match="non-empty string"):
load_manifest(path)

tests/test_passwd.py (new file)

@@ -0,0 +1,74 @@
"""Tests for furtka.passwd — stdlib-only password hashing.
The primary contract: hash/verify roundtrips cleanly, AND the verifier
accepts the werkzeug hash format that 26.11 / 26.12 boxes wrote to
``users.json``. Losing that backward compat would lock out existing
admins after a 26.13+ upgrade.
"""
from __future__ import annotations
from furtka import passwd
def test_hash_roundtrip():
h = passwd.hash_password("hunter2")
assert passwd.verify_password("hunter2", h)
assert not passwd.verify_password("wrong", h)
def test_hash_is_salted():
# Two separate hashes of the same password must diverge.
a = passwd.hash_password("same-pw")
b = passwd.hash_password("same-pw")
assert a != b
assert passwd.verify_password("same-pw", a)
assert passwd.verify_password("same-pw", b)
def test_generated_hash_format():
# Shape is pbkdf2:<hash>:<iter>$<salt>$<hex>
h = passwd.hash_password("x")
parts = h.split("$", 2)
assert len(parts) == 3
method, salt, digest = parts
assert method.startswith("pbkdf2:sha256:")
assert salt
# digest is hex of pbkdf2_hmac sha256 → 64 hex chars
assert len(digest) == 64
assert all(c in "0123456789abcdef" for c in digest)
def test_verify_werkzeug_scrypt_hash():
"""Known werkzeug scrypt hash generated by 26.11 / 26.12 boxes.
Captured live off a .196 test VM after its auth bootstrap:
username=daniel, password=test-admin-pw1
Hash format: scrypt:32768:8:1$<salt>$<hex>
If this regresses, every existing box that upgraded via 26.11 and
set a password gets locked out on the next upgrade.
"""
known = (
"scrypt:32768:8:1$yWZUqJodowt9ieI1$"
"2d1059b3564da7492b4aa3c2be7fff6fef06085e5e1bfd52e897948c58246b7a"
"9603400355b7264f61c4436eba7bf8c947adec3d7a76be03b50efb4227e15a80"
)
assert passwd.verify_password("test-admin-pw1", known)
assert not passwd.verify_password("wrong-password", known)
def test_verify_rejects_malformed_hashes():
# Empty / missing delimiters / unknown method / bad int — all False.
assert not passwd.verify_password("x", "")
assert not passwd.verify_password("x", "nothingspecial")
assert not passwd.verify_password("x", "pbkdf2:sha256:600000") # no $salt$digest
assert not passwd.verify_password("x", "pbkdf2$salt$digest") # missing hash + iter
assert not passwd.verify_password("x", "bcrypt:12$salt$digest") # unsupported algo
assert not passwd.verify_password("x", "pbkdf2:sha256:abc$salt$digest") # bad iter int
def test_verify_rejects_nonstring_inputs():
# Defensive: users.json can be corrupted or have nulls.
assert not passwd.verify_password(None, "pbkdf2:sha256:1000$salt$digest") # type: ignore[arg-type]
assert not passwd.verify_password("x", None) # type: ignore[arg-type]
assert not passwd.verify_password("x", 12345) # type: ignore[arg-type]


@@ -133,3 +133,121 @@ def test_reconcile_isolates_missing_docker_binary(tmp_path, monkeypatch):
error = next(a for a in actions if a.kind == "error")
assert error.target == "fileshare"
assert "docker" in error.detail
# --- Topo ordering + on_start hooks ----------------------------------------
PROVIDER_MANIFEST = dict(
VALID_MANIFEST,
name="mosquitto",
volumes=["data"],
)
CONSUMER_MANIFEST = dict(
VALID_MANIFEST,
name="zigbee2mqtt",
volumes=["state"],
requires=[
{
"app": "mosquitto",
"on_start": "hooks/ensure-user.sh",
}
],
)
def test_reconcile_topo_orders_providers_before_consumers(tmp_path, fake_docker, monkeypatch):
# Consumer comes alphabetically AFTER provider here, but the explicit dep
# also needs to win when the order was reversed. Add an alpha-first
# consumer name to make this load-bearing.
consumer = dict(CONSUMER_MANIFEST, name="alpha", requires=[{"app": "mosquitto"}])
_make_app(tmp_path, "mosquitto", PROVIDER_MANIFEST)
_make_app(tmp_path, "alpha", consumer)
reconciler.reconcile(tmp_path)
up_order = [project for _, project in fake_docker["compose_up"]]
assert up_order == ["mosquitto", "alpha"]
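The ordering guarantee under test (providers up before consumers, deterministic tie-breaks even when the consumer sorts first alphabetically, cycles rejected) is a plain topological sort over the `requires` edges. A sketch of such an ordering function, as a hypothetical helper rather than the reconciler's actual code:

```python
def dependency_order(apps: dict[str, list[str]]) -> list[str]:
    # apps maps app name -> list of required provider names. Providers come
    # before consumers; name order breaks ties so output is deterministic.
    order: list[str] = []
    done: set[str] = set()
    visiting: set[str] = set()

    def visit(name: str) -> None:
        if name in done:
            return
        if name in visiting:
            raise ValueError(f"dependency cycle involving {name}")
        visiting.add(name)
        for dep in sorted(apps.get(name, [])):
            if dep in apps:  # missing providers are reported elsewhere
                visit(dep)
        visiting.discard(name)
        done.add(name)
        order.append(name)

    for name in sorted(apps):
        visit(name)
    return order
```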
def test_reconcile_fires_on_start_before_compose_up(tmp_path, fake_docker, monkeypatch):
provider = _make_app(tmp_path, "mosquitto", PROVIDER_MANIFEST)
(provider / "hooks").mkdir()
(provider / "hooks" / "ensure-user.sh").write_bytes(b"#!/bin/sh\necho ok\n")
_make_app(tmp_path, "zigbee2mqtt", CONSUMER_MANIFEST)
hook_calls: list[str] = []
def fake_exec_script(app_dir, project, service, script_path, *, env, timeout):
hook_calls.append(f"{project}:{script_path.name}")
return ""
monkeypatch.setattr(dockerops, "compose_image_tags", lambda a, p: {"mosquitto": "img"})
monkeypatch.setattr(dockerops, "compose_exec_script", fake_exec_script)
actions = reconciler.reconcile(tmp_path)
# Hook fired against mosquitto exactly once.
assert hook_calls == ["mosquitto:ensure-user.sh"]
# Hook action appears before consumer's compose_up.
kinds = [(a.kind, a.target) for a in actions]
hook_idx = kinds.index(("hook", "zigbee2mqtt:mosquitto:on_start"))
up_idx = kinds.index(("compose_up", "zigbee2mqtt"))
assert hook_idx < up_idx
# And the provider's compose_up happened first.
assert fake_docker["compose_up"][0][1] == "mosquitto"
def test_reconcile_on_start_failure_skips_consumer_compose_up(tmp_path, fake_docker, monkeypatch):
provider = _make_app(tmp_path, "mosquitto", PROVIDER_MANIFEST)
(provider / "hooks").mkdir()
(provider / "hooks" / "ensure-user.sh").write_bytes(b"")
_make_app(tmp_path, "zigbee2mqtt", CONSUMER_MANIFEST)
# Unrelated third app: must still come up despite the consumer's hook fail.
_make_app(tmp_path, "lonely", dict(VALID_MANIFEST, name="lonely", volumes=["data"]))
def boom(*a, **k):
raise dockerops.DockerError("hook returned 1")
monkeypatch.setattr(dockerops, "compose_image_tags", lambda a, p: {"mosquitto": "img"})
monkeypatch.setattr(dockerops, "compose_exec_script", boom)
actions = reconciler.reconcile(tmp_path)
assert reconciler.has_errors(actions)
error_actions = [a for a in actions if a.kind == "error"]
assert len(error_actions) == 1
assert error_actions[0].target == "zigbee2mqtt"
assert "on_start(mosquitto)" in error_actions[0].detail
# Provider AND unrelated app came up; consumer did NOT.
up_projects = {p for _, p in fake_docker["compose_up"]}
assert "mosquitto" in up_projects
assert "lonely" in up_projects
assert "zigbee2mqtt" not in up_projects
def test_reconcile_dry_run_emits_hook_action_without_executing(tmp_path, fake_docker, monkeypatch):
provider = _make_app(tmp_path, "mosquitto", PROVIDER_MANIFEST)
(provider / "hooks").mkdir()
(provider / "hooks" / "ensure-user.sh").write_bytes(b"")
_make_app(tmp_path, "zigbee2mqtt", CONSUMER_MANIFEST)
called = []
monkeypatch.setattr(dockerops, "compose_exec_script", lambda *a, **k: called.append(1) or "")
actions = reconciler.reconcile(tmp_path, dry_run=True)
assert called == []
hook_actions = [a for a in actions if a.kind == "hook"]
assert any(a.target == "zigbee2mqtt:mosquitto:on_start" for a in hook_actions)
def test_reconcile_missing_provider_still_isolated(tmp_path, fake_docker, monkeypatch):
"""Consumer requires an app that isn't installed — per-app error, others continue."""
_make_app(tmp_path, "zigbee2mqtt", CONSUMER_MANIFEST)
_make_app(tmp_path, "lonely", dict(VALID_MANIFEST, name="lonely", volumes=["data"]))
actions = reconciler.reconcile(tmp_path)
assert reconciler.has_errors(actions)
errors = [a for a in actions if a.kind == "error"]
assert len(errors) == 1
assert errors[0].target == "zigbee2mqtt"
# `lonely` still got its compose_up.
assert any(p == "lonely" for _, p in fake_docker["compose_up"])

tests/test_sources.py (new file)

@@ -0,0 +1,108 @@
"""Tests for the catalog > bundled resolver."""
from __future__ import annotations
import json
from pathlib import Path
import pytest
def _manifest(name: str = "fileshare") -> dict:
return {
"name": name,
"display_name": "Fileshare",
"version": "0.1.0",
"description": "x",
"volumes": [],
"ports": [],
"icon": "icon.svg",
}
@pytest.fixture
def sources_mod(tmp_path, monkeypatch):
monkeypatch.setenv("FURTKA_CATALOG_DIR", str(tmp_path / "catalog"))
monkeypatch.setenv("FURTKA_BUNDLED_APPS_DIR", str(tmp_path / "bundled"))
import importlib
from furtka import paths as p
from furtka import sources as s
importlib.reload(p)
importlib.reload(s)
return s
def _seed_app(root: Path, name: str, manifest: dict | None = None) -> Path:
folder = root / name
folder.mkdir(parents=True)
(folder / "manifest.json").write_text(json.dumps(manifest or _manifest(name)))
return folder
def test_resolve_app_name_returns_none_when_absent(sources_mod):
assert sources_mod.resolve_app_name("nope") is None
def test_resolve_app_name_prefers_catalog_over_bundled(sources_mod, tmp_path):
_seed_app(tmp_path / "catalog" / "apps", "fileshare")
_seed_app(tmp_path / "bundled", "fileshare")
result = sources_mod.resolve_app_name("fileshare")
assert result is not None
assert result.origin == "catalog"
assert result.path.parent.name == "apps"
assert result.path.parent.parent.name == "catalog"
def test_resolve_app_name_falls_back_to_bundled(sources_mod, tmp_path):
_seed_app(tmp_path / "bundled", "fileshare")
result = sources_mod.resolve_app_name("fileshare")
assert result is not None
assert result.origin == "bundled"
def test_resolve_app_name_ignores_folder_without_manifest(sources_mod, tmp_path):
# Empty folder is not a valid app even if the name matches.
(tmp_path / "catalog" / "apps" / "fileshare").mkdir(parents=True)
_seed_app(tmp_path / "bundled", "fileshare")
result = sources_mod.resolve_app_name("fileshare")
# Catalog entry without manifest is skipped; bundled wins.
assert result.origin == "bundled"
def test_list_available_unions_catalog_and_bundled(sources_mod, tmp_path):
_seed_app(tmp_path / "catalog" / "apps", "fileshare")
_seed_app(tmp_path / "bundled", "otherapp")
names = {s.path.name: s.origin for s in sources_mod.list_available()}
assert names == {"fileshare": "catalog", "otherapp": "bundled"}
def test_list_available_catalog_wins_on_collision(sources_mod, tmp_path):
_seed_app(tmp_path / "catalog" / "apps", "fileshare")
_seed_app(tmp_path / "bundled", "fileshare")
entries = sources_mod.list_available()
assert len(entries) == 1
assert entries[0].origin == "catalog"
def test_list_available_empty_when_neither_exists(sources_mod):
assert sources_mod.list_available() == []
def test_list_available_skips_non_dirs_and_no_manifest(sources_mod, tmp_path):
# A plain file in catalog/apps and an empty dir in bundled — both ignored.
cat_root = tmp_path / "catalog" / "apps"
cat_root.mkdir(parents=True)
(cat_root / "not-a-dir.txt").write_text("x")
(tmp_path / "bundled" / "emptyapp").mkdir(parents=True)
_seed_app(tmp_path / "bundled", "realapp")
entries = sources_mod.list_available()
assert [e.path.name for e in entries] == ["realapp"]


@@ -24,6 +24,9 @@ def updater(tmp_path, monkeypatch):
monkeypatch.setenv("FURTKA_LOCK_PATH", str(tmp_path / "update.lock"))
monkeypatch.setenv("FURTKA_CADDYFILE_PATH", str(tmp_path / "etc_caddy" / "Caddyfile"))
monkeypatch.setenv("FURTKA_SYSTEMD_DIR", str(tmp_path / "etc_systemd_system"))
hostname_file = tmp_path / "etc_hostname"
hostname_file.write_text("testbox\n")
monkeypatch.setenv("FURTKA_HOSTNAME_FILE", str(hostname_file))
(tmp_path / "etc_systemd_system").mkdir()
# Reload the module so the path constants pick up the env vars.
import importlib
@@ -206,6 +209,99 @@ def test_refresh_caddyfile_noops_if_source_missing(updater, tmp_path):
assert updater._refresh_caddyfile(tmp_path / "does-not-exist") is False
def test_refresh_caddyfile_substitutes_hostname_placeholder(updater, tmp_path):
# Self-update rewrites the shipped Caddyfile against the box's real
# hostname, same substitution the installer does on first boot. Without
# this the named-hostname :443 block ships with a literal
# `__FURTKA_HOSTNAME__` and Caddy refuses to load the config.
src = tmp_path / "src"
src.write_text("__FURTKA_HOSTNAME__.local, __FURTKA_HOSTNAME__ {\n\ttls internal\n}\n")
assert updater._refresh_caddyfile(src) is True
live = updater._CADDYFILE_LIVE.read_text()
assert "testbox.local, testbox {" in live
assert "__FURTKA_HOSTNAME__" not in live
# Second call with the same source is a no-op — rendered content matches.
assert updater._refresh_caddyfile(src) is False
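The substitution-plus-no-op behaviour under test is a render, compare, write cycle. A sketch, with the source path, live path, and hostname passed explicitly rather than read from module constants the way `updater` does:

```python
from pathlib import Path

PLACEHOLDER = "__FURTKA_HOSTNAME__"

def refresh_caddyfile(src: Path, live: Path, hostname: str) -> bool:
    # Render the shipped template against the box's real hostname; only
    # rewrite the live file when the rendered content actually changed, so
    # repeated self-updates stay no-ops. Returns True when a write happened.
    if not src.exists():
        return False
    rendered = src.read_text().replace(PLACEHOLDER, hostname)
    if live.exists() and live.read_text() == rendered:
        return False
    live.parent.mkdir(parents=True, exist_ok=True)
    live.write_text(rendered)
    return True
```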
def test_health_check_treats_4xx_as_healthy(updater, monkeypatch):
"""26.11+ auth makes /api/apps return 401 on unauth requests. If the
health check treated that as "down", every pre-auth auth upgrade
auto-rolls back. Server responding at all is enough signal for the
health check."""
import urllib.error
calls = {"n": 0}
class _FakeResp:
def __init__(self, code):
self.status = code
def __enter__(self):
return self
def __exit__(self, *a):
return False
def raising_401(url, timeout):
calls["n"] += 1
raise urllib.error.HTTPError(url, 401, "Unauthorized", {}, None)
monkeypatch.setattr("urllib.request.urlopen", raising_401)
assert updater._health_check("http://127.0.0.1:7000/api/apps", deadline_s=2.0) is True
# One call was enough — early exit on 4xx, no retry loop.
assert calls["n"] == 1
def test_health_check_rejects_5xx(updater, monkeypatch):
"""500s mean the server is up but broken — that's NOT healthy.
Distinguishes auth refusals (4xx = healthy) from real runtime
errors (5xx = unhealthy, roll back)."""
import urllib.error
def raising_500(url, timeout):
raise urllib.error.HTTPError(url, 500, "Internal Server Error", {}, None)
monkeypatch.setattr("urllib.request.urlopen", raising_500)
assert updater._health_check("http://127.0.0.1:7000/api/apps", deadline_s=1.5) is False
def test_health_check_retries_on_connection_refused(updater, monkeypatch):
"""While furtka-api is still starting, urlopen raises URLError.
The loop must keep polling until the server comes up or deadline."""
import urllib.error
calls = {"n": 0}
def flaky(url, timeout):
calls["n"] += 1
if calls["n"] < 3:
raise urllib.error.URLError("connection refused")
class _Resp:
status = 200
def __enter__(self):
return self
def __exit__(self, *a):
return False
return _Resp()
monkeypatch.setattr("urllib.request.urlopen", flaky)
assert updater._health_check("http://127.0.0.1:7000/api/apps", deadline_s=10.0) is True
assert calls["n"] == 3
def test_current_hostname_falls_back_when_file_missing(updater, monkeypatch, tmp_path):
monkeypatch.setenv("FURTKA_HOSTNAME_FILE", str(tmp_path / "missing"))
import importlib
importlib.reload(updater)
assert updater._current_hostname() == "furtka"
def test_link_new_units_only_links_missing(updater, tmp_path, monkeypatch):
unit_dir = tmp_path / "assets_systemd"
unit_dir.mkdir()
@@ -220,17 +316,25 @@ def test_link_new_units_only_links_missing(updater, tmp_path, monkeypatch):
linked = updater._link_new_units(unit_dir)
assert linked == ["furtka-bar.timer"]
# Two calls for the newly-linked timer: systemctl link + systemctl enable.
# The already-linked service is untouched. Timers need the follow-up
# `enable` so self-updates that introduce new timers don't leave them
# dormant — fresh installs get their enable via the webinstaller.
assert len(seen) == 2
assert seen[0][:2] == ["systemctl", "link"]
assert seen[0][2].endswith("furtka-bar.timer")
assert seen[1] == ["systemctl", "enable", "furtka-bar.timer"]
def test_extract_tarball_uses_data_filter_when_available(tmp_path, updater, monkeypatch):
# Confirm we pass filter='data' to extractall on Python 3.12+; fall back
# cleanly on older runtimes. Capture the kwarg via a stub. tarfile lives
# in furtka._release_common after the extraction refactor, so we patch
# that module — updater._extract_tarball delegates there.
from furtka import _release_common as _rc
calls = []
real_open = _rc.tarfile.open # capture before monkeypatching
class _Recorder:
def __init__(self, tarball):
@@ -255,7 +359,7 @@ def test_extract_tarball_uses_data_filter_when_available(tmp_path, updater, monk
tar = tmp_path / "t.tar.gz"
_make_release_tarball(tar, "26.9-alpha")
monkeypatch.setattr(_rc.tarfile, "open", lambda *a, **kw: _Recorder(tar))
dest = tmp_path / "dest"
updater._extract_tarball(tar, dest)
@@ -308,24 +412,27 @@ def test_rollback_flips_to_previous_slot(tmp_path, updater, monkeypatch):
def test_check_update_queries_forgejo_and_compares(updater, monkeypatch):
# Stub the API and the current-version read. Forgejo's /releases list
# returns most-recent first, including pre-releases — we take [0].
monkeypatch.setattr(updater, "read_current_version", lambda: "26.0-alpha")
monkeypatch.setattr(
updater,
"_forgejo_api",
lambda path: [
{
"tag_name": "26.1-alpha",
"assets": [
{
"name": "furtka-26.1-alpha.tar.gz",
"browser_download_url": "https://x/t.tar.gz",
},
{
"name": "furtka-26.1-alpha.tar.gz.sha256",
"browser_download_url": "https://x/t.tar.gz.sha256",
},
],
}
],
)
check = updater.check_update()
assert check.current == "26.0-alpha"
@ -340,7 +447,16 @@ def test_check_update_reports_up_to_date_when_same_version(updater, monkeypatch)
    monkeypatch.setattr(
        updater,
        "_forgejo_api",
        lambda path: [{"tag_name": "26.1-alpha", "assets": []}],
    )
    check = updater.check_update()
    assert check.update_available is False
def test_check_update_raises_when_no_releases_published(updater, monkeypatch):
    # Newly-created repo with zero releases: don't crash, surface a clean
    # error the UI can show instead of "HTTP 404 Not Found".
    monkeypatch.setattr(updater, "read_current_version", lambda: "26.0-alpha")
    monkeypatch.setattr(updater, "_forgejo_api", lambda path: [])
    with pytest.raises(updater.UpdateError, match="no releases"):
        updater.check_update()
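The guard this test pins down is small; here is a hypothetical stand-alone sketch (the `UpdateError` name and newest-first list shape mirror the updater module, but nothing below is the shipped implementation):

```python
class UpdateError(Exception):
    """Mirrors updater.UpdateError — illustrative only."""


def latest_release(releases):
    # Forgejo's /repos/{owner}/{repo}/releases returns newest-first,
    # pre-releases included. An empty list means a freshly created repo
    # with no releases yet — raise something the UI can render cleanly
    # instead of letting a bare IndexError (or an HTTP 404) bubble up.
    if not releases:
        raise UpdateError("no releases published")
    return releases[0]


print(latest_release([{"tag_name": "26.1-alpha"}, {"tag_name": "26.0-alpha"}])["tag_name"])
# → 26.1-alpha
```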


@ -31,9 +31,10 @@ ASSETS = REPO_ROOT / "assets"
# (install target path, asset path under furtka/assets/) — only the files we
# still copy bit-for-bit at install time. Scripts + unit files are no longer
# copied; they're reached via /opt/furtka/current and `systemctl link`. The
# Caddyfile is not in this list because it's written with the hostname
# placeholder substituted — see test_post_install_substitutes_hostname_in_caddyfile.
ASSET_TARGETS = [
    ("/var/lib/furtka/status.json", "www/status.json"),
]
@ -53,7 +54,7 @@ def install_cmds(tmp_path, monkeypatch):
    fake = tmp_path / "payload.tar.gz"
    fake.write_bytes(b"not a real tarball")
    monkeypatch.setattr(app, "RESOURCE_MANAGER_PAYLOAD", fake)
    return app._post_install_commands("testhost", "daniel", "test-admin-pw")


@pytest.mark.parametrize("target,asset_relpath", ASSET_TARGETS)
@ -121,6 +122,103 @@ def test_caddyfile_asset_serves_from_current():
    assert "root * /var/lib/furtka" in caddy
def _strip_caddy_comments(text: str) -> str:
    """Remove # comments + blank lines so string-match assertions can
    target actual Caddyfile directives, not the leading doc block.
    Everything from the first ``#`` on a line is dropped — close enough
    to Caddy's comment rule for the assets we ship."""
    out = []
    for line in text.splitlines():
        stripped = line.split("#", 1)[0].rstrip()
        if stripped:
            out.append(stripped)
    return "\n".join(out)
def test_caddyfile_serves_http_by_default_https_opt_in():
    # 26.15-alpha: HTTPS is opt-in. The default Caddyfile has a :80 block
    # and imports /etc/caddy/furtka-https.d/*.caddyfile at top level —
    # the /settings HTTPS toggle drops the hostname+tls-internal block
    # into that dir when the user explicitly enables HTTPS. Default
    # Caddyfile therefore contains no `tls internal` directive anywhere;
    # if a future refactor puts it back, every fresh install regresses
    # to the 26.14-era BAD_SIGNATURE trap. Strip comments first because
    # the doc-block DOES mention `tls internal` in prose.
    caddy_full = (ASSETS / "Caddyfile").read_text()
    caddy = _strip_caddy_comments(caddy_full)
    assert ":80 {" in caddy
    assert "tls internal" not in caddy
    assert "__FURTKA_HOSTNAME__" not in caddy
    assert "import /etc/caddy/furtka-https.d/*.caddyfile" in caddy
    # Shared routes still live in a named snippet so the HTTPS toggle's
    # snippet can import the same routes without duplication.
    assert "(furtka_routes)" in caddy
    # Default Caddyfile imports it once (inside :80). The HTTPS snippet,
    # when written by the toggle, imports it a second time.
    assert caddy.count("import furtka_routes") == 1
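Stripping comments before asserting matters because a docs-only mention would otherwise trip the substring check. A self-contained illustration — the mini Caddyfile below is made up, and the one-line stripper mirrors the `_strip_caddy_comments` helper's split-on-`#` approach:

```python
MINI_CADDYFILE = """\
# tls internal is opt-in — see /settings.
(furtka_routes) {
    respond /healthz 200
}
:80 {
    import furtka_routes
}
"""

# Drop everything from the first '#' on each line, then drop blank lines.
stripped = "\n".join(
    s for line in MINI_CADDYFILE.splitlines()
    if (s := line.split("#", 1)[0].rstrip())
)

print("tls internal" in MINI_CADDYFILE)  # → True (only in the prose comment)
print("tls internal" in stripped)        # → False
```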
def test_caddyfile_disables_caddy_auto_redirects():
    # Named-hostname :443 block makes Caddy want to add its own HTTP→HTTPS
    # redirect. The /settings toggle is the single source of truth, so the
    # built-in has to be off — otherwise the toggle and auto_https race.
    caddy = (ASSETS / "Caddyfile").read_text()
    assert "auto_https disable_redirects" in caddy


def test_caddyfile_imports_force_redirect_snippet_dir():
    # The /api/furtka/https/force endpoint toggles HTTP→HTTPS by writing or
    # removing a snippet file in this dir; the Caddyfile must glob-import it
    # inside the :80 block for the toggle to take effect.
    caddy = (ASSETS / "Caddyfile").read_text()
    assert "import /etc/caddy/furtka.d/*.caddyfile" in caddy


def test_caddyfile_exposes_root_ca_download():
    # /rootCA.crt is the download handle the UI uses. It must map to the
    # Caddy local-CA pki path and set a Content-Disposition so the browser
    # treats it as a download rather than trying to render it. Path is the
    # real one Caddy uses under XDG_DATA_HOME=/var/lib (see caddy.service
    # Environment= directive) — not the /var/lib/caddy/.local/share/caddy/
    # path Caddy docs show for non-systemd installs.
    caddy = (ASSETS / "Caddyfile").read_text()
    assert "handle /rootCA.crt" in caddy
    assert "/var/lib/caddy/pki/authorities/local" in caddy
    assert ".local/share/caddy" not in caddy
    assert "attachment; filename=furtka-local-rootCA.crt" in caddy


def test_post_install_writes_caddyfile_without_hostname_placeholder(install_cmds):
    # 26.15-alpha: the shipped Caddyfile no longer carries the
    # __FURTKA_HOSTNAME__ marker — HTTPS + hostname now live in the
    # opt-in snippet written by set_force_https(), not in the base
    # Caddyfile. Verify the post-install writes the file as-is (no
    # substitution expected) and it has the opt-in import glob.
    caddyfile_cmd = next((c for c in install_cmds if " > /etc/caddy/Caddyfile" in c), None)
    assert caddyfile_cmd is not None
    written_full = _extract_written_content(caddyfile_cmd, "/etc/caddy/Caddyfile")
    written = _strip_caddy_comments(written_full)
    assert "__FURTKA_HOSTNAME__" not in written
    assert "import /etc/caddy/furtka-https.d/*.caddyfile" in written
    assert "tls internal" not in written


def test_post_install_creates_https_snippet_dir(install_cmds):
    # The top-level HTTPS opt-in snippet dir must exist before Caddy's
    # first start — its glob import tolerates an empty directory, but
    # not a missing one on older Caddy builds. Parallel guarantee to
    # test_post_install_creates_furtka_d_snippet_dir below.
    matching = [c for c in install_cmds if "/etc/caddy/furtka-https.d" in c and "install -d" in c]
    assert matching, "no install -d command creates /etc/caddy/furtka-https.d"


def test_post_install_creates_furtka_d_snippet_dir(install_cmds):
    # Pre-existing installs pick up the import path via updater._refresh_caddyfile,
    # but fresh installs never run that — this command is the only guarantee
    # that the first Caddy start on a brand-new box has a dir to glob-import.
    matching = [c for c in install_cmds if "/etc/caddy/furtka.d" in c and "install -d" in c]
    assert matching, "no install -d command creates /etc/caddy/furtka.d"
def test_systemd_units_reference_current_paths():
    for unit in ("furtka-status.service", "furtka-welcome.service"):
        body = (ASSETS / "systemd" / unit).read_text()
@ -136,3 +234,28 @@ def test_read_asset_raises_for_missing_file():
def test_assets_dir_resolves_to_repo_tree():
    assert app._ASSETS_DIR == ASSETS
def test_post_install_writes_users_json_with_hashed_password(install_cmds):
    """The Furtka-admin users.json is created during the chroot post-install.

    Without this, a fresh-install box lands at /login in first-run setup
    mode and the user has to go through the browser to set a password,
    which defeats the "step-1 password works for everything" design. Also
    check that the file is chmod 0600 (the PBKDF2 hash is a secret even
    if it's slow to crack).
    """
    import json as _json

    from werkzeug.security import check_password_hash

    users_cmd = next((c for c in install_cmds if " > /var/lib/furtka/users.json" in c), None)
    assert users_cmd is not None, "no command writes /var/lib/furtka/users.json"
    assert "chmod 600" in users_cmd, "users.json must be chmod 0600"
    body = _extract_written_content(users_cmd, "/var/lib/furtka/users.json")
    parsed = _json.loads(body)
    assert "admin" in parsed
    assert parsed["admin"]["username"] == "daniel"  # matches fixture
    # Hash is a real werkzeug hash, not the plaintext password.
    assert parsed["admin"]["hash"] != "test-admin-pw"
    assert check_password_hash(parsed["admin"]["hash"], "test-admin-pw")


@ -8,6 +8,7 @@ import os
import re
import subprocess
import sys
from datetime import UTC
from pathlib import Path

from drives import list_scored_devices
@ -15,6 +16,41 @@ from flask import Flask, jsonify, redirect, render_template, request, url_for
app = Flask(__name__)
def _resolve_version() -> str:
    """Resolve the Furtka version to display in the wizard footer.

    On the live ISO `iso/build.sh` writes `/opt/furtka/VERSION` at build time
    from `pyproject.toml`; that's the authoritative source at runtime. For
    local dev runs (pytest, `flask run` outside the ISO) fall back to
    reading `pyproject.toml` directly, then to the literal "dev" so the
    footer never 500s if both files are missing.
    """
    iso_path = Path(__file__).resolve().parent / "VERSION"
    for candidate in (iso_path, Path(__file__).resolve().parent.parent / "pyproject.toml"):
        try:
            text = candidate.read_text(encoding="utf-8")
        except OSError:  # covers FileNotFoundError + PermissionError too
            continue
        if candidate.name == "VERSION":
            value = text.strip()
            if value:
                return value
        else:
            match = re.search(r'^version\s*=\s*"([^"]+)"', text, re.MULTILINE)
            if match:
                return match.group(1)
    return "dev"


FURTKA_VERSION = _resolve_version()


@app.context_processor
def _inject_version():
    return {"furtka_version": FURTKA_VERSION}
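The pyproject fallback is just a line-anchored regex; a quick stand-alone demonstration (the TOML snippet here is invented for illustration):

```python
import re

# Same pattern _resolve_version falls back to: a PEP 621 `version = "..."`
# line in pyproject.toml, anchored to line start via re.MULTILINE so only
# a line that begins with `version` can match.
PYPROJECT = '''\
[project]
name = "furtka"
version = "26.17-alpha"
'''

match = re.search(r'^version\s*=\s*"([^"]+)"', PYPROJECT, re.MULTILINE)
print(match.group(1))  # → 26.17-alpha
```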
LANGUAGES = {
    "en": {"locale": "en_US.UTF-8", "label": "English", "keyboard": "us"},
    "de": {"locale": "de_DE.UTF-8", "label": "Deutsch", "keyboard": "de"},
@ -228,6 +264,10 @@ _FURTKA_UNITS = (
    "furtka-status.service",
    "furtka-status.timer",
    "furtka-welcome.service",
    # Daily apps-catalog pull. Timer drives the service; the .service itself
    # is oneshot and also callable ad-hoc via `furtka catalog sync`.
    "furtka-catalog-sync.service",
    "furtka-catalog-sync.timer",
)
@ -309,7 +349,35 @@ def _furtka_json_cmd(hostname):
    )


def _users_json_cmd(username, password):
    """Write /var/lib/furtka/users.json with the admin account hashed.

    The core furtka-api reads this file on every login attempt; the
    auth.py module treats `admin.username` + `admin.hash` as the only
    credential. Hashing happens here in the webinstaller (werkzeug is a
    flask transitive dep, so it's already installed in this environment)
    and the chroot doesn't need pip. Mode 0600 so nobody but root on the
    installed box can read the PBKDF2 hash.
    """
    from datetime import datetime

    from werkzeug.security import generate_password_hash

    users = {
        "admin": {
            "username": username,
            "hash": generate_password_hash(password),
            "created_at": datetime.now(UTC).isoformat(timespec="seconds"),
        }
    }
    return _write_file_cmd(
        "/var/lib/furtka/users.json",
        json.dumps(users, indent=2) + "\n",
        mode="600",
    )
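werkzeug's `generate_password_hash` defaults to salted PBKDF2-HMAC. A stdlib-only sketch of the same hash/verify round-trip — the serialization format and iteration count below are illustrative, not werkzeug's exact defaults:

```python
import hashlib
import hmac
import os


def hash_password(password: str, *, iterations: int = 600_000) -> str:
    # Salted PBKDF2-HMAC-SHA256, serialized in a werkzeug-like
    # "method$salt$hash" shape (the exact format is made up here).
    salt = os.urandom(16).hex()
    dk = hashlib.pbkdf2_hmac("sha256", password.encode(), salt.encode(), iterations)
    return f"pbkdf2:sha256:{iterations}${salt}${dk.hex()}"


def check_password(stored: str, candidate: str) -> bool:
    method, salt, digest = stored.split("$")
    iterations = int(method.rsplit(":", 1)[1])
    dk = hashlib.pbkdf2_hmac("sha256", candidate.encode(), salt.encode(), iterations)
    # compare_digest avoids leaking the match position via timing.
    return hmac.compare_digest(dk.hex(), digest)
```

The key property the test above relies on: the stored string never equals the plaintext, yet verification still succeeds for the right password.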
def _post_install_commands(hostname, admin_username, admin_password):
    # nss-mdns: splice `mdns_minimal [NOTFOUND=return]` before `resolve` on
    # the hosts line so `*.local` works from the installed system too. Guarded
    # so a re-run (or a future Arch default that already ships mdns) is a
@ -320,11 +388,35 @@ def _post_install_commands(hostname):
        "/etc/nsswitch.conf"
    )
    return [
        # Import dir for the HTTP→HTTPS force-redirect snippet. The
        # /api/furtka/https/force endpoint writes/removes a .caddyfile here
        # to toggle the redirect. Must exist before Caddy starts — the
        # Caddyfile's glob `import /etc/caddy/furtka.d/*.caddyfile` tolerates
        # an empty dir, but not every Caddy version tolerates a missing one,
        # so we create it up front and stay on the safe side.
        "install -d -m 0755 -o root -g root /etc/caddy/furtka.d",
        # Parallel dir for the top-level HTTPS-listener snippet, written
        # by /api/furtka/https/force (26.15-alpha+) when the user opts
        # into HTTPS. Empty by default so fresh installs never generate
        # a tls internal cert — that was the 26.14 regression where
        # Firefox hit unbypassable SEC_ERROR_BAD_SIGNATURE because
        # Caddy's fixed intermediate-CN clashed with any cached trust
        # from a previously-reinstalled Furtka box.
        "install -d -m 0755 -o root -g root /etc/caddy/furtka-https.d",
        # The Caddyfile lives at /etc/caddy/Caddyfile per Caddy's convention
        # (systemd unit points there). Content comes from the shipped asset,
        # which we copy in at install time so updates that change routing
        # need a new release to refresh it.
        #
        # __FURTKA_HOSTNAME__ is the placeholder the asset carries in place
        # of the real hostname — Caddy's `tls internal` needs a named site
        # block to issue a leaf cert, and the hostname isn't known until
        # the user fills in the form. Self-updates re-apply the same
        # substitution against /etc/hostname (see updater._refresh_caddyfile).
        _write_file_cmd(
            "/etc/caddy/Caddyfile",
            _read_asset("Caddyfile").replace("__FURTKA_HOSTNAME__", hostname),
        ),
        # Initial status.json so Caddy doesn't 404 before furtka-status fires.
        _write_file_cmd("/var/lib/furtka/status.json", _read_asset("www/status.json")),
        nss_sed,
@ -334,6 +426,12 @@ def _post_install_commands(hostname):
        # furtka.json depends on /opt/furtka/current/VERSION, so it has to
        # run after the resource-manager commands.
        _furtka_json_cmd(hostname),
        # Admin account for the Furtka web UI. Hashed here (werkzeug is
        # already in scope for the Flask webinstaller) and materialised
        # into /var/lib/furtka/users.json at mode 0600 on the target
        # partition — the installed core's auth.py picks it up on first
        # login.
        _users_json_cmd(admin_username, admin_password),
    ]
@ -392,7 +490,7 @@ def build_archinstall_config(s):
        # page, status timer, and welcome banner into place.
        "custom_commands": [
            f"gpasswd -a {s['username']} docker",
            *_post_install_commands(s["hostname"], s["username"], s["password"]),
        ],
        "network_config": {"type": "iso"},
        "ssh": True,


@ -1,6 +1,41 @@
import subprocess
def _boot_disk_name():
    """Return the parent disk name of the live-ISO boot media (e.g. "sdb"), or None.

    On a normal box `/run/archiso/bootmnt` does not exist and we return None,
    leaving the device list untouched. On bare metal booted from USB this is
    the stick we booted from; we want to filter it out so the user can't
    accidentally pick it as the install target.
    """
    try:
        result = subprocess.run(
            ["findmnt", "-no", "SOURCE", "/run/archiso/bootmnt"],
            capture_output=True,
            text=True,
        )
    except FileNotFoundError:
        return None
    if result.returncode != 0:
        return None
    partition = result.stdout.strip()
    if not partition:
        return None
    try:
        parent = subprocess.run(
            ["lsblk", "-no", "PKNAME", partition],
            capture_output=True,
            text=True,
        )
    except FileNotFoundError:
        return None
    if parent.returncode != 0:
        return None
    name = parent.stdout.strip().splitlines()[0] if parent.stdout.strip() else ""
    return name or None
def _smart_status(device):
    try:
        result = subprocess.run(
@ -75,11 +110,14 @@ def score_device(device, size_gb):
    return get_drive_type_score(device) + get_drive_health(device) + get_size_score(size_gb)
def parse_lsblk_output(output, boot_disk=None):
    """Parse `lsblk -dn -o NAME,SIZE,TYPE` output into scored device dicts.

    Keeps only TYPE=disk so the live ISO's own squashfs (loop) and the boot
    CD-ROM (rom) don't show up as install targets. If `boot_disk` is given,
    that disk is also dropped; it's the USB stick the live ISO booted from
    on bare metal, where it appears as TYPE=disk and would otherwise be a
    valid-looking install target.
    """
    devices = []
    for line in output.strip().split("\n"):
@ -91,6 +129,8 @@ def parse_lsblk_output(output):
        name, size, dev_type = parts[0], parts[1], parts[2]
        if dev_type != "disk":
            continue
        if boot_disk and name == boot_disk:
            continue
        device = f"/dev/{name}"
        size_gb = parse_size_gb(size)
        status = _smart_status(device)
@ -120,7 +160,7 @@ def list_scored_devices():
    except subprocess.CalledProcessError as e:
        print(f"Error listing devices: {e}")
        return []
    return parse_lsblk_output(result.stdout, boot_disk=_boot_disk_name())
def main():


@ -30,7 +30,7 @@
<footer class="site-footer">
  <div class="container">
    <p class="kicker">Furtka {{ furtka_version }} · AGPL-3.0</p>
    <p class="kicker"><a href="https://furtka.org" style="color: inherit; text-decoration: none">furtka.org</a></p>
  </div>
</footer>


@ -6,6 +6,8 @@
{% block content %}
<h1>Rebooting…</h1>
<p class="lede">The machine is restarting. This page will stop responding in a moment — that's expected.</p>
<p><strong>Remove the USB stick now</strong> — if it's still plugged in when the machine reboots, some BIOS setups will boot into this installer again instead of starting Furtka.</p>
<p class="muted">If the installer does come back anyway, your BIOS is set to boot from USB before the disk. Press the one-time boot menu key at startup (often <kbd>F11</kbd>, <kbd>F12</kbd>, or <kbd>Esc</kbd> — it flashes briefly on screen) and pick the internal disk, or change the boot order in BIOS settings.</p>
<p>When the machine comes back up (~1 minute), open Furtka in your browser:</p>
<p><a href="http://{{ hostname }}.local" class="btn btn-primary">http://{{ hostname }}.local</a></p>
<p class="muted">If that doesn't resolve, your network may not support mDNS — use the IP address shown on the machine's console instead.</p>


@ -19,21 +19,37 @@ Hosted on `forge-runner-01` (Proxmox VM, Ubuntu 24.04). Hugo runs on the VM;
nginx serves the built output from `/var/www/furtka.org`. TLS is terminated by
an upstream openresty reverse proxy — the VM itself only speaks plain HTTP.

### Auto-deploy on push-to-main (default)

`.forgejo/workflows/deploy-site.yml` fires on every push to `main` that touches
`website/**`. The self-hosted runner *is* forge-runner-01, so the whole deploy
collapses to a local rsync into `/srv/furtka-site/` + `hugo --minify` into
`/var/www/furtka.org/`. No SSH hop, no secrets. Runs in under a minute.

The in-CI script is `website/deploy-ci.sh`. Don't invoke it from your dev box —
it assumes it's already on the target host.

### Manual deploy (fallback)

For out-of-band pushes (feature branch, CI outage), deploy from your dev
machine:

```sh
./website/deploy.sh
```

This rsyncs `website/` to `/srv/furtka-site/` on the VM over SSH and runs
`hugo --minify` into `/var/www/furtka.org`. Same end state as the CI path,
just with an SSH hop.

### First-time VM setup

Only needed once, when provisioning a fresh forge-runner VM:

```sh
ssh forge-runner
sudo /srv/furtka-site/ops/nginx/setup-vm.sh  # or copy the script over first
```
## Structure
@ -48,7 +64,8 @@ layouts/ Custom inline theme — no external theme or framework
index.html           Home-only layout with editorial hero
assets/css/main.css  Stylesheet (fingerprinted + minified on build)
static/favicon.svg   Gate mark in crimson
deploy.sh            Manual rsync + remote Hugo build (over SSH, for off-CI pushes)
deploy-ci.sh         Local rsync + Hugo build — runs on forge-runner-01 from CI
```
## Design


@ -6,6 +6,10 @@
  --accent: #c03a28;
  --accent-hover: #a0301f;
  --border: #e4e3dc;
  --accent-glow: rgba(192, 58, 40, 0.2);
  --card-bg: rgba(247, 246, 243, 0.72);
  --card-border: var(--border);
  --scene-opacity: 0.18;
  --font-sans:
    -apple-system, BlinkMacSystemFont, "Segoe UI", Roboto, "Helvetica Neue",
    Arial, "Noto Sans", sans-serif;
@ -23,6 +27,10 @@
    --accent: #ff6b56;
    --accent-hover: #ff8b78;
    --border: #232326;
    --accent-glow: rgba(255, 107, 86, 0.4);
    --card-bg: rgba(23, 23, 26, 0.65);
    --card-border: #26262b;
    --scene-opacity: 0.34;
  }
}
} }
@ -43,6 +51,25 @@ body {
  flex-direction: column;
  min-height: 100vh;
  text-rendering: optimizeLegibility;
  isolation: isolate;
}

/* ── Animated background canvas (home only) ─────────────── */
.scene-canvas {
  position: fixed;
  inset: 0;
  width: 100vw;
  height: 100vh;
  z-index: 0;
  pointer-events: none;
}

.site-header,
main.container,
.site-footer {
  position: relative;
  z-index: 1;
}
.container {
@ -171,11 +198,36 @@ main.container {
.home h1 {
  font-family: var(--font-sans);
  font-weight: 800;
  font-size: clamp(3.5rem, 14vw, 11rem);
  line-height: 0.9;
  letter-spacing: -0.04em;
  margin: 0 0 1.5rem;
  color: var(--fg);
  background-image: linear-gradient(180deg, var(--fg) 0%, var(--accent) 110%);
  -webkit-background-clip: text;
  background-clip: text;
  -webkit-text-fill-color: transparent;
}

@media (prefers-color-scheme: dark) {
  .home h1 {
    filter: drop-shadow(0 0 28px var(--accent-glow));
  }
  .home .lede {
    color: #c8c8cc;
  }
}

.hero {
  min-height: 78vh;
  display: flex;
  flex-direction: column;
  justify-content: center;
  padding-block: 4.5rem 3rem;
}

.home .lede {
  font-weight: 450;
}
.home .lede {
@ -258,3 +310,132 @@ main.container {
  outline-offset: 3px;
  border-radius: 2px;
}
/* ── Primary CTA ─────────────────────────────────────────── */
.cta-row { margin-top: 2.5rem; }
.cta {
display: inline-flex;
align-items: center;
gap: 0.55rem;
padding: 1.1rem 2rem;
font-family: var(--font-sans);
font-weight: 600;
font-size: 1.02rem;
letter-spacing: 0.005em;
text-decoration: none;
border-radius: 0.7rem;
transition: transform 180ms, box-shadow 180ms, background 180ms, color 180ms;
}
.cta--primary {
background: linear-gradient(135deg, var(--accent), var(--accent-hover));
color: #fff;
box-shadow: 0 10px 36px var(--accent-glow),
0 0 0 1px color-mix(in srgb, var(--accent) 40%, transparent);
animation: cta-pulse 2.8s ease-in-out infinite;
}
.cta--primary:hover {
transform: translateY(-3px);
box-shadow: 0 18px 52px var(--accent-glow),
0 0 0 1px var(--accent);
animation-play-state: paused;
}
.cta--primary:active { transform: translateY(-1px); }
.cta--primary span { transition: transform 180ms; }
.cta--primary:hover span { transform: translateX(4px); }
@keyframes cta-pulse {
0%, 100% { box-shadow: 0 10px 36px var(--accent-glow),
0 0 0 1px color-mix(in srgb, var(--accent) 40%, transparent); }
50% { box-shadow: 0 14px 48px var(--accent-glow),
0 0 0 1px color-mix(in srgb, var(--accent) 70%, transparent); }
}
@media (prefers-reduced-motion: reduce) {
.cta--primary { animation: none; }
}
/* ── Intro paragraph (home, between hero and feature grids) ─ */
.intro {
max-width: 38rem;
margin: 0 0 4rem;
font-size: 1.15rem;
line-height: 1.55;
color: var(--fg);
}
.intro p { margin: 0 0 1rem; }
.intro p:last-child { margin: 0; }
.intro strong { font-weight: 600; }
/* ── Feature sections (home) ─────────────────────────────── */
.feature-section { margin-block: 4rem; }
.section-eyebrow {
font-family: var(--font-sans);
font-weight: 500;
font-size: 0.72rem;
letter-spacing: 0.14em;
text-transform: uppercase;
color: var(--fg-muted);
margin: 0 0 1.25rem;
}
.feature-grid {
display: grid;
grid-template-columns: repeat(auto-fit, minmax(17rem, 1fr));
gap: 1rem;
}
.feature-card {
background: var(--card-bg);
border: 1px solid var(--card-border);
border-radius: 1rem;
padding: 1.5rem 1.5rem 1.4rem;
-webkit-backdrop-filter: blur(10px);
backdrop-filter: blur(10px);
transition: transform 240ms, border-color 240ms, box-shadow 240ms;
}
.feature-card:hover {
border-color: var(--accent);
box-shadow: 0 10px 32px var(--accent-glow);
transform: translateY(-2px);
}
.feature-card p {
margin: 0;
font-size: 1rem;
line-height: 1.55;
color: var(--fg);
}
.feature-card strong {
font-weight: 600;
color: var(--fg);
}
/* ── Closer prose (home, after feature grids) ────────────── */
.closer {
margin-top: 4rem;
max-width: var(--measure);
}
/* ── Reveal-on-load (hero) and reveal-on-scroll (cards) ──── */
.js .reveal,
.js [data-gsap="card"] {
opacity: 0;
transform: translateY(40px);
will-change: opacity, transform;
}
@media (prefers-reduced-motion: reduce) {
.scene-canvas { display: none; }
.js .reveal,
.js [data-gsap="card"] {
opacity: 1 !important;
transform: none !important;
will-change: auto;
}
}


@ -0,0 +1,25 @@
(function () {
if (window.matchMedia('(prefers-reduced-motion: reduce)').matches) return;
if (!window.gsap || !window.ScrollTrigger || !window.Lenis) return;
gsap.registerPlugin(ScrollTrigger);
const lenis = new Lenis({ lerp: 0.1, smoothWheel: true });
lenis.on('scroll', ScrollTrigger.update);
gsap.ticker.add((time) => { lenis.raf(time * 1000); });
gsap.ticker.lagSmoothing(0);
// Hero stagger — runs once on load.
gsap.to('.hero .reveal', {
y: 0, opacity: 1, duration: 1.1, ease: 'power3.out', stagger: 0.12
});
// Card reveals — batched so cards in the same row come in together.
ScrollTrigger.batch('[data-gsap="card"]', {
start: 'top 90%',
onEnter: (els) => gsap.to(els, {
y: 0, opacity: 1, scale: 1,
duration: 0.9, ease: 'power3.out', stagger: 0.08, overwrite: true
})
});
})();


@ -0,0 +1,98 @@
(function () {
if (window.matchMedia('(prefers-reduced-motion: reduce)').matches) return;
if (!window.WebGLRenderingContext || !window.THREE) return;
const canvas = document.getElementById('scene');
if (!canvas) return;
const root = document.documentElement;
const readVar = (name) => getComputedStyle(root).getPropertyValue(name).trim();
const readOpacity = () => parseFloat(readVar('--scene-opacity')) || 0.18;
const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(
60, window.innerWidth / window.innerHeight, 0.1, 100
);
const renderer = new THREE.WebGLRenderer({ canvas, antialias: true, alpha: true });
renderer.setSize(window.innerWidth, window.innerHeight, false);
renderer.setPixelRatio(Math.min(window.devicePixelRatio || 1, 2));
const geometry = new THREE.TorusKnotGeometry(2.5, 0.4, 130, 20);
const material = new THREE.MeshPhongMaterial({
color: readVar('--accent') || '#c03a28',
wireframe: true,
transparent: true,
opacity: readOpacity()
});
const core = new THREE.Mesh(geometry, material);
scene.add(core);
scene.add(new THREE.AmbientLight(0xffffff, 0.6));
const dir = new THREE.DirectionalLight(0xffffff, 0.8);
dir.position.set(5, 5, 5);
scene.add(dir);
const BASE_Z = 9;
camera.position.z = BASE_Z;
let scrollY = window.scrollY || 0;
window.addEventListener('scroll', () => {
scrollY = window.scrollY || 0;
}, { passive: true });
let baseOpacity = readOpacity();
let running = true;
function tick() {
if (!running) return;
requestAnimationFrame(tick);
// Continuous slow drift.
core.rotation.y += 0.0015;
core.rotation.z += 0.0006;
// Scroll-driven motion: zoom in, scale up, tilt.
const s = Math.min(scrollY, 2000);
camera.position.z = BASE_Z - s * 0.0022;
const scale = 1 + s * 0.00035;
core.scale.set(scale, scale, scale);
core.rotation.x = s * 0.0008;
// Fade past hero so feature cards stay readable.
const vh = window.innerHeight;
const fadeStart = vh * 0.5;
const fadeEnd = vh * 1.4;
const t = Math.max(0, Math.min(1, (scrollY - fadeStart) / (fadeEnd - fadeStart)));
material.opacity = baseOpacity * (1 - t * 0.92);
renderer.render(scene, camera);
}
tick();
window.addEventListener('resize', () => {
camera.aspect = window.innerWidth / window.innerHeight;
camera.updateProjectionMatrix();
renderer.setSize(window.innerWidth, window.innerHeight, false);
});
document.addEventListener('visibilitychange', () => {
if (document.hidden) {
running = false;
} else if (!running) {
running = true;
tick();
}
});
const mql = window.matchMedia('(prefers-color-scheme: dark)');
const updateTheme = () => {
const accent = readVar('--accent');
if (accent) material.color.set(accent);
baseOpacity = readOpacity();
};
if (mql.addEventListener) {
mql.addEventListener('change', updateTheme);
} else if (mql.addListener) {
mql.addListener(updateTheme);
}
})();

website/assets/js/vendor/PROVENANCE.md vendored Normal file
@@ -0,0 +1,19 @@
# Vendored JavaScript libraries
These minified bundles are checked into the repo so furtka.org has zero
third-party-CDN dependencies at runtime. Pin date: **2026-04-27**.
| File | Version | Source |
|---|---|---|
| `three.min.js` | r128 | https://cdnjs.cloudflare.com/ajax/libs/three.js/r128/three.min.js |
| `gsap.min.js` | 3.12.2 (core only) | https://cdnjs.cloudflare.com/ajax/libs/gsap/3.12.2/gsap.min.js |
| `ScrollTrigger.min.js` | 3.12.2 | https://cdnjs.cloudflare.com/ajax/libs/gsap/3.12.2/ScrollTrigger.min.js |
| `lenis.min.js` | @studio-freight/lenis 1.0.33 | https://unpkg.com/@studio-freight/lenis@1.0.33/dist/lenis.min.js |
All four expose UMD globals (`THREE`, `gsap`, `ScrollTrigger`, `Lenis`).
None are ES modules, so no `js.Build` step is needed — Hugo just fingerprints them.
GSAP "Club" plugins (SplitText, MorphSVG, etc.) are **not** free for commercial use.
Only `gsap` core + `ScrollTrigger` (both standard MIT-style license) are bundled.
To refresh: re-run `curl -sSfL -o <file> <url>` and bump the pin date here.
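The refresh step can be scripted as a loop over the pin table. A rough sketch (the `refresh_cmds` helper is illustrative, not part of the repo; it only *prints* the `curl` commands so you can review them before piping to `sh` from this directory):

```shell
#!/usr/bin/env bash
# Sketch only: emit one `curl -sSfL -o <file> <url>` per vendored bundle,
# using the exact pins from the table above. Review, then pipe to `sh`.
set -euo pipefail

refresh_cmds() {
  # file|url pairs, matching the pin table
  local pins='three.min.js|https://cdnjs.cloudflare.com/ajax/libs/three.js/r128/three.min.js
gsap.min.js|https://cdnjs.cloudflare.com/ajax/libs/gsap/3.12.2/gsap.min.js
ScrollTrigger.min.js|https://cdnjs.cloudflare.com/ajax/libs/gsap/3.12.2/ScrollTrigger.min.js
lenis.min.js|https://unpkg.com/@studio-freight/lenis@1.0.33/dist/lenis.min.js'
  local file url
  while IFS='|' read -r file url; do
    printf 'curl -sSfL -o %s %s\n' "$file" "$url"
  done <<<"$pins"
}

refresh_cmds
```

Remember to bump the pin date in this file after running it.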

website/assets/js/vendor/ScrollTrigger.min.js vendored Normal file

File diff suppressed because one or more lines are too long

website/assets/js/vendor/gsap.min.js vendored Normal file

File diff suppressed because one or more lines are too long

website/assets/js/vendor/lenis.min.js vendored Normal file

File diff suppressed because one or more lines are too long

website/assets/js/vendor/three.min.js vendored Normal file

File diff suppressed because one or more lines are too long

@@ -1,39 +1,33 @@
 ---
 title: "Furtka"
 description: "Offenes Heimserver-Betriebssystem — einfach genug für alle."
-status: "<span class=\"mono\">26.0-alpha</span> — in Arbeit"
+status: "<span class=\"mono\">26.16-alpha</span> — in Arbeit"
+# features_today / features_next müssen index-parallel zu content/_index.md bleiben.
+intro: |
+  **Furtka** ist ein offenes Heimserver-Betriebssystem.
+  USB-Stick einstecken, durch einen Assistenten klicken, und aus jedem
+  alten Rechner wird eine private Cloud für den Haushalt — mit eigenen
+  Apps, eigenem Namen im Netz, eigenen Daten.
+  Das Ziel ist einfach: **dein Vater soll das einrichten können.**
+features_today_label: "Was heute schon geht"
+features_today:
+  - "Vom USB-Stick booten und Furtka auf die Festplatte einrichten"
+  - "Ein Assistent fragt nach Name, Benutzer und Netzwerk — fertig"
+  - "Danach: Bedienseite im Browser öffnen"
+  - "Erste App: **Dateifreigabe im Heimnetz** (alle im WLAN sehen den Ordner)"
+  - "Apps mit einem Klick installieren und entfernen"
+  - "Eine installierte App mit einem Klick aktualisieren (holt das neueste Container-Image)"
+  - "Furtka selbst mit einem Klick aktualisieren — keine Neuinstallation mehr für neue Features"
+features_next_label: "Was als Nächstes kommt"
+features_next:
+  - "Apps für Fotos, Dateien, Smarthome, Spiele-Streaming und Medien"
+  - "Einfachere Sprache im Einrichtungs-Assistenten"
+  - "Sichere Verbindung im Heimnetz (ohne Warnmeldung im Browser)"
+  - "Mehrere Server zusammenschalten"
 ---
-**Furtka** ist ein offenes Heimserver-Betriebssystem.
-USB-Stick einstecken, durch einen Assistenten klicken, und aus jedem
-alten Rechner wird eine private Cloud für den Haushalt — mit eigenen
-Apps, eigenem Namen im Netz, eigenen Daten.
-Das Ziel ist einfach: **dein Vater soll das einrichten können.**
-### Was als Nächstes kommt
-- Apps für Fotos, Dateien, Smarthome, Spiele-Streaming und Medien
-- Einfachere Sprache im Einrichtungs-Assistenten
-- Sichere Verbindung im Heimnetz (ohne Warnmeldung im Browser)
-- Mehrere Server zusammenschalten
-### Was heute schon geht
-- Vom USB-Stick booten und Furtka auf die Festplatte einrichten
-- Ein Assistent fragt nach Name, Benutzer und Netzwerk — fertig
-- Danach: Bedienseite im Browser öffnen
-- Erste App: **Dateifreigabe im Heimnetz** (alle im WLAN sehen den Ordner)
-- Apps mit einem Klick installieren und entfernen
-<!--
-Entwurf bei 43a39a4 geschrieben; erst in echte Bullets umwandeln, wenn
-der nächste ISO-Test beide Abläufe auf echter Hardware bestätigt.
-Englische Kopie liegt in _index.md.
-- Eine installierte App mit einem Klick aktualisieren (holt das neueste Container-Image)
-- Furtka selbst mit einem Klick aktualisieren — keine Neuinstallation mehr für neue Features
--->
 Wir sind zu zweit und bauen das öffentlich, abends und am Wochenende.
 Es ist früh.
-Mitlesen? Schreib an <a href="mailto:hallo@furtka.org">hallo@furtka.org</a>.
+Mitlesen? Schreib an <hallo@furtka.org>.

@@ -1,39 +1,33 @@
 ---
 title: "Furtka"
 description: "Open-source home server OS — simple enough for everyone."
-status: "<span class=\"mono\">26.0-alpha</span> — work in progress"
+status: "<span class=\"mono\">26.16-alpha</span> — work in progress"
+# Keep features_today / features_next index-aligned with content/_index.de.md.
+intro: |
+  **Furtka** is an open-source home server OS.
+  Boot from USB, click through a wizard, and any old computer
+  turns into a private cloud for your household — with your own apps,
+  your own name on the network, your own data.
+  The goal is simple: **your dad should be able to set this up.**
+features_today_label: "What works today"
+features_today:
+  - "Boot from USB stick and install Furtka onto the hard drive"
+  - "A wizard asks for name, user and network — done"
+  - "Then: open the control page in your browser"
+  - "First app: **file sharing on the home network** (everyone on Wi-Fi sees the folder)"
+  - "Install and remove apps with one click"
+  - "Update an installed app with one click (pulls the newest container image)"
+  - "Update Furtka itself with one click — no reinstalling for new features"
+features_next_label: "What's coming next"
+features_next:
+  - "Apps for photos, files, smart home, game streaming and media"
+  - "Plainer language in the setup wizard"
+  - "Secure connection on your home network (no browser warning)"
+  - "Linking several servers together"
 ---
-**Furtka** is an open-source home server OS.
-Boot from USB, click through a wizard, and any old computer
-turns into a private cloud for your household — with your own apps,
-your own name on the network, your own data.
-The goal is simple: **your dad should be able to set this up.**
-### What's coming next
-- Apps for photos, files, smart home, game streaming and media
-- Plainer language in the setup wizard
-- Secure connection on your home network (no browser warning)
-- Linking several servers together
-### What works today
-- Boot from USB stick and install Furtka onto the hard drive
-- A wizard asks for name, user and network — done
-- Then: open the control page in your browser
-- First app: **file sharing on the home network** (everyone on Wi-Fi sees the folder)
-- Install and remove apps with one click
-<!--
-Drafted while shipping 43a39a4; promote to real bullets once the next ISO
-test confirms both flows on real hardware (see AUDIT + Phase-2 plan).
-Matching German copy sits in _index.de.md.
-- Update an installed app with one click (pulls the newest container image)
-- Update Furtka itself with one click — no reinstalling for new features
--->
 We're two people building it in public on evenings and weekends.
 It's early.
-Want to follow along? Write to <a href="mailto:hallo@furtka.org">hallo@furtka.org</a>.
+Want to follow along? Write to <hallo@furtka.org>.

@@ -0,0 +1,79 @@
---
title: "Datenschutzerklärung"
translationKey: "privacy"
sitemap:
priority: 0.2
---
### Kurzfassung
Diese Website setzt **keine Cookies**, lädt **keine Schriften oder
Skripte von Drittanbietern**, bindet **keine Analyse- oder
Tracking-Dienste** ein und enthält **keine externen Einbettungen**
(YouTube, Maps, Social-Media-Buttons, …). Technisch anfallend sind
ausschließlich kurzfristige Server-Zugriffsprotokolle.
### Verantwortlicher
<address>
Daniel Maksymilian Syrnicki<br>
Hauptstraße 35<br>
55569 Monzingen<br>
Deutschland<br>
E-Mail: <a href="mailto:hallo@furtka.org">hallo@furtka.org</a>
</address>
### Server-Logfiles
Beim Aufruf der Website werden technisch notwendige Daten vom Browser an
den Server übermittelt und dort in Logdateien gespeichert:
- Datum und Uhrzeit des Aufrufs
- IP-Adresse (in gekürzter Form)
- aufgerufene URL
- HTTP-Statuscode und übertragene Datenmenge
- Referrer (falls vom Browser gesendet)
- User-Agent-Kennung des Browsers
**Zweck:** Auslieferung der Website und Abwehr missbräuchlicher Zugriffe.
**Rechtsgrundlage:** Art. 6 Abs. 1 lit. f DSGVO (berechtigtes Interesse
an Betrieb und Sicherheit).
**Speicherdauer:** maximal 30 Tage, danach automatische Löschung.
**Empfänger:** keine Weitergabe. Die Website läuft auf eigener
Infrastruktur; es gibt keinen externen Auftragsverarbeiter.
**Drittlandübermittlung:** keine.
### Cookies und Tracking
Keine. Es werden keine Cookies gesetzt, kein LocalStorage oder
SessionStorage verwendet und keine Tracking- oder Analyse-Dienste
eingebunden. Eine Einwilligung nach § 25 TDDDG ist daher nicht
erforderlich.
### Ihre Rechte
Sie haben jederzeit das Recht auf:
- Auskunft über die zu Ihrer Person gespeicherten Daten (Art. 15 DSGVO)
- Berichtigung unrichtiger Daten (Art. 16 DSGVO)
- Löschung (Art. 17 DSGVO)
- Einschränkung der Verarbeitung (Art. 18 DSGVO)
- Datenübertragbarkeit (Art. 20 DSGVO)
- Widerspruch gegen die Verarbeitung (Art. 21 DSGVO)
Anfragen hierzu richten Sie bitte per E-Mail an hallo@furtka.org.
### Beschwerderecht
Sie haben das Recht, sich bei einer Datenschutz-Aufsichtsbehörde zu
beschweren. Zuständig ist:
**Der Landesbeauftragte für den Datenschutz und die Informationsfreiheit
Rheinland-Pfalz**
Hintere Bleiche 34, 55116 Mainz
Website: <https://www.datenschutz.rlp.de>
### Stand
Diese Erklärung ist aktuell gültig und wurde zuletzt am 18.04.2026
aktualisiert.

@@ -0,0 +1,80 @@
---
title: "Privacy"
translationKey: "privacy"
url: /privacy/
sitemap:
priority: 0.2
---
### In short
This website sets **no cookies**, loads **no third-party fonts or
scripts**, embeds **no analytics or tracking services**, and contains
**no external embeds** (YouTube, Maps, social buttons, …). The only
technical data collected is short-lived server access logs.
### Controller
<address>
Daniel Maksymilian Syrnicki<br>
Hauptstraße 35<br>
55569 Monzingen<br>
Germany<br>
Email: <a href="mailto:hallo@furtka.org">hallo@furtka.org</a>
</address>
### Server logs
When you load a page, your browser sends technically necessary data to
the server, which stores it in log files:
- Date and time of the request
- IP address (in shortened form)
- Requested URL
- HTTP status code and bytes transferred
- Referrer (if sent by the browser)
- User-agent string
**Purpose:** serving the website and defending against abusive traffic.
**Legal basis:** Art. 6(1)(f) GDPR — legitimate interest in operation
and security.
**Retention:** up to 30 days, then automatically deleted.
**Recipients:** none. The site runs on our own infrastructure; no
external processor is involved.
**Transfers outside the EU/EEA:** none.
### Cookies and tracking
None. No cookies are set, no localStorage or sessionStorage is used, and
no tracking or analytics services are embedded. Because of this, no
consent under § 25 TDDDG (German implementation of the ePrivacy
directive) is required.
### Your rights
You have the right at any time to:
- Obtain information about the data stored about you (Art. 15 GDPR)
- Have inaccurate data corrected (Art. 16 GDPR)
- Have your data erased (Art. 17 GDPR)
- Have processing restricted (Art. 18 GDPR)
- Receive your data in a portable format (Art. 20 GDPR)
- Object to processing (Art. 21 GDPR)
Send requests to hallo@furtka.org.
### Right to complain
You have the right to file a complaint with a data protection
supervisory authority. The competent one is:
**Der Landesbeauftragte für den Datenschutz und die Informationsfreiheit
Rheinland-Pfalz**
Hintere Bleiche 34, 55116 Mainz, Germany
Website: <https://www.datenschutz.rlp.de>
### Last updated
This statement was last updated on 2026-04-18.
The German version of this privacy statement is the legally binding one.

@@ -0,0 +1,26 @@
---
title: "Impressum"
translationKey: "imprint"
sitemap:
priority: 0.2
---
Angaben gemäß § 5 DDG.
<address>
<strong>Daniel Maksymilian Syrnicki</strong><br>
Hauptstraße 35<br>
55569 Monzingen<br>
Deutschland
</address>
### Kontakt
- E-Mail: hallo@furtka.org
- Forgejo-Issues: <https://forgejo.sourcegate.online/daniel/furtka/issues>
### Hinweis zum Projekt
Furtka ist ein privates Open-Source-Projekt. Es ist kein Unternehmen,
bietet keine kostenpflichtigen Leistungen an und nimmt keine Zahlungen
entgegen. Der Quelltext steht unter der AGPL-3.0.

@@ -0,0 +1,29 @@
---
title: "Imprint"
translationKey: "imprint"
url: /imprint/
sitemap:
priority: 0.2
---
Information pursuant to § 5 DDG (German Digital Services Act).
<address>
<strong>Daniel Maksymilian Syrnicki</strong><br>
Hauptstraße 35<br>
55569 Monzingen<br>
Germany
</address>
### Contact
- Email: hallo@furtka.org
- Forgejo issues: <https://forgejo.sourcegate.online/daniel/furtka/issues>
### Project note
Furtka is a private open-source project. It is not a company, offers no
paid services, and does not accept payments. The source code is licensed
under AGPL-3.0.
The German version of this imprint is the legally binding one.

website/deploy-ci.sh Executable file
@@ -0,0 +1,27 @@
#!/usr/bin/env bash
# Auto-deploy path run by .forgejo/workflows/deploy-site.yml inside the
# self-hosted runner — which is forge-runner-01, the actual web server.
# Same effect as deploy.sh but without the SSH hop: everything is local.
#
# Requires `rsync` and `hugo` on PATH. The workflow apk-installs both
# before invoking this script.
set -euo pipefail
HERE="$(cd "$(dirname "$0")" && pwd)"
SRCROOT="${FURTKA_SRCROOT:-/srv/furtka-site}"
WEBROOT="${FURTKA_WEBROOT:-/var/www/furtka.org}"
echo "==> rsync website/ → $SRCROOT"
rsync -az --delete \
--exclude='.hugo_build.lock' \
--exclude='public/' \
--exclude='resources/' \
--exclude='deploy.sh' \
--exclude='deploy-ci.sh' \
"$HERE/" "$SRCROOT/"
echo "==> hugo build → $WEBROOT"
cd "$SRCROOT"
hugo --minify --cleanDestinationDir -d "$WEBROOT"
echo "OK: deployed to https://furtka.org/"

@@ -1,6 +1,12 @@
 #!/usr/bin/env bash
-# Deploy furtka.org to forge-runner-01.
+# Manual deploy of furtka.org to forge-runner-01.
 # Rsyncs the Hugo source up to the VM and builds it in-place.
+#
+# Normal path is now the `Deploy site` Forgejo workflow, which auto-fires
+# on every push-to-main that touches website/** — see
+# .forgejo/workflows/deploy-site.yml. Keep this script for out-of-band
+# pushes (testing from a local branch, recovering from a CI outage,
+# whatever). The in-CI equivalent that skips the SSH hop is deploy-ci.sh.
 set -euo pipefail
 HOST="${FURTKA_HOST:-forge-runner}"

@@ -6,7 +6,7 @@ enableRobotsTXT = true
 [params]
 description = "Open-source home server OS — simple enough for everyone."
-version = "26.0-alpha"
+version = "26.16-alpha"
 contactEmail = "hallo@furtka.org"
 [markup.goldmark.renderer]

@@ -1,13 +1,15 @@
 <!DOCTYPE html>
-<html lang="{{ .Site.Language.Lang }}">
+<html lang="{{ .Site.Language.Lang }}" class="no-js">
 <head>
 {{ partial "head.html" . }}
 </head>
 <body>
+{{ if .IsHome }}<canvas id="scene" class="scene-canvas" aria-hidden="true"></canvas>{{ end }}
 {{ partial "header.html" . }}
 <main class="container">
 {{ block "main" . }}{{ end }}
 </main>
 {{ partial "footer.html" . }}
+{{ if .IsHome }}{{ partial "scripts.html" . }}{{ end }}
 </body>
 </html>

@@ -2,13 +2,46 @@
 <article class="home">
 <header class="hero">
 {{ with .Params.status }}
-<p class="status-chip">{{ . | safeHTML }}</p>
+<p class="status-chip reveal">{{ . | safeHTML }}</p>
 {{ end }}
-<h1>{{ .Title }}</h1>
-{{ with site.Params.description }}<p class="lede">{{ . }}</p>{{ end }}
+<h1 class="reveal">{{ .Title }}</h1>
+{{ with site.Params.description }}<p class="lede reveal">{{ . }}</p>{{ end }}
+<p class="cta-row reveal">
+<a class="cta cta--primary" href="https://forgejo.sourcegate.online/daniel/furtka/releases">
+{{ if eq site.Language.Lang "de" }}Neuestes Release{{ else }}Latest release{{ end }}
+<span aria-hidden="true"></span>
+</a>
+</p>
 </header>
-<div class="prose">
-{{ .Content }}
-</div>
+{{ with .Params.intro }}
+<section class="intro">{{ . | markdownify }}</section>
+{{ end }}
+{{ if .Params.features_today }}
+<section class="feature-section">
+{{ with .Params.features_today_label }}<p class="section-eyebrow">{{ . }}</p>{{ end }}
+<div class="feature-grid">
+{{ range .Params.features_today }}
+<article class="feature-card" data-gsap="card">{{ . | markdownify }}</article>
+{{ end }}
+</div>
+</section>
+{{ end }}
+{{ if .Params.features_next }}
+<section class="feature-section">
+{{ with .Params.features_next_label }}<p class="section-eyebrow">{{ . }}</p>{{ end }}
+<div class="feature-grid">
+{{ range .Params.features_next }}
+<article class="feature-card" data-gsap="card">{{ . | markdownify }}</article>
+{{ end }}
+</div>
+</section>
+{{ end }}
+{{ with .Content }}
+<section class="prose closer">{{ . }}</section>
+{{ end }}
 </article>
 {{ end }}

@@ -5,5 +5,12 @@
 · AGPL-3.0 ·
 <a href="mailto:{{ site.Params.contactEmail }}">{{ site.Params.contactEmail }}</a>
 </p>
+<p class="kicker">
+{{ if eq site.Language.Lang "de" -}}
+<a href="/de/impressum/">Impressum</a> · <a href="/de/datenschutz/">Datenschutz</a>
+{{- else -}}
+<a href="/imprint/">Imprint</a> · <a href="/privacy/">Privacy</a>
+{{- end }}
+</p>
 </div>
 </footer>

@@ -1,7 +1,10 @@
 <meta charset="utf-8">
 <meta name="viewport" content="width=device-width, initial-scale=1">
+<script>document.documentElement.classList.replace('no-js','js');</script>
 <title>{{ if .IsHome }}{{ site.Title }} — {{ site.Params.description }}{{ else }}{{ .Title }} · {{ site.Title }}{{ end }}</title>
 <meta name="description" content="{{ with .Params.description }}{{ . }}{{ else }}{{ site.Params.description }}{{ end }}">
+<meta name="theme-color" content="#f7f6f3" media="(prefers-color-scheme: light)">
+<meta name="theme-color" content="#0d0d0f" media="(prefers-color-scheme: dark)">
 <link rel="icon" type="image/svg+xml" href="/favicon.svg">
 <meta property="og:site_name" content="{{ site.Title }}">
 <meta property="og:title" content="{{ if .IsHome }}{{ site.Title }}{{ else }}{{ .Title }} · {{ site.Title }}{{ end }}">

@@ -0,0 +1,12 @@
{{ $three := resources.Get "js/vendor/three.min.js" | fingerprint }}
{{ $gsap := resources.Get "js/vendor/gsap.min.js" | fingerprint }}
{{ $st := resources.Get "js/vendor/ScrollTrigger.min.js" | fingerprint }}
{{ $lenis := resources.Get "js/vendor/lenis.min.js" | fingerprint }}
{{ $scene := resources.Get "js/scene.js" | fingerprint }}
{{ $anim := resources.Get "js/animations.js" | fingerprint }}
<script defer src="{{ $three.RelPermalink }}" integrity="{{ $three.Data.Integrity }}"></script>
<script defer src="{{ $gsap.RelPermalink }}" integrity="{{ $gsap.Data.Integrity }}"></script>
<script defer src="{{ $st.RelPermalink }}" integrity="{{ $st.Data.Integrity }}"></script>
<script defer src="{{ $lenis.RelPermalink }}" integrity="{{ $lenis.Data.Integrity }}"></script>
<script defer src="{{ $scene.RelPermalink }}" integrity="{{ $scene.Data.Integrity }}"></script>
<script defer src="{{ $anim.RelPermalink }}" integrity="{{ $anim.Data.Integrity }}"></script>
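For reference, the `fingerprint` pipe in the partial above defaults to SHA-256, and the `integrity` value it exposes as `.Data.Integrity` has the standard SRI shape (`sha256-` plus the base64-encoded digest). A sketch of computing the same value by hand, assuming `openssl` is on PATH (the `sri_sha256` helper name is ours, not Hugo's):

```shell
#!/usr/bin/env bash
# Compute a subresource-integrity value the way Hugo's `fingerprint`
# pipe does by default: sha256 digest, base64-encoded, "sha256-" prefix.
set -euo pipefail

sri_sha256() {
  printf 'sha256-%s' "$(openssl dgst -sha256 -binary "$1" | openssl base64 -A)"
}

# Example against a throwaway file:
tmp="$(mktemp)"
printf 'console.log("hi");' >"$tmp"
sri_sha256 "$tmp"; echo
rm -f "$tmp"
```

Handy for spot-checking that a deployed page's `integrity` attribute matches the vendored file on disk.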