Compare commits

...

22 commits

Author SHA1 Message Date
ee132712be docs: sync READMEs with 26.15 HTTPS opt-in + boot-USB filter
All checks were successful
Build ISO / build-iso (push) Successful in 24m38s
CI / lint (push) Successful in 1m1s
CI / test (push) Successful in 2m42s
CI / validate-json (push) Successful in 58s
CI / markdown-links (push) Successful in 28s
- README roadmap: Local HTTPS Phase 1 entry now reflects the 26.15
  opt-in model (default off, toggle in /settings) instead of the
  26.4 auto-trust story.
- README + iso/README: boot-USB filtering is no longer a TODO; both
  files now describe the implemented `findmnt`/`PKNAME` behaviour.
- iso/README rough edges: drop the boot-USB bullet (closed) and
  re-word the wizard-still-HTTP-only bullet to match the 26.15 toggle
  flow (it was a stale dup of the same line under it).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-28 12:09:33 +02:00
1193504a1e perf(site): gzip CSS, JS, SVG and fonts on the furtka.org nginx
Default nginx only gzips text/html, so the homepage HTML was the only
asset coming back compressed. The ~600 KB three.min.js bundle (and the
hashed CSS) were being shipped uncompressed across the public openresty
proxy. `gzip_types` now covers css/js/json/xml/svg/woff2.

Needs `sudo ops/nginx/setup-vm.sh` on forge-runner-01 to take effect —
the site-deploy workflow only rebuilds Hugo, it doesn't touch the
nginx config.
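A minimal sketch of the change, assuming a stock nginx `http {}` context; the exact directive list lives in `ops/nginx/` and may differ:

```nginx
# text/html is always compressed once gzip is on; everything else must be
# listed explicitly in gzip_types (illustrative, matching the commit text).
gzip on;
gzip_types
    text/css
    application/javascript
    application/json
    application/xml
    image/svg+xml
    font/woff2;
```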

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-28 12:09:26 +02:00
65d48c92f8 feat(installer): filter the boot USB out of the install drive picker
On bare-metal installs, `lsblk` reports the USB stick the live ISO
booted from as TYPE=disk, so it showed up in the drive picker
alongside the real install target — a user could in theory pick the
USB they had just booted from. `findmnt /run/archiso/bootmnt` resolves
the boot partition and `lsblk -no PKNAME` walks it up to the parent
disk; that disk is dropped before scoring. On a normal box neither
file nor mountpoint exist and the picker is unchanged.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-28 12:09:19 +02:00
aa7dea0528 feat(site): pimp homepage with animated 3D background and scroll reveals
Some checks failed
CI / lint (push) Successful in 1m24s
CI / test (push) Successful in 2m24s
CI / validate-json (push) Successful in 57s
CI / markdown-links (push) Successful in 29s
Deploy site / deploy (push) Successful in 7s
Build ISO / build-iso (push) Failing after 14m59s
Adopts the visual feel of Pascal's prototype while keeping Furtka's
voice, brand palette, and bilingual structure intact.

What changed
- Three.js wireframe torus-knot behind the hero, color/opacity tied
  to the existing --accent / --scene-opacity CSS vars so light and
  dark modes both work without a scene re-init.
- Scroll-driven camera zoom + core scale + tilt; canvas opacity fades
  past hero so feature cards stay readable.
- GSAP + ScrollTrigger reveal hero on load and stagger feature cards
  in as they enter the viewport. Lenis smooths scroll.
- "What works today" / "What's coming next" lists move from markdown
  bullets into front-matter arrays and render as scroll-reveal cards
  (7 + 4 cards, EN/DE parallel; copy is 1:1 from the original lists).
- Hero scaled up: gradient text on the wordmark (fg → accent),
  drop-shadow glow in dark mode, brighter lede color.
- Primary CTA -> /releases listing on Forgejo (Forgejo has no
  /releases/latest), with a pulsing glow + arrow slide on hover.
- Version bump 26.8-alpha -> 26.15-alpha to match the actual release.

Performance / a11y
- Vendor JS (Three.js r128, GSAP 3.12.2 + ScrollTrigger, Lenis 1.0.33)
  vendored locally under assets/js/vendor/ - no third-party CDN at
  runtime. ~728 KB total, fingerprinted via Hugo's pipeline with SRI.
- Canvas + scripts gated to homepage only ({{ if .IsHome }}); the
  Impressum/Datenschutz pages stay plain.
- prefers-reduced-motion: scene + GSAP early-return, CSS forces cards
  to their resting state. No-JS users see all content.
- All scripts deferred so first paint isn't blocked.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-27 16:14:21 +02:00
1cff22658b feat(auth): rate-limit failed logins with per-(user, IP) lockout
All checks were successful
CI / lint (push) Successful in 1m59s
CI / test (push) Successful in 3m27s
CI / validate-json (push) Successful in 1m56s
CI / markdown-links (push) Successful in 1m24s
Build ISO / build-iso (push) Successful in 26m58s
Ten wrong passwords from the same (username, client-IP) tuple within
15 minutes now return 429 with Retry-After for the next 15 minutes;
authenticate() isn't even called while locked, so the 429 response is
identical whether the password would have been correct — no oracle.

Tuple keying prevents an attacker from one IP from locking the real
admin out of their own box: a different IP (or an ISP reconnect) keeps
them in. The client IP comes from the rightmost X-Forwarded-For entry,
which is what Caddy appends and thus trustworthy (no upstream proxy in
front of Caddy). First-run setup bypasses the lockout — otherwise a
clumsy operator could lock themselves out before an admin exists.

State is in-memory (parallel to SessionStore), so `systemctl restart
furtka` clears a stuck lockout.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-22 17:27:14 +02:00
e68ed279cc fix(https): make HTTPS opt-in to stop the BAD_SIGNATURE trap on fresh installs
All checks were successful
Build ISO / build-iso (push) Successful in 17m23s
CI / lint (push) Successful in 27s
CI / test (push) Successful in 1m2s
CI / validate-json (push) Successful in 24s
CI / markdown-links (push) Successful in 15s
Release / release (push) Successful in 11m34s
Every Furtka since 26.5 shipped a Caddyfile with a
`__FURTKA_HOSTNAME__.local { tls internal }` site block, so every
first boot auto-generated a fresh self-signed CA + intermediate +
leaf. That worked for the first-ever Furtka user, but every reinstall
(or second box on the same LAN) produced a new CA whose intermediate
shared the fixed CN `Caddy Local Authority - ECC Intermediate` with
the previous one. Firefox caches intermediates by CN across profiles
— even private windows share cert9.db — so any visitor who had
trusted an older Furtka's CA got a cached intermediate with
mismatched keys when they hit the new box, producing
`SEC_ERROR_BAD_SIGNATURE`. Unlike UNKNOWN_ISSUER, Firefox has NO
"Advanced → Accept Risk" bypass for BAD_SIGNATURE, so fresh-install
boxes were effectively unreachable over HTTPS in any browser that
had ever seen a previous Furtka.

Validated live on the .46 test VM: fresh 26.14 ISO install → Firefox
hits BAD_SIGNATURE on https://furtka.local/ (even in private mode).
Chromium bypasses it via mDNS failure but the issue is the same.
openssl verify on the box confirms the chain is internally valid —
this is purely client-side cache pollution across boxes.

Fix:
- assets/Caddyfile: removed the hostname site block. Default install
  serves :80 only — https://furtka.local connection-refuses, which is
  a normal error every browser handles instead of the unbypassable
  crypto fault. Added top-level import of
  /etc/caddy/furtka-https.d/*.caddyfile so the /settings HTTPS toggle
  can drop a listener snippet there when a user explicitly opts in.
- furtka/https.py: set_force_https now writes TWO snippets atomically
  — the top-level hostname + tls internal block (enables :443) and
  the :80-scoped redirect (forces HTTP→HTTPS). Disable removes both.
  Reload failure rolls both back. Added _read_hostname + _https_snippet_content
  helpers with `/etc/hostname` → 'furtka' fallback so a missing
  hostname file doesn't produce an empty site block Caddy rejects.
- furtka/https.py::status: force_https now reads the listener
  snippet (was reading the redirect snippet). A redirect without a
  listener isn't actually HTTPS being served, so the listener is the
  authoritative "HTTPS is on" signal.
- furtka/updater.py: new _maybe_migrate_preserve_https hook runs
  inside _refresh_caddyfile on the 26.14 → 26.15 transition. If the
  box had the redirect snippet on disk (user had opted into HTTPS
  under the old regime), it writes the new listener snippet too so
  HTTPS keeps working after the Caddyfile swap removes the hostname
  block.
- webinstaller/app.py: post-install creates /etc/caddy/furtka-https.d/
  alongside /etc/caddy/furtka.d/ so the glob import can't trip an
  older Caddy on a missing path during the first reload.

Live-tested on .46: set_force_https(True) writes both snippets, Caddy
reloads, :443 listener comes up with fresh CA, curl -k returns 302,
HTTP 301-redirects. set_force_https(False) removes both snippets
atomically, :443 goes back to connection-refused.

Tests: test_https.py expanded from 13 to 15 cases. Toggle-on asserts
both snippets written + hostname substituted. Toggle-off asserts
both removed. Rollback cases verify BOTH snippets restore on reload
failure. New test_https_snippet_content_has_tls_internal_and_routes
locks the exact shape of the listener block.
test_webinstaller_assets.py: updated two old asserts that assumed
hostname block was in Caddyfile; new test_post_install_creates_https_snippet_dir
guards the new directory.

276 tests pass, ruff check + format clean.

Known remaining wart (documented in CHANGELOG): a browser that
trusted a prior Furtka CA still hits BAD_SIGNATURE on this box's
HTTPS after enabling it, because the fixed intermediate CN is a
Caddy-side limitation. Workaround: clear cert9.db or visit in a
fresh profile. Won't affect end users with one Furtka box ever.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-21 19:30:04 +02:00
26f0424ae3 fix: auth-guard / and /settings, add Logout link to static navs
All checks were successful
Build ISO / build-iso (push) Successful in 17m14s
CI / lint (push) Successful in 26s
CI / test (push) Successful in 1m2s
CI / validate-json (push) Successful in 24s
CI / markdown-links (push) Successful in 15s
Release / release (push) Successful in 11m26s
Since 26.11 shipped login, two of the three nav pages were secretly
unauthenticated. The Caddyfile only reverse-proxied /api/*, /apps*,
/login*, /logout* to the Python auth-gated handler. Everything else —
including / (landing page) and /settings/ — fell through to Caddy's
catch-all file_server straight out of assets/www/, skipping the
session check entirely.

LAN visitor effect: they could read the box's hostname, IP, Furtka
version, uptime, and see all the Update-now / Reboot / HTTPS-toggle
buttons on /settings/. The API calls those buttons fired were
themselves 401-gated so nothing actually happened — but the info leak
plus "looks open" UX was real. Caught in the 26.13 SSH test session
when the user noticed Logout only appeared in the nav on /apps, and
not on / or /settings/.

Fix:
- Caddyfile: new `handle /settings*` and `handle /` blocks in the
  shared `(furtka_routes)` snippet reverse-proxy to localhost:7000,
  so both hit the Python auth-guard before the HTML goes out.
- api.py: new `_serve_static_www(relative_path)` helper reads
  assets/www/{index.html, settings/index.html} with a path-traversal
  clamp (resolved path must stay under static_www_dir). `do_GET`
  routes `/` and `/settings[/]` to it. Removed the `/` branch from
  the old combined-with-/apps line — those are different pages now.
- paths.py: new `static_www_dir()` helper with `FURTKA_STATIC_WWW`
  env override for tests.
- assets/www/*.html: both nav bars get the Logout link + a shared
  `doLogout()` inline script matching the _HTML pattern. Users never
  see the link unauthed (the Python handler 302s them before the
  page renders), but authed users get consistent navigation across
  all three pages.

Tests: 5 new cases in test_api.py — unauth / redirects, unauth
/settings redirects (both trailing-slash and not), authed / serves
index.html, authed /settings serves settings/index.html,
regression guard that / and /apps serve different content.
Existing test updated (the one that used / as a proxy for /apps).

Static /style.css, /rootCA.crt, /status.json, /furtka.json,
/update-state.json stay served by Caddy's catch-all — those are
public by design (login page needs style.css, fresh users need the
CA to trust HTTPS, runtime JSON is metadata not creds).

272 tests pass, ruff check + format clean.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-21 18:16:42 +02:00
8c1fd1da2b fix: unbreak upgrade path + install-lock race
All checks were successful
Build ISO / build-iso (push) Successful in 17m28s
CI / lint (push) Successful in 27s
CI / test (push) Successful in 59s
CI / validate-json (push) Successful in 23s
CI / markdown-links (push) Successful in 15s
Release / release (push) Successful in 11m38s
Three interlocking issues that made 26.11/26.12 effectively
un-upgradable from pre-auth versions without manual pacman +
symlink surgery. Caught while SSH-testing the .196 VM which landed
on a rollback loop after every Update-now click.

1. auth.py imported werkzeug.security, but the target system runs
   core as bare system Python — neither flask nor werkzeug are
   pip-installed. Fresh 26.11+ boxes died on import. Replaced with
   a 50-line stdlib `furtka/passwd.py` using hashlib.pbkdf2_hmac
   for new hashes and parsing werkzeug's `scrypt:N:r:p$salt$hex`
   format for backward-read so existing users.json survives.

2. updater._health_check pinged /api/apps expecting 200. Post-
   auth, /api/apps returns 401 for unauth requests → HTTPError
   caught as URLError → retry loop → 30s timeout → rollback. Now
   any 2xx-4xx counts as "server alive"; only 5xx / connection
   errors fail. Server responding at all is proof it came back up.

3. _do_install released the fcntl lock between sync pre-validation
   and the systemd-run dispatch. A second POST could slip in,
   pass the lock check, return 202, and leave its install-bg child
   to die silently on the in-child lock. Now the API also reads
   install-state.json and refuses 409 on non-terminal stages —
   the state file is the reliable signal, the fcntl lock is
   defence in depth.

Test coverage:
- tests/test_passwd.py (new, 6 cases): roundtrip, salt uniqueness,
  format shape, werkzeug scrypt backward-compat against a real
  hash captured from the .196 box, malformed + non-string
  rejection.
- tests/test_updater.py: +3 cases for _health_check — 4xx=healthy,
  5xx=unhealthy, URLError retry loop.
- tests/test_api.py: +2 cases for install 409 on non-terminal
  state + 202 after terminal.

All 267 tests green, ruff check + format clean.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-21 17:03:28 +02:00
f3cd9e963c feat(install): async background install with progress polling
All checks were successful
Build ISO / build-iso (push) Successful in 17m24s
CI / lint (push) Successful in 26s
CI / test (push) Successful in 43s
CI / validate-json (push) Successful in 24s
CI / markdown-links (push) Successful in 16s
Release / release (push) Successful in 11m34s
POST /api/apps/install now returns 202 Accepted after the synchronous
pre-validation (resolve source, copy files, write .env, check for
placeholder secrets, validate path-type settings). The docker-facing
phases (compose pull → ensure volumes → compose up) are dispatched as
a background systemd-run unit (furtka-install-<app>) that writes stage
transitions to /var/lib/furtka/install-state.json. The UI polls
GET /api/apps/install/status every 1.5s and re-labels the modal
submit button — "Image wird heruntergeladen…" →
"Speicherbereiche werden erstellt…" → "Container wird gestartet…" —
instead of sitting dead on "Installing…" for 30+ seconds on large
images like Jellyfin.

Mirrors the exact shape of /api/catalog/sync/apply and
/api/furtka/update/apply: same fcntl lock, same atomic state-file
writes, same terminal-state poll loop ("done" | "error"). New CLI
subcommand `furtka app install-bg <name>` is what systemd-run invokes;
it's hidden from --help because regular CLI users still want the
synchronous `furtka app install <name>`.

Reinstall button on the app list polls too — after dispatch, its text
reflects the background stage until terminal, matching the modal
flow.

Tests:
- tests/test_install_runner.py (new, 9 cases): state roundtrip, lock
  contention, happy-path phase ordering, error writes on pull/up
  failure, lock release on both terminal outcomes.
- tests/test_api.py: new no_systemd_run fixture stubs subprocess.run;
  existing install tests adapted to 202 response; new tests for 409
  lock contention and the status endpoint.
- tests/test_cli.py: install-bg dispatches correctly and returns 1
  on failure with journald-friendly stderr.

256 tests pass, ruff check + format clean.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-21 15:50:49 +02:00
470823b347 feat(auth): login-guard the Furtka UI with a cookie session
All checks were successful
Build ISO / build-iso (push) Successful in 17m30s
CI / lint (push) Successful in 27s
CI / test (push) Successful in 43s
CI / validate-json (push) Successful in 31s
CI / markdown-links (push) Successful in 15s
Release / release (push) Successful in 11m38s
One-admin, one-password model — all of /apps, /api/*, /, and
/settings/ now require a signed-in session. Passwords are werkzeug
PBKDF2-hashed in /var/lib/furtka/users.json (mode 0600, atomic write
via the same .tmp+chmod+rename dance installer.write_env uses).
Sessions are secrets.token_urlsafe(32) tokens held in a module-level
SessionStore dict (thread-safe lock included for when we swap to
ThreadingHTTPServer). Cookies are HttpOnly, SameSite=Strict, and
Path=/, with Secure set when X-Forwarded-Proto from Caddy says HTTPS.

Two bootstrap paths:
  * Fresh install — webinstaller step-1 collects Linux user + password,
    the chroot post-install step hashes the password and writes
    users.json on the target partition. First browser visit lands on
    /login with the account already present.
  * Upgrade from 26.10-alpha — no users.json yet, so /login detects
    setup_needed() and renders a first-run setup form. POST creates
    the admin and immediately logs in.

POST /logout revokes the server session and clears the cookie.
Unauthenticated HTML requests 302 to /login; unauthenticated API
requests 401 JSON so fetch() callers see a clean error. A sleep(0.5)
on failed logins is the brute-force speed bump on top of werkzeug's
~600k-iter PBKDF2.

Caddyfile gains /login* and /logout* handle blocks in the shared
furtka_routes snippet so both :80 and the HTTPS hostname block
forward the auth endpoints to localhost:7000. Without this Caddy
would 404 from the static file server.
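The session and cookie model above can be sketched as follows; the TTL and names are illustrative, not the real module:

```python
import secrets
import time
from threading import Lock

SESSION_TTL = 24 * 3600  # illustrative; the real expiry isn't stated above

class SessionStore:
    """Module-level token store with the thread-safety lock mentioned."""

    def __init__(self) -> None:
        self._lock = Lock()
        self._sessions: dict[str, float] = {}  # token -> expiry timestamp

    def create(self) -> str:
        token = secrets.token_urlsafe(32)
        with self._lock:
            self._sessions[token] = time.time() + SESSION_TTL
        return token

    def valid(self, token: str) -> bool:
        with self._lock:
            expiry = self._sessions.get(token)
            if expiry is None or expiry < time.time():
                self._sessions.pop(token, None)
                return False
            return True

    def revoke(self, token: str) -> None:
        with self._lock:
            self._sessions.pop(token, None)

def session_cookie(token: str, secure: bool) -> str:
    """Set-Cookie value with the attributes listed above; Secure is added
    only when X-Forwarded-Proto said HTTPS."""
    cookie = f"session={token}; HttpOnly; SameSite=Strict; Path=/"
    return cookie + "; Secure" if secure else cookie
```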

Test surface:
  * tests/test_auth.py (new, 19 cases): hash roundtrip, users.json
    I/O, session create/lookup/expire/revoke.
  * tests/test_api.py: new admin_session fixture; existing HTTP
    tests updated to send the cookie; new tests cover login setup,
    login success, wrong-password 401, logout revocation, and the
    guard's 302/401 split.
  * tests/test_webinstaller_assets.py: new case that unpacks the
    users.json _write_file_cmd body and verifies the werkzeug hash
    round-trips against the step-1 password.

Bumped version to 26.11-alpha and rolled CHANGELOG. Also folded in
the ruff-format fix that was pending from 26.10-alpha's lint red.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-21 13:01:17 +02:00
577c2469f7 style(tests): reflow OPTIONAL_PATH_MANIFEST to match ruff format
All checks were successful
Build ISO / build-iso (push) Successful in 20m27s
CI / lint (push) Successful in 29s
CI / test (push) Successful in 1m3s
CI / validate-json (push) Successful in 46s
CI / markdown-links (push) Successful in 23s
Fixes the lint failure on the 26.10-alpha commit — ruff format wanted
the single-item settings list on one line rather than spread over
three. Pure formatting, no behaviour change.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-21 11:56:52 +02:00
e8c5317660 chore: release 26.10-alpha
Some checks failed
CI / lint (push) Failing after 50s
CI / test (push) Successful in 1m6s
CI / validate-json (push) Successful in 42s
CI / markdown-links (push) Successful in 22s
Release / release (push) Successful in 13m27s
Ships the new path-type setting (the schema extension that unlocks
host bind mounts for Jellyfin / Paperless / Nextcloud / Immich-class
apps), server-side path validation, app-author docs for the new type,
and the remove-USB-stick hint on the installer's reboot screen.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-21 11:48:07 +02:00
474af8fb2d feat(installer): remove-USB-stick hint on the reboot screen
Some checks failed
CI / lint (push) Waiting to run
CI / test (push) Waiting to run
CI / validate-json (push) Waiting to run
CI / markdown-links (push) Waiting to run
Build ISO / build-iso (push) Failing after 4m15s
Adds a bold "Remove the USB stick now" line before the reboot, plus a
muted fallback paragraph pointing at the BIOS one-time boot menu keys
(F11/F12/Esc) for when removal isn't enough. Caught on the 2026-04-21
Medion bare-metal test: the box didn't boot the installed system on
first reboot and required manual BIOS boot-order changes, which
non-technical users won't know how to do.

Template-only change. No new CSS, no new code paths — <kbd> uses
browser defaults, <strong> keeps the hierarchy readable.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-21 11:46:38 +02:00
7c6da3d051 docs(apps): document the new path setting type
Some checks failed
CI / lint (push) Failing after 38s
CI / test (push) Successful in 54s
CI / validate-json (push) Successful in 34s
CI / markdown-links (push) Successful in 19s
Covers the path-type declaration in manifest.json, the companion
compose bind-mount pattern (${MEDIA_PATH}:/media:ro), and the full
server-side validation rules the installer applies (absolute, exists,
is-directory, resolve-then-deny-list, traversal caught).

Clarifies the mental split between manifest.volumes (internal state
the app owns) and path settings (user data the container mounts and
usually reads without owning), and recommends :ro as the default for
consumer-only mounts.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-21 11:43:09 +02:00
04762f5dd1 feat(manifest): add 'path' setting type with server-side validation
Some checks failed
CI / lint (push) Waiting to run
CI / test (push) Waiting to run
CI / validate-json (push) Waiting to run
CI / markdown-links (push) Waiting to run
Build ISO / build-iso (push) Failing after 4m34s
Apps can now declare a setting with "type": "path" whose value is an
absolute host filesystem path. Compose bind-mounts it via standard .env
substitution (${MEDIA_PATH}:/media) — no reconciler changes needed.
Unlocks media/data-heavy apps (Jellyfin, later Paperless, Nextcloud,
Immich) that point at existing user data instead of copying it into a
Docker volume.

Install/update refuses values that aren't absolute, don't exist, aren't
directories, or resolve into a system-path deny-list (/, /etc, /root,
/boot, /proc, /sys, /dev, /bin, /sbin, /usr/bin, /usr/sbin,
/var/lib/furtka). Path.resolve() is applied before the deny-list check
so /mnt/../etc traversal is caught too. Error messages surface in the
existing install/edit modal.

UI: path settings render as a text input with a /mnt/… placeholder.
The manifest's `description` field carries the actual hint ("Absoluter
Pfad zu deinem Filme-Ordner, z.B. /mnt/media"). No new form
components, no new API routes.

Tests: 9 new cases for install + update path validation; 1 new case
for manifest schema accepting the path type. 211 total passing.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-21 11:39:15 +02:00
c7e7c8b1e5 chore: release 26.9-alpha
All checks were successful
Build ISO / build-iso (push) Successful in 20m49s
CI / lint (push) Successful in 1m13s
CI / test (push) Successful in 48s
CI / validate-json (push) Successful in 44s
CI / markdown-links (push) Successful in 16s
Release / release (push) Successful in 13m31s
Three small fixes surfaced by the 26.8 QA pass on fresh VM .161:

- Landing-page app tiles now open external `open_url` links in a new
  tab, matching /apps Open-button behaviour. Without this a Kuma click
  on the home screen replaced Furtka itself.
- `scripts/publish-release.sh` treats the ISO upload as best-effort;
  a Forgejo-proxy 504 no longer kills the whole release after tarball
  + sha + release.json are already uploaded.
- `furtka app list --json` now mirrors /api/apps — includes
  `description_long`, `open_url`, and `settings` that the previous
  slim projection dropped.
2026-04-20 18:51:30 +02:00
cf93ef44cb chore: release 26.8-alpha (power actions, supersedes orphan 26.7 tag)
Some checks failed
Build ISO / build-iso (push) Successful in 26m56s
Deploy site / deploy (push) Successful in 23s
CI / lint (push) Successful in 34s
CI / test (push) Successful in 1m4s
CI / validate-json (push) Successful in 51s
CI / markdown-links (push) Successful in 28s
Release / release (push) Failing after 7m38s
Adds Reboot + Shut down buttons on /settings, backed by a new
POST /api/furtka/power endpoint that kicks a delayed `systemd-run
--on-active=3s systemctl {reboot|poweroff}` so the HTTP response
flushes before the kernel loses network. Both buttons open a native
confirm dialog; after reboot, the page polls /furtka.json until the
box is back and reloads itself.
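The endpoint's dispatch can be sketched as a two-piece helper (names illustrative); the 3 s delay is what lets the HTTP response flush before the kernel drops the network:

```python
import subprocess

ACTIONS = ("reboot", "poweroff")

def power_command(action: str) -> list[str]:
    """Build the delayed systemd-run invocation described above."""
    if action not in ACTIONS:
        raise ValueError(f"unknown power action: {action}")
    return ["systemd-run", "--on-active=3s", "systemctl", action]

def dispatch_power(action: str) -> None:
    # Fire the transient unit; the 3 s --on-active grace period lets the
    # HTTP 200 reach the browser first.
    subprocess.run(power_command(action), check=True)
```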

26.7-alpha was tagged on 5d8ac63 but release.yml never fired for that
tag (Forgejo race with the concurrent main push; re-push of the deleted
tag didn't wake the workflow either). 26.8 supersedes it and carries
the same open_url + Open-button content plus the power actions.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-20 16:00:19 +02:00
5d8ac63d9f chore: release 26.7-alpha
Some checks failed
Deploy site / deploy (push) Waiting to run
Build ISO / build-iso (push) Has been cancelled
CI / lint (push) Successful in 1m26s
CI / test (push) Successful in 1m18s
CI / validate-json (push) Successful in 52s
CI / markdown-links (push) Successful in 27s
Release / release (push) Has been cancelled
Ships the open_url manifest field + the Open button in /apps and on
the landing page, replacing the fileshare-only hardcoded deep-link
with a generalised {host}-templated URL. Fileshare seed manifest
bumps to 0.1.2; the furtka-apps catalog release that goes with this
adds matching open_url values for fileshare + uptime-kuma.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-20 15:44:01 +02:00
018f2e20b0 chore: release 26.6-alpha
All checks were successful
Build ISO / build-iso (push) Successful in 21m23s
CI / lint (push) Successful in 1m31s
CI / test (push) Successful in 1m20s
CI / validate-json (push) Successful in 48s
CI / markdown-links (push) Successful in 27s
Deploy site / deploy (push) Successful in 8s
Release / release (push) Successful in 24s
Rolls the apps-catalog split, the /settings CSS wrap fix, and the version
bump to 26.6-alpha across pyproject + website copy. Core release tarball
still carries apps/fileshare as the offline first-boot seed; the new
daniel/furtka-apps catalog (tagged 26.6-alpha today) is the authoritative
source on boxes that have synced at least once.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-20 14:49:31 +02:00
3a8fad5185 feat(catalog): on-box apps catalog synced independently of core version
New `furtka catalog sync` pulls the latest daniel/furtka-apps release,
verifies its sha256, extracts under /var/lib/furtka/catalog/, and
atomically swaps into place — so apps can ship without cutting a new
Furtka core release. A daily timer (furtka-catalog-sync.timer, 10 min
post-boot + 24 h with ±6 h jitter) drives the sync; /apps gets a
manual "Sync apps catalog" button that kicks the same code path via a
detached systemd-run unit.

Layout of the new on-box tree:

  /var/lib/furtka/catalog/            synced catalog (survives self-updates)
    ├── VERSION
    └── apps/<name>/ ...
  /var/lib/furtka/catalog-state.json  sync stage + last version, UI-polled
  /run/furtka/catalog.lock            flock so timer + manual click can't race

Resolver precedence (furtka/sources.py): catalog wins over the bundled
seed (/opt/furtka/current/apps/, carried by the core release for offline
first-boot). Installed apps under /var/lib/furtka/apps/ are never auto-
swapped — user clicks Reinstall to move an existing install onto a
newer catalog version; settings merge-preserved via the existing
installer.install_from path.

New files:
- furtka/_release_common.py — shared Forgejo/tarball primitives lifted
  from furtka/updater.py. Both modules now import from here; updater's
  behaviour and public API unchanged.
- furtka/catalog.py — check_catalog(), sync_catalog() with staging +
  manifest validation + atomic rename. Refuses bad sha256 / broken
  manifests and leaves the live catalog intact on any failure path.
- furtka/sources.py — resolve_app_name() / list_available() abstraction
  used by installer.resolve_source and api._list_available.
- assets/systemd/furtka-catalog-sync.{service,timer} — oneshot service
  + daily timer. Timer auto-enables on self-update via a one-line
  addition to _link_new_units (fresh installs get enabled via the
  webinstaller's _FURTKA_UNITS list).

API + UI:
- /api/bundled renamed internally to _list_available; endpoint stays as
  a backcompat alias; /api/apps/available is the new canonical name.
  Each list entry carries a `source` field ("catalog" | "bundled").
- POST /api/catalog/sync/check + /apply + GET /api/catalog/status.
- /apps page grows a catalog-status row + Sync button; poll loop
  mirrors the Furtka self-update flow.

CLI: `furtka catalog sync [--check]` + `furtka catalog status` (both
support --json). Old `furtka app install` / `reconcile` / `update` /
`rollback` surfaces are unchanged.

Test gate: 194 tests (170 baseline + 24 new) covering catalog sync
(happy path, sha256 mismatch, invalid manifest, lock contention,
preserves-on-failure) + resolver precedence + api renames. ruff
check + format clean.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-20 14:16:02 +02:00
e7ee1698bd fix(ui): stop SHA-256 fingerprint overflowing the Local HTTPS card
The /settings "CA fingerprint (SHA-256)" value is a 95-char colon-
separated hex string with no whitespace, so CSS had no valid break
points and the value pushed past the card's right edge — visible on
the 192.168.178.23 fresh-install test.

.kv is a two-column grid (max-content 1fr); grid items default to
min-width: auto (= content width), which overrides the 1fr track's
width constraint. min-width: 0 lets the track shrink, and
overflow-wrap: anywhere gives the fingerprint valid break points at
any character. The styling stays scoped to .kv dd so card prose isn't
affected.

Verified live on .23 via hot-patch into /opt/furtka/current/assets/
www/style.css + caddy reload.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-20 13:41:33 +02:00
54357aa2a3 style: ruff format — collapse two-line hostname file path + version loop
All checks were successful
Build ISO / build-iso (push) Successful in 21m29s
CI / lint (push) Successful in 37s
CI / test (push) Successful in 58s
CI / validate-json (push) Successful in 42s
CI / markdown-links (push) Successful in 23s
Format-only diff from `ruff format`. The 26.5-alpha push's CI run failed
on `ruff format --check`; these three files had two-line constructs that
fit on one line at ruff's default line length. No behaviour change.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-20 12:41:58 +02:00
61 changed files with 5358 additions and 399 deletions


@@ -1,27 +1,58 @@
 name: Release
-# Tag-triggered: when `git push origin <version>` lands, this builds the
-# release tarball and publishes it + the sha256 + release.json to the
-# Forgejo releases page for that tag. Boxes then POST /api/furtka/update
-# to pull from here.
+# Tag-triggered: when `git push origin <version>` lands, this builds the
+# release tarball + the live-installer ISO, and publishes them both to
+# the Forgejo releases page. Boxes POST /api/furtka/update to pull the
+# tarball; fresh-install users download the ISO from the release page.
 #
-# Version tags only (pattern matches CalVer like 26.0-alpha, 26.1, 27.0-beta).
-# Documentation / random tags are ignored by the [0-9]* prefix.
+# Runs on the self-hosted runner because iso/build.sh needs privileged
+# docker access (mkarchiso wants root + loop mounts), and because the
+# ubuntu-latest Forgejo hosted runner doesn't carry the docker socket
+# bind-mount the build needs. Self-hosted adds ~5-7 min to the release
+# (ISO build) but keeps the release page self-contained.
+#
+# Version tags only (CalVer like 26.0-alpha, 26.1, 27.0-beta). Random
+# tags are ignored by the [0-9]* prefix.
 on:
   push:
     tags: ['[0-9]*']
 jobs:
   release:
-    runs-on: ubuntu-latest
+    runs-on: self-hosted
+    timeout-minutes: 45
     steps:
       - uses: actions/checkout@v4
         with:
           fetch-depth: 0 # changelog section extraction needs history
+      - name: Install prerequisites
+        # Alpine runner is near-empty: we need curl + python3 for the
+        # publish script, bash for the build scripts.
+        run: apk add --no-cache curl python3 bash
       - name: Build release tarball
        run: ./scripts/build-release-tarball.sh "${GITHUB_REF_NAME}"
+      - name: Build live-installer ISO
+        # Same script build-iso.yml uses on every main push. Re-running
+        # here is intentional: guarantees the ISO matches the exact
+        # tagged commit without coordinating across workflows. Step-level
+        # continue-on-error so an ISO build flake doesn't block the
+        # core tarball (which is what boxes need for self-update) from
+        # publishing.
+        continue-on-error: true
+        id: build_iso
+        run: ./iso/build.sh
+      - name: Move ISO into dist/
+        # publish-release.sh attaches dist/furtka-<ver>.iso if present.
+        # Skipped gracefully when the build step above failed.
+        if: steps.build_iso.outcome == 'success'
+        run: |
+          iso=$(ls iso/out/*.iso | head -1)
+          cp "$iso" "dist/furtka-${GITHUB_REF_NAME}.iso"
       - name: Publish to Forgejo releases
         env:
           FORGEJO_TOKEN: ${{ secrets.FORGEJO_RELEASE_TOKEN }}

.gitignore

@@ -13,3 +13,4 @@ iso/out/
 website/public/
 website/resources/
 website/.hugo_build.lock
+website/hugo_stats.json


@@ -7,6 +7,263 @@ This project uses calendar versioning: `YY.N-stage` (e.g. `26.0-alpha` = 2026, r
## [Unreleased]
## [26.15-alpha] - 2026-04-21
### Fixed
- **HTTPS is now opt-in; fresh installs no longer hit unbypassable
SEC_ERROR_BAD_SIGNATURE.** Every version since 26.5 shipped a
Caddyfile with a `__FURTKA_HOSTNAME__.local { tls internal }` site
block, so Caddy auto-generated a self-signed root CA + intermediate
+ leaf on first boot. That worked for first-time-ever users, but
every reinstall (or second Furtka box on the same LAN) produced a
new CA with the **same intermediate CN** (`Caddy Local Authority -
ECC Intermediate` — Caddy hardcodes it). Any browser that had ever
trusted an earlier Furtka CA got a cached intermediate with
mismatched keys, then Firefox's cert lookup substituted the cached
intermediate when validating the new box's leaf → the signature
check failed → `SEC_ERROR_BAD_SIGNATURE`, which Firefox has no
"Advanced → Accept Risk" bypass for.
- Removed the hostname site block from the default Caddyfile.
Fresh installs serve `:80` only; visiting `https://furtka.local`
now yields a clean connection-refused instead of the crypto
fault.
- Added top-level `import /etc/caddy/furtka-https.d/*.caddyfile`.
The `/settings` HTTPS toggle (via `furtka.https.set_force_https`)
now writes TWO snippets atomically — the top-level hostname +
`tls internal` block (enables `:443`) and the `:80`-scoped
redirect (forces HTTP → HTTPS) — and removes both on disable.
Caddy reloads after the pair-swap; failure rolls both back.
- Webinstaller creates `/etc/caddy/furtka-https.d/` during
post-install alongside the existing `furtka.d/`.
- `updater._refresh_caddyfile` runs a 26.14 → 26.15 migration: if
the box already had the redirect snippet on disk (user had
explicitly enabled "Force HTTPS" under the old regime), the
migration also writes the new listener snippet so HTTPS keeps
working across the upgrade.
- **`status.force_https` now reads the listener snippet, not the
redirect snippet.** A lone redirect without a `:443` listener
wouldn't actually serve HTTPS, so the listener file is the
authoritative "HTTPS is on" signal. The UI on `/settings` sees the
correct state as a result.
Known remaining UX wart: a browser that trusted a previous Furtka box
still sees `BAD_SIGNATURE` when visiting this box's `https://` after
enabling HTTPS here — the fixed intermediate CN is a Caddy-side
limitation we can't fix from Furtka. Fresh installs on a browser that
never visited another Furtka box work correctly. Workaround:
`about:networking#sts` → Forget → clear `cert9.db`.
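As a rough sketch, the snippet pair the toggle writes could look like the fragment below — the file names, hostname, and upstream port are assumptions for illustration, not the actual output of `furtka.https.set_force_https`:

```caddyfile
# Hypothetical /etc/caddy/furtka-https.d/listener.caddyfile
# Top-level site block: enables :443 with Caddy's internal CA.
furtka.local {
	tls internal
	reverse_proxy 127.0.0.1:7000
}
# The companion redirect snippet is :80-scoped and `redir`s plain-HTTP
# requests to https://. Both files are written atomically together,
# removed together on disable, and Caddy reloads after the pair-swap
# (a reload failure rolls both back).
```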
## [26.14-alpha] - 2026-04-21
### Fixed
- **Landing page and `/settings/` were silently bypassing the auth
guard.** Since 26.11 shipped login, the Caddyfile only
reverse-proxied `/api/*`, `/apps*`, `/login*`, and `/logout*` to
Python. Everything else — including `/` and `/settings/` — fell
through to Caddy's catch-all `file_server` and was served straight
from `assets/www/` without ever hitting the session check. The
effect: a LAN visitor saw the box's hostname, IP, Furtka version,
and the buttons for Update-now / Reboot / HTTPS-toggle. The API
calls those buttons fired were all 401-auth-gated so actions didn't
land, but the information leak and the "looks open" UX was a real
bug. Caught in the 26.13 SSH test session when the user noticed
Logout only showed up on `/apps`. Now Caddy routes `/` and
`/settings*` through Python; a new `_serve_static_www` handler
checks the session cookie, redirects to `/login` if unauthed, and
reads the HTML from `assets/www/` otherwise. Catch-all still
serves `/style.css`, `/rootCA.crt`, and the runtime JSON files
publicly — those don't need auth.
- **Logout link now shows on every authed page, not just `/apps`.**
The static HTML for `/` and `/settings/` maintained their own nav
separate from `_HTML` in `api.py`, so they never got the Logout
entry when it was added in 26.11. Both nav bars now include it
plus an inline `doLogout()` that POSTs `/logout` and bounces to
`/login`, matching the pattern in `_HTML`.
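
A sketch of the routing decision described above — the handler name, static root, and public-file set are assumptions from this log, and the real `_serve_static_www` also reads and returns the file body:

```python
from pathlib import Path

WWW_ROOT = Path("assets/www")            # assumed static root
PUBLIC = {"/style.css", "/rootCA.crt"}   # served without auth (via Caddy's catch-all)


def route_static(path: str, session_ok: bool) -> tuple[int, str]:
    """Return (status, location-or-file) for an incoming HTML request.

    Public assets pass straight through; everything else needs a valid
    session cookie, otherwise the browser is bounced to /login.
    """
    if path in PUBLIC:
        return 200, str(WWW_ROOT / path.lstrip("/"))
    if not session_ok:
        return 302, "/login"
    page = "index.html" if path == "/" else path.strip("/") + "/index.html"
    return 200, str(WWW_ROOT / page)
```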
## [26.13-alpha] - 2026-04-21
### Fixed
- **Upgrade path from pre-auth releases actually works.** 26.11-alpha
introduced `from werkzeug.security import ...` in `furtka/auth.py`,
but werkzeug isn't installed on the target system — core runs as
system Python with stdlib only, and `flask>=3.0` in `pyproject.toml`
is never pip-installed on the box. Fresh boxes from the 26.11/26.12
ISO without a manually-installed werkzeug crashed on import; boxes
upgrading from pre-26.11 got double-broken by that plus the health
check below. Replaced the werkzeug dependency with a stdlib-only
`furtka/passwd.py` that uses `hashlib.pbkdf2_hmac` for new hashes
and parses werkzeug's `scrypt:N:r:p$salt$hex` format for backward
compatibility — existing `users.json` files created on the rare
boxes that did have werkzeug keep working after this upgrade, no
re-setup needed. `from werkzeug.security import ...` is gone from
the import chain entirely; `pyproject.toml`'s flask dep stays only
for the live-ISO webinstaller.
- **Self-update no longer auto-rolls-back when crossing the auth
  boundary.** `updater._health_check` pinged `/api/apps` and demanded
  a 200, which meant every 26.10 → 26.11+ upgrade hit the post-restart
  check, got a 401 (auth guard), and treated that as "server dead"
  → rollback. Now any 2xx–4xx response counts as "server alive"; only
  connection-level failures or 5xx fail the check. A 5xx still triggers
  rollback because it means the new process is up but broken.
- **Install lock closes its race window.** `POST /api/apps/install`
used to release the fcntl lock immediately after the sync
pre-validation so the systemd-run child could re-acquire it —
leaving a tiny gap where a second POST could slip in, pass the lock
check, and return 202. Both child processes would start, one would
win the in-child lock, the other would die silently. Now the API
also reads `install-state.json` and refuses with 409 if the stage
is non-terminal (`pulling_image`, `creating_volumes`,
`starting_container`). The fcntl lock stays as belt-and-suspenders.
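
A stdlib-only sketch of the password shim described in the first bullet above — `hashlib.pbkdf2_hmac` for new hashes plus a reader for werkzeug's `scrypt:N:r:p$salt$hex` shape. Function names and the exact serialized format are hypothetical, not Furtka's actual `furtka/passwd.py`:

```python
import hashlib
import hmac
import os


def hash_password(password: str, *, rounds: int = 600_000) -> str:
    # New hashes: PBKDF2-HMAC-SHA256, serialized werkzeug-style as method$salt$hex.
    salt = os.urandom(16).hex()
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt.encode(), rounds).hex()
    return f"pbkdf2:sha256:{rounds}${salt}${digest}"


def verify_password(stored: str, password: str) -> bool:
    method, salt, digest = stored.split("$", 2)
    if method.startswith("pbkdf2:sha256:"):
        rounds = int(method.rsplit(":", 1)[1])
        cand = hashlib.pbkdf2_hmac("sha256", password.encode(), salt.encode(), rounds).hex()
    elif method.startswith("scrypt:"):
        # Back-compat: parse werkzeug's scrypt:N:r:p$salt$hex format so
        # existing users.json entries keep working after the upgrade.
        _, n, r, p = method.split(":")
        cand = hashlib.scrypt(password.encode(), salt=salt.encode(),
                              n=int(n), r=int(r), p=int(p), maxmem=0x7FFFFFFF).hex()
    else:
        return False
    return hmac.compare_digest(cand, digest)
```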
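
The relaxed health-check rule from the second bullet reduces to a one-line predicate. A sketch, with `None` standing in for a connection-level failure:

```python
def server_alive(status):
    # status: HTTP status of the post-restart probe, or None when the
    # connection itself failed. Any 2xx-4xx (including the auth guard's
    # 401) proves the new process is serving; 5xx means up-but-broken,
    # which still triggers rollback.
    return status is not None and 200 <= status < 500
```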
## [26.12-alpha] - 2026-04-21
### Changed
- **App installs now run async with live progress.** `POST /api/apps/install`
  now returns `202 Accepted` after the synchronous pre-validation
  (resolving the source, copying files, writing `.env`, placeholder and
  path checks). The handler dispatches the actual Docker part
  (`compose pull` → volumes → `compose up`) as a `systemd-run
  --unit=furtka-install-<app>` background job that writes its phase to
  `/var/lib/furtka/install-state.json`. New `GET /api/apps/install/status`
  endpoint for UI polling. The install modal now shows a live
  "Image wird heruntergeladen…" → "Speicherbereiche werden erstellt…" →
  "Container wird gestartet…" instead of ~30 seconds of dead
  "Installing…". The pattern mirrors `/api/catalog/sync/apply` and
  `/api/furtka/update/apply` 1:1. New CLI subcommand
  `furtka app install-bg <name>` (internal, invoked by the API);
  `furtka app install` stays synchronous for terminal users. The
  Reinstall button in the app list also polls the install status and
  mirrors the phase in its button text.
## [26.11-alpha] - 2026-04-21
### Added
- **Login-auth for the Furtka web UI.** Every `/apps`, `/api/*`, `/`,
and `/settings/` route now requires a signed-in session. New
`/login` page serves a username/password form; `POST /login`
validates against `/var/lib/furtka/users.json` (werkzeug PBKDF2-
hashed), sets a `furtka_session` cookie (`HttpOnly`, `SameSite=
Strict`, 7-day TTL), and redirects to `/apps`. `POST /logout`
revokes the server-side session and clears the cookie.
Unauthenticated HTML requests get a 302 to `/login`; unauthenticated
API requests get 401 JSON. The old "No authentication on this UI
yet" banner is gone; the `/apps` header picks up a `Logout` link
instead.
- **First-run setup fallback for upgrade-path boxes.** Boxes
upgrading from 26.10-alpha have no `users.json` yet — on the first
visit `/login` renders a setup form (username + password +
password-confirm) that creates the admin record on submit. Fresh
installs skip this: the webinstaller writes `users.json` during
the chroot post-install step using the step-1 password, so the
first browser visit after boot goes straight to the login form.
- **Caddy proxy routes `/login` and `/logout`.** `assets/Caddyfile`
gets two new `handle` blocks in the shared `(furtka_routes)`
snippet so both the `:80` block and the `hostname.local, hostname`
HTTPS block forward the auth endpoints to the stdlib server on
`127.0.0.1:7000`. Without this Caddy would serve a 404 from the
static file server.
### Fixed
- `tests/test_installer.py` ruff-format nit — the 26.10-alpha
release commit had a misformatted list literal that failed
`ruff format --check`. Caught when the Release page on Forgejo
showed a red CI badge for the tag.
- `pyproject.toml` version string bumped from the stale 26.8-alpha
to 26.11-alpha. Release pipeline uses `GITHUB_REF_NAME` as source
of truth for the artefact name, but having the two agree matters
for local dev runs that read `pyproject.toml`.
## [26.10-alpha] - 2026-04-21
### Added
- **Remove-USB-stick hint on the installer's post-install screen.**
`webinstaller/templates/install/rebooting.html` now shows a bold
"Remove the USB stick now" line before the reboot, plus a muted
fallback explaining the BIOS boot-menu keys (F11/F12/Esc) if the
machine boots back into the installer anyway. Caught on the first
bare-metal test (Medion i5-4gen, 2026-04-21) where the box didn't
boot the installed system without manual BIOS-order changes.
- **New `path` setting type for app manifests.** Apps can now declare a
setting with `"type": "path"` whose value is an absolute filesystem
path on the host; docker-compose bind-mounts it via the usual `.env`
substitution (`${MEDIA_PATH}:/media`). Unlocks media/data-heavy apps
(Jellyfin, later Paperless/Nextcloud/Immich) where the user points at
an existing folder instead of copying everything into a Docker
volume. The install form renders path settings as a plain text input
with a `/mnt/…` placeholder hint.
- **Server-side path validation.** Both `install_from()` and
`update_env()` refuse values that aren't absolute, don't exist,
aren't directories, or resolve (after `Path.resolve()`) into a
system-path deny-list (`/`, `/etc`, `/root`, `/boot`, `/proc`,
`/sys`, `/dev`, `/bin`, `/sbin`, `/usr/bin`, `/usr/sbin`,
`/var/lib/furtka`). Catches `/mnt/../etc`-style traversal too. Error
messages surface in the existing install/edit modal error line.
## [26.9-alpha] - 2026-04-21
### Fixed
- Landing-page app tiles with an `open_url` now open in a new tab
(`target="_blank" rel="noopener"`), matching the Open button
behaviour on `/apps`. Without this, clicking "Uptime Kuma" on the
home screen replaced Furtka itself with the Kuma admin page.
Internal links (the `Manage →` fallback for apps without an
`open_url`) still open in the same tab.
- `scripts/publish-release.sh` no longer fails the whole release when
the ISO upload hits a Forgejo proxy 504. The core tarball + sha256 +
release.json (which running boxes need for self-update) are uploaded
first and the ISO is attempted last as a best-effort; a 504 now logs
a warning and exits 0 so the release page still publishes. Surfaced
by the 26.8-alpha cut: the tarball landed but the ~1 GB ISO upload
timed out at the Forgejo reverse proxy.
### Changed
- `furtka app list --json` now mirrors `/api/apps` field-for-field —
previously the CLI emitted a slim projection missing
`description_long`, `open_url`, and `settings`. Anyone piping the
CLI output into jq for automation was seeing an incomplete view.
## [26.8-alpha] - 2026-04-20
### Added
- **Live-installer ISO attached to the Forgejo release page.** `.forgejo/workflows/release.yml` moves to the self-hosted runner, builds both the self-update tarball and the ISO, and `scripts/publish-release.sh` uploads the ISO as a fourth release asset (`furtka-<version>.iso`) alongside the existing tarball + sha256 + release.json. Fresh-install users can now grab the ISO from the release page instead of hunting through `build-iso.yml` artifact retention windows. ISO build step is `continue-on-error` so an ISO flake doesn't hold back the core tarball that running boxes need for self-update.
- **Reboot + Shut down buttons on `/settings`.** Replaces the two "Coming next" placeholders with real actions backed by `POST /api/furtka/power` (`{"action": "reboot" | "poweroff"}`). Handler kicks a delayed `systemd-run --on-active=3s systemctl {reboot|poweroff}` so the HTTP response reaches the browser before the kernel loses network. Each button opens a native confirm dialog first (reboot: "back in ~30 s", shut down: "need to press the physical power button"), then the UI swaps to a status line and — after a reboot — polls `/furtka.json` until the box is back, reloading the page automatically. No auth (same posture as install/remove).
- **Manifest `open_url` field + Open button in `/apps` and on the landing page.** Apps declare a URL template (e.g. `smb://{host}/files` for fileshare, `http://{host}:3001/` for Uptime Kuma); the UI substitutes `{host}` with the current browser's hostname at render time so the link follows however the user reached Furtka (furtka.local, raw IP, a future reverse-proxy hostname). The landing page's hardcoded `if app.name === 'fileshare'` special-case is gone — any app with an `open_url` in its manifest now gets a proper "Open" link. The core seed `apps/fileshare/manifest.json` bumps to v0.1.2 to carry it.
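
The `{host}` substitution above is plain string templating done at render time. A sketch (function name hypothetical; the real substitution happens in the browser-side UI code):

```python
def render_open_url(template: str, browser_host: str) -> str:
    # {host} resolves to however the user reached the box: furtka.local,
    # a raw IP, or a future reverse-proxy hostname.
    return template.replace("{host}", browser_host)
```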
### Changed
- `.btn` CSS class introduced so an `<a>` rendered-as-button lines up with its `<button>` siblings in `.buttons`. Needed because "Open" is a real link (middle-click, copy URL, screen readers) and HTML doesn't let `<button>` carry `href`.
### Notes
- `26.7-alpha` was tagged but never published — the tag push didn't trigger `release.yml` (Forgejo race with the concurrent main push). `26.8-alpha` supersedes it and carries the same content plus power actions.
## [26.6-alpha] - 2026-04-20
### Added
- **Apps catalog synced independently of core.** A new `daniel/furtka-apps` Forgejo repo carries the bundled app catalog; running boxes pull the latest release via `furtka-catalog-sync.timer` (10 min post-boot + daily, ±6 h jitter) and extract atomically into `/var/lib/furtka/catalog/`. The resolver now prefers catalog apps over the seed `/opt/furtka/current/apps/` tree that ships inside the core release tarball, so apps can update without cutting a Furtka core release. Manual trigger: "Sync apps catalog" button on `/apps`, or `sudo furtka catalog sync` at the console. Fresh boxes with no network fall back to the seed, so offline first-boot still shows installable apps. Installed apps are never auto-swapped — users click Reinstall in `/apps` to move an existing install onto a newer catalog version (settings merge-preserved via the existing `installer.install_from` path).
- **Catalog CLI**: `furtka catalog sync [--check] [--json]` + `furtka catalog status [--json]`. Same shape as the core `furtka update` commands.
- **Catalog API endpoints**: `POST /api/catalog/sync/check`, `POST /api/catalog/sync/apply` (detached via `systemd-run` for symmetry with `/api/furtka/update/apply`), `GET /api/catalog/status`. The existing `/api/bundled` endpoint keeps working as a backwards-compat alias for `/api/apps/available`, which now returns the union of catalog + seed apps with a new `"source"` field on each entry (`"catalog"` | `"bundled"`).
### Changed
- **`furtka._release_common`** extracted from `furtka.updater`. Both `updater` and the new `catalog` module now share one implementation of the Forgejo-releases-API call, SHA256 verification, path-traversal-guarded tarball extraction, and CalVer comparison. Public updater surface unchanged.
- **`_link_new_units` now auto-enables newly-linked `.timer` units.** On self-update, a fresh timer file (e.g. `furtka-catalog-sync.timer` added in this release) needs `systemctl enable` to actually start firing — linking alone isn't enough. Fresh installs get their enable via the webinstaller's `_FURTKA_UNITS` list as before.
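
The CalVer comparison shared via `_release_common` has to rank `26.15` above `26.9`, which lexical tag sorting gets wrong. A sketch under an assumed stage ordering (final > rc > beta > alpha — the real module's rules may differ):

```python
import re

# Assumed stage ranking; absent suffix = final release, ranked highest.
_STAGES = {"alpha": 0, "beta": 1, "rc": 2, "": 3}


def calver_key(version: str):
    """Sort key for CalVer tags like 26.15-alpha (name hypothetical)."""
    m = re.fullmatch(r"(\d+)\.(\d+)(?:-([a-z]+))?", version)
    if not m:
        raise ValueError(f"not a CalVer tag: {version}")
    year, n, stage = m.groups()
    return (int(year), int(n), _STAGES.get(stage or "", -1))
```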
### Fixed
- **SHA-256 CA fingerprint no longer overflows the `/settings` Local HTTPS card** on narrow viewports. `.kv dd` grid items now set `min-width: 0` + `overflow-wrap: anywhere` so the colon-separated hex string breaks within the card's right edge instead of pushing past it.
## [26.5-alpha] - 2026-04-20
### Fixed
@@ -97,7 +354,16 @@ First tagged snapshot. Pre-alpha — the installer does not yet boot, but the de
- **Containers:** Docker + Compose
- **License:** AGPL-3.0
[Unreleased]: https://forgejo.sourcegate.online/daniel/furtka/compare/26.15-alpha...HEAD
[26.15-alpha]: https://forgejo.sourcegate.online/daniel/furtka/releases/tag/26.15-alpha
[26.14-alpha]: https://forgejo.sourcegate.online/daniel/furtka/releases/tag/26.14-alpha
[26.13-alpha]: https://forgejo.sourcegate.online/daniel/furtka/releases/tag/26.13-alpha
[26.12-alpha]: https://forgejo.sourcegate.online/daniel/furtka/releases/tag/26.12-alpha
[26.11-alpha]: https://forgejo.sourcegate.online/daniel/furtka/releases/tag/26.11-alpha
[26.10-alpha]: https://forgejo.sourcegate.online/daniel/furtka/releases/tag/26.10-alpha
[26.9-alpha]: https://forgejo.sourcegate.online/daniel/furtka/releases/tag/26.9-alpha
[26.8-alpha]: https://forgejo.sourcegate.online/daniel/furtka/releases/tag/26.8-alpha
[26.6-alpha]: https://forgejo.sourcegate.online/daniel/furtka/releases/tag/26.6-alpha
[26.5-alpha]: https://forgejo.sourcegate.online/daniel/furtka/releases/tag/26.5-alpha
[26.4-alpha]: https://forgejo.sourcegate.online/daniel/furtka/releases/tag/26.4-alpha
[26.3-alpha]: https://forgejo.sourcegate.online/daniel/furtka/releases/tag/26.3-alpha


@@ -108,7 +108,7 @@ None of these nail the "your dad can set this up" experience. The installer wiza
- [x] **ISO-build in CI** — `.forgejo/workflows/build-iso.yml` runs `iso/build.sh` on every push to `main` and publishes the resulting `.iso` as the `furtka-iso` artifact (14 d retention). Push → green run → download → test.
- [x] **Forgejo Releases + tag-driven release pipeline** — `.forgejo/workflows/release.yml` fires on `[0-9]*` tags, `scripts/build-release-tarball.sh` packages `furtka/` + `apps/` + `assets/` + a root VERSION, `scripts/publish-release.sh` uploads tarball + sha256 + release.json to the Forgejo releases page. Releases `26.1-alpha`, `26.3-alpha`, and `26.4-alpha` live at [releases](https://forgejo.sourcegate.online/daniel/furtka/releases) (26.2 stalled on a `jq` apt hang, fixed in 26.3). Needs one repo secret (`FORGEJO_RELEASE_TOKEN`).
- [x] **Walking-skeleton live ISO — end to end** — `iso/build.sh` produces a hybrid BIOS/UEFI Arch-based ISO. It boots in a Proxmox VM, DHCPs onto the LAN, shows a console welcome with `http://proksi.local:5000` (+ IP fallback), serves the Flask webinstaller, runs `archinstall --silent`, reboots the VM via a Reboot-now button, and the installed system logs in and runs `docker ps` without sudo. Build infra in [`iso/`](iso/).
- [x] **Drop loop/rom devices from drive list** — `webinstaller/drives.py` filters by `lsblk` `TYPE=disk`, so the live squashfs and CD-ROM no longer appear as install targets. The boot USB itself is also filtered: on the live ISO, `findmnt /run/archiso/bootmnt` resolves the boot partition and its parent disk is dropped from the picker.
- [x] **Rebrand GRUB menu** — `iso/build.sh` rewrites "Arch Linux install medium" → "Furtka Live Installer" across GRUB, syslinux, and systemd-boot configs; default entry marked `(Recommended)`.
- [x] **Wizard: account form → drive picker → overview → archinstall** — S1 collects hostname/user/password/language with validation, S2 picks boot drive, overview confirms, `/install/run` writes `user_configuration.json` + `user_credentials.json` (0600) and execs `archinstall --silent` against its 4.x schema (`default_layout` disk_config + `!root-password` / `!password` sentinel keys + `custom_commands` for post-install group joins). Install log page polls a JSON endpoint and renders a phase-based progress bar with a collapsible raw log. `FURTKA_DRY_RUN=1` skips the real exec for testing. - [x] **Wizard: account form → drive picker → overview → archinstall** — S1 collects hostname/user/password/language with validation, S2 picks boot drive, overview confirms, `/install/run` writes `user_configuration.json` + `user_credentials.json` (0600) and execs `archinstall --silent` against its 4.x schema (`default_layout` disk_config + `!root-password` / `!password` sentinel keys + `custom_commands` for post-install group joins). Install log page polls a JSON endpoint and renders a phase-based progress bar with a collapsible raw log. `FURTKA_DRY_RUN=1` skips the real exec for testing.
- [x] **mDNS `proksi.local`** — hostname baked into the live ISO, avahi + nss-mdns in the package list, advertised as soon as network-online fires. The HTTPS + local-CA half of this milestone is still open below. - [x] **mDNS `proksi.local`** — hostname baked into the live ISO, avahi + nss-mdns in the package list, advertised as soon as network-online fires. The HTTPS + local-CA half of this milestone is still open below.
@@ -117,7 +117,7 @@ None of these nail the "your dad can set this up" experience. The installer wiza
- [x] **On-box web UI uplevel** — shared `/style.css` served by Caddy, persistent top nav, landing page with a "Your apps" tile grid + live status, `/apps` with real per-app icons (inlined SVG from each manifest), new `/settings` page (hostname, IP, version, kernel, RAM, Docker, uptime + Furtka-updates card). `prefers-color-scheme` light/dark.
- [x] **Versioned on-box layout + Phase 1 per-app updates** — `/opt/furtka/versions/<ver>/` + `current` symlink; `/var/lib/furtka/` for runtime state. `POST /api/apps/<name>/update` runs `docker compose pull` + compares digests + conditional `up -d`.
- [x] **Phase 2 Furtka self-update** — `/settings` → Check → Update now. Downloads signed tarball (SHA256), stages, atomic symlink flip, reloads Caddy, daemon-reload, restarts services, health-checks the new api with auto-rollback on failure. CLI: `furtka update [--check]` + `furtka rollback`. Validated end-to-end on VM 2026-04-16 (`26.0-alpha` → `26.3-alpha` → rollback → reboot).
- [x] **Local HTTPS Phase 1** — Caddy `tls internal` on `:443` is fully opt-in via the `/settings` toggle (26.15-alpha); fresh installs stay HTTP-only so a half-trusted cert chain can't lock the user out. Per-box root CA generated on first enable, `rootCA.crt` downloadable from `/settings`, per-OS install guide at `/https-install/`. The "force HTTPS" sub-toggle still only appears once the current browser already trusts the cert.
- [x] **Post-build smoke VM on Proxmox** — `.forgejo/workflows/build-iso.yml` hands the freshly built ISO to `scripts/smoke-vm.sh`, which boots it in a throwaway VM on `pollux` (192.168.178.165) and curls the webinstaller on `:5000`. VMID range 9000–9099, last 5 kept. Green end-to-end since 26.4-alpha.
- [ ] Installer wizard screens S3–S7 — per-device purpose, network, domain, SSL, diagnostic. S5/S6 blocked on managed-gateway DNS infra not yet built.
- [ ] Local HTTPS Phase 2 — dedicated local CA (not Caddy's `tls internal`), streamlined one-click install across Win/Mac/Linux/Android, and HTTPS on the live-installer wizard (`https://proksi.local:5000`). - [ ] Local HTTPS Phase 2 — dedicated local CA (not Caddy's `tls internal`), streamlined one-click install across Win/Mac/Linux/Android, and HTTPS on the live-installer wizard (`https://proksi.local:5000`).


@@ -45,7 +45,42 @@ Tag per meaningful milestone, not on a calendar. A milestone is: ISO boots, a wi
git push origin 26.1-alpha
```
5. **The release workflow does the rest.** `.forgejo/workflows/release.yml` fires on the tag push and runs on the self-hosted runner: `scripts/build-release-tarball.sh` builds the self-update payload (tarball + sha256 + release.json under `dist/`), `iso/build.sh` builds the live-installer ISO, `scripts/publish-release.sh` uploads tarball + sha256 + release.json + ISO to the Forgejo release page. Pre-release is flagged automatically based on the suffix (`-alpha`/`-beta`/`-rc`). ISO build is `continue-on-error`: a flaky ISO step doesn't block the core tarball (the thing boxes need for self-update).
The release workflow needs one secret set at repo **Settings → Secrets → Actions**:
- `FORGEJO_RELEASE_TOKEN` — a PAT with `write:repository` scope.


@@ -47,10 +47,42 @@ Rules enforced by `furtka/manifest.py`:
- `volumes` — short names, strings. Namespaced to `furtka_<app>_<short>` at runtime.
- `ports` — integers. Informational only; compose owns the actual port binding.
- `settings[].name` — must match `^[A-Z_][A-Z0-9_]*$`. This name becomes both the env-var key and the form-field ID.
- `settings[].type` — one of `text`, `password`, `number`, `path`.
- `settings[].required` — if true, the install refuses when the value is empty.
- `settings[].default` — optional string. Used to pre-fill the form and the bootstrapped `.env`.
### Path-type settings (host bind mounts)
Use `"type": "path"` when the app should point at an existing folder on the host — media libraries, document archives, photo backups. The value is written to `.env` like any other setting, and compose consumes it via `${VAR}` substitution as a bind mount.
```json
{
"name": "MEDIA_PATH",
"label": "Medienordner",
"description": "Absoluter Pfad zu deinem Medien-Ordner, z.B. /mnt/media.",
"type": "path",
"required": true
}
```
```yaml
services:
app:
volumes:
- ${MEDIA_PATH}:/media:ro
```
The installer (`install_from` and `update_env`) refuses values that:
- aren't absolute (must start with `/`),
- don't exist on the host,
- aren't directories,
- resolve (after `Path.resolve()`) into a system-path deny-list: `/`, `/etc`, `/root`, `/boot`, `/proc`, `/sys`, `/dev`, `/bin`, `/sbin`, `/usr/bin`, `/usr/sbin`, `/var/lib/furtka`.
Traversal like `/mnt/../etc` is caught too — the deny-list check runs on the resolved path.
Path settings sit alongside manifest-declared volumes. Use `manifest.volumes` for internal state the app owns (databases, caches, config), and path settings for user data the container should mount and — usually — read without owning. Mounting read-only (`:ro`) is a good default for data the app only consumes.
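A minimal standalone sketch of those checks (illustrative only — the real implementation lives in furtka's installer; the exact deny-list matching semantics are assumed here to be an exact match on the resolved path):

```python
from pathlib import Path

# System paths a bind mount must never point at (mirrors the list above).
DENY = {Path(p) for p in (
    "/", "/etc", "/root", "/boot", "/proc", "/sys", "/dev",
    "/bin", "/sbin", "/usr/bin", "/usr/sbin", "/var/lib/furtka",
)}

def validate_path_setting(value: str, *, check_exists: bool = True) -> Path:
    """Return the resolved Path, or raise ValueError with the reason."""
    if not value.startswith("/"):
        raise ValueError(f"{value!r}: must be an absolute path")
    # resolve() collapses traversal like /mnt/../etc -> /etc, so the
    # deny-list check runs on what the path actually points at.
    resolved = Path(value).resolve()
    if resolved in DENY:
        raise ValueError(f"{value!r}: refusing system path {resolved}")
    if check_exists:
        if not resolved.exists():
            raise ValueError(f"{value!r}: does not exist on the host")
        if not resolved.is_dir():
            raise ValueError(f"{value!r}: not a directory")
    return resolved
```

With `check_exists=False` the pure path rules are easy to exercise: `/mnt/media` passes, while `relative/path` and `/mnt/../etc` are both refused.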
## `docker-compose.yaml`
- File extension is `.yaml`. The compose runner hardcodes this — `.yml` will not be found.

View file

@ -1,12 +1,13 @@
{
"name": "fileshare",
"display_name": "Network Files",
"version": "0.1.1",
"version": "0.1.2",
"description": "SMB share for Mac, Windows, Linux and Android devices on the LAN.",
"description_long": "Alle Geräte im WLAN sehen einen gemeinsamen Ordner. Funktioniert mit Windows, Mac, Linux und Android. Verbinden zu smb://furtka.local — Anmeldung mit dem hier gesetzten Benutzernamen und Passwort.",
"volumes": ["files"],
"ports": [445, 139],
"icon": "icon.svg",
"open_url": "smb://{host}/files",
"settings": [
{
"name": "SMB_USER",

View file

@ -1,25 +1,27 @@
# Serves the Furtka landing page + live JSON on :80 (plain HTTP) and on
# HTTPS via Caddy's built-in `tls internal` — locally-issued certs signed
# by a root CA that Caddy generates on first start and stores under
# /var/lib/caddy/pki/authorities/local/. Static pages are read from
# /opt/furtka/current/ — updates flip the symlink and everything picks up
# the new content without a Caddy restart (a `systemctl reload caddy` is
# still triggered post-swap to flush the file-server's handle cache).
# /apps and /api are reverse-proxied to the resource-manager API
# (furtka serve, bound to 127.0.0.1:7000).
#
# Hostname templating: __FURTKA_HOSTNAME__ gets substituted with the
# install-time hostname by webinstaller/app.py on first install and by
# furtka.updater._refresh_caddyfile on every self-update. A bare `:443
# { tls internal }` (no hostname) never triggers leaf-cert issuance, so
# SNI-based handshakes die with `SSL_ERROR_INTERNAL_ERROR_ALERT` — the
# 26.4-alpha regression this file exists to cure.
#
# Force-HTTPS: /etc/caddy/furtka.d/*.caddyfile gets imported into the :80
# block. The /api/furtka/https/force endpoint creates or removes
# redirect.caddyfile there to toggle the HTTP→HTTPS redirect, then reloads
# Caddy. Glob imports silently no-op on an empty/missing directory, so the
# toggle-off state is "no file present" rather than "empty file".
# Serves the Furtka landing page + live JSON on :80 (plain HTTP). HTTPS
# is **opt-in** — Caddy doesn't serve :443 until the user clicks the
# "Enable HTTPS" toggle on /settings, which drops an import snippet into
# /etc/caddy/furtka-https.d/. Default install has NO tls site block →
# Caddy never generates a self-signed CA / leaf cert → no
# SEC_ERROR_BAD_SIGNATURE when a user visits https://furtka.local before
# they've trusted anything. That was the 26.14-era regression this file
# exists to cure: the old Caddyfile always served :443 with a freshly-
# generated cert, and a browser that had ever trusted an older Furtka
# box's CA would reject the new one with an unbypassable bad-sig error.
#
# /apps, /api, /login, /logout, / (home), /settings are reverse-proxied
# to the resource-manager API (furtka serve, bound to 127.0.0.1:7000).
# Static pages are read from /opt/furtka/current/ — updates flip the
# symlink and everything picks up the new content without a Caddy
# restart (a `systemctl reload caddy` is still triggered post-swap to
# flush the file-server's handle cache).
#
# Two snippet dirs, both silently no-op when empty:
# - /etc/caddy/furtka.d/*.caddyfile → imported inside the :80 block.
#   The HTTPS toggle's "force HTTP→HTTPS redirect" snippet lands here.
# - /etc/caddy/furtka-https.d/*.caddyfile → imported at TOP LEVEL, so
#   the HTTPS hostname+tls-internal site block can drop in here when
#   the toggle is on. Hostname is substituted at toggle-time.
{
# Named-hostname :443 blocks would otherwise make Caddy add its own
# HTTP→HTTPS redirect — but we already serve our own `:80` block and
@ -35,6 +37,26 @@
handle /apps* {
reverse_proxy localhost:7000
}
handle /login* {
reverse_proxy localhost:7000
}
handle /logout* {
reverse_proxy localhost:7000
}
# /settings and / — these previously served as static HTML straight
# from the catch-all file_server, which meant the auth-guard was
# bypassed: a LAN visitor could see the box's version, IP, and
# reach the Update-now / Reboot buttons (the API calls behind them
# are auth-gated, but the page itself rendered without a redirect
# to /login). Route them through the Python handler which checks
# the session cookie and either serves the static HTML from
# assets/www/ or redirects to /login.
handle /settings* {
reverse_proxy localhost:7000
}
handle / {
reverse_proxy localhost:7000
}
# Runtime JSON lives under /var/lib/furtka/ so it survives self-updates
# (which only swap /opt/furtka/current).
handle /status.json {
@ -50,8 +72,8 @@
file_server
}
# Download the local root CA cert Caddy generated for `tls internal`.
# Available on both :80 and :443 so users can grab it before they've
# trusted it. The private key next to it stays 0600 / caddy-owned.
# Public because users need to grab it before they've trusted it.
# The private key next to it stays 0600 / caddy-owned.
handle /rootCA.crt {
root * /var/lib/caddy/pki/authorities/local
rewrite * /root.crt
@ -69,12 +91,12 @@
}
}
# HTTPS opt-in: when /settings toggles HTTPS on, a snippet gets written
# into /etc/caddy/furtka-https.d/ that adds the hostname+tls-internal
# site block. Empty directory = HTTP-only (default fresh install).
import /etc/caddy/furtka-https.d/*.caddyfile
:80 {
import /etc/caddy/furtka.d/*.caddyfile
import furtka_routes
}
__FURTKA_HOSTNAME__.local, __FURTKA_HOSTNAME__ {
tls internal
import furtka_routes
}

View file

@ -0,0 +1,12 @@
[Unit]
Description=Furtka apps catalog sync
Requires=network-online.target
After=network-online.target
[Service]
Type=oneshot
ExecStart=/usr/local/bin/furtka catalog sync
TimeoutStartSec=5min
[Install]
WantedBy=multi-user.target

View file

@ -0,0 +1,14 @@
[Unit]
Description=Furtka apps catalog daily sync
[Timer]
# First sync 10 min after boot, then once per day with up to 6 h jitter so
# a fleet of boxes doesn't all hit Forgejo at the same second. Persistent
# = catch up if the box was off when the timer should have fired.
OnBootSec=10min
OnUnitActiveSec=24h
RandomizedDelaySec=6h
Persistent=true
[Install]
WantedBy=timers.target

View file

@ -14,6 +14,7 @@
<a href="/" aria-current="page">Home</a>
<a href="/apps">Apps</a>
<a href="/settings/">Settings</a>
<a href="#" id="logout-link" onclick="return doLogout(event)">Logout</a>
</div>
</nav>
<header>
@ -67,6 +68,17 @@
</main>
<script>
// Revoke the cookie server-side and bounce to /login. Shared
// shape with the _HTML in furtka/api.py so the two logout
// links behave identically.
async function doLogout(ev) {
ev.preventDefault();
try { await fetch('/logout', { method: 'POST', credentials: 'same-origin' }); }
catch (e) { /* server may already be down */ }
window.location.href = '/login';
return false;
}
// Hostname + install metadata — written once at install time to
// /var/lib/furtka/furtka.json (see _furtka_json_cmd in the installer).
// Separate from status.json because these facts don't change between
@ -92,13 +104,17 @@
}
function primaryAction(app) {
// Only fileshare has a direct "open" link today. Future apps with
// HTTP endpoints would surface a URL here; everything else falls
// back to the /apps manage page.
if (app.name === 'fileshare' && HOSTNAME) {
return { href: `smb://${HOSTNAME}.local/files`, label: 'Open files' };
// open_url is a manifest-declared template with a `{host}`
// placeholder — substituted against the current browser's
// hostname so smb://host/files and http://host:3001/ both
// follow however the user reached Furtka (furtka.local, raw
// IP, a future reverse-proxy hostname). Apps without a
// frontend fall back to /apps for management.
if (app.open_url) {
const host = HOSTNAME || location.hostname;
return { href: app.open_url.replace('{host}', host), label: 'Open', external: true };
}
return { href: '/apps', label: 'Manage →' };
return { href: '/apps', label: 'Manage →', external: false };
} }
async function renderApps() {
@ -115,8 +131,9 @@
}
target.innerHTML = apps.map(a => {
const icon = a.icon_svg || FALLBACK_ICON;
const { href, label } = primaryAction(a);
return `<a class="app-tile" href="${esc(href)}">
const { href, label, external } = primaryAction(a);
const tgt = external ? ' target="_blank" rel="noopener"' : '';
return `<a class="app-tile" href="${esc(href)}"${tgt}>
<div class="icon">${icon}</div>
<span class="name">${esc(a.display_name || a.name)}</span>
<span class="cta">${esc(label)}</span>

View file

@ -14,6 +14,7 @@
<a href="/">Home</a>
<a href="/apps">Apps</a>
<a href="/settings/" aria-current="page">Settings</a>
<a href="#" id="logout-link" onclick="return doLogout(event)">Logout</a>
</div>
</nav>
@ -89,12 +90,25 @@
</div>
</section>
<section>
<h2>Power</h2>
<div class="card">
<p class="lede">
Reboot or shut down the whole Furtka box. Takes a few seconds to
finish; the UI will reconnect itself after a reboot.
</p>
<div class="power-actions">
<button type="button" id="power-reboot" class="secondary">Reboot</button>
<button type="button" id="power-poweroff" class="danger">Shut down</button>
</div>
<p id="power-status" class="hint"></p>
</div>
</section>
<section>
<h2>Coming next</h2>
<div class="coming">
<p class="hint">Controls we're building — follow progress on <a href="https://furtka.org">furtka.org</a>.</p>
<a href="https://furtka.org/#planned">Reboot</a>
<a href="https://furtka.org/#planned">Shut down</a>
<a href="https://furtka.org/#planned">Change hostname</a>
<a href="https://furtka.org/#planned">Backup</a>
<a href="https://furtka.org/#planned">User accounts</a>
@ -108,6 +122,15 @@
</main>
<script>
// Logout button in the nav — same shape as /apps and / pages.
async function doLogout(ev) {
ev.preventDefault();
try { await fetch('/logout', { method: 'POST', credentials: 'same-origin' }); }
catch (e) { /* server may already be down */ }
window.location.href = '/login';
return false;
}
async function refresh() {
try {
const r = await fetch('/status.json', { cache: 'no-store' });
@ -340,6 +363,85 @@
/* keep polling; restart blip expected */
}
}
// Power buttons: confirm, POST, then swap the whole card into a
// "going down" state so the user doesn't keep clicking. After a
// reboot we try to reconnect after ~45s; for shutdown we just
// tell the user the box is off — no auto-reconnect attempt.
const powerStatusEl = document.getElementById('power-status');
const rebootBtn = document.getElementById('power-reboot');
const poweroffBtn = document.getElementById('power-poweroff');
function setPowerStatus(msg, tone = 'muted') {
powerStatusEl.textContent = msg;
powerStatusEl.style.color =
tone === 'error' ? 'var(--danger)' : 'var(--muted)';
}
async function triggerPower(action, confirmMsg, inflightLabel) {
if (!confirm(confirmMsg)) return;
rebootBtn.disabled = true;
poweroffBtn.disabled = true;
setPowerStatus(inflightLabel);
try {
const r = await fetch('/api/furtka/power', {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({ action }),
});
if (!r.ok) {
const data = await r.json().catch(() => ({}));
setPowerStatus(data.error || `HTTP ${r.status}`, 'error');
rebootBtn.disabled = false;
poweroffBtn.disabled = false;
return;
}
if (action === 'reboot') {
setPowerStatus('Rebooting… this page will reload when the box is back.');
// Try reconnecting after a generous delay. archinstall
// + boot + services typically takes 30–45 s; give it 30
// before the first poke so we don't just spin against
// a down kernel.
setTimeout(pollForReconnect, 30000);
} else {
setPowerStatus(
'Shutdown scheduled. Press the physical power button to turn it back on.'
);
}
} catch (e) {
setPowerStatus(`Network error: ${e.message}`, 'error');
rebootBtn.disabled = false;
poweroffBtn.disabled = false;
}
}
async function pollForReconnect() {
// Fetch a tiny static file; when it comes back 200 the box is up.
try {
const r = await fetch('/furtka.json', { cache: 'no-store' });
if (r.ok) {
setPowerStatus('Back up — reloading…');
setTimeout(() => location.reload(), 1500);
return;
}
} catch (e) { /* still down */ }
setTimeout(pollForReconnect, 3000);
}
rebootBtn.addEventListener('click', () =>
triggerPower(
'reboot',
"Wirklich neu starten? Die Box ist für ~30 Sekunden nicht erreichbar.",
'Rebooting…'
)
);
poweroffBtn.addEventListener('click', () =>
triggerPower(
'poweroff',
"Wirklich ausschalten? Du kannst die Box erst wieder starten, wenn du den physischen Power-Knopf drückst.",
'Shutting down…'
)
);
</script>
</body>
</html>

View file

@ -198,7 +198,7 @@ h2 {
flex-wrap: wrap;
justify-content: flex-end;
}
button {
button, .btn {
background: var(--accent);
border: none;
color: var(--bg);
@ -209,16 +209,39 @@ button {
white-space: nowrap;
font-size: 0.9rem;
font-family: inherit;
/* Anchor rendered-as-button: strip underline + keep the button's
rectangular hit area. `display: inline-flex` so an <a class="btn">
lines up vertically with its <button> siblings in .buttons. */
text-decoration: none;
display: inline-flex;
align-items: center;
}
button.secondary {
button.secondary, .btn.secondary {
background: var(--card);
color: var(--fg);
border: 1px solid var(--border);
}
button.danger { background: var(--danger); color: #fff; }
button:disabled { opacity: 0.5; cursor: wait; }
button:focus-visible { outline: none; box-shadow: var(--ring); }
button:focus-visible, .btn:focus-visible { outline: none; box-shadow: var(--ring); }
.empty { color: var(--muted); font-style: italic; padding: 0.5rem 0; }
.catalog-row {
display: flex;
justify-content: space-between;
align-items: center;
flex-wrap: wrap;
gap: 0.75rem;
padding: 0.5rem 0 0.75rem;
}
.catalog-state {
margin: 0;
color: var(--muted);
font-size: 0.9rem;
}
.catalog-stage.pending {
color: var(--fg);
font-style: italic;
}
pre {
background: var(--card);
padding: 1rem;
@ -287,7 +310,8 @@ details.log-details[open] > summary { color: var(--fg); }
}
.field input:focus { outline: 2px solid var(--accent); outline-offset: -1px; }
.field .req { color: var(--danger); margin-left: 0.25rem; }
.modal .error {
.modal .error,
.login-wrap .error {
background: var(--warn);
color: var(--warn-fg);
padding: 0.5rem 0.75rem;
@ -296,7 +320,15 @@ details.log-details[open] > summary { color: var(--fg); }
font-size: 0.9rem;
display: none;
}
.modal .error.show { display: block; }
.modal .error.show,
.login-wrap .error.show { display: block; }
/* Login + first-run setup page. Shares .wrap's max-width so the form
sits in the same column the rest of the app uses, just without the
Home/Apps/Settings nav. A bit of top padding so the H1 isn't glued
to the viewport edge. */
.login-wrap { padding-top: 3rem; }
.login-wrap .actions { margin-top: 0.5rem; }
.modal-actions {
display: flex;
justify-content: flex-end;
@ -306,7 +338,8 @@ details.log-details[open] > summary { color: var(--fg); }
/* Row of buttons beneath a card used by the Furtka updates card on
/settings. Left-aligned, wraps on narrow screens. */
.update-actions {
.update-actions,
.power-actions {
display: flex;
gap: 0.5rem;
flex-wrap: wrap;
@ -365,7 +398,18 @@ details.log-details[open] > summary { color: var(--fg); }
font-size: 0.95rem;
}
.kv dt { color: var(--muted); }
.kv dd { margin: 0; color: var(--fg); font-family: ui-monospace, SFMono-Regular, Menlo, monospace; }
.kv dd {
margin: 0;
color: var(--fg);
font-family: ui-monospace, SFMono-Regular, Menlo, monospace;
/* Grid items default to min-width: auto (= content width), so a long
unbreakable value like a SHA-256 fingerprint would push past the
card. min-width: 0 lets the 1fr track enforce the column width, and
overflow-wrap: anywhere gives the colon-separated hex string valid
break opportunities. */
min-width: 0;
overflow-wrap: anywhere;
}
.coming {
display: flex;

furtka/_release_common.py Normal file
View file

@ -0,0 +1,115 @@
"""Shared primitives for release-tarball flows.
Both ``furtka.updater`` (core self-update) and ``furtka.catalog`` (apps
catalog sync) pull a tarball from a Forgejo Releases page, verify its
SHA256 against the ``.sha256`` sidecar, and extract it with a path-
traversal guard. The helpers here are the single implementation of
that dance.
Each error-raising helper accepts an ``error_cls`` kwarg so callers can
keep their domain-specific exception type (``UpdateError``,
``CatalogError``) at call sites; the helper itself defaults to a
neutral ``ReleaseError`` for use in tests or standalone scripts.
"""
from __future__ import annotations
import hashlib
import json
import shutil
import tarfile
import urllib.error
import urllib.request
from pathlib import Path
class ReleaseError(RuntimeError):
"""Neutral failure for release-tarball operations."""
def forgejo_api(host: str, repo: str, path: str, *, error_cls: type = ReleaseError) -> dict | list:
url = f"https://{host}/api/v1/repos/{repo}{path}"
req = urllib.request.Request(url, headers={"Accept": "application/json"})
try:
with urllib.request.urlopen(req, timeout=15) as resp:
return json.loads(resp.read())
except (urllib.error.URLError, json.JSONDecodeError) as e:
raise error_cls(f"forgejo api {url}: {e}") from e
def download(url: str, dest: Path, *, error_cls: type = ReleaseError) -> None:
dest.parent.mkdir(parents=True, exist_ok=True)
req = urllib.request.Request(url)
try:
with urllib.request.urlopen(req, timeout=60) as resp, dest.open("wb") as f:
shutil.copyfileobj(resp, f)
except urllib.error.URLError as e:
raise error_cls(f"download {url}: {e}") from e
def sha256_of(path: Path) -> str:
h = hashlib.sha256()
with path.open("rb") as f:
for chunk in iter(lambda: f.read(1024 * 1024), b""):
h.update(chunk)
return h.hexdigest()
def verify_tarball(tarball: Path, expected_sha: str, *, error_cls: type = ReleaseError) -> None:
actual = sha256_of(tarball)
if actual != expected_sha:
raise error_cls(f"sha256 mismatch: expected {expected_sha}, got {actual}")
def parse_sha256_sidecar(text: str, *, error_cls: type = ReleaseError) -> str:
"""Extract the hash from a standard `sha256sum` sidecar line."""
line = text.strip().split("\n", 1)[0].strip()
if not line:
raise error_cls("empty sha256 sidecar")
return line.split()[0]
def extract_tarball(tarball: Path, dest: Path, *, error_cls: type = ReleaseError) -> str:
"""Extract the tarball and return the VERSION read from its root.
Refuses entries that could escape ``dest`` via absolute paths or ``..``
segments. On Python 3.12+ the stricter ``data`` filter is additionally
enabled to catch symlink-escape / device-node / setuid tricks that the
regex check can't see.
"""
dest.mkdir(parents=True, exist_ok=True)
with tarfile.open(tarball, "r:gz") as tf:
for member in tf.getmembers():
if member.name.startswith(("/", "..")) or ".." in Path(member.name).parts:
raise error_cls(f"refusing tarball entry {member.name!r}")
try:
tf.extractall(dest, filter="data")
except TypeError:
tf.extractall(dest)
version_file = dest / "VERSION"
if not version_file.is_file():
raise error_cls("tarball has no VERSION file at root")
return version_file.read_text().strip()
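The name-based half of that guard is easy to exercise on its own — the same expression `extract_tarball` applies per member, wrapped in a helper here purely for illustration:

```python
from pathlib import Path

def is_suspicious(name: str) -> bool:
    # Mirrors the member-name check above: absolute paths and any
    # `..` segment (even mid-path) are refused before extraction.
    return name.startswith(("/", "..")) or ".." in Path(name).parts

print(is_suspicious("app/docker-compose.yaml"))  # False — normal entry
print(is_suspicious("../../etc/passwd"))         # True — traversal
print(is_suspicious("/etc/passwd"))              # True — absolute
print(is_suspicious("a/../b"))                   # True — mid-path ..
```

Note this regex-free check can't see symlink escapes or device nodes; that's what the Python 3.12+ `filter="data"` pass mentioned in the docstring covers.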
def version_tuple(v: str) -> tuple:
"""CalVer comparator: 26.1-alpha < 26.1-beta < 26.1-rc < 26.1 < 26.2-alpha.
Pre-release stages sort before the corresponding stable (no-suffix)
release. Unknown suffixes sort below everything except the malformed
fallback. Returns a tuple of (year, release, stage_rank, suffix).
"""
stage_rank = {"alpha": 0, "beta": 1, "rc": 2}
head, _, suffix = v.partition("-")
try:
year_str, release_str = head.split(".", 1)
year = int(year_str)
release = int(release_str)
except (ValueError, IndexError):
return (-1, -1, -1, v)
if not suffix:
return (year, release, 3, "")
for name, rank in stage_rank.items():
if suffix.startswith(name):
return (year, release, rank, suffix)
return (year, release, -1, suffix)
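A quick sanity check of the ordering the docstring promises (`version_tuple` copied verbatim from above so the demo is self-contained):

```python
def version_tuple(v: str) -> tuple:
    # Copied from furtka/_release_common.py for a standalone demo.
    stage_rank = {"alpha": 0, "beta": 1, "rc": 2}
    head, _, suffix = v.partition("-")
    try:
        year_str, release_str = head.split(".", 1)
        year = int(year_str)
        release = int(release_str)
    except (ValueError, IndexError):
        return (-1, -1, -1, v)
    if not suffix:
        return (year, release, 3, "")
    for name, rank in stage_rank.items():
        if suffix.startswith(name):
            return (year, release, rank, suffix)
    return (year, release, -1, suffix)

versions = ["26.2-alpha", "26.1", "26.1-rc", "26.1-alpha", "26.1-beta"]
print(sorted(versions, key=version_tuple))
# → ['26.1-alpha', '26.1-beta', '26.1-rc', '26.1', '26.2-alpha']
```

Stable `26.1` sorts above all its own pre-releases but below `26.2-alpha`, and a malformed string falls to the bottom via the `(-1, -1, -1, v)` fallback.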

View file

@ -2,22 +2,28 @@
# its lines hurts readability and the rendered output is what matters here.
"""Tiny HTTP API + management UI for the Furtka resource manager.
Single stdlib http.server process, no Flask/no third-party deps so we don't
have to pip-install anything on the target. Caddy reverse-proxies /apps and
/api from :80 to here.
Single stdlib http.server process, served behind Caddy (reverse-proxies
/apps, /api, /login and /logout from :80 to here).
Security: NO AUTH. Bound to 127.0.0.1 by default; the Caddy proxy makes it
LAN-reachable. Anyone on the LAN can install/remove apps. The UI shouts this
out at the top. Auth lands when Authentik does.
Security: single-admin password login, cookie-session, werkzeug-hashed
password stored at /var/lib/furtka/users.json (0600). Sessions live in
memory; `systemctl restart furtka-api` invalidates everyone. Fresh
installs pre-populate users.json from the webinstaller step-1 password;
upgrades from pre-auth releases fall into a first-run setup form at
/login where the admin password is created from the browser. Authentik
integration remains the long-term plan; this is the pragmatic alpha
stopgap.
"""
import json
import re
import time
from http.cookies import SimpleCookie
from http.server import BaseHTTPRequestHandler, HTTPServer
from furtka import dockerops, installer, reconciler
from furtka import auth, dockerops, install_runner, installer, reconciler, sources
from furtka.manifest import ManifestError, load_manifest
from furtka.paths import apps_dir, bundled_apps_dir
from furtka.paths import apps_dir, static_www_dir
from furtka.scanner import scan
_ICON_MAX_BYTES = 16 * 1024
@ -77,17 +83,21 @@ _HTML = """<!DOCTYPE html>
<a href="/">Home</a>
<a href="/apps" aria-current="page">Apps</a>
<a href="/settings/">Settings</a>
<a href="#" id="logout-link" onclick="return doLogout(event)">Logout</a>
</div>
</nav>
<h1>Furtka Apps</h1>
<p class="lede">Install or remove resource-manager apps on this Furtka box.</p>
<div class="warn">No authentication on this UI yet. Anyone on your LAN can install or remove apps. Don't expose this to the wider internet.</div>
<h2>Installed</h2>
<div id="installed"></div>
<h2>Available to install</h2>
<div class="catalog-row">
<p class="catalog-state">Catalog version <span id="catalog-current"></span> · last sync <span id="catalog-last-sync">never</span> <span id="catalog-stage" class="catalog-stage"></span></p>
<button type="button" class="secondary" id="catalog-sync-btn">Sync apps catalog</button>
</div>
<div id="available"></div>
<details class="log-details">
@ -116,6 +126,15 @@ function esc(s) {
return d.innerHTML;
}
async function doLogout(ev) {
ev.preventDefault();
try {
await fetch('/logout', { method: 'POST', credentials: 'same-origin' });
} catch (e) { /* best-effort; server may already be down */ }
window.location.href = '/login';
return false;
}
// Fallback when an app doesn't ship a parseable icon.svg. Simple // Fallback when an app doesn't ship a parseable icon.svg. Simple
// stroked folder currentColor so the tile's accent tint applies. // stroked folder currentColor so the tile's accent tint applies.
const FALLBACK_ICON = '<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="1.5" stroke-linecap="round" stroke-linejoin="round"><path d="M3 7v12a2 2 0 0 0 2 2h14a2 2 0 0 0 2-2V9a2 2 0 0 0-2-2h-7l-2-2H5a2 2 0 0 0-2 2z"/></svg>';
@ -169,7 +188,9 @@ async function openSettingsDialog(name, action) {
modal.form.innerHTML = data.settings.map(s => {
const id = `field-${esc(s.name)}`;
const value = action === 'edit' && s.type === 'password' ? '' : esc(s.value || '');
const placeholder = action === 'edit' && s.type === 'password' ? 'Leave blank to keep current' : '';
const placeholder = action === 'edit' && s.type === 'password' ? 'Leave blank to keep current'
: s.type === 'path' ? '/mnt/…'
: '';
return `
<div class="field">
<label for="${id}">${esc(s.label)}${s.required ? '<span class="req">*</span>' : ''}</label>
@ -193,6 +214,51 @@ async function openSettingsDialog(name, action) {
modal.submit.addEventListener('click', submitModal);
// Install progress phases written by the background job's state file.
// Mirrors furtka/install_runner.py stage strings. Unknown stages fall
// back to a neutral "Installing…" so a future phase rename doesn't
// leave the modal button blank.
const INSTALL_STAGE_LABELS = {
'pulling_image': 'Image wird heruntergeladen…',
'creating_volumes': 'Speicherbereiche werden erstellt…',
'starting_container': 'Container wird gestartet…',
'done': 'Fertig',
};
async function pollInstallStatus(original) {
// Two-minute ceiling: Jellyfin over a slow DSL line can take ~90s
// just on the image pull. Beyond that something's stuck — the
// background job is still running in systemd, but the UI gives up
// on the modal and lets the user close it.
const deadline = Date.now() + 120000;
while (Date.now() < deadline) {
await new Promise(res => setTimeout(res, 1500));
let s = {};
try {
s = await fetch('/api/apps/install/status').then(r => r.json());
} catch (e) { /* transient; keep polling */ }
const stage = s.stage || '';
modal.submit.textContent = INSTALL_STAGE_LABELS[stage] || 'Installing…';
if (stage === 'done') {
closeModal();
await refresh();
return;
}
if (stage === 'error') {
modal.error.textContent = s.error || 'Install failed';
modal.error.classList.add('show');
modal.submit.disabled = false;
modal.submit.textContent = original;
return;
}
}
// Timed out waiting for a terminal state; don't lie to the user.
modal.error.textContent = 'Installation is taking longer than expected. Check /settings for the background job status.';
modal.error.classList.add('show');
modal.submit.disabled = false;
modal.submit.textContent = original;
}
async function submitModal() {
if (!modal.current) return;
const { name, action } = modal.current;
@@ -226,6 +292,13 @@ async function submitModal() {
modal.submit.textContent = original;
return;
}
// Install dispatched a background job; poll until terminal. The
// edit path stays synchronous (settings updates are fast: env write
// + reconcile, no image pull).
if (action === 'install' && r.status === 202) {
await pollInstallStatus(original);
return;
}
closeModal();
await refresh();
} catch (e) {
@@ -244,6 +317,14 @@ async function refresh() {
document.getElementById('installed').innerHTML = installed.length
? installed.map(a => {
const hasSettings = a.has_settings;
const openHref = a.open_url ? a.open_url.replace('{host}', location.hostname) : '';
// Plain <a> rendered as a button so it behaves like a real link
// (middle-click, right-click "copy link", screen readers) instead
// of a JS onclick. Most installed apps will want this: fileshare
// deep-links to smb://, Kuma to http://host:3001/.
const openBtn = openHref
? `<a class="btn" href="${esc(openHref)}" target="_blank" rel="noopener">Open</a>`
: '';
return `
<div class="app">
<div class="left">
@@ -254,6 +335,7 @@ async function refresh() {
</div>
</div>
<div class="buttons">
${openBtn}
${hasSettings ? `<button data-op="edit" data-name="${esc(a.name)}">Settings</button>` : ''}
<button class="secondary" data-op="update" data-name="${esc(a.name)}">Update</button>
<button class="secondary" data-op="reinstall" data-name="${esc(a.name)}">Reinstall</button>
@@ -309,20 +391,197 @@ async function handleButton(op, name, btn) {
: ' — already up to date';
}
document.getElementById('log').textContent = header + '\\n' + JSON.stringify(data, null, 2);
// Reinstall dispatches an async install the same way the modal does;
// follow the background job on the button label until terminal.
if (op === 'reinstall' && r.status === 202) {
const deadline = Date.now() + 120000;
while (Date.now() < deadline) {
await new Promise(res => setTimeout(res, 1500));
let s = {};
try { s = await fetch('/api/apps/install/status').then(r => r.json()); } catch (e) {}
const stage = s.stage || '';
btn.textContent = INSTALL_STAGE_LABELS[stage] || 'Reinstalling…';
if (stage === 'done' || stage === 'error') break;
}
}
} catch (e) {
document.getElementById('log').textContent = `[${op} ${name}] network error: ${e.message}`;
}
btn.textContent = original;
btn.disabled = false;
await refresh();
}
async function refreshCatalog() {
let status;
try {
status = await fetch('/api/catalog/status').then(r => r.json());
} catch (e) {
return;
}
const cur = status.current || 'never synced';
document.getElementById('catalog-current').textContent = cur;
const stage = (status.state || {}).stage || '';
const updatedAt = (status.state || {}).updated_at || '';
document.getElementById('catalog-last-sync').textContent = updatedAt || 'never';
const stageEl = document.getElementById('catalog-stage');
if (stage && stage !== 'done') {
stageEl.textContent = '· ' + stage;
stageEl.classList.add('pending');
} else {
stageEl.textContent = '';
stageEl.classList.remove('pending');
}
}
const catalogBtn = document.getElementById('catalog-sync-btn');
catalogBtn.addEventListener('click', async () => {
catalogBtn.disabled = true;
const original = catalogBtn.textContent;
catalogBtn.textContent = 'Syncing…';
try {
const r = await fetch('/api/catalog/sync/apply', {method: 'POST'});
const data = await r.json();
document.getElementById('log').textContent = `[catalog sync] HTTP ${r.status}\\n` + JSON.stringify(data, null, 2);
// Poll for completion; sync is fast (KB-range tarball) so 30 s is plenty.
const deadline = Date.now() + 30000;
while (Date.now() < deadline) {
await new Promise(res => setTimeout(res, 1500));
const s = await fetch('/api/catalog/status').then(r => r.json()).catch(() => null);
const stage = (s && s.state && s.state.stage) || '';
if (stage === 'done' || stage === 'error') break;
}
await refreshCatalog();
await refresh();
} catch (e) {
document.getElementById('log').textContent = `[catalog sync] network error: ${e.message}`;
}
catalogBtn.disabled = false;
catalogBtn.textContent = original;
});
refresh();
refreshCatalog();
</script>
</body>
</html>
"""
# Login / first-run setup page. Rendered standalone (no main-UI chrome) so
# an unauthenticated visitor never gets a glimpse of the app list. Reuses
# /style.css for the look — the page is just a form + optional error line.
# The template has a {{ SETUP }} marker the server flips on/off depending
# on whether users.json exists yet (first-run vs. normal login).
_HTML_LOGIN = """<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Furtka · {{ TITLE }}</title>
<meta name="viewport" content="width=device-width,initial-scale=1">
<link rel="stylesheet" href="/style.css">
</head>
<body>
<main class="wrap login-wrap">
<h1>{{ HEADING }}</h1>
<p class="lede">{{ LEDE }}</p>
<form id="login-form" onsubmit="return doLogin(event)">
<div class="field">
<label for="username">Username</label>
<input id="username" name="username" type="text" autocomplete="username" required value="{{ DEFAULT_USERNAME }}" autofocus>
</div>
<div class="field">
<label for="password">Password</label>
<input id="password" name="password" type="password" autocomplete="{{ PWD_AUTOCOMPLETE }}" required minlength="8">
</div>
{{ PASSWORD2_FIELD }}
<div id="login-error" class="error"></div>
<div class="actions">
<button type="submit" id="login-submit">{{ SUBMIT_LABEL }}</button>
</div>
</form>
</main>
<script>
const SETUP = {{ SETUP_JSON }};
const errBox = document.getElementById('login-error');
async function doLogin(ev) {
ev.preventDefault();
errBox.classList.remove('show');
errBox.textContent = '';
const btn = document.getElementById('login-submit');
btn.disabled = true;
const body = {
username: document.getElementById('username').value,
password: document.getElementById('password').value,
};
if (SETUP) body.password2 = document.getElementById('password2').value;
try {
const r = await fetch('/login', {
method: 'POST',
headers: {'Content-Type': 'application/json'},
credentials: 'same-origin',
body: JSON.stringify(body),
});
if (r.ok) {
window.location.href = '/apps';
return false;
}
const data = await r.json().catch(() => ({error: 'HTTP ' + r.status}));
errBox.textContent = data.error || 'Login failed';
errBox.classList.add('show');
} catch (e) {
errBox.textContent = 'Network error — is the box reachable?';
errBox.classList.add('show');
} finally {
btn.disabled = false;
}
return false;
}
</script>
</body>
</html>
"""
def _render_login_html(setup: bool, default_username: str = "") -> str:
if setup:
password2_field = (
'<div class="field"><label for="password2">Repeat password</label>'
'<input id="password2" name="password2" type="password" '
'autocomplete="new-password" required minlength="8"></div>'
)
subs = {
"TITLE": "First-run setup",
"HEADING": "Set admin password",
"LEDE": "No admin account exists yet on this box. Pick a username and password — you'll use them to sign in to the Furtka UI.",
"PWD_AUTOCOMPLETE": "new-password",
"PASSWORD2_FIELD": password2_field,
"SUBMIT_LABEL": "Create admin",
"DEFAULT_USERNAME": "admin",
"SETUP_JSON": "true",
}
else:
subs = {
"TITLE": "Login",
"HEADING": "Furtka login",
"LEDE": "Sign in with the admin credentials you set during install.",
"PWD_AUTOCOMPLETE": "current-password",
"PASSWORD2_FIELD": "",
"SUBMIT_LABEL": "Log in",
"DEFAULT_USERNAME": default_username,
"SETUP_JSON": "false",
}
out = _HTML_LOGIN
for key, val in subs.items():
out = out.replace("{{ " + key + " }}", val)
return out
# Minimum password length enforced server-side (browser also enforces it
# via the input's minlength, but don't rely on client-side only).
_MIN_PASSWORD_LEN = 8
def _manifest_summary(m, app_dir=None):
return {
"name": m.name,
@@ -334,6 +593,9 @@ def _manifest_summary(m, app_dir=None):
"icon": m.icon,
"icon_svg": _read_icon_svg(app_dir, m.icon),
"has_settings": bool(m.settings),
# Optional template URL with `{host}` placeholder; frontend
# substitutes against location.hostname at render time.
"open_url": m.open_url,
}
@@ -349,28 +611,31 @@ def _list_installed():
return out
def _list_available():
"""Apps available to install — catalog union bundled, catalog wins on collision.
Each entry carries a `"source"` field (`"catalog"` | `"bundled"`) so the
UI can visually differentiate later. Already-installed apps are filtered
out so the UI shows them only in the installed list.
"""
installed_names = {r.path.name for r in scan(apps_dir()) if r.ok}
out = []
for app_source in sources.list_available():
if app_source.path.name in installed_names:
continue
manifest_path = app_source.path / "manifest.json"
try:
m = load_manifest(manifest_path)
except ManifestError:
continue
summary = _manifest_summary(m, app_source.path)
summary["source"] = app_source.origin
out.append(summary)
return out
def _load_manifest_for(name):
"""Return (manifest, env_values, installed_bool) for an installed or bundled/catalog app.
Returns (None, None, False) if the name doesn't resolve anywhere.
"""
@@ -382,13 +647,13 @@ def _load_manifest_for(name):
return None, None, False
values = installer.read_env_values(target / ".env")
return m, values, True
resolved = sources.resolve_app_name(name)
if resolved is not None:
try:
m = load_manifest(resolved.path / "manifest.json")
except ManifestError:
return None, None, False
env_example = resolved.path / ".env.example"
values = installer.read_env_values(env_example) if env_example.exists() else {}
return m, values, False
return None, None, False
@@ -427,19 +692,86 @@ def _do_get_settings(name):
}
_INSTALL_TERMINAL_STAGES = frozenset({"done", "error"})
def _do_install(name, settings=None):
"""Kick off an app install. Synchronous sync-phase + async docker-phase.
Fast parts run inline so validation failures come back as immediate
4xx (bad path, placeholder secret, unknown app, etc.). The slow
`docker compose pull` then `compose up` are dispatched as a
background systemd-run unit that writes phase transitions to
/var/lib/furtka/install-state.json for the UI to poll.
"""
import subprocess
# Reject if the state file reports a non-terminal install. The
# fcntl lock below catches the same race, but only *after* the API
# releases it to let the systemd-run child grab it — a competing
# POST can sneak in during that tiny window. Reading the state
# first closes that gap: as long as a previous install hasn't
# written "done" or "error", we refuse.
current_state = install_runner.read_state()
current_stage = current_state.get("stage", "") if isinstance(current_state, dict) else ""
if current_stage and current_stage not in _INSTALL_TERMINAL_STAGES:
return 409, {
"error": (
f"another install is in progress ({current_state.get('app', '?')}"
f" at {current_stage})"
)
}
# Fast-fail if another install is already in flight. Lock lives under
# /run/ so a previous reboot clears it automatically.
try:
fh = install_runner.acquire_lock()
except install_runner.InstallRunnerError as e:
return 409, {"error": str(e)}
try:
try:
src = installer.resolve_source(name)
target = installer.install_from(src, settings=settings)
except installer.InstallError as e:
return 400, {"error": str(e)}
# Initial state so the UI has something to show between this
# response and the background job's first write.
install_runner.write_state("pulling_image", app=name)
finally:
# Release the lock so the background job can re-acquire it.
fh.close()
unit = f"furtka-install-{name}"
try:
subprocess.run(
[
"systemd-run",
f"--unit={unit}",
"--no-block",
"--collect",
"/usr/local/bin/furtka",
"app",
"install-bg",
name,
],
check=True,
capture_output=True,
text=True,
)
except FileNotFoundError:
install_runner.write_state("error", app=name, error="systemd-run not available")
return 502, {"error": "systemd-run not available"}
except subprocess.CalledProcessError as e:
err = (e.stderr or e.stdout or "").strip()
install_runner.write_state("error", app=name, error=f"dispatch failed: {err}")
return 502, {"error": f"systemd-run failed: {err}"}
return 202, {"status": "dispatched", "unit": unit, "installed": str(target)}
def _do_install_status():
"""Return the current install-state.json contents (or {})."""
return 200, install_runner.read_state()
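`install_runner` itself is outside this diff; for orientation, the state-file contract the API and the UI poller rely on reduces to an atomic JSON write plus a tolerant read. This is a minimal sketch with assumed names and a throwaway path, not the real module:

```python
import json
import os
import tempfile

STATE_PATH = "/tmp/install-state.json"  # real box: /var/lib/furtka/install-state.json

def write_state(stage: str, **extra) -> None:
    # Write to a temp file in the same directory, then rename: a poller
    # reading concurrently sees either the old state or the new one,
    # never a half-written JSON document.
    data = {"stage": stage, **extra}
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(STATE_PATH) or ".")
    with os.fdopen(fd, "w") as fh:
        json.dump(data, fh)
    os.replace(tmp, STATE_PATH)

def read_state() -> dict:
    # A missing or corrupt file degrades to {}; the UI then shows the
    # neutral "Installing…" fallback instead of crashing.
    try:
        with open(STATE_PATH) as fh:
            data = json.load(fh)
    except (FileNotFoundError, json.JSONDecodeError, OSError):
        return {}
    return data if isinstance(data, dict) else {}
```

The rename is what lets `_do_install` read the state without taking the fcntl lock first.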
def _do_update_settings(name, settings):
@@ -583,6 +915,131 @@ def _do_furtka_status():
return 200, updater.read_state()
def _do_catalog_check():
"""Check Forgejo for a newer apps-catalog release.
Parallels _do_furtka_check: returns current/latest/update_available.
"""
from furtka import catalog
try:
check = catalog.check_catalog()
except catalog.CatalogError as e:
return 502, {"error": str(e)}
return 200, {
"current": check.current,
"latest": check.latest,
"update_available": check.update_available,
}
def _do_catalog_apply():
"""Kick off a catalog sync detached from this process.
Catalog sync doesn't restart furtka-api, so the lifecycle constraint that
forces the Furtka self-update to detach doesn't strictly apply here — but
using the same systemd-run pattern keeps the two UI flows symmetric and
means a slow network can't tie up the API thread. Client polls
/api/catalog/status the same way it polls /update-state.json.
"""
import subprocess
from furtka import catalog
try:
fh = catalog.acquire_lock()
except catalog.CatalogError as e:
return 409, {"error": str(e)}
fh.close()
try:
subprocess.run(
[
"systemd-run",
"--unit=furtka-catalog-sync-api",
"--no-block",
"--collect",
"/usr/local/bin/furtka",
"catalog",
"sync",
],
check=True,
capture_output=True,
text=True,
)
except FileNotFoundError:
return 502, {"error": "systemd-run not available"}
except subprocess.CalledProcessError as e:
return 502, {
"error": f"systemd-run failed: {(e.stderr or e.stdout or '').strip()}",
}
return 202, {"status": "dispatched", "unit": "furtka-catalog-sync-api"}
def _do_catalog_status():
"""Return {current, state} for the apps catalog.
`current` is the catalog's on-disk VERSION; `state` is whatever was last
written by sync_catalog to catalog-state.json. UI uses both: show the
version next to a last-sync timestamp plus a stage indicator.
"""
from furtka import catalog
return 200, {
"current": catalog.read_current_catalog_version(),
"state": catalog.read_state(),
}
_POWER_ACTIONS = {
"reboot": "reboot",
"poweroff": "poweroff",
}
def _do_power(payload):
"""Schedule a reboot or poweroff with a short delay.
`systemd-run --on-active=3s` kicks a transient timer that fires
`systemctl {reboot|poweroff}` a few seconds after the API returns
long enough for the HTTP response to reach the browser + the UI to
swap to a "Going down…" state before the kernel loses network.
The `--no-block` flag makes the systemd-run call itself return
immediately; `--collect` GCs the transient unit once it fires.
No auth: same posture as the install/remove endpoints. Anyone on the
LAN can reboot the box. The /settings banner warns about this;
Authentik will lock it down.
"""
import subprocess
action = payload.get("action")
systemctl_verb = _POWER_ACTIONS.get(action)
if systemctl_verb is None:
return 400, {"error": f"'action' must be one of {sorted(_POWER_ACTIONS)}"}
try:
subprocess.run(
[
"systemd-run",
"--on-active=3s",
"--no-block",
"--collect",
"systemctl",
systemctl_verb,
],
check=True,
capture_output=True,
text=True,
)
except FileNotFoundError:
return 502, {"error": "systemd-run not available"}
except subprocess.CalledProcessError as e:
return 502, {
"error": f"systemd-run failed: {(e.stderr or e.stdout or '').strip()}",
}
return 202, {"action": action, "scheduled_in_seconds": 3}
def _do_update(name):
"""Pull newer container images for an installed app; restart if any changed.
@@ -631,35 +1088,211 @@ def _parse_settings_body(payload):
class _Handler(BaseHTTPRequestHandler):
def _json(self, status, payload, extra_headers=None):
body = json.dumps(payload).encode()
self.send_response(status)
self.send_header("Content-Type", "application/json")
self.send_header("Content-Length", str(len(body)))
for name, value in extra_headers or []:
self.send_header(name, value)
self.end_headers()
self.wfile.write(body)
def _html(self, status, body, extra_headers=None):
b = body.encode()
self.send_response(status)
self.send_header("Content-Type", "text/html; charset=utf-8")
self.send_header("Content-Length", str(len(b)))
for name, value in extra_headers or []:
self.send_header(name, value)
self.end_headers()
self.wfile.write(b)
def _serve_static_www(self, relative_path: str):
"""Read an HTML asset from assets/www/ and serve it as 200.
Only reached after the do_GET auth-guard so the caller is
already authed. ``relative_path`` is hard-coded at the call site
(``index.html`` or ``settings/index.html``), not user-supplied,
so there's no path-traversal surface here — but we still clamp
the resolved path to static_www_dir() as a defensive check in
case a future refactor wires a dynamic path through.
"""
root = static_www_dir().resolve()
target = (root / relative_path).resolve()
if root not in target.parents and target != root:
return self._html(500, "<h1>internal error</h1>")
try:
body = target.read_text(encoding="utf-8")
except (FileNotFoundError, OSError):
return self._html(404, "<h1>not found</h1>")
return self._html(200, body)
def _redirect(self, location, extra_headers=None):
self.send_response(302)
self.send_header("Location", location)
self.send_header("Content-Length", "0")
for name, value in extra_headers or []:
self.send_header(name, value)
self.end_headers()
# ---- Auth helpers -------------------------------------------------
def _request_cookies(self) -> SimpleCookie:
cookies = SimpleCookie()
header = self.headers.get("Cookie")
if header:
try:
cookies.load(header)
except Exception:
# Malformed Cookie header — treat as no cookies rather
# than 500ing. Same posture as browsers.
return SimpleCookie()
return cookies
def _current_session(self):
cookies = self._request_cookies()
morsel = cookies.get(auth.COOKIE_NAME)
if morsel is None:
return None
return auth.SESSIONS.lookup(morsel.value)
def _session_cookie_header(self, token: str, max_age: int) -> tuple[str, str]:
secure = self.headers.get("X-Forwarded-Proto", "").lower() == "https"
parts = [
f"{auth.COOKIE_NAME}={token}",
"HttpOnly",
"SameSite=Strict",
"Path=/",
f"Max-Age={max_age}",
]
if secure:
parts.append("Secure")
return ("Set-Cookie", "; ".join(parts))
def _clear_cookie_header(self) -> tuple[str, str]:
# Max-Age=0 with an empty value tells the browser to drop it.
return (
"Set-Cookie",
f"{auth.COOKIE_NAME}=; HttpOnly; SameSite=Strict; Path=/; Max-Age=0",
)
def _client_ip(self) -> str:
# Caddy's reverse_proxy appends the real TCP peer to X-Forwarded-For;
# the rightmost entry is the one Caddy added, so it's trustworthy
# even if a client spoofed an XFF of their own. Caddy is the edge —
# no upstream proxy in front of it.
xff = self.headers.get("X-Forwarded-For")
if xff:
return xff.rsplit(",", 1)[-1].strip()
return self.client_address[0]
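The rightmost-entry rule above is easy to get backwards; a standalone sketch of the same parsing (hypothetical helper name, not part of the handler) with a spoofing example:

```python
def trusted_client_ip(xff_header, tcp_peer):
    # Caddy appends the real TCP peer as the last X-Forwarded-For entry,
    # so only the rightmost value is trustworthy; anything the client
    # sent in its own XFF header sits to the left of it.
    if xff_header:
        return xff_header.rsplit(",", 1)[-1].strip()
    return tcp_peer

# A client spoofing "X-Forwarded-For: 1.2.3.4" is still keyed by the
# address Caddy saw on the wire:
print(trusted_client_ip("1.2.3.4, 192.168.1.20", "127.0.0.1"))  # 192.168.1.20
```

Note this trust model only holds because Caddy is the single edge proxy; with another hop in front, the second-from-right entry would be the one to use.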
def _handle_login(self, payload):
username = payload.get("username") if isinstance(payload, dict) else None
password = payload.get("password") if isinstance(payload, dict) else None
if not isinstance(username, str) or not username.strip():
return self._json(400, {"error": "username is required"})
if not isinstance(password, str) or not password:
return self._json(400, {"error": "password is required"})
username = username.strip()
if auth.setup_needed():
# First-run setup path — create the admin account, then log
# in. Require password2 so a typo doesn't lock the user out
# of their own box.
password2 = payload.get("password2")
if password2 != password:
return self._json(400, {"error": "passwords do not match"})
if len(password) < _MIN_PASSWORD_LEN:
return self._json(
400,
{"error": f"password must be at least {_MIN_PASSWORD_LEN} characters"},
)
auth.create_admin(username, password)
else:
# Tuple-keyed lockout: a flood from one IP can't lock the
# admin out from a different IP. When locked we return the
# same 429 regardless of whether the password is correct —
# no oracle, no timing leak via "would have worked."
lockout_key = (username, self._client_ip())
retry = auth.LOCKOUT.retry_after_seconds(lockout_key)
if retry > 0:
return self._json(
429,
{"error": "too many failed attempts, try again later"},
extra_headers=[("Retry-After", str(retry))],
)
if not auth.authenticate(username, password):
# Register before the sleep so concurrent threads see a
# consistent count; keep the sleep so timing can't
# distinguish "locked" from "wrong password."
auth.LOCKOUT.register_failure(lockout_key)
time.sleep(0.5)
return self._json(401, {"error": "invalid username or password"})
auth.LOCKOUT.clear(lockout_key)
session = auth.SESSIONS.create(username)
cookie = self._session_cookie_header(session.token, auth.COOKIE_TTL_SECONDS)
return self._json(200, {"ok": True, "username": username}, extra_headers=[cookie])
def _handle_logout(self):
cookies = self._request_cookies()
morsel = cookies.get(auth.COOKIE_NAME)
if morsel is not None:
auth.SESSIONS.revoke(morsel.value)
return self._json(200, {"ok": True}, extra_headers=[self._clear_cookie_header()])
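The `LoginAttempts` store used by `_handle_login` lives further down in auth.py, past this excerpt. A minimal version of the (username, IP)-keyed backoff it implements, with method names matching the call sites but thresholds and windows that are purely illustrative:

```python
import time

class LoginAttempts:
    # Illustrative numbers: lock out after 5 failures inside 60 s.
    def __init__(self, threshold: int = 5, window_seconds: int = 60) -> None:
        self._threshold = threshold
        self._window = window_seconds
        self._failures: dict[tuple[str, str], list[float]] = {}

    def register_failure(self, key: tuple[str, str]) -> None:
        self._failures.setdefault(key, []).append(time.monotonic())

    def retry_after_seconds(self, key: tuple[str, str]) -> int:
        # Drop failures that fell out of the window, then decide.
        now = time.monotonic()
        recent = [t for t in self._failures.get(key, []) if now - t < self._window]
        self._failures[key] = recent
        if len(recent) < self._threshold:
            return 0
        # The newest failure anchors the lockout window.
        return max(0, int(self._window - (now - recent[-1])) + 1)

    def clear(self, key: tuple[str, str]) -> None:
        self._failures.pop(key, None)
```

Because the key is `(username, IP)`, a flood from one address cannot lock the admin out from another, matching the comment in `_handle_login`.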
def do_GET(self): # noqa: N802 — http.server convention
# --- Public routes: login page + its assets ------------------
if self.path in ("/login", "/login/"):
# Already authed? Skip straight to the app list.
if self._current_session() is not None:
return self._redirect("/apps")
return self._html(200, _render_login_html(auth.setup_needed()))
# --- Auth guard for everything below -------------------------
session = self._current_session()
if session is None:
# API paths get a 401 JSON so fetch() callers see a clean
# error. HTML paths get a redirect to /login so the browser
# naturally ends up on the login form.
if self.path.startswith("/api/"):
return self._json(401, {"error": "not authenticated"})
return self._redirect("/login")
if self.path in ("/apps", "/apps/"):
return self._html(200, _HTML)
# Landing page + settings page used to be served directly by
# Caddy as static HTML, which silently bypassed this auth
# guard (26.11-era regression that shipped and nobody noticed
# until the 26.13 SSH test session — LAN visitors could read
# the box version, IP and fire pre-authed clicks at the
# update/reboot/https-toggle buttons even though the API calls
# themselves would 401). Python reads the static HTML from
# assets/www/ and serves it behind the session check; Caddy
# now proxies / and /settings* here (see Caddyfile).
if self.path == "/":
return self._serve_static_www("index.html")
if self.path in ("/settings", "/settings/"):
return self._serve_static_www("settings/index.html")
if self.path == "/api/apps":
return self._json(200, _list_installed())
# /api/bundled is the pre-26.6 name for this list; kept as an alias
# so any external tooling survives the rename to /api/apps/available.
if self.path in ("/api/bundled", "/api/apps/available"):
return self._json(200, _list_available())
if self.path == "/api/furtka/update/status":
status, body = _do_furtka_status()
return self._json(status, body)
if self.path == "/api/furtka/https/status":
status, body = _do_https_status()
return self._json(status, body)
if self.path == "/api/catalog/status":
status, body = _do_catalog_status()
return self._json(status, body)
if self.path == "/api/apps/install/status":
status, body = _do_install_status()
return self._json(status, body)
# /api/apps/<name>/settings
if self.path.startswith("/api/apps/") and self.path.endswith("/settings"):
name = self.path[len("/api/apps/") : -len("/settings")]
@@ -679,6 +1312,16 @@ class _Handler(BaseHTTPRequestHandler):
if not isinstance(payload, dict):
return self._json(400, {"error": "body must be a JSON object"})
# --- Public routes: login + logout ----------------------------
if self.path in ("/login", "/login/"):
return self._handle_login(payload)
if self.path in ("/logout", "/logout/"):
return self._handle_logout()
# --- Auth guard for every other POST --------------------------
if self._current_session() is None:
return self._json(401, {"error": "not authenticated"})
# Per-app settings update: /api/apps/<name>/settings
if self.path.startswith("/api/apps/") and self.path.endswith("/settings"):
name = self.path[len("/api/apps/") : -len("/settings")]
@@ -709,6 +1352,19 @@ class _Handler(BaseHTTPRequestHandler):
status, body = _do_https_force(payload)
return self._json(status, body)
# Apps catalog: check + apply (daily timer + manual UI button).
if self.path == "/api/catalog/sync/check":
status, body = _do_catalog_check()
return self._json(status, body)
if self.path == "/api/catalog/sync/apply":
status, body = _do_catalog_apply()
return self._json(status, body)
# System power: /settings Reboot / Shut down buttons.
if self.path == "/api/furtka/power":
status, body = _do_power(payload)
return self._json(status, body)
name = payload.get("name")
if not isinstance(name, str) or not name:
return self._json(400, {"error": "missing or empty 'name' field"})

furtka/auth.py (new file, 260 lines)

@@ -0,0 +1,260 @@
"""Login-guard primitives for the Furtka UI.
One admin, one password. Passwords are PBKDF2-SHA256 hashed via
``furtka.passwd`` (stdlib-only hashlib.pbkdf2_hmac / hashlib.scrypt),
stored in /var/lib/furtka/users.json with mode 0600. Sessions live in
memory; a systemctl restart logs everyone out again, which is fine
for an alpha single-user box. The ``LoginAttempts`` store in this
module rate-limits failed logins per (username, IP) and is also
in-memory; a restart clears a stuck lockout.
On upgrade from pre-auth Furtka the users.json file does not exist
yet; the api's GET /login detects this via ``setup_needed()`` and
renders a first-run form that POSTs to /login as if it were a setup
submit. Fresh installs get the file pre-populated by the webinstaller
so the setup step is skipped.
Hash format is compatible with werkzeug.security; 26.11 / 26.12 boxes
that happened to have werkzeug installed can carry their users.json
forward without re-setup; see ``furtka.passwd`` for the scrypt reader.
"""
from __future__ import annotations
import json
import math
import secrets
import threading
from dataclasses import dataclass
from datetime import UTC, datetime, timedelta
from furtka.passwd import hash_password as _hash_password
from furtka.passwd import verify_password as _verify_password
from furtka.paths import users_file
COOKIE_NAME = "furtka_session"
COOKIE_TTL_SECONDS = 7 * 24 * 3600 # one week
def hash_password(plain: str) -> str:
"""PBKDF2-SHA256 via stdlib. 600k iterations (OWASP 2023)."""
return _hash_password(plain)
def verify_password(plain: str, hashed: str) -> bool:
"""Constant-time compare. Accepts stdlib + legacy werkzeug formats."""
return _verify_password(plain, hashed)
def load_users() -> dict:
"""Return the users dict, or {} if the file is missing or empty.
Missing-file is the expected state on first boot and on upgrades from
pre-auth versions; callers treat an empty dict as "setup required".
"""
path = users_file()
if not path.exists():
return {}
try:
raw = path.read_text()
except OSError:
return {}
if not raw.strip():
return {}
try:
data = json.loads(raw)
except json.JSONDecodeError:
return {}
if not isinstance(data, dict):
return {}
return data
def save_users(users: dict) -> None:
"""Atomically write users.json with mode 0600.
Same pattern as installer.write_env: write to .tmp, chmod, rename,
so a crash between open() and close() can't leave a world-readable
partial file.
"""
path = users_file()
path.parent.mkdir(parents=True, exist_ok=True)
tmp = path.with_suffix(path.suffix + ".tmp")
tmp.write_text(json.dumps(users, indent=2) + "\n")
tmp.chmod(0o600)
tmp.replace(path)
def setup_needed() -> bool:
"""True when no admin is registered yet — initial setup is required."""
users = load_users()
return not users or "admin" not in users
def create_admin(username: str, password: str) -> None:
"""Overwrite users.json with a single admin account.
The webinstaller calls this post-install (with the step-1 password) so
the installed system is login-guarded from first boot. The /login
route calls it on first setup for upgrade-path boxes that don't
already have a users.json.
"""
users = {
"admin": {
"username": username,
"hash": hash_password(password),
"created_at": datetime.now(UTC).isoformat(timespec="seconds"),
}
}
save_users(users)
def authenticate(username: str, password: str) -> bool:
"""Return True iff the supplied credentials match the admin record."""
users = load_users()
admin = users.get("admin")
if not admin:
return False
if admin.get("username") != username:
return False
hashed = admin.get("hash")
if not isinstance(hashed, str) or not hashed:
return False
return verify_password(password, hashed)
@dataclass(frozen=True)
class Session:
token: str
username: str
expires_at: datetime
class SessionStore:
"""In-memory session table. Thread-safe (api.py uses the stdlib
HTTPServer which handles one request per thread though the default
variant is single-threaded, we keep the lock so swapping to
ThreadingHTTPServer later doesn't require revisiting this).
"""
def __init__(self, ttl_seconds: int = COOKIE_TTL_SECONDS) -> None:
self._ttl = timedelta(seconds=ttl_seconds)
self._by_token: dict[str, Session] = {}
self._lock = threading.Lock()
def create(self, username: str) -> Session:
token = secrets.token_urlsafe(32)
session = Session(
token=token,
username=username,
expires_at=datetime.now(UTC) + self._ttl,
)
with self._lock:
self._by_token[token] = session
return session
def lookup(self, token: str | None) -> Session | None:
if not token:
return None
with self._lock:
session = self._by_token.get(token)
if session is None:
return None
if datetime.now(UTC) >= session.expires_at:
# Expired — drop it on the floor so repeat lookups stay fast.
self._by_token.pop(token, None)
return None
return session
def revoke(self, token: str | None) -> None:
if not token:
return
with self._lock:
self._by_token.pop(token, None)
def clear(self) -> None:
"""Test helper — wipe all sessions."""
with self._lock:
self._by_token.clear()
class LoginAttempts:
"""In-memory rate-limiter for failed logins, keyed by (username, ip).
Parallels SessionStore: thread-safe, uses ``datetime.now(UTC)`` so the
same ``_FakeDatetime`` test shim works, lives only in memory so a
``systemctl restart furtka`` wipes a stuck lockout. Tuple keying means
a flood from one source IP can't lock the admin out from elsewhere
(different IP → different key); the trade-off is that an attacker
can keep probing forever by rotating IPs, but they still eat the
PBKDF2 cost per attempt.
Stored data is a dict[key → list[datetime]] of recent failure
timestamps. Every call prunes entries older than ``WINDOW_SECONDS``,
so memory per active key is bounded by ``MAX_FAILURES``.
"""
MAX_FAILURES = 10
WINDOW_SECONDS = 15 * 60
LOCKOUT_SECONDS = 15 * 60
def __init__(
self,
max_failures: int = MAX_FAILURES,
window_seconds: int = WINDOW_SECONDS,
lockout_seconds: int = LOCKOUT_SECONDS,
) -> None:
self._max = max_failures
self._window = timedelta(seconds=window_seconds)
self._lockout = timedelta(seconds=lockout_seconds)
self._fails: dict[tuple[str, str], list[datetime]] = {}
self._lock = threading.Lock()
def _prune_locked(self, key: tuple[str, str], now: datetime) -> list[datetime]:
"""Drop timestamps older than the window; caller holds self._lock."""
cutoff = now - self._window
kept = [ts for ts in self._fails.get(key, ()) if ts >= cutoff]
if kept:
self._fails[key] = kept
else:
self._fails.pop(key, None)
return kept
def register_failure(self, key: tuple[str, str]) -> None:
now = datetime.now(UTC)
with self._lock:
self._prune_locked(key, now)
self._fails.setdefault(key, []).append(now)
def is_locked(self, key: tuple[str, str]) -> bool:
return self.retry_after_seconds(key) > 0
def retry_after_seconds(self, key: tuple[str, str]) -> int:
"""Seconds remaining on an active lockout, or 0 if not locked."""
now = datetime.now(UTC)
with self._lock:
kept = self._prune_locked(key, now)
if len(kept) < self._max:
return 0
# Lockout runs from the oldest retained failure; once it
# falls off the window the key is effectively released.
unlock_at = kept[0] + self._lockout
remaining = (unlock_at - now).total_seconds()
if remaining <= 0:
return 0
return max(1, math.ceil(remaining))
def clear(self, key: tuple[str, str]) -> None:
with self._lock:
self._fails.pop(key, None)
def clear_all(self) -> None:
"""Test helper — wipe all failure state."""
with self._lock:
self._fails.clear()
# Module-level singleton used by the HTTP handler.
SESSIONS = SessionStore()
LOCKOUT = LoginAttempts()
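The ``furtka.passwd`` helpers wrapped above are not part of this diff; a minimal stdlib sketch of the PBKDF2 side looks like this (the on-disk hash format here is an assumption, a werkzeug-style string, not necessarily the project's real one):

```python
# Sketch of a stdlib PBKDF2 hasher in the spirit of furtka.passwd.
# ASSUMPTION: format "pbkdf2:sha256:<iters>$<salt>$<hex>" is illustrative.
import hashlib
import hmac
import secrets

ITERATIONS = 600_000  # OWASP 2023 guidance for PBKDF2-SHA256


def hash_password(plain: str) -> str:
    salt = secrets.token_hex(16)
    dk = hashlib.pbkdf2_hmac("sha256", plain.encode(), salt.encode(), ITERATIONS)
    return f"pbkdf2:sha256:{ITERATIONS}${salt}${dk.hex()}"


def verify_password(plain: str, hashed: str) -> bool:
    try:
        method, salt, digest = hashed.split("$", 2)
        _, algo, iters = method.split(":")
        rounds = int(iters)
    except ValueError:
        return False  # malformed record never matches
    dk = hashlib.pbkdf2_hmac(algo, plain.encode(), salt.encode(), rounds)
    # hmac.compare_digest gives the constant-time compare the docstring promises
    return hmac.compare_digest(dk.hex(), digest)
```

Scrypt and legacy-werkzeug acceptance would layer on top of this format dispatch inside ``verify_password``.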

furtka/catalog.py (new file)
@@ -0,0 +1,253 @@
"""Furtka apps catalog sync.
Mirrors the shape of ``furtka.updater`` but targets a separate Forgejo
repo (``daniel/furtka-apps`` by default) whose releases carry a single
``furtka-apps-<ver>.tar.gz`` with ``VERSION`` at the root and an
``apps/<name>/`` tree underneath. Pulling the catalog keeps the on-box
app ecosystem fresh without requiring a Furtka core release; core
ships a seed ``apps/`` under ``/opt/furtka/current/apps/`` that the
resolver falls back to when the catalog is empty or stale.
Flow of ``sync_catalog()``:
1. flock on ``/run/furtka/catalog.lock`` so two triggers (timer + manual
UI click) can't race.
2. ``check_catalog()`` asks Forgejo for the latest release and picks out
the tarball + sidecar URLs.
3. Download tarball + sidecar to ``/var/lib/furtka/catalog/_downloads/``.
4. Verify the sha256 sidecar against the tarball.
5. Extract into ``/var/lib/furtka/catalog/_staging/``.
6. Validate every ``apps/<name>/manifest.json`` via ``furtka.manifest.
load_manifest``. A broken catalog release is refused here, not half-
applied.
7. Atomic rename: existing live catalog → ``catalog.prev/``, staging →
``catalog/``, then rmtree the prev. Any failure before this step
leaves the live catalog untouched.
8. Write ``/var/lib/furtka/catalog-state.json`` for the UI.
Paths can be overridden via env vars so tests can redirect everything to
a tmp dir.
"""
from __future__ import annotations
import fcntl
import json
import os
import shutil
import time
from dataclasses import dataclass
from pathlib import Path
from furtka import _release_common as _rc
from furtka.manifest import ManifestError, load_manifest
from furtka.paths import catalog_dir
FORGEJO_HOST = os.environ.get("FURTKA_FORGEJO_HOST", "forgejo.sourcegate.online")
CATALOG_REPO = os.environ.get("FURTKA_CATALOG_REPO", "daniel/furtka-apps")
_CATALOG_STATE = Path(os.environ.get("FURTKA_CATALOG_STATE", "/var/lib/furtka/catalog-state.json"))
_LOCK_PATH = Path(os.environ.get("FURTKA_CATALOG_LOCK", "/run/furtka/catalog.lock"))
_STAGING_NAME = "_staging"
_DOWNLOADS_NAME = "_downloads"
_PREV_SUFFIX = ".prev"
_VERSION_FILE = "VERSION"
class CatalogError(RuntimeError):
"""Any failure in the catalog sync flow that should surface to the caller."""
@dataclass(frozen=True)
class CatalogCheck:
current: str | None
latest: str
update_available: bool
tarball_url: str | None
sha256_url: str | None
def state_path() -> Path:
return _CATALOG_STATE
def lock_path() -> Path:
return _LOCK_PATH
def read_current_catalog_version() -> str | None:
"""Return the string in <catalog_dir>/VERSION, or None if absent / unreadable."""
try:
value = (catalog_dir() / _VERSION_FILE).read_text().strip()
except (FileNotFoundError, NotADirectoryError, OSError):
return None
return value or None
def check_catalog() -> CatalogCheck:
"""Query Forgejo for the latest catalog release.
Uses ``/releases?limit=1`` (not ``/releases/latest``) for the same
reason the core updater does: Forgejo's ``latest`` endpoint skips
pre-releases and 404s when every tag carries a suffix.
"""
current = read_current_catalog_version()
releases = _rc.forgejo_api(
FORGEJO_HOST, CATALOG_REPO, "/releases?limit=1", error_cls=CatalogError
)
if not isinstance(releases, list) or not releases:
raise CatalogError("no catalog releases published yet")
release = releases[0]
latest = str(release.get("tag_name") or "").strip()
if not latest:
raise CatalogError("latest catalog release has empty tag_name")
tarball_url = None
sha256_url = None
for asset in release.get("assets") or []:
name = asset.get("name") or ""
url = asset.get("browser_download_url") or ""
if name.endswith(".tar.gz") and "furtka-apps-" in name:
tarball_url = url
elif name.endswith(".tar.gz.sha256"):
sha256_url = url
available = latest != current and (
current is None or _rc.version_tuple(latest) > _rc.version_tuple(current)
)
return CatalogCheck(
current=current,
latest=latest,
update_available=available,
tarball_url=tarball_url,
sha256_url=sha256_url,
)
def write_state(stage: str, **extra) -> None:
"""Atomic JSON state write — same shape as updater's update-state.json."""
state_path().parent.mkdir(parents=True, exist_ok=True)
tmp = state_path().with_suffix(".tmp")
payload = {"stage": stage, "updated_at": time.strftime("%Y-%m-%dT%H:%M:%S%z"), **extra}
tmp.write_text(json.dumps(payload, indent=2))
tmp.replace(state_path())
def read_state() -> dict:
try:
return json.loads(state_path().read_text())
except (FileNotFoundError, json.JSONDecodeError):
return {}
def acquire_lock():
path = lock_path()
path.parent.mkdir(parents=True, exist_ok=True)
fh = path.open("w")
try:
fcntl.flock(fh, fcntl.LOCK_EX | fcntl.LOCK_NB)
except BlockingIOError as e:
fh.close()
raise CatalogError("another catalog sync is already in progress") from e
return fh
def _validate_staging(staging: Path, expected_version: str) -> None:
"""Fail hard if the staging tree isn't a well-formed catalog release."""
version_file = staging / _VERSION_FILE
if not version_file.is_file():
raise CatalogError("catalog tarball has no VERSION file at root")
actual = version_file.read_text().strip()
if actual != expected_version:
raise CatalogError(
f"catalog tarball VERSION ({actual!r}) doesn't match expected ({expected_version!r})"
)
apps_root = staging / "apps"
if not apps_root.is_dir():
raise CatalogError("catalog tarball has no apps/ directory")
for entry in sorted(apps_root.iterdir()):
if not entry.is_dir():
continue
manifest_path = entry / "manifest.json"
if not manifest_path.exists():
raise CatalogError(f"catalog app {entry.name!r} has no manifest.json")
try:
load_manifest(manifest_path, expected_name=entry.name)
except ManifestError as e:
raise CatalogError(f"catalog app {entry.name!r}: invalid manifest: {e}") from e
def _atomic_swap(staging: Path) -> None:
"""Move staging → live catalog, keeping the previous tree as .prev until
the rename succeeds so we never leave a half-written catalog on disk."""
live = catalog_dir()
live.parent.mkdir(parents=True, exist_ok=True)
prev = live.with_name(live.name + _PREV_SUFFIX)
if prev.exists():
shutil.rmtree(prev)
if live.exists():
live.rename(prev)
try:
staging.rename(live)
except OSError as e:
if prev.exists():
# try to restore the previous tree; if that also fails the box
# has no catalog at all until the next sync — still better than
# a partially-extracted tree.
try:
prev.rename(live)
except OSError:
pass
raise CatalogError(f"atomic catalog swap failed: {e}") from e
if prev.exists():
shutil.rmtree(prev, ignore_errors=True)
def sync_catalog() -> CatalogCheck:
"""End-to-end sync. Acquires the lock, writes state at each stage, and
leaves the live catalog untouched on any failure before the rename step.
"""
with acquire_lock():
write_state("checking")
check = check_catalog()
if not check.update_available:
write_state("done", version=check.current or check.latest, note="already up to date")
return check
if not check.tarball_url or not check.sha256_url:
raise CatalogError("catalog release is missing tarball or sha256 asset")
# Downloads land in a sibling of the live catalog so half-finished
# artefacts never pollute the live tree, and stay under /var/lib/
# furtka/ so a sync interrupted by reboot can resume instead of
# starting over from /tmp (which clears).
dl_dir = catalog_dir().with_name(catalog_dir().name + _DOWNLOADS_NAME)
dl_dir.mkdir(parents=True, exist_ok=True)
tarball = dl_dir / f"furtka-apps-{check.latest}.tar.gz"
sha_file = dl_dir / f"furtka-apps-{check.latest}.tar.gz.sha256"
write_state("downloading", latest=check.latest)
_rc.download(check.tarball_url, tarball, error_cls=CatalogError)
_rc.download(check.sha256_url, sha_file, error_cls=CatalogError)
write_state("verifying", latest=check.latest)
expected = _rc.parse_sha256_sidecar(sha_file.read_text(), error_cls=CatalogError)
_rc.verify_tarball(tarball, expected, error_cls=CatalogError)
write_state("extracting", latest=check.latest)
staging = catalog_dir().with_name(catalog_dir().name + _STAGING_NAME)
if staging.exists():
shutil.rmtree(staging)
try:
_rc.extract_tarball(tarball, staging, error_cls=CatalogError)
_validate_staging(staging, check.latest)
except CatalogError:
shutil.rmtree(staging, ignore_errors=True)
raise
write_state("swapping", latest=check.latest)
try:
_atomic_swap(staging)
except CatalogError:
shutil.rmtree(staging, ignore_errors=True)
raise
write_state("done", version=check.latest, previous=check.current)
return check
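Step 7's rename dance can be exercised standalone; this sketch mirrors ``_atomic_swap`` against throwaway directories (paths are illustrative, not Furtka's real ones):

```python
# live -> .prev, staging -> live, drop .prev; restore .prev on failure.
import shutil
import tempfile
from pathlib import Path


def atomic_swap(staging: Path, live: Path) -> None:
    prev = live.with_name(live.name + ".prev")
    if prev.exists():
        shutil.rmtree(prev)
    if live.exists():
        live.rename(prev)
    try:
        staging.rename(live)
    except OSError:
        if prev.exists():
            prev.rename(live)  # best-effort restore of the old tree
        raise
    if prev.exists():
        shutil.rmtree(prev, ignore_errors=True)


# demo against a throwaway tree
root = Path(tempfile.mkdtemp())
(live := root / "catalog").mkdir()
(live / "VERSION").write_text("1.0.0")
(staging := root / "catalog_staging").mkdir()
(staging / "VERSION").write_text("1.1.0")
atomic_swap(staging, live)
print((live / "VERSION").read_text())  # 1.1.0
```

Both renames are single-directory moves on the same filesystem, which is what keeps the swap atomic from the point of view of anything reading the live catalog.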


@@ -21,9 +21,22 @@ def _cmd_app_list(args: argparse.Namespace) -> int:
"display_name": r.manifest.display_name,
"version": r.manifest.version,
"description": r.manifest.description,
"description_long": r.manifest.description_long,
"volumes": list(r.manifest.volumes), "volumes": list(r.manifest.volumes),
"ports": list(r.manifest.ports), "ports": list(r.manifest.ports),
"icon": r.manifest.icon, "icon": r.manifest.icon,
"open_url": r.manifest.open_url,
"settings": [
{
"name": s.name,
"label": s.label,
"description": s.description,
"type": s.type,
"required": s.required,
"default": s.default,
}
for s in r.manifest.settings
],
}
if r.manifest
else None,
@@ -58,6 +71,24 @@ def _cmd_app_install(args: argparse.Namespace) -> int:
return 1 if reconciler.has_errors(actions) else 0
def _cmd_app_install_bg(args: argparse.Namespace) -> int:
"""Docker-facing phases of an install — called by the API via systemd-run.
Internal subcommand; normal CLI users want `app install` (synchronous).
This exists to separate the slow docker pull/up from the synchronous
validation the API does inline, so the UI can poll a state file.
"""
from furtka import install_runner
try:
install_runner.run_install(args.name)
except Exception as e:
# run_install already wrote state="error"; echo for journald.
print(f"install-bg failed: {e}", file=sys.stderr)
return 1
return 0
def _cmd_app_remove(args: argparse.Namespace) -> int:
target = apps_dir() / args.name
if not target.exists():
@@ -149,6 +180,60 @@ def _cmd_rollback(args: argparse.Namespace) -> int:
return 0
def _cmd_catalog_sync(args: argparse.Namespace) -> int:
from furtka import catalog
if args.check:
try:
check = catalog.check_catalog()
except catalog.CatalogError as e:
print(f"error: {e}", file=sys.stderr)
return 2
if args.json:
print(
json.dumps(
{
"current": check.current,
"latest": check.latest,
"update_available": check.update_available,
},
indent=2,
)
)
elif check.update_available:
print(f"Catalog update available: {check.current or '(none)'}{check.latest}")
else:
print(f"Catalog already up to date ({check.current or check.latest})")
return 0
try:
check = catalog.sync_catalog()
except catalog.CatalogError as e:
print(f"error: {e}", file=sys.stderr)
return 2
if not check.update_available:
print(f"Catalog already up to date ({check.current or check.latest})")
else:
print(f"Synced catalog {check.current or '(none)'}{check.latest}")
return 0
def _cmd_catalog_status(args: argparse.Namespace) -> int:
from furtka import catalog
current = catalog.read_current_catalog_version()
state = catalog.read_state()
if args.json:
print(json.dumps({"current": current, "state": state}, indent=2))
return 0
print(f"Catalog version: {current or '(none — run `furtka catalog sync`)'}")
if state:
print(f"Last sync stage: {state.get('stage', '?')} at {state.get('updated_at', '?')}")
else:
print("Last sync stage: (never)")
return 0
def build_parser() -> argparse.ArgumentParser:
p = argparse.ArgumentParser(prog="furtka", description="Furtka resource manager")
sub = p.add_subparsers(dest="command", required=True)
@@ -170,6 +255,15 @@ def build_parser() -> argparse.ArgumentParser:
)
app_install.set_defaults(func=_cmd_app_install)
# Internal — called by the HTTP API via systemd-run. Deliberately omitted
# from the help listing; regular CLI users want `app install` above.
app_install_bg = app_sub.add_parser(
"install-bg",
help=argparse.SUPPRESS,
)
app_install_bg.add_argument("name", help="Installed app folder name")
app_install_bg.set_defaults(func=_cmd_app_install_bg)
app_remove = app_sub.add_parser("remove", help="Stop and uninstall an app (keeps volumes)")
app_remove.add_argument("name", help="App name (folder name under /var/lib/furtka/apps/)")
app_remove.set_defaults(func=_cmd_app_remove)
@@ -212,6 +306,36 @@ def build_parser() -> argparse.ArgumentParser:
)
rollback.set_defaults(func=_cmd_rollback)
catalog = sub.add_parser("catalog", help="Manage the apps catalog (daniel/furtka-apps)")
catalog_sub = catalog.add_subparsers(dest="subcommand", required=True)
catalog_sync = catalog_sub.add_parser(
"sync",
help="Download and install the latest apps catalog from Forgejo",
)
catalog_sync.add_argument(
"--check",
action="store_true",
help="Only check whether a catalog update is available; don't apply",
)
catalog_sync.add_argument(
"--json",
action="store_true",
help="Emit machine-readable JSON (only honoured with --check)",
)
catalog_sync.set_defaults(func=_cmd_catalog_sync)
catalog_status = catalog_sub.add_parser(
"status",
help="Print the currently-installed catalog version and last-sync stage",
)
catalog_status.add_argument(
"--json",
action="store_true",
help="Emit machine-readable JSON",
)
catalog_status.set_defaults(func=_cmd_catalog_status)
return p
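The hidden ``install-bg`` registration leans on ``help=argparse.SUPPRESS``, which keeps the subcommand parseable but out of the indented subcommand listing (it may still appear in the usage braces unless a metavar is set). A minimal sketch, with the handler stubbed out:

```python
# Hidden subcommand pattern from the diff above; only parser wiring shown.
import argparse

p = argparse.ArgumentParser(prog="furtka")
sub = p.add_subparsers(dest="command", required=True)
app = sub.add_parser("app")
app_sub = app.add_subparsers(dest="subcommand", required=True)

# help=argparse.SUPPRESS hides this entry from the subcommand help listing
bg = app_sub.add_parser("install-bg", help=argparse.SUPPRESS)
bg.add_argument("name")

args = p.parse_args(["app", "install-bg", "jellyfin"])
print(args.subcommand, args.name)  # install-bg jellyfin
```

The API can therefore shell out to ``furtka app install-bg <name>`` without the subcommand cluttering ``furtka app --help`` for humans.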


@@ -6,10 +6,25 @@ sets `XDG_DATA_HOME=/var/lib`, so on the target that resolves to
/var/lib/caddy/pki/authorities/local/. The private key stays 0600 /
caddy-owned; we only ever read the public root.crt next to it.
HTTPS is **opt-in** since 26.15-alpha. Default Caddyfile has no `:443`
site block, so `tls internal` never triggers cert issuance. The
/settings toggle drops a snippet file into /etc/caddy/furtka-https.d/
that adds the hostname+tls-internal block (plus the redirect snippet
inside /etc/caddy/furtka.d/ for HTTP → HTTPS). Disabling the toggle
removes both snippets and reloads Caddy; the box falls back to HTTP-only.
Why opt-in: fresh-install boxes used to always serve a self-signed
cert on :443. Any browser that had ever trusted a previous Furtka
box's local CA rejected the new cert with an unbypassable
SEC_ERROR_BAD_SIGNATURE; Firefox in particular has no "Advanced →
Accept" for that case. Making HTTPS explicit means fresh installs
never hit that trap; users who want HTTPS download the rootCA.crt
first and then click the toggle.
This module exposes:
- status(): CA fingerprint + current toggle state
- set_force_https(enabled): write/remove BOTH snippets atomically,
reload Caddy, roll back on failure.
""" """
import base64 import base64
@@ -22,6 +37,9 @@ CA_CERT_PATH = Path("/var/lib/caddy/pki/authorities/local/root.crt")
SNIPPET_DIR = Path("/etc/caddy/furtka.d")
REDIRECT_SNIPPET = SNIPPET_DIR / "redirect.caddyfile"
REDIRECT_CONTENT = "redir https://{host}{uri} permanent\n"
HTTPS_SNIPPET_DIR = Path("/etc/caddy/furtka-https.d")
HTTPS_SNIPPET = HTTPS_SNIPPET_DIR / "https.caddyfile"
HOSTNAME_FILE = Path("/etc/hostname")
_PEM_RE = re.compile(
r"-----BEGIN CERTIFICATE-----\s*(.+?)\s*-----END CERTIFICATE-----",
@@ -33,6 +51,30 @@ class HttpsError(Exception):
"""Recoverable failure from set_force_https — the caller should 5xx."""
def _read_hostname(hostname_file: Path = HOSTNAME_FILE) -> str:
"""Return the box's hostname, stripped. Falls back to 'furtka' so a
missing /etc/hostname doesn't produce an empty site block that Caddy
would reject at parse time."""
try:
value = hostname_file.read_text().strip()
except (FileNotFoundError, PermissionError, OSError):
return "furtka"
return value or "furtka"
def _https_snippet_content(hostname: str) -> str:
"""Caddy site block the HTTPS toggle installs at opt-in.
Serves <hostname>.local and <hostname> on :443 with Caddy's
`tls internal` (local CA auto-issuance), and imports the shared
furtka_routes snippet so the :443 listener exposes the same
routes as :80. Must be written at top-level (not inside another
site block); that's why the Caddyfile imports furtka-https.d at
top-level rather than inside :80.
"""
return f"{hostname}.local, {hostname} {{\n\ttls internal\n\timport furtka_routes\n}}\n"
def _ca_fingerprint(ca_path: Path) -> str | None:
try:
pem = ca_path.read_text()
@@ -54,13 +96,20 @@ def _format_fingerprint(hex_upper: str) -> str:
def status(
ca_path: Path = CA_CERT_PATH,
https_snippet: Path = HTTPS_SNIPPET,
) -> dict:
"""force_https is True iff the HTTPS listener snippet exists.
Before 26.15-alpha this checked the redirect snippet instead, but
the redirect alone without a :443 listener wouldn't actually serve
HTTPS, so the listener snippet is the authoritative "HTTPS is on"
signal.
"""
fp = _ca_fingerprint(ca_path)
return {
"ca_available": fp is not None,
"fingerprint_sha256": _format_fingerprint(fp) if fp else None,
"force_https": https_snippet.is_file(),
"ca_download_url": "/rootCA.crt",
}
@@ -78,29 +127,48 @@ def set_force_https(
enabled: bool,
snippet_dir: Path = SNIPPET_DIR,
snippet: Path = REDIRECT_SNIPPET,
https_snippet_dir: Path = HTTPS_SNIPPET_DIR,
https_snippet: Path = HTTPS_SNIPPET,
hostname_file: Path = HOSTNAME_FILE,
reload_caddy=_default_reload,
) -> bool:
"""Toggle HTTPS by writing or removing two snippets atomically:
1. The top-level HTTPS hostname+tls-internal block (enables the :443
listener + Caddy's `tls internal` cert issuance)
2. The :80-scoped redirect snippet (forces HTTP → HTTPS)
Reload Caddy after the snippet swap. On reload failure both
snippets are reverted to their pre-call state so a bad config
can't leave Caddy wedged.
"""
snippet_dir.mkdir(mode=0o755, parents=True, exist_ok=True)
https_snippet_dir.mkdir(mode=0o755, parents=True, exist_ok=True)
had_redirect = snippet.is_file()
previous_redirect = snippet.read_text() if had_redirect else None
had_https = https_snippet.is_file()
previous_https = https_snippet.read_text() if had_https else None
if enabled:
snippet.write_text(REDIRECT_CONTENT)
https_snippet.write_text(_https_snippet_content(_read_hostname(hostname_file)))
else:
if had_redirect:
snippet.unlink()
if had_https:
https_snippet.unlink()
try:
reload_caddy()
except subprocess.CalledProcessError as e:
_revert(snippet, previous_redirect)
_revert(https_snippet, previous_https)
msg = (e.stderr or e.stdout or "").strip() or f"exit {e.returncode}"
raise HttpsError(f"caddy reload failed: {msg}") from e
except FileNotFoundError as e:
_revert(snippet, previous_redirect)
_revert(https_snippet, previous_https)
raise HttpsError(f"systemctl not available: {e}") from e
return enabled
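The revert-on-reload-failure contract is easy to check in isolation; this simplified single-snippet version injects the reload callable the way ``set_force_https`` does (file paths and names are placeholders, not Furtka's real snippet paths):

```python
# Toggle a config snippet, rolling the file back if the reload fails.
import tempfile
from pathlib import Path


def toggle_snippet(snippet: Path, content: str, enabled: bool, reload_cb) -> None:
    had = snippet.is_file()
    previous = snippet.read_text() if had else None
    if enabled:
        snippet.write_text(content)
    elif had:
        snippet.unlink()
    try:
        reload_cb()
    except Exception:
        # roll back so a broken config can't wedge the next reload/restart
        if previous is None:
            snippet.unlink(missing_ok=True)
        else:
            snippet.write_text(previous)
        raise


snippet = Path(tempfile.mkdtemp()) / "redirect.caddyfile"


def bad_reload():
    raise RuntimeError("caddy reload failed")


try:
    toggle_snippet(snippet, "redir https://{host}{uri} permanent\n", True, bad_reload)
except RuntimeError:
    pass
print(snippet.exists())  # False
```

Because the pre-call state is captured before any write, the same revert path covers both directions of the toggle.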

furtka/install_runner.py (new file)

@@ -0,0 +1,121 @@
"""Background job for app installs — progress-visible via state file.
The slow part of installing an app is `docker compose pull` on a large
image (Jellyfin ~500 MB); without progress feedback, the UI modal sits
dead on "Installing…" for 30+ seconds and the user wonders if it hung.
This module mirrors the exact same shape as ``furtka.catalog`` and
``furtka.updater`` so the UI can poll an install just like it polls a
catalog sync or a self-update. The split is:
- ``furtka.api._do_install`` runs synchronously: resolve source, copy
the app folder, write .env, validate path settings + placeholders.
Those are fast, and their failures deserve an immediate 4xx so the
install modal can surface them in-line.
- After that the API writes an initial state file (stage
"pulling_image") and dispatches ``systemd-run --unit=furtka-install-
<name>`` to run ``furtka app install-bg <name>`` in the background.
That CLI subcommand is what calls ``run_install()`` here; it does the
docker-facing phases and writes state transitions as it goes.
State file schema (``/var/lib/furtka/install-state.json``):
{
"stage": "pulling_image" | "creating_volumes"
| "starting_container" | "done" | "error",
"updated_at": "2026-04-21T17:30:45+0200",
"app": "jellyfin",
"version": "1.0.0", // added at "done"
"error": "details..." // added at "error"
}
Lock: ``/run/furtka/install.lock`` (tmpfs, reboot-safe). Global, not
per-app; two parallel installs are not a v1 use-case and the lock
keeps the state-file representation simple (one in-flight install at
a time).
"""
from __future__ import annotations
import fcntl
import json
import os
import time
from pathlib import Path
from furtka import dockerops
from furtka.manifest import load_manifest
from furtka.paths import apps_dir
_INSTALL_STATE = Path(os.environ.get("FURTKA_INSTALL_STATE", "/var/lib/furtka/install-state.json"))
_LOCK_PATH = Path(os.environ.get("FURTKA_INSTALL_LOCK", "/run/furtka/install.lock"))
class InstallRunnerError(RuntimeError):
"""Any failure in the background install flow that should surface to the caller."""
def state_path() -> Path:
return _INSTALL_STATE
def lock_path() -> Path:
return _LOCK_PATH
def write_state(stage: str, **extra) -> None:
"""Atomic JSON state write — same shape as catalog/update-state."""
state_path().parent.mkdir(parents=True, exist_ok=True)
tmp = state_path().with_suffix(".tmp")
payload = {"stage": stage, "updated_at": time.strftime("%Y-%m-%dT%H:%M:%S%z"), **extra}
tmp.write_text(json.dumps(payload, indent=2))
tmp.replace(state_path())
def read_state() -> dict:
try:
return json.loads(state_path().read_text())
except (FileNotFoundError, json.JSONDecodeError):
return {}
def acquire_lock():
path = lock_path()
path.parent.mkdir(parents=True, exist_ok=True)
fh = path.open("w")
try:
fcntl.flock(fh, fcntl.LOCK_EX | fcntl.LOCK_NB)
except BlockingIOError as e:
fh.close()
raise InstallRunnerError("another install is already in progress") from e
return fh
def run_install(name: str) -> None:
"""Docker-facing phases of the install: pull → volumes → compose up.
Called by the ``furtka app install-bg <name>`` CLI subcommand from the
systemd-run spawned by the API. Assumes the API has already run
``installer.install_from()``, so the app folder, .env, and manifest
are on disk at ``apps_dir() / <name>``.
Every phase transition is written to the state file for the UI to
poll. On exception the state flips to ``"error"`` with the message,
then the exception is re-raised so the CLI exits non-zero and
journald has a traceback.
"""
with acquire_lock():
target = apps_dir() / name
manifest = load_manifest(target / "manifest.json", expected_name=name)
try:
write_state("pulling_image", app=name)
dockerops.compose_pull(target, name)
write_state("creating_volumes", app=name)
for short in manifest.volumes:
dockerops.ensure_volume(manifest.volume_name(short))
write_state("starting_container", app=name)
dockerops.compose_up(target, name)
write_state("done", app=name, version=manifest.version)
except Exception as e:
write_state("error", app=name, error=str(e))
raise
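The ``acquire_lock`` pattern used here (and in ``furtka.catalog``) can be demonstrated with stdlib ``fcntl`` alone; two opens of the same lock file conflict even within one process, which makes the "already in progress" behaviour testable:

```python
# Non-blocking flock guard: hold the returned handle for the lock's lifetime.
import fcntl
import tempfile
from pathlib import Path


def acquire_lock(path: Path):
    path.parent.mkdir(parents=True, exist_ok=True)
    fh = path.open("w")
    try:
        fcntl.flock(fh, fcntl.LOCK_EX | fcntl.LOCK_NB)
    except BlockingIOError:
        fh.close()
        raise RuntimeError("another install is already in progress")
    return fh  # closing this handle releases the lock


lock = Path(tempfile.mkdtemp()) / "install.lock"
holder = acquire_lock(lock)
try:
    acquire_lock(lock)
except RuntimeError as e:
    print(e)  # another install is already in progress
holder.close()
acquire_lock(lock).close()  # free again after release
```

flock locks belong to the open file description, so they release automatically if the holding process dies, and a tmpfs lock path means a reboot can never leave a stale lock behind.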


@@ -1,8 +1,9 @@
import shutil
from pathlib import Path
from furtka import sources
from furtka.manifest import Manifest, ManifestError, load_manifest
from furtka.paths import apps_dir
# Values that an app's .env.example may use as obvious "fill me in" markers.
# If any of these reach the live .env, install refuses — otherwise we'd ship
@@ -10,6 +11,25 @@ from furtka.paths import apps_dir, bundled_apps_dir
# default that ends up screenshotted on Hacker News.
PLACEHOLDER_SECRETS: frozenset[str] = frozenset({"changeme"})
# System paths that must never be accepted as a user-supplied `path`-type
# setting. The user is root on their own box, so this is about preventing
# accidental footguns (typing `/etc` when they meant `/mnt/etc`), not
# defending against an attacker. Matches exact paths and their subtrees
# after `Path.resolve()` — so `/mnt/../etc` also lands here.
DENIED_PATH_PREFIXES: tuple[str, ...] = (
"/etc",
"/root",
"/boot",
"/proc",
"/sys",
"/dev",
"/bin",
"/sbin",
"/usr/bin",
"/usr/sbin",
"/var/lib/furtka",
)
class InstallError(RuntimeError):
pass
@@ -30,6 +50,53 @@ def _placeholder_keys(env_path: Path) -> list[str]:
return bad
def _is_denied_system_path(resolved: str) -> bool:
if resolved == "/":
return True
for bad in DENIED_PATH_PREFIXES:
if resolved == bad or resolved.startswith(bad + "/"):
return True
return False
def _path_setting_errors(m: Manifest, env_path: Path) -> list[str]:
"""Validate the filesystem paths named by `path`-type settings.
Returns one human-readable message per offending setting. Empty values
on non-required settings are allowed the required-field check in the
caller already refuses blanks on required fields before write.
"""
if not env_path.exists():
return []
values = _read_env(env_path)
errors: list[str] = []
for s in m.settings:
if s.type != "path":
continue
value = values.get(s.name, "")
if not value:
continue
p = Path(value)
if not p.is_absolute():
errors.append(f"{s.name}={value!r} must be an absolute path (start with /)")
continue
try:
resolved = p.resolve(strict=False)
except (OSError, RuntimeError) as e:
errors.append(f"{s.name}={value!r} cannot be resolved: {e}")
continue
if _is_denied_system_path(str(resolved)):
errors.append(f"{s.name}={value!r} resolves into a system path and is not allowed")
continue
if not resolved.exists():
errors.append(f"{s.name}={value!r} does not exist on this box")
continue
if not resolved.is_dir():
errors.append(f"{s.name}={value!r} is not a directory")
continue
return errors
def _format_env_value(v: str) -> str:
# Quote values that contain whitespace, quotes, or shell metacharacters so
# docker-compose's env substitution reads them back intact. Simple values
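A rough sketch of the quoting rule that comment describes — the real `_format_env_value` body is elided from this hunk, so the exact allow-list of "simple" characters is an assumption:

```python
import re

# Conservative allow-list: anything outside it gets quoted.
_SIMPLE_VALUE = re.compile(r"^[A-Za-z0-9_./:@+-]*$")

def format_env_value(v: str) -> str:
    if _SIMPLE_VALUE.match(v):
        return v
    # Escape backslashes and double quotes, then wrap in double quotes so
    # docker-compose's env substitution reads the value back intact.
    escaped = v.replace("\\", "\\\\").replace('"', '\\"')
    return f'"{escaped}"'
```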
@ -58,17 +125,18 @@ def resolve_source(source: str) -> Path:
"""Resolve a `furtka app install <source>` arg to a real source folder.
If `source` looks like a path (or exists on disk), use it. Otherwise treat
-it as a bundled app name and look up under /opt/furtka/apps/<name>.
+it as an app name and look it up via `furtka.sources.resolve_app_name`
+which checks the synced catalog first and falls back to the bundled seed.
"""
p = Path(source)
if p.is_dir():
return p
if "/" in source or source.startswith("."):
raise InstallError(f"{source!r} is not a directory")
-bundled = bundled_apps_dir() / source
-if bundled.is_dir():
-return bundled
-raise InstallError(f"{source!r} not found as a path or bundled app")
+resolved = sources.resolve_app_name(source)
+if resolved is None:
+raise InstallError(f"{source!r} not found as a path, catalog app, or bundled app")
+return resolved.path
def install_from(src: Path, settings: dict[str, str] | None = None) -> Path:
@ -158,6 +226,10 @@ def install_from(src: Path, settings: dict[str, str] | None = None) -> Path:
f"file and re-run `furtka app install {m.name}`."
)
path_errors = _path_setting_errors(m, env)
if path_errors:
raise InstallError(f"{m.name}: {'; '.join(path_errors)}")
return target
@ -229,6 +301,9 @@ def update_env(name: str, settings: dict[str, str]) -> Path:
bad = _placeholder_keys(env)
if bad:
raise InstallError(f"{m.name}: {env} still has placeholder values for {', '.join(bad)}.")
path_errors = _path_setting_errors(m, env)
if path_errors:
raise InstallError(f"{m.name}: {'; '.join(path_errors)}")
return target


@ -13,7 +13,7 @@ REQUIRED_FIELDS = (
"icon",
)
-VALID_SETTING_TYPES = frozenset({"text", "password", "number"})
+VALID_SETTING_TYPES = frozenset({"text", "password", "number", "path"})
SETTING_NAME_RE = re.compile(r"^[A-Z_][A-Z0-9_]*$")
@ -42,6 +42,12 @@ class Manifest:
icon: str
description_long: str = ""
settings: tuple[Setting, ...] = field(default_factory=tuple)
# Optional "Open" link for the landing page + installed-app row.
# `{host}` is substituted with the current browser hostname at render
# time so the URL follows whatever the user typed to reach Furtka —
# furtka.local, a raw IP, a future reverse-proxy hostname. Apps with
# no frontend (CLI-only, background workers) leave this empty.
open_url: str = ""
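Since `{host}` is substituted at render time, a consumer might do something like the following sketch (the function name here is hypothetical; only the `{host}` placeholder contract comes from the comment above):

```python
def render_open_url(open_url: str, browser_host: str) -> str:
    # Follow whatever hostname the user reached Furtka with:
    # furtka.local, a raw IP, or a future reverse-proxy name.
    return open_url.replace("{host}", browser_host)
```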
def volume_name(self, short: str) -> str:
# Namespace volume names so two apps can each declare e.g. "data"
@ -127,6 +133,10 @@ def load_manifest(path: Path, expected_name: str | None = None) -> Manifest:
settings = _parse_settings(raw.get("settings"), path)
open_url_raw = raw.get("open_url", "")
if not isinstance(open_url_raw, str):
raise ManifestError(f"{path}: open_url must be a string if set")
return Manifest(
name=name,
display_name=str(raw["display_name"]),
@ -137,4 +147,5 @@ def load_manifest(path: Path, expected_name: str | None = None) -> Manifest:
icon=str(raw["icon"]),
description_long=str(raw.get("description_long", "")),
settings=settings,
open_url=open_url_raw,
)

furtka/passwd.py (new file, +95)

@ -0,0 +1,95 @@
"""Stdlib-only password hashing, compatible with werkzeug's hash format.
Why this exists: 26.11-alpha introduced auth via ``werkzeug.security``,
but the target system doesn't have ``werkzeug`` installed (Core runs as
system Python with only the stdlib pyproject.toml's ``flask>=3.0``
dep is never pip-installed on the box). Fresh installs from a 26.11 /
26.12 ISO crashed on import; upgrades from pre-auth versions were
double-broken by that plus a too-strict updater health check.
Fix: replace werkzeug with stdlib equivalents using the same hash
**format** so existing ``users.json`` files created by 26.11 / 26.12 on
the rare boxes that happened to have werkzeug installed (Medion, .196
after manual pacman) still verify.
Format: ``<method>$<salt>$<hex digest>``
- ``pbkdf2:<hash>:<iterations>``: what we generate by default here
- ``scrypt:<N>:<r>:<p>``: what werkzeug's default produces
Both are implemented via ``hashlib`` which has been stdlib since 3.6.
"""
from __future__ import annotations
import hashlib
import hmac
import secrets
_PBKDF2_HASH = "sha256"
_PBKDF2_ITERATIONS = 600_000
_SALT_LEN = 16
def hash_password(password: str) -> str:
"""Return a ``pbkdf2:sha256:<iter>$<salt>$<hex>`` hash of *password*.
PBKDF2-SHA256 over UTF-8. 600k iterations, the same as werkzeug's
default in the 3.x series, roughly OWASP 2023's recommendation.
"""
if not isinstance(password, str):
raise TypeError("password must be str")
salt = secrets.token_urlsafe(_SALT_LEN)[:_SALT_LEN]
dk = hashlib.pbkdf2_hmac(
_PBKDF2_HASH, password.encode("utf-8"), salt.encode("utf-8"), _PBKDF2_ITERATIONS
)
return f"pbkdf2:{_PBKDF2_HASH}:{_PBKDF2_ITERATIONS}${salt}${dk.hex()}"
def verify_password(password: str, hashed: str) -> bool:
"""Constant-time verify *password* against a stored *hashed* value.
Accepts both our own pbkdf2 hashes and legacy werkzeug scrypt
hashes in ``scrypt:N:r:p$salt$hex`` form so users.json files
written by 26.11 / 26.12 keep working after upgrade.
"""
if not isinstance(password, str) or not isinstance(hashed, str):
return False
try:
method, salt, expected = hashed.split("$", 2)
except ValueError:
return False
parts = method.split(":")
if not parts:
return False
algo = parts[0]
pw_bytes = password.encode("utf-8")
salt_bytes = salt.encode("utf-8")
try:
if algo == "pbkdf2":
if len(parts) < 3:
return False
inner_hash = parts[1]
iterations = int(parts[2])
dk = hashlib.pbkdf2_hmac(inner_hash, pw_bytes, salt_bytes, iterations)
elif algo == "scrypt":
# werkzeug: scrypt:N:r:p, dklen=64, maxmem=132 MiB. Without
# the explicit maxmem we'd hit OpenSSL's default memory cap
# and throw ValueError on N >= 32768.
if len(parts) < 4:
return False
n = int(parts[1])
r = int(parts[2])
p = int(parts[3])
dk = hashlib.scrypt(
pw_bytes,
salt=salt_bytes,
n=n,
r=r,
p=p,
dklen=64,
maxmem=132 * 1024 * 1024,
)
else:
return False
except (ValueError, TypeError, OverflowError):
return False
return hmac.compare_digest(dk.hex(), expected)
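A self-contained round-trip in the same ``<method>$<salt>$<hex>`` shape, for readers who want to poke at the format (iteration count lowered here purely to keep the demo fast; production uses 600k):

```python
import hashlib
import hmac
import secrets

def demo_hash(password: str, iterations: int = 1_000) -> str:
    # Mirror the pbkdf2:<hash>:<iterations>$<salt>$<hexdigest> layout.
    salt = secrets.token_urlsafe(16)[:16]
    dk = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt.encode("utf-8"), iterations)
    return f"pbkdf2:sha256:{iterations}${salt}${dk.hex()}"

def demo_verify(password: str, hashed: str) -> bool:
    method, salt, expected = hashed.split("$", 2)
    _, inner_hash, iterations = method.split(":")
    dk = hashlib.pbkdf2_hmac(inner_hash, password.encode("utf-8"), salt.encode("utf-8"), int(iterations))
    # compare_digest avoids leaking how many leading hex chars matched.
    return hmac.compare_digest(dk.hex(), expected)
```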


@ -7,6 +7,19 @@ DEFAULT_APPS_DIR = Path("/var/lib/furtka/apps")
# symlink. A flat /opt/furtka/apps path would break the Phase-2 self-update
# flow (symlink swap wouldn't move the bundled-app tree along with the code).
DEFAULT_BUNDLED_APPS_DIR = Path("/opt/furtka/current/apps")
# Catalog apps come from `furtka catalog sync` pulling the daniel/furtka-apps
# release tarball. Lives under /var/lib/furtka/ so it survives core self-
# updates — the resolver (furtka.sources) prefers it over the bundled seed.
DEFAULT_CATALOG_DIR = Path("/var/lib/furtka/catalog")
# Users / auth state. One JSON file keyed by role — today only "admin" exists.
# Lives under /var/lib/furtka/ so self-updates don't stomp it. Mode 0600 is
# enforced by furtka.auth.save_users (same atomic-write pattern as the app
# .env files).
DEFAULT_USERS_FILE = Path("/var/lib/furtka/users.json")
# Static-web asset dir served by the Python handler for / and
# /settings* so those pages pick up the auth-guard. Caddy also serves
# /style.css and other assets directly from here for the login page.
DEFAULT_STATIC_WWW = Path("/opt/furtka/current/assets/www")
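The atomic-write plus mode-0600 pattern the ``users.json`` comment refers to can be sketched like this (the real ``furtka.auth.save_users`` is not shown in this diff, so details here are assumptions):

```python
import os
import tempfile
from pathlib import Path

def atomic_write_0600(path: Path, data: str) -> None:
    # Write a sibling temp file, tighten its mode, then rename over the
    # target: readers never observe a half-written users.json, and the
    # secret-bearing file is never world-readable.
    fd, tmp = tempfile.mkstemp(dir=path.parent, prefix=path.name + ".")
    try:
        with os.fdopen(fd, "w") as f:
            f.write(data)
        os.chmod(tmp, 0o600)
        os.replace(tmp, path)  # atomic on POSIX within one filesystem
    except BaseException:
        os.unlink(tmp)
        raise
```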
def apps_dir() -> Path:
@ -15,3 +28,19 @@ def apps_dir() -> Path:
def bundled_apps_dir() -> Path:
return Path(os.environ.get("FURTKA_BUNDLED_APPS_DIR", DEFAULT_BUNDLED_APPS_DIR))
def catalog_dir() -> Path:
return Path(os.environ.get("FURTKA_CATALOG_DIR", DEFAULT_CATALOG_DIR))
def catalog_apps_dir() -> Path:
return catalog_dir() / "apps"
def users_file() -> Path:
return Path(os.environ.get("FURTKA_USERS_FILE", DEFAULT_USERS_FILE))
def static_www_dir() -> Path:
return Path(os.environ.get("FURTKA_STATIC_WWW", DEFAULT_STATIC_WWW))

furtka/sources.py (new file, +75)

@ -0,0 +1,75 @@
"""Single lookup layer for "where does app <name> live right now?".
Three origins an app folder can come from:
- ``catalog``: the daily-synced ``/var/lib/furtka/catalog/apps/`` tree
  that ``furtka.catalog.sync_catalog`` maintains.
- ``bundled``: the seed ``/opt/furtka/current/apps/`` tree shipped
  inside the core release tarball. Used for first-boot before any
  catalog sync has run, and as the fallback when the catalog is stale,
  missing, or doesn't know about this app.
- ``local``: an explicit directory path passed to ``furtka app install
  /path/to/src``; bypasses this module entirely.
Catalog wins on collision. The precedence is deliberate: when the user
pressed "Sync apps catalog" they want what they synced, not whatever the
core tarball happened to carry.
"""
from __future__ import annotations
from dataclasses import dataclass
from pathlib import Path
from furtka.paths import bundled_apps_dir, catalog_apps_dir
@dataclass(frozen=True)
class AppSource:
path: Path
origin: str # "catalog" | "bundled" | "local"
def resolve_app_name(name: str) -> AppSource | None:
"""Return the source folder for a bundled/catalog app name.
Checks catalog first, then bundled seed. Presence is tested by
``manifest.json`` existing; an empty folder or a stray ``.env``
won't register. Returns ``None`` if the name isn't known anywhere.
"""
cat = catalog_apps_dir() / name
if (cat / "manifest.json").is_file():
return AppSource(cat, "catalog")
bundled = bundled_apps_dir() / name
if (bundled / "manifest.json").is_file():
return AppSource(bundled, "bundled")
return None
def list_available() -> list[AppSource]:
"""Catalog merged with bundled; catalog wins on name collision.
Each entry is a folder containing a manifest.json. Ordering is
alphabetical by folder name, which matches how the scanner sorts so
the UI list stays stable across sync/reboot.
"""
seen: dict[str, AppSource] = {}
cat_root = catalog_apps_dir()
if cat_root.is_dir():
for entry in sorted(cat_root.iterdir()):
if not entry.is_dir():
continue
if not (entry / "manifest.json").is_file():
continue
seen[entry.name] = AppSource(entry, "catalog")
bundled_root = bundled_apps_dir()
if bundled_root.is_dir():
for entry in sorted(bundled_root.iterdir()):
if not entry.is_dir():
continue
if entry.name in seen:
continue
if not (entry / "manifest.json").is_file():
continue
seen[entry.name] = AppSource(entry, "bundled")
return [seen[name] for name in sorted(seen)]


@ -29,18 +29,18 @@ the updater at a tmpdir.
from __future__ import annotations
import fcntl
-import hashlib
import json
import os
import shutil
import subprocess
-import tarfile
import time
import urllib.error
import urllib.request
from dataclasses import dataclass
from pathlib import Path
+from furtka import _release_common as _rc
FORGEJO_HOST = os.environ.get("FURTKA_FORGEJO_HOST", "forgejo.sourcegate.online")
FORGEJO_REPO = os.environ.get("FURTKA_FORGEJO_REPO", "daniel/furtka")
_FURTKA_ROOT = Path(os.environ.get("FURTKA_ROOT", "/opt/furtka"))
@ -49,6 +49,9 @@ _CADDYFILE_LIVE = Path(os.environ.get("FURTKA_CADDYFILE_PATH", "/etc/caddy/Caddy
_CADDY_SNIPPET_DIR = Path(
os.environ.get("FURTKA_CADDY_SNIPPET_DIR", str(_CADDYFILE_LIVE.parent / "furtka.d"))
)
_CADDY_HTTPS_SNIPPET_DIR = Path(
os.environ.get("FURTKA_CADDY_HTTPS_SNIPPET_DIR", str(_CADDYFILE_LIVE.parent / "furtka-https.d"))
)
_SYSTEMD_DIR = Path(os.environ.get("FURTKA_SYSTEMD_DIR", "/etc/systemd/system"))
_HOSTNAME_FILE = Path(os.environ.get("FURTKA_HOSTNAME_FILE", "/etc/hostname"))
_CADDYFILE_HOSTNAME_MARKER = "__FURTKA_HOSTNAME__"
@ -95,37 +98,11 @@ def read_current_version() -> str:
return "dev"
-def _forgejo_api(path: str) -> dict:
-url = f"https://{FORGEJO_HOST}/api/v1/repos/{FORGEJO_REPO}{path}"
-req = urllib.request.Request(url, headers={"Accept": "application/json"})
-try:
-with urllib.request.urlopen(req, timeout=15) as resp:
-return json.loads(resp.read())
-except (urllib.error.URLError, json.JSONDecodeError) as e:
-raise UpdateError(f"forgejo api {url}: {e}") from e
+def _forgejo_api(path: str) -> dict | list:
+return _rc.forgejo_api(FORGEJO_HOST, FORGEJO_REPO, path, error_cls=UpdateError)
-def _version_tuple(v: str) -> tuple:
-"""Compare CalVer tags like 26.1-alpha < 26.1-beta < 26.1 < 26.2-alpha.
-The "stable" release (no suffix) sorts after its own pre-releases. Uses a
-tuple of (year, release, stage-rank, stage-tag). Stage rank: alpha=0,
-beta=1, rc=2, stable=3, unknown=-1.
-"""
-stage_rank = {"alpha": 0, "beta": 1, "rc": 2}
-head, _, suffix = v.partition("-")
-try:
-year_str, release_str = head.split(".", 1)
-year = int(year_str)
-release = int(release_str)
-except (ValueError, IndexError):
-return (-1, -1, -1, v)
-if not suffix:
-return (year, release, 3, "")
-for name, rank in stage_rank.items():
-if suffix.startswith(name):
-return (year, release, rank, suffix)
-return (year, release, -1, suffix)
+_version_tuple = _rc.version_tuple
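The ordering contract of that helper (the logic now lives in `_release_common.version_tuple`) can be demonstrated with the same tuple scheme:

```python
def version_tuple(v: str) -> tuple:
    # (year, release, stage-rank, stage-tag); the stable release (no
    # suffix) sorts after its own pre-releases: alpha=0, beta=1, rc=2,
    # stable=3, unknown=-1.
    stage_rank = {"alpha": 0, "beta": 1, "rc": 2}
    head, _, suffix = v.partition("-")
    try:
        year_s, release_s = head.split(".", 1)
        year, release = int(year_s), int(release_s)
    except ValueError:
        return (-1, -1, -1, v)
    if not suffix:
        return (year, release, 3, "")
    for name, rank in stage_rank.items():
        if suffix.startswith(name):
            return (year, release, rank, suffix)
    return (year, release, -1, suffix)
```

Numeric tuple comparison also handles the two-digit release numbers that a plain string compare would get wrong (`"26.15" < "26.5"` as strings, but 15 > 5 here).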
def check_update() -> UpdateCheck:
@ -165,57 +142,22 @@ def check_update() -> UpdateCheck:
def _download(url: str, dest: Path) -> None:
-dest.parent.mkdir(parents=True, exist_ok=True)
-req = urllib.request.Request(url)
-try:
-with urllib.request.urlopen(req, timeout=60) as resp, dest.open("wb") as f:
-shutil.copyfileobj(resp, f)
-except urllib.error.URLError as e:
-raise UpdateError(f"download {url}: {e}") from e
+_rc.download(url, dest, error_cls=UpdateError)
-def _sha256_of(path: Path) -> str:
-h = hashlib.sha256()
-with path.open("rb") as f:
-for chunk in iter(lambda: f.read(1024 * 1024), b""):
-h.update(chunk)
-return h.hexdigest()
+_sha256_of = _rc.sha256_of
def verify_tarball(tarball: Path, expected_sha: str) -> None:
-actual = _sha256_of(tarball)
-if actual != expected_sha:
-raise UpdateError(f"sha256 mismatch: expected {expected_sha}, got {actual}")
+_rc.verify_tarball(tarball, expected_sha, error_cls=UpdateError)
def _parse_sha256_sidecar(text: str) -> str:
-"""Extract the hash from a standard `sha256sum` sidecar line."""
-line = text.strip().split("\n", 1)[0].strip()
-if not line:
-raise UpdateError("empty sha256 sidecar")
-return line.split()[0]
+return _rc.parse_sha256_sidecar(text, error_cls=UpdateError)
def _extract_tarball(tarball: Path, dest: Path) -> str:
-"""Extract the tarball and return the VERSION read from its root."""
-dest.mkdir(parents=True, exist_ok=True)
-with tarfile.open(tarball, "r:gz") as tf:
-# defensive: refuse entries that would escape dest
-for member in tf.getmembers():
-if member.name.startswith(("/", "..")) or ".." in Path(member.name).parts:
-raise UpdateError(f"refusing tarball entry {member.name!r}")
-# Python 3.12+ grew a stricter default filter; opt into it where
-# available to catch symlink-escape / device-node / setuid tricks
-# that our regex check can't see. Older Pythons fall back to the
-# historical permissive behaviour.
-try:
-tf.extractall(dest, filter="data")
-except TypeError:
-tf.extractall(dest)
-version_file = dest / "VERSION"
-if not version_file.is_file():
-raise UpdateError("tarball has no VERSION file at root")
-return version_file.read_text().strip()
+return _rc.extract_tarball(tarball, dest, error_cls=UpdateError)
def _current_hostname() -> str:
@ -231,6 +173,24 @@ def _current_hostname() -> str:
return name or "furtka"
def _maybe_migrate_preserve_https() -> None:
"""26.14 → 26.15 migration: if the box already had the force-HTTPS
redirect snippet on disk, that means the user explicitly opted
into HTTPS under the old regime. Under the new opt-in regime,
HTTPS also requires a separate listener snippet; write it here so
the user's HTTPS doesn't silently break when the Caddyfile refresh
removes the default hostname block.
"""
redirect_snippet = _CADDY_SNIPPET_DIR / "redirect.caddyfile"
https_snippet = _CADDY_HTTPS_SNIPPET_DIR / "https.caddyfile"
if not redirect_snippet.is_file() or https_snippet.is_file():
return
hostname = _current_hostname()
https_snippet.write_text(
f"{hostname}.local, {hostname} {{\n\ttls internal\n\timport furtka_routes\n}}\n"
)
def _refresh_caddyfile(source: Path) -> bool:
"""Copy the shipped Caddyfile to /etc/caddy/ iff it differs. Returns True
if the file changed (so caddy needs more than a bare reload).
@ -241,10 +201,19 @@ def _refresh_caddyfile(source: Path) -> bool:
"""
if not source.is_file():
return False
-# Snippet dir for the /api/furtka/https/force toggle. Pre-HTTPS installs
-# don't have this dir; ensure it so the Caddyfile's glob import can't
-# trip an older Caddy on a missing path during the first reload.
+# Snippet dirs for the /api/furtka/https/force toggle. Pre-HTTPS
+# installs don't have them; ensure both so the Caddyfile's glob
+# imports can't trip an older Caddy on missing paths during the
+# first reload. furtka-https.d is new in 26.15-alpha — older boxes
+# upgrading across this version line won't have it on disk yet.
_CADDY_SNIPPET_DIR.mkdir(mode=0o755, parents=True, exist_ok=True)
+_CADDY_HTTPS_SNIPPET_DIR.mkdir(mode=0o755, parents=True, exist_ok=True)
# Migration: pre-26.15 Caddyfile always served :443 via tls internal,
# so a box that had the "force HTTPS" redirect toggle ON relied on
# HTTPS being there implicitly. After this Caddyfile refresh the
# hostname block is gone, so the redirect would 301 to a dead :443.
# Preserve intent by writing the HTTPS listener snippet too.
_maybe_migrate_preserve_https()
rendered = source.read_text().replace(_CADDYFILE_HOSTNAME_MARKER, _current_hostname())
if _CADDYFILE_LIVE.is_file() and rendered == _CADDYFILE_LIVE.read_text():
return False
@ -255,7 +224,15 @@ def _refresh_caddyfile(source: Path) -> bool:
def _link_new_units(unit_dir: Path) -> list[str]:
"""`systemctl link` any unit file in unit_dir that isn't already symlinked
-into /etc/systemd/system/. Returns the list of newly-linked unit names."""
+into /etc/systemd/system/. Returns the list of newly-linked unit names.
+Newly-linked `.timer` units are additionally `systemctl enable`d so that
+a self-update introducing a timer (e.g. 26.5 → 26.6 adding
+furtka-catalog-sync.timer) activates it automatically; the installer's
+enable list only applies to fresh installs. A linked-but-disabled timer
+never fires on its own, so without this step catalog sync would never
+happen on upgraded boxes.
+"""
if not unit_dir.is_dir():
return []
linked = []
@ -266,6 +243,8 @@ def _link_new_units(unit_dir: Path) -> list[str]:
if target.exists() or target.is_symlink():
continue
_run(["systemctl", "link", str(unit_file)])
if unit_file.suffix == ".timer":
_run(["systemctl", "enable", unit_file.name])
linked.append(unit_file.name)
return linked
@ -306,13 +285,35 @@ def _run(cmd: list[str]) -> None:
def _health_check(url: str, deadline_s: float = 30.0) -> bool:
"""Poll *url* until we get *any* response from the Python server.
Treats any 2xx-4xx response as "server is up". A 401 on
/api/apps after the 26.11-alpha auth-guard shipped is a perfectly
valid signal that the new code imported + the socket is listening;
rejecting the request is still "alive". Only 5xx or connection-
level failures count as unhealthy.
Rationale: pre-26.13 this function hit /api/apps and expected 200,
which silently broke every upgrade across the auth boundary (26.10
→ 26.11+) and auto-rolled back. Now we just need proof the new
process came up.
"""
end = time.time() + deadline_s
while time.time() < end:
try:
with urllib.request.urlopen(url, timeout=3) as resp:
-if resp.status == 200:
-return True
+# Any 2xx/3xx → alive. urllib follows redirects by
+# default, so a 302 → /login resolves to /login's 200.
+if resp.status < 500:
+return True
+except urllib.error.HTTPError as e:
+# 4xx → server is up, just refused us (auth, bad request,
+# whatever). Counts as healthy for the "did it come back"
+# check. 5xx → genuinely broken, don't accept.
+if 400 <= e.code < 500:
+return True
except urllib.error.URLError:
+# Connection refused / DNS / timeout → not up yet, retry.
pass
time.sleep(1)
return False


@ -54,7 +54,7 @@ mDNS is wired: `avahi-daemon` + `nss-mdns` come from `packages.extra`, the live
Once `archinstall` finishes and you click **Reboot now**, the VM comes up into the installed system. No more port `:5000` — the wizard ISO is gone. Instead:
- **Console**: agetty shows `Furtka is ready. Open http://<hostname>.local …` with the IP fallback underneath.
-- **Browser** at `http://<hostname>.local` (default `http://furtka.local` — the form's default hostname is `furtka`; only the live-installer ISO uses `proksi`): Caddy-served landing page with three live status tiles (uptime, Docker version, free disk) refreshed every 30 s by `furtka-status.timer`. Since 26.4-alpha, `https://<hostname>.local` is also served via Caddy's `tls internal`; trust `rootCA.crt` from `/settings` to clear browser warnings.
+- **Browser** at `http://<hostname>.local` (default `http://furtka.local` — the form's default hostname is `furtka`; only the live-installer ISO uses `proksi`): Caddy-served landing page with three live status tiles (uptime, Docker version, free disk) refreshed every 30 s by `furtka-status.timer`. HTTPS is opt-in (26.15-alpha) — flip the toggle in `/settings` to switch on Caddy's `tls internal` on `:443`, then trust `rootCA.crt` from `/settings` to clear browser warnings.
- **SSH**: `ssh <user>@<hostname>.local` works; `docker ps` works without `sudo` because the user is in the `docker` group.
This is a demo shell — no Authentik, no app store yet. The landing page lives at `/srv/furtka/www/`, served by Caddy on `:80` per `/etc/caddy/Caddyfile`. All of this is written into the target by `webinstaller/app.py`'s `_post_install_commands` via archinstall's `custom_commands`.
@ -62,5 +62,4 @@ This is a demo shell — no Authentik, no app store yet. The landing page lives
## Known rough edges
- **Disk space**: the first time you build on a fresh host, the squashfs/xorriso steps need ~15 GB free. If the host's LVM-root is smaller, `xorriso` silently dies at the very end with "Image size exceeds free space on media".
-- **Live-installer wizard is still HTTP-only**. `http://proksi.local:5000` during install has no TLS; the installed box gets Caddy + `tls internal` on `:443` once it reboots (26.4-alpha), but bringing the same story to the wizard itself is a later milestone.
+- **Live-installer wizard is still HTTP-only**. `http://proksi.local:5000` during install has no TLS; once the box reboots, Caddy can serve `tls internal` on `:443` if the user opts in via `/settings` (26.15-alpha), but bringing TLS to the wizard itself is a later milestone.
-- **Boot USB could appear as an install target on bare metal**. On a VM the ISO is a CD-ROM (filtered) and SATA is the only disk, so the picker only shows the install target. On bare metal with a USB stick, the USB is `TYPE=disk` and shows up alongside the real install drive; a user could in theory pick the USB they just booted from. Mitigating this needs detecting the boot media (via `findmnt /run/archiso/bootmnt` or similar) and filtering it out in `webinstaller/drives.py`.


@ -8,6 +8,23 @@ server {
charset utf-8;
gzip on;
gzip_vary on;
gzip_min_length 1024;
gzip_comp_level 6;
gzip_types
text/css
text/plain
text/xml
application/javascript
application/json
application/xml
application/rss+xml
application/atom+xml
image/svg+xml
font/woff
font/woff2;
location / {
try_files $uri $uri/ $uri.html =404;
}


@ -1,6 +1,6 @@
[project]
name = "furtka"
-version = "26.5-alpha"
+version = "26.15-alpha"
description = "Open-source home server OS — simple enough for everyone."
requires-python = ">=3.11"
readme = "README.md"


@ -99,4 +99,20 @@ upload_asset "$TARBALL"
upload_asset "$SHA_FILE"
upload_asset "$RELEASE_JSON"
# Optional: attach the live-installer ISO when dist/furtka-<version>.iso
# exists. Release workflows that want this build the ISO via iso/build.sh
# and move the output here before calling publish-release. Local runs
# that skip the ISO step still publish the core release successfully.
#
# Soft-fail: the ISO is ~1 GB and Forgejo's reverse proxy has returned
# 504 on the upload even when the write eventually succeeds. The core
# tarball (which boxes need for self-update) is already uploaded above,
# so don't let an ISO transport hiccup fail the whole release.
ISO="$DIST_DIR/furtka-$VERSION.iso"
if [ -f "$ISO" ]; then
if ! upload_asset "$ISO"; then
echo "warning: ISO upload failed — release published without ISO asset" >&2
fi
fi
echo "Release $VERSION published: https://$HOST/$REPO/releases/tag/$VERSION"


@ -5,7 +5,7 @@ import urllib.request
import pytest
-from furtka import api, dockerops
+from furtka import api, auth, dockerops
VALID_MANIFEST = {
"name": "fileshare",
@ -22,13 +22,48 @@ VALID_MANIFEST = {
def fake_dirs(tmp_path, monkeypatch):
apps = tmp_path / "apps"
bundled = tmp_path / "bundled"
catalog = tmp_path / "catalog"
users_file = tmp_path / "users.json"
static_www = tmp_path / "www"
apps.mkdir()
bundled.mkdir()
static_www.mkdir()
(static_www / "index.html").write_text("<html>landing page</html>")
(static_www / "settings").mkdir()
(static_www / "settings" / "index.html").write_text("<html>settings page</html>")
monkeypatch.setenv("FURTKA_APPS_DIR", str(apps))
monkeypatch.setenv("FURTKA_BUNDLED_APPS_DIR", str(bundled))
monkeypatch.setenv("FURTKA_CATALOG_DIR", str(catalog))
monkeypatch.setenv("FURTKA_USERS_FILE", str(users_file))
monkeypatch.setenv("FURTKA_STATIC_WWW", str(static_www))
# install_runner writes to /var/lib/furtka/install-state.json and
# /run/furtka/install.lock by default — redirect into tmp_path so
# test code doesn't need root.
monkeypatch.setenv("FURTKA_INSTALL_STATE", str(tmp_path / "install-state.json"))
monkeypatch.setenv("FURTKA_INSTALL_LOCK", str(tmp_path / "install.lock"))
# install_runner caches env vars at import time, so reload it to
# pick up the tmp-path env vars this fixture just set.
import importlib
from furtka import install_runner
importlib.reload(install_runner)
# Scrub any sessions or lockout counters that leaked from a prior
# test — both stores are module-level.
auth.SESSIONS.clear()
auth.LOCKOUT.clear_all()
return apps, bundled return apps, bundled
@pytest.fixture
def admin_session(fake_dirs):
"""Pre-create an admin account + live session. Returns a Cookie header
value ready to drop into urllib.request.Request(headers=...)."""
auth.create_admin("daniel", "hunter2-pw")
session = auth.SESSIONS.create("daniel")
return f"{auth.COOKIE_NAME}={session.token}"
@pytest.fixture @pytest.fixture
def no_docker(monkeypatch): def no_docker(monkeypatch):
"""Stub docker calls so install/remove can run without a daemon.""" """Stub docker calls so install/remove can run without a daemon."""
@ -37,6 +72,29 @@ def no_docker(monkeypatch):
monkeypatch.setattr(dockerops, "compose_down", lambda app_dir, project: None) monkeypatch.setattr(dockerops, "compose_down", lambda app_dir, project: None)
@pytest.fixture
def no_systemd_run(monkeypatch):
    """Stub the systemd-run dispatch in _do_install so tests don't need it.

    The install endpoint now spawns a background systemd-run unit to do
    the docker-facing phases. Tests that exercise the install path only
    care that the sync pre-phase succeeded and the dispatch was
    attempted with the right args; they shouldn't actually fire up
    systemd. subprocess.run gets monkeypatched to return a fake success
    CompletedProcess, and the call args get captured for assertions.
    """
    import subprocess

    calls = []

    def fake_run(cmd, check=False, capture_output=False, text=False, **kwargs):
        calls.append(cmd)
        return subprocess.CompletedProcess(cmd, 0, stdout="", stderr="")

    monkeypatch.setattr(subprocess, "run", fake_run)
    return calls
def _write_bundled(bundled, name, manifest=None, env_example=None):
    app = bundled / name
    app.mkdir()
@@ -51,17 +109,19 @@ def test_list_installed_empty(fake_dirs):
    assert api._list_installed() == []


def test_list_available_empty(fake_dirs):
    assert api._list_available() == []


def test_list_available_shows_uninstalled(fake_dirs):
    _, bundled = fake_dirs
    _write_bundled(bundled, "fileshare")
    out = api._list_available()
    assert len(out) == 1
    assert out[0]["name"] == "fileshare"
    assert "display_name" in out[0]
    # Source field lets the UI later distinguish catalog from bundled seed.
    assert out[0]["source"] == "bundled"


# --- Icon inlining ----------------------------------------------------------
@@ -119,15 +179,15 @@ def test_read_icon_svg_rejects_javascript_url(tmp_path):
    assert api._read_icon_svg(tmp_path, "icon.svg") is None


def test_list_available_inlines_icon_svg(fake_dirs):
    _, bundled = fake_dirs
    app = _write_bundled(bundled, "fileshare")
    _write_icon(app, _SIMPLE_SVG)
    [entry] = api._list_available()
    assert entry["icon_svg"] == _SIMPLE_SVG


def test_list_installed_inlines_icon_svg(fake_dirs, no_docker, no_systemd_run):
    apps, bundled = fake_dirs
    app = _write_bundled(bundled, "fileshare", env_example="A=real")
    _write_icon(app, _SIMPLE_SVG)
@@ -136,18 +196,38 @@ def test_list_installed_inlines_icon_svg(fake_dirs, no_docker):
    assert entry["icon_svg"] == _SIMPLE_SVG


def test_list_available_hides_already_installed(fake_dirs, no_docker, no_systemd_run):
    apps, bundled = fake_dirs
    _write_bundled(bundled, "fileshare", env_example="A=real")
    status, _ = api._do_install("fileshare")
    assert status == 202  # async dispatch
    # Now bundled should NOT include fileshare anymore — the app folder
    # exists on disk (install_from finished synchronously before the
    # dispatch), which is what _list_available uses for the "installed"
    # check.
    assert api._list_available() == []
    # But installed list should.
    installed = api._list_installed()
    assert len(installed) == 1 and installed[0]["name"] == "fileshare"


def test_list_available_prefers_catalog_over_bundled(fake_dirs):
    _, bundled = fake_dirs
    catalog_root = bundled.parent / "catalog" / "apps"
    catalog_root.mkdir(parents=True)
    _write_bundled(bundled, "fileshare")
    # The same fileshare in the catalog as well — manifest version 0.2.0 to
    # tell the two copies apart.
    catalog_manifest = dict(VALID_MANIFEST, version="0.2.0")
    cat_app = catalog_root / "fileshare"
    cat_app.mkdir()
    (cat_app / "manifest.json").write_text(json.dumps(catalog_manifest))
    out = api._list_available()
    assert len(out) == 1
    assert out[0]["source"] == "catalog"
    assert out[0]["version"] == "0.2.0"
def test_install_endpoint_rejects_placeholder(fake_dirs):
    _, bundled = fake_dirs
    _write_bundled(bundled, "fileshare", env_example="SMB_PASSWORD=changeme")
@@ -167,7 +247,7 @@ def test_remove_endpoint_unknown(fake_dirs, no_docker):
    assert status == 404


def test_remove_endpoint_happy_path(fake_dirs, no_docker, no_systemd_run):
    apps, bundled = fake_dirs
    _write_bundled(bundled, "fileshare", env_example="A=real")
    api._do_install("fileshare")
@@ -178,23 +258,39 @@ def test_remove_endpoint_happy_path(fake_dirs, no_docker):
    assert not (apps / "fileshare").exists()
def _request(port, path, cookie=None, method="GET", body=None):
    headers = {}
    if cookie is not None:
        headers["Cookie"] = cookie
    data = None
    if body is not None:
        headers["Content-Type"] = "application/json"
        data = json.dumps(body).encode()
    return urllib.request.Request(
        f"http://127.0.0.1:{port}{path}",
        data=data,
        headers=headers,
        method=method,
    )


def test_http_get_apps_route(fake_dirs, no_docker, admin_session):
    """Smoke test the actual HTTP server with a real socket, urllib client."""
    server = api.HTTPServer(("127.0.0.1", 0), api._Handler)  # port 0 → ephemeral
    port = server.server_address[1]
    t = threading.Thread(target=server.serve_forever, daemon=True)
    t.start()
    try:
        with urllib.request.urlopen(_request(port, "/api/apps", cookie=admin_session)) as r:
            assert r.status == 200
            data = json.loads(r.read())
            assert data == []
        with urllib.request.urlopen(_request(port, "/apps", cookie=admin_session)) as r:
            assert r.status == 200
            assert b"Furtka Apps" in r.read()
        # Unknown route → 404 JSON.
        try:
            urllib.request.urlopen(_request(port, "/api/nope", cookie=admin_session))
            raise AssertionError("expected 404")
        except urllib.error.HTTPError as e:
            assert e.code == 404
@@ -203,17 +299,18 @@ def test_http_get_apps_route(fake_dirs, no_docker):
        server.server_close()


def test_http_post_install_unknown_app(fake_dirs, admin_session):
    server = api.HTTPServer(("127.0.0.1", 0), api._Handler)
    port = server.server_address[1]
    t = threading.Thread(target=server.serve_forever, daemon=True)
    t.start()
    try:
        req = _request(
            port,
            "/api/apps/install",
            cookie=admin_session,
            method="POST",
            body={"name": "ghost"},
        )
        try:
            urllib.request.urlopen(req)
@@ -227,6 +324,447 @@ def test_http_post_install_unknown_app(fake_dirs):
        server.server_close()
# --- Auth guard + login flow ------------------------------------------------


def _start_server():
    server = api.HTTPServer(("127.0.0.1", 0), api._Handler)
    port = server.server_address[1]
    t = threading.Thread(target=server.serve_forever, daemon=True)
    t.start()
    return server, port


def test_unauthenticated_api_returns_401(fake_dirs):
    # No admin_session fixture → no cookie on the request.
    server, port = _start_server()
    try:
        try:
            urllib.request.urlopen(_request(port, "/api/apps"))
            raise AssertionError("expected 401")
        except urllib.error.HTTPError as e:
            assert e.code == 401
            body = json.loads(e.read())
            assert body["error"] == "not authenticated"
    finally:
        server.shutdown()
        server.server_close()
def test_unauthenticated_html_redirects_to_login(fake_dirs):
    server, port = _start_server()
    try:
        # Disable redirect following so we can inspect the 302.
        opener = urllib.request.build_opener(_NoRedirectHandler())
        try:
            opener.open(_request(port, "/apps"))
            raise AssertionError("expected 302")
        except urllib.error.HTTPError as e:
            assert e.code == 302
            assert e.headers["Location"] == "/login"
    finally:
        server.shutdown()
        server.server_close()


class _NoRedirectHandler(urllib.request.HTTPRedirectHandler):
    def redirect_request(self, *args, **kwargs):
        return None
def test_unauth_root_redirects_to_login(fake_dirs):
    """/ was previously Caddy-direct static HTML, bypassing auth. Now
    Python serves it and the auth-guard applies: an unauth visitor gets
    bounced to /login just like /apps does."""
    server, port = _start_server()
    try:
        opener = urllib.request.build_opener(_NoRedirectHandler())
        try:
            opener.open(_request(port, "/"))
            raise AssertionError("expected 302")
        except urllib.error.HTTPError as e:
            assert e.code == 302
            assert e.headers["Location"] == "/login"
    finally:
        server.shutdown()
        server.server_close()
def test_unauth_settings_redirects_to_login(fake_dirs):
    server, port = _start_server()
    try:
        opener = urllib.request.build_opener(_NoRedirectHandler())
        for path in ("/settings", "/settings/"):
            try:
                opener.open(_request(port, path))
                raise AssertionError(f"expected 302 for {path}")
            except urllib.error.HTTPError as e:
                assert e.code == 302
                assert e.headers["Location"] == "/login"
    finally:
        server.shutdown()
        server.server_close()


def test_authed_root_serves_static_index(fake_dirs, admin_session):
    server, port = _start_server()
    try:
        with urllib.request.urlopen(_request(port, "/", cookie=admin_session)) as r:
            assert r.status == 200
            assert r.read() == b"<html>landing page</html>"
    finally:
        server.shutdown()
        server.server_close()


def test_authed_settings_serves_static(fake_dirs, admin_session):
    server, port = _start_server()
    try:
        for path in ("/settings", "/settings/"):
            with urllib.request.urlopen(_request(port, path, cookie=admin_session)) as r:
                assert r.status == 200
                assert r.read() == b"<html>settings page</html>"
    finally:
        server.shutdown()
        server.server_close()
def test_authed_root_does_not_serve_apps_html(fake_dirs, admin_session):
    """Regression guard: the pre-26.14 do_GET had `if self.path in ("/",
    "/apps", ...)` which served _HTML (the apps page) for / too; since
    Caddy wasn't proxying /, nobody noticed. Now that Caddy does
    proxy /, the two paths must serve different content."""
    server, port = _start_server()
    try:
        with urllib.request.urlopen(_request(port, "/", cookie=admin_session)) as r:
            root_body = r.read()
        with urllib.request.urlopen(_request(port, "/apps", cookie=admin_session)) as r:
            apps_body = r.read()
        assert root_body != apps_body
        assert b"Furtka Apps" in apps_body
        assert b"landing page" in root_body
    finally:
        server.shutdown()
        server.server_close()
def test_get_login_renders_login_form_when_admin_exists(fake_dirs):
    auth.create_admin("daniel", "hunter2-pw")
    server, port = _start_server()
    try:
        with urllib.request.urlopen(_request(port, "/login")) as r:
            html = r.read().decode()
            assert r.status == 200
            assert "Furtka login" in html
            # No setup confirm-password field rendered in login mode.
            assert 'id="password2"' not in html
            assert "Repeat password" not in html
    finally:
        server.shutdown()
        server.server_close()


def test_get_login_renders_setup_form_when_no_admin(fake_dirs):
    server, port = _start_server()
    try:
        with urllib.request.urlopen(_request(port, "/login")) as r:
            html = r.read().decode()
            assert r.status == 200
            assert "Set admin password" in html
            assert "password2" in html  # setup confirm field rendered
    finally:
        server.shutdown()
        server.server_close()
def test_get_login_redirects_when_already_authed(fake_dirs, admin_session):
    server, port = _start_server()
    try:
        opener = urllib.request.build_opener(_NoRedirectHandler())
        try:
            opener.open(_request(port, "/login", cookie=admin_session))
            raise AssertionError("expected 302")
        except urllib.error.HTTPError as e:
            assert e.code == 302
            assert e.headers["Location"] == "/apps"
    finally:
        server.shutdown()
        server.server_close()
def test_post_login_setup_creates_admin(fake_dirs):
    server, port = _start_server()
    try:
        req = _request(
            port,
            "/login",
            method="POST",
            body={
                "username": "daniel",
                "password": "a-real-password",
                "password2": "a-real-password",
            },
        )
        with urllib.request.urlopen(req) as r:
            assert r.status == 200
            set_cookie = r.headers["Set-Cookie"]
            assert auth.COOKIE_NAME in set_cookie
            assert "HttpOnly" in set_cookie
            assert "SameSite=Strict" in set_cookie
        # users.json got written.
        assert auth.load_users()["admin"]["username"] == "daniel"
        # And the password really works.
        assert auth.authenticate("daniel", "a-real-password") is True
    finally:
        server.shutdown()
        server.server_close()


def test_post_login_setup_rejects_password_mismatch(fake_dirs):
    server, port = _start_server()
    try:
        req = _request(
            port,
            "/login",
            method="POST",
            body={"username": "x", "password": "abcdefgh", "password2": "different"},
        )
        try:
            urllib.request.urlopen(req)
            raise AssertionError("expected 400")
        except urllib.error.HTTPError as e:
            assert e.code == 400
            body = json.loads(e.read())
            assert "match" in body["error"].lower()
        # No admin created.
        assert auth.setup_needed() is True
    finally:
        server.shutdown()
        server.server_close()
def test_post_login_setup_rejects_short_password(fake_dirs):
    server, port = _start_server()
    try:
        req = _request(
            port,
            "/login",
            method="POST",
            body={"username": "x", "password": "short", "password2": "short"},
        )
        try:
            urllib.request.urlopen(req)
            raise AssertionError("expected 400")
        except urllib.error.HTTPError as e:
            assert e.code == 400
    finally:
        server.shutdown()
        server.server_close()


def test_post_login_success_with_correct_credentials(fake_dirs):
    auth.create_admin("daniel", "hunter2-pw")
    server, port = _start_server()
    try:
        req = _request(
            port,
            "/login",
            method="POST",
            body={"username": "daniel", "password": "hunter2-pw"},
        )
        with urllib.request.urlopen(req) as r:
            assert r.status == 200
            set_cookie = r.headers["Set-Cookie"]
            assert auth.COOKIE_NAME in set_cookie
    finally:
        server.shutdown()
        server.server_close()
def test_post_login_rejects_wrong_password(fake_dirs):
    auth.create_admin("daniel", "hunter2-pw")
    server, port = _start_server()
    try:
        req = _request(
            port,
            "/login",
            method="POST",
            body={"username": "daniel", "password": "nope"},
        )
        try:
            urllib.request.urlopen(req)
            raise AssertionError("expected 401")
        except urllib.error.HTTPError as e:
            assert e.code == 401
    finally:
        server.shutdown()
        server.server_close()


def _post_wrong_login(port, username="daniel", password="nope"):
    req = _request(
        port,
        "/login",
        method="POST",
        body={"username": username, "password": password},
    )
    try:
        urllib.request.urlopen(req)
        raise AssertionError("expected HTTPError")
    except urllib.error.HTTPError as e:
        return e
def test_post_login_locks_out_after_repeated_failures(fake_dirs, monkeypatch):
    auth.create_admin("daniel", "hunter2-pw")
    # Flatten the 0.5s speed-bump so the test doesn't take 5 seconds.
    monkeypatch.setattr(api.time, "sleep", lambda _s: None)
    server, port = _start_server()
    try:
        for _ in range(auth.LoginAttempts.MAX_FAILURES):
            err = _post_wrong_login(port)
            assert err.code == 401
        err = _post_wrong_login(port)
        assert err.code == 429
        assert err.headers.get("Retry-After") is not None
        assert int(err.headers["Retry-After"]) > 0
    finally:
        server.shutdown()
        server.server_close()


def test_post_login_429_masks_correctness(fake_dirs, monkeypatch):
    """Once locked, the correct password must also get 429 — no oracle."""
    auth.create_admin("daniel", "hunter2-pw")
    monkeypatch.setattr(api.time, "sleep", lambda _s: None)
    server, port = _start_server()
    try:
        for _ in range(auth.LoginAttempts.MAX_FAILURES):
            _post_wrong_login(port)
        req = _request(
            port,
            "/login",
            method="POST",
            body={"username": "daniel", "password": "hunter2-pw"},
        )
        try:
            urllib.request.urlopen(req)
            raise AssertionError("expected 429")
        except urllib.error.HTTPError as e:
            assert e.code == 429
    finally:
        server.shutdown()
        server.server_close()
def test_post_login_success_clears_lockout_counter(fake_dirs, monkeypatch):
    auth.create_admin("daniel", "hunter2-pw")
    monkeypatch.setattr(api.time, "sleep", lambda _s: None)
    server, port = _start_server()
    try:
        # Get close to the threshold, then log in successfully.
        for _ in range(auth.LoginAttempts.MAX_FAILURES - 1):
            _post_wrong_login(port)
        req = _request(
            port,
            "/login",
            method="POST",
            body={"username": "daniel", "password": "hunter2-pw"},
        )
        with urllib.request.urlopen(req) as r:
            assert r.status == 200
        # Counter must have been cleared: another run of MAX_FAILURES - 1
        # failures shouldn't trigger 429.
        for _ in range(auth.LoginAttempts.MAX_FAILURES - 1):
            err = _post_wrong_login(port)
            assert err.code == 401
    finally:
        server.shutdown()
        server.server_close()
def test_post_login_setup_not_rate_limited(fake_dirs, monkeypatch):
    """First-run setup is never auth-ed against a hash, so the lockout
    must not apply; otherwise a clumsy admin could lock themselves out
    of a box that has no admin yet."""
    monkeypatch.setattr(api.time, "sleep", lambda _s: None)
    server, port = _start_server()
    try:
        # Many mismatched setup submissions (400s) — no 429 should appear.
        for _ in range(auth.LoginAttempts.MAX_FAILURES + 3):
            req = _request(
                port,
                "/login",
                method="POST",
                body={
                    "username": "daniel",
                    "password": "longenough",
                    "password2": "different",
                },
            )
            try:
                urllib.request.urlopen(req)
                raise AssertionError("expected 400")
            except urllib.error.HTTPError as e:
                assert e.code == 400
        # Then a good setup still succeeds.
        req = _request(
            port,
            "/login",
            method="POST",
            body={
                "username": "daniel",
                "password": "longenough",
                "password2": "longenough",
            },
        )
        with urllib.request.urlopen(req) as r:
            assert r.status == 200
    finally:
        server.shutdown()
        server.server_close()
def test_post_logout_revokes_session(fake_dirs, admin_session):
    server, port = _start_server()
    try:
        # Logout returns 200 and clears the cookie.
        with urllib.request.urlopen(
            _request(port, "/logout", cookie=admin_session, method="POST", body={})
        ) as r:
            assert r.status == 200
            set_cookie = r.headers["Set-Cookie"]
            assert "Max-Age=0" in set_cookie
        # Subsequent API call with same cookie → 401 (session revoked).
        try:
            urllib.request.urlopen(_request(port, "/api/apps", cookie=admin_session))
            raise AssertionError("expected 401")
        except urllib.error.HTTPError as e:
            assert e.code == 401
    finally:
        server.shutdown()
        server.server_close()


def test_post_to_protected_route_without_auth_is_401(fake_dirs):
    server, port = _start_server()
    try:
        req = _request(
            port,
            "/api/apps/install",
            method="POST",
            body={"name": "whatever"},
        )
        try:
            urllib.request.urlopen(req)
            raise AssertionError("expected 401")
        except urllib.error.HTTPError as e:
            assert e.code == 401
    finally:
        server.shutdown()
        server.server_close()
# --- Settings endpoints ------------------------------------------------------

SETTINGS_MANIFEST = dict(
@@ -269,13 +807,13 @@ def test_get_settings_not_found(fake_dirs):
    assert status == 404


def test_install_with_settings_writes_env_via_api(fake_dirs, no_docker, no_systemd_run):
    _, bundled = fake_dirs
    _write_bundled(bundled, "fileshare", manifest=SETTINGS_MANIFEST)
    status, body = api._do_install(
        "fileshare", settings={"SMB_USER": "alice", "SMB_PASSWORD": "s3cret"}
    )
    assert status == 202, body
    apps, _ = fake_dirs
    env = (apps / "fileshare" / ".env").read_text()
    assert "SMB_USER=alice" in env
@@ -290,7 +828,7 @@ def test_install_with_settings_rejects_empty_required_via_api(fake_dirs, no_dock
    assert "SMB_PASSWORD" in body["error"]


def test_update_settings_merges(fake_dirs, no_docker, no_systemd_run):
    _, bundled = fake_dirs
    _write_bundled(bundled, "fileshare", manifest=SETTINGS_MANIFEST)
    api._do_install("fileshare", settings={"SMB_USER": "alice", "SMB_PASSWORD": "original"})
@@ -308,7 +846,7 @@ def test_update_settings_unknown_app(fake_dirs):
    assert status == 404


def test_http_get_settings_route(fake_dirs, no_docker, admin_session):
    _, bundled = fake_dirs
    _write_bundled(bundled, "fileshare", manifest=SETTINGS_MANIFEST)
    server = api.HTTPServer(("127.0.0.1", 0), api._Handler)
@@ -316,7 +854,9 @@ def test_http_get_settings_route(fake_dirs, no_docker):
    t = threading.Thread(target=server.serve_forever, daemon=True)
    t.start()
    try:
        with urllib.request.urlopen(
            _request(port, "/api/apps/fileshare/settings", cookie=admin_session)
        ) as r:
            assert r.status == 200
            data = json.loads(r.read())
            assert data["name"] == "fileshare"
@@ -370,7 +910,7 @@ def test_update_not_installed(fake_dirs):
    assert "not installed" in body["error"]


def test_update_no_changes(fake_dirs, no_docker, no_systemd_run, update_docker_stubs):
    _, bundled = fake_dirs
    _write_bundled(bundled, "fileshare", env_example="A=real")
    api._do_install("fileshare")
@@ -383,7 +923,7 @@ def test_update_no_changes(fake_dirs, no_docker, update_docker_stubs):
    assert update_docker_stubs["up_called"] == 0


def test_update_changes_applied(fake_dirs, no_docker, no_systemd_run, update_docker_stubs):
    _, bundled = fake_dirs
    _write_bundled(bundled, "fileshare", env_example="A=real")
    api._do_install("fileshare")
@@ -403,7 +943,9 @@ def test_update_changes_applied(fake_dirs, no_docker, update_docker_stubs):
    assert update_docker_stubs["up_called"] == 1


def test_update_skips_services_not_running(
    fake_dirs, no_docker, no_systemd_run, update_docker_stubs
):
    _, bundled = fake_dirs
    _write_bundled(bundled, "fileshare", env_example="A=real")
    api._do_install("fileshare")
@@ -417,7 +959,9 @@ def test_update_skips_services_not_running(fake_dirs, no_docker, update_docker_s
    assert update_docker_stubs["up_called"] == 0


def test_update_returns_502_on_pull_error(
    fake_dirs, no_docker, no_systemd_run, update_docker_stubs
):
    _, bundled = fake_dirs
    _write_bundled(bundled, "fileshare", env_example="A=real")
    api._do_install("fileshare")
@@ -528,7 +1072,9 @@ def test_furtka_update_status_endpoint(stub_furtka_updater):
    assert stub_furtka_updater["status_called"] == 1


def test_http_post_update_route(
    fake_dirs, no_docker, no_systemd_run, update_docker_stubs, admin_session
):
    _, bundled = fake_dirs
    _write_bundled(bundled, "fileshare", env_example="A=real")
    api._do_install("fileshare")
@@ -539,11 +1085,12 @@ def test_http_post_update_route(fake_dirs, no_docker, update_docker_stubs):
    t = threading.Thread(target=server.serve_forever, daemon=True)
    t.start()
    try:
        req = _request(
            port,
            "/api/apps/fileshare/update",
            cookie=admin_session,
            method="POST",
            body={},
        )
        with urllib.request.urlopen(req) as r:
            assert r.status == 200
@@ -555,7 +1102,7 @@ def test_http_post_update_route(fake_dirs, no_docker, update_docker_stubs):
        server.server_close()
def test_http_post_install_with_settings(fake_dirs, no_docker, no_systemd_run, admin_session):
    _, bundled = fake_dirs
    _write_bundled(bundled, "fileshare", manifest=SETTINGS_MANIFEST)
    server = api.HTTPServer(("127.0.0.1", 0), api._Handler)
@@ -563,21 +1110,186 @@ def test_http_post_install_with_settings(fake_dirs, no_docker):
    t = threading.Thread(target=server.serve_forever, daemon=True)
    t.start()
    try:
        req = _request(
            port,
            "/api/apps/install",
            cookie=admin_session,
            method="POST",
            body={
                "name": "fileshare",
                "settings": {"SMB_USER": "alice", "SMB_PASSWORD": "s3cret"},
            },
        )
        with urllib.request.urlopen(req) as r:
            # Async: 202 Accepted + dispatched background job.
            assert r.status == 202
            body = json.loads(r.read())
            assert body["status"] == "dispatched"
            assert body["unit"] == "furtka-install-fileshare"
        # Sync phase wrote the .env before dispatch.
        apps, _ = fake_dirs
        assert "SMB_PASSWORD=s3cret" in (apps / "fileshare" / ".env").read_text()
        # And systemd-run was called exactly once with the expected cmd.
        assert len(no_systemd_run) == 1
        assert no_systemd_run[0][:4] == [
            "systemd-run",
            "--unit=furtka-install-fileshare",
            "--no-block",
            "--collect",
        ]
        assert no_systemd_run[0][-3:] == ["app", "install-bg", "fileshare"]
    finally:
        server.shutdown()
        server.server_close()
def test_do_install_returns_409_when_locked(fake_dirs, no_docker, no_systemd_run):
    _, bundled = fake_dirs
    _write_bundled(bundled, "fileshare", env_example="A=real")
    # Hold the install lock so _do_install fast-fails.
    fh = api.install_runner.acquire_lock()
    try:
        status, body = api._do_install("fileshare")
        assert status == 409
        assert "in progress" in body["error"]
    finally:
        fh.close()


def test_do_install_returns_409_when_state_reports_running(fake_dirs, no_docker, no_systemd_run):
    """Closes the race window where _do_install had already released
    the fcntl lock (so the systemd-run child could grab it) but a
    second POST tried to start a new install while the first was still
    mid-flight. The state file's non-terminal stage is the reliable
    "someone else is installing" signal."""
    _, bundled = fake_dirs
    _write_bundled(bundled, "fileshare", env_example="A=real")
    api.install_runner.write_state("pulling_image", app="jellyfin")
    status, body = api._do_install("fileshare")
    assert status == 409
    assert "in progress" in body["error"]
    assert "jellyfin" in body["error"]
    assert "pulling_image" in body["error"]


def test_do_install_goes_through_after_terminal_state(fake_dirs, no_docker, no_systemd_run):
    """After a successful or failed install, the state file stays at
    done/error; a new install must be accepted, not blocked."""
    _, bundled = fake_dirs
    _write_bundled(bundled, "fileshare", env_example="A=real")
    api.install_runner.write_state("done", app="previous", version="1.0.0")
    status, _ = api._do_install("fileshare")
    assert status == 202
    api.install_runner.write_state("error", app="previous", error="oops")
    status, _ = api._do_install("fileshare")
    assert status == 202


def test_do_install_status_returns_state(fake_dirs):
    # Write state directly, then GET it via the status handler.
    api.install_runner.write_state("pulling_image", app="jellyfin")
    status, body = api._do_install_status()
    assert status == 200
    assert body["stage"] == "pulling_image"
    assert body["app"] == "jellyfin"
# --- Catalog endpoints ------------------------------------------------------


def test_catalog_status_reports_absent_catalog(fake_dirs, monkeypatch):
    """With no /var/lib/furtka/catalog/ on disk, status reports current=None + empty state."""
    # FURTKA_CATALOG_STATE is not touched by fake_dirs — point it at tmp so we
    # don't hit the production path.
    monkeypatch.setenv("FURTKA_CATALOG_STATE", str(fake_dirs[0].parent / "catalog-state.json"))
    import importlib

    from furtka import catalog as c

    importlib.reload(c)
    status, body = api._do_catalog_status()
    assert status == 200
    assert body["current"] is None
    assert body["state"] == {}


def test_catalog_check_surfaces_forgejo_error(fake_dirs, monkeypatch):
    monkeypatch.setenv("FURTKA_CATALOG_STATE", str(fake_dirs[0].parent / "catalog-state.json"))
    import importlib

    from furtka import _release_common as _rc
    from furtka import catalog as c

    importlib.reload(c)

    def boom(host, repo, path, *, error_cls=RuntimeError):
        raise error_cls("forgejo api down")

    monkeypatch.setattr(_rc, "forgejo_api", boom)
    status, body = api._do_catalog_check()
    assert status == 502
    assert "forgejo api down" in body["error"]
# --- Power endpoints --------------------------------------------------------
def test_power_rejects_unknown_action(fake_dirs):
status, body = api._do_power({"action": "format-harddrive"})
assert status == 400
assert "action" in body["error"]
def test_power_rejects_missing_action(fake_dirs):
status, body = api._do_power({})
assert status == 400
def test_power_reboot_dispatches_systemd_run(fake_dirs, monkeypatch):
seen = []
class _FakeCompleted:
returncode = 0
stdout = ""
stderr = ""
def fake_run(cmd, *, check=False, capture_output=False, text=False):
seen.append(cmd)
return _FakeCompleted()
monkeypatch.setattr("subprocess.run", fake_run)
status, body = api._do_power({"action": "reboot"})
assert status == 202
assert body == {"action": "reboot", "scheduled_in_seconds": 3}
# The dispatched command is a delayed systemd-run that eventually
# invokes `systemctl reboot`. Asserting the key flags catches
# accidental regressions (e.g. losing --no-block would block the API
# thread until the unit completes).
assert seen[0][:1] == ["systemd-run"]
assert "--on-active=3s" in seen[0]
assert "--no-block" in seen[0]
assert seen[0][-2:] == ["systemctl", "reboot"]
def test_power_poweroff_dispatches_systemctl_poweroff(fake_dirs, monkeypatch):
seen = []
class _FakeCompleted:
returncode = 0
monkeypatch.setattr("subprocess.run", lambda cmd, **kw: (seen.append(cmd), _FakeCompleted())[1])
status, body = api._do_power({"action": "poweroff"})
assert status == 202
assert body["action"] == "poweroff"
assert seen[0][-2:] == ["systemctl", "poweroff"]
def test_power_surfaces_systemd_run_missing(fake_dirs, monkeypatch):
def boom(*a, **kw):
raise FileNotFoundError(2, "No such file", "systemd-run")
monkeypatch.setattr("subprocess.run", boom)
status, body = api._do_power({"action": "reboot"})
assert status == 502
assert "systemd-run" in body["error"]

tests/test_auth.py Normal file
@ -0,0 +1,230 @@
import json
from datetime import UTC, datetime, timedelta
import pytest
from furtka import auth
@pytest.fixture
def tmp_users_file(tmp_path, monkeypatch):
path = tmp_path / "users.json"
monkeypatch.setenv("FURTKA_USERS_FILE", str(path))
# Sessions and lockout state are module-level; wipe between tests so
# one doesn't leak a valid token (or a stale failure counter) into
# the next.
auth.SESSIONS.clear()
auth.LOCKOUT.clear_all()
return path
def test_hash_password_roundtrip():
h = auth.hash_password("hunter2")
assert h != "hunter2" # Not plain text.
assert auth.verify_password("hunter2", h) is True
assert auth.verify_password("hunter3", h) is False
def test_hash_password_is_salted():
# Two calls with the same password must produce different hashes.
a = auth.hash_password("same")
b = auth.hash_password("same")
assert a != b
# But both verify against the original.
assert auth.verify_password("same", a)
assert auth.verify_password("same", b)
def test_load_users_returns_empty_when_missing(tmp_users_file):
assert not tmp_users_file.exists()
assert auth.load_users() == {}
def test_load_users_returns_empty_on_junk(tmp_users_file):
tmp_users_file.write_text("{not json")
assert auth.load_users() == {}
def test_load_users_returns_empty_on_non_dict(tmp_users_file):
tmp_users_file.write_text("[]")
assert auth.load_users() == {}
def test_save_users_atomic_and_0600(tmp_users_file):
auth.save_users({"admin": {"hash": "x", "username": "daniel"}})
assert tmp_users_file.exists()
mode = tmp_users_file.stat().st_mode & 0o777
assert mode == 0o600, f"expected 0o600, got {oct(mode)}"
loaded = json.loads(tmp_users_file.read_text())
assert loaded["admin"]["username"] == "daniel"
def test_setup_needed_true_on_missing_file(tmp_users_file):
assert auth.setup_needed() is True
def test_setup_needed_true_on_empty_dict(tmp_users_file):
tmp_users_file.write_text("{}")
assert auth.setup_needed() is True
def test_setup_needed_false_when_admin_exists(tmp_users_file):
auth.create_admin("daniel", "secret-pw")
assert auth.setup_needed() is False
def test_create_admin_overwrites_file(tmp_users_file):
auth.create_admin("daniel", "secret-pw")
auth.create_admin("robert", "new-pw")
users = auth.load_users()
assert users["admin"]["username"] == "robert"
def test_authenticate_happy(tmp_users_file):
auth.create_admin("daniel", "secret-pw")
assert auth.authenticate("daniel", "secret-pw") is True
def test_authenticate_wrong_username(tmp_users_file):
auth.create_admin("daniel", "secret-pw")
assert auth.authenticate("robert", "secret-pw") is False
def test_authenticate_wrong_password(tmp_users_file):
auth.create_admin("daniel", "secret-pw")
assert auth.authenticate("daniel", "wrong") is False
def test_authenticate_no_admin(tmp_users_file):
assert auth.authenticate("daniel", "anything") is False
# ---- Session store ---------------------------------------------------------
def test_session_create_and_lookup(tmp_users_file):
s = auth.SESSIONS.create("daniel")
assert s.username == "daniel"
assert s.token
looked_up = auth.SESSIONS.lookup(s.token)
assert looked_up is not None
assert looked_up.username == "daniel"
def test_session_lookup_unknown_token(tmp_users_file):
assert auth.SESSIONS.lookup("not-a-real-token") is None
def test_session_lookup_none_token(tmp_users_file):
assert auth.SESSIONS.lookup(None) is None
assert auth.SESSIONS.lookup("") is None
def test_session_revoke(tmp_users_file):
s = auth.SESSIONS.create("daniel")
auth.SESSIONS.revoke(s.token)
assert auth.SESSIONS.lookup(s.token) is None
def test_session_expires(tmp_users_file, monkeypatch):
# Build a session store with a 0-second TTL so lookup immediately
# treats new sessions as expired.
store = auth.SessionStore(ttl_seconds=0)
s = store.create("daniel")
# Force the clock forward a hair so the > check fires.
monkeypatch.setattr(
auth,
"datetime",
_FakeDatetime(datetime.now(UTC) + timedelta(seconds=1)),
)
# The module-local datetime reference inside SessionStore.lookup
# resolves at call time. Verify that an expired session is dropped.
assert store.lookup(s.token) is None
class _FakeDatetime:
"""Tiny shim — only `.now(tz)` is used from SessionStore."""
def __init__(self, fixed_utc):
self._fixed = fixed_utc
def now(self, tz=None):
if tz is None:
return self._fixed.replace(tzinfo=None)
return self._fixed.astimezone(tz)
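The expiry tests above have to monkeypatch the module's `datetime` reference. An alternative design that the same tests could exercise without patching is to inject a clock callable; a minimal sketch of a TTL token store in that style (names hypothetical, not the project's `SessionStore`):

```python
import secrets
import time
from dataclasses import dataclass

@dataclass
class Session:
    username: str
    token: str
    expires_at: float

class SessionStore:
    """Token -> session map with a TTL; expired entries drop on lookup."""

    def __init__(self, ttl_seconds: float, clock=time.monotonic):
        self._ttl = ttl_seconds
        self._clock = clock  # injectable for tests, no monkeypatching needed
        self._by_token: dict[str, Session] = {}

    def create(self, username: str) -> Session:
        token = secrets.token_urlsafe(32)
        s = Session(username, token, self._clock() + self._ttl)
        self._by_token[token] = s
        return s

    def lookup(self, token):
        s = self._by_token.get(token or "")
        if s is None:
            return None
        if self._clock() > s.expires_at:
            del self._by_token[s.token]  # lazy expiry on access
            return None
        return s

    def revoke(self, token):
        self._by_token.pop(token or "", None)
```

`secrets.token_urlsafe` gives unguessable tokens; `time.monotonic` avoids TTLs jumping when the wall clock is adjusted.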
# ---- Login attempts / lockout ----------------------------------------------
def test_lockout_under_threshold_still_allowed(tmp_users_file):
store = auth.LoginAttempts(max_failures=3, window_seconds=60, lockout_seconds=60)
key = ("daniel", "10.0.0.1")
for _ in range(2):
store.register_failure(key)
assert store.is_locked(key) is False
assert store.retry_after_seconds(key) == 0
def test_lockout_triggers_at_threshold(tmp_users_file):
store = auth.LoginAttempts(max_failures=3, window_seconds=60, lockout_seconds=60)
key = ("daniel", "10.0.0.1")
for _ in range(3):
store.register_failure(key)
assert store.is_locked(key) is True
assert store.retry_after_seconds(key) > 0
assert store.retry_after_seconds(key) <= 60
def test_lockout_window_decay(tmp_users_file, monkeypatch):
store = auth.LoginAttempts(max_failures=3, window_seconds=60, lockout_seconds=60)
key = ("daniel", "10.0.0.1")
for _ in range(3):
store.register_failure(key)
assert store.is_locked(key) is True
# Jump 2 minutes ahead — all failures are older than the window
# and should be pruned on the next check.
monkeypatch.setattr(
auth,
"datetime",
_FakeDatetime(datetime.now(UTC) + timedelta(seconds=121)),
)
assert store.is_locked(key) is False
assert store.retry_after_seconds(key) == 0
def test_lockout_clear_resets(tmp_users_file):
store = auth.LoginAttempts(max_failures=2, window_seconds=60, lockout_seconds=60)
key = ("daniel", "10.0.0.1")
store.register_failure(key)
store.register_failure(key)
assert store.is_locked(key) is True
store.clear(key)
assert store.is_locked(key) is False
assert store.retry_after_seconds(key) == 0
def test_lockout_keys_are_independent(tmp_users_file):
store = auth.LoginAttempts(max_failures=2, window_seconds=60, lockout_seconds=60)
locked = ("daniel", "1.1.1.1")
other_ip = ("daniel", "2.2.2.2")
other_user = ("robert", "1.1.1.1")
store.register_failure(locked)
store.register_failure(locked)
assert store.is_locked(locked) is True
assert store.is_locked(other_ip) is False
assert store.is_locked(other_user) is False
def test_lockout_clear_all_wipes_every_key(tmp_users_file):
store = auth.LoginAttempts(max_failures=2, window_seconds=60, lockout_seconds=60)
a = ("daniel", "1.1.1.1")
b = ("robert", "2.2.2.2")
store.register_failure(a)
store.register_failure(a)
store.register_failure(b)
store.register_failure(b)
assert store.is_locked(a) and store.is_locked(b)
store.clear_all()
assert not store.is_locked(a)
assert not store.is_locked(b)
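The behaviour pinned by the lockout tests above — sliding window, per-(user, ip) keys, decay, clear — can be sketched as a small class. This is an illustrative reimplementation under those assumptions, not the project's `auth.LoginAttempts`; a clock is injected to make the decay test trivial:

```python
import time
from collections import defaultdict

class LoginAttempts:
    """Per-(user, ip) failure counter with a sliding window and lockout."""

    def __init__(self, max_failures, window_seconds, lockout_seconds,
                 clock=time.monotonic):
        self._max = max_failures
        self._window = window_seconds
        self._lockout = lockout_seconds
        self._clock = clock
        self._failures = defaultdict(list)  # key -> [failure timestamps]

    def register_failure(self, key):
        self._failures[key].append(self._clock())

    def _recent(self, key):
        # Prune anything older than the window on every check.
        cutoff = self._clock() - self._window
        self._failures[key] = [t for t in self._failures[key] if t > cutoff]
        return self._failures[key]

    def is_locked(self, key):
        return len(self._recent(key)) >= self._max

    def retry_after_seconds(self, key):
        recent = self._recent(key)
        if len(recent) < self._max:
            return 0
        # Lockout counts from the most recent failure.
        return max(0, int(recent[-1] + self._lockout - self._clock()))

    def clear(self, key):
        self._failures.pop(key, None)

    def clear_all(self):
        self._failures.clear()
```

Keying on the (username, ip) pair is what keeps one attacker's failures from locking out the same user on a different address, as the independence test asserts.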

tests/test_catalog.py Normal file
@ -0,0 +1,333 @@
"""Tests for the apps-catalog sync flow.
Same shape as ``tests/test_updater.py``: fixture reloads the module with
env-overridden paths, fake tarballs land in tmp_path, Forgejo API is
stubbed via ``urllib.request.urlopen`` monkeypatching so nothing talks
to the network.
Asserts end-to-end atomicity: on any failure path (bad sha256, broken
tarball, invalid manifest) the live catalog dir is either left
untouched (if one existed) or absent (if it didn't).
"""
from __future__ import annotations
import io
import json
import tarfile
from pathlib import Path
import pytest
@pytest.fixture
def catalog(tmp_path, monkeypatch):
monkeypatch.setenv("FURTKA_CATALOG_DIR", str(tmp_path / "var_lib_furtka_catalog"))
monkeypatch.setenv("FURTKA_CATALOG_STATE", str(tmp_path / "var_lib_furtka_catalog-state.json"))
monkeypatch.setenv("FURTKA_CATALOG_LOCK", str(tmp_path / "catalog.lock"))
monkeypatch.setenv("FURTKA_FORGEJO_HOST", "forgejo.test.local")
monkeypatch.setenv("FURTKA_CATALOG_REPO", "daniel/furtka-apps")
import importlib
from furtka import catalog as c
from furtka import paths as p
importlib.reload(p)
importlib.reload(c)
return c
def _manifest(name: str = "fileshare") -> dict:
return {
"name": name,
"display_name": "Fileshare",
"version": "0.1.0",
"description": "Test fixture app",
"volumes": ["files"],
"ports": [445],
"icon": "icon.svg",
}
def _make_catalog_tarball(
path: Path,
version: str,
*,
apps: list[tuple[str, dict]] | None = None,
extra_entries: list[tuple[str, bytes]] | None = None,
) -> None:
"""Build a minimal valid catalog tarball.
`apps` is a list of (folder_name, manifest_dict). Each app folder gets
a `manifest.json` + a stub `docker-compose.yaml` + `icon.svg`.
`extra_entries` lets tests inject malformed content (path-traversal,
missing VERSION, ...) without rebuilding the helper.
"""
apps = apps if apps is not None else [("fileshare", _manifest())]
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w:gz") as tf:
entries: list[tuple[str, bytes]] = [("VERSION", f"{version}\n".encode())]
for folder, m in apps:
entries.append((f"apps/{folder}/manifest.json", json.dumps(m).encode()))
entries.append(
(f"apps/{folder}/docker-compose.yaml", b"services:\n app:\n image: scratch\n")
)
entries.append((f"apps/{folder}/icon.svg", b"<svg/>"))
if extra_entries:
entries.extend(extra_entries)
for name, data in entries:
info = tarfile.TarInfo(name=name)
info.size = len(data)
tf.addfile(info, io.BytesIO(data))
path.write_bytes(buf.getvalue())
def _stub_forgejo_release(
monkeypatch,
catalog,
*,
tag: str,
tarball_url: str = "https://forgejo.test.local/t.tar.gz",
sha_url: str = "https://forgejo.test.local/t.tar.gz.sha256",
releases: list | None = None,
):
"""Patch ``_rc.forgejo_api`` so check_catalog sees a canned release list."""
if releases is None:
releases = [
{
"tag_name": tag,
"assets": [
{"name": f"furtka-apps-{tag}.tar.gz", "browser_download_url": tarball_url},
{
"name": f"furtka-apps-{tag}.tar.gz.sha256",
"browser_download_url": sha_url,
},
],
}
]
def fake_api(host, repo, path, *, error_cls=RuntimeError):
return releases
from furtka import _release_common as _rc
monkeypatch.setattr(_rc, "forgejo_api", fake_api)
def _stub_download(monkeypatch, catalog, mapping: dict[str, bytes]):
"""Patch ``_rc.download`` so sync_catalog pulls from an in-memory map."""
from furtka import _release_common as _rc
def fake_download(url, dest, *, error_cls=RuntimeError):
if url not in mapping:
raise error_cls(f"test: no fake content for {url}")
dest.parent.mkdir(parents=True, exist_ok=True)
dest.write_bytes(mapping[url])
monkeypatch.setattr(_rc, "download", fake_download)
# --------------------------------------------------------------------------- #
# check_catalog
# --------------------------------------------------------------------------- #
def test_check_catalog_reports_update_when_versions_differ(catalog, monkeypatch, tmp_path):
# Pretend we already have catalog version 26.5 on disk; Forgejo reports 26.6.
catalog.catalog_dir().mkdir(parents=True)
(catalog.catalog_dir() / "VERSION").write_text("26.5\n")
_stub_forgejo_release(monkeypatch, catalog, tag="26.6")
check = catalog.check_catalog()
assert check.current == "26.5"
assert check.latest == "26.6"
assert check.update_available is True
assert check.tarball_url.endswith(".tar.gz")
assert check.sha256_url.endswith(".sha256")
def test_check_catalog_reports_up_to_date_when_same_version(catalog, monkeypatch):
catalog.catalog_dir().mkdir(parents=True)
(catalog.catalog_dir() / "VERSION").write_text("26.5\n")
_stub_forgejo_release(monkeypatch, catalog, tag="26.5")
check = catalog.check_catalog()
assert check.current == "26.5"
assert check.latest == "26.5"
assert check.update_available is False
def test_check_catalog_treats_missing_current_as_installable(catalog, monkeypatch):
# Fresh box, no catalog ever synced — any release is an update.
_stub_forgejo_release(monkeypatch, catalog, tag="26.5")
check = catalog.check_catalog()
assert check.current is None
assert check.update_available is True
def test_check_catalog_raises_when_no_releases_published(catalog, monkeypatch):
_stub_forgejo_release(monkeypatch, catalog, tag="x", releases=[])
with pytest.raises(catalog.CatalogError, match="no catalog releases"):
catalog.check_catalog()
# --------------------------------------------------------------------------- #
# sync_catalog — happy + error paths
# --------------------------------------------------------------------------- #
def test_sync_catalog_happy_path(catalog, monkeypatch, tmp_path):
import hashlib
tarball_path = tmp_path / "tarball.tar.gz"
_make_catalog_tarball(tarball_path, "26.6")
tarball_bytes = tarball_path.read_bytes()
sha = hashlib.sha256(tarball_bytes).hexdigest()
_stub_forgejo_release(monkeypatch, catalog, tag="26.6")
_stub_download(
monkeypatch,
catalog,
{
"https://forgejo.test.local/t.tar.gz": tarball_bytes,
"https://forgejo.test.local/t.tar.gz.sha256": (
f"{sha} furtka-apps-26.6.tar.gz\n".encode()
),
},
)
check = catalog.sync_catalog()
assert check.latest == "26.6"
assert (catalog.catalog_dir() / "VERSION").read_text().strip() == "26.6"
assert (catalog.catalog_dir() / "apps" / "fileshare" / "manifest.json").is_file()
state = catalog.read_state()
assert state["stage"] == "done"
assert state["version"] == "26.6"
def test_sync_catalog_noop_when_already_current(catalog, monkeypatch, tmp_path):
catalog.catalog_dir().mkdir(parents=True)
(catalog.catalog_dir() / "VERSION").write_text("26.5\n")
_stub_forgejo_release(monkeypatch, catalog, tag="26.5")
check = catalog.sync_catalog()
assert check.update_available is False
assert catalog.read_state()["stage"] == "done"
def test_sync_catalog_refuses_sha256_mismatch(catalog, monkeypatch, tmp_path):
tarball_path = tmp_path / "tarball.tar.gz"
_make_catalog_tarball(tarball_path, "26.6")
_stub_forgejo_release(monkeypatch, catalog, tag="26.6")
_stub_download(
monkeypatch,
catalog,
{
"https://forgejo.test.local/t.tar.gz": tarball_path.read_bytes(),
# Hash for some OTHER content — will mismatch.
"https://forgejo.test.local/t.tar.gz.sha256": (b"0" * 64 + b" wrong.tar.gz\n"),
},
)
with pytest.raises(catalog.CatalogError, match="sha256 mismatch"):
catalog.sync_catalog()
# Live catalog never existed, must still not exist after the failed sync.
assert not catalog.catalog_dir().exists()
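The mismatch check being tested boils down to hashing the downloaded bytes and comparing against the first token of the `.sha256` asset (the `sha256sum` format is `<hex digest>  <filename>`). A sketch with a hypothetical helper name:

```python
import hashlib

def verify_sha256(tarball_bytes: bytes, sha_file_text: str) -> None:
    # Only the first whitespace-separated token of the .sha256 asset
    # matters; the filename after it is informational.
    expected = sha_file_text.split()[0].lower()
    actual = hashlib.sha256(tarball_bytes).hexdigest()
    if actual != expected:
        raise ValueError(f"sha256 mismatch: expected {expected}, got {actual}")
```

Raising before anything touches the staging or live dirs is what lets the test assert the catalog dir still doesn't exist after a failed sync.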
def test_sync_catalog_refuses_tarball_with_invalid_manifest(catalog, monkeypatch, tmp_path):
import hashlib
bad_manifest = {"name": "broken"} # missing required fields
tarball_path = tmp_path / "tarball.tar.gz"
_make_catalog_tarball(tarball_path, "26.6", apps=[("broken", bad_manifest)])
tarball_bytes = tarball_path.read_bytes()
sha = hashlib.sha256(tarball_bytes).hexdigest()
_stub_forgejo_release(monkeypatch, catalog, tag="26.6")
_stub_download(
monkeypatch,
catalog,
{
"https://forgejo.test.local/t.tar.gz": tarball_bytes,
"https://forgejo.test.local/t.tar.gz.sha256": (
f"{sha} furtka-apps-26.6.tar.gz\n".encode()
),
},
)
with pytest.raises(catalog.CatalogError, match="invalid manifest"):
catalog.sync_catalog()
# Staging was cleaned; live catalog never materialised.
assert not catalog.catalog_dir().exists()
def test_sync_catalog_preserves_existing_catalog_on_failure(catalog, monkeypatch, tmp_path):
"""A failed sync must leave the previous live catalog intact so boxes
keep working until the next successful sync."""
import hashlib
# Seed a live catalog that represents a previous successful sync.
live = catalog.catalog_dir()
live.mkdir(parents=True)
(live / "VERSION").write_text("26.5\n")
(live / "apps").mkdir()
bad_manifest = {"name": "broken"} # invalid
tarball_path = tmp_path / "tarball.tar.gz"
_make_catalog_tarball(tarball_path, "26.6", apps=[("broken", bad_manifest)])
sha = hashlib.sha256(tarball_path.read_bytes()).hexdigest()
_stub_forgejo_release(monkeypatch, catalog, tag="26.6")
_stub_download(
monkeypatch,
catalog,
{
"https://forgejo.test.local/t.tar.gz": tarball_path.read_bytes(),
"https://forgejo.test.local/t.tar.gz.sha256": f"{sha} x\n".encode(),
},
)
with pytest.raises(catalog.CatalogError):
catalog.sync_catalog()
# The 26.5 live catalog survives the failed 26.6 sync.
assert (live / "VERSION").read_text().strip() == "26.5"
def test_sync_catalog_lock_contention(catalog, monkeypatch):
_stub_forgejo_release(monkeypatch, catalog, tag="26.6")
# Hold the lock from outside; the real sync_catalog call must refuse.
first = catalog.acquire_lock()
try:
with pytest.raises(catalog.CatalogError, match="already in progress"):
catalog.sync_catalog()
finally:
first.close()
# --------------------------------------------------------------------------- #
# state + current-version helpers
# --------------------------------------------------------------------------- #
def test_read_current_catalog_version_absent(catalog):
assert catalog.read_current_catalog_version() is None
def test_read_current_catalog_version_empty_file(catalog):
catalog.catalog_dir().mkdir(parents=True)
(catalog.catalog_dir() / "VERSION").write_text("\n")
assert catalog.read_current_catalog_version() is None
def test_write_and_read_state_round_trip(catalog):
catalog.write_state("downloading", latest="26.6")
s = catalog.read_state()
assert s["stage"] == "downloading"
assert s["latest"] == "26.6"
assert "updated_at" in s


@ -32,9 +32,21 @@ def test_app_list_json_with_one_app(tmp_path, monkeypatch, capsys):
"display_name": "Network Files",
"version": "0.1.0",
"description": "SMB",
"description_long": "Long description here.",
"volumes": ["files"],
"ports": [445],
"icon": "icon.svg",
"open_url": "smb://{host}/files",
"settings": [
{
"name": "SMB_USER",
"label": "User",
"description": "SMB user",
"type": "text",
"default": "furtka",
"required": True,
}
],
}
)
)
@ -43,7 +55,14 @@ def test_app_list_json_with_one_app(tmp_path, monkeypatch, capsys):
data = json.loads(capsys.readouterr().out)
assert len(data) == 1
assert data[0]["ok"] is True
m = data[0]["manifest"]
assert m["name"] == "fileshare"
assert m["description_long"] == "Long description here."
assert m["open_url"] == "smb://{host}/files"
assert len(m["settings"]) == 1
assert m["settings"][0]["name"] == "SMB_USER"
assert m["settings"][0]["required"] is True
assert m["settings"][0]["default"] == "furtka"
def test_reconcile_dry_run_empty(tmp_path, monkeypatch, capsys):
@ -52,3 +71,35 @@ def test_reconcile_dry_run_empty(tmp_path, monkeypatch, capsys):
assert rc == 0
out = capsys.readouterr().out
assert "0 actions" in out
def test_app_install_bg_dispatches_to_runner(tmp_path, monkeypatch):
"""CLI `app install-bg <name>` must call install_runner.run_install(name).
This is the entry point the HTTP API fires via systemd-run; a regression
here would leave the UI hanging at "pulling_image…" forever because
the background job never transitions state.
"""
_set_env(monkeypatch, tmp_path)
from furtka import install_runner
called = []
monkeypatch.setattr(install_runner, "run_install", lambda name: called.append(name))
rc = main(["app", "install-bg", "fileshare"])
assert rc == 0
assert called == ["fileshare"]
def test_app_install_bg_returns_1_on_failure(tmp_path, monkeypatch, capsys):
_set_env(monkeypatch, tmp_path)
from furtka import install_runner
def boom(name):
raise RuntimeError("compose pull failed")
monkeypatch.setattr(install_runner, "run_install", boom)
rc = main(["app", "install-bg", "fileshare"])
assert rc == 1
err = capsys.readouterr().err
assert "install-bg failed" in err
assert "compose pull failed" in err


@ -95,3 +95,23 @@ def test_drive_type_label_nvme_ssd_hdd():
def test_parse_lsblk_handles_empty_output():
assert parse_lsblk_output("") == []
def test_parse_lsblk_drops_boot_usb(monkeypatch):
import drives
monkeypatch.setattr(drives, "_smart_status", lambda _: "passed")
output = "sda 500G disk\nsdb 16G disk\nnvme0n1 1T disk\n"
devices = parse_lsblk_output(output, boot_disk="sdb")
names = [d["name"] for d in devices]
assert "/dev/sdb" not in names
assert names == ["/dev/nvme0n1", "/dev/sda"]
def test_parse_lsblk_no_boot_disk_keeps_all(monkeypatch):
import drives
monkeypatch.setattr(drives, "_smart_status", lambda _: "passed")
output = "sda 500G disk\nsdb 16G disk\n"
names = [d["name"] for d in parse_lsblk_output(output, boot_disk=None)]
assert set(names) == {"/dev/sda", "/dev/sdb"}


@ -1,11 +1,15 @@
"""Tests for furtka.https — fingerprint extraction + force-HTTPS toggle. """Tests for furtka.https — fingerprint extraction + HTTPS toggle.
Since 26.15-alpha the toggle writes/removes TWO snippets atomically:
- The top-level HTTPS listener snippet (enables :443 + tls internal)
- The :80-scoped redirect snippet (forces HTTP HTTPS)
The fingerprint case uses a throwaway self-signed EC cert with a known The fingerprint case uses a throwaway self-signed EC cert with a known
reference fingerprint (computed once via `openssl x509 -fingerprint reference fingerprint (computed once via `openssl x509 -fingerprint
-sha256 -noout`) so we verify the PEM DER SHA256 path without a -sha256 -noout`) so we verify the PEM DER SHA256 path without a
runtime subprocess dependency. The toggle cases stub the caddy reload runtime subprocess dependency. The toggle cases stub the caddy reload
so we assert the snippet file is written / removed and that reload so we assert both snippet files are written / removed together and that
failures roll state back. reload failures roll BOTH state back.
""" """
import subprocess import subprocess
@ -34,6 +38,22 @@ _TEST_CERT_FP_SHA256 = (
) )
def _paths(tmp_path):
"""Return the four paths the toggle touches, in a dict for kwargs
spreading. Keeps each test's fixture boilerplate small."""
return {
"snippet_dir": tmp_path / "furtka.d",
"snippet": tmp_path / "furtka.d" / "redirect.caddyfile",
"https_snippet_dir": tmp_path / "furtka-https.d",
"https_snippet": tmp_path / "furtka-https.d" / "https.caddyfile",
"hostname_file": tmp_path / "etc_hostname",
}
def _prepare_hostname(tmp_path, value="testbox"):
(tmp_path / "etc_hostname").write_text(f"{value}\n")
def test_ca_fingerprint_matches_openssl(tmp_path): def test_ca_fingerprint_matches_openssl(tmp_path):
cert = tmp_path / "root.crt" cert = tmp_path / "root.crt"
cert.write_text(_TEST_CERT_PEM) cert.write_text(_TEST_CERT_PEM)
@ -53,7 +73,7 @@ def test_ca_fingerprint_no_pem_block(tmp_path):
def test_status_no_ca_no_snippet(tmp_path): def test_status_no_ca_no_snippet(tmp_path):
s = https.status(ca_path=tmp_path / "root.crt", snippet=tmp_path / "redirect.caddyfile") s = https.status(ca_path=tmp_path / "root.crt", https_snippet=tmp_path / "https.caddyfile")
assert s == { assert s == {
"ca_available": False, "ca_available": False,
"fingerprint_sha256": None, "fingerprint_sha256": None,
@ -62,105 +82,135 @@ def test_status_no_ca_no_snippet(tmp_path):
} }
def test_status_with_ca_and_snippet(tmp_path): def test_status_with_ca_and_https_snippet(tmp_path):
ca = tmp_path / "root.crt" ca = tmp_path / "root.crt"
ca.write_text(_TEST_CERT_PEM) ca.write_text(_TEST_CERT_PEM)
snippet = tmp_path / "redirect.caddyfile" https_snip = tmp_path / "https.caddyfile"
snippet.write_text(https.REDIRECT_CONTENT) https_snip.write_text("furtka.local, furtka {\n\ttls internal\n\timport furtka_routes\n}\n")
s = https.status(ca_path=ca, snippet=snippet) s = https.status(ca_path=ca, https_snippet=https_snip)
assert s["ca_available"] is True assert s["ca_available"] is True
assert s["fingerprint_sha256"] == _TEST_CERT_FP_SHA256 assert s["fingerprint_sha256"] == _TEST_CERT_FP_SHA256
assert s["force_https"] is True assert s["force_https"] is True
def test_set_force_enable_writes_snippet_and_reloads(tmp_path): def test_status_force_reflects_https_snippet_not_redirect(tmp_path):
snippet_dir = tmp_path / "furtka.d" """Authoritative signal for "HTTPS is on" is the listener snippet —
snippet = snippet_dir / "redirect.caddyfile" a lone redirect without a :443 listener wouldn't actually serve
HTTPS, so the status must NOT report it as on. Locks 26.15 semantic."""
ca = tmp_path / "root.crt"
ca.write_text(_TEST_CERT_PEM)
s = https.status(ca_path=ca, https_snippet=tmp_path / "does-not-exist.caddyfile")
assert s["force_https"] is False
def test_set_force_enable_writes_both_snippets_and_reloads(tmp_path):
_prepare_hostname(tmp_path)
p = _paths(tmp_path)
calls = [] calls = []
def fake_reload(): def fake_reload():
calls.append("reload") calls.append("reload")
result = https.set_force_https( result = https.set_force_https(True, reload_caddy=fake_reload, **p)
True, snippet_dir=snippet_dir, snippet=snippet, reload_caddy=fake_reload
)
assert result is True assert result is True
assert snippet.read_text() == https.REDIRECT_CONTENT assert p["snippet"].read_text() == https.REDIRECT_CONTENT
written = p["https_snippet"].read_text()
assert "testbox.local, testbox" in written
assert "tls internal" in written
assert "import furtka_routes" in written
assert calls == ["reload"] assert calls == ["reload"]
def test_set_force_disable_removes_snippet(tmp_path): def test_set_force_uses_fallback_hostname_when_file_missing(tmp_path):
snippet_dir = tmp_path / "furtka.d" # No /etc/hostname → fall back to 'furtka' so Caddy gets a parseable
snippet_dir.mkdir() # block instead of an empty hostname that would fail config load.
snippet = snippet_dir / "redirect.caddyfile" p = _paths(tmp_path)
snippet.write_text(https.REDIRECT_CONTENT) result = https.set_force_https(True, reload_caddy=lambda: None, **p)
assert result is True
assert "furtka.local, furtka" in p["https_snippet"].read_text()
result = https.set_force_https(
False, snippet_dir=snippet_dir, snippet=snippet, reload_caddy=lambda: None def test_set_force_disable_removes_both_snippets(tmp_path):
) _prepare_hostname(tmp_path)
p = _paths(tmp_path)
p["snippet_dir"].mkdir()
p["https_snippet_dir"].mkdir()
p["snippet"].write_text(https.REDIRECT_CONTENT)
p["https_snippet"].write_text("furtka.local { tls internal }\n")
result = https.set_force_https(False, reload_caddy=lambda: None, **p)
assert result is False assert result is False
assert not snippet.exists() assert not p["snippet"].exists()
assert not p["https_snippet"].exists()
def test_set_force_disable_is_idempotent_when_already_off(tmp_path): def test_set_force_disable_is_idempotent_when_already_off(tmp_path):
snippet_dir = tmp_path / "furtka.d" p = _paths(tmp_path)
snippet = snippet_dir / "redirect.caddyfile" result = https.set_force_https(False, reload_caddy=lambda: None, **p)
result = https.set_force_https(
False, snippet_dir=snippet_dir, snippet=snippet, reload_caddy=lambda: None
)
assert result is False assert result is False
assert not snippet.exists() assert not p["snippet"].exists()
assert not p["https_snippet"].exists()
def test_reload_failure_rolls_back_enable(tmp_path): def test_reload_failure_rolls_back_enable(tmp_path):
snippet_dir = tmp_path / "furtka.d" _prepare_hostname(tmp_path)
    p = _paths(tmp_path)

    def failing_reload():
        raise subprocess.CalledProcessError(1, ["systemctl"], stderr="bad config")

    with pytest.raises(https.HttpsError, match="caddy reload failed: bad config"):
        https.set_force_https(True, reload_caddy=failing_reload, **p)
    # Rollback: since neither snippet existed before, neither exists after.
    assert not p["snippet"].exists()
    assert not p["https_snippet"].exists()


def test_reload_failure_rolls_back_disable(tmp_path):
    _prepare_hostname(tmp_path)
    p = _paths(tmp_path)
    p["snippet_dir"].mkdir()
    p["https_snippet_dir"].mkdir()
    original_redirect = "redir https://{host}{uri} permanent\n# marker\n"
    original_https = "# old https block\nfurtka.local { tls internal }\n"
    p["snippet"].write_text(original_redirect)
    p["https_snippet"].write_text(original_https)

    def failing_reload():
        raise subprocess.CalledProcessError(1, ["systemctl"], stderr="bad config")

    with pytest.raises(https.HttpsError):
        https.set_force_https(False, reload_caddy=failing_reload, **p)
    # Rollback: both snippets are restored to their exact prior contents.
    assert p["snippet"].read_text() == original_redirect
    assert p["https_snippet"].read_text() == original_https


def test_systemctl_missing_raises_and_rolls_back(tmp_path):
    _prepare_hostname(tmp_path)
    p = _paths(tmp_path)

    def missing_systemctl():
        raise FileNotFoundError(2, "No such file", "systemctl")

    with pytest.raises(https.HttpsError, match="systemctl not available"):
        https.set_force_https(True, reload_caddy=missing_systemctl, **p)
    assert not p["snippet"].exists()
    assert not p["https_snippet"].exists()


def test_redirect_snippet_content_is_caddy_redir_directive():
    # Lock the exact directive. A regression here silently stops the
    # redirect from taking effect even though the file-swap looks fine.
    assert https.REDIRECT_CONTENT.strip() == "redir https://{host}{uri} permanent"
def test_https_snippet_content_has_tls_internal_and_routes(tmp_path):
# Lock the shape of the opt-in HTTPS listener block. Caddy parses
# this verbatim — changing the shape without updating the test
# risks shipping a silently-broken Caddyfile import.
s = https._https_snippet_content("mybox")
assert "mybox.local, mybox {" in s
assert "\ttls internal" in s
assert "\timport furtka_routes" in s
assert s.endswith("}\n")
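The four assertions above pin down the exact shape of the opt-in listener block. A minimal sketch of a builder that satisfies them (the function name `https_snippet_content` and the exact comment lines are assumptions; only the asserted shape — hostname pair, tab-indented `tls internal` and `import furtka_routes`, trailing `}\n` — comes from the tests):

```python
def https_snippet_content(hostname: str) -> str:
    """Build the opt-in HTTPS listener block that the /settings toggle
    drops into /etc/caddy/furtka-https.d/ (hypothetical sketch)."""
    return (
        f"{hostname}.local, {hostname} {{\n"
        "\ttls internal\n"          # Caddy's internal CA issues the leaf cert
        "\timport furtka_routes\n"  # reuse the shared routes snippet
        "}\n"
    )

s = https_snippet_content("mybox")
assert "mybox.local, mybox {" in s
assert s.endswith("}\n")
```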


@ -0,0 +1,177 @@
"""Tests for the background app-install runner.
Same shape as test_catalog.py / test_updater.py: fixture reloads the
module with env-overridden paths, dockerops calls are stubbed so nothing
touches a real daemon. Asserts that state transitions happen in the
right order and that exceptions flip the state to "error" with the
message before re-raising.
"""
from __future__ import annotations
import json
from pathlib import Path
import pytest
@pytest.fixture
def runner(tmp_path, monkeypatch):
apps = tmp_path / "apps"
apps.mkdir()
monkeypatch.setenv("FURTKA_APPS_DIR", str(apps))
monkeypatch.setenv("FURTKA_INSTALL_STATE", str(tmp_path / "install-state.json"))
monkeypatch.setenv("FURTKA_INSTALL_LOCK", str(tmp_path / "install.lock"))
import importlib
from furtka import install_runner as r
from furtka import paths as p
importlib.reload(p)
importlib.reload(r)
return r
def _write_installed_app(apps_dir: Path, name: str = "fileshare"):
app = apps_dir / name
app.mkdir()
manifest = {
"name": name,
"display_name": "Fileshare",
"version": "0.1.0",
"description": "Test fixture",
"volumes": ["files"],
"ports": [445],
"icon": "icon.svg",
}
(app / "manifest.json").write_text(json.dumps(manifest))
(app / "docker-compose.yaml").write_text("services: {}\n")
return app
def test_write_and_read_state_round_trip(runner):
runner.write_state("pulling_image", app="jellyfin")
s = runner.read_state()
assert s["stage"] == "pulling_image"
assert s["app"] == "jellyfin"
assert "updated_at" in s
def test_read_state_returns_empty_when_missing(runner):
assert runner.read_state() == {}
def test_read_state_returns_empty_on_junk(runner):
runner.state_path().parent.mkdir(parents=True, exist_ok=True)
runner.state_path().write_text("{not json")
assert runner.read_state() == {}
def test_acquire_lock_prevents_concurrent_runs(runner):
held = runner.acquire_lock()
try:
with pytest.raises(runner.InstallRunnerError, match="in progress"):
runner.acquire_lock()
finally:
held.close()
def test_run_install_happy_path(runner, monkeypatch):
import furtka.dockerops as dockerops
from furtka.paths import apps_dir
_write_installed_app(apps_dir(), "fileshare")
calls = []
monkeypatch.setattr(dockerops, "compose_pull", lambda *a, **k: calls.append(("pull", a)))
monkeypatch.setattr(dockerops, "ensure_volume", lambda name: calls.append(("vol", name)))
monkeypatch.setattr(dockerops, "compose_up", lambda *a, **k: calls.append(("up", a)))
runner.run_install("fileshare")
# Ordering: pull first, then volumes, then up.
assert [c[0] for c in calls] == ["pull", "vol", "up"]
# Exactly the namespaced volume name got created.
assert calls[1] == ("vol", "furtka_fileshare_files")
# Final state is "done" with the manifest version.
s = runner.read_state()
assert s["stage"] == "done"
assert s["app"] == "fileshare"
assert s["version"] == "0.1.0"
def test_run_install_writes_error_on_pull_failure(runner, monkeypatch):
import furtka.dockerops as dockerops
from furtka.paths import apps_dir
_write_installed_app(apps_dir(), "fileshare")
def boom(*a, **k):
raise dockerops.DockerError("pull failed: registry unreachable")
monkeypatch.setattr(dockerops, "compose_pull", boom)
monkeypatch.setattr(dockerops, "ensure_volume", lambda name: None)
monkeypatch.setattr(dockerops, "compose_up", lambda *a, **k: None)
with pytest.raises(dockerops.DockerError):
runner.run_install("fileshare")
s = runner.read_state()
assert s["stage"] == "error"
assert s["app"] == "fileshare"
assert "registry unreachable" in s["error"]
def test_run_install_writes_error_on_up_failure(runner, monkeypatch):
import furtka.dockerops as dockerops
from furtka.paths import apps_dir
_write_installed_app(apps_dir(), "fileshare")
monkeypatch.setattr(dockerops, "compose_pull", lambda *a, **k: None)
monkeypatch.setattr(dockerops, "ensure_volume", lambda name: None)
def boom(*a, **k):
raise dockerops.DockerError("compose up: container refused to start")
monkeypatch.setattr(dockerops, "compose_up", boom)
with pytest.raises(dockerops.DockerError):
runner.run_install("fileshare")
s = runner.read_state()
assert s["stage"] == "error"
assert "refused to start" in s["error"]
def test_run_install_releases_lock_after_done(runner, monkeypatch):
import furtka.dockerops as dockerops
from furtka.paths import apps_dir
_write_installed_app(apps_dir(), "fileshare")
monkeypatch.setattr(dockerops, "compose_pull", lambda *a, **k: None)
monkeypatch.setattr(dockerops, "ensure_volume", lambda name: None)
monkeypatch.setattr(dockerops, "compose_up", lambda *a, **k: None)
runner.run_install("fileshare")
# Lock released — a fresh acquire must succeed.
fh = runner.acquire_lock()
fh.close()
def test_run_install_releases_lock_after_error(runner, monkeypatch):
import furtka.dockerops as dockerops
from furtka.paths import apps_dir
_write_installed_app(apps_dir(), "fileshare")
monkeypatch.setattr(
dockerops, "compose_pull", lambda *a, **k: (_ for _ in ()).throw(dockerops.DockerError("x"))
)
with pytest.raises(dockerops.DockerError):
runner.run_install("fileshare")
fh = runner.acquire_lock()
fh.close()
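The runner these tests exercise boils down to a lock, a state file, and three staged docker calls. A condensed, hypothetical model (function signatures are simplified for illustration; the real module reads its paths from `FURTKA_INSTALL_STATE` / `FURTKA_INSTALL_LOCK` and calls `dockerops` directly):

```python
import fcntl
import json
import tempfile
import time
from pathlib import Path

_WORK = Path(tempfile.mkdtemp())          # stand-in for the env-configured paths
STATE = _WORK / "install-state.json"
LOCK = _WORK / "install.lock"

class InstallRunnerError(Exception):
    pass

def write_state(stage: str, **extra) -> None:
    extra.update(stage=stage, updated_at=time.time())
    STATE.write_text(json.dumps(extra))

def acquire_lock():
    # flock conflicts across file descriptors, so a second acquire fails
    # even inside the same process.
    fh = open(LOCK, "w")
    try:
        fcntl.flock(fh, fcntl.LOCK_EX | fcntl.LOCK_NB)
    except OSError:
        fh.close()
        raise InstallRunnerError("an install is already in progress")
    return fh

def run_install(app: str, manifest: dict, pull, ensure_volume, up) -> None:
    fh = acquire_lock()
    try:
        write_state("pulling_image", app=app)
        pull(app)
        for vol in manifest["volumes"]:
            ensure_volume(f"furtka_{app}_{vol}")   # namespaced volume name
        up(app)
        write_state("done", app=app, version=manifest["version"])
    except Exception as exc:
        write_state("error", app=app, error=str(exc))  # record, then re-raise
        raise
    finally:
        fh.close()   # closing the fd releases the flock on every path
```

The `finally: fh.close()` is what the two lock-release tests verify: the lock must drop whether the install finished or blew up mid-pull.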


@ -267,3 +267,91 @@ def test_read_env_values_roundtrip(tmp_path, fake_dirs):
    write_env(p, {"A": "plain", "B": "has space", "C": 'has "quote"', "D": ""})
    values = read_env_values(p)
    assert values == {"A": "plain", "B": "has space", "C": 'has "quote"', "D": ""}
# --- path-type settings ------------------------------------------------------
PATH_MANIFEST = dict(
VALID_MANIFEST,
name="jellyfin",
settings=[
{
"name": "MEDIA_PATH",
"label": "Medienordner",
"type": "path",
"required": True,
}
],
)
OPTIONAL_PATH_MANIFEST = dict(
VALID_MANIFEST,
name="jellyfin",
settings=[{"name": "OPTIONAL_PATH", "label": "Optional", "type": "path", "required": False}],
)
def test_install_with_valid_path_succeeds(tmp_path, fake_dirs):
media = tmp_path / "media"
media.mkdir()
src = _write_app_source(tmp_path, "jellyfin", PATH_MANIFEST)
installer.install_from(src, settings={"MEDIA_PATH": str(media)})
target = apps_dir() / "jellyfin"
assert f"MEDIA_PATH={media}" in (target / ".env").read_text()
def test_install_rejects_nonexistent_path(tmp_path, fake_dirs):
src = _write_app_source(tmp_path, "jellyfin", PATH_MANIFEST)
with pytest.raises(installer.InstallError, match="does not exist"):
installer.install_from(src, settings={"MEDIA_PATH": str(tmp_path / "ghost")})
def test_install_rejects_path_that_is_a_file(tmp_path, fake_dirs):
f = tmp_path / "not-a-dir"
f.write_text("hi")
src = _write_app_source(tmp_path, "jellyfin", PATH_MANIFEST)
with pytest.raises(installer.InstallError, match="is not a directory"):
installer.install_from(src, settings={"MEDIA_PATH": str(f)})
def test_install_rejects_relative_path(tmp_path, fake_dirs):
src = _write_app_source(tmp_path, "jellyfin", PATH_MANIFEST)
with pytest.raises(installer.InstallError, match="absolute path"):
installer.install_from(src, settings={"MEDIA_PATH": "media"})
def test_install_rejects_system_path(tmp_path, fake_dirs):
src = _write_app_source(tmp_path, "jellyfin", PATH_MANIFEST)
with pytest.raises(installer.InstallError, match="system path"):
installer.install_from(src, settings={"MEDIA_PATH": "/etc"})
def test_install_rejects_root_filesystem(tmp_path, fake_dirs):
src = _write_app_source(tmp_path, "jellyfin", PATH_MANIFEST)
with pytest.raises(installer.InstallError, match="system path"):
installer.install_from(src, settings={"MEDIA_PATH": "/"})
def test_install_rejects_deny_list_via_traversal(tmp_path, fake_dirs):
# /mnt/../etc resolves to /etc — must be caught after Path.resolve().
src = _write_app_source(tmp_path, "jellyfin", PATH_MANIFEST)
with pytest.raises(installer.InstallError, match="system path"):
installer.install_from(src, settings={"MEDIA_PATH": "/mnt/../etc"})
def test_install_accepts_empty_optional_path(tmp_path, fake_dirs):
src = _write_app_source(tmp_path, "jellyfin", OPTIONAL_PATH_MANIFEST)
installer.install_from(src, settings={"OPTIONAL_PATH": ""})
target = apps_dir() / "jellyfin"
assert (target / ".env").exists()
def test_update_env_rejects_invalid_path(tmp_path, fake_dirs):
# First install with a valid path.
media = tmp_path / "media"
media.mkdir()
src = _write_app_source(tmp_path, "jellyfin", PATH_MANIFEST)
installer.install_from(src, settings={"MEDIA_PATH": str(media)})
# Then try to update to a bad path.
with pytest.raises(installer.InstallError, match="does not exist"):
installer.update_env("jellyfin", {"MEDIA_PATH": str(tmp_path / "ghost")})
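The validation order these tests imply matters: relative paths are rejected before `resolve()`, the deny-list check runs after `resolve()` (so `/mnt/../etc` is caught), and only then existence and directory-ness are checked. A hypothetical validator matching the asserted error messages (the deny-list contents and the function name are assumptions; the real installer may block more):

```python
from pathlib import Path

class InstallError(Exception):
    pass

# Assumed deny-list; /mnt and other mount roots stay allowed.
DENY = {"/", "/etc", "/boot", "/usr", "/var", "/proc", "/sys", "/dev"}

def validate_path_setting(value: str, required: bool) -> None:
    if not value:
        if required:
            raise InstallError("required path setting is empty")
        return                                  # empty optional path is fine
    if not value.startswith("/"):
        raise InstallError(f"{value!r} is not an absolute path")
    resolved = Path(value).resolve()            # normalizes /mnt/../etc -> /etc
    if str(resolved) in DENY:
        raise InstallError(f"{resolved} is a protected system path")
    if not resolved.exists():
        raise InstallError(f"{resolved} does not exist")
    if not resolved.is_dir():
        raise InstallError(f"{resolved} is not a directory")
```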


@ -95,6 +95,21 @@ def test_settings_optional_default_empty(tmp_path):
    m = load_manifest(path)
    assert m.settings == ()
    assert m.description_long == ""
    assert m.open_url == ""


def test_open_url_stored_when_present(tmp_path):
    payload = dict(VALID_MANIFEST, open_url="smb://{host}/files")
    path = _write_app(tmp_path, "fileshare", payload)
    m = load_manifest(path)
    assert m.open_url == "smb://{host}/files"


def test_open_url_non_string_rejected(tmp_path):
    payload = dict(VALID_MANIFEST, open_url=42)
    path = _write_app(tmp_path, "fileshare", payload)
    with pytest.raises(ManifestError, match="open_url"):
        load_manifest(path)
def test_settings_parsed(tmp_path): def test_settings_parsed(tmp_path):
@ -140,6 +155,27 @@ def test_settings_reject_unknown_type(tmp_path):
        load_manifest(path)


def test_settings_accept_path_type(tmp_path):
    payload = dict(
        VALID_MANIFEST,
        settings=[
            {
                "name": "MEDIA_PATH",
                "label": "Medienordner",
                "description": "Absoluter Pfad zu deinen Medien",
                "type": "path",
                "required": True,
            }
        ],
    )
    path = _write_app(tmp_path, "fileshare", payload)
    m = load_manifest(path)
    assert len(m.settings) == 1
    assert m.settings[0].name == "MEDIA_PATH"
    assert m.settings[0].type == "path"
    assert m.settings[0].required is True
def test_settings_reject_duplicate_name(tmp_path):
    bad = dict(
        VALID_MANIFEST,

tests/test_passwd.py

@ -0,0 +1,74 @@
"""Tests for furtka.passwd — stdlib-only password hashing.
The primary contract: hash/verify roundtrips cleanly, AND the verifier
accepts the werkzeug hash format that 26.11 / 26.12 boxes wrote to
``users.json``. Losing that backward compat would lock out existing
admins after a 26.13+ upgrade.
"""
from __future__ import annotations
from furtka import passwd
def test_hash_roundtrip():
h = passwd.hash_password("hunter2")
assert passwd.verify_password("hunter2", h)
assert not passwd.verify_password("wrong", h)
def test_hash_is_salted():
# Two separate hashes of the same password must diverge.
a = passwd.hash_password("same-pw")
b = passwd.hash_password("same-pw")
assert a != b
assert passwd.verify_password("same-pw", a)
assert passwd.verify_password("same-pw", b)
def test_generated_hash_format():
# Shape is pbkdf2:<hash>:<iter>$<salt>$<hex>
h = passwd.hash_password("x")
parts = h.split("$", 2)
assert len(parts) == 3
method, salt, digest = parts
assert method.startswith("pbkdf2:sha256:")
assert salt
# digest is hex of pbkdf2_hmac sha256 → 64 hex chars
assert len(digest) == 64
assert all(c in "0123456789abcdef" for c in digest)
def test_verify_werkzeug_scrypt_hash():
"""Known werkzeug scrypt hash generated by 26.11 / 26.12 boxes.
Captured live off a .196 test VM after its auth bootstrap:
username=daniel, password=test-admin-pw1
Hash format: scrypt:32768:8:1$<salt>$<hex>
If this regresses, every existing box that upgraded via 26.11 and
set a password gets locked out on the next upgrade.
"""
known = (
"scrypt:32768:8:1$yWZUqJodowt9ieI1$"
"2d1059b3564da7492b4aa3c2be7fff6fef06085e5e1bfd52e897948c58246b7a"
"9603400355b7264f61c4436eba7bf8c947adec3d7a76be03b50efb4227e15a80"
)
assert passwd.verify_password("test-admin-pw1", known)
assert not passwd.verify_password("wrong-password", known)
def test_verify_rejects_malformed_hashes():
# Empty / missing delimiters / unknown method / bad int — all False.
assert not passwd.verify_password("x", "")
assert not passwd.verify_password("x", "nothingspecial")
assert not passwd.verify_password("x", "pbkdf2:sha256:600000") # no $salt$digest
assert not passwd.verify_password("x", "pbkdf2$salt$digest") # missing hash + iter
assert not passwd.verify_password("x", "bcrypt:12$salt$digest") # unsupported algo
assert not passwd.verify_password("x", "pbkdf2:sha256:abc$salt$digest") # bad iter int
def test_verify_rejects_nonstring_inputs():
# Defensive: users.json can be corrupted or have nulls.
assert not passwd.verify_password(None, "pbkdf2:sha256:1000$salt$digest") # type: ignore[arg-type]
assert not passwd.verify_password("x", None) # type: ignore[arg-type]
assert not passwd.verify_password("x", 12345) # type: ignore[arg-type]

tests/test_sources.py

@ -0,0 +1,108 @@
"""Tests for the catalog > bundled resolver."""
from __future__ import annotations
import json
from pathlib import Path
import pytest
def _manifest(name: str = "fileshare") -> dict:
return {
"name": name,
"display_name": "Fileshare",
"version": "0.1.0",
"description": "x",
"volumes": [],
"ports": [],
"icon": "icon.svg",
}
@pytest.fixture
def sources_mod(tmp_path, monkeypatch):
monkeypatch.setenv("FURTKA_CATALOG_DIR", str(tmp_path / "catalog"))
monkeypatch.setenv("FURTKA_BUNDLED_APPS_DIR", str(tmp_path / "bundled"))
import importlib
from furtka import paths as p
from furtka import sources as s
importlib.reload(p)
importlib.reload(s)
return s
def _seed_app(root: Path, name: str, manifest: dict | None = None) -> Path:
folder = root / name
folder.mkdir(parents=True)
(folder / "manifest.json").write_text(json.dumps(manifest or _manifest(name)))
return folder
def test_resolve_app_name_returns_none_when_absent(sources_mod):
assert sources_mod.resolve_app_name("nope") is None
def test_resolve_app_name_prefers_catalog_over_bundled(sources_mod, tmp_path):
_seed_app(tmp_path / "catalog" / "apps", "fileshare")
_seed_app(tmp_path / "bundled", "fileshare")
result = sources_mod.resolve_app_name("fileshare")
assert result is not None
assert result.origin == "catalog"
assert result.path.parent.name == "apps"
assert result.path.parent.parent.name == "catalog"
def test_resolve_app_name_falls_back_to_bundled(sources_mod, tmp_path):
_seed_app(tmp_path / "bundled", "fileshare")
result = sources_mod.resolve_app_name("fileshare")
assert result is not None
assert result.origin == "bundled"
def test_resolve_app_name_ignores_folder_without_manifest(sources_mod, tmp_path):
# Empty folder is not a valid app even if the name matches.
(tmp_path / "catalog" / "apps" / "fileshare").mkdir(parents=True)
_seed_app(tmp_path / "bundled", "fileshare")
result = sources_mod.resolve_app_name("fileshare")
# Catalog entry without manifest is skipped; bundled wins.
assert result.origin == "bundled"
def test_list_available_unions_catalog_and_bundled(sources_mod, tmp_path):
_seed_app(tmp_path / "catalog" / "apps", "fileshare")
_seed_app(tmp_path / "bundled", "otherapp")
names = {s.path.name: s.origin for s in sources_mod.list_available()}
assert names == {"fileshare": "catalog", "otherapp": "bundled"}
def test_list_available_catalog_wins_on_collision(sources_mod, tmp_path):
_seed_app(tmp_path / "catalog" / "apps", "fileshare")
_seed_app(tmp_path / "bundled", "fileshare")
entries = sources_mod.list_available()
assert len(entries) == 1
assert entries[0].origin == "catalog"
def test_list_available_empty_when_neither_exists(sources_mod):
assert sources_mod.list_available() == []
def test_list_available_skips_non_dirs_and_no_manifest(sources_mod, tmp_path):
# A plain file in catalog/apps and an empty dir in bundled — both ignored.
cat_root = tmp_path / "catalog" / "apps"
cat_root.mkdir(parents=True)
(cat_root / "not-a-dir.txt").write_text("x")
(tmp_path / "bundled" / "emptyapp").mkdir(parents=True)
_seed_app(tmp_path / "bundled", "realapp")
entries = sources_mod.list_available()
assert [e.path.name for e in entries] == ["realapp"]
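The resolution rules these tests fix in place are small: catalog apps live under `<catalog>/apps/<name>`, bundled apps directly under `<bundled>/<name>`, a folder only counts when it carries a `manifest.json`, and catalog wins every collision. A hypothetical sketch (the real module reads its roots from `FURTKA_CATALOG_DIR` / `FURTKA_BUNDLED_APPS_DIR` instead of taking them as arguments):

```python
from dataclasses import dataclass
from pathlib import Path

@dataclass
class Source:
    origin: str   # "catalog" or "bundled"
    path: Path

def _is_app(folder: Path) -> bool:
    # A folder only counts as an app when it carries a manifest.
    return folder.is_dir() and (folder / "manifest.json").is_file()

def resolve_app_name(name: str, catalog_dir: Path, bundled_dir: Path):
    # Catalog is consulted first; a manifest-less catalog folder is
    # skipped so bundled can still win.
    for origin, root in (("catalog", catalog_dir / "apps"), ("bundled", bundled_dir)):
        candidate = root / name
        if _is_app(candidate):
            return Source(origin, candidate)
    return None

def list_available(catalog_dir: Path, bundled_dir: Path):
    seen: dict[str, Source] = {}
    for origin, root in (("catalog", catalog_dir / "apps"), ("bundled", bundled_dir)):
        if not root.is_dir():
            continue
        for folder in sorted(root.iterdir()):
            if _is_app(folder) and folder.name not in seen:  # catalog wins
                seen[folder.name] = Source(origin, folder)
    return list(seen.values())
```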


@ -215,9 +215,7 @@ def test_refresh_caddyfile_substitutes_hostname_placeholder(updater, tmp_path):
    # this the named-hostname :443 block ships with a literal
    # `__FURTKA_HOSTNAME__` and Caddy refuses to load the config.
    src = tmp_path / "src"
    src.write_text("__FURTKA_HOSTNAME__.local, __FURTKA_HOSTNAME__ {\n\ttls internal\n}\n")
    assert updater._refresh_caddyfile(src) is True
    live = updater._CADDYFILE_LIVE.read_text()
    assert "testbox.local, testbox {" in live
@ -226,6 +224,76 @@
    assert updater._refresh_caddyfile(src) is False
def test_health_check_treats_4xx_as_healthy(updater, monkeypatch):
"""26.11+ auth makes /api/apps return 401 on unauth requests. If the
health check treated that as "down", every pre-auth auth upgrade
auto-rolls back. Server responding at all is enough signal for the
health check."""
import urllib.error
calls = {"n": 0}
class _FakeResp:
def __init__(self, code):
self.status = code
def __enter__(self):
return self
def __exit__(self, *a):
return False
def raising_401(url, timeout):
calls["n"] += 1
raise urllib.error.HTTPError(url, 401, "Unauthorized", {}, None)
monkeypatch.setattr("urllib.request.urlopen", raising_401)
assert updater._health_check("http://127.0.0.1:7000/api/apps", deadline_s=2.0) is True
# One call was enough — early exit on 4xx, no retry loop.
assert calls["n"] == 1
def test_health_check_rejects_5xx(updater, monkeypatch):
"""500s mean the server is up but broken — that's NOT healthy.
Distinguishes auth refusals (4xx = healthy) from real runtime
errors (5xx = unhealthy, roll back)."""
import urllib.error
def raising_500(url, timeout):
raise urllib.error.HTTPError(url, 500, "Internal Server Error", {}, None)
monkeypatch.setattr("urllib.request.urlopen", raising_500)
assert updater._health_check("http://127.0.0.1:7000/api/apps", deadline_s=1.5) is False
def test_health_check_retries_on_connection_refused(updater, monkeypatch):
"""While furtka-api is still starting, urlopen raises URLError.
The loop must keep polling until the server comes up or deadline."""
import urllib.error
calls = {"n": 0}
def flaky(url, timeout):
calls["n"] += 1
if calls["n"] < 3:
raise urllib.error.URLError("connection refused")
class _Resp:
status = 200
def __enter__(self):
return self
def __exit__(self, *a):
return False
return _Resp()
monkeypatch.setattr("urllib.request.urlopen", flaky)
assert updater._health_check("http://127.0.0.1:7000/api/apps", deadline_s=10.0) is True
assert calls["n"] == 3
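Taken together, the three health-check tests pin down a small retry policy: any HTTP answer below 500 (including 401/404 auth refusals) proves the server is alive and exits immediately, 5xx means up-but-broken, and connection errors are retried until the deadline. A hypothetical sketch of that loop (poll interval, urlopen timeout, and the injectable `_now`/`_sleep` hooks are assumptions for testability):

```python
import time
import urllib.error
import urllib.request

def health_check(url: str, deadline_s: float,
                 _now=time.monotonic, _sleep=time.sleep) -> bool:
    deadline = _now() + deadline_s
    while True:
        try:
            with urllib.request.urlopen(url, timeout=2) as resp:
                if resp.status < 500:
                    return True          # 2xx/3xx: healthy
        except urllib.error.HTTPError as err:   # must precede URLError (subclass)
            if err.code < 500:
                return True              # 4xx: server answered, that's enough
            # 5xx: up but broken; keep polling in case it recovers
        except urllib.error.URLError:
            pass                         # still starting: connection refused
        if _now() >= deadline:
            return False
        _sleep(0.5)
```

Note the `except` ordering: `HTTPError` subclasses `URLError`, so catching `URLError` first would swallow the status-code path entirely.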
def test_current_hostname_falls_back_when_file_missing(updater, monkeypatch, tmp_path):
    monkeypatch.setenv("FURTKA_HOSTNAME_FILE", str(tmp_path / "missing"))
    import importlib
@ -248,17 +316,25 @@ def test_link_new_units_only_links_missing(updater, tmp_path, monkeypatch):
    linked = updater._link_new_units(unit_dir)
    assert linked == ["furtka-bar.timer"]
    # Two calls for the newly-linked timer: systemctl link + systemctl enable.
    # The already-linked service is untouched. Timers need the follow-up
    # `enable` so self-updates that introduce new timers don't leave them
    # dormant — fresh installs get their enable via the webinstaller.
    assert len(seen) == 2
    assert seen[0][:2] == ["systemctl", "link"]
    assert seen[0][2].endswith("furtka-bar.timer")
    assert seen[1] == ["systemctl", "enable", "furtka-bar.timer"]


def test_extract_tarball_uses_data_filter_when_available(tmp_path, updater, monkeypatch):
    # Confirm we pass filter='data' to extractall on Python 3.12+; fall back
    # cleanly on older runtimes. Capture the kwarg via a stub. tarfile lives
    # in furtka._release_common after the extraction refactor, so we patch
    # that module — updater._extract_tarball delegates there.
    from furtka import _release_common as _rc

    calls = []
    real_open = _rc.tarfile.open  # capture before monkeypatching

    class _Recorder:
        def __init__(self, tarball):
@ -283,7 +359,7 @@ def test_extract_tarball_uses_data_filter_when_available(tmp_path, updater, monk
    tar = tmp_path / "t.tar.gz"
    _make_release_tarball(tar, "26.9-alpha")
    monkeypatch.setattr(_rc.tarfile, "open", lambda *a, **kw: _Recorder(tar))
    dest = tmp_path / "dest"
    updater._extract_tarball(tar, dest)


@ -54,7 +54,7 @@ def install_cmds(tmp_path, monkeypatch):
    fake = tmp_path / "payload.tar.gz"
    fake.write_bytes(b"not a real tarball")
    monkeypatch.setattr(app, "RESOURCE_MANAGER_PAYLOAD", fake)
    return app._post_install_commands("testhost", "daniel", "test-admin-pw")


@pytest.mark.parametrize("target,asset_relpath", ASSET_TARGETS)
@ -122,19 +122,39 @@ def test_caddyfile_asset_serves_from_current():
    assert "root * /var/lib/furtka" in caddy


def _strip_caddy_comments(text: str) -> str:
    """Remove # comments + blank lines so string-match assertions can
    target actual Caddyfile directives, not the leading doc block.

    Comment intro is ``#`` at start-of-line or preceded by whitespace."""
    out = []
    for line in text.splitlines():
        stripped = line.split("#", 1)[0].rstrip()
        if stripped:
            out.append(stripped)
    return "\n".join(out)


def test_caddyfile_serves_http_by_default_https_opt_in():
    # 26.15-alpha: HTTPS is opt-in. The default Caddyfile has a :80 block
    # and imports /etc/caddy/furtka-https.d/*.caddyfile at top level —
    # the /settings HTTPS toggle drops the hostname+tls-internal block
    # into that dir when the user explicitly enables HTTPS. Default
    # Caddyfile therefore contains no `tls internal` directive anywhere;
    # if a future refactor puts it back, every fresh install regresses
    # to the 26.14-era BAD_SIGNATURE trap. Strip comments first because
    # the doc-block DOES mention `tls internal` in prose.
    caddy_full = (ASSETS / "Caddyfile").read_text()
    caddy = _strip_caddy_comments(caddy_full)
    assert ":80 {" in caddy
    assert "tls internal" not in caddy
    assert "__FURTKA_HOSTNAME__" not in caddy
    assert "import /etc/caddy/furtka-https.d/*.caddyfile" in caddy
    # Shared routes still live in a named snippet so the HTTPS toggle's
    # snippet can import the same routes without duplication.
    assert "(furtka_routes)" in caddy
    # Default Caddyfile imports it once (inside :80). The HTTPS snippet,
    # when written by the toggle, imports it a second time.
    assert caddy.count("import furtka_routes") == 1


def test_caddyfile_disables_caddy_auto_redirects():
@ -167,18 +187,28 @@ def test_caddyfile_exposes_root_ca_download():
    assert "attachment; filename=furtka-local-rootCA.crt" in caddy


def test_post_install_writes_caddyfile_without_hostname_placeholder(install_cmds):
    # 26.15-alpha: the shipped Caddyfile no longer carries the
    # __FURTKA_HOSTNAME__ marker — HTTPS + hostname now live in the
    # opt-in snippet written by set_force_https(), not in the base
    # Caddyfile. Verify the post-install writes the file as-is (no
    # substitution expected) and it has the opt-in import glob.
    caddyfile_cmd = next((c for c in install_cmds if " > /etc/caddy/Caddyfile" in c), None)
    assert caddyfile_cmd is not None
    written_full = _extract_written_content(caddyfile_cmd, "/etc/caddy/Caddyfile")
    written = _strip_caddy_comments(written_full)
    assert "__FURTKA_HOSTNAME__" not in written
    assert "import /etc/caddy/furtka-https.d/*.caddyfile" in written
    assert "tls internal" not in written


def test_post_install_creates_https_snippet_dir(install_cmds):
    # The top-level HTTPS opt-in snippet dir must exist before Caddy's
    # first start — its glob import tolerates an empty directory, but
    # not a missing one on older Caddy builds. Parallel guarantee to
    # test_post_install_creates_furtka_d_snippet_dir below.
    matching = [c for c in install_cmds if "/etc/caddy/furtka-https.d" in c and "install -d" in c]
    assert matching, "no install -d command creates /etc/caddy/furtka-https.d"


def test_post_install_creates_furtka_d_snippet_dir(install_cmds):
@ -204,3 +234,28 @@ def test_read_asset_raises_for_missing_file():


def test_assets_dir_resolves_to_repo_tree():
    assert app._ASSETS_DIR == ASSETS
def test_post_install_writes_users_json_with_hashed_password(install_cmds):
"""The Furtka-admin users.json is created during the chroot post-install.
Without this, a fresh-install box lands at /login in first-run setup
mode and the user has to go through the browser to set a password,
which defeats the "step-1 password works for everything" design. Also
check that the file is chmod 0600 (the PBKDF2 hash is a secret even
if it's slow to crack).
"""
import json as _json
from werkzeug.security import check_password_hash
users_cmd = next((c for c in install_cmds if " > /var/lib/furtka/users.json" in c), None)
assert users_cmd is not None, "no command writes /var/lib/furtka/users.json"
assert "chmod 600" in users_cmd, "users.json must be chmod 0600"
body = _extract_written_content(users_cmd, "/var/lib/furtka/users.json")
parsed = _json.loads(body)
assert "admin" in parsed
assert parsed["admin"]["username"] == "daniel" # matches fixture
# Hash is a real werkzeug hash, not the plaintext password.
assert parsed["admin"]["hash"] != "test-admin-pw"
assert check_password_hash(parsed["admin"]["hash"], "test-admin-pw")


@@ -8,6 +8,7 @@ import os
 import re
 import subprocess
 import sys
+from datetime import UTC
 from pathlib import Path
 from drives import list_scored_devices
@@ -49,6 +50,7 @@ FURTKA_VERSION = _resolve_version()
 def _inject_version():
     return {"furtka_version": FURTKA_VERSION}
 LANGUAGES = {
     "en": {"locale": "en_US.UTF-8", "label": "English", "keyboard": "us"},
     "de": {"locale": "de_DE.UTF-8", "label": "Deutsch", "keyboard": "de"},
@@ -262,6 +264,10 @@ _FURTKA_UNITS = (
     "furtka-status.service",
     "furtka-status.timer",
     "furtka-welcome.service",
+    # Daily apps-catalog pull. Timer drives the service; the .service itself
+    # is oneshot and also callable ad-hoc via `furtka catalog sync`.
+    "furtka-catalog-sync.service",
+    "furtka-catalog-sync.timer",
 )
@@ -343,7 +349,35 @@ def _furtka_json_cmd(hostname):
     )
-def _post_install_commands(hostname):
+def _users_json_cmd(username, password):
+    """Write /var/lib/furtka/users.json with the admin account hashed.
+
+    The core furtka-api reads this file on every login attempt; the
+    auth.py module treats `admin.username` + `admin.hash` as the only
+    credential. Hashing happens here in the webinstaller (werkzeug is a
+    Flask transitive dep, so it's already installed in this environment);
+    the chroot doesn't need pip. Mode 0600 so nobody but root on the
+    installed box can read the PBKDF2 hash.
+    """
+    from datetime import datetime
+    from werkzeug.security import generate_password_hash
+
+    users = {
+        "admin": {
+            "username": username,
+            "hash": generate_password_hash(password),
+            "created_at": datetime.now(UTC).isoformat(timespec="seconds"),
+        }
+    }
+    return _write_file_cmd(
+        "/var/lib/furtka/users.json",
+        json.dumps(users, indent=2) + "\n",
+        mode="600",
+    )
+
+def _post_install_commands(hostname, admin_username, admin_password):
     # nss-mdns: splice `mdns_minimal [NOTFOUND=return]` before `resolve` on
     # the hosts line so `*.local` works from the installed system too. Guarded
     # so a re-run (or a future Arch default that already ships mdns) is a
@@ -361,6 +395,14 @@ def _post_install_commands(hostname):
         # an empty dir but not a missing one on every Caddy version, so we
         # create it up front and stay on the safe side.
         "install -d -m 0755 -o root -g root /etc/caddy/furtka.d",
+        # Parallel dir for the top-level HTTPS-listener snippet, written
+        # by /api/furtka/https/force (26.15-alpha+) when the user opts
+        # into HTTPS. Empty by default so fresh installs never generate
+        # a tls internal cert — that was the 26.14 regression where
+        # Firefox hit an unbypassable SEC_ERROR_BAD_SIGNATURE because
+        # Caddy's fixed intermediate CN clashed with any cached trust
+        # from a previously reinstalled Furtka box.
+        "install -d -m 0755 -o root -g root /etc/caddy/furtka-https.d",
         # The Caddyfile lives at /etc/caddy/Caddyfile per Caddy's convention
         # (systemd unit points there). Content comes from the shipped asset,
         # which we copy in at install time so updates that change routing
@@ -384,6 +426,12 @@ def _post_install_commands(hostname):
         # furtka.json depends on /opt/furtka/current/VERSION, so it has to
         # run after the resource-manager commands.
         _furtka_json_cmd(hostname),
+        # Admin account for the Furtka web UI. Hashed here (werkzeug is
+        # already in scope for the Flask webinstaller) and materialised
+        # into /var/lib/furtka/users.json at mode 0600 on the target
+        # partition — the installed core's auth.py picks it up on first
+        # login.
+        _users_json_cmd(admin_username, admin_password),
     ]
@@ -442,7 +490,7 @@ def build_archinstall_config(s):
         # page, status timer, and welcome banner into place.
         "custom_commands": [
             f"gpasswd -a {s['username']} docker",
-            *_post_install_commands(s["hostname"]),
+            *_post_install_commands(s["hostname"], s["username"], s["password"]),
         ],
         "network_config": {"type": "iso"},
         "ssh": True,
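For readers without werkzeug at hand: its default `generate_password_hash` output has the shape `pbkdf2:sha256:<iterations>$<salt>$<hexdigest>`. A stdlib-only sketch of producing and checking that shape (helper names are ours; the installer itself calls werkzeug directly):

```python
import base64
import hashlib
import hmac
import os


def make_hash(password: str, iterations: int = 600_000) -> str:
    # Same shape as werkzeug's default: "pbkdf2:sha256:<n>$<salt>$<hexdigest>".
    salt = base64.b16encode(os.urandom(8)).decode().lower()
    dk = hashlib.pbkdf2_hmac("sha256", password.encode(), salt.encode(), iterations)
    return f"pbkdf2:sha256:{iterations}${salt}${dk.hex()}"


def check_hash(stored: str, password: str) -> bool:
    # Parse "<method>$<salt>$<hexdigest>", recompute, compare in constant time.
    method, salt, digest = stored.split("$", 2)
    _, algo, iters = method.split(":")
    dk = hashlib.pbkdf2_hmac(algo, password.encode(), salt.encode(), int(iters))
    return hmac.compare_digest(dk.hex(), digest)
```

This is also why mode 0600 matters less than it might seem but is still right: PBKDF2 is slow to brute-force, but there is no reason to hand the hash to non-root users at all.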


@@ -1,6 +1,41 @@
 import subprocess
+
+def _boot_disk_name():
+    """Return the parent disk name of the live-ISO boot media (e.g. "sdb"), or None.
+
+    On a normal box `/run/archiso/bootmnt` does not exist and we return None,
+    leaving the device list untouched. On bare metal booted from USB this is
+    the stick we booted from; we want to filter it out so the user can't
+    accidentally pick it as the install target.
+    """
+    try:
+        result = subprocess.run(
+            ["findmnt", "-no", "SOURCE", "/run/archiso/bootmnt"],
+            capture_output=True,
+            text=True,
+        )
+    except FileNotFoundError:
+        return None
+    if result.returncode != 0:
+        return None
+    partition = result.stdout.strip()
+    if not partition:
+        return None
+    try:
+        parent = subprocess.run(
+            ["lsblk", "-no", "PKNAME", partition],
+            capture_output=True,
+            text=True,
+        )
+    except FileNotFoundError:
+        return None
+    if parent.returncode != 0:
+        return None
+    name = parent.stdout.strip().splitlines()[0] if parent.stdout.strip() else ""
+    return name or None
+
 def _smart_status(device):
     try:
         result = subprocess.run(
@@ -75,11 +110,14 @@ def score_device(device, size_gb):
     return get_drive_type_score(device) + get_drive_health(device) + get_size_score(size_gb)
-def parse_lsblk_output(output):
+def parse_lsblk_output(output, boot_disk=None):
     """Parse `lsblk -dn -o NAME,SIZE,TYPE` output into scored device dicts.
-    Keeps only TYPE=disk so the live ISO's own squashfs (loop) and the boot
-    CD-ROM (rom) don't show up as install targets.
+    Keeps only TYPE=disk so the live ISO's own squashfs (loop) and the boot
+    CD-ROM (rom) don't show up as install targets. If `boot_disk` is given,
+    that disk is also dropped; it's the USB stick the live ISO booted from
+    on bare metal, where it appears as TYPE=disk and would otherwise be a
+    valid-looking install target.
     """
     devices = []
     for line in output.strip().split("\n"):
@@ -91,6 +129,8 @@ def parse_lsblk_output(output):
         name, size, dev_type = parts[0], parts[1], parts[2]
         if dev_type != "disk":
             continue
+        if boot_disk and name == boot_disk:
+            continue
         device = f"/dev/{name}"
         size_gb = parse_size_gb(size)
         status = _smart_status(device)
@@ -120,7 +160,7 @@ def list_scored_devices():
     except subprocess.CalledProcessError as e:
         print(f"Error listing devices: {e}")
         return []
-    return parse_lsblk_output(result.stdout)
+    return parse_lsblk_output(result.stdout, boot_disk=_boot_disk_name())
 def main():


@@ -6,6 +6,8 @@
 {% block content %}
 <h1>Rebooting…</h1>
 <p class="lede">The machine is restarting. This page will stop responding in a moment — that's expected.</p>
+<p><strong>Remove the USB stick now</strong> — if it's still plugged in when the machine reboots, some BIOS setups will boot into this installer again instead of starting Furtka.</p>
+<p class="muted">If the installer does come back anyway, your BIOS is set to boot from USB before the disk. Press the one-time boot menu key at startup (often <kbd>F11</kbd>, <kbd>F12</kbd>, or <kbd>Esc</kbd> — it flashes briefly on screen) and pick the internal disk, or change the boot order in BIOS settings.</p>
 <p>When the machine comes back up (~1 minute), open Furtka in your browser:</p>
 <p><a href="http://{{ hostname }}.local" class="btn btn-primary">http://{{ hostname }}.local</a></p>
 <p class="muted">If that doesn't resolve, your network may not support mDNS — use the IP address shown on the machine's console instead.</p>


@@ -6,6 +6,10 @@
   --accent: #c03a28;
   --accent-hover: #a0301f;
   --border: #e4e3dc;
+  --accent-glow: rgba(192, 58, 40, 0.2);
+  --card-bg: rgba(247, 246, 243, 0.72);
+  --card-border: var(--border);
+  --scene-opacity: 0.18;
   --font-sans:
     -apple-system, BlinkMacSystemFont, "Segoe UI", Roboto, "Helvetica Neue",
     Arial, "Noto Sans", sans-serif;
@@ -23,6 +27,10 @@
     --accent: #ff6b56;
     --accent-hover: #ff8b78;
     --border: #232326;
+    --accent-glow: rgba(255, 107, 86, 0.4);
+    --card-bg: rgba(23, 23, 26, 0.65);
+    --card-border: #26262b;
+    --scene-opacity: 0.34;
   }
 }
@@ -43,6 +51,25 @@ body {
   flex-direction: column;
   min-height: 100vh;
   text-rendering: optimizeLegibility;
+  isolation: isolate;
 }
+
+/* ── Animated background canvas (home only) ─────────────── */
+.scene-canvas {
+  position: fixed;
+  inset: 0;
+  width: 100vw;
+  height: 100vh;
+  z-index: 0;
+  pointer-events: none;
+}
+.site-header,
+main.container,
+.site-footer {
+  position: relative;
+  z-index: 1;
+}
 .container {
@@ -171,11 +198,36 @@ main.container {
 .home h1 {
   font-family: var(--font-sans);
   font-weight: 800;
-  font-size: clamp(3.25rem, 10vw, 6.5rem);
-  line-height: 0.95;
-  letter-spacing: -0.035em;
+  font-size: clamp(3.5rem, 14vw, 11rem);
+  line-height: 0.9;
+  letter-spacing: -0.04em;
   margin: 0 0 1.5rem;
   color: var(--fg);
+  background-image: linear-gradient(180deg, var(--fg) 0%, var(--accent) 110%);
+  -webkit-background-clip: text;
+  background-clip: text;
+  -webkit-text-fill-color: transparent;
+}
+
+@media (prefers-color-scheme: dark) {
+  .home h1 {
+    filter: drop-shadow(0 0 28px var(--accent-glow));
+  }
+  .home .lede {
+    color: #c8c8cc;
+  }
+}
+
+.hero {
+  min-height: 78vh;
+  display: flex;
+  flex-direction: column;
+  justify-content: center;
+  padding-block: 4.5rem 3rem;
+}
+
+.home .lede {
+  font-weight: 450;
 }
 .home .lede {
@@ -258,3 +310,132 @@ main.container {
   outline-offset: 3px;
   border-radius: 2px;
 }
+
+/* ── Primary CTA ─────────────────────────────────────────── */
+.cta-row { margin-top: 2.5rem; }
+.cta {
+  display: inline-flex;
+  align-items: center;
+  gap: 0.55rem;
+  padding: 1.1rem 2rem;
+  font-family: var(--font-sans);
+  font-weight: 600;
+  font-size: 1.02rem;
+  letter-spacing: 0.005em;
+  text-decoration: none;
+  border-radius: 0.7rem;
+  transition: transform 180ms, box-shadow 180ms, background 180ms, color 180ms;
+}
+.cta--primary {
+  background: linear-gradient(135deg, var(--accent), var(--accent-hover));
+  color: #fff;
+  box-shadow: 0 10px 36px var(--accent-glow),
+    0 0 0 1px color-mix(in srgb, var(--accent) 40%, transparent);
+  animation: cta-pulse 2.8s ease-in-out infinite;
+}
+.cta--primary:hover {
+  transform: translateY(-3px);
+  box-shadow: 0 18px 52px var(--accent-glow),
+    0 0 0 1px var(--accent);
+  animation-play-state: paused;
+}
+.cta--primary:active { transform: translateY(-1px); }
+.cta--primary span { transition: transform 180ms; }
+.cta--primary:hover span { transform: translateX(4px); }
+
+@keyframes cta-pulse {
+  0%, 100% { box-shadow: 0 10px 36px var(--accent-glow),
+    0 0 0 1px color-mix(in srgb, var(--accent) 40%, transparent); }
+  50% { box-shadow: 0 14px 48px var(--accent-glow),
+    0 0 0 1px color-mix(in srgb, var(--accent) 70%, transparent); }
+}
+@media (prefers-reduced-motion: reduce) {
+  .cta--primary { animation: none; }
+}
+
+/* ── Intro paragraph (home, between hero and feature grids) ─ */
+.intro {
+  max-width: 38rem;
+  margin: 0 0 4rem;
+  font-size: 1.15rem;
+  line-height: 1.55;
+  color: var(--fg);
+}
+.intro p { margin: 0 0 1rem; }
+.intro p:last-child { margin: 0; }
+.intro strong { font-weight: 600; }
+
+/* ── Feature sections (home) ─────────────────────────────── */
+.feature-section { margin-block: 4rem; }
+.section-eyebrow {
+  font-family: var(--font-sans);
+  font-weight: 500;
+  font-size: 0.72rem;
+  letter-spacing: 0.14em;
+  text-transform: uppercase;
+  color: var(--fg-muted);
+  margin: 0 0 1.25rem;
+}
+.feature-grid {
+  display: grid;
+  grid-template-columns: repeat(auto-fit, minmax(17rem, 1fr));
+  gap: 1rem;
+}
+.feature-card {
+  background: var(--card-bg);
+  border: 1px solid var(--card-border);
+  border-radius: 1rem;
+  padding: 1.5rem 1.5rem 1.4rem;
+  -webkit-backdrop-filter: blur(10px);
+  backdrop-filter: blur(10px);
+  transition: transform 240ms, border-color 240ms, box-shadow 240ms;
+}
+.feature-card:hover {
+  border-color: var(--accent);
+  box-shadow: 0 10px 32px var(--accent-glow);
+  transform: translateY(-2px);
+}
+.feature-card p {
+  margin: 0;
+  font-size: 1rem;
+  line-height: 1.55;
+  color: var(--fg);
+}
+.feature-card strong {
+  font-weight: 600;
+  color: var(--fg);
+}
+
+/* ── Closer prose (home, after feature grids) ────────────── */
+.closer {
+  margin-top: 4rem;
+  max-width: var(--measure);
+}
+
+/* ── Reveal-on-load (hero) and reveal-on-scroll (cards) ──── */
+.js .reveal,
+.js [data-gsap="card"] {
+  opacity: 0;
+  transform: translateY(40px);
+  will-change: opacity, transform;
+}
+@media (prefers-reduced-motion: reduce) {
+  .scene-canvas { display: none; }
+  .js .reveal,
+  .js [data-gsap="card"] {
+    opacity: 1 !important;
+    transform: none !important;
+    will-change: auto;
+  }
+}


@@ -0,0 +1,25 @@
+(function () {
+  if (window.matchMedia('(prefers-reduced-motion: reduce)').matches) return;
+  if (!window.gsap || !window.ScrollTrigger || !window.Lenis) return;
+
+  gsap.registerPlugin(ScrollTrigger);
+
+  const lenis = new Lenis({ lerp: 0.1, smoothWheel: true });
+  lenis.on('scroll', ScrollTrigger.update);
+  gsap.ticker.add((time) => { lenis.raf(time * 1000); });
+  gsap.ticker.lagSmoothing(0);
+
+  // Hero stagger — runs once on load.
+  gsap.to('.hero .reveal', {
+    y: 0, opacity: 1, duration: 1.1, ease: 'power3.out', stagger: 0.12
+  });
+
+  // Card reveals — batched so cards in the same row come in together.
+  ScrollTrigger.batch('[data-gsap="card"]', {
+    start: 'top 90%',
+    onEnter: (els) => gsap.to(els, {
+      y: 0, opacity: 1, scale: 1,
+      duration: 0.9, ease: 'power3.out', stagger: 0.08, overwrite: true
+    })
+  });
+})();


@@ -0,0 +1,98 @@
+(function () {
+  if (window.matchMedia('(prefers-reduced-motion: reduce)').matches) return;
+  if (!window.WebGLRenderingContext || !window.THREE) return;
+  const canvas = document.getElementById('scene');
+  if (!canvas) return;
+
+  const root = document.documentElement;
+  const readVar = (name) => getComputedStyle(root).getPropertyValue(name).trim();
+  const readOpacity = () => parseFloat(readVar('--scene-opacity')) || 0.18;
+
+  const scene = new THREE.Scene();
+  const camera = new THREE.PerspectiveCamera(
+    60, window.innerWidth / window.innerHeight, 0.1, 100
+  );
+  const renderer = new THREE.WebGLRenderer({ canvas, antialias: true, alpha: true });
+  renderer.setSize(window.innerWidth, window.innerHeight, false);
+  renderer.setPixelRatio(Math.min(window.devicePixelRatio || 1, 2));
+
+  const geometry = new THREE.TorusKnotGeometry(2.5, 0.4, 130, 20);
+  const material = new THREE.MeshPhongMaterial({
+    color: readVar('--accent') || '#c03a28',
+    wireframe: true,
+    transparent: true,
+    opacity: readOpacity()
+  });
+  const core = new THREE.Mesh(geometry, material);
+  scene.add(core);
+  scene.add(new THREE.AmbientLight(0xffffff, 0.6));
+  const dir = new THREE.DirectionalLight(0xffffff, 0.8);
+  dir.position.set(5, 5, 5);
+  scene.add(dir);
+
+  const BASE_Z = 9;
+  camera.position.z = BASE_Z;
+
+  let scrollY = window.scrollY || 0;
+  window.addEventListener('scroll', () => {
+    scrollY = window.scrollY || 0;
+  }, { passive: true });
+
+  let baseOpacity = readOpacity();
+  let running = true;
+
+  function tick() {
+    if (!running) return;
+    requestAnimationFrame(tick);
+    // Continuous slow drift.
+    core.rotation.y += 0.0015;
+    core.rotation.z += 0.0006;
+    // Scroll-driven motion: zoom in, scale up, tilt.
+    const s = Math.min(scrollY, 2000);
+    camera.position.z = BASE_Z - s * 0.0022;
+    const scale = 1 + s * 0.00035;
+    core.scale.set(scale, scale, scale);
+    core.rotation.x = s * 0.0008;
+    // Fade past the hero so feature cards stay readable.
+    const vh = window.innerHeight;
+    const fadeStart = vh * 0.5;
+    const fadeEnd = vh * 1.4;
+    const t = Math.max(0, Math.min(1, (scrollY - fadeStart) / (fadeEnd - fadeStart)));
+    material.opacity = baseOpacity * (1 - t * 0.92);
+    renderer.render(scene, camera);
+  }
+  tick();
+
+  window.addEventListener('resize', () => {
+    camera.aspect = window.innerWidth / window.innerHeight;
+    camera.updateProjectionMatrix();
+    renderer.setSize(window.innerWidth, window.innerHeight, false);
+  });
+
+  document.addEventListener('visibilitychange', () => {
+    if (document.hidden) {
+      running = false;
+    } else if (!running) {
+      running = true;
+      tick();
+    }
+  });
+
+  const mql = window.matchMedia('(prefers-color-scheme: dark)');
+  const updateTheme = () => {
+    const accent = readVar('--accent');
+    if (accent) material.color.set(accent);
+    baseOpacity = readOpacity();
+  };
+  if (mql.addEventListener) {
+    mql.addEventListener('change', updateTheme);
+  } else if (mql.addListener) {
+    mql.addListener(updateTheme);
+  }
+})();
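The opacity ramp in `tick()` is a clamped linear map of scroll position onto a fade factor. The same arithmetic, transcribed to Python for clarity (function name is illustrative, not part of the repo):

```python
def scene_opacity(scroll_y: float, vh: float, base: float = 0.18) -> float:
    # Fade starts at 0.5 * viewport-height and ends at 1.4 * viewport-height;
    # the result never drops below 8% of the base opacity, matching
    # `baseOpacity * (1 - t * 0.92)` in scene.js.
    fade_start = vh * 0.5
    fade_end = vh * 1.4
    t = max(0.0, min(1.0, (scroll_y - fade_start) / (fade_end - fade_start)))
    return base * (1 - t * 0.92)
```

At the top of the page the knot renders at full base opacity; by roughly one and a half viewports down it has faded to a faint 8% remnant, which is why the feature cards stay readable over it.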

website/assets/js/vendor/PROVENANCE.md vendored Normal file

@@ -0,0 +1,19 @@
+# Vendored JavaScript libraries
+
+These minified bundles are checked into the repo so furtka.org has zero
+third-party-CDN dependencies at runtime. Pin date: **2026-04-27**.
+
+| File | Version | Source |
+|---|---|---|
+| `three.min.js` | r128 | https://cdnjs.cloudflare.com/ajax/libs/three.js/r128/three.min.js |
+| `gsap.min.js` | 3.12.2 (core only) | https://cdnjs.cloudflare.com/ajax/libs/gsap/3.12.2/gsap.min.js |
+| `ScrollTrigger.min.js` | 3.12.2 | https://cdnjs.cloudflare.com/ajax/libs/gsap/3.12.2/ScrollTrigger.min.js |
+| `lenis.min.js` | @studio-freight/lenis 1.0.33 | https://unpkg.com/@studio-freight/lenis@1.0.33/dist/lenis.min.js |
+
+All four expose UMD globals (`THREE`, `gsap`, `ScrollTrigger`, `Lenis`).
+None are ES modules, so no `js.Build` step is needed — Hugo just fingerprints them.
+
+GSAP "Club" plugins (SplitText, MorphSVG, etc.) are **not** free for commercial use.
+Only `gsap` core + `ScrollTrigger` (both standard MIT-style license) are bundled.
+
+To refresh: re-run `curl -sSfL -o <file> <url>` and bump the pin date here.

website/assets/js/vendor/ScrollTrigger.min.js vendored Normal file (diff suppressed: lines too long)
website/assets/js/vendor/gsap.min.js vendored Normal file (diff suppressed: lines too long)
website/assets/js/vendor/lenis.min.js vendored Normal file (diff suppressed: lines too long)
website/assets/js/vendor/three.min.js vendored Normal file (diff suppressed: lines too long)


@ -1,33 +1,33 @@
--- ---
title: "Furtka" title: "Furtka"
description: "Offenes Heimserver-Betriebssystem — einfach genug für alle." description: "Offenes Heimserver-Betriebssystem — einfach genug für alle."
status: "<span class=\"mono\">26.5-alpha</span> — in Arbeit" status: "<span class=\"mono\">26.15-alpha</span> — in Arbeit"
--- # features_today / features_next müssen index-parallel zu content/_index.md bleiben.
intro: |
**Furtka** ist ein offenes Heimserver-Betriebssystem. **Furtka** ist ein offenes Heimserver-Betriebssystem.
USB-Stick einstecken, durch einen Assistenten klicken, und aus jedem USB-Stick einstecken, durch einen Assistenten klicken, und aus jedem
alten Rechner wird eine private Cloud für den Haushalt — mit eigenen alten Rechner wird eine private Cloud für den Haushalt — mit eigenen
Apps, eigenem Namen im Netz, eigenen Daten. Apps, eigenem Namen im Netz, eigenen Daten.
Das Ziel ist einfach: **dein Vater soll das einrichten können.** Das Ziel ist einfach: **dein Vater soll das einrichten können.**
features_today_label: "Was heute schon geht"
### Was als Nächstes kommt features_today:
- Apps für Fotos, Dateien, Smarthome, Spiele-Streaming und Medien - "Vom USB-Stick booten und Furtka auf die Festplatte einrichten"
- Einfachere Sprache im Einrichtungs-Assistenten - "Ein Assistent fragt nach Name, Benutzer und Netzwerk — fertig"
- Sichere Verbindung im Heimnetz (ohne Warnmeldung im Browser) - "Danach: Bedienseite im Browser öffnen"
- Mehrere Server zusammenschalten - "Erste App: **Dateifreigabe im Heimnetz** (alle im WLAN sehen den Ordner)"
- "Apps mit einem Klick installieren und entfernen"
### Was heute schon geht - "Eine installierte App mit einem Klick aktualisieren (holt das neueste Container-Image)"
- Vom USB-Stick booten und Furtka auf die Festplatte einrichten - "Furtka selbst mit einem Klick aktualisieren — keine Neuinstallation mehr für neue Features"
- Ein Assistent fragt nach Name, Benutzer und Netzwerk — fertig features_next_label: "Was als Nächstes kommt"
- Danach: Bedienseite im Browser öffnen features_next:
- Erste App: **Dateifreigabe im Heimnetz** (alle im WLAN sehen den Ordner) - "Apps für Fotos, Dateien, Smarthome, Spiele-Streaming und Medien"
- Apps mit einem Klick installieren und entfernen - "Einfachere Sprache im Einrichtungs-Assistenten"
- Eine installierte App mit einem Klick aktualisieren (holt das neueste Container-Image) - "Sichere Verbindung im Heimnetz (ohne Warnmeldung im Browser)"
- Furtka selbst mit einem Klick aktualisieren — keine Neuinstallation mehr für neue Features - "Mehrere Server zusammenschalten"
---
Wir sind zu zweit und bauen das öffentlich, abends und am Wochenende. Wir sind zu zweit und bauen das öffentlich, abends und am Wochenende.
Es ist früh. Es ist früh.
Mitlesen? Schreib an <a href="mailto:hallo@furtka.org">hallo@furtka.org</a>. Mitlesen? Schreib an <hallo@furtka.org>.


@ -1,33 +1,33 @@
--- ---
title: "Furtka" title: "Furtka"
description: "Open-source home server OS — simple enough for everyone." description: "Open-source home server OS — simple enough for everyone."
status: "<span class=\"mono\">26.5-alpha</span> — work in progress" status: "<span class=\"mono\">26.15-alpha</span> — work in progress"
--- # Keep features_today / features_next index-aligned with content/_index.de.md.
intro: |
**Furtka** is an open-source home server OS. **Furtka** is an open-source home server OS.
Boot from USB, click through a wizard, and any old computer Boot from USB, click through a wizard, and any old computer
turns into a private cloud for your household — with your own apps, turns into a private cloud for your household — with your own apps,
your own name on the network, your own data. your own name on the network, your own data.
The goal is simple: **your dad should be able to set this up.** The goal is simple: **your dad should be able to set this up.**
features_today_label: "What works today"
### What's coming next features_today:
- Apps for photos, files, smart home, game streaming and media - "Boot from USB stick and install Furtka onto the hard drive"
- Plainer language in the setup wizard - "A wizard asks for name, user and network — done"
- Secure connection on your home network (no browser warning) - "Then: open the control page in your browser"
- Linking several servers together - "First app: **file sharing on the home network** (everyone on Wi-Fi sees the folder)"
- "Install and remove apps with one click"
### What works today - "Update an installed app with one click (pulls the newest container image)"
- Boot from USB stick and install Furtka onto the hard drive - "Update Furtka itself with one click — no reinstalling for new features"
- A wizard asks for name, user and network — done features_next_label: "What's coming next"
- Then: open the control page in your browser features_next:
- First app: **file sharing on the home network** (everyone on Wi-Fi sees the folder) - "Apps for photos, files, smart home, game streaming and media"
- Install and remove apps with one click - "Plainer language in the setup wizard"
- Update an installed app with one click (pulls the newest container image) - "Secure connection on your home network (no browser warning)"
- Update Furtka itself with one click — no reinstalling for new features - "Linking several servers together"
---
We're two people building it in public on evenings and weekends. We're two people building it in public on evenings and weekends.
It's early. It's early.
Want to follow along? Write to <a href="mailto:hallo@furtka.org">hallo@furtka.org</a>. Want to follow along? Write to <hallo@furtka.org>.


@@ -6,7 +6,7 @@ enableRobotsTXT = true
 [params]
 description = "Open-source home server OS — simple enough for everyone."
-version = "26.5-alpha"
+version = "26.15-alpha"
 contactEmail = "hallo@furtka.org"
 [markup.goldmark.renderer]


@ -1,13 +1,15 @@
<!DOCTYPE html> <!DOCTYPE html>
<html lang="{{ .Site.Language.Lang }}"> <html lang="{{ .Site.Language.Lang }}" class="no-js">
<head> <head>
{{ partial "head.html" . }} {{ partial "head.html" . }}
</head> </head>
<body> <body>
{{ if .IsHome }}<canvas id="scene" class="scene-canvas" aria-hidden="true"></canvas>{{ end }}
{{ partial "header.html" . }} {{ partial "header.html" . }}
<main class="container"> <main class="container">
{{ block "main" . }}{{ end }} {{ block "main" . }}{{ end }}
</main> </main>
{{ partial "footer.html" . }} {{ partial "footer.html" . }}
{{ if .IsHome }}{{ partial "scripts.html" . }}{{ end }}
</body> </body>
</html> </html>


@@ -2,13 +2,46 @@
 <article class="home">
   <header class="hero">
     {{ with .Params.status }}
-    <p class="status-chip">{{ . | safeHTML }}</p>
+    <p class="status-chip reveal">{{ . | safeHTML }}</p>
     {{ end }}
-    <h1>{{ .Title }}</h1>
-    {{ with site.Params.description }}<p class="lede">{{ . }}</p>{{ end }}
+    <h1 class="reveal">{{ .Title }}</h1>
+    {{ with site.Params.description }}<p class="lede reveal">{{ . }}</p>{{ end }}
+    <p class="cta-row reveal">
+      <a class="cta cta--primary" href="https://forgejo.sourcegate.online/daniel/furtka/releases">
+        {{ if eq site.Language.Lang "de" }}Neuestes Release{{ else }}Latest release{{ end }}
+        <span aria-hidden="true"></span>
+      </a>
+    </p>
   </header>
-  <div class="prose">
-    {{ .Content }}
-  </div>
+  {{ with .Params.intro }}
+  <section class="intro">{{ . | markdownify }}</section>
+  {{ end }}
+  {{ if .Params.features_today }}
+  <section class="feature-section">
+    {{ with .Params.features_today_label }}<p class="section-eyebrow">{{ . }}</p>{{ end }}
+    <div class="feature-grid">
+      {{ range .Params.features_today }}
+      <article class="feature-card" data-gsap="card">{{ . | markdownify }}</article>
+      {{ end }}
+    </div>
+  </section>
+  {{ end }}
+  {{ if .Params.features_next }}
+  <section class="feature-section">
+    {{ with .Params.features_next_label }}<p class="section-eyebrow">{{ . }}</p>{{ end }}
+    <div class="feature-grid">
+      {{ range .Params.features_next }}
+      <article class="feature-card" data-gsap="card">{{ . | markdownify }}</article>
+      {{ end }}
+    </div>
+  </section>
+  {{ end }}
+  {{ with .Content }}
+  <section class="prose closer">{{ . }}</section>
+  {{ end }}
 </article>
 {{ end }}


@ -1,7 +1,10 @@
<meta charset="utf-8"> <meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1"> <meta name="viewport" content="width=device-width, initial-scale=1">
<script>document.documentElement.classList.replace('no-js','js');</script>
<title>{{ if .IsHome }}{{ site.Title }} — {{ site.Params.description }}{{ else }}{{ .Title }} · {{ site.Title }}{{ end }}</title> <title>{{ if .IsHome }}{{ site.Title }} — {{ site.Params.description }}{{ else }}{{ .Title }} · {{ site.Title }}{{ end }}</title>
<meta name="description" content="{{ with .Params.description }}{{ . }}{{ else }}{{ site.Params.description }}{{ end }}"> <meta name="description" content="{{ with .Params.description }}{{ . }}{{ else }}{{ site.Params.description }}{{ end }}">
<meta name="theme-color" content="#f7f6f3" media="(prefers-color-scheme: light)">
<meta name="theme-color" content="#0d0d0f" media="(prefers-color-scheme: dark)">
<link rel="icon" type="image/svg+xml" href="/favicon.svg"> <link rel="icon" type="image/svg+xml" href="/favicon.svg">
<meta property="og:site_name" content="{{ site.Title }}"> <meta property="og:site_name" content="{{ site.Title }}">
<meta property="og:title" content="{{ if .IsHome }}{{ site.Title }}{{ else }}{{ .Title }} · {{ site.Title }}{{ end }}"> <meta property="og:title" content="{{ if .IsHome }}{{ site.Title }}{{ else }}{{ .Title }} · {{ site.Title }}{{ end }}">


@@ -0,0 +1,12 @@
+{{ $three := resources.Get "js/vendor/three.min.js" | fingerprint }}
+{{ $gsap := resources.Get "js/vendor/gsap.min.js" | fingerprint }}
+{{ $st := resources.Get "js/vendor/ScrollTrigger.min.js" | fingerprint }}
+{{ $lenis := resources.Get "js/vendor/lenis.min.js" | fingerprint }}
+{{ $scene := resources.Get "js/scene.js" | fingerprint }}
+{{ $anim := resources.Get "js/animations.js" | fingerprint }}
+<script defer src="{{ $three.RelPermalink }}" integrity="{{ $three.Data.Integrity }}"></script>
+<script defer src="{{ $gsap.RelPermalink }}" integrity="{{ $gsap.Data.Integrity }}"></script>
+<script defer src="{{ $st.RelPermalink }}" integrity="{{ $st.Data.Integrity }}"></script>
+<script defer src="{{ $lenis.RelPermalink }}" integrity="{{ $lenis.Data.Integrity }}"></script>
+<script defer src="{{ $scene.RelPermalink }}" integrity="{{ $scene.Data.Integrity }}"></script>
+<script defer src="{{ $anim.RelPermalink }}" integrity="{{ $anim.Data.Integrity }}"></script>
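Hugo's `fingerprint` pipe defaults to SHA-256, and `.Data.Integrity` carries the Subresource Integrity string: `sha256-` followed by the base64 of the raw digest. A stdlib Python sketch of the same computation (helper name is ours, not a Hugo API):

```python
import base64
import hashlib


def sri_integrity(data: bytes, algo: str = "sha256") -> str:
    # Subresource Integrity value in the form browsers check against the
    # `integrity` attribute: "<algo>-" + base64(raw digest).
    digest = hashlib.new(algo, data).digest()
    return f"{algo}-{base64.b64encode(digest).decode()}"
```

Because the vendored bundles are served from the same origin, SRI here mostly guards against a corrupted or stale file slipping past the gzip-enabled proxy, not against a hostile CDN.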