Compare commits

...

9 commits

Author SHA1 Message Date
ee132712be docs: sync READMEs with 26.15 HTTPS opt-in + boot-USB filter
All checks were successful
Build ISO / build-iso (push) Successful in 24m38s
CI / lint (push) Successful in 1m1s
CI / test (push) Successful in 2m42s
CI / validate-json (push) Successful in 58s
CI / markdown-links (push) Successful in 28s
- README roadmap: Local HTTPS Phase 1 entry now reflects the 26.15
  opt-in model (default off, toggle in /settings) instead of the
  26.4 auto-trust story.
- README + iso/README: boot-USB filtering is no longer a TODO; both
  files now describe the implemented `findmnt`/`PKNAME` behaviour.
- iso/README rough edges: drop the boot-USB bullet (closed) and
  re-word the wizard-still-HTTP-only bullet to match the 26.15 toggle
  flow (it was a stale dup of the same line under it).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-28 12:09:33 +02:00
1193504a1e perf(site): gzip CSS, JS, SVG and fonts on the furtka.org nginx
Default nginx only gzips text/html, so the homepage HTML was the only
asset coming back compressed. The ~600 KB three.min.js bundle (and the
hashed CSS) were being shipped uncompressed across the public openresty
proxy. `gzip_types` now covers css/js/json/xml/svg/woff2.

Needs `sudo ops/nginx/setup-vm.sh` on forge-runner-01 to take effect —
the site-deploy workflow only rebuilds Hugo, it doesn't touch the
nginx config.
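The directive in question is small; a hedged sketch of what the gzip block in the setup script's nginx template might look like (the exact MIME list here is an assumption reconstructed from the commit text):

```nginx
# Sketch only — the real ops/nginx/setup-vm.sh template may differ.
gzip on;
# text/html is always compressed once gzip is on; every other type must
# be listed explicitly via gzip_types, which is why only the homepage
# HTML came back compressed before this change.
gzip_types text/css application/javascript application/json
           application/xml image/svg+xml font/woff2;
```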

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-28 12:09:26 +02:00
65d48c92f8 feat(installer): filter the boot USB out of the install drive picker
On bare-metal installs, `lsblk` reports the USB stick the live ISO
booted from as TYPE=disk, so it showed up in the drive picker
alongside the real install target — a user could in theory pick the
USB they had just booted from. `findmnt /run/archiso/bootmnt` resolves
the boot partition and `lsblk -no PKNAME` walks it up to the parent
disk; that disk is dropped before scoring. On a normal box neither the
file nor the mountpoint exists, so the picker is unchanged.
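The mechanism fits in a few lines; a minimal sketch of the same idea (function names are mine, not `webinstaller/drives.py`'s actual code):

```python
import subprocess

ARCHISO_BOOTMNT = "/run/archiso/bootmnt"  # live-ISO boot mountpoint

def boot_disk():
    """Return the parent disk of the live-boot partition, or None off the ISO."""
    try:
        # findmnt prints the source device of the boot mount, e.g. /dev/sdb1
        part = subprocess.run(
            ["findmnt", "-no", "SOURCE", ARCHISO_BOOTMNT],
            capture_output=True, text=True, check=True,
        ).stdout.strip()
        # lsblk PKNAME walks the partition up to its parent disk, e.g. sdb
        disk = subprocess.run(
            ["lsblk", "-no", "PKNAME", part],
            capture_output=True, text=True, check=True,
        ).stdout.strip()
        return disk or None
    except (OSError, subprocess.CalledProcessError):
        return None  # normal box: mountpoint absent → picker unchanged

def filter_boot_usb(disks, boot=None):
    """Drop the boot disk from the lsblk TYPE=disk candidates before scoring."""
    boot = boot if boot is not None else boot_disk()
    return [d for d in disks if d != boot]
```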

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-28 12:09:19 +02:00
aa7dea0528 feat(site): pimp homepage with animated 3D background and scroll reveals
Some checks failed
CI / lint (push) Successful in 1m24s
CI / test (push) Successful in 2m24s
CI / validate-json (push) Successful in 57s
CI / markdown-links (push) Successful in 29s
Deploy site / deploy (push) Successful in 7s
Build ISO / build-iso (push) Failing after 14m59s
Adopts the visual feel of Pascal's prototype while keeping Furtka's
voice, brand palette, and bilingual structure intact.

What changed
- Three.js wireframe torus-knot behind the hero, color/opacity tied
  to the existing --accent / --scene-opacity CSS vars so light and
  dark modes both work without a scene re-init.
- Scroll-driven camera zoom + core scale + tilt; canvas opacity fades
  past hero so feature cards stay readable.
- GSAP + ScrollTrigger reveal hero on load and stagger feature cards
  in as they enter the viewport. Lenis smooths scroll.
- "What works today" / "What's coming next" lists move from markdown
  bullets into front-matter arrays and render as scroll-reveal cards
  (7 + 4 cards, EN/DE parallel; copy is 1:1 from the original lists).
- Hero scaled up: gradient text on the wordmark (fg → accent),
  drop-shadow glow in dark mode, brighter lede color.
- Primary CTA -> /releases listing on Forgejo (Forgejo has no
  /releases/latest), with a pulsing glow + arrow slide on hover.
- Version bump 26.8-alpha -> 26.15-alpha to match the actual release.

Performance / a11y
- Vendor JS (Three.js r128, GSAP 3.12.2 + ScrollTrigger, Lenis 1.0.33)
  vendored locally under assets/js/vendor/ - no third-party CDN at
  runtime. ~728 KB total, fingerprinted via Hugo's pipeline with SRI.
- Canvas + scripts gated to homepage only ({{ if .IsHome }}); the
  Impressum/Datenschutz pages stay plain.
- prefers-reduced-motion: scene + GSAP early-return, CSS forces cards
  to their resting state. No-JS users see all content.
- All scripts deferred so first paint isn't blocked.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-27 16:14:21 +02:00
1cff22658b feat(auth): rate-limit failed logins with per-(user, IP) lockout
All checks were successful
CI / lint (push) Successful in 1m59s
CI / test (push) Successful in 3m27s
CI / validate-json (push) Successful in 1m56s
CI / markdown-links (push) Successful in 1m24s
Build ISO / build-iso (push) Successful in 26m58s
Ten wrong passwords from the same (username, client-IP) tuple within
15 minutes now return 429 with Retry-After for the next 15 minutes;
authenticate() isn't even called while locked, so the 429 response is
identical whether the password would have been correct — no oracle.

Tuple keying prevents an attacker on one IP from locking the real
admin out of their own box: a different IP (or an ISP reconnect) keeps
them in. The client IP comes from the rightmost X-Forwarded-For entry,
which is the one Caddy appends and is therefore trustworthy (there is
no upstream proxy in front of Caddy). First-run setup bypasses the
lockout — otherwise a clumsy operator could lock themselves out before
an admin exists.

State is in-memory (parallel to SessionStore), so `systemctl restart
furtka` clears a stuck lockout.
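A minimal sketch of such a per-(user, IP) limiter — illustrative, not the real auth module's code (constants and names are assumptions matching the commit text):

```python
import time
from collections import defaultdict, deque

WINDOW = 15 * 60    # failures older than 15 min stop counting
LOCKOUT = 15 * 60   # lock duration once tripped
MAX_FAILS = 10

class LoginLimiter:
    """In-memory per-(user, ip) lockout; a process restart clears it."""

    def __init__(self, clock=time.monotonic):
        self.clock = clock
        self.fails = defaultdict(deque)  # (user, ip) -> failure timestamps
        self.locked_until = {}           # (user, ip) -> unlock time

    def retry_after(self, user, ip):
        """Seconds left on the lock, or 0 if the tuple may try again.
        Checked BEFORE authenticate(), so a locked tuple gets the same
        429 whether or not the password was right — no oracle."""
        return max(0, int(self.locked_until.get((user, ip), 0) - self.clock()))

    def record_failure(self, user, ip):
        now = self.clock()
        q = self.fails[(user, ip)]
        q.append(now)
        while q and q[0] < now - WINDOW:  # expire failures outside the window
            q.popleft()
        if len(q) >= MAX_FAILS:
            self.locked_until[(user, ip)] = now + LOCKOUT
```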

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-22 17:27:14 +02:00
e68ed279cc fix(https): make HTTPS opt-in to stop the BAD_SIGNATURE trap on fresh installs
All checks were successful
Build ISO / build-iso (push) Successful in 17m23s
CI / lint (push) Successful in 27s
CI / test (push) Successful in 1m2s
CI / validate-json (push) Successful in 24s
CI / markdown-links (push) Successful in 15s
Release / release (push) Successful in 11m34s
Every Furtka since 26.5 shipped a Caddyfile with a
`__FURTKA_HOSTNAME__.local { tls internal }` site block, so every
first boot auto-generated a fresh self-signed CA + intermediate +
leaf. That worked for the first-ever Furtka user, but every reinstall
(or second box on the same LAN) produced a new CA whose intermediate
shared the fixed CN `Caddy Local Authority - ECC Intermediate` with
the previous one. Firefox caches intermediates by CN across profiles
— even private windows share cert9.db — so any visitor who had
trusted an older Furtka's CA got a cached intermediate with
mismatched keys when they hit the new box, producing
`SEC_ERROR_BAD_SIGNATURE`. Unlike UNKNOWN_ISSUER, Firefox has NO
"Advanced → Accept Risk" bypass for BAD_SIGNATURE, so fresh-install
boxes were effectively unreachable over HTTPS in any browser that
had ever seen a previous Furtka.

Validated live on the .46 test VM: fresh 26.14 ISO install → Firefox
hits BAD_SIGNATURE on https://furtka.local/ (even in private mode).
Chromium bypasses it via mDNS failure but the issue is the same.
openssl verify on the box confirms the chain is internally valid —
this is purely client-side cache pollution across boxes.

Fix:
- assets/Caddyfile: removed the hostname site block. Default install
  serves :80 only — https://furtka.local connection-refuses, which is
  a normal error every browser handles instead of the unbypassable
  crypto fault. Added top-level import of
  /etc/caddy/furtka-https.d/*.caddyfile so the /settings HTTPS toggle
  can drop a listener snippet there when a user explicitly opts in.
- furtka/https.py: set_force_https now writes TWO snippets atomically
  — the top-level hostname + tls internal block (enables :443) and
  the :80-scoped redirect (forces HTTP→HTTPS). Disable removes both.
  Reload failure rolls both back. Added _read_hostname + _https_snippet_content
  helpers with `/etc/hostname` → 'furtka' fallback so a missing
  hostname file doesn't produce an empty site block Caddy rejects.
- furtka/https.py::status: force_https now reads the listener
  snippet (was reading the redirect snippet). A redirect without a
  listener isn't actually HTTPS being served, so the listener is the
  authoritative "HTTPS is on" signal.
- furtka/updater.py: new _maybe_migrate_preserve_https hook runs
  inside _refresh_caddyfile on the 26.14 → 26.15 transition. If the
  box had the redirect snippet on disk (user had opted into HTTPS
  under the old regime), it writes the new listener snippet too so
  HTTPS keeps working after the Caddyfile swap removes the hostname
  block.
- webinstaller/app.py: post-install creates /etc/caddy/furtka-https.d/
  alongside /etc/caddy/furtka.d/ so the glob import can't trip an
  older Caddy on a missing path during the first reload.

Live-tested on .46: set_force_https(True) writes both snippets, Caddy
reloads, :443 listener comes up with fresh CA, curl -k returns 302,
HTTP 301-redirects. set_force_https(False) removes both snippets
atomically, :443 goes back to connection-refused.
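The pair-swap with rollback can be sketched like this (a hypothetical standalone version: the real `furtka.https.set_force_https` signature, snippet filenames, and snippet contents differ):

```python
from pathlib import Path

def set_force_https(enabled, listener_dir, redirect_dir, hostname, reload_caddy):
    """Write/remove both snippets as a pair; restore both if the reload fails."""
    listener = Path(listener_dir) / "listener.caddyfile"   # illustrative names
    redirect = Path(redirect_dir) / "redirect.caddyfile"
    # Snapshot the pre-toggle state of BOTH files for rollback.
    before = {p: (p.read_text() if p.exists() else None)
              for p in (listener, redirect)}
    try:
        if enabled:
            listener.write_text(
                f"{hostname}.local {{\n\ttls internal\n"
                f"\treverse_proxy localhost:7000\n}}\n")
            redirect.write_text(f"redir https://{hostname}.local{{uri}} 308\n")
        else:
            listener.unlink(missing_ok=True)
            redirect.unlink(missing_ok=True)
        reload_caddy()  # raises on failure
    except Exception:
        for p, content in before.items():  # roll the pair back
            if content is None:
                p.unlink(missing_ok=True)
            else:
                p.write_text(content)
        raise
```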

Tests: test_https.py expanded from 13 to 15 cases. Toggle-on asserts
both snippets written + hostname substituted. Toggle-off asserts
both removed. Rollback cases verify BOTH snippets restore on reload
failure. New test_https_snippet_content_has_tls_internal_and_routes
locks the exact shape of the listener block.
test_webinstaller_assets.py: updated two old asserts that assumed
hostname block was in Caddyfile; new test_post_install_creates_https_snippet_dir
guards the new directory.

276 tests pass, ruff check + format clean.

Known remaining wart (documented in CHANGELOG): a browser that
trusted a prior Furtka CA still hits BAD_SIGNATURE on this box's
HTTPS after enabling it, because the fixed intermediate CN is a
Caddy-side limitation. Workaround: clear cert9.db or visit in a
fresh profile. This will never affect end users who only ever run
one Furtka box.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-21 19:30:04 +02:00
26f0424ae3 fix: auth-guard / and /settings, add Logout link to static navs
All checks were successful
Build ISO / build-iso (push) Successful in 17m14s
CI / lint (push) Successful in 26s
CI / test (push) Successful in 1m2s
CI / validate-json (push) Successful in 24s
CI / markdown-links (push) Successful in 15s
Release / release (push) Successful in 11m26s
Since 26.11 shipped login, two of the three nav pages were secretly
unauthenticated. The Caddyfile only reverse-proxied /api/*, /apps*,
/login*, /logout* to the Python auth-gated handler. Everything else —
including / (landing page) and /settings/ — fell through to Caddy's
catch-all file_server straight out of assets/www/, skipping the
session check entirely.

LAN visitor effect: they could read the box's hostname, IP, Furtka
version, uptime, and see all the Update-now / Reboot / HTTPS-toggle
buttons on /settings/. The API calls those buttons fired were
themselves 401-gated so nothing actually happened — but the info leak
plus "looks open" UX was real. Caught in the 26.13 SSH test session
when the user noticed Logout only appeared in the nav on /apps, and
not on / or /settings/.

Fix:
- Caddyfile: new `handle /settings*` and `handle /` blocks in the
  shared `(furtka_routes)` snippet reverse-proxy to localhost:7000,
  so both hit the Python auth-guard before the HTML goes out.
- api.py: new `_serve_static_www(relative_path)` helper reads
  assets/www/{index.html, settings/index.html} with a path-traversal
  clamp (resolved path must stay under static_www_dir). `do_GET`
  routes `/` and `/settings[/]` to it. Removed the `/` branch from
  the old combined-with-/apps line — those are different pages now.
- paths.py: new `static_www_dir()` helper with `FURTKA_STATIC_WWW`
  env override for tests.
- assets/www/*.html: both nav bars get the Logout link + a shared
  `doLogout()` inline script matching the _HTML pattern. Users never
  see the link unauthed (the Python handler 302s them before the
  page renders), but authed users get consistent navigation across
  all three pages.
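The traversal clamp is the interesting part; a hedged sketch (the helper name mirrors the commit, the body is illustrative, not the real `api.py` code):

```python
from pathlib import Path

def serve_static_www(relative_path, www_root):
    """Read a page from the static www dir, refusing paths that resolve
    outside it — the path-traversal clamp the commit describes."""
    root = Path(www_root).resolve()
    target = (root / relative_path.lstrip("/")).resolve()
    if not target.is_relative_to(root):  # Python 3.9+; symlinks resolved above
        raise PermissionError(relative_path)
    return target.read_text()
```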

Tests: 5 new cases in test_api.py — unauth / redirects, unauth
/settings redirects (both trailing-slash and not), authed / serves
index.html, authed /settings serves settings/index.html,
regression guard that / and /apps serve different content.
Existing test updated (the one that used / as a proxy for /apps).

Static /style.css, /rootCA.crt, /status.json, /furtka.json,
/update-state.json stay served by Caddy's catch-all — those are
public by design (login page needs style.css, fresh users need the
CA to trust HTTPS, runtime JSON is metadata not creds).

272 tests pass, ruff check + format clean.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-21 18:16:42 +02:00
8c1fd1da2b fix: unbreak upgrade path + install-lock race
All checks were successful
Build ISO / build-iso (push) Successful in 17m28s
CI / lint (push) Successful in 27s
CI / test (push) Successful in 59s
CI / validate-json (push) Successful in 23s
CI / markdown-links (push) Successful in 15s
Release / release (push) Successful in 11m38s
Three interlocking issues that made 26.11/26.12 effectively
un-upgradable from pre-auth versions without manual pacman +
symlink surgery. Caught while SSH-testing the .196 VM which landed
on a rollback loop after every Update-now click.

1. auth.py imported werkzeug.security, but the target system runs
   core as bare system Python — neither flask nor werkzeug are
   pip-installed. Fresh 26.11+ boxes died on import. Replaced with
   a 50-line stdlib `furtka/passwd.py` using hashlib.pbkdf2_hmac
   for new hashes and parsing werkzeug's `scrypt:N:r:p$salt$hex`
   format for backward-read so existing users.json survives.

2. updater._health_check pinged /api/apps expecting 200. Post-
   auth, /api/apps returns 401 for unauth requests → HTTPError
   caught as URLError → retry loop → 30s timeout → rollback. Now
   any 2xx-4xx counts as "server alive"; only 5xx / connection
   errors fail. Server responding at all is proof it came back up.

3. _do_install released the fcntl lock between sync pre-validation
   and the systemd-run dispatch. A second POST could slip in,
   pass the lock check, return 202, and leave its install-bg child
   to die silently on the in-child lock. Now the API also reads
   install-state.json and refuses 409 on non-terminal stages —
   the state file is the reliable signal, the fcntl lock is
   defence in depth.
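Point 1's stdlib-only hashing can be sketched like this (the iteration count and format string are assumptions, and the werkzeug scrypt backward-read is elided to a comment — the real `furtka/passwd.py` differs):

```python
import hashlib
import hmac
import secrets

ITERATIONS = 260_000  # illustrative; the real module may use another count

def hash_password(password):
    """pbkdf2_hmac with a random salt, in a self-describing format."""
    salt = secrets.token_hex(16)
    digest = hashlib.pbkdf2_hmac(
        "sha256", password.encode(), salt.encode(), ITERATIONS)
    return f"pbkdf2:sha256:{ITERATIONS}${salt}${digest.hex()}"

def verify_password(stored, password):
    method, salt, hexdigest = stored.split("$", 2)
    if not method.startswith("pbkdf2:sha256:"):
        return False  # the real module also parses werkzeug's scrypt format here
    rounds = int(method.rsplit(":", 1)[1])
    digest = hashlib.pbkdf2_hmac(
        "sha256", password.encode(), salt.encode(), rounds)
    # Constant-time compare, same as werkzeug's check_password_hash.
    return hmac.compare_digest(digest.hex(), hexdigest)
```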

Test coverage:
- tests/test_passwd.py (new, 6 cases): roundtrip, salt uniqueness,
  format shape, werkzeug scrypt backward-compat against a real
  hash captured from the .196 box, malformed + non-string
  rejection.
- tests/test_updater.py: +3 cases for _health_check — 4xx=healthy,
  5xx=unhealthy, URLError retry loop.
- tests/test_api.py: +2 cases for install 409 on non-terminal
  state + 202 after terminal.

All 267 tests green, ruff check + format clean.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-21 17:03:28 +02:00
f3cd9e963c feat(install): async background install with progress polling
All checks were successful
Build ISO / build-iso (push) Successful in 17m24s
CI / lint (push) Successful in 26s
CI / test (push) Successful in 43s
CI / validate-json (push) Successful in 24s
CI / markdown-links (push) Successful in 16s
Release / release (push) Successful in 11m34s
POST /api/apps/install now returns 202 Accepted after the synchronous
pre-validation (resolve source, copy files, write .env, check for
placeholder secrets, validate path-type settings). The docker-facing
phases (compose pull → ensure volumes → compose up) are dispatched as
a background systemd-run unit (furtka-install-<app>) that writes stage
transitions to /var/lib/furtka/install-state.json. The UI polls
GET /api/apps/install/status every 1.5s and re-labels the modal
submit button — "Image wird heruntergeladen…" →
"Speicherbereiche werden erstellt…" → "Container wird gestartet…" —
instead of sitting dead on "Installing…" for 30+ seconds on large
images like Jellyfin.

Mirrors the exact shape of /api/catalog/sync/apply and
/api/furtka/update/apply: same fcntl lock, same atomic state-file
writes, same terminal-state poll loop ("done" | "error"). New CLI
subcommand `furtka app install-bg <name>` is what systemd-run invokes;
it's hidden from --help because regular CLI users still want the
synchronous `furtka app install <name>`.

Reinstall button on the app list polls too — after dispatch, its text
reflects the background stage until terminal, matching the modal
flow.
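The atomic state-file half of this can be sketched as follows (stage names and the state-file path mirror the commit text; the helpers and JSON shape are illustrative):

```python
import json
import os
import tempfile

# Stage order as the commit describes it; "error" is the other terminal state.
STAGES = ("pulling_image", "creating_volumes", "starting_container", "done")

def write_stage(state_path, stage, error=None):
    """Atomic state write: tmp file + os.replace, so the polling GET
    never observes a half-written JSON document."""
    payload = {"stage": stage}
    if error is not None:
        payload["error"] = error
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(state_path) or ".")
    with os.fdopen(fd, "w") as fh:
        json.dump(payload, fh)
    os.replace(tmp, state_path)  # atomic rename on POSIX

def is_terminal(state_path):
    """True once the background unit has finished, either way."""
    with open(state_path) as fh:
        return json.load(fh)["stage"] in ("done", "error")
```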

Tests:
- tests/test_install_runner.py (new, 9 cases): state roundtrip, lock
  contention, happy-path phase ordering, error writes on pull/up
  failure, lock release on both terminal outcomes.
- tests/test_api.py: new no_systemd_run fixture stubs subprocess.run;
  existing install tests adapted to 202 response; new tests for 409
  lock contention and the status endpoint.
- tests/test_cli.py: install-bg dispatches correctly and returns 1
  on failure with journald-friendly stderr.

256 tests pass, ruff check + format clean.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-21 15:50:49 +02:00
43 changed files with 2382 additions and 242 deletions

.gitignore vendored

@@ -13,3 +13,4 @@ iso/out/
website/public/
website/resources/
website/.hugo_build.lock
website/hugo_stats.json


@@ -7,6 +7,138 @@ This project uses calendar versioning: `YY.N-stage` (e.g. `26.0-alpha` = 2026, r
## [Unreleased]
## [26.15-alpha] - 2026-04-21
### Fixed
- **HTTPS is now opt-in; fresh installs no longer hit unbypassable
SEC_ERROR_BAD_SIGNATURE.** Every version since 26.5 shipped a
Caddyfile with a `__FURTKA_HOSTNAME__.local { tls internal }` site
block, so Caddy auto-generated a self-signed root CA + intermediate
+ leaf on first boot. That worked for first-time-ever users, but
every reinstall (or second Furtka box on the same LAN) produced a
new CA with the **same intermediate CN** (`Caddy Local Authority -
ECC Intermediate` — Caddy hardcodes it). Any browser that had ever
trusted an earlier Furtka CA got a cached intermediate with
mismatched keys, then Firefox's cert lookup substituted the cached
intermediate when validating the new box's leaf → the signature
check failed → `SEC_ERROR_BAD_SIGNATURE`, which Firefox has no
"Advanced → Accept Risk" bypass for.
- Removed the hostname site block from the default Caddyfile.
Fresh installs serve `:80` only; visiting `https://furtka.local`
now yields a clean connection-refused instead of the crypto
fault.
- Added top-level `import /etc/caddy/furtka-https.d/*.caddyfile`.
The `/settings` HTTPS toggle (via `furtka.https.set_force_https`)
now writes TWO snippets atomically — the top-level hostname +
`tls internal` block (enables `:443`) and the `:80`-scoped
redirect (forces HTTP → HTTPS) — and removes both on disable.
Caddy reloads after the pair-swap; failure rolls both back.
- Webinstaller creates `/etc/caddy/furtka-https.d/` during
post-install alongside the existing `furtka.d/`.
- `updater._refresh_caddyfile` runs a 26.14 → 26.15 migration: if
the box already had the redirect snippet on disk (user had
explicitly enabled "Force HTTPS" under the old regime), the
migration also writes the new listener snippet so HTTPS keeps
working across the upgrade.
- **`status.force_https` now reads the listener snippet, not the
redirect snippet.** A lone redirect without a `:443` listener
wouldn't actually serve HTTPS, so the listener file is the
authoritative "HTTPS is on" signal. The UI on `/settings` sees the
correct state as a result.
Known remaining UX wart: a browser that trusted a previous Furtka box
still sees `BAD_SIGNATURE` when visiting this box's `https://` after
enabling HTTPS here — the fixed intermediate CN is a Caddy-side
limitation we can't fix from Furtka. Fresh installs on a browser that
never visited another Furtka box work correctly. Workaround:
`about:networking#sts` → Forget → clear `cert9.db`.
## [26.14-alpha] - 2026-04-21
### Fixed
- **Landing page and `/settings/` were silently bypassing the auth
guard.** Since 26.11 shipped login, the Caddyfile only
reverse-proxied `/api/*`, `/apps*`, `/login*`, and `/logout*` to
Python. Everything else — including `/` and `/settings/` — fell
through to Caddy's catch-all `file_server` and was served straight
from `assets/www/` without ever hitting the session check. The
effect: a LAN visitor saw the box's hostname, IP, Furtka version,
and the buttons for Update-now / Reboot / HTTPS-toggle. The API
calls those buttons fired were all 401-auth-gated so actions didn't
land, but the information leak and the "looks open" UX were a real
bug. Caught in the 26.13 SSH test session when the user noticed
Logout only showed up on `/apps`. Now Caddy routes `/` and
`/settings*` through Python; a new `_serve_static_www` handler
checks the session cookie, redirects to `/login` if unauthed, and
reads the HTML from `assets/www/` otherwise. Catch-all still
serves `/style.css`, `/rootCA.crt`, and the runtime JSON files
publicly — those don't need auth.
- **Logout link now shows on every authed page, not just `/apps`.**
The static HTML for `/` and `/settings/` maintained their own nav
separate from `_HTML` in `api.py`, so they never got the Logout
entry when it was added in 26.11. Both nav bars now include it
plus an inline `doLogout()` that POSTs `/logout` and bounces to
`/login`, matching the pattern in `_HTML`.
## [26.13-alpha] - 2026-04-21
### Fixed
- **Upgrade path from pre-auth releases actually works.** 26.11-alpha
introduced `from werkzeug.security import ...` in `furtka/auth.py`,
but werkzeug isn't installed on the target system — core runs as
system Python with stdlib only, and `flask>=3.0` in `pyproject.toml`
is never pip-installed on the box. Fresh boxes from the 26.11/26.12
ISO without a manually-installed werkzeug crashed on import; boxes
upgrading from pre-26.11 got double-broken by that plus the health
check below. Replaced the werkzeug dependency with a stdlib-only
`furtka/passwd.py` that uses `hashlib.pbkdf2_hmac` for new hashes
and parses werkzeug's `scrypt:N:r:p$salt$hex` format for backward
compatibility — existing `users.json` files created on the rare
boxes that did have werkzeug keep working after this upgrade, no
re-setup needed. `from werkzeug.security import ...` is gone from
the import chain entirely; `pyproject.toml`'s flask dep stays only
for the live-ISO webinstaller.
- **Self-update no longer auto-rolls-back when crossing the auth
boundary.** `updater._health_check` pinged `/api/apps` and demanded
a 200, which meant every 26.10 → 26.11+ upgrade hit the post-restart
check, got a 401 (auth guard), and treated that as "server dead"
→ rollback. Now any 2xx-4xx response counts as "server alive"; only
connection-level failures or 5xx fail the check. 5xx still triggers
rollback because it means the new process is up but broken.
- **Install lock closes its race window.** `POST /api/apps/install`
used to release the fcntl lock immediately after the sync
pre-validation so the systemd-run child could re-acquire it —
leaving a tiny gap where a second POST could slip in, pass the lock
check, and return 202. Both child processes would start, one would
win the in-child lock, the other would die silently. Now the API
also reads `install-state.json` and refuses with 409 if the stage
is non-terminal (`pulling_image`, `creating_volumes`,
`starting_container`). The fcntl lock stays as belt-and-suspenders.
## [26.12-alpha] - 2026-04-21
### Changed
- **App installs go async with live progress.** `POST /api/apps/install`
now returns `202 Accepted` after the synchronous pre-validation
(resolving the source, copying files, writing `.env`, placeholder and
path checks). The handler dispatches the actual Docker part
(`compose pull` → volumes → `compose up`) as a `systemd-run
--unit=furtka-install-<app>` background job that writes its phase to
`/var/lib/furtka/install-state.json`. New `GET /api/apps/install/status`
endpoint for UI polling. The install modal now shows a live
"Image wird heruntergeladen…" → "Speicherbereiche werden erstellt…" →
"Container wird gestartet…" instead of ~30 seconds of a dead
"Installing…". The pattern mirrors `/api/catalog/sync/apply` and
`/api/furtka/update/apply` 1:1. New CLI subcommand
`furtka app install-bg <name>` (internal, invoked by the API);
`furtka app install` stays synchronous for terminal users.
The Reinstall button in the app list also polls the install status
and mirrors the phase in its button text.
## [26.11-alpha] - 2026-04-21
### Added
@@ -222,7 +354,11 @@ First tagged snapshot. Pre-alpha — the installer does not yet boot, but the de
- **Containers:** Docker + Compose
- **License:** AGPL-3.0
[Unreleased]: https://forgejo.sourcegate.online/daniel/furtka/compare/26.11-alpha...HEAD
[Unreleased]: https://forgejo.sourcegate.online/daniel/furtka/compare/26.15-alpha...HEAD
[26.15-alpha]: https://forgejo.sourcegate.online/daniel/furtka/releases/tag/26.15-alpha
[26.14-alpha]: https://forgejo.sourcegate.online/daniel/furtka/releases/tag/26.14-alpha
[26.13-alpha]: https://forgejo.sourcegate.online/daniel/furtka/releases/tag/26.13-alpha
[26.12-alpha]: https://forgejo.sourcegate.online/daniel/furtka/releases/tag/26.12-alpha
[26.11-alpha]: https://forgejo.sourcegate.online/daniel/furtka/releases/tag/26.11-alpha
[26.10-alpha]: https://forgejo.sourcegate.online/daniel/furtka/releases/tag/26.10-alpha
[26.9-alpha]: https://forgejo.sourcegate.online/daniel/furtka/releases/tag/26.9-alpha


@@ -108,7 +108,7 @@ None of these nail the "your dad can set this up" experience. The installer wiza
- [x] **ISO-build in CI** — `.forgejo/workflows/build-iso.yml` runs `iso/build.sh` on every push to `main` and publishes the resulting `.iso` as the `furtka-iso` artifact (14 d retention). Push → green run → download → test.
- [x] **Forgejo Releases + tag-driven release pipeline** — `.forgejo/workflows/release.yml` fires on `[0-9]*` tags, `scripts/build-release-tarball.sh` packages `furtka/` + `apps/` + `assets/` + a root VERSION, `scripts/publish-release.sh` uploads tarball + sha256 + release.json to the Forgejo releases page. Releases `26.1-alpha`, `26.3-alpha`, and `26.4-alpha` live at [releases](https://forgejo.sourcegate.online/daniel/furtka/releases) (26.2 stalled on a `jq` apt hang, fixed in 26.3). Needs one repo secret (`FORGEJO_RELEASE_TOKEN`).
- [x] **Walking-skeleton live ISO — end to end** — `iso/build.sh` produces a hybrid BIOS/UEFI Arch-based ISO. It boots in a Proxmox VM, DHCPs onto the LAN, shows a console welcome with `http://proksi.local:5000` (+ IP fallback), serves the Flask webinstaller, runs `archinstall --silent`, reboots the VM via a Reboot-now button, and the installed system logs in and runs `docker ps` without sudo. Build infra in [`iso/`](iso/).
- [x] **Drop loop/rom devices from drive list** — `webinstaller/drives.py` filters by `lsblk` `TYPE=disk`, so the live squashfs and CD-ROM no longer appear as install targets. Boot-USB filtering on bare metal is still TODO; see [iso/README.md](iso/README.md).
- [x] **Drop loop/rom devices from drive list** — `webinstaller/drives.py` filters by `lsblk` `TYPE=disk`, so the live squashfs and CD-ROM no longer appear as install targets. The boot USB itself is also filtered: on the live ISO, `findmnt /run/archiso/bootmnt` resolves the boot partition and its parent disk is dropped from the picker.
- [x] **Rebrand GRUB menu** — `iso/build.sh` rewrites "Arch Linux install medium" → "Furtka Live Installer" across GRUB, syslinux, and systemd-boot configs; default entry marked `(Recommended)`.
- [x] **Wizard: account form → drive picker → overview → archinstall** — S1 collects hostname/user/password/language with validation, S2 picks boot drive, overview confirms, `/install/run` writes `user_configuration.json` + `user_credentials.json` (0600) and execs `archinstall --silent` against its 4.x schema (`default_layout` disk_config + `!root-password` / `!password` sentinel keys + `custom_commands` for post-install group joins). Install log page polls a JSON endpoint and renders a phase-based progress bar with a collapsible raw log. `FURTKA_DRY_RUN=1` skips the real exec for testing.
- [x] **mDNS `proksi.local`** — hostname baked into the live ISO, avahi + nss-mdns in the package list, advertised as soon as network-online fires. The HTTPS + local-CA half of this milestone is still open below.
@@ -117,7 +117,7 @@ None of these nail the "your dad can set this up" experience. The installer wiza
- [x] **On-box web UI uplevel** — shared `/style.css` served by Caddy, persistent top nav, landing page with an "Your apps" tile grid + live status, `/apps` with real per-app icons (inlined SVG from each manifest), new `/settings` page (hostname, IP, version, kernel, RAM, Docker, uptime + Furtka-updates card). `prefers-color-scheme` light/dark.
- [x] **Versioned on-box layout + Phase 1 per-app updates** — `/opt/furtka/versions/<ver>/` + `current` symlink; `/var/lib/furtka/` for runtime state. `POST /api/apps/<name>/update` runs `docker compose pull` + compares digests + conditional `up -d`.
- [x] **Phase 2 Furtka self-update** — `/settings` → Check → Update now. Downloads signed tarball (SHA256), stages, atomic symlink flip, reloads Caddy, daemon-reload, restarts services, health-checks the new api with auto-rollback on failure. CLI: `furtka update [--check]` + `furtka rollback`. Validated end-to-end on VM 2026-04-16 (`26.0-alpha` → `26.3-alpha` → rollback → reboot).
- [x] **Local HTTPS Phase 1** — Caddy `tls internal` on `:443` alongside plain `:80`. Per-box root CA generated on first start, `rootCA.crt` downloadable from `/settings`, per-OS install guide at `/https-install/`. Opt-in "force HTTPS" toggle only exposes itself once the current browser already trusts the cert, so enabling it can't lock the user out. Shipped in 26.4-alpha.
- [x] **Local HTTPS Phase 1** — Caddy `tls internal` on `:443` is fully opt-in via the `/settings` toggle (26.15-alpha); fresh installs stay HTTP-only so a half-trusted cert chain can't lock the user out. Per-box root CA generated on first enable, `rootCA.crt` downloadable from `/settings`, per-OS install guide at `/https-install/`. The "force HTTPS" sub-toggle still only appears once the current browser already trusts the cert.
- [x] **Post-build smoke VM on Proxmox** — `.forgejo/workflows/build-iso.yml` hands the freshly built ISO to `scripts/smoke-vm.sh`, which boots it in a throwaway VM on `pollux` (192.168.178.165) and curls the webinstaller on `:5000`. VMID range 9000–9099, last 5 kept. Green end-to-end since 26.4-alpha.
- [ ] Installer wizard screens S3S7 — per-device purpose, network, domain, SSL, diagnostic. S5/S6 blocked on managed-gateway DNS infra not yet built.
- [ ] Local HTTPS Phase 2 — dedicated local CA (not Caddy's `tls internal`), streamlined one-click install across Win/Mac/Linux/Android, and HTTPS on the live-installer wizard (`https://proksi.local:5000`).


@@ -1,25 +1,27 @@
# Serves the Furtka landing page + live JSON on :80 (plain HTTP) and on
# HTTPS via Caddy's built-in `tls internal` — locally-issued certs signed
# by a root CA that Caddy generates on first start and stores under
# /var/lib/caddy/pki/authorities/local/. Static pages are read from
# /opt/furtka/current/ — updates flip the symlink and everything picks up
# the new content without a Caddy restart (a `systemctl reload caddy` is
# still triggered post-swap to flush the file-server's handle cache).
# /apps and /api are reverse-proxied to the resource-manager API
# (furtka serve, bound to 127.0.0.1:7000).
# Serves the Furtka landing page + live JSON on :80 (plain HTTP). HTTPS
# is **opt-in** — Caddy doesn't serve :443 until the user clicks the
# "Enable HTTPS" toggle on /settings, which drops an import snippet into
# /etc/caddy/furtka-https.d/. Default install has NO tls site block →
# Caddy never generates a self-signed CA / leaf cert → no
# SEC_ERROR_BAD_SIGNATURE when a user visits https://furtka.local before
# they've trusted anything. That was the 26.14-era regression this file
# exists to cure: the old Caddyfile always served :443 with a freshly-
# generated cert, and a browser that had ever trusted an older Furtka
# box's CA would reject the new one with an unbypassable bad-sig error.
#
# Hostname templating: __FURTKA_HOSTNAME__ gets substituted with the
# install-time hostname by webinstaller/app.py on first install and by
# furtka.updater._refresh_caddyfile on every self-update. A bare `:443
# { tls internal }` (no hostname) never triggers leaf-cert issuance, so
# SNI-based handshakes die with `SSL_ERROR_INTERNAL_ERROR_ALERT` — the
# 26.4-alpha regression this file exists to cure.
# /apps, /api, /login, /logout, / (home), /settings are reverse-proxied
# to the resource-manager API (furtka serve, bound to 127.0.0.1:7000).
# Static pages are read from /opt/furtka/current/ — updates flip the
# symlink and everything picks up the new content without a Caddy
# restart (a `systemctl reload caddy` is still triggered post-swap to
# flush the file-server's handle cache).
#
# Force-HTTPS: /etc/caddy/furtka.d/*.caddyfile gets imported into the :80
# block. The /api/furtka/https/force endpoint creates or removes
# redirect.caddyfile there to toggle the HTTP→HTTPS redirect, then reloads
# Caddy. Glob imports silently no-op on an empty/missing directory, so the
# toggle-off state is "no file present" rather than "empty file".
# Two snippet dirs, both silently no-op when empty:
# - /etc/caddy/furtka.d/*.caddyfile → imported inside the :80 block.
# The HTTPS toggle's "force HTTP→HTTPS redirect" snippet lands here.
# - /etc/caddy/furtka-https.d/*.caddyfile → imported at TOP LEVEL, so
# the HTTPS hostname+tls-internal site block can drop in here when
# the toggle is on. Hostname is substituted at toggle-time.
{
# Named-hostname :443 blocks would otherwise make Caddy add its own
# HTTP→HTTPS redirect — but we already serve our own `:80` block and
@ -41,6 +43,20 @@
handle /logout* {
reverse_proxy localhost:7000
}
# /settings and / — these previously served as static HTML straight
# from the catch-all file_server, which meant the auth-guard was
# bypassed: a LAN visitor could see the box's version, IP, and
# reach the Update-now / Reboot buttons (the API calls behind them
# are auth-gated, but the page itself rendered without a redirect
# to /login). Route them through the Python handler which checks
# the session cookie and either serves the static HTML from
# assets/www/ or redirects to /login.
handle /settings* {
reverse_proxy localhost:7000
}
handle / {
reverse_proxy localhost:7000
}
# Runtime JSON lives under /var/lib/furtka/ so it survives self-updates
# (which only swap /opt/furtka/current).
handle /status.json {
@ -56,8 +72,8 @@
file_server
}
# Download the local root CA cert Caddy generated for `tls internal`.
# Available on both :80 and :443 so users can grab it before they've
# trusted it. The private key next to it stays 0600 / caddy-owned.
# Public because users need to grab it before they've trusted it.
# The private key next to it stays 0600 / caddy-owned.
handle /rootCA.crt {
root * /var/lib/caddy/pki/authorities/local
rewrite * /root.crt
@ -75,12 +91,12 @@
}
}
# HTTPS opt-in: when /settings toggles HTTPS on, a snippet gets written
# into /etc/caddy/furtka-https.d/ that adds the hostname+tls-internal
# site block. Empty directory = HTTP-only (default fresh install).
import /etc/caddy/furtka-https.d/*.caddyfile
:80 {
import /etc/caddy/furtka.d/*.caddyfile
import furtka_routes
}
__FURTKA_HOSTNAME__.local, __FURTKA_HOSTNAME__ {
tls internal
import furtka_routes
}
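The `__FURTKA_HOSTNAME__` templating described in the header comment is plain string substitution done before Caddy ever parses the file; a sketch (hostname `proksi` is illustrative):

```python
# Substitution as done by webinstaller/app.py at install time and by
# furtka.updater._refresh_caddyfile on self-update.
template = (
    "__FURTKA_HOSTNAME__.local, __FURTKA_HOSTNAME__ {\n"
    "\ttls internal\n"
    "\timport furtka_routes\n"
    "}\n"
)
rendered = template.replace("__FURTKA_HOSTNAME__", "proksi")
assert rendered.splitlines()[0] == "proksi.local, proksi {"
```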

View file

@ -14,6 +14,7 @@
<a href="/" aria-current="page">Home</a>
<a href="/apps">Apps</a>
<a href="/settings/">Settings</a>
<a href="#" id="logout-link" onclick="return doLogout(event)">Logout</a>
</div>
</nav>
<header>
@ -67,6 +68,17 @@
</main>
<script>
// Revoke the cookie server-side and bounce to /login. Shared
// shape with the _HTML in furtka/api.py so the two logout
// links behave identically.
async function doLogout(ev) {
ev.preventDefault();
try { await fetch('/logout', { method: 'POST', credentials: 'same-origin' }); }
catch (e) { /* server may already be down */ }
window.location.href = '/login';
return false;
}
// Hostname + install metadata — written once at install time to
// /var/lib/furtka/furtka.json (see _furtka_json_cmd in the installer).
// Separate from status.json because these facts don't change between

View file

@ -14,6 +14,7 @@
<a href="/">Home</a>
<a href="/apps">Apps</a>
<a href="/settings/" aria-current="page">Settings</a>
<a href="#" id="logout-link" onclick="return doLogout(event)">Logout</a>
</div>
</nav>
@ -121,6 +122,15 @@
</main>
<script>
// Logout button in the nav — same shape as /apps and / pages.
async function doLogout(ev) {
ev.preventDefault();
try { await fetch('/logout', { method: 'POST', credentials: 'same-origin' }); }
catch (e) { /* server may already be down */ }
window.location.href = '/login';
return false;
}
async function refresh() {
try {
const r = await fetch('/status.json', { cache: 'no-store' });

View file

@ -21,9 +21,9 @@ import time
from http.cookies import SimpleCookie
from http.server import BaseHTTPRequestHandler, HTTPServer
from furtka import auth, dockerops, installer, reconciler, sources
from furtka import auth, dockerops, install_runner, installer, reconciler, sources
from furtka.manifest import ManifestError, load_manifest
from furtka.paths import apps_dir
from furtka.paths import apps_dir, static_www_dir
from furtka.scanner import scan
_ICON_MAX_BYTES = 16 * 1024
@ -214,6 +214,51 @@ async function openSettingsDialog(name, action) {
modal.submit.addEventListener('click', submitModal);
// Install progress phases written by the background job's state file.
// Mirrors furtka/install_runner.py stage strings. Unknown stages fall
// back to a neutral "Installing…" so a future phase rename doesn't
// leave the modal button blank.
const INSTALL_STAGE_LABELS = {
'pulling_image': 'Image wird heruntergeladen…',
'creating_volumes': 'Speicherbereiche werden erstellt…',
'starting_container': 'Container wird gestartet…',
'done': 'Fertig',
};
async function pollInstallStatus(original) {
// Two-minute ceiling: Jellyfin over a slow DSL line can take ~90s
// just on the image pull. Beyond that something's stuck — the
// background job is still running in systemd, but the UI gives up
// on the modal and lets the user close it.
const deadline = Date.now() + 120000;
while (Date.now() < deadline) {
await new Promise(res => setTimeout(res, 1500));
let s = {};
try {
s = await fetch('/api/apps/install/status').then(r => r.json());
} catch (e) { /* transient; keep polling */ }
const stage = s.stage || '';
modal.submit.textContent = INSTALL_STAGE_LABELS[stage] || 'Installing…';
if (stage === 'done') {
closeModal();
await refresh();
return;
}
if (stage === 'error') {
modal.error.textContent = s.error || 'Install failed';
modal.error.classList.add('show');
modal.submit.disabled = false;
modal.submit.textContent = original;
return;
}
}
// Timed out waiting for a terminal state — don't lie to the user.
modal.error.textContent = 'Installation is taking longer than expected. Check /settings for the background job status.';
modal.error.classList.add('show');
modal.submit.disabled = false;
modal.submit.textContent = original;
}
async function submitModal() {
if (!modal.current) return;
const { name, action } = modal.current;
@ -247,6 +292,13 @@ async function submitModal() {
modal.submit.textContent = original;
return;
}
// Install dispatched a background job — poll until terminal. The
// edit path stays synchronous (settings updates are fast: env write
// + reconcile, no image pull).
if (action === 'install' && r.status === 202) {
await pollInstallStatus(original);
return;
}
closeModal();
await refresh();
} catch (e) {
@ -339,10 +391,24 @@ async function handleButton(op, name, btn) {
: ' — already up to date';
}
document.getElementById('log').textContent = header + '\\n' + JSON.stringify(data, null, 2);
// Reinstall dispatches an async install the same way the modal does —
// follow the background job on the button label until terminal.
if (op === 'reinstall' && r.status === 202) {
const deadline = Date.now() + 120000;
while (Date.now() < deadline) {
await new Promise(res => setTimeout(res, 1500));
let s = {};
try { s = await fetch('/api/apps/install/status').then(r => r.json()); } catch (e) {}
const stage = s.stage || '';
btn.textContent = INSTALL_STAGE_LABELS[stage] || 'Reinstalling…';
if (stage === 'done' || stage === 'error') break;
}
}
} catch (e) {
document.getElementById('log').textContent = `[${op} ${name}] network error: ${e.message}`;
}
btn.textContent = original;
btn.disabled = false;
await refresh();
}
@ -626,19 +692,86 @@ def _do_get_settings(name):
}
_INSTALL_TERMINAL_STAGES = frozenset({"done", "error"})
def _do_install(name, settings=None):
"""Kick off an app install. Synchronous sync-phase + async docker-phase.
Fast parts run inline so validation failures come back as immediate
4xx (bad path, placeholder secret, unknown app, etc.). The slow
`docker compose pull` then `compose up` are dispatched as a
background systemd-run unit that writes phase transitions to
/var/lib/furtka/install-state.json for the UI to poll.
"""
import subprocess
# Reject if the state file reports a non-terminal install. The
# fcntl lock below catches the same race, but only *after* the API
# releases it to let the systemd-run child grab it — a competing
# POST can sneak in during that tiny window. Reading the state
# first closes that gap: as long as a previous install hasn't
# written "done" or "error", we refuse.
current_state = install_runner.read_state()
current_stage = current_state.get("stage", "") if isinstance(current_state, dict) else ""
if current_stage and current_stage not in _INSTALL_TERMINAL_STAGES:
return 409, {
"error": (
f"another install is in progress ({current_state.get('app', '?')}"
f" at {current_stage})"
)
}
# Fast-fail if another install is already in flight. Lock lives under
# /run/ so a previous reboot clears it automatically.
try:
src = installer.resolve_source(name)
target = installer.install_from(src, settings=settings)
except installer.InstallError as e:
return 400, {"error": str(e)}
actions = reconciler.reconcile(apps_dir())
payload = {
"installed": str(target),
"actions": [{"kind": a.kind, "target": a.target, "detail": a.detail} for a in actions],
}
# 207 Multi-Status — install copy succeeded but reconcile had per-app errors.
return (207 if reconciler.has_errors(actions) else 200, payload)
fh = install_runner.acquire_lock()
except install_runner.InstallRunnerError as e:
return 409, {"error": str(e)}
try:
try:
src = installer.resolve_source(name)
target = installer.install_from(src, settings=settings)
except installer.InstallError as e:
return 400, {"error": str(e)}
# Initial state so the UI has something to show between this
# response and the background job's first write.
install_runner.write_state("pulling_image", app=name)
finally:
# Release the lock so the background job can re-acquire it.
fh.close()
unit = f"furtka-install-{name}"
try:
subprocess.run(
[
"systemd-run",
f"--unit={unit}",
"--no-block",
"--collect",
"/usr/local/bin/furtka",
"app",
"install-bg",
name,
],
check=True,
capture_output=True,
text=True,
)
except FileNotFoundError:
install_runner.write_state("error", app=name, error="systemd-run not available")
return 502, {"error": "systemd-run not available"}
except subprocess.CalledProcessError as e:
err = (e.stderr or e.stdout or "").strip()
install_runner.write_state("error", app=name, error=f"dispatch failed: {err}")
return 502, {"error": f"systemd-run failed: {err}"}
return 202, {"status": "dispatched", "unit": unit, "installed": str(target)}
def _do_install_status():
"""Return the current install-state.json contents (or {})."""
return 200, install_runner.read_state()
def _do_update_settings(name, settings):
@ -975,6 +1108,26 @@ class _Handler(BaseHTTPRequestHandler):
self.end_headers()
self.wfile.write(b)
def _serve_static_www(self, relative_path: str):
"""Read an HTML asset from assets/www/ and serve it as 200.
Only reached after the do_GET auth-guard so the caller is
already authed. Relative_path is hard-coded at the call site
(``index.html`` or ``settings/index.html``), not user-supplied,
so there's no path-traversal surface here — but we still clamp
the resolved path to static_www_dir() as a defensive check in
case a future refactor wires a dynamic path through.
"""
root = static_www_dir().resolve()
target = (root / relative_path).resolve()
if root not in target.parents and target != root:
return self._html(500, "<h1>internal error</h1>")
try:
body = target.read_text(encoding="utf-8")
except (FileNotFoundError, OSError):
return self._html(404, "<h1>not found</h1>")
return self._html(200, body)
def _redirect(self, location, extra_headers=None):
self.send_response(302)
self.send_header("Location", location)
@ -1024,6 +1177,16 @@ class _Handler(BaseHTTPRequestHandler):
f"{auth.COOKIE_NAME}=; HttpOnly; SameSite=Strict; Path=/; Max-Age=0",
)
def _client_ip(self) -> str:
# Caddy's reverse_proxy appends the real TCP peer to X-Forwarded-For;
# the rightmost entry is the one Caddy added, so it's trustworthy
# even if a client spoofed an XFF of their own. Caddy is the edge —
# no upstream proxy in front of it.
xff = self.headers.get("X-Forwarded-For")
if xff:
return xff.rsplit(",", 1)[-1].strip()
return self.client_address[0]
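Taking the *rightmost* X-Forwarded-For entry is what defeats client spoofing here; as a standalone sketch (function name hypothetical):

```python
def client_ip(xff_header, tcp_peer):
    # Caddy appends the real TCP peer as the last XFF entry, so the
    # rightmost one wins even when the client sent its own header.
    if xff_header:
        return xff_header.rsplit(",", 1)[-1].strip()
    return tcp_peer

# A client claiming to be 10.0.0.1 still resolves to the address
# Caddy appended; without the header we fall back to the socket peer.
assert client_ip("10.0.0.1, 192.168.178.40", "127.0.0.1") == "192.168.178.40"
assert client_ip(None, "127.0.0.1") == "127.0.0.1"
```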
def _handle_login(self, payload):
username = payload.get("username") if isinstance(payload, dict) else None
password = payload.get("password") if isinstance(payload, dict) else None
@ -1047,12 +1210,26 @@ class _Handler(BaseHTTPRequestHandler):
)
auth.create_admin(username, password)
else:
# Tuple-keyed lockout: a flood from one IP can't lock the
# admin out from a different IP. When locked we return the
# same 429 regardless of whether the password is correct —
# no oracle, no timing leak via "would have worked."
lockout_key = (username, self._client_ip())
retry = auth.LOCKOUT.retry_after_seconds(lockout_key)
if retry > 0:
return self._json(
429,
{"error": "too many failed attempts, try again later"},
extra_headers=[("Retry-After", str(retry))],
)
if not auth.authenticate(username, password):
# Cheap brute-force speed bump. werkzeug's PBKDF2 is
# already slow per attempt, but a fixed sleep makes
# "try 1000 passwords over the LAN" even less fun.
# Register before the sleep so concurrent threads see a
# consistent count; keep the sleep so timing can't
# distinguish "locked" from "wrong password."
auth.LOCKOUT.register_failure(lockout_key)
time.sleep(0.5)
return self._json(401, {"error": "invalid username or password"})
auth.LOCKOUT.clear(lockout_key)
session = auth.SESSIONS.create(username)
cookie = self._session_cookie_header(session.token, auth.COOKIE_TTL_SECONDS)
@ -1083,8 +1260,21 @@ class _Handler(BaseHTTPRequestHandler):
return self._json(401, {"error": "not authenticated"})
return self._redirect("/login")
if self.path in ("/", "/apps", "/apps/"):
if self.path in ("/apps", "/apps/"):
return self._html(200, _HTML)
# Landing page + settings page used to be served directly by
# Caddy as static HTML, which silently bypassed this auth
# guard (26.11-era regression that shipped and nobody noticed
# until the 26.13 SSH test session — LAN visitors could read
# the box version, IP and fire pre-authed clicks at the
# update/reboot/https-toggle buttons even though the API calls
# themselves would 401). Python reads the static HTML from
# assets/www/ and serves it behind the session check; Caddy
# now proxies / and /settings* here (see Caddyfile).
if self.path == "/":
return self._serve_static_www("index.html")
if self.path in ("/settings", "/settings/"):
return self._serve_static_www("settings/index.html")
if self.path == "/api/apps":
return self._json(200, _list_installed())
# /api/bundled is the pre-26.6 name for this list; kept as an alias
@ -1100,6 +1290,9 @@ class _Handler(BaseHTTPRequestHandler):
if self.path == "/api/catalog/status":
status, body = _do_catalog_status()
return self._json(status, body)
if self.path == "/api/apps/install/status":
status, body = _do_install_status()
return self._json(status, body)
# /api/apps/<name>/settings
if self.path.startswith("/api/apps/") and self.path.endswith("/settings"):
name = self.path[len("/api/apps/") : -len("/settings")]

View file

@ -1,27 +1,35 @@
"""Login-guard primitives for the Furtka UI.
One admin, one password. Passwords are PBKDF2-hashed via werkzeug (already
pulled in by the flask runtime dep), stored in /var/lib/furtka/users.json
with mode 0600. Sessions live in memory — a systemctl restart logs
everyone out again, which is fine for an alpha single-user box.
One admin, one password. Passwords are PBKDF2-SHA256 hashed via
``furtka.passwd`` (stdlib-only hashlib.pbkdf2_hmac / hashlib.scrypt),
stored in /var/lib/furtka/users.json with mode 0600. Sessions live in
memory — a systemctl restart logs everyone out again, which is fine
for an alpha single-user box. The ``LoginAttempts`` store in this
module rate-limits failed logins per (username, IP) and is also
in-memory; a restart clears a stuck lockout.
On upgrade from 26.10-alpha the users.json file does not exist yet; the
api's GET /login detects this via `setup_needed()` and renders a first-
run form that POSTs to /login as if it were a setup submit. Fresh installs
get the file pre-populated by the webinstaller so the setup step is
skipped.
On upgrade from pre-auth Furtka the users.json file does not exist
yet; the api's GET /login detects this via ``setup_needed()`` and
renders a first-run form that POSTs to /login as if it were a setup
submit. Fresh installs get the file pre-populated by the webinstaller
so the setup step is skipped.
Hash format is compatible with werkzeug.security — 26.11 / 26.12 boxes
that happened to have werkzeug installed can carry their users.json
forward without re-setup; see ``furtka.passwd`` for the scrypt reader.
"""
from __future__ import annotations
import json
import math
import secrets
import threading
from dataclasses import dataclass
from datetime import UTC, datetime, timedelta
from werkzeug.security import check_password_hash, generate_password_hash
from furtka.passwd import hash_password as _hash_password
from furtka.passwd import verify_password as _verify_password
from furtka.paths import users_file
COOKIE_NAME = "furtka_session"
@ -29,13 +37,13 @@ COOKIE_TTL_SECONDS = 7 * 24 * 3600 # one week
def hash_password(plain: str) -> str:
"""PBKDF2-SHA256 via werkzeug. Cost default (~600k iterations)."""
return generate_password_hash(plain)
"""PBKDF2-SHA256 via stdlib. 600k iterations (OWASP 2023)."""
return _hash_password(plain)
def verify_password(plain: str, hashed: str) -> bool:
# werkzeug's check_password_hash is constant-time.
return check_password_hash(hashed, plain)
"""Constant-time compare. Accepts stdlib + legacy werkzeug formats."""
return _verify_password(plain, hashed)
def load_users() -> dict:
@ -171,5 +179,82 @@ class SessionStore:
self._by_token.clear()
class LoginAttempts:
"""In-memory rate-limiter for failed logins, keyed by (username, ip).
Parallels SessionStore: thread-safe, uses ``datetime.now(UTC)`` so the
same ``_FakeDatetime`` test shim works, lives only in memory so a
``systemctl restart furtka`` wipes a stuck lockout. Tuple keying means
a flood from one source IP can't lock the admin out from elsewhere
(different IP → different key) — the trade-off is that an attacker
can keep probing forever by rotating IPs, but they still eat the
PBKDF2 cost per attempt.
Stored data is a dict[key → list[datetime]] of recent failure
timestamps. Every call prunes entries older than ``WINDOW_SECONDS``,
so memory per active key is bounded by ``MAX_FAILURES``.
"""
MAX_FAILURES = 10
WINDOW_SECONDS = 15 * 60
LOCKOUT_SECONDS = 15 * 60
def __init__(
self,
max_failures: int = MAX_FAILURES,
window_seconds: int = WINDOW_SECONDS,
lockout_seconds: int = LOCKOUT_SECONDS,
) -> None:
self._max = max_failures
self._window = timedelta(seconds=window_seconds)
self._lockout = timedelta(seconds=lockout_seconds)
self._fails: dict[tuple[str, str], list[datetime]] = {}
self._lock = threading.Lock()
def _prune_locked(self, key: tuple[str, str], now: datetime) -> list[datetime]:
"""Drop timestamps older than the window; caller holds self._lock."""
cutoff = now - self._window
kept = [ts for ts in self._fails.get(key, ()) if ts >= cutoff]
if kept:
self._fails[key] = kept
else:
self._fails.pop(key, None)
return kept
def register_failure(self, key: tuple[str, str]) -> None:
now = datetime.now(UTC)
with self._lock:
self._prune_locked(key, now)
self._fails.setdefault(key, []).append(now)
def is_locked(self, key: tuple[str, str]) -> bool:
return self.retry_after_seconds(key) > 0
def retry_after_seconds(self, key: tuple[str, str]) -> int:
"""Seconds remaining on an active lockout, or 0 if not locked."""
now = datetime.now(UTC)
with self._lock:
kept = self._prune_locked(key, now)
if len(kept) < self._max:
return 0
# Lockout runs from the oldest retained failure; once it
# falls off the window the key is effectively released.
unlock_at = kept[0] + self._lockout
remaining = (unlock_at - now).total_seconds()
if remaining <= 0:
return 0
return max(1, math.ceil(remaining))
def clear(self, key: tuple[str, str]) -> None:
with self._lock:
self._fails.pop(key, None)
def clear_all(self) -> None:
"""Test helper — wipe all failure state."""
with self._lock:
self._fails.clear()
# Module-level singleton used by the HTTP handler.
SESSIONS = SessionStore()
LOCKOUT = LoginAttempts()
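The windowed bookkeeping can be condensed into a standalone miniature (simplified: explicit clock parameter, no `threading.Lock`; not the class above verbatim):

```python
from datetime import datetime, timedelta, timezone

class MiniLockout:
    # Same prune-then-count idea as LoginAttempts, single-threaded demo.
    def __init__(self, max_failures=3, window_s=900, lockout_s=900):
        self.max = max_failures
        self.window = timedelta(seconds=window_s)
        self.lockout = timedelta(seconds=lockout_s)
        self.fails = {}

    def register(self, key, now):
        kept = [t for t in self.fails.get(key, []) if t >= now - self.window]
        kept.append(now)
        self.fails[key] = kept

    def retry_after(self, key, now):
        kept = [t for t in self.fails.get(key, []) if t >= now - self.window]
        if len(kept) < self.max:
            return 0
        # Lockout runs from the oldest retained failure.
        return max(0, (kept[0] + self.lockout - now).total_seconds())

t0 = datetime(2026, 1, 1, tzinfo=timezone.utc)
lk = MiniLockout(max_failures=3)
for i in range(3):
    lk.register(("admin", "10.0.0.9"), t0 + timedelta(seconds=i))
# Three failures inside the window lock this (user, IP) pair...
assert lk.retry_after(("admin", "10.0.0.9"), t0 + timedelta(seconds=3)) > 0
# ...but the same user from a different IP is unaffected (tuple keying).
assert lk.retry_after(("admin", "10.0.0.8"), t0 + timedelta(seconds=3)) == 0
```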

View file

@ -71,6 +71,24 @@ def _cmd_app_install(args: argparse.Namespace) -> int:
return 1 if reconciler.has_errors(actions) else 0
def _cmd_app_install_bg(args: argparse.Namespace) -> int:
"""Docker-facing phases of an install — called by the API via systemd-run.
Internal subcommand; normal CLI users want `app install` (synchronous).
This exists to separate the slow docker pull/up from the synchronous
validation the API does inline, so the UI can poll a state file.
"""
from furtka import install_runner
try:
install_runner.run_install(args.name)
except Exception as e:
# run_install already wrote state="error"; echo for journald.
print(f"install-bg failed: {e}", file=sys.stderr)
return 1
return 0
def _cmd_app_remove(args: argparse.Namespace) -> int:
target = apps_dir() / args.name
if not target.exists():
@ -237,6 +255,15 @@ def build_parser() -> argparse.ArgumentParser:
)
app_install.set_defaults(func=_cmd_app_install)
# Internal — called by the HTTP API via systemd-run. Deliberately omitted
# from the help listing; regular CLI users want `app install` above.
app_install_bg = app_sub.add_parser(
"install-bg",
help=argparse.SUPPRESS,
)
app_install_bg.add_argument("name", help="Installed app folder name")
app_install_bg.set_defaults(func=_cmd_app_install_bg)
app_remove = app_sub.add_parser("remove", help="Stop and uninstall an app (keeps volumes)")
app_remove.add_argument("name", help="App name (folder name under /var/lib/furtka/apps/)")
app_remove.set_defaults(func=_cmd_app_remove)
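The hide-from-help trick is just `help=argparse.SUPPRESS` on the subparser; a self-contained sketch with hypothetical demo names:

```python
import argparse

# "install-bg" stays out of the subcommand help listing but remains
# fully parseable, mirroring the internal subcommand above.
parser = argparse.ArgumentParser(prog="furtka-demo")
sub = parser.add_subparsers(dest="cmd")
install = sub.add_parser("install", help="Install an app")
install.add_argument("name")
install_bg = sub.add_parser("install-bg", help=argparse.SUPPRESS)
install_bg.add_argument("name")

args = parser.parse_args(["install-bg", "jellyfin"])
assert args.cmd == "install-bg" and args.name == "jellyfin"
```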

View file

@ -6,10 +6,25 @@ sets `XDG_DATA_HOME=/var/lib`, so on the target that resolves to
/var/lib/caddy/pki/authorities/local/. The private key stays 0600 /
caddy-owned; we only ever read the public root.crt next to it.
This module exposes two operations:
- status(): current CA fingerprint + whether force-HTTPS is on
- set_force_https(enabled): write/remove the Caddy import snippet that
redirects HTTP to HTTPS, reload Caddy, roll back on failure.
HTTPS is **opt-in** since 26.15-alpha. Default Caddyfile has no `:443`
site block, so `tls internal` never triggers cert issuance. The
/settings toggle drops a snippet file into /etc/caddy/furtka-https.d/
that adds the hostname+tls-internal block (plus the redirect snippet
inside /etc/caddy/furtka.d/ for HTTP→HTTPS). Disabling the toggle
removes both snippets and reloads Caddy — falls back to HTTP-only.
Why opt-in: fresh-install boxes used to always serve a self-signed
cert on :443. Any browser that had ever trusted a previous Furtka
box's local CA rejected the new cert with an unbypassable
SEC_ERROR_BAD_SIGNATURE — Firefox in particular has no "Advanced →
Accept" for that case. Making HTTPS explicit means fresh installs
never hit that trap; users who want HTTPS download the rootCA.crt
first and then click the toggle.
This module exposes:
- status(): CA fingerprint + current toggle state
- set_force_https(enabled): write/remove BOTH snippets atomically,
reload Caddy, roll back on failure.
"""
import base64
@ -22,6 +37,9 @@ CA_CERT_PATH = Path("/var/lib/caddy/pki/authorities/local/root.crt")
SNIPPET_DIR = Path("/etc/caddy/furtka.d")
REDIRECT_SNIPPET = SNIPPET_DIR / "redirect.caddyfile"
REDIRECT_CONTENT = "redir https://{host}{uri} permanent\n"
HTTPS_SNIPPET_DIR = Path("/etc/caddy/furtka-https.d")
HTTPS_SNIPPET = HTTPS_SNIPPET_DIR / "https.caddyfile"
HOSTNAME_FILE = Path("/etc/hostname")
_PEM_RE = re.compile(
r"-----BEGIN CERTIFICATE-----\s*(.+?)\s*-----END CERTIFICATE-----",
@ -33,6 +51,30 @@ class HttpsError(Exception):
"""Recoverable failure from set_force_https — the caller should 5xx."""
def _read_hostname(hostname_file: Path = HOSTNAME_FILE) -> str:
"""Return the box's hostname, stripped. Falls back to 'furtka' so a
missing /etc/hostname doesn't produce an empty site block that Caddy
would reject at parse time."""
try:
value = hostname_file.read_text().strip()
except (FileNotFoundError, PermissionError, OSError):
return "furtka"
return value or "furtka"
def _https_snippet_content(hostname: str) -> str:
"""Caddy site block the HTTPS toggle installs at opt-in.
Serves <hostname>.local and <hostname> on :443 with Caddy's
`tls internal` (local CA auto-issuance), and imports the shared
furtka_routes snippet so the :443 listener exposes the same
routes as :80. Must be written at top-level (not inside another
site block) — that's why the Caddyfile imports furtka-https.d at
top-level rather than inside :80.
"""
return f"{hostname}.local, {hostname} {{\n\ttls internal\n\timport furtka_routes\n}}\n"
def _ca_fingerprint(ca_path: Path) -> str | None:
try:
pem = ca_path.read_text()
@ -54,13 +96,20 @@ def _format_fingerprint(hex_upper: str) -> str:
def status(
ca_path: Path = CA_CERT_PATH,
snippet: Path = REDIRECT_SNIPPET,
https_snippet: Path = HTTPS_SNIPPET,
) -> dict:
"""force_https is True iff the HTTPS listener snippet exists.
Before 26.15-alpha this checked the redirect snippet instead — but
the redirect alone without a :443 listener wouldn't actually serve
HTTPS, so the listener snippet is the authoritative "HTTPS is on"
signal.
"""
fp = _ca_fingerprint(ca_path)
return {
"ca_available": fp is not None,
"fingerprint_sha256": _format_fingerprint(fp) if fp else None,
"force_https": snippet.is_file(),
"force_https": https_snippet.is_file(),
"ca_download_url": "/rootCA.crt",
}
@ -78,29 +127,48 @@ def set_force_https(
enabled: bool,
snippet_dir: Path = SNIPPET_DIR,
snippet: Path = REDIRECT_SNIPPET,
https_snippet_dir: Path = HTTPS_SNIPPET_DIR,
https_snippet: Path = HTTPS_SNIPPET,
hostname_file: Path = HOSTNAME_FILE,
reload_caddy=_default_reload,
) -> bool:
"""Toggle the HTTP→HTTPS redirect by writing or removing the snippet
Caddy imports. Always reloads Caddy. Rolls the snippet state back on
reload failure so a broken config can't leave Caddy wedged on the next
restart.
"""Toggle HTTPS by writing or removing two snippets atomically:
1. The top-level HTTPS hostname+tls-internal block (enables :443
listener + Caddy's `tls internal` cert issuance)
2. The :80-scoped redirect snippet (forces HTTP → HTTPS)
Reload Caddy after the snippet swap. On reload failure both
snippets are reverted to their pre-call state so a bad config
can't leave Caddy wedged.
"""
snippet_dir.mkdir(mode=0o755, parents=True, exist_ok=True)
had = snippet.is_file()
previous = snippet.read_text() if had else None
https_snippet_dir.mkdir(mode=0o755, parents=True, exist_ok=True)
had_redirect = snippet.is_file()
previous_redirect = snippet.read_text() if had_redirect else None
had_https = https_snippet.is_file()
previous_https = https_snippet.read_text() if had_https else None
if enabled:
snippet.write_text(REDIRECT_CONTENT)
elif had:
snippet.unlink()
https_snippet.write_text(_https_snippet_content(_read_hostname(hostname_file)))
else:
if had_redirect:
snippet.unlink()
if had_https:
https_snippet.unlink()
try:
reload_caddy()
except subprocess.CalledProcessError as e:
_revert(snippet, previous)
_revert(snippet, previous_redirect)
_revert(https_snippet, previous_https)
msg = (e.stderr or e.stdout or "").strip() or f"exit {e.returncode}"
raise HttpsError(f"caddy reload failed: {msg}") from e
except FileNotFoundError as e:
_revert(snippet, previous)
_revert(snippet, previous_redirect)
_revert(https_snippet, previous_https)
raise HttpsError(f"systemctl not available: {e}") from e
return enabled
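The remember/mutate/reload/revert shape generalizes to a small helper; a simplified single-snippet sketch (names hypothetical; the real function tracks two snippets):

```python
from pathlib import Path
import tempfile

def toggle_with_rollback(snippet: Path, content: str, reload_fn):
    # Remember prior state, mutate, reload; restore the prior state
    # if the reload blows up so a bad config can't wedge the service.
    had = snippet.is_file()
    previous = snippet.read_text() if had else None
    snippet.write_text(content)
    try:
        reload_fn()
    except Exception:
        if previous is None:
            snippet.unlink(missing_ok=True)
        else:
            snippet.write_text(previous)
        raise

with tempfile.TemporaryDirectory() as d:
    snip = Path(d) / "https.caddyfile"
    def boom():
        raise RuntimeError("caddy reload failed")
    try:
        toggle_with_rollback(snip, "myhost { tls internal }\n", boom)
    except RuntimeError:
        pass
    # Reload failed, so the snippet was rolled back to "absent".
    assert not snip.exists()
```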

furtka/install_runner.py Normal file
View file

@ -0,0 +1,121 @@
"""Background job for app installs — progress-visible via state file.
The slow part of installing an app is `docker compose pull` on a large
image (Jellyfin ~500 MB); without progress feedback, the UI modal sits
dead on "Installing…" for 30+ seconds and the user wonders if it hung.
This module mirrors the exact same shape as ``furtka.catalog`` and
``furtka.updater`` so the UI can poll an install just like it polls a
catalog sync or a self-update. The split is:
- ``furtka.api._do_install`` runs synchronously: resolve source, copy
the app folder, write .env, validate path settings + placeholders.
Those are fast, and their failures deserve an immediate 4xx so the
install modal can surface them in-line.
- After that the API writes an initial state file (stage
"pulling_image") and dispatches ``systemd-run --unit=furtka-install-
<name>`` to run ``furtka app install-bg <name>`` in the background.
That CLI subcommand is what calls ``run_install()`` here — it does the
docker-facing phases and writes state transitions as it goes.
State file schema (``/var/lib/furtka/install-state.json``):
{
"stage": "pulling_image" | "creating_volumes"
| "starting_container" | "done" | "error",
"updated_at": "2026-04-21T17:30:45+0200",
"app": "jellyfin",
"version": "1.0.0", // added at "done"
"error": "details..." // added at "error"
}
Lock: ``/run/furtka/install.lock`` (tmpfs, reboot-safe). Global, not
per-app — two parallel installs are not a v1 use-case and the lock
keeps the state-file representation simple (one in-flight install at
a time).
"""
from __future__ import annotations
import fcntl
import json
import os
import time
from pathlib import Path
from furtka import dockerops
from furtka.manifest import load_manifest
from furtka.paths import apps_dir
_INSTALL_STATE = Path(os.environ.get("FURTKA_INSTALL_STATE", "/var/lib/furtka/install-state.json"))
_LOCK_PATH = Path(os.environ.get("FURTKA_INSTALL_LOCK", "/run/furtka/install.lock"))
class InstallRunnerError(RuntimeError):
"""Any failure in the background install flow that should surface to the caller."""
def state_path() -> Path:
return _INSTALL_STATE
def lock_path() -> Path:
return _LOCK_PATH
def write_state(stage: str, **extra) -> None:
"""Atomic JSON state write — same shape as catalog/update-state."""
state_path().parent.mkdir(parents=True, exist_ok=True)
tmp = state_path().with_suffix(".tmp")
payload = {"stage": stage, "updated_at": time.strftime("%Y-%m-%dT%H:%M:%S%z"), **extra}
tmp.write_text(json.dumps(payload, indent=2))
tmp.replace(state_path())
def read_state() -> dict:
try:
return json.loads(state_path().read_text())
except (FileNotFoundError, json.JSONDecodeError):
return {}
def acquire_lock():
path = lock_path()
path.parent.mkdir(parents=True, exist_ok=True)
fh = path.open("w")
try:
fcntl.flock(fh, fcntl.LOCK_EX | fcntl.LOCK_NB)
except BlockingIOError as e:
fh.close()
raise InstallRunnerError("another install is already in progress") from e
return fh
def run_install(name: str) -> None:
"""Docker-facing phases of the install: pull → volumes → compose up.
Called by the ``furtka app install-bg <name>`` CLI subcommand from the
systemd-run spawned by the API. Assumes the API has already run
``installer.install_from()``, so the app folder, .env, and manifest
are on disk at ``apps_dir() / <name>``.
Every phase transition is written to the state file for the UI to
poll. On exception the state flips to ``"error"`` with the message,
then the exception is re-raised so the CLI exits non-zero and
journald has a traceback.
"""
with acquire_lock():
target = apps_dir() / name
manifest = load_manifest(target / "manifest.json", expected_name=name)
try:
write_state("pulling_image", app=name)
dockerops.compose_pull(target, name)
write_state("creating_volumes", app=name)
for short in manifest.volumes:
dockerops.ensure_volume(manifest.volume_name(short))
write_state("starting_container", app=name)
dockerops.compose_up(target, name)
write_state("done", app=name, version=manifest.version)
except Exception as e:
write_state("error", app=name, error=str(e))
raise
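On the other side of the state file, a UI or script just polls until the stage goes terminal. A minimal sketch of such a poller (the `wait_for_install` helper and its timeout values are illustrative, not part of the furtka API):

```python
import json
import time
from pathlib import Path

# Stages written by write_state() that end an install.
TERMINAL_STAGES = {"done", "error"}

def wait_for_install(state_file: Path, timeout_s: float = 600.0,
                     poll_s: float = 1.0) -> dict:
    """Poll *state_file* until the stage is terminal or *timeout_s* elapses."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        try:
            state = json.loads(state_file.read_text())
        except (FileNotFoundError, json.JSONDecodeError):
            # Not written yet, or caught mid-rename; treat as "no state".
            state = {}
        if state.get("stage") in TERMINAL_STAGES:
            return state
        time.sleep(poll_s)
    return {"stage": "timeout"}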

95
furtka/passwd.py Normal file
View file

@ -0,0 +1,95 @@
"""Stdlib-only password hashing, compatible with werkzeug's hash format.
Why this exists: 26.11-alpha introduced auth via ``werkzeug.security``,
but the target system doesn't have ``werkzeug`` installed (Core runs as
system Python with only the stdlib; pyproject.toml's ``flask>=3.0``
dep is never pip-installed on the box). Fresh installs from a 26.11 /
26.12 ISO crashed on import; upgrades from pre-auth versions were
double-broken by that plus a too-strict updater health check.
Fix: replace werkzeug with stdlib equivalents using the same hash
**format** so existing ``users.json`` files created by 26.11 / 26.12 on
the rare boxes that happened to have werkzeug installed (Medion, .196
after manual pacman) still verify.
Format: ``<method>$<salt>$<hex digest>``
- ``pbkdf2:<hash>:<iterations>``: what we generate by default here
- ``scrypt:<N>:<r>:<p>``: what werkzeug's default produces
Both are implemented via ``hashlib``; ``pbkdf2_hmac`` has been stdlib
since 3.4 and ``scrypt`` since 3.6.
"""
from __future__ import annotations
import hashlib
import hmac
import secrets
_PBKDF2_HASH = "sha256"
_PBKDF2_ITERATIONS = 600_000
_SALT_LEN = 16
def hash_password(password: str) -> str:
"""Return a ``pbkdf2:sha256:<iter>$<salt>$<hex>`` hash of *password*.
PBKDF2-SHA256 over UTF-8. 600k iterations, the same as werkzeug's
default in the 3.x series and roughly OWASP's 2023 recommendation.
"""
if not isinstance(password, str):
raise TypeError("password must be str")
salt = secrets.token_urlsafe(_SALT_LEN)[:_SALT_LEN]
dk = hashlib.pbkdf2_hmac(
_PBKDF2_HASH, password.encode("utf-8"), salt.encode("utf-8"), _PBKDF2_ITERATIONS
)
return f"pbkdf2:{_PBKDF2_HASH}:{_PBKDF2_ITERATIONS}${salt}${dk.hex()}"
def verify_password(password: str, hashed: str) -> bool:
"""Constant-time verify *password* against a stored *hashed* value.
Accepts both our own pbkdf2 hashes and legacy werkzeug scrypt
hashes in ``scrypt:N:r:p$salt$hex`` form so users.json files
written by 26.11 / 26.12 keep working after upgrade.
"""
if not isinstance(password, str) or not isinstance(hashed, str):
return False
try:
method, salt, expected = hashed.split("$", 2)
except ValueError:
return False
parts = method.split(":")
if not parts:
return False
algo = parts[0]
pw_bytes = password.encode("utf-8")
salt_bytes = salt.encode("utf-8")
try:
if algo == "pbkdf2":
if len(parts) < 3:
return False
inner_hash = parts[1]
iterations = int(parts[2])
dk = hashlib.pbkdf2_hmac(inner_hash, pw_bytes, salt_bytes, iterations)
elif algo == "scrypt":
# werkzeug: scrypt:N:r:p, dklen=64, maxmem=132 MiB. Without
# the explicit maxmem we'd hit OpenSSL's default memory cap
# and throw ValueError on N >= 32768.
if len(parts) < 4:
return False
n = int(parts[1])
r = int(parts[2])
p = int(parts[3])
dk = hashlib.scrypt(
pw_bytes,
salt=salt_bytes,
n=n,
r=r,
p=p,
dklen=64,
maxmem=132 * 1024 * 1024,
)
else:
return False
except (ValueError, TypeError, OverflowError):
return False
return hmac.compare_digest(dk.hex(), expected)
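The hash format above can be exercised end-to-end with nothing but the stdlib. A round-trip sketch (the inline re-derivation mirrors `hash_password`/`verify_password` for illustration; it is not a second implementation to ship, and the `hunter2` password is just a placeholder):

```python
import hashlib
import hmac
import secrets

# Generate: pbkdf2:sha256:<iterations>$<salt>$<hex digest>
salt = secrets.token_urlsafe(16)[:16]
iterations = 600_000
dk = hashlib.pbkdf2_hmac("sha256", b"hunter2", salt.encode(), iterations)
stored = f"pbkdf2:sha256:{iterations}${salt}${dk.hex()}"

# Verify: parse the parameters back out of the string and re-derive.
method, s, expected = stored.split("$", 2)
_, inner_hash, iters = method.split(":")
check = hashlib.pbkdf2_hmac(inner_hash, b"hunter2", s.encode(), int(iters))
assert hmac.compare_digest(check.hex(), expected)
```

Because all parameters travel inside the stored string, iteration counts can be raised later without invalidating old hashes.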

View file

@ -16,6 +16,10 @@ DEFAULT_CATALOG_DIR = Path("/var/lib/furtka/catalog")
# enforced by furtka.auth.save_users (same atomic-write pattern as the app
# .env files).
DEFAULT_USERS_FILE = Path("/var/lib/furtka/users.json")
# Static-web asset dir served by the Python handler for / and
# /settings* so those pages pick up the auth-guard. Caddy also serves
# /style.css and other assets directly from here for the login page.
DEFAULT_STATIC_WWW = Path("/opt/furtka/current/assets/www")
def apps_dir() -> Path:
@ -36,3 +40,7 @@ def catalog_apps_dir() -> Path:
def users_file() -> Path:
return Path(os.environ.get("FURTKA_USERS_FILE", DEFAULT_USERS_FILE))
def static_www_dir() -> Path:
return Path(os.environ.get("FURTKA_STATIC_WWW", DEFAULT_STATIC_WWW))

View file

@ -49,6 +49,9 @@ _CADDYFILE_LIVE = Path(os.environ.get("FURTKA_CADDYFILE_PATH", "/etc/caddy/Caddy
_CADDY_SNIPPET_DIR = Path(
os.environ.get("FURTKA_CADDY_SNIPPET_DIR", str(_CADDYFILE_LIVE.parent / "furtka.d"))
)
_CADDY_HTTPS_SNIPPET_DIR = Path(
os.environ.get("FURTKA_CADDY_HTTPS_SNIPPET_DIR", str(_CADDYFILE_LIVE.parent / "furtka-https.d"))
)
_SYSTEMD_DIR = Path(os.environ.get("FURTKA_SYSTEMD_DIR", "/etc/systemd/system"))
_HOSTNAME_FILE = Path(os.environ.get("FURTKA_HOSTNAME_FILE", "/etc/hostname"))
_CADDYFILE_HOSTNAME_MARKER = "__FURTKA_HOSTNAME__"
@ -170,6 +173,24 @@ def _current_hostname() -> str:
return name or "furtka"
def _maybe_migrate_preserve_https() -> None:
"""26.14 → 26.15 migration: if the box already had the force-HTTPS
redirect snippet on disk, that means the user explicitly opted
into HTTPS under the old regime. Under the new opt-in regime,
HTTPS also requires a separate listener snippet; write it here so
the user's HTTPS doesn't silently break when the Caddyfile refresh
removes the default hostname block.
"""
redirect_snippet = _CADDY_SNIPPET_DIR / "redirect.caddyfile"
https_snippet = _CADDY_HTTPS_SNIPPET_DIR / "https.caddyfile"
if not redirect_snippet.is_file() or https_snippet.is_file():
return
hostname = _current_hostname()
https_snippet.write_text(
f"{hostname}.local, {hostname} {{\n\ttls internal\n\timport furtka_routes\n}}\n"
)
def _refresh_caddyfile(source: Path) -> bool:
"""Copy the shipped Caddyfile to /etc/caddy/ iff it differs. Returns True
if the file changed (so caddy needs more than a bare reload).
@ -180,10 +201,19 @@ def _refresh_caddyfile(source: Path) -> bool:
"""
if not source.is_file():
return False
# Snippet dir for the /api/furtka/https/force toggle. Pre-HTTPS installs
# don't have this dir; ensure it so the Caddyfile's glob import can't
# trip an older Caddy on a missing path during the first reload.
# Snippet dirs for the /api/furtka/https/force toggle. Pre-HTTPS
# installs don't have them; ensure both so the Caddyfile's glob
# imports can't trip an older Caddy on missing paths during the
# first reload. furtka-https.d is new in 26.15-alpha — older boxes
# upgrading across this version line won't have it on disk yet.
_CADDY_SNIPPET_DIR.mkdir(mode=0o755, parents=True, exist_ok=True)
_CADDY_HTTPS_SNIPPET_DIR.mkdir(mode=0o755, parents=True, exist_ok=True)
# Migration: pre-26.15 Caddyfile always served :443 via tls internal,
# so a box that had the "force HTTPS" redirect toggle ON relied on
# HTTPS being there implicitly. After this Caddyfile refresh the
# hostname block is gone, so the redirect would 301 to a dead :443.
# Preserve intent by writing the HTTPS listener snippet too.
_maybe_migrate_preserve_https()
rendered = source.read_text().replace(_CADDYFILE_HOSTNAME_MARKER, _current_hostname())
if _CADDYFILE_LIVE.is_file() and rendered == _CADDYFILE_LIVE.read_text():
return False
@ -255,13 +285,35 @@ def _run(cmd: list[str]) -> None:
def _health_check(url: str, deadline_s: float = 30.0) -> bool:
"""Poll *url* until we get *any* response from the Python server.
Treats any 2xx-4xx response as "server is up". A 401 on
/api/apps after the 26.11-alpha auth-guard shipped is a perfectly
valid signal that the new code imported + the socket is listening;
rejecting the request is still "alive". Only 5xx or connection-
level failures count as unhealthy.
Rationale: pre-26.13 this function hit /api/apps and expected 200,
which silently broke every upgrade across the auth boundary (26.10 →
26.11+) and auto-rolled back. Now we just need proof the new
process came up.
"""
end = time.time() + deadline_s
while time.time() < end:
try:
with urllib.request.urlopen(url, timeout=3) as resp:
if resp.status == 200:
# Any 2xx/3xx → alive. urllib follows redirects by
# default, so a 302 → /login resolves to /login's 200.
if resp.status < 500:
return True
except urllib.error.HTTPError as e:
# 4xx → server is up, just refused us (auth, bad request,
# whatever). Counts as healthy for the "did it come back"
# check. 5xx → genuinely broken, don't accept.
if 400 <= e.code < 500:
return True
except urllib.error.URLError:
# Connection refused / DNS / timeout → not up yet, retry.
pass
time.sleep(1)
return False
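The "any non-5xx means alive" rule can be demonstrated against a throwaway local server that always answers 401, mimicking the post-auth-guard `/api/apps`. This is a self-contained sketch (the `_Always401` handler and `is_alive` helper are test scaffolding, not part of the updater):

```python
import threading
import urllib.error
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class _Always401(BaseHTTPRequestHandler):
    """Stand-in for an auth-guarded endpoint: up, but refusing us."""
    def do_GET(self):
        self.send_response(401)
        self.end_headers()
    def log_message(self, *args):  # keep test output quiet
        pass

def is_alive(url: str) -> bool:
    try:
        with urllib.request.urlopen(url, timeout=3) as resp:
            return resp.status < 500          # any 2xx/3xx: alive
    except urllib.error.HTTPError as e:
        return 400 <= e.code < 500            # refused, but the process is up
    except urllib.error.URLError:
        return False                          # connection-level failure: not up

server = HTTPServer(("127.0.0.1", 0), _Always401)
threading.Thread(target=server.serve_forever, daemon=True).start()
alive = is_alive(f"http://127.0.0.1:{server.server_port}/api/apps")
server.shutdown()
```

Note that `urllib` surfaces 4xx as `HTTPError` rather than a response object, which is why the health check needs the separate `except` branch.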

View file

@ -54,7 +54,7 @@ mDNS is wired: `avahi-daemon` + `nss-mdns` come from `packages.extra`, the live
Once `archinstall` finishes and you click **Reboot now**, the VM comes up into the installed system. No more port `:5000` — the wizard ISO is gone. Instead:
- **Console**: agetty shows `Furtka is ready. Open http://<hostname>.local …` with the IP fallback underneath.
- **Browser** at `http://<hostname>.local` (default `http://furtka.local` — the form's default hostname is `furtka`; only the live-installer ISO uses `proksi`): Caddy-served landing page with three live status tiles (uptime, Docker version, free disk) refreshed every 30 s by `furtka-status.timer`. Since 26.4-alpha, `https://<hostname>.local` is also served via Caddy's `tls internal` trust `rootCA.crt` from `/settings` to clear browser warnings.
- **Browser** at `http://<hostname>.local` (default `http://furtka.local` — the form's default hostname is `furtka`; only the live-installer ISO uses `proksi`): Caddy-served landing page with three live status tiles (uptime, Docker version, free disk) refreshed every 30 s by `furtka-status.timer`. HTTPS is opt-in (26.15-alpha) — flip the toggle in `/settings` to switch on Caddy's `tls internal` on `:443`, then trust `rootCA.crt` from `/settings` to clear browser warnings.
- **SSH**: `ssh <user>@<hostname>.local` works; `docker ps` works without `sudo` because the user is in the `docker` group.
This is a demo shell — no Authentik, no app store yet. The landing page lives at `/srv/furtka/www/`, served by Caddy on `:80` per `/etc/caddy/Caddyfile`. All of this is written into the target by `webinstaller/app.py`'s `_post_install_commands` via archinstall's `custom_commands`.
@ -62,5 +62,4 @@ This is a demo shell — no Authentik, no app store yet. The landing page lives
## Known rough edges
- **Disk space**: the first time you build on a fresh host, the squashfs/xorriso steps need ~15 GB free. If the host's LVM-root is smaller, `xorriso` silently dies at the very end with "Image size exceeds free space on media".
- **Live-installer wizard is still HTTP-only**. `http://proksi.local:5000` during install has no TLS; the installed box gets Caddy + `tls internal` on `:443` once it reboots (26.4-alpha), but bringing the same story to the wizard itself is a later milestone.
- **Boot USB could appear as an install target on bare metal**. On a VM the ISO is a CD-ROM (filtered) and SATA is the only disk, so the picker only shows the install target. On bare metal with a USB stick, the USB is `TYPE=disk` and shows up alongside the real install drive; a user could in theory pick the USB they just booted from. Mitigating this needs detecting the boot media (via `findmnt /run/archiso/bootmnt` or similar) and filtering it out in `webinstaller/drives.py`.
- **Live-installer wizard is still HTTP-only**. `http://proksi.local:5000` during install has no TLS; once the box reboots, Caddy can serve `tls internal` on `:443` if the user opts in via `/settings` (26.15-alpha), but bringing TLS to the wizard itself is a later milestone.

View file

@ -8,6 +8,23 @@ server {
charset utf-8;
gzip on;
gzip_vary on;
gzip_min_length 1024;
gzip_comp_level 6;
gzip_types
text/css
text/plain
text/xml
application/javascript
application/json
application/xml
application/rss+xml
application/atom+xml
image/svg+xml
font/woff
font/woff2;
location / {
try_files $uri $uri/ $uri.html =404;
}

View file

@ -1,6 +1,6 @@
[project]
name = "furtka"
version = "26.11-alpha"
version = "26.15-alpha"
description = "Open-source home server OS — simple enough for everyone."
requires-python = ">=3.11"
readme = "README.md"

View file

@ -24,15 +24,34 @@ def fake_dirs(tmp_path, monkeypatch):
bundled = tmp_path / "bundled"
catalog = tmp_path / "catalog"
users_file = tmp_path / "users.json"
static_www = tmp_path / "www"
apps.mkdir()
bundled.mkdir()
static_www.mkdir()
(static_www / "index.html").write_text("<html>landing page</html>")
(static_www / "settings").mkdir()
(static_www / "settings" / "index.html").write_text("<html>settings page</html>")
monkeypatch.setenv("FURTKA_APPS_DIR", str(apps))
monkeypatch.setenv("FURTKA_BUNDLED_APPS_DIR", str(bundled))
monkeypatch.setenv("FURTKA_CATALOG_DIR", str(catalog))
monkeypatch.setenv("FURTKA_USERS_FILE", str(users_file))
# Scrub any sessions that leaked from a prior test — the SESSIONS
# store is module-level.
monkeypatch.setenv("FURTKA_STATIC_WWW", str(static_www))
# install_runner writes to /var/lib/furtka/install-state.json and
# /run/furtka/install.lock by default — redirect into tmp_path so
# test code doesn't need root.
monkeypatch.setenv("FURTKA_INSTALL_STATE", str(tmp_path / "install-state.json"))
monkeypatch.setenv("FURTKA_INSTALL_LOCK", str(tmp_path / "install.lock"))
# install_runner caches env vars at import time, so reload it to
# pick up the tmp-path env vars this fixture just set.
import importlib
from furtka import install_runner
importlib.reload(install_runner)
# Scrub any sessions or lockout counters that leaked from a prior
# test — both stores are module-level.
auth.SESSIONS.clear()
auth.LOCKOUT.clear_all()
return apps, bundled
@ -53,6 +72,29 @@ def no_docker(monkeypatch):
monkeypatch.setattr(dockerops, "compose_down", lambda app_dir, project: None)
@pytest.fixture
def no_systemd_run(monkeypatch):
"""Stub the systemd-run dispatch in _do_install so tests don't need it.
The install endpoint now spawns a background systemd-run unit to do
the docker-facing phases. Tests that exercise the install path only
care that the sync pre-phase succeeded and the dispatch was
    attempted with the right args; they shouldn't actually fire up
systemd. subprocess.run gets monkeypatched to return a fake success
CompletedProcess, and the call args get captured for assertions.
"""
import subprocess
calls = []
def fake_run(cmd, check=False, capture_output=False, text=False, **kwargs):
calls.append(cmd)
return subprocess.CompletedProcess(cmd, 0, stdout="", stderr="")
monkeypatch.setattr(subprocess, "run", fake_run)
return calls
def _write_bundled(bundled, name, manifest=None, env_example=None):
app = bundled / name
app.mkdir()
@ -145,7 +187,7 @@ def test_list_available_inlines_icon_svg(fake_dirs):
assert entry["icon_svg"] == _SIMPLE_SVG
def test_list_installed_inlines_icon_svg(fake_dirs, no_docker):
def test_list_installed_inlines_icon_svg(fake_dirs, no_docker, no_systemd_run):
apps, bundled = fake_dirs
app = _write_bundled(bundled, "fileshare", env_example="A=real")
_write_icon(app, _SIMPLE_SVG)
@ -154,12 +196,15 @@ def test_list_installed_inlines_icon_svg(fake_dirs, no_docker):
assert entry["icon_svg"] == _SIMPLE_SVG
def test_list_available_hides_already_installed(fake_dirs, no_docker):
def test_list_available_hides_already_installed(fake_dirs, no_docker, no_systemd_run):
apps, bundled = fake_dirs
_write_bundled(bundled, "fileshare", env_example="A=real")
status, _ = api._do_install("fileshare")
assert status == 200
# Now bundled should NOT include fileshare anymore.
assert status == 202 # async dispatch
# Now bundled should NOT include fileshare anymore — the app folder
# exists on disk (install_from finished synchronously before the
# dispatch), which is what _list_available uses for the "installed"
# check.
assert api._list_available() == []
# But installed list should.
installed = api._list_installed()
@ -202,7 +247,7 @@ def test_remove_endpoint_unknown(fake_dirs, no_docker):
assert status == 404
def test_remove_endpoint_happy_path(fake_dirs, no_docker):
def test_remove_endpoint_happy_path(fake_dirs, no_docker, no_systemd_run):
apps, bundled = fake_dirs
_write_bundled(bundled, "fileshare", env_example="A=real")
api._do_install("fileshare")
@ -240,7 +285,7 @@ def test_http_get_apps_route(fake_dirs, no_docker, admin_session):
assert r.status == 200
data = json.loads(r.read())
assert data == []
with urllib.request.urlopen(_request(port, "/", cookie=admin_session)) as r:
with urllib.request.urlopen(_request(port, "/apps", cookie=admin_session)) as r:
assert r.status == 200
assert b"Furtka Apps" in r.read()
# Unknown route → 404 JSON.
@ -327,6 +372,82 @@ class _NoRedirectHandler(urllib.request.HTTPRedirectHandler):
return None
def test_unauth_root_redirects_to_login(fake_dirs):
"""/ was previously Caddy-direct static HTML, bypassing auth. Now
    Python serves it and the auth-guard applies; an unauth visitor gets
bounced to /login just like /apps does."""
server, port = _start_server()
try:
opener = urllib.request.build_opener(_NoRedirectHandler())
try:
opener.open(_request(port, "/"))
raise AssertionError("expected 302")
except urllib.error.HTTPError as e:
assert e.code == 302
assert e.headers["Location"] == "/login"
finally:
server.shutdown()
server.server_close()
def test_unauth_settings_redirects_to_login(fake_dirs):
server, port = _start_server()
try:
opener = urllib.request.build_opener(_NoRedirectHandler())
for path in ("/settings", "/settings/"):
try:
opener.open(_request(port, path))
raise AssertionError(f"expected 302 for {path}")
except urllib.error.HTTPError as e:
assert e.code == 302
assert e.headers["Location"] == "/login"
finally:
server.shutdown()
server.server_close()
def test_authed_root_serves_static_index(fake_dirs, admin_session):
server, port = _start_server()
try:
with urllib.request.urlopen(_request(port, "/", cookie=admin_session)) as r:
assert r.status == 200
assert r.read() == b"<html>landing page</html>"
finally:
server.shutdown()
server.server_close()
def test_authed_settings_serves_static(fake_dirs, admin_session):
server, port = _start_server()
try:
for path in ("/settings", "/settings/"):
with urllib.request.urlopen(_request(port, path, cookie=admin_session)) as r:
assert r.status == 200
assert r.read() == b"<html>settings page</html>"
finally:
server.shutdown()
server.server_close()
def test_authed_root_does_not_serve_apps_html(fake_dirs, admin_session):
"""Regression guard: the pre-26.14 do_GET had `if self.path in ("/",
"/apps", ...)` which served _HTML (the apps page) for / too, since
Caddy wasn't proxying / so nobody noticed. Now that Caddy does
proxy /, the two paths must serve different content."""
server, port = _start_server()
try:
with urllib.request.urlopen(_request(port, "/", cookie=admin_session)) as r:
root_body = r.read()
with urllib.request.urlopen(_request(port, "/apps", cookie=admin_session)) as r:
apps_body = r.read()
assert root_body != apps_body
assert b"Furtka Apps" in apps_body
assert b"landing page" in root_body
finally:
server.shutdown()
server.server_close()
def test_get_login_renders_login_form_when_admin_exists(fake_dirs):
auth.create_admin("daniel", "hunter2-pw")
server, port = _start_server()
@ -480,6 +601,130 @@ def test_post_login_rejects_wrong_password(fake_dirs):
server.server_close()
def _post_wrong_login(port, username="daniel", password="nope"):
req = _request(
port,
"/login",
method="POST",
body={"username": username, "password": password},
)
try:
urllib.request.urlopen(req)
raise AssertionError("expected HTTPError")
except urllib.error.HTTPError as e:
return e
def test_post_login_locks_out_after_repeated_failures(fake_dirs, monkeypatch):
auth.create_admin("daniel", "hunter2-pw")
# Flatten the 0.5s speed-bump so the test doesn't take 5 seconds.
monkeypatch.setattr(api.time, "sleep", lambda _s: None)
server, port = _start_server()
try:
for _ in range(auth.LoginAttempts.MAX_FAILURES):
err = _post_wrong_login(port)
assert err.code == 401
err = _post_wrong_login(port)
assert err.code == 429
assert err.headers.get("Retry-After") is not None
assert int(err.headers["Retry-After"]) > 0
finally:
server.shutdown()
server.server_close()
def test_post_login_429_masks_correctness(fake_dirs, monkeypatch):
"""Once locked, the correct password must also get 429 — no oracle."""
auth.create_admin("daniel", "hunter2-pw")
monkeypatch.setattr(api.time, "sleep", lambda _s: None)
server, port = _start_server()
try:
for _ in range(auth.LoginAttempts.MAX_FAILURES):
_post_wrong_login(port)
req = _request(
port,
"/login",
method="POST",
body={"username": "daniel", "password": "hunter2-pw"},
)
try:
urllib.request.urlopen(req)
raise AssertionError("expected 429")
except urllib.error.HTTPError as e:
assert e.code == 429
finally:
server.shutdown()
server.server_close()
def test_post_login_success_clears_lockout_counter(fake_dirs, monkeypatch):
auth.create_admin("daniel", "hunter2-pw")
monkeypatch.setattr(api.time, "sleep", lambda _s: None)
server, port = _start_server()
try:
# Get close to the threshold, then log in successfully.
for _ in range(auth.LoginAttempts.MAX_FAILURES - 1):
_post_wrong_login(port)
req = _request(
port,
"/login",
method="POST",
body={"username": "daniel", "password": "hunter2-pw"},
)
with urllib.request.urlopen(req) as r:
assert r.status == 200
# Counter must have been cleared: another full MAX_FAILURES-1
# fails shouldn't trigger 429.
for _ in range(auth.LoginAttempts.MAX_FAILURES - 1):
err = _post_wrong_login(port)
assert err.code == 401
finally:
server.shutdown()
server.server_close()
def test_post_login_setup_not_rate_limited(fake_dirs, monkeypatch):
"""First-run setup is never auth-ed against a hash, so the lockout
    must not apply; otherwise a clumsy admin could lock themselves out
of a box that has no admin yet."""
monkeypatch.setattr(api.time, "sleep", lambda _s: None)
server, port = _start_server()
try:
# Many mismatched setup submissions (400s) — no 429 should appear.
for _ in range(auth.LoginAttempts.MAX_FAILURES + 3):
req = _request(
port,
"/login",
method="POST",
body={
"username": "daniel",
"password": "longenough",
"password2": "different",
},
)
try:
urllib.request.urlopen(req)
raise AssertionError("expected 400")
except urllib.error.HTTPError as e:
assert e.code == 400
# Then a good setup still succeeds.
req = _request(
port,
"/login",
method="POST",
body={
"username": "daniel",
"password": "longenough",
"password2": "longenough",
},
)
with urllib.request.urlopen(req) as r:
assert r.status == 200
finally:
server.shutdown()
server.server_close()
def test_post_logout_revokes_session(fake_dirs, admin_session):
server, port = _start_server()
try:
@ -562,13 +807,13 @@ def test_get_settings_not_found(fake_dirs):
assert status == 404
def test_install_with_settings_writes_env_via_api(fake_dirs, no_docker):
def test_install_with_settings_writes_env_via_api(fake_dirs, no_docker, no_systemd_run):
_, bundled = fake_dirs
_write_bundled(bundled, "fileshare", manifest=SETTINGS_MANIFEST)
status, body = api._do_install(
"fileshare", settings={"SMB_USER": "alice", "SMB_PASSWORD": "s3cret"}
)
assert status == 200, body
assert status == 202, body
apps, _ = fake_dirs
env = (apps / "fileshare" / ".env").read_text()
assert "SMB_USER=alice" in env
@ -583,7 +828,7 @@ def test_install_with_settings_rejects_empty_required_via_api(fake_dirs, no_dock
assert "SMB_PASSWORD" in body["error"]
def test_update_settings_merges(fake_dirs, no_docker):
def test_update_settings_merges(fake_dirs, no_docker, no_systemd_run):
_, bundled = fake_dirs
_write_bundled(bundled, "fileshare", manifest=SETTINGS_MANIFEST)
api._do_install("fileshare", settings={"SMB_USER": "alice", "SMB_PASSWORD": "original"})
@ -665,7 +910,7 @@ def test_update_not_installed(fake_dirs):
assert "not installed" in body["error"]
def test_update_no_changes(fake_dirs, no_docker, update_docker_stubs):
def test_update_no_changes(fake_dirs, no_docker, no_systemd_run, update_docker_stubs):
_, bundled = fake_dirs
_write_bundled(bundled, "fileshare", env_example="A=real")
api._do_install("fileshare")
@ -678,7 +923,7 @@ def test_update_no_changes(fake_dirs, no_docker, update_docker_stubs):
assert update_docker_stubs["up_called"] == 0
def test_update_changes_applied(fake_dirs, no_docker, update_docker_stubs):
def test_update_changes_applied(fake_dirs, no_docker, no_systemd_run, update_docker_stubs):
_, bundled = fake_dirs
_write_bundled(bundled, "fileshare", env_example="A=real")
api._do_install("fileshare")
@ -698,7 +943,9 @@ def test_update_changes_applied(fake_dirs, no_docker, update_docker_stubs):
assert update_docker_stubs["up_called"] == 1
def test_update_skips_services_not_running(fake_dirs, no_docker, update_docker_stubs):
def test_update_skips_services_not_running(
fake_dirs, no_docker, no_systemd_run, update_docker_stubs
):
_, bundled = fake_dirs
_write_bundled(bundled, "fileshare", env_example="A=real")
api._do_install("fileshare")
@ -712,7 +959,9 @@ def test_update_skips_services_not_running(fake_dirs, no_docker, update_docker_s
assert update_docker_stubs["up_called"] == 0
def test_update_returns_502_on_pull_error(fake_dirs, no_docker, update_docker_stubs):
def test_update_returns_502_on_pull_error(
fake_dirs, no_docker, no_systemd_run, update_docker_stubs
):
_, bundled = fake_dirs
_write_bundled(bundled, "fileshare", env_example="A=real")
api._do_install("fileshare")
@ -823,7 +1072,9 @@ def test_furtka_update_status_endpoint(stub_furtka_updater):
assert stub_furtka_updater["status_called"] == 1
def test_http_post_update_route(fake_dirs, no_docker, update_docker_stubs, admin_session):
def test_http_post_update_route(
fake_dirs, no_docker, no_systemd_run, update_docker_stubs, admin_session
):
_, bundled = fake_dirs
_write_bundled(bundled, "fileshare", env_example="A=real")
api._do_install("fileshare")
@ -851,7 +1102,7 @@ def test_http_post_update_route(fake_dirs, no_docker, update_docker_stubs, admin
server.server_close()
def test_http_post_install_with_settings(fake_dirs, no_docker, admin_session):
def test_http_post_install_with_settings(fake_dirs, no_docker, no_systemd_run, admin_session):
_, bundled = fake_dirs
_write_bundled(bundled, "fileshare", manifest=SETTINGS_MANIFEST)
server = api.HTTPServer(("127.0.0.1", 0), api._Handler)
@ -870,14 +1121,80 @@ def test_http_post_install_with_settings(fake_dirs, no_docker, admin_session):
},
)
with urllib.request.urlopen(req) as r:
assert r.status == 200
# Async: 202 Accepted + dispatched background job.
assert r.status == 202
body = json.loads(r.read())
assert body["status"] == "dispatched"
assert body["unit"] == "furtka-install-fileshare"
# Sync phase wrote the .env before dispatch.
apps, _ = fake_dirs
assert "SMB_PASSWORD=s3cret" in (apps / "fileshare" / ".env").read_text()
# And systemd-run was called exactly once with the expected cmd.
assert len(no_systemd_run) == 1
assert no_systemd_run[0][:4] == [
"systemd-run",
"--unit=furtka-install-fileshare",
"--no-block",
"--collect",
]
assert no_systemd_run[0][-3:] == ["app", "install-bg", "fileshare"]
finally:
server.shutdown()
server.server_close()
def test_do_install_returns_409_when_locked(fake_dirs, no_docker, no_systemd_run):
_, bundled = fake_dirs
_write_bundled(bundled, "fileshare", env_example="A=real")
# Hold the install lock so _do_install fast-fails.
fh = api.install_runner.acquire_lock()
try:
status, body = api._do_install("fileshare")
assert status == 409
assert "in progress" in body["error"]
finally:
fh.close()
def test_do_install_returns_409_when_state_reports_running(fake_dirs, no_docker, no_systemd_run):
"""Closes the race window where _do_install had already released
the fcntl lock (so the systemd-run child could grab it) but a
second POST tried to start a new install while the first was still
mid-flight. The state file's non-terminal stage is the reliable
"someone else is installing" signal."""
_, bundled = fake_dirs
_write_bundled(bundled, "fileshare", env_example="A=real")
api.install_runner.write_state("pulling_image", app="jellyfin")
status, body = api._do_install("fileshare")
assert status == 409
assert "in progress" in body["error"]
assert "jellyfin" in body["error"]
assert "pulling_image" in body["error"]
def test_do_install_goes_through_after_terminal_state(fake_dirs, no_docker, no_systemd_run):
"""After a successful or failed install, the state file stays at
    done/error; a new install must be accepted, not blocked."""
_, bundled = fake_dirs
_write_bundled(bundled, "fileshare", env_example="A=real")
api.install_runner.write_state("done", app="previous", version="1.0.0")
status, _ = api._do_install("fileshare")
assert status == 202
api.install_runner.write_state("error", app="previous", error="oops")
status, _ = api._do_install("fileshare")
assert status == 202
def test_do_install_status_returns_state(fake_dirs):
# Write state directly, then GET it via the status handler.
api.install_runner.write_state("pulling_image", app="jellyfin")
status, body = api._do_install_status()
assert status == 200
assert body["stage"] == "pulling_image"
assert body["app"] == "jellyfin"
# --- Catalog endpoints ------------------------------------------------------

View file

@ -10,9 +10,11 @@ from furtka import auth
def tmp_users_file(tmp_path, monkeypatch):
path = tmp_path / "users.json"
monkeypatch.setenv("FURTKA_USERS_FILE", str(path))
# Sessions are module-level; wipe between tests so one doesn't leak a
# valid token into the next.
# Sessions and lockout state are module-level; wipe between tests so
# one doesn't leak a valid token (or a stale failure counter) into
# the next.
auth.SESSIONS.clear()
auth.LOCKOUT.clear_all()
return path
@ -150,3 +152,79 @@ class _FakeDatetime:
if tz is None:
return self._fixed.replace(tzinfo=None)
return self._fixed.astimezone(tz)
# ---- Login attempts / lockout ----------------------------------------------
def test_lockout_under_threshold_still_allowed(tmp_users_file):
store = auth.LoginAttempts(max_failures=3, window_seconds=60, lockout_seconds=60)
key = ("daniel", "10.0.0.1")
for _ in range(2):
store.register_failure(key)
assert store.is_locked(key) is False
assert store.retry_after_seconds(key) == 0
def test_lockout_triggers_at_threshold(tmp_users_file):
store = auth.LoginAttempts(max_failures=3, window_seconds=60, lockout_seconds=60)
key = ("daniel", "10.0.0.1")
for _ in range(3):
store.register_failure(key)
assert store.is_locked(key) is True
assert store.retry_after_seconds(key) > 0
assert store.retry_after_seconds(key) <= 60
def test_lockout_window_decay(tmp_users_file, monkeypatch):
store = auth.LoginAttempts(max_failures=3, window_seconds=60, lockout_seconds=60)
key = ("daniel", "10.0.0.1")
for _ in range(3):
store.register_failure(key)
assert store.is_locked(key) is True
# Jump 2 minutes ahead — all failures are older than the window
# and should be pruned on the next check.
monkeypatch.setattr(
auth,
"datetime",
_FakeDatetime(datetime.now(UTC) + timedelta(seconds=121)),
)
assert store.is_locked(key) is False
assert store.retry_after_seconds(key) == 0
def test_lockout_clear_resets(tmp_users_file):
store = auth.LoginAttempts(max_failures=2, window_seconds=60, lockout_seconds=60)
key = ("daniel", "10.0.0.1")
store.register_failure(key)
store.register_failure(key)
assert store.is_locked(key) is True
store.clear(key)
assert store.is_locked(key) is False
assert store.retry_after_seconds(key) == 0
def test_lockout_keys_are_independent(tmp_users_file):
store = auth.LoginAttempts(max_failures=2, window_seconds=60, lockout_seconds=60)
locked = ("daniel", "1.1.1.1")
other_ip = ("daniel", "2.2.2.2")
other_user = ("robert", "1.1.1.1")
store.register_failure(locked)
store.register_failure(locked)
assert store.is_locked(locked) is True
assert store.is_locked(other_ip) is False
assert store.is_locked(other_user) is False
def test_lockout_clear_all_wipes_every_key(tmp_users_file):
store = auth.LoginAttempts(max_failures=2, window_seconds=60, lockout_seconds=60)
a = ("daniel", "1.1.1.1")
b = ("robert", "2.2.2.2")
store.register_failure(a)
store.register_failure(a)
store.register_failure(b)
store.register_failure(b)
assert store.is_locked(a) and store.is_locked(b)
store.clear_all()
assert not store.is_locked(a)
assert not store.is_locked(b)
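The lockout tests above pin down a small sliding-window store. A minimal sketch consistent with those assertions — names taken from the tests, but the internals (a timestamp list per key, pruning on every check, and an injectable `now` for testability) are assumptions, not the real `furtka.auth` implementation; note it also conflates the window and lockout expiry, which the tests allow because both are 60 s:

```python
import time
from collections import defaultdict

class LoginAttempts:
    """Sliding-window failed-login store keyed by (username, ip)."""

    def __init__(self, max_failures, window_seconds, lockout_seconds):
        self.max_failures = max_failures
        self.window_seconds = window_seconds
        self.lockout_seconds = lockout_seconds
        self._failures = defaultdict(list)  # key -> [monotonic timestamps]

    def _prune(self, key, now):
        # Drop failures older than the window; lockouts decay with them.
        cutoff = now - self.window_seconds
        self._failures[key] = [t for t in self._failures[key] if t > cutoff]

    def register_failure(self, key, now=None):
        now = time.monotonic() if now is None else now
        self._prune(key, now)
        self._failures[key].append(now)

    def is_locked(self, key, now=None):
        now = time.monotonic() if now is None else now
        self._prune(key, now)
        return len(self._failures[key]) >= self.max_failures

    def retry_after_seconds(self, key, now=None):
        now = time.monotonic() if now is None else now
        self._prune(key, now)
        if len(self._failures[key]) < self.max_failures:
            return 0
        oldest = self._failures[key][0]
        return max(0, int(oldest + self.lockout_seconds - now))

    def clear(self, key):
        self._failures.pop(key, None)

    def clear_all(self):
        self._failures.clear()
```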


@@ -71,3 +71,35 @@ def test_reconcile_dry_run_empty(tmp_path, monkeypatch, capsys):
assert rc == 0
out = capsys.readouterr().out
assert "0 actions" in out
def test_app_install_bg_dispatches_to_runner(tmp_path, monkeypatch):
"""CLI `app install-bg <name>` must call install_runner.run_install(name).
This is the entry point the HTTP API fires via systemd-run; regression
here would leave the UI hanging at "pulling_image…" forever because
the background runner never transitions state.
"""
_set_env(monkeypatch, tmp_path)
from furtka import install_runner
called = []
monkeypatch.setattr(install_runner, "run_install", lambda name: called.append(name))
rc = main(["app", "install-bg", "fileshare"])
assert rc == 0
assert called == ["fileshare"]
def test_app_install_bg_returns_1_on_failure(tmp_path, monkeypatch, capsys):
_set_env(monkeypatch, tmp_path)
from furtka import install_runner
def boom(name):
raise RuntimeError("compose pull failed")
monkeypatch.setattr(install_runner, "run_install", boom)
rc = main(["app", "install-bg", "fileshare"])
assert rc == 1
err = capsys.readouterr().err
assert "install-bg failed" in err
assert "compose pull failed" in err


@@ -95,3 +95,23 @@ def test_drive_type_label_nvme_ssd_hdd():
def test_parse_lsblk_handles_empty_output():
assert parse_lsblk_output("") == []
def test_parse_lsblk_drops_boot_usb(monkeypatch):
import drives
monkeypatch.setattr(drives, "_smart_status", lambda _: "passed")
output = "sda 500G disk\nsdb 16G disk\nnvme0n1 1T disk\n"
devices = parse_lsblk_output(output, boot_disk="sdb")
names = [d["name"] for d in devices]
assert "/dev/sdb" not in names
assert names == ["/dev/nvme0n1", "/dev/sda"]
def test_parse_lsblk_no_boot_disk_keeps_all(monkeypatch):
import drives
monkeypatch.setattr(drives, "_smart_status", lambda _: "passed")
output = "sda 500G disk\nsdb 16G disk\n"
names = [d["name"] for d in parse_lsblk_output(output, boot_disk=None)]
assert set(names) == {"/dev/sda", "/dev/sdb"}


@@ -1,11 +1,15 @@
"""Tests for furtka.https — fingerprint extraction + force-HTTPS toggle.
"""Tests for furtka.https — fingerprint extraction + HTTPS toggle.
Since 26.15-alpha the toggle writes/removes TWO snippets atomically:
- The top-level HTTPS listener snippet (enables :443 + tls internal)
- The :80-scoped redirect snippet (forces HTTP → HTTPS)
The fingerprint case uses a throwaway self-signed EC cert with a known
reference fingerprint (computed once via `openssl x509 -fingerprint
-sha256 -noout`) so we verify the PEM→DER→SHA-256 path without a
runtime subprocess dependency. The toggle cases stub the caddy reload
so we assert the snippet file is written / removed and that reload
failures roll state back.
so we assert both snippet files are written / removed together and that
reload failures roll BOTH snippets back.
"""
import subprocess
@@ -34,6 +38,22 @@ _TEST_CERT_FP_SHA256 = (
)
def _paths(tmp_path):
"""Return the four paths the toggle touches, in a dict for kwargs
spreading. Keeps each test's fixture boilerplate small."""
return {
"snippet_dir": tmp_path / "furtka.d",
"snippet": tmp_path / "furtka.d" / "redirect.caddyfile",
"https_snippet_dir": tmp_path / "furtka-https.d",
"https_snippet": tmp_path / "furtka-https.d" / "https.caddyfile",
"hostname_file": tmp_path / "etc_hostname",
}
def _prepare_hostname(tmp_path, value="testbox"):
(tmp_path / "etc_hostname").write_text(f"{value}\n")
def test_ca_fingerprint_matches_openssl(tmp_path):
cert = tmp_path / "root.crt"
cert.write_text(_TEST_CERT_PEM)
@@ -53,7 +73,7 @@ def test_ca_fingerprint_no_pem_block(tmp_path):
def test_status_no_ca_no_snippet(tmp_path):
s = https.status(ca_path=tmp_path / "root.crt", snippet=tmp_path / "redirect.caddyfile")
s = https.status(ca_path=tmp_path / "root.crt", https_snippet=tmp_path / "https.caddyfile")
assert s == {
"ca_available": False,
"fingerprint_sha256": None,
@@ -62,105 +82,135 @@ def test_status_no_ca_no_snippet(tmp_path):
}
def test_status_with_ca_and_snippet(tmp_path):
def test_status_with_ca_and_https_snippet(tmp_path):
ca = tmp_path / "root.crt"
ca.write_text(_TEST_CERT_PEM)
snippet = tmp_path / "redirect.caddyfile"
snippet.write_text(https.REDIRECT_CONTENT)
s = https.status(ca_path=ca, snippet=snippet)
https_snip = tmp_path / "https.caddyfile"
https_snip.write_text("furtka.local, furtka {\n\ttls internal\n\timport furtka_routes\n}\n")
s = https.status(ca_path=ca, https_snippet=https_snip)
assert s["ca_available"] is True
assert s["fingerprint_sha256"] == _TEST_CERT_FP_SHA256
assert s["force_https"] is True
def test_set_force_enable_writes_snippet_and_reloads(tmp_path):
snippet_dir = tmp_path / "furtka.d"
snippet = snippet_dir / "redirect.caddyfile"
def test_status_force_reflects_https_snippet_not_redirect(tmp_path):
"""Authoritative signal for "HTTPS is on" is the listener snippet —
a lone redirect without a :443 listener wouldn't actually serve
HTTPS, so the status must NOT report it as on. Locks the 26.15 semantics."""
ca = tmp_path / "root.crt"
ca.write_text(_TEST_CERT_PEM)
s = https.status(ca_path=ca, https_snippet=tmp_path / "does-not-exist.caddyfile")
assert s["force_https"] is False
def test_set_force_enable_writes_both_snippets_and_reloads(tmp_path):
_prepare_hostname(tmp_path)
p = _paths(tmp_path)
calls = []
def fake_reload():
calls.append("reload")
result = https.set_force_https(
True, snippet_dir=snippet_dir, snippet=snippet, reload_caddy=fake_reload
)
result = https.set_force_https(True, reload_caddy=fake_reload, **p)
assert result is True
assert snippet.read_text() == https.REDIRECT_CONTENT
assert p["snippet"].read_text() == https.REDIRECT_CONTENT
written = p["https_snippet"].read_text()
assert "testbox.local, testbox" in written
assert "tls internal" in written
assert "import furtka_routes" in written
assert calls == ["reload"]
def test_set_force_disable_removes_snippet(tmp_path):
snippet_dir = tmp_path / "furtka.d"
snippet_dir.mkdir()
snippet = snippet_dir / "redirect.caddyfile"
snippet.write_text(https.REDIRECT_CONTENT)
def test_set_force_uses_fallback_hostname_when_file_missing(tmp_path):
# No /etc/hostname → fall back to 'furtka' so Caddy gets a parseable
# block instead of an empty hostname that would fail config load.
p = _paths(tmp_path)
result = https.set_force_https(True, reload_caddy=lambda: None, **p)
assert result is True
assert "furtka.local, furtka" in p["https_snippet"].read_text()
result = https.set_force_https(
False, snippet_dir=snippet_dir, snippet=snippet, reload_caddy=lambda: None
)
def test_set_force_disable_removes_both_snippets(tmp_path):
_prepare_hostname(tmp_path)
p = _paths(tmp_path)
p["snippet_dir"].mkdir()
p["https_snippet_dir"].mkdir()
p["snippet"].write_text(https.REDIRECT_CONTENT)
p["https_snippet"].write_text("furtka.local { tls internal }\n")
result = https.set_force_https(False, reload_caddy=lambda: None, **p)
assert result is False
assert not snippet.exists()
assert not p["snippet"].exists()
assert not p["https_snippet"].exists()
def test_set_force_disable_is_idempotent_when_already_off(tmp_path):
snippet_dir = tmp_path / "furtka.d"
snippet = snippet_dir / "redirect.caddyfile"
result = https.set_force_https(
False, snippet_dir=snippet_dir, snippet=snippet, reload_caddy=lambda: None
)
p = _paths(tmp_path)
result = https.set_force_https(False, reload_caddy=lambda: None, **p)
assert result is False
assert not snippet.exists()
assert not p["snippet"].exists()
assert not p["https_snippet"].exists()
def test_reload_failure_rolls_back_enable(tmp_path):
snippet_dir = tmp_path / "furtka.d"
snippet = snippet_dir / "redirect.caddyfile"
_prepare_hostname(tmp_path)
p = _paths(tmp_path)
def failing_reload():
raise subprocess.CalledProcessError(1, ["systemctl"], stderr="bad config")
with pytest.raises(https.HttpsError, match="caddy reload failed: bad config"):
https.set_force_https(
True, snippet_dir=snippet_dir, snippet=snippet, reload_caddy=failing_reload
)
# Rollback: since snippet didn't exist before, it must not exist after.
assert not snippet.exists()
https.set_force_https(True, reload_caddy=failing_reload, **p)
# Rollback: since neither snippet existed before, neither exists after.
assert not p["snippet"].exists()
assert not p["https_snippet"].exists()
def test_reload_failure_rolls_back_disable(tmp_path):
snippet_dir = tmp_path / "furtka.d"
snippet_dir.mkdir()
snippet = snippet_dir / "redirect.caddyfile"
original = "redir https://{host}{uri} permanent\n# marker\n"
snippet.write_text(original)
_prepare_hostname(tmp_path)
p = _paths(tmp_path)
p["snippet_dir"].mkdir()
p["https_snippet_dir"].mkdir()
original_redirect = "redir https://{host}{uri} permanent\n# marker\n"
original_https = "# old https block\nfurtka.local { tls internal }\n"
p["snippet"].write_text(original_redirect)
p["https_snippet"].write_text(original_https)
def failing_reload():
raise subprocess.CalledProcessError(1, ["systemctl"], stderr="bad config")
with pytest.raises(https.HttpsError):
https.set_force_https(
False, snippet_dir=snippet_dir, snippet=snippet, reload_caddy=failing_reload
)
# Rollback: snippet is restored to its exact prior contents.
assert snippet.read_text() == original
https.set_force_https(False, reload_caddy=failing_reload, **p)
# Rollback: both snippets are restored to their exact prior contents.
assert p["snippet"].read_text() == original_redirect
assert p["https_snippet"].read_text() == original_https
def test_systemctl_missing_raises_and_rolls_back(tmp_path):
snippet_dir = tmp_path / "furtka.d"
snippet = snippet_dir / "redirect.caddyfile"
_prepare_hostname(tmp_path)
p = _paths(tmp_path)
def missing_systemctl():
raise FileNotFoundError(2, "No such file", "systemctl")
with pytest.raises(https.HttpsError, match="systemctl not available"):
https.set_force_https(
True, snippet_dir=snippet_dir, snippet=snippet, reload_caddy=missing_systemctl
)
assert not snippet.exists()
https.set_force_https(True, reload_caddy=missing_systemctl, **p)
assert not p["snippet"].exists()
assert not p["https_snippet"].exists()
def test_redirect_snippet_content_is_caddy_redir_directive():
# Lock the exact directive. A regression here silently stops the
# redirect from taking effect even though the file-swap looks fine.
assert https.REDIRECT_CONTENT.strip() == "redir https://{host}{uri} permanent"
def test_https_snippet_content_has_tls_internal_and_routes(tmp_path):
# Lock the shape of the opt-in HTTPS listener block. Caddy parses
# this verbatim — changing the shape without updating the test
# risks shipping a silently-broken Caddyfile import.
s = https._https_snippet_content("mybox")
assert "mybox.local, mybox {" in s
assert "\ttls internal" in s
assert "\timport furtka_routes" in s
assert s.endswith("}\n")


@@ -0,0 +1,177 @@
"""Tests for the background app-install runner.
Same shape as test_catalog.py / test_updater.py: fixture reloads the
module with env-overridden paths, dockerops calls are stubbed so nothing
touches a real daemon. Asserts that state transitions happen in the
right order and that exceptions flip the state to "error" with the
message before re-raising.
"""
from __future__ import annotations
import json
from pathlib import Path
import pytest
@pytest.fixture
def runner(tmp_path, monkeypatch):
apps = tmp_path / "apps"
apps.mkdir()
monkeypatch.setenv("FURTKA_APPS_DIR", str(apps))
monkeypatch.setenv("FURTKA_INSTALL_STATE", str(tmp_path / "install-state.json"))
monkeypatch.setenv("FURTKA_INSTALL_LOCK", str(tmp_path / "install.lock"))
import importlib
from furtka import install_runner as r
from furtka import paths as p
importlib.reload(p)
importlib.reload(r)
return r
def _write_installed_app(apps_dir: Path, name: str = "fileshare"):
app = apps_dir / name
app.mkdir()
manifest = {
"name": name,
"display_name": "Fileshare",
"version": "0.1.0",
"description": "Test fixture",
"volumes": ["files"],
"ports": [445],
"icon": "icon.svg",
}
(app / "manifest.json").write_text(json.dumps(manifest))
(app / "docker-compose.yaml").write_text("services: {}\n")
return app
def test_write_and_read_state_round_trip(runner):
runner.write_state("pulling_image", app="jellyfin")
s = runner.read_state()
assert s["stage"] == "pulling_image"
assert s["app"] == "jellyfin"
assert "updated_at" in s
def test_read_state_returns_empty_when_missing(runner):
assert runner.read_state() == {}
def test_read_state_returns_empty_on_junk(runner):
runner.state_path().parent.mkdir(parents=True, exist_ok=True)
runner.state_path().write_text("{not json")
assert runner.read_state() == {}
def test_acquire_lock_prevents_concurrent_runs(runner):
held = runner.acquire_lock()
try:
with pytest.raises(runner.InstallRunnerError, match="in progress"):
runner.acquire_lock()
finally:
held.close()
def test_run_install_happy_path(runner, monkeypatch):
import furtka.dockerops as dockerops
from furtka.paths import apps_dir
_write_installed_app(apps_dir(), "fileshare")
calls = []
monkeypatch.setattr(dockerops, "compose_pull", lambda *a, **k: calls.append(("pull", a)))
monkeypatch.setattr(dockerops, "ensure_volume", lambda name: calls.append(("vol", name)))
monkeypatch.setattr(dockerops, "compose_up", lambda *a, **k: calls.append(("up", a)))
runner.run_install("fileshare")
# Ordering: pull first, then volumes, then up.
assert [c[0] for c in calls] == ["pull", "vol", "up"]
# Exactly the namespaced volume name got created.
assert calls[1] == ("vol", "furtka_fileshare_files")
# Final state is "done" with the manifest version.
s = runner.read_state()
assert s["stage"] == "done"
assert s["app"] == "fileshare"
assert s["version"] == "0.1.0"
def test_run_install_writes_error_on_pull_failure(runner, monkeypatch):
import furtka.dockerops as dockerops
from furtka.paths import apps_dir
_write_installed_app(apps_dir(), "fileshare")
def boom(*a, **k):
raise dockerops.DockerError("pull failed: registry unreachable")
monkeypatch.setattr(dockerops, "compose_pull", boom)
monkeypatch.setattr(dockerops, "ensure_volume", lambda name: None)
monkeypatch.setattr(dockerops, "compose_up", lambda *a, **k: None)
with pytest.raises(dockerops.DockerError):
runner.run_install("fileshare")
s = runner.read_state()
assert s["stage"] == "error"
assert s["app"] == "fileshare"
assert "registry unreachable" in s["error"]
def test_run_install_writes_error_on_up_failure(runner, monkeypatch):
import furtka.dockerops as dockerops
from furtka.paths import apps_dir
_write_installed_app(apps_dir(), "fileshare")
monkeypatch.setattr(dockerops, "compose_pull", lambda *a, **k: None)
monkeypatch.setattr(dockerops, "ensure_volume", lambda name: None)
def boom(*a, **k):
raise dockerops.DockerError("compose up: container refused to start")
monkeypatch.setattr(dockerops, "compose_up", boom)
with pytest.raises(dockerops.DockerError):
runner.run_install("fileshare")
s = runner.read_state()
assert s["stage"] == "error"
assert "refused to start" in s["error"]
def test_run_install_releases_lock_after_done(runner, monkeypatch):
import furtka.dockerops as dockerops
from furtka.paths import apps_dir
_write_installed_app(apps_dir(), "fileshare")
monkeypatch.setattr(dockerops, "compose_pull", lambda *a, **k: None)
monkeypatch.setattr(dockerops, "ensure_volume", lambda name: None)
monkeypatch.setattr(dockerops, "compose_up", lambda *a, **k: None)
runner.run_install("fileshare")
# Lock released — a fresh acquire must succeed.
fh = runner.acquire_lock()
fh.close()
def test_run_install_releases_lock_after_error(runner, monkeypatch):
import furtka.dockerops as dockerops
from furtka.paths import apps_dir
_write_installed_app(apps_dir(), "fileshare")
monkeypatch.setattr(
dockerops, "compose_pull", lambda *a, **k: (_ for _ in ()).throw(dockerops.DockerError("x"))
)
with pytest.raises(dockerops.DockerError):
runner.run_install("fileshare")
fh = runner.acquire_lock()
fh.close()
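The state machine these runner tests pin down — pull, then volumes, then up, with an "error" flip on failure and the lock released either way — can be sketched as follows. The collaborators are injected as parameters purely so the sketch is self-contained and testable; the real `install_runner` resolves `dockerops`, state writes, and the lock itself:

```python
def run_install(name, *, write_state, read_manifest, dockerops, acquire_lock):
    """Sketch of the background install runner's happy/error paths."""
    lock = acquire_lock()  # raises if another install is in progress
    try:
        manifest = read_manifest(name)
        try:
            write_state("pulling_image", app=name)
            dockerops.compose_pull(name)
            # Volumes are namespaced furtka_<app>_<volume>.
            for vol in manifest.get("volumes", []):
                dockerops.ensure_volume(f"furtka_{name}_{vol}")
            write_state("starting", app=name)
            dockerops.compose_up(name)
            write_state("done", app=name, version=manifest["version"])
        except Exception as exc:
            # Flip state to "error" with the message, then re-raise so
            # the CLI wrapper can exit non-zero.
            write_state("error", app=name, error=str(exc))
            raise
    finally:
        lock.close()
```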

tests/test_passwd.py Normal file

@@ -0,0 +1,74 @@
"""Tests for furtka.passwd — stdlib-only password hashing.
The primary contract: hash/verify roundtrips cleanly, AND the verifier
accepts the werkzeug hash format that 26.11 / 26.12 boxes wrote to
``users.json``. Losing that backward compat would lock out existing
admins after a 26.13+ upgrade.
"""
from __future__ import annotations
from furtka import passwd
def test_hash_roundtrip():
h = passwd.hash_password("hunter2")
assert passwd.verify_password("hunter2", h)
assert not passwd.verify_password("wrong", h)
def test_hash_is_salted():
# Two separate hashes of the same password must diverge.
a = passwd.hash_password("same-pw")
b = passwd.hash_password("same-pw")
assert a != b
assert passwd.verify_password("same-pw", a)
assert passwd.verify_password("same-pw", b)
def test_generated_hash_format():
# Shape is pbkdf2:<hash>:<iter>$<salt>$<hex>
h = passwd.hash_password("x")
parts = h.split("$", 2)
assert len(parts) == 3
method, salt, digest = parts
assert method.startswith("pbkdf2:sha256:")
assert salt
# digest is hex of pbkdf2_hmac sha256 → 64 hex chars
assert len(digest) == 64
assert all(c in "0123456789abcdef" for c in digest)
def test_verify_werkzeug_scrypt_hash():
"""Known werkzeug scrypt hash generated by 26.11 / 26.12 boxes.
Captured live off a .196 test VM after its auth bootstrap:
username=daniel, password=test-admin-pw1
Hash format: scrypt:32768:8:1$<salt>$<hex>
If this regresses, every existing box that upgraded via 26.11 and
set a password gets locked out on the next upgrade.
"""
known = (
"scrypt:32768:8:1$yWZUqJodowt9ieI1$"
"2d1059b3564da7492b4aa3c2be7fff6fef06085e5e1bfd52e897948c58246b7a"
"9603400355b7264f61c4436eba7bf8c947adec3d7a76be03b50efb4227e15a80"
)
assert passwd.verify_password("test-admin-pw1", known)
assert not passwd.verify_password("wrong-password", known)
def test_verify_rejects_malformed_hashes():
# Empty / missing delimiters / unknown method / bad int — all False.
assert not passwd.verify_password("x", "")
assert not passwd.verify_password("x", "nothingspecial")
assert not passwd.verify_password("x", "pbkdf2:sha256:600000") # no $salt$digest
assert not passwd.verify_password("x", "pbkdf2$salt$digest") # missing hash + iter
assert not passwd.verify_password("x", "bcrypt:12$salt$digest") # unsupported algo
assert not passwd.verify_password("x", "pbkdf2:sha256:abc$salt$digest") # bad iter int
def test_verify_rejects_nonstring_inputs():
# Defensive: users.json can be corrupted or have nulls.
assert not passwd.verify_password(None, "pbkdf2:sha256:1000$salt$digest") # type: ignore[arg-type]
assert not passwd.verify_password("x", None) # type: ignore[arg-type]
assert not passwd.verify_password("x", 12345) # type: ignore[arg-type]
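A stdlib-only hash/verify pair consistent with the contract above — the `pbkdf2:sha256:<iter>$<salt>$<hex>` shape and the werkzeug-scrypt acceptance — might look like this sketch. The function names come from the tests; the iteration count and the scrypt `maxmem` ceiling are assumptions, not values read from the real `furtka.passwd`:

```python
import hashlib
import hmac
import secrets

def hash_password(password: str, iterations: int = 600_000) -> str:
    # Format pinned by the tests: pbkdf2:sha256:<iter>$<salt>$<hexdigest>
    salt = secrets.token_urlsafe(12)
    digest = hashlib.pbkdf2_hmac(
        "sha256", password.encode(), salt.encode(), iterations
    ).hex()
    return f"pbkdf2:sha256:{iterations}${salt}${digest}"

def verify_password(password, stored) -> bool:
    # Defensive against corrupted users.json entries (None, ints, junk).
    if not isinstance(password, str) or not isinstance(stored, str):
        return False
    try:
        method, salt, digest = stored.split("$", 2)
    except ValueError:
        return False
    parts = method.split(":")
    try:
        if parts[0] == "pbkdf2" and len(parts) == 3 and parts[1] == "sha256":
            candidate = hashlib.pbkdf2_hmac(
                "sha256", password.encode(), salt.encode(), int(parts[2])
            ).hex()
        elif parts[0] == "scrypt" and len(parts) == 4:
            # werkzeug format: scrypt:<n>:<r>:<p>$<salt>$<hex>
            n, r, p = (int(x) for x in parts[1:])
            candidate = hashlib.scrypt(
                password.encode(), salt=salt.encode(), n=n, r=r, p=p,
                maxmem=132 * 1024 * 1024, dklen=len(digest) // 2,
            ).hex()
        else:
            return False  # unsupported algorithm
    except ValueError:
        return False  # bad iteration count / params / dklen
    return hmac.compare_digest(candidate, digest)
```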


@@ -224,6 +224,76 @@ def test_refresh_caddyfile_substitutes_hostname_placeholder(updater, tmp_path):
assert updater._refresh_caddyfile(src) is False
def test_health_check_treats_4xx_as_healthy(updater, monkeypatch):
"""26.11+ auth makes /api/apps return 401 on unauth requests. If the
health check treated that as "down", every pre-auth → auth upgrade
auto-rolls back. Server responding at all is enough signal for the
health check."""
import urllib.error
calls = {"n": 0}
class _FakeResp:
def __init__(self, code):
self.status = code
def __enter__(self):
return self
def __exit__(self, *a):
return False
def raising_401(url, timeout):
calls["n"] += 1
raise urllib.error.HTTPError(url, 401, "Unauthorized", {}, None)
monkeypatch.setattr("urllib.request.urlopen", raising_401)
assert updater._health_check("http://127.0.0.1:7000/api/apps", deadline_s=2.0) is True
# One call was enough — early exit on 4xx, no retry loop.
assert calls["n"] == 1
def test_health_check_rejects_5xx(updater, monkeypatch):
"""500s mean the server is up but broken — that's NOT healthy.
Distinguishes auth refusals (4xx = healthy) from real runtime
errors (5xx = unhealthy, roll back)."""
import urllib.error
def raising_500(url, timeout):
raise urllib.error.HTTPError(url, 500, "Internal Server Error", {}, None)
monkeypatch.setattr("urllib.request.urlopen", raising_500)
assert updater._health_check("http://127.0.0.1:7000/api/apps", deadline_s=1.5) is False
def test_health_check_retries_on_connection_refused(updater, monkeypatch):
"""While furtka-api is still starting, urlopen raises URLError.
The loop must keep polling until the server comes up or deadline."""
import urllib.error
calls = {"n": 0}
def flaky(url, timeout):
calls["n"] += 1
if calls["n"] < 3:
raise urllib.error.URLError("connection refused")
class _Resp:
status = 200
def __enter__(self):
return self
def __exit__(self, *a):
return False
return _Resp()
monkeypatch.setattr("urllib.request.urlopen", flaky)
assert updater._health_check("http://127.0.0.1:7000/api/apps", deadline_s=10.0) is True
assert calls["n"] == 3
def test_current_hostname_falls_back_when_file_missing(updater, monkeypatch, tmp_path):
monkeypatch.setenv("FURTKA_HOSTNAME_FILE", str(tmp_path / "missing"))
import importlib


@@ -122,19 +122,39 @@ def test_caddyfile_asset_serves_from_current():
assert "root * /var/lib/furtka" in caddy
def test_caddyfile_serves_both_http_and_https():
# :80 stays so users who haven't installed the CA still reach the box;
# HTTPS is served via a named-hostname block so Caddy's `tls internal`
# has something to issue a leaf cert for. A bare `:443 { tls internal }`
# never triggers issuance — that was the 26.4-alpha regression.
caddy = (ASSETS / "Caddyfile").read_text()
def _strip_caddy_comments(text: str) -> str:
"""Remove # comments + blank lines so string-match assertions can
target actual Caddyfile directives, not the leading doc block.
Everything from the first ``#`` on a line is treated as a comment."""
out = []
for line in text.splitlines():
stripped = line.split("#", 1)[0].rstrip()
if stripped:
out.append(stripped)
return "\n".join(out)
def test_caddyfile_serves_http_by_default_https_opt_in():
# 26.15-alpha: HTTPS is opt-in. The default Caddyfile has a :80 block
# and imports /etc/caddy/furtka-https.d/*.caddyfile at top level —
# the /settings HTTPS toggle drops the hostname+tls-internal block
# into that dir when the user explicitly enables HTTPS. Default
# Caddyfile therefore contains no `tls internal` directive anywhere;
# if a future refactor puts it back, every fresh install regresses
# to the 26.14-era BAD_SIGNATURE trap. Strip comments first because
# the doc-block DOES mention `tls internal` in prose.
caddy_full = (ASSETS / "Caddyfile").read_text()
caddy = _strip_caddy_comments(caddy_full)
assert ":80 {" in caddy
assert "__FURTKA_HOSTNAME__.local, __FURTKA_HOSTNAME__ {" in caddy
assert "tls internal" in caddy
# Shared routes live in a named snippet to avoid drift between the two
# listeners — both site blocks must import it.
assert "tls internal" not in caddy
assert "__FURTKA_HOSTNAME__" not in caddy
assert "import /etc/caddy/furtka-https.d/*.caddyfile" in caddy
# Shared routes still live in a named snippet so the HTTPS toggle's
# snippet can import the same routes without duplication.
assert "(furtka_routes)" in caddy
assert caddy.count("import furtka_routes") == 2
# Default Caddyfile imports it once (inside :80). The HTTPS snippet,
# when written by the toggle, imports it a second time.
assert caddy.count("import furtka_routes") == 1
def test_caddyfile_disables_caddy_auto_redirects():
@@ -167,16 +187,28 @@ def test_caddyfile_exposes_root_ca_download():
assert "attachment; filename=furtka-local-rootCA.crt" in caddy
def test_post_install_substitutes_hostname_in_caddyfile(install_cmds):
# Fresh installs: the placeholder the asset ships with must be replaced
# with the hostname the user picked in the form. The `testhost` value
# comes from the install_cmds fixture. Without substitution Caddy's
# `tls internal` never issues a leaf cert for the real hostname.
def test_post_install_writes_caddyfile_without_hostname_placeholder(install_cmds):
# 26.15-alpha: the shipped Caddyfile no longer carries the
# __FURTKA_HOSTNAME__ marker — HTTPS + hostname now live in the
# opt-in snippet written by set_force_https(), not in the base
# Caddyfile. Verify the post-install writes the file as-is (no
# substitution expected) and it has the opt-in import glob.
caddyfile_cmd = next((c for c in install_cmds if " > /etc/caddy/Caddyfile" in c), None)
assert caddyfile_cmd is not None
written = _extract_written_content(caddyfile_cmd, "/etc/caddy/Caddyfile")
written_full = _extract_written_content(caddyfile_cmd, "/etc/caddy/Caddyfile")
written = _strip_caddy_comments(written_full)
assert "__FURTKA_HOSTNAME__" not in written
assert "testhost.local, testhost {" in written
assert "import /etc/caddy/furtka-https.d/*.caddyfile" in written
assert "tls internal" not in written
def test_post_install_creates_https_snippet_dir(install_cmds):
# The top-level HTTPS opt-in snippet dir must exist before Caddy's
# first start — its glob import tolerates an empty directory, but
# not a missing one on older Caddy builds. Parallel guarantee to
# test_post_install_creates_furtka_d_snippet_dir below.
matching = [c for c in install_cmds if "/etc/caddy/furtka-https.d" in c and "install -d" in c]
assert matching, "no install -d command creates /etc/caddy/furtka-https.d"
def test_post_install_creates_furtka_d_snippet_dir(install_cmds):


@@ -395,6 +395,14 @@ def _post_install_commands(hostname, admin_username, admin_password):
# an empty dir but not a missing one on every Caddy version, so we
# create it up front and stay on the safe side.
"install -d -m 0755 -o root -g root /etc/caddy/furtka.d",
# Parallel dir for the top-level HTTPS-listener snippet, written
# by /api/furtka/https/force (26.15-alpha+) when the user opts
# into HTTPS. Empty by default so fresh installs never generate
# a tls internal cert — that was the 26.14 regression where
# Firefox hit unbypassable SEC_ERROR_BAD_SIGNATURE because
# Caddy's fixed intermediate-CN clashed with any cached trust
# from a previously-reinstalled Furtka box.
"install -d -m 0755 -o root -g root /etc/caddy/furtka-https.d",
# The Caddyfile lives at /etc/caddy/Caddyfile per Caddy's convention
# (systemd unit points there). Content comes from the shipped asset,
# which we copy in at install time so updates that change routing


@@ -1,6 +1,41 @@
import subprocess
def _boot_disk_name():
"""Return the parent disk name of the live-ISO boot media (e.g. "sdb"), or None.
On a normal box `/run/archiso/bootmnt` does not exist and we return None,
leaving the device list untouched. On bare metal booted from USB this is
the stick we booted from; we want to filter it out so the user can't
accidentally pick it as the install target.
"""
try:
result = subprocess.run(
["findmnt", "-no", "SOURCE", "/run/archiso/bootmnt"],
capture_output=True,
text=True,
)
except FileNotFoundError:
return None
if result.returncode != 0:
return None
partition = result.stdout.strip()
if not partition:
return None
try:
parent = subprocess.run(
["lsblk", "-no", "PKNAME", partition],
capture_output=True,
text=True,
)
except FileNotFoundError:
return None
if parent.returncode != 0:
return None
name = parent.stdout.strip().splitlines()[0] if parent.stdout.strip() else ""
return name or None
def _smart_status(device):
try:
result = subprocess.run(
@@ -75,11 +110,14 @@ def score_device(device, size_gb):
return get_drive_type_score(device) + get_drive_health(device) + get_size_score(size_gb)
def parse_lsblk_output(output):
def parse_lsblk_output(output, boot_disk=None):
"""Parse `lsblk -dn -o NAME,SIZE,TYPE` output into scored device dicts.
Keeps only TYPE=disk so the live ISO's own squashfs (loop) and the boot
CD-ROM (rom) don't show up as install targets.
CD-ROM (rom) don't show up as install targets. If `boot_disk` is given,
that disk is also dropped; it's the USB stick the live ISO booted from
on bare metal, where it appears as TYPE=disk and would otherwise be a
valid-looking install target.
"""
devices = []
for line in output.strip().split("\n"):
@@ -91,6 +129,8 @@ def parse_lsblk_output(output):
name, size, dev_type = parts[0], parts[1], parts[2]
if dev_type != "disk":
continue
if boot_disk and name == boot_disk:
continue
device = f"/dev/{name}"
size_gb = parse_size_gb(size)
status = _smart_status(device)
@@ -120,7 +160,7 @@ def list_scored_devices():
except subprocess.CalledProcessError as e:
print(f"Error listing devices: {e}")
return []
return parse_lsblk_output(result.stdout)
return parse_lsblk_output(result.stdout, boot_disk=_boot_disk_name())
def main():


@@ -6,6 +6,10 @@
--accent: #c03a28;
--accent-hover: #a0301f;
--border: #e4e3dc;
--accent-glow: rgba(192, 58, 40, 0.2);
--card-bg: rgba(247, 246, 243, 0.72);
--card-border: var(--border);
--scene-opacity: 0.18;
--font-sans:
-apple-system, BlinkMacSystemFont, "Segoe UI", Roboto, "Helvetica Neue",
Arial, "Noto Sans", sans-serif;
@@ -23,6 +27,10 @@
--accent: #ff6b56;
--accent-hover: #ff8b78;
--border: #232326;
--accent-glow: rgba(255, 107, 86, 0.4);
--card-bg: rgba(23, 23, 26, 0.65);
--card-border: #26262b;
--scene-opacity: 0.34;
}
}
@@ -43,6 +51,25 @@ body {
flex-direction: column;
min-height: 100vh;
text-rendering: optimizeLegibility;
isolation: isolate;
}
/* ── Animated background canvas (home only) ─────────────── */
.scene-canvas {
position: fixed;
inset: 0;
width: 100vw;
height: 100vh;
z-index: 0;
pointer-events: none;
}
.site-header,
main.container,
.site-footer {
position: relative;
z-index: 1;
}
.container {
@@ -171,11 +198,36 @@ main.container {
.home h1 {
font-family: var(--font-sans);
font-weight: 800;
font-size: clamp(3.25rem, 10vw, 6.5rem);
line-height: 0.95;
letter-spacing: -0.035em;
font-size: clamp(3.5rem, 14vw, 11rem);
line-height: 0.9;
letter-spacing: -0.04em;
margin: 0 0 1.5rem;
color: var(--fg);
background-image: linear-gradient(180deg, var(--fg) 0%, var(--accent) 110%);
-webkit-background-clip: text;
background-clip: text;
-webkit-text-fill-color: transparent;
}
@media (prefers-color-scheme: dark) {
.home h1 {
filter: drop-shadow(0 0 28px var(--accent-glow));
}
.home .lede {
color: #c8c8cc;
}
}
.hero {
min-height: 78vh;
display: flex;
flex-direction: column;
justify-content: center;
padding-block: 4.5rem 3rem;
}
.home .lede {
font-weight: 450;
}
.home .lede {
@@ -258,3 +310,132 @@ main.container {
outline-offset: 3px;
border-radius: 2px;
}
/* ── Primary CTA ─────────────────────────────────────────── */
.cta-row { margin-top: 2.5rem; }
.cta {
display: inline-flex;
align-items: center;
gap: 0.55rem;
padding: 1.1rem 2rem;
font-family: var(--font-sans);
font-weight: 600;
font-size: 1.02rem;
letter-spacing: 0.005em;
text-decoration: none;
border-radius: 0.7rem;
transition: transform 180ms, box-shadow 180ms, background 180ms, color 180ms;
}
.cta--primary {
background: linear-gradient(135deg, var(--accent), var(--accent-hover));
color: #fff;
box-shadow: 0 10px 36px var(--accent-glow),
0 0 0 1px color-mix(in srgb, var(--accent) 40%, transparent);
animation: cta-pulse 2.8s ease-in-out infinite;
}
.cta--primary:hover {
transform: translateY(-3px);
box-shadow: 0 18px 52px var(--accent-glow),
0 0 0 1px var(--accent);
animation-play-state: paused;
}
.cta--primary:active { transform: translateY(-1px); }
.cta--primary span { transition: transform 180ms; }
.cta--primary:hover span { transform: translateX(4px); }
@keyframes cta-pulse {
0%, 100% { box-shadow: 0 10px 36px var(--accent-glow),
0 0 0 1px color-mix(in srgb, var(--accent) 40%, transparent); }
50% { box-shadow: 0 14px 48px var(--accent-glow),
0 0 0 1px color-mix(in srgb, var(--accent) 70%, transparent); }
}
@media (prefers-reduced-motion: reduce) {
.cta--primary { animation: none; }
}
/* ── Intro paragraph (home, between hero and feature grids) ─ */
.intro {
max-width: 38rem;
margin: 0 0 4rem;
font-size: 1.15rem;
line-height: 1.55;
color: var(--fg);
}
.intro p { margin: 0 0 1rem; }
.intro p:last-child { margin: 0; }
.intro strong { font-weight: 600; }
/* ── Feature sections (home) ─────────────────────────────── */
.feature-section { margin-block: 4rem; }
.section-eyebrow {
font-family: var(--font-sans);
font-weight: 500;
font-size: 0.72rem;
letter-spacing: 0.14em;
text-transform: uppercase;
color: var(--fg-muted);
margin: 0 0 1.25rem;
}
.feature-grid {
display: grid;
grid-template-columns: repeat(auto-fit, minmax(17rem, 1fr));
gap: 1rem;
}
.feature-card {
background: var(--card-bg);
border: 1px solid var(--card-border);
border-radius: 1rem;
padding: 1.5rem 1.5rem 1.4rem;
-webkit-backdrop-filter: blur(10px);
backdrop-filter: blur(10px);
transition: transform 240ms, border-color 240ms, box-shadow 240ms;
}
.feature-card:hover {
border-color: var(--accent);
box-shadow: 0 10px 32px var(--accent-glow);
transform: translateY(-2px);
}
.feature-card p {
margin: 0;
font-size: 1rem;
line-height: 1.55;
color: var(--fg);
}
.feature-card strong {
font-weight: 600;
color: var(--fg);
}
/* ── Closer prose (home, after feature grids) ────────────── */
.closer {
margin-top: 4rem;
max-width: var(--measure);
}
/* ── Reveal-on-load (hero) and reveal-on-scroll (cards) ──── */
.js .reveal,
.js [data-gsap="card"] {
opacity: 0;
transform: translateY(40px);
will-change: opacity, transform;
}
@media (prefers-reduced-motion: reduce) {
.scene-canvas { display: none; }
.js .reveal,
.js [data-gsap="card"] {
opacity: 1 !important;
transform: none !important;
will-change: auto;
}
}

website/assets/js/animations.js Normal file
@@ -0,0 +1,25 @@
(function () {
if (window.matchMedia('(prefers-reduced-motion: reduce)').matches) return;
if (!window.gsap || !window.ScrollTrigger || !window.Lenis) return;
gsap.registerPlugin(ScrollTrigger);
const lenis = new Lenis({ lerp: 0.1, smoothWheel: true });
lenis.on('scroll', ScrollTrigger.update);
gsap.ticker.add((time) => { lenis.raf(time * 1000); });
gsap.ticker.lagSmoothing(0);
// Hero stagger — runs once on load.
gsap.to('.hero .reveal', {
y: 0, opacity: 1, duration: 1.1, ease: 'power3.out', stagger: 0.12
});
// Card reveals — batched so cards in the same row come in together.
ScrollTrigger.batch('[data-gsap="card"]', {
start: 'top 90%',
onEnter: (els) => gsap.to(els, {
y: 0, opacity: 1, scale: 1,
duration: 0.9, ease: 'power3.out', stagger: 0.08, overwrite: true
})
});
})();

website/assets/js/scene.js Normal file
@@ -0,0 +1,98 @@
(function () {
if (window.matchMedia('(prefers-reduced-motion: reduce)').matches) return;
if (!window.WebGLRenderingContext || !window.THREE) return;
const canvas = document.getElementById('scene');
if (!canvas) return;
const root = document.documentElement;
const readVar = (name) => getComputedStyle(root).getPropertyValue(name).trim();
const readOpacity = () => parseFloat(readVar('--scene-opacity')) || 0.18;
const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(
60, window.innerWidth / window.innerHeight, 0.1, 100
);
const renderer = new THREE.WebGLRenderer({ canvas, antialias: true, alpha: true });
renderer.setSize(window.innerWidth, window.innerHeight, false);
renderer.setPixelRatio(Math.min(window.devicePixelRatio || 1, 2));
const geometry = new THREE.TorusKnotGeometry(2.5, 0.4, 130, 20);
const material = new THREE.MeshPhongMaterial({
color: readVar('--accent') || '#c03a28',
wireframe: true,
transparent: true,
opacity: readOpacity()
});
const core = new THREE.Mesh(geometry, material);
scene.add(core);
scene.add(new THREE.AmbientLight(0xffffff, 0.6));
const dir = new THREE.DirectionalLight(0xffffff, 0.8);
dir.position.set(5, 5, 5);
scene.add(dir);
const BASE_Z = 9;
camera.position.z = BASE_Z;
let scrollY = window.scrollY || 0;
window.addEventListener('scroll', () => {
scrollY = window.scrollY || 0;
}, { passive: true });
let baseOpacity = readOpacity();
let running = true;
function tick() {
if (!running) return;
requestAnimationFrame(tick);
// Continuous slow drift.
core.rotation.y += 0.0015;
core.rotation.z += 0.0006;
// Scroll-driven motion: zoom in, scale up, tilt.
const s = Math.min(scrollY, 2000);
camera.position.z = BASE_Z - s * 0.0022;
const scale = 1 + s * 0.00035;
core.scale.set(scale, scale, scale);
core.rotation.x = s * 0.0008;
// Fade past hero so feature cards stay readable.
const vh = window.innerHeight;
const fadeStart = vh * 0.5;
const fadeEnd = vh * 1.4;
const t = Math.max(0, Math.min(1, (scrollY - fadeStart) / (fadeEnd - fadeStart)));
material.opacity = baseOpacity * (1 - t * 0.92);
renderer.render(scene, camera);
}
tick();
window.addEventListener('resize', () => {
camera.aspect = window.innerWidth / window.innerHeight;
camera.updateProjectionMatrix();
renderer.setSize(window.innerWidth, window.innerHeight, false);
});
document.addEventListener('visibilitychange', () => {
if (document.hidden) {
running = false;
} else if (!running) {
running = true;
tick();
}
});
const mql = window.matchMedia('(prefers-color-scheme: dark)');
const updateTheme = () => {
const accent = readVar('--accent');
if (accent) material.color.set(accent);
baseOpacity = readOpacity();
};
if (mql.addEventListener) {
mql.addEventListener('change', updateTheme);
} else if (mql.addListener) {
mql.addListener(updateTheme);
}
})();

website/assets/js/vendor/PROVENANCE.md vendored Normal file
@@ -0,0 +1,19 @@
# Vendored JavaScript libraries
These minified bundles are checked into the repo so furtka.org has zero
third-party-CDN dependencies at runtime. Pin date: **2026-04-27**.
| File | Version | Source |
|---|---|---|
| `three.min.js` | r128 | https://cdnjs.cloudflare.com/ajax/libs/three.js/r128/three.min.js |
| `gsap.min.js` | 3.12.2 (core only) | https://cdnjs.cloudflare.com/ajax/libs/gsap/3.12.2/gsap.min.js |
| `ScrollTrigger.min.js` | 3.12.2 | https://cdnjs.cloudflare.com/ajax/libs/gsap/3.12.2/ScrollTrigger.min.js |
| `lenis.min.js` | @studio-freight/lenis 1.0.33 | https://unpkg.com/@studio-freight/lenis@1.0.33/dist/lenis.min.js |
All four expose UMD globals (`THREE`, `gsap`, `ScrollTrigger`, `Lenis`).
None are ES modules, so no `js.Build` step is needed — Hugo just fingerprints them.
GSAP "Club" plugins (SplitText, MorphSVG, etc.) are **not** free for commercial use.
Only `gsap` core + `ScrollTrigger` are bundled; both ship under GSAP's no-charge standard license (not MIT, but free for this kind of use).
To refresh: re-run `curl -sSfL -o <file> <url>` and bump the pin date here.
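The refresh step can be sketched as a small helper (hypothetical script, not in the repo; the file/URL pairs are copied verbatim from the table above). It prints the `curl` commands rather than running them, so the pins can be reviewed first:

```shell
# Hypothetical refresh helper (dry run): emits one curl command per pinned
# vendor bundle. File/URL pairs are copied from the PROVENANCE table.
pins='three.min.js https://cdnjs.cloudflare.com/ajax/libs/three.js/r128/three.min.js
gsap.min.js https://cdnjs.cloudflare.com/ajax/libs/gsap/3.12.2/gsap.min.js
ScrollTrigger.min.js https://cdnjs.cloudflare.com/ajax/libs/gsap/3.12.2/ScrollTrigger.min.js
lenis.min.js https://unpkg.com/@studio-freight/lenis@1.0.33/dist/lenis.min.js'

# Print "curl -sSfL -o <file> <url>" for each pair.
printf '%s\n' "$pins" | while read -r file url; do
  printf 'curl -sSfL -o %s %s\n' "$file" "$url"
done
```

Pipe the output to `sh` from inside `website/assets/js/vendor/` to actually re-download, then bump the pin date above.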

website/assets/js/vendor/ScrollTrigger.min.js vendored Normal file

File diff suppressed because one or more lines are too long

website/assets/js/vendor/gsap.min.js vendored Normal file

File diff suppressed because one or more lines are too long

website/assets/js/vendor/lenis.min.js vendored Normal file

File diff suppressed because one or more lines are too long

website/assets/js/vendor/three.min.js vendored Normal file

File diff suppressed because one or more lines are too long

website/content/_index.de.md
@@ -1,33 +1,33 @@
---
title: "Furtka"
description: "Offenes Heimserver-Betriebssystem — einfach genug für alle."
status: "<span class=\"mono\">26.8-alpha</span> — in Arbeit"
status: "<span class=\"mono\">26.15-alpha</span> — in Arbeit"
# features_today / features_next müssen index-parallel zu content/_index.md bleiben.
intro: |
**Furtka** ist ein offenes Heimserver-Betriebssystem.
USB-Stick einstecken, durch einen Assistenten klicken, und aus jedem
alten Rechner wird eine private Cloud für den Haushalt — mit eigenen
Apps, eigenem Namen im Netz, eigenen Daten.
Das Ziel ist einfach: **dein Vater soll das einrichten können.**
features_today_label: "Was heute schon geht"
features_today:
- "Vom USB-Stick booten und Furtka auf die Festplatte einrichten"
- "Ein Assistent fragt nach Name, Benutzer und Netzwerk — fertig"
- "Danach: Bedienseite im Browser öffnen"
- "Erste App: **Dateifreigabe im Heimnetz** (alle im WLAN sehen den Ordner)"
- "Apps mit einem Klick installieren und entfernen"
- "Eine installierte App mit einem Klick aktualisieren (holt das neueste Container-Image)"
- "Furtka selbst mit einem Klick aktualisieren — keine Neuinstallation mehr für neue Features"
features_next_label: "Was als Nächstes kommt"
features_next:
- "Apps für Fotos, Dateien, Smarthome, Spiele-Streaming und Medien"
- "Einfachere Sprache im Einrichtungs-Assistenten"
- "Sichere Verbindung im Heimnetz (ohne Warnmeldung im Browser)"
- "Mehrere Server zusammenschalten"
---
**Furtka** ist ein offenes Heimserver-Betriebssystem.
USB-Stick einstecken, durch einen Assistenten klicken, und aus jedem
alten Rechner wird eine private Cloud für den Haushalt — mit eigenen
Apps, eigenem Namen im Netz, eigenen Daten.
Das Ziel ist einfach: **dein Vater soll das einrichten können.**
### Was als Nächstes kommt
- Apps für Fotos, Dateien, Smarthome, Spiele-Streaming und Medien
- Einfachere Sprache im Einrichtungs-Assistenten
- Sichere Verbindung im Heimnetz (ohne Warnmeldung im Browser)
- Mehrere Server zusammenschalten
### Was heute schon geht
- Vom USB-Stick booten und Furtka auf die Festplatte einrichten
- Ein Assistent fragt nach Name, Benutzer und Netzwerk — fertig
- Danach: Bedienseite im Browser öffnen
- Erste App: **Dateifreigabe im Heimnetz** (alle im WLAN sehen den Ordner)
- Apps mit einem Klick installieren und entfernen
- Eine installierte App mit einem Klick aktualisieren (holt das neueste Container-Image)
- Furtka selbst mit einem Klick aktualisieren — keine Neuinstallation mehr für neue Features
Wir sind zu zweit und bauen das öffentlich, abends und am Wochenende.
Es ist früh.
Mitlesen? Schreib an <a href="mailto:hallo@furtka.org">hallo@furtka.org</a>.
Mitlesen? Schreib an <hallo@furtka.org>.

website/content/_index.md
@@ -1,33 +1,33 @@
---
title: "Furtka"
description: "Open-source home server OS — simple enough for everyone."
status: "<span class=\"mono\">26.8-alpha</span> — work in progress"
status: "<span class=\"mono\">26.15-alpha</span> — work in progress"
# Keep features_today / features_next index-aligned with content/_index.de.md.
intro: |
**Furtka** is an open-source home server OS.
Boot from USB, click through a wizard, and any old computer
turns into a private cloud for your household — with your own apps,
your own name on the network, your own data.
The goal is simple: **your dad should be able to set this up.**
features_today_label: "What works today"
features_today:
- "Boot from USB stick and install Furtka onto the hard drive"
- "A wizard asks for name, user and network — done"
- "Then: open the control page in your browser"
- "First app: **file sharing on the home network** (everyone on Wi-Fi sees the folder)"
- "Install and remove apps with one click"
- "Update an installed app with one click (pulls the newest container image)"
- "Update Furtka itself with one click — no reinstalling for new features"
features_next_label: "What's coming next"
features_next:
- "Apps for photos, files, smart home, game streaming and media"
- "Plainer language in the setup wizard"
- "Secure connection on your home network (no browser warning)"
- "Linking several servers together"
---
**Furtka** is an open-source home server OS.
Boot from USB, click through a wizard, and any old computer
turns into a private cloud for your household — with your own apps,
your own name on the network, your own data.
The goal is simple: **your dad should be able to set this up.**
### What's coming next
- Apps for photos, files, smart home, game streaming and media
- Plainer language in the setup wizard
- Secure connection on your home network (no browser warning)
- Linking several servers together
### What works today
- Boot from USB stick and install Furtka onto the hard drive
- A wizard asks for name, user and network — done
- Then: open the control page in your browser
- First app: **file sharing on the home network** (everyone on Wi-Fi sees the folder)
- Install and remove apps with one click
- Update an installed app with one click (pulls the newest container image)
- Update Furtka itself with one click — no reinstalling for new features
We're two people building it in public on evenings and weekends.
It's early.
Want to follow along? Write to <a href="mailto:hallo@furtka.org">hallo@furtka.org</a>.
Want to follow along? Write to <hallo@furtka.org>.

@@ -6,7 +6,7 @@ enableRobotsTXT = true
[params]
description = "Open-source home server OS — simple enough for everyone."
version = "26.8-alpha"
version = "26.15-alpha"
contactEmail = "hallo@furtka.org"
[markup.goldmark.renderer]

@@ -1,13 +1,15 @@
<!DOCTYPE html>
<html lang="{{ .Site.Language.Lang }}">
<html lang="{{ .Site.Language.Lang }}" class="no-js">
<head>
{{ partial "head.html" . }}
</head>
<body>
{{ if .IsHome }}<canvas id="scene" class="scene-canvas" aria-hidden="true"></canvas>{{ end }}
{{ partial "header.html" . }}
<main class="container">
{{ block "main" . }}{{ end }}
</main>
{{ partial "footer.html" . }}
{{ if .IsHome }}{{ partial "scripts.html" . }}{{ end }}
</body>
</html>

@@ -2,13 +2,46 @@
<article class="home">
<header class="hero">
{{ with .Params.status }}
<p class="status-chip">{{ . | safeHTML }}</p>
<p class="status-chip reveal">{{ . | safeHTML }}</p>
{{ end }}
<h1>{{ .Title }}</h1>
{{ with site.Params.description }}<p class="lede">{{ . }}</p>{{ end }}
<h1 class="reveal">{{ .Title }}</h1>
{{ with site.Params.description }}<p class="lede reveal">{{ . }}</p>{{ end }}
<p class="cta-row reveal">
<a class="cta cta--primary" href="https://forgejo.sourcegate.online/daniel/furtka/releases">
{{ if eq site.Language.Lang "de" }}Neuestes Release{{ else }}Latest release{{ end }}
<span aria-hidden="true">→</span>
</a>
</p>
</header>
<div class="prose">
{{ .Content }}
</div>
{{ with .Params.intro }}
<section class="intro">{{ . | markdownify }}</section>
{{ end }}
{{ if .Params.features_today }}
<section class="feature-section">
{{ with .Params.features_today_label }}<p class="section-eyebrow">{{ . }}</p>{{ end }}
<div class="feature-grid">
{{ range .Params.features_today }}
<article class="feature-card" data-gsap="card">{{ . | markdownify }}</article>
{{ end }}
</div>
</section>
{{ end }}
{{ if .Params.features_next }}
<section class="feature-section">
{{ with .Params.features_next_label }}<p class="section-eyebrow">{{ . }}</p>{{ end }}
<div class="feature-grid">
{{ range .Params.features_next }}
<article class="feature-card" data-gsap="card">{{ . | markdownify }}</article>
{{ end }}
</div>
</section>
{{ end }}
{{ with .Content }}
<section class="prose closer">{{ . }}</section>
{{ end }}
</article>
{{ end }}

@@ -1,7 +1,10 @@
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1">
<script>document.documentElement.classList.replace('no-js','js');</script>
<title>{{ if .IsHome }}{{ site.Title }} — {{ site.Params.description }}{{ else }}{{ .Title }} · {{ site.Title }}{{ end }}</title>
<meta name="description" content="{{ with .Params.description }}{{ . }}{{ else }}{{ site.Params.description }}{{ end }}">
<meta name="theme-color" content="#f7f6f3" media="(prefers-color-scheme: light)">
<meta name="theme-color" content="#0d0d0f" media="(prefers-color-scheme: dark)">
<link rel="icon" type="image/svg+xml" href="/favicon.svg">
<meta property="og:site_name" content="{{ site.Title }}">
<meta property="og:title" content="{{ if .IsHome }}{{ site.Title }}{{ else }}{{ .Title }} · {{ site.Title }}{{ end }}">

website/layouts/partials/scripts.html Normal file
@@ -0,0 +1,12 @@
{{ $three := resources.Get "js/vendor/three.min.js" | fingerprint }}
{{ $gsap := resources.Get "js/vendor/gsap.min.js" | fingerprint }}
{{ $st := resources.Get "js/vendor/ScrollTrigger.min.js" | fingerprint }}
{{ $lenis := resources.Get "js/vendor/lenis.min.js" | fingerprint }}
{{ $scene := resources.Get "js/scene.js" | fingerprint }}
{{ $anim := resources.Get "js/animations.js" | fingerprint }}
<script defer src="{{ $three.RelPermalink }}" integrity="{{ $three.Data.Integrity }}"></script>
<script defer src="{{ $gsap.RelPermalink }}" integrity="{{ $gsap.Data.Integrity }}"></script>
<script defer src="{{ $st.RelPermalink }}" integrity="{{ $st.Data.Integrity }}"></script>
<script defer src="{{ $lenis.RelPermalink }}" integrity="{{ $lenis.Data.Integrity }}"></script>
<script defer src="{{ $scene.RelPermalink }}" integrity="{{ $scene.Data.Integrity }}"></script>
<script defer src="{{ $anim.RelPermalink }}" integrity="{{ $anim.Data.Integrity }}"></script>