Commit graph

129 commits

ee132712be docs: sync READMEs with 26.15 HTTPS opt-in + boot-USB filter
All checks were successful
Build ISO / build-iso (push) Successful in 24m38s
CI / lint (push) Successful in 1m1s
CI / test (push) Successful in 2m42s
CI / validate-json (push) Successful in 58s
CI / markdown-links (push) Successful in 28s
- README roadmap: Local HTTPS Phase 1 entry now reflects the 26.15
  opt-in model (default off, toggle in /settings) instead of the
  26.4 auto-trust story.
- README + iso/README: boot-USB filtering is no longer a TODO; both
  files now describe the implemented `findmnt`/`PKNAME` behaviour.
- iso/README rough edges: drop the boot-USB bullet (closed) and
  re-word the wizard-still-HTTP-only bullet to match the 26.15 toggle
  flow (it was a stale dup of the same line under it).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-28 12:09:33 +02:00
1193504a1e perf(site): gzip CSS, JS, SVG and fonts on the furtka.org nginx
Default nginx only gzips text/html, so the homepage HTML was the only
asset coming back compressed. The ~600 KB three.min.js bundle (and the
hashed CSS) were being shipped uncompressed across the public openresty
proxy. `gzip_types` now covers css/js/json/xml/svg/woff2.

Needs `sudo ops/nginx/setup-vm.sh` on forge-runner-01 to take effect —
the site-deploy workflow only rebuilds Hugo, it doesn't touch the
nginx config.
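A minimal sketch of the described `gzip_types` change — the MIME list is inferred from the commit text, and the actual file layout on forge-runner-01 may differ:

```nginx
# Illustrative fragment only; surrounding server/http context omitted.
gzip on;
gzip_types
    text/css
    application/javascript
    application/json
    application/xml
    image/svg+xml
    font/woff2;   # WOFF2 is already compressed internally, so gains here are small
# text/html is always compressed when gzip is on; it never needs listing.
```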

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-28 12:09:26 +02:00
65d48c92f8 feat(installer): filter the boot USB out of the install drive picker
On bare-metal installs, `lsblk` reports the USB stick the live ISO
booted from as TYPE=disk, so it showed up in the drive picker
alongside the real install target — a user could in theory pick the
USB they had just booted from. `findmnt /run/archiso/bootmnt` resolves
the boot partition and `lsblk -no PKNAME` walks it up to the parent
disk; that disk is dropped before scoring. On a normal box neither
the file nor the mountpoint exists, and the picker is unchanged.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-28 12:09:19 +02:00
aa7dea0528 feat(site): pimp homepage with animated 3D background and scroll reveals
Some checks failed
CI / lint (push) Successful in 1m24s
CI / test (push) Successful in 2m24s
CI / validate-json (push) Successful in 57s
CI / markdown-links (push) Successful in 29s
Deploy site / deploy (push) Successful in 7s
Build ISO / build-iso (push) Failing after 14m59s
Adopts the visual feel of Pascal's prototype while keeping Furtka's
voice, brand palette, and bilingual structure intact.

What changed
- Three.js wireframe torus-knot behind the hero, color/opacity tied
  to the existing --accent / --scene-opacity CSS vars so light and
  dark modes both work without a scene re-init.
- Scroll-driven camera zoom + core scale + tilt; canvas opacity fades
  past hero so feature cards stay readable.
- GSAP + ScrollTrigger reveal hero on load and stagger feature cards
  in as they enter the viewport. Lenis smooths scroll.
- "What works today" / "What's coming next" lists move from markdown
  bullets into front-matter arrays and render as scroll-reveal cards
  (7 + 4 cards, EN/DE parallel; copy is 1:1 from the original lists).
- Hero scaled up: gradient text on the wordmark (fg → accent),
  drop-shadow glow in dark mode, brighter lede color.
- Primary CTA -> /releases listing on Forgejo (Forgejo has no
  /releases/latest), with a pulsing glow + arrow slide on hover.
- Version bump 26.8-alpha -> 26.15-alpha to match the actual release.

Performance / a11y
- Vendor JS (Three.js r128, GSAP 3.12.2 + ScrollTrigger, Lenis 1.0.33)
  vendored locally under assets/js/vendor/ - no third-party CDN at
  runtime. ~728 KB total, fingerprinted via Hugo's pipeline with SRI.
- Canvas + scripts gated to homepage only ({{ if .IsHome }}); the
  Impressum/Datenschutz pages stay plain.
- prefers-reduced-motion: scene + GSAP early-return, CSS forces cards
  to their resting state. No-JS users see all content.
- All scripts deferred so first paint isn't blocked.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-27 16:14:21 +02:00
1cff22658b feat(auth): rate-limit failed logins with per-(user, IP) lockout
All checks were successful
CI / lint (push) Successful in 1m59s
CI / test (push) Successful in 3m27s
CI / validate-json (push) Successful in 1m56s
CI / markdown-links (push) Successful in 1m24s
Build ISO / build-iso (push) Successful in 26m58s
Ten wrong passwords from the same (username, client-IP) tuple within
15 minutes now return 429 with Retry-After for the next 15 minutes;
authenticate() isn't even called while locked, so the 429 response is
identical whether the password would have been correct — no oracle.

Tuple keying prevents an attacker on one IP from locking the real
admin out of their own box: a different IP (or an ISP reconnect) keeps
them in. The client IP comes from the rightmost X-Forwarded-For entry,
which is what Caddy appends and thus trustworthy (no upstream proxy in
front of Caddy). First-run setup bypasses the lockout — otherwise a
clumsy operator could lock themselves out before an admin exists.

State is in-memory (parallel to SessionStore), so `systemctl restart
furtka` clears a stuck lockout.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-22 17:27:14 +02:00
e68ed279cc fix(https): make HTTPS opt-in to stop the BAD_SIGNATURE trap on fresh installs
All checks were successful
Build ISO / build-iso (push) Successful in 17m23s
CI / lint (push) Successful in 27s
CI / test (push) Successful in 1m2s
CI / validate-json (push) Successful in 24s
CI / markdown-links (push) Successful in 15s
Release / release (push) Successful in 11m34s
Every Furtka since 26.5 shipped a Caddyfile with a
`__FURTKA_HOSTNAME__.local { tls internal }` site block, so every
first boot auto-generated a fresh self-signed CA + intermediate +
leaf. That worked for the first-ever Furtka user, but every reinstall
(or second box on the same LAN) produced a new CA whose intermediate
shared the fixed CN `Caddy Local Authority - ECC Intermediate` with
the previous one. Firefox caches intermediates by CN across profiles
— even private windows share cert9.db — so any visitor who had
trusted an older Furtka's CA got a cached intermediate with
mismatched keys when they hit the new box, producing
`SEC_ERROR_BAD_SIGNATURE`. Unlike UNKNOWN_ISSUER, Firefox has NO
"Advanced → Accept Risk" bypass for BAD_SIGNATURE, so fresh-install
boxes were effectively unreachable over HTTPS in any browser that
had ever seen a previous Furtka.

Validated live on the .46 test VM: fresh 26.14 ISO install → Firefox
hits BAD_SIGNATURE on https://furtka.local/ (even in private mode).
Chromium bypasses it via mDNS failure but the issue is the same.
openssl verify on the box confirms the chain is internally valid —
this is purely client-side cache pollution across boxes.

Fix:
- assets/Caddyfile: removed the hostname site block. Default install
  serves :80 only — https://furtka.local connection-refuses, which is
  a normal error every browser handles instead of the unbypassable
  crypto fault. Added top-level import of
  /etc/caddy/furtka-https.d/*.caddyfile so the /settings HTTPS toggle
  can drop a listener snippet there when a user explicitly opts in.
- furtka/https.py: set_force_https now writes TWO snippets atomically
  — the top-level hostname + tls internal block (enables :443) and
  the :80-scoped redirect (forces HTTP→HTTPS). Disable removes both.
  Reload failure rolls both back. Added _read_hostname + _https_snippet_content
  helpers with `/etc/hostname` → 'furtka' fallback so a missing
  hostname file doesn't produce an empty site block Caddy rejects.
- furtka/https.py::status: force_https now reads the listener
  snippet (was reading the redirect snippet). A redirect without a
  listener isn't actually HTTPS being served, so the listener is the
  authoritative "HTTPS is on" signal.
- furtka/updater.py: new _maybe_migrate_preserve_https hook runs
  inside _refresh_caddyfile on the 26.14 → 26.15 transition. If the
  box had the redirect snippet on disk (user had opted into HTTPS
  under the old regime), it writes the new listener snippet too so
  HTTPS keeps working after the Caddyfile swap removes the hostname
  block.
- webinstaller/app.py: post-install creates /etc/caddy/furtka-https.d/
  alongside /etc/caddy/furtka.d/ so the glob import can't trip an
  older Caddy on a missing path during the first reload.

Live-tested on .46: set_force_https(True) writes both snippets, Caddy
reloads, :443 listener comes up with fresh CA, curl -k returns 302,
HTTP 301-redirects. set_force_https(False) removes both snippets
atomically, :443 goes back to connection-refused.
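The write-both-then-reload-or-roll-back flow might look roughly like this sketch (helper names are invented here; the real furtka/https.py API differs):

```python
from pathlib import Path

def write_snippets_with_rollback(listener: Path, redirect: Path,
                                 listener_body: str, redirect_body: str,
                                 reload_caddy) -> bool:
    """Write both HTTPS snippets, then reload Caddy; restore the
    previous on-disk state if the reload fails. Illustrative only."""
    # Snapshot whatever is on disk now so a failed reload can be undone.
    previous = {p: (p.read_text() if p.exists() else None)
                for p in (listener, redirect)}
    for path, body in ((listener, listener_body), (redirect, redirect_body)):
        tmp = path.with_suffix(".tmp")
        tmp.write_text(body)
        tmp.replace(path)          # atomic rename on the same filesystem
    if reload_caddy():
        return True
    for path, old in previous.items():   # rollback both snippets
        if old is None:
            path.unlink(missing_ok=True)
        else:
            path.write_text(old)
    return False
```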

Tests: test_https.py expanded from 13 to 15 cases. Toggle-on asserts
both snippets written + hostname substituted. Toggle-off asserts
both removed. Rollback cases verify BOTH snippets restore on reload
failure. New test_https_snippet_content_has_tls_internal_and_routes
locks the exact shape of the listener block.
test_webinstaller_assets.py: updated two old asserts that assumed
hostname block was in Caddyfile; new test_post_install_creates_https_snippet_dir
guards the new directory.

276 tests pass, ruff check + format clean.

Known remaining wart (documented in CHANGELOG): a browser that
trusted a prior Furtka CA still hits BAD_SIGNATURE on this box's
HTTPS after enabling it, because the fixed intermediate CN is a
Caddy-side limitation. Workaround: clear cert9.db or visit in a
fresh profile. It will never affect end users who run a single Furtka box.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-21 19:30:04 +02:00
26f0424ae3 fix: auth-guard / and /settings, add Logout link to static navs
All checks were successful
Build ISO / build-iso (push) Successful in 17m14s
CI / lint (push) Successful in 26s
CI / test (push) Successful in 1m2s
CI / validate-json (push) Successful in 24s
CI / markdown-links (push) Successful in 15s
Release / release (push) Successful in 11m26s
Since 26.11 shipped login, two of the three nav pages were secretly
unauthenticated. The Caddyfile only reverse-proxied /api/*, /apps*,
/login*, /logout* to the Python auth-gated handler. Everything else —
including / (landing page) and /settings/ — fell through to Caddy's
catch-all file_server straight out of assets/www/, skipping the
session check entirely.

LAN visitor effect: they could read the box's hostname, IP, Furtka
version, uptime, and see all the Update-now / Reboot / HTTPS-toggle
buttons on /settings/. The API calls those buttons fired were
themselves 401-gated so nothing actually happened — but the info leak
plus "looks open" UX was real. Caught in the 26.13 SSH test session
when the user noticed Logout only appeared in the nav on /apps, and
not on / or /settings/.

Fix:
- Caddyfile: new `handle /settings*` and `handle /` blocks in the
  shared `(furtka_routes)` snippet reverse-proxy to localhost:7000,
  so both hit the Python auth-guard before the HTML goes out.
- api.py: new `_serve_static_www(relative_path)` helper reads
  assets/www/{index.html, settings/index.html} with a path-traversal
  clamp (resolved path must stay under static_www_dir). `do_GET`
  routes `/` and `/settings[/]` to it. Removed the `/` branch from
  the old combined-with-/apps line — those are different pages now.
- paths.py: new `static_www_dir()` helper with `FURTKA_STATIC_WWW`
  env override for tests.
- assets/www/*.html: both nav bars get the Logout link + a shared
  `doLogout()` inline script matching the _HTML pattern. Users never
  see the link unauthed (the Python handler 302s them before the
  page renders), but authed users get consistent navigation across
  all three pages.

Tests: 5 new cases in test_api.py — unauth / redirects, unauth
/settings redirects (both trailing-slash and not), authed / serves
index.html, authed /settings serves settings/index.html,
regression guard that / and /apps serve different content.
Existing test updated (the one that used / as a proxy for /apps).

Static /style.css, /rootCA.crt, /status.json, /furtka.json,
/update-state.json stay served by Caddy's catch-all — those are
public by design (login page needs style.css, fresh users need the
CA to trust HTTPS, runtime JSON is metadata not creds).

272 tests pass, ruff check + format clean.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-21 18:16:42 +02:00
8c1fd1da2b fix: unbreak upgrade path + install-lock race
All checks were successful
Build ISO / build-iso (push) Successful in 17m28s
CI / lint (push) Successful in 27s
CI / test (push) Successful in 59s
CI / validate-json (push) Successful in 23s
CI / markdown-links (push) Successful in 15s
Release / release (push) Successful in 11m38s
Three interlocking issues that made 26.11/26.12 effectively
un-upgradable from pre-auth versions without manual pacman +
symlink surgery. Caught while SSH-testing the .196 VM which landed
on a rollback loop after every Update-now click.

1. auth.py imported werkzeug.security, but the target system runs
   core as bare system Python — neither flask nor werkzeug is
   pip-installed. Fresh 26.11+ boxes died on import. Replaced with
   a 50-line stdlib `furtka/passwd.py` using hashlib.pbkdf2_hmac
   for new hashes and parsing werkzeug's `scrypt:N:r:p$salt$hex`
   format for backward reads, so an existing users.json survives.

2. updater._health_check pinged /api/apps expecting 200. Post-
   auth, /api/apps returns 401 for unauth requests → HTTPError
   caught as URLError → retry loop → 30s timeout → rollback. Now
   any 2xx-4xx counts as "server alive"; only 5xx / connection
   errors fail. Server responding at all is proof it came back up.

3. _do_install released the fcntl lock between sync pre-validation
   and the systemd-run dispatch. A second POST could slip in,
   pass the lock check, return 202, and leave its install-bg child
   to die silently on the in-child lock. Now the API also reads
   install-state.json and refuses 409 on non-terminal stages —
   the state file is the reliable signal, the fcntl lock is
   defence in depth.
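A stdlib-only password module in the described shape might look like this sketch. The iteration count and format string here are illustrative, and the werkzeug-scrypt backward-read path is omitted:

```python
import hashlib
import hmac
import secrets

def hash_password(password: str) -> str:
    """PBKDF2-HMAC-SHA256 in a self-describing werkzeug-style format."""
    salt = secrets.token_hex(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt.encode(), 600_000)
    return f"pbkdf2:sha256:600000${salt}${digest.hex()}"

def verify_password(password: str, stored: str) -> bool:
    """Re-derive with the stored parameters and compare in constant time."""
    method, salt, hexdigest = stored.split("$", 2)
    _, algo, iterations = method.split(":")
    digest = hashlib.pbkdf2_hmac(algo, password.encode(), salt.encode(), int(iterations))
    return hmac.compare_digest(digest.hex(), hexdigest)
```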

Test coverage:
- tests/test_passwd.py (new, 6 cases): roundtrip, salt uniqueness,
  format shape, werkzeug scrypt backward-compat against a real
  hash captured from the .196 box, malformed + non-string
  rejection.
- tests/test_updater.py: +3 cases for _health_check — 4xx=healthy,
  5xx=unhealthy, URLError retry loop.
- tests/test_api.py: +2 cases for install 409 on non-terminal
  state + 202 after terminal.

All 267 tests green, ruff check + format clean.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-21 17:03:28 +02:00
f3cd9e963c feat(install): async background install with progress polling
All checks were successful
Build ISO / build-iso (push) Successful in 17m24s
CI / lint (push) Successful in 26s
CI / test (push) Successful in 43s
CI / validate-json (push) Successful in 24s
CI / markdown-links (push) Successful in 16s
Release / release (push) Successful in 11m34s
POST /api/apps/install now returns 202 Accepted after the synchronous
pre-validation (resolve source, copy files, write .env, check for
placeholder secrets, validate path-type settings). The docker-facing
phases (compose pull → ensure volumes → compose up) are dispatched as
a background systemd-run unit (furtka-install-<app>) that writes stage
transitions to /var/lib/furtka/install-state.json. The UI polls
GET /api/apps/install/status every 1.5s and re-labels the modal
submit button — "Image wird heruntergeladen…" ("Downloading image…") →
"Speicherbereiche werden erstellt…" ("Creating storage volumes…") →
"Container wird gestartet…" ("Starting container…") —
instead of sitting dead on "Installing…" for 30+ seconds on large
images like Jellyfin.

Mirrors the exact shape of /api/catalog/sync/apply and
/api/furtka/update/apply: same fcntl lock, same atomic state-file
writes, same terminal-state poll loop ("done" | "error"). New CLI
subcommand `furtka app install-bg <name>` is what systemd-run invokes;
it's hidden from --help because regular CLI users still want the
synchronous `furtka app install <name>`.

Reinstall button on the app list polls too — after dispatch, its text
reflects the background stage until terminal, matching the modal
flow.
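The stage file the UI polls can be written atomically with the usual tmp + rename dance. A rough sketch, with the state schema simplified to two fields:

```python
import json
import os
import tempfile
from pathlib import Path

def write_stage(path: Path, app: str, stage: str) -> None:
    """Atomically record a stage transition (tmp + rename), so a
    polling reader never sees a half-written file. The real runner's
    schema likely carries more fields than this sketch."""
    payload = {"app": app, "stage": stage}
    fd, tmp = tempfile.mkstemp(dir=path.parent, suffix=".tmp")
    with os.fdopen(fd, "w") as f:
        json.dump(payload, f)
    os.replace(tmp, path)   # atomic swap into place

def read_stage(path: Path) -> dict:
    """What the status endpoint would hand back to the poll loop."""
    return json.loads(path.read_text())
```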

Tests:
- tests/test_install_runner.py (new, 9 cases): state roundtrip, lock
  contention, happy-path phase ordering, error writes on pull/up
  failure, lock release on both terminal outcomes.
- tests/test_api.py: new no_systemd_run fixture stubs subprocess.run;
  existing install tests adapted to 202 response; new tests for 409
  lock contention and the status endpoint.
- tests/test_cli.py: install-bg dispatches correctly and returns 1
  on failure with journald-friendly stderr.

256 tests pass, ruff check + format clean.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-21 15:50:49 +02:00
470823b347 feat(auth): login-guard the Furtka UI with a cookie session
All checks were successful
Build ISO / build-iso (push) Successful in 17m30s
CI / lint (push) Successful in 27s
CI / test (push) Successful in 43s
CI / validate-json (push) Successful in 31s
CI / markdown-links (push) Successful in 15s
Release / release (push) Successful in 11m38s
One-admin, one-password model — all of /apps, /api/*, /, and
/settings/ now require a signed-in session. Passwords are werkzeug
PBKDF2-hashed in /var/lib/furtka/users.json (mode 0600, atomic write
via the same .tmp+chmod+rename dance installer.write_env uses).
Sessions are secrets.token_urlsafe(32) tokens held in a module-level
SessionStore dict (thread-safe lock included for when we swap to
ThreadingHTTPServer). Cookies are HttpOnly, SameSite=Strict, and
Path=/, with Secure set when X-Forwarded-Proto from Caddy says HTTPS.

Two bootstrap paths:
  * Fresh install — webinstaller step-1 collects Linux user + password,
    the chroot post-install step hashes the password and writes
    users.json on the target partition. First browser visit lands on
    /login with the account already present.
  * Upgrade from 26.10-alpha — no users.json yet, so /login detects
    setup_needed() and renders a first-run setup form. POST creates
    the admin and immediately logs in.

POST /logout revokes the server session and clears the cookie.
Unauthenticated HTML requests 302 to /login; unauthenticated API
requests 401 JSON so fetch() callers see a clean error. A sleep(0.5)
on failed logins is the brute-force speed bump on top of werkzeug's
~600k-iter PBKDF2.
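A stripped-down sketch of such a store (the real SessionStore surely tracks more than an expiry; names here are illustrative):

```python
import secrets
import time
from threading import Lock

class SessionStore:
    """Token-to-expiry map behind a lock, ready for a threaded server."""

    def __init__(self, ttl: float = 7 * 24 * 3600) -> None:
        self._lock = Lock()
        self._sessions: dict[str, float] = {}   # token -> expiry time
        self._ttl = ttl

    def create(self) -> str:
        token = secrets.token_urlsafe(32)
        with self._lock:
            self._sessions[token] = time.monotonic() + self._ttl
        return token

    def valid(self, token: str) -> bool:
        with self._lock:
            expiry = self._sessions.get(token)
            if expiry is None or expiry <= time.monotonic():
                self._sessions.pop(token, None)   # drop expired entries
                return False
            return True

    def revoke(self, token: str) -> None:
        with self._lock:
            self._sessions.pop(token, None)
```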

Caddyfile gains /login* and /logout* handle blocks in the shared
furtka_routes snippet so both :80 and the HTTPS hostname block
forward the auth endpoints to localhost:7000. Without this Caddy
would 404 from the static file server.

Test surface:
  * tests/test_auth.py (new, 19 cases): hash roundtrip, users.json
    I/O, session create/lookup/expire/revoke.
  * tests/test_api.py: new admin_session fixture; existing HTTP
    tests updated to send the cookie; new tests cover login setup,
    login success, wrong-password 401, logout revocation, and the
    guard's 302/401 split.
  * tests/test_webinstaller_assets.py: new case that unpacks the
    users.json _write_file_cmd body and verifies the werkzeug hash
    round-trips against the step-1 password.

Bumped version to 26.11-alpha and rolled CHANGELOG. Also folded in
the ruff-format fix that was pending from 26.10-alpha's lint red.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-21 13:01:17 +02:00
577c2469f7 style(tests): reflow OPTIONAL_PATH_MANIFEST to match ruff format
All checks were successful
Build ISO / build-iso (push) Successful in 20m27s
CI / lint (push) Successful in 29s
CI / test (push) Successful in 1m3s
CI / validate-json (push) Successful in 46s
CI / markdown-links (push) Successful in 23s
Fixes the lint failure on the 26.10-alpha commit — ruff format wanted
the single-item settings list on one line rather than spread over
three. Pure formatting, no behaviour change.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-21 11:56:52 +02:00
e8c5317660 chore: release 26.10-alpha
Some checks failed
CI / lint (push) Failing after 50s
CI / test (push) Successful in 1m6s
CI / validate-json (push) Successful in 42s
CI / markdown-links (push) Successful in 22s
Release / release (push) Successful in 13m27s
Ships the new path-type setting (the schema extension that unlocks
host bind mounts for Jellyfin / Paperless / Nextcloud / Immich-class
apps), server-side path validation, app-author docs for the new type,
and the remove-USB-stick hint on the installer's reboot screen.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-21 11:48:07 +02:00
474af8fb2d feat(installer): remove-USB-stick hint on the reboot screen
Some checks failed
CI / lint (push) Waiting to run
CI / test (push) Waiting to run
CI / validate-json (push) Waiting to run
CI / markdown-links (push) Waiting to run
Build ISO / build-iso (push) Failing after 4m15s
Adds a bold "Remove the USB stick now" line before the reboot, plus a
muted fallback paragraph pointing at the BIOS one-time boot menu keys
(F11/F12/Esc) for when removal isn't enough. Caught on the 2026-04-21
Medion bare-metal test: the box didn't boot the installed system on
first reboot and required manual BIOS boot-order changes, which
non-technical users won't know how to do.

Template-only change. No new CSS, no new code paths — <kbd> uses
browser defaults, <strong> keeps the hierarchy readable.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-21 11:46:38 +02:00
7c6da3d051 docs(apps): document the new path setting type
Some checks failed
CI / lint (push) Failing after 38s
CI / test (push) Successful in 54s
CI / validate-json (push) Successful in 34s
CI / markdown-links (push) Successful in 19s
Covers the path-type declaration in manifest.json, the companion
compose bind-mount pattern (${MEDIA_PATH}:/media:ro), and the full
server-side validation rules the installer applies (absolute, exists,
is-directory, resolve-then-deny-list, traversal caught).

Clarifies the mental split between manifest.volumes (internal state
the app owns) and path settings (user data the container mounts and
usually reads without owning), and recommends :ro as the default for
consumer-only mounts.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-21 11:43:09 +02:00
04762f5dd1 feat(manifest): add 'path' setting type with server-side validation
Some checks failed
CI / lint (push) Waiting to run
CI / test (push) Waiting to run
CI / validate-json (push) Waiting to run
CI / markdown-links (push) Waiting to run
Build ISO / build-iso (push) Failing after 4m34s
Apps can now declare a setting with "type": "path" whose value is an
absolute host filesystem path. Compose bind-mounts it via standard .env
substitution (${MEDIA_PATH}:/media) — no reconciler changes needed.
Unlocks media/data-heavy apps (Jellyfin, later Paperless, Nextcloud,
Immich) that point at existing user data instead of copying it into a
Docker volume.

Install/update refuses values that aren't absolute, don't exist, aren't
directories, or resolve into a system-path deny-list (/, /etc, /root,
/boot, /proc, /sys, /dev, /bin, /sbin, /usr/bin, /usr/sbin,
/var/lib/furtka). Path.resolve() is applied before the deny-list check
so /mnt/../etc traversal is caught too. Error messages surface in the
existing install/edit modal.

UI: path settings render as a text input with a /mnt/… placeholder.
The manifest's `description` field carries the actual hint ("Absoluter
Pfad zu deinem Filme-Ordner, z.B. /mnt/media" — "absolute path to your
movies folder, e.g. /mnt/media"). No new form
components, no new API routes.

Tests: 9 new cases for install + update path validation; 1 new case
for manifest schema accepting the path type. 211 total passing.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-21 11:39:15 +02:00
c7e7c8b1e5 chore: release 26.9-alpha
All checks were successful
Build ISO / build-iso (push) Successful in 20m49s
CI / lint (push) Successful in 1m13s
CI / test (push) Successful in 48s
CI / validate-json (push) Successful in 44s
CI / markdown-links (push) Successful in 16s
Release / release (push) Successful in 13m31s
Three small fixes surfaced by the 26.8 QA pass on fresh VM .161:

- Landing-page app tiles now open external `open_url` links in a new
  tab, matching /apps Open-button behaviour. Without this a Kuma click
  on the home screen replaced Furtka itself.
- `scripts/publish-release.sh` treats the ISO upload as best-effort;
  a Forgejo-proxy 504 no longer kills the whole release after tarball
  + sha + release.json are already uploaded.
- `furtka app list --json` now mirrors /api/apps — includes
  `description_long`, `open_url`, and `settings` that the previous
  slim projection dropped.
2026-04-20 18:51:30 +02:00
cf93ef44cb chore: release 26.8-alpha (power actions, supersedes orphan 26.7 tag)
Some checks failed
Build ISO / build-iso (push) Successful in 26m56s
Deploy site / deploy (push) Successful in 23s
CI / lint (push) Successful in 34s
CI / test (push) Successful in 1m4s
CI / validate-json (push) Successful in 51s
CI / markdown-links (push) Successful in 28s
Release / release (push) Failing after 7m38s
Adds Reboot + Shut down buttons on /settings, backed by a new
POST /api/furtka/power endpoint that kicks a delayed `systemd-run
--on-active=3s systemctl {reboot|poweroff}` so the HTTP response
flushes before the kernel loses network. Both buttons open a native
confirm dialog; after reboot, the page polls /furtka.json until the
box is back and reloads itself.

26.7-alpha was tagged on 5d8ac63 but release.yml never fired for that
tag (Forgejo race with the concurrent main push; re-push of the deleted
tag didn't wake the workflow either). 26.8 supersedes it and carries
the same open_url + Open-button content plus the power actions.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-20 16:00:19 +02:00
5d8ac63d9f chore: release 26.7-alpha
Some checks failed
Deploy site / deploy (push) Waiting to run
Build ISO / build-iso (push) Has been cancelled
CI / lint (push) Successful in 1m26s
CI / test (push) Successful in 1m18s
CI / validate-json (push) Successful in 52s
CI / markdown-links (push) Successful in 27s
Release / release (push) Has been cancelled
Ships the open_url manifest field + the Open button in /apps and on
the landing page, replacing the fileshare-only hardcoded deep-link
with a generalised {host}-templated URL. Fileshare seed manifest
bumps to 0.1.2; the furtka-apps catalog release that goes with this
adds matching open_url values for fileshare + uptime-kuma.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-20 15:44:01 +02:00
018f2e20b0 chore: release 26.6-alpha
All checks were successful
Build ISO / build-iso (push) Successful in 21m23s
CI / lint (push) Successful in 1m31s
CI / test (push) Successful in 1m20s
CI / validate-json (push) Successful in 48s
CI / markdown-links (push) Successful in 27s
Deploy site / deploy (push) Successful in 8s
Release / release (push) Successful in 24s
Rolls the apps-catalog split, the /settings CSS wrap fix, and the version
bump to 26.6-alpha across pyproject + website copy. Core release tarball
still carries apps/fileshare as the offline first-boot seed; the new
daniel/furtka-apps catalog (tagged 26.6-alpha today) is the authoritative
source on boxes that have synced at least once.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-20 14:49:31 +02:00
3a8fad5185 feat(catalog): on-box apps catalog synced independently of core version
New `furtka catalog sync` pulls the latest daniel/furtka-apps release,
verifies its sha256, extracts under /var/lib/furtka/catalog/, and
atomically swaps into place — so apps can ship without cutting a new
Furtka core release. A daily timer (furtka-catalog-sync.timer, 10 min
post-boot + 24 h with ±6 h jitter) drives the sync; /apps gets a
manual "Sync apps catalog" button that kicks the same code path via a
detached systemd-run unit.

Layout of the new on-box tree:

  /var/lib/furtka/catalog/            synced catalog (survives self-updates)
    ├── VERSION
    └── apps/<name>/ ...
  /var/lib/furtka/catalog-state.json  sync stage + last version, UI-polled
  /run/furtka/catalog.lock            flock so timer + manual click can't race

Resolver precedence (furtka/sources.py): catalog wins over the bundled
seed (/opt/furtka/current/apps/, carried by the core release for offline
first-boot). Installed apps under /var/lib/furtka/apps/ are never
auto-swapped — the user clicks Reinstall to move an existing install onto a
newer catalog version; settings merge-preserved via the existing
installer.install_from path.
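The verify-then-swap core might look like this sketch (the real sync_catalog also stages, validates manifests, and holds the flock; names here are illustrative):

```python
import hashlib
import shutil
from pathlib import Path

def verify_sha256(tarball: Path, expected: str) -> bool:
    """Hash the downloaded release and compare to the published digest."""
    h = hashlib.sha256()
    with tarball.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    return h.hexdigest() == expected

def swap_into_place(staging: Path, live: Path) -> None:
    """Atomically replace the live catalog with the validated staging
    tree; rename is atomic within one filesystem, and the old tree is
    restored if the swap itself fails."""
    old = live.with_name(live.name + ".old")
    if live.exists():
        live.rename(old)
    try:
        staging.rename(live)
    except OSError:
        if old.exists():
            old.rename(live)     # leave the previous catalog intact
        raise
    if old.exists():
        shutil.rmtree(old)
```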

New files:
- furtka/_release_common.py — shared Forgejo/tarball primitives lifted
  from furtka/updater.py. Both modules now import from here; updater's
  behaviour and public API unchanged.
- furtka/catalog.py — check_catalog(), sync_catalog() with staging +
  manifest validation + atomic rename. Refuses bad sha256 / broken
  manifests and leaves the live catalog intact on any failure path.
- furtka/sources.py — resolve_app_name() / list_available() abstraction
  used by installer.resolve_source and api._list_available.
- assets/systemd/furtka-catalog-sync.{service,timer} — oneshot service
  + daily timer. Timer auto-enables on self-update via a one-line
  addition to _link_new_units (fresh installs get enabled via the
  webinstaller's _FURTKA_UNITS list).

API + UI:
- /api/bundled renamed internally to _list_available; endpoint stays as
  a backcompat alias; /api/apps/available is the new canonical name.
  Each list entry carries a `source` field ("catalog" | "bundled").
- POST /api/catalog/sync/check + /apply + GET /api/catalog/status.
- /apps page grows a catalog-status row + Sync button; poll loop
  mirrors the Furtka self-update flow.

CLI: `furtka catalog sync [--check]` + `furtka catalog status` (both
support --json). Old `furtka app install` / `reconcile` / `update` /
`rollback` surfaces are unchanged.

Test gate: 194 tests (170 baseline + 24 new) covering catalog sync
(happy path, sha256 mismatch, invalid manifest, lock contention,
preserves-on-failure) + resolver precedence + api renames. ruff
check + format clean.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-20 14:16:02 +02:00
e7ee1698bd fix(ui): stop SHA-256 fingerprint overflowing the Local HTTPS card
The /settings "CA fingerprint (SHA-256)" value is a 95-char colon-
separated hex string with no whitespace, so CSS had no valid break
points and the value pushed past the card's right edge — visible on
the 192.168.178.23 fresh-install test.

.kv is a two-column grid (max-content 1fr); grid items default to
min-width: auto (= content width), which overrides the 1fr track's
width constraint. min-width: 0 lets the track shrink, and
overflow-wrap: anywhere gives the fingerprint valid break points at
any character. The styling stays scoped to .kv dd so card prose isn't
affected.
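The fix as described, reconstructed from the commit text (selector names taken from it):

```css
/* Sketch: let the fingerprint value wrap inside the .kv grid. */
.kv dd {
  min-width: 0;            /* let the 1fr grid track actually shrink */
  overflow-wrap: anywhere; /* break the 95-char hex string at any character */
}
```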

Verified live on .23 via hot-patch into /opt/furtka/current/assets/
www/style.css + caddy reload.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-20 13:41:33 +02:00
54357aa2a3 style: ruff format — collapse two-line hostname file path + version loop
All checks were successful
Build ISO / build-iso (push) Successful in 21m29s
CI / lint (push) Successful in 37s
CI / test (push) Successful in 58s
CI / validate-json (push) Successful in 42s
CI / markdown-links (push) Successful in 23s
Format-only diff from `ruff format`. The 26.5-alpha push's CI run failed
on `ruff format --check`; these three files had two-line constructs that
fit on one line at ruff's default line length. No behaviour change.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-20 12:41:58 +02:00
fec962e3d2 chore: release 26.5-alpha
Some checks failed
Build ISO / build-iso (push) Successful in 20m10s
Deploy site / deploy (push) Successful in 13s
CI / lint (push) Failing after 26s
CI / test (push) Successful in 33s
CI / validate-json (push) Successful in 24s
CI / markdown-links (push) Successful in 14s
Release / release (push) Successful in 6s
Rolls the HTTPS handshake fix (#10) and the README realignment into a
tagged release. Also closes the 26.4 follow-up about the hand-pinned
wizard footer version: webinstaller/app.py now resolves the version
via a Flask context processor (reads /opt/furtka/VERSION on the live
ISO, written by iso/build.sh from pyproject.toml at build time; falls
back to pyproject.toml in dev runs, then to "dev"). pyproject.toml and
the website version strings are bumped in the same commit so every
surface reports 26.5-alpha consistently.
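
The fallback chain can be sketched in shell terms (the real resolver lives
in webinstaller/app.py as a Flask context processor; the paths are from the
message, the function name and arguments are ours):

```shell
# Resolve the version shown in the wizard footer: /opt/furtka/VERSION on
# the live ISO, pyproject.toml in dev runs, "dev" as the last resort.
resolve_version() {
  version_file="${1:-/opt/furtka/VERSION}"
  pyproject="${2:-pyproject.toml}"
  if [ -r "$version_file" ]; then
    tr -d '[:space:]' < "$version_file"
  elif [ -r "$pyproject" ]; then
    # pyproject.toml carries a line like: version = "26.5-alpha"
    sed -n 's/^version *= *"\(.*\)".*/\1/p' "$pyproject" | head -n 1
  else
    echo dev
  fi
}
```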

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-20 11:52:36 +02:00
8fbe67ffb9 fix(https): restore TLS handshake — name hostname + correct PKI path
Some checks failed
Build ISO / build-iso (push) Waiting to run
CI / lint (push) Failing after 2m11s
CI / test (push) Successful in 2m8s
CI / validate-json (push) Successful in 55s
CI / markdown-links (push) Successful in 25s
Deploy site / deploy (push) Successful in 8s
Closes #10. Two linked bugs in 26.4-alpha's Phase 1 HTTPS made the
force-HTTPS toggle fatal: every SNI handshake on :443 died with
SSL_ERROR_INTERNAL_ERROR_ALERT, so the toggle redirected users from
working HTTP to broken HTTPS.

Root cause 1: bare `:443 { tls internal }` gives Caddy no hostname to
issue a leaf cert for, so /var/lib/caddy/certificates/ stayed empty and
Caddy sent TLS `internal_error` on every handshake. Fix: the :443 block
is now `__FURTKA_HOSTNAME__.local, __FURTKA_HOSTNAME__ { tls internal }`,
with the marker substituted by webinstaller/app.py at install time and
by furtka.updater._refresh_caddyfile on self-update (reads /etc/hostname,
falls back to "furtka"). `auto_https disable_redirects` keeps Caddy's
built-in redirect out of the way of the /settings toggle.

Root cause 2: furtka/https.py and the /rootCA.crt handler both referenced
/var/lib/caddy/.local/share/caddy/pki/authorities/local/ — a path that
doesn't exist. caddy.service sets XDG_DATA_HOME=/var/lib, so Caddy's
storage is /var/lib/caddy/ directly. Fix: both paths corrected.

Verified on the 192.168.178.110 smoke VM by swapping the Caddyfile in,
reloading, handshaking, restoring: TLS 1.3 handshake succeeds, leaf cert
issued under /var/lib/caddy/certificates/local/, /rootCA.crt returns 200.

Tests: new cases assert the Caddyfile ships the hostname placeholder,
the webinstaller substitutes it, _refresh_caddyfile re-substitutes from
/etc/hostname on update, and the asset sets auto_https disable_redirects.
Unit tests still stub the Caddy reload — the real handshake regression
needs a smoke-VM integration test (follow-up, separate from this fix).
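
The marker substitution that webinstaller/app.py and _refresh_caddyfile
perform can be sketched as a stdin/stdout filter (the real code rewrites
the Caddyfile in place; the function name is ours):

```shell
# Replace __FURTKA_HOSTNAME__ with the box hostname from /etc/hostname,
# falling back to "furtka" when the file is missing or empty.
substitute_hostname() {
  host=$(tr -d '[:space:]' 2>/dev/null < "${1:-/etc/hostname}") || host=""
  sed "s/__FURTKA_HOSTNAME__/${host:-furtka}/g"
}
```

On a default-hostname box the asset's :443 block then comes out as
`furtka.local, furtka { tls internal }`, giving Caddy a name to issue a
leaf cert for.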

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-20 11:39:48 +02:00
9ae14f4108 docs: add apps/ authoring guide + realign READMEs with 26.4-alpha
Closes #9. New apps/README.md walks through the four-file contract
(manifest.json, docker-compose.yaml, .env.example, icon.svg) with
the rules enforced by furtka/manifest.py and the SVG sanitiser, using
apps/fileshare as the reference.

Root README: release list now covers 26.1/26.3/26.4 (26.2 stalled on
the jq apt hang). Local HTTPS Phase 1 and the post-build smoke VM on
pollux both flip to [x]; the old proksi.local HTTPS TODO becomes a
Phase 2 entry (dedicated local CA + HTTPS on the live-installer wizard).

iso/README: mDNS is wired — live ISO advertises proksi.local, installed
box defaults to furtka.local (the form's default hostname, not proksi).
HTTPS section notes Caddy tls internal on :443 shipped in 26.4 while
the wizard itself is still HTTP. Overlay table picks up etc/hostname,
etc/issue, furtka-update-issue, and furtka-issue.service.

website/README: auto-deploy via .forgejo/workflows/deploy-site.yml is
the default path now; website/deploy.sh stays as the SSH-hop fallback
for off-CI pushes, and deploy-ci.sh is called out in the structure map.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-20 11:39:48 +02:00
850d656169 Merge pull request 'fix(smoke): capture arp-scan output instead of piping into awk' (#8) from fix-smoke-pipefail into main
All checks were successful
Build ISO / build-iso (push) Successful in 17m6s
CI / lint (push) Successful in 26s
CI / test (push) Successful in 33s
CI / validate-json (push) Successful in 28s
CI / markdown-links (push) Successful in 13s
Reviewed-on: #8
2026-04-18 15:43:50 +02:00
93c6b838a7 fix(smoke): capture arp-scan output instead of piping into awk
All checks were successful
CI / lint (pull_request) Successful in 26s
CI / test (pull_request) Successful in 34s
CI / validate-json (pull_request) Successful in 23s
CI / markdown-links (pull_request) Successful in 14s
When host-networking finally gave arp-scan a real LAN to scan, the
first MAC-match emitted a line, awk hit its `exit` clause, closed the
pipe, and arp-scan died from SIGPIPE (exit 141). With `set -o pipefail`
active, that killed the whole smoke-vm.sh run immediately after
"==> starting VM" — no IP discovery, no curl, no prune.

Fix: capture arp-scan's output into a variable first, then let awk
parse a here-string. Same treatment for the `ip neigh show` fallback.
No pipe, no pipefail cascade.
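
The capture-then-parse pattern, sketched with a stand-in variable in place
of a live arp-scan (bash, since the fix uses a here-string; the MAC and
addresses here are made up):

```shell
set -o pipefail   # active in smoke-vm.sh; the old pipeline died under it

# Stand-in for: scan_output=$(sudo arp-scan --localnet)
scan_output='192.168.178.50 52:54:00:ab:cd:ef QEMU
192.168.178.1 aa:bb:cc:00:11:22 router'

# awk reads a here-string and may exit on the first match; the scan has
# already finished, so there is no pipe for SIGPIPE (exit 141) to kill.
ip=$(awk -v mac='52:54:00:ab:cd:ef' '$2 == mac { print $1; exit }' <<<"$scan_output")
echo "$ip"
```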

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-18 15:26:10 +02:00
caa8609908 Merge pull request 'release-26.4-alpha' (#7) from release-26.4-alpha into main
Some checks failed
Build ISO / build-iso (push) Successful in 26m22s
Deploy site / deploy (push) Successful in 3s
CI / lint (push) Successful in 26s
CI / test (push) Successful in 1m37s
CI / markdown-links (push) Successful in 33s
Release / release (push) Successful in 6s
CI / validate-json (push) Failing after 14m0s
Reviewed-on: #7
2026-04-18 14:29:19 +02:00
522ea06cd0 fix(smoke): bump smoke-VM RAM to 8 GiB + make cores/memory configurable
All checks were successful
CI / lint (pull_request) Successful in 1m10s
CI / test (pull_request) Successful in 2m17s
CI / validate-json (pull_request) Successful in 1m5s
CI / markdown-links (pull_request) Successful in 41s
pollux (192.168.178.165) wedged at the network level during an
end-to-end install test today — mkinitcpio on a 4 GiB smoke VM +
the cached 1.5 GB ISO + a busy runner container pushed the host into
OOM, taking pveproxy and the SSH path down with it. Recovered by
physical reset.

Smoke VM now defaults to 8192 MiB / 2 vCPU, configurable via
PVE_TEST_VM_MEMORY / PVE_TEST_VM_CORES. Host has 64 GiB, so one
smoke VM at 8 GiB is well within headroom.
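
The new defaults with their env overrides reduce to two parameter
expansions in smoke-vm.sh (a sketch; the env var names are from the
message, the internal variable names are ours):

```shell
# 8 GiB / 2 vCPU by default; override per-run via the environment, e.g.
#   PVE_TEST_VM_MEMORY=16384 scripts/smoke-vm.sh
vm_memory="${PVE_TEST_VM_MEMORY:-8192}"
vm_cores="${PVE_TEST_VM_CORES:-2}"
echo "smoke VM: ${vm_memory} MiB, ${vm_cores} vCPU"
```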

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-18 14:28:29 +02:00
d567317538 chore: release 26.4-alpha
Bumps version everywhere user-facing that had drifted from the tag:

- pyproject.toml 26.0 → 26.4
- website/hugo.toml 26.0 → 26.4 (driving furtka.org landing + footer)
- website/content/_index{.md,.de.md} status string
- webinstaller/templates/base.html footer (was hardcoded — noted as
  follow-up to read dynamically from pyproject.toml)

Promotes the Unreleased section to 26.4-alpha and folds in today's
additions:

- Local HTTPS via Caddy tls internal + opt-in redirect toggle
- Two self-update UX fixes (Installed-field refresh + 45s reload
  fallback)
- Impressum + Datenschutzerklärung on furtka.org
- deploy-site.yml auto-deploy of the Hugo site on push-to-main
- Smoke VM pipeline on .165 Proxmox (build-iso inline smoke step +
  workflow_dispatch Smoke latest ISO for cheap re-tests)

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-18 14:21:43 +02:00
931d62149f Merge pull request 'chore(smoke): surface PVE response body on API failure' (#6) from debug-smoke-errors into main
Some checks failed
CI / validate-json (push) Waiting to run
CI / markdown-links (push) Waiting to run
Build ISO / build-iso (push) Has been cancelled
CI / lint (push) Has been cancelled
CI / test (push) Has been cancelled
Reviewed-on: #6
2026-04-18 14:06:47 +02:00
f4f7d853ba chore(smoke): surface PVE response body on API failure
Some checks failed
CI / lint (pull_request) Successful in 1m3s
CI / test (pull_request) Successful in 1m23s
CI / markdown-links (pull_request) Has been cancelled
CI / validate-json (pull_request) Has been cancelled
api() was swallowing Proxmox's error body because callers pipe its
output to /dev/null. With a bare "curl: (22) 403" in the log we can't
tell which permission is missing. Now we capture the response body,
print it to stderr on failure, and only emit it to stdout on success.

No behaviour change on the happy path.
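
The error-surfacing shape, sketched with an arbitrary command standing in
for the real curl call against the PVE API so it stays runnable anywhere:

```shell
api() {
  # Capture the full response body instead of letting callers discard it.
  body=$("$@" 2>&1)
  status=$?
  if [ "$status" -ne 0 ]; then
    # Failure: surface the body (e.g. Proxmox's permission error) on
    # stderr, where a caller's `>/dev/null` can't swallow it.
    printf '%s\n' "$body" >&2
    return "$status"
  fi
  # Success: body goes to stdout as before, so happy-path callers can pipe.
  printf '%s\n' "$body"
}
```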

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-18 14:06:09 +02:00
cb6e92aa92 Merge pull request 'fix(smoke): reuse existing PVE-side ISO instead of delete+re-upload' (#5) from fix-smoke-reuse-iso into main
Some checks failed
CI / validate-json (push) Waiting to run
CI / markdown-links (push) Waiting to run
Build ISO / build-iso (push) Has been cancelled
CI / lint (push) Has been cancelled
CI / test (push) Has been cancelled
Reviewed-on: #5
2026-04-18 14:00:40 +02:00
afbb8d59f9 fix(smoke): reuse existing PVE-side ISO instead of delete+re-upload
Some checks failed
CI / markdown-links (pull_request) Waiting to run
CI / lint (pull_request) Successful in 1m5s
CI / validate-json (pull_request) Has been cancelled
CI / test (pull_request) Has been cancelled
The delete branch required Datastore.Allocate (or was hitting a
privilege-separated token ACL edge case) and produced 403s on re-runs
against the same commit SHA. Since the ISO bytes are reproducible for
a given SHA — furtka-<sha>.iso is content-addressed — we can just
reuse whatever is already in PVE storage instead of cycling it.

Fixes the "runs-on-same-sha" re-dispatch case without needing any extra
PVE permission, and shaves ~2 min off repeated smoke runs.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-18 13:59:42 +02:00
2cfe54e03a Merge pull request 'fix(ci): apk-install smoke prerequisites before running smoke-vm.sh' (#4) from fix-smoke-deps into main
All checks were successful
Build ISO / build-iso (push) Successful in 22m40s
CI / lint (push) Successful in 1m4s
CI / test (push) Successful in 1m24s
CI / validate-json (push) Successful in 55s
CI / markdown-links (push) Successful in 26s
Reviewed-on: #4
2026-04-18 13:20:52 +02:00
1d75a165c4 fix(ci): apk-install smoke prerequisites before running smoke-vm.sh
All checks were successful
CI / lint (pull_request) Successful in 2m2s
CI / test (pull_request) Successful in 1m23s
CI / validate-json (pull_request) Successful in 58s
CI / markdown-links (pull_request) Successful in 26s
The Forgejo runner container is Alpine with a near-empty base — no
curl, python3, arp-scan, or sudo out of the box. scripts/smoke-vm.sh
needs all four:
  - curl: every PVE API call
  - python3: JSON parsing of PVE responses
  - arp-scan: MAC→IP discovery on the LAN (live ISO has no guest agent)
  - sudo: so the same script also works from a dev laptop as non-root

Without this step the smoke job fails immediately on "curl: not found",
regardless of whether the PVE secrets are correctly set.

Added to both build-iso.yml (inline smoke after ISO build) and
smoke-latest.yml (workflow_dispatch retest path).
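
The added step is a one-liner in each workflow. A sketch, with the step
name being ours and the package list taken from the message:

```yaml
- name: Install smoke prerequisites
  run: apk add --no-cache curl python3 arp-scan sudo
```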

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-18 13:17:51 +02:00
2cc3fab027 Merge pull request 'feat(ci): workflow_dispatch smoke-latest + cache ISO for fast retests' (#3) from feat-smoke-latest into main
Some checks failed
Build ISO / build-iso (push) Has been cancelled
CI / test (push) Has been cancelled
CI / validate-json (push) Has been cancelled
CI / markdown-links (push) Has been cancelled
CI / lint (push) Has been cancelled
Reviewed-on: #3
2026-04-18 13:11:41 +02:00
41d0e7a398 feat(ci): workflow_dispatch smoke-latest + cache ISO for fast retests
Some checks failed
CI / lint (pull_request) Successful in 2m6s
CI / test (pull_request) Successful in 3m23s
CI / validate-json (pull_request) Has been cancelled
CI / markdown-links (pull_request) Has been cancelled
When smoke-vm.sh / PVE setup / secrets change, we want to verify the
fix without waiting for a full 25-min build-iso rebuild (most of which
is the upload-artifact step for a 1.5 GB file).

Adds two things:

1. build-iso.yml grows a "Cache ISO for smoke-latest" step that copies
   the freshly built ISO to /data/smoke-cache/latest.iso. /data is
   already bind-mounted into the runner container at a matching host
   path, so no compose.yml change or runner restart needed.

2. smoke-latest.yml is a workflow_dispatch-only workflow that reads
   /data/smoke-cache/latest.iso and runs scripts/smoke-vm.sh against
   it. ~2 min end-to-end. Errors cleanly if the cache is empty (build-
   iso.yml hasn't populated it yet).

First build-iso run after this merges will populate the cache; from
then on smoke-latest is available for on-demand re-tests.
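
smoke-latest.yml reduces to a dispatch trigger plus a cache guard. A
sketch under the description above; the job layout, runner label, and the
argument form for scripts/smoke-vm.sh are assumptions:

```yaml
on: workflow_dispatch

jobs:
  smoke:
    runs-on: self-hosted
    steps:
      - uses: actions/checkout@v4
      - name: Smoke-test cached ISO
        run: |
          iso=/data/smoke-cache/latest.iso
          if [ ! -s "$iso" ]; then
            echo "smoke cache empty: run build-iso once to populate it" >&2
            exit 1
          fi
          scripts/smoke-vm.sh "$iso"
```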

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-18 13:04:22 +02:00
a511f5418d Merge pull request 'feat(website): legal pages (Impressum/Datenschutz) + auto-deploy on push-to-main' (#1) from website-legal into main
Some checks are pending
CI / test (push) Waiting to run
Build ISO / build-iso (push) Successful in 24m57s
CI / lint (push) Successful in 2m17s
CI / validate-json (push) Successful in 2m0s
CI / markdown-links (push) Successful in 1m52s
Deploy site / deploy (push) Successful in 15s
Reviewed-on: #1
2026-04-18 12:32:15 +02:00
cf85217c0d Merge pull request 'fix(ci): inline smoke-vm as a step instead of a downstream job' (#2) from fix-smoke-inline into main
Some checks are pending
Build ISO / build-iso (push) Waiting to run
CI / lint (push) Waiting to run
CI / test (push) Waiting to run
CI / validate-json (push) Waiting to run
CI / markdown-links (push) Waiting to run
Reviewed-on: #2
2026-04-18 12:31:50 +02:00
7b894f096f fix(ci): inline smoke-vm as a step instead of a downstream job
All checks were successful
CI / lint (pull_request) Successful in 1m6s
CI / test (pull_request) Successful in 1m23s
CI / validate-json (pull_request) Successful in 56s
CI / markdown-links (pull_request) Successful in 24s
The separate smoke-vm job with `needs: build-iso` required round-tripping
the 1.5 GB ISO through actions/upload-artifact + download-artifact. v3
on Forgejo has a known issue where large artifacts stall at 0.0% in the
download step — the smoke run hung today with endless "Total file count:
1 ---- Processed file #0 (0.0%)" output.

Since both jobs run on the same self-hosted runner (host mode, same
workspace available), there was never a real need for the artifact
indirection. Inlining as a step after the artifact upload reuses the
ISO already in iso/out/ and skips the download entirely.

step-level continue-on-error preserves the original guarantee that a
VM-side flake doesn't mark the ISO build red.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-18 12:20:58 +02:00
b77ef80b56 feat(website): legal pages (Impressum/Datenschutz) + auto-deploy on push-to-main
All checks were successful
CI / lint (pull_request) Successful in 1m2s
CI / test (pull_request) Successful in 1m19s
CI / validate-json (pull_request) Successful in 55s
CI / markdown-links (pull_request) Successful in 27s
Two coupled changes that make sense to land together:

1. Legal pages required under German law
   - /imprint/ + /de/impressum/ — §5 DDG disclosure (contact is email
     plus Forgejo-Issues as the second quick-contact channel, per ECJ
     C-298/07 no phone number required)
   - /privacy/ + /de/datenschutz/ — Art. 13 GDPR minimum: server-log
     processing (IP, UA, URL, retention ≤30 days), no cookies, no
     tracking, no third-party embeds. RLP Landesbeauftragter as the
     competent supervisory authority.
   - Footer partial linked from every page, localized per language.
   - DE versions are legally binding; EN versions are courtesy
     translations noting that.

2. Auto-deploy wired up
   - New workflow .forgejo/workflows/deploy-site.yml fires on
     push-to-main with paths under website/**. Runs on the self-hosted
     runner, which *is* forge-runner-01 — so "deploy" is just a local
     rsync into /srv/furtka-site and a hugo build into
     /var/www/furtka.org. No SSH, no secrets.
   - website/deploy-ci.sh is the SSH-free counterpart of deploy.sh,
     invoked by the workflow.
   - compose.yml bind-mounts /srv/furtka-site and /var/www/furtka.org
     into the runner container at matching paths so the workflow can
     reach them. Requires a one-time `docker compose up -d` on the
     runner host to pick the mounts up.
   - deploy.sh is kept for out-of-band manual deploys (testing from a
     local branch, CI outage) but gets a header comment pointing at
     the CI path as the normal flow.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-18 12:10:06 +02:00
d499907613 feat(ci): auto-boot every main-ISO in smoke VM on .165 Proxmox
Some checks failed
Build ISO / smoke-vm (push) Blocked by required conditions
Build ISO / build-iso (push) Successful in 24m28s
CI / test (push) Successful in 3m1s
CI / validate-json (push) Successful in 55s
CI / markdown-links (push) Successful in 37s
CI / lint (push) Failing after 13m19s
After build-iso, a new smoke-vm job uploads the freshly built ISO to
the test Proxmox at 192.168.178.165 via PVE API token, boots it in a
fresh VM (VMID range 9000-9099, MAC derived from commit SHA so the
runner can find the DHCP IP by scanning the LAN), and curls :5000 to
confirm the webinstaller answers HTTP 200. Last 5 smoke VMs + their
ISOs are kept for post-mortem; older ones are purged. continue-on-error
on the smoke job so a VM-side flake doesn't mark the ISO build red.
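
The SHA-to-MAC trick can be sketched as follows; the exact byte layout is
an assumption, but 52:54:00 is the conventional QEMU locally-administered
OUI prefix:

```shell
# Derive a stable MAC from the commit SHA so the runner can arp-scan the
# LAN for the VM's DHCP lease without a guest agent (bash substrings).
sha=0123456789abcdef
mac="52:54:00:${sha:0:2}:${sha:2:2}:${sha:4:2}"
echo "$mac"
```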

Shortens the feedback loop on ISO regressions from "next manual VM
test session" (days) to "next push" (minutes) — the 2026-04-15/16 VM
sessions each found real boot-time bugs that unit tests missed.

Docs at docs/smoke-vm.md. Requires Forgejo secrets PVE_TEST_HOST and
PVE_TEST_TOKEN (dedicated smoke@pve!ci PVE token, privilege-separated).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-18 11:41:44 +02:00
3f7b97c8c7 style: ruff format two files the pre-commit hook caught
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-18 11:41:28 +02:00
663bd74572 feat(https): local HTTPS via Caddy tls internal + opt-in redirect toggle
Some checks failed
Build ISO / build-iso (push) Successful in 20m57s
CI / lint (push) Failing after 31s
CI / test (push) Successful in 36s
CI / validate-json (push) Successful in 23s
CI / markdown-links (push) Successful in 14s
Caddy now serves both :80 (plain HTTP, unchanged default) and :443 with
tls internal — it generates its own per-box root CA on first start,
stored under /var/lib/caddy/.local/share/caddy/pki/authorities/local/.
Users can download rootCA.crt at /rootCA.crt (served on both listeners)
and install it per-OS via the new /https-install/ guide.

Settings page grows a Local HTTPS card with CA fingerprint, download
button, reachability probe, and an opt-in "force HTTPS" toggle. The
toggle only unhides itself once the current browser already trusts the
cert, so enabling it can't lock the user out of the settings page.

Backend: GET /api/furtka/https/status and POST /api/furtka/https/force
in furtka.https. The force toggle drops a Caddy import snippet into
/etc/caddy/furtka.d/redirect.caddyfile and reloads Caddy; reload
failure rolls the snippet state back so a bad config can't wedge the
next service start.
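
The rollback-on-reload-failure shape can be sketched with the reload
command as a parameter, so the sketch stays runnable off the box; the
snippet path is from the message, the snippet contents and function name
are ours:

```shell
enable_force_https() {
  snippet="${1:-/etc/caddy/furtka.d/redirect.caddyfile}"
  reload="${2:-caddy reload --config /etc/caddy/Caddyfile}"
  # Drop the redirect snippet that the main Caddyfile glob-imports.
  printf 'redir https://{host}{uri} permanent\n' > "$snippet"
  if ! $reload; then
    # Reload rejected the config: remove the snippet again so the next
    # caddy service start doesn't wedge on a bad import.
    rm -f "$snippet"
    return 1
  fi
}
```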

updater._refresh_caddyfile() ensures /etc/caddy/furtka.d exists before
every reload so 26.3-alpha → 26.4-alpha self-updates don't trip on the
new glob import directive.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-17 12:19:06 +02:00
a5de3d7622 fix(settings): close the two self-update UX gaps from 2026-04-16 VM test
Drive upd-current from the /api/furtka/update/check response so a
post-update Check reflects the new installed version without Ctrl+F5,
and arm a 45s fallback location.reload on apply-click so the page still
comes up on the new version when the mid-apply API restart drops the
/update-state.json poll before stage=done is observed.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-17 09:22:34 +02:00
bf86ffaf4c docs(website): ship the two update bullets — validated on VM today
All checks were successful
CI / lint (push) Successful in 26s
CI / test (push) Successful in 33s
CI / validate-json (push) Successful in 23s
CI / markdown-links (push) Successful in 12s
End-to-end validation of the per-app container update and the Furtka
self-update ran green on VM 192.168.178.128 this afternoon (26.0-alpha
→ 26.3-alpha → rollback → reboot). Both flows are real — promote the
drafted HTML-comment bullets from _index.md and _index.de.md into the
visible "What works today" list.

The "plain-English Wi-Fi story" is now the only bullet still missing a
truthful on-box outcome. The update bullets were in the same spot a few
days ago, but today's validation moved them past it.

Matches the commitment in feedback_no_invented_content.md — we only
publish after confirmation.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-16 17:44:51 +02:00
8de8f3fd87 docs(readme): roadmap through 2026-04-16 — resource mgr, UI, self-update
All checks were successful
CI / lint (push) Successful in 36s
CI / test (push) Successful in 34s
CI / validate-json (push) Successful in 24s
CI / markdown-links (push) Successful in 13s
Roadmap section drifted far enough that "re-tag 26.0-alpha" was still
listed as open while 26.1-alpha and 26.3-alpha are live releases.

Updated:
- Replaced the stale "re-tag 26.0-alpha" line with the actual state:
  tag-driven release pipeline is wired, two pre-releases published,
  all assets downloadable anonymously.
- Added five new checked items for the work that landed this month:
  resource manager + fileshare (validated), on-box UI uplevel (shared
  CSS / settings page / icons), versioned layout + per-app container
  updates, Phase 2 Furtka self-update (tag → release.yml → /settings
  Update now → atomic swap + auto-rollback), plus the broader Forgejo
  release pipeline that underpins the update story.
- Kept open items (wizard S3-S7, managed gateway, Authentik, local CA,
  Nextcloud first service, UI mockups) as the remaining TODO surface.

No code or test changes; pytest + ruff still green from the last push.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-16 17:40:25 +02:00
25bef628c2 docs(changelog): note two /settings update-flow UX gaps for next release
All checks were successful
CI / lint (push) Successful in 26s
CI / test (push) Successful in 34s
CI / validate-json (push) Successful in 23s
CI / markdown-links (push) Successful in 12s
End-to-end validated the Phase-2 self-update today on a fresh install
(192.168.178.128 → 26.0-alpha → 26.3-alpha): the symlink flip, the
tarball verify, the stage-by-stage progress, and the rollback slots
all work. But two browser-side UX bits are rough:

1. The "Installed" version displayed on /settings doesn't refresh
   right after the update; a hard reload shows the new value.
2. The auto-reload that should fire 5s after stage=done missed on
   the test — the polling connection likely dropped during the
   mid-update API restart.

Neither affects the integrity of the update itself. Landed the notes
in [Unreleased] so the next release cycle picks them up.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-16 17:31:41 +02:00
b4c65f46bf fix(release): drop jq dependency, use python3 for JSON assembly
All checks were successful
Build ISO / build-iso (push) Successful in 17m30s
CI / lint (push) Successful in 25s
CI / test (push) Successful in 33s
CI / validate-json (push) Successful in 24s
CI / markdown-links (push) Successful in 12s
Release / release (push) Successful in 6s
The 26.2-alpha release workflow hung for 15+ minutes on
"apt-get install -y jq" — the runner's apt mirror was unreachable
(or very slow), and the whole publish stalled.

jq was only used for two tiny things: building the release-create
POST body and reading the release id from the response. Both are
one-liners in Python, which is guaranteed-present on the Forgejo
Actions ubuntu-latest runner image. Replaced both uses; removed
the apt-get step from release.yml entirely. Slow mirrors no
longer block tagged releases.
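
The two jq uses collapse to python3 one-liners of roughly this shape; the
JSON field names are a hedged guess at the Forgejo release API payload,
and the pattern rather than the exact body is the point:

```shell
TAG="26.2-alpha"
# Build the release-create POST body (was: jq -n '{tag_name: ...}').
body=$(python3 -c 'import json,sys; print(json.dumps({"tag_name": sys.argv[1], "prerelease": True}))' "$TAG")
echo "$body"

# Read the release id back out of the API response (was: jq .id).
release_id=$(printf '%s' '{"id": 42, "tag_name": "26.2-alpha"}' |
  python3 -c 'import json,sys; print(json.load(sys.stdin)["id"])')
echo "$release_id"
```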

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-16 17:05:21 +02:00