Compare commits


No commits in common. "main" and "release-26.4-alpha" have entirely different histories.

65 changed files with 421 additions and 5685 deletions


@@ -1,58 +1,27 @@
name: Release
# Tag-triggered: when `git push origin <version>` lands, this builds the
# release tarball + the live-installer ISO, and publishes them both to
# the Forgejo releases page. Boxes POST /api/furtka/update to pull the
# tarball; fresh-install users download the ISO from the release page.
# release tarball and publishes it + the sha256 + release.json to the
# Forgejo releases page for that tag. Boxes then POST /api/furtka/update
# to pull from here.
#
# Runs on the self-hosted runner because iso/build.sh needs privileged
# docker access (mkarchiso wants root + loop mounts), and because the
# ubuntu-latest Forgejo hosted runner doesn't carry the docker socket
# bind-mount the build needs. Self-hosted adds ~5-7 min to the release
# (ISO build) but keeps the release page self-contained.
#
# Version tags only (CalVer like 26.0-alpha, 26.1, 27.0-beta). Random
# tags are ignored by the [0-9]* prefix.
# Version tags only (pattern matches CalVer like 26.0-alpha, 26.1, 27.0-beta).
# Documentation / random tags are ignored by the [0-9]* prefix.
on:
push:
tags: ['[0-9]*']
jobs:
release:
runs-on: self-hosted
timeout-minutes: 45
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
with:
fetch-depth: 0 # changelog section extraction needs history
- name: Install prerequisites
# Alpine runner is near-empty: we need curl + python3 for the
# publish script, bash for the build scripts.
run: apk add --no-cache curl python3 bash
- name: Build release tarball
run: ./scripts/build-release-tarball.sh "${GITHUB_REF_NAME}"
- name: Build live-installer ISO
# Same script build-iso.yml uses on every main push. Re-running
# here is intentional: guarantees the ISO matches the exact
# tagged commit without coordinating across workflows. Step-level
# continue-on-error so an ISO build flake doesn't block the
# core tarball (which is what boxes need for self-update) from
# publishing.
continue-on-error: true
id: build_iso
run: ./iso/build.sh
- name: Move ISO into dist/
# publish-release.sh attaches dist/furtka-<ver>.iso if present.
# Skipped gracefully when the build step above failed.
if: steps.build_iso.outcome == 'success'
run: |
iso=$(ls iso/out/*.iso | head -1)
cp "$iso" "dist/furtka-${GITHUB_REF_NAME}.iso"
- name: Publish to Forgejo releases
env:
FORGEJO_TOKEN: ${{ secrets.FORGEJO_RELEASE_TOKEN }}

.gitignore vendored

@@ -13,4 +13,3 @@ iso/out/
website/public/
website/resources/
website/.hugo_build.lock
website/hugo_stats.json


@@ -7,274 +7,6 @@ This project uses calendar versioning: `YY.N-stage` (e.g. `26.0-alpha` = 2026, r
## [Unreleased]
## [26.15-alpha] - 2026-04-21
### Fixed
- **HTTPS is now opt-in; fresh installs no longer hit unbypassable
SEC_ERROR_BAD_SIGNATURE.** Every version since 26.5 shipped a
Caddyfile with a `__FURTKA_HOSTNAME__.local { tls internal }` site
block, so Caddy auto-generated a self-signed root CA + intermediate
+ leaf on first boot. That worked for first-time-ever users, but
every reinstall (or second Furtka box on the same LAN) produced a
new CA with the **same intermediate CN** (`Caddy Local Authority -
ECC Intermediate` — Caddy hardcodes it). Any browser that had ever
trusted an earlier Furtka CA got a cached intermediate with
mismatched keys, then Firefox's cert lookup substituted the cached
intermediate when validating the new box's leaf → the signature
check failed → `SEC_ERROR_BAD_SIGNATURE`, which Firefox has no
"Advanced → Accept Risk" bypass for.
- Removed the hostname site block from the default Caddyfile.
Fresh installs serve `:80` only; visiting `https://furtka.local`
now yields a clean connection-refused instead of the crypto
fault.
- Added top-level `import /etc/caddy/furtka-https.d/*.caddyfile`.
The `/settings` HTTPS toggle (via `furtka.https.set_force_https`)
now writes TWO snippets atomically — the top-level hostname +
`tls internal` block (enables `:443`) and the `:80`-scoped
redirect (forces HTTP → HTTPS) — and removes both on disable.
Caddy reloads after the pair-swap; failure rolls both back.
- Webinstaller creates `/etc/caddy/furtka-https.d/` during
post-install alongside the existing `furtka.d/`.
- `updater._refresh_caddyfile` runs a 26.14 → 26.15 migration: if
the box already had the redirect snippet on disk (user had
explicitly enabled "Force HTTPS" under the old regime), the
migration also writes the new listener snippet so HTTPS keeps
working across the upgrade.
- **`status.force_https` now reads the listener snippet, not the
redirect snippet.** A lone redirect without a `:443` listener
wouldn't actually serve HTTPS, so the listener file is the
authoritative "HTTPS is on" signal. The UI on `/settings` sees the
correct state as a result.
Known remaining UX wart: a browser that trusted a previous Furtka box
still sees `BAD_SIGNATURE` when visiting this box's `https://` after
enabling HTTPS here — the fixed intermediate CN is a Caddy-side
limitation we can't fix from Furtka. Fresh installs on a browser that
never visited another Furtka box work correctly. Workaround:
`about:networking#sts` → Forget → clear `cert9.db`.
## [26.14-alpha] - 2026-04-21
### Fixed
- **Landing page and `/settings/` were silently bypassing the auth
guard.** Since 26.11 shipped login, the Caddyfile only
reverse-proxied `/api/*`, `/apps*`, `/login*`, and `/logout*` to
Python. Everything else — including `/` and `/settings/` — fell
through to Caddy's catch-all `file_server` and was served straight
from `assets/www/` without ever hitting the session check. The
effect: a LAN visitor saw the box's hostname, IP, Furtka version,
and the buttons for Update-now / Reboot / HTTPS-toggle. The API
calls those buttons fired were all 401-auth-gated so actions didn't
land, but the information leak and the "looks open" UX was a real
bug. Caught in the 26.13 SSH test session when the user noticed
Logout only showed up on `/apps`. Now Caddy routes `/` and
`/settings*` through Python; a new `_serve_static_www` handler
checks the session cookie, redirects to `/login` if unauthed, and
reads the HTML from `assets/www/` otherwise. Catch-all still
serves `/style.css`, `/rootCA.crt`, and the runtime JSON files
publicly — those don't need auth.
- **Logout link now shows on every authed page, not just `/apps`.**
The static HTML for `/` and `/settings/` maintained their own nav
separate from `_HTML` in `api.py`, so they never got the Logout
entry when it was added in 26.11. Both nav bars now include it
plus an inline `doLogout()` that POSTs `/logout` and bounces to
`/login`, matching the pattern in `_HTML`.
## [26.13-alpha] - 2026-04-21
### Fixed
- **Upgrade path from pre-auth releases actually works.** 26.11-alpha
introduced `from werkzeug.security import ...` in `furtka/auth.py`,
but werkzeug isn't installed on the target system — core runs as
system Python with stdlib only, and `flask>=3.0` in `pyproject.toml`
is never pip-installed on the box. Fresh boxes from the 26.11/26.12
ISO without a manually-installed werkzeug crashed on import; boxes
upgrading from pre-26.11 got double-broken by that plus the health
check below. Replaced the werkzeug dependency with a stdlib-only
`furtka/passwd.py` that uses `hashlib.pbkdf2_hmac` for new hashes
and parses werkzeug's `scrypt:N:r:p$salt$hex` format for backward
compatibility — existing `users.json` files created on the rare
boxes that did have werkzeug keep working after this upgrade, no
re-setup needed. `from werkzeug.security import ...` is gone from
the import chain entirely; `pyproject.toml`'s flask dep stays only
for the live-ISO webinstaller.
- **Self-update no longer auto-rolls-back when crossing the auth
boundary.** `updater._health_check` pinged `/api/apps` and demanded
a 200, which meant every 26.10 → 26.11+ upgrade hit the post-restart
check, got a 401 (auth guard), and treated that as "server dead"
→ rollback. Now any 2xx–4xx response counts as "server alive"; only
connection-level failures or 5xx fail the check. A 5xx still triggers
rollback because it means the new process is up but broken.
- **Install lock closes its race window.** `POST /api/apps/install`
used to release the fcntl lock immediately after the sync
pre-validation so the systemd-run child could re-acquire it —
leaving a tiny gap where a second POST could slip in, pass the lock
check, and return 202. Both child processes would start, one would
win the in-child lock, the other would die silently. Now the API
also reads `install-state.json` and refuses with 409 if the stage
is non-terminal (`pulling_image`, `creating_volumes`,
`starting_container`). The fcntl lock stays as belt-and-suspenders.
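A stdlib-only hasher in the spirit of the `furtka/passwd.py` fix might look like the following (the format string and iteration count are illustrative assumptions, not the shipped code):

```python
import hashlib
import hmac
import os

def hash_password(password: str, *, iterations: int = 600_000) -> str:
    # PBKDF2-HMAC-SHA256 via the stdlib only -- no werkzeug import anywhere.
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return f"pbkdf2:sha256:{iterations}${salt.hex()}${digest.hex()}"

def verify_password(password: str, stored: str) -> bool:
    method, salt_hex, digest_hex = stored.split("$", 2)
    _, algo, iters = method.split(":")
    digest = hashlib.pbkdf2_hmac(algo, password.encode(),
                                 bytes.fromhex(salt_hex), int(iters))
    # Constant-time comparison to avoid leaking prefix matches.
    return hmac.compare_digest(digest.hex(), digest_hex)
```

Backward compatibility with werkzeug's legacy scrypt format would be an extra branch on `method` dispatching to `hashlib.scrypt`.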
## [26.12-alpha] - 2026-04-21
### Changed
- **App install goes async with live progress.** `POST /api/apps/install`
now returns `202 Accepted` after the synchronous pre-validation
(resolving the source, copying files, writing `.env`, placeholder and
path checks). The handler dispatches the actual Docker part
(`compose pull` → create volumes → `compose up`) as a `systemd-run
--unit=furtka-install-<app>` background job that writes its phase to
`/var/lib/furtka/install-state.json`. New
`GET /api/apps/install/status` endpoint for UI polling. The install
modal now shows a live "Image wird heruntergeladen…" →
"Speicherbereiche werden erstellt…" → "Container wird gestartet…"
instead of ~30 seconds of dead "Installing…". The pattern mirrors
`/api/catalog/sync/apply` and `/api/furtka/update/apply` 1:1. New CLI
subcommand `furtka app install-bg <name>` (internal, called by the
API); `furtka app install` stays synchronous for terminal users. The
Reinstall button in the app list also polls the install status and
mirrors the phase in the button text.
## [26.11-alpha] - 2026-04-21
### Added
- **Login-auth for the Furtka web UI.** Every `/apps`, `/api/*`, `/`,
and `/settings/` route now requires a signed-in session. New
`/login` page serves a username/password form; `POST /login`
validates against `/var/lib/furtka/users.json` (werkzeug PBKDF2-
hashed), sets a `furtka_session` cookie (`HttpOnly`, `SameSite=
Strict`, 7-day TTL), and redirects to `/apps`. `POST /logout`
revokes the server-side session and clears the cookie.
Unauthenticated HTML requests get a 302 to `/login`; unauthenticated
API requests get 401 JSON. The old "No authentication on this UI
yet" banner is gone; the `/apps` header picks up a `Logout` link
instead.
- **First-run setup fallback for upgrade-path boxes.** Boxes
upgrading from 26.10-alpha have no `users.json` yet — on the first
visit `/login` renders a setup form (username + password +
password-confirm) that creates the admin record on submit. Fresh
installs skip this: the webinstaller writes `users.json` during
the chroot post-install step using the step-1 password, so the
first browser visit after boot goes straight to the login form.
- **Caddy proxy routes `/login` and `/logout`.** `assets/Caddyfile`
gets two new `handle` blocks in the shared `(furtka_routes)`
snippet so both the `:80` block and the `hostname.local, hostname`
HTTPS block forward the auth endpoints to the stdlib server on
`127.0.0.1:7000`. Without this Caddy would serve a 404 from the
static file server.
### Fixed
- `tests/test_installer.py` ruff-format nit — the 26.10-alpha
release commit had a misformatted list literal that failed
`ruff format --check`. Caught when the Release page on Forgejo
showed a red CI badge for the tag.
- `pyproject.toml` version string bumped from the stale 26.8-alpha
to 26.11-alpha. Release pipeline uses `GITHUB_REF_NAME` as source
of truth for the artefact name, but having the two agree matters
for local dev runs that read `pyproject.toml`.
## [26.10-alpha] - 2026-04-21
### Added
- **Remove-USB-stick hint on the installer's post-install screen.**
`webinstaller/templates/install/rebooting.html` now shows a bold
"Remove the USB stick now" line before the reboot, plus a muted
fallback explaining the BIOS boot-menu keys (F11/F12/Esc) if the
machine boots back into the installer anyway. Caught on the first
bare-metal test (Medion i5-4gen, 2026-04-21) where the box didn't
boot the installed system without manual BIOS-order changes.
- **New `path` setting type for app manifests.** Apps can now declare a
setting with `"type": "path"` whose value is an absolute filesystem
path on the host; docker-compose bind-mounts it via the usual `.env`
substitution (`${MEDIA_PATH}:/media`). Unlocks media/data-heavy apps
(Jellyfin, later Paperless/Nextcloud/Immich) where the user points at
an existing folder instead of copying everything into a Docker
volume. The install form renders path settings as a plain text input
with a `/mnt/…` placeholder hint.
- **Server-side path validation.** Both `install_from()` and
`update_env()` refuse values that aren't absolute, don't exist,
aren't directories, or resolve (after `Path.resolve()`) into a
system-path deny-list (`/`, `/etc`, `/root`, `/boot`, `/proc`,
`/sys`, `/dev`, `/bin`, `/sbin`, `/usr/bin`, `/usr/sbin`,
`/var/lib/furtka`). Catches `/mnt/../etc`-style traversal too. Error
messages surface in the existing install/edit modal error line.
## [26.9-alpha] - 2026-04-21
### Fixed
- Landing-page app tiles with an `open_url` now open in a new tab
(`target="_blank" rel="noopener"`), matching the Open button
behaviour on `/apps`. Without this, clicking "Uptime Kuma" on the
home screen replaced Furtka itself with the Kuma admin page.
Internal links (the `Manage →` fallback for apps without an
`open_url`) still open in the same tab.
- `scripts/publish-release.sh` no longer fails the whole release when
the ISO upload hits a Forgejo proxy 504. The core tarball + sha256 +
release.json (which running boxes need for self-update) are uploaded
first and the ISO is attempted last as a best-effort; a 504 now logs
a warning and exits 0 so the release page still publishes. Surfaced
by the 26.8-alpha cut: the tarball landed but the ~1 GB ISO upload
timed out at the Forgejo reverse proxy.
### Changed
- `furtka app list --json` now mirrors `/api/apps` field-for-field —
previously the CLI emitted a slim projection missing
`description_long`, `open_url`, and `settings`. Anyone piping the
CLI output into jq for automation was seeing an incomplete view.
## [26.8-alpha] - 2026-04-20
### Added
- **Live-installer ISO attached to the Forgejo release page.** `.forgejo/workflows/release.yml` moves to the self-hosted runner, builds both the self-update tarball and the ISO, and `scripts/publish-release.sh` uploads the ISO as a fourth release asset (`furtka-<version>.iso`) alongside the existing tarball + sha256 + release.json. Fresh-install users can now grab the ISO from the release page instead of hunting through `build-iso.yml` artifact retention windows. ISO build step is `continue-on-error` so an ISO flake doesn't hold back the core tarball that running boxes need for self-update.
- **Reboot + Shut down buttons on `/settings`.** Replaces the two "Coming next" placeholders with real actions backed by `POST /api/furtka/power` (`{"action": "reboot" | "poweroff"}`). Handler kicks a delayed `systemd-run --on-active=3s systemctl {reboot|poweroff}` so the HTTP response reaches the browser before the kernel loses network. Each button opens a native confirm dialog first (reboot: "back in ~30 s", shut down: "need to press the physical power button"), then the UI swaps to a status line and — after a reboot — polls `/furtka.json` until the box is back, reloading the page automatically. No auth (same posture as install/remove).
- **Manifest `open_url` field + Open button in `/apps` and on the landing page.** Apps declare a URL template (e.g. `smb://{host}/files` for fileshare, `http://{host}:3001/` for Uptime Kuma); the UI substitutes `{host}` with the current browser's hostname at render time so the link follows however the user reached Furtka (furtka.local, raw IP, a future reverse-proxy hostname). The landing page's hardcoded `if app.name === 'fileshare'` special-case is gone — any app with an `open_url` in its manifest now gets a proper "Open" link. The core seed `apps/fileshare/manifest.json` bumps to v0.1.2 to carry it.
### Changed
- `.btn` CSS class introduced so an `<a>` rendered-as-button lines up with its `<button>` siblings in `.buttons`. Needed because "Open" is a real link (middle-click, copy URL, screen readers) and HTML doesn't let `<button>` carry `href`.
### Notes
- `26.7-alpha` was tagged but never published — the tag push didn't trigger `release.yml` (Forgejo race with the concurrent main push). `26.8-alpha` supersedes it and carries the same content plus power actions.
## [26.6-alpha] - 2026-04-20
### Added
- **Apps catalog synced independently of core.** A new `daniel/furtka-apps` Forgejo repo carries the bundled app catalog; running boxes pull the latest release via `furtka-catalog-sync.timer` (10 min post-boot + daily, ±6 h jitter) and extract atomically into `/var/lib/furtka/catalog/`. The resolver now prefers catalog apps over the seed `/opt/furtka/current/apps/` tree that ships inside the core release tarball, so apps can update without cutting a Furtka core release. Manual trigger: "Sync apps catalog" button on `/apps`, or `sudo furtka catalog sync` at the console. Fresh boxes with no network fall back to the seed, so offline first-boot still shows installable apps. Installed apps are never auto-swapped — users click Reinstall in `/apps` to move an existing install onto a newer catalog version (settings merge-preserved via the existing `installer.install_from` path).
- **Catalog CLI**: `furtka catalog sync [--check] [--json]` + `furtka catalog status [--json]`. Same shape as the core `furtka update` commands.
- **Catalog API endpoints**: `POST /api/catalog/sync/check`, `POST /api/catalog/sync/apply` (detached via `systemd-run` for symmetry with `/api/furtka/update/apply`), `GET /api/catalog/status`. The existing `/api/bundled` endpoint keeps working as a backwards-compat alias for `/api/apps/available`, which now returns the union of catalog + seed apps with a new `"source"` field on each entry (`"catalog"` | `"bundled"`).
### Changed
- **`furtka._release_common`** extracted from `furtka.updater`. Both `updater` and the new `catalog` module now share one implementation of the Forgejo-releases-API call, SHA256 verification, path-traversal-guarded tarball extraction, and CalVer comparison. Public updater surface unchanged.
- **`_link_new_units` now auto-enables newly-linked `.timer` units.** On self-update, a fresh timer file (e.g. `furtka-catalog-sync.timer` added in this release) needs `systemctl enable` to actually start firing — linking alone isn't enough. Fresh installs get their enable via the webinstaller's `_FURTKA_UNITS` list as before.
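The shared CalVer comparison could plausibly reduce to a sort key like the following (stage ordering is an assumption; the real `_release_common` may differ in edge cases):

```python
import re

# alpha < beta < rc < bare release, matching tags like 26.0-alpha, 26.1, 27.0-beta
_STAGES = {"alpha": 0, "beta": 1, "rc": 2, "": 3}

def calver_key(version: str) -> tuple:
    m = re.fullmatch(r"(\d+)\.(\d+)(?:-(alpha|beta|rc))?", version)
    if not m:
        raise ValueError(f"not a CalVer tag: {version!r}")
    return (int(m.group(1)), int(m.group(2)), _STAGES[m.group(3) or ""])
```

Parsing the numbers keeps `26.10-alpha` newer than `26.9-alpha`, which a plain string compare gets wrong.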
### Fixed
- **SHA-256 CA fingerprint no longer overflows the `/settings` Local HTTPS card** on narrow viewports. `.kv dd` grid items now set `min-width: 0` + `overflow-wrap: anywhere` so the colon-separated hex string breaks within the card's right edge instead of pushing past it.
## [26.5-alpha] - 2026-04-20
### Fixed
- **HTTPS handshake regression on the installed box (#10).** Phase 1 shipped two linked bugs: the `:443 { tls internal }` site block had no hostname, so Caddy never issued a leaf cert and every SNI handshake died with `SSL_ERROR_INTERNAL_ERROR_ALERT`; and both `furtka.https` and the Caddyfile's `/rootCA.crt` handler referenced `/var/lib/caddy/.local/share/caddy/pki/…`, a path that doesn't exist because our systemd unit sets `XDG_DATA_HOME=/var/lib`. Force-HTTPS toggle made the brokenness user-visible by redirecting working HTTP to dead HTTPS. Fixed: the Caddyfile now ships a `__FURTKA_HOSTNAME__.local, __FURTKA_HOSTNAME__ { tls internal }` block with the placeholder substituted at install time (`webinstaller/app.py`) and on every self-update (`furtka.updater._refresh_caddyfile` reads `/etc/hostname`). `auto_https disable_redirects` keeps Caddy's built-in redirect out of the way of the `/settings` toggle. PKI paths corrected in both `furtka/https.py` and `assets/Caddyfile`. Verified end-to-end on the 192.168.178.110 test VM: TLS 1.3 handshake completes, leaf cert issued, `/rootCA.crt` returns 200.
### Changed
- **Wizard footer version is now dynamic.** `webinstaller/app.py` resolves the Furtka version at startup via a Flask context processor — reads `/opt/furtka/VERSION` on the live ISO (written by `iso/build.sh` from `pyproject.toml` at build time), falls back to `pyproject.toml` in dev runs, then to literal `"dev"`. The 26.4 footer was hand-pinned and drifted within hours of release; that follow-up item is now closed.
- **Docs realigned with 26.4-alpha reality.** `apps/README.md` added (manifest schema, volume namespacing, `.env.example` guardrails, SVG sanitiser limits, install/test flow). Root `README.md` roadmap updated with Phase 1 HTTPS + smoke-VM pipeline as shipped items and 26.4-alpha in the release list. `iso/README.md` corrected: mDNS is wired (not "later milestone"), post-install default URL is `http://furtka.local` (not `proksi.local`), HTTPS is available via `tls internal` since 26.4. `website/README.md` now documents the auto-deploy on push-to-main as the default path, manual `deploy.sh` as the SSH-hop fallback.
## [26.4-alpha] - 2026-04-18
### Added
@@ -354,17 +86,7 @@ First tagged snapshot. Pre-alpha — the installer does not yet boot, but the de
- **Containers:** Docker + Compose
- **License:** AGPL-3.0
[Unreleased]: https://forgejo.sourcegate.online/daniel/furtka/compare/26.15-alpha...HEAD
[26.15-alpha]: https://forgejo.sourcegate.online/daniel/furtka/releases/tag/26.15-alpha
[26.14-alpha]: https://forgejo.sourcegate.online/daniel/furtka/releases/tag/26.14-alpha
[26.13-alpha]: https://forgejo.sourcegate.online/daniel/furtka/releases/tag/26.13-alpha
[26.12-alpha]: https://forgejo.sourcegate.online/daniel/furtka/releases/tag/26.12-alpha
[26.11-alpha]: https://forgejo.sourcegate.online/daniel/furtka/releases/tag/26.11-alpha
[26.10-alpha]: https://forgejo.sourcegate.online/daniel/furtka/releases/tag/26.10-alpha
[26.9-alpha]: https://forgejo.sourcegate.online/daniel/furtka/releases/tag/26.9-alpha
[26.8-alpha]: https://forgejo.sourcegate.online/daniel/furtka/releases/tag/26.8-alpha
[26.6-alpha]: https://forgejo.sourcegate.online/daniel/furtka/releases/tag/26.6-alpha
[26.5-alpha]: https://forgejo.sourcegate.online/daniel/furtka/releases/tag/26.5-alpha
[Unreleased]: https://forgejo.sourcegate.online/daniel/furtka/compare/26.4-alpha...HEAD
[26.4-alpha]: https://forgejo.sourcegate.online/daniel/furtka/releases/tag/26.4-alpha
[26.3-alpha]: https://forgejo.sourcegate.online/daniel/furtka/releases/tag/26.3-alpha
[26.2-alpha]: https://forgejo.sourcegate.online/daniel/furtka/releases/tag/26.2-alpha


@@ -106,9 +106,9 @@ None of these nail the "your dad can set this up" experience. The installer wiza
- [x] Release process + CI — CalVer tags, conventional commits, Forgejo Actions (ruff, pytest, JSON, link checks), `26.0-alpha` tagged
- [x] Forgejo runner live on Proxmox VM (`forge-runner-01`, Ubuntu 24.04) — docker-outside-of-docker with host-mode jobs for ISO builds, setup captured in [docs/runner-setup.md](docs/runner-setup.md) + [ops/forgejo-runner/](ops/forgejo-runner/)
- [x] **ISO build in CI** — `.forgejo/workflows/build-iso.yml` runs `iso/build.sh` on every push to `main` and publishes the resulting `.iso` as the `furtka-iso` artifact (14 d retention). Push → green run → download → test.
- [x] **Forgejo Releases + tag-driven release pipeline** — `.forgejo/workflows/release.yml` fires on `[0-9]*` tags, `scripts/build-release-tarball.sh` packages `furtka/` + `apps/` + `assets/` + a root VERSION, `scripts/publish-release.sh` uploads tarball + sha256 + release.json to the Forgejo releases page. Releases `26.1-alpha`, `26.3-alpha`, and `26.4-alpha` live at [releases](https://forgejo.sourcegate.online/daniel/furtka/releases) (26.2 stalled on a `jq` apt hang, fixed in 26.3). Needs one repo secret (`FORGEJO_RELEASE_TOKEN`).
- [x] **Forgejo Releases + tag-driven release pipeline** — `.forgejo/workflows/release.yml` fires on `[0-9]*` tags, `scripts/build-release-tarball.sh` packages `furtka/` + `apps/` + `assets/` + a root VERSION, `scripts/publish-release.sh` uploads tarball + sha256 + release.json to the Forgejo releases page. `26.1-alpha` and `26.3-alpha` live at [releases](https://forgejo.sourcegate.online/daniel/furtka/releases). Needs one repo secret (`FORGEJO_RELEASE_TOKEN`).
- [x] **Walking-skeleton live ISO — end to end** — `iso/build.sh` produces a hybrid BIOS/UEFI Arch-based ISO. It boots in a Proxmox VM, DHCPs onto the LAN, shows a console welcome with `http://proksi.local:5000` (+ IP fallback), serves the Flask webinstaller, runs `archinstall --silent`, reboots the VM via a Reboot-now button, and the installed system logs in and runs `docker ps` without sudo. Build infra in [`iso/`](iso/).
- [x] **Drop loop/rom devices from drive list** — `webinstaller/drives.py` filters by `lsblk` `TYPE=disk`, so the live squashfs and CD-ROM no longer appear as install targets. The boot USB itself is also filtered: on the live ISO, `findmnt /run/archiso/bootmnt` resolves the boot partition and its parent disk is dropped from the picker.
- [x] **Drop loop/rom devices from drive list** — `webinstaller/drives.py` filters by `lsblk` `TYPE=disk`, so the live squashfs and CD-ROM no longer appear as install targets. Boot-USB filtering on bare metal is still TODO; see [iso/README.md](iso/README.md).
- [x] **Rebrand GRUB menu** — `iso/build.sh` rewrites "Arch Linux install medium" → "Furtka Live Installer" across GRUB, syslinux, and systemd-boot configs; default entry marked `(Recommended)`.
- [x] **Wizard: account form → drive picker → overview → archinstall** — S1 collects hostname/user/password/language with validation, S2 picks boot drive, overview confirms, `/install/run` writes `user_configuration.json` + `user_credentials.json` (0600) and execs `archinstall --silent` against its 4.x schema (`default_layout` disk_config + `!root-password` / `!password` sentinel keys + `custom_commands` for post-install group joins). Install log page polls a JSON endpoint and renders a phase-based progress bar with a collapsible raw log. `FURTKA_DRY_RUN=1` skips the real exec for testing.
- [x] **mDNS `proksi.local`** — hostname baked into the live ISO, avahi + nss-mdns in the package list, advertised as soon as network-online fires. The HTTPS + local-CA half of this milestone is still open below.
@@ -117,10 +117,8 @@ None of these nail the "your dad can set this up" experience. The installer wiza
- [x] **On-box web UI uplevel** — shared `/style.css` served by Caddy, persistent top nav, landing page with a "Your apps" tile grid + live status, `/apps` with real per-app icons (inlined SVG from each manifest), new `/settings` page (hostname, IP, version, kernel, RAM, Docker, uptime + Furtka-updates card). `prefers-color-scheme` light/dark.
- [x] **Versioned on-box layout + Phase 1 per-app updates** — `/opt/furtka/versions/<ver>/` + `current` symlink; `/var/lib/furtka/` for runtime state. `POST /api/apps/<name>/update` runs `docker compose pull` + compares digests + conditional `up -d`.
- [x] **Phase 2 Furtka self-update** — `/settings` → Check → Update now. Downloads signed tarball (SHA256), stages, atomic symlink flip, reloads Caddy, daemon-reload, restarts services, health-checks the new API with auto-rollback on failure. CLI: `furtka update [--check]` + `furtka rollback`. Validated end-to-end on VM 2026-04-16 (`26.0-alpha` → `26.3-alpha` → rollback → reboot).
- [x] **Local HTTPS Phase 1** — Caddy `tls internal` on `:443` is fully opt-in via the `/settings` toggle (26.15-alpha); fresh installs stay HTTP-only so a half-trusted cert chain can't lock the user out. Per-box root CA generated on first enable, `rootCA.crt` downloadable from `/settings`, per-OS install guide at `/https-install/`. The "force HTTPS" sub-toggle still only appears once the current browser already trusts the cert.
- [x] **Post-build smoke VM on Proxmox** — `.forgejo/workflows/build-iso.yml` hands the freshly built ISO to `scripts/smoke-vm.sh`, which boots it in a throwaway VM on `pollux` (192.168.178.165) and curls the webinstaller on `:5000`. VMID range 9000–9099, last 5 kept. Green end-to-end since 26.4-alpha.
- [ ] Installer wizard screens S3–S7 — per-device purpose, network, domain, SSL, diagnostic. S5/S6 blocked on managed-gateway DNS infra not yet built.
- [ ] Local HTTPS Phase 2 — dedicated local CA (not Caddy's `tls internal`), streamlined one-click install across Win/Mac/Linux/Android, and HTTPS on the live-installer wizard (`https://proksi.local:5000`).
- [ ] `https://proksi.local` with a local CA (today: plain HTTP at `http://proksi.local:5000`)
- [ ] Caddy + Authentik wired into first-boot bootstrap
- [ ] Managed gateway infrastructure — `ns1/ns2.furtka.org` + DNS-01 wildcard automation
- [ ] First containerized service (Nextcloud?) with auto-SSO + auto-subdomain


@@ -45,7 +45,7 @@ Tag per meaningful milestone, not on a calendar. A milestone is: ISO boots, a wi
git push origin 26.1-alpha
```
5. **The release workflow does the rest.** `.forgejo/workflows/release.yml` fires on the tag push and runs on the self-hosted runner: `scripts/build-release-tarball.sh` builds the self-update payload (tarball + sha256 + release.json under `dist/`), `iso/build.sh` builds the live-installer ISO, `scripts/publish-release.sh` uploads tarball + sha256 + release.json + ISO to the Forgejo release page. Pre-release is flagged automatically based on the suffix (`-alpha`/`-beta`/`-rc`). ISO build is `continue-on-error`: a flaky ISO step doesn't block the core tarball (the thing boxes need for self-update).
5. **The release workflow does the rest.** `.forgejo/workflows/release.yml` fires on the tag push: `scripts/build-release-tarball.sh` builds the self-update payload (tarball + sha256 + release.json under `dist/`), `scripts/publish-release.sh` uploads all three assets to the Forgejo release page. Pre-release is flagged automatically based on the suffix (`-alpha`/`-beta`/`-rc`).
The release workflow needs one secret set at repo **Settings → Secrets → Actions**:
- `FORGEJO_RELEASE_TOKEN` — a PAT with `write:repository` scope.


@@ -1,145 +0,0 @@
# Building a Furtka app from a Docker image
A Furtka app is a folder with four files. The reconciler walks `/var/lib/furtka/apps/*` at boot, validates each manifest, ensures the declared volumes exist, and runs `docker compose up -d` per app. Filesystem is the only source of truth — no database.
Use `apps/fileshare/` as the reference implementation.
## Folder layout
```
apps/<name>/
manifest.json # required — app metadata and user-facing settings
docker-compose.yaml # required — filename is .yaml, not .yml
.env.example # required — keys consumed by docker-compose, with safe defaults
icon.svg # required — referenced by manifest.icon
```
The folder name must equal `manifest.name`. The scanner rejects mismatches.
## `manifest.json`
All top-level fields except `description_long` and `settings` are required.
```json
{
"name": "myapp",
"display_name": "My App",
"version": "0.1.0",
"description": "One-line summary shown in the app list.",
"description_long": "Longer German prose shown on the app page. Optional.",
"volumes": ["data"],
"ports": [8080],
"icon": "icon.svg",
"settings": [
{
"name": "ADMIN_PASSWORD",
"label": "Passwort",
"description": "Wird beim ersten Start gesetzt.",
"type": "password",
"required": true
}
]
}
```
Rules enforced by `furtka/manifest.py`:
- `volumes` — short names, strings. Namespaced to `furtka_<app>_<short>` at runtime.
- `ports` — integers. Informational only; compose owns the actual port binding.
- `settings[].name` — must match `^[A-Z_][A-Z0-9_]*$`. This name becomes both the env-var key and the form-field ID.
- `settings[].type` — one of `text`, `password`, `number`, `path`.
- `settings[].required` — if true, the install refuses when the value is empty.
- `settings[].default` — optional string. Used to pre-fill the form and the bootstrapped `.env`.
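The settings rules above can be sketched as a small validator. Hedged: this is illustrative code, not the authoritative checks in `furtka/manifest.py`.

```python
import re

_NAME_RE = re.compile(r"^[A-Z_][A-Z0-9_]*$")
_TYPES = {"text", "password", "number", "path"}

def setting_errors(s: dict) -> list[str]:
    # Collect rule violations for one settings[] entry. Sketch only.
    errors = []
    if not _NAME_RE.match(s.get("name", "")):
        errors.append("name must match ^[A-Z_][A-Z0-9_]*$")
    if s.get("type") not in _TYPES:
        errors.append("type must be one of text, password, number, path")
    if "default" in s and not isinstance(s["default"], str):
        errors.append("default must be a string")
    return errors
```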
### Path-type settings (host bind mounts)
Use `"type": "path"` when the app should point at an existing folder on the host — media libraries, document archives, photo backups. The value is written to `.env` like any other setting, and compose consumes it via `${VAR}` substitution as a bind mount.
```json
{
"name": "MEDIA_PATH",
"label": "Medienordner",
"description": "Absoluter Pfad zu deinem Medien-Ordner, z.B. /mnt/media.",
"type": "path",
"required": true
}
```
```yaml
services:
app:
volumes:
- ${MEDIA_PATH}:/media:ro
```
The installer (`install_from` and `update_env`) refuses values that:
- aren't absolute (must start with `/`),
- don't exist on the host,
- aren't directories,
- resolve (after `Path.resolve()`) into a system-path deny-list: `/`, `/etc`, `/root`, `/boot`, `/proc`, `/sys`, `/dev`, `/bin`, `/sbin`, `/usr/bin`, `/usr/sbin`, `/var/lib/furtka`.
Traversal like `/mnt/../etc` is caught too — the deny-list check runs on the resolved path.
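A sketch of that guard, minus the `exists()`/`is_dir()` checks (those need the real host filesystem). Exact-match against the resolved path is an assumption here — the actual installer may also reject paths nested under the deny-list entries.

```python
from pathlib import Path

# Deny-list copied from the prose above.
_DENY = {"/", "/etc", "/root", "/boot", "/proc", "/sys", "/dev",
         "/bin", "/sbin", "/usr/bin", "/usr/sbin", "/var/lib/furtka"}

def path_value_ok(value: str) -> bool:
    """Illustrative path-setting guard; not furtka's actual code."""
    if not value.startswith("/"):
        return False
    resolved = Path(value).resolve()  # defuses /mnt/../etc style traversal
    return str(resolved) not in _DENY
```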
Path settings sit alongside manifest-declared volumes. Use `manifest.volumes` for internal state the app owns (databases, caches, config), and path settings for user data the container should mount and — usually — read without owning. Mounting read-only (`:ro`) is a good default for data the app only consumes.
## `docker-compose.yaml`
- File extension is `.yaml`. The compose runner hardcodes this — `.yml` will not be found.
- Reference manifest volumes as `furtka_<app>_<short>` with `external: true`. The reconciler creates the volume *before* `compose up`, so compose must not try to manage its lifecycle.
- Values from `.env` are substituted by compose in the usual `${VAR}` form.
- If the upstream image ships a HEALTHCHECK that misbehaves on Furtka's setup, disable it — a permanently-unhealthy container scares users reading `docker ps`.
- Pin images to a digest or stable tag when you can. `:latest` is acceptable for an MVP but noisy.
Minimal example:
```yaml
services:
app:
image: ghcr.io/example/myapp:1.2.3
restart: unless-stopped
environment:
- ADMIN_PASSWORD=${ADMIN_PASSWORD}
ports:
- "8080:8080"
volumes:
- furtka_myapp_data:/var/lib/myapp
volumes:
furtka_myapp_data:
external: true
```
## `.env.example`
One `KEY=VALUE` per line. Every key declared in `manifest.settings` should have a line here so the compose file resolves cleanly on first install even before the user opens the form.
Do not use `changeme` (or any value listed in `furtka.installer.PLACEHOLDER_SECRETS`) as the default for a required secret. The install step scans the final `.env` and refuses to finish if a placeholder survives — this is the guardrail that stops us shipping an app with a known password.
For non-secret values (usernames, paths), sensible defaults are fine and go straight into `.env` on first install.
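The guardrail's shape, sketched. Both the placeholder set and the function are hypothetical — the authoritative list is `furtka.installer.PLACEHOLDER_SECRETS` and the real scan may differ.

```python
# Illustrative placeholder set only.
PLACEHOLDER_SECRETS = {"changeme", "password", "secret", "admin"}

def leaked_placeholders(env_text: str, secret_keys: set[str]) -> list[str]:
    """Return required secret keys whose final .env value is a placeholder."""
    leaked = []
    for line in env_text.splitlines():
        key, sep, value = line.partition("=")
        if sep and key.strip() in secret_keys \
                and value.strip().lower() in PLACEHOLDER_SECRETS:
            leaked.append(key.strip())
    return leaked
```

A non-empty result means the install must refuse to finish.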
## `icon.svg`
- 64×64 viewBox, no width/height attributes so the UI can scale it.
- Use `fill="currentColor"` (and `stroke="currentColor"`) so the icon picks up the current theme instead of baking in a color.
- Keep it single-path-ish. These render small in the app grid.
- The icon is inlined into the `/apps` page by the defensive SVG sanitiser, which strips `<script>`, `on*` attributes, and `javascript:` refs and enforces a 16 KB cap. Anything fancier than static paths and shapes will be rejected.
## Install and test
From the repo root on a dev box with Furtka installed:
```
sudo furtka app install ./apps/myapp
```
`furtka app install` runs a reconcile as its last step, so the container is up once the command returns. Open the Web UI (`http://furtka.local/`), fill in the settings form, and confirm the app starts. `docker ps` should show one container per compose service; `docker volume ls` should show `furtka_myapp_*`.
To bundle the app into the ISO, drop the folder into `apps/` before `iso/build.sh` runs — the build tarballs the whole `apps/` tree into the image.
## Out of scope (for now)
- Sharing volumes between apps. v1 keeps them isolated.
- Auth on the Web UI. The UI itself has a banner about this.
- Automatic updates. User-triggered per-app update is `POST /api/apps/<name>/update`.
- A network catalog. `furtka app install <name>` only resolves bundled apps in `/opt/furtka/apps/`.

View file

@ -1,13 +1,12 @@
{
"name": "fileshare",
"display_name": "Network Files",
"version": "0.1.2",
"version": "0.1.1",
"description": "SMB share for Mac, Windows, Linux and Android devices on the LAN.",
"description_long": "Alle Geräte im WLAN sehen einen gemeinsamen Ordner. Funktioniert mit Windows, Mac, Linux und Android. Verbinden zu smb://furtka.local — Anmeldung mit dem hier gesetzten Benutzernamen und Passwort.",
"volumes": ["files"],
"ports": [445, 139],
"icon": "icon.svg",
"open_url": "smb://{host}/files",
"settings": [
{
"name": "SMB_USER",

View file

@ -1,35 +1,18 @@
# Serves the Furtka landing page + live JSON on :80 (plain HTTP). HTTPS
# is **opt-in** — Caddy doesn't serve :443 until the user clicks the
# "Enable HTTPS" toggle on /settings, which drops an import snippet into
# /etc/caddy/furtka-https.d/. Default install has NO tls site block →
# Caddy never generates a self-signed CA / leaf cert → no
# SEC_ERROR_BAD_SIGNATURE when a user visits https://furtka.local before
# they've trusted anything. That was the 26.14-era regression this file
# exists to cure: the old Caddyfile always served :443 with a freshly-
# generated cert, and a browser that had ever trusted an older Furtka
# box's CA would reject the new one with an unbypassable bad-sig error.
# Serves the Furtka landing page + live JSON on :80 (plain HTTP) and :443
# (HTTPS via Caddy's built-in `tls internal` — locally-issued certs signed
# by a root CA that Caddy generates on first start and stores under
# /var/lib/caddy/.local/share/caddy/pki/authorities/local/). Static pages
# are read from /opt/furtka/current/ — updates flip the symlink and
# everything picks up the new content without a Caddy restart (a
# `systemctl reload caddy` is still triggered post-swap to flush the
# file-server's handle cache). /apps and /api are reverse-proxied to the
# resource-manager API (furtka serve, bound to 127.0.0.1:7000).
#
# /apps, /api, /login, /logout, / (home), /settings are reverse-proxied
# to the resource-manager API (furtka serve, bound to 127.0.0.1:7000).
# Static pages are read from /opt/furtka/current/ — updates flip the
# symlink and everything picks up the new content without a Caddy
# restart (a `systemctl reload caddy` is still triggered post-swap to
# flush the file-server's handle cache).
#
# Two snippet dirs, both silently no-op when empty:
# - /etc/caddy/furtka.d/*.caddyfile → imported inside the :80 block.
# The HTTPS toggle's "force HTTP→HTTPS redirect" snippet lands here.
# - /etc/caddy/furtka-https.d/*.caddyfile → imported at TOP LEVEL, so
# the HTTPS hostname+tls-internal site block can drop in here when
# the toggle is on. Hostname is substituted at toggle-time.
{
# Named-hostname :443 blocks would otherwise make Caddy add its own
# HTTP→HTTPS redirect — but we already serve our own `:80` block and
# the opt-in /settings toggle owns the redirect. Disable the built-in
# to keep a single source of truth.
auto_https disable_redirects
}
# Force-HTTPS: /etc/caddy/furtka.d/*.caddyfile gets imported into the :80
# block. The /api/furtka/https/force endpoint creates or removes
# redirect.caddyfile there to toggle the HTTP→HTTPS redirect, then reloads
# Caddy. Glob imports silently no-op on an empty/missing directory, so the
# toggle-off state is "no file present" rather than "empty file".
(furtka_routes) {
handle /api/* {
reverse_proxy localhost:7000
@ -37,26 +20,6 @@
handle /apps* {
reverse_proxy localhost:7000
}
handle /login* {
reverse_proxy localhost:7000
}
handle /logout* {
reverse_proxy localhost:7000
}
# /settings and / — these were previously served as static HTML straight
# from the catch-all file_server, which meant the auth-guard was
# bypassed: a LAN visitor could see the box's version, IP, and
# reach the Update-now / Reboot buttons (the API calls behind them
# are auth-gated, but the page itself rendered without a redirect
# to /login). Route them through the Python handler which checks
# the session cookie and either serves the static HTML from
# assets/www/ or redirects to /login.
handle /settings* {
reverse_proxy localhost:7000
}
handle / {
reverse_proxy localhost:7000
}
# Runtime JSON lives under /var/lib/furtka/ so it survives self-updates
# (which only swap /opt/furtka/current).
handle /status.json {
@ -72,10 +35,10 @@
file_server
}
# Download the local root CA cert Caddy generated for `tls internal`.
# Public because users need to grab it before they've trusted it.
# The private key next to it stays 0600 / caddy-owned.
# Available on both :80 and :443 so users can grab it before they've
# trusted it. The private key next to it stays 0600 / caddy-owned.
handle /rootCA.crt {
root * /var/lib/caddy/pki/authorities/local
root * /var/lib/caddy/.local/share/caddy/pki/authorities/local
rewrite * /root.crt
file_server
header Content-Type "application/x-x509-ca-cert"
@ -91,12 +54,12 @@
}
}
# HTTPS opt-in: when /settings toggles HTTPS on, a snippet gets written
# into /etc/caddy/furtka-https.d/ that adds the hostname+tls-internal
# site block. Empty directory = HTTP-only (default fresh install).
import /etc/caddy/furtka-https.d/*.caddyfile
:80 {
import /etc/caddy/furtka.d/*.caddyfile
import furtka_routes
}
:443 {
tls internal
import furtka_routes
}

View file

@ -1,12 +0,0 @@
[Unit]
Description=Furtka apps catalog sync
Requires=network-online.target
After=network-online.target
[Service]
Type=oneshot
ExecStart=/usr/local/bin/furtka catalog sync
TimeoutStartSec=5min
[Install]
WantedBy=multi-user.target

View file

@ -1,14 +0,0 @@
[Unit]
Description=Furtka apps catalog daily sync
[Timer]
# First sync 10 min after boot, then once per day with up to 6 h jitter so
# a fleet of boxes doesn't all hit Forgejo at the same second. Persistent
# = catch up if the box was off when the timer should have fired.
OnBootSec=10min
OnUnitActiveSec=24h
RandomizedDelaySec=6h
Persistent=true
[Install]
WantedBy=timers.target

View file

@ -14,7 +14,6 @@
<a href="/" aria-current="page">Home</a>
<a href="/apps">Apps</a>
<a href="/settings/">Settings</a>
<a href="#" id="logout-link" onclick="return doLogout(event)">Logout</a>
</div>
</nav>
<header>
@ -68,17 +67,6 @@
</main>
<script>
// Revoke the cookie server-side and bounce to /login. Shared
// shape with the _HTML in furtka/api.py so the two logout
// links behave identically.
async function doLogout(ev) {
ev.preventDefault();
try { await fetch('/logout', { method: 'POST', credentials: 'same-origin' }); }
catch (e) { /* server may already be down */ }
window.location.href = '/login';
return false;
}
// Hostname + install metadata — written once at install time to
// /var/lib/furtka/furtka.json (see _furtka_json_cmd in the installer).
// Separate from status.json because these facts don't change between
@ -104,17 +92,13 @@
}
function primaryAction(app) {
// open_url is a manifest-declared template with a `{host}`
// placeholder — substituted against the current browser's
// hostname so smb://host/files and http://host:3001/ both
// follow however the user reached Furtka (furtka.local, raw
// IP, a future reverse-proxy hostname). Apps without a
// frontend fall back to /apps for management.
if (app.open_url) {
const host = HOSTNAME || location.hostname;
return { href: app.open_url.replace('{host}', host), label: 'Open', external: true };
// Only fileshare has a direct "open" link today. Future apps with
// HTTP endpoints would surface a URL here; everything else falls
// back to the /apps manage page.
if (app.name === 'fileshare' && HOSTNAME) {
return { href: `smb://${HOSTNAME}.local/files`, label: 'Open files' };
}
return { href: '/apps', label: 'Manage →', external: false };
return { href: '/apps', label: 'Manage →' };
}
async function renderApps() {
@ -131,9 +115,8 @@
}
target.innerHTML = apps.map(a => {
const icon = a.icon_svg || FALLBACK_ICON;
const { href, label, external } = primaryAction(a);
const tgt = external ? ' target="_blank" rel="noopener"' : '';
return `<a class="app-tile" href="${esc(href)}"${tgt}>
const { href, label } = primaryAction(a);
return `<a class="app-tile" href="${esc(href)}">
<div class="icon">${icon}</div>
<span class="name">${esc(a.display_name || a.name)}</span>
<span class="cta">${esc(label)}</span>

View file

@ -14,7 +14,6 @@
<a href="/">Home</a>
<a href="/apps">Apps</a>
<a href="/settings/" aria-current="page">Settings</a>
<a href="#" id="logout-link" onclick="return doLogout(event)">Logout</a>
</div>
</nav>
@ -90,25 +89,12 @@
</div>
</section>
<section>
<h2>Power</h2>
<div class="card">
<p class="lede">
Reboot or shut down the whole Furtka box. Takes a few seconds to
finish; the UI will reconnect itself after a reboot.
</p>
<div class="power-actions">
<button type="button" id="power-reboot" class="secondary">Reboot</button>
<button type="button" id="power-poweroff" class="danger">Shut down</button>
</div>
<p id="power-status" class="hint"></p>
</div>
</section>
<section>
<h2>Coming next</h2>
<div class="coming">
<p class="hint">Controls we're building — follow progress on <a href="https://furtka.org">furtka.org</a>.</p>
<a href="https://furtka.org/#planned">Reboot</a>
<a href="https://furtka.org/#planned">Shut down</a>
<a href="https://furtka.org/#planned">Change hostname</a>
<a href="https://furtka.org/#planned">Backup</a>
<a href="https://furtka.org/#planned">User accounts</a>
@ -122,15 +108,6 @@
</main>
<script>
// Logout button in the nav — same shape as /apps and / pages.
async function doLogout(ev) {
ev.preventDefault();
try { await fetch('/logout', { method: 'POST', credentials: 'same-origin' }); }
catch (e) { /* server may already be down */ }
window.location.href = '/login';
return false;
}
async function refresh() {
try {
const r = await fetch('/status.json', { cache: 'no-store' });
@ -363,85 +340,6 @@
/* keep polling; restart blip expected */
}
}
// Power buttons: confirm, POST, then swap the whole card into a
// "going down" state so the user doesn't keep clicking. After a
// reboot we try to reconnect after ~45s; for shutdown we just
// tell the user the box is off — no auto-reconnect attempt.
const powerStatusEl = document.getElementById('power-status');
const rebootBtn = document.getElementById('power-reboot');
const poweroffBtn = document.getElementById('power-poweroff');
function setPowerStatus(msg, tone = 'muted') {
powerStatusEl.textContent = msg;
powerStatusEl.style.color =
tone === 'error' ? 'var(--danger)' : 'var(--muted)';
}
async function triggerPower(action, confirmMsg, inflightLabel) {
if (!confirm(confirmMsg)) return;
rebootBtn.disabled = true;
poweroffBtn.disabled = true;
setPowerStatus(inflightLabel);
try {
const r = await fetch('/api/furtka/power', {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({ action }),
});
if (!r.ok) {
const data = await r.json().catch(() => ({}));
setPowerStatus(data.error || `HTTP ${r.status}`, 'error');
rebootBtn.disabled = false;
poweroffBtn.disabled = false;
return;
}
if (action === 'reboot') {
setPowerStatus('Rebooting… this page will reload when the box is back.');
// Try reconnecting after a generous delay. archinstall
// + boot + services typically takes 30–45 s; give it 30
// before the first poke so we don't just spin against
// a down kernel.
setTimeout(pollForReconnect, 30000);
} else {
setPowerStatus(
'Shutdown scheduled. Press the physical power button to turn it back on.'
);
}
} catch (e) {
setPowerStatus(`Network error: ${e.message}`, 'error');
rebootBtn.disabled = false;
poweroffBtn.disabled = false;
}
}
async function pollForReconnect() {
// Fetch a tiny static file; when it comes back 200 the box is up.
try {
const r = await fetch('/furtka.json', { cache: 'no-store' });
if (r.ok) {
setPowerStatus('Back up — reloading…');
setTimeout(() => location.reload(), 1500);
return;
}
} catch (e) { /* still down */ }
setTimeout(pollForReconnect, 3000);
}
rebootBtn.addEventListener('click', () =>
triggerPower(
'reboot',
"Wirklich neu starten? Die Box ist für ~30 Sekunden nicht erreichbar.",
'Rebooting…'
)
);
poweroffBtn.addEventListener('click', () =>
triggerPower(
'poweroff',
"Wirklich ausschalten? Du kannst die Box erst wieder starten, wenn du den physischen Power-Knopf drückst.",
'Shutting down…'
)
);
</script>
</body>
</html>

View file

@ -198,7 +198,7 @@ h2 {
flex-wrap: wrap;
justify-content: flex-end;
}
button, .btn {
button {
background: var(--accent);
border: none;
color: var(--bg);
@ -209,39 +209,16 @@ button, .btn {
white-space: nowrap;
font-size: 0.9rem;
font-family: inherit;
/* Anchor rendered-as-button: strip underline + keep the button's
rectangular hit area. `display: inline-flex` so an <a class="btn">
lines up vertically with its <button> siblings in .buttons. */
text-decoration: none;
display: inline-flex;
align-items: center;
}
button.secondary, .btn.secondary {
button.secondary {
background: var(--card);
color: var(--fg);
border: 1px solid var(--border);
}
button.danger { background: var(--danger); color: #fff; }
button:disabled { opacity: 0.5; cursor: wait; }
button:focus-visible, .btn:focus-visible { outline: none; box-shadow: var(--ring); }
button:focus-visible { outline: none; box-shadow: var(--ring); }
.empty { color: var(--muted); font-style: italic; padding: 0.5rem 0; }
.catalog-row {
display: flex;
justify-content: space-between;
align-items: center;
flex-wrap: wrap;
gap: 0.75rem;
padding: 0.5rem 0 0.75rem;
}
.catalog-state {
margin: 0;
color: var(--muted);
font-size: 0.9rem;
}
.catalog-stage.pending {
color: var(--fg);
font-style: italic;
}
pre {
background: var(--card);
padding: 1rem;
@ -310,8 +287,7 @@ details.log-details[open] > summary { color: var(--fg); }
}
.field input:focus { outline: 2px solid var(--accent); outline-offset: -1px; }
.field .req { color: var(--danger); margin-left: 0.25rem; }
.modal .error,
.login-wrap .error {
.modal .error {
background: var(--warn);
color: var(--warn-fg);
padding: 0.5rem 0.75rem;
@ -320,15 +296,7 @@ details.log-details[open] > summary { color: var(--fg); }
font-size: 0.9rem;
display: none;
}
.modal .error.show,
.login-wrap .error.show { display: block; }
/* Login + first-run setup page. Shares .wrap's max-width so the form
sits in the same column the rest of the app uses, just without the
Home/Apps/Settings nav. A bit of top padding so the H1 isn't glued
to the viewport edge. */
.login-wrap { padding-top: 3rem; }
.login-wrap .actions { margin-top: 0.5rem; }
.modal .error.show { display: block; }
.modal-actions {
display: flex;
justify-content: flex-end;
@ -338,8 +306,7 @@ details.log-details[open] > summary { color: var(--fg); }
/* Row of buttons beneath a card used by the Furtka updates card on
/settings. Left-aligned, wraps on narrow screens. */
.update-actions,
.power-actions {
.update-actions {
display: flex;
gap: 0.5rem;
flex-wrap: wrap;
@ -398,18 +365,7 @@ details.log-details[open] > summary { color: var(--fg); }
font-size: 0.95rem;
}
.kv dt { color: var(--muted); }
.kv dd {
margin: 0;
color: var(--fg);
font-family: ui-monospace, SFMono-Regular, Menlo, monospace;
/* Grid items default to min-width: auto (= content width), so a long
unbreakable value like a SHA-256 fingerprint would push past the
card. min-width: 0 lets the 1fr track enforce the column width, and
overflow-wrap: anywhere gives the colon-separated hex string valid
break opportunities. */
min-width: 0;
overflow-wrap: anywhere;
}
.kv dd { margin: 0; color: var(--fg); font-family: ui-monospace, SFMono-Regular, Menlo, monospace; }
.coming {
display: flex;

View file

@ -1,115 +0,0 @@
"""Shared primitives for release-tarball flows.
Both ``furtka.updater`` (core self-update) and ``furtka.catalog`` (apps
catalog sync) pull a tarball from a Forgejo Releases page, verify its
SHA256 against the ``.sha256`` sidecar, and extract it with a path-
traversal guard. The helpers here are the single implementation of
that dance.
Each error-raising helper accepts an ``error_cls`` kwarg so callers can
keep their domain-specific exception type (``UpdateError``,
``CatalogError``) at call sites; the helper itself defaults to a
neutral ``ReleaseError`` for use in tests or standalone scripts.
"""
from __future__ import annotations
import hashlib
import json
import shutil
import tarfile
import urllib.error
import urllib.request
from pathlib import Path
class ReleaseError(RuntimeError):
"""Neutral failure for release-tarball operations."""
def forgejo_api(host: str, repo: str, path: str, *, error_cls: type = ReleaseError) -> dict | list:
url = f"https://{host}/api/v1/repos/{repo}{path}"
req = urllib.request.Request(url, headers={"Accept": "application/json"})
try:
with urllib.request.urlopen(req, timeout=15) as resp:
return json.loads(resp.read())
except (urllib.error.URLError, json.JSONDecodeError) as e:
raise error_cls(f"forgejo api {url}: {e}") from e
def download(url: str, dest: Path, *, error_cls: type = ReleaseError) -> None:
dest.parent.mkdir(parents=True, exist_ok=True)
req = urllib.request.Request(url)
try:
with urllib.request.urlopen(req, timeout=60) as resp, dest.open("wb") as f:
shutil.copyfileobj(resp, f)
except urllib.error.URLError as e:
raise error_cls(f"download {url}: {e}") from e
def sha256_of(path: Path) -> str:
h = hashlib.sha256()
with path.open("rb") as f:
for chunk in iter(lambda: f.read(1024 * 1024), b""):
h.update(chunk)
return h.hexdigest()
def verify_tarball(tarball: Path, expected_sha: str, *, error_cls: type = ReleaseError) -> None:
actual = sha256_of(tarball)
if actual != expected_sha:
raise error_cls(f"sha256 mismatch: expected {expected_sha}, got {actual}")
def parse_sha256_sidecar(text: str, *, error_cls: type = ReleaseError) -> str:
"""Extract the hash from a standard `sha256sum` sidecar line."""
line = text.strip().split("\n", 1)[0].strip()
if not line:
raise error_cls("empty sha256 sidecar")
return line.split()[0]
def extract_tarball(tarball: Path, dest: Path, *, error_cls: type = ReleaseError) -> str:
"""Extract the tarball and return the VERSION read from its root.
Refuses entries that could escape ``dest`` via absolute paths or ``..``
segments. On Python 3.12+ the stricter ``data`` filter is additionally
enabled to catch symlink-escape / device-node / setuid tricks that the
regex check can't see.
"""
dest.mkdir(parents=True, exist_ok=True)
with tarfile.open(tarball, "r:gz") as tf:
for member in tf.getmembers():
if member.name.startswith(("/", "..")) or ".." in Path(member.name).parts:
raise error_cls(f"refusing tarball entry {member.name!r}")
try:
tf.extractall(dest, filter="data")
except TypeError:
tf.extractall(dest)
version_file = dest / "VERSION"
if not version_file.is_file():
raise error_cls("tarball has no VERSION file at root")
return version_file.read_text().strip()
def version_tuple(v: str) -> tuple:
"""CalVer comparator: 26.1-alpha < 26.1-beta < 26.1-rc < 26.1 < 26.2-alpha.
Pre-release stages sort before the corresponding stable (no-suffix)
release. Unknown suffixes sort below everything except the malformed
fallback. Returns a tuple of (year, release, stage_rank, suffix).
"""
stage_rank = {"alpha": 0, "beta": 1, "rc": 2}
head, _, suffix = v.partition("-")
try:
year_str, release_str = head.split(".", 1)
year = int(year_str)
release = int(release_str)
except (ValueError, IndexError):
return (-1, -1, -1, v)
if not suffix:
return (year, release, 3, "")
for name, rank in stage_rank.items():
if suffix.startswith(name):
return (year, release, rank, suffix)
return (year, release, -1, suffix)

View file

@ -2,28 +2,22 @@
# its lines hurts readability and the rendered output is what matters here.
"""Tiny HTTP API + management UI for the Furtka resource manager.
Single stdlib http.server process, served behind Caddy (reverse-proxies
/apps, /api, /login and /logout from :80 to here).
Single stdlib http.server process, no Flask/no third-party deps so we don't
have to pip-install anything on the target. Caddy reverse-proxies /apps and
/api from :80 to here.
Security: single-admin password login, cookie-session, werkzeug-hashed
password stored at /var/lib/furtka/users.json (0600). Sessions live in
memory; `systemctl restart furtka-api` invalidates everyone. Fresh
installs pre-populate users.json from the webinstaller step-1 password;
upgrades from pre-auth releases fall into a first-run setup form at
/login where the admin password is created from the browser. Authentik
integration remains the long-term plan; this is the pragmatic alpha
stopgap.
Security: NO AUTH. Bound to 127.0.0.1 by default; the Caddy proxy makes it
LAN-reachable. Anyone on the LAN can install/remove apps. The UI shouts this
out at the top. Auth lands when Authentik does.
"""
import json
import re
import time
from http.cookies import SimpleCookie
from http.server import BaseHTTPRequestHandler, HTTPServer
from furtka import auth, dockerops, install_runner, installer, reconciler, sources
from furtka import dockerops, installer, reconciler
from furtka.manifest import ManifestError, load_manifest
from furtka.paths import apps_dir, static_www_dir
from furtka.paths import apps_dir, bundled_apps_dir
from furtka.scanner import scan
_ICON_MAX_BYTES = 16 * 1024
@ -83,21 +77,17 @@ _HTML = """<!DOCTYPE html>
<a href="/">Home</a>
<a href="/apps" aria-current="page">Apps</a>
<a href="/settings/">Settings</a>
<a href="#" id="logout-link" onclick="return doLogout(event)">Logout</a>
</div>
</nav>
<h1>Furtka Apps</h1>
<p class="lede">Install or remove resource-manager apps on this Furtka box.</p>
<div class="warn">No authentication on this UI yet. Anyone on your LAN can install or remove apps. Don't expose this to the wider internet.</div>
<h2>Installed</h2>
<div id="installed"></div>
<h2>Available to install</h2>
<div class="catalog-row">
<p class="catalog-state">Catalog version <span id="catalog-current"></span> · last sync <span id="catalog-last-sync">never</span> <span id="catalog-stage" class="catalog-stage"></span></p>
<button type="button" class="secondary" id="catalog-sync-btn">Sync apps catalog</button>
</div>
<div id="available"></div>
<details class="log-details">
@ -126,15 +116,6 @@ function esc(s) {
return d.innerHTML;
}
async function doLogout(ev) {
ev.preventDefault();
try {
await fetch('/logout', { method: 'POST', credentials: 'same-origin' });
} catch (e) { /* best-effort; server may already be down */ }
window.location.href = '/login';
return false;
}
// Fallback when an app doesn't ship a parseable icon.svg. Simple
// stroked folder in currentColor so the tile's accent tint applies.
const FALLBACK_ICON = '<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="1.5" stroke-linecap="round" stroke-linejoin="round"><path d="M3 7v12a2 2 0 0 0 2 2h14a2 2 0 0 0 2-2V9a2 2 0 0 0-2-2h-7l-2-2H5a2 2 0 0 0-2 2z"/></svg>';
@ -188,9 +169,7 @@ async function openSettingsDialog(name, action) {
modal.form.innerHTML = data.settings.map(s => {
const id = `field-${esc(s.name)}`;
const value = action === 'edit' && s.type === 'password' ? '' : esc(s.value || '');
const placeholder = action === 'edit' && s.type === 'password' ? 'Leave blank to keep current'
: s.type === 'path' ? '/mnt/…'
: '';
const placeholder = action === 'edit' && s.type === 'password' ? 'Leave blank to keep current' : '';
return `
<div class="field">
<label for="${id}">${esc(s.label)}${s.required ? '<span class="req">*</span>' : ''}</label>
@ -214,51 +193,6 @@ async function openSettingsDialog(name, action) {
modal.submit.addEventListener('click', submitModal);
// Install progress phases written by the background job's state file.
// Mirrors furtka/install_runner.py stage strings. Unknown stages fall
// back to a neutral "Installing…" so a future phase rename doesn't
// leave the modal button blank.
const INSTALL_STAGE_LABELS = {
'pulling_image': 'Image wird heruntergeladen…',
'creating_volumes': 'Speicherbereiche werden erstellt…',
'starting_container': 'Container wird gestartet…',
'done': 'Fertig',
};
async function pollInstallStatus(original) {
// Two-minute ceiling: Jellyfin over a slow DSL line can take ~90s
// just on the image pull. Beyond that something's stuck — the
// background job is still running in systemd, but the UI gives up
// on the modal and lets the user close it.
const deadline = Date.now() + 120000;
while (Date.now() < deadline) {
await new Promise(res => setTimeout(res, 1500));
let s = {};
try {
s = await fetch('/api/apps/install/status').then(r => r.json());
} catch (e) { /* transient; keep polling */ }
const stage = s.stage || '';
modal.submit.textContent = INSTALL_STAGE_LABELS[stage] || 'Installing…';
if (stage === 'done') {
closeModal();
await refresh();
return;
}
if (stage === 'error') {
modal.error.textContent = s.error || 'Install failed';
modal.error.classList.add('show');
modal.submit.disabled = false;
modal.submit.textContent = original;
return;
}
}
// Timed out waiting for a terminal state; don't lie to the user.
modal.error.textContent = 'Installation is taking longer than expected. Check /settings for the background job status.';
modal.error.classList.add('show');
modal.submit.disabled = false;
modal.submit.textContent = original;
}
async function submitModal() {
if (!modal.current) return;
const { name, action } = modal.current;
@ -292,13 +226,6 @@ async function submitModal() {
modal.submit.textContent = original;
return;
}
// Install dispatched a background job; poll until terminal. The
// edit path stays synchronous (settings updates are fast: env write
// + reconcile, no image pull).
if (action === 'install' && r.status === 202) {
await pollInstallStatus(original);
return;
}
closeModal();
await refresh();
} catch (e) {
@ -317,14 +244,6 @@ async function refresh() {
document.getElementById('installed').innerHTML = installed.length
? installed.map(a => {
const hasSettings = a.has_settings;
const openHref = a.open_url ? a.open_url.replace('{host}', location.hostname) : '';
// Plain <a> rendered as a button so it behaves like a real link
// (middle-click, right-click "copy link", screen readers) instead
// of a JS onclick. Most installed apps will want this; fileshare
// deep-links to smb://, Kuma to http://host:3001/.
const openBtn = openHref
? `<a class="btn" href="${esc(openHref)}" target="_blank" rel="noopener">Open</a>`
: '';
return `
<div class="app">
<div class="left">
@ -335,7 +254,6 @@ async function refresh() {
</div>
</div>
<div class="buttons">
${openBtn}
${hasSettings ? `<button data-op="edit" data-name="${esc(a.name)}">Settings</button>` : ''}
<button class="secondary" data-op="update" data-name="${esc(a.name)}">Update</button>
<button class="secondary" data-op="reinstall" data-name="${esc(a.name)}">Reinstall</button>
@ -391,197 +309,20 @@ async function handleButton(op, name, btn) {
: ' — already up to date';
}
document.getElementById('log').textContent = header + '\\n' + JSON.stringify(data, null, 2);
// Reinstall dispatches an async install the same way the modal does;
// follow the background job on the button label until terminal.
if (op === 'reinstall' && r.status === 202) {
const deadline = Date.now() + 120000;
while (Date.now() < deadline) {
await new Promise(res => setTimeout(res, 1500));
let s = {};
try { s = await fetch('/api/apps/install/status').then(r => r.json()); } catch (e) {}
const stage = s.stage || '';
btn.textContent = INSTALL_STAGE_LABELS[stage] || 'Reinstalling…';
if (stage === 'done' || stage === 'error') break;
}
}
} catch (e) {
document.getElementById('log').textContent = `[${op} ${name}] network error: ${e.message}`;
}
btn.textContent = original;
btn.disabled = false;
await refresh();
}
async function refreshCatalog() {
let status;
try {
status = await fetch('/api/catalog/status').then(r => r.json());
} catch (e) {
return;
}
const cur = status.current || 'never synced';
document.getElementById('catalog-current').textContent = cur;
const stage = (status.state || {}).stage || '';
const updatedAt = (status.state || {}).updated_at || '';
document.getElementById('catalog-last-sync').textContent = updatedAt || 'never';
const stageEl = document.getElementById('catalog-stage');
if (stage && stage !== 'done') {
stageEl.textContent = '· ' + stage;
stageEl.classList.add('pending');
} else {
stageEl.textContent = '';
stageEl.classList.remove('pending');
}
}
const catalogBtn = document.getElementById('catalog-sync-btn');
catalogBtn.addEventListener('click', async () => {
catalogBtn.disabled = true;
const original = catalogBtn.textContent;
catalogBtn.textContent = 'Syncing…';
try {
const r = await fetch('/api/catalog/sync/apply', {method: 'POST'});
const data = await r.json();
document.getElementById('log').textContent = `[catalog sync] HTTP ${r.status}\\n` + JSON.stringify(data, null, 2);
// Poll for completion; sync is fast (KB-range tarball), so 30 s is plenty.
const deadline = Date.now() + 30000;
while (Date.now() < deadline) {
await new Promise(res => setTimeout(res, 1500));
const s = await fetch('/api/catalog/status').then(r => r.json()).catch(() => null);
const stage = (s && s.state && s.state.stage) || '';
if (stage === 'done' || stage === 'error') break;
}
await refreshCatalog();
await refresh();
} catch (e) {
document.getElementById('log').textContent = `[catalog sync] network error: ${e.message}`;
}
catalogBtn.disabled = false;
catalogBtn.textContent = original;
});
refresh();
refreshCatalog();
</script>
</body>
</html>
"""
# Login / first-run setup page. Rendered standalone (no main-UI chrome) so
# an unauthenticated visitor never gets a glimpse of the app list. Reuses
# /style.css for the look — the page is just a form + optional error line.
# The template has a {{ SETUP }} marker the server flips on/off depending
# on whether users.json exists yet (first-run vs. normal login).
_HTML_LOGIN = """<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Furtka · {{ TITLE }}</title>
<meta name="viewport" content="width=device-width,initial-scale=1">
<link rel="stylesheet" href="/style.css">
</head>
<body>
<main class="wrap login-wrap">
<h1>{{ HEADING }}</h1>
<p class="lede">{{ LEDE }}</p>
<form id="login-form" onsubmit="return doLogin(event)">
<div class="field">
<label for="username">Username</label>
<input id="username" name="username" type="text" autocomplete="username" required value="{{ DEFAULT_USERNAME }}" autofocus>
</div>
<div class="field">
<label for="password">Password</label>
<input id="password" name="password" type="password" autocomplete="{{ PWD_AUTOCOMPLETE }}" required minlength="8">
</div>
{{ PASSWORD2_FIELD }}
<div id="login-error" class="error"></div>
<div class="actions">
<button type="submit" id="login-submit">{{ SUBMIT_LABEL }}</button>
</div>
</form>
</main>
<script>
const SETUP = {{ SETUP_JSON }};
const errBox = document.getElementById('login-error');
async function doLogin(ev) {
ev.preventDefault();
errBox.classList.remove('show');
errBox.textContent = '';
const btn = document.getElementById('login-submit');
btn.disabled = true;
const body = {
username: document.getElementById('username').value,
password: document.getElementById('password').value,
};
if (SETUP) body.password2 = document.getElementById('password2').value;
try {
const r = await fetch('/login', {
method: 'POST',
headers: {'Content-Type': 'application/json'},
credentials: 'same-origin',
body: JSON.stringify(body),
});
if (r.ok) {
window.location.href = '/apps';
return false;
}
const data = await r.json().catch(() => ({error: 'HTTP ' + r.status}));
errBox.textContent = data.error || 'Login failed';
errBox.classList.add('show');
} catch (e) {
errBox.textContent = 'Network error — is the box reachable?';
errBox.classList.add('show');
} finally {
btn.disabled = false;
}
return false;
}
</script>
</body>
</html>
"""
def _render_login_html(setup: bool, default_username: str = "") -> str:
if setup:
password2_field = (
'<div class="field"><label for="password2">Repeat password</label>'
'<input id="password2" name="password2" type="password" '
'autocomplete="new-password" required minlength="8"></div>'
)
subs = {
"TITLE": "First-run setup",
"HEADING": "Set admin password",
"LEDE": "No admin account exists yet on this box. Pick a username and password — you'll use them to sign in to the Furtka UI.",
"PWD_AUTOCOMPLETE": "new-password",
"PASSWORD2_FIELD": password2_field,
"SUBMIT_LABEL": "Create admin",
"DEFAULT_USERNAME": "admin",
"SETUP_JSON": "true",
}
else:
subs = {
"TITLE": "Login",
"HEADING": "Furtka login",
"LEDE": "Sign in with the admin credentials you set during install.",
"PWD_AUTOCOMPLETE": "current-password",
"PASSWORD2_FIELD": "",
"SUBMIT_LABEL": "Log in",
"DEFAULT_USERNAME": default_username,
"SETUP_JSON": "false",
}
out = _HTML_LOGIN
for key, val in subs.items():
out = out.replace("{{ " + key + " }}", val)
return out
# Minimum password length enforced server-side (browser also enforces it
# via the input's minlength, but don't rely on client-side only).
_MIN_PASSWORD_LEN = 8
def _manifest_summary(m, app_dir=None):
return {
"name": m.name,
@ -593,9 +334,6 @@ def _manifest_summary(m, app_dir=None):
"icon": m.icon,
"icon_svg": _read_icon_svg(app_dir, m.icon),
"has_settings": bool(m.settings),
# Optional template URL with `{host}` placeholder; frontend
# substitutes against location.hostname at render time.
"open_url": m.open_url,
}
@ -611,31 +349,28 @@ def _list_installed():
return out
def _list_available():
"""Apps available to install — catalog union bundled, catalog wins on collision.
Each entry carries a `"source"` field (`"catalog"` | `"bundled"`) so the
UI can visually differentiate later. Already-installed apps are filtered
out so the UI shows them only in the installed list.
"""
def _list_bundled():
installed_names = {r.path.name for r in scan(apps_dir()) if r.ok}
bundled = bundled_apps_dir()
if not bundled.exists():
return []
out = []
for app_source in sources.list_available():
if app_source.path.name in installed_names:
for entry in sorted(bundled.iterdir()):
if not entry.is_dir() or entry.name in installed_names:
continue
manifest_path = entry / "manifest.json"
if not manifest_path.exists():
continue
manifest_path = app_source.path / "manifest.json"
try:
m = load_manifest(manifest_path)
except ManifestError:
continue
summary = _manifest_summary(m, app_source.path)
summary["source"] = app_source.origin
out.append(summary)
out.append(_manifest_summary(m, entry))
return out
def _load_manifest_for(name):
"""Return (manifest, env_values, installed_bool) for an installed or bundled/catalog app.
"""Return (manifest, env_values, installed_bool) for an installed or bundled app.
Returns (None, None, False) if the name doesn't resolve anywhere.
"""
@ -647,13 +382,13 @@ def _load_manifest_for(name):
return None, None, False
values = installer.read_env_values(target / ".env")
return m, values, True
resolved = sources.resolve_app_name(name)
if resolved is not None:
bundled = bundled_apps_dir() / name
if bundled.exists() and (bundled / "manifest.json").exists():
try:
m = load_manifest(resolved.path / "manifest.json")
m = load_manifest(bundled / "manifest.json")
except ManifestError:
return None, None, False
env_example = resolved.path / ".env.example"
env_example = bundled / ".env.example"
values = installer.read_env_values(env_example) if env_example.exists() else {}
return m, values, False
return None, None, False
@ -692,86 +427,19 @@ def _do_get_settings(name):
}
_INSTALL_TERMINAL_STAGES = frozenset({"done", "error"})
def _do_install(name, settings=None):
"""Kick off an app install. Synchronous sync-phase + async docker-phase.
Fast parts run inline so validation failures come back as immediate
4xx (bad path, placeholder secret, unknown app, etc.). The slow
`docker compose pull` then `compose up` are dispatched as a
background systemd-run unit that writes phase transitions to
/var/lib/furtka/install-state.json for the UI to poll.
"""
import subprocess
# Reject if the state file reports a non-terminal install. The
# fcntl lock below catches the same race, but only *after* the API
# releases it to let the systemd-run child grab it — a competing
# POST can sneak in during that tiny window. Reading the state
# first closes that gap: as long as a previous install hasn't
# written "done" or "error", we refuse.
current_state = install_runner.read_state()
current_stage = current_state.get("stage", "") if isinstance(current_state, dict) else ""
if current_stage and current_stage not in _INSTALL_TERMINAL_STAGES:
return 409, {
"error": (
f"another install is in progress ({current_state.get('app', '?')}"
f" at {current_stage})"
)
}
# Fast-fail if another install is already in flight. Lock lives under
# /run/ so a previous reboot clears it automatically.
try:
fh = install_runner.acquire_lock()
except install_runner.InstallRunnerError as e:
return 409, {"error": str(e)}
try:
try:
src = installer.resolve_source(name)
target = installer.install_from(src, settings=settings)
except installer.InstallError as e:
return 400, {"error": str(e)}
# Initial state so the UI has something to show between this
# response and the background job's first write.
install_runner.write_state("pulling_image", app=name)
finally:
# Release the lock so the background job can re-acquire it.
fh.close()
unit = f"furtka-install-{name}"
try:
subprocess.run(
[
"systemd-run",
f"--unit={unit}",
"--no-block",
"--collect",
"/usr/local/bin/furtka",
"app",
"install-bg",
name,
],
check=True,
capture_output=True,
text=True,
)
except FileNotFoundError:
install_runner.write_state("error", app=name, error="systemd-run not available")
return 502, {"error": "systemd-run not available"}
except subprocess.CalledProcessError as e:
err = (e.stderr or e.stdout or "").strip()
install_runner.write_state("error", app=name, error=f"dispatch failed: {err}")
return 502, {"error": f"systemd-run failed: {err}"}
return 202, {"status": "dispatched", "unit": unit, "installed": str(target)}
def _do_install_status():
"""Return the current install-state.json contents (or {})."""
return 200, install_runner.read_state()
actions = reconciler.reconcile(apps_dir())
payload = {
"installed": str(target),
"actions": [{"kind": a.kind, "target": a.target, "detail": a.detail} for a in actions],
}
# 207 Multi-Status — install copy succeeded but reconcile had per-app errors.
return (207 if reconciler.has_errors(actions) else 200, payload)
def _do_update_settings(name, settings):
@ -915,131 +583,6 @@ def _do_furtka_status():
return 200, updater.read_state()
def _do_catalog_check():
"""Check Forgejo for a newer apps-catalog release.
Parallels _do_furtka_check: returns current/latest/update_available.
"""
from furtka import catalog
try:
check = catalog.check_catalog()
except catalog.CatalogError as e:
return 502, {"error": str(e)}
return 200, {
"current": check.current,
"latest": check.latest,
"update_available": check.update_available,
}
def _do_catalog_apply():
"""Kick off a catalog sync detached from this process.
Catalog sync doesn't restart furtka-api, so the lifecycle constraint that
forces the Furtka self-update to detach doesn't strictly apply here — but
using the same systemd-run pattern keeps the two UI flows symmetric and
means a slow network can't tie up the API thread. Client polls
/api/catalog/status the same way it polls /update-state.json.
"""
import subprocess
from furtka import catalog
try:
fh = catalog.acquire_lock()
except catalog.CatalogError as e:
return 409, {"error": str(e)}
fh.close()
try:
subprocess.run(
[
"systemd-run",
"--unit=furtka-catalog-sync-api",
"--no-block",
"--collect",
"/usr/local/bin/furtka",
"catalog",
"sync",
],
check=True,
capture_output=True,
text=True,
)
except FileNotFoundError:
return 502, {"error": "systemd-run not available"}
except subprocess.CalledProcessError as e:
return 502, {
"error": f"systemd-run failed: {(e.stderr or e.stdout or '').strip()}",
}
return 202, {"status": "dispatched", "unit": "furtka-catalog-sync-api"}
def _do_catalog_status():
"""Return {current, state} for the apps catalog.
`current` is the catalog's on-disk VERSION; `state` is whatever was last
written by sync_catalog to catalog-state.json. UI uses both: show the
version next to a last-sync timestamp plus a stage indicator.
"""
from furtka import catalog
return 200, {
"current": catalog.read_current_catalog_version(),
"state": catalog.read_state(),
}
_POWER_ACTIONS = {
"reboot": "reboot",
"poweroff": "poweroff",
}
def _do_power(payload):
"""Schedule a reboot or poweroff with a short delay.
`systemd-run --on-active=3s` kicks a transient timer that fires
`systemctl {reboot|poweroff}` a few seconds after the API returns
long enough for the HTTP response to reach the browser + the UI to
swap to a "Going down…" state before the kernel loses network.
The `--no-block` flag makes the systemd-run call itself return
immediately; `--collect` GCs the transient unit once it fires.
No auth: same posture as the install/remove endpoints. Anyone on the
LAN can reboot the box. The /settings banner warns about this;
Authentik will lock it down.
"""
import subprocess
action = payload.get("action")
systemctl_verb = _POWER_ACTIONS.get(action)
if systemctl_verb is None:
return 400, {"error": f"'action' must be one of {sorted(_POWER_ACTIONS)}"}
try:
subprocess.run(
[
"systemd-run",
"--on-active=3s",
"--no-block",
"--collect",
"systemctl",
systemctl_verb,
],
check=True,
capture_output=True,
text=True,
)
except FileNotFoundError:
return 502, {"error": "systemd-run not available"}
except subprocess.CalledProcessError as e:
return 502, {
"error": f"systemd-run failed: {(e.stderr or e.stdout or '').strip()}",
}
return 202, {"action": action, "scheduled_in_seconds": 3}
def _do_update(name):
"""Pull newer container images for an installed app; restart if any changed.
@ -1088,211 +631,35 @@ def _parse_settings_body(payload):
class _Handler(BaseHTTPRequestHandler):
def _json(self, status, payload, extra_headers=None):
def _json(self, status, payload):
body = json.dumps(payload).encode()
self.send_response(status)
self.send_header("Content-Type", "application/json")
self.send_header("Content-Length", str(len(body)))
for name, value in extra_headers or []:
self.send_header(name, value)
self.end_headers()
self.wfile.write(body)
def _html(self, status, body, extra_headers=None):
def _html(self, status, body):
b = body.encode()
self.send_response(status)
self.send_header("Content-Type", "text/html; charset=utf-8")
self.send_header("Content-Length", str(len(b)))
for name, value in extra_headers or []:
self.send_header(name, value)
self.end_headers()
self.wfile.write(b)
def _serve_static_www(self, relative_path: str):
"""Read an HTML asset from assets/www/ and serve it as 200.
Only reached after the do_GET auth-guard so the caller is
already authed. ``relative_path`` is hard-coded at the call site
(``index.html`` or ``settings/index.html``), not user-supplied,
so there's no path-traversal surface here — but we still clamp
the resolved path to static_www_dir() as a defensive check in
case a future refactor wires a dynamic path through.
"""
root = static_www_dir().resolve()
target = (root / relative_path).resolve()
if root not in target.parents and target != root:
return self._html(500, "<h1>internal error</h1>")
try:
body = target.read_text(encoding="utf-8")
except (FileNotFoundError, OSError):
return self._html(404, "<h1>not found</h1>")
return self._html(200, body)
def _redirect(self, location, extra_headers=None):
self.send_response(302)
self.send_header("Location", location)
self.send_header("Content-Length", "0")
for name, value in extra_headers or []:
self.send_header(name, value)
self.end_headers()
# ---- Auth helpers -------------------------------------------------
def _request_cookies(self) -> SimpleCookie:
cookies = SimpleCookie()
header = self.headers.get("Cookie")
if header:
try:
cookies.load(header)
except Exception:
# Malformed Cookie header — treat as no cookies rather
# than 500ing. Same posture as browsers.
return SimpleCookie()
return cookies
def _current_session(self):
cookies = self._request_cookies()
morsel = cookies.get(auth.COOKIE_NAME)
if morsel is None:
return None
return auth.SESSIONS.lookup(morsel.value)
def _session_cookie_header(self, token: str, max_age: int) -> tuple[str, str]:
secure = self.headers.get("X-Forwarded-Proto", "").lower() == "https"
parts = [
f"{auth.COOKIE_NAME}={token}",
"HttpOnly",
"SameSite=Strict",
"Path=/",
f"Max-Age={max_age}",
]
if secure:
parts.append("Secure")
return ("Set-Cookie", "; ".join(parts))
def _clear_cookie_header(self) -> tuple[str, str]:
# Max-Age=0 with an empty value tells the browser to drop it.
return (
"Set-Cookie",
f"{auth.COOKIE_NAME}=; HttpOnly; SameSite=Strict; Path=/; Max-Age=0",
)
def _client_ip(self) -> str:
# Caddy's reverse_proxy appends the real TCP peer to X-Forwarded-For;
# the rightmost entry is the one Caddy added, so it's trustworthy
# even if a client spoofed an XFF of their own. Caddy is the edge —
# no upstream proxy in front of it.
xff = self.headers.get("X-Forwarded-For")
if xff:
return xff.rsplit(",", 1)[-1].strip()
return self.client_address[0]
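The rightmost-entry rule in `_client_ip` is the standard posture when exactly one trusted proxy appends to X-Forwarded-For. Pulled out as a pure function (hypothetical name) the behaviour is easy to check:

```python
def client_ip(xff_header, peer_ip):
    """Pick the client IP: the rightmost XFF entry, else the TCP peer.

    Only the entry the edge proxy itself appended (the rightmost) is
    trustworthy; anything to its left may be attacker-supplied.
    """
    if xff_header:
        return xff_header.rsplit(",", 1)[-1].strip()
    return peer_ip
```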
def _handle_login(self, payload):
username = payload.get("username") if isinstance(payload, dict) else None
password = payload.get("password") if isinstance(payload, dict) else None
if not isinstance(username, str) or not username.strip():
return self._json(400, {"error": "username is required"})
if not isinstance(password, str) or not password:
return self._json(400, {"error": "password is required"})
username = username.strip()
if auth.setup_needed():
# First-run setup path — create the admin account, then log
# in. Require password2 so a typo doesn't lock the user out
# of their own box.
password2 = payload.get("password2")
if password2 != password:
return self._json(400, {"error": "passwords do not match"})
if len(password) < _MIN_PASSWORD_LEN:
return self._json(
400,
{"error": f"password must be at least {_MIN_PASSWORD_LEN} characters"},
)
auth.create_admin(username, password)
else:
# Tuple-keyed lockout: a flood from one IP can't lock the
# admin out from a different IP. When locked we return the
# same 429 regardless of whether the password is correct —
# no oracle, no timing leak via "would have worked."
lockout_key = (username, self._client_ip())
retry = auth.LOCKOUT.retry_after_seconds(lockout_key)
if retry > 0:
return self._json(
429,
{"error": "too many failed attempts, try again later"},
extra_headers=[("Retry-After", str(retry))],
)
if not auth.authenticate(username, password):
# Register before the sleep so concurrent threads see a
# consistent count; keep the sleep so timing can't
# distinguish "locked" from "wrong password."
auth.LOCKOUT.register_failure(lockout_key)
time.sleep(0.5)
return self._json(401, {"error": "invalid username or password"})
auth.LOCKOUT.clear(lockout_key)
session = auth.SESSIONS.create(username)
cookie = self._session_cookie_header(session.token, auth.COOKIE_TTL_SECONDS)
return self._json(200, {"ok": True, "username": username}, extra_headers=[cookie])
def _handle_logout(self):
cookies = self._request_cookies()
morsel = cookies.get(auth.COOKIE_NAME)
if morsel is not None:
auth.SESSIONS.revoke(morsel.value)
return self._json(200, {"ok": True}, extra_headers=[self._clear_cookie_header()])
def do_GET(self): # noqa: N802 — http.server convention
# --- Public routes: login page + its assets ------------------
if self.path in ("/login", "/login/"):
# Already authed? Skip straight to the app list.
if self._current_session() is not None:
return self._redirect("/apps")
return self._html(200, _render_login_html(auth.setup_needed()))
# --- Auth guard for everything below -------------------------
session = self._current_session()
if session is None:
# API paths get a 401 JSON so fetch() callers see a clean
# error. HTML paths get a redirect to /login so the browser
# naturally ends up on the login form.
if self.path.startswith("/api/"):
return self._json(401, {"error": "not authenticated"})
return self._redirect("/login")
if self.path in ("/apps", "/apps/"):
if self.path in ("/", "/apps", "/apps/"):
return self._html(200, _HTML)
# Landing page + settings page used to be served directly by
# Caddy as static HTML, which silently bypassed this auth
# guard (26.11-era regression that shipped and nobody noticed
# until the 26.13 SSH test session — LAN visitors could read
# the box version, IP and fire pre-authed clicks at the
# update/reboot/https-toggle buttons even though the API calls
# themselves would 401). Python reads the static HTML from
# assets/www/ and serves it behind the session check; Caddy
# now proxies / and /settings* here (see Caddyfile).
if self.path == "/":
return self._serve_static_www("index.html")
if self.path in ("/settings", "/settings/"):
return self._serve_static_www("settings/index.html")
if self.path == "/api/apps":
return self._json(200, _list_installed())
# /api/bundled is the pre-26.6 name for this list; kept as an alias
# so any external tooling survives the rename to /api/apps/available.
if self.path in ("/api/bundled", "/api/apps/available"):
return self._json(200, _list_available())
if self.path == "/api/bundled":
return self._json(200, _list_bundled())
if self.path == "/api/furtka/update/status":
status, body = _do_furtka_status()
return self._json(status, body)
if self.path == "/api/furtka/https/status":
status, body = _do_https_status()
return self._json(status, body)
if self.path == "/api/catalog/status":
status, body = _do_catalog_status()
return self._json(status, body)
if self.path == "/api/apps/install/status":
status, body = _do_install_status()
return self._json(status, body)
# /api/apps/<name>/settings
if self.path.startswith("/api/apps/") and self.path.endswith("/settings"):
name = self.path[len("/api/apps/") : -len("/settings")]
@ -1312,16 +679,6 @@ class _Handler(BaseHTTPRequestHandler):
if not isinstance(payload, dict):
return self._json(400, {"error": "body must be a JSON object"})
# --- Public routes: login + logout ----------------------------
if self.path in ("/login", "/login/"):
return self._handle_login(payload)
if self.path in ("/logout", "/logout/"):
return self._handle_logout()
# --- Auth guard for every other POST --------------------------
if self._current_session() is None:
return self._json(401, {"error": "not authenticated"})
# Per-app settings update: /api/apps/<name>/settings
if self.path.startswith("/api/apps/") and self.path.endswith("/settings"):
name = self.path[len("/api/apps/") : -len("/settings")]
@ -1352,19 +709,6 @@ class _Handler(BaseHTTPRequestHandler):
status, body = _do_https_force(payload)
return self._json(status, body)
# Apps catalog: check + apply (daily timer + manual UI button).
if self.path == "/api/catalog/sync/check":
status, body = _do_catalog_check()
return self._json(status, body)
if self.path == "/api/catalog/sync/apply":
status, body = _do_catalog_apply()
return self._json(status, body)
# System power: /settings Reboot / Shut down buttons.
if self.path == "/api/furtka/power":
status, body = _do_power(payload)
return self._json(status, body)
name = payload.get("name")
if not isinstance(name, str) or not name:
return self._json(400, {"error": "missing or empty 'name' field"})


@ -1,260 +0,0 @@
"""Login-guard primitives for the Furtka UI.
One admin, one password. Passwords are PBKDF2-SHA256 hashed via
``furtka.passwd`` (stdlib-only hashlib.pbkdf2_hmac / hashlib.scrypt),
stored in /var/lib/furtka/users.json with mode 0600. Sessions live in
memory; a systemctl restart logs everyone out again, which is fine
for an alpha single-user box. The ``LoginAttempts`` store in this
module rate-limits failed logins per (username, IP) and is also
in-memory; a restart clears a stuck lockout.
On upgrade from pre-auth Furtka the users.json file does not exist
yet; the api's GET /login detects this via ``setup_needed()`` and
renders a first-run form that POSTs to /login as if it were a setup
submit. Fresh installs get the file pre-populated by the webinstaller
so the setup step is skipped.
Hash format is compatible with werkzeug.security: 26.11 / 26.12 boxes
that happened to have werkzeug installed can carry their users.json
forward without re-setup; see ``furtka.passwd`` for the scrypt reader.
"""
from __future__ import annotations
import json
import math
import secrets
import threading
from dataclasses import dataclass
from datetime import UTC, datetime, timedelta
from furtka.passwd import hash_password as _hash_password
from furtka.passwd import verify_password as _verify_password
from furtka.paths import users_file
COOKIE_NAME = "furtka_session"
COOKIE_TTL_SECONDS = 7 * 24 * 3600 # one week
def hash_password(plain: str) -> str:
"""PBKDF2-SHA256 via stdlib. 600k iterations (OWASP 2023)."""
return _hash_password(plain)
def verify_password(plain: str, hashed: str) -> bool:
"""Constant-time compare. Accepts stdlib + legacy werkzeug formats."""
return _verify_password(plain, hashed)
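`furtka.passwd` is not part of this diff. A stdlib-only sketch of the hash/verify pair it is described as providing; the werkzeug-style `pbkdf2:sha256:<iters>$<salt>$<hex>` layout and the 600k iteration count are assumptions taken from the surrounding comments, not the module's confirmed format:

```python
import hashlib
import hmac
import secrets

_ITERATIONS = 600_000  # OWASP 2023 guidance for PBKDF2-SHA256

def hash_password(plain: str) -> str:
    salt = secrets.token_hex(16)
    digest = hashlib.pbkdf2_hmac("sha256", plain.encode(), salt.encode(), _ITERATIONS)
    return f"pbkdf2:sha256:{_ITERATIONS}${salt}${digest.hex()}"

def verify_password(plain: str, hashed: str) -> bool:
    try:
        method, salt, hexdigest = hashed.split("$", 2)
        _, algo, iters = method.split(":")
        iterations = int(iters)
    except ValueError:
        return False  # unrecognised format: fail closed
    digest = hashlib.pbkdf2_hmac(algo, plain.encode(), salt.encode(), iterations)
    return hmac.compare_digest(digest.hex(), hexdigest)
```

`hmac.compare_digest` keeps the final comparison constant-time, matching the docstring's claim above.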
def load_users() -> dict:
"""Return the users dict, or {} if the file is missing or empty.
Missing-file is the expected state on first boot and on upgrades from
pre-auth versions; callers treat empty-dict as "setup required".
"""
path = users_file()
if not path.exists():
return {}
try:
raw = path.read_text()
except OSError:
return {}
if not raw.strip():
return {}
try:
data = json.loads(raw)
except json.JSONDecodeError:
return {}
if not isinstance(data, dict):
return {}
return data
def save_users(users: dict) -> None:
"""Atomically write users.json with mode 0600.
Same pattern as installer.write_env: write to .tmp, chmod, rename,
so a crash between open() and close() can't leave a world-readable
partial file.
"""
path = users_file()
path.parent.mkdir(parents=True, exist_ok=True)
tmp = path.with_suffix(path.suffix + ".tmp")
tmp.write_text(json.dumps(users, indent=2) + "\n")
tmp.chmod(0o600)
tmp.replace(path)
def setup_needed() -> bool:
"""True when no admin is registered yet — initial setup is required."""
users = load_users()
return not users or "admin" not in users
def create_admin(username: str, password: str) -> None:
"""Overwrite users.json with a single admin account.
The webinstaller calls this post-install (with the step-1 password) so
the installed system is login-guarded from first boot. The /login
route calls it on first setup for upgrade-path boxes that don't
already have a users.json.
"""
users = {
"admin": {
"username": username,
"hash": hash_password(password),
"created_at": datetime.now(UTC).isoformat(timespec="seconds"),
}
}
save_users(users)
def authenticate(username: str, password: str) -> bool:
"""Return True iff the supplied credentials match the admin record."""
users = load_users()
admin = users.get("admin")
if not admin:
return False
if admin.get("username") != username:
return False
hashed = admin.get("hash")
if not isinstance(hashed, str) or not hashed:
return False
return verify_password(password, hashed)
@dataclass(frozen=True)
class Session:
token: str
username: str
expires_at: datetime
class SessionStore:
"""In-memory session table. Thread-safe (api.py uses the stdlib
HTTPServer which handles one request per thread though the default
variant is single-threaded, we keep the lock so swapping to
ThreadingHTTPServer later doesn't require revisiting this).
"""
def __init__(self, ttl_seconds: int = COOKIE_TTL_SECONDS) -> None:
self._ttl = timedelta(seconds=ttl_seconds)
self._by_token: dict[str, Session] = {}
self._lock = threading.Lock()
def create(self, username: str) -> Session:
token = secrets.token_urlsafe(32)
session = Session(
token=token,
username=username,
expires_at=datetime.now(UTC) + self._ttl,
)
with self._lock:
self._by_token[token] = session
return session
def lookup(self, token: str | None) -> Session | None:
if not token:
return None
with self._lock:
session = self._by_token.get(token)
if session is None:
return None
if datetime.now(UTC) >= session.expires_at:
# Expired — drop it on the floor so repeat lookups stay fast.
self._by_token.pop(token, None)
return None
return session
def revoke(self, token: str | None) -> None:
if not token:
return
with self._lock:
self._by_token.pop(token, None)
def clear(self) -> None:
"""Test helper — wipe all sessions."""
with self._lock:
self._by_token.clear()
class LoginAttempts:
"""In-memory rate-limiter for failed logins, keyed by (username, ip).
Parallels SessionStore: thread-safe, uses ``datetime.now(UTC)`` so the
same ``_FakeDatetime`` test shim works, and lives only in memory so a
``systemctl restart furtka`` wipes a stuck lockout. Tuple keying means
a flood from one source IP can't lock the admin out from elsewhere
(different IP, different key); the trade-off is that an attacker
can keep probing forever by rotating IPs, but they still eat the
PBKDF2 cost per attempt.
Stored data is a ``dict[key, list[datetime]]`` of recent failure
timestamps. Every call prunes entries older than ``WINDOW_SECONDS``,
so memory per active key is bounded by ``MAX_FAILURES``.
"""
MAX_FAILURES = 10
WINDOW_SECONDS = 15 * 60
LOCKOUT_SECONDS = 15 * 60
def __init__(
self,
max_failures: int = MAX_FAILURES,
window_seconds: int = WINDOW_SECONDS,
lockout_seconds: int = LOCKOUT_SECONDS,
) -> None:
self._max = max_failures
self._window = timedelta(seconds=window_seconds)
self._lockout = timedelta(seconds=lockout_seconds)
self._fails: dict[tuple[str, str], list[datetime]] = {}
self._lock = threading.Lock()
def _prune_locked(self, key: tuple[str, str], now: datetime) -> list[datetime]:
"""Drop timestamps older than the window; caller holds self._lock."""
cutoff = now - self._window
kept = [ts for ts in self._fails.get(key, ()) if ts >= cutoff]
if kept:
self._fails[key] = kept
else:
self._fails.pop(key, None)
return kept
def register_failure(self, key: tuple[str, str]) -> None:
now = datetime.now(UTC)
with self._lock:
self._prune_locked(key, now)
self._fails.setdefault(key, []).append(now)
def is_locked(self, key: tuple[str, str]) -> bool:
return self.retry_after_seconds(key) > 0
def retry_after_seconds(self, key: tuple[str, str]) -> int:
"""Seconds remaining on an active lockout, or 0 if not locked."""
now = datetime.now(UTC)
with self._lock:
kept = self._prune_locked(key, now)
if len(kept) < self._max:
return 0
# Lockout runs from the oldest retained failure; once it
# falls off the window the key is effectively released.
unlock_at = kept[0] + self._lockout
remaining = (unlock_at - now).total_seconds()
if remaining <= 0:
return 0
return max(1, math.ceil(remaining))
def clear(self, key: tuple[str, str]) -> None:
with self._lock:
self._fails.pop(key, None)
def clear_all(self) -> None:
"""Test helper — wipe all failure state."""
with self._lock:
self._fails.clear()
# Module-level singleton used by the HTTP handler.
SESSIONS = SessionStore()
LOCKOUT = LoginAttempts()
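The lockout arithmetic in `retry_after_seconds` reduces to a pure function over the retained timestamps; restated standalone for illustration (the real class adds locking and window pruning):

```python
import math
from datetime import datetime, timedelta, timezone

def retry_after(failures, max_failures, lockout_seconds, now):
    """Seconds left on an active lockout, or 0.

    `failures` are the still-in-window timestamps, oldest first; the
    lockout clock runs from the oldest one, matching the comment in
    retry_after_seconds.
    """
    if len(failures) < max_failures:
        return 0
    unlock_at = failures[0] + timedelta(seconds=lockout_seconds)
    remaining = (unlock_at - now).total_seconds()
    return 0 if remaining <= 0 else max(1, math.ceil(remaining))
```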


@ -1,253 +0,0 @@
"""Furtka apps catalog sync.
Mirrors the shape of ``furtka.updater`` but targets a separate Forgejo
repo (``daniel/furtka-apps`` by default) whose releases carry a single
``furtka-apps-<ver>.tar.gz`` with ``VERSION`` at the root and an
``apps/<name>/`` tree underneath. Pulling the catalog keeps the on-box
app ecosystem fresh without requiring a Furtka core release; core
ships a seed ``apps/`` under ``/opt/furtka/current/apps/`` that the
resolver falls back to when the catalog is empty or stale.
Flow of ``sync_catalog()``:
1. flock on ``/run/furtka/catalog.lock`` so two triggers (timer + manual
UI click) can't race.
2. ``check_catalog()`` asks Forgejo for the latest release and picks out
the tarball + sidecar URLs.
3. Download tarball + sidecar to ``/var/lib/furtka/catalog/_downloads/``.
4. Verify the sha256 sidecar against the tarball.
5. Extract into ``/var/lib/furtka/catalog/_staging/``.
6. Validate every ``apps/<name>/manifest.json`` via
``furtka.manifest.load_manifest``. A broken catalog release is
refused here, not half-applied.
7. Atomic rename: existing live catalog -> ``catalog.prev/``, staging
-> ``catalog/``, then rmtree the prev. Any failure before this step
leaves the live catalog untouched.
8. Write ``/var/lib/furtka/catalog-state.json`` for the UI.
Paths can be overridden via env vars so tests can redirect everything to
a tmp dir.
"""
from __future__ import annotations
import fcntl
import json
import os
import shutil
import time
from dataclasses import dataclass
from pathlib import Path
from furtka import _release_common as _rc
from furtka.manifest import ManifestError, load_manifest
from furtka.paths import catalog_dir
FORGEJO_HOST = os.environ.get("FURTKA_FORGEJO_HOST", "forgejo.sourcegate.online")
CATALOG_REPO = os.environ.get("FURTKA_CATALOG_REPO", "daniel/furtka-apps")
_CATALOG_STATE = Path(os.environ.get("FURTKA_CATALOG_STATE", "/var/lib/furtka/catalog-state.json"))
_LOCK_PATH = Path(os.environ.get("FURTKA_CATALOG_LOCK", "/run/furtka/catalog.lock"))
_STAGING_NAME = "_staging"
_DOWNLOADS_NAME = "_downloads"
_PREV_SUFFIX = ".prev"
_VERSION_FILE = "VERSION"
class CatalogError(RuntimeError):
"""Any failure in the catalog sync flow that should surface to the caller."""
@dataclass(frozen=True)
class CatalogCheck:
current: str | None
latest: str
update_available: bool
tarball_url: str | None
sha256_url: str | None
def state_path() -> Path:
return _CATALOG_STATE
def lock_path() -> Path:
return _LOCK_PATH
def read_current_catalog_version() -> str | None:
"""Return the string in <catalog_dir>/VERSION, or None if absent / unreadable."""
try:
value = (catalog_dir() / _VERSION_FILE).read_text().strip()
except (FileNotFoundError, NotADirectoryError, OSError):
return None
return value or None
def check_catalog() -> CatalogCheck:
"""Query Forgejo for the latest catalog release.
Uses ``/releases?limit=1`` (not ``/releases/latest``) for the same
reason the core updater does: Forgejo's ``latest`` endpoint skips
pre-releases and 404s when every tag carries a suffix.
"""
current = read_current_catalog_version()
releases = _rc.forgejo_api(
FORGEJO_HOST, CATALOG_REPO, "/releases?limit=1", error_cls=CatalogError
)
if not isinstance(releases, list) or not releases:
raise CatalogError("no catalog releases published yet")
release = releases[0]
latest = str(release.get("tag_name") or "").strip()
if not latest:
raise CatalogError("latest catalog release has empty tag_name")
tarball_url = None
sha256_url = None
for asset in release.get("assets") or []:
name = asset.get("name") or ""
url = asset.get("browser_download_url") or ""
if name.endswith(".tar.gz") and "furtka-apps-" in name:
tarball_url = url
elif name.endswith(".tar.gz.sha256"):
sha256_url = url
available = latest != current and (
current is None or _rc.version_tuple(latest) > _rc.version_tuple(current)
)
return CatalogCheck(
current=current,
latest=latest,
update_available=available,
tarball_url=tarball_url,
sha256_url=sha256_url,
)
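The `update_available` comparison leans on `_rc.version_tuple`, which is not part of this diff. A hypothetical sketch of a CalVer-aware sort key where a bare release outranks its `-alpha`/`-beta` pre-releases (the real helper may well differ):

```python
# Hypothetical sketch; furtka._release_common.version_tuple is not shown
# in this diff. Numeric CalVer parts compare as ints, and a bare release
# sorts after its pre-release suffixes.
_SUFFIX_RANK = {"alpha": 0, "beta": 1, "": 2}

def version_tuple(tag: str) -> tuple:
    base, _, suffix = tag.partition("-")
    nums = tuple(int(part) for part in base.split("."))
    return (nums, _SUFFIX_RANK.get(suffix, 2))
```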
def write_state(stage: str, **extra) -> None:
"""Atomic JSON state write — same shape as updater's update-state.json."""
state_path().parent.mkdir(parents=True, exist_ok=True)
tmp = state_path().with_suffix(".tmp")
payload = {"stage": stage, "updated_at": time.strftime("%Y-%m-%dT%H:%M:%S%z"), **extra}
tmp.write_text(json.dumps(payload, indent=2))
tmp.replace(state_path())
def read_state() -> dict:
try:
return json.loads(state_path().read_text())
except (FileNotFoundError, json.JSONDecodeError):
return {}
def acquire_lock():
path = lock_path()
path.parent.mkdir(parents=True, exist_ok=True)
fh = path.open("w")
try:
fcntl.flock(fh, fcntl.LOCK_EX | fcntl.LOCK_NB)
except BlockingIOError as e:
fh.close()
raise CatalogError("another catalog sync is already in progress") from e
return fh
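The non-blocking `flock` in `acquire_lock` conflicts even between two descriptors opened in the same process, which is what turns a concurrent trigger into the "already in progress" error instead of a hang. A self-contained demonstration:

```python
import fcntl
import tempfile

# flock() locks belong to the open file description, so two separate
# open() calls conflict even inside one process. LOCK_NB makes the
# second attempt fail immediately instead of blocking.
def try_lock(path: str):
    fh = open(path, "w")
    try:
        fcntl.flock(fh, fcntl.LOCK_EX | fcntl.LOCK_NB)
    except BlockingIOError:
        fh.close()
        return None
    return fh

lock_file = tempfile.NamedTemporaryFile(delete=False)
first = try_lock(lock_file.name)   # acquires the lock
second = try_lock(lock_file.name)  # already held, returns None
```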
def _validate_staging(staging: Path, expected_version: str) -> None:
"""Fail hard if the staging tree isn't a well-formed catalog release."""
version_file = staging / _VERSION_FILE
if not version_file.is_file():
raise CatalogError("catalog tarball has no VERSION file at root")
actual = version_file.read_text().strip()
if actual != expected_version:
raise CatalogError(
f"catalog tarball VERSION ({actual!r}) doesn't match expected ({expected_version!r})"
)
apps_root = staging / "apps"
if not apps_root.is_dir():
raise CatalogError("catalog tarball has no apps/ directory")
for entry in sorted(apps_root.iterdir()):
if not entry.is_dir():
continue
manifest_path = entry / "manifest.json"
if not manifest_path.exists():
raise CatalogError(f"catalog app {entry.name!r} has no manifest.json")
try:
load_manifest(manifest_path, expected_name=entry.name)
except ManifestError as e:
raise CatalogError(f"catalog app {entry.name!r}: invalid manifest: {e}") from e
def _atomic_swap(staging: Path) -> None:
"""Move staging → live catalog, keeping the previous tree as .prev until
the rename succeeds so we never leave a half-written catalog on disk."""
live = catalog_dir()
live.parent.mkdir(parents=True, exist_ok=True)
prev = live.with_name(live.name + _PREV_SUFFIX)
if prev.exists():
shutil.rmtree(prev)
if live.exists():
live.rename(prev)
try:
staging.rename(live)
except OSError as e:
if prev.exists():
# try to restore the previous tree; if that also fails the box
# has no catalog at all until the next sync — still better than
# a partially-extracted tree.
try:
prev.rename(live)
except OSError:
pass
raise CatalogError(f"atomic catalog swap failed: {e}") from e
if prev.exists():
shutil.rmtree(prev, ignore_errors=True)
def sync_catalog() -> CatalogCheck:
"""End-to-end sync. Acquires the lock, writes state at each stage, and
leaves the live catalog untouched on any failure before the rename step.
"""
with acquire_lock():
write_state("checking")
check = check_catalog()
if not check.update_available:
write_state("done", version=check.current or check.latest, note="already up to date")
return check
if not check.tarball_url or not check.sha256_url:
raise CatalogError("catalog release is missing tarball or sha256 asset")
# Downloads land in a sibling of the live catalog so half-finished
# artefacts never pollute the live tree, and stay under /var/lib/
# furtka/ so a sync interrupted by reboot can resume instead of
# starting over from /tmp (which clears).
dl_dir = catalog_dir().with_name(catalog_dir().name + _DOWNLOADS_NAME)
dl_dir.mkdir(parents=True, exist_ok=True)
tarball = dl_dir / f"furtka-apps-{check.latest}.tar.gz"
sha_file = dl_dir / f"furtka-apps-{check.latest}.tar.gz.sha256"
write_state("downloading", latest=check.latest)
_rc.download(check.tarball_url, tarball, error_cls=CatalogError)
_rc.download(check.sha256_url, sha_file, error_cls=CatalogError)
write_state("verifying", latest=check.latest)
expected = _rc.parse_sha256_sidecar(sha_file.read_text(), error_cls=CatalogError)
_rc.verify_tarball(tarball, expected, error_cls=CatalogError)
write_state("extracting", latest=check.latest)
staging = catalog_dir().with_name(catalog_dir().name + _STAGING_NAME)
if staging.exists():
shutil.rmtree(staging)
try:
_rc.extract_tarball(tarball, staging, error_cls=CatalogError)
_validate_staging(staging, check.latest)
except CatalogError:
shutil.rmtree(staging, ignore_errors=True)
raise
write_state("swapping", latest=check.latest)
try:
_atomic_swap(staging)
except CatalogError:
shutil.rmtree(staging, ignore_errors=True)
raise
write_state("done", version=check.latest, previous=check.current)
return check
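Step 4 of the flow, verifying the `.sha256` sidecar against the tarball, amounts to comparing a parsed hex digest with a recomputed one. A self-contained sketch (helper names are illustrative; the real `_release_common` helpers are not in this diff):

```python
import hashlib
import tempfile
from pathlib import Path

# Illustrative sidecar verification. Sidecars conventionally hold
# "<64-hex-digest>  <filename>"; only the digest column matters here.
def parse_sha256_sidecar(text: str) -> str:
    digest = text.split()[0].strip().lower()
    if len(digest) != 64 or any(c not in "0123456789abcdef" for c in digest):
        raise ValueError(f"not a sha256 digest: {digest!r}")
    return digest

def verify_file(path: Path, expected: str) -> None:
    actual = hashlib.sha256(path.read_bytes()).hexdigest()
    if actual != expected:
        raise ValueError(f"sha256 mismatch: {actual} != {expected}")

demo = Path(tempfile.mkstemp()[1])
demo.write_bytes(b"hello")
sidecar = hashlib.sha256(b"hello").hexdigest() + "  furtka-apps-26.0.tar.gz\n"
verify_file(demo, parse_sha256_sidecar(sidecar))  # matching digest: no error
try:
    verify_file(demo, "0" * 64)
    mismatch_detected = False
except ValueError:
    mismatch_detected = True
```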

View file

@ -21,22 +21,9 @@ def _cmd_app_list(args: argparse.Namespace) -> int:
"display_name": r.manifest.display_name,
"version": r.manifest.version,
"description": r.manifest.description,
"description_long": r.manifest.description_long,
"volumes": list(r.manifest.volumes),
"ports": list(r.manifest.ports),
"icon": r.manifest.icon,
"open_url": r.manifest.open_url,
"settings": [
{
"name": s.name,
"label": s.label,
"description": s.description,
"type": s.type,
"required": s.required,
"default": s.default,
}
for s in r.manifest.settings
],
}
if r.manifest
else None,
@ -71,24 +58,6 @@ def _cmd_app_install(args: argparse.Namespace) -> int:
return 1 if reconciler.has_errors(actions) else 0
def _cmd_app_install_bg(args: argparse.Namespace) -> int:
"""Docker-facing phases of an install — called by the API via systemd-run.
Internal subcommand; normal CLI users want `app install` (synchronous).
This exists to separate the slow docker pull/up from the synchronous
validation the API does inline, so the UI can poll a state file.
"""
from furtka import install_runner
try:
install_runner.run_install(args.name)
except Exception as e:
# run_install already wrote state="error"; echo for journald.
print(f"install-bg failed: {e}", file=sys.stderr)
return 1
return 0
def _cmd_app_remove(args: argparse.Namespace) -> int:
target = apps_dir() / args.name
if not target.exists():
@ -180,60 +149,6 @@ def _cmd_rollback(args: argparse.Namespace) -> int:
return 0
def _cmd_catalog_sync(args: argparse.Namespace) -> int:
from furtka import catalog
if args.check:
try:
check = catalog.check_catalog()
except catalog.CatalogError as e:
print(f"error: {e}", file=sys.stderr)
return 2
if args.json:
print(
json.dumps(
{
"current": check.current,
"latest": check.latest,
"update_available": check.update_available,
},
indent=2,
)
)
elif check.update_available:
print(f"Catalog update available: {check.current or '(none)'} → {check.latest}")
else:
print(f"Catalog already up to date ({check.current or check.latest})")
return 0
try:
check = catalog.sync_catalog()
except catalog.CatalogError as e:
print(f"error: {e}", file=sys.stderr)
return 2
if not check.update_available:
print(f"Catalog already up to date ({check.current or check.latest})")
else:
print(f"Synced catalog {check.current or '(none)'} → {check.latest}")
return 0
def _cmd_catalog_status(args: argparse.Namespace) -> int:
from furtka import catalog
current = catalog.read_current_catalog_version()
state = catalog.read_state()
if args.json:
print(json.dumps({"current": current, "state": state}, indent=2))
return 0
print(f"Catalog version: {current or '(none — run `furtka catalog sync`)'}")
if state:
print(f"Last sync stage: {state.get('stage', '?')} at {state.get('updated_at', '?')}")
else:
print("Last sync stage: (never)")
return 0
def build_parser() -> argparse.ArgumentParser:
p = argparse.ArgumentParser(prog="furtka", description="Furtka resource manager")
sub = p.add_subparsers(dest="command", required=True)
@ -255,15 +170,6 @@ def build_parser() -> argparse.ArgumentParser:
)
app_install.set_defaults(func=_cmd_app_install)
# Internal — called by the HTTP API via systemd-run. Deliberately omitted
# from the help listing; regular CLI users want `app install` above.
app_install_bg = app_sub.add_parser(
"install-bg",
help=argparse.SUPPRESS,
)
app_install_bg.add_argument("name", help="Installed app folder name")
app_install_bg.set_defaults(func=_cmd_app_install_bg)
app_remove = app_sub.add_parser("remove", help="Stop and uninstall an app (keeps volumes)")
app_remove.add_argument("name", help="App name (folder name under /var/lib/furtka/apps/)")
app_remove.set_defaults(func=_cmd_app_remove)
@ -306,36 +212,6 @@ def build_parser() -> argparse.ArgumentParser:
)
rollback.set_defaults(func=_cmd_rollback)
catalog = sub.add_parser("catalog", help="Manage the apps catalog (daniel/furtka-apps)")
catalog_sub = catalog.add_subparsers(dest="subcommand", required=True)
catalog_sync = catalog_sub.add_parser(
"sync",
help="Download and install the latest apps catalog from Forgejo",
)
catalog_sync.add_argument(
"--check",
action="store_true",
help="Only check whether a catalog update is available; don't apply",
)
catalog_sync.add_argument(
"--json",
action="store_true",
help="Emit machine-readable JSON (only honoured with --check)",
)
catalog_sync.set_defaults(func=_cmd_catalog_sync)
catalog_status = catalog_sub.add_parser(
"status",
help="Print the currently-installed catalog version and last-sync stage",
)
catalog_status.add_argument(
"--json",
action="store_true",
help="Emit machine-readable JSON",
)
catalog_status.set_defaults(func=_cmd_catalog_status)
return p

View file

@ -1,30 +1,15 @@
"""Local-CA HTTPS helpers for the `tls internal` setup.
Caddy generates the local root CA lazily on first start and keeps it under
$XDG_DATA_HOME/caddy/pki/authorities/local/; our packaged caddy.service
sets `XDG_DATA_HOME=/var/lib`, so on the target that resolves to
/var/lib/caddy/pki/authorities/local/. The private key stays 0600 /
$XDG_DATA_HOME/caddy/pki/authorities/local/; on the target that's
/var/lib/caddy/.local/share/caddy/pki/authorities/local/ (the caddy system
user's XDG_DATA_HOME resolves there). The private key stays 0600 /
caddy-owned; we only ever read the public root.crt next to it.
HTTPS is **opt-in** since 26.15-alpha. Default Caddyfile has no `:443`
site block, so `tls internal` never triggers cert issuance. The
/settings toggle drops a snippet file into /etc/caddy/furtka-https.d/
that adds the hostname+tls-internal block (plus the redirect snippet
inside /etc/caddy/furtka.d/ for HTTP→HTTPS). Disabling the toggle
removes both snippets and reloads; Caddy falls back to HTTP-only.
Why opt-in: fresh-install boxes used to always serve a self-signed
cert on :443. Any browser that had ever trusted a previous Furtka
box's local CA rejected the new cert with an unbypassable
SEC_ERROR_BAD_SIGNATURE; Firefox in particular has no "Advanced →
Accept" for that case. Making HTTPS explicit means fresh installs
never hit that trap; users who want HTTPS download the rootCA.crt
first and then click the toggle.
This module exposes:
- status(): CA fingerprint + current toggle state
- set_force_https(enabled): write/remove BOTH snippets atomically,
reload Caddy, roll back on failure.
This module exposes two operations:
- status(): current CA fingerprint + whether force-HTTPS is on
- set_force_https(enabled): write/remove the Caddy import snippet that
redirects HTTP to HTTPS, reload Caddy, roll back on failure.
"""
import base64
@ -33,13 +18,10 @@ import re
import subprocess
from pathlib import Path
CA_CERT_PATH = Path("/var/lib/caddy/pki/authorities/local/root.crt")
CA_CERT_PATH = Path("/var/lib/caddy/.local/share/caddy/pki/authorities/local/root.crt")
SNIPPET_DIR = Path("/etc/caddy/furtka.d")
REDIRECT_SNIPPET = SNIPPET_DIR / "redirect.caddyfile"
REDIRECT_CONTENT = "redir https://{host}{uri} permanent\n"
HTTPS_SNIPPET_DIR = Path("/etc/caddy/furtka-https.d")
HTTPS_SNIPPET = HTTPS_SNIPPET_DIR / "https.caddyfile"
HOSTNAME_FILE = Path("/etc/hostname")
_PEM_RE = re.compile(
r"-----BEGIN CERTIFICATE-----\s*(.+?)\s*-----END CERTIFICATE-----",
@ -51,30 +33,6 @@ class HttpsError(Exception):
"""Recoverable failure from set_force_https — the caller should 5xx."""
def _read_hostname(hostname_file: Path = HOSTNAME_FILE) -> str:
"""Return the box's hostname, stripped. Falls back to 'furtka' so a
missing /etc/hostname doesn't produce an empty site block that Caddy
would reject at parse time."""
try:
value = hostname_file.read_text().strip()
except (FileNotFoundError, PermissionError, OSError):
return "furtka"
return value or "furtka"
def _https_snippet_content(hostname: str) -> str:
"""Caddy site block the HTTPS toggle installs at opt-in.
Serves <hostname>.local and <hostname> on :443 with Caddy's
`tls internal` (local CA auto-issuance), and imports the shared
furtka_routes snippet so the :443 listener exposes the same
routes as :80. Must be written at top-level (not inside another
site block); that's why the Caddyfile imports furtka-https.d at
top-level rather than inside :80.
"""
return f"{hostname}.local, {hostname} {{\n\ttls internal\n\timport furtka_routes\n}}\n"
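The doubled braces in the f-string above are how literal Caddy braces survive formatting; for the default hostname the function returns this block:

```python
# Rendered snippet for hostname "furtka". The {{ / }} in the f-string
# become literal { and } in the Caddyfile output.
hostname = "furtka"
snippet = f"{hostname}.local, {hostname} {{\n\ttls internal\n\timport furtka_routes\n}}\n"
# The Caddyfile block this produces:
#   furtka.local, furtka {
#       tls internal
#       import furtka_routes
#   }
```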
def _ca_fingerprint(ca_path: Path) -> str | None:
try:
pem = ca_path.read_text()
@ -96,20 +54,13 @@ def _format_fingerprint(hex_upper: str) -> str:
def status(
ca_path: Path = CA_CERT_PATH,
https_snippet: Path = HTTPS_SNIPPET,
snippet: Path = REDIRECT_SNIPPET,
) -> dict:
"""force_https is True iff the HTTPS listener snippet exists.
Before 26.15-alpha this checked the redirect snippet instead, but
the redirect alone without a :443 listener wouldn't actually serve
HTTPS, so the listener snippet is the authoritative "HTTPS is on"
signal.
"""
fp = _ca_fingerprint(ca_path)
return {
"ca_available": fp is not None,
"fingerprint_sha256": _format_fingerprint(fp) if fp else None,
"force_https": https_snippet.is_file(),
"force_https": snippet.is_file(),
"ca_download_url": "/rootCA.crt",
}
@ -127,48 +78,29 @@ def set_force_https(
enabled: bool,
snippet_dir: Path = SNIPPET_DIR,
snippet: Path = REDIRECT_SNIPPET,
https_snippet_dir: Path = HTTPS_SNIPPET_DIR,
https_snippet: Path = HTTPS_SNIPPET,
hostname_file: Path = HOSTNAME_FILE,
reload_caddy=_default_reload,
) -> bool:
"""Toggle HTTPS by writing or removing two snippets atomically:
1. The top-level HTTPS hostname+tls-internal block (enables :443
listener + Caddy's `tls internal` cert issuance)
2. The :80-scoped redirect snippet (forces HTTP→HTTPS)
Reload Caddy after the snippet swap. On reload failure both
snippets are reverted to their pre-call state so a bad config
can't leave Caddy wedged.
"""Toggle the HTTP→HTTPS redirect by writing or removing the snippet
Caddy imports. Always reloads Caddy. Rolls the snippet state back on
reload failure so a broken config can't leave Caddy wedged on the next
restart.
"""
snippet_dir.mkdir(mode=0o755, parents=True, exist_ok=True)
https_snippet_dir.mkdir(mode=0o755, parents=True, exist_ok=True)
had_redirect = snippet.is_file()
previous_redirect = snippet.read_text() if had_redirect else None
had_https = https_snippet.is_file()
previous_https = https_snippet.read_text() if had_https else None
had = snippet.is_file()
previous = snippet.read_text() if had else None
if enabled:
snippet.write_text(REDIRECT_CONTENT)
https_snippet.write_text(_https_snippet_content(_read_hostname(hostname_file)))
else:
if had_redirect:
elif had:
snippet.unlink()
if had_https:
https_snippet.unlink()
try:
reload_caddy()
except subprocess.CalledProcessError as e:
_revert(snippet, previous_redirect)
_revert(https_snippet, previous_https)
_revert(snippet, previous)
msg = (e.stderr or e.stdout or "").strip() or f"exit {e.returncode}"
raise HttpsError(f"caddy reload failed: {msg}") from e
except FileNotFoundError as e:
_revert(snippet, previous_redirect)
_revert(https_snippet, previous_https)
_revert(snippet, previous)
raise HttpsError(f"systemctl not available: {e}") from e
return enabled

View file

@ -1,121 +0,0 @@
"""Background job for app installs — progress-visible via state file.
The slow part of installing an app is `docker compose pull` on a large
image (Jellyfin ~500 MB); without progress feedback, the UI modal sits
dead on "Installing…" for 30+ seconds and the user wonders if it hung.
This module mirrors the exact same shape as ``furtka.catalog`` and
``furtka.updater`` so the UI can poll an install just like it polls a
catalog sync or a self-update. The split is:
- ``furtka.api._do_install`` runs synchronously: resolve source, copy
the app folder, write .env, validate path settings + placeholders.
Those are fast, and their failures deserve an immediate 4xx so the
install modal can surface them in-line.
- After that the API writes an initial state file (stage
"pulling_image") and dispatches ``systemd-run --unit=furtka-install-
<name>`` to run ``furtka app install-bg <name>`` in the background.
That CLI subcommand is what calls ``run_install()`` here; it does the
docker-facing phases and writes state transitions as it goes.
State file schema (``/var/lib/furtka/install-state.json``):
{
"stage": "pulling_image" | "creating_volumes"
| "starting_container" | "done" | "error",
"updated_at": "2026-04-21T17:30:45+0200",
"app": "jellyfin",
"version": "1.0.0", // added at "done"
"error": "details..." // added at "error"
}
Lock: ``/run/furtka/install.lock`` (tmpfs, reboot-safe). Global, not
per-app; two parallel installs are not a v1 use-case and the lock
keeps the state-file representation simple (one in-flight install at
a time).
"""
from __future__ import annotations
import fcntl
import json
import os
import time
from pathlib import Path
from furtka import dockerops
from furtka.manifest import load_manifest
from furtka.paths import apps_dir
_INSTALL_STATE = Path(os.environ.get("FURTKA_INSTALL_STATE", "/var/lib/furtka/install-state.json"))
_LOCK_PATH = Path(os.environ.get("FURTKA_INSTALL_LOCK", "/run/furtka/install.lock"))
class InstallRunnerError(RuntimeError):
"""Any failure in the background install flow that should surface to the caller."""
def state_path() -> Path:
return _INSTALL_STATE
def lock_path() -> Path:
return _LOCK_PATH
def write_state(stage: str, **extra) -> None:
"""Atomic JSON state write — same shape as catalog/update-state."""
state_path().parent.mkdir(parents=True, exist_ok=True)
tmp = state_path().with_suffix(".tmp")
payload = {"stage": stage, "updated_at": time.strftime("%Y-%m-%dT%H:%M:%S%z"), **extra}
tmp.write_text(json.dumps(payload, indent=2))
tmp.replace(state_path())
def read_state() -> dict:
try:
return json.loads(state_path().read_text())
except (FileNotFoundError, json.JSONDecodeError):
return {}
def acquire_lock():
path = lock_path()
path.parent.mkdir(parents=True, exist_ok=True)
fh = path.open("w")
try:
fcntl.flock(fh, fcntl.LOCK_EX | fcntl.LOCK_NB)
except BlockingIOError as e:
fh.close()
raise InstallRunnerError("another install is already in progress") from e
return fh
def run_install(name: str) -> None:
"""Docker-facing phases of the install: pull → volumes → compose up.
Called by the ``furtka app install-bg <name>`` CLI subcommand from the
systemd-run spawned by the API. Assumes the API has already run
``installer.install_from()``, so the app folder, .env, and manifest
are on disk at ``apps_dir() / <name>``.
Every phase transition is written to the state file for the UI to
poll. On exception the state flips to ``"error"`` with the message,
then the exception is re-raised so the CLI exits non-zero and
journald has a traceback.
"""
with acquire_lock():
target = apps_dir() / name
manifest = load_manifest(target / "manifest.json", expected_name=name)
try:
write_state("pulling_image", app=name)
dockerops.compose_pull(target, name)
write_state("creating_volumes", app=name)
for short in manifest.volumes:
dockerops.ensure_volume(manifest.volume_name(short))
write_state("starting_container", app=name)
dockerops.compose_up(target, name)
write_state("done", app=name, version=manifest.version)
except Exception as e:
write_state("error", app=name, error=str(e))
raise
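The stage values written above are what the UI polls for. A minimal poll loop over the documented state-file schema (the field names come from the docstring; the loop itself, its timeout, and its interval are illustrative):

```python
import json
import tempfile
import time
from pathlib import Path

# Illustrative UI-side polling of install-state.json: spin until the
# stage reaches a terminal value ("done" or "error") or we give up.
def wait_for_install(state_file: Path, timeout: float = 300.0, interval: float = 1.0) -> dict:
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            state = json.loads(state_file.read_text())
        except (FileNotFoundError, json.JSONDecodeError):
            state = {}  # state file not written yet
        if state.get("stage") in ("done", "error"):
            return state
        time.sleep(interval)
    return {"stage": "error", "error": "timed out waiting for install"}

demo = Path(tempfile.mkstemp()[1])
demo.write_text(json.dumps({"stage": "done", "app": "jellyfin", "version": "1.0.0"}))
result = wait_for_install(demo, timeout=5.0, interval=0.01)
```

Because `write_state` replaces the file atomically, a poller never reads a half-written JSON document; the `JSONDecodeError` branch only covers a missing or empty file.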

View file

@ -1,9 +1,8 @@
import shutil
from pathlib import Path
from furtka import sources
from furtka.manifest import Manifest, ManifestError, load_manifest
from furtka.paths import apps_dir
from furtka.manifest import ManifestError, load_manifest
from furtka.paths import apps_dir, bundled_apps_dir
# Values that an app's .env.example may use as obvious "fill me in" markers.
# If any of these reach the live .env, install refuses — otherwise we'd ship
@ -11,25 +10,6 @@ from furtka.paths import apps_dir
# default that ends up screenshotted on Hacker News.
PLACEHOLDER_SECRETS: frozenset[str] = frozenset({"changeme"})
# System paths that must never be accepted as a user-supplied `path`-type
# setting. The user is root on their own box, so this is about preventing
# accidental footguns (typing `/etc` when they meant `/mnt/etc`), not
# defending against an attacker. Matches exact paths and their subtrees
# after `Path.resolve()` — so `/mnt/../etc` also lands here.
DENIED_PATH_PREFIXES: tuple[str, ...] = (
"/etc",
"/root",
"/boot",
"/proc",
"/sys",
"/dev",
"/bin",
"/sbin",
"/usr/bin",
"/usr/sbin",
"/var/lib/furtka",
)
class InstallError(RuntimeError):
pass
@ -50,53 +30,6 @@ def _placeholder_keys(env_path: Path) -> list[str]:
return bad
def _is_denied_system_path(resolved: str) -> bool:
if resolved == "/":
return True
for bad in DENIED_PATH_PREFIXES:
if resolved == bad or resolved.startswith(bad + "/"):
return True
return False
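Resolution order is the important bit: `Path.resolve()` collapses `..` segments before the prefix check runs, so disguised spellings of a denied path are caught too. A standalone sketch of the same check:

```python
from pathlib import Path

# Subset of the denylist above, enough to show the resolve-then-match order.
DENIED = ("/etc", "/root", "/boot", "/proc", "/sys", "/dev")

def is_denied(user_value: str) -> bool:
    # Resolve first: "/mnt/../etc" collapses to "/etc" before matching.
    resolved = str(Path(user_value).resolve(strict=False))
    if resolved == "/":
        return True
    return any(resolved == d or resolved.startswith(d + "/") for d in DENIED)
```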
def _path_setting_errors(m: Manifest, env_path: Path) -> list[str]:
"""Validate the filesystem paths named by `path`-type settings.
Returns one human-readable message per offending setting. Empty values
on non-required settings are allowed; the required-field check in the
caller already refuses blanks on required fields before write.
"""
if not env_path.exists():
return []
values = _read_env(env_path)
errors: list[str] = []
for s in m.settings:
if s.type != "path":
continue
value = values.get(s.name, "")
if not value:
continue
p = Path(value)
if not p.is_absolute():
errors.append(f"{s.name}={value!r} must be an absolute path (start with /)")
continue
try:
resolved = p.resolve(strict=False)
except (OSError, RuntimeError) as e:
errors.append(f"{s.name}={value!r} cannot be resolved: {e}")
continue
if _is_denied_system_path(str(resolved)):
errors.append(f"{s.name}={value!r} resolves into a system path and is not allowed")
continue
if not resolved.exists():
errors.append(f"{s.name}={value!r} does not exist on this box")
continue
if not resolved.is_dir():
errors.append(f"{s.name}={value!r} is not a directory")
continue
return errors
def _format_env_value(v: str) -> str:
# Quote values that contain whitespace, quotes, or shell metacharacters so
# docker-compose's env substitution reads them back intact. Simple values
@ -125,18 +58,17 @@ def resolve_source(source: str) -> Path:
"""Resolve a `furtka app install <source>` arg to a real source folder.
If `source` looks like a path (or exists on disk), use it. Otherwise treat
it as an app name and look it up via `furtka.sources.resolve_app_name`
which checks the synced catalog first and falls back to the bundled seed.
it as a bundled app name and look it up under /opt/furtka/apps/<name>.
"""
p = Path(source)
if p.is_dir():
return p
if "/" in source or source.startswith("."):
raise InstallError(f"{source!r} is not a directory")
resolved = sources.resolve_app_name(source)
if resolved is None:
raise InstallError(f"{source!r} not found as a path, catalog app, or bundled app")
return resolved.path
bundled = bundled_apps_dir() / source
if bundled.is_dir():
return bundled
raise InstallError(f"{source!r} not found as a path or bundled app")
def install_from(src: Path, settings: dict[str, str] | None = None) -> Path:
@ -226,10 +158,6 @@ def install_from(src: Path, settings: dict[str, str] | None = None) -> Path:
f"file and re-run `furtka app install {m.name}`."
)
path_errors = _path_setting_errors(m, env)
if path_errors:
raise InstallError(f"{m.name}: {'; '.join(path_errors)}")
return target
@ -301,9 +229,6 @@ def update_env(name: str, settings: dict[str, str]) -> Path:
bad = _placeholder_keys(env)
if bad:
raise InstallError(f"{m.name}: {env} still has placeholder values for {', '.join(bad)}.")
path_errors = _path_setting_errors(m, env)
if path_errors:
raise InstallError(f"{m.name}: {'; '.join(path_errors)}")
return target

View file

@ -13,7 +13,7 @@ REQUIRED_FIELDS = (
"icon",
)
VALID_SETTING_TYPES = frozenset({"text", "password", "number", "path"})
VALID_SETTING_TYPES = frozenset({"text", "password", "number"})
SETTING_NAME_RE = re.compile(r"^[A-Z_][A-Z0-9_]*$")
@ -42,12 +42,6 @@ class Manifest:
icon: str
description_long: str = ""
settings: tuple[Setting, ...] = field(default_factory=tuple)
# Optional "Open" link for the landing page + installed-app row.
# `{host}` is substituted with the current browser hostname at render
# time so the URL follows whatever the user typed to reach Furtka —
# furtka.local, a raw IP, a future reverse-proxy hostname. Apps with
# no frontend (CLI-only, background workers) leave this empty.
open_url: str = ""
def volume_name(self, short: str) -> str:
# Namespace volume names so two apps can each declare e.g. "data"
@ -133,10 +127,6 @@ def load_manifest(path: Path, expected_name: str | None = None) -> Manifest:
settings = _parse_settings(raw.get("settings"), path)
open_url_raw = raw.get("open_url", "")
if not isinstance(open_url_raw, str):
raise ManifestError(f"{path}: open_url must be a string if set")
return Manifest(
name=name,
display_name=str(raw["display_name"]),
@ -147,5 +137,4 @@ def load_manifest(path: Path, expected_name: str | None = None) -> Manifest:
icon=str(raw["icon"]),
description_long=str(raw.get("description_long", "")),
settings=settings,
open_url=open_url_raw,
)

View file

@ -1,95 +0,0 @@
"""Stdlib-only password hashing, compatible with werkzeug's hash format.
Why this exists: 26.11-alpha introduced auth via ``werkzeug.security``,
but the target system doesn't have ``werkzeug`` installed (Core runs as
system Python with only the stdlib; pyproject.toml's ``flask>=3.0``
dep is never pip-installed on the box). Fresh installs from a 26.11 /
26.12 ISO crashed on import; upgrades from pre-auth versions were
double-broken by that plus a too-strict updater health check.
Fix: replace werkzeug with stdlib equivalents using the same hash
**format** so existing ``users.json`` files created by 26.11 / 26.12 on
the rare boxes that happened to have werkzeug installed (Medion, .196
after manual pacman) still verify.
Format: ``<method>$<salt>$<hex digest>``
- ``pbkdf2:<hash>:<iterations>``: what we generate by default here
- ``scrypt:<N>:<r>:<p>``: what werkzeug's default produces
Both are implemented via ``hashlib`` which has been stdlib since 3.6.
"""
from __future__ import annotations
import hashlib
import hmac
import secrets
_PBKDF2_HASH = "sha256"
_PBKDF2_ITERATIONS = 600_000
_SALT_LEN = 16
def hash_password(password: str) -> str:
"""Return a ``pbkdf2:sha256:<iter>$<salt>$<hex>`` hash of *password*.
PBKDF2-SHA256 over UTF-8. 600k iterations: same as werkzeug's
default in the 3.x series, roughly OWASP 2023's recommendation.
"""
if not isinstance(password, str):
raise TypeError("password must be str")
salt = secrets.token_urlsafe(_SALT_LEN)[:_SALT_LEN]
dk = hashlib.pbkdf2_hmac(
_PBKDF2_HASH, password.encode("utf-8"), salt.encode("utf-8"), _PBKDF2_ITERATIONS
)
return f"pbkdf2:{_PBKDF2_HASH}:{_PBKDF2_ITERATIONS}${salt}${dk.hex()}"
def verify_password(password: str, hashed: str) -> bool:
"""Constant-time verify *password* against a stored *hashed* value.
Accepts both our own pbkdf2 hashes and legacy werkzeug scrypt
hashes in ``scrypt:N:r:p$salt$hex`` form so users.json files
written by 26.11 / 26.12 keep working after upgrade.
"""
if not isinstance(password, str) or not isinstance(hashed, str):
return False
try:
method, salt, expected = hashed.split("$", 2)
except ValueError:
return False
parts = method.split(":")
if not parts:
return False
algo = parts[0]
pw_bytes = password.encode("utf-8")
salt_bytes = salt.encode("utf-8")
try:
if algo == "pbkdf2":
if len(parts) < 3:
return False
inner_hash = parts[1]
iterations = int(parts[2])
dk = hashlib.pbkdf2_hmac(inner_hash, pw_bytes, salt_bytes, iterations)
elif algo == "scrypt":
# werkzeug: scrypt:N:r:p, dklen=64, maxmem=132 MiB. Without
# the explicit maxmem we'd hit OpenSSL's default memory cap
# and throw ValueError on N >= 32768.
if len(parts) < 4:
return False
n = int(parts[1])
r = int(parts[2])
p = int(parts[3])
dk = hashlib.scrypt(
pw_bytes,
salt=salt_bytes,
n=n,
r=r,
p=p,
dklen=64,
maxmem=132 * 1024 * 1024,
)
else:
return False
except (ValueError, TypeError, OverflowError):
return False
return hmac.compare_digest(dk.hex(), expected)
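A round trip through the stored format, building both a pbkdf2 entry like `hash_password` emits and a legacy werkzeug-shaped scrypt entry (parameters below are chosen for demo speed, not werkzeug's actual defaults):

```python
import hashlib
import secrets

# Build a "<method>$<salt>$<hexdigest>" entry the way hash_password does.
salt = secrets.token_urlsafe(16)[:16]
dk = hashlib.pbkdf2_hmac("sha256", b"hunter2", salt.encode("utf-8"), 600_000)
stored = f"pbkdf2:sha256:600000${salt}${dk.hex()}"

# Legacy werkzeug-shaped scrypt entry. n=16384 keeps the demo quick;
# werkzeug's own default N is higher, which is why the explicit maxmem
# matters on real hashes (OpenSSL's default cap rejects large N).
sdk = hashlib.scrypt(b"hunter2", salt=salt.encode("utf-8"), n=16384, r=8,
                     p=1, dklen=64, maxmem=132 * 1024 * 1024)
legacy = f"scrypt:16384:8:1${salt}${sdk.hex()}"
```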

View file

@ -7,19 +7,6 @@ DEFAULT_APPS_DIR = Path("/var/lib/furtka/apps")
# symlink. A flat /opt/furtka/apps path would break the Phase-2 self-update
# flow (symlink swap wouldn't move the bundled-app tree along with the code).
DEFAULT_BUNDLED_APPS_DIR = Path("/opt/furtka/current/apps")
# Catalog apps come from `furtka catalog sync` pulling the daniel/furtka-apps
# release tarball. Lives under /var/lib/furtka/ so it survives core self-
# updates — the resolver (furtka.sources) prefers it over the bundled seed.
DEFAULT_CATALOG_DIR = Path("/var/lib/furtka/catalog")
# Users / auth state. One JSON file keyed by role — today only "admin" exists.
# Lives under /var/lib/furtka/ so self-updates don't stomp it. Mode 0600 is
# enforced by furtka.auth.save_users (same atomic-write pattern as the app
# .env files).
DEFAULT_USERS_FILE = Path("/var/lib/furtka/users.json")
# Static-web asset dir served by the Python handler for / and
# /settings* so those pages pick up the auth-guard. Caddy also serves
# /style.css and other assets directly from here for the login page.
DEFAULT_STATIC_WWW = Path("/opt/furtka/current/assets/www")
def apps_dir() -> Path:
@@ -28,19 +15,3 @@ def apps_dir() -> Path:
def bundled_apps_dir() -> Path:
return Path(os.environ.get("FURTKA_BUNDLED_APPS_DIR", DEFAULT_BUNDLED_APPS_DIR))
def catalog_dir() -> Path:
return Path(os.environ.get("FURTKA_CATALOG_DIR", DEFAULT_CATALOG_DIR))
def catalog_apps_dir() -> Path:
return catalog_dir() / "apps"
def users_file() -> Path:
return Path(os.environ.get("FURTKA_USERS_FILE", DEFAULT_USERS_FILE))
def static_www_dir() -> Path:
return Path(os.environ.get("FURTKA_STATIC_WWW", DEFAULT_STATIC_WWW))

View file

@@ -1,75 +0,0 @@
"""Single lookup layer for "where does app <name> live right now?".
Three origins an app folder can come from:
- ``catalog``: the daily-synced ``/var/lib/furtka/catalog/apps/`` tree
that ``furtka.catalog.sync_catalog`` maintains.
- ``bundled``: the seed ``/opt/furtka/current/apps/`` tree shipped
inside the core release tarball. Used for first-boot before any
catalog sync has run, and as the fallback when the catalog is stale,
missing, or doesn't know about this app.
- ``local``: an explicit directory path passed to ``furtka app install
/path/to/src``; bypasses this module entirely.
Catalog wins on collision. The precedence is deliberate: when the user
pressed "Sync apps catalog" they want what they synced, not whatever the
core tarball happened to carry.
"""
from __future__ import annotations
from dataclasses import dataclass
from pathlib import Path
from furtka.paths import bundled_apps_dir, catalog_apps_dir
@dataclass(frozen=True)
class AppSource:
path: Path
origin: str # "catalog" | "bundled" | "local"
def resolve_app_name(name: str) -> AppSource | None:
"""Return the source folder for a bundled/catalog app name.
Checks catalog first, then bundled seed. Presence is tested by
``manifest.json`` existing; an empty folder or a stray ``.env``
won't register. Returns ``None`` if the name isn't known anywhere.
"""
cat = catalog_apps_dir() / name
if (cat / "manifest.json").is_file():
return AppSource(cat, "catalog")
bundled = bundled_apps_dir() / name
if (bundled / "manifest.json").is_file():
return AppSource(bundled, "bundled")
return None
def list_available() -> list[AppSource]:
"""Catalog plus bundled; catalog wins on name collision.
Each entry is a folder containing a manifest.json. Ordering is
alphabetical by folder name, which matches how the scanner sorts so
the UI list stays stable across sync/reboot.
"""
seen: dict[str, AppSource] = {}
cat_root = catalog_apps_dir()
if cat_root.is_dir():
for entry in sorted(cat_root.iterdir()):
if not entry.is_dir():
continue
if not (entry / "manifest.json").is_file():
continue
seen[entry.name] = AppSource(entry, "catalog")
bundled_root = bundled_apps_dir()
if bundled_root.is_dir():
for entry in sorted(bundled_root.iterdir()):
if not entry.is_dir():
continue
if entry.name in seen:
continue
if not (entry / "manifest.json").is_file():
continue
seen[entry.name] = AppSource(entry, "bundled")
return [seen[name] for name in sorted(seen)]
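The catalog-over-bundled precedence can be exercised standalone. A minimal sketch re-implementing the manifest-presence lookup over two throwaway trees (the `resolve` helper and directory names here are illustrative, not the module's real API):

```python
import tempfile
from pathlib import Path

def resolve(name: str, catalog: Path, bundled: Path):
    # Mirror of resolve_app_name: catalog wins, presence == manifest.json.
    for root, origin in ((catalog, "catalog"), (bundled, "bundled")):
        if (root / name / "manifest.json").is_file():
            return (root / name, origin)
    return None

tmp = Path(tempfile.mkdtemp())
for tree in ("catalog/fileshare", "bundled/fileshare", "bundled/notes"):
    (tmp / tree).mkdir(parents=True)
    (tmp / tree / "manifest.json").write_text("{}")

print(resolve("fileshare", tmp / "catalog", tmp / "bundled")[1])  # catalog
print(resolve("notes", tmp / "catalog", tmp / "bundled")[1])      # bundled
print(resolve("ghost", tmp / "catalog", tmp / "bundled"))         # None
```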

View file

@@ -29,18 +29,18 @@ the updater at a tmpdir.
from __future__ import annotations
import fcntl
import hashlib
import json
import os
import shutil
import subprocess
import tarfile
import time
import urllib.error
import urllib.request
from dataclasses import dataclass
from pathlib import Path
from furtka import _release_common as _rc
FORGEJO_HOST = os.environ.get("FURTKA_FORGEJO_HOST", "forgejo.sourcegate.online")
FORGEJO_REPO = os.environ.get("FURTKA_FORGEJO_REPO", "daniel/furtka")
_FURTKA_ROOT = Path(os.environ.get("FURTKA_ROOT", "/opt/furtka"))
@@ -49,12 +49,7 @@ _CADDYFILE_LIVE = Path(os.environ.get("FURTKA_CADDYFILE_PATH", "/etc/caddy/Caddy
_CADDY_SNIPPET_DIR = Path(
os.environ.get("FURTKA_CADDY_SNIPPET_DIR", str(_CADDYFILE_LIVE.parent / "furtka.d"))
)
_CADDY_HTTPS_SNIPPET_DIR = Path(
os.environ.get("FURTKA_CADDY_HTTPS_SNIPPET_DIR", str(_CADDYFILE_LIVE.parent / "furtka-https.d"))
)
_SYSTEMD_DIR = Path(os.environ.get("FURTKA_SYSTEMD_DIR", "/etc/systemd/system"))
_HOSTNAME_FILE = Path(os.environ.get("FURTKA_HOSTNAME_FILE", "/etc/hostname"))
_CADDYFILE_HOSTNAME_MARKER = "__FURTKA_HOSTNAME__"
class UpdateError(RuntimeError):
@@ -98,11 +93,37 @@ def read_current_version() -> str:
return "dev"
def _forgejo_api(path: str) -> dict | list:
return _rc.forgejo_api(FORGEJO_HOST, FORGEJO_REPO, path, error_cls=UpdateError)
def _forgejo_api(path: str) -> dict:
url = f"https://{FORGEJO_HOST}/api/v1/repos/{FORGEJO_REPO}{path}"
req = urllib.request.Request(url, headers={"Accept": "application/json"})
try:
with urllib.request.urlopen(req, timeout=15) as resp:
return json.loads(resp.read())
except (urllib.error.URLError, json.JSONDecodeError) as e:
raise UpdateError(f"forgejo api {url}: {e}") from e
_version_tuple = _rc.version_tuple
def _version_tuple(v: str) -> tuple:
"""Compare CalVer tags like 26.1-alpha < 26.1-beta < 26.1 < 26.2-alpha.
The "stable" release (no suffix) sorts after its own pre-releases. Uses a
tuple of (year, release, stage-rank, stage-tag). Stage rank: alpha=0,
beta=1, rc=2, stable=3, unknown=-1.
"""
stage_rank = {"alpha": 0, "beta": 1, "rc": 2}
head, _, suffix = v.partition("-")
try:
year_str, release_str = head.split(".", 1)
year = int(year_str)
release = int(release_str)
except (ValueError, IndexError):
return (-1, -1, -1, v)
if not suffix:
return (year, release, 3, "")
for name, rank in stage_rank.items():
if suffix.startswith(name):
return (year, release, rank, suffix)
return (year, release, -1, suffix)
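The removed inline `_version_tuple` is pure, so the ordering its docstring promises can be checked directly. The function body below is copied from the hunk above; only the sample tag list is added:

```python
def version_tuple(v: str) -> tuple:
    # CalVer ordering: 26.1-alpha < 26.1-beta < 26.1 < 26.2-alpha.
    stage_rank = {"alpha": 0, "beta": 1, "rc": 2}
    head, _, suffix = v.partition("-")
    try:
        year_str, release_str = head.split(".", 1)
        year, release = int(year_str), int(release_str)
    except (ValueError, IndexError):
        return (-1, -1, -1, v)
    if not suffix:
        return (year, release, 3, "")  # stable sorts after its own pre-releases
    for name, rank in stage_rank.items():
        if suffix.startswith(name):
            return (year, release, rank, suffix)
    return (year, release, -1, suffix)

tags = ["26.2-alpha", "26.1", "26.1-beta", "26.1-alpha"]
print(sorted(tags, key=version_tuple))
# ['26.1-alpha', '26.1-beta', '26.1', '26.2-alpha']
```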
def check_update() -> UpdateCheck:
@@ -142,97 +163,78 @@ def check_update() -> UpdateCheck:
def _download(url: str, dest: Path) -> None:
_rc.download(url, dest, error_cls=UpdateError)
dest.parent.mkdir(parents=True, exist_ok=True)
req = urllib.request.Request(url)
try:
with urllib.request.urlopen(req, timeout=60) as resp, dest.open("wb") as f:
shutil.copyfileobj(resp, f)
except urllib.error.URLError as e:
raise UpdateError(f"download {url}: {e}") from e
_sha256_of = _rc.sha256_of
def _sha256_of(path: Path) -> str:
h = hashlib.sha256()
with path.open("rb") as f:
for chunk in iter(lambda: f.read(1024 * 1024), b""):
h.update(chunk)
return h.hexdigest()
def verify_tarball(tarball: Path, expected_sha: str) -> None:
_rc.verify_tarball(tarball, expected_sha, error_cls=UpdateError)
actual = _sha256_of(tarball)
if actual != expected_sha:
raise UpdateError(f"sha256 mismatch: expected {expected_sha}, got {actual}")
def _parse_sha256_sidecar(text: str) -> str:
return _rc.parse_sha256_sidecar(text, error_cls=UpdateError)
"""Extract the hash from a standard `sha256sum` sidecar line."""
line = text.strip().split("\n", 1)[0].strip()
if not line:
raise UpdateError("empty sha256 sidecar")
return line.split()[0]
def _extract_tarball(tarball: Path, dest: Path) -> str:
return _rc.extract_tarball(tarball, dest, error_cls=UpdateError)
def _current_hostname() -> str:
"""Read the box's hostname from /etc/hostname, falling back to 'furtka'.
Used to substitute the __FURTKA_HOSTNAME__ marker in the shipped Caddyfile
so Caddy's `tls internal` sees a real name to issue a leaf cert for.
"""
"""Extract the tarball and return the VERSION read from its root."""
dest.mkdir(parents=True, exist_ok=True)
with tarfile.open(tarball, "r:gz") as tf:
# defensive: refuse entries that would escape dest
for member in tf.getmembers():
if member.name.startswith(("/", "..")) or ".." in Path(member.name).parts:
raise UpdateError(f"refusing tarball entry {member.name!r}")
# Python 3.12+ grew a stricter default filter; opt into it where
# available to catch symlink-escape / device-node / setuid tricks
# that our regex check can't see. Older Pythons fall back to the
# historical permissive behaviour.
try:
name = _HOSTNAME_FILE.read_text().strip()
except (FileNotFoundError, PermissionError, OSError):
return "furtka"
return name or "furtka"
def _maybe_migrate_preserve_https() -> None:
"""26.14 → 26.15 migration: if the box already had the force-HTTPS
redirect snippet on disk, that means the user explicitly opted
into HTTPS under the old regime. Under the new opt-in regime,
HTTPS also requires a separate listener snippet; write it here so
the user's HTTPS doesn't silently break when the Caddyfile refresh
removes the default hostname block.
"""
redirect_snippet = _CADDY_SNIPPET_DIR / "redirect.caddyfile"
https_snippet = _CADDY_HTTPS_SNIPPET_DIR / "https.caddyfile"
if not redirect_snippet.is_file() or https_snippet.is_file():
return
hostname = _current_hostname()
https_snippet.write_text(
f"{hostname}.local, {hostname} {{\n\ttls internal\n\timport furtka_routes\n}}\n"
)
tf.extractall(dest, filter="data")
except TypeError:
tf.extractall(dest)
version_file = dest / "VERSION"
if not version_file.is_file():
raise UpdateError("tarball has no VERSION file at root")
return version_file.read_text().strip()
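The escape check inside `_extract_tarball` boils down to one predicate over member names. A standalone sketch of that same check — the helper name and sample paths are illustrative:

```python
from pathlib import Path

def is_safe_member(name: str) -> bool:
    # Refuse absolute paths and any ".." path component, as the
    # extractor above does before unpacking.
    return not (name.startswith(("/", "..")) or ".." in Path(name).parts)

print(is_safe_member("furtka/api.py"))       # True
print(is_safe_member("../etc/passwd"))       # False
print(is_safe_member("/etc/passwd"))         # False
print(is_safe_member("apps/../../escape"))   # False
```

Note this regex-free check catches plain traversal only; symlink-escape and device-node tricks are what the `filter="data"` opt-in on Python 3.12+ covers.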
def _refresh_caddyfile(source: Path) -> bool:
"""Copy the shipped Caddyfile to /etc/caddy/ iff it differs. Returns True
if the file changed (so caddy needs more than a bare reload).
Substitutes __FURTKA_HOSTNAME__ with the current hostname before comparing
and writing: the same rendering the webinstaller applies at install time, so
a self-update lands byte-identical content when nothing else changed.
"""
if the file changed (so caddy needs more than a bare reload)."""
if not source.is_file():
return False
# Snippet dirs for the /api/furtka/https/force toggle. Pre-HTTPS
# installs don't have them; ensure both so the Caddyfile's glob
# imports can't trip an older Caddy on missing paths during the
# first reload. furtka-https.d is new in 26.15-alpha — older boxes
# upgrading across this version line won't have it on disk yet.
# Snippet dir for the /api/furtka/https/force toggle. Pre-HTTPS installs
# don't have this dir; ensure it so the Caddyfile's glob import can't
# trip an older Caddy on a missing path during the first reload.
_CADDY_SNIPPET_DIR.mkdir(mode=0o755, parents=True, exist_ok=True)
_CADDY_HTTPS_SNIPPET_DIR.mkdir(mode=0o755, parents=True, exist_ok=True)
# Migration: pre-26.15 Caddyfile always served :443 via tls internal,
# so a box that had the "force HTTPS" redirect toggle ON relied on
# HTTPS being there implicitly. After this Caddyfile refresh the
# hostname block is gone, so the redirect would 301 to a dead :443.
# Preserve intent by writing the HTTPS listener snippet too.
_maybe_migrate_preserve_https()
rendered = source.read_text().replace(_CADDYFILE_HOSTNAME_MARKER, _current_hostname())
if _CADDYFILE_LIVE.is_file() and rendered == _CADDYFILE_LIVE.read_text():
if _CADDYFILE_LIVE.is_file() and source.read_bytes() == _CADDYFILE_LIVE.read_bytes():
return False
_CADDYFILE_LIVE.parent.mkdir(parents=True, exist_ok=True)
_CADDYFILE_LIVE.write_text(rendered)
shutil.copy(source, _CADDYFILE_LIVE)
return True
def _link_new_units(unit_dir: Path) -> list[str]:
"""`systemctl link` any unit file in unit_dir that isn't already symlinked
into /etc/systemd/system/. Returns the list of newly-linked unit names.
Newly-linked `.timer` units are additionally `systemctl enable`d so that
a self-update introducing a timer (e.g. 26.5 → 26.6 adding
furtka-catalog-sync.timer) activates it automatically; the installer's
enable list only applies to fresh installs. A linked-but-disabled timer
never fires on its own, so without this step catalog sync would never
happen on upgraded boxes.
"""
into /etc/systemd/system/. Returns the list of newly-linked unit names."""
if not unit_dir.is_dir():
return []
linked = []
@@ -243,8 +245,6 @@ def _link_new_units(unit_dir: Path) -> list[str]:
if target.exists() or target.is_symlink():
continue
_run(["systemctl", "link", str(unit_file)])
if unit_file.suffix == ".timer":
_run(["systemctl", "enable", unit_file.name])
linked.append(unit_file.name)
return linked
@@ -285,35 +285,13 @@ def _run(cmd: list[str]) -> None:
def _health_check(url: str, deadline_s: float = 30.0) -> bool:
"""Poll *url* until we get *any* response from the Python server.
Treats any 2xx-4xx response as "server is up". A 401 on
/api/apps after the 26.11-alpha auth-guard shipped is a perfectly
valid signal that the new code imported + the socket is listening;
rejecting the request is still "alive". Only 5xx or connection-
level failures count as unhealthy.
Rationale: pre-26.13 this function hit /api/apps and expected 200,
which silently broke every upgrade across the auth boundary (26.10 →
26.11+) and auto-rolled back. Now we just need proof the new
process came up.
"""
end = time.time() + deadline_s
while time.time() < end:
try:
with urllib.request.urlopen(url, timeout=3) as resp:
# Any 2xx/3xx → alive. urllib follows redirects by
# default, so a 302 → /login resolves to /login's 200.
if resp.status < 500:
return True
except urllib.error.HTTPError as e:
# 4xx → server is up, just refused us (auth, bad request,
# whatever). Counts as healthy for the "did it come back"
# check. 5xx → genuinely broken, don't accept.
if 400 <= e.code < 500:
if resp.status == 200:
return True
except urllib.error.URLError:
# Connection refused / DNS / timeout → not up yet, retry.
pass
time.sleep(1)
return False

View file

@@ -21,18 +21,15 @@ The script re-execs itself inside a privileged `archlinux:latest` container. Tha
The build starts from Arch's stock `releng` profile (the same one used to build the official Arch ISO), then overlays our customizations from `overlay/`:
| Overlay file | Effect |
|----------------------------------------------|----------------------------------------------------------------------------------|
|-------------------------------------------|----------------------------------------------------------------------------------|
| `overlay/packages.extra` | Appended to the package list. Adds `python`, `python-flask`, `avahi`, `nss-mdns` |
| `overlay/profiledef.sh` | Appended to `profiledef.sh`. Renames the ISO to `furtka-*` with a dated version |
| `overlay/airootfs/opt/furtka/` | Directory where `webinstaller/` is copied at build time |
| `overlay/airootfs/etc/hostname` | Live-ISO hostname (`proksi`) so mDNS advertises the installer as `proksi.local` |
| `overlay/airootfs/etc/issue` | Welcome banner on the TTY pointing users at `http://proksi.local:5000` |
| `overlay/airootfs/usr/local/bin/furtka-update-issue` | Rewrites `/etc/issue` at runtime so the banner also shows the DHCP-assigned IP as a fallback URL |
| `overlay/airootfs/etc/systemd/system/` | `furtka-webinstaller.service` (Flask on :5000) + `furtka-issue.service` (runs the banner-updater on network-online), each symlinked into `multi-user.target.wants/` to auto-start on boot |
| `overlay/airootfs/etc/systemd/system/` | Contains `furtka-webinstaller.service` + a symlink into `multi-user.target.wants/` so it auto-starts on boot |
The systemd service runs `flask --app app run --host 0.0.0.0 --port 5000` under `/opt/furtka`. The `0.0.0.0` binding is important — the Flask default is localhost-only, which wouldn't be reachable from another machine on the LAN.
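A unit along these lines would produce that behavior — a sketch only, not the shipped overlay file; the exact `ExecStart` and dependency lines may differ:

```ini
# overlay/airootfs/etc/systemd/system/furtka-webinstaller.service (sketch)
[Unit]
Description=Furtka web installer
After=network.target

[Service]
WorkingDirectory=/opt/furtka
ExecStart=/usr/bin/flask --app app run --host 0.0.0.0 --port 5000
Restart=on-failure

[Install]
WantedBy=multi-user.target
```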
mDNS is wired: `avahi-daemon` + `nss-mdns` come from `packages.extra`, the live ISO's hostname is `proksi`, and as soon as `systemd-networkd-wait-online` fires the installer is reachable at `http://proksi.local:5000`. The raw IP still shows on the console for fallback — some Windows clients need the Bonjour service for `.local` to resolve at all.
mDNS (`proksi.local`) via avahi is installed but not yet wired. First milestone is just "boot → browser → wizard at raw IP". Naming comes next.
## Test flow
@@ -54,7 +51,7 @@ mDNS is wired: `avahi-daemon` + `nss-mdns` come from `packages.extra`, the live
Once `archinstall` finishes and you click **Reboot now**, the VM comes up into the installed system. No more port `:5000` — the wizard ISO is gone. Instead:
- **Console**: agetty shows `Furtka is ready. Open http://<hostname>.local …` with the IP fallback underneath.
- **Browser** at `http://<hostname>.local` (default `http://furtka.local` — the form's default hostname is `furtka`; only the live-installer ISO uses `proksi`): Caddy-served landing page with three live status tiles (uptime, Docker version, free disk) refreshed every 30 s by `furtka-status.timer`. HTTPS is opt-in (26.15-alpha) — flip the toggle in `/settings` to switch on Caddy's `tls internal` on `:443`, then trust `rootCA.crt` from `/settings` to clear browser warnings.
- **Browser** at `http://<hostname>.local` (default `http://proksi.local`): Caddy-served landing page with three live status tiles (uptime, Docker version, free disk) refreshed every 30 s by `furtka-status.timer`.
- **SSH**: `ssh <user>@<hostname>.local` works; `docker ps` works without `sudo` because the user is in the `docker` group.
This is a demo shell — no Authentik, no app store yet. The landing page lives at `/srv/furtka/www/`, served by Caddy on `:80` per `/etc/caddy/Caddyfile`. All of this is written into the target by `webinstaller/app.py`'s `_post_install_commands` via archinstall's `custom_commands`.
@@ -62,4 +59,5 @@ This is a demo shell — no Authentik, no app store yet. The landing page lives
## Known rough edges
- **Disk space**: the first time you build on a fresh host, the squashfs/xorriso steps need ~15 GB free. If the host's LVM-root is smaller, `xorriso` silently dies at the very end with "Image size exceeds free space on media".
- **Live-installer wizard is still HTTP-only**. `http://proksi.local:5000` during install has no TLS; once the box reboots, Caddy can serve `tls internal` on `:443` if the user opts in via `/settings` (26.15-alpha), but bringing TLS to the wizard itself is a later milestone.
- **No HTTPS yet**. The Furtka plan is "local CA + green padlock on `https://proksi.local`" — that's a later milestone. For now, plain HTTP.
- **Boot USB could appear as an install target on bare metal**. On a VM the ISO is a CD-ROM (filtered) and SATA is the only disk, so the picker only shows the install target. On bare metal with a USB stick, the USB is `TYPE=disk` and shows up alongside the real install drive; a user could in theory pick the USB they just booted from. Mitigating this needs detecting the boot media (via `findmnt /run/archiso/bootmnt` or similar) and filtering it out in `webinstaller/drives.py`.

View file

@@ -78,11 +78,6 @@ cp -a "$REPO_ROOT/webinstaller/." "$PROFILE_WORK/airootfs/opt/furtka/"
# next to webinstaller/app.py so _resolve_assets_dir() finds it at runtime.
cp -a "$REPO_ROOT/assets" "$PROFILE_WORK/airootfs/opt/furtka/assets"
rm -rf "$PROFILE_WORK/airootfs/opt/furtka/__pycache__"
# VERSION next to the webinstaller so the wizard footer can render the
# release string at runtime instead of carrying a hardcoded one. Matches
# what the resource-manager payload ships in its own VERSION file below.
ISO_VERSION=$(grep -E '^version = ' "$REPO_ROOT/pyproject.toml" | head -1 | sed 's/.*= "\(.*\)"/\1/')
echo "$ISO_VERSION" > "$PROFILE_WORK/airootfs/opt/furtka/VERSION"
# Pack the resource manager (furtka/ Python package + bundled apps/) as a
# tarball that webinstaller hands to archinstall via custom_commands. Lives at
@@ -99,9 +94,9 @@ cp -a "$REPO_ROOT/apps" "$PAYLOAD_STAGE/"
cp -a "$REPO_ROOT/assets" "$PAYLOAD_STAGE/"
find "$PAYLOAD_STAGE" -type d -name __pycache__ -exec rm -rf {} +
# VERSION at tarball root: the installer reads it to choose the versions/<ver>/
# directory name and /opt/furtka/current/VERSION reports it at runtime. Same
# value we wrote into /opt/furtka/VERSION for the live wizard footer above.
echo "$ISO_VERSION" > "$PAYLOAD_STAGE/VERSION"
# directory name and /opt/furtka/current/VERSION reports it at runtime.
grep -E '^version = ' "$REPO_ROOT/pyproject.toml" | head -1 \
| sed 's/.*= "\(.*\)"/\1/' > "$PAYLOAD_STAGE/VERSION"
tar -czf "$PROFILE_WORK/airootfs/opt/furtka-resource-manager.tar.gz" \
-C "$PAYLOAD_STAGE" .
rm -rf "$PAYLOAD_STAGE"

View file

@@ -8,23 +8,6 @@ server {
charset utf-8;
gzip on;
gzip_vary on;
gzip_min_length 1024;
gzip_comp_level 6;
gzip_types
text/css
text/plain
text/xml
application/javascript
application/json
application/xml
application/rss+xml
application/atom+xml
image/svg+xml
font/woff
font/woff2;
location / {
try_files $uri $uri/ $uri.html =404;
}

View file

@@ -1,6 +1,6 @@
[project]
name = "furtka"
version = "26.15-alpha"
version = "26.4-alpha"
description = "Open-source home server OS — simple enough for everyone."
requires-python = ">=3.11"
readme = "README.md"

View file

@@ -99,20 +99,4 @@ upload_asset "$TARBALL"
upload_asset "$SHA_FILE"
upload_asset "$RELEASE_JSON"
# Optional: attach the live-installer ISO when dist/furtka-<version>.iso
# exists. Release workflows that want this build the ISO via iso/build.sh
# and move the output here before calling publish-release. Local runs
# that skip the ISO step still publish the core release successfully.
#
# Soft-fail: the ISO is ~1 GB and Forgejo's reverse proxy has returned
# 504 on the upload even when the write eventually succeeds. The core
# tarball (which boxes need for self-update) is already uploaded above,
# so don't let an ISO transport hiccup fail the whole release.
ISO="$DIST_DIR/furtka-$VERSION.iso"
if [ -f "$ISO" ]; then
if ! upload_asset "$ISO"; then
echo "warning: ISO upload failed — release published without ISO asset" >&2
fi
fi
echo "Release $VERSION published: https://$HOST/$REPO/releases/tag/$VERSION"

View file

@@ -50,22 +50,9 @@ SHORT_SHA="${SHA:0:12}"
API="https://${PVE_TEST_HOST}:8006/api2/json"
api() {
# Wrapper so that on non-2xx we print the PVE response body to stderr
# before bubbling the failure — otherwise `--fail-with-body` output
# gets swallowed by callers that pipe to /dev/null, and you're left
# staring at "curl: (22)" with no idea which permission is missing.
local body rc
body=$(curl --silent --show-error --fail-with-body -k \
curl --silent --show-error --fail-with-body -k \
--header "Authorization: PVEAPIToken=${PVE_TEST_TOKEN}" \
"$@" 2>&1)
rc=$?
if [[ $rc -ne 0 ]]; then
echo "!! PVE API call failed (rc=$rc)" >&2
echo "!! request: $*" >&2
[[ -n "$body" ]] && echo "!! response: $body" >&2
return $rc
fi
printf '%s' "$body"
"$@"
}
# PVE returns {"data": <payload>}; grab .data into a python expression.
@@ -153,19 +140,14 @@ MAC_LOWER="$(echo "$MAC" | tr 'A-Z' 'a-z')"
IP=""
deadline=$((SECONDS + 150))
while (( SECONDS < deadline )); do
# Capture-then-parse instead of piping directly into awk. `awk '... exit'`
# exits on first match, which SIGPIPEs the upstream arp-scan (exit 141).
# With `set -o pipefail` active that kills the whole script — exactly what
# happened the first time host-networking gave arp-scan real matches.
SCAN=""
if command -v arp-scan >/dev/null 2>&1; then
SCAN="$(sudo arp-scan --localnet --quiet --ignoredups 2>/dev/null || true)"
IP="$(awk -v m="$MAC_LOWER" 'tolower($2) == m { print $1; exit }' <<<"$SCAN")"
IP="$(sudo arp-scan --localnet --quiet --ignoredups 2>/dev/null \
| awk -v m="$MAC_LOWER" 'tolower($2) == m { print $1; exit }')"
fi
if [[ -z "$IP" ]] && command -v nmap >/dev/null 2>&1; then
sudo nmap -sn -T4 192.168.178.0/24 >/dev/null 2>&1 || true
NEIGH="$(ip neigh show)"
IP="$(awk -v m="$MAC_LOWER" 'tolower($5) == m && $1 ~ /^[0-9]/ { print $1; exit }' <<<"$NEIGH")"
IP="$(ip neigh show \
| awk -v m="$MAC_LOWER" 'tolower($5) == m && $1 ~ /^[0-9]/ { print $1; exit }')"
fi
[[ -n "$IP" ]] && break
sleep 5

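The capture-then-parse fix above keeps awk's early `exit` from SIGPIPE-killing the scan under `pipefail`. A standalone check of the same awk extraction against fabricated scan output (the MAC and IPs are examples):

```shell
set -o pipefail
MAC_LOWER="aa:bb:cc:dd:ee:ff"
# Fake two lines of arp-scan output: IP <tab> MAC <tab> vendor.
SCAN=$'192.168.178.10\taa:bb:cc:dd:ee:ff\tVendorA\n192.168.178.11\t11:22:33:44:55:66\tVendorB'
# Herestring, not a pipe: awk's exit-on-first-match can't SIGPIPE anything.
IP="$(awk -v m="$MAC_LOWER" 'tolower($2) == m { print $1; exit }' <<<"$SCAN")"
echo "$IP"  # 192.168.178.10
```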
View file

@@ -5,7 +5,7 @@ import urllib.request
import pytest
from furtka import api, auth, dockerops
from furtka import api, dockerops
VALID_MANIFEST = {
"name": "fileshare",
@@ -22,48 +22,13 @@ VALID_MANIFEST = {
def fake_dirs(tmp_path, monkeypatch):
apps = tmp_path / "apps"
bundled = tmp_path / "bundled"
catalog = tmp_path / "catalog"
users_file = tmp_path / "users.json"
static_www = tmp_path / "www"
apps.mkdir()
bundled.mkdir()
static_www.mkdir()
(static_www / "index.html").write_text("<html>landing page</html>")
(static_www / "settings").mkdir()
(static_www / "settings" / "index.html").write_text("<html>settings page</html>")
monkeypatch.setenv("FURTKA_APPS_DIR", str(apps))
monkeypatch.setenv("FURTKA_BUNDLED_APPS_DIR", str(bundled))
monkeypatch.setenv("FURTKA_CATALOG_DIR", str(catalog))
monkeypatch.setenv("FURTKA_USERS_FILE", str(users_file))
monkeypatch.setenv("FURTKA_STATIC_WWW", str(static_www))
# install_runner writes to /var/lib/furtka/install-state.json and
# /run/furtka/install.lock by default — redirect into tmp_path so
# test code doesn't need root.
monkeypatch.setenv("FURTKA_INSTALL_STATE", str(tmp_path / "install-state.json"))
monkeypatch.setenv("FURTKA_INSTALL_LOCK", str(tmp_path / "install.lock"))
# install_runner caches env vars at import time, so reload it to
# pick up the tmp-path env vars this fixture just set.
import importlib
from furtka import install_runner
importlib.reload(install_runner)
# Scrub any sessions or lockout counters that leaked from a prior
# test — both stores are module-level.
auth.SESSIONS.clear()
auth.LOCKOUT.clear_all()
return apps, bundled
@pytest.fixture
def admin_session(fake_dirs):
"""Pre-create an admin account + live session. Returns a Cookie header
value ready to drop into urllib.request.Request(headers=...)."""
auth.create_admin("daniel", "hunter2-pw")
session = auth.SESSIONS.create("daniel")
return f"{auth.COOKIE_NAME}={session.token}"
@pytest.fixture
def no_docker(monkeypatch):
"""Stub docker calls so install/remove can run without a daemon."""
@@ -72,29 +37,6 @@ def no_docker(monkeypatch):
monkeypatch.setattr(dockerops, "compose_down", lambda app_dir, project: None)
@pytest.fixture
def no_systemd_run(monkeypatch):
"""Stub the systemd-run dispatch in _do_install so tests don't need it.
The install endpoint now spawns a background systemd-run unit to do
the docker-facing phases. Tests that exercise the install path only
care that the sync pre-phase succeeded and the dispatch was
attempted with the right args they shouldn't actually fire up
systemd. subprocess.run gets monkeypatched to return a fake success
CompletedProcess, and the call args get captured for assertions.
"""
import subprocess
calls = []
def fake_run(cmd, check=False, capture_output=False, text=False, **kwargs):
calls.append(cmd)
return subprocess.CompletedProcess(cmd, 0, stdout="", stderr="")
monkeypatch.setattr(subprocess, "run", fake_run)
return calls
def _write_bundled(bundled, name, manifest=None, env_example=None):
app = bundled / name
app.mkdir()
@@ -109,19 +51,17 @@ def test_list_installed_empty(fake_dirs):
assert api._list_installed() == []
def test_list_available_empty(fake_dirs):
assert api._list_available() == []
def test_list_bundled_empty(fake_dirs):
assert api._list_bundled() == []
def test_list_available_shows_uninstalled(fake_dirs):
def test_list_bundled_shows_uninstalled(fake_dirs):
_, bundled = fake_dirs
_write_bundled(bundled, "fileshare")
out = api._list_available()
out = api._list_bundled()
assert len(out) == 1
assert out[0]["name"] == "fileshare"
assert "display_name" in out[0]
# Source field lets the UI later distinguish catalog from bundled seed.
assert out[0]["source"] == "bundled"
# --- Icon inlining ----------------------------------------------------------
@@ -179,15 +119,15 @@ def test_read_icon_svg_rejects_javascript_url(tmp_path):
assert api._read_icon_svg(tmp_path, "icon.svg") is None
def test_list_available_inlines_icon_svg(fake_dirs):
def test_list_bundled_inlines_icon_svg(fake_dirs):
_, bundled = fake_dirs
app = _write_bundled(bundled, "fileshare")
_write_icon(app, _SIMPLE_SVG)
[entry] = api._list_available()
[entry] = api._list_bundled()
assert entry["icon_svg"] == _SIMPLE_SVG
def test_list_installed_inlines_icon_svg(fake_dirs, no_docker, no_systemd_run):
def test_list_installed_inlines_icon_svg(fake_dirs, no_docker):
apps, bundled = fake_dirs
app = _write_bundled(bundled, "fileshare", env_example="A=real")
_write_icon(app, _SIMPLE_SVG)
@@ -196,38 +136,18 @@ def test_list_installed_inlines_icon_svg(fake_dirs, no_docker, no_systemd_run):
assert entry["icon_svg"] == _SIMPLE_SVG
def test_list_available_hides_already_installed(fake_dirs, no_docker, no_systemd_run):
def test_list_bundled_hides_already_installed(fake_dirs, no_docker):
apps, bundled = fake_dirs
_write_bundled(bundled, "fileshare", env_example="A=real")
status, _ = api._do_install("fileshare")
assert status == 202 # async dispatch
# Now bundled should NOT include fileshare anymore — the app folder
# exists on disk (install_from finished synchronously before the
# dispatch), which is what _list_available uses for the "installed"
# check.
assert api._list_available() == []
assert status == 200
# Now bundled should NOT include fileshare anymore.
assert api._list_bundled() == []
# But installed list should.
installed = api._list_installed()
assert len(installed) == 1 and installed[0]["name"] == "fileshare"
def test_list_available_prefers_catalog_over_bundled(fake_dirs):
_, bundled = fake_dirs
catalog_root = bundled.parent / "catalog" / "apps"
catalog_root.mkdir(parents=True)
_write_bundled(bundled, "fileshare")
# A fileshare in the catalog as well — manifest version 0.2.0 to tell the two apart.
catalog_manifest = dict(VALID_MANIFEST, version="0.2.0")
cat_app = catalog_root / "fileshare"
cat_app.mkdir()
(cat_app / "manifest.json").write_text(json.dumps(catalog_manifest))
out = api._list_available()
assert len(out) == 1
assert out[0]["source"] == "catalog"
assert out[0]["version"] == "0.2.0"
def test_install_endpoint_rejects_placeholder(fake_dirs):
_, bundled = fake_dirs
_write_bundled(bundled, "fileshare", env_example="SMB_PASSWORD=changeme")
@@ -247,7 +167,7 @@ def test_remove_endpoint_unknown(fake_dirs, no_docker):
assert status == 404
def test_remove_endpoint_happy_path(fake_dirs, no_docker, no_systemd_run):
def test_remove_endpoint_happy_path(fake_dirs, no_docker):
apps, bundled = fake_dirs
_write_bundled(bundled, "fileshare", env_example="A=real")
api._do_install("fileshare")
@@ -258,39 +178,23 @@ def test_remove_endpoint_happy_path(fake_dirs, no_docker):
assert not (apps / "fileshare").exists()
def _request(port, path, cookie=None, method="GET", body=None):
headers = {}
if cookie is not None:
headers["Cookie"] = cookie
data = None
if body is not None:
headers["Content-Type"] = "application/json"
data = json.dumps(body).encode()
return urllib.request.Request(
f"http://127.0.0.1:{port}{path}",
data=data,
headers=headers,
method=method,
)
def test_http_get_apps_route(fake_dirs, no_docker, admin_session):
def test_http_get_apps_route(fake_dirs, no_docker):
"""Smoke test the actual HTTP server with a real socket, urllib client."""
server = api.HTTPServer(("127.0.0.1", 0), api._Handler) # port 0 → ephemeral
port = server.server_address[1]
t = threading.Thread(target=server.serve_forever, daemon=True)
t.start()
try:
with urllib.request.urlopen(_request(port, "/api/apps", cookie=admin_session)) as r:
with urllib.request.urlopen(f"http://127.0.0.1:{port}/api/apps") as r:
assert r.status == 200
data = json.loads(r.read())
assert data == []
with urllib.request.urlopen(_request(port, "/apps", cookie=admin_session)) as r:
with urllib.request.urlopen(f"http://127.0.0.1:{port}/") as r:
assert r.status == 200
assert b"Furtka Apps" in r.read()
# Unknown route → 404 JSON.
try:
urllib.request.urlopen(_request(port, "/api/nope", cookie=admin_session))
urllib.request.urlopen(f"http://127.0.0.1:{port}/api/nope")
raise AssertionError("expected 404")
except urllib.error.HTTPError as e:
assert e.code == 404
@@ -299,18 +203,17 @@ def test_http_get_apps_route(fake_dirs, no_docker, admin_session):
server.server_close()
def test_http_post_install_unknown_app(fake_dirs, admin_session):
def test_http_post_install_unknown_app(fake_dirs):
server = api.HTTPServer(("127.0.0.1", 0), api._Handler)
port = server.server_address[1]
t = threading.Thread(target=server.serve_forever, daemon=True)
t.start()
try:
req = _request(
port,
"/api/apps/install",
cookie=admin_session,
req = urllib.request.Request(
f"http://127.0.0.1:{port}/api/apps/install",
data=json.dumps({"name": "ghost"}).encode(),
headers={"Content-Type": "application/json"},
method="POST",
body={"name": "ghost"},
)
try:
urllib.request.urlopen(req)
@@ -324,447 +227,6 @@ def test_http_post_install_unknown_app(fake_dirs, admin_session):
server.server_close()
# --- Auth guard + login flow ------------------------------------------------
def _start_server():
server = api.HTTPServer(("127.0.0.1", 0), api._Handler)
port = server.server_address[1]
t = threading.Thread(target=server.serve_forever, daemon=True)
t.start()
return server, port
def test_unauthenticated_api_returns_401(fake_dirs):
# No admin_session fixture → no cookie on the request.
server, port = _start_server()
try:
try:
urllib.request.urlopen(_request(port, "/api/apps"))
raise AssertionError("expected 401")
except urllib.error.HTTPError as e:
assert e.code == 401
body = json.loads(e.read())
assert body["error"] == "not authenticated"
finally:
server.shutdown()
server.server_close()
def test_unauthenticated_html_redirects_to_login(fake_dirs):
server, port = _start_server()
try:
# Disable redirect following so we can inspect the 302.
opener = urllib.request.build_opener(_NoRedirectHandler())
try:
opener.open(_request(port, "/apps"))
raise AssertionError("expected 302")
except urllib.error.HTTPError as e:
assert e.code == 302
assert e.headers["Location"] == "/login"
finally:
server.shutdown()
server.server_close()
class _NoRedirectHandler(urllib.request.HTTPRedirectHandler):
def redirect_request(self, *args, **kwargs):
return None
def test_unauth_root_redirects_to_login(fake_dirs):
"""/ was previously Caddy-direct static HTML, bypassing auth. Now
Python serves it and the auth guard applies: an unauthenticated
visitor gets bounced to /login just like /apps does."""
server, port = _start_server()
try:
opener = urllib.request.build_opener(_NoRedirectHandler())
try:
opener.open(_request(port, "/"))
raise AssertionError("expected 302")
except urllib.error.HTTPError as e:
assert e.code == 302
assert e.headers["Location"] == "/login"
finally:
server.shutdown()
server.server_close()
def test_unauth_settings_redirects_to_login(fake_dirs):
server, port = _start_server()
try:
opener = urllib.request.build_opener(_NoRedirectHandler())
for path in ("/settings", "/settings/"):
try:
opener.open(_request(port, path))
raise AssertionError(f"expected 302 for {path}")
except urllib.error.HTTPError as e:
assert e.code == 302
assert e.headers["Location"] == "/login"
finally:
server.shutdown()
server.server_close()
def test_authed_root_serves_static_index(fake_dirs, admin_session):
server, port = _start_server()
try:
with urllib.request.urlopen(_request(port, "/", cookie=admin_session)) as r:
assert r.status == 200
assert r.read() == b"<html>landing page</html>"
finally:
server.shutdown()
server.server_close()
def test_authed_settings_serves_static(fake_dirs, admin_session):
server, port = _start_server()
try:
for path in ("/settings", "/settings/"):
with urllib.request.urlopen(_request(port, path, cookie=admin_session)) as r:
assert r.status == 200
assert r.read() == b"<html>settings page</html>"
finally:
server.shutdown()
server.server_close()
def test_authed_root_does_not_serve_apps_html(fake_dirs, admin_session):
"""Regression guard: the pre-26.14 do_GET had `if self.path in ("/",
"/apps", ...)` which served _HTML (the apps page) for / too, since
Caddy wasn't proxying /, nobody noticed. Now that Caddy does
proxy /, the two paths must serve different content."""
server, port = _start_server()
try:
with urllib.request.urlopen(_request(port, "/", cookie=admin_session)) as r:
root_body = r.read()
with urllib.request.urlopen(_request(port, "/apps", cookie=admin_session)) as r:
apps_body = r.read()
assert root_body != apps_body
assert b"Furtka Apps" in apps_body
assert b"landing page" in root_body
finally:
server.shutdown()
server.server_close()
def test_get_login_renders_login_form_when_admin_exists(fake_dirs):
auth.create_admin("daniel", "hunter2-pw")
server, port = _start_server()
try:
with urllib.request.urlopen(_request(port, "/login")) as r:
html = r.read().decode()
assert r.status == 200
assert "Furtka login" in html
# No setup confirm-password field rendered in login mode.
assert 'id="password2"' not in html
assert "Repeat password" not in html
finally:
server.shutdown()
server.server_close()
def test_get_login_renders_setup_form_when_no_admin(fake_dirs):
server, port = _start_server()
try:
with urllib.request.urlopen(_request(port, "/login")) as r:
html = r.read().decode()
assert r.status == 200
assert "Set admin password" in html
assert "password2" in html # setup confirm field rendered
finally:
server.shutdown()
server.server_close()
def test_get_login_redirects_when_already_authed(fake_dirs, admin_session):
server, port = _start_server()
try:
opener = urllib.request.build_opener(_NoRedirectHandler())
try:
opener.open(_request(port, "/login", cookie=admin_session))
raise AssertionError("expected 302")
except urllib.error.HTTPError as e:
assert e.code == 302
assert e.headers["Location"] == "/apps"
finally:
server.shutdown()
server.server_close()
def test_post_login_setup_creates_admin(fake_dirs):
server, port = _start_server()
try:
req = _request(
port,
"/login",
method="POST",
body={
"username": "daniel",
"password": "a-real-password",
"password2": "a-real-password",
},
)
with urllib.request.urlopen(req) as r:
assert r.status == 200
set_cookie = r.headers["Set-Cookie"]
assert auth.COOKIE_NAME in set_cookie
assert "HttpOnly" in set_cookie
assert "SameSite=Strict" in set_cookie
# users.json got written.
assert auth.load_users()["admin"]["username"] == "daniel"
# And the password really works.
assert auth.authenticate("daniel", "a-real-password") is True
finally:
server.shutdown()
server.server_close()
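The Set-Cookie attributes asserted above (HttpOnly, SameSite=Strict, the auth.COOKIE_NAME prefix) can be produced with the stdlib cookie machinery. The sketch below is an assumption about the header's shape, not the actual api/auth code; `make_session_cookie` is a hypothetical name.

```python
from http import cookies

def make_session_cookie(name: str, token: str, max_age: int = 86400) -> str:
    # Hypothetical helper: builds a hardened Set-Cookie value with the
    # attributes the tests above assert on. The real furtka modules may
    # construct the header differently.
    c = cookies.SimpleCookie()
    c[name] = token
    c[name]["httponly"] = True      # not readable from page JavaScript
    c[name]["samesite"] = "Strict"  # never sent on cross-site requests
    c[name]["path"] = "/"
    c[name]["max-age"] = max_age
    return c[name].OutputString()
```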
def test_post_login_setup_rejects_password_mismatch(fake_dirs):
server, port = _start_server()
try:
req = _request(
port,
"/login",
method="POST",
body={"username": "x", "password": "abcdefgh", "password2": "different"},
)
try:
urllib.request.urlopen(req)
raise AssertionError("expected 400")
except urllib.error.HTTPError as e:
assert e.code == 400
body = json.loads(e.read())
assert "match" in body["error"].lower()
# No admin created.
assert auth.setup_needed() is True
finally:
server.shutdown()
server.server_close()
def test_post_login_setup_rejects_short_password(fake_dirs):
server, port = _start_server()
try:
req = _request(
port,
"/login",
method="POST",
body={"username": "x", "password": "short", "password2": "short"},
)
try:
urllib.request.urlopen(req)
raise AssertionError("expected 400")
except urllib.error.HTTPError as e:
assert e.code == 400
finally:
server.shutdown()
server.server_close()
def test_post_login_success_with_correct_credentials(fake_dirs):
auth.create_admin("daniel", "hunter2-pw")
server, port = _start_server()
try:
req = _request(
port,
"/login",
method="POST",
body={"username": "daniel", "password": "hunter2-pw"},
)
with urllib.request.urlopen(req) as r:
assert r.status == 200
set_cookie = r.headers["Set-Cookie"]
assert auth.COOKIE_NAME in set_cookie
finally:
server.shutdown()
server.server_close()
def test_post_login_rejects_wrong_password(fake_dirs):
auth.create_admin("daniel", "hunter2-pw")
server, port = _start_server()
try:
req = _request(
port,
"/login",
method="POST",
body={"username": "daniel", "password": "nope"},
)
try:
urllib.request.urlopen(req)
raise AssertionError("expected 401")
except urllib.error.HTTPError as e:
assert e.code == 401
finally:
server.shutdown()
server.server_close()
def _post_wrong_login(port, username="daniel", password="nope"):
req = _request(
port,
"/login",
method="POST",
body={"username": username, "password": password},
)
try:
urllib.request.urlopen(req)
raise AssertionError("expected HTTPError")
except urllib.error.HTTPError as e:
return e
def test_post_login_locks_out_after_repeated_failures(fake_dirs, monkeypatch):
auth.create_admin("daniel", "hunter2-pw")
# Flatten the 0.5s speed-bump so the test doesn't take 5 seconds.
monkeypatch.setattr(api.time, "sleep", lambda _s: None)
server, port = _start_server()
try:
for _ in range(auth.LoginAttempts.MAX_FAILURES):
err = _post_wrong_login(port)
assert err.code == 401
err = _post_wrong_login(port)
assert err.code == 429
assert err.headers.get("Retry-After") is not None
assert int(err.headers["Retry-After"]) > 0
finally:
server.shutdown()
server.server_close()
def test_post_login_429_masks_correctness(fake_dirs, monkeypatch):
"""Once locked, the correct password must also get 429 — no oracle."""
auth.create_admin("daniel", "hunter2-pw")
monkeypatch.setattr(api.time, "sleep", lambda _s: None)
server, port = _start_server()
try:
for _ in range(auth.LoginAttempts.MAX_FAILURES):
_post_wrong_login(port)
req = _request(
port,
"/login",
method="POST",
body={"username": "daniel", "password": "hunter2-pw"},
)
try:
urllib.request.urlopen(req)
raise AssertionError("expected 429")
except urllib.error.HTTPError as e:
assert e.code == 429
finally:
server.shutdown()
server.server_close()
def test_post_login_success_clears_lockout_counter(fake_dirs, monkeypatch):
auth.create_admin("daniel", "hunter2-pw")
monkeypatch.setattr(api.time, "sleep", lambda _s: None)
server, port = _start_server()
try:
# Get close to the threshold, then log in successfully.
for _ in range(auth.LoginAttempts.MAX_FAILURES - 1):
_post_wrong_login(port)
req = _request(
port,
"/login",
method="POST",
body={"username": "daniel", "password": "hunter2-pw"},
)
with urllib.request.urlopen(req) as r:
assert r.status == 200
# Counter must have been cleared: another full MAX_FAILURES-1
# fails shouldn't trigger 429.
for _ in range(auth.LoginAttempts.MAX_FAILURES - 1):
err = _post_wrong_login(port)
assert err.code == 401
finally:
server.shutdown()
server.server_close()
def test_post_login_setup_not_rate_limited(fake_dirs, monkeypatch):
"""First-run setup is never auth-ed against a hash, so the lockout
must not apply; otherwise a clumsy admin could lock themselves out
of a box that has no admin yet."""
monkeypatch.setattr(api.time, "sleep", lambda _s: None)
server, port = _start_server()
try:
# Many mismatched setup submissions (400s) — no 429 should appear.
for _ in range(auth.LoginAttempts.MAX_FAILURES + 3):
req = _request(
port,
"/login",
method="POST",
body={
"username": "daniel",
"password": "longenough",
"password2": "different",
},
)
try:
urllib.request.urlopen(req)
raise AssertionError("expected 400")
except urllib.error.HTTPError as e:
assert e.code == 400
# Then a good setup still succeeds.
req = _request(
port,
"/login",
method="POST",
body={
"username": "daniel",
"password": "longenough",
"password2": "longenough",
},
)
with urllib.request.urlopen(req) as r:
assert r.status == 200
finally:
server.shutdown()
server.server_close()
def test_post_logout_revokes_session(fake_dirs, admin_session):
server, port = _start_server()
try:
# Logout returns 200 and clears the cookie.
with urllib.request.urlopen(
_request(port, "/logout", cookie=admin_session, method="POST", body={})
) as r:
assert r.status == 200
set_cookie = r.headers["Set-Cookie"]
assert "Max-Age=0" in set_cookie
# Subsequent API call with same cookie → 401 (session revoked).
try:
urllib.request.urlopen(_request(port, "/api/apps", cookie=admin_session))
raise AssertionError("expected 401")
except urllib.error.HTTPError as e:
assert e.code == 401
finally:
server.shutdown()
server.server_close()
def test_post_to_protected_route_without_auth_is_401(fake_dirs):
server, port = _start_server()
try:
req = _request(
port,
"/api/apps/install",
method="POST",
body={"name": "whatever"},
)
try:
urllib.request.urlopen(req)
raise AssertionError("expected 401")
except urllib.error.HTTPError as e:
assert e.code == 401
finally:
server.shutdown()
server.server_close()
# --- Settings endpoints ------------------------------------------------------
SETTINGS_MANIFEST = dict(
@@ -807,13 +269,13 @@ def test_get_settings_not_found(fake_dirs):
assert status == 404
def test_install_with_settings_writes_env_via_api(fake_dirs, no_docker, no_systemd_run):
def test_install_with_settings_writes_env_via_api(fake_dirs, no_docker):
_, bundled = fake_dirs
_write_bundled(bundled, "fileshare", manifest=SETTINGS_MANIFEST)
status, body = api._do_install(
"fileshare", settings={"SMB_USER": "alice", "SMB_PASSWORD": "s3cret"}
)
assert status == 202, body
assert status == 200, body
apps, _ = fake_dirs
env = (apps / "fileshare" / ".env").read_text()
assert "SMB_USER=alice" in env
@@ -828,7 +290,7 @@ def test_install_with_settings_rejects_empty_required_via_api(fake_dirs, no_dock
assert "SMB_PASSWORD" in body["error"]
def test_update_settings_merges(fake_dirs, no_docker, no_systemd_run):
def test_update_settings_merges(fake_dirs, no_docker):
_, bundled = fake_dirs
_write_bundled(bundled, "fileshare", manifest=SETTINGS_MANIFEST)
api._do_install("fileshare", settings={"SMB_USER": "alice", "SMB_PASSWORD": "original"})
@@ -846,7 +308,7 @@ def test_update_settings_unknown_app(fake_dirs):
assert status == 404
def test_http_get_settings_route(fake_dirs, no_docker, admin_session):
def test_http_get_settings_route(fake_dirs, no_docker):
_, bundled = fake_dirs
_write_bundled(bundled, "fileshare", manifest=SETTINGS_MANIFEST)
server = api.HTTPServer(("127.0.0.1", 0), api._Handler)
@@ -854,9 +316,7 @@ def test_http_get_settings_route(fake_dirs, no_docker, admin_session):
t = threading.Thread(target=server.serve_forever, daemon=True)
t.start()
try:
with urllib.request.urlopen(
_request(port, "/api/apps/fileshare/settings", cookie=admin_session)
) as r:
with urllib.request.urlopen(f"http://127.0.0.1:{port}/api/apps/fileshare/settings") as r:
assert r.status == 200
data = json.loads(r.read())
assert data["name"] == "fileshare"
@@ -910,7 +370,7 @@ def test_update_not_installed(fake_dirs):
assert "not installed" in body["error"]
def test_update_no_changes(fake_dirs, no_docker, no_systemd_run, update_docker_stubs):
def test_update_no_changes(fake_dirs, no_docker, update_docker_stubs):
_, bundled = fake_dirs
_write_bundled(bundled, "fileshare", env_example="A=real")
api._do_install("fileshare")
@@ -923,7 +383,7 @@ def test_update_no_changes(fake_dirs, no_docker, no_systemd_run, update_docker_s
assert update_docker_stubs["up_called"] == 0
def test_update_changes_applied(fake_dirs, no_docker, no_systemd_run, update_docker_stubs):
def test_update_changes_applied(fake_dirs, no_docker, update_docker_stubs):
_, bundled = fake_dirs
_write_bundled(bundled, "fileshare", env_example="A=real")
api._do_install("fileshare")
@@ -943,9 +403,7 @@ def test_update_changes_applied(fake_dirs, no_docker, no_systemd_run, update_doc
assert update_docker_stubs["up_called"] == 1
def test_update_skips_services_not_running(
fake_dirs, no_docker, no_systemd_run, update_docker_stubs
):
def test_update_skips_services_not_running(fake_dirs, no_docker, update_docker_stubs):
_, bundled = fake_dirs
_write_bundled(bundled, "fileshare", env_example="A=real")
api._do_install("fileshare")
@@ -959,9 +417,7 @@ def test_update_skips_services_not_running(
assert update_docker_stubs["up_called"] == 0
def test_update_returns_502_on_pull_error(
fake_dirs, no_docker, no_systemd_run, update_docker_stubs
):
def test_update_returns_502_on_pull_error(fake_dirs, no_docker, update_docker_stubs):
_, bundled = fake_dirs
_write_bundled(bundled, "fileshare", env_example="A=real")
api._do_install("fileshare")
@@ -1072,9 +528,7 @@ def test_furtka_update_status_endpoint(stub_furtka_updater):
assert stub_furtka_updater["status_called"] == 1
def test_http_post_update_route(
fake_dirs, no_docker, no_systemd_run, update_docker_stubs, admin_session
):
def test_http_post_update_route(fake_dirs, no_docker, update_docker_stubs):
_, bundled = fake_dirs
_write_bundled(bundled, "fileshare", env_example="A=real")
api._do_install("fileshare")
@@ -1085,12 +539,11 @@ def test_http_post_update_route(
t = threading.Thread(target=server.serve_forever, daemon=True)
t.start()
try:
req = _request(
port,
"/api/apps/fileshare/update",
cookie=admin_session,
req = urllib.request.Request(
f"http://127.0.0.1:{port}/api/apps/fileshare/update",
data=b"{}",
headers={"Content-Type": "application/json"},
method="POST",
body={},
)
with urllib.request.urlopen(req) as r:
assert r.status == 200
@@ -1102,7 +555,7 @@ def test_http_post_update_route(
server.server_close()
def test_http_post_install_with_settings(fake_dirs, no_docker, no_systemd_run, admin_session):
def test_http_post_install_with_settings(fake_dirs, no_docker):
_, bundled = fake_dirs
_write_bundled(bundled, "fileshare", manifest=SETTINGS_MANIFEST)
server = api.HTTPServer(("127.0.0.1", 0), api._Handler)
@@ -1110,186 +563,21 @@ def test_http_post_install_with_settings(fake_dirs, no_docker, no_systemd_run, a
t = threading.Thread(target=server.serve_forever, daemon=True)
t.start()
try:
req = _request(
port,
"/api/apps/install",
cookie=admin_session,
method="POST",
body={
req = urllib.request.Request(
f"http://127.0.0.1:{port}/api/apps/install",
data=json.dumps(
{
"name": "fileshare",
"settings": {"SMB_USER": "alice", "SMB_PASSWORD": "s3cret"},
},
}
).encode(),
headers={"Content-Type": "application/json"},
method="POST",
)
with urllib.request.urlopen(req) as r:
# Async: 202 Accepted + dispatched background job.
assert r.status == 202
body = json.loads(r.read())
assert body["status"] == "dispatched"
assert body["unit"] == "furtka-install-fileshare"
# Sync phase wrote the .env before dispatch.
assert r.status == 200
apps, _ = fake_dirs
assert "SMB_PASSWORD=s3cret" in (apps / "fileshare" / ".env").read_text()
# And systemd-run was called exactly once with the expected cmd.
assert len(no_systemd_run) == 1
assert no_systemd_run[0][:4] == [
"systemd-run",
"--unit=furtka-install-fileshare",
"--no-block",
"--collect",
]
assert no_systemd_run[0][-3:] == ["app", "install-bg", "fileshare"]
finally:
server.shutdown()
server.server_close()
def test_do_install_returns_409_when_locked(fake_dirs, no_docker, no_systemd_run):
_, bundled = fake_dirs
_write_bundled(bundled, "fileshare", env_example="A=real")
# Hold the install lock so _do_install fast-fails.
fh = api.install_runner.acquire_lock()
try:
status, body = api._do_install("fileshare")
assert status == 409
assert "in progress" in body["error"]
finally:
fh.close()
def test_do_install_returns_409_when_state_reports_running(fake_dirs, no_docker, no_systemd_run):
"""Closes the race window where _do_install had already released
the fcntl lock (so the systemd-run child could grab it) but a
second POST tried to start a new install while the first was still
mid-flight. The state file's non-terminal stage is the reliable
"someone else is installing" signal."""
_, bundled = fake_dirs
_write_bundled(bundled, "fileshare", env_example="A=real")
api.install_runner.write_state("pulling_image", app="jellyfin")
status, body = api._do_install("fileshare")
assert status == 409
assert "in progress" in body["error"]
assert "jellyfin" in body["error"]
assert "pulling_image" in body["error"]
def test_do_install_goes_through_after_terminal_state(fake_dirs, no_docker, no_systemd_run):
"""After a successful or failed install, the state file stays at
done/error; a new install must be accepted, not blocked."""
_, bundled = fake_dirs
_write_bundled(bundled, "fileshare", env_example="A=real")
api.install_runner.write_state("done", app="previous", version="1.0.0")
status, _ = api._do_install("fileshare")
assert status == 202
api.install_runner.write_state("error", app="previous", error="oops")
status, _ = api._do_install("fileshare")
assert status == 202
def test_do_install_status_returns_state(fake_dirs):
# Write state directly, then GET it via the status handler.
api.install_runner.write_state("pulling_image", app="jellyfin")
status, body = api._do_install_status()
assert status == 200
assert body["stage"] == "pulling_image"
assert body["app"] == "jellyfin"
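The two 409 tests plus the terminal-state test above pin down a simple predicate. A minimal sketch of that guard, under the assumption that the state file deserializes to a dict with a `stage` key; `install_in_progress` is a hypothetical name, not the real api internals.

```python
# Stages at which an install is finished and must not block a new one.
TERMINAL_STAGES = {"done", "error"}

def install_in_progress(state: dict) -> bool:
    # Hypothetical predicate matching the tests above: any state file with
    # a non-terminal stage means another install is still mid-flight.
    return bool(state) and state.get("stage") not in TERMINAL_STAGES
```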
# --- Catalog endpoints ------------------------------------------------------
def test_catalog_status_reports_absent_catalog(fake_dirs, monkeypatch):
"""With no /var/lib/furtka/catalog/ on disk, status reports current=None + empty state."""
# FURTKA_CATALOG_STATE is not touched by fake_dirs — point it at tmp so we
# don't hit the production path.
monkeypatch.setenv("FURTKA_CATALOG_STATE", str(fake_dirs[0].parent / "catalog-state.json"))
import importlib
from furtka import catalog as c
importlib.reload(c)
status, body = api._do_catalog_status()
assert status == 200
assert body["current"] is None
assert body["state"] == {}
def test_catalog_check_surfaces_forgejo_error(fake_dirs, monkeypatch):
monkeypatch.setenv("FURTKA_CATALOG_STATE", str(fake_dirs[0].parent / "catalog-state.json"))
import importlib
from furtka import _release_common as _rc
from furtka import catalog as c
importlib.reload(c)
def boom(host, repo, path, *, error_cls=RuntimeError):
raise error_cls("forgejo api down")
monkeypatch.setattr(_rc, "forgejo_api", boom)
status, body = api._do_catalog_check()
assert status == 502
assert "forgejo api down" in body["error"]
# --- Power endpoints --------------------------------------------------------
def test_power_rejects_unknown_action(fake_dirs):
status, body = api._do_power({"action": "format-harddrive"})
assert status == 400
assert "action" in body["error"]
def test_power_rejects_missing_action(fake_dirs):
status, body = api._do_power({})
assert status == 400
def test_power_reboot_dispatches_systemd_run(fake_dirs, monkeypatch):
seen = []
class _FakeCompleted:
returncode = 0
stdout = ""
stderr = ""
def fake_run(cmd, *, check=False, capture_output=False, text=False):
seen.append(cmd)
return _FakeCompleted()
monkeypatch.setattr("subprocess.run", fake_run)
status, body = api._do_power({"action": "reboot"})
assert status == 202
assert body == {"action": "reboot", "scheduled_in_seconds": 3}
# The dispatched command is a delayed systemd-run that eventually
# invokes `systemctl reboot`. Asserting the key flags catches
# accidental regressions (e.g. losing --no-block would block the API
# thread until the unit completes).
assert seen[0][:1] == ["systemd-run"]
assert "--on-active=3s" in seen[0]
assert "--no-block" in seen[0]
assert seen[0][-2:] == ["systemctl", "reboot"]
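The flag assertions above imply a dispatched command of a particular shape. A hedged sketch of that builder follows; `build_power_cmd` is a hypothetical name, and the real api module may assemble the argv differently.

```python
def build_power_cmd(action: str, delay_s: int = 3) -> list[str]:
    # Hypothetical builder for the delayed power command the tests assert on.
    if action not in ("reboot", "poweroff"):
        raise ValueError(f"unknown power action: {action}")
    return [
        "systemd-run",
        f"--on-active={delay_s}s",  # delay so the 202 response can flush first
        "--no-block",               # don't block the API thread on the unit
        "--collect",                # garbage-collect the transient unit after it runs
        "systemctl",
        action,
    ]
```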
def test_power_poweroff_dispatches_systemctl_poweroff(fake_dirs, monkeypatch):
seen = []
class _FakeCompleted:
returncode = 0
monkeypatch.setattr("subprocess.run", lambda cmd, **kw: (seen.append(cmd), _FakeCompleted())[1])
status, body = api._do_power({"action": "poweroff"})
assert status == 202
assert body["action"] == "poweroff"
assert seen[0][-2:] == ["systemctl", "poweroff"]
def test_power_surfaces_systemd_run_missing(fake_dirs, monkeypatch):
def boom(*a, **kw):
raise FileNotFoundError(2, "No such file", "systemd-run")
monkeypatch.setattr("subprocess.run", boom)
status, body = api._do_power({"action": "reboot"})
assert status == 502
assert "systemd-run" in body["error"]

View file

@@ -1,230 +0,0 @@
import json
from datetime import UTC, datetime, timedelta
import pytest
from furtka import auth
@pytest.fixture
def tmp_users_file(tmp_path, monkeypatch):
path = tmp_path / "users.json"
monkeypatch.setenv("FURTKA_USERS_FILE", str(path))
# Sessions and lockout state are module-level; wipe between tests so
# one doesn't leak a valid token (or a stale failure counter) into
# the next.
auth.SESSIONS.clear()
auth.LOCKOUT.clear_all()
return path
def test_hash_password_roundtrip():
h = auth.hash_password("hunter2")
assert h != "hunter2" # Not plain text.
assert auth.verify_password("hunter2", h) is True
assert auth.verify_password("hunter3", h) is False
def test_hash_password_is_salted():
# Two calls with the same password must produce different hashes.
a = auth.hash_password("same")
b = auth.hash_password("same")
assert a != b
# But both verify against the original.
assert auth.verify_password("same", a)
assert auth.verify_password("same", b)
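These two tests fix the contract: a fresh random salt per call, deterministic verification afterwards. A minimal sketch that satisfies it with stdlib PBKDF2 (an assumed scheme; furtka.auth's real parameters and encoding may differ):

```python
import hashlib
import hmac
import os

def hash_password(password: str) -> str:
    # Assumed scheme: random 16-byte salt + PBKDF2-HMAC-SHA256, hex-encoded
    # as "salt$digest". The real furtka.auth may use different parameters.
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return f"{salt.hex()}${digest.hex()}"

def verify_password(password: str, stored: str) -> bool:
    salt_hex, digest_hex = stored.split("$", 1)
    candidate = hashlib.pbkdf2_hmac(
        "sha256", password.encode(), bytes.fromhex(salt_hex), 100_000
    )
    # Constant-time comparison to avoid a timing oracle.
    return hmac.compare_digest(candidate.hex(), digest_hex)
```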
def test_load_users_returns_empty_when_missing(tmp_users_file):
assert not tmp_users_file.exists()
assert auth.load_users() == {}
def test_load_users_returns_empty_on_junk(tmp_users_file):
tmp_users_file.write_text("{not json")
assert auth.load_users() == {}
def test_load_users_returns_empty_on_non_dict(tmp_users_file):
tmp_users_file.write_text("[]")
assert auth.load_users() == {}
def test_save_users_atomic_and_0600(tmp_users_file):
auth.save_users({"admin": {"hash": "x", "username": "daniel"}})
assert tmp_users_file.exists()
mode = tmp_users_file.stat().st_mode & 0o777
assert mode == 0o600, f"expected 0o600, got {oct(mode)}"
loaded = json.loads(tmp_users_file.read_text())
assert loaded["admin"]["username"] == "daniel"
def test_setup_needed_true_on_missing_file(tmp_users_file):
assert auth.setup_needed() is True
def test_setup_needed_true_on_empty_dict(tmp_users_file):
tmp_users_file.write_text("{}")
assert auth.setup_needed() is True
def test_setup_needed_false_when_admin_exists(tmp_users_file):
auth.create_admin("daniel", "secret-pw")
assert auth.setup_needed() is False
def test_create_admin_overwrites_file(tmp_users_file):
auth.create_admin("daniel", "secret-pw")
auth.create_admin("robert", "new-pw")
users = auth.load_users()
assert users["admin"]["username"] == "robert"
def test_authenticate_happy(tmp_users_file):
auth.create_admin("daniel", "secret-pw")
assert auth.authenticate("daniel", "secret-pw") is True
def test_authenticate_wrong_username(tmp_users_file):
auth.create_admin("daniel", "secret-pw")
assert auth.authenticate("robert", "secret-pw") is False
def test_authenticate_wrong_password(tmp_users_file):
auth.create_admin("daniel", "secret-pw")
assert auth.authenticate("daniel", "wrong") is False
def test_authenticate_no_admin(tmp_users_file):
assert auth.authenticate("daniel", "anything") is False
# ---- Session store ---------------------------------------------------------
def test_session_create_and_lookup(tmp_users_file):
s = auth.SESSIONS.create("daniel")
assert s.username == "daniel"
assert s.token
looked_up = auth.SESSIONS.lookup(s.token)
assert looked_up is not None
assert looked_up.username == "daniel"
def test_session_lookup_unknown_token(tmp_users_file):
assert auth.SESSIONS.lookup("not-a-real-token") is None
def test_session_lookup_none_token(tmp_users_file):
assert auth.SESSIONS.lookup(None) is None
assert auth.SESSIONS.lookup("") is None
def test_session_revoke(tmp_users_file):
s = auth.SESSIONS.create("daniel")
auth.SESSIONS.revoke(s.token)
assert auth.SESSIONS.lookup(s.token) is None
def test_session_expires(tmp_users_file, monkeypatch):
# Build a session store with a 0-second TTL so lookup immediately
# treats new sessions as expired.
store = auth.SessionStore(ttl_seconds=0)
s = store.create("daniel")
# Force the clock forward a hair so the > check fires.
monkeypatch.setattr(
auth,
"datetime",
_FakeDatetime(datetime.now(UTC) + timedelta(seconds=1)),
)
# The module-local datetime reference inside SessionStore.lookup
# resolves at call time. Verify that an expired session is dropped.
assert store.lookup(s.token) is None
class _FakeDatetime:
"""Tiny shim — only `.now(tz)` is used from SessionStore."""
def __init__(self, fixed_utc):
self._fixed = fixed_utc
def now(self, tz=None):
if tz is None:
return self._fixed.replace(tzinfo=None)
return self._fixed.astimezone(tz)
# ---- Login attempts / lockout ----------------------------------------------
def test_lockout_under_threshold_still_allowed(tmp_users_file):
store = auth.LoginAttempts(max_failures=3, window_seconds=60, lockout_seconds=60)
key = ("daniel", "10.0.0.1")
for _ in range(2):
store.register_failure(key)
assert store.is_locked(key) is False
assert store.retry_after_seconds(key) == 0
def test_lockout_triggers_at_threshold(tmp_users_file):
store = auth.LoginAttempts(max_failures=3, window_seconds=60, lockout_seconds=60)
key = ("daniel", "10.0.0.1")
for _ in range(3):
store.register_failure(key)
assert store.is_locked(key) is True
assert store.retry_after_seconds(key) > 0
assert store.retry_after_seconds(key) <= 60
def test_lockout_window_decay(tmp_users_file, monkeypatch):
store = auth.LoginAttempts(max_failures=3, window_seconds=60, lockout_seconds=60)
key = ("daniel", "10.0.0.1")
for _ in range(3):
store.register_failure(key)
assert store.is_locked(key) is True
# Jump 2 minutes ahead — all failures are older than the window
# and should be pruned on the next check.
monkeypatch.setattr(
auth,
"datetime",
_FakeDatetime(datetime.now(UTC) + timedelta(seconds=121)),
)
assert store.is_locked(key) is False
assert store.retry_after_seconds(key) == 0
def test_lockout_clear_resets(tmp_users_file):
store = auth.LoginAttempts(max_failures=2, window_seconds=60, lockout_seconds=60)
key = ("daniel", "10.0.0.1")
store.register_failure(key)
store.register_failure(key)
assert store.is_locked(key) is True
store.clear(key)
assert store.is_locked(key) is False
assert store.retry_after_seconds(key) == 0
def test_lockout_keys_are_independent(tmp_users_file):
store = auth.LoginAttempts(max_failures=2, window_seconds=60, lockout_seconds=60)
locked = ("daniel", "1.1.1.1")
other_ip = ("daniel", "2.2.2.2")
other_user = ("robert", "1.1.1.1")
store.register_failure(locked)
store.register_failure(locked)
assert store.is_locked(locked) is True
assert store.is_locked(other_ip) is False
assert store.is_locked(other_user) is False
def test_lockout_clear_all_wipes_every_key(tmp_users_file):
store = auth.LoginAttempts(max_failures=2, window_seconds=60, lockout_seconds=60)
a = ("daniel", "1.1.1.1")
b = ("robert", "2.2.2.2")
store.register_failure(a)
store.register_failure(a)
store.register_failure(b)
store.register_failure(b)
assert store.is_locked(a) and store.is_locked(b)
store.clear_all()
assert not store.is_locked(a)
assert not store.is_locked(b)

View file

@@ -1,333 +0,0 @@
"""Tests for the apps-catalog sync flow.
Same shape as ``tests/test_updater.py``: fixture reloads the module with
env-overridden paths, fake tarballs land in tmp_path, Forgejo API is
stubbed via ``urllib.request.urlopen`` monkeypatching so nothing talks
to the network.
Asserts end-to-end atomicity: on any failure path (bad sha256, broken
tarball, invalid manifest) the live catalog dir is either left
untouched (if one existed) or absent (if it didn't).
"""
from __future__ import annotations
import io
import json
import tarfile
from pathlib import Path
import pytest
@pytest.fixture
def catalog(tmp_path, monkeypatch):
monkeypatch.setenv("FURTKA_CATALOG_DIR", str(tmp_path / "var_lib_furtka_catalog"))
monkeypatch.setenv("FURTKA_CATALOG_STATE", str(tmp_path / "var_lib_furtka_catalog-state.json"))
monkeypatch.setenv("FURTKA_CATALOG_LOCK", str(tmp_path / "catalog.lock"))
monkeypatch.setenv("FURTKA_FORGEJO_HOST", "forgejo.test.local")
monkeypatch.setenv("FURTKA_CATALOG_REPO", "daniel/furtka-apps")
import importlib
from furtka import catalog as c
from furtka import paths as p
importlib.reload(p)
importlib.reload(c)
return c
def _manifest(name: str = "fileshare") -> dict:
return {
"name": name,
"display_name": "Fileshare",
"version": "0.1.0",
"description": "Test fixture app",
"volumes": ["files"],
"ports": [445],
"icon": "icon.svg",
}
def _make_catalog_tarball(
path: Path,
version: str,
*,
apps: list[tuple[str, dict]] | None = None,
extra_entries: list[tuple[str, bytes]] | None = None,
) -> None:
"""Build a minimal valid catalog tarball.
`apps` is a list of (folder_name, manifest_dict). Each app folder gets
a `manifest.json` + a stub `docker-compose.yaml` + `icon.svg`.
`extra_entries` lets tests inject malformed content (path-traversal,
missing VERSION, ...) without rebuilding the helper.
"""
apps = apps if apps is not None else [("fileshare", _manifest())]
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w:gz") as tf:
entries: list[tuple[str, bytes]] = [("VERSION", f"{version}\n".encode())]
for folder, m in apps:
entries.append((f"apps/{folder}/manifest.json", json.dumps(m).encode()))
entries.append(
(f"apps/{folder}/docker-compose.yaml", b"services:\n app:\n image: scratch\n")
)
entries.append((f"apps/{folder}/icon.svg", b"<svg/>"))
if extra_entries:
entries.extend(extra_entries)
for name, data in entries:
info = tarfile.TarInfo(name=name)
info.size = len(data)
tf.addfile(info, io.BytesIO(data))
path.write_bytes(buf.getvalue())
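The `extra_entries` hook above lets tests smuggle path-traversal names into a tarball. A hedged sketch of the kind of guard such tests probe for (an assumption; the real catalog extraction may validate member paths differently):

```python
import tarfile
from pathlib import Path

def safe_members(tf: tarfile.TarFile, dest: Path):
    # Yield only members that would extract inside `dest`; reject anything
    # (e.g. "../evil") that escapes it. Hypothetical guard, not the real
    # furtka.catalog code.
    dest = dest.resolve()
    for member in tf.getmembers():
        target = (dest / member.name).resolve()
        if not target.is_relative_to(dest):
            raise ValueError(f"unsafe path in tarball: {member.name}")
        yield member
```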
def _stub_forgejo_release(
monkeypatch,
catalog,
*,
tag: str,
tarball_url: str = "https://forgejo.test.local/t.tar.gz",
sha_url: str = "https://forgejo.test.local/t.tar.gz.sha256",
releases: list | None = None,
):
"""Patch ``_rc.forgejo_api`` so check_catalog sees a canned release list."""
if releases is None:
releases = [
{
"tag_name": tag,
"assets": [
{"name": f"furtka-apps-{tag}.tar.gz", "browser_download_url": tarball_url},
{
"name": f"furtka-apps-{tag}.tar.gz.sha256",
"browser_download_url": sha_url,
},
],
}
]
def fake_api(host, repo, path, *, error_cls=RuntimeError):
return releases
from furtka import _release_common as _rc
monkeypatch.setattr(_rc, "forgejo_api", fake_api)
def _stub_download(monkeypatch, catalog, mapping: dict[str, bytes]):
"""Patch ``_rc.download`` so sync_catalog pulls from an in-memory map."""
from furtka import _release_common as _rc
def fake_download(url, dest, *, error_cls=RuntimeError):
if url not in mapping:
raise error_cls(f"test: no fake content for {url}")
dest.parent.mkdir(parents=True, exist_ok=True)
dest.write_bytes(mapping[url])
monkeypatch.setattr(_rc, "download", fake_download)
# --------------------------------------------------------------------------- #
# check_catalog
# --------------------------------------------------------------------------- #
def test_check_catalog_reports_update_when_versions_differ(catalog, monkeypatch, tmp_path):
# Pretend we already have catalog version 26.5 on disk; Forgejo reports 26.6.
catalog.catalog_dir().mkdir(parents=True)
(catalog.catalog_dir() / "VERSION").write_text("26.5\n")
_stub_forgejo_release(monkeypatch, catalog, tag="26.6")
check = catalog.check_catalog()
assert check.current == "26.5"
assert check.latest == "26.6"
assert check.update_available is True
assert check.tarball_url.endswith(".tar.gz")
assert check.sha256_url.endswith(".sha256")
def test_check_catalog_reports_up_to_date_when_same_version(catalog, monkeypatch):
catalog.catalog_dir().mkdir(parents=True)
(catalog.catalog_dir() / "VERSION").write_text("26.5\n")
_stub_forgejo_release(monkeypatch, catalog, tag="26.5")
check = catalog.check_catalog()
assert check.current == "26.5"
assert check.latest == "26.5"
assert check.update_available is False
def test_check_catalog_treats_missing_current_as_installable(catalog, monkeypatch):
# Fresh box, no catalog ever synced — any release is an update.
_stub_forgejo_release(monkeypatch, catalog, tag="26.5")
check = catalog.check_catalog()
assert check.current is None
assert check.update_available is True
def test_check_catalog_raises_when_no_releases_published(catalog, monkeypatch):
_stub_forgejo_release(monkeypatch, catalog, tag="x", releases=[])
with pytest.raises(catalog.CatalogError, match="no catalog releases"):
catalog.check_catalog()
# --------------------------------------------------------------------------- #
# sync_catalog — happy + error paths
# --------------------------------------------------------------------------- #
def test_sync_catalog_happy_path(catalog, monkeypatch, tmp_path):
import hashlib
tarball_path = tmp_path / "tarball.tar.gz"
_make_catalog_tarball(tarball_path, "26.6")
tarball_bytes = tarball_path.read_bytes()
sha = hashlib.sha256(tarball_bytes).hexdigest()
_stub_forgejo_release(monkeypatch, catalog, tag="26.6")
_stub_download(
monkeypatch,
catalog,
{
"https://forgejo.test.local/t.tar.gz": tarball_bytes,
"https://forgejo.test.local/t.tar.gz.sha256": (
f"{sha} furtka-apps-26.6.tar.gz\n".encode()
),
},
)
check = catalog.sync_catalog()
assert check.latest == "26.6"
assert (catalog.catalog_dir() / "VERSION").read_text().strip() == "26.6"
assert (catalog.catalog_dir() / "apps" / "fileshare" / "manifest.json").is_file()
state = catalog.read_state()
assert state["stage"] == "done"
assert state["version"] == "26.6"
def test_sync_catalog_noop_when_already_current(catalog, monkeypatch, tmp_path):
catalog.catalog_dir().mkdir(parents=True)
(catalog.catalog_dir() / "VERSION").write_text("26.5\n")
_stub_forgejo_release(monkeypatch, catalog, tag="26.5")
check = catalog.sync_catalog()
assert check.update_available is False
assert catalog.read_state()["stage"] == "done"
def test_sync_catalog_refuses_sha256_mismatch(catalog, monkeypatch, tmp_path):
tarball_path = tmp_path / "tarball.tar.gz"
_make_catalog_tarball(tarball_path, "26.6")
_stub_forgejo_release(monkeypatch, catalog, tag="26.6")
_stub_download(
monkeypatch,
catalog,
{
"https://forgejo.test.local/t.tar.gz": tarball_path.read_bytes(),
# Hash for some OTHER content — will mismatch.
"https://forgejo.test.local/t.tar.gz.sha256": (b"0" * 64 + b" wrong.tar.gz\n"),
},
)
with pytest.raises(catalog.CatalogError, match="sha256 mismatch"):
catalog.sync_catalog()
# Live catalog never existed, must still not exist after the failed sync.
assert not catalog.catalog_dir().exists()
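The mismatch test relies on the `.sha256` asset following the `sha256sum` convention of `<hex> <filename>` per line. A sketch of the verification step (`verify_sha256` is a hypothetical name for what sync_catalog presumably does before unpacking):

```python
import hashlib

def verify_sha256(tarball: bytes, sha_file: bytes) -> bool:
    """Compare the tarball digest against the first token of a
    `<hex>  <filename>` line, as `sha256sum` writes it."""
    expected = sha_file.split()[0].decode().lower()
    return hashlib.sha256(tarball).hexdigest() == expected

blob = b"release payload"
good = hashlib.sha256(blob).hexdigest().encode() + b"  furtka-apps-26.6.tar.gz\n"
assert verify_sha256(blob, good)
assert not verify_sha256(blob, b"0" * 64 + b"  wrong.tar.gz\n")
```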
def test_sync_catalog_refuses_tarball_with_invalid_manifest(catalog, monkeypatch, tmp_path):
import hashlib
bad_manifest = {"name": "broken"} # missing required fields
tarball_path = tmp_path / "tarball.tar.gz"
_make_catalog_tarball(tarball_path, "26.6", apps=[("broken", bad_manifest)])
tarball_bytes = tarball_path.read_bytes()
sha = hashlib.sha256(tarball_bytes).hexdigest()
_stub_forgejo_release(monkeypatch, catalog, tag="26.6")
_stub_download(
monkeypatch,
catalog,
{
"https://forgejo.test.local/t.tar.gz": tarball_bytes,
"https://forgejo.test.local/t.tar.gz.sha256": (
f"{sha} furtka-apps-26.6.tar.gz\n".encode()
),
},
)
with pytest.raises(catalog.CatalogError, match="invalid manifest"):
catalog.sync_catalog()
# Staging was cleaned; live catalog never materialised.
assert not catalog.catalog_dir().exists()
def test_sync_catalog_preserves_existing_catalog_on_failure(catalog, monkeypatch, tmp_path):
"""A failed sync must leave the previous live catalog intact so boxes
keep working until the next successful sync."""
import hashlib
# Seed a live catalog that represents a previous successful sync.
live = catalog.catalog_dir()
live.mkdir(parents=True)
(live / "VERSION").write_text("26.5\n")
(live / "apps").mkdir()
bad_manifest = {"name": "broken"} # invalid
tarball_path = tmp_path / "tarball.tar.gz"
_make_catalog_tarball(tarball_path, "26.6", apps=[("broken", bad_manifest)])
sha = hashlib.sha256(tarball_path.read_bytes()).hexdigest()
_stub_forgejo_release(monkeypatch, catalog, tag="26.6")
_stub_download(
monkeypatch,
catalog,
{
"https://forgejo.test.local/t.tar.gz": tarball_path.read_bytes(),
"https://forgejo.test.local/t.tar.gz.sha256": f"{sha} x\n".encode(),
},
)
with pytest.raises(catalog.CatalogError):
catalog.sync_catalog()
# The 26.5 live catalog survives the failed 26.6 sync.
assert (live / "VERSION").read_text().strip() == "26.5"
def test_sync_catalog_lock_contention(catalog, monkeypatch):
_stub_forgejo_release(monkeypatch, catalog, tag="26.6")
# Hold the lock from outside; the real sync_catalog call must refuse.
first = catalog.acquire_lock()
try:
with pytest.raises(catalog.CatalogError, match="already in progress"):
catalog.sync_catalog()
finally:
first.close()
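The contention test assumes an advisory lock that is held by an open file handle and released on `close()`. On POSIX that maps naturally onto `flock`; a sketch under that assumption (names hypothetical, Linux/BSD flock semantics assumed, where two open file descriptions in one process do conflict):

```python
import fcntl
from pathlib import Path
from tempfile import TemporaryDirectory

class CatalogError(RuntimeError):
    pass

def acquire_lock(lock_path: Path):
    """Take a non-blocking exclusive flock; the returned file object
    holds the lock until .close()."""
    lock_path.parent.mkdir(parents=True, exist_ok=True)
    fh = open(lock_path, "w")
    try:
        fcntl.flock(fh, fcntl.LOCK_EX | fcntl.LOCK_NB)
    except BlockingIOError:
        fh.close()
        raise CatalogError("catalog sync already in progress")
    return fh

with TemporaryDirectory() as d:
    lock = Path(d) / "catalog.lock"
    first = acquire_lock(lock)
    try:
        try:
            acquire_lock(lock)
            raise AssertionError("second acquire should have failed")
        except CatalogError:
            pass
    finally:
        first.close()
    # After close() the lock is free again.
    acquire_lock(lock).close()
```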
# --------------------------------------------------------------------------- #
# state + current-version helpers
# --------------------------------------------------------------------------- #
def test_read_current_catalog_version_absent(catalog):
assert catalog.read_current_catalog_version() is None
def test_read_current_catalog_version_empty_file(catalog):
catalog.catalog_dir().mkdir(parents=True)
(catalog.catalog_dir() / "VERSION").write_text("\n")
assert catalog.read_current_catalog_version() is None
def test_write_and_read_state_round_trip(catalog):
catalog.write_state("downloading", latest="26.6")
s = catalog.read_state()
assert s["stage"] == "downloading"
assert s["latest"] == "26.6"
assert "updated_at" in s


@@ -32,21 +32,9 @@ def test_app_list_json_with_one_app(tmp_path, monkeypatch, capsys):
"display_name": "Network Files",
"version": "0.1.0",
"description": "SMB",
"description_long": "Long description here.",
"volumes": ["files"],
"ports": [445],
"icon": "icon.svg",
"open_url": "smb://{host}/files",
"settings": [
{
"name": "SMB_USER",
"label": "User",
"description": "SMB user",
"type": "text",
"default": "furtka",
"required": True,
}
],
}
)
)
@@ -55,14 +43,7 @@ def test_app_list_json_with_one_app(tmp_path, monkeypatch, capsys):
data = json.loads(capsys.readouterr().out)
assert len(data) == 1
assert data[0]["ok"] is True
m = data[0]["manifest"]
assert m["name"] == "fileshare"
assert m["description_long"] == "Long description here."
assert m["open_url"] == "smb://{host}/files"
assert len(m["settings"]) == 1
assert m["settings"][0]["name"] == "SMB_USER"
assert m["settings"][0]["required"] is True
assert m["settings"][0]["default"] == "furtka"
assert data[0]["manifest"]["name"] == "fileshare"
def test_reconcile_dry_run_empty(tmp_path, monkeypatch, capsys):
@@ -71,35 +52,3 @@ def test_reconcile_dry_run_empty(tmp_path, monkeypatch, capsys):
assert rc == 0
out = capsys.readouterr().out
assert "0 actions" in out
def test_app_install_bg_dispatches_to_runner(tmp_path, monkeypatch):
"""CLI `app install-bg <name>` must call install_runner.run_install(name).
This is the entry point the HTTP API fires via systemd-run; regression
here would leave the UI hanging at "pulling_image…" forever because
the background job never transitions the state.
"""
_set_env(monkeypatch, tmp_path)
from furtka import install_runner
called = []
monkeypatch.setattr(install_runner, "run_install", lambda name: called.append(name))
rc = main(["app", "install-bg", "fileshare"])
assert rc == 0
assert called == ["fileshare"]
def test_app_install_bg_returns_1_on_failure(tmp_path, monkeypatch, capsys):
_set_env(monkeypatch, tmp_path)
from furtka import install_runner
def boom(name):
raise RuntimeError("compose pull failed")
monkeypatch.setattr(install_runner, "run_install", boom)
rc = main(["app", "install-bg", "fileshare"])
assert rc == 1
err = capsys.readouterr().err
assert "install-bg failed" in err
assert "compose pull failed" in err


@@ -95,23 +95,3 @@ def test_drive_type_label_nvme_ssd_hdd():
def test_parse_lsblk_handles_empty_output():
assert parse_lsblk_output("") == []
def test_parse_lsblk_drops_boot_usb(monkeypatch):
import drives
monkeypatch.setattr(drives, "_smart_status", lambda _: "passed")
output = "sda 500G disk\nsdb 16G disk\nnvme0n1 1T disk\n"
devices = parse_lsblk_output(output, boot_disk="sdb")
names = [d["name"] for d in devices]
assert "/dev/sdb" not in names
assert names == ["/dev/nvme0n1", "/dev/sda"]
def test_parse_lsblk_no_boot_disk_keeps_all(monkeypatch):
import drives
monkeypatch.setattr(drives, "_smart_status", lambda _: "passed")
output = "sda 500G disk\nsdb 16G disk\n"
names = [d["name"] for d in parse_lsblk_output(output, boot_disk=None)]
assert set(names) == {"/dev/sda", "/dev/sdb"}


@@ -1,15 +1,11 @@
"""Tests for furtka.https — fingerprint extraction + HTTPS toggle.
Since 26.15-alpha the toggle writes/removes TWO snippets atomically:
- The top-level HTTPS listener snippet (enables :443 + tls internal)
- The :80-scoped redirect snippet (forces HTTP → HTTPS)
"""Tests for furtka.https — fingerprint extraction + force-HTTPS toggle.
The fingerprint case uses a throwaway self-signed EC cert with a known
reference fingerprint (computed once via `openssl x509 -fingerprint
-sha256 -noout`) so we verify the PEM → DER → SHA256 path without a
runtime subprocess dependency. The toggle cases stub the caddy reload
so we assert both snippet files are written / removed together and that
reload failures roll BOTH state back.
so we assert the snippet file is written / removed and that reload
failures roll state back.
"""
import subprocess
@@ -38,22 +34,6 @@ _TEST_CERT_FP_SHA256 = (
)
def _paths(tmp_path):
"""Return the four paths the toggle touches, in a dict for kwargs
spreading. Keeps each test's fixture boilerplate small."""
return {
"snippet_dir": tmp_path / "furtka.d",
"snippet": tmp_path / "furtka.d" / "redirect.caddyfile",
"https_snippet_dir": tmp_path / "furtka-https.d",
"https_snippet": tmp_path / "furtka-https.d" / "https.caddyfile",
"hostname_file": tmp_path / "etc_hostname",
}
def _prepare_hostname(tmp_path, value="testbox"):
(tmp_path / "etc_hostname").write_text(f"{value}\n")
def test_ca_fingerprint_matches_openssl(tmp_path):
cert = tmp_path / "root.crt"
cert.write_text(_TEST_CERT_PEM)
@@ -73,7 +53,7 @@ def test_ca_fingerprint_no_pem_block(tmp_path):
def test_status_no_ca_no_snippet(tmp_path):
s = https.status(ca_path=tmp_path / "root.crt", https_snippet=tmp_path / "https.caddyfile")
s = https.status(ca_path=tmp_path / "root.crt", snippet=tmp_path / "redirect.caddyfile")
assert s == {
"ca_available": False,
"fingerprint_sha256": None,
@@ -82,135 +62,105 @@
}
def test_status_with_ca_and_https_snippet(tmp_path):
def test_status_with_ca_and_snippet(tmp_path):
ca = tmp_path / "root.crt"
ca.write_text(_TEST_CERT_PEM)
https_snip = tmp_path / "https.caddyfile"
https_snip.write_text("furtka.local, furtka {\n\ttls internal\n\timport furtka_routes\n}\n")
s = https.status(ca_path=ca, https_snippet=https_snip)
snippet = tmp_path / "redirect.caddyfile"
snippet.write_text(https.REDIRECT_CONTENT)
s = https.status(ca_path=ca, snippet=snippet)
assert s["ca_available"] is True
assert s["fingerprint_sha256"] == _TEST_CERT_FP_SHA256
assert s["force_https"] is True
def test_status_force_reflects_https_snippet_not_redirect(tmp_path):
"""Authoritative signal for "HTTPS is on" is the listener snippet —
a lone redirect without a :443 listener wouldn't actually serve
HTTPS, so the status must NOT report it as on. Locks 26.15 semantic."""
ca = tmp_path / "root.crt"
ca.write_text(_TEST_CERT_PEM)
s = https.status(ca_path=ca, https_snippet=tmp_path / "does-not-exist.caddyfile")
assert s["force_https"] is False
def test_set_force_enable_writes_both_snippets_and_reloads(tmp_path):
_prepare_hostname(tmp_path)
p = _paths(tmp_path)
def test_set_force_enable_writes_snippet_and_reloads(tmp_path):
snippet_dir = tmp_path / "furtka.d"
snippet = snippet_dir / "redirect.caddyfile"
calls = []
def fake_reload():
calls.append("reload")
result = https.set_force_https(True, reload_caddy=fake_reload, **p)
result = https.set_force_https(
True, snippet_dir=snippet_dir, snippet=snippet, reload_caddy=fake_reload
)
assert result is True
assert p["snippet"].read_text() == https.REDIRECT_CONTENT
written = p["https_snippet"].read_text()
assert "testbox.local, testbox" in written
assert "tls internal" in written
assert "import furtka_routes" in written
assert snippet.read_text() == https.REDIRECT_CONTENT
assert calls == ["reload"]
def test_set_force_uses_fallback_hostname_when_file_missing(tmp_path):
# No /etc/hostname → fall back to 'furtka' so Caddy gets a parseable
# block instead of an empty hostname that would fail config load.
p = _paths(tmp_path)
result = https.set_force_https(True, reload_caddy=lambda: None, **p)
assert result is True
assert "furtka.local, furtka" in p["https_snippet"].read_text()
def test_set_force_disable_removes_snippet(tmp_path):
snippet_dir = tmp_path / "furtka.d"
snippet_dir.mkdir()
snippet = snippet_dir / "redirect.caddyfile"
snippet.write_text(https.REDIRECT_CONTENT)
def test_set_force_disable_removes_both_snippets(tmp_path):
_prepare_hostname(tmp_path)
p = _paths(tmp_path)
p["snippet_dir"].mkdir()
p["https_snippet_dir"].mkdir()
p["snippet"].write_text(https.REDIRECT_CONTENT)
p["https_snippet"].write_text("furtka.local { tls internal }\n")
result = https.set_force_https(False, reload_caddy=lambda: None, **p)
result = https.set_force_https(
False, snippet_dir=snippet_dir, snippet=snippet, reload_caddy=lambda: None
)
assert result is False
assert not p["snippet"].exists()
assert not p["https_snippet"].exists()
assert not snippet.exists()
def test_set_force_disable_is_idempotent_when_already_off(tmp_path):
p = _paths(tmp_path)
result = https.set_force_https(False, reload_caddy=lambda: None, **p)
snippet_dir = tmp_path / "furtka.d"
snippet = snippet_dir / "redirect.caddyfile"
result = https.set_force_https(
False, snippet_dir=snippet_dir, snippet=snippet, reload_caddy=lambda: None
)
assert result is False
assert not p["snippet"].exists()
assert not p["https_snippet"].exists()
assert not snippet.exists()
def test_reload_failure_rolls_back_enable(tmp_path):
_prepare_hostname(tmp_path)
p = _paths(tmp_path)
snippet_dir = tmp_path / "furtka.d"
snippet = snippet_dir / "redirect.caddyfile"
def failing_reload():
raise subprocess.CalledProcessError(1, ["systemctl"], stderr="bad config")
with pytest.raises(https.HttpsError, match="caddy reload failed: bad config"):
https.set_force_https(True, reload_caddy=failing_reload, **p)
# Rollback: since neither snippet existed before, neither exists after.
assert not p["snippet"].exists()
assert not p["https_snippet"].exists()
https.set_force_https(
True, snippet_dir=snippet_dir, snippet=snippet, reload_caddy=failing_reload
)
# Rollback: since snippet didn't exist before, it must not exist after.
assert not snippet.exists()
def test_reload_failure_rolls_back_disable(tmp_path):
_prepare_hostname(tmp_path)
p = _paths(tmp_path)
p["snippet_dir"].mkdir()
p["https_snippet_dir"].mkdir()
original_redirect = "redir https://{host}{uri} permanent\n# marker\n"
original_https = "# old https block\nfurtka.local { tls internal }\n"
p["snippet"].write_text(original_redirect)
p["https_snippet"].write_text(original_https)
snippet_dir = tmp_path / "furtka.d"
snippet_dir.mkdir()
snippet = snippet_dir / "redirect.caddyfile"
original = "redir https://{host}{uri} permanent\n# marker\n"
snippet.write_text(original)
def failing_reload():
raise subprocess.CalledProcessError(1, ["systemctl"], stderr="bad config")
with pytest.raises(https.HttpsError):
https.set_force_https(False, reload_caddy=failing_reload, **p)
# Rollback: both snippets are restored to their exact prior contents.
assert p["snippet"].read_text() == original_redirect
assert p["https_snippet"].read_text() == original_https
https.set_force_https(
False, snippet_dir=snippet_dir, snippet=snippet, reload_caddy=failing_reload
)
# Rollback: snippet is restored to its exact prior contents.
assert snippet.read_text() == original
def test_systemctl_missing_raises_and_rolls_back(tmp_path):
_prepare_hostname(tmp_path)
p = _paths(tmp_path)
snippet_dir = tmp_path / "furtka.d"
snippet = snippet_dir / "redirect.caddyfile"
def missing_systemctl():
raise FileNotFoundError(2, "No such file", "systemctl")
with pytest.raises(https.HttpsError, match="systemctl not available"):
https.set_force_https(True, reload_caddy=missing_systemctl, **p)
assert not p["snippet"].exists()
assert not p["https_snippet"].exists()
https.set_force_https(
True, snippet_dir=snippet_dir, snippet=snippet, reload_caddy=missing_systemctl
)
assert not snippet.exists()
def test_redirect_snippet_content_is_caddy_redir_directive():
# Lock the exact directive. A regression here silently stops the
# redirect from taking effect even though the file-swap looks fine.
assert https.REDIRECT_CONTENT.strip() == "redir https://{host}{uri} permanent"
def test_https_snippet_content_has_tls_internal_and_routes(tmp_path):
# Lock the shape of the opt-in HTTPS listener block. Caddy parses
# this verbatim — changing the shape without updating the test
# risks shipping a silently-broken Caddyfile import.
s = https._https_snippet_content("mybox")
assert "mybox.local, mybox {" in s
assert "\ttls internal" in s
assert "\timport furtka_routes" in s
assert s.endswith("}\n")


@@ -1,177 +0,0 @@
"""Tests for the background app-install runner.
Same shape as test_catalog.py / test_updater.py: fixture reloads the
module with env-overridden paths, dockerops calls are stubbed so nothing
touches a real daemon. Asserts that state transitions happen in the
right order and that exceptions flip the state to "error" with the
message before re-raising.
"""
from __future__ import annotations
import json
from pathlib import Path
import pytest
@pytest.fixture
def runner(tmp_path, monkeypatch):
apps = tmp_path / "apps"
apps.mkdir()
monkeypatch.setenv("FURTKA_APPS_DIR", str(apps))
monkeypatch.setenv("FURTKA_INSTALL_STATE", str(tmp_path / "install-state.json"))
monkeypatch.setenv("FURTKA_INSTALL_LOCK", str(tmp_path / "install.lock"))
import importlib
from furtka import install_runner as r
from furtka import paths as p
importlib.reload(p)
importlib.reload(r)
return r
def _write_installed_app(apps_dir: Path, name: str = "fileshare"):
app = apps_dir / name
app.mkdir()
manifest = {
"name": name,
"display_name": "Fileshare",
"version": "0.1.0",
"description": "Test fixture",
"volumes": ["files"],
"ports": [445],
"icon": "icon.svg",
}
(app / "manifest.json").write_text(json.dumps(manifest))
(app / "docker-compose.yaml").write_text("services: {}\n")
return app
def test_write_and_read_state_round_trip(runner):
runner.write_state("pulling_image", app="jellyfin")
s = runner.read_state()
assert s["stage"] == "pulling_image"
assert s["app"] == "jellyfin"
assert "updated_at" in s
def test_read_state_returns_empty_when_missing(runner):
assert runner.read_state() == {}
def test_read_state_returns_empty_on_junk(runner):
runner.state_path().parent.mkdir(parents=True, exist_ok=True)
runner.state_path().write_text("{not json")
assert runner.read_state() == {}
def test_acquire_lock_prevents_concurrent_runs(runner):
held = runner.acquire_lock()
try:
with pytest.raises(runner.InstallRunnerError, match="in progress"):
runner.acquire_lock()
finally:
held.close()
def test_run_install_happy_path(runner, monkeypatch):
import furtka.dockerops as dockerops
from furtka.paths import apps_dir
_write_installed_app(apps_dir(), "fileshare")
calls = []
monkeypatch.setattr(dockerops, "compose_pull", lambda *a, **k: calls.append(("pull", a)))
monkeypatch.setattr(dockerops, "ensure_volume", lambda name: calls.append(("vol", name)))
monkeypatch.setattr(dockerops, "compose_up", lambda *a, **k: calls.append(("up", a)))
runner.run_install("fileshare")
# Ordering: pull first, then volumes, then up.
assert [c[0] for c in calls] == ["pull", "vol", "up"]
# Exactly the namespaced volume name got created.
assert calls[1] == ("vol", "furtka_fileshare_files")
# Final state is "done" with the manifest version.
s = runner.read_state()
assert s["stage"] == "done"
assert s["app"] == "fileshare"
assert s["version"] == "0.1.0"
def test_run_install_writes_error_on_pull_failure(runner, monkeypatch):
import furtka.dockerops as dockerops
from furtka.paths import apps_dir
_write_installed_app(apps_dir(), "fileshare")
def boom(*a, **k):
raise dockerops.DockerError("pull failed: registry unreachable")
monkeypatch.setattr(dockerops, "compose_pull", boom)
monkeypatch.setattr(dockerops, "ensure_volume", lambda name: None)
monkeypatch.setattr(dockerops, "compose_up", lambda *a, **k: None)
with pytest.raises(dockerops.DockerError):
runner.run_install("fileshare")
s = runner.read_state()
assert s["stage"] == "error"
assert s["app"] == "fileshare"
assert "registry unreachable" in s["error"]
def test_run_install_writes_error_on_up_failure(runner, monkeypatch):
import furtka.dockerops as dockerops
from furtka.paths import apps_dir
_write_installed_app(apps_dir(), "fileshare")
monkeypatch.setattr(dockerops, "compose_pull", lambda *a, **k: None)
monkeypatch.setattr(dockerops, "ensure_volume", lambda name: None)
def boom(*a, **k):
raise dockerops.DockerError("compose up: container refused to start")
monkeypatch.setattr(dockerops, "compose_up", boom)
with pytest.raises(dockerops.DockerError):
runner.run_install("fileshare")
s = runner.read_state()
assert s["stage"] == "error"
assert "refused to start" in s["error"]
def test_run_install_releases_lock_after_done(runner, monkeypatch):
import furtka.dockerops as dockerops
from furtka.paths import apps_dir
_write_installed_app(apps_dir(), "fileshare")
monkeypatch.setattr(dockerops, "compose_pull", lambda *a, **k: None)
monkeypatch.setattr(dockerops, "ensure_volume", lambda name: None)
monkeypatch.setattr(dockerops, "compose_up", lambda *a, **k: None)
runner.run_install("fileshare")
# Lock released — a fresh acquire must succeed.
fh = runner.acquire_lock()
fh.close()
def test_run_install_releases_lock_after_error(runner, monkeypatch):
import furtka.dockerops as dockerops
from furtka.paths import apps_dir
_write_installed_app(apps_dir(), "fileshare")
monkeypatch.setattr(
dockerops, "compose_pull", lambda *a, **k: (_ for _ in ()).throw(dockerops.DockerError("x"))
)
with pytest.raises(dockerops.DockerError):
runner.run_install("fileshare")
fh = runner.acquire_lock()
fh.close()
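The suite above fixes the runner's observable behaviour: state recorded per stage, flipped to "error" with the message on failure, lock released either way. A sketch of that state machine with the side effects injected (intermediate stage names beyond "pulling_image"/"done"/"error" are assumptions):

```python
def run_install(name, *, pull, ensure_volumes, up, write_state):
    """Each stage is recorded BEFORE its side effect runs, and any
    failure flips the state to 'error' with the message before
    re-raising, so the UI never hangs on a stale stage."""
    try:
        write_state("pulling_image", app=name)
        pull(name)
        write_state("creating_volumes", app=name)
        ensure_volumes(name)
        write_state("starting", app=name)
        up(name)
        write_state("done", app=name)
    except Exception as exc:
        write_state("error", app=name, error=str(exc))
        raise

states = []
def record(stage, **extra):
    states.append((stage, extra))

run_install("fileshare", pull=lambda n: None,
            ensure_volumes=lambda n: None, up=lambda n: None,
            write_state=record)
assert [s for s, _ in states] == ["pulling_image", "creating_volumes", "starting", "done"]

states.clear()
def boom(n):
    raise RuntimeError("registry unreachable")
try:
    run_install("fileshare", pull=boom, ensure_volumes=lambda n: None,
                up=lambda n: None, write_state=record)
except RuntimeError:
    pass
assert states[-1][0] == "error"
assert "registry unreachable" in states[-1][1]["error"]
```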


@@ -267,91 +267,3 @@ def test_read_env_values_roundtrip(tmp_path, fake_dirs):
write_env(p, {"A": "plain", "B": "has space", "C": 'has "quote"', "D": ""})
values = read_env_values(p)
assert values == {"A": "plain", "B": "has space", "C": 'has "quote"', "D": ""}
# --- path-type settings ------------------------------------------------------
PATH_MANIFEST = dict(
VALID_MANIFEST,
name="jellyfin",
settings=[
{
"name": "MEDIA_PATH",
"label": "Medienordner",
"type": "path",
"required": True,
}
],
)
OPTIONAL_PATH_MANIFEST = dict(
VALID_MANIFEST,
name="jellyfin",
settings=[{"name": "OPTIONAL_PATH", "label": "Optional", "type": "path", "required": False}],
)
def test_install_with_valid_path_succeeds(tmp_path, fake_dirs):
media = tmp_path / "media"
media.mkdir()
src = _write_app_source(tmp_path, "jellyfin", PATH_MANIFEST)
installer.install_from(src, settings={"MEDIA_PATH": str(media)})
target = apps_dir() / "jellyfin"
assert f"MEDIA_PATH={media}" in (target / ".env").read_text()
def test_install_rejects_nonexistent_path(tmp_path, fake_dirs):
src = _write_app_source(tmp_path, "jellyfin", PATH_MANIFEST)
with pytest.raises(installer.InstallError, match="does not exist"):
installer.install_from(src, settings={"MEDIA_PATH": str(tmp_path / "ghost")})
def test_install_rejects_path_that_is_a_file(tmp_path, fake_dirs):
f = tmp_path / "not-a-dir"
f.write_text("hi")
src = _write_app_source(tmp_path, "jellyfin", PATH_MANIFEST)
with pytest.raises(installer.InstallError, match="is not a directory"):
installer.install_from(src, settings={"MEDIA_PATH": str(f)})
def test_install_rejects_relative_path(tmp_path, fake_dirs):
src = _write_app_source(tmp_path, "jellyfin", PATH_MANIFEST)
with pytest.raises(installer.InstallError, match="absolute path"):
installer.install_from(src, settings={"MEDIA_PATH": "media"})
def test_install_rejects_system_path(tmp_path, fake_dirs):
src = _write_app_source(tmp_path, "jellyfin", PATH_MANIFEST)
with pytest.raises(installer.InstallError, match="system path"):
installer.install_from(src, settings={"MEDIA_PATH": "/etc"})
def test_install_rejects_root_filesystem(tmp_path, fake_dirs):
src = _write_app_source(tmp_path, "jellyfin", PATH_MANIFEST)
with pytest.raises(installer.InstallError, match="system path"):
installer.install_from(src, settings={"MEDIA_PATH": "/"})
def test_install_rejects_deny_list_via_traversal(tmp_path, fake_dirs):
# /mnt/../etc resolves to /etc — must be caught after Path.resolve().
src = _write_app_source(tmp_path, "jellyfin", PATH_MANIFEST)
with pytest.raises(installer.InstallError, match="system path"):
installer.install_from(src, settings={"MEDIA_PATH": "/mnt/../etc"})
def test_install_accepts_empty_optional_path(tmp_path, fake_dirs):
src = _write_app_source(tmp_path, "jellyfin", OPTIONAL_PATH_MANIFEST)
installer.install_from(src, settings={"OPTIONAL_PATH": ""})
target = apps_dir() / "jellyfin"
assert (target / ".env").exists()
def test_update_env_rejects_invalid_path(tmp_path, fake_dirs):
# First install with a valid path.
media = tmp_path / "media"
media.mkdir()
src = _write_app_source(tmp_path, "jellyfin", PATH_MANIFEST)
installer.install_from(src, settings={"MEDIA_PATH": str(media)})
# Then try to update to a bad path.
with pytest.raises(installer.InstallError, match="does not exist"):
installer.update_env("jellyfin", {"MEDIA_PATH": str(tmp_path / "ghost")})


@@ -95,21 +95,6 @@ def test_settings_optional_default_empty(tmp_path):
m = load_manifest(path)
assert m.settings == ()
assert m.description_long == ""
assert m.open_url == ""
def test_open_url_stored_when_present(tmp_path):
payload = dict(VALID_MANIFEST, open_url="smb://{host}/files")
path = _write_app(tmp_path, "fileshare", payload)
m = load_manifest(path)
assert m.open_url == "smb://{host}/files"
def test_open_url_non_string_rejected(tmp_path):
payload = dict(VALID_MANIFEST, open_url=42)
path = _write_app(tmp_path, "fileshare", payload)
with pytest.raises(ManifestError, match="open_url"):
load_manifest(path)
def test_settings_parsed(tmp_path):
@@ -155,27 +140,6 @@ def test_settings_reject_unknown_type(tmp_path):
load_manifest(path)
def test_settings_accept_path_type(tmp_path):
payload = dict(
VALID_MANIFEST,
settings=[
{
"name": "MEDIA_PATH",
"label": "Medienordner",
"description": "Absoluter Pfad zu deinen Medien",
"type": "path",
"required": True,
}
],
)
path = _write_app(tmp_path, "fileshare", payload)
m = load_manifest(path)
assert len(m.settings) == 1
assert m.settings[0].name == "MEDIA_PATH"
assert m.settings[0].type == "path"
assert m.settings[0].required is True
def test_settings_reject_duplicate_name(tmp_path):
bad = dict(
VALID_MANIFEST,


@@ -1,74 +0,0 @@
"""Tests for furtka.passwd — stdlib-only password hashing.
The primary contract: hash/verify roundtrips cleanly, AND the verifier
accepts the werkzeug hash format that 26.11 / 26.12 boxes wrote to
``users.json``. Losing that backward compat would lock out existing
admins after a 26.13+ upgrade.
"""
from __future__ import annotations
from furtka import passwd
def test_hash_roundtrip():
h = passwd.hash_password("hunter2")
assert passwd.verify_password("hunter2", h)
assert not passwd.verify_password("wrong", h)
def test_hash_is_salted():
# Two separate hashes of the same password must diverge.
a = passwd.hash_password("same-pw")
b = passwd.hash_password("same-pw")
assert a != b
assert passwd.verify_password("same-pw", a)
assert passwd.verify_password("same-pw", b)
def test_generated_hash_format():
# Shape is pbkdf2:sha256:<iterations>$<salt>$<hexdigest>
h = passwd.hash_password("x")
parts = h.split("$", 2)
assert len(parts) == 3
method, salt, digest = parts
assert method.startswith("pbkdf2:sha256:")
assert salt
# digest is hex of pbkdf2_hmac sha256 → 64 hex chars
assert len(digest) == 64
assert all(c in "0123456789abcdef" for c in digest)
def test_verify_werkzeug_scrypt_hash():
"""Known werkzeug scrypt hash generated by 26.11 / 26.12 boxes.
Captured live off a .196 test VM after its auth bootstrap:
username=daniel, password=test-admin-pw1
Hash format: scrypt:32768:8:1$<salt>$<hex>
If this regresses, every existing box that upgraded via 26.11 and
set a password gets locked out on the next upgrade.
"""
known = (
"scrypt:32768:8:1$yWZUqJodowt9ieI1$"
"2d1059b3564da7492b4aa3c2be7fff6fef06085e5e1bfd52e897948c58246b7a"
"9603400355b7264f61c4436eba7bf8c947adec3d7a76be03b50efb4227e15a80"
)
assert passwd.verify_password("test-admin-pw1", known)
assert not passwd.verify_password("wrong-password", known)
def test_verify_rejects_malformed_hashes():
# Empty / missing delimiters / unknown method / bad int — all False.
assert not passwd.verify_password("x", "")
assert not passwd.verify_password("x", "nothingspecial")
assert not passwd.verify_password("x", "pbkdf2:sha256:600000") # no $salt$digest
assert not passwd.verify_password("x", "pbkdf2$salt$digest") # missing hash + iter
assert not passwd.verify_password("x", "bcrypt:12$salt$digest") # unsupported algo
assert not passwd.verify_password("x", "pbkdf2:sha256:abc$salt$digest") # bad iter int
def test_verify_rejects_nonstring_inputs():
# Defensive: users.json can be corrupted or have nulls.
assert not passwd.verify_password(None, "pbkdf2:sha256:1000$salt$digest") # type: ignore[arg-type]
assert not passwd.verify_password("x", None) # type: ignore[arg-type]
assert not passwd.verify_password("x", 12345) # type: ignore[arg-type]


@@ -1,108 +0,0 @@
"""Tests for the catalog > bundled resolver."""
from __future__ import annotations
import json
from pathlib import Path
import pytest
def _manifest(name: str = "fileshare") -> dict:
return {
"name": name,
"display_name": "Fileshare",
"version": "0.1.0",
"description": "x",
"volumes": [],
"ports": [],
"icon": "icon.svg",
}
@pytest.fixture
def sources_mod(tmp_path, monkeypatch):
monkeypatch.setenv("FURTKA_CATALOG_DIR", str(tmp_path / "catalog"))
monkeypatch.setenv("FURTKA_BUNDLED_APPS_DIR", str(tmp_path / "bundled"))
import importlib
from furtka import paths as p
from furtka import sources as s
importlib.reload(p)
importlib.reload(s)
return s
def _seed_app(root: Path, name: str, manifest: dict | None = None) -> Path:
folder = root / name
folder.mkdir(parents=True)
(folder / "manifest.json").write_text(json.dumps(manifest or _manifest(name)))
return folder
def test_resolve_app_name_returns_none_when_absent(sources_mod):
assert sources_mod.resolve_app_name("nope") is None
def test_resolve_app_name_prefers_catalog_over_bundled(sources_mod, tmp_path):
_seed_app(tmp_path / "catalog" / "apps", "fileshare")
_seed_app(tmp_path / "bundled", "fileshare")
result = sources_mod.resolve_app_name("fileshare")
assert result is not None
assert result.origin == "catalog"
assert result.path.parent.name == "apps"
assert result.path.parent.parent.name == "catalog"
def test_resolve_app_name_falls_back_to_bundled(sources_mod, tmp_path):
_seed_app(tmp_path / "bundled", "fileshare")
result = sources_mod.resolve_app_name("fileshare")
assert result is not None
assert result.origin == "bundled"
def test_resolve_app_name_ignores_folder_without_manifest(sources_mod, tmp_path):
# Empty folder is not a valid app even if the name matches.
(tmp_path / "catalog" / "apps" / "fileshare").mkdir(parents=True)
_seed_app(tmp_path / "bundled", "fileshare")
result = sources_mod.resolve_app_name("fileshare")
# Catalog entry without manifest is skipped; bundled wins.
assert result.origin == "bundled"
def test_list_available_unions_catalog_and_bundled(sources_mod, tmp_path):
_seed_app(tmp_path / "catalog" / "apps", "fileshare")
_seed_app(tmp_path / "bundled", "otherapp")
names = {s.path.name: s.origin for s in sources_mod.list_available()}
assert names == {"fileshare": "catalog", "otherapp": "bundled"}
def test_list_available_catalog_wins_on_collision(sources_mod, tmp_path):
_seed_app(tmp_path / "catalog" / "apps", "fileshare")
_seed_app(tmp_path / "bundled", "fileshare")
entries = sources_mod.list_available()
assert len(entries) == 1
assert entries[0].origin == "catalog"
def test_list_available_empty_when_neither_exists(sources_mod):
assert sources_mod.list_available() == []
def test_list_available_skips_non_dirs_and_no_manifest(sources_mod, tmp_path):
# A plain file in catalog/apps and an empty dir in bundled — both ignored.
cat_root = tmp_path / "catalog" / "apps"
cat_root.mkdir(parents=True)
(cat_root / "not-a-dir.txt").write_text("x")
(tmp_path / "bundled" / "emptyapp").mkdir(parents=True)
_seed_app(tmp_path / "bundled", "realapp")
entries = sources_mod.list_available()
assert [e.path.name for e in entries] == ["realapp"]


@ -24,9 +24,6 @@ def updater(tmp_path, monkeypatch):
monkeypatch.setenv("FURTKA_LOCK_PATH", str(tmp_path / "update.lock"))
monkeypatch.setenv("FURTKA_CADDYFILE_PATH", str(tmp_path / "etc_caddy" / "Caddyfile"))
monkeypatch.setenv("FURTKA_SYSTEMD_DIR", str(tmp_path / "etc_systemd_system"))
hostname_file = tmp_path / "etc_hostname"
hostname_file.write_text("testbox\n")
monkeypatch.setenv("FURTKA_HOSTNAME_FILE", str(hostname_file))
(tmp_path / "etc_systemd_system").mkdir()
# Reload the module so the path constants pick up the env vars.
import importlib
@ -209,99 +206,6 @@ def test_refresh_caddyfile_noops_if_source_missing(updater, tmp_path):
assert updater._refresh_caddyfile(tmp_path / "does-not-exist") is False
def test_refresh_caddyfile_substitutes_hostname_placeholder(updater, tmp_path):
# Self-update rewrites the shipped Caddyfile against the box's real
# hostname, same substitution the installer does on first boot. Without
# this the named-hostname :443 block ships with a literal
# `__FURTKA_HOSTNAME__` and Caddy refuses to load the config.
src = tmp_path / "src"
src.write_text("__FURTKA_HOSTNAME__.local, __FURTKA_HOSTNAME__ {\n\ttls internal\n}\n")
assert updater._refresh_caddyfile(src) is True
live = updater._CADDYFILE_LIVE.read_text()
assert "testbox.local, testbox {" in live
assert "__FURTKA_HOSTNAME__" not in live
# Second call with the same source is a no-op — rendered content matches.
assert updater._refresh_caddyfile(src) is False
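The behavior this test pins can be sketched as a small render-and-compare helper. A hypothetical standalone version, assuming the real `_refresh_caddyfile` reads the hostname and live path from module constants rather than parameters:

```python
from pathlib import Path

PLACEHOLDER = "__FURTKA_HOSTNAME__"


def refresh_caddyfile(src: Path, live: Path, hostname: str) -> bool:
    # Render the shipped template against the box's real hostname, then
    # write only on change so repeat self-updates are no-ops. Returns
    # whether the live file was rewritten.
    if not src.exists():
        return False
    rendered = src.read_text().replace(PLACEHOLDER, hostname)
    if live.exists() and live.read_text() == rendered:
        return False
    live.write_text(rendered)
    return True
```

The boolean return doubles as a "needs `systemctl reload caddy`" signal in this sketch.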
def test_health_check_treats_4xx_as_healthy(updater, monkeypatch):
"""26.11+ auth makes /api/apps return 401 on unauth requests. If the
health check treated that as "down", every pre-auth-to-auth upgrade
auto-rolls back. Server responding at all is enough signal for the
health check."""
import urllib.error
calls = {"n": 0}
class _FakeResp:
def __init__(self, code):
self.status = code
def __enter__(self):
return self
def __exit__(self, *a):
return False
def raising_401(url, timeout):
calls["n"] += 1
raise urllib.error.HTTPError(url, 401, "Unauthorized", {}, None)
monkeypatch.setattr("urllib.request.urlopen", raising_401)
assert updater._health_check("http://127.0.0.1:7000/api/apps", deadline_s=2.0) is True
# One call was enough — early exit on 4xx, no retry loop.
assert calls["n"] == 1
def test_health_check_rejects_5xx(updater, monkeypatch):
"""500s mean the server is up but broken — that's NOT healthy.
Distinguishes auth refusals (4xx = healthy) from real runtime
errors (5xx = unhealthy, roll back)."""
import urllib.error
def raising_500(url, timeout):
raise urllib.error.HTTPError(url, 500, "Internal Server Error", {}, None)
monkeypatch.setattr("urllib.request.urlopen", raising_500)
assert updater._health_check("http://127.0.0.1:7000/api/apps", deadline_s=1.5) is False
def test_health_check_retries_on_connection_refused(updater, monkeypatch):
"""While furtka-api is still starting, urlopen raises URLError.
The loop must keep polling until the server comes up or deadline."""
import urllib.error
calls = {"n": 0}
def flaky(url, timeout):
calls["n"] += 1
if calls["n"] < 3:
raise urllib.error.URLError("connection refused")
class _Resp:
status = 200
def __enter__(self):
return self
def __exit__(self, *a):
return False
return _Resp()
monkeypatch.setattr("urllib.request.urlopen", flaky)
assert updater._health_check("http://127.0.0.1:7000/api/apps", deadline_s=10.0) is True
assert calls["n"] == 3
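Taken together, the three health-check tests pin a polling loop along these lines. A sketch, not the shipped `_health_check`; the per-request timeout and retry sleep are assumptions:

```python
import time
import urllib.error
import urllib.request


def health_check(url: str, deadline_s: float = 10.0) -> bool:
    # 2xx/3xx and 4xx both count as "up": an auth refusal still proves the
    # server is answering. 5xx means up-but-broken, so fail immediately.
    # Connection errors mean "still starting", so keep polling to deadline.
    deadline = time.monotonic() + deadline_s
    while time.monotonic() < deadline:
        try:
            with urllib.request.urlopen(url, timeout=2) as resp:
                return resp.status < 500
        except urllib.error.HTTPError as e:
            # Must catch HTTPError before URLError (it's a subclass).
            return e.code < 500
        except urllib.error.URLError:
            time.sleep(0.5)
    return False
```

The early `return` on any HTTP status is what makes the 4xx path a single call rather than a retry loop.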
def test_current_hostname_falls_back_when_file_missing(updater, monkeypatch, tmp_path):
monkeypatch.setenv("FURTKA_HOSTNAME_FILE", str(tmp_path / "missing"))
import importlib
importlib.reload(updater)
assert updater._current_hostname() == "furtka"
def test_link_new_units_only_links_missing(updater, tmp_path, monkeypatch):
unit_dir = tmp_path / "assets_systemd"
unit_dir.mkdir()
@ -316,25 +220,17 @@ def test_link_new_units_only_links_missing(updater, tmp_path, monkeypatch):
linked = updater._link_new_units(unit_dir)
assert linked == ["furtka-bar.timer"]
# Two calls for the newly-linked timer: systemctl link + systemctl enable.
# The already-linked service is untouched. Timers need the follow-up
# `enable` so self-updates that introduce new timers don't leave them
# dormant — fresh installs get their enable via the webinstaller.
assert len(seen) == 2
# Only one systemctl link call — for the new timer, not the existing service.
assert len(seen) == 1
assert seen[0][:2] == ["systemctl", "link"]
assert seen[0][2].endswith("furtka-bar.timer")
assert seen[1] == ["systemctl", "enable", "furtka-bar.timer"]
def test_extract_tarball_uses_data_filter_when_available(tmp_path, updater, monkeypatch):
# Confirm we pass filter='data' to extractall on Python 3.12+; fall back
# cleanly on older runtimes. Capture the kwarg via a stub. tarfile lives
# in furtka._release_common after the extraction refactor, so we patch
# that module — updater._extract_tarball delegates there.
from furtka import _release_common as _rc
# cleanly on older runtimes. Capture the kwarg via a stub.
calls = []
real_open = _rc.tarfile.open # capture before monkeypatching
real_open = updater.tarfile.open # capture before monkeypatching
class _Recorder:
def __init__(self, tarball):
@ -359,7 +255,7 @@ def test_extract_tarball_uses_data_filter_when_available(tmp_path, updater, monk
tar = tmp_path / "t.tar.gz"
_make_release_tarball(tar, "26.9-alpha")
monkeypatch.setattr(_rc.tarfile, "open", lambda *a, **kw: _Recorder(tar))
monkeypatch.setattr(updater.tarfile, "open", lambda *a, **kw: _Recorder(tar))
dest = tmp_path / "dest"
updater._extract_tarball(tar, dest)
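The pattern under test is the version-tolerant extraction shim. Roughly, as a sketch of the shape rather than the actual `_extract_tarball`:

```python
import tarfile


def extract_tarball(tarball, dest) -> None:
    # Python 3.12+ (and backport point releases) honour extraction
    # filters: filter="data" rejects absolute paths, ".." traversal,
    # device nodes, etc. Runtimes that predate the filter kwarg raise
    # TypeError on it, so fall back to plain extractall there.
    with tarfile.open(tarball, "r:gz") as tf:
        try:
            tf.extractall(dest, filter="data")
        except TypeError:
            tf.extractall(dest)
```

Catching `TypeError` rather than version-sniffing keeps the shim correct on the point releases that backported the filter.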


@ -31,10 +31,9 @@ ASSETS = REPO_ROOT / "assets"
# (install target path, asset path under furtka/assets/) — only the files we
# still copy bit-for-bit at install time. Scripts + unit files are no longer
# copied; they're reached via /opt/furtka/current and `systemctl link`. The
# Caddyfile is not in this list because it's written with the hostname
# placeholder substituted — see test_post_install_substitutes_hostname_in_caddyfile.
# copied; they're reached via /opt/furtka/current and `systemctl link`.
ASSET_TARGETS = [
("/etc/caddy/Caddyfile", "Caddyfile"),
("/var/lib/furtka/status.json", "www/status.json"),
]
@ -54,7 +53,7 @@ def install_cmds(tmp_path, monkeypatch):
fake = tmp_path / "payload.tar.gz"
fake.write_bytes(b"not a real tarball")
monkeypatch.setattr(app, "RESOURCE_MANAGER_PAYLOAD", fake)
return app._post_install_commands("testhost", "daniel", "test-admin-pw")
return app._post_install_commands("testhost")
@pytest.mark.parametrize("target,asset_relpath", ASSET_TARGETS)
@ -122,47 +121,18 @@ def test_caddyfile_asset_serves_from_current():
assert "root * /var/lib/furtka" in caddy
def _strip_caddy_comments(text: str) -> str:
"""Remove ``#`` comments + blank lines so string-match assertions can
target actual Caddyfile directives, not the leading doc block.
Everything from the first ``#`` on a line to end-of-line is dropped."""
out = []
for line in text.splitlines():
stripped = line.split("#", 1)[0].rstrip()
if stripped:
out.append(stripped)
return "\n".join(out)
def test_caddyfile_serves_http_by_default_https_opt_in():
# 26.15-alpha: HTTPS is opt-in. The default Caddyfile has a :80 block
# and imports /etc/caddy/furtka-https.d/*.caddyfile at top level —
# the /settings HTTPS toggle drops the hostname+tls-internal block
# into that dir when the user explicitly enables HTTPS. Default
# Caddyfile therefore contains no `tls internal` directive anywhere;
# if a future refactor puts it back, every fresh install regresses
# to the 26.14-era BAD_SIGNATURE trap. Strip comments first because
# the doc-block DOES mention `tls internal` in prose.
caddy_full = (ASSETS / "Caddyfile").read_text()
caddy = _strip_caddy_comments(caddy_full)
assert ":80 {" in caddy
assert "tls internal" not in caddy
assert "__FURTKA_HOSTNAME__" not in caddy
assert "import /etc/caddy/furtka-https.d/*.caddyfile" in caddy
# Shared routes still live in a named snippet so the HTTPS toggle's
# snippet can import the same routes without duplication.
assert "(furtka_routes)" in caddy
# Default Caddyfile imports it once (inside :80). The HTTPS snippet,
# when written by the toggle, imports it a second time.
assert caddy.count("import furtka_routes") == 1
def test_caddyfile_disables_caddy_auto_redirects():
# Named-hostname :443 block makes Caddy want to add its own HTTP→HTTPS
# redirect. The /settings toggle is the single source of truth, so the
# built-in has to be off — otherwise the toggle and auto_https race.
def test_caddyfile_serves_both_http_and_https():
# :80 stays so users who haven't installed the CA still reach the box;
# :443 uses Caddy's built-in local CA (tls internal) so users who have
# installed it get the green padlock.
caddy = (ASSETS / "Caddyfile").read_text()
assert "auto_https disable_redirects" in caddy
assert ":80 {" in caddy
assert ":443 {" in caddy
assert "tls internal" in caddy
# Shared routes live in a named snippet to avoid drift between the two
# listeners — both site blocks must import it.
assert "(furtka_routes)" in caddy
assert caddy.count("import furtka_routes") == 2
def test_caddyfile_imports_force_redirect_snippet_dir():
@ -176,41 +146,13 @@ def test_caddyfile_imports_force_redirect_snippet_dir():
def test_caddyfile_exposes_root_ca_download():
# /rootCA.crt is the download handle the UI uses. It must map to the
# Caddy local-CA pki path and set a Content-Disposition so the browser
# treats it as a download rather than trying to render it. Path is the
# real one Caddy uses under XDG_DATA_HOME=/var/lib (see caddy.service
# Environment= directive) — not the /var/lib/caddy/.local/share/caddy/
# path Caddy docs show for non-systemd installs.
# treats it as a download rather than trying to render it.
caddy = (ASSETS / "Caddyfile").read_text()
assert "handle /rootCA.crt" in caddy
assert "/var/lib/caddy/pki/authorities/local" in caddy
assert ".local/share/caddy" not in caddy
assert "/var/lib/caddy/.local/share/caddy/pki/authorities/local" in caddy
assert "attachment; filename=furtka-local-rootCA.crt" in caddy
def test_post_install_writes_caddyfile_without_hostname_placeholder(install_cmds):
# 26.15-alpha: the shipped Caddyfile no longer carries the
# __FURTKA_HOSTNAME__ marker — HTTPS + hostname now live in the
# opt-in snippet written by set_force_https(), not in the base
# Caddyfile. Verify the post-install writes the file as-is (no
# substitution expected) and it has the opt-in import glob.
caddyfile_cmd = next((c for c in install_cmds if " > /etc/caddy/Caddyfile" in c), None)
assert caddyfile_cmd is not None
written_full = _extract_written_content(caddyfile_cmd, "/etc/caddy/Caddyfile")
written = _strip_caddy_comments(written_full)
assert "__FURTKA_HOSTNAME__" not in written
assert "import /etc/caddy/furtka-https.d/*.caddyfile" in written
assert "tls internal" not in written
def test_post_install_creates_https_snippet_dir(install_cmds):
# The top-level HTTPS opt-in snippet dir must exist before Caddy's
# first start — its glob import tolerates an empty directory, but
# not a missing one on older Caddy builds. Parallel guarantee to
# test_post_install_creates_furtka_d_snippet_dir below.
matching = [c for c in install_cmds if "/etc/caddy/furtka-https.d" in c and "install -d" in c]
assert matching, "no install -d command creates /etc/caddy/furtka-https.d"
def test_post_install_creates_furtka_d_snippet_dir(install_cmds):
# Pre-existing installs pick up the import path via updater._refresh_caddyfile,
# but fresh installs never run that — this command is the only guarantee
@ -234,28 +176,3 @@ def test_read_asset_raises_for_missing_file():
def test_assets_dir_resolves_to_repo_tree():
assert app._ASSETS_DIR == ASSETS
def test_post_install_writes_users_json_with_hashed_password(install_cmds):
"""The Furtka-admin users.json is created during the chroot post-install.
Without this, a fresh-install box lands at /login in first-run setup
mode and the user has to go through the browser to set a password,
which defeats the "step-1 password works for everything" design. Also
check that the file is chmod 0600 (the PBKDF2 hash is a secret even
if it's slow to crack).
"""
import json as _json
from werkzeug.security import check_password_hash
users_cmd = next((c for c in install_cmds if " > /var/lib/furtka/users.json" in c), None)
assert users_cmd is not None, "no command writes /var/lib/furtka/users.json"
assert "chmod 600" in users_cmd, "users.json must be chmod 0600"
body = _extract_written_content(users_cmd, "/var/lib/furtka/users.json")
parsed = _json.loads(body)
assert "admin" in parsed
assert parsed["admin"]["username"] == "daniel" # matches fixture
# Hash is a real werkzeug hash, not the plaintext password.
assert parsed["admin"]["hash"] != "test-admin-pw"
assert check_password_hash(parsed["admin"]["hash"], "test-admin-pw")


@ -8,7 +8,6 @@ import os
import re
import subprocess
import sys
from datetime import UTC
from pathlib import Path
from drives import list_scored_devices
@ -16,41 +15,6 @@ from flask import Flask, jsonify, redirect, render_template, request, url_for
app = Flask(__name__)
def _resolve_version() -> str:
"""Resolve the Furtka version to display in the wizard footer.
On the live ISO `iso/build.sh` writes `/opt/furtka/VERSION` at build time
from `pyproject.toml`; that's the authoritative source at runtime. For
local dev runs (pytest, `flask run` outside the ISO) fall back to
reading `pyproject.toml` directly, then to the literal "dev" so the
footer never 500s if both files are missing.
"""
iso_path = Path(__file__).resolve().parent / "VERSION"
for candidate in (iso_path, Path(__file__).resolve().parent.parent / "pyproject.toml"):
try:
text = candidate.read_text(encoding="utf-8")
except (FileNotFoundError, PermissionError, OSError):
continue
if candidate.name == "VERSION":
value = text.strip()
if value:
return value
else:
match = re.search(r'^version\s*=\s*"([^"]+)"', text, re.MULTILINE)
if match:
return match.group(1)
return "dev"
FURTKA_VERSION = _resolve_version()
@app.context_processor
def _inject_version():
return {"furtka_version": FURTKA_VERSION}
LANGUAGES = {
"en": {"locale": "en_US.UTF-8", "label": "English", "keyboard": "us"},
"de": {"locale": "de_DE.UTF-8", "label": "Deutsch", "keyboard": "de"},
@ -264,10 +228,6 @@ _FURTKA_UNITS = (
"furtka-status.service",
"furtka-status.timer",
"furtka-welcome.service",
# Daily apps-catalog pull. Timer drives the service; the .service itself
# is oneshot and also callable ad-hoc via `furtka catalog sync`.
"furtka-catalog-sync.service",
"furtka-catalog-sync.timer",
)
@ -349,35 +309,7 @@ def _furtka_json_cmd(hostname):
)
def _users_json_cmd(username, password):
"""Write /var/lib/furtka/users.json with the admin account hashed.
The core furtka-api reads this file on every login attempt; the
auth.py module treats `admin.username` + `admin.hash` as the only
credential. Hashing happens here in the webinstaller (werkzeug is a
flask transitive dep, so it's already installed in this environment);
the chroot doesn't need pip. Mode 0600 so nobody but root on the
installed box can read the PBKDF2 hash.
"""
from datetime import datetime
from werkzeug.security import generate_password_hash
users = {
"admin": {
"username": username,
"hash": generate_password_hash(password),
"created_at": datetime.now(UTC).isoformat(timespec="seconds"),
}
}
return _write_file_cmd(
"/var/lib/furtka/users.json",
json.dumps(users, indent=2) + "\n",
mode="600",
)
def _post_install_commands(hostname, admin_username, admin_password):
def _post_install_commands(hostname):
# nss-mdns: splice `mdns_minimal [NOTFOUND=return]` before `resolve` on
# the hosts line so `*.local` works from the installed system too. Guarded
# so a re-run (or a future Arch default that already ships mdns) is a
@ -395,28 +327,11 @@ def _post_install_commands(hostname, admin_username, admin_password):
# an empty dir but not a missing one on every Caddy version, so we
# create it up front and stay on the safe side.
"install -d -m 0755 -o root -g root /etc/caddy/furtka.d",
# Parallel dir for the top-level HTTPS-listener snippet, written
# by /api/furtka/https/force (26.15-alpha+) when the user opts
# into HTTPS. Empty by default so fresh installs never generate
# a tls internal cert — that was the 26.14 regression where
# Firefox hit unbypassable SEC_ERROR_BAD_SIGNATURE because
# Caddy's fixed intermediate-CN clashed with any cached trust
# from a previously-reinstalled Furtka box.
"install -d -m 0755 -o root -g root /etc/caddy/furtka-https.d",
# The Caddyfile lives at /etc/caddy/Caddyfile per Caddy's convention
# (systemd unit points there). Content comes from the shipped asset,
# which we copy in at install time so updates that change routing
# need a new release to refresh it.
#
# __FURTKA_HOSTNAME__ is the placeholder the asset carries in place
# of the real hostname — Caddy's `tls internal` needs a named site
# block to issue a leaf cert, and the hostname isn't known until
# the user fills in the form. Self-updates re-apply the same
# substitution against /etc/hostname (see updater._refresh_caddyfile).
_write_file_cmd(
"/etc/caddy/Caddyfile",
_read_asset("Caddyfile").replace("__FURTKA_HOSTNAME__", hostname),
),
_write_file_cmd("/etc/caddy/Caddyfile", _read_asset("Caddyfile")),
# Initial status.json so Caddy doesn't 404 before furtka-status fires.
_write_file_cmd("/var/lib/furtka/status.json", _read_asset("www/status.json")),
nss_sed,
@ -426,12 +341,6 @@ def _post_install_commands(hostname, admin_username, admin_password):
# furtka.json depends on /opt/furtka/current/VERSION, so it has to
# run after the resource-manager commands.
_furtka_json_cmd(hostname),
# Admin account for the Furtka web UI. Hashed here (werkzeug is
# already in scope for the Flask webinstaller) and materialised
# into /var/lib/furtka/users.json at mode 0600 on the target
# partition — the installed core's auth.py picks it up on first
# login.
_users_json_cmd(admin_username, admin_password),
]
@ -490,7 +399,7 @@ def build_archinstall_config(s):
# page, status timer, and welcome banner into place.
"custom_commands": [
f"gpasswd -a {s['username']} docker",
*_post_install_commands(s["hostname"], s["username"], s["password"]),
*_post_install_commands(s["hostname"]),
],
"network_config": {"type": "iso"},
"ssh": True,


@ -1,41 +1,6 @@
import subprocess
def _boot_disk_name():
"""Return the parent disk name of the live-ISO boot media (e.g. "sdb"), or None.
On a normal box `/run/archiso/bootmnt` does not exist and we return None,
leaving the device list untouched. On bare metal booted from USB this is
the stick we booted from; we want to filter it out so the user can't
accidentally pick it as the install target.
"""
try:
result = subprocess.run(
["findmnt", "-no", "SOURCE", "/run/archiso/bootmnt"],
capture_output=True,
text=True,
)
except FileNotFoundError:
return None
if result.returncode != 0:
return None
partition = result.stdout.strip()
if not partition:
return None
try:
parent = subprocess.run(
["lsblk", "-no", "PKNAME", partition],
capture_output=True,
text=True,
)
except FileNotFoundError:
return None
if parent.returncode != 0:
return None
name = parent.stdout.strip().splitlines()[0] if parent.stdout.strip() else ""
return name or None
def _smart_status(device):
try:
result = subprocess.run(
@ -110,14 +75,11 @@ def score_device(device, size_gb):
return get_drive_type_score(device) + get_drive_health(device) + get_size_score(size_gb)
def parse_lsblk_output(output, boot_disk=None):
def parse_lsblk_output(output):
"""Parse `lsblk -dn -o NAME,SIZE,TYPE` output into scored device dicts.
Keeps only TYPE=disk so the live ISO's own squashfs (loop) and the boot
CD-ROM (rom) don't show up as install targets. If `boot_disk` is given,
that disk is also dropped; it's the USB stick the live ISO booted from
on bare metal, where it appears as TYPE=disk and would otherwise be a
valid-looking install target.
CD-ROM (rom) don't show up as install targets.
"""
devices = []
for line in output.strip().split("\n"):
@ -129,8 +91,6 @@ def parse_lsblk_output(output, boot_disk=None):
name, size, dev_type = parts[0], parts[1], parts[2]
if dev_type != "disk":
continue
if boot_disk and name == boot_disk:
continue
device = f"/dev/{name}"
size_gb = parse_size_gb(size)
status = _smart_status(device)
@ -160,7 +120,7 @@ def list_scored_devices():
except subprocess.CalledProcessError as e:
print(f"Error listing devices: {e}")
return []
return parse_lsblk_output(result.stdout, boot_disk=_boot_disk_name())
return parse_lsblk_output(result.stdout)
def main():


@ -30,7 +30,7 @@
<footer class="site-footer">
<div class="container">
<p class="kicker">Furtka {{ furtka_version }} · AGPL-3.0</p>
<p class="kicker">Furtka 26.4-alpha · AGPL-3.0</p>
<p class="kicker"><a href="https://furtka.org" style="color: inherit; text-decoration: none">furtka.org</a></p>
</div>
</footer>


@ -6,8 +6,6 @@
{% block content %}
<h1>Rebooting…</h1>
<p class="lede">The machine is restarting. This page will stop responding in a moment — that's expected.</p>
<p><strong>Remove the USB stick now</strong> — if it's still plugged in when the machine reboots, some BIOS setups will boot into this installer again instead of starting Furtka.</p>
<p class="muted">If the installer does come back anyway, your BIOS is set to boot from USB before the disk. Press the one-time boot menu key at startup (often <kbd>F11</kbd>, <kbd>F12</kbd>, or <kbd>Esc</kbd> — it flashes briefly on screen) and pick the internal disk, or change the boot order in BIOS settings.</p>
<p>When the machine comes back up (~1 minute), open Furtka in your browser:</p>
<p><a href="http://{{ hostname }}.local" class="btn btn-primary">http://{{ hostname }}.local</a></p>
<p class="muted">If that doesn't resolve, your network may not support mDNS — use the IP address shown on the machine's console instead.</p>


@ -19,38 +19,22 @@ Hosted on `forge-runner-01` (Proxmox VM, Ubuntu 24.04). Hugo runs on the VM;
nginx serves the built output from `/var/www/furtka.org`. TLS is terminated by
an upstream openresty reverse proxy — the VM itself only speaks plain HTTP.
### Auto-deploy on push-to-main (default)
`.forgejo/workflows/deploy-site.yml` fires on every push to `main` that touches
`website/**`. The self-hosted runner *is* forge-runner-01, so the whole deploy
collapses to a local rsync into `/srv/furtka-site/` + `hugo --minify` into
`/var/www/furtka.org/`. No SSH hop, no secrets. Runs in under a minute.
The in-CI script is `website/deploy-ci.sh`. Don't invoke it from your dev box —
it assumes it's already on the target host.
### Manual deploy (fallback)
For out-of-band pushes (feature branch, CI outage), deploy from your dev
machine:
```sh
./website/deploy.sh
```
This rsyncs `website/` to `/srv/furtka-site/` on the VM over SSH and runs
`hugo --minify` into `/var/www/furtka.org`. Same end state as the CI path,
just with an SSH hop.
### First-time VM setup
Only needed once, when provisioning a fresh forge-runner VM:
First time only, on the VM:
```sh
ssh forge-runner
sudo /srv/furtka-site/ops/nginx/setup-vm.sh # or copy the script over first
```
From then on, deploy from your dev machine:
```sh
./website/deploy.sh
```
The script rsyncs `website/` to `/srv/furtka-site/` on the VM and runs
`hugo --minify` into `/var/www/furtka.org`.
## Structure
```
@ -64,8 +48,7 @@ layouts/ Custom inline theme — no external theme or framework
index.html Home-only layout with editorial hero
assets/css/main.css Stylesheet (fingerprinted + minified on build)
static/favicon.svg Gate mark in crimson
deploy.sh Manual rsync + remote Hugo build (over SSH, for off-CI pushes)
deploy-ci.sh Local rsync + Hugo build — runs on forge-runner-01 from CI
deploy.sh Rsync + remote Hugo build
```
## Design


@ -6,10 +6,6 @@
--accent: #c03a28;
--accent-hover: #a0301f;
--border: #e4e3dc;
--accent-glow: rgba(192, 58, 40, 0.2);
--card-bg: rgba(247, 246, 243, 0.72);
--card-border: var(--border);
--scene-opacity: 0.18;
--font-sans:
-apple-system, BlinkMacSystemFont, "Segoe UI", Roboto, "Helvetica Neue",
Arial, "Noto Sans", sans-serif;
@ -27,10 +23,6 @@
--accent: #ff6b56;
--accent-hover: #ff8b78;
--border: #232326;
--accent-glow: rgba(255, 107, 86, 0.4);
--card-bg: rgba(23, 23, 26, 0.65);
--card-border: #26262b;
--scene-opacity: 0.34;
}
}
@ -51,25 +43,6 @@ body {
flex-direction: column;
min-height: 100vh;
text-rendering: optimizeLegibility;
isolation: isolate;
}
/* ── Animated background canvas (home only) ─────────────── */
.scene-canvas {
position: fixed;
inset: 0;
width: 100vw;
height: 100vh;
z-index: 0;
pointer-events: none;
}
.site-header,
main.container,
.site-footer {
position: relative;
z-index: 1;
}
.container {
@ -198,36 +171,11 @@ main.container {
.home h1 {
font-family: var(--font-sans);
font-weight: 800;
font-size: clamp(3.5rem, 14vw, 11rem);
line-height: 0.9;
letter-spacing: -0.04em;
font-size: clamp(3.25rem, 10vw, 6.5rem);
line-height: 0.95;
letter-spacing: -0.035em;
margin: 0 0 1.5rem;
color: var(--fg);
background-image: linear-gradient(180deg, var(--fg) 0%, var(--accent) 110%);
-webkit-background-clip: text;
background-clip: text;
-webkit-text-fill-color: transparent;
}
@media (prefers-color-scheme: dark) {
.home h1 {
filter: drop-shadow(0 0 28px var(--accent-glow));
}
.home .lede {
color: #c8c8cc;
}
}
.hero {
min-height: 78vh;
display: flex;
flex-direction: column;
justify-content: center;
padding-block: 4.5rem 3rem;
}
.home .lede {
font-weight: 450;
}
.home .lede {
@ -310,132 +258,3 @@ main.container {
outline-offset: 3px;
border-radius: 2px;
}
/* ── Primary CTA ─────────────────────────────────────────── */
.cta-row { margin-top: 2.5rem; }
.cta {
display: inline-flex;
align-items: center;
gap: 0.55rem;
padding: 1.1rem 2rem;
font-family: var(--font-sans);
font-weight: 600;
font-size: 1.02rem;
letter-spacing: 0.005em;
text-decoration: none;
border-radius: 0.7rem;
transition: transform 180ms, box-shadow 180ms, background 180ms, color 180ms;
}
.cta--primary {
background: linear-gradient(135deg, var(--accent), var(--accent-hover));
color: #fff;
box-shadow: 0 10px 36px var(--accent-glow),
0 0 0 1px color-mix(in srgb, var(--accent) 40%, transparent);
animation: cta-pulse 2.8s ease-in-out infinite;
}
.cta--primary:hover {
transform: translateY(-3px);
box-shadow: 0 18px 52px var(--accent-glow),
0 0 0 1px var(--accent);
animation-play-state: paused;
}
.cta--primary:active { transform: translateY(-1px); }
.cta--primary span { transition: transform 180ms; }
.cta--primary:hover span { transform: translateX(4px); }
@keyframes cta-pulse {
0%, 100% { box-shadow: 0 10px 36px var(--accent-glow),
0 0 0 1px color-mix(in srgb, var(--accent) 40%, transparent); }
50% { box-shadow: 0 14px 48px var(--accent-glow),
0 0 0 1px color-mix(in srgb, var(--accent) 70%, transparent); }
}
@media (prefers-reduced-motion: reduce) {
.cta--primary { animation: none; }
}
/* ── Intro paragraph (home, between hero and feature grids) ─ */
.intro {
max-width: 38rem;
margin: 0 0 4rem;
font-size: 1.15rem;
line-height: 1.55;
color: var(--fg);
}
.intro p { margin: 0 0 1rem; }
.intro p:last-child { margin: 0; }
.intro strong { font-weight: 600; }
/* ── Feature sections (home) ─────────────────────────────── */
.feature-section { margin-block: 4rem; }
.section-eyebrow {
font-family: var(--font-sans);
font-weight: 500;
font-size: 0.72rem;
letter-spacing: 0.14em;
text-transform: uppercase;
color: var(--fg-muted);
margin: 0 0 1.25rem;
}
.feature-grid {
display: grid;
grid-template-columns: repeat(auto-fit, minmax(17rem, 1fr));
gap: 1rem;
}
.feature-card {
background: var(--card-bg);
border: 1px solid var(--card-border);
border-radius: 1rem;
padding: 1.5rem 1.5rem 1.4rem;
-webkit-backdrop-filter: blur(10px);
backdrop-filter: blur(10px);
transition: transform 240ms, border-color 240ms, box-shadow 240ms;
}
.feature-card:hover {
border-color: var(--accent);
box-shadow: 0 10px 32px var(--accent-glow);
transform: translateY(-2px);
}
.feature-card p {
margin: 0;
font-size: 1rem;
line-height: 1.55;
color: var(--fg);
}
.feature-card strong {
font-weight: 600;
color: var(--fg);
}
/* ── Closer prose (home, after feature grids) ────────────── */
.closer {
margin-top: 4rem;
max-width: var(--measure);
}
/* ── Reveal-on-load (hero) and reveal-on-scroll (cards) ──── */
.js .reveal,
.js [data-gsap="card"] {
opacity: 0;
transform: translateY(40px);
will-change: opacity, transform;
}
@media (prefers-reduced-motion: reduce) {
.scene-canvas { display: none; }
.js .reveal,
.js [data-gsap="card"] {
opacity: 1 !important;
transform: none !important;
will-change: auto;
}
}


@ -1,25 +0,0 @@
(function () {
if (window.matchMedia('(prefers-reduced-motion: reduce)').matches) return;
if (!window.gsap || !window.ScrollTrigger || !window.Lenis) return;
gsap.registerPlugin(ScrollTrigger);
const lenis = new Lenis({ lerp: 0.1, smoothWheel: true });
lenis.on('scroll', ScrollTrigger.update);
gsap.ticker.add((time) => { lenis.raf(time * 1000); });
gsap.ticker.lagSmoothing(0);
// Hero stagger — runs once on load.
gsap.to('.hero .reveal', {
y: 0, opacity: 1, duration: 1.1, ease: 'power3.out', stagger: 0.12
});
// Card reveals — batched so cards in the same row come in together.
ScrollTrigger.batch('[data-gsap="card"]', {
start: 'top 90%',
onEnter: (els) => gsap.to(els, {
y: 0, opacity: 1, scale: 1,
duration: 0.9, ease: 'power3.out', stagger: 0.08, overwrite: true
})
});
})();
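One subtlety in the ticker hookup above: `gsap.ticker` reports elapsed time in seconds, while `lenis.raf` expects a millisecond timestamp — hence the `time * 1000`. A sketch of that bridge (the function name is ours, not from the code):

```javascript
// gsap.ticker callbacks receive elapsed time in seconds; Lenis' raf loop
// wants milliseconds, so the bridge multiplies by 1000 before forwarding.
const bridgeTickerToLenis = (lenisRaf) => (timeSeconds) => lenisRaf(timeSeconds * 1000);
```

`gsap.ticker.add(bridgeTickerToLenis(lenis.raf))` would be equivalent to the inline arrow above.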


@ -1,98 +0,0 @@
(function () {
if (window.matchMedia('(prefers-reduced-motion: reduce)').matches) return;
if (!window.WebGLRenderingContext || !window.THREE) return;
const canvas = document.getElementById('scene');
if (!canvas) return;
const root = document.documentElement;
const readVar = (name) => getComputedStyle(root).getPropertyValue(name).trim();
const readOpacity = () => parseFloat(readVar('--scene-opacity')) || 0.18;
const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(
60, window.innerWidth / window.innerHeight, 0.1, 100
);
const renderer = new THREE.WebGLRenderer({ canvas, antialias: true, alpha: true });
renderer.setSize(window.innerWidth, window.innerHeight, false);
renderer.setPixelRatio(Math.min(window.devicePixelRatio || 1, 2));
const geometry = new THREE.TorusKnotGeometry(2.5, 0.4, 130, 20);
const material = new THREE.MeshPhongMaterial({
color: readVar('--accent') || '#c03a28',
wireframe: true,
transparent: true,
opacity: readOpacity()
});
const core = new THREE.Mesh(geometry, material);
scene.add(core);
scene.add(new THREE.AmbientLight(0xffffff, 0.6));
const dir = new THREE.DirectionalLight(0xffffff, 0.8);
dir.position.set(5, 5, 5);
scene.add(dir);
const BASE_Z = 9;
camera.position.z = BASE_Z;
let scrollY = window.scrollY || 0;
window.addEventListener('scroll', () => {
scrollY = window.scrollY || 0;
}, { passive: true });
let baseOpacity = readOpacity();
let running = true;
function tick() {
if (!running) return;
requestAnimationFrame(tick);
// Continuous slow drift.
core.rotation.y += 0.0015;
core.rotation.z += 0.0006;
// Scroll-driven motion: zoom in, scale up, tilt.
const s = Math.min(scrollY, 2000);
camera.position.z = BASE_Z - s * 0.0022;
const scale = 1 + s * 0.00035;
core.scale.set(scale, scale, scale);
core.rotation.x = s * 0.0008;
// Fade past hero so feature cards stay readable.
const vh = window.innerHeight;
const fadeStart = vh * 0.5;
const fadeEnd = vh * 1.4;
const t = Math.max(0, Math.min(1, (scrollY - fadeStart) / (fadeEnd - fadeStart)));
material.opacity = baseOpacity * (1 - t * 0.92);
renderer.render(scene, camera);
}
tick();
window.addEventListener('resize', () => {
camera.aspect = window.innerWidth / window.innerHeight;
camera.updateProjectionMatrix();
renderer.setSize(window.innerWidth, window.innerHeight, false);
});
document.addEventListener('visibilitychange', () => {
if (document.hidden) {
running = false;
} else if (!running) {
running = true;
tick();
}
});
const mql = window.matchMedia('(prefers-color-scheme: dark)');
const updateTheme = () => {
const accent = readVar('--accent');
if (accent) material.color.set(accent);
baseOpacity = readOpacity();
};
if (mql.addEventListener) {
mql.addEventListener('change', updateTheme);
} else if (mql.addListener) {
mql.addListener(updateTheme);
}
})();
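The scroll-driven math in `tick()` is pure arithmetic on `scrollY`, so it can be lifted out and checked in isolation. A sketch with the constants copied from the code above (the helper name and return shape are ours):

```javascript
// Scroll-driven camera/mesh parameters: zoom toward the knot, scale it up,
// and fade it out between 0.5vh and 1.4vh of scroll so cards stay readable.
const BASE_Z = 9;
const sceneParams = (scrollY, viewportHeight, baseOpacity = 0.18) => {
  const s = Math.min(scrollY, 2000); // motion saturates after 2000px
  const fadeStart = viewportHeight * 0.5;
  const fadeEnd = viewportHeight * 1.4;
  const t = Math.max(0, Math.min(1, (scrollY - fadeStart) / (fadeEnd - fadeStart)));
  return {
    cameraZ: BASE_Z - s * 0.0022,          // 9 at the top, 4.6 fully scrolled
    scale: 1 + s * 0.00035,                // 1 at the top, 1.7 fully scrolled
    opacity: baseOpacity * (1 - t * 0.92), // never fades to exactly 0
  };
};
```

Past roughly 1.4 viewport heights of scroll the knot settles at 8% of its base opacity, which is why the feature cards stay legible on top of it.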


@ -1,19 +0,0 @@
# Vendored JavaScript libraries
These minified bundles are checked into the repo so furtka.org has zero
third-party-CDN dependencies at runtime. Pin date: **2026-04-27**.
| File | Version | Source |
|---|---|---|
| `three.min.js` | r128 | https://cdnjs.cloudflare.com/ajax/libs/three.js/r128/three.min.js |
| `gsap.min.js` | 3.12.2 (core only) | https://cdnjs.cloudflare.com/ajax/libs/gsap/3.12.2/gsap.min.js |
| `ScrollTrigger.min.js` | 3.12.2 | https://cdnjs.cloudflare.com/ajax/libs/gsap/3.12.2/ScrollTrigger.min.js |
| `lenis.min.js` | @studio-freight/lenis 1.0.33 | https://unpkg.com/@studio-freight/lenis@1.0.33/dist/lenis.min.js |
All four expose UMD globals (`THREE`, `gsap`, `ScrollTrigger`, `Lenis`).
None are ES modules, so no `js.Build` step is needed — Hugo just fingerprints them.
GSAP "Club" plugins (SplitText, MorphSVG, etc.) are **not** free for commercial use.
Only `gsap` core + `ScrollTrigger` are bundled; both ship under GreenSock's standard no-charge license (not MIT, but free for most commercial use).
To refresh: re-run `curl -sSfL -o <file> <url>` and bump the pin date here.

File diff suppressed because one or more lines are too long (×4 — the vendored minified bundles listed above)


@ -1,33 +1,33 @@
---
title: "Furtka"
description: "Offenes Heimserver-Betriebssystem — einfach genug für alle."
status: "<span class=\"mono\">26.15-alpha</span> — in Arbeit"
# features_today / features_next müssen index-parallel zu content/_index.md bleiben.
intro: |
status: "<span class=\"mono\">26.4-alpha</span> — in Arbeit"
---
**Furtka** ist ein offenes Heimserver-Betriebssystem.
USB-Stick einstecken, durch einen Assistenten klicken, und aus jedem
alten Rechner wird eine private Cloud für den Haushalt — mit eigenen
Apps, eigenem Namen im Netz, eigenen Daten.
Das Ziel ist einfach: **dein Vater soll das einrichten können.**
features_today_label: "Was heute schon geht"
features_today:
- "Vom USB-Stick booten und Furtka auf die Festplatte einrichten"
- "Ein Assistent fragt nach Name, Benutzer und Netzwerk — fertig"
- "Danach: Bedienseite im Browser öffnen"
- "Erste App: **Dateifreigabe im Heimnetz** (alle im WLAN sehen den Ordner)"
- "Apps mit einem Klick installieren und entfernen"
- "Eine installierte App mit einem Klick aktualisieren (holt das neueste Container-Image)"
- "Furtka selbst mit einem Klick aktualisieren — keine Neuinstallation mehr für neue Features"
features_next_label: "Was als Nächstes kommt"
features_next:
- "Apps für Fotos, Dateien, Smarthome, Spiele-Streaming und Medien"
- "Einfachere Sprache im Einrichtungs-Assistenten"
- "Sichere Verbindung im Heimnetz (ohne Warnmeldung im Browser)"
- "Mehrere Server zusammenschalten"
---
### Was als Nächstes kommt
- Apps für Fotos, Dateien, Smarthome, Spiele-Streaming und Medien
- Einfachere Sprache im Einrichtungs-Assistenten
- Sichere Verbindung im Heimnetz (ohne Warnmeldung im Browser)
- Mehrere Server zusammenschalten
### Was heute schon geht
- Vom USB-Stick booten und Furtka auf die Festplatte einrichten
- Ein Assistent fragt nach Name, Benutzer und Netzwerk — fertig
- Danach: Bedienseite im Browser öffnen
- Erste App: **Dateifreigabe im Heimnetz** (alle im WLAN sehen den Ordner)
- Apps mit einem Klick installieren und entfernen
- Eine installierte App mit einem Klick aktualisieren (holt das neueste Container-Image)
- Furtka selbst mit einem Klick aktualisieren — keine Neuinstallation mehr für neue Features
Wir sind zu zweit und bauen das öffentlich, abends und am Wochenende.
Es ist früh.
Mitlesen? Schreib an <hallo@furtka.org>.
Mitlesen? Schreib an <a href="mailto:hallo@furtka.org">hallo@furtka.org</a>.


@ -1,33 +1,33 @@
---
title: "Furtka"
description: "Open-source home server OS — simple enough for everyone."
status: "<span class=\"mono\">26.15-alpha</span> — work in progress"
# Keep features_today / features_next index-aligned with content/_index.de.md.
intro: |
status: "<span class=\"mono\">26.4-alpha</span> — work in progress"
---
**Furtka** is an open-source home server OS.
Boot from USB, click through a wizard, and any old computer
turns into a private cloud for your household — with your own apps,
your own name on the network, your own data.
The goal is simple: **your dad should be able to set this up.**
features_today_label: "What works today"
features_today:
- "Boot from USB stick and install Furtka onto the hard drive"
- "A wizard asks for name, user and network — done"
- "Then: open the control page in your browser"
- "First app: **file sharing on the home network** (everyone on Wi-Fi sees the folder)"
- "Install and remove apps with one click"
- "Update an installed app with one click (pulls the newest container image)"
- "Update Furtka itself with one click — no reinstalling for new features"
features_next_label: "What's coming next"
features_next:
- "Apps for photos, files, smart home, game streaming and media"
- "Plainer language in the setup wizard"
- "Secure connection on your home network (no browser warning)"
- "Linking several servers together"
---
### What's coming next
- Apps for photos, files, smart home, game streaming and media
- Plainer language in the setup wizard
- Secure connection on your home network (no browser warning)
- Linking several servers together
### What works today
- Boot from USB stick and install Furtka onto the hard drive
- A wizard asks for name, user and network — done
- Then: open the control page in your browser
- First app: **file sharing on the home network** (everyone on Wi-Fi sees the folder)
- Install and remove apps with one click
- Update an installed app with one click (pulls the newest container image)
- Update Furtka itself with one click — no reinstalling for new features
We're two people building it in public on evenings and weekends.
It's early.
Want to follow along? Write to <hallo@furtka.org>.
Want to follow along? Write to <a href="mailto:hallo@furtka.org">hallo@furtka.org</a>.


@ -6,7 +6,7 @@ enableRobotsTXT = true
[params]
description = "Open-source home server OS — simple enough for everyone."
version = "26.15-alpha"
version = "26.4-alpha"
contactEmail = "hallo@furtka.org"
[markup.goldmark.renderer]


@ -1,15 +1,13 @@
<!DOCTYPE html>
<html lang="{{ .Site.Language.Lang }}" class="no-js">
<html lang="{{ .Site.Language.Lang }}">
<head>
{{ partial "head.html" . }}
</head>
<body>
{{ if .IsHome }}<canvas id="scene" class="scene-canvas" aria-hidden="true"></canvas>{{ end }}
{{ partial "header.html" . }}
<main class="container">
{{ block "main" . }}{{ end }}
</main>
{{ partial "footer.html" . }}
{{ if .IsHome }}{{ partial "scripts.html" . }}{{ end }}
</body>
</html>


@ -2,46 +2,13 @@
<article class="home">
<header class="hero">
{{ with .Params.status }}
<p class="status-chip reveal">{{ . | safeHTML }}</p>
<p class="status-chip">{{ . | safeHTML }}</p>
{{ end }}
<h1 class="reveal">{{ .Title }}</h1>
{{ with site.Params.description }}<p class="lede reveal">{{ . }}</p>{{ end }}
<p class="cta-row reveal">
<a class="cta cta--primary" href="https://forgejo.sourcegate.online/daniel/furtka/releases">
{{ if eq site.Language.Lang "de" }}Neuestes Release{{ else }}Latest release{{ end }}
<span aria-hidden="true"></span>
</a>
</p>
<h1>{{ .Title }}</h1>
{{ with site.Params.description }}<p class="lede">{{ . }}</p>{{ end }}
</header>
{{ with .Params.intro }}
<section class="intro">{{ . | markdownify }}</section>
{{ end }}
{{ if .Params.features_today }}
<section class="feature-section">
{{ with .Params.features_today_label }}<p class="section-eyebrow">{{ . }}</p>{{ end }}
<div class="feature-grid">
{{ range .Params.features_today }}
<article class="feature-card" data-gsap="card">{{ . | markdownify }}</article>
{{ end }}
<div class="prose">
{{ .Content }}
</div>
</section>
{{ end }}
{{ if .Params.features_next }}
<section class="feature-section">
{{ with .Params.features_next_label }}<p class="section-eyebrow">{{ . }}</p>{{ end }}
<div class="feature-grid">
{{ range .Params.features_next }}
<article class="feature-card" data-gsap="card">{{ . | markdownify }}</article>
{{ end }}
</div>
</section>
{{ end }}
{{ with .Content }}
<section class="prose closer">{{ . }}</section>
{{ end }}
</article>
{{ end }}


@ -1,10 +1,7 @@
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1">
<script>document.documentElement.classList.replace('no-js','js');</script>
<title>{{ if .IsHome }}{{ site.Title }} — {{ site.Params.description }}{{ else }}{{ .Title }} · {{ site.Title }}{{ end }}</title>
<meta name="description" content="{{ with .Params.description }}{{ . }}{{ else }}{{ site.Params.description }}{{ end }}">
<meta name="theme-color" content="#f7f6f3" media="(prefers-color-scheme: light)">
<meta name="theme-color" content="#0d0d0f" media="(prefers-color-scheme: dark)">
<link rel="icon" type="image/svg+xml" href="/favicon.svg">
<meta property="og:site_name" content="{{ site.Title }}">
<meta property="og:title" content="{{ if .IsHome }}{{ site.Title }}{{ else }}{{ .Title }} · {{ site.Title }}{{ end }}">


@ -1,12 +0,0 @@
{{ $three := resources.Get "js/vendor/three.min.js" | fingerprint }}
{{ $gsap := resources.Get "js/vendor/gsap.min.js" | fingerprint }}
{{ $st := resources.Get "js/vendor/ScrollTrigger.min.js" | fingerprint }}
{{ $lenis := resources.Get "js/vendor/lenis.min.js" | fingerprint }}
{{ $scene := resources.Get "js/scene.js" | fingerprint }}
{{ $anim := resources.Get "js/animations.js" | fingerprint }}
<script defer src="{{ $three.RelPermalink }}" integrity="{{ $three.Data.Integrity }}"></script>
<script defer src="{{ $gsap.RelPermalink }}" integrity="{{ $gsap.Data.Integrity }}"></script>
<script defer src="{{ $st.RelPermalink }}" integrity="{{ $st.Data.Integrity }}"></script>
<script defer src="{{ $lenis.RelPermalink }}" integrity="{{ $lenis.Data.Integrity }}"></script>
<script defer src="{{ $scene.RelPermalink }}" integrity="{{ $scene.Data.Integrity }}"></script>
<script defer src="{{ $anim.RelPermalink }}" integrity="{{ $anim.Data.Integrity }}"></script>