Compare commits

18 commits: 26.6-alpha...main

| SHA1 |
|---|
| ee132712be |
| 1193504a1e |
| 65d48c92f8 |
| aa7dea0528 |
| 1cff22658b |
| e68ed279cc |
| 26f0424ae3 |
| 8c1fd1da2b |
| f3cd9e963c |
| 470823b347 |
| 577c2469f7 |
| e8c5317660 |
| 474af8fb2d |
| 7c6da3d051 |
| 04762f5dd1 |
| c7e7c8b1e5 |
| cf93ef44cb |
| 5d8ac63d9f |
54 changed files with 4029 additions and 285 deletions
.forgejo/workflows/release.yml
@@ -1,27 +1,58 @@
 name: Release
 #
 # Tag-triggered: when `git push origin <version>` lands, this builds the
-# release tarball and publishes it + the sha256 + release.json to the
-# Forgejo releases page for that tag. Boxes then POST /api/furtka/update
-# to pull from here.
+# release tarball + the live-installer ISO, and publishes them both to
+# the Forgejo releases page. Boxes POST /api/furtka/update to pull the
+# tarball; fresh-install users download the ISO from the release page.
 #
-# Version tags only (pattern matches CalVer like 26.0-alpha, 26.1, 27.0-beta).
-# Documentation / random tags are ignored by the [0-9]* prefix.
+# Runs on the self-hosted runner because iso/build.sh needs privileged
+# docker access (mkarchiso wants root + loop mounts), and because the
+# ubuntu-latest Forgejo hosted runner doesn't carry the docker socket
+# bind-mount the build needs. Self-hosted adds ~5-7 min to the release
+# (ISO build) but keeps the release page self-contained.
+#
+# Version tags only (CalVer like 26.0-alpha, 26.1, 27.0-beta). Random
+# tags are ignored by the [0-9]* prefix.
 on:
   push:
     tags: ['[0-9]*']

 jobs:
   release:
-    runs-on: ubuntu-latest
+    runs-on: self-hosted
+    timeout-minutes: 45
     steps:
       - uses: actions/checkout@v4
         with:
           fetch-depth: 0 # changelog section extraction needs history

+      - name: Install prerequisites
+        # Alpine runner is near-empty: we need curl + python3 for the
+        # publish script, bash for the build scripts.
+        run: apk add --no-cache curl python3 bash
+
       - name: Build release tarball
         run: ./scripts/build-release-tarball.sh "${GITHUB_REF_NAME}"

+      - name: Build live-installer ISO
+        # Same script build-iso.yml uses on every main push. Re-running
+        # here is intentional: guarantees the ISO matches the exact
+        # tagged commit without coordinating across workflows. Step-level
+        # continue-on-error so an ISO build flake doesn't block the
+        # core tarball (which is what boxes need for self-update) from
+        # publishing.
+        continue-on-error: true
+        id: build_iso
+        run: ./iso/build.sh
+
+      - name: Move ISO into dist/
+        # publish-release.sh attaches dist/furtka-<ver>.iso if present.
+        # Skipped gracefully when the build step above failed.
+        if: steps.build_iso.outcome == 'success'
+        run: |
+          iso=$(ls iso/out/*.iso | head -1)
+          cp "$iso" "dist/furtka-${GITHUB_REF_NAME}.iso"
+
       - name: Publish to Forgejo releases
         env:
           FORGEJO_TOKEN: ${{ secrets.FORGEJO_RELEASE_TOKEN }}
.gitignore (vendored)
@@ -13,3 +13,4 @@ iso/out/
 website/public/
 website/resources/
 website/.hugo_build.lock
+website/hugo_stats.json
CHANGELOG.md
@@ -7,6 +7,246 @@ This project uses calendar versioning: `YY.N-stage` (e.g. `26.0-alpha` = 2026, r

## [Unreleased]
## [26.15-alpha] - 2026-04-21

### Fixed

- **HTTPS is now opt-in; fresh installs no longer hit unbypassable SEC_ERROR_BAD_SIGNATURE.** Every version since 26.5 shipped a Caddyfile with a `__FURTKA_HOSTNAME__.local { tls internal }` site block, so Caddy auto-generated a self-signed root CA + intermediate + leaf on first boot. That worked for first-time-ever users, but every reinstall (or second Furtka box on the same LAN) produced a new CA with the **same intermediate CN** (`Caddy Local Authority - ECC Intermediate` — Caddy hardcodes it). Any browser that had ever trusted an earlier Furtka CA got a cached intermediate with mismatched keys, then Firefox's cert lookup substituted the cached intermediate when validating the new box's leaf → the signature check failed → `SEC_ERROR_BAD_SIGNATURE`, which Firefox has no "Advanced → Accept Risk" bypass for.
  - Removed the hostname site block from the default Caddyfile. Fresh installs serve `:80` only; visiting `https://furtka.local` now yields a clean connection-refused instead of the crypto fault.
  - Added top-level `import /etc/caddy/furtka-https.d/*.caddyfile`. The `/settings` HTTPS toggle (via `furtka.https.set_force_https`) now writes TWO snippets atomically — the top-level hostname + `tls internal` block (enables `:443`) and the `:80`-scoped redirect (forces HTTP → HTTPS) — and removes both on disable. Caddy reloads after the pair-swap; failure rolls both back.
  - Webinstaller creates `/etc/caddy/furtka-https.d/` during post-install alongside the existing `furtka.d/`.
  - `updater._refresh_caddyfile` runs a 26.14 → 26.15 migration: if the box already had the redirect snippet on disk (user had explicitly enabled "Force HTTPS" under the old regime), the migration also writes the new listener snippet so HTTPS keeps working across the upgrade.
- **`status.force_https` now reads the listener snippet, not the redirect snippet.** A lone redirect without a `:443` listener wouldn't actually serve HTTPS, so the listener file is the authoritative "HTTPS is on" signal. The UI on `/settings` sees the correct state as a result.

Known remaining UX wart: a browser that trusted a previous Furtka box still sees `BAD_SIGNATURE` when visiting this box's `https://` after enabling HTTPS here — the fixed intermediate CN is a Caddy-side limitation we can't fix from Furtka. Fresh installs on a browser that never visited another Furtka box work correctly. Workaround: `about:networking#sts` → Forget → clear `cert9.db`.
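The pair-swap in the second sub-bullet is the interesting mechanism. Below is a minimal stdlib-only sketch of how such a write-both-or-roll-back toggle can look; function names, snippet contents, and the `reload_caddy` callback are illustrative, not the actual `furtka.https` code:

```python
import os
import tempfile
from pathlib import Path


def write_atomic(path: Path, content: str) -> None:
    # Temp file in the same directory + os.replace = atomic on POSIX,
    # so Caddy can never read a half-written snippet.
    fd, tmp = tempfile.mkstemp(dir=path.parent)
    try:
        with os.fdopen(fd, "w") as fh:
            fh.write(content)
        os.replace(tmp, path)
    except BaseException:
        os.unlink(tmp)
        raise


def set_force_https(enabled: bool, hostname: str,
                    listener: Path, redirect: Path, reload_caddy) -> None:
    # Snapshot both snippets so a failed Caddy reload rolls the pair back.
    before = {p: p.read_text() if p.exists() else None
              for p in (listener, redirect)}
    try:
        if enabled:
            write_atomic(listener, f"{hostname}.local {{\n\ttls internal\n}}\n")
            write_atomic(redirect, "redir https://{host}{uri} permanent\n")
        else:
            listener.unlink(missing_ok=True)
            redirect.unlink(missing_ok=True)
        reload_caddy()  # e.g. systemctl reload caddy
    except Exception:
        for p, content in before.items():
            if content is None:
                p.unlink(missing_ok=True)
            else:
                write_atomic(p, content)
        raise
```

A failure inside `reload_caddy()` restores whatever pair of files existed before the call, matching the "failure rolls both back" behaviour described above.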
## [26.14-alpha] - 2026-04-21

### Fixed

- **Landing page and `/settings/` were silently bypassing the auth guard.** Since 26.11 shipped login, the Caddyfile only reverse-proxied `/api/*`, `/apps*`, `/login*`, and `/logout*` to Python. Everything else — including `/` and `/settings/` — fell through to Caddy's catch-all `file_server` and was served straight from `assets/www/` without ever hitting the session check. The effect: a LAN visitor saw the box's hostname, IP, Furtka version, and the buttons for Update-now / Reboot / HTTPS-toggle. The API calls those buttons fired were all 401-auth-gated so the actions didn't land, but the information leak and the "looks open" UX were a real bug. Caught in the 26.13 SSH test session when the user noticed Logout only showed up on `/apps`. Now Caddy routes `/` and `/settings*` through Python; a new `_serve_static_www` handler checks the session cookie, redirects to `/login` if unauthed, and reads the HTML from `assets/www/` otherwise. The catch-all still serves `/style.css`, `/rootCA.crt`, and the runtime JSON files publicly — those don't need auth.
- **Logout link now shows on every authed page, not just `/apps`.** The static HTML for `/` and `/settings/` maintained its own nav separate from `_HTML` in `api.py`, so it never got the Logout entry when that was added in 26.11. Both nav bars now include it, plus an inline `doLogout()` that POSTs `/logout` and bounces to `/login`, matching the pattern in `_HTML`.
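The routing decision a handler like `_serve_static_www` has to make can be sketched as follows. This is illustrative only: the cookie name and public paths come from the entries above, everything else is assumed.

```python
from http import cookies

# Publicly served by the catch-all; everything else sits behind the session check.
PUBLIC = {"/style.css", "/rootCA.crt", "/furtka.json"}


def route_static(path: str, cookie_header: str, valid_sessions: set) -> str:
    """Return 'public', 'serve', or 'login_redirect' for a static-page request."""
    if path in PUBLIC:
        return "public"
    jar = cookies.SimpleCookie(cookie_header or "")
    token = jar["furtka_session"].value if "furtka_session" in jar else None
    return "serve" if token in valid_sessions else "login_redirect"
```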
## [26.13-alpha] - 2026-04-21

### Fixed

- **Upgrade path from pre-auth releases actually works.** 26.11-alpha introduced `from werkzeug.security import ...` in `furtka/auth.py`, but werkzeug isn't installed on the target system — core runs as system Python with stdlib only, and `flask>=3.0` in `pyproject.toml` is never pip-installed on the box. Fresh boxes from the 26.11/26.12 ISO without a manually-installed werkzeug crashed on import; boxes upgrading from pre-26.11 got double-broken by that plus the health check below. Replaced the werkzeug dependency with a stdlib-only `furtka/passwd.py` that uses `hashlib.pbkdf2_hmac` for new hashes and parses werkzeug's `scrypt:N:r:p$salt$hex` format for backward compatibility — existing `users.json` files created on the rare boxes that did have werkzeug keep working after this upgrade, no re-setup needed. `from werkzeug.security import ...` is gone from the import chain entirely; `pyproject.toml`'s flask dep stays only for the live-ISO webinstaller.
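A stdlib-only scheme like the one described could look as follows. This is a sketch, not the actual `furtka/passwd.py`: the werkzeug `scrypt:N:r:p$salt$hex` layout is taken from the entry, and the PBKDF2 parameters are assumptions.

```python
import hashlib
import hmac
import os


def hash_password(password: str) -> str:
    # New hashes: stdlib PBKDF2-HMAC-SHA256 in werkzeug's method$salt$hex layout,
    # so one parser handles both old and new strings.
    salt = os.urandom(16).hex()
    dig = hashlib.pbkdf2_hmac("sha256", password.encode(), salt.encode(), 600_000)
    return f"pbkdf2:sha256:600000${salt}${dig.hex()}"


def check_password(stored: str, password: str) -> bool:
    method, salt, expected = stored.split("$", 2)
    if method.startswith("pbkdf2:sha256:"):
        rounds = int(method.rsplit(":", 1)[1])
        dig = hashlib.pbkdf2_hmac("sha256", password.encode(), salt.encode(), rounds)
    elif method.startswith("scrypt:"):
        # Backward compat: parse werkzeug-style scrypt:N:r:p$salt$hex.
        n, r, p = (int(x) for x in method.split(":")[1:])
        dig = hashlib.scrypt(password.encode(), salt=salt.encode(),
                             n=n, r=r, p=p, maxmem=0x8000000)  # 128 MiB headroom
    else:
        return False
    return hmac.compare_digest(dig.hex(), expected)
```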
- **Self-update no longer auto-rolls-back when crossing the auth boundary.** `updater._health_check` pinged `/api/apps` and demanded a 200, which meant every 26.10 → 26.11+ upgrade hit the post-restart check, got a 401 (auth guard), and treated that as "server dead" → rollback. Now any 2xx–4xx response counts as "server alive"; only connection-level failures or 5xx fail the check. A 5xx still triggers the rollback because it means the new process is up but broken.
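The relaxed rule is easy to state in code (a sketch; the real `updater._health_check` internals aren't shown in this diff):

```python
from urllib import error, request


def status_is_alive(code: int) -> bool:
    # 2xx-4xx = the process is up and answering (a 401 from the auth guard
    # is fine); 5xx = up but broken, which should still trigger rollback.
    return 200 <= code < 500


def server_alive(url: str, timeout: float = 5.0) -> bool:
    try:
        with request.urlopen(url, timeout=timeout) as resp:
            return status_is_alive(resp.status)
    except error.HTTPError as exc:
        return status_is_alive(exc.code)  # urllib raises on 4xx/5xx
    except OSError:
        return False  # connection refused / reset / timeout = server dead
```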
- **Install lock closes its race window.** `POST /api/apps/install` used to release the fcntl lock immediately after the sync pre-validation so the systemd-run child could re-acquire it — leaving a tiny gap where a second POST could slip in, pass the lock check, and return 202. Both child processes would start, one would win the in-child lock, the other would die silently. Now the API also reads `install-state.json` and refuses with 409 if the stage is non-terminal (`pulling_image`, `creating_volumes`, `starting_container`). The fcntl lock stays as belt-and-suspenders.
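The 409 guard reads roughly like this (a sketch; the `stage` key in `install-state.json` is assumed from the stage names listed above):

```python
import json
from pathlib import Path

NON_TERMINAL = {"pulling_image", "creating_volumes", "starting_container"}


def install_busy(state_file: Path) -> bool:
    # True -> the API answers 409. The fcntl lock stays in place as
    # belt-and-suspenders; this check covers the gap between the API
    # releasing the lock and the systemd-run child re-acquiring it.
    if not state_file.exists():
        return False
    try:
        stage = json.loads(state_file.read_text()).get("stage")
    except (ValueError, OSError):
        return False  # unreadable state shouldn't wedge installs forever
    return stage in NON_TERMINAL
```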
## [26.12-alpha] - 2026-04-21

### Changed

- **App install goes async with live progress.** `POST /api/apps/install` now returns `202 Accepted` after the synchronous pre-validation (resolving the source, copying files, writing `.env`, placeholder and path checks). The handler dispatches the actual Docker part (`compose pull` → volumes → `compose up`) as a `systemd-run --unit=furtka-install-<app>` background job that writes its phase to `/var/lib/furtka/install-state.json`. New `GET /api/apps/install/status` endpoint for UI polling. The install modal now shows a live "Downloading image…" → "Creating volumes…" → "Starting container…" sequence instead of ~30 seconds of dead "Installing…". The pattern mirrors `/api/catalog/sync/apply` and `/api/furtka/update/apply` one-to-one. New CLI subcommand `furtka app install-bg <name>` (internal, called by the API); `furtka app install` stays synchronous for terminal users. The Reinstall button in the app list also polls the install status and mirrors the phase in its button text.
## [26.11-alpha] - 2026-04-21

### Added

- **Login auth for the Furtka web UI.** Every `/apps`, `/api/*`, `/`, and `/settings/` route now requires a signed-in session. A new `/login` page serves a username/password form; `POST /login` validates against `/var/lib/furtka/users.json` (werkzeug PBKDF2-hashed), sets a `furtka_session` cookie (`HttpOnly`, `SameSite=Strict`, 7-day TTL), and redirects to `/apps`. `POST /logout` revokes the server-side session and clears the cookie. Unauthenticated HTML requests get a 302 to `/login`; unauthenticated API requests get 401 JSON. The old "No authentication on this UI yet" banner is gone; the `/apps` header picks up a `Logout` link instead.
- **First-run setup fallback for upgrade-path boxes.** Boxes upgrading from 26.10-alpha have no `users.json` yet — on the first visit `/login` renders a setup form (username + password + password-confirm) that creates the admin record on submit. Fresh installs skip this: the webinstaller writes `users.json` during the chroot post-install step using the step-1 password, so the first browser visit after boot goes straight to the login form.
- **Caddy proxy routes `/login` and `/logout`.** `assets/Caddyfile` gets two new `handle` blocks in the shared `(furtka_routes)` snippet so both the `:80` block and the `hostname.local, hostname` HTTPS block forward the auth endpoints to the stdlib server on `127.0.0.1:7000`. Without this Caddy would serve a 404 from the static file server.

### Fixed

- `tests/test_installer.py` ruff-format nit — the 26.10-alpha release commit had a misformatted list literal that failed `ruff format --check`. Caught when the Release page on Forgejo showed a red CI badge for the tag.
- `pyproject.toml` version string bumped from the stale 26.8-alpha to 26.11-alpha. The release pipeline uses `GITHUB_REF_NAME` as the source of truth for the artefact name, but having the two agree matters for local dev runs that read `pyproject.toml`.
## [26.10-alpha] - 2026-04-21

### Added

- **Remove-USB-stick hint on the installer's post-install screen.** `webinstaller/templates/install/rebooting.html` now shows a bold "Remove the USB stick now" line before the reboot, plus a muted fallback explaining the BIOS boot-menu keys (F11/F12/Esc) if the machine boots back into the installer anyway. Caught on the first bare-metal test (Medion i5-4gen, 2026-04-21) where the box didn't boot the installed system without manual BIOS-order changes.
- **New `path` setting type for app manifests.** Apps can now declare a setting with `"type": "path"` whose value is an absolute filesystem path on the host; docker-compose bind-mounts it via the usual `.env` substitution (`${MEDIA_PATH}:/media`). Unlocks media/data-heavy apps (Jellyfin, later Paperless/Nextcloud/Immich) where the user points at an existing folder instead of copying everything into a Docker volume. The install form renders path settings as a plain text input with a `/mnt/…` placeholder hint.
- **Server-side path validation.** Both `install_from()` and `update_env()` refuse values that aren't absolute, don't exist, aren't directories, or resolve (after `Path.resolve()`) into a system-path deny-list (`/`, `/etc`, `/root`, `/boot`, `/proc`, `/sys`, `/dev`, `/bin`, `/sbin`, `/usr/bin`, `/usr/sbin`, `/var/lib/furtka`). Catches `/mnt/../etc`-style traversal too. Error messages surface in the existing install/edit modal error line.
## [26.9-alpha] - 2026-04-21

### Fixed

- Landing-page app tiles with an `open_url` now open in a new tab (`target="_blank" rel="noopener"`), matching the Open button behaviour on `/apps`. Without this, clicking "Uptime Kuma" on the home screen replaced Furtka itself with the Kuma admin page. Internal links (the `Manage →` fallback for apps without an `open_url`) still open in the same tab.
- `scripts/publish-release.sh` no longer fails the whole release when the ISO upload hits a Forgejo proxy 504. The core tarball + sha256 + release.json (which running boxes need for self-update) are uploaded first and the ISO is attempted last as a best-effort; a 504 now logs a warning and exits 0 so the release page still publishes. Surfaced by the 26.8-alpha cut: the tarball landed but the ~1 GB ISO upload timed out at the Forgejo reverse proxy.

### Changed

- `furtka app list --json` now mirrors `/api/apps` field-for-field — previously the CLI emitted a slim projection missing `description_long`, `open_url`, and `settings`. Anyone piping the CLI output into jq for automation was seeing an incomplete view.
## [26.8-alpha] - 2026-04-20

### Added

- **Live-installer ISO attached to the Forgejo release page.** `.forgejo/workflows/release.yml` moves to the self-hosted runner, builds both the self-update tarball and the ISO, and `scripts/publish-release.sh` uploads the ISO as a fourth release asset (`furtka-<version>.iso`) alongside the existing tarball + sha256 + release.json. Fresh-install users can now grab the ISO from the release page instead of hunting through `build-iso.yml` artifact retention windows. ISO build step is `continue-on-error` so an ISO flake doesn't hold back the core tarball that running boxes need for self-update.
- **Reboot + Shut down buttons on `/settings`.** Replaces the two "Coming next" placeholders with real actions backed by `POST /api/furtka/power` (`{"action": "reboot" | "poweroff"}`). Handler kicks a delayed `systemd-run --on-active=3s systemctl {reboot|poweroff}` so the HTTP response reaches the browser before the kernel loses network. Each button opens a native confirm dialog first (reboot: "back in ~30 s", shut down: "need to press the physical power button"), then the UI swaps to a status line and — after a reboot — polls `/furtka.json` until the box is back, reloading the page automatically. No auth (same posture as install/remove).
- **Manifest `open_url` field + Open button in `/apps` and on the landing page.** Apps declare a URL template (e.g. `smb://{host}/files` for fileshare, `http://{host}:3001/` for Uptime Kuma); the UI substitutes `{host}` with the current browser's hostname at render time so the link follows however the user reached Furtka (furtka.local, raw IP, a future reverse-proxy hostname). The landing page's hardcoded `if app.name === 'fileshare'` special-case is gone — any app with an `open_url` in its manifest now gets a proper "Open" link. The core seed `apps/fileshare/manifest.json` bumps to v0.1.2 to carry it.
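The `{host}` substitution described for `open_url` is trivial but worth pinning down. On the box it happens in the page's JavaScript at render time; it's shown here in Python purely for illustration:

```python
def render_open_url(template: str, browser_host: str) -> str:
    # {host} follows however the user reached Furtka: furtka.local, a raw
    # IP, or a future reverse-proxy hostname.
    return template.replace("{host}", browser_host)
```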
### Changed

- `.btn` CSS class introduced so an `<a>` rendered as a button lines up with its `<button>` siblings in `.buttons`. Needed because "Open" is a real link (middle-click, copy URL, screen readers) and HTML doesn't let a `<button>` carry an `href`.

### Notes

- `26.7-alpha` was tagged but never published — the tag push didn't trigger `release.yml` (a Forgejo race with the concurrent main push). `26.8-alpha` supersedes it and carries the same content plus the power actions.

## [26.6-alpha] - 2026-04-20

### Added
@@ -114,7 +354,15 @@ First tagged snapshot. Pre-alpha — the installer does not yet boot, but the de
 - **Containers:** Docker + Compose
 - **License:** AGPL-3.0

-[Unreleased]: https://forgejo.sourcegate.online/daniel/furtka/compare/26.6-alpha...HEAD
+[Unreleased]: https://forgejo.sourcegate.online/daniel/furtka/compare/26.15-alpha...HEAD
+[26.15-alpha]: https://forgejo.sourcegate.online/daniel/furtka/releases/tag/26.15-alpha
+[26.14-alpha]: https://forgejo.sourcegate.online/daniel/furtka/releases/tag/26.14-alpha
+[26.13-alpha]: https://forgejo.sourcegate.online/daniel/furtka/releases/tag/26.13-alpha
+[26.12-alpha]: https://forgejo.sourcegate.online/daniel/furtka/releases/tag/26.12-alpha
+[26.11-alpha]: https://forgejo.sourcegate.online/daniel/furtka/releases/tag/26.11-alpha
+[26.10-alpha]: https://forgejo.sourcegate.online/daniel/furtka/releases/tag/26.10-alpha
+[26.9-alpha]: https://forgejo.sourcegate.online/daniel/furtka/releases/tag/26.9-alpha
+[26.8-alpha]: https://forgejo.sourcegate.online/daniel/furtka/releases/tag/26.8-alpha
 [26.6-alpha]: https://forgejo.sourcegate.online/daniel/furtka/releases/tag/26.6-alpha
 [26.5-alpha]: https://forgejo.sourcegate.online/daniel/furtka/releases/tag/26.5-alpha
 [26.4-alpha]: https://forgejo.sourcegate.online/daniel/furtka/releases/tag/26.4-alpha
@@ -108,7 +108,7 @@ None of these nail the "your dad can set this up" experience. The installer wiza
 - [x] **ISO-build in CI** — `.forgejo/workflows/build-iso.yml` runs `iso/build.sh` on every push to `main` and publishes the resulting `.iso` as the `furtka-iso` artifact (14 d retention). Push → green run → download → test.
 - [x] **Forgejo Releases + tag-driven release pipeline** — `.forgejo/workflows/release.yml` fires on `[0-9]*` tags, `scripts/build-release-tarball.sh` packages `furtka/` + `apps/` + `assets/` + a root VERSION, `scripts/publish-release.sh` uploads tarball + sha256 + release.json to the Forgejo releases page. Releases `26.1-alpha`, `26.3-alpha`, and `26.4-alpha` live at [releases](https://forgejo.sourcegate.online/daniel/furtka/releases) (26.2 stalled on a `jq` apt hang, fixed in 26.3). Needs one repo secret (`FORGEJO_RELEASE_TOKEN`).
 - [x] **Walking-skeleton live ISO — end to end** — `iso/build.sh` produces a hybrid BIOS/UEFI Arch-based ISO. It boots in a Proxmox VM, DHCPs onto the LAN, shows a console welcome with `http://proksi.local:5000` (+ IP fallback), serves the Flask webinstaller, runs `archinstall --silent`, reboots the VM via a Reboot-now button, and the installed system logs in and runs `docker ps` without sudo. Build infra in [`iso/`](iso/).
-- [x] **Drop loop/rom devices from drive list** — `webinstaller/drives.py` filters by `lsblk` `TYPE=disk`, so the live squashfs and CD-ROM no longer appear as install targets. Boot-USB filtering on bare metal is still TODO; see [iso/README.md](iso/README.md).
+- [x] **Drop loop/rom devices from drive list** — `webinstaller/drives.py` filters by `lsblk` `TYPE=disk`, so the live squashfs and CD-ROM no longer appear as install targets. The boot USB itself is also filtered: on the live ISO, `findmnt /run/archiso/bootmnt` resolves the boot partition and its parent disk is dropped from the picker.
 - [x] **Rebrand GRUB menu** — `iso/build.sh` rewrites "Arch Linux install medium" → "Furtka Live Installer" across GRUB, syslinux, and systemd-boot configs; default entry marked `(Recommended)`.
 - [x] **Wizard: account form → drive picker → overview → archinstall** — S1 collects hostname/user/password/language with validation, S2 picks boot drive, overview confirms, `/install/run` writes `user_configuration.json` + `user_credentials.json` (0600) and execs `archinstall --silent` against its 4.x schema (`default_layout` disk_config + `!root-password` / `!password` sentinel keys + `custom_commands` for post-install group joins). Install log page polls a JSON endpoint and renders a phase-based progress bar with a collapsible raw log. `FURTKA_DRY_RUN=1` skips the real exec for testing.
 - [x] **mDNS `proksi.local`** — hostname baked into the live ISO, avahi + nss-mdns in the package list, advertised as soon as network-online fires. The HTTPS + local-CA half of this milestone is still open below.
@@ -117,7 +117,7 @@ None of these nail the "your dad can set this up" experience. The installer wiza
 - [x] **On-box web UI uplevel** — shared `/style.css` served by Caddy, persistent top nav, landing page with a "Your apps" tile grid + live status, `/apps` with real per-app icons (inlined SVG from each manifest), new `/settings` page (hostname, IP, version, kernel, RAM, Docker, uptime + Furtka-updates card). `prefers-color-scheme` light/dark.
 - [x] **Versioned on-box layout + Phase 1 per-app updates** — `/opt/furtka/versions/<ver>/` + `current` symlink; `/var/lib/furtka/` for runtime state. `POST /api/apps/<name>/update` runs `docker compose pull` + compares digests + conditional `up -d`.
 - [x] **Phase 2 Furtka self-update** — `/settings` → Check → Update now. Downloads signed tarball (SHA256), stages, atomic symlink flip, reloads Caddy, daemon-reload, restarts services, health-checks the new api with auto-rollback on failure. CLI: `furtka update [--check]` + `furtka rollback`. Validated end-to-end on VM 2026-04-16 (`26.0-alpha` → `26.3-alpha` → rollback → reboot).
-- [x] **Local HTTPS Phase 1** — Caddy `tls internal` on `:443` alongside plain `:80`. Per-box root CA generated on first start, `rootCA.crt` downloadable from `/settings`, per-OS install guide at `/https-install/`. Opt-in "force HTTPS" toggle only exposes itself once the current browser already trusts the cert, so enabling it can't lock the user out. Shipped in 26.4-alpha.
+- [x] **Local HTTPS Phase 1** — Caddy `tls internal` on `:443` is fully opt-in via the `/settings` toggle (26.15-alpha); fresh installs stay HTTP-only so a half-trusted cert chain can't lock the user out. Per-box root CA generated on first enable, `rootCA.crt` downloadable from `/settings`, per-OS install guide at `/https-install/`. The "force HTTPS" sub-toggle still only appears once the current browser already trusts the cert.
 - [x] **Post-build smoke VM on Proxmox** — `.forgejo/workflows/build-iso.yml` hands the freshly built ISO to `scripts/smoke-vm.sh`, which boots it in a throwaway VM on `pollux` (192.168.178.165) and curls the webinstaller on `:5000`. VMID range 9000–9099, last 5 kept. Green end-to-end since 26.4-alpha.
 - [ ] Installer wizard screens S3–S7 — per-device purpose, network, domain, SSL, diagnostic. S5/S6 blocked on managed-gateway DNS infra not yet built.
 - [ ] Local HTTPS Phase 2 — dedicated local CA (not Caddy's `tls internal`), streamlined one-click install across Win/Mac/Linux/Android, and HTTPS on the live-installer wizard (`https://proksi.local:5000`).
@@ -45,7 +45,7 @@ Tag per meaningful milestone, not on a calendar. A milestone is: ISO boots, a wi
 git push origin 26.1-alpha
 ```

-5. **The release workflow does the rest.** `.forgejo/workflows/release.yml` fires on the tag push: `scripts/build-release-tarball.sh` builds the self-update payload (tarball + sha256 + release.json under `dist/`), `scripts/publish-release.sh` uploads all three assets to the Forgejo release page. Pre-release is flagged automatically based on the suffix (`-alpha`/`-beta`/`-rc`).
+5. **The release workflow does the rest.** `.forgejo/workflows/release.yml` fires on the tag push and runs on the self-hosted runner: `scripts/build-release-tarball.sh` builds the self-update payload (tarball + sha256 + release.json under `dist/`), `iso/build.sh` builds the live-installer ISO, `scripts/publish-release.sh` uploads tarball + sha256 + release.json + ISO to the Forgejo release page. Pre-release is flagged automatically based on the suffix (`-alpha`/`-beta`/`-rc`). ISO build is `continue-on-error`: a flaky ISO step doesn't block the core tarball (the thing boxes need for self-update).

 The release workflow needs one secret set at repo **Settings → Secrets → Actions**:

 - `FORGEJO_RELEASE_TOKEN` — a PAT with `write:repository` scope.
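For reference, creating the release corresponds to the Forgejo/Gitea REST endpoint sketched below. The script itself uses curl, so this Python request builder is illustrative only; the pre-release inference mirrors the suffix rule described above.

```python
import json
from urllib import request


def create_release_request(base: str, owner: str, repo: str,
                           tag: str, token: str) -> request.Request:
    # Pre-release is inferred from the tag suffix, as the pipeline does.
    prerelease = any(s in tag for s in ("-alpha", "-beta", "-rc"))
    payload = json.dumps({"tag_name": tag, "name": tag,
                          "prerelease": prerelease}).encode()
    return request.Request(
        f"{base}/api/v1/repos/{owner}/{repo}/releases",
        data=payload,
        headers={"Authorization": f"token {token}",
                 "Content-Type": "application/json"},
        method="POST",
    )
```

Sending it with `urllib.request.urlopen` (or curl, as the script does) returns the release object whose id is then used to attach the tarball, sha256, release.json, and ISO assets.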
@@ -47,10 +47,42 @@ Rules enforced by `furtka/manifest.py`:
 - `volumes` — short names, strings. Namespaced to `furtka_<app>_<short>` at runtime.
 - `ports` — integers. Informational only; compose owns the actual port binding.
 - `settings[].name` — must match `^[A-Z_][A-Z0-9_]*$`. This name becomes both the env-var key and the form-field ID.
-- `settings[].type` — one of `text`, `password`, `number`.
+- `settings[].type` — one of `text`, `password`, `number`, `path`.
 - `settings[].required` — if true, the install refuses when the value is empty.
 - `settings[].default` — optional string. Used to pre-fill the form and the bootstrapped `.env`.

+### Path-type settings (host bind mounts)
+
+Use `"type": "path"` when the app should point at an existing folder on the host — media libraries, document archives, photo backups. The value is written to `.env` like any other setting, and compose consumes it via `${VAR}` substitution as a bind mount.
+
+```json
+{
+  "name": "MEDIA_PATH",
+  "label": "Medienordner",
+  "description": "Absoluter Pfad zu deinem Medien-Ordner, z.B. /mnt/media.",
+  "type": "path",
+  "required": true
+}
+```
+
+```yaml
+services:
+  app:
+    volumes:
+      - ${MEDIA_PATH}:/media:ro
+```
+
+The installer (`install_from` and `update_env`) refuses values that:
+
+- aren't absolute (must start with `/`),
+- don't exist on the host,
+- aren't directories,
+- resolve (after `Path.resolve()`) into a system-path deny-list: `/`, `/etc`, `/root`, `/boot`, `/proc`, `/sys`, `/dev`, `/bin`, `/sbin`, `/usr/bin`, `/usr/sbin`, `/var/lib/furtka`.
+
+Traversal like `/mnt/../etc` is caught too — the deny-list check runs on the resolved path.
+
+Path settings sit alongside manifest-declared volumes. Use `manifest.volumes` for internal state the app owns (databases, caches, config), and path settings for user data the container should mount and — usually — read without owning. Mounting read-only (`:ro`) is a good default for data the app only consumes.
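The validation rules above can be sketched in a few lines (illustrative only; the real checks live in `install_from`/`update_env`, and the error strings here are made up):

```python
from pathlib import Path

DENY = {Path(p) for p in (
    "/", "/etc", "/root", "/boot", "/proc", "/sys", "/dev",
    "/bin", "/sbin", "/usr/bin", "/usr/sbin", "/var/lib/furtka",
)}


def validate_path_setting(value: str):
    """Return an error string, or None when the value is acceptable."""
    p = Path(value)
    if not p.is_absolute():
        return "must be an absolute path"
    r = p.resolve()  # collapses /mnt/../etc -> /etc before the deny check
    if not r.exists():
        return "does not exist"
    if not r.is_dir():
        return "not a directory"
    for d in DENY:
        # "/" only matches exactly, otherwise everything would be denied.
        if r == d or (d != Path("/") and d in r.parents):
            return f"{d} is a protected system path"
    return None
```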
 ## `docker-compose.yaml`

 - File extension is `.yaml`. The compose runner hardcodes this — `.yml` will not be found.
apps/fileshare/manifest.json
@@ -1,12 +1,13 @@
 {
   "name": "fileshare",
   "display_name": "Network Files",
-  "version": "0.1.1",
+  "version": "0.1.2",
   "description": "SMB share for Mac, Windows, Linux and Android devices on the LAN.",
   "description_long": "Alle Geräte im WLAN sehen einen gemeinsamen Ordner. Funktioniert mit Windows, Mac, Linux und Android. Verbinden zu smb://furtka.local — Anmeldung mit dem hier gesetzten Benutzernamen und Passwort.",
   "volumes": ["files"],
   "ports": [445, 139],
   "icon": "icon.svg",
+  "open_url": "smb://{host}/files",
   "settings": [
     {
       "name": "SMB_USER",
@@ -1,25 +1,27 @@
# Serves the Furtka landing page + live JSON on :80 (plain HTTP) and on
# HTTPS via Caddy's built-in `tls internal` — locally-issued certs signed
# by a root CA that Caddy generates on first start and stores under
# /var/lib/caddy/pki/authorities/local/. Static pages are read from
# /opt/furtka/current/ — updates flip the symlink and everything picks up
# the new content without a Caddy restart (a `systemctl reload caddy` is
# still triggered post-swap to flush the file-server's handle cache).
# /apps and /api are reverse-proxied to the resource-manager API
# (furtka serve, bound to 127.0.0.1:7000).
# Serves the Furtka landing page + live JSON on :80 (plain HTTP). HTTPS
# is **opt-in** — Caddy doesn't serve :443 until the user clicks the
# "Enable HTTPS" toggle on /settings, which drops an import snippet into
# /etc/caddy/furtka-https.d/. Default install has NO tls site block →
# Caddy never generates a self-signed CA / leaf cert → no
# SEC_ERROR_BAD_SIGNATURE when a user visits https://furtka.local before
# they've trusted anything. That was the 26.14-era regression this file
# exists to cure: the old Caddyfile always served :443 with a freshly-
# generated cert, and a browser that had ever trusted an older Furtka
# box's CA would reject the new one with an unbypassable bad-sig error.
#
# Hostname templating: __FURTKA_HOSTNAME__ gets substituted with the
# install-time hostname by webinstaller/app.py on first install and by
# furtka.updater._refresh_caddyfile on every self-update. A bare `:443
# { tls internal }` (no hostname) never triggers leaf-cert issuance, so
# SNI-based handshakes die with `SSL_ERROR_INTERNAL_ERROR_ALERT` — the
# 26.4-alpha regression this file exists to cure.
# /apps, /api, /login, /logout, / (home), /settings are reverse-proxied
# to the resource-manager API (furtka serve, bound to 127.0.0.1:7000).
# Static pages are read from /opt/furtka/current/ — updates flip the
# symlink and everything picks up the new content without a Caddy
# restart (a `systemctl reload caddy` is still triggered post-swap to
# flush the file-server's handle cache).
#
# Force-HTTPS: /etc/caddy/furtka.d/*.caddyfile gets imported into the :80
# block. The /api/furtka/https/force endpoint creates or removes
# redirect.caddyfile there to toggle the HTTP→HTTPS redirect, then reloads
# Caddy. Glob imports silently no-op on an empty/missing directory, so the
# toggle-off state is "no file present" rather than "empty file".
# Two snippet dirs, both silently no-op when empty:
#   - /etc/caddy/furtka.d/*.caddyfile → imported inside the :80 block.
#     The HTTPS toggle's "force HTTP→HTTPS redirect" snippet lands here.
#   - /etc/caddy/furtka-https.d/*.caddyfile → imported at TOP LEVEL, so
#     the HTTPS hostname+tls-internal site block can drop in here when
#     the toggle is on. Hostname is substituted at toggle-time.
{
	# Named-hostname :443 blocks would otherwise make Caddy add its own
	# HTTP→HTTPS redirect — but we already serve our own `:80` block and
@@ -35,6 +37,26 @@
	handle /apps* {
		reverse_proxy localhost:7000
	}
	handle /login* {
		reverse_proxy localhost:7000
	}
	handle /logout* {
		reverse_proxy localhost:7000
	}
	# /settings and / — these previously served as static HTML straight
	# from the catch-all file_server, which meant the auth-guard was
	# bypassed: a LAN visitor could see the box's version, IP, and
	# reach the Update-now / Reboot buttons (the API calls behind them
	# are auth-gated, but the page itself rendered without a redirect
	# to /login). Route them through the Python handler which checks
	# the session cookie and either serves the static HTML from
	# assets/www/ or redirects to /login.
	handle /settings* {
		reverse_proxy localhost:7000
	}
	handle / {
		reverse_proxy localhost:7000
	}
	# Runtime JSON lives under /var/lib/furtka/ so it survives self-updates
	# (which only swap /opt/furtka/current).
	handle /status.json {
@@ -50,8 +72,8 @@
		file_server
	}
	# Download the local root CA cert Caddy generated for `tls internal`.
	# Available on both :80 and :443 so users can grab it before they've
	# trusted it. The private key next to it stays 0600 / caddy-owned.
	# Public because users need to grab it before they've trusted it.
	# The private key next to it stays 0600 / caddy-owned.
	handle /rootCA.crt {
		root * /var/lib/caddy/pki/authorities/local
		rewrite * /root.crt
@@ -69,12 +91,12 @@
	}
}

# HTTPS opt-in: when /settings toggles HTTPS on, a snippet gets written
# into /etc/caddy/furtka-https.d/ that adds the hostname+tls-internal
# site block. Empty directory = HTTP-only (default fresh install).
import /etc/caddy/furtka-https.d/*.caddyfile

:80 {
	import /etc/caddy/furtka.d/*.caddyfile
	import furtka_routes
}

__FURTKA_HOSTNAME__.local, __FURTKA_HOSTNAME__ {
	tls internal
	import furtka_routes
}

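The toggle mechanics the Caddyfile comments describe — "on" means a snippet file exists in the glob-imported directory, "off" means the directory is empty — can be sketched in Python. This is a hedged sketch only: the snippet file name `https.caddyfile` and the helper name are assumptions, not the actual endpoint code.

```python
from pathlib import Path

# Doubled braces survive .format(); single-brace {hostname} is filled in.
_SITE_BLOCK = """{hostname}.local, {hostname} {{
\ttls internal
\timport furtka_routes
}}
"""

def set_https_enabled(snippet_dir: Path, hostname: str, enabled: bool) -> None:
    # Toggle-on drops the hostname+tls-internal site block into the
    # glob-imported directory; toggle-off deletes it. Caddy's glob
    # import silently no-ops on an empty dir, so the "off" state is
    # simply "no file present".
    target = snippet_dir / "https.caddyfile"
    if enabled:
        snippet_dir.mkdir(parents=True, exist_ok=True)
        target.write_text(_SITE_BLOCK.format(hostname=hostname))
    elif target.exists():
        target.unlink()
```

A `systemctl reload caddy` would still have to follow either branch for the change to take effect.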
@@ -14,6 +14,7 @@
      <a href="/" aria-current="page">Home</a>
      <a href="/apps">Apps</a>
      <a href="/settings/">Settings</a>
      <a href="#" id="logout-link" onclick="return doLogout(event)">Logout</a>
    </div>
  </nav>
  <header>
@@ -67,6 +68,17 @@
  </main>

  <script>
    // Revoke the cookie server-side and bounce to /login. Shared
    // shape with the _HTML in furtka/api.py so the two logout
    // links behave identically.
    async function doLogout(ev) {
      ev.preventDefault();
      try { await fetch('/logout', { method: 'POST', credentials: 'same-origin' }); }
      catch (e) { /* server may already be down */ }
      window.location.href = '/login';
      return false;
    }

    // Hostname + install metadata — written once at install time to
    // /var/lib/furtka/furtka.json (see _furtka_json_cmd in the installer).
    // Separate from status.json because these facts don't change between
@@ -92,13 +104,17 @@
    }

    function primaryAction(app) {
      // Only fileshare has a direct "open" link today. Future apps with
      // HTTP endpoints would surface a URL here; everything else falls
      // back to the /apps manage page.
      if (app.name === 'fileshare' && HOSTNAME) {
        return { href: `smb://${HOSTNAME}.local/files`, label: 'Open files' };
      // open_url is a manifest-declared template with a `{host}`
      // placeholder — substituted against the current browser's
      // hostname so smb://host/files and http://host:3001/ both
      // follow however the user reached Furtka (furtka.local, raw
      // IP, a future reverse-proxy hostname). Apps without a
      // frontend fall back to /apps for management.
      if (app.open_url) {
        const host = HOSTNAME || location.hostname;
        return { href: app.open_url.replace('{host}', host), label: 'Open', external: true };
      }
      return { href: '/apps', label: 'Manage →' };
      return { href: '/apps', label: 'Manage →', external: false };
    }

    async function renderApps() {
@@ -115,8 +131,9 @@
      }
      target.innerHTML = apps.map(a => {
        const icon = a.icon_svg || FALLBACK_ICON;
        const { href, label } = primaryAction(a);
        return `<a class="app-tile" href="${esc(href)}">
        const { href, label, external } = primaryAction(a);
        const tgt = external ? ' target="_blank" rel="noopener"' : '';
        return `<a class="app-tile" href="${esc(href)}"${tgt}>
          <div class="icon">${icon}</div>
          <span class="name">${esc(a.display_name || a.name)}</span>
          <span class="cta">${esc(label)}</span>

@@ -14,6 +14,7 @@
      <a href="/">Home</a>
      <a href="/apps">Apps</a>
      <a href="/settings/" aria-current="page">Settings</a>
      <a href="#" id="logout-link" onclick="return doLogout(event)">Logout</a>
    </div>
  </nav>

@@ -89,12 +90,25 @@
      </div>
    </section>

    <section>
      <h2>Power</h2>
      <div class="card">
        <p class="lede">
          Reboot or shut down the whole Furtka box. Takes a few seconds to
          finish; the UI will reconnect itself after a reboot.
        </p>
        <div class="power-actions">
          <button type="button" id="power-reboot" class="secondary">Reboot</button>
          <button type="button" id="power-poweroff" class="danger">Shut down</button>
        </div>
        <p id="power-status" class="hint"></p>
      </div>
    </section>

    <section>
      <h2>Coming next</h2>
      <div class="coming">
        <p class="hint">Controls we're building — follow progress on <a href="https://furtka.org">furtka.org</a>.</p>
        <a href="https://furtka.org/#planned">Reboot</a>
        <a href="https://furtka.org/#planned">Shut down</a>
        <a href="https://furtka.org/#planned">Change hostname</a>
        <a href="https://furtka.org/#planned">Backup</a>
        <a href="https://furtka.org/#planned">User accounts</a>
@@ -108,6 +122,15 @@
  </main>

  <script>
    // Logout button in the nav — same shape as /apps and / pages.
    async function doLogout(ev) {
      ev.preventDefault();
      try { await fetch('/logout', { method: 'POST', credentials: 'same-origin' }); }
      catch (e) { /* server may already be down */ }
      window.location.href = '/login';
      return false;
    }

    async function refresh() {
      try {
        const r = await fetch('/status.json', { cache: 'no-store' });
@@ -340,6 +363,85 @@
        /* keep polling; restart blip expected */
      }
    }

    // Power buttons: confirm, POST, then swap the whole card into a
    // "going down" state so the user doesn't keep clicking. After a
    // reboot we try to reconnect after ~45s; for shutdown we just
    // tell the user the box is off — no auto-reconnect attempt.
    const powerStatusEl = document.getElementById('power-status');
    const rebootBtn = document.getElementById('power-reboot');
    const poweroffBtn = document.getElementById('power-poweroff');

    function setPowerStatus(msg, tone = 'muted') {
      powerStatusEl.textContent = msg;
      powerStatusEl.style.color =
        tone === 'error' ? 'var(--danger)' : 'var(--muted)';
    }

    async function triggerPower(action, confirmMsg, inflightLabel) {
      if (!confirm(confirmMsg)) return;
      rebootBtn.disabled = true;
      poweroffBtn.disabled = true;
      setPowerStatus(inflightLabel);
      try {
        const r = await fetch('/api/furtka/power', {
          method: 'POST',
          headers: { 'Content-Type': 'application/json' },
          body: JSON.stringify({ action }),
        });
        if (!r.ok) {
          const data = await r.json().catch(() => ({}));
          setPowerStatus(data.error || `HTTP ${r.status}`, 'error');
          rebootBtn.disabled = false;
          poweroffBtn.disabled = false;
          return;
        }
        if (action === 'reboot') {
          setPowerStatus('Rebooting… this page will reload when the box is back.');
          // Try reconnecting after a generous delay. archinstall
          // + boot + services typically takes 30–45 s; give it 30
          // before the first poke so we don't just spin against
          // a down kernel.
          setTimeout(pollForReconnect, 30000);
        } else {
          setPowerStatus(
            'Shutdown scheduled. Press the physical power button to turn it back on.'
          );
        }
      } catch (e) {
        setPowerStatus(`Network error: ${e.message}`, 'error');
        rebootBtn.disabled = false;
        poweroffBtn.disabled = false;
      }
    }

    async function pollForReconnect() {
      // Fetch a tiny static file; when it comes back 200 the box is up.
      try {
        const r = await fetch('/furtka.json', { cache: 'no-store' });
        if (r.ok) {
          setPowerStatus('Back up — reloading…');
          setTimeout(() => location.reload(), 1500);
          return;
        }
      } catch (e) { /* still down */ }
      setTimeout(pollForReconnect, 3000);
    }

    rebootBtn.addEventListener('click', () =>
      triggerPower(
        'reboot',
        "Wirklich neu starten? Die Box ist für ~30 Sekunden nicht erreichbar.",
        'Rebooting…'
      )
    );
    poweroffBtn.addEventListener('click', () =>
      triggerPower(
        'poweroff',
        "Wirklich ausschalten? Du kannst die Box erst wieder starten, wenn du den physischen Power-Knopf drückst.",
        'Shutting down…'
      )
    );
  </script>
</body>
</html>

@@ -198,7 +198,7 @@ h2 {
  flex-wrap: wrap;
  justify-content: flex-end;
}
button {
button, .btn {
  background: var(--accent);
  border: none;
  color: var(--bg);
@@ -209,15 +209,21 @@ button {
  white-space: nowrap;
  font-size: 0.9rem;
  font-family: inherit;
  /* Anchor rendered-as-button: strip underline + keep the button's
     rectangular hit area. `display: inline-flex` so an <a class="btn">
     lines up vertically with its <button> siblings in .buttons. */
  text-decoration: none;
  display: inline-flex;
  align-items: center;
}
button.secondary {
button.secondary, .btn.secondary {
  background: var(--card);
  color: var(--fg);
  border: 1px solid var(--border);
}
button.danger { background: var(--danger); color: #fff; }
button:disabled { opacity: 0.5; cursor: wait; }
button:focus-visible { outline: none; box-shadow: var(--ring); }
button:focus-visible, .btn:focus-visible { outline: none; box-shadow: var(--ring); }
.empty { color: var(--muted); font-style: italic; padding: 0.5rem 0; }
.catalog-row {
  display: flex;
@@ -304,7 +310,8 @@ details.log-details[open] > summary { color: var(--fg); }
}
.field input:focus { outline: 2px solid var(--accent); outline-offset: -1px; }
.field .req { color: var(--danger); margin-left: 0.25rem; }
.modal .error {
.modal .error,
.login-wrap .error {
  background: var(--warn);
  color: var(--warn-fg);
  padding: 0.5rem 0.75rem;
@@ -313,7 +320,15 @@ details.log-details[open] > summary { color: var(--fg); }
  font-size: 0.9rem;
  display: none;
}
.modal .error.show { display: block; }
.modal .error.show,
.login-wrap .error.show { display: block; }

/* Login + first-run setup page. Shares .wrap's max-width so the form
   sits in the same column the rest of the app uses, just without the
   Home/Apps/Settings nav. A bit of top padding so the H1 isn't glued
   to the viewport edge. */
.login-wrap { padding-top: 3rem; }
.login-wrap .actions { margin-top: 0.5rem; }
.modal-actions {
  display: flex;
  justify-content: flex-end;
@@ -323,7 +338,8 @@ details.log-details[open] > summary { color: var(--fg); }

/* Row of buttons beneath a card — used by the Furtka updates card on
   /settings. Left-aligned, wraps on narrow screens. */
.update-actions {
.update-actions,
.power-actions {
  display: flex;
  gap: 0.5rem;
  flex-wrap: wrap;

551 furtka/api.py
@@ -2,22 +2,28 @@
# its lines hurts readability and the rendered output is what matters here.
"""Tiny HTTP API + management UI for the Furtka resource manager.

Single stdlib http.server process, no Flask/no third-party deps so we don't
have to pip-install anything on the target. Caddy reverse-proxies /apps and
/api from :80 to here.
Single stdlib http.server process, served behind Caddy (reverse-proxies
/apps, /api, /login and /logout from :80 to here).

Security: NO AUTH. Bound to 127.0.0.1 by default; the Caddy proxy makes it
LAN-reachable. Anyone on the LAN can install/remove apps. The UI shouts this
out at the top. Auth lands when Authentik does.
Security: single-admin password login, cookie-session, werkzeug-hashed
password stored at /var/lib/furtka/users.json (0600). Sessions live in
memory — `systemctl restart furtka-api` invalidates everyone. Fresh
installs pre-populate users.json from the webinstaller step-1 password;
upgrades from pre-auth releases fall into a first-run setup form at
/login where the admin password is created from the browser. Authentik
integration remains the long-term plan; this is the pragmatic alpha
stopgap.
"""

import json
import re
import time
from http.cookies import SimpleCookie
from http.server import BaseHTTPRequestHandler, HTTPServer

from furtka import dockerops, installer, reconciler, sources
from furtka import auth, dockerops, install_runner, installer, reconciler, sources
from furtka.manifest import ManifestError, load_manifest
from furtka.paths import apps_dir
from furtka.paths import apps_dir, static_www_dir
from furtka.scanner import scan

_ICON_MAX_BYTES = 16 * 1024
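The docstring above leans on werkzeug's salted password hashing for users.json. A minimal PBKDF2 stand-in showing the same store-and-verify shape (the record format here is illustrative, not werkzeug's exact output):

```python
import hashlib
import hmac
import secrets

def hash_password(password: str, iterations: int = 260_000) -> str:
    # Same shape as werkzeug's generate_password_hash: method tag,
    # random salt, and digest packed into one printable record.
    salt = secrets.token_hex(16)
    dk = hashlib.pbkdf2_hmac("sha256", password.encode(), salt.encode(), iterations)
    return f"pbkdf2:sha256:{iterations}${salt}${dk.hex()}"

def check_password(record: str, candidate: str) -> bool:
    method, salt, digest = record.split("$")
    iterations = int(method.rsplit(":", 1)[1])
    dk = hashlib.pbkdf2_hmac("sha256", candidate.encode(), salt.encode(), iterations)
    # Constant-time compare so the check doesn't leak prefix matches.
    return hmac.compare_digest(dk.hex(), digest)
```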
@@ -77,12 +83,12 @@ _HTML = """<!DOCTYPE html>
      <a href="/">Home</a>
      <a href="/apps" aria-current="page">Apps</a>
      <a href="/settings/">Settings</a>
      <a href="#" id="logout-link" onclick="return doLogout(event)">Logout</a>
    </div>
  </nav>

  <h1>Furtka Apps</h1>
  <p class="lede">Install or remove resource-manager apps on this Furtka box.</p>
  <div class="warn">No authentication on this UI yet. Anyone on your LAN can install or remove apps. Don't expose this to the wider internet.</div>

  <h2>Installed</h2>
  <div id="installed"></div>
@@ -120,6 +126,15 @@ function esc(s) {
  return d.innerHTML;
}

async function doLogout(ev) {
  ev.preventDefault();
  try {
    await fetch('/logout', { method: 'POST', credentials: 'same-origin' });
  } catch (e) { /* best-effort — server may already be down */ }
  window.location.href = '/login';
  return false;
}

// Fallback when an app doesn't ship a parseable icon.svg. Simple
// stroked folder — currentColor so the tile's accent tint applies.
const FALLBACK_ICON = '<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="1.5" stroke-linecap="round" stroke-linejoin="round"><path d="M3 7v12a2 2 0 0 0 2 2h14a2 2 0 0 0 2-2V9a2 2 0 0 0-2-2h-7l-2-2H5a2 2 0 0 0-2 2z"/></svg>';
@@ -173,7 +188,9 @@ async function openSettingsDialog(name, action) {
  modal.form.innerHTML = data.settings.map(s => {
    const id = `field-${esc(s.name)}`;
    const value = action === 'edit' && s.type === 'password' ? '' : esc(s.value || '');
    const placeholder = action === 'edit' && s.type === 'password' ? 'Leave blank to keep current' : '';
    const placeholder = action === 'edit' && s.type === 'password' ? 'Leave blank to keep current'
      : s.type === 'path' ? '/mnt/…'
      : '';
    return `
      <div class="field">
        <label for="${id}">${esc(s.label)}${s.required ? '<span class="req">*</span>' : ''}</label>
@@ -197,6 +214,51 @@ async function openSettingsDialog(name, action) {

modal.submit.addEventListener('click', submitModal);

// Install progress phases written by the background job's state file.
// Mirrors furtka/install_runner.py stage strings. Unknown stages fall
// back to a neutral "Installing…" so a future phase rename doesn't
// leave the modal button blank.
const INSTALL_STAGE_LABELS = {
  'pulling_image': 'Image wird heruntergeladen…',
  'creating_volumes': 'Speicherbereiche werden erstellt…',
  'starting_container': 'Container wird gestartet…',
  'done': 'Fertig',
};

async function pollInstallStatus(original) {
  // Two-minute ceiling: Jellyfin over a slow DSL line can take ~90s
  // just on the image pull. Beyond that something's stuck — the
  // background job is still running in systemd, but the UI gives up
  // on the modal and lets the user close it.
  const deadline = Date.now() + 120000;
  while (Date.now() < deadline) {
    await new Promise(res => setTimeout(res, 1500));
    let s = {};
    try {
      s = await fetch('/api/apps/install/status').then(r => r.json());
    } catch (e) { /* transient; keep polling */ }
    const stage = s.stage || '';
    modal.submit.textContent = INSTALL_STAGE_LABELS[stage] || 'Installing…';
    if (stage === 'done') {
      closeModal();
      await refresh();
      return;
    }
    if (stage === 'error') {
      modal.error.textContent = s.error || 'Install failed';
      modal.error.classList.add('show');
      modal.submit.disabled = false;
      modal.submit.textContent = original;
      return;
    }
  }
  // Timed out waiting for a terminal state — don't lie to the user.
  modal.error.textContent = 'Installation is taking longer than expected. Check /settings for the background job status.';
  modal.error.classList.add('show');
  modal.submit.disabled = false;
  modal.submit.textContent = original;
}

async function submitModal() {
  if (!modal.current) return;
  const { name, action } = modal.current;
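The fallback the comment above describes — unknown or renamed stages render as a neutral label instead of a blank button — is a dict lookup with a default. The same mapping, as a Python sketch:

```python
# Stage strings mirror furtka/install_runner.py; labels are the app's
# own German UI copy, kept verbatim from the frontend table.
INSTALL_STAGE_LABELS = {
    "pulling_image": "Image wird heruntergeladen…",
    "creating_volumes": "Speicherbereiche werden erstellt…",
    "starting_container": "Container wird gestartet…",
    "done": "Fertig",
}

def stage_label(stage: str) -> str:
    # Unknown stages fall back to a neutral label so a future phase
    # rename never leaves the modal button empty.
    return INSTALL_STAGE_LABELS.get(stage, "Installing…")
```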
@@ -230,6 +292,13 @@ async function submitModal() {
    modal.submit.textContent = original;
    return;
  }
  // Install dispatched a background job — poll until terminal. The
  // edit path stays synchronous (settings updates are fast: env write
  // + reconcile, no image pull).
  if (action === 'install' && r.status === 202) {
    await pollInstallStatus(original);
    return;
  }
  closeModal();
  await refresh();
} catch (e) {
@@ -248,6 +317,14 @@ async function refresh() {
  document.getElementById('installed').innerHTML = installed.length
    ? installed.map(a => {
      const hasSettings = a.has_settings;
      const openHref = a.open_url ? a.open_url.replace('{host}', location.hostname) : '';
      // Plain <a> rendered as a button so it behaves like a real link
      // (middle-click, right-click "copy link", screen readers) instead
      // of a JS onclick. Most installed apps will want this — fileshare
      // deep-links to smb://, Kuma to http://host:3001/.
      const openBtn = openHref
        ? `<a class="btn" href="${esc(openHref)}" target="_blank" rel="noopener">Open</a>`
        : '';
      return `
        <div class="app">
          <div class="left">
@@ -258,6 +335,7 @@ async function refresh() {
          </div>
        </div>
        <div class="buttons">
          ${openBtn}
          ${hasSettings ? `<button data-op="edit" data-name="${esc(a.name)}">Settings</button>` : ''}
          <button class="secondary" data-op="update" data-name="${esc(a.name)}">Update</button>
          <button class="secondary" data-op="reinstall" data-name="${esc(a.name)}">Reinstall</button>
@@ -313,10 +391,24 @@ async function handleButton(op, name, btn) {
        : ' — already up to date';
    }
    document.getElementById('log').textContent = header + '\\n' + JSON.stringify(data, null, 2);
    // Reinstall dispatches an async install the same way the modal does
    // — follow the background job on the button label until terminal.
    if (op === 'reinstall' && r.status === 202) {
      const deadline = Date.now() + 120000;
      while (Date.now() < deadline) {
        await new Promise(res => setTimeout(res, 1500));
        let s = {};
        try { s = await fetch('/api/apps/install/status').then(r => r.json()); } catch (e) {}
        const stage = s.stage || '';
        btn.textContent = INSTALL_STAGE_LABELS[stage] || 'Reinstalling…';
        if (stage === 'done' || stage === 'error') break;
      }
    }
  } catch (e) {
    document.getElementById('log').textContent = `[${op} ${name}] network error: ${e.message}`;
  }
  btn.textContent = original;
  btn.disabled = false;
  await refresh();
}

@@ -376,6 +468,120 @@ refreshCatalog();
"""


# Login / first-run setup page. Rendered standalone (no main-UI chrome) so
# an unauthenticated visitor never gets a glimpse of the app list. Reuses
# /style.css for the look — the page is just a form + optional error line.
# The template has a {{ SETUP }} marker the server flips on/off depending
# on whether users.json exists yet (first-run vs. normal login).
_HTML_LOGIN = """<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Furtka · {{ TITLE }}</title>
<meta name="viewport" content="width=device-width,initial-scale=1">
<link rel="stylesheet" href="/style.css">
</head>
<body>
<main class="wrap login-wrap">
  <h1>{{ HEADING }}</h1>
  <p class="lede">{{ LEDE }}</p>
  <form id="login-form" onsubmit="return doLogin(event)">
    <div class="field">
      <label for="username">Username</label>
      <input id="username" name="username" type="text" autocomplete="username" required value="{{ DEFAULT_USERNAME }}" autofocus>
    </div>
    <div class="field">
      <label for="password">Password</label>
      <input id="password" name="password" type="password" autocomplete="{{ PWD_AUTOCOMPLETE }}" required minlength="8">
    </div>
    {{ PASSWORD2_FIELD }}
    <div id="login-error" class="error"></div>
    <div class="actions">
      <button type="submit" id="login-submit">{{ SUBMIT_LABEL }}</button>
    </div>
  </form>
</main>
<script>
const SETUP = {{ SETUP_JSON }};
const errBox = document.getElementById('login-error');
async function doLogin(ev) {
  ev.preventDefault();
  errBox.classList.remove('show');
  errBox.textContent = '';
  const btn = document.getElementById('login-submit');
  btn.disabled = true;
  const body = {
    username: document.getElementById('username').value,
    password: document.getElementById('password').value,
  };
  if (SETUP) body.password2 = document.getElementById('password2').value;
  try {
    const r = await fetch('/login', {
      method: 'POST',
      headers: {'Content-Type': 'application/json'},
      credentials: 'same-origin',
      body: JSON.stringify(body),
    });
    if (r.ok) {
      window.location.href = '/apps';
      return false;
    }
    const data = await r.json().catch(() => ({error: 'HTTP ' + r.status}));
    errBox.textContent = data.error || 'Login failed';
    errBox.classList.add('show');
  } catch (e) {
    errBox.textContent = 'Network error — is the box reachable?';
    errBox.classList.add('show');
  } finally {
    btn.disabled = false;
  }
  return false;
}
</script>
</body>
</html>
"""


def _render_login_html(setup: bool, default_username: str = "") -> str:
    if setup:
        password2_field = (
            '<div class="field"><label for="password2">Repeat password</label>'
            '<input id="password2" name="password2" type="password" '
            'autocomplete="new-password" required minlength="8"></div>'
        )
        subs = {
            "TITLE": "First-run setup",
            "HEADING": "Set admin password",
            "LEDE": "No admin account exists yet on this box. Pick a username and password — you'll use them to sign in to the Furtka UI.",
            "PWD_AUTOCOMPLETE": "new-password",
            "PASSWORD2_FIELD": password2_field,
            "SUBMIT_LABEL": "Create admin",
            "DEFAULT_USERNAME": "admin",
            "SETUP_JSON": "true",
        }
    else:
        subs = {
            "TITLE": "Login",
            "HEADING": "Furtka login",
            "LEDE": "Sign in with the admin credentials you set during install.",
            "PWD_AUTOCOMPLETE": "current-password",
            "PASSWORD2_FIELD": "",
            "SUBMIT_LABEL": "Log in",
            "DEFAULT_USERNAME": default_username,
            "SETUP_JSON": "false",
        }
    out = _HTML_LOGIN
    for key, val in subs.items():
        out = out.replace("{{ " + key + " }}", val)
    return out


# Minimum password length enforced server-side (browser also enforces it
# via the input's minlength, but don't rely on client-side only).
_MIN_PASSWORD_LEN = 8


def _manifest_summary(m, app_dir=None):
    return {
        "name": m.name,
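The `{{ KEY }}` markers in `_HTML_LOGIN` are filled by a plain `str.replace` loop, not a template engine. A standalone sketch of the same mechanism and a usage example:

```python
def render_template(template: str, subs: dict) -> str:
    # Each `{{ KEY }}` marker is replaced literally; markers with no
    # matching key are left untouched, so a typo shows up in the
    # rendered page instead of raising at request time.
    out = template
    for key, val in subs.items():
        out = out.replace("{{ " + key + " }}", val)
    return out

html = render_template("<h1>{{ HEADING }}</h1>", {"HEADING": "Furtka login"})
# → <h1>Furtka login</h1>
```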
@@ -387,6 +593,9 @@ def _manifest_summary(m, app_dir=None):
        "icon": m.icon,
        "icon_svg": _read_icon_svg(app_dir, m.icon),
        "has_settings": bool(m.settings),
        # Optional template URL with `{host}` placeholder; frontend
        # substitutes against location.hostname at render time.
        "open_url": m.open_url,
    }

@ -483,19 +692,86 @@ def _do_get_settings(name):
|
|||
}
|
||||
|
||||
|
||||
_INSTALL_TERMINAL_STAGES = frozenset({"done", "error"})
|
||||
|
||||
|
||||
def _do_install(name, settings=None):
|
||||
"""Kick off an app install. Synchronous sync-phase + async docker-phase.
|
||||
|
||||
Fast parts run inline so validation failures come back as immediate
|
||||
4xx (bad path, placeholder secret, unknown app, etc.). The slow
|
||||
`docker compose pull` then `compose up` are dispatched as a
|
||||
background systemd-run unit that writes phase transitions to
|
||||
/var/lib/furtka/install-state.json for the UI to poll.
|
||||
"""
|
||||
import subprocess
|
||||
|
||||
# Reject if the state file reports a non-terminal install. The
|
||||
# fcntl lock below catches the same race, but only *after* the API
|
||||
# releases it to let the systemd-run child grab it — a competing
|
||||
# POST can sneak in during that tiny window. Reading the state
|
||||
# first closes that gap: as long as a previous install hasn't
|
||||
# written "done" or "error", we refuse.
|
||||
current_state = install_runner.read_state()
|
||||
current_stage = current_state.get("stage", "") if isinstance(current_state, dict) else ""
|
||||
if current_stage and current_stage not in _INSTALL_TERMINAL_STAGES:
|
||||
return 409, {
|
||||
"error": (
|
||||
f"another install is in progress ({current_state.get('app', '?')}"
|
||||
f" at {current_stage})"
|
||||
)
|
||||
}
|
||||
    # Fast-fail if another install is already in flight. Lock lives under
    # /run/ so a previous reboot clears it automatically.
    try:
        fh = install_runner.acquire_lock()
    except install_runner.InstallRunnerError as e:
        return 409, {"error": str(e)}
    try:
        try:
            src = installer.resolve_source(name)
            target = installer.install_from(src, settings=settings)
        except installer.InstallError as e:
            return 400, {"error": str(e)}
-       actions = reconciler.reconcile(apps_dir())
-       payload = {
-           "installed": str(target),
-           "actions": [{"kind": a.kind, "target": a.target, "detail": a.detail} for a in actions],
-       }
-       # 207 Multi-Status — install copy succeeded but reconcile had per-app errors.
-       return (207 if reconciler.has_errors(actions) else 200, payload)
+       # Initial state so the UI has something to show between this
+       # response and the background job's first write.
+       install_runner.write_state("pulling_image", app=name)
    finally:
        # Release the lock so the background job can re-acquire it.
        fh.close()

    unit = f"furtka-install-{name}"
    try:
        subprocess.run(
            [
                "systemd-run",
                f"--unit={unit}",
                "--no-block",
                "--collect",
                "/usr/local/bin/furtka",
                "app",
                "install-bg",
                name,
            ],
            check=True,
            capture_output=True,
            text=True,
        )
    except FileNotFoundError:
        install_runner.write_state("error", app=name, error="systemd-run not available")
        return 502, {"error": "systemd-run not available"}
    except subprocess.CalledProcessError as e:
        err = (e.stderr or e.stdout or "").strip()
        install_runner.write_state("error", app=name, error=f"dispatch failed: {err}")
        return 502, {"error": f"systemd-run failed: {err}"}

    return 202, {"status": "dispatched", "unit": unit, "installed": str(target)}


def _do_install_status():
    """Return the current install-state.json contents (or {})."""
    return 200, install_runner.read_state()


def _do_update_settings(name, settings):
@@ -715,6 +991,55 @@ def _do_catalog_status():
    }


_POWER_ACTIONS = {
    "reboot": "reboot",
    "poweroff": "poweroff",
}


def _do_power(payload):
    """Schedule a reboot or poweroff with a short delay.

    `systemd-run --on-active=3s` kicks a transient timer that fires
    `systemctl {reboot|poweroff}` a few seconds after the API returns —
    long enough for the HTTP response to reach the browser and the UI
    to swap to a "Going down…" state before the kernel loses network.
    The `--no-block` flag makes the systemd-run call itself return
    immediately; `--collect` GCs the transient unit once it fires.

    No auth: same posture as the install/remove endpoints. Anyone on the
    LAN can reboot the box. The /settings banner warns about this;
    Authentik will lock it down.
    """
    import subprocess

    action = payload.get("action")
    systemctl_verb = _POWER_ACTIONS.get(action)
    if systemctl_verb is None:
        return 400, {"error": f"'action' must be one of {sorted(_POWER_ACTIONS)}"}
    try:
        subprocess.run(
            [
                "systemd-run",
                "--on-active=3s",
                "--no-block",
                "--collect",
                "systemctl",
                systemctl_verb,
            ],
            check=True,
            capture_output=True,
            text=True,
        )
    except FileNotFoundError:
        return 502, {"error": "systemd-run not available"}
    except subprocess.CalledProcessError as e:
        return 502, {
            "error": f"systemd-run failed: {(e.stderr or e.stdout or '').strip()}",
        }
    return 202, {"action": action, "scheduled_in_seconds": 3}

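The validate-then-schedule flow above can be exercised without a running systemd. This is a sketch only: `do_power` and its injectable `runner` parameter are illustrative stand-ins for `_do_power`, which hard-codes `subprocess.run`; the stub records the argv that would have been dispatched.

```python
import subprocess

_POWER_ACTIONS = {"reboot": "reboot", "poweroff": "poweroff"}


def do_power(payload, runner=subprocess.run):
    """Validate the action, then schedule it via a transient systemd timer."""
    verb = _POWER_ACTIONS.get(payload.get("action"))
    if verb is None:
        return 400, {"error": f"'action' must be one of {sorted(_POWER_ACTIONS)}"}
    # --on-active=3s delays the verb so the HTTP response gets out first.
    runner(
        ["systemd-run", "--on-active=3s", "--no-block", "--collect", "systemctl", verb],
        check=True, capture_output=True, text=True,
    )
    return 202, {"action": payload["action"], "scheduled_in_seconds": 3}


calls = []
status, body = do_power({"action": "reboot"}, runner=lambda cmd, **kw: calls.append(cmd))
```

With the stub in place, the 202/400 split and the constructed argv can be asserted directly.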
def _do_update(name):
    """Pull newer container images for an installed app; restart if any changed.

@@ -763,25 +1088,193 @@ def _parse_settings_body(payload):

class _Handler(BaseHTTPRequestHandler):
-   def _json(self, status, payload):
+   def _json(self, status, payload, extra_headers=None):
        body = json.dumps(payload).encode()
        self.send_response(status)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
+       for name, value in extra_headers or []:
+           self.send_header(name, value)
        self.end_headers()
        self.wfile.write(body)

-   def _html(self, status, body):
+   def _html(self, status, body, extra_headers=None):
        b = body.encode()
        self.send_response(status)
        self.send_header("Content-Type", "text/html; charset=utf-8")
        self.send_header("Content-Length", str(len(b)))
+       for name, value in extra_headers or []:
+           self.send_header(name, value)
        self.end_headers()
        self.wfile.write(b)

    def _serve_static_www(self, relative_path: str):
        """Read an HTML asset from assets/www/ and serve it as 200.

        Only reached after the do_GET auth-guard — so the caller is
        already authed. relative_path is hard-coded at the call site
        (``index.html`` or ``settings/index.html``), not user-supplied,
        so there's no path-traversal surface here — but we still clamp
        the resolved path to static_www_dir() as a defensive check in
        case a future refactor wires a dynamic path through.
        """
        root = static_www_dir().resolve()
        target = (root / relative_path).resolve()
        if root not in target.parents and target != root:
            return self._html(500, "<h1>internal error</h1>")
        try:
            body = target.read_text(encoding="utf-8")
        except (FileNotFoundError, OSError):
            return self._html(404, "<h1>not found</h1>")
        return self._html(200, body)
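The parent-containment clamp in `_serve_static_www` can be demonstrated with `pathlib` alone. The paths below are hypothetical stand-ins for `static_www_dir()`; `clamped` restates the same check the handler performs.

```python
from pathlib import Path

root = Path("/srv/www")
inside = (root / "settings/index.html").resolve()
outside = (root / "../../etc/passwd").resolve()  # escapes the root via ".."


def clamped(target: Path, base: Path) -> bool:
    # Same test _serve_static_www uses: target must be base or below it.
    return base in target.parents or target == base


ok = clamped(inside, root)
bad = clamped(outside, root)
```

`resolve()` collapses the `..` components first, so the containment test runs against the real final path rather than the literal string.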

    def _redirect(self, location, extra_headers=None):
        self.send_response(302)
        self.send_header("Location", location)
        self.send_header("Content-Length", "0")
        for name, value in extra_headers or []:
            self.send_header(name, value)
        self.end_headers()

    # ---- Auth helpers -------------------------------------------------

    def _request_cookies(self) -> SimpleCookie:
        cookies = SimpleCookie()
        header = self.headers.get("Cookie")
        if header:
            try:
                cookies.load(header)
            except Exception:
                # Malformed Cookie header — treat as no cookies rather
                # than 500ing. Same posture as browsers.
                return SimpleCookie()
        return cookies

    def _current_session(self):
        cookies = self._request_cookies()
        morsel = cookies.get(auth.COOKIE_NAME)
        if morsel is None:
            return None
        return auth.SESSIONS.lookup(morsel.value)

    def _session_cookie_header(self, token: str, max_age: int) -> tuple[str, str]:
        secure = self.headers.get("X-Forwarded-Proto", "").lower() == "https"
        parts = [
            f"{auth.COOKIE_NAME}={token}",
            "HttpOnly",
            "SameSite=Strict",
            "Path=/",
            f"Max-Age={max_age}",
        ]
        if secure:
            parts.append("Secure")
        return ("Set-Cookie", "; ".join(parts))

    def _clear_cookie_header(self) -> tuple[str, str]:
        # Max-Age=0 with an empty value tells the browser to drop it.
        return (
            "Set-Cookie",
            f"{auth.COOKIE_NAME}=; HttpOnly; SameSite=Strict; Path=/; Max-Age=0",
        )

    def _client_ip(self) -> str:
        # Caddy's reverse_proxy appends the real TCP peer to X-Forwarded-For;
        # the rightmost entry is the one Caddy added, so it's trustworthy
        # even if a client spoofed an XFF of their own. Caddy is the edge —
        # no upstream proxy in front of it.
        xff = self.headers.get("X-Forwarded-For")
        if xff:
            return xff.rsplit(",", 1)[-1].strip()
        return self.client_address[0]

    def _handle_login(self, payload):
        username = payload.get("username") if isinstance(payload, dict) else None
        password = payload.get("password") if isinstance(payload, dict) else None
        if not isinstance(username, str) or not username.strip():
            return self._json(400, {"error": "username is required"})
        if not isinstance(password, str) or not password:
            return self._json(400, {"error": "password is required"})
        username = username.strip()

        if auth.setup_needed():
            # First-run setup path — create the admin account, then log
            # in. Require password2 so a typo doesn't lock the user out
            # of their own box.
            password2 = payload.get("password2")
            if password2 != password:
                return self._json(400, {"error": "passwords do not match"})
            if len(password) < _MIN_PASSWORD_LEN:
                return self._json(
                    400,
                    {"error": f"password must be at least {_MIN_PASSWORD_LEN} characters"},
                )
            auth.create_admin(username, password)
        else:
            # Tuple-keyed lockout: a flood from one IP can't lock the
            # admin out from a different IP. When locked we return the
            # same 429 regardless of whether the password is correct —
            # no oracle, no timing leak via "would have worked."
            lockout_key = (username, self._client_ip())
            retry = auth.LOCKOUT.retry_after_seconds(lockout_key)
            if retry > 0:
                return self._json(
                    429,
                    {"error": "too many failed attempts, try again later"},
                    extra_headers=[("Retry-After", str(retry))],
                )
            if not auth.authenticate(username, password):
                # Register before the sleep so concurrent threads see a
                # consistent count; keep the sleep so timing can't
                # distinguish "locked" from "wrong password."
                auth.LOCKOUT.register_failure(lockout_key)
                time.sleep(0.5)
                return self._json(401, {"error": "invalid username or password"})
            auth.LOCKOUT.clear(lockout_key)

        session = auth.SESSIONS.create(username)
        cookie = self._session_cookie_header(session.token, auth.COOKIE_TTL_SECONDS)
        return self._json(200, {"ok": True, "username": username}, extra_headers=[cookie])

    def _handle_logout(self):
        cookies = self._request_cookies()
        morsel = cookies.get(auth.COOKIE_NAME)
        if morsel is not None:
            auth.SESSIONS.revoke(morsel.value)
        return self._json(200, {"ok": True}, extra_headers=[self._clear_cookie_header()])

    def do_GET(self):  # noqa: N802 — http.server convention
-       if self.path in ("/", "/apps", "/apps/"):
+       # --- Public routes: login page + its assets ------------------
+       if self.path in ("/login", "/login/"):
+           # Already authed? Skip straight to the app list.
+           if self._current_session() is not None:
+               return self._redirect("/apps")
+           return self._html(200, _render_login_html(auth.setup_needed()))
+
+       # --- Auth guard for everything below -------------------------
+       session = self._current_session()
+       if session is None:
+           # API paths get a 401 JSON so fetch() callers see a clean
+           # error. HTML paths get a redirect to /login so the browser
+           # naturally ends up on the login form.
+           if self.path.startswith("/api/"):
+               return self._json(401, {"error": "not authenticated"})
+           return self._redirect("/login")
+
+       if self.path in ("/apps", "/apps/"):
            return self._html(200, _HTML)
+       # Landing page + settings page used to be served directly by
+       # Caddy as static HTML, which silently bypassed this auth
+       # guard (a 26.11-era regression that shipped and nobody noticed
+       # until the 26.13 SSH test session — LAN visitors could read
+       # the box version and IP, and fire pre-authed clicks at the
+       # update/reboot/https-toggle buttons, even though the API calls
+       # themselves would 401). Python reads the static HTML from
+       # assets/www/ and serves it behind the session check; Caddy
+       # now proxies / and /settings* here (see Caddyfile).
+       if self.path == "/":
+           return self._serve_static_www("index.html")
+       if self.path in ("/settings", "/settings/"):
+           return self._serve_static_www("settings/index.html")
        if self.path == "/api/apps":
            return self._json(200, _list_installed())
        # /api/bundled is the pre-26.6 name for this list; kept as an alias
@@ -797,6 +1290,9 @@ class _Handler(BaseHTTPRequestHandler):
        if self.path == "/api/catalog/status":
            status, body = _do_catalog_status()
            return self._json(status, body)
+       if self.path == "/api/apps/install/status":
+           status, body = _do_install_status()
+           return self._json(status, body)
        # /api/apps/<name>/settings
        if self.path.startswith("/api/apps/") and self.path.endswith("/settings"):
            name = self.path[len("/api/apps/") : -len("/settings")]
@@ -816,6 +1312,16 @@ class _Handler(BaseHTTPRequestHandler):
        if not isinstance(payload, dict):
            return self._json(400, {"error": "body must be a JSON object"})

+       # --- Public routes: login + logout ----------------------------
+       if self.path in ("/login", "/login/"):
+           return self._handle_login(payload)
+       if self.path in ("/logout", "/logout/"):
+           return self._handle_logout()
+
+       # --- Auth guard for every other POST --------------------------
+       if self._current_session() is None:
+           return self._json(401, {"error": "not authenticated"})
+
        # Per-app settings update: /api/apps/<name>/settings
        if self.path.startswith("/api/apps/") and self.path.endswith("/settings"):
            name = self.path[len("/api/apps/") : -len("/settings")]

@@ -854,6 +1360,11 @@ class _Handler(BaseHTTPRequestHandler):
            status, body = _do_catalog_apply()
            return self._json(status, body)

+       # System power: /settings Reboot / Shut down buttons.
+       if self.path == "/api/furtka/power":
+           status, body = _do_power(payload)
+           return self._json(status, body)
+
        name = payload.get("name")
        if not isinstance(name, str) or not name:
            return self._json(400, {"error": "missing or empty 'name' field"})

furtka/auth.py (new file, 260 lines)

@@ -0,0 +1,260 @@
"""Login-guard primitives for the Furtka UI.

One admin, one password. Passwords are PBKDF2-SHA256 hashed via
``furtka.passwd`` (stdlib-only — hashlib.pbkdf2_hmac / hashlib.scrypt),
stored in /var/lib/furtka/users.json with mode 0600. Sessions live in
memory — a systemctl restart logs everyone out, which is fine for an
alpha single-user box. The ``LoginAttempts`` store in this module
rate-limits failed logins per (username, IP) and is also in-memory;
a restart clears a stuck lockout.

On upgrade from pre-auth Furtka the users.json file does not exist
yet; the API's GET /login detects this via ``setup_needed()`` and
renders a first-run form that POSTs to /login as if it were a setup
submit. Fresh installs get the file pre-populated by the webinstaller,
so the setup step is skipped.

Hash format is compatible with werkzeug.security — 26.11 / 26.12 boxes
that happened to have werkzeug installed can carry their users.json
forward without re-setup; see ``furtka.passwd`` for the scrypt reader.
"""

from __future__ import annotations

import json
import math
import secrets
import threading
from dataclasses import dataclass
from datetime import UTC, datetime, timedelta

from furtka.passwd import hash_password as _hash_password
from furtka.passwd import verify_password as _verify_password
from furtka.paths import users_file

COOKIE_NAME = "furtka_session"
COOKIE_TTL_SECONDS = 7 * 24 * 3600  # one week


def hash_password(plain: str) -> str:
    """PBKDF2-SHA256 via stdlib. 600k iterations (OWASP 2023)."""
    return _hash_password(plain)


def verify_password(plain: str, hashed: str) -> bool:
    """Constant-time compare. Accepts stdlib + legacy werkzeug formats."""
    return _verify_password(plain, hashed)
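The ``furtka.passwd`` module itself is outside this diff, so here is a minimal self-contained sketch of the hash format the docstring describes: a werkzeug-style ``pbkdf2:sha256:600000$salt$hex`` string, produced and verified with the stdlib only. The function names shadow the module's wrappers for illustration; the exact salt length is an assumption.

```python
import hashlib
import hmac
import secrets

ITERATIONS = 600_000  # OWASP 2023 guidance for PBKDF2-SHA256


def hash_password(plain: str) -> str:
    salt = secrets.token_hex(16)  # assumed salt size, per-password random
    digest = hashlib.pbkdf2_hmac("sha256", plain.encode(), salt.encode(), ITERATIONS)
    return f"pbkdf2:sha256:{ITERATIONS}${salt}${digest.hex()}"


def verify_password(plain: str, hashed: str) -> bool:
    method, salt, hexdigest = hashed.split("$", 2)
    _, algo, iters = method.split(":")
    candidate = hashlib.pbkdf2_hmac(algo, plain.encode(), salt.encode(), int(iters))
    # compare_digest keeps the comparison constant-time.
    return hmac.compare_digest(candidate.hex(), hexdigest)


h = hash_password("hunter2!")
```

Because the iteration count and salt ride along inside the stored string, old hashes keep verifying even after the default cost is raised.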

def load_users() -> dict:
    """Return the users dict, or {} if the file is missing or empty.

    Missing-file is the expected state on first boot and on upgrades from
    pre-auth versions — callers treat empty-dict as "setup required".
    """
    path = users_file()
    if not path.exists():
        return {}
    try:
        raw = path.read_text()
    except OSError:
        return {}
    if not raw.strip():
        return {}
    try:
        data = json.loads(raw)
    except json.JSONDecodeError:
        return {}
    if not isinstance(data, dict):
        return {}
    return data


def save_users(users: dict) -> None:
    """Atomically write users.json with mode 0600.

    Same pattern as installer.write_env — write to .tmp, chmod, rename —
    so a crash between open() and close() can't leave a world-readable
    partial file.
    """
    path = users_file()
    path.parent.mkdir(parents=True, exist_ok=True)
    tmp = path.with_suffix(path.suffix + ".tmp")
    tmp.write_text(json.dumps(users, indent=2) + "\n")
    tmp.chmod(0o600)
    tmp.replace(path)
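The write-to-tmp, chmod, rename dance in ``save_users`` can be checked end to end against a throwaway directory. ``save_json_0600`` below is a hypothetical standalone version of the same pattern, not the module function itself.

```python
import json
import os
import tempfile
from pathlib import Path


def save_json_0600(path: Path, data: dict) -> None:
    # write → chmod → rename: readers never observe a partial or
    # world-readable file, because rename() is atomic on POSIX.
    tmp = path.with_suffix(path.suffix + ".tmp")
    tmp.write_text(json.dumps(data, indent=2) + "\n")
    tmp.chmod(0o600)
    tmp.replace(path)


workdir = Path(tempfile.mkdtemp())
target = workdir / "users.json"
save_json_0600(target, {"admin": {"username": "root"}})
mode = os.stat(target).st_mode & 0o777
```

After the call the temp file is gone, the final file exists, and its mode is 0600 regardless of the process umask (chmod runs before the rename).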

def setup_needed() -> bool:
    """True when no admin is registered yet — initial setup is required."""
    users = load_users()
    return not users or "admin" not in users


def create_admin(username: str, password: str) -> None:
    """Overwrite users.json with a single admin account.

    The webinstaller calls this post-install (with the step-1 password) so
    the installed system is login-guarded from first boot. The /login
    route calls it on first setup for upgrade-path boxes that don't
    already have a users.json.
    """
    users = {
        "admin": {
            "username": username,
            "hash": hash_password(password),
            "created_at": datetime.now(UTC).isoformat(timespec="seconds"),
        }
    }
    save_users(users)


def authenticate(username: str, password: str) -> bool:
    """Return True iff the supplied credentials match the admin record."""
    users = load_users()
    admin = users.get("admin")
    if not admin:
        return False
    if admin.get("username") != username:
        return False
    hashed = admin.get("hash")
    if not isinstance(hashed, str) or not hashed:
        return False
    return verify_password(password, hashed)

@dataclass(frozen=True)
class Session:
    token: str
    username: str
    expires_at: datetime


class SessionStore:
    """In-memory session table.

    Thread-safe: api.py currently uses the stdlib HTTPServer, which is
    single-threaded, but we keep the lock so swapping to
    ThreadingHTTPServer (one request per thread) later doesn't require
    revisiting this.
    """

    def __init__(self, ttl_seconds: int = COOKIE_TTL_SECONDS) -> None:
        self._ttl = timedelta(seconds=ttl_seconds)
        self._by_token: dict[str, Session] = {}
        self._lock = threading.Lock()

    def create(self, username: str) -> Session:
        token = secrets.token_urlsafe(32)
        session = Session(
            token=token,
            username=username,
            expires_at=datetime.now(UTC) + self._ttl,
        )
        with self._lock:
            self._by_token[token] = session
        return session

    def lookup(self, token: str | None) -> Session | None:
        if not token:
            return None
        with self._lock:
            session = self._by_token.get(token)
            if session is None:
                return None
            if datetime.now(UTC) >= session.expires_at:
                # Expired — drop it on the floor so repeat lookups stay fast.
                self._by_token.pop(token, None)
                return None
            return session

    def revoke(self, token: str | None) -> None:
        if not token:
            return
        with self._lock:
            self._by_token.pop(token, None)

    def clear(self) -> None:
        """Test helper — wipe all sessions."""
        with self._lock:
            self._by_token.clear()

class LoginAttempts:
    """In-memory rate-limiter for failed logins, keyed by (username, ip).

    Parallels SessionStore: thread-safe, uses ``datetime.now(UTC)`` so the
    same ``_FakeDatetime`` test shim works, lives only in memory so a
    ``systemctl restart furtka`` wipes a stuck lockout. Tuple keying means
    a flood from one source IP can't lock the admin out from elsewhere
    (different IP → different key) — the trade-off is that an attacker
    can keep probing forever by rotating IPs, but they still eat the
    PBKDF2 cost per attempt.

    Stored data is a dict[key → list[datetime]] of recent failure
    timestamps. Every call prunes entries older than ``WINDOW_SECONDS``,
    so memory per active key is bounded by ``MAX_FAILURES``.
    """

    MAX_FAILURES = 10
    WINDOW_SECONDS = 15 * 60
    LOCKOUT_SECONDS = 15 * 60

    def __init__(
        self,
        max_failures: int = MAX_FAILURES,
        window_seconds: int = WINDOW_SECONDS,
        lockout_seconds: int = LOCKOUT_SECONDS,
    ) -> None:
        self._max = max_failures
        self._window = timedelta(seconds=window_seconds)
        self._lockout = timedelta(seconds=lockout_seconds)
        self._fails: dict[tuple[str, str], list[datetime]] = {}
        self._lock = threading.Lock()

    def _prune_locked(self, key: tuple[str, str], now: datetime) -> list[datetime]:
        """Drop timestamps older than the window; caller holds self._lock."""
        cutoff = now - self._window
        kept = [ts for ts in self._fails.get(key, ()) if ts >= cutoff]
        if kept:
            self._fails[key] = kept
        else:
            self._fails.pop(key, None)
        return kept

    def register_failure(self, key: tuple[str, str]) -> None:
        now = datetime.now(UTC)
        with self._lock:
            self._prune_locked(key, now)
            self._fails.setdefault(key, []).append(now)

    def is_locked(self, key: tuple[str, str]) -> bool:
        return self.retry_after_seconds(key) > 0

    def retry_after_seconds(self, key: tuple[str, str]) -> int:
        """Seconds remaining on an active lockout, or 0 if not locked."""
        now = datetime.now(UTC)
        with self._lock:
            kept = self._prune_locked(key, now)
            if len(kept) < self._max:
                return 0
            # Lockout runs from the oldest retained failure; once it
            # falls off the window the key is effectively released.
            unlock_at = kept[0] + self._lockout
            remaining = (unlock_at - now).total_seconds()
            if remaining <= 0:
                return 0
            return max(1, math.ceil(remaining))

    def clear(self, key: tuple[str, str]) -> None:
        with self._lock:
            self._fails.pop(key, None)

    def clear_all(self) -> None:
        """Test helper — wipe all failure state."""
        with self._lock:
            self._fails.clear()
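The sliding-window-plus-lockout semantics can be shown with a compact model. ``TinyLockout`` is an illustrative reimplementation (no lock, explicit ``now`` parameter, tiny thresholds), not the ``LoginAttempts`` class; note how the per-(username, IP) key isolates one client's flood from another's.

```python
from datetime import datetime, timedelta, timezone


class TinyLockout:
    """Minimal model of LoginAttempts' sliding window + lockout."""

    def __init__(self, max_failures=3, window_s=60, lockout_s=60):
        self._max = max_failures
        self._window = timedelta(seconds=window_s)
        self._lockout = timedelta(seconds=lockout_s)
        self._fails: dict[tuple[str, str], list[datetime]] = {}

    def register_failure(self, key, now):
        cutoff = now - self._window
        kept = [t for t in self._fails.get(key, []) if t >= cutoff]
        kept.append(now)
        self._fails[key] = kept

    def retry_after_seconds(self, key, now) -> int:
        cutoff = now - self._window
        kept = [t for t in self._fails.get(key, []) if t >= cutoff]
        if len(kept) < self._max:
            return 0
        # Lockout runs from the oldest retained failure.
        remaining = (kept[0] + self._lockout - now).total_seconds()
        return max(0, int(remaining))


t0 = datetime(2026, 1, 1, tzinfo=timezone.utc)
lock = TinyLockout()
key = ("admin", "192.168.1.20")
for i in range(3):  # three quick failures from one IP
    lock.register_failure(key, t0 + timedelta(seconds=i))
locked = lock.retry_after_seconds(key, t0 + timedelta(seconds=5))
other = lock.retry_after_seconds(("admin", "10.0.0.9"), t0 + timedelta(seconds=5))
after_window = lock.retry_after_seconds(key, t0 + timedelta(seconds=120))
```

The same username from a different IP is a different key and stays unlocked, and once the failures age past the window the lockout releases itself.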

# Module-level singletons used by the HTTP handler.
SESSIONS = SessionStore()
LOCKOUT = LoginAttempts()

@@ -21,9 +21,22 @@ def _cmd_app_list(args: argparse.Namespace) -> int:
                "display_name": r.manifest.display_name,
                "version": r.manifest.version,
                "description": r.manifest.description,
+               "description_long": r.manifest.description_long,
+               "volumes": list(r.manifest.volumes),
+               "ports": list(r.manifest.ports),
+               "icon": r.manifest.icon,
+               "open_url": r.manifest.open_url,
+               "settings": [
+                   {
+                       "name": s.name,
+                       "label": s.label,
+                       "description": s.description,
+                       "type": s.type,
+                       "required": s.required,
+                       "default": s.default,
+                   }
+                   for s in r.manifest.settings
+               ],
            }
            if r.manifest
            else None,
@@ -58,6 +71,24 @@ def _cmd_app_install(args: argparse.Namespace) -> int:
    return 1 if reconciler.has_errors(actions) else 0


def _cmd_app_install_bg(args: argparse.Namespace) -> int:
    """Docker-facing phases of an install — called by the API via systemd-run.

    Internal subcommand; normal CLI users want `app install` (synchronous).
    This exists to separate the slow docker pull/up from the synchronous
    validation the API does inline, so the UI can poll a state file.
    """
    from furtka import install_runner

    try:
        install_runner.run_install(args.name)
    except Exception as e:
        # run_install already wrote state="error"; echo for journald.
        print(f"install-bg failed: {e}", file=sys.stderr)
        return 1
    return 0


def _cmd_app_remove(args: argparse.Namespace) -> int:
    target = apps_dir() / args.name
    if not target.exists():
@@ -224,6 +255,15 @@ def build_parser() -> argparse.ArgumentParser:
    )
    app_install.set_defaults(func=_cmd_app_install)

+   # Internal — called by the HTTP API via systemd-run. Deliberately omitted
+   # from the help listing; regular CLI users want `app install` above.
+   app_install_bg = app_sub.add_parser(
+       "install-bg",
+       help=argparse.SUPPRESS,
+   )
+   app_install_bg.add_argument("name", help="Installed app folder name")
+   app_install_bg.set_defaults(func=_cmd_app_install_bg)
+
    app_remove = app_sub.add_parser("remove", help="Stop and uninstall an app (keeps volumes)")
    app_remove.add_argument("name", help="App name (folder name under /var/lib/furtka/apps/)")
    app_remove.set_defaults(func=_cmd_app_remove)

@@ -6,10 +6,25 @@ sets `XDG_DATA_HOME=/var/lib`, so on the target that resolves to
/var/lib/caddy/pki/authorities/local/. The private key stays 0600 /
caddy-owned; we only ever read the public root.crt next to it.

-This module exposes two operations:
-- status(): current CA fingerprint + whether force-HTTPS is on
-- set_force_https(enabled): write/remove the Caddy import snippet that
-  redirects HTTP to HTTPS, reload Caddy, roll back on failure.
+HTTPS is **opt-in** since 26.15-alpha. The default Caddyfile has no
+`:443` site block, so `tls internal` never triggers cert issuance. The
+/settings toggle drops a snippet file into /etc/caddy/furtka-https.d/
+that adds the hostname+tls-internal block (plus the redirect snippet
+inside /etc/caddy/furtka.d/ for HTTP→HTTPS). Disabling the toggle
+removes both snippets and reloads — Caddy falls back to HTTP-only.
+
+Why opt-in: fresh-install boxes used to always serve a self-signed
+cert on :443. Any browser that had ever trusted a previous Furtka
+box's local CA rejected the new cert with an unbypassable
+SEC_ERROR_BAD_SIGNATURE — Firefox in particular has no "Advanced →
+Accept" for that case. Making HTTPS explicit means fresh installs
+never hit that trap; users who want HTTPS download the rootCA.crt
+first and then click the toggle.
+
+This module exposes:
+- status(): CA fingerprint + current toggle state
+- set_force_https(enabled): write/remove BOTH snippets atomically,
+  reload Caddy, roll back on failure.
"""

import base64
@@ -22,6 +37,9 @@ CA_CERT_PATH = Path("/var/lib/caddy/pki/authorities/local/root.crt")
SNIPPET_DIR = Path("/etc/caddy/furtka.d")
REDIRECT_SNIPPET = SNIPPET_DIR / "redirect.caddyfile"
REDIRECT_CONTENT = "redir https://{host}{uri} permanent\n"
+HTTPS_SNIPPET_DIR = Path("/etc/caddy/furtka-https.d")
+HTTPS_SNIPPET = HTTPS_SNIPPET_DIR / "https.caddyfile"
+HOSTNAME_FILE = Path("/etc/hostname")

_PEM_RE = re.compile(
    r"-----BEGIN CERTIFICATE-----\s*(.+?)\s*-----END CERTIFICATE-----",
@@ -33,6 +51,30 @@ class HttpsError(Exception):
    """Recoverable failure from set_force_https — the caller should 5xx."""


def _read_hostname(hostname_file: Path = HOSTNAME_FILE) -> str:
    """Return the box's hostname, stripped. Falls back to 'furtka' so a
    missing /etc/hostname doesn't produce an empty site block that Caddy
    would reject at parse time."""
    try:
        value = hostname_file.read_text().strip()
    except (FileNotFoundError, PermissionError, OSError):
        return "furtka"
    return value or "furtka"


def _https_snippet_content(hostname: str) -> str:
    """Caddy site block the HTTPS toggle installs at opt-in.

    Serves <hostname>.local and <hostname> on :443 with Caddy's
    `tls internal` (local CA auto-issuance), and imports the shared
    furtka_routes snippet so the :443 listener exposes the same
    routes as :80. Must be written at top level (not inside another
    site block) — that's why the Caddyfile imports furtka-https.d at
    top level rather than inside :80.
    """
    return f"{hostname}.local, {hostname} {{\n\ttls internal\n\timport furtka_routes\n}}\n"
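It is worth seeing what that f-string actually renders, since the doubled braces and tabs are easy to misread. The function below repeats the same template outside the module for demonstration:

```python
def https_snippet_content(hostname: str) -> str:
    # Same template as _https_snippet_content: doubled braces emit the
    # literal Caddy block braces; \t is Caddy's conventional indentation.
    return f"{hostname}.local, {hostname} {{\n\ttls internal\n\timport furtka_routes\n}}\n"


rendered = https_snippet_content("furtka")
```

For a box named `furtka` this produces a single top-level site block serving both `furtka.local` and `furtka`, with `tls internal` and the shared route import inside it.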

def _ca_fingerprint(ca_path: Path) -> str | None:
    try:
        pem = ca_path.read_text()

@@ -54,13 +96,20 @@ def _format_fingerprint(hex_upper: str) -> str:

def status(
    ca_path: Path = CA_CERT_PATH,
    snippet: Path = REDIRECT_SNIPPET,
+   https_snippet: Path = HTTPS_SNIPPET,
) -> dict:
+   """force_https is True iff the HTTPS listener snippet exists.
+
+   Before 26.15-alpha this checked the redirect snippet instead — but
+   the redirect alone without a :443 listener wouldn't actually serve
+   HTTPS, so the listener snippet is the authoritative "HTTPS is on"
+   signal.
+   """
    fp = _ca_fingerprint(ca_path)
    return {
        "ca_available": fp is not None,
        "fingerprint_sha256": _format_fingerprint(fp) if fp else None,
-       "force_https": snippet.is_file(),
+       "force_https": https_snippet.is_file(),
        "ca_download_url": "/rootCA.crt",
    }

@@ -78,29 +127,48 @@ def set_force_https(
    enabled: bool,
    snippet_dir: Path = SNIPPET_DIR,
    snippet: Path = REDIRECT_SNIPPET,
+   https_snippet_dir: Path = HTTPS_SNIPPET_DIR,
+   https_snippet: Path = HTTPS_SNIPPET,
+   hostname_file: Path = HOSTNAME_FILE,
    reload_caddy=_default_reload,
) -> bool:
-   """Toggle the HTTP→HTTPS redirect by writing or removing the snippet
-   Caddy imports. Always reloads Caddy. Rolls the snippet state back on
-   reload failure so a broken config can't leave Caddy wedged on the next
-   restart.
+   """Toggle HTTPS by writing or removing two snippets atomically:
+
+   1. The top-level HTTPS hostname+tls-internal block (enables the :443
+      listener + Caddy's `tls internal` cert issuance)
+   2. The :80-scoped redirect snippet (forces HTTP → HTTPS)
+
+   Reload Caddy after the snippet swap. On reload failure both
+   snippets are reverted to their pre-call state so a bad config
+   can't leave Caddy wedged.
    """
    snippet_dir.mkdir(mode=0o755, parents=True, exist_ok=True)
-   had = snippet.is_file()
-   previous = snippet.read_text() if had else None
+   https_snippet_dir.mkdir(mode=0o755, parents=True, exist_ok=True)
+
+   had_redirect = snippet.is_file()
+   previous_redirect = snippet.read_text() if had_redirect else None
+   had_https = https_snippet.is_file()
+   previous_https = https_snippet.read_text() if had_https else None

    if enabled:
        snippet.write_text(REDIRECT_CONTENT)
-   elif had:
+       https_snippet.write_text(_https_snippet_content(_read_hostname(hostname_file)))
+   else:
+       if had_redirect:
            snippet.unlink()
+       if had_https:
+           https_snippet.unlink()

    try:
        reload_caddy()
    except subprocess.CalledProcessError as e:
-       _revert(snippet, previous)
+       _revert(snippet, previous_redirect)
+       _revert(https_snippet, previous_https)
        msg = (e.stderr or e.stdout or "").strip() or f"exit {e.returncode}"
        raise HttpsError(f"caddy reload failed: {msg}") from e
    except FileNotFoundError as e:
-       _revert(snippet, previous)
+       _revert(snippet, previous_redirect)
+       _revert(https_snippet, previous_https)
        raise HttpsError(f"systemctl not available: {e}") from e
    return enabled

furtka/install_runner.py (new file, 121 lines)

@@ -0,0 +1,121 @@
"""Background job for app installs — progress-visible via state file.

The slow part of installing an app is `docker compose pull` on a large
image (Jellyfin ~500 MB); without progress feedback, the UI modal sits
dead on "Installing…" for 30+ seconds and the user wonders if it hung.

This module mirrors the same shape as ``furtka.catalog`` and
``furtka.updater`` so the UI can poll an install just like it polls a
catalog sync or a self-update. The split is:

- ``furtka.api._do_install`` runs synchronously: resolve source, copy
  the app folder, write .env, validate path settings + placeholders.
  Those are fast, and their failures deserve an immediate 4xx so the
  install modal can surface them in-line.
- After that the API writes an initial state file (stage
  "pulling_image") and dispatches ``systemd-run
  --unit=furtka-install-<name>`` to run ``furtka app install-bg
  <name>`` in the background.

That CLI subcommand is what calls ``run_install()`` here — it does the
docker-facing phases and writes state transitions as it goes.

State file schema (``/var/lib/furtka/install-state.json``):

    {
      "stage": "pulling_image" | "creating_volumes"
               | "starting_container" | "done" | "error",
      "updated_at": "2026-04-21T17:30:45+0200",
      "app": "jellyfin",
      "version": "1.0.0",     // added at "done"
      "error": "details..."   // added at "error"
    }

Lock: ``/run/furtka/install.lock`` (tmpfs, reboot-safe). Global, not
per-app — two parallel installs are not a v1 use-case, and the lock
keeps the state-file representation simple (one in-flight install at
a time).
"""

from __future__ import annotations

import fcntl
import json
import os
import time
from pathlib import Path

from furtka import dockerops
from furtka.manifest import load_manifest
from furtka.paths import apps_dir

_INSTALL_STATE = Path(os.environ.get("FURTKA_INSTALL_STATE", "/var/lib/furtka/install-state.json"))
_LOCK_PATH = Path(os.environ.get("FURTKA_INSTALL_LOCK", "/run/furtka/install.lock"))


class InstallRunnerError(RuntimeError):
    """Any failure in the background install flow that should surface to the caller."""


def state_path() -> Path:
    return _INSTALL_STATE


def lock_path() -> Path:
    return _LOCK_PATH
|
||||
def write_state(stage: str, **extra) -> None:
|
||||
"""Atomic JSON state write — same shape as catalog/update-state."""
|
||||
state_path().parent.mkdir(parents=True, exist_ok=True)
|
||||
tmp = state_path().with_suffix(".tmp")
|
||||
payload = {"stage": stage, "updated_at": time.strftime("%Y-%m-%dT%H:%M:%S%z"), **extra}
|
||||
tmp.write_text(json.dumps(payload, indent=2))
|
||||
tmp.replace(state_path())
|
||||
|
||||
|
||||
def read_state() -> dict:
|
||||
try:
|
||||
return json.loads(state_path().read_text())
|
||||
except (FileNotFoundError, json.JSONDecodeError):
|
||||
return {}
|
||||
|
||||
|
||||
def acquire_lock():
|
||||
path = lock_path()
|
||||
path.parent.mkdir(parents=True, exist_ok=True)
|
||||
fh = path.open("w")
|
||||
try:
|
||||
fcntl.flock(fh, fcntl.LOCK_EX | fcntl.LOCK_NB)
|
||||
except BlockingIOError as e:
|
||||
fh.close()
|
||||
raise InstallRunnerError("another install is already in progress") from e
|
||||
return fh
|
||||
|
||||
|
||||
def run_install(name: str) -> None:
|
||||
"""Docker-facing phases of the install: pull → volumes → compose up.
|
||||
|
||||
Called by the ``furtka app install-bg <name>`` CLI subcommand from the
|
||||
systemd-run spawned by the API. Assumes the API has already run
|
||||
``installer.install_from()``, so the app folder, .env, and manifest
|
||||
are on disk at ``apps_dir() / <name>``.
|
||||
|
||||
Every phase transition is written to the state file for the UI to
|
||||
poll. On exception the state flips to ``"error"`` with the message,
|
||||
then the exception is re-raised so the CLI exits non-zero and
|
||||
journald has a traceback.
|
||||
"""
|
||||
with acquire_lock():
|
||||
target = apps_dir() / name
|
||||
manifest = load_manifest(target / "manifest.json", expected_name=name)
|
||||
try:
|
||||
write_state("pulling_image", app=name)
|
||||
dockerops.compose_pull(target, name)
|
||||
write_state("creating_volumes", app=name)
|
||||
for short in manifest.volumes:
|
||||
dockerops.ensure_volume(manifest.volume_name(short))
|
||||
write_state("starting_container", app=name)
|
||||
dockerops.compose_up(target, name)
|
||||
write_state("done", app=name, version=manifest.version)
|
||||
except Exception as e:
|
||||
write_state("error", app=name, error=str(e))
|
||||
raise
|
||||
|
|
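A consumer of the state file described above just polls until a terminal stage appears. A minimal sketch of such a poller, assuming only the documented schema (the real UI polls over HTTP, not the file directly):

```python
import json
import time
from pathlib import Path


def wait_for_install(state_file: Path, timeout_s: float = 120.0) -> dict:
    """Poll the install-state JSON until stage is 'done' or 'error'."""
    deadline = time.time() + timeout_s
    while time.time() < deadline:
        try:
            state = json.loads(state_file.read_text())
        except (FileNotFoundError, json.JSONDecodeError):
            # File not written yet, or caught mid-replace — treat as pending.
            state = {}
        if state.get("stage") in ("done", "error"):
            return state
        time.sleep(1)
    return {"stage": "error", "error": "timed out waiting for install"}
```

Because `write_state` replaces the file atomically, a reader never sees a half-written JSON document; the `JSONDecodeError` branch only covers the file being absent or truncated by external causes.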
@@ -2,7 +2,7 @@ import shutil
from pathlib import Path

from furtka import sources
from furtka.manifest import ManifestError, load_manifest
from furtka.manifest import Manifest, ManifestError, load_manifest
from furtka.paths import apps_dir

# Values that an app's .env.example may use as obvious "fill me in" markers.

@@ -11,6 +11,25 @@ from furtka.paths import apps_dir
# default that ends up screenshotted on Hacker News.
PLACEHOLDER_SECRETS: frozenset[str] = frozenset({"changeme"})

# System paths that must never be accepted as a user-supplied `path`-type
# setting. The user is root on their own box, so this is about preventing
# accidental footguns (typing `/etc` when they meant `/mnt/etc`), not
# defending against an attacker. Matches exact paths and their subtrees
# after `Path.resolve()` — so `/mnt/../etc` also lands here.
DENIED_PATH_PREFIXES: tuple[str, ...] = (
    "/etc",
    "/root",
    "/boot",
    "/proc",
    "/sys",
    "/dev",
    "/bin",
    "/sbin",
    "/usr/bin",
    "/usr/sbin",
    "/var/lib/furtka",
)


class InstallError(RuntimeError):
    pass
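The "subtrees after `Path.resolve()`" claim is the load-bearing part of the deny list: `..` segments are collapsed before matching, so disguised spellings of a system path land on the same prefix. A standalone sketch of that check, with a shortened illustrative deny list (the real one is `DENIED_PATH_PREFIXES` above):

```python
from pathlib import Path

# Illustrative subset — not the full furtka deny list.
DENY = ("/etc", "/root", "/var/lib/furtka")


def is_denied(value: str) -> bool:
    # strict=False: the path need not exist to be normalized.
    resolved = str(Path(value).resolve(strict=False))
    if resolved == "/":
        return True
    return any(resolved == d or resolved.startswith(d + "/") for d in DENY)
```

The prefix test compares against `d + "/"` rather than a bare `startswith(d)` so that `/etcetera` is not rejected just because it shares leading characters with `/etc`.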
@@ -31,6 +50,53 @@ def _placeholder_keys(env_path: Path) -> list[str]:
    return bad


def _is_denied_system_path(resolved: str) -> bool:
    if resolved == "/":
        return True
    for bad in DENIED_PATH_PREFIXES:
        if resolved == bad or resolved.startswith(bad + "/"):
            return True
    return False


def _path_setting_errors(m: Manifest, env_path: Path) -> list[str]:
    """Validate the filesystem paths named by `path`-type settings.

    Returns one human-readable message per offending setting. Empty values
    on non-required settings are allowed — the required-field check in the
    caller already refuses blanks on required fields before write.
    """
    if not env_path.exists():
        return []
    values = _read_env(env_path)
    errors: list[str] = []
    for s in m.settings:
        if s.type != "path":
            continue
        value = values.get(s.name, "")
        if not value:
            continue
        p = Path(value)
        if not p.is_absolute():
            errors.append(f"{s.name}={value!r} must be an absolute path (start with /)")
            continue
        try:
            resolved = p.resolve(strict=False)
        except (OSError, RuntimeError) as e:
            errors.append(f"{s.name}={value!r} cannot be resolved: {e}")
            continue
        if _is_denied_system_path(str(resolved)):
            errors.append(f"{s.name}={value!r} resolves into a system path and is not allowed")
            continue
        if not resolved.exists():
            errors.append(f"{s.name}={value!r} does not exist on this box")
            continue
        if not resolved.is_dir():
            errors.append(f"{s.name}={value!r} is not a directory")
            continue
    return errors


def _format_env_value(v: str) -> str:
    # Quote values that contain whitespace, quotes, or shell metacharacters so
    # docker-compose's env substitution reads them back intact. Simple values
@@ -160,6 +226,10 @@ def install_from(src: Path, settings: dict[str, str] | None = None) -> Path:
            f"file and re-run `furtka app install {m.name}`."
        )

    path_errors = _path_setting_errors(m, env)
    if path_errors:
        raise InstallError(f"{m.name}: {'; '.join(path_errors)}")

    return target


@@ -231,6 +301,9 @@ def update_env(name: str, settings: dict[str, str]) -> Path:
    bad = _placeholder_keys(env)
    if bad:
        raise InstallError(f"{m.name}: {env} still has placeholder values for {', '.join(bad)}.")
    path_errors = _path_setting_errors(m, env)
    if path_errors:
        raise InstallError(f"{m.name}: {'; '.join(path_errors)}")
    return target
@@ -13,7 +13,7 @@ REQUIRED_FIELDS = (
    "icon",
)

VALID_SETTING_TYPES = frozenset({"text", "password", "number"})
VALID_SETTING_TYPES = frozenset({"text", "password", "number", "path"})
SETTING_NAME_RE = re.compile(r"^[A-Z_][A-Z0-9_]*$")


@@ -42,6 +42,12 @@ class Manifest:
    icon: str
    description_long: str = ""
    settings: tuple[Setting, ...] = field(default_factory=tuple)
    # Optional "Open" link for the landing page + installed-app row.
    # `{host}` is substituted with the current browser hostname at render
    # time so the URL follows whatever the user typed to reach Furtka —
    # furtka.local, a raw IP, a future reverse-proxy hostname. Apps with
    # no frontend (CLI-only, background workers) leave this empty.
    open_url: str = ""

    def volume_name(self, short: str) -> str:
        # Namespace volume names so two apps can each declare e.g. "data"
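The `{host}` convention keeps the manifest host-agnostic: the manifest author writes a template, and whichever name the browser used fills the gap. A sketch of the substitution under assumed names (the real render-time helper is not shown in this diff):

```python
def render_open_url(open_url: str, host: str) -> str:
    """Fill the {host} marker with the hostname the browser used.

    Hypothetical helper for illustration; an empty open_url stays empty,
    which the UI treats as "no Open button".
    """
    return open_url.replace("{host}", host)
```

So a manifest with `"open_url": "http://{host}:8096/"` yields a working link whether the user reached Furtka as `furtka.local` or a raw IP.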
@@ -127,6 +133,10 @@ def load_manifest(path: Path, expected_name: str | None = None) -> Manifest:

    settings = _parse_settings(raw.get("settings"), path)

    open_url_raw = raw.get("open_url", "")
    if not isinstance(open_url_raw, str):
        raise ManifestError(f"{path}: open_url must be a string if set")

    return Manifest(
        name=name,
        display_name=str(raw["display_name"]),

@@ -137,4 +147,5 @@ def load_manifest(path: Path, expected_name: str | None = None) -> Manifest:
        icon=str(raw["icon"]),
        description_long=str(raw.get("description_long", "")),
        settings=settings,
        open_url=open_url_raw,
    )
furtka/passwd.py (95 lines, Normal file)

@@ -0,0 +1,95 @@
"""Stdlib-only password hashing, compatible with werkzeug's hash format.
|
||||
|
||||
Why this exists: 26.11-alpha introduced auth via ``werkzeug.security``,
|
||||
but the target system doesn't have ``werkzeug`` installed (Core runs as
|
||||
system Python with only the stdlib — pyproject.toml's ``flask>=3.0``
|
||||
dep is never pip-installed on the box). Fresh installs from a 26.11 /
|
||||
26.12 ISO crashed on import; upgrades from pre-auth versions were
|
||||
double-broken by that plus a too-strict updater health check.
|
||||
|
||||
Fix: replace werkzeug with stdlib equivalents using the same hash
|
||||
**format** so existing ``users.json`` files created by 26.11 / 26.12 on
|
||||
the rare boxes that happened to have werkzeug installed (Medion, .196
|
||||
after manual pacman) still verify.
|
||||
|
||||
Format: ``<method>$<salt>$<hex digest>``
|
||||
- ``pbkdf2:<hash>:<iterations>`` — what we generate by default here
|
||||
- ``scrypt:<N>:<r>:<p>`` — what werkzeug's default produces
|
||||
Both are implemented via ``hashlib`` which has been stdlib since 3.6.
|
||||
"""
|
||||
|
||||
from __future__ import annotations
|
||||
|
||||
import hashlib
|
||||
import hmac
|
||||
import secrets
|
||||
|
||||
_PBKDF2_HASH = "sha256"
|
||||
_PBKDF2_ITERATIONS = 600_000
|
||||
_SALT_LEN = 16
|
||||
|
||||
|
||||
def hash_password(password: str) -> str:
|
||||
"""Return a ``pbkdf2:sha256:<iter>$<salt>$<hex>`` hash of *password*.
|
||||
|
||||
PBKDF2-SHA256 over UTF-8. 600k iterations — same as werkzeug's
|
||||
default in the 3.x series, roughly OWASP 2023's recommendation.
|
||||
"""
|
||||
if not isinstance(password, str):
|
||||
raise TypeError("password must be str")
|
||||
salt = secrets.token_urlsafe(_SALT_LEN)[:_SALT_LEN]
|
||||
dk = hashlib.pbkdf2_hmac(
|
||||
_PBKDF2_HASH, password.encode("utf-8"), salt.encode("utf-8"), _PBKDF2_ITERATIONS
|
||||
)
|
||||
return f"pbkdf2:{_PBKDF2_HASH}:{_PBKDF2_ITERATIONS}${salt}${dk.hex()}"
|
||||
|
||||
|
||||
def verify_password(password: str, hashed: str) -> bool:
|
||||
"""Constant-time verify *password* against a stored *hashed* value.
|
||||
|
||||
Accepts both our own pbkdf2 hashes and legacy werkzeug scrypt
|
||||
hashes in ``scrypt:N:r:p$salt$hex`` form — so users.json files
|
||||
written by 26.11 / 26.12 keep working after upgrade.
|
||||
"""
|
||||
if not isinstance(password, str) or not isinstance(hashed, str):
|
||||
return False
|
||||
try:
|
||||
method, salt, expected = hashed.split("$", 2)
|
||||
except ValueError:
|
||||
return False
|
||||
parts = method.split(":")
|
||||
if not parts:
|
||||
return False
|
||||
algo = parts[0]
|
||||
pw_bytes = password.encode("utf-8")
|
||||
salt_bytes = salt.encode("utf-8")
|
||||
try:
|
||||
if algo == "pbkdf2":
|
||||
if len(parts) < 3:
|
||||
return False
|
||||
inner_hash = parts[1]
|
||||
iterations = int(parts[2])
|
||||
dk = hashlib.pbkdf2_hmac(inner_hash, pw_bytes, salt_bytes, iterations)
|
||||
elif algo == "scrypt":
|
||||
# werkzeug: scrypt:N:r:p, dklen=64, maxmem=132 MiB. Without
|
||||
# the explicit maxmem we'd hit OpenSSL's default memory cap
|
||||
# and throw ValueError on N >= 32768.
|
||||
if len(parts) < 4:
|
||||
return False
|
||||
n = int(parts[1])
|
||||
r = int(parts[2])
|
||||
p = int(parts[3])
|
||||
dk = hashlib.scrypt(
|
||||
pw_bytes,
|
||||
salt=salt_bytes,
|
||||
n=n,
|
||||
r=r,
|
||||
p=p,
|
||||
dklen=64,
|
||||
maxmem=132 * 1024 * 1024,
|
||||
)
|
||||
else:
|
||||
return False
|
||||
except (ValueError, TypeError, OverflowError):
|
||||
return False
|
||||
return hmac.compare_digest(dk.hex(), expected)
|
||||
|
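The round-trip behind `hash_password` / `verify_password` is easy to check in isolation. A self-contained sketch of the same `method$salt$hexdigest` format with the pbkdf2 branch only (iteration count lowered here purely to keep the demo fast — the module above uses 600,000):

```python
import hashlib
import hmac
import secrets


def make_hash(password: str, iterations: int = 600_000) -> str:
    # Same shape as hash_password: pbkdf2:sha256:<iter>$<salt>$<hex>.
    salt = secrets.token_urlsafe(16)[:16]
    dk = hashlib.pbkdf2_hmac("sha256", password.encode(), salt.encode(), iterations)
    return f"pbkdf2:sha256:{iterations}${salt}${dk.hex()}"


def check_hash(password: str, stored: str) -> bool:
    # split("$", 2) keeps any "$" in the digest intact, mirroring
    # verify_password's parsing.
    method, salt, expected = stored.split("$", 2)
    _, inner, iters = method.split(":")
    dk = hashlib.pbkdf2_hmac(inner, password.encode(), salt.encode(), int(iters))
    return hmac.compare_digest(dk.hex(), expected)
```

Note the salt is stored in the clear inside the hash string — that is by design; its job is uniqueness per user, not secrecy.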
@@ -11,6 +11,15 @@ DEFAULT_BUNDLED_APPS_DIR = Path("/opt/furtka/current/apps")
# release tarball. Lives under /var/lib/furtka/ so it survives core self-
# updates — the resolver (furtka.sources) prefers it over the bundled seed.
DEFAULT_CATALOG_DIR = Path("/var/lib/furtka/catalog")
# Users / auth state. One JSON file keyed by role — today only "admin" exists.
# Lives under /var/lib/furtka/ so self-updates don't stomp it. Mode 0600 is
# enforced by furtka.auth.save_users (same atomic-write pattern as the app
# .env files).
DEFAULT_USERS_FILE = Path("/var/lib/furtka/users.json")
# Static-web asset dir served by the Python handler for / and
# /settings* so those pages pick up the auth-guard. Caddy also serves
# /style.css and other assets directly from here for the login page.
DEFAULT_STATIC_WWW = Path("/opt/furtka/current/assets/www")


def apps_dir() -> Path:

@@ -27,3 +36,11 @@ def catalog_dir() -> Path:

def catalog_apps_dir() -> Path:
    return catalog_dir() / "apps"


def users_file() -> Path:
    return Path(os.environ.get("FURTKA_USERS_FILE", DEFAULT_USERS_FILE))


def static_www_dir() -> Path:
    return Path(os.environ.get("FURTKA_STATIC_WWW", DEFAULT_STATIC_WWW))
@@ -49,6 +49,9 @@ _CADDYFILE_LIVE = Path(os.environ.get("FURTKA_CADDYFILE_PATH", "/etc/caddy/Caddy
_CADDY_SNIPPET_DIR = Path(
    os.environ.get("FURTKA_CADDY_SNIPPET_DIR", str(_CADDYFILE_LIVE.parent / "furtka.d"))
)
_CADDY_HTTPS_SNIPPET_DIR = Path(
    os.environ.get("FURTKA_CADDY_HTTPS_SNIPPET_DIR", str(_CADDYFILE_LIVE.parent / "furtka-https.d"))
)
_SYSTEMD_DIR = Path(os.environ.get("FURTKA_SYSTEMD_DIR", "/etc/systemd/system"))
_HOSTNAME_FILE = Path(os.environ.get("FURTKA_HOSTNAME_FILE", "/etc/hostname"))
_CADDYFILE_HOSTNAME_MARKER = "__FURTKA_HOSTNAME__"

@@ -170,6 +173,24 @@ def _current_hostname() -> str:
    return name or "furtka"


def _maybe_migrate_preserve_https() -> None:
    """26.14 → 26.15 migration: if the box already had the force-HTTPS
    redirect snippet on disk, that means the user explicitly opted
    into HTTPS under the old regime. Under the new opt-in regime,
    HTTPS also requires a separate listener snippet — write it here so
    the user's HTTPS doesn't silently break when the Caddyfile refresh
    removes the default hostname block.
    """
    redirect_snippet = _CADDY_SNIPPET_DIR / "redirect.caddyfile"
    https_snippet = _CADDY_HTTPS_SNIPPET_DIR / "https.caddyfile"
    if not redirect_snippet.is_file() or https_snippet.is_file():
        return
    hostname = _current_hostname()
    https_snippet.write_text(
        f"{hostname}.local, {hostname} {{\n\ttls internal\n\timport furtka_routes\n}}\n"
    )


def _refresh_caddyfile(source: Path) -> bool:
    """Copy the shipped Caddyfile to /etc/caddy/ iff it differs. Returns True
    if the file changed (so caddy needs more than a bare reload).

@@ -180,10 +201,19 @@ def _refresh_caddyfile(source: Path) -> bool:
    """
    if not source.is_file():
        return False
    # Snippet dir for the /api/furtka/https/force toggle. Pre-HTTPS installs
    # don't have this dir; ensure it so the Caddyfile's glob import can't
    # trip an older Caddy on a missing path during the first reload.
    # Snippet dirs for the /api/furtka/https/force toggle. Pre-HTTPS
    # installs don't have them; ensure both so the Caddyfile's glob
    # imports can't trip an older Caddy on missing paths during the
    # first reload. furtka-https.d is new in 26.15-alpha — older boxes
    # upgrading across this version line won't have it on disk yet.
    _CADDY_SNIPPET_DIR.mkdir(mode=0o755, parents=True, exist_ok=True)
    _CADDY_HTTPS_SNIPPET_DIR.mkdir(mode=0o755, parents=True, exist_ok=True)
    # Migration: pre-26.15 Caddyfile always served :443 via tls internal,
    # so a box that had the "force HTTPS" redirect toggle ON relied on
    # HTTPS being there implicitly. After this Caddyfile refresh the
    # hostname block is gone, so the redirect would 301 to a dead :443.
    # Preserve intent by writing the HTTPS listener snippet too.
    _maybe_migrate_preserve_https()
    rendered = source.read_text().replace(_CADDYFILE_HOSTNAME_MARKER, _current_hostname())
    if _CADDYFILE_LIVE.is_file() and rendered == _CADDYFILE_LIVE.read_text():
        return False
@@ -255,13 +285,35 @@ def _run(cmd: list[str]) -> None:


def _health_check(url: str, deadline_s: float = 30.0) -> bool:
    """Poll *url* until we get *any* response from the Python server.

    Treats any 2xx-4xx response as "server is up". A 401 on
    /api/apps after the 26.11-alpha auth-guard shipped is a perfectly
    valid signal that the new code imported + the socket is listening
    — rejecting the request is still "alive". Only 5xx or connection-
    level failures count as unhealthy.

    Rationale: pre-26.13 this function hit /api/apps and expected 200,
    which silently broke every upgrade across the auth boundary (26.10
    → 26.11+) and auto-rolled back. Now we just need proof the new
    process came up.
    """
    end = time.time() + deadline_s
    while time.time() < end:
        try:
            with urllib.request.urlopen(url, timeout=3) as resp:
                if resp.status == 200:
                # Any 2xx/3xx → alive. urllib follows redirects by
                # default, so a 302 → /login resolves to /login's 200.
                if resp.status < 500:
                    return True
        except urllib.error.HTTPError as e:
            # 4xx → server is up, just refused us (auth, bad request,
            # whatever). Counts as healthy for the "did it come back"
            # check. 5xx → genuinely broken, don't accept.
            if 400 <= e.code < 500:
                return True
        except urllib.error.URLError:
            # Connection refused / DNS / timeout → not up yet, retry.
            pass
        time.sleep(1)
    return False
@@ -54,7 +54,7 @@ mDNS is wired: `avahi-daemon` + `nss-mdns` come from `packages.extra`, the live
Once `archinstall` finishes and you click **Reboot now**, the VM comes up into the installed system. No more port `:5000` — the wizard ISO is gone. Instead:

- **Console**: agetty shows `Furtka is ready. Open http://<hostname>.local …` with the IP fallback underneath.
- **Browser** at `http://<hostname>.local` (default `http://furtka.local` — the form's default hostname is `furtka`; only the live-installer ISO uses `proksi`): Caddy-served landing page with three live status tiles (uptime, Docker version, free disk) refreshed every 30 s by `furtka-status.timer`. Since 26.4-alpha, `https://<hostname>.local` is also served via Caddy's `tls internal` — trust `rootCA.crt` from `/settings` to clear browser warnings.
- **Browser** at `http://<hostname>.local` (default `http://furtka.local` — the form's default hostname is `furtka`; only the live-installer ISO uses `proksi`): Caddy-served landing page with three live status tiles (uptime, Docker version, free disk) refreshed every 30 s by `furtka-status.timer`. HTTPS is opt-in (26.15-alpha) — flip the toggle in `/settings` to switch on Caddy's `tls internal` on `:443`, then trust `rootCA.crt` from `/settings` to clear browser warnings.
- **SSH**: `ssh <user>@<hostname>.local` works; `docker ps` works without `sudo` because the user is in the `docker` group.

This is a demo shell — no Authentik, no app store yet. The landing page lives at `/srv/furtka/www/`, served by Caddy on `:80` per `/etc/caddy/Caddyfile`. All of this is written into the target by `webinstaller/app.py`'s `_post_install_commands` via archinstall's `custom_commands`.
@@ -62,5 +62,4 @@ This is a demo shell — no Authentik, no app store yet. The landing page lives

## Known rough edges

- **Disk space**: the first time you build on a fresh host, the squashfs/xorriso steps need ~15 GB free. If the host's LVM-root is smaller, `xorriso` silently dies at the very end with "Image size exceeds free space on media".
- **Live-installer wizard is still HTTP-only**. `http://proksi.local:5000` during install has no TLS; the installed box gets Caddy + `tls internal` on `:443` once it reboots (26.4-alpha), but bringing the same story to the wizard itself is a later milestone.
- **Boot USB could appear as an install target on bare metal**. On a VM the ISO is a CD-ROM (filtered) and SATA is the only disk, so the picker only shows the install target. On bare metal with a USB stick, the USB is `TYPE=disk` and shows up alongside the real install drive; a user could in theory pick the USB they just booted from. Mitigating this needs detecting the boot media (via `findmnt /run/archiso/bootmnt` or similar) and filtering it out in `webinstaller/drives.py`.
- **Live-installer wizard is still HTTP-only**. `http://proksi.local:5000` during install has no TLS; once the box reboots, Caddy can serve `tls internal` on `:443` if the user opts in via `/settings` (26.15-alpha), but bringing TLS to the wizard itself is a later milestone.
@@ -8,6 +8,23 @@ server {

    charset utf-8;

    gzip on;
    gzip_vary on;
    gzip_min_length 1024;
    gzip_comp_level 6;
    gzip_types
        text/css
        text/plain
        text/xml
        application/javascript
        application/json
        application/xml
        application/rss+xml
        application/atom+xml
        image/svg+xml
        font/woff
        font/woff2;

    location / {
        try_files $uri $uri/ $uri.html =404;
    }
@@ -1,6 +1,6 @@
[project]
name = "furtka"
version = "26.6-alpha"
version = "26.15-alpha"
description = "Open-source home server OS — simple enough for everyone."
requires-python = ">=3.11"
readme = "README.md"
@@ -99,4 +99,20 @@ upload_asset "$TARBALL"
upload_asset "$SHA_FILE"
upload_asset "$RELEASE_JSON"

# Optional: attach the live-installer ISO when dist/furtka-<version>.iso
# exists. Release workflows that want this build the ISO via iso/build.sh
# and move the output here before calling publish-release. Local runs
# that skip the ISO step still publish the core release successfully.
#
# Soft-fail: the ISO is ~1 GB and Forgejo's reverse proxy has returned
# 504 on the upload even when the write eventually succeeds. The core
# tarball (which boxes need for self-update) is already uploaded above,
# so don't let an ISO transport hiccup fail the whole release.
ISO="$DIST_DIR/furtka-$VERSION.iso"
if [ -f "$ISO" ]; then
    if ! upload_asset "$ISO"; then
        echo "warning: ISO upload failed — release published without ISO asset" >&2
    fi
fi

echo "Release $VERSION published: https://$HOST/$REPO/releases/tag/$VERSION"
@@ -5,7 +5,7 @@ import urllib.request

import pytest

from furtka import api, dockerops
from furtka import api, auth, dockerops

VALID_MANIFEST = {
    "name": "fileshare",
@@ -23,14 +23,47 @@ def fake_dirs(tmp_path, monkeypatch):
    apps = tmp_path / "apps"
    bundled = tmp_path / "bundled"
    catalog = tmp_path / "catalog"
    users_file = tmp_path / "users.json"
    static_www = tmp_path / "www"
    apps.mkdir()
    bundled.mkdir()
    static_www.mkdir()
    (static_www / "index.html").write_text("<html>landing page</html>")
    (static_www / "settings").mkdir()
    (static_www / "settings" / "index.html").write_text("<html>settings page</html>")
    monkeypatch.setenv("FURTKA_APPS_DIR", str(apps))
    monkeypatch.setenv("FURTKA_BUNDLED_APPS_DIR", str(bundled))
    monkeypatch.setenv("FURTKA_CATALOG_DIR", str(catalog))
    monkeypatch.setenv("FURTKA_USERS_FILE", str(users_file))
    monkeypatch.setenv("FURTKA_STATIC_WWW", str(static_www))
    # install_runner writes to /var/lib/furtka/install-state.json and
    # /run/furtka/install.lock by default — redirect into tmp_path so
    # test code doesn't need root.
    monkeypatch.setenv("FURTKA_INSTALL_STATE", str(tmp_path / "install-state.json"))
    monkeypatch.setenv("FURTKA_INSTALL_LOCK", str(tmp_path / "install.lock"))
    # install_runner caches env vars at import time, so reload it to
    # pick up the tmp-path env vars this fixture just set.
    import importlib

    from furtka import install_runner

    importlib.reload(install_runner)
    # Scrub any sessions or lockout counters that leaked from a prior
    # test — both stores are module-level.
    auth.SESSIONS.clear()
    auth.LOCKOUT.clear_all()
    return apps, bundled


@pytest.fixture
def admin_session(fake_dirs):
    """Pre-create an admin account + live session. Returns a Cookie header
    value ready to drop into urllib.request.Request(headers=...)."""
    auth.create_admin("daniel", "hunter2-pw")
    session = auth.SESSIONS.create("daniel")
    return f"{auth.COOKIE_NAME}={session.token}"


@pytest.fixture
def no_docker(monkeypatch):
    """Stub docker calls so install/remove can run without a daemon."""
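The `importlib.reload` step in the fixture above exists because a module that reads `os.environ` at import time freezes the value it saw first. A standalone demonstration with a throwaway module (the name `cfgdemo` is invented for this sketch, not part of furtka):

```python
import importlib
import os
import sys
import tempfile
from pathlib import Path

# Write a tiny module that captures an env var at import time.
mod_dir = tempfile.mkdtemp()
Path(mod_dir, "cfgdemo.py").write_text(
    'import os\nSTATE = os.environ.get("DEMO_STATE", "/var/lib/demo/state.json")\n'
)
sys.path.insert(0, mod_dir)

import cfgdemo

first = cfgdemo.STATE          # value captured at first import

os.environ["DEMO_STATE"] = "/tmp/test-state.json"
unchanged = cfgdemo.STATE      # still the old value — module top level ran once

importlib.reload(cfgdemo)      # re-executes the module top level
reloaded = cfgdemo.STATE       # now sees the new env var
```

This is exactly why setting `FURTKA_INSTALL_STATE` via `monkeypatch.setenv` alone is not enough: `install_runner` already computed its paths when it was first imported.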
@@ -39,6 +72,29 @@ def no_docker(monkeypatch):
    monkeypatch.setattr(dockerops, "compose_down", lambda app_dir, project: None)


@pytest.fixture
def no_systemd_run(monkeypatch):
    """Stub the systemd-run dispatch in _do_install so tests don't need it.

    The install endpoint now spawns a background systemd-run unit to do
    the docker-facing phases. Tests that exercise the install path only
    care that the sync pre-phase succeeded and the dispatch was
    attempted with the right args — they shouldn't actually fire up
    systemd. subprocess.run gets monkeypatched to return a fake success
    CompletedProcess, and the call args get captured for assertions.
    """
    import subprocess

    calls = []

    def fake_run(cmd, check=False, capture_output=False, text=False, **kwargs):
        calls.append(cmd)
        return subprocess.CompletedProcess(cmd, 0, stdout="", stderr="")

    monkeypatch.setattr(subprocess, "run", fake_run)
    return calls


def _write_bundled(bundled, name, manifest=None, env_example=None):
    app = bundled / name
    app.mkdir()
@@ -131,7 +187,7 @@ def test_list_available_inlines_icon_svg(fake_dirs):
    assert entry["icon_svg"] == _SIMPLE_SVG


def test_list_installed_inlines_icon_svg(fake_dirs, no_docker):
def test_list_installed_inlines_icon_svg(fake_dirs, no_docker, no_systemd_run):
    apps, bundled = fake_dirs
    app = _write_bundled(bundled, "fileshare", env_example="A=real")
    _write_icon(app, _SIMPLE_SVG)
@@ -140,12 +196,15 @@ def test_list_installed_inlines_icon_svg(fake_dirs, no_docker):
    assert entry["icon_svg"] == _SIMPLE_SVG


def test_list_available_hides_already_installed(fake_dirs, no_docker):
def test_list_available_hides_already_installed(fake_dirs, no_docker, no_systemd_run):
    apps, bundled = fake_dirs
    _write_bundled(bundled, "fileshare", env_example="A=real")
    status, _ = api._do_install("fileshare")
    assert status == 200
    # Now bundled should NOT include fileshare anymore.
    assert status == 202  # async dispatch
    # Now bundled should NOT include fileshare anymore — the app folder
    # exists on disk (install_from finished synchronously before the
    # dispatch), which is what _list_available uses for the "installed"
    # check.
    assert api._list_available() == []
    # But installed list should.
    installed = api._list_installed()
@@ -188,7 +247,7 @@ def test_remove_endpoint_unknown(fake_dirs, no_docker):
    assert status == 404


def test_remove_endpoint_happy_path(fake_dirs, no_docker):
def test_remove_endpoint_happy_path(fake_dirs, no_docker, no_systemd_run):
    apps, bundled = fake_dirs
    _write_bundled(bundled, "fileshare", env_example="A=real")
    api._do_install("fileshare")
@@ -199,23 +258,39 @@ def test_remove_endpoint_happy_path(fake_dirs, no_docker):
    assert not (apps / "fileshare").exists()


def test_http_get_apps_route(fake_dirs, no_docker):
def _request(port, path, cookie=None, method="GET", body=None):
    headers = {}
    if cookie is not None:
        headers["Cookie"] = cookie
    data = None
    if body is not None:
        headers["Content-Type"] = "application/json"
        data = json.dumps(body).encode()
    return urllib.request.Request(
        f"http://127.0.0.1:{port}{path}",
        data=data,
        headers=headers,
        method=method,
    )


def test_http_get_apps_route(fake_dirs, no_docker, admin_session):
    """Smoke test the actual HTTP server with a real socket, urllib client."""
    server = api.HTTPServer(("127.0.0.1", 0), api._Handler)  # port 0 → ephemeral
    port = server.server_address[1]
    t = threading.Thread(target=server.serve_forever, daemon=True)
    t.start()
    try:
        with urllib.request.urlopen(f"http://127.0.0.1:{port}/api/apps") as r:
        with urllib.request.urlopen(_request(port, "/api/apps", cookie=admin_session)) as r:
            assert r.status == 200
            data = json.loads(r.read())
            assert data == []
        with urllib.request.urlopen(f"http://127.0.0.1:{port}/") as r:
        with urllib.request.urlopen(_request(port, "/apps", cookie=admin_session)) as r:
            assert r.status == 200
            assert b"Furtka Apps" in r.read()
        # Unknown route → 404 JSON.
        try:
            urllib.request.urlopen(f"http://127.0.0.1:{port}/api/nope")
            urllib.request.urlopen(_request(port, "/api/nope", cookie=admin_session))
            raise AssertionError("expected 404")
        except urllib.error.HTTPError as e:
            assert e.code == 404
@ -224,17 +299,18 @@ def test_http_get_apps_route(fake_dirs, no_docker):
        server.server_close()


def test_http_post_install_unknown_app(fake_dirs):
def test_http_post_install_unknown_app(fake_dirs, admin_session):
    server = api.HTTPServer(("127.0.0.1", 0), api._Handler)
    port = server.server_address[1]
    t = threading.Thread(target=server.serve_forever, daemon=True)
    t.start()
    try:
        req = urllib.request.Request(
            f"http://127.0.0.1:{port}/api/apps/install",
            data=json.dumps({"name": "ghost"}).encode(),
            headers={"Content-Type": "application/json"},
        req = _request(
            port,
            "/api/apps/install",
            cookie=admin_session,
            method="POST",
            body={"name": "ghost"},
        )
        try:
            urllib.request.urlopen(req)
@ -248,6 +324,447 @@ def test_http_post_install_unknown_app(fake_dirs):
        server.server_close()


# --- Auth guard + login flow ------------------------------------------------


def _start_server():
    server = api.HTTPServer(("127.0.0.1", 0), api._Handler)
    port = server.server_address[1]
    t = threading.Thread(target=server.serve_forever, daemon=True)
    t.start()
    return server, port


def test_unauthenticated_api_returns_401(fake_dirs):
    # No admin_session fixture → no cookie on the request.
    server, port = _start_server()
    try:
        try:
            urllib.request.urlopen(_request(port, "/api/apps"))
            raise AssertionError("expected 401")
        except urllib.error.HTTPError as e:
            assert e.code == 401
            body = json.loads(e.read())
            assert body["error"] == "not authenticated"
    finally:
        server.shutdown()
        server.server_close()


def test_unauthenticated_html_redirects_to_login(fake_dirs):
    server, port = _start_server()
    try:
        # Disable redirect following so we can inspect the 302.
        opener = urllib.request.build_opener(_NoRedirectHandler())
        try:
            opener.open(_request(port, "/apps"))
            raise AssertionError("expected 302")
        except urllib.error.HTTPError as e:
            assert e.code == 302
            assert e.headers["Location"] == "/login"
    finally:
        server.shutdown()
        server.server_close()


class _NoRedirectHandler(urllib.request.HTTPRedirectHandler):
    def redirect_request(self, *args, **kwargs):
        return None


def test_unauth_root_redirects_to_login(fake_dirs):
    """/ was previously Caddy-direct static HTML, bypassing auth. Now
    Python serves it and the auth-guard applies — unauth visitor gets
    bounced to /login just like /apps does."""
    server, port = _start_server()
    try:
        opener = urllib.request.build_opener(_NoRedirectHandler())
        try:
            opener.open(_request(port, "/"))
            raise AssertionError("expected 302")
        except urllib.error.HTTPError as e:
            assert e.code == 302
            assert e.headers["Location"] == "/login"
    finally:
        server.shutdown()
        server.server_close()


def test_unauth_settings_redirects_to_login(fake_dirs):
    server, port = _start_server()
    try:
        opener = urllib.request.build_opener(_NoRedirectHandler())
        for path in ("/settings", "/settings/"):
            try:
                opener.open(_request(port, path))
                raise AssertionError(f"expected 302 for {path}")
            except urllib.error.HTTPError as e:
                assert e.code == 302
                assert e.headers["Location"] == "/login"
    finally:
        server.shutdown()
        server.server_close()


def test_authed_root_serves_static_index(fake_dirs, admin_session):
    server, port = _start_server()
    try:
        with urllib.request.urlopen(_request(port, "/", cookie=admin_session)) as r:
            assert r.status == 200
            assert r.read() == b"<html>landing page</html>"
    finally:
        server.shutdown()
        server.server_close()


def test_authed_settings_serves_static(fake_dirs, admin_session):
    server, port = _start_server()
    try:
        for path in ("/settings", "/settings/"):
            with urllib.request.urlopen(_request(port, path, cookie=admin_session)) as r:
                assert r.status == 200
                assert r.read() == b"<html>settings page</html>"
    finally:
        server.shutdown()
        server.server_close()


def test_authed_root_does_not_serve_apps_html(fake_dirs, admin_session):
    """Regression guard: the pre-26.14 do_GET had `if self.path in ("/",
    "/apps", ...)` which served _HTML (the apps page) for / too, since
    Caddy wasn't proxying / so nobody noticed. Now that Caddy does
    proxy /, the two paths must serve different content."""
    server, port = _start_server()
    try:
        with urllib.request.urlopen(_request(port, "/", cookie=admin_session)) as r:
            root_body = r.read()
        with urllib.request.urlopen(_request(port, "/apps", cookie=admin_session)) as r:
            apps_body = r.read()
        assert root_body != apps_body
        assert b"Furtka Apps" in apps_body
        assert b"landing page" in root_body
    finally:
        server.shutdown()
        server.server_close()


def test_get_login_renders_login_form_when_admin_exists(fake_dirs):
    auth.create_admin("daniel", "hunter2-pw")
    server, port = _start_server()
    try:
        with urllib.request.urlopen(_request(port, "/login")) as r:
            html = r.read().decode()
            assert r.status == 200
            assert "Furtka login" in html
            # No setup confirm-password field rendered in login mode.
            assert 'id="password2"' not in html
            assert "Repeat password" not in html
    finally:
        server.shutdown()
        server.server_close()


def test_get_login_renders_setup_form_when_no_admin(fake_dirs):
    server, port = _start_server()
    try:
        with urllib.request.urlopen(_request(port, "/login")) as r:
            html = r.read().decode()
            assert r.status == 200
            assert "Set admin password" in html
            assert "password2" in html  # setup confirm field rendered
    finally:
        server.shutdown()
        server.server_close()


def test_get_login_redirects_when_already_authed(fake_dirs, admin_session):
    server, port = _start_server()
    try:
        opener = urllib.request.build_opener(_NoRedirectHandler())
        try:
            opener.open(_request(port, "/login", cookie=admin_session))
            raise AssertionError("expected 302")
        except urllib.error.HTTPError as e:
            assert e.code == 302
            assert e.headers["Location"] == "/apps"
    finally:
        server.shutdown()
        server.server_close()


def test_post_login_setup_creates_admin(fake_dirs):
    server, port = _start_server()
    try:
        req = _request(
            port,
            "/login",
            method="POST",
            body={
                "username": "daniel",
                "password": "a-real-password",
                "password2": "a-real-password",
            },
        )
        with urllib.request.urlopen(req) as r:
            assert r.status == 200
            set_cookie = r.headers["Set-Cookie"]
            assert auth.COOKIE_NAME in set_cookie
            assert "HttpOnly" in set_cookie
            assert "SameSite=Strict" in set_cookie
        # users.json got written.
        assert auth.load_users()["admin"]["username"] == "daniel"
        # And the password really works.
        assert auth.authenticate("daniel", "a-real-password") is True
    finally:
        server.shutdown()
        server.server_close()


def test_post_login_setup_rejects_password_mismatch(fake_dirs):
    server, port = _start_server()
    try:
        req = _request(
            port,
            "/login",
            method="POST",
            body={"username": "x", "password": "abcdefgh", "password2": "different"},
        )
        try:
            urllib.request.urlopen(req)
            raise AssertionError("expected 400")
        except urllib.error.HTTPError as e:
            assert e.code == 400
            body = json.loads(e.read())
            assert "match" in body["error"].lower()
        # No admin created.
        assert auth.setup_needed() is True
    finally:
        server.shutdown()
        server.server_close()


def test_post_login_setup_rejects_short_password(fake_dirs):
    server, port = _start_server()
    try:
        req = _request(
            port,
            "/login",
            method="POST",
            body={"username": "x", "password": "short", "password2": "short"},
        )
        try:
            urllib.request.urlopen(req)
            raise AssertionError("expected 400")
        except urllib.error.HTTPError as e:
            assert e.code == 400
    finally:
        server.shutdown()
        server.server_close()


def test_post_login_success_with_correct_credentials(fake_dirs):
    auth.create_admin("daniel", "hunter2-pw")
    server, port = _start_server()
    try:
        req = _request(
            port,
            "/login",
            method="POST",
            body={"username": "daniel", "password": "hunter2-pw"},
        )
        with urllib.request.urlopen(req) as r:
            assert r.status == 200
            set_cookie = r.headers["Set-Cookie"]
            assert auth.COOKIE_NAME in set_cookie
    finally:
        server.shutdown()
        server.server_close()


def test_post_login_rejects_wrong_password(fake_dirs):
    auth.create_admin("daniel", "hunter2-pw")
    server, port = _start_server()
    try:
        req = _request(
            port,
            "/login",
            method="POST",
            body={"username": "daniel", "password": "nope"},
        )
        try:
            urllib.request.urlopen(req)
            raise AssertionError("expected 401")
        except urllib.error.HTTPError as e:
            assert e.code == 401
    finally:
        server.shutdown()
        server.server_close()


def _post_wrong_login(port, username="daniel", password="nope"):
    req = _request(
        port,
        "/login",
        method="POST",
        body={"username": username, "password": password},
    )
    try:
        urllib.request.urlopen(req)
        raise AssertionError("expected HTTPError")
    except urllib.error.HTTPError as e:
        return e


def test_post_login_locks_out_after_repeated_failures(fake_dirs, monkeypatch):
    auth.create_admin("daniel", "hunter2-pw")
    # Flatten the 0.5s speed-bump so the test doesn't take 5 seconds.
    monkeypatch.setattr(api.time, "sleep", lambda _s: None)
    server, port = _start_server()
    try:
        for _ in range(auth.LoginAttempts.MAX_FAILURES):
            err = _post_wrong_login(port)
            assert err.code == 401
        err = _post_wrong_login(port)
        assert err.code == 429
        assert err.headers.get("Retry-After") is not None
        assert int(err.headers["Retry-After"]) > 0
    finally:
        server.shutdown()
        server.server_close()


def test_post_login_429_masks_correctness(fake_dirs, monkeypatch):
    """Once locked, the correct password must also get 429 — no oracle."""
    auth.create_admin("daniel", "hunter2-pw")
    monkeypatch.setattr(api.time, "sleep", lambda _s: None)
    server, port = _start_server()
    try:
        for _ in range(auth.LoginAttempts.MAX_FAILURES):
            _post_wrong_login(port)
        req = _request(
            port,
            "/login",
            method="POST",
            body={"username": "daniel", "password": "hunter2-pw"},
        )
        try:
            urllib.request.urlopen(req)
            raise AssertionError("expected 429")
        except urllib.error.HTTPError as e:
            assert e.code == 429
    finally:
        server.shutdown()
        server.server_close()


def test_post_login_success_clears_lockout_counter(fake_dirs, monkeypatch):
    auth.create_admin("daniel", "hunter2-pw")
    monkeypatch.setattr(api.time, "sleep", lambda _s: None)
    server, port = _start_server()
    try:
        # Get close to the threshold, then log in successfully.
        for _ in range(auth.LoginAttempts.MAX_FAILURES - 1):
            _post_wrong_login(port)
        req = _request(
            port,
            "/login",
            method="POST",
            body={"username": "daniel", "password": "hunter2-pw"},
        )
        with urllib.request.urlopen(req) as r:
            assert r.status == 200
        # Counter must have been cleared: another full MAX_FAILURES-1
        # fails shouldn't trigger 429.
        for _ in range(auth.LoginAttempts.MAX_FAILURES - 1):
            err = _post_wrong_login(port)
            assert err.code == 401
    finally:
        server.shutdown()
        server.server_close()


def test_post_login_setup_not_rate_limited(fake_dirs, monkeypatch):
    """First-run setup is never auth-ed against a hash, so the lockout
    must not apply — otherwise a clumsy admin could lock themselves out
    of a box that has no admin yet."""
    monkeypatch.setattr(api.time, "sleep", lambda _s: None)
    server, port = _start_server()
    try:
        # Many mismatched setup submissions (400s) — no 429 should appear.
        for _ in range(auth.LoginAttempts.MAX_FAILURES + 3):
            req = _request(
                port,
                "/login",
                method="POST",
                body={
                    "username": "daniel",
                    "password": "longenough",
                    "password2": "different",
                },
            )
            try:
                urllib.request.urlopen(req)
                raise AssertionError("expected 400")
            except urllib.error.HTTPError as e:
                assert e.code == 400
        # Then a good setup still succeeds.
        req = _request(
            port,
            "/login",
            method="POST",
            body={
                "username": "daniel",
                "password": "longenough",
                "password2": "longenough",
            },
        )
        with urllib.request.urlopen(req) as r:
            assert r.status == 200
    finally:
        server.shutdown()
        server.server_close()


def test_post_logout_revokes_session(fake_dirs, admin_session):
    server, port = _start_server()
    try:
        # Logout returns 200 and clears the cookie.
        with urllib.request.urlopen(
            _request(port, "/logout", cookie=admin_session, method="POST", body={})
        ) as r:
            assert r.status == 200
            set_cookie = r.headers["Set-Cookie"]
            assert "Max-Age=0" in set_cookie
        # Subsequent API call with same cookie → 401 (session revoked).
        try:
            urllib.request.urlopen(_request(port, "/api/apps", cookie=admin_session))
            raise AssertionError("expected 401")
        except urllib.error.HTTPError as e:
            assert e.code == 401
    finally:
        server.shutdown()
        server.server_close()


def test_post_to_protected_route_without_auth_is_401(fake_dirs):
    server, port = _start_server()
    try:
        req = _request(
            port,
            "/api/apps/install",
            method="POST",
            body={"name": "whatever"},
        )
        try:
            urllib.request.urlopen(req)
            raise AssertionError("expected 401")
        except urllib.error.HTTPError as e:
            assert e.code == 401
    finally:
        server.shutdown()
        server.server_close()
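The guard behaviour the tests in this hunk pin down has two branches: unauthenticated `/api/*` requests get a 401 JSON body, while unauthenticated browser-facing routes get a 302 to `/login`. A minimal sketch of that routing decision, assuming a hypothetical `guard_response` helper (not the handler's actual code):

```python
# Hypothetical sketch of the auth-guard decision exercised above:
# API paths fail with 401 JSON, HTML paths bounce to /login with 302.
def guard_response(path: str, has_valid_session: bool):
    """Return (status, headers, body) for an incoming request path."""
    if has_valid_session:
        return (200, {}, None)  # fall through to the real handler
    if path.startswith("/api/"):
        return (401, {"Content-Type": "application/json"},
                '{"error": "not authenticated"}')
    return (302, {"Location": "/login"}, None)

assert guard_response("/api/apps", False)[0] == 401
assert guard_response("/settings", False)[1]["Location"] == "/login"
assert guard_response("/apps", True)[0] == 200
```

Splitting the failure mode by path prefix matters because a JSON API client cannot follow a redirect to an HTML login form, while a browser cannot do anything useful with a 401 body.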
# --- Settings endpoints ------------------------------------------------------
SETTINGS_MANIFEST = dict(
@ -290,13 +807,13 @@ def test_get_settings_not_found(fake_dirs):
    assert status == 404


def test_install_with_settings_writes_env_via_api(fake_dirs, no_docker):
def test_install_with_settings_writes_env_via_api(fake_dirs, no_docker, no_systemd_run):
    _, bundled = fake_dirs
    _write_bundled(bundled, "fileshare", manifest=SETTINGS_MANIFEST)
    status, body = api._do_install(
        "fileshare", settings={"SMB_USER": "alice", "SMB_PASSWORD": "s3cret"}
    )
    assert status == 200, body
    assert status == 202, body
    apps, _ = fake_dirs
    env = (apps / "fileshare" / ".env").read_text()
    assert "SMB_USER=alice" in env
@ -311,7 +828,7 @@ def test_install_with_settings_rejects_empty_required_via_api(fake_dirs, no_dock
    assert "SMB_PASSWORD" in body["error"]


def test_update_settings_merges(fake_dirs, no_docker):
def test_update_settings_merges(fake_dirs, no_docker, no_systemd_run):
    _, bundled = fake_dirs
    _write_bundled(bundled, "fileshare", manifest=SETTINGS_MANIFEST)
    api._do_install("fileshare", settings={"SMB_USER": "alice", "SMB_PASSWORD": "original"})
@ -329,7 +846,7 @@ def test_update_settings_unknown_app(fake_dirs):
    assert status == 404


def test_http_get_settings_route(fake_dirs, no_docker):
def test_http_get_settings_route(fake_dirs, no_docker, admin_session):
    _, bundled = fake_dirs
    _write_bundled(bundled, "fileshare", manifest=SETTINGS_MANIFEST)
    server = api.HTTPServer(("127.0.0.1", 0), api._Handler)
@ -337,7 +854,9 @@ def test_http_get_settings_route(fake_dirs, no_docker):
    t = threading.Thread(target=server.serve_forever, daemon=True)
    t.start()
    try:
        with urllib.request.urlopen(f"http://127.0.0.1:{port}/api/apps/fileshare/settings") as r:
        with urllib.request.urlopen(
            _request(port, "/api/apps/fileshare/settings", cookie=admin_session)
        ) as r:
            assert r.status == 200
            data = json.loads(r.read())
            assert data["name"] == "fileshare"
@ -391,7 +910,7 @@ def test_update_not_installed(fake_dirs):
    assert "not installed" in body["error"]


def test_update_no_changes(fake_dirs, no_docker, update_docker_stubs):
def test_update_no_changes(fake_dirs, no_docker, no_systemd_run, update_docker_stubs):
    _, bundled = fake_dirs
    _write_bundled(bundled, "fileshare", env_example="A=real")
    api._do_install("fileshare")
@ -404,7 +923,7 @@ def test_update_no_changes(fake_dirs, no_docker, update_docker_stubs):
    assert update_docker_stubs["up_called"] == 0


def test_update_changes_applied(fake_dirs, no_docker, update_docker_stubs):
def test_update_changes_applied(fake_dirs, no_docker, no_systemd_run, update_docker_stubs):
    _, bundled = fake_dirs
    _write_bundled(bundled, "fileshare", env_example="A=real")
    api._do_install("fileshare")
@ -424,7 +943,9 @@ def test_update_changes_applied(fake_dirs, no_docker, update_docker_stubs):
    assert update_docker_stubs["up_called"] == 1


def test_update_skips_services_not_running(fake_dirs, no_docker, update_docker_stubs):
def test_update_skips_services_not_running(
    fake_dirs, no_docker, no_systemd_run, update_docker_stubs
):
    _, bundled = fake_dirs
    _write_bundled(bundled, "fileshare", env_example="A=real")
    api._do_install("fileshare")
@ -438,7 +959,9 @@ def test_update_skips_services_not_running(fake_dirs, no_docker, update_docker_s
    assert update_docker_stubs["up_called"] == 0


def test_update_returns_502_on_pull_error(fake_dirs, no_docker, update_docker_stubs):
def test_update_returns_502_on_pull_error(
    fake_dirs, no_docker, no_systemd_run, update_docker_stubs
):
    _, bundled = fake_dirs
    _write_bundled(bundled, "fileshare", env_example="A=real")
    api._do_install("fileshare")
@ -549,7 +1072,9 @@ def test_furtka_update_status_endpoint(stub_furtka_updater):
    assert stub_furtka_updater["status_called"] == 1


def test_http_post_update_route(fake_dirs, no_docker, update_docker_stubs):
def test_http_post_update_route(
    fake_dirs, no_docker, no_systemd_run, update_docker_stubs, admin_session
):
    _, bundled = fake_dirs
    _write_bundled(bundled, "fileshare", env_example="A=real")
    api._do_install("fileshare")
@ -560,11 +1085,12 @@ def test_http_post_update_route(fake_dirs, no_docker, update_docker_stubs):
    t = threading.Thread(target=server.serve_forever, daemon=True)
    t.start()
    try:
        req = urllib.request.Request(
            f"http://127.0.0.1:{port}/api/apps/fileshare/update",
            data=b"{}",
            headers={"Content-Type": "application/json"},
        req = _request(
            port,
            "/api/apps/fileshare/update",
            cookie=admin_session,
            method="POST",
            body={},
        )
        with urllib.request.urlopen(req) as r:
            assert r.status == 200
@ -576,7 +1102,7 @@ def test_http_post_update_route(fake_dirs, no_docker, update_docker_stubs):
        server.server_close()


def test_http_post_install_with_settings(fake_dirs, no_docker):
def test_http_post_install_with_settings(fake_dirs, no_docker, no_systemd_run, admin_session):
    _, bundled = fake_dirs
    _write_bundled(bundled, "fileshare", manifest=SETTINGS_MANIFEST)
    server = api.HTTPServer(("127.0.0.1", 0), api._Handler)
@ -584,26 +1110,91 @@ def test_http_post_install_with_settings(fake_dirs, no_docker):
    t = threading.Thread(target=server.serve_forever, daemon=True)
    t.start()
    try:
        req = urllib.request.Request(
            f"http://127.0.0.1:{port}/api/apps/install",
            data=json.dumps(
                {
        req = _request(
            port,
            "/api/apps/install",
            cookie=admin_session,
            method="POST",
            body={
                "name": "fileshare",
                "settings": {"SMB_USER": "alice", "SMB_PASSWORD": "s3cret"},
                }
            ).encode(),
            headers={"Content-Type": "application/json"},
            method="POST",
            },
        )
        with urllib.request.urlopen(req) as r:
            assert r.status == 200
            # Async: 202 Accepted + dispatched background job.
            assert r.status == 202
            body = json.loads(r.read())
            assert body["status"] == "dispatched"
            assert body["unit"] == "furtka-install-fileshare"
        # Sync phase wrote the .env before dispatch.
        apps, _ = fake_dirs
        assert "SMB_PASSWORD=s3cret" in (apps / "fileshare" / ".env").read_text()
        # And systemd-run was called exactly once with the expected cmd.
        assert len(no_systemd_run) == 1
        assert no_systemd_run[0][:4] == [
            "systemd-run",
            "--unit=furtka-install-fileshare",
            "--no-block",
            "--collect",
        ]
        assert no_systemd_run[0][-3:] == ["app", "install-bg", "fileshare"]
    finally:
        server.shutdown()
        server.server_close()


def test_do_install_returns_409_when_locked(fake_dirs, no_docker, no_systemd_run):
    _, bundled = fake_dirs
    _write_bundled(bundled, "fileshare", env_example="A=real")
    # Hold the install lock so _do_install fast-fails.
    fh = api.install_runner.acquire_lock()
    try:
        status, body = api._do_install("fileshare")
        assert status == 409
        assert "in progress" in body["error"]
    finally:
        fh.close()


def test_do_install_returns_409_when_state_reports_running(fake_dirs, no_docker, no_systemd_run):
    """Closes the race window where _do_install had already released
    the fcntl lock (so the systemd-run child could grab it) but a
    second POST tried to start a new install while the first was still
    mid-flight. The state file's non-terminal stage is the reliable
    "someone else is installing" signal."""
    _, bundled = fake_dirs
    _write_bundled(bundled, "fileshare", env_example="A=real")
    api.install_runner.write_state("pulling_image", app="jellyfin")
    status, body = api._do_install("fileshare")
    assert status == 409
    assert "in progress" in body["error"]
    assert "jellyfin" in body["error"]
    assert "pulling_image" in body["error"]


def test_do_install_goes_through_after_terminal_state(fake_dirs, no_docker, no_systemd_run):
    """After a successful or failed install, the state file stays at
    done/error — a new install must be accepted, not blocked."""
    _, bundled = fake_dirs
    _write_bundled(bundled, "fileshare", env_example="A=real")
    api.install_runner.write_state("done", app="previous", version="1.0.0")
    status, _ = api._do_install("fileshare")
    assert status == 202

    api.install_runner.write_state("error", app="previous", error="oops")
    status, _ = api._do_install("fileshare")
    assert status == 202


def test_do_install_status_returns_state(fake_dirs):
    # Write state directly, then GET it via the status handler.
    api.install_runner.write_state("pulling_image", app="jellyfin")
    status, body = api._do_install_status()
    assert status == 200
    assert body["stage"] == "pulling_image"
    assert body["app"] == "jellyfin"
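The install tests in this hunk assert a 202-then-dispatch pattern: validation and the `.env` write happen synchronously, then the long-running work is handed to a detached transient systemd unit. Only the flags and trailing arguments are pinned by the assertions; the `build_install_cmd` wrapper and the `furtka` CLI name below are a hypothetical sketch, not the module's actual code:

```python
# Sketch of the dispatch command shape the install tests assert on.
# The "furtka" executable name is a placeholder assumption.
def build_install_cmd(app: str, cli: str = "furtka") -> list[str]:
    return [
        "systemd-run",
        f"--unit=furtka-install-{app}",
        "--no-block",  # don't wait for the unit: the API can return 202 at once
        "--collect",   # garbage-collect the transient unit after it exits
        cli, "app", "install-bg", app,
    ]

cmd = build_install_cmd("fileshare")
assert cmd[:4] == ["systemd-run", "--unit=furtka-install-fileshare",
                   "--no-block", "--collect"]
assert cmd[-3:] == ["app", "install-bg", "fileshare"]
```

Naming the unit after the app is what lets the API report `"unit": "furtka-install-fileshare"` back to the client, and `--no-block` is what keeps the HTTP handler from stalling until the install finishes.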
# --- Catalog endpoints ------------------------------------------------------
@ -639,3 +1230,66 @@ def test_catalog_check_surfaces_forgejo_error(fake_dirs, monkeypatch):
    status, body = api._do_catalog_check()
    assert status == 502
    assert "forgejo api down" in body["error"]


# --- Power endpoints --------------------------------------------------------


def test_power_rejects_unknown_action(fake_dirs):
    status, body = api._do_power({"action": "format-harddrive"})
    assert status == 400
    assert "action" in body["error"]


def test_power_rejects_missing_action(fake_dirs):
    status, body = api._do_power({})
    assert status == 400


def test_power_reboot_dispatches_systemd_run(fake_dirs, monkeypatch):
    seen = []

    class _FakeCompleted:
        returncode = 0
        stdout = ""
        stderr = ""

    def fake_run(cmd, *, check=False, capture_output=False, text=False):
        seen.append(cmd)
        return _FakeCompleted()

    monkeypatch.setattr("subprocess.run", fake_run)
    status, body = api._do_power({"action": "reboot"})
    assert status == 202
    assert body == {"action": "reboot", "scheduled_in_seconds": 3}
    # The dispatched command is a delayed systemd-run that eventually
    # invokes `systemctl reboot`. Asserting the key flags catches
    # accidental regressions (e.g. losing --no-block would block the API
    # thread until the unit completes).
    assert seen[0][:1] == ["systemd-run"]
    assert "--on-active=3s" in seen[0]
    assert "--no-block" in seen[0]
    assert seen[0][-2:] == ["systemctl", "reboot"]


def test_power_poweroff_dispatches_systemctl_poweroff(fake_dirs, monkeypatch):
    seen = []

    class _FakeCompleted:
        returncode = 0

    monkeypatch.setattr("subprocess.run", lambda cmd, **kw: (seen.append(cmd), _FakeCompleted())[1])
    status, body = api._do_power({"action": "poweroff"})
    assert status == 202
    assert body["action"] == "poweroff"
    assert seen[0][-2:] == ["systemctl", "poweroff"]


def test_power_surfaces_systemd_run_missing(fake_dirs, monkeypatch):
    def boom(*a, **kw):
        raise FileNotFoundError(2, "No such file", "systemd-run")

    monkeypatch.setattr("subprocess.run", boom)
    status, body = api._do_power({"action": "reboot"})
    assert status == 502
    assert "systemd-run" in body["error"]
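The power tests assert a delayed dispatch: `_do_power` returns 202 immediately while a transient `--on-active` timer unit fires `systemctl` a few seconds later, so the HTTP response can flush before the box goes down. A sketch of building that command from the flags the tests check; the `build_power_cmd` helper itself is hypothetical:

```python
# Sketch of the delayed power command asserted by the tests above.
def build_power_cmd(action: str, delay_s: int = 3) -> list[str]:
    if action not in ("reboot", "poweroff"):
        raise ValueError(f"unknown power action: {action}")
    return [
        "systemd-run",
        f"--on-active={delay_s}s",  # timer fires after the response is sent
        "--no-block",               # don't block the API thread on the unit
        "systemctl", action,
    ]

cmd = build_power_cmd("reboot")
assert cmd[0] == "systemd-run"
assert "--on-active=3s" in cmd and "--no-block" in cmd
assert cmd[-2:] == ["systemctl", "reboot"]
```

Calling `systemctl reboot` directly from the handler would race the in-flight response; the grace period is what makes the `scheduled_in_seconds` field in the 202 body honest.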
230 tests/test_auth.py Normal file
@ -0,0 +1,230 @@
import json
from datetime import UTC, datetime, timedelta

import pytest

from furtka import auth


@pytest.fixture
def tmp_users_file(tmp_path, monkeypatch):
    path = tmp_path / "users.json"
    monkeypatch.setenv("FURTKA_USERS_FILE", str(path))
    # Sessions and lockout state are module-level; wipe between tests so
    # one doesn't leak a valid token (or a stale failure counter) into
    # the next.
    auth.SESSIONS.clear()
    auth.LOCKOUT.clear_all()
    return path


def test_hash_password_roundtrip():
    h = auth.hash_password("hunter2")
    assert h != "hunter2"  # Not plain text.
    assert auth.verify_password("hunter2", h) is True
    assert auth.verify_password("hunter3", h) is False


def test_hash_password_is_salted():
    # Two calls with the same password must produce different hashes.
    a = auth.hash_password("same")
    b = auth.hash_password("same")
    assert a != b
    # But both verify against the original.
    assert auth.verify_password("same", a)
    assert auth.verify_password("same", b)


def test_load_users_returns_empty_when_missing(tmp_users_file):
    assert not tmp_users_file.exists()
    assert auth.load_users() == {}


def test_load_users_returns_empty_on_junk(tmp_users_file):
    tmp_users_file.write_text("{not json")
    assert auth.load_users() == {}


def test_load_users_returns_empty_on_non_dict(tmp_users_file):
    tmp_users_file.write_text("[]")
    assert auth.load_users() == {}


def test_save_users_atomic_and_0600(tmp_users_file):
    auth.save_users({"admin": {"hash": "x", "username": "daniel"}})
    assert tmp_users_file.exists()
    mode = tmp_users_file.stat().st_mode & 0o777
    assert mode == 0o600, f"expected 0o600, got {oct(mode)}"
    loaded = json.loads(tmp_users_file.read_text())
    assert loaded["admin"]["username"] == "daniel"


def test_setup_needed_true_on_missing_file(tmp_users_file):
    assert auth.setup_needed() is True


def test_setup_needed_true_on_empty_dict(tmp_users_file):
    tmp_users_file.write_text("{}")
    assert auth.setup_needed() is True


def test_setup_needed_false_when_admin_exists(tmp_users_file):
    auth.create_admin("daniel", "secret-pw")
    assert auth.setup_needed() is False


def test_create_admin_overwrites_file(tmp_users_file):
    auth.create_admin("daniel", "secret-pw")
    auth.create_admin("robert", "new-pw")
    users = auth.load_users()
    assert users["admin"]["username"] == "robert"


def test_authenticate_happy(tmp_users_file):
    auth.create_admin("daniel", "secret-pw")
    assert auth.authenticate("daniel", "secret-pw") is True


def test_authenticate_wrong_username(tmp_users_file):
    auth.create_admin("daniel", "secret-pw")
    assert auth.authenticate("robert", "secret-pw") is False


def test_authenticate_wrong_password(tmp_users_file):
    auth.create_admin("daniel", "secret-pw")
    assert auth.authenticate("daniel", "wrong") is False


def test_authenticate_no_admin(tmp_users_file):
    assert auth.authenticate("daniel", "anything") is False


# ---- Session store ---------------------------------------------------------


def test_session_create_and_lookup(tmp_users_file):
    s = auth.SESSIONS.create("daniel")
    assert s.username == "daniel"
    assert s.token
    looked_up = auth.SESSIONS.lookup(s.token)
    assert looked_up is not None
    assert looked_up.username == "daniel"


def test_session_lookup_unknown_token(tmp_users_file):
    assert auth.SESSIONS.lookup("not-a-real-token") is None


def test_session_lookup_none_token(tmp_users_file):
    assert auth.SESSIONS.lookup(None) is None
    assert auth.SESSIONS.lookup("") is None


def test_session_revoke(tmp_users_file):
    s = auth.SESSIONS.create("daniel")
    auth.SESSIONS.revoke(s.token)
    assert auth.SESSIONS.lookup(s.token) is None


def test_session_expires(tmp_users_file, monkeypatch):
    # Build a session store with a 0-second TTL so lookup immediately
    # treats new sessions as expired.
    store = auth.SessionStore(ttl_seconds=0)
    s = store.create("daniel")
    # Force the clock forward a hair so the > check fires.
    monkeypatch.setattr(
        auth,
        "datetime",
        _FakeDatetime(datetime.now(UTC) + timedelta(seconds=1)),
    )
    # The module-local datetime reference inside SessionStore.lookup
    # resolves at call time. Verify that an expired session is dropped.
    assert store.lookup(s.token) is None


class _FakeDatetime:
    """Tiny shim — only `.now(tz)` is used from SessionStore."""

    def __init__(self, fixed_utc):
        self._fixed = fixed_utc

    def now(self, tz=None):
        if tz is None:
            return self._fixed.replace(tzinfo=None)
        return self._fixed.astimezone(tz)


# ---- Login attempts / lockout ----------------------------------------------


def test_lockout_under_threshold_still_allowed(tmp_users_file):
    store = auth.LoginAttempts(max_failures=3, window_seconds=60, lockout_seconds=60)
    key = ("daniel", "10.0.0.1")
    for _ in range(2):
|
||||
store.register_failure(key)
|
||||
assert store.is_locked(key) is False
|
||||
assert store.retry_after_seconds(key) == 0
|
||||
|
||||
|
||||
def test_lockout_triggers_at_threshold(tmp_users_file):
|
||||
store = auth.LoginAttempts(max_failures=3, window_seconds=60, lockout_seconds=60)
|
||||
key = ("daniel", "10.0.0.1")
|
||||
for _ in range(3):
|
||||
store.register_failure(key)
|
||||
assert store.is_locked(key) is True
|
||||
assert store.retry_after_seconds(key) > 0
|
||||
assert store.retry_after_seconds(key) <= 60
|
||||
|
||||
|
||||
def test_lockout_window_decay(tmp_users_file, monkeypatch):
|
||||
store = auth.LoginAttempts(max_failures=3, window_seconds=60, lockout_seconds=60)
|
||||
key = ("daniel", "10.0.0.1")
|
||||
for _ in range(3):
|
||||
store.register_failure(key)
|
||||
assert store.is_locked(key) is True
|
||||
# Jump 2 minutes ahead — all failures are older than the window
|
||||
# and should be pruned on the next check.
|
||||
monkeypatch.setattr(
|
||||
auth,
|
||||
"datetime",
|
||||
_FakeDatetime(datetime.now(UTC) + timedelta(seconds=121)),
|
||||
)
|
||||
assert store.is_locked(key) is False
|
||||
assert store.retry_after_seconds(key) == 0
|
||||
|
||||
|
||||
def test_lockout_clear_resets(tmp_users_file):
|
||||
store = auth.LoginAttempts(max_failures=2, window_seconds=60, lockout_seconds=60)
|
||||
key = ("daniel", "10.0.0.1")
|
||||
store.register_failure(key)
|
||||
store.register_failure(key)
|
||||
assert store.is_locked(key) is True
|
||||
store.clear(key)
|
||||
assert store.is_locked(key) is False
|
||||
assert store.retry_after_seconds(key) == 0
|
||||
|
||||
|
||||
def test_lockout_keys_are_independent(tmp_users_file):
|
||||
store = auth.LoginAttempts(max_failures=2, window_seconds=60, lockout_seconds=60)
|
||||
locked = ("daniel", "1.1.1.1")
|
||||
other_ip = ("daniel", "2.2.2.2")
|
||||
other_user = ("robert", "1.1.1.1")
|
||||
store.register_failure(locked)
|
||||
store.register_failure(locked)
|
||||
assert store.is_locked(locked) is True
|
||||
assert store.is_locked(other_ip) is False
|
||||
assert store.is_locked(other_user) is False
|
||||
|
||||
|
||||
def test_lockout_clear_all_wipes_every_key(tmp_users_file):
|
||||
store = auth.LoginAttempts(max_failures=2, window_seconds=60, lockout_seconds=60)
|
||||
a = ("daniel", "1.1.1.1")
|
||||
b = ("robert", "2.2.2.2")
|
||||
store.register_failure(a)
|
||||
store.register_failure(a)
|
||||
store.register_failure(b)
|
||||
store.register_failure(b)
|
||||
assert store.is_locked(a) and store.is_locked(b)
|
||||
store.clear_all()
|
||||
assert not store.is_locked(a)
|
||||
assert not store.is_locked(b)
|
||||
|
|
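The lockout tests above pin down a sliding-window contract: failures expire out of the window, lockout triggers at `max_failures`, (user, ip) keys are independent, and `clear`/`clear_all` reset state. The implementation itself is not part of this diff; a minimal store satisfying that contract could look like this (a sketch only, with an optional `now` parameter for testability instead of the `datetime` monkeypatching the real tests use):

```python
import time
from collections import defaultdict


class LoginAttempts:
    """Sliding-window failure counter with per-(user, ip) lockout (sketch)."""

    def __init__(self, max_failures, window_seconds, lockout_seconds):
        self.max_failures = max_failures
        self.window_seconds = window_seconds
        self.lockout_seconds = lockout_seconds
        self._failures = defaultdict(list)  # key -> [monotonic timestamps]

    def _prune(self, key, now):
        # Drop failures older than the window; this is what makes a
        # lockout decay on its own after window_seconds.
        cutoff = now - self.window_seconds
        self._failures[key] = [t for t in self._failures[key] if t > cutoff]

    def register_failure(self, key, now=None):
        now = time.monotonic() if now is None else now
        self._prune(key, now)
        self._failures[key].append(now)

    def is_locked(self, key, now=None):
        now = time.monotonic() if now is None else now
        self._prune(key, now)
        return len(self._failures[key]) >= self.max_failures

    def retry_after_seconds(self, key, now=None):
        now = time.monotonic() if now is None else now
        self._prune(key, now)
        failures = self._failures[key]
        if len(failures) < self.max_failures:
            return 0
        return max(0, int(failures[-1] + self.lockout_seconds - now))

    def clear(self, key):
        self._failures.pop(key, None)

    def clear_all(self):
        self._failures.clear()
```

Pruning on every read keeps the store self-cleaning without a background timer, which is why the decay test only needs to advance the clock.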
@@ -32,9 +32,21 @@ def test_app_list_json_with_one_app(tmp_path, monkeypatch, capsys):
                 "display_name": "Network Files",
                 "version": "0.1.0",
                 "description": "SMB",
+                "description_long": "Long description here.",
                 "volumes": ["files"],
                 "ports": [445],
                 "icon": "icon.svg",
+                "open_url": "smb://{host}/files",
+                "settings": [
+                    {
+                        "name": "SMB_USER",
+                        "label": "User",
+                        "description": "SMB user",
+                        "type": "text",
+                        "default": "furtka",
+                        "required": True,
+                    }
+                ],
             }
         )
     )
@@ -43,7 +55,14 @@ def test_app_list_json_with_one_app(tmp_path, monkeypatch, capsys):
     data = json.loads(capsys.readouterr().out)
     assert len(data) == 1
     assert data[0]["ok"] is True
-    assert data[0]["manifest"]["name"] == "fileshare"
+    m = data[0]["manifest"]
+    assert m["name"] == "fileshare"
+    assert m["description_long"] == "Long description here."
+    assert m["open_url"] == "smb://{host}/files"
+    assert len(m["settings"]) == 1
+    assert m["settings"][0]["name"] == "SMB_USER"
+    assert m["settings"][0]["required"] is True
+    assert m["settings"][0]["default"] == "furtka"
 
 
 def test_reconcile_dry_run_empty(tmp_path, monkeypatch, capsys):
@@ -52,3 +71,35 @@ def test_reconcile_dry_run_empty(tmp_path, monkeypatch, capsys):
     assert rc == 0
     out = capsys.readouterr().out
     assert "0 actions" in out
+
+
+def test_app_install_bg_dispatches_to_runner(tmp_path, monkeypatch):
+    """CLI `app install-bg <name>` must call install_runner.run_install(name).
+
+    This is the entry point the HTTP API fires via systemd-run; a regression
+    here would leave the UI hanging at "pulling_image…" forever because
+    the background job never transitions state.
+    """
+    _set_env(monkeypatch, tmp_path)
+    from furtka import install_runner
+
+    called = []
+    monkeypatch.setattr(install_runner, "run_install", lambda name: called.append(name))
+    rc = main(["app", "install-bg", "fileshare"])
+    assert rc == 0
+    assert called == ["fileshare"]
+
+
+def test_app_install_bg_returns_1_on_failure(tmp_path, monkeypatch, capsys):
+    _set_env(monkeypatch, tmp_path)
+    from furtka import install_runner
+
+    def boom(name):
+        raise RuntimeError("compose pull failed")
+
+    monkeypatch.setattr(install_runner, "run_install", boom)
+    rc = main(["app", "install-bg", "fileshare"])
+    assert rc == 1
+    err = capsys.readouterr().err
+    assert "install-bg failed" in err
+    assert "compose pull failed" in err
@@ -95,3 +95,23 @@ def test_drive_type_label_nvme_ssd_hdd():
 
 def test_parse_lsblk_handles_empty_output():
     assert parse_lsblk_output("") == []
+
+
+def test_parse_lsblk_drops_boot_usb(monkeypatch):
+    import drives
+
+    monkeypatch.setattr(drives, "_smart_status", lambda _: "passed")
+    output = "sda 500G disk\nsdb 16G disk\nnvme0n1 1T disk\n"
+    devices = parse_lsblk_output(output, boot_disk="sdb")
+    names = [d["name"] for d in devices]
+    assert "/dev/sdb" not in names
+    assert names == ["/dev/nvme0n1", "/dev/sda"]
+
+
+def test_parse_lsblk_no_boot_disk_keeps_all(monkeypatch):
+    import drives
+
+    monkeypatch.setattr(drives, "_smart_status", lambda _: "passed")
+    output = "sda 500G disk\nsdb 16G disk\n"
+    names = [d["name"] for d in parse_lsblk_output(output, boot_disk=None)]
+    assert set(names) == {"/dev/sda", "/dev/sdb"}
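The two new tests encode the boot-disk filter and an NVMe-first ordering. The real `drives.parse_lsblk_output` is outside this diff; a sketch consistent with those assertions, assuming `NAME SIZE TYPE` columns and taking the SMART helper as a parameter instead of the module-level `_smart_status` the tests monkeypatch, might be:

```python
def parse_lsblk_output(output, boot_disk=None, smart_status=lambda dev: "passed"):
    """Parse `lsblk` NAME SIZE TYPE lines into device dicts (sketch).

    Drops the boot disk (e.g. the live-USB stick the box was installed
    from) and sorts NVMe devices ahead of SATA ones, matching the test
    expectations above.
    """
    devices = []
    for line in output.splitlines():
        parts = line.split()
        if len(parts) < 3 or parts[2] != "disk":
            continue  # skip partitions, loop devices, malformed lines
        name, size = parts[0], parts[1]
        if boot_disk is not None and name == boot_disk:
            continue  # never offer the boot medium for storage
        devices.append({
            "name": f"/dev/{name}",
            "size": size,
            "smart": smart_status(f"/dev/{name}"),
        })
    # NVMe first, then alphabetical — mirrors the ordering the tests assert.
    devices.sort(key=lambda d: (not d["name"].startswith("/dev/nvme"), d["name"]))
    return devices
```

Sorting on a `(is_not_nvme, name)` tuple keeps the comparison stable and avoids a custom comparator.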
@@ -1,11 +1,15 @@
-"""Tests for furtka.https — fingerprint extraction + force-HTTPS toggle.
+"""Tests for furtka.https — fingerprint extraction + HTTPS toggle.
 
+Since 26.15-alpha the toggle writes/removes TWO snippets atomically:
+- The top-level HTTPS listener snippet (enables :443 + tls internal)
+- The :80-scoped redirect snippet (forces HTTP → HTTPS)
+
 The fingerprint case uses a throwaway self-signed EC cert with a known
 reference fingerprint (computed once via `openssl x509 -fingerprint
 -sha256 -noout`) so we verify the PEM → DER → SHA256 path without a
 runtime subprocess dependency. The toggle cases stub the caddy reload
-so we assert the snippet file is written / removed and that reload
-failures roll state back.
+so we assert both snippet files are written / removed together and that
+reload failures roll BOTH back.
 """
 
 import subprocess
@@ -34,6 +38,22 @@ _TEST_CERT_FP_SHA256 = (
 )
 
 
+def _paths(tmp_path):
+    """Return the five paths the toggle touches, in a dict for kwargs
+    spreading. Keeps each test's fixture boilerplate small."""
+    return {
+        "snippet_dir": tmp_path / "furtka.d",
+        "snippet": tmp_path / "furtka.d" / "redirect.caddyfile",
+        "https_snippet_dir": tmp_path / "furtka-https.d",
+        "https_snippet": tmp_path / "furtka-https.d" / "https.caddyfile",
+        "hostname_file": tmp_path / "etc_hostname",
+    }
+
+
+def _prepare_hostname(tmp_path, value="testbox"):
+    (tmp_path / "etc_hostname").write_text(f"{value}\n")
+
+
 def test_ca_fingerprint_matches_openssl(tmp_path):
     cert = tmp_path / "root.crt"
     cert.write_text(_TEST_CERT_PEM)
@@ -53,7 +73,7 @@ def test_ca_fingerprint_no_pem_block(tmp_path):
 
 
 def test_status_no_ca_no_snippet(tmp_path):
-    s = https.status(ca_path=tmp_path / "root.crt", snippet=tmp_path / "redirect.caddyfile")
+    s = https.status(ca_path=tmp_path / "root.crt", https_snippet=tmp_path / "https.caddyfile")
     assert s == {
         "ca_available": False,
         "fingerprint_sha256": None,
@@ -62,105 +82,135 @@ def test_status_no_ca_no_snippet(tmp_path):
     }
 
 
-def test_status_with_ca_and_snippet(tmp_path):
+def test_status_with_ca_and_https_snippet(tmp_path):
     ca = tmp_path / "root.crt"
     ca.write_text(_TEST_CERT_PEM)
-    snippet = tmp_path / "redirect.caddyfile"
-    snippet.write_text(https.REDIRECT_CONTENT)
-    s = https.status(ca_path=ca, snippet=snippet)
+    https_snip = tmp_path / "https.caddyfile"
+    https_snip.write_text("furtka.local, furtka {\n\ttls internal\n\timport furtka_routes\n}\n")
+    s = https.status(ca_path=ca, https_snippet=https_snip)
     assert s["ca_available"] is True
     assert s["fingerprint_sha256"] == _TEST_CERT_FP_SHA256
     assert s["force_https"] is True
 
 
-def test_set_force_enable_writes_snippet_and_reloads(tmp_path):
-    snippet_dir = tmp_path / "furtka.d"
-    snippet = snippet_dir / "redirect.caddyfile"
+def test_status_force_reflects_https_snippet_not_redirect(tmp_path):
+    """Authoritative signal for "HTTPS is on" is the listener snippet —
+    a lone redirect without a :443 listener wouldn't actually serve
+    HTTPS, so the status must NOT report it as on. Locks the 26.15 semantics."""
+    ca = tmp_path / "root.crt"
+    ca.write_text(_TEST_CERT_PEM)
+    s = https.status(ca_path=ca, https_snippet=tmp_path / "does-not-exist.caddyfile")
+    assert s["force_https"] is False
+
+
+def test_set_force_enable_writes_both_snippets_and_reloads(tmp_path):
+    _prepare_hostname(tmp_path)
+    p = _paths(tmp_path)
     calls = []
 
     def fake_reload():
         calls.append("reload")
 
-    result = https.set_force_https(
-        True, snippet_dir=snippet_dir, snippet=snippet, reload_caddy=fake_reload
-    )
+    result = https.set_force_https(True, reload_caddy=fake_reload, **p)
     assert result is True
-    assert snippet.read_text() == https.REDIRECT_CONTENT
+    assert p["snippet"].read_text() == https.REDIRECT_CONTENT
+    written = p["https_snippet"].read_text()
+    assert "testbox.local, testbox" in written
+    assert "tls internal" in written
+    assert "import furtka_routes" in written
     assert calls == ["reload"]
 
 
-def test_set_force_disable_removes_snippet(tmp_path):
-    snippet_dir = tmp_path / "furtka.d"
-    snippet_dir.mkdir()
-    snippet = snippet_dir / "redirect.caddyfile"
-    snippet.write_text(https.REDIRECT_CONTENT)
+def test_set_force_uses_fallback_hostname_when_file_missing(tmp_path):
+    # No /etc/hostname → fall back to 'furtka' so Caddy gets a parseable
+    # block instead of an empty hostname that would fail config load.
+    p = _paths(tmp_path)
+    result = https.set_force_https(True, reload_caddy=lambda: None, **p)
+    assert result is True
+    assert "furtka.local, furtka" in p["https_snippet"].read_text()
 
-    result = https.set_force_https(
-        False, snippet_dir=snippet_dir, snippet=snippet, reload_caddy=lambda: None
-    )
+
+def test_set_force_disable_removes_both_snippets(tmp_path):
+    _prepare_hostname(tmp_path)
+    p = _paths(tmp_path)
+    p["snippet_dir"].mkdir()
+    p["https_snippet_dir"].mkdir()
+    p["snippet"].write_text(https.REDIRECT_CONTENT)
+    p["https_snippet"].write_text("furtka.local { tls internal }\n")
+
+    result = https.set_force_https(False, reload_caddy=lambda: None, **p)
     assert result is False
-    assert not snippet.exists()
+    assert not p["snippet"].exists()
+    assert not p["https_snippet"].exists()
 
 
 def test_set_force_disable_is_idempotent_when_already_off(tmp_path):
-    snippet_dir = tmp_path / "furtka.d"
-    snippet = snippet_dir / "redirect.caddyfile"
-
-    result = https.set_force_https(
-        False, snippet_dir=snippet_dir, snippet=snippet, reload_caddy=lambda: None
-    )
+    p = _paths(tmp_path)
+    result = https.set_force_https(False, reload_caddy=lambda: None, **p)
    assert result is False
-    assert not snippet.exists()
+    assert not p["snippet"].exists()
+    assert not p["https_snippet"].exists()
 
 
 def test_reload_failure_rolls_back_enable(tmp_path):
-    snippet_dir = tmp_path / "furtka.d"
-    snippet = snippet_dir / "redirect.caddyfile"
+    _prepare_hostname(tmp_path)
+    p = _paths(tmp_path)
 
     def failing_reload():
         raise subprocess.CalledProcessError(1, ["systemctl"], stderr="bad config")
 
     with pytest.raises(https.HttpsError, match="caddy reload failed: bad config"):
-        https.set_force_https(
-            True, snippet_dir=snippet_dir, snippet=snippet, reload_caddy=failing_reload
-        )
-    # Rollback: since snippet didn't exist before, it must not exist after.
-    assert not snippet.exists()
+        https.set_force_https(True, reload_caddy=failing_reload, **p)
+    # Rollback: since neither snippet existed before, neither exists after.
+    assert not p["snippet"].exists()
+    assert not p["https_snippet"].exists()
 
 
 def test_reload_failure_rolls_back_disable(tmp_path):
-    snippet_dir = tmp_path / "furtka.d"
-    snippet_dir.mkdir()
-    snippet = snippet_dir / "redirect.caddyfile"
-    original = "redir https://{host}{uri} permanent\n# marker\n"
-    snippet.write_text(original)
+    _prepare_hostname(tmp_path)
+    p = _paths(tmp_path)
+    p["snippet_dir"].mkdir()
+    p["https_snippet_dir"].mkdir()
+    original_redirect = "redir https://{host}{uri} permanent\n# marker\n"
+    original_https = "# old https block\nfurtka.local { tls internal }\n"
+    p["snippet"].write_text(original_redirect)
+    p["https_snippet"].write_text(original_https)
 
     def failing_reload():
         raise subprocess.CalledProcessError(1, ["systemctl"], stderr="bad config")
 
     with pytest.raises(https.HttpsError):
-        https.set_force_https(
-            False, snippet_dir=snippet_dir, snippet=snippet, reload_caddy=failing_reload
-        )
-    # Rollback: snippet is restored to its exact prior contents.
-    assert snippet.read_text() == original
+        https.set_force_https(False, reload_caddy=failing_reload, **p)
+    # Rollback: both snippets are restored to their exact prior contents.
+    assert p["snippet"].read_text() == original_redirect
+    assert p["https_snippet"].read_text() == original_https
 
 
 def test_systemctl_missing_raises_and_rolls_back(tmp_path):
-    snippet_dir = tmp_path / "furtka.d"
-    snippet = snippet_dir / "redirect.caddyfile"
+    _prepare_hostname(tmp_path)
+    p = _paths(tmp_path)
 
     def missing_systemctl():
         raise FileNotFoundError(2, "No such file", "systemctl")
 
     with pytest.raises(https.HttpsError, match="systemctl not available"):
-        https.set_force_https(
-            True, snippet_dir=snippet_dir, snippet=snippet, reload_caddy=missing_systemctl
-        )
-    assert not snippet.exists()
+        https.set_force_https(True, reload_caddy=missing_systemctl, **p)
+    assert not p["snippet"].exists()
+    assert not p["https_snippet"].exists()
 
 
 def test_redirect_snippet_content_is_caddy_redir_directive():
     # Lock the exact directive. A regression here silently stops the
     # redirect from taking effect even though the file-swap looks fine.
     assert https.REDIRECT_CONTENT.strip() == "redir https://{host}{uri} permanent"
+
+
+def test_https_snippet_content_has_tls_internal_and_routes(tmp_path):
+    # Lock the shape of the opt-in HTTPS listener block. Caddy parses
+    # this verbatim — changing the shape without updating the test
+    # risks shipping a silently-broken Caddyfile import.
+    s = https._https_snippet_content("mybox")
+    assert "mybox.local, mybox {" in s
+    assert "\ttls internal" in s
+    assert "\timport furtka_routes" in s
+    assert s.endswith("}\n")
177 tests/test_install_runner.py Normal file
@@ -0,0 +1,177 @@
"""Tests for the background app-install runner.

Same shape as test_catalog.py / test_updater.py: the fixture reloads the
module with env-overridden paths, and dockerops calls are stubbed so
nothing touches a real daemon. Asserts that state transitions happen in
the right order and that exceptions flip the state to "error" with the
message before re-raising.
"""

from __future__ import annotations

import json
from pathlib import Path

import pytest


@pytest.fixture
def runner(tmp_path, monkeypatch):
    apps = tmp_path / "apps"
    apps.mkdir()
    monkeypatch.setenv("FURTKA_APPS_DIR", str(apps))
    monkeypatch.setenv("FURTKA_INSTALL_STATE", str(tmp_path / "install-state.json"))
    monkeypatch.setenv("FURTKA_INSTALL_LOCK", str(tmp_path / "install.lock"))

    import importlib

    from furtka import install_runner as r
    from furtka import paths as p

    importlib.reload(p)
    importlib.reload(r)
    return r


def _write_installed_app(apps_dir: Path, name: str = "fileshare"):
    app = apps_dir / name
    app.mkdir()
    manifest = {
        "name": name,
        "display_name": "Fileshare",
        "version": "0.1.0",
        "description": "Test fixture",
        "volumes": ["files"],
        "ports": [445],
        "icon": "icon.svg",
    }
    (app / "manifest.json").write_text(json.dumps(manifest))
    (app / "docker-compose.yaml").write_text("services: {}\n")
    return app


def test_write_and_read_state_round_trip(runner):
    runner.write_state("pulling_image", app="jellyfin")
    s = runner.read_state()
    assert s["stage"] == "pulling_image"
    assert s["app"] == "jellyfin"
    assert "updated_at" in s


def test_read_state_returns_empty_when_missing(runner):
    assert runner.read_state() == {}


def test_read_state_returns_empty_on_junk(runner):
    runner.state_path().parent.mkdir(parents=True, exist_ok=True)
    runner.state_path().write_text("{not json")
    assert runner.read_state() == {}


def test_acquire_lock_prevents_concurrent_runs(runner):
    held = runner.acquire_lock()
    try:
        with pytest.raises(runner.InstallRunnerError, match="in progress"):
            runner.acquire_lock()
    finally:
        held.close()


def test_run_install_happy_path(runner, monkeypatch):
    import furtka.dockerops as dockerops
    from furtka.paths import apps_dir

    _write_installed_app(apps_dir(), "fileshare")

    calls = []
    monkeypatch.setattr(dockerops, "compose_pull", lambda *a, **k: calls.append(("pull", a)))
    monkeypatch.setattr(dockerops, "ensure_volume", lambda name: calls.append(("vol", name)))
    monkeypatch.setattr(dockerops, "compose_up", lambda *a, **k: calls.append(("up", a)))

    runner.run_install("fileshare")

    # Ordering: pull first, then volumes, then up.
    assert [c[0] for c in calls] == ["pull", "vol", "up"]
    # Exactly the namespaced volume name got created.
    assert calls[1] == ("vol", "furtka_fileshare_files")
    # Final state is "done" with the manifest version.
    s = runner.read_state()
    assert s["stage"] == "done"
    assert s["app"] == "fileshare"
    assert s["version"] == "0.1.0"


def test_run_install_writes_error_on_pull_failure(runner, monkeypatch):
    import furtka.dockerops as dockerops
    from furtka.paths import apps_dir

    _write_installed_app(apps_dir(), "fileshare")

    def boom(*a, **k):
        raise dockerops.DockerError("pull failed: registry unreachable")

    monkeypatch.setattr(dockerops, "compose_pull", boom)
    monkeypatch.setattr(dockerops, "ensure_volume", lambda name: None)
    monkeypatch.setattr(dockerops, "compose_up", lambda *a, **k: None)

    with pytest.raises(dockerops.DockerError):
        runner.run_install("fileshare")

    s = runner.read_state()
    assert s["stage"] == "error"
    assert s["app"] == "fileshare"
    assert "registry unreachable" in s["error"]


def test_run_install_writes_error_on_up_failure(runner, monkeypatch):
    import furtka.dockerops as dockerops
    from furtka.paths import apps_dir

    _write_installed_app(apps_dir(), "fileshare")

    monkeypatch.setattr(dockerops, "compose_pull", lambda *a, **k: None)
    monkeypatch.setattr(dockerops, "ensure_volume", lambda name: None)

    def boom(*a, **k):
        raise dockerops.DockerError("compose up: container refused to start")

    monkeypatch.setattr(dockerops, "compose_up", boom)

    with pytest.raises(dockerops.DockerError):
        runner.run_install("fileshare")

    s = runner.read_state()
    assert s["stage"] == "error"
    assert "refused to start" in s["error"]


def test_run_install_releases_lock_after_done(runner, monkeypatch):
    import furtka.dockerops as dockerops
    from furtka.paths import apps_dir

    _write_installed_app(apps_dir(), "fileshare")
    monkeypatch.setattr(dockerops, "compose_pull", lambda *a, **k: None)
    monkeypatch.setattr(dockerops, "ensure_volume", lambda name: None)
    monkeypatch.setattr(dockerops, "compose_up", lambda *a, **k: None)

    runner.run_install("fileshare")

    # Lock released — a fresh acquire must succeed.
    fh = runner.acquire_lock()
    fh.close()


def test_run_install_releases_lock_after_error(runner, monkeypatch):
    import furtka.dockerops as dockerops
    from furtka.paths import apps_dir

    _write_installed_app(apps_dir(), "fileshare")
    monkeypatch.setattr(
        dockerops, "compose_pull", lambda *a, **k: (_ for _ in ()).throw(dockerops.DockerError("x"))
    )

    with pytest.raises(dockerops.DockerError):
        runner.run_install("fileshare")

    fh = runner.acquire_lock()
    fh.close()
@@ -267,3 +267,91 @@ def test_read_env_values_roundtrip(tmp_path, fake_dirs):
     write_env(p, {"A": "plain", "B": "has space", "C": 'has "quote"', "D": ""})
     values = read_env_values(p)
     assert values == {"A": "plain", "B": "has space", "C": 'has "quote"', "D": ""}
+
+
+# --- path-type settings ------------------------------------------------------
+
+PATH_MANIFEST = dict(
+    VALID_MANIFEST,
+    name="jellyfin",
+    settings=[
+        {
+            "name": "MEDIA_PATH",
+            "label": "Medienordner",
+            "type": "path",
+            "required": True,
+        }
+    ],
+)
+
+OPTIONAL_PATH_MANIFEST = dict(
+    VALID_MANIFEST,
+    name="jellyfin",
+    settings=[{"name": "OPTIONAL_PATH", "label": "Optional", "type": "path", "required": False}],
+)
+
+
+def test_install_with_valid_path_succeeds(tmp_path, fake_dirs):
+    media = tmp_path / "media"
+    media.mkdir()
+    src = _write_app_source(tmp_path, "jellyfin", PATH_MANIFEST)
+    installer.install_from(src, settings={"MEDIA_PATH": str(media)})
+    target = apps_dir() / "jellyfin"
+    assert f"MEDIA_PATH={media}" in (target / ".env").read_text()
+
+
+def test_install_rejects_nonexistent_path(tmp_path, fake_dirs):
+    src = _write_app_source(tmp_path, "jellyfin", PATH_MANIFEST)
+    with pytest.raises(installer.InstallError, match="does not exist"):
+        installer.install_from(src, settings={"MEDIA_PATH": str(tmp_path / "ghost")})
+
+
+def test_install_rejects_path_that_is_a_file(tmp_path, fake_dirs):
+    f = tmp_path / "not-a-dir"
+    f.write_text("hi")
+    src = _write_app_source(tmp_path, "jellyfin", PATH_MANIFEST)
+    with pytest.raises(installer.InstallError, match="is not a directory"):
+        installer.install_from(src, settings={"MEDIA_PATH": str(f)})
+
+
+def test_install_rejects_relative_path(tmp_path, fake_dirs):
+    src = _write_app_source(tmp_path, "jellyfin", PATH_MANIFEST)
+    with pytest.raises(installer.InstallError, match="absolute path"):
+        installer.install_from(src, settings={"MEDIA_PATH": "media"})
+
+
+def test_install_rejects_system_path(tmp_path, fake_dirs):
+    src = _write_app_source(tmp_path, "jellyfin", PATH_MANIFEST)
+    with pytest.raises(installer.InstallError, match="system path"):
+        installer.install_from(src, settings={"MEDIA_PATH": "/etc"})
+
+
+def test_install_rejects_root_filesystem(tmp_path, fake_dirs):
+    src = _write_app_source(tmp_path, "jellyfin", PATH_MANIFEST)
+    with pytest.raises(installer.InstallError, match="system path"):
+        installer.install_from(src, settings={"MEDIA_PATH": "/"})
+
+
+def test_install_rejects_deny_list_via_traversal(tmp_path, fake_dirs):
+    # /mnt/../etc resolves to /etc — must be caught after Path.resolve().
+    src = _write_app_source(tmp_path, "jellyfin", PATH_MANIFEST)
+    with pytest.raises(installer.InstallError, match="system path"):
+        installer.install_from(src, settings={"MEDIA_PATH": "/mnt/../etc"})
+
+
+def test_install_accepts_empty_optional_path(tmp_path, fake_dirs):
+    src = _write_app_source(tmp_path, "jellyfin", OPTIONAL_PATH_MANIFEST)
+    installer.install_from(src, settings={"OPTIONAL_PATH": ""})
+    target = apps_dir() / "jellyfin"
+    assert (target / ".env").exists()
+
+
+def test_update_env_rejects_invalid_path(tmp_path, fake_dirs):
+    # First install with a valid path.
+    media = tmp_path / "media"
+    media.mkdir()
+    src = _write_app_source(tmp_path, "jellyfin", PATH_MANIFEST)
+    installer.install_from(src, settings={"MEDIA_PATH": str(media)})
+    # Then try to update to a bad path.
+    with pytest.raises(installer.InstallError, match="does not exist"):
+        installer.update_env("jellyfin", {"MEDIA_PATH": str(tmp_path / "ghost")})
@@ -95,6 +95,21 @@ def test_settings_optional_default_empty(tmp_path):
     m = load_manifest(path)
     assert m.settings == ()
     assert m.description_long == ""
+    assert m.open_url == ""
+
+
+def test_open_url_stored_when_present(tmp_path):
+    payload = dict(VALID_MANIFEST, open_url="smb://{host}/files")
+    path = _write_app(tmp_path, "fileshare", payload)
+    m = load_manifest(path)
+    assert m.open_url == "smb://{host}/files"
+
+
+def test_open_url_non_string_rejected(tmp_path):
+    payload = dict(VALID_MANIFEST, open_url=42)
+    path = _write_app(tmp_path, "fileshare", payload)
+    with pytest.raises(ManifestError, match="open_url"):
+        load_manifest(path)
 
 
 def test_settings_parsed(tmp_path):
@@ -140,6 +155,27 @@ def test_settings_reject_unknown_type(tmp_path):
         load_manifest(path)
 
 
+def test_settings_accept_path_type(tmp_path):
+    payload = dict(
+        VALID_MANIFEST,
+        settings=[
+            {
+                "name": "MEDIA_PATH",
+                "label": "Medienordner",
+                "description": "Absoluter Pfad zu deinen Medien",
+                "type": "path",
+                "required": True,
+            }
+        ],
+    )
+    path = _write_app(tmp_path, "fileshare", payload)
+    m = load_manifest(path)
+    assert len(m.settings) == 1
+    assert m.settings[0].name == "MEDIA_PATH"
+    assert m.settings[0].type == "path"
+    assert m.settings[0].required is True
+
+
 def test_settings_reject_duplicate_name(tmp_path):
     bad = dict(
         VALID_MANIFEST,
74 tests/test_passwd.py Normal file
@ -0,0 +1,74 @@
|
|||
"""Tests for furtka.passwd — stdlib-only password hashing.

The primary contract: hash/verify roundtrips cleanly, AND the verifier
accepts the werkzeug hash format that 26.11 / 26.12 boxes wrote to
``users.json``. Losing that backward compat would lock out existing
admins after a 26.13+ upgrade.
"""

from __future__ import annotations

from furtka import passwd


def test_hash_roundtrip():
    h = passwd.hash_password("hunter2")
    assert passwd.verify_password("hunter2", h)
    assert not passwd.verify_password("wrong", h)


def test_hash_is_salted():
    # Two separate hashes of the same password must diverge.
    a = passwd.hash_password("same-pw")
    b = passwd.hash_password("same-pw")
    assert a != b
    assert passwd.verify_password("same-pw", a)
    assert passwd.verify_password("same-pw", b)


def test_generated_hash_format():
    # Shape is pbkdf2:<hash>:<iter>$<salt>$<hex>
    h = passwd.hash_password("x")
    parts = h.split("$", 2)
    assert len(parts) == 3
    method, salt, digest = parts
    assert method.startswith("pbkdf2:sha256:")
    assert salt
    # digest is hex of pbkdf2_hmac sha256 → 64 hex chars
    assert len(digest) == 64
    assert all(c in "0123456789abcdef" for c in digest)


def test_verify_werkzeug_scrypt_hash():
    """Known werkzeug scrypt hash generated by 26.11 / 26.12 boxes.

    Captured live off a .196 test VM after its auth bootstrap:
    username=daniel, password=test-admin-pw1
    Hash format: scrypt:32768:8:1$<salt>$<hex>
    If this regresses, every existing box that upgraded via 26.11 and
    set a password gets locked out on the next upgrade.
    """
    known = (
        "scrypt:32768:8:1$yWZUqJodowt9ieI1$"
        "2d1059b3564da7492b4aa3c2be7fff6fef06085e5e1bfd52e897948c58246b7a"
        "9603400355b7264f61c4436eba7bf8c947adec3d7a76be03b50efb4227e15a80"
    )
    assert passwd.verify_password("test-admin-pw1", known)
    assert not passwd.verify_password("wrong-password", known)


def test_verify_rejects_malformed_hashes():
    # Empty / missing delimiters / unknown method / bad int — all False.
    assert not passwd.verify_password("x", "")
    assert not passwd.verify_password("x", "nothingspecial")
    assert not passwd.verify_password("x", "pbkdf2:sha256:600000")  # no $salt$digest
    assert not passwd.verify_password("x", "pbkdf2$salt$digest")  # missing hash + iter
    assert not passwd.verify_password("x", "bcrypt:12$salt$digest")  # unsupported algo
    assert not passwd.verify_password("x", "pbkdf2:sha256:abc$salt$digest")  # bad iter int


def test_verify_rejects_nonstring_inputs():
    # Defensive: users.json can be corrupted or have nulls.
    assert not passwd.verify_password(None, "pbkdf2:sha256:1000$salt$digest")  # type: ignore[arg-type]
    assert not passwd.verify_password("x", None)  # type: ignore[arg-type]
    assert not passwd.verify_password("x", 12345)  # type: ignore[arg-type]
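The tests pin the hash format but not the implementation. A stdlib-only sketch that satisfies this contract could look like the following (hypothetical — the real `furtka.passwd` may differ in iteration count, salt length, and error handling):

```python
import hashlib
import hmac
import secrets

ITERATIONS = 600_000  # assumed default; the tests only require >0


def hash_password(password: str) -> str:
    # Mirrors werkzeug's shape: pbkdf2:sha256:<iter>$<salt>$<hex>
    salt = secrets.token_urlsafe(12)
    digest = hashlib.pbkdf2_hmac(
        "sha256", password.encode(), salt.encode(), ITERATIONS
    ).hex()
    return f"pbkdf2:sha256:{ITERATIONS}${salt}${digest}"


def verify_password(password, stored) -> bool:
    # Defensive against corrupted users.json: non-strings are never valid.
    if not isinstance(password, str) or not isinstance(stored, str):
        return False
    parts = stored.split("$", 2)
    if len(parts) != 3:
        return False
    method, salt, digest = parts
    try:
        if method.startswith("pbkdf2:sha256:"):
            iterations = int(method.rsplit(":", 1)[1])
            candidate = hashlib.pbkdf2_hmac(
                "sha256", password.encode(), salt.encode(), iterations
            ).hex()
        elif method.startswith("scrypt:"):
            # werkzeug backward compat: scrypt:<n>:<r>:<p>$<salt>$<hex>
            n, r, p = (int(x) for x in method.split(":")[1:4])
            candidate = hashlib.scrypt(
                password.encode(), salt=salt.encode(),
                n=n, r=r, p=p, maxmem=132 * n * r * p,
            ).hex()
        else:
            return False  # bcrypt etc. — unsupported
    except (ValueError, IndexError):
        return False  # bad iteration int, malformed method string
    return hmac.compare_digest(candidate, digest)
```

The scrypt branch passes `maxmem` explicitly because OpenSSL's default cap (~32 MiB) is exactly what `n=32768, r=8` needs, so werkzeug-style parameters would otherwise raise.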
@ -224,6 +224,76 @@ def test_refresh_caddyfile_substitutes_hostname_placeholder(updater, tmp_path):
     assert updater._refresh_caddyfile(src) is False
 
 
+def test_health_check_treats_4xx_as_healthy(updater, monkeypatch):
+    """26.11+ auth makes /api/apps return 401 on unauth requests. If the
+    health check treated that as "down", every pre-auth → auth upgrade
+    auto-rolls back. The server responding at all is enough signal for
+    the health check."""
+    import urllib.error
+
+    calls = {"n": 0}
+
+    def raising_401(url, timeout):
+        calls["n"] += 1
+        raise urllib.error.HTTPError(url, 401, "Unauthorized", {}, None)
+
+    monkeypatch.setattr("urllib.request.urlopen", raising_401)
+    assert updater._health_check("http://127.0.0.1:7000/api/apps", deadline_s=2.0) is True
+    # One call was enough — early exit on 4xx, no retry loop.
+    assert calls["n"] == 1
+
+
+def test_health_check_rejects_5xx(updater, monkeypatch):
+    """500s mean the server is up but broken — that's NOT healthy.
+    Distinguishes auth refusals (4xx = healthy) from real runtime
+    errors (5xx = unhealthy, roll back)."""
+    import urllib.error
+
+    def raising_500(url, timeout):
+        raise urllib.error.HTTPError(url, 500, "Internal Server Error", {}, None)
+
+    monkeypatch.setattr("urllib.request.urlopen", raising_500)
+    assert updater._health_check("http://127.0.0.1:7000/api/apps", deadline_s=1.5) is False
+
+
+def test_health_check_retries_on_connection_refused(updater, monkeypatch):
+    """While furtka-api is still starting, urlopen raises URLError.
+    The loop must keep polling until the server comes up or the
+    deadline passes."""
+    import urllib.error
+
+    calls = {"n": 0}
+
+    def flaky(url, timeout):
+        calls["n"] += 1
+        if calls["n"] < 3:
+            raise urllib.error.URLError("connection refused")
+
+        class _Resp:
+            status = 200
+
+            def __enter__(self):
+                return self
+
+            def __exit__(self, *a):
+                return False
+
+        return _Resp()
+
+    monkeypatch.setattr("urllib.request.urlopen", flaky)
+    assert updater._health_check("http://127.0.0.1:7000/api/apps", deadline_s=10.0) is True
+    assert calls["n"] == 3
+
+
 def test_current_hostname_falls_back_when_file_missing(updater, monkeypatch, tmp_path):
     monkeypatch.setenv("FURTKA_HOSTNAME_FILE", str(tmp_path / "missing"))
     import importlib
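The three tests above pin down a polling loop of roughly this shape (a sketch under assumed names and poll interval, not the real `_health_check`):

```python
import time
import urllib.error
import urllib.request


def health_check(url: str, deadline_s: float, poll_s: float = 0.25) -> bool:
    """Poll `url` until the deadline. Any HTTP status below 500 counts as
    healthy (4xx just means auth is on); 5xx is up-but-broken; connection
    errors mean "still starting", so keep retrying."""
    deadline = time.monotonic() + deadline_s
    while time.monotonic() < deadline:
        try:
            with urllib.request.urlopen(url, timeout=2) as resp:
                return resp.status < 500
        except urllib.error.HTTPError as e:
            # An HTTP status proves the server answered — decide now,
            # no retry loop for 4xx/5xx.
            return e.code < 500
        except urllib.error.URLError:
            time.sleep(poll_s)  # socket never connected — poll again
    return False
```

Catching `HTTPError` separately from `URLError` is what makes the 4xx/5xx distinction possible: an HTTP status proves the server answered, while a bare `URLError` means the connection never happened.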
@ -54,7 +54,7 @@ def install_cmds(tmp_path, monkeypatch):
     fake = tmp_path / "payload.tar.gz"
     fake.write_bytes(b"not a real tarball")
     monkeypatch.setattr(app, "RESOURCE_MANAGER_PAYLOAD", fake)
-    return app._post_install_commands("testhost")
+    return app._post_install_commands("testhost", "daniel", "test-admin-pw")
 
 
 @pytest.mark.parametrize("target,asset_relpath", ASSET_TARGETS)
@ -122,19 +122,39 @@ def test_caddyfile_asset_serves_from_current():
     assert "root * /var/lib/furtka" in caddy
 
 
-def test_caddyfile_serves_both_http_and_https():
-    # :80 stays so users who haven't installed the CA still reach the box;
-    # HTTPS is served via a named-hostname block so Caddy's `tls internal`
-    # has something to issue a leaf cert for. A bare `:443 { tls internal }`
-    # never triggers issuance — that was the 26.4-alpha regression.
-    caddy = (ASSETS / "Caddyfile").read_text()
+def _strip_caddy_comments(text: str) -> str:
+    """Remove # comments + blank lines so string-match assertions can
+    target actual Caddyfile directives, not the leading doc block.
+    Comment intro is ``#`` at start-of-line or preceded by whitespace."""
+    out = []
+    for line in text.splitlines():
+        stripped = line.split("#", 1)[0].rstrip()
+        if stripped:
+            out.append(stripped)
+    return "\n".join(out)
+
+
+def test_caddyfile_serves_http_by_default_https_opt_in():
+    # 26.15-alpha: HTTPS is opt-in. The default Caddyfile has a :80 block
+    # and imports /etc/caddy/furtka-https.d/*.caddyfile at top level —
+    # the /settings HTTPS toggle drops the hostname+tls-internal block
+    # into that dir when the user explicitly enables HTTPS. The default
+    # Caddyfile therefore contains no `tls internal` directive anywhere;
+    # if a future refactor puts it back, every fresh install regresses
+    # to the 26.14-era BAD_SIGNATURE trap. Strip comments first because
+    # the doc block DOES mention `tls internal` in prose.
+    caddy_full = (ASSETS / "Caddyfile").read_text()
+    caddy = _strip_caddy_comments(caddy_full)
     assert ":80 {" in caddy
-    assert "__FURTKA_HOSTNAME__.local, __FURTKA_HOSTNAME__ {" in caddy
-    assert "tls internal" in caddy
-    # Shared routes live in a named snippet to avoid drift between the two
-    # listeners — both site blocks must import it.
+    assert "tls internal" not in caddy
+    assert "__FURTKA_HOSTNAME__" not in caddy
+    assert "import /etc/caddy/furtka-https.d/*.caddyfile" in caddy
+    # Shared routes still live in a named snippet so the HTTPS toggle's
+    # snippet can import the same routes without duplication.
     assert "(furtka_routes)" in caddy
-    assert caddy.count("import furtka_routes") == 2
+    # Default Caddyfile imports it once (inside :80). The HTTPS snippet,
+    # when written by the toggle, imports it a second time.
+    assert caddy.count("import furtka_routes") == 1
 
 
 def test_caddyfile_disables_caddy_auto_redirects():
@ -167,16 +187,28 @@ def test_caddyfile_exposes_root_ca_download():
     assert "attachment; filename=furtka-local-rootCA.crt" in caddy
 
 
-def test_post_install_substitutes_hostname_in_caddyfile(install_cmds):
-    # Fresh installs: the placeholder the asset ships with must be replaced
-    # with the hostname the user picked in the form. The `testhost` value
-    # comes from the install_cmds fixture. Without substitution Caddy's
-    # `tls internal` never issues a leaf cert for the real hostname.
+def test_post_install_writes_caddyfile_without_hostname_placeholder(install_cmds):
+    # 26.15-alpha: the shipped Caddyfile no longer carries the
+    # __FURTKA_HOSTNAME__ marker — HTTPS + hostname now live in the
+    # opt-in snippet written by set_force_https(), not in the base
+    # Caddyfile. Verify the post-install writes the file as-is (no
+    # substitution expected) and it has the opt-in import glob.
     caddyfile_cmd = next((c for c in install_cmds if " > /etc/caddy/Caddyfile" in c), None)
     assert caddyfile_cmd is not None
-    written = _extract_written_content(caddyfile_cmd, "/etc/caddy/Caddyfile")
+    written_full = _extract_written_content(caddyfile_cmd, "/etc/caddy/Caddyfile")
+    written = _strip_caddy_comments(written_full)
     assert "__FURTKA_HOSTNAME__" not in written
-    assert "testhost.local, testhost {" in written
+    assert "import /etc/caddy/furtka-https.d/*.caddyfile" in written
+    assert "tls internal" not in written
+
+
+def test_post_install_creates_https_snippet_dir(install_cmds):
+    # The top-level HTTPS opt-in snippet dir must exist before Caddy's
+    # first start — its glob import tolerates an empty directory, but
+    # not a missing one on older Caddy builds. Parallel guarantee to
+    # test_post_install_creates_furtka_d_snippet_dir below.
+    matching = [c for c in install_cmds if "/etc/caddy/furtka-https.d" in c and "install -d" in c]
+    assert matching, "no install -d command creates /etc/caddy/furtka-https.d"
 
 
 def test_post_install_creates_furtka_d_snippet_dir(install_cmds):
@ -202,3 +234,28 @@ def test_read_asset_raises_for_missing_file():
 
 def test_assets_dir_resolves_to_repo_tree():
     assert app._ASSETS_DIR == ASSETS
+
+
+def test_post_install_writes_users_json_with_hashed_password(install_cmds):
+    """The Furtka-admin users.json is created during the chroot post-install.
+
+    Without this, a fresh-install box lands at /login in first-run setup
+    mode and the user has to go through the browser to set a password —
+    which defeats the "step-1 password works for everything" design. Also
+    check that the file is chmod 0600 (the PBKDF2 hash is a secret even
+    if it's slow to crack).
+    """
+    import json as _json
+
+    from werkzeug.security import check_password_hash
+
+    users_cmd = next((c for c in install_cmds if " > /var/lib/furtka/users.json" in c), None)
+    assert users_cmd is not None, "no command writes /var/lib/furtka/users.json"
+    assert "chmod 600" in users_cmd, "users.json must be chmod 0600"
+    body = _extract_written_content(users_cmd, "/var/lib/furtka/users.json")
+    parsed = _json.loads(body)
+    assert "admin" in parsed
+    assert parsed["admin"]["username"] == "daniel"  # matches fixture
+    # Hash is a real werkzeug hash, not the plaintext password.
+    assert parsed["admin"]["hash"] != "test-admin-pw"
+    assert check_password_hash(parsed["admin"]["hash"], "test-admin-pw")
@ -8,6 +8,7 @@ import os
 import re
 import subprocess
 import sys
+from datetime import UTC
 from pathlib import Path
 
 from drives import list_scored_devices
@ -348,7 +349,35 @@ def _furtka_json_cmd(hostname):
     )
 
 
-def _post_install_commands(hostname):
+def _users_json_cmd(username, password):
+    """Write /var/lib/furtka/users.json with the admin account hashed.
+
+    The core furtka-api reads this file on every login attempt; the
+    auth.py module treats `admin.username` + `admin.hash` as the only
+    credential. Hashing happens here in the webinstaller (werkzeug is a
+    flask transitive dep so it's already installed in this environment)
+    — the chroot doesn't need pip. Mode 0600 so nobody but root on the
+    installed box can read the PBKDF2 hash.
+    """
+    from datetime import datetime
+
+    from werkzeug.security import generate_password_hash
+
+    users = {
+        "admin": {
+            "username": username,
+            "hash": generate_password_hash(password),
+            "created_at": datetime.now(UTC).isoformat(timespec="seconds"),
+        }
+    }
+    return _write_file_cmd(
+        "/var/lib/furtka/users.json",
+        json.dumps(users, indent=2) + "\n",
+        mode="600",
+    )
+
+
+def _post_install_commands(hostname, admin_username, admin_password):
     # nss-mdns: splice `mdns_minimal [NOTFOUND=return]` before `resolve` on
     # the hosts line so `*.local` works from the installed system too. Guarded
     # so a re-run (or a future Arch default that already ships mdns) is a
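`_write_file_cmd` itself is outside this diff. For orientation, a plausible shape for it (hypothetical name and quoting; the real helper may differ) that yields the `" > <path>"` and `"chmod 600"` substrings the tests match on:

```python
import shlex


def write_file_cmd(path: str, content: str, mode: str = "644") -> str:
    # One chroot-safe shell line: write the literal content, then chmod.
    # shlex.quote preserves embedded quotes, $, and newlines through the
    # archinstall custom_commands shell.
    quoted = shlex.quote(content)
    return f"printf '%s' {quoted} > {path} && chmod {mode} {path}"
```

Anything JSON-shaped (like the users.json body above) survives this quoting byte-for-byte, which is what lets the tests parse the written content back out of the command string.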
@ -366,6 +395,14 @@ def _post_install_commands(hostname):
         # an empty dir but not a missing one on every Caddy version, so we
         # create it up front and stay on the safe side.
         "install -d -m 0755 -o root -g root /etc/caddy/furtka.d",
+        # Parallel dir for the top-level HTTPS-listener snippet, written
+        # by /api/furtka/https/force (26.15-alpha+) when the user opts
+        # into HTTPS. Empty by default so fresh installs never generate
+        # a tls internal cert — that was the 26.14 regression where
+        # Firefox hit unbypassable SEC_ERROR_BAD_SIGNATURE because
+        # Caddy's fixed intermediate-CN clashed with any cached trust
+        # from a previously-reinstalled Furtka box.
+        "install -d -m 0755 -o root -g root /etc/caddy/furtka-https.d",
         # The Caddyfile lives at /etc/caddy/Caddyfile per Caddy's convention
         # (systemd unit points there). Content comes from the shipped asset,
         # which we copy in at install time so updates that change routing
@ -389,6 +426,12 @@ def _post_install_commands(hostname):
         # furtka.json depends on /opt/furtka/current/VERSION, so it has to
         # run after the resource-manager commands.
         _furtka_json_cmd(hostname),
+        # Admin account for the Furtka web UI. Hashed here (werkzeug is
+        # already in scope for the Flask webinstaller) and materialised
+        # into /var/lib/furtka/users.json at mode 0600 on the target
+        # partition — the installed core's auth.py picks it up on first
+        # login.
+        _users_json_cmd(admin_username, admin_password),
     ]
@ -447,7 +490,7 @@ def build_archinstall_config(s):
         # page, status timer, and welcome banner into place.
         "custom_commands": [
             f"gpasswd -a {s['username']} docker",
-            *_post_install_commands(s["hostname"]),
+            *_post_install_commands(s["hostname"], s["username"], s["password"]),
         ],
         "network_config": {"type": "iso"},
         "ssh": True,
@ -1,6 +1,41 @@
 import subprocess
 
 
+def _boot_disk_name():
+    """Return the parent disk name of the live-ISO boot media (e.g. "sdb"), or None.
+
+    On a normal box `/run/archiso/bootmnt` does not exist and we return None,
+    leaving the device list untouched. On bare metal booted from USB this is
+    the stick we booted from — we want to filter it out so the user can't
+    accidentally pick it as the install target.
+    """
+    try:
+        result = subprocess.run(
+            ["findmnt", "-no", "SOURCE", "/run/archiso/bootmnt"],
+            capture_output=True,
+            text=True,
+        )
+    except FileNotFoundError:
+        return None
+    if result.returncode != 0:
+        return None
+    partition = result.stdout.strip()
+    if not partition:
+        return None
+    try:
+        parent = subprocess.run(
+            ["lsblk", "-no", "PKNAME", partition],
+            capture_output=True,
+            text=True,
+        )
+    except FileNotFoundError:
+        return None
+    if parent.returncode != 0:
+        return None
+    name = parent.stdout.strip().splitlines()[0] if parent.stdout.strip() else ""
+    return name or None
+
+
 def _smart_status(device):
     try:
         result = subprocess.run(

@ -75,11 +110,14 @@ def score_device(device, size_gb):
     return get_drive_type_score(device) + get_drive_health(device) + get_size_score(size_gb)
 
 
-def parse_lsblk_output(output):
+def parse_lsblk_output(output, boot_disk=None):
     """Parse `lsblk -dn -o NAME,SIZE,TYPE` output into scored device dicts.
 
     Keeps only TYPE=disk so the live ISO's own squashfs (loop) and the boot
-    CD-ROM (rom) don't show up as install targets.
+    CD-ROM (rom) don't show up as install targets. If `boot_disk` is given,
+    that disk is also dropped — it's the USB stick the live ISO booted from
+    on bare metal, where it appears as TYPE=disk and would otherwise be a
+    valid-looking install target.
     """
     devices = []
     for line in output.strip().split("\n"):

@ -91,6 +129,8 @@ def parse_lsblk_output(output):
         name, size, dev_type = parts[0], parts[1], parts[2]
         if dev_type != "disk":
             continue
+        if boot_disk and name == boot_disk:
+            continue
         device = f"/dev/{name}"
         size_gb = parse_size_gb(size)
         status = _smart_status(device)

@ -120,7 +160,7 @@ def list_scored_devices():
     except subprocess.CalledProcessError as e:
         print(f"Error listing devices: {e}")
         return []
-    return parse_lsblk_output(result.stdout)
+    return parse_lsblk_output(result.stdout, boot_disk=_boot_disk_name())
 
 
 def main():
@ -6,6 +6,8 @@
 {% block content %}
 <h1>Rebooting…</h1>
 <p class="lede">The machine is restarting. This page will stop responding in a moment — that's expected.</p>
+<p><strong>Remove the USB stick now</strong> — if it's still plugged in when the machine reboots, some BIOS setups will boot into this installer again instead of starting Furtka.</p>
+<p class="muted">If the installer does come back anyway, your BIOS is set to boot from USB before the disk. Press the one-time boot menu key at startup (often <kbd>F11</kbd>, <kbd>F12</kbd>, or <kbd>Esc</kbd> — it flashes briefly on screen) and pick the internal disk, or change the boot order in BIOS settings.</p>
 <p>When the machine comes back up (~1 minute), open Furtka in your browser:</p>
 <p><a href="http://{{ hostname }}.local" class="btn btn-primary">http://{{ hostname }}.local</a></p>
 <p class="muted">If that doesn't resolve, your network may not support mDNS — use the IP address shown on the machine's console instead.</p>
@ -6,6 +6,10 @@
   --accent: #c03a28;
   --accent-hover: #a0301f;
   --border: #e4e3dc;
+  --accent-glow: rgba(192, 58, 40, 0.2);
+  --card-bg: rgba(247, 246, 243, 0.72);
+  --card-border: var(--border);
+  --scene-opacity: 0.18;
   --font-sans:
     -apple-system, BlinkMacSystemFont, "Segoe UI", Roboto, "Helvetica Neue",
     Arial, "Noto Sans", sans-serif;

@ -23,6 +27,10 @@
     --accent: #ff6b56;
     --accent-hover: #ff8b78;
     --border: #232326;
+    --accent-glow: rgba(255, 107, 86, 0.4);
+    --card-bg: rgba(23, 23, 26, 0.65);
+    --card-border: #26262b;
+    --scene-opacity: 0.34;
   }
 }

@ -43,6 +51,25 @@ body {
   flex-direction: column;
   min-height: 100vh;
   text-rendering: optimizeLegibility;
+  isolation: isolate;
 }
 
+/* ── Animated background canvas (home only) ─────────────── */
+
+.scene-canvas {
+  position: fixed;
+  inset: 0;
+  width: 100vw;
+  height: 100vh;
+  z-index: 0;
+  pointer-events: none;
+}
+
+.site-header,
+main.container,
+.site-footer {
+  position: relative;
+  z-index: 1;
+}
+
 .container {

@ -171,11 +198,36 @@ main.container {
 .home h1 {
   font-family: var(--font-sans);
   font-weight: 800;
-  font-size: clamp(3.25rem, 10vw, 6.5rem);
-  line-height: 0.95;
-  letter-spacing: -0.035em;
+  font-size: clamp(3.5rem, 14vw, 11rem);
+  line-height: 0.9;
+  letter-spacing: -0.04em;
   margin: 0 0 1.5rem;
   color: var(--fg);
+  background-image: linear-gradient(180deg, var(--fg) 0%, var(--accent) 110%);
+  -webkit-background-clip: text;
+  background-clip: text;
+  -webkit-text-fill-color: transparent;
 }
 
+@media (prefers-color-scheme: dark) {
+  .home h1 {
+    filter: drop-shadow(0 0 28px var(--accent-glow));
+  }
+  .home .lede {
+    color: #c8c8cc;
+  }
+}
+
+.hero {
+  min-height: 78vh;
+  display: flex;
+  flex-direction: column;
+  justify-content: center;
+  padding-block: 4.5rem 3rem;
+}
+
+.home .lede {
+  font-weight: 450;
+}
+
 .home .lede {

@ -258,3 +310,132 @@ main.container {
   outline-offset: 3px;
   border-radius: 2px;
 }
+
+/* ── Primary CTA ─────────────────────────────────────────── */
+
+.cta-row { margin-top: 2.5rem; }
+
+.cta {
+  display: inline-flex;
+  align-items: center;
+  gap: 0.55rem;
+  padding: 1.1rem 2rem;
+  font-family: var(--font-sans);
+  font-weight: 600;
+  font-size: 1.02rem;
+  letter-spacing: 0.005em;
+  text-decoration: none;
+  border-radius: 0.7rem;
+  transition: transform 180ms, box-shadow 180ms, background 180ms, color 180ms;
+}
+.cta--primary {
+  background: linear-gradient(135deg, var(--accent), var(--accent-hover));
+  color: #fff;
+  box-shadow: 0 10px 36px var(--accent-glow),
+    0 0 0 1px color-mix(in srgb, var(--accent) 40%, transparent);
+  animation: cta-pulse 2.8s ease-in-out infinite;
+}
+.cta--primary:hover {
+  transform: translateY(-3px);
+  box-shadow: 0 18px 52px var(--accent-glow),
+    0 0 0 1px var(--accent);
+  animation-play-state: paused;
+}
+.cta--primary:active { transform: translateY(-1px); }
+.cta--primary span { transition: transform 180ms; }
+.cta--primary:hover span { transform: translateX(4px); }
+
+@keyframes cta-pulse {
+  0%, 100% { box-shadow: 0 10px 36px var(--accent-glow),
+    0 0 0 1px color-mix(in srgb, var(--accent) 40%, transparent); }
+  50% { box-shadow: 0 14px 48px var(--accent-glow),
+    0 0 0 1px color-mix(in srgb, var(--accent) 70%, transparent); }
+}
+
+@media (prefers-reduced-motion: reduce) {
+  .cta--primary { animation: none; }
+}
+
+/* ── Intro paragraph (home, between hero and feature grids) ─ */
+
+.intro {
+  max-width: 38rem;
+  margin: 0 0 4rem;
+  font-size: 1.15rem;
+  line-height: 1.55;
+  color: var(--fg);
+}
+.intro p { margin: 0 0 1rem; }
+.intro p:last-child { margin: 0; }
+.intro strong { font-weight: 600; }
+
+/* ── Feature sections (home) ─────────────────────────────── */
+
+.feature-section { margin-block: 4rem; }
+
+.section-eyebrow {
+  font-family: var(--font-sans);
+  font-weight: 500;
+  font-size: 0.72rem;
+  letter-spacing: 0.14em;
+  text-transform: uppercase;
+  color: var(--fg-muted);
+  margin: 0 0 1.25rem;
+}
+
+.feature-grid {
+  display: grid;
+  grid-template-columns: repeat(auto-fit, minmax(17rem, 1fr));
+  gap: 1rem;
+}
+
+.feature-card {
+  background: var(--card-bg);
+  border: 1px solid var(--card-border);
+  border-radius: 1rem;
+  padding: 1.5rem 1.5rem 1.4rem;
+  -webkit-backdrop-filter: blur(10px);
+  backdrop-filter: blur(10px);
+  transition: transform 240ms, border-color 240ms, box-shadow 240ms;
+}
+.feature-card:hover {
+  border-color: var(--accent);
+  box-shadow: 0 10px 32px var(--accent-glow);
+  transform: translateY(-2px);
+}
+.feature-card p {
+  margin: 0;
+  font-size: 1rem;
+  line-height: 1.55;
+  color: var(--fg);
+}
+.feature-card strong {
+  font-weight: 600;
+  color: var(--fg);
+}
+
+/* ── Closer prose (home, after feature grids) ────────────── */
+
+.closer {
+  margin-top: 4rem;
+  max-width: var(--measure);
+}
+
+/* ── Reveal-on-load (hero) and reveal-on-scroll (cards) ──── */
+
+.js .reveal,
+.js [data-gsap="card"] {
+  opacity: 0;
+  transform: translateY(40px);
+  will-change: opacity, transform;
+}
+
+@media (prefers-reduced-motion: reduce) {
+  .scene-canvas { display: none; }
+  .js .reveal,
+  .js [data-gsap="card"] {
+    opacity: 1 !important;
+    transform: none !important;
+    will-change: auto;
+  }
+}
25
website/assets/js/animations.js
Normal file

@ -0,0 +1,25 @@
(function () {
  if (window.matchMedia('(prefers-reduced-motion: reduce)').matches) return;
  if (!window.gsap || !window.ScrollTrigger || !window.Lenis) return;

  gsap.registerPlugin(ScrollTrigger);

  const lenis = new Lenis({ lerp: 0.1, smoothWheel: true });
  lenis.on('scroll', ScrollTrigger.update);
  gsap.ticker.add((time) => { lenis.raf(time * 1000); });
  gsap.ticker.lagSmoothing(0);

  // Hero stagger — runs once on load.
  gsap.to('.hero .reveal', {
    y: 0, opacity: 1, duration: 1.1, ease: 'power3.out', stagger: 0.12
  });

  // Card reveals — batched so cards in the same row come in together.
  ScrollTrigger.batch('[data-gsap="card"]', {
    start: 'top 90%',
    onEnter: (els) => gsap.to(els, {
      y: 0, opacity: 1, scale: 1,
      duration: 0.9, ease: 'power3.out', stagger: 0.08, overwrite: true
    })
  });
})();
98
website/assets/js/scene.js
Normal file

@ -0,0 +1,98 @@
(function () {
  if (window.matchMedia('(prefers-reduced-motion: reduce)').matches) return;
  if (!window.WebGLRenderingContext || !window.THREE) return;

  const canvas = document.getElementById('scene');
  if (!canvas) return;

  const root = document.documentElement;
  const readVar = (name) => getComputedStyle(root).getPropertyValue(name).trim();
  const readOpacity = () => parseFloat(readVar('--scene-opacity')) || 0.18;

  const scene = new THREE.Scene();
  const camera = new THREE.PerspectiveCamera(
    60, window.innerWidth / window.innerHeight, 0.1, 100
  );
  const renderer = new THREE.WebGLRenderer({ canvas, antialias: true, alpha: true });
  renderer.setSize(window.innerWidth, window.innerHeight, false);
  renderer.setPixelRatio(Math.min(window.devicePixelRatio || 1, 2));

  const geometry = new THREE.TorusKnotGeometry(2.5, 0.4, 130, 20);
  const material = new THREE.MeshPhongMaterial({
    color: readVar('--accent') || '#c03a28',
    wireframe: true,
    transparent: true,
    opacity: readOpacity()
  });
  const core = new THREE.Mesh(geometry, material);
  scene.add(core);

  scene.add(new THREE.AmbientLight(0xffffff, 0.6));
  const dir = new THREE.DirectionalLight(0xffffff, 0.8);
  dir.position.set(5, 5, 5);
  scene.add(dir);

  const BASE_Z = 9;
  camera.position.z = BASE_Z;

  let scrollY = window.scrollY || 0;
  window.addEventListener('scroll', () => {
    scrollY = window.scrollY || 0;
  }, { passive: true });

  let baseOpacity = readOpacity();

  let running = true;
  function tick() {
    if (!running) return;
    requestAnimationFrame(tick);

    // Continuous slow drift.
    core.rotation.y += 0.0015;
    core.rotation.z += 0.0006;

    // Scroll-driven motion: zoom in, scale up, tilt.
    const s = Math.min(scrollY, 2000);
    camera.position.z = BASE_Z - s * 0.0022;
    const scale = 1 + s * 0.00035;
    core.scale.set(scale, scale, scale);
    core.rotation.x = s * 0.0008;

    // Fade past hero so feature cards stay readable.
    const vh = window.innerHeight;
    const fadeStart = vh * 0.5;
    const fadeEnd = vh * 1.4;
    const t = Math.max(0, Math.min(1, (scrollY - fadeStart) / (fadeEnd - fadeStart)));
    material.opacity = baseOpacity * (1 - t * 0.92);

    renderer.render(scene, camera);
  }
  tick();

  window.addEventListener('resize', () => {
    camera.aspect = window.innerWidth / window.innerHeight;
    camera.updateProjectionMatrix();
    renderer.setSize(window.innerWidth, window.innerHeight, false);
  });

  document.addEventListener('visibilitychange', () => {
    if (document.hidden) {
      running = false;
    } else if (!running) {
      running = true;
      tick();
    }
  });

  const mql = window.matchMedia('(prefers-color-scheme: dark)');
  const updateTheme = () => {
    const accent = readVar('--accent');
    if (accent) material.color.set(accent);
    baseOpacity = readOpacity();
  };
  if (mql.addEventListener) {
    mql.addEventListener('change', updateTheme);
  } else if (mql.addListener) {
    mql.addListener(updateTheme);
  }
})();
19
website/assets/js/vendor/PROVENANCE.md
vendored
Normal file

@ -0,0 +1,19 @@
# Vendored JavaScript libraries

These minified bundles are checked into the repo so furtka.org has zero
third-party-CDN dependencies at runtime. Pin date: **2026-04-27**.

| File | Version | Source |
|---|---|---|
| `three.min.js` | r128 | https://cdnjs.cloudflare.com/ajax/libs/three.js/r128/three.min.js |
| `gsap.min.js` | 3.12.2 (core only) | https://cdnjs.cloudflare.com/ajax/libs/gsap/3.12.2/gsap.min.js |
| `ScrollTrigger.min.js` | 3.12.2 | https://cdnjs.cloudflare.com/ajax/libs/gsap/3.12.2/ScrollTrigger.min.js |
| `lenis.min.js` | @studio-freight/lenis 1.0.33 | https://unpkg.com/@studio-freight/lenis@1.0.33/dist/lenis.min.js |

All four expose UMD globals (`THREE`, `gsap`, `ScrollTrigger`, `Lenis`).
None are ES modules, so no `js.Build` step is needed — Hugo just fingerprints them.

GSAP "Club" plugins (SplitText, MorphSVG, etc.) are **not** free for commercial use.
Only `gsap` core + `ScrollTrigger` (both under GreenSock's no-charge standard license) are bundled.

To refresh: re-run `curl -sSfL -o <file> <url>` and bump the pin date here.

11
website/assets/js/vendor/ScrollTrigger.min.js
vendored
Normal file
File diff suppressed because one or more lines are too long

11
website/assets/js/vendor/gsap.min.js
vendored
Normal file
File diff suppressed because one or more lines are too long

1
website/assets/js/vendor/lenis.min.js
vendored
Normal file
File diff suppressed because one or more lines are too long

6
website/assets/js/vendor/three.min.js
vendored
Normal file
File diff suppressed because one or more lines are too long
website/content/_index.de.md
@@ -1,33 +1,33 @@
---
title: "Furtka"
description: "Offenes Heimserver-Betriebssystem — einfach genug für alle."
status: "<span class=\"mono\">26.6-alpha</span> — in Arbeit"
status: "<span class=\"mono\">26.15-alpha</span> — in Arbeit"
# features_today / features_next müssen index-parallel zu content/_index.md bleiben.
intro: |
  **Furtka** ist ein offenes Heimserver-Betriebssystem.
  USB-Stick einstecken, durch einen Assistenten klicken, und aus jedem
  alten Rechner wird eine private Cloud für den Haushalt — mit eigenen
  Apps, eigenem Namen im Netz, eigenen Daten.

  Das Ziel ist einfach: **dein Vater soll das einrichten können.**
features_today_label: "Was heute schon geht"
features_today:
  - "Vom USB-Stick booten und Furtka auf die Festplatte einrichten"
  - "Ein Assistent fragt nach Name, Benutzer und Netzwerk — fertig"
  - "Danach: Bedienseite im Browser öffnen"
  - "Erste App: **Dateifreigabe im Heimnetz** (alle im WLAN sehen den Ordner)"
  - "Apps mit einem Klick installieren und entfernen"
  - "Eine installierte App mit einem Klick aktualisieren (holt das neueste Container-Image)"
  - "Furtka selbst mit einem Klick aktualisieren — keine Neuinstallation mehr für neue Features"
features_next_label: "Was als Nächstes kommt"
features_next:
  - "Apps für Fotos, Dateien, Smarthome, Spiele-Streaming und Medien"
  - "Einfachere Sprache im Einrichtungs-Assistenten"
  - "Sichere Verbindung im Heimnetz (ohne Warnmeldung im Browser)"
  - "Mehrere Server zusammenschalten"
---

**Furtka** ist ein offenes Heimserver-Betriebssystem.
USB-Stick einstecken, durch einen Assistenten klicken, und aus jedem
alten Rechner wird eine private Cloud für den Haushalt — mit eigenen
Apps, eigenem Namen im Netz, eigenen Daten.

Das Ziel ist einfach: **dein Vater soll das einrichten können.**

### Was als Nächstes kommt
- Apps für Fotos, Dateien, Smarthome, Spiele-Streaming und Medien
- Einfachere Sprache im Einrichtungs-Assistenten
- Sichere Verbindung im Heimnetz (ohne Warnmeldung im Browser)
- Mehrere Server zusammenschalten

### Was heute schon geht
- Vom USB-Stick booten und Furtka auf die Festplatte einrichten
- Ein Assistent fragt nach Name, Benutzer und Netzwerk — fertig
- Danach: Bedienseite im Browser öffnen
- Erste App: **Dateifreigabe im Heimnetz** (alle im WLAN sehen den Ordner)
- Apps mit einem Klick installieren und entfernen
- Eine installierte App mit einem Klick aktualisieren (holt das neueste Container-Image)
- Furtka selbst mit einem Klick aktualisieren — keine Neuinstallation mehr für neue Features


Wir sind zu zweit und bauen das öffentlich, abends und am Wochenende.
Es ist früh.

Mitlesen? Schreib an <a href="mailto:hallo@furtka.org">hallo@furtka.org</a>.
Mitlesen? Schreib an <hallo@furtka.org>.
website/content/_index.md
@@ -1,33 +1,33 @@
---
title: "Furtka"
description: "Open-source home server OS — simple enough for everyone."
status: "<span class=\"mono\">26.6-alpha</span> — work in progress"
status: "<span class=\"mono\">26.15-alpha</span> — work in progress"
# Keep features_today / features_next index-aligned with content/_index.de.md.
intro: |
  **Furtka** is an open-source home server OS.
  Boot from USB, click through a wizard, and any old computer
  turns into a private cloud for your household — with your own apps,
  your own name on the network, your own data.

  The goal is simple: **your dad should be able to set this up.**
features_today_label: "What works today"
features_today:
  - "Boot from USB stick and install Furtka onto the hard drive"
  - "A wizard asks for name, user and network — done"
  - "Then: open the control page in your browser"
  - "First app: **file sharing on the home network** (everyone on Wi-Fi sees the folder)"
  - "Install and remove apps with one click"
  - "Update an installed app with one click (pulls the newest container image)"
  - "Update Furtka itself with one click — no reinstalling for new features"
features_next_label: "What's coming next"
features_next:
  - "Apps for photos, files, smart home, game streaming and media"
  - "Plainer language in the setup wizard"
  - "Secure connection on your home network (no browser warning)"
  - "Linking several servers together"
---

**Furtka** is an open-source home server OS.
Boot from USB, click through a wizard, and any old computer
turns into a private cloud for your household — with your own apps,
your own name on the network, your own data.

The goal is simple: **your dad should be able to set this up.**

### What's coming next
- Apps for photos, files, smart home, game streaming and media
- Plainer language in the setup wizard
- Secure connection on your home network (no browser warning)
- Linking several servers together

### What works today
- Boot from USB stick and install Furtka onto the hard drive
- A wizard asks for name, user and network — done
- Then: open the control page in your browser
- First app: **file sharing on the home network** (everyone on Wi-Fi sees the folder)
- Install and remove apps with one click
- Update an installed app with one click (pulls the newest container image)
- Update Furtka itself with one click — no reinstalling for new features


We're two people building it in public on evenings and weekends.
It's early.

Want to follow along? Write to <a href="mailto:hallo@furtka.org">hallo@furtka.org</a>.
Want to follow along? Write to <hallo@furtka.org>.
@@ -6,7 +6,7 @@ enableRobotsTXT = true

[params]
description = "Open-source home server OS — simple enough for everyone."
version = "26.6-alpha"
version = "26.15-alpha"
contactEmail = "hallo@furtka.org"

[markup.goldmark.renderer]
@@ -1,13 +1,15 @@
<!DOCTYPE html>
<html lang="{{ .Site.Language.Lang }}">
<html lang="{{ .Site.Language.Lang }}" class="no-js">
<head>
  {{ partial "head.html" . }}
</head>
<body>
  {{ if .IsHome }}<canvas id="scene" class="scene-canvas" aria-hidden="true"></canvas>{{ end }}
  {{ partial "header.html" . }}
  <main class="container">
    {{ block "main" . }}{{ end }}
  </main>
  {{ partial "footer.html" . }}
  {{ if .IsHome }}{{ partial "scripts.html" . }}{{ end }}
</body>
</html>
@@ -2,13 +2,46 @@
<article class="home">
  <header class="hero">
    {{ with .Params.status }}
    <p class="status-chip">{{ . | safeHTML }}</p>
    <p class="status-chip reveal">{{ . | safeHTML }}</p>
    {{ end }}
    <h1>{{ .Title }}</h1>
    {{ with site.Params.description }}<p class="lede">{{ . }}</p>{{ end }}
    <h1 class="reveal">{{ .Title }}</h1>
    {{ with site.Params.description }}<p class="lede reveal">{{ . }}</p>{{ end }}
    <p class="cta-row reveal">
      <a class="cta cta--primary" href="https://forgejo.sourcegate.online/daniel/furtka/releases">
        {{ if eq site.Language.Lang "de" }}Neuestes Release{{ else }}Latest release{{ end }}
        <span aria-hidden="true">→</span>
      </a>
    </p>
  </header>
  <div class="prose">
  {{ .Content }}

  {{ with .Params.intro }}
  <section class="intro">{{ . | markdownify }}</section>
  {{ end }}

  {{ if .Params.features_today }}
  <section class="feature-section">
    {{ with .Params.features_today_label }}<p class="section-eyebrow">{{ . }}</p>{{ end }}
    <div class="feature-grid">
      {{ range .Params.features_today }}
      <article class="feature-card" data-gsap="card">{{ . | markdownify }}</article>
      {{ end }}
    </div>
  </section>
  {{ end }}

  {{ if .Params.features_next }}
  <section class="feature-section">
    {{ with .Params.features_next_label }}<p class="section-eyebrow">{{ . }}</p>{{ end }}
    <div class="feature-grid">
      {{ range .Params.features_next }}
      <article class="feature-card" data-gsap="card">{{ . | markdownify }}</article>
      {{ end }}
    </div>
  </section>
  {{ end }}

  {{ with .Content }}
  <section class="prose closer">{{ . }}</section>
  {{ end }}
</article>
{{ end }}
@@ -1,7 +1,10 @@
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1">
<script>document.documentElement.classList.replace('no-js','js');</script>
<title>{{ if .IsHome }}{{ site.Title }} — {{ site.Params.description }}{{ else }}{{ .Title }} · {{ site.Title }}{{ end }}</title>
<meta name="description" content="{{ with .Params.description }}{{ . }}{{ else }}{{ site.Params.description }}{{ end }}">
<meta name="theme-color" content="#f7f6f3" media="(prefers-color-scheme: light)">
<meta name="theme-color" content="#0d0d0f" media="(prefers-color-scheme: dark)">
<link rel="icon" type="image/svg+xml" href="/favicon.svg">
<meta property="og:site_name" content="{{ site.Title }}">
<meta property="og:title" content="{{ if .IsHome }}{{ site.Title }}{{ else }}{{ .Title }} · {{ site.Title }}{{ end }}">
website/layouts/partials/scripts.html (new file, 12 lines)
@@ -0,0 +1,12 @@
{{ $three := resources.Get "js/vendor/three.min.js" | fingerprint }}
{{ $gsap := resources.Get "js/vendor/gsap.min.js" | fingerprint }}
{{ $st := resources.Get "js/vendor/ScrollTrigger.min.js" | fingerprint }}
{{ $lenis := resources.Get "js/vendor/lenis.min.js" | fingerprint }}
{{ $scene := resources.Get "js/scene.js" | fingerprint }}
{{ $anim := resources.Get "js/animations.js" | fingerprint }}
<script defer src="{{ $three.RelPermalink }}" integrity="{{ $three.Data.Integrity }}"></script>
<script defer src="{{ $gsap.RelPermalink }}" integrity="{{ $gsap.Data.Integrity }}"></script>
<script defer src="{{ $st.RelPermalink }}" integrity="{{ $st.Data.Integrity }}"></script>
<script defer src="{{ $lenis.RelPermalink }}" integrity="{{ $lenis.Data.Integrity }}"></script>
<script defer src="{{ $scene.RelPermalink }}" integrity="{{ $scene.Data.Integrity }}"></script>
<script defer src="{{ $anim.RelPermalink }}" integrity="{{ $anim.Data.Integrity }}"></script>