pollux (192.168.178.165) wedged at the network level during an
end-to-end install test today — mkinitcpio on a 4 GiB smoke VM +
the cached 1.5 GB ISO + a busy runner container pushed the host into
OOM, taking pveproxy and the SSH path down with it. Recovered by
physical reset.
Smoke VM now defaults to 8192 MiB / 2 vCPU, configurable via
PVE_TEST_VM_MEMORY / PVE_TEST_VM_CORES. Host has 64 GiB, so one
smoke VM at 8 GiB is well within headroom.
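The override hook looks roughly like this (a sketch: only the two
variable names and the new defaults are from this change; the
surrounding script is illustrative):

    # defaults for the smoke VM; override via environment
    VM_MEMORY="${PVE_TEST_VM_MEMORY:-8192}"   # MiB (old default: 4096)
    VM_CORES="${PVE_TEST_VM_CORES:-2}"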
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
api() was swallowing Proxmox's error body because callers pipe its
output to /dev/null. With a bare "curl: (22) 403" in the log we can't
tell which permission is missing. Now we capture the response body,
print it to stderr on failure, and only emit it to stdout on success.
No behaviour change on the happy path.
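The shape of the fix, sketched (assumes curl >= 7.76 for
--fail-with-body; the real api() argument handling is omitted):

    api() {
      local body rc
      # --fail-with-body: non-zero exit on HTTP >= 400, body still captured
      body=$(curl -sS --fail-with-body "$@")
      rc=$?
      if [ "$rc" -ne 0 ]; then
        printf '%s\n' "$body" >&2    # error body to stderr, survives >/dev/null
        return "$rc"
      fi
      printf '%s\n' "$body"          # happy path unchanged: body on stdout
    }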
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
The delete branch required Datastore.Allocate (or was hitting a
privilege-separated token ACL edge case) and produced 403s on re-runs
against the same commit SHA. Since the ISO bytes are reproducible for
a given SHA — furtka-<sha>.iso is content-addressed — we can just
reuse whatever is already in PVE storage instead of deleting and
re-uploading it.
Fixes the "runs-on-same-sha" re-dispatch case without needing any extra
PVE permission, and shaves ~2 min off repeated smoke runs.
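Sketched, the reuse check is just a storage listing before upload
(stock PVE content endpoint; NODE/STORAGE and the upload helper are
placeholders, and the api() calling convention is assumed):

    ISO="furtka-${COMMIT_SHA}.iso"
    if api GET "/nodes/${NODE}/storage/${STORAGE}/content" | grep -q "$ISO"; then
      echo "reusing cached ${ISO}"   # no delete, no re-upload, no 403
    else
      upload_iso "$ISO"              # placeholder for the upload step
    fi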
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
After build-iso, a new smoke-vm job uploads the freshly built ISO to
the test Proxmox at 192.168.178.165 via PVE API token, boots it in a
fresh VM (VMID range 9000-9099, MAC derived from commit SHA so the
runner can find the DHCP IP by scanning the LAN), and curls :5000 to
confirm the webinstaller answers HTTP 200. Last 5 smoke VMs + their
ISOs are kept for post-mortem; older ones are purged. The smoke job
runs with continue-on-error so a VM-side flake doesn't mark the ISO
build red.
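The MAC derivation mentioned above can be as small as a
locally-administered prefix plus the leading SHA bytes (the 02: scheme
here is an assumption; the job only promises "MAC derived from commit
SHA"):

    # 02: = locally administered, unicast; next 5 octets from the SHA
    MAC="02:$(printf '%s' "$COMMIT_SHA" | cut -c1-10 | sed 's/../&:/g; s/:$//')"
    # e.g. SHA abcdef1234... -> 02:ab:cd:ef:12:34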
Shortens the feedback loop on ISO regressions from "next manual VM
test session" (days) to "next push" (minutes) — the 2026-04-15/16 VM
sessions each found real boot-time bugs that unit tests missed.
Docs at docs/smoke-vm.md. Requires Forgejo secrets PVE_TEST_HOST and
PVE_TEST_TOKEN (dedicated smoke@pve!ci PVE token, privilege-separated).
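A token of that shape can be minted on the PVE host with pveum; the
role and ACL path below are illustrative, not a transcript of the
actual grants:

    pveum user add smoke@pve
    pveum user token add smoke@pve ci --privsep 1
    # grant only what the smoke job needs on the VM subtree
    pveum acl modify /vms --tokens 'smoke@pve!ci' --roles PVEVMAdmin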
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
The 26.2-alpha release workflow hung for 15+ minutes on
"apt-get install -y jq" — the runner's apt mirror was unreachable
(or very slow), and the whole publish stalled.
jq was only used for two tiny things: building the release-create
POST body and reading the release id from the response. Both are
one-liners in Python, which is guaranteed to be present on the Forgejo
Actions ubuntu-latest runner image. Replaced both uses; removed
the apt-get step from release.yml entirely. Slow mirrors no
longer block tagged releases.
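The two replacements are in this spirit (field names follow the
Forgejo v1 release API; the jq equivalents noted in comments are
assumptions):

    # build the release-create POST body (was a jq -n one-liner)
    BODY=$(python3 -c 'import json,sys; print(json.dumps({"tag_name": sys.argv[1], "name": sys.argv[1]}))' "$TAG")
    # pull the release id out of the create response (was jq .id)
    RELEASE_ID=$(printf '%s' "$RESPONSE" | python3 -c 'import json,sys; print(json.load(sys.stdin)["id"])')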
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Root cause of today's 403 on a fresh install: assets/ lived inside the
Python package at furtka/assets/, so the resource-manager tarball
extracted to /opt/furtka/versions/<ver>/furtka/assets/. But Caddyfile
has `root * /opt/furtka/current/assets/www`, systemd units point at
/opt/furtka/current/assets/bin/furtka-status, and the install-time
`systemctl link /opt/furtka/current/assets/systemd/*.service` expected
the top-level layout. All three found nothing:
- Caddy → 403 Forbidden (empty/missing document root)
- systemctl link → silent no-op, nothing ever linked into
/etc/systemd/system/
- furtka-api.service + furtka-reconcile.service → "inactive" because
they were never registered
Nothing in the Python package ever imported furtka.assets — these are
shell scripts, HTML/CSS, systemd units, and a Caddyfile: deployment
config, not package data. Promoting assets/ to the repo root
matches how it's referenced everywhere downstream and eliminates the
path mismatch.
Changes:
- git mv furtka/assets assets
- iso/build.sh: tarball-staging step now also `cp -a "$REPO_ROOT/assets"`
so the tarball ships ./assets at its root, and the live-ISO copy
reads from $REPO_ROOT/assets instead of $REPO_ROOT/furtka/assets.
- scripts/build-release-tarball.sh: same for release tarballs.
- webinstaller/app.py: _resolve_assets_dir's dev fallback walks one
level up to REPO_ROOT/assets/.
- tests/test_webinstaller_assets.py: ASSETS constant updated.
Tests still green (150/150) because the mismatch was purely at the
filesystem level — no Python imports changed. Next ISO build will land
assets at the path
everything downstream expects.
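A minimal post-install sanity check (hypothetical; paths and unit
names are the ones cited above) would have surfaced all three symptoms
at install time instead of as a 403:

    test -e /opt/furtka/current/assets/www \
      || echo "missing Caddy document root" >&2
    test -x /opt/furtka/current/assets/bin/furtka-status \
      || echo "missing furtka-status" >&2
    systemctl cat furtka-api.service >/dev/null \
      || echo "furtka-api.service never registered" >&2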
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Slice 2 of the self-update story. Tagging a release on main now
produces a downloadable self-update payload on the Forgejo releases
page, and a running box can pull it down, verify it, atomically swap
to the new version, and health-check the result.
New pieces:
- scripts/build-release-tarball.sh <version> — packages the furtka/
package + bundled apps/ + a root-level VERSION file as
dist/furtka-<version>.tar.gz, plus a .sha256 sidecar and a
release.json metadata blob.
- scripts/publish-release.sh <version> — uses the Forgejo v1 API to
create a release (body pulled from the CHANGELOG section for this
tag, pre-release auto-flagged on -alpha/-beta/-rc) and upload the
three assets sequentially. Needs $FORGEJO_TOKEN.
- .forgejo/workflows/release.yml — tag-triggered, runs both scripts
with the new $FORGEJO_RELEASE_TOKEN repo secret.
- furtka/updater.py — check_update, prepare_update, apply_update,
run_update, rollback. Atomic symlink swap (see the sketch after this
list), sha256 verify (TOCTOU-safe: re-hashes the on-disk file),
health-check post-restart with auto-rollback on failure, stage-by-stage
progress persisted to /var/lib/furtka/update-state.json so the UI can
poll independently of the (restarting) API process. Path overrides via
FURTKA_ROOT /
FURTKA_STATE_DIR / FURTKA_LOCK_PATH so tests pin a tmpdir.
- furtka/cli.py — `furtka update [--check] [--json]` and
`furtka rollback`.
- tests/test_updater.py — 15 tests: version compare, sha256 verify,
tarball extract (including traversal refusal), lockfile, apply
happy + rollback paths, rollback CLI, check_update with stubbed
Forgejo.
- iso/build.sh — writes VERSION at the tarball root so the install
path matches the self-update path (previously only the release
script wrote it).
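The swap at the heart of apply_update, sketched in shell terms (the
real implementation is Python in furtka/updater.py; mv -T is the GNU
rename-over-symlink idiom, and the sidecar is assumed to be
sha256sum-compatible):

    # re-verify the staged tarball right before use (the TOCTOU guard)
    sha256sum -c "furtka-${VER}.tar.gz.sha256"
    # build the new symlink aside, then rename over /current atomically
    ln -s "/opt/furtka/versions/${VER}" /opt/furtka/current.new
    mv -T /opt/furtka/current.new /opt/furtka/current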
RELEASING.md now points at the automated flow — no more manually
clicking "Create release" on the Forgejo UI.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>